\section{Introduction}
Superconducting films are candidate substances for the improvement of
electronics technology in a myriad of applications. While the low
resistance is very attractive in this regard, it has proved difficult
to control the nonlinear behaviour of such materials in response to
electromagnetic fields\cites{O07}. When a magnetic field is strong enough to
penetrate into a superconductor in the form of quantised magnetic flux
tubes, the vortex state obtains as a mixed state of superconducting
phase punctuated by the vortices themselves. Vortices are surrounded
by a supercurrent and can be forced into motion by the current
resulting from an applied field.
As a topological defect, a vortex is not only stable under
perturbations\cites{Tinkham,Fg06} but also protected against decay. The
collection of vortices in a type-II superconductor forms what is
called vortex matter, and it is this, rather than the underlying material
properties, which determines the physical
properties of the system, in particular driving phase
transitions\citep{Tinkham,RL09}. In the mixed state, a superconductor
is not perfect; it exhibits neither perfect diamagnetism nor zero
electrical resistance. The transport current ${\bf J}$ generates a
Lorentz force ${\bf F}=\Phi_0 {\bf J} \times \hat {\bf h}$ on the
vortex
and forces it into motion, dissipating energy.
In reaching thermal equilibrium, energy is transferred via interactions
between phonons and quasiparticle excitations.
Small-scale imperfections such as defects
scatter the quasiparticles, affecting their dynamics.
In \emph{dirty} superconductors, impurities are plentiful and vortices experience a large friction.
This implies a fast momentum-relaxation process.
In contrast is the \emph{clean} limit, where impurities are rare and no such relaxation process is available.
It is in this situation of slow relaxation that the Hall effect appears.
Generally, the $H$-$T$ phase diagram\cites{RL09} of the vortex matter
has two phases. In the \emph{pinned phase} vortices are trapped by an
attractive potential due to the presence of large-scale defects, and thus the
resistivity vanishes. This phase contains what are known as glass states.
There is then
the \emph{unpinned phase} in which vortices can move when forced and
so a finite resistivity appears. This phase is also known as the
flux-flow region and can be of two types. One type is a liquid state
where vortices can move independently; the other type is a solid state
in which vortices form a periodic Abrikosov lattice\cites{Ab57}
resulting from their long-range interaction.
One model for the transition between the pinned and unpinned phases appears in \citep{GB97}.
In the unpinned phase,
the system is driven from equilibrium and experiences a
relaxation process. There are several ways to describe such a system.
A microscopic description\cites{Kpn02} invoking interactions between a
vortex and quasiparticle excitations at the vortex core provides a good
understanding of friction and shows good agreement with experiments
in the sparse-vortex region $H\ll H_{c2}$. There is also a macroscopic
description, the London approach, where vortices are treated either as
interacting point-like particles or as an elastic manifold subject to a
pinning potential, driving force and friction\cites{CC91,GR66}. In the
small-field region, vortices behave as an array of elastic strings.
In the dense-vortex region $H\gg H_{c1}$, where the magnetic field is
nearly homogeneous due to overlap between vortices, Ginzburg-Landau
(GL) theory, which describes the system as a field, provides a more
reasonable model.
In dynamical cases, time-dependent GL
(TDGL) theory is appropriate\cites{HT71, TDGLnumerical,Kpn02}; in GL-type models,
additional simplification can come from the lowest Landau level (LLL)
approximation which has proven to be successful in the vicinity of the
superconducting-normal (S-N) phase transition line $H\sim H_{c2}$.
This has been pursued in the static case\cites{RL09} (without driving force)
and in the dynamic case with a time-independent transport
current\cites{LMR04}. It may be noted that in the glass state, zero
resistance within the LLL approximation cannot be
attained\cites{ZR07}.
Based on TDGL theory, we will study the dynamical response of a dense
vortex lattice forced into motion by an alternating current induced by
an external electromagnetic field.
We consider vortices that are free from pinning and from thermal
excitation; these effects, along with thermal noise, would produce entanglement and bending.
We assume the vortices can transfer the work done by an external field to a heat bath.
Experimentally, a
low-temperature superconductor far away from the clean limit
is the best
candidate for attaining these conditions. We do not consider thermal
fluctuation effects specific to high-temperature superconductors.
In a dissipative system driven by a single-harmonic electric field
$E\cos\Omega\tau$, long after its saturation time we can expect the
system to have settled into steady-state behaviour, where the vortices are
vibrating periodically with some phase.
The TDGL model in the presence of external electromagnetic field is
analysed and solved in \secref{models}. The dynamical S-N
phase transition surface $\Tcdy(H,E,\Omega)$ is located in
$\{T,H,E,\Omega\}$-space. This surface coincides with the mean-field
upper-critical field $H_{c2}(T)$ in the absence of an applied electric field,
and with the phase-transition surface in the presence of the constant
driving field considered by Hu and Thompson\cites{HT71}. We will
provide an analytical formalism for perturbative expansion in the
distance to $\Tcdy$, valid in the flux-flow region. The
response of vortex matter forced into motion by the transport current
is studied in \secref{response}. The current-density distribution
and the motion of vortices are treated in \secref{motionofvortices}.
In analysing the vortex lattice configuration in \secref{CML}, a
method is utilised whereby the heat-generation rate is maximised.
Next we discuss power dissipation, the generation of higher harmonics,
and the Hall effect. An experimental comparison is made in
\secref{Discussion} with far-infrared (FIR) measurements on NbN.
Finally, conclusions are drawn in \secref{Conclusion}.
\section{Flux-flow solution}
\label{models}
Let us consider a dense vortex system prepared by exposing a type-II
superconducting material to a constant external magnetic field
${\bf H}=(0,0,-H)$ with magnitude $H_{c2}>H\gg H_{c1}$.
We also select the $c$ axis of the superconductor to be in the $z$ direction.
Let the superconductor carry an alternating electric current along the $y$ direction,
generated by an electric field $E(\tau)=E\cos\Omega\tau$ as shown in \citefg{twovortexfigure}.
Such a system when disturbed from its equilibrium state will undergo a
relaxation process. For our system, the TDGL equation\cites{HT71,KS98} is
a useful extension of the equilibrium GL theory.
In the dense-vortex region of the $H$-$T$ phase diagram, vortices
overlap and a homogeneous magnetic field obtains. Describing the
response of such a system by a field, the order parameter $\Psi$ in
the GL approach, is more suitable
than describing vortices as
particle-like flux tubes, as is done in the London approach\cites{CC91}.
\subsection{Time-dependent Ginzburg-Landau model}
A strongly type-II superconductor is characterised by its large
penetration depth $\lambda$ and small coherence length $\xi$,
$\kappa\equiv\lambda/\xi\gg1$.
The difference between induced magnetic field and
external magnetic field is ${\bf H}-{\bf B}=-4\pi{\bf M}$. In the
vicinity of the phase-transition line $H_{c2}(T)$
vortices overlap significantly, and ${\bf H}\approx{\bf B}$ making ${\bf M}$ small.
In this case, the magnetic field may be treated as homogeneous within the sample.
We will have in mind an experimental arrangement using a planar sample
very thin compared with its lateral dimensions.
Since the characteristic length for inhomogeneity of electric field\cites{HT71}
$\xi_E^2=4\pi\lambda^2 \sigma_n/\gamma$ is then typically large compared with sample thickness,
this implies that the electric field may also be treated as homogeneous throughout\cites{LMR04,HT71},
eliminating the need to consider Maxwell's equations explicitly.
In equilibrium, the Gibbs free energy of the system is given by\cites{Tinkham}
\begin{eqnarray}
\label{GLfree}
F[\Psi]&=&\int\textup{d} {\bf r} \bigg\{
\frac{\hbar^2}{2m_{ab}}|{\bf D}\Psi|^2
+\frac{\hbar^2}{2 m_c}|\partial_z \Psi|^2 \nonumber\\
&&{\ \ \ \ }-\alpha (\Tnst-T)|\Psi|^2+\frac{\beta}2 |\Psi|^4 \bigg\}
\end{eqnarray}
where $\Tnst$ is the critical temperature at zero field.
Covariant derivatives, which preserve local
gauge symmetry, are employed here: the temporal derivative
$D_{\tau}=\partial_{\tau}+i\frac{e^*}{\hbar} A_{\tau}$
and the two-dimensional spatial derivative ${\bf D}=\nabla^{(2)}-i \frac{e^*}{\hbar c} {\bf A}$.
Governing the dynamics of the field $\Psi$ is the
TDGL equation
\begin{equation}
\label{1TDGL}
\frac{\hbar^2 \gamma}{2 m_{ab}} D_{\tau}\Psi =-\frac{\delta F}{\delta \Psi^*}
.\end{equation}
This determines the characteristic relaxation time of the order
parameter. A microscopic derivation of the TDGL equation can be found in
\citep{GE68,Kpn02}, in which the values of $\alpha$, $\beta$ and $\gamma$
are studied. In the macroscopic case, these are viewed simply as
parameters of the model. At microscopic scale, disorder is accounted
for by $\gamma$, the inverse of the diffusion constant; the relation of $\gamma$
to normal-state conductivity is discussed in \appref{Gamma}.
In standard fashion, ${\bf E}=-\nabla A_{\tau}-\frac1c\partial_{\tau} {\bf A}$
while ${\bf B}=\nabla \times {\bf A}$.
Our set of equations is completed\cites{Tinkham} by including Amp\`ere's law,
writing for the total current density
\begin{equation}
\label{Amplaw}
{\bf J}_0=\big(\frac c {4\pi}\big)\nabla \times \nabla \times {\bf A}= \sigma_n {\bf E}+{\mathcal J}_0
.\end{equation}
As we shortly make a rescaling of quantities, we have written $0$
subscripts here for clarity. The first term is the normal-state
contribution, with conductivity $\sigma_n$. The second term is the
supercurrent, expressed in terms of the order parameter as
\begin{eqnarray}
\label{current}
{\mathcal J}_0&=&-i \frac{\hbar e^*}{2m_{ab}}\left[
\Psi^* {\bf D} \Psi
-\Psi ({\bf D} \Psi)^*
\right]
.\end{eqnarray}
This is a gauge-invariant model; we fix the gauge by
considering the explicit vector potential
${\bf A}({\bf r})=(B y,0,0)$ and $A_{\tau}({\bf r},\tau)=y E\cos\Omega\tau$,
corresponding to an alternating transport current.
Each vortex lattice cell contains exactly one fluxon. We do not assume the electric field
and the motion of vortices lie in any particular direction relative
to the vortex lattice, so that any anisotropy remains visible.
For convenience, we define some rescaled quantities. The rescaled
temperature and magnetic field are $t=T/\Tnst$ and $b=H/H^0_{c2}$.
$H^0_{c2}$ denotes the mean-field upper-critical field,
extrapolated from the $\Tnst$ region down to zero temperature.
In the $a$-$b$ plane of the crystal we make use of \emph{magnetic length}
$\xi_{\ell}$. We define $\xi_{\ell}^2=\xi^2/b$ where
$\xi^2=\hbar^2/2m_{ab}\alpha \Tnst$. The scale on the $c$-axis is
$\xi_c/\sqrt b$ with $\xi_c^2=\hbar^2/2m_c\alpha \Tnst$.
The coordinate anisotropy in $z$ is absorbed into this choice
of normalisation, as can be seen in \eqref{Lop}.
The order parameter $\Psi$ is scaled by $\sqrt{2b \alpha \Tnst/\beta}$. The time
scale is normalised as $\tau_s=\gamma\xi_{\ell}^2/2$. Therefore,
frequency is $\omega=\Omega\tau_s$.
Note that $\omega$ is then inversely proportional to $b$. The
amplitude of the external electric field is normalised with
$E_0=2\hbar/e^*\xi^3\gamma$ so that $e=E/E_0$.
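For concreteness, the rescalings above can be collected into a small helper. This is an illustrative sketch only: the function name is ours and the material inputs are hypothetical placeholders, not values for any specific superconductor; the last quantity, $v=eb^{-3/2}$, anticipates a definition made later in the text.

```python
def rescaled_parameters(T, H, E, Omega, Tc0, Hc2_0, xi, gamma,
                        hbar=1.0546e-34, e_star=2*1.6022e-19):
    """Collect the dimensionless variables used in the text.

    T, H, E, Omega : laboratory temperature, field, drive amplitude, frequency.
    Tc0, Hc2_0, xi, gamma : material inputs (zero-field Tc, extrapolated
    upper-critical field, coherence length, inverse diffusion constant).
    All numerical inputs here are illustrative placeholders.
    """
    t = T / Tc0                    # reduced temperature t = T/Tc0
    b = H / Hc2_0                  # reduced magnetic field b = H/Hc2(0)
    xi_l_sq = xi**2 / b            # magnetic length squared, xi^2/b
    tau_s = gamma * xi_l_sq / 2.0  # time scale tau_s = gamma*xi_l^2/2
    omega = Omega * tau_s          # reduced frequency, proportional to 1/b
    E0 = 2.0*hbar / (e_star * xi**3 * gamma)
    e = E / E0                     # reduced drive amplitude
    v = e * b**(-1.5)              # reduced velocity v = e*b^(-3/2)
    return t, b, omega, e, v
```

Doubling the applied field $H$ halves the reduced frequency $\omega$, the inverse proportionality noted above.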
After our rescaling
the TDGL equation takes the simple form
\begin{equation}
\label{2TDGL}
L\Psi-\frac1{2b}(1-t)\Psi+|\Psi |^2\Psi=0
\end{equation}
where the operator $L$ is defined as
\begin{equation}
\label{Lop}
L=D_{\tau}-\frac12{\bf D}^2-\frac12\partial_z^2
.\end{equation}
With our specified vector potential, covariant derivatives are
$D_{\tau}=\partial_{\tau}+ivy\cos\omega\tau$, $D_x=\partial_x-iy$ and
$D_y=\partial_y$. We define $v=eb^{-3/2}$ for convenience.
The TDGL equation is invariant under translation in $z$, thus
the dependence of the solution in the $z$ direction can be
decoupled.
$L$ is not hermitian;
\begin{equation}
L^{\dagger}=-D_{\tau}-\frac12{\bf D}^2-\frac12\partial_z^2
\end{equation}
where the conjugation is with respect to the usual inner product, defined below.
We will make extensive use of the eigenfunctions of $L$ and $L^\dagger$ in what follows.
The eigenvalue equation
\begin{equation}
L \varphi_{n,k_x} = \varepsilon_n \varphi_{n,k_x}
\end{equation}
defines the set of eigenfunctions of $L$ appropriate for our analysis,
as shown in \appref{LTDGL_sol}.
The convention is that
$\varepsilon_n =\varepsilon_{n'}$
if and only if $n=n'$.
Taking the corresponding\footnote{One `follows the sign' in front of the $D_{\tau}$ in
$L$ and switches it in the resulting $\varphi$ to get the `corresponding'
eigenfunction $\tilde{\varphi}$ for $L^\dagger$.}
eigenfunctions of $L^\dagger$ to be $\tilde{\varphi}_{n,k_x}$, the orthonormality
$\langle \tilde{\varphi}_{n,k_x} , \varphi_{n',k'_x} \rangle = \delta_{nn'}\delta(k_x-k_x')$
may be chosen, so long as $\langle \tilde{\varphi}_{n,k_x} , \varphi_{n,k_x}\rangle \ne 0$.
As shown in \appref{LTDGL_sol}, the crystal structure determines linear combinations of these
basis elements with respect to $k_x$; the resulting $\varphi_n$ functions are then
useful for the expansions below.
The inner product is $\langle \tilde{\varphi}_m,\varphi_n\rangle = \langle \tilde{\varphi}_m^* \varphi_n\rangle$
where the brackets $\langle\cdots\rangle$ denote an integral\footnote{
Shortly we will be dealing with a periodic system, and we will normalise such integrations by the
unit cell volume and the period in time.} over space and time.
To define averages over only time or space alone,
we write $\langle\cdots\rangle_\tau$ or $\langle\cdots\rangle_{\bf r}$ respectively.
\begin{figure}[ht]
\includegraphics[width=4cm]{Fig_VortexAngleA.eps}
\includegraphics[width=4cm]{Fig_VortexAngleB.eps}
\caption{
Possible vortex lattice configurations:
(a) Typical large angle, plotted for $\theta_A=60\deg$;
(b) Typical small angle, plotted for $\theta_A=30\deg$.
In the static case, the two particular angles $\theta_A=60\deg$ and $\theta_A=30\deg$
correspond to the same energy.
The applied constant magnetic field ${\bf H}$ is along the $-z$ direction and the
time-dependent oscillating electric field ${\bf E}(\tau)$ is in the $y$ direction.
$\theta_A$ is the apex angle of the two defining lattice vectors.
The two vectors for the rhombic vortex lattice are
${\bf a_1}=(\sqrt{2 \pi/\wp},0)$ and
${\bf a_2}=(\sqrt{2\pi/\wp}/2,\sqrt{2\pi\wp})$
where $\wp=\frac12 \tan \theta_A$.
The motion of vibrating vortices, indicated by the red arrow, is discussed in \secref{motionofvortices}.
}
\label{twovortexfigure}
\end{figure}
\subsection{Solution of TDGL equation}
\label{TDGLsolution}
States of the system can be parametrised
by $(t,b,e,\omega)$. By changing temperature $t$, a system with some
fixed $(b,e,\omega)$ may experience a normal-superconducting phase
transition as temperature passes below a critical value
$\tcdy(b,e,\omega)$. Such a point of transition is also
known as a \emph{bifurcation point}.
The material is said to be in the normal phase when $\Psi$ vanishes everywhere;
otherwise the superconducting phase obtains, with $\Psi$ describing the vortex matter.
Because of the vortices, the resistivity in the superconducting phase need not be zero.
The S-N phase-transition boundary $\tcdy(b,e,\omega)$ separates the two phases.
To study the condensate, we will use a bifurcation expansion to solve \eqref{2TDGL}.
We expand $\Psi$ in powers of distance from the phase transition boundary $\tcdy$.
\subsubsection{Dynamical phase-transition surface}
As in the static case, we can locate the dynamical phase-transition
boundary by means of the linearised TDGL equation\cites{Tinkham,KS98}.
This is because the order parameter vanishes at the phase transition,
and we do not need to consider the nonlinear term. The linearised
TDGL equation is written
\begin{equation}
\label{LTDGL}
L\Psi-\frac1{2b}(1-t) \Psi=0
.\end{equation}
Of the eigenvalues of $L$, only the smallest one $\varepsilon_{\bf 0}$,
corresponding to the highest superconducting transition temperature
$\tcdy$, has physical meaning. The S-N phase transition occurs
when the trajectory in parameter space intersects with the surface
\begin{equation}
\label{dyptline}
\varepsilon_{\bf 0}-\frac1{2b}(1-t)=0
\end{equation}
where the lowest eigenvalue is calculated in \appref{LTDGL_sol};
\begin{eqnarray}
\label{eigenvalue0}
\varepsilon_{\bf 0}=\frac12+\frac{v^2}{4 (1+\omega^2)}
.\end{eqnarray}
Utilising the $b$-independent frequency $\nu=b\omega$ and the
amplitude $e$ of the input signal, we write
\begin{equation}
1-t-b=\frac{e^2/2}{b^2+\nu^2}
.\end{equation}
In the absence of external driving field, $e=0$, the phase-transition
surface coincides with the well-known static-phase transition line
$1-t-b=0$ in the mean-field approach. With time-independent electric
field at $\nu=0$, where the vortex lattice is driven by a fixed
direction of current flow, the dynamical phase-transition surface
coincides with that proposed in \citep{HT71}, but with a factor of
$1/2$. This amplitude difference is familiar from elementary
comparisons of DC and AC circuits.
In the above equation, we can see that in the static case $e=0$,
the superconducting region is $1-t-b>0$. In addition when $e\ne0$,
the superconducting region in the $b$-$t$ plane is smaller than the
corresponding region in the static case, as can be seen
in \citefg{TwoPhaseTransitionProfile}(a).
Finally, increasing frequency will increase the size of the
superconducting region, as in \citefg{TwoPhaseTransitionProfile}(b);
in the high-frequency limit, the area will reach its maximum, which is
the superconducting area from the static case.
As with any damped system, response is diminished at higher frequencies.
The superconducting state does not survive at small magnetic field;
for example at $e=0.2$ in \citefg{TwoPhaseTransitionProfile}(a), the
material is in the normal state over most of the $H$-$T$ phase diagram.
Later in this paper we will consider an interpretation of this phenomenon.
In particular, when discussing
energy dissipation in \secref{powerlose}, we will see that the main contribution to the
dissipation is via the centre of the vortex core. At small magnetic
field, since there are fewer cores to dissipate the work done by the
electric field, the superconducting state is destroyed and the order
parameter vanishes.
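The behaviour just described follows directly from the boundary relation $1-t-b=(e^2/2)/(b^2+\nu^2)$. A minimal numerical sketch (the function name is ours) makes the limits explicit:

```python
def tc_dy(b, e, nu):
    """Dynamical critical temperature t_c(b, e, nu), obtained by solving
    the boundary relation 1 - t - b = (e^2/2)/(b^2 + nu^2) for t."""
    return 1.0 - b - 0.5*e**2/(b**2 + nu**2)

# Static limit e = 0 recovers the mean-field line 1 - t - b = 0,
# and any e != 0 lowers t_c, most strongly at small b; increasing
# nu removes the suppression, restoring the static boundary.
```

At $e=0$ or $\nu\rightarrow\infty$ the static line is recovered, while at small $b$ and $\nu$ the suppression term $(e^2/2)/(b^2+\nu^2)$ dominates, which is why the superconducting state does not survive at small magnetic field.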
\begin{figure}[ht]
\includegraphics[width=4.2cm]{PhaseDiagram_bt.eps}
\includegraphics[width=4.2cm]{PhaseDiagram_et.eps}
\caption{Dynamical superconducting-normal phase transition:
(a) Critical temperature $\tcdy$ as a function of $b$ for various $e$ at $\nu=0.1$ and
(b) $\tcdy$ as a function of $e$ for various $\nu$ at $b=0.1$.
The straight line in (a) is the $e=0$ curve and corresponds to
the mean-field phase-transition line $1-t-b=0$.
States above each line are normal phase while
the region below each line is superconducting.
$e$ suppresses the superconducting phase as shown in (a),
while $\nu$ removes this suppression effect, as shown in (b).
}
\label{TwoPhaseTransitionProfile}
\end{figure}
\subsubsection{Perturbative expansion}
\label{ptsol}
That the vortex matter dominates the physical properties of the system
is especially pronounced in the pinning-free flux-flow region.
Here we solve \eqref{2TDGL} by a bifurcation expansion\cites{L65,LMR04}.
Since the amplitude of the solution grows when the system departs from
the phase transition surface where $\Psi=0$, we can define a
distance from this surface as
\begin{equation}
\label{distance}
\epsilon=\frac1{2b}(1-t)-\varepsilon_{\bf 0},
\end{equation}
and expand $\Psi$ in $\epsilon$. The TDGL in terms of $\epsilon$ is
\begin{equation}
\label{shTDGL}
\Lshift\Psi-\epsilon \Psi+ \Psi|\Psi|^2=0
\end{equation}
where $\Lshift=L-\varepsilon_{\bf 0}$ is the operator $L$ shifted by its smallest eigenvalue.
$\Psi$
is then written
\begin{equation}
\label{ansatz}
\Psi=\sum_{i=0}^\infty \epsilon^{i+1/2} \Phi^{(i)}
,\end{equation}
and it is convenient to expand $\Phi^{(i)}$ in terms of our eigenfunctions of $\Lshift$
\begin{equation}
\label{Phi_i}
\Phi^{(i)}=\sum_{n=0}^\infty c^{(i)}_{n}\varphi_{n}
.\end{equation}
In principle, all coefficients $c^{(i)}_n$ in \eqref{ansatz} can be
obtained by using the orthogonal properties of the basis, which are explained in \appref{LTDGL_sol}.
Inserting $\Psi$ from equation \eqref{ansatz} into TDGL equation \eqref{shTDGL},
and collecting terms with the same order of $\epsilon$, we find that
for $i=0$
\begin{equation}
\label{eq1}
\Lshift \Phi^{(0)}=0
\end{equation}
and for $i=1$
\begin{equation}
\label{eq2}
\Lshift\Phi^{(1)}-\Phi^{(0)}+\Phi^{(0)}|\Phi^{(0)}|^2=0
.\end{equation}
For $i=2$
\begin{equation}
\label{eq3}
\Lshift\Phi^{(2)}-\Phi^{(1)}+c^{(0)2}_0
\big(
2\Phi^{(1)}|\varphi_0|^2+\Phi^{(1)*}\varphi_0^2
\big)=0
\end{equation}
and so on.
From \eqref{eq1}, the leading-order solution is
\begin{equation}
\label{solution}
\Phi^{(0)}=c^{(0)}_0 \varphi_0
\end{equation}
where $\varphi_0$ is a particular linear combination of the eigenfunctions with the smallest eigenvalue.
The coefficient of $\epsilon^{1/2}$ can be obtained by calculating
the inner product of ${\tilde \varphi}_0$ with \eqref{eq2},
\begin{equation}
\label{c0}
c^{(0)}_0=\frac1{\sqrt{\beta_{0}}}
.\end{equation}
In the same way, the coefficient of the next order $\epsilon^{3/2}$,
can be obtained by finding the inner product of ${\tilde \varphi_0}$
with the $i=2$ equation \eqref{eq3},
\begin{equation}
\label{c10}
c^{(1)}_0=\frac1{2 \beta_0 }\sum_{n=1}^{\infty} \big(
2 c^{(1)}_n \langle {\tilde \varphi_0},\varphi_n |\varphi_0|^2\rangle
+c^{(1)*}_n \langle{\tilde\varphi_0},\varphi_n^* \varphi_0^2\rangle
\big)
.\end{equation}
The inner product of ${\tilde \varphi}_m$ on \eqref{eq2}
gives the coefficient for $m>0$
\begin{equation}
\label{c1m}
c^{(1)}_m=-\frac{\beta_m/\beta_0}{\varepsilon_m-\varepsilon_0}
,\end{equation}
and
\begin{equation}
\label{betam}
\beta_{m}\equiv \langle {\tilde \varphi_m},\varphi_0|\varphi_0|^2\rangle
.\end{equation}
The solution of TDGL is then
\begin{equation}
\label{solall}
\Psi=\epsilon^{1/2} \frac{\varphi_0}{\sqrt{\beta_0}}
+\epsilon^{3/2}\sum_{n=0}^\infty c^{(1)}_n \varphi_n
+{\mathcal O} (\epsilon^{5/2})
.\end{equation}
In this paper we will restrict
our discussion to the region near $\tcdy$ where the
next-order correction can be disregarded;
\begin{equation}
\label{sol}
\Psi\approx\sqrt{\frac{\epsilon} {\beta_0}} \varphi_0
.\end{equation}
We would like to emphasise that our discussion at this order
is valid in the vicinity of the phase-transition boundary and in particular
for a superconducting system without vortex pinning. In such a system,
vortices move in a viscous way, resulting in flux-flow resistivity;
no divergence of conductivity is expected.
Our results based on \eqref{sol} were calculated
at $\epsilon^{1/2}$ order, where only the lowest
eigenvalue $n=0$ of the TDGL operator $L$ makes an appearance.
The next-order correction is at order $\epsilon^{3/2}$, and there is now a contribution
from higher Landau levels. From the symmetry argument in
\citep{LR99,L65}, as long as the hexagonal lattice remains the stable
configuration for the system, the next-order contribution comes from
the sixth Landau level with a factor $(\varepsilon_6-\varepsilon_0)^{-1}$.
Even in the putative case of a lattice deformed slightly away from
a hexagonal configuration, the next contributing term is $n=2$,
since in our system the lattice will remain rhombic.
\subsection{Vortex-lattice solution}
\label{MLS}
The vortex lattice has been observed experimentally since the 1960s,
and its long-range correlations have been clearly established\cites{Kim99},
with a dislocation fraction of order $10^{-5}$.
Remarkably, the same techniques can be used to study the
structure and orientation of a moving vortex lattice under a steady current\cites{Fg02},
and under an alternating current in the small-frequency
regime\cites{Fg06}. In this subsection, we will discuss the
configuration of the vortex lattice in the presence of alternating
transport current in the long-time limit.
In the dynamical case, the presence of an electric field breaks the
rotational symmetry of an effectively isotropic system to the discrete
symmetry $y \rightarrow -y$. In contrast, a rhombic lattice preserves
at least a symmetry of this kind along two axes, and the special case
of a hexagonal lattice preserves sixfold symmetry.
The area of a vortex cell is determined by the quantised flux in the
vortex, which is $2\pi$ in terms of our rescaled variables.
As shown in \citefg{twovortexfigure}, we choose a unit cell $C$ defined
by two elementary vectors ${\bf a}_1$ and ${\bf a}_2$. We will first
construct a solution for an arbitrary rhombic lattice parameterised by
an apex angle $\theta_A$.
Consideration of translational symmetry in the $x$ direction leads to
the discrete parameter $k_l=2\pi l/a_1=\sqrt{2 \pi \wp}l$.
In \appref{LTDGL_sol} we show that in the long-time limit the
lowest-eigenvalue steady-state eigenfunctions of $L$ must therefore combine to form
\begin{equation}
\label{rhossol}
\varphi_0=\sqrt[4]{2 \sigma}\sum_{l=-\infty}^{\infty}
e^{i \frac{\pi}2 l(l-1)}
e^{i k_l (x- v \sin\omega \tau/\omega)} u_{k_l}(y,\tau)
.\end{equation}
Here $\varphi_0$ is normalised as
\begin{eqnarray}
\label{normalised}
\langle\tilde{\varphi}_0,\varphi_0\rangle \equiv1 \nonumber
.\end{eqnarray}
The function $u$ is given by
\begin{eqnarray}
\label{ukv}
u_{k_l}(y,\tau)=c(\tau)e^{ -\frac12
\big[
y-k_l+i {\tilde v} \cos(\omega \tau-\theta)
\big]^2
}
,\end{eqnarray}
with
\begin{eqnarray}
\label{ct}
c(\tau)=e^{
-\frac{{\tilde v}^2}{4}
\big[
\sin^2\theta+
\cos 2(\omega \tau-\theta)+
\frac1{2\omega}\sin2(\omega \tau-\theta)
\big]
}
.\end{eqnarray}
In analogy with a forced vibrating system in mechanics, a
phase $\theta=\tan^{-1} \omega$ and a reduced velocity $\tilde v=v\cos\theta$
have been introduced for convenience in \eqref{ct}. The zero-electric-field,
large-frequency and zero-frequency limits are consistent
with previous studies, as shown in \appref{LTDGL_sol}.
In our approximation, the $\beta_0$ in \eqref{sol} is a
time-independent quantity from \eqref{betam} and
\begin{eqnarray}
\label{beta0}
\beta_0&=&
\frac{\sqrt{\sigma}}{2 \pi}
\int_0^{2\pi}\textup{d} (\omega\tau)
\bigg\{
e^{\frac{v_{\omega}^2}{4}
\big( 1+
\cos2\omega\tau
+(\omega-1/\omega)\sin2\omega\tau
\big)
}
\nonumber\\
&&
\sum_{p=-\infty}^{\infty}
e^{- \frac12(
k_p-i v_{\omega} \cos \omega\tau
)^2
} \nonumber\\
&& \sum_{q=-\infty}^{\infty} (-)^{pq}
e^{-\frac12(
k_q-iv_{\omega} \cos \omega\tau
)^2
}
\bigg\}
,\end{eqnarray}
where $v_{\omega}=v/(1+\omega^2)$.
In the small-signal limit $v\rightarrow0$, $\beta_0$ reduces to the
Abrikosov constant. In the static case, the Abrikosov constant with either
$\theta_A=30\deg$ or $60\deg$ minimises the GL free energy
\eqref{GLfree}\cites{L65}.
To be more explicit, $\beta_0$ can be expanded in powers of the
input-signal amplitude $v$:
\begin{eqnarray}
\label{expbeta0}
\frac{\beta_0}{\beta_A}=1+\frac12v_{\omega}^2\big(\frac32-
\frac{\beta_b}{\beta_A}\big)+\mathcal O(v^3)
\end{eqnarray}
where we find it convenient to express the result in terms of $v_{\omega}$.
The first term in $\beta_0$ is the Abrikosov constant
\begin{equation}
\beta_A=\sqrt{\sigma}\sum_{p,q=-\infty}^{\infty}(-)^{pq}e^{-\frac12(k_p^2+k_q^2)}
.\end{equation}
For hexagonal lattices $\beta_A\sim1.159$, whereas for a square lattice $\beta_A\sim1.18$.
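These values can be checked directly from the lattice sum. The sketch below (the function name is ours) assumes the normalisation $\sigma$ equals the lattice parameter $\wp=\frac12\tan\theta_A$, a choice that reproduces the quoted numbers; the square lattice corresponds to $\theta_A=45\deg$ in this centred parametrisation.

```python
import numpy as np

def beta_A(theta_A_deg, L=20):
    """Abrikosov parameter beta_A for a rhombic lattice with apex angle
    theta_A, from the double lattice sum in the text, taking sigma = wp."""
    wp = 0.5*np.tan(np.radians(theta_A_deg))
    l = np.arange(-L, L + 1)
    f = np.exp(-0.5*(2.0*np.pi*wp)*l**2)  # exp(-k_l^2/2), k_l = sqrt(2 pi wp) l
    A = f.sum()                           # sum over all l
    B = f[l % 2 == 1].sum()               # sum over odd l only
    # (-1)^{pq} = -1 exactly when p and q are both odd, so the
    # double sum factorises as A^2 - 2*B^2.
    return np.sqrt(wp)*(A**2 - 2.0*B**2)
```

`beta_A(60)` and `beta_A(30)` agree, reflecting the degeneracy of the two hexagonal orientations, and evaluate to about $1.1596$; `beta_A(45)` gives about $1.1803$ for the square lattice.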
The next term in $\beta_0$ is $v_{\omega}^2$ with a coefficient
\begin{equation}
\beta_b=\frac{\sqrt{\sigma}}2\sum_{p,q=-\infty}^{\infty}(-)^{pq}
(k_p^2+k_q^2)e^{-\frac12(k_p^2+k_q^2)}
.\end{equation}
Since $v_{\omega}$ is small at high frequency, the higher-order
corrections in $v_{\omega}$ can then be disregarded.
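As a consistency check, the time-averaged lattice sum for $\beta_0$ can be evaluated directly and compared against the expansion above. The sketch below again assumes $\sigma=\wp=\frac12\tan\theta_A$ (function names are ours); the $(-1)^{pq}$ double sums are factorised into all-$l$ and odd-$l$ single sums, and the imaginary parts cancel by the $l\rightarrow-l$ symmetry.

```python
import numpy as np

LMAX = 12  # truncation of the lattice sums

def lattice(theta_A_deg=60.0):
    wp = 0.5*np.tan(np.radians(theta_A_deg))
    l = np.arange(-LMAX, LMAX + 1)
    return wp, np.sqrt(2.0*np.pi*wp)*l, (l % 2 == 1)

def beta0_numeric(v, omega, theta_A_deg=60.0, NT=800):
    """Direct evaluation of the time-averaged lattice sum for beta_0."""
    wp, k, odd = lattice(theta_A_deg)
    vw = v/(1.0 + omega**2)
    wt = np.linspace(0.0, 2.0*np.pi, NT, endpoint=False)
    pref = np.exp(0.25*vw**2*(1.0 + np.cos(2*wt)
                              + (omega - 1.0/omega)*np.sin(2*wt)))
    shift = 1j*vw*np.cos(wt)                  # i * v_w * cos(omega tau)
    f = np.exp(-0.5*(k[:, None] - shift[None, :])**2)
    A = f.sum(axis=0)                         # sum over all p, each time step
    B = f[odd].sum(axis=0)                    # sum over odd p
    return float(np.sqrt(wp)*np.mean(pref*(A**2 - 2.0*B**2)).real)

def beta_constants(theta_A_deg=60.0):
    """beta_A and beta_b from their static lattice sums."""
    wp, k, odd = lattice(theta_A_deg)
    f = np.exp(-0.5*k**2)
    A, B = f.sum(), f[odd].sum()
    Ak, Bk = (k**2*f).sum(), ((k**2*f)[odd]).sum()
    return np.sqrt(wp)*(A**2 - 2.0*B**2), np.sqrt(wp)*(Ak*A - 2.0*Bk*B)
```

For small $v_{\omega}$ the direct evaluation agrees with $\beta_A\big[1+\frac12 v_{\omega}^2(\frac32-\beta_b/\beta_A)\big]$ up to the neglected higher-order terms, and the $v\rightarrow0$ limit returns $\beta_A$.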
\section{Response}
\label{response}
In this section we discuss the current distribution and motion of vortices,
the transformation into heat of the work done on the system,
the nonlinear response and finally the Hall effect.
\subsection{Motion of vortices}
\label{motionofvortices}
In addition to the conventional conductivity attributable to the
normal state, there is an overwhelming contribution due to the
superconducting condensate in the flux-flow regime, tempered only by
the dissipative properties of the vortex matter. In this section we
will examine the supercurrent density to investigate the motion of the
vortex lattice. We consider a hexagonal lattice in a fully dissipative system;
the non-dissipative part known as the Hall effect will be discussed in
\secref{halleffect}.
The supercurrent density ${\mathcal J}({\bf r},\tau)$ is obtained
by substitution of the solution \eqref{sol} into \eqref{current}.
\begin{eqnarray}
\label{jx}
{\mathcal J}_x=
\frac {\epsilon}{\beta_0}
\sum_{p,q=-\infty}^{\infty}
\bigg( \frac{k_p+k_q}2-y\bigg) g_{p,q}({\bf r},\tau)
,\end{eqnarray}
and
\begin{eqnarray}
\label{jy}
{\mathcal J}_y=
\frac {\epsilon}{\beta_0}
\sum_{p,q=-\infty}^{\infty}
\bigg(
\frac{i (k_p-k_q)}{2}-{\tilde v}\cos(\omega \tau -\theta)
\bigg)
g_{p,q}
\end{eqnarray}
where
\begin{eqnarray}
g_{p,q}=e^{-i\frac{\pi}2(p^2-q^2-p+q)}
e^{i(k_q-k_p)(x-\frac{v}{\omega} \sin\omega \tau)}
u^*_{k_p}u_{k_q}\nonumber
,\end{eqnarray}
and $u$ is given in \eqref{ukv}.
Observing \citefg{CurrentDistribution}, we conceptually split the current
into two components. One part is the circulating current surrounding
the moving vortex core as in the static case; we refer to this
component as the \emph{diamagnetic current}. The other part which we
term the \emph{transport current} is the component which forces
vortices into motion.\footnote{Thinking of the system being embedded
in three-dimensional space, the circular and transport currents are
essentially the curl and gradient components of the current.}
\begin{figure}[ht]
\includegraphics[width=4.2cm]{Fig_CurrentDensity000.eps}
\includegraphics[width=4.2cm]{Fig_CurrentDensity008.eps}
\caption{Current flow at $v=1,\omega=1$:
(a) $\tau=0$; vortex cores move to the right.
(b) $\tau=4\pi/5$; vortex cores move to the left.
Vortices are drawn back and forth as the direction of the transport
current density alternates. The magnitude of the current density has
maximal regions which tend to circumscribe the cores; the maxima in
these regions move in the plane and their manner of motion can be
described as leading the motion of the vortices by a small phase. The
average current in a unit cell leads the motion of the vortex in time
by a phase of $\pi/2$. }
\label{CurrentDistribution}
\end{figure}
The diamagnetic current may be excised from our consideration by
integrating the current density over the unit cell $C$; that is,
we consider $\langle \mathcal J \rangle_{\bf r}$.
We have $\langle {\mathcal J}_x\rangle_{\bf r}=0$ and
\begin{eqnarray}
\label{avgj}
\langle {\mathcal J}_y\rangle_{\bf r}
&=&-\frac {\epsilon}{\beta_0} {\tilde v} \cos(\omega \tau -\theta)
e^{-\frac{{\tilde v}^2 }{4\omega}\sin2(\omega \tau-\theta)
+\frac{v^2_{\omega}}{2}}
.\end{eqnarray}
With our conventions, the transport current is along the
$y$-direction.
Considering the Lorentz force between the magnetic
flux in the vortices and the transport current, we expect the force on
the vortex lattice to be perpendicular to the current.
We identify the locations of vortex cores to be where $|\Psi|^2=0$.
The velocity of the vortex cores turns out to be
\begin{equation}
\label{vv}
v_c(\tau)={\tilde v}\cos ( \omega \tau-\theta)
\end{equation}
along the $x$-direction. Note that the vortex lattice moves coherently.
The vortex motion lags the electric field by a
phase $\theta$ which increases with frequency and reaches
$\pi/2$ asymptotically. The maximum velocity of the vortex motion
$\tilde{v}$ decreases with increasing frequency.
In \citefg{CurrentDistribution} we show the current distribution
and the resultant oscillation of vortices.
As anticipated, the transport current and the motion of
vortices \eqref{vv} are perpendicular as the vortices follow
the input signal. The current density diminishes near the core;
it is small there compared to its average value.
In steady-state motion, since the vortices move coherently in our approximation,
the interaction force between vortices is balanced as in the static case.
Since the system is entirely dissipative, the motion that the vortices
collectively undergo is viscous flow. The vortex lattice responds to the
Lorentz driving force as a damped oscillator, and this is the origin of
the frequency-dependent response.
\subsection{Configuration of moving vortex lattice}
\label{CML}
In the static case the system is described by the GL equation.
Solving this equation, which is \eqref{1TDGL} but
with zero on the left-hand side, will select some lattice configuration.
The global minima of the free energy correspond to a hexagonal lattice,
while there may be other configurations producing local minima.
In the static case the lattice configuration can be determined in
practice by building an Ansatz from the linearized GL solution\cites{L65}
and then using a variational procedure to minimise the full free energy.
In the dynamic case, there is no free energy to minimise; we must embrace
another method of making a physical prediction regarding the vortex lattice
configuration. Let us follow \citep{LMR04} and take as the preferred
structure the one with highest heat-generation rate. Though we have
at present no precise derivation, our physical justification of this
prescription is that the system driven out of equilibrium can reach
steady-state and stay in condensate only if the system can efficiently
dissipate the work done by the driving force. Therefore, whatever the
cause, the lattice structure most conducive to the maintenance of the
superconducting state will correspond to the maximal heat generation
rate.
The heat-generating rate\cites{KS98} is
\begin{eqnarray}
\label{entropy}
\langle {\dot Q} \rangle_{\bf r} &=&2 \langle |D_{\tau} \psi|^2\rangle_{\bf r} \\\nonumber
&=&\frac{\epsilon}{\beta_0}\frac{{\tilde v}^2} {4}
e^{\frac{{\tilde v}^2}{2} \cos^2 \theta-\frac{{\tilde v}^2}{4\omega}\sin2(\omega\tau-\theta)}\\\nonumber
&&\big\{ \cos2(\omega\tau-\theta)+1+\frac{{\tilde v}^2}{8}\big[\cos4(\omega\tau-\theta)+1\big]
\big\}
.\end{eqnarray}
$\beta_0$ is given explicitly in \eqref{beta0} and is the only parameter involving the apex
angle $\theta_A$ of the moving vortex lattice. Here $\beta_0$ plays
the same r\^ole as the Abrikosov constant $\beta_A$ in the static case.
Corresponding to maximising the heat-generating rate, the preferred
structure can be obtained by simply minimising $\beta_0$ with respect
to
$\theta_A$. This shows from the current viewpoint of maximal heat-generation
rate that vortices are again expected to move coherently.
In \eqref{beta0} or \eqref{expbeta0} it is seen that the moving
lattice is distorted by the external electric field but this influence
subsides at high frequency.
Numerical minimisation of $\beta_0$ with respect to $\theta_A$ shows that while near the high-frequency limit there remain
two local minima of $\beta_0$, the solution near
$\theta_A=60\deg$ is favoured slightly over that at $30\deg$ as the global minimum.
This is as presented in \citefg{twovortexfigure}(a).
The two minima tend to approach each other slightly as the frequency begins to decrease further.
In an experimental setting, this provides an avenue for testing the
empirical validity of the maximal heat generation prescription,
in particular in terms of the direction of lattice movement\cites{Fiory71}.
We put forth the physical interpretation
that at high frequency the friction force becomes less important, and
the distortion is lessened.
Since interactions dominate the lattice structure, the system at
high frequency will have many similarities with the static case.
\subsection{Energy dissipation in superconducting state}
\label{powerlose}
Energy supplied by the applied alternating current is absorbed and
dissipated by the vortex matter, and the heat generation does not
necessarily occur when and where the energy is first supplied.
In \citefg{TwoEnergyProfile}, we show an example of this transportation
of energy by the condensate.
On the left is shown a contour plot of the work
$\langle P \rangle_{\tau}=\langle{\mathcal
J}\cdot {\bf v}\rangle_{\tau}$ done by the input signal; points
along a given contour are of equal power absorption. On the
right of \citefg{TwoEnergyProfile} is shown the heat-generating rate\cites{KS98},
$\langle\dot Q\rangle_{\tau}=2\langle|D_{\tau}
\psi|^2\rangle_{\tau}$. The periodic maximal regions are near the
vortex cores in both patterns.
\begin{figure}[ht]
\includegraphics[width=4.2cm]{WorkContoursA.eps}
\includegraphics[width=4.2cm]{WorkContoursB.eps}
\caption{Work contours of superconducting component at $v=1$ and $\omega=1$:
(a) Work $\langle P \rangle_{\tau}$ and
(b) Heat generating rate $\langle \dot Q \rangle_{\tau}$.
The vortex cores are denoted by `$+$' in both figures, shown in the $x$-$y$ plane.
The maximum displacements of vortex cores are shown by the arrow.
The maximal region around the core in (a) is elongated by the current.
The similar horizontal broadening around the core in (b) is caused by the vortex motion.
Energy is transported; maxima in (a) and (b) do not coincide.
}
\label{TwoEnergyProfile}
\end{figure}
In \citefg{TwoEnergyProfile}(b), one can see that the system dissipates energy via vortex cores.
From a microscopic point of view, Cooper pairs break into quasiparticles inside the core;
these couple to the crystal lattice through phonons and impurities to transfer heat.
The interaction between the vortices and the excitations of the vortex cores manifests
as friction\cites{Kpn02}.
The power loss of the system averaged over time and space
is $\langle P \rangle=\langle \dot Q \rangle$.
\begin{eqnarray}
\langle P \rangle=\frac{\epsilon}{\beta_0}
\frac{{\tilde v}^2}{2} e^{\frac{v_{\omega}^2}{2}}
\bigg[
I_0 \bigg( \frac{{\tilde v}^2}{4 \omega}\bigg)
+\omega I_1\bigg( \frac{{\tilde v}^2}{4 \omega}\bigg)
\bigg]
\end{eqnarray}
where $I_n$ is a modified Bessel function of the first kind.
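The appearance of the modified Bessel functions can be traced to the elementary identities $\frac1{2\pi}\int_0^{2\pi} e^{a\sin u}\,du=I_0(a)$ and $\frac1{2\pi}\int_0^{2\pi}\sin u\, e^{a\sin u}\,du=I_1(a)$, which arise when time-averaging the exponential factor in the heat-generating rate. A quick numerical check (the value of $a$, standing in for ${\tilde v}^2/4\omega$, is an arbitrary placeholder):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv  # modified Bessel function I_n

a = 0.25  # stands in for v_t**2 / (4 * omega); value is arbitrary

# time-averages of the exponential factor over one period
i0 = quad(lambda u: np.exp(a * np.sin(u)), 0, 2 * np.pi)[0] / (2 * np.pi)
i1 = quad(lambda u: np.sin(u) * np.exp(a * np.sin(u)), 0, 2 * np.pi)[0] / (2 * np.pi)

print(np.isclose(i0, iv(0, a)), np.isclose(i1, iv(1, a)))  # True True
```

The same identities, applied order by order in the harmonic expansion, produce the $I_n$ coefficients appearing below.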
\begin{figure}[ht]
\psfrag{avpow}{$\langle P \rangle$}
\psfrag{eps}{$\epsilon$}
\psfrag{nu}{$\nu$}
\includegraphics[width=9cm]{SuperCurrentPowerLoss.eps}
\caption{Power loss of supercurrent $\langle {\mathcal J}\cdot e\rangle$ (upper panel)
and expansion parameter $\epsilon$ (lower panel) as a function of frequency $\nu$.}
\label{PowerLoss}
\end{figure}
In \citefg{PowerLoss} is shown the power loss and also $\epsilon$ as a
function of frequency. $\epsilon$ is proportional to the density of
Cooper pairs, and can be thought of as an indication of how robust
the superconductivity is.
As frequency increases, while $\epsilon$ tends to an asymptotic value,
$\langle P \rangle$ reaches a maximum and then decreases. In a fully
dissipative system such as the one considered here, this maximum is
not a resonance phenomenon; it is instead caused by fluctuations of
the order parameter induced by the applied field.
A parallel may be drawn between what we have observed in this section
and the suppression of the superconductivity by macroscopic thermal
fluctuations commonly observed in high-temperature superconductors.
In our case, the vortices in a high-$T_c$ superconductor undergo
oscillation due to the driving force of the external field. We may think
of this as being analogous to the fluctuations of vortices due to
thermal effects alone in a low-$T_c$ superconductor. Although the
method of excitation is different, the external electromagnetic
perturbation in the present case essentially plays the same r\^ole as the
thermal fluctuations in low-$T_c$ situation.
Finally, we point out that $\epsilon$ seems to be an appropriate
parameter for determining the amount of power loss. Generically, it
seems that for points deep inside the superconducting region, that
is, at large $\epsilon$ compared with its saturation value at high
$\nu$, the power loss due to the dissipative effects
of the vortex matter becomes suppressed. We suggest the possibility
that this effect, which is na\"ively intuitive, is in fact physical
and more widely applicable than merely the present model.
\subsection{Generation of higher harmonics}
The practical application of superconducting materials
is dependent on how well one can control the inherent nonlinear behaviour.
In this section we will focus on the generation of higher harmonics in
the mixed state, in response to a single-frequency input signal.
The periodic transport current $\langle{\mathcal J}\rangle_{\bf r}$ is
an odd function of the input signal, and it turns out that the response
motion also contains only odd harmonics.
From \eqref{avgj} we can calculate
the Fourier expansion for transport current.
\begin{equation}
\langle{\mathcal J}\rangle_{\bf r}=v {\rm Re}
\big[ \sum_{n=0}^{\infty}
\sigma^{(2n+1)} e^{i(2n+1)\omega\tau}
\big]
,\end{equation}
where the Fourier coefficient $\sigma^{(2n+1)}$ is
\begin{equation}
\label{hharmonics}
\sigma^{(2n+1)}=
\frac{\epsilon
e^{\frac12v_{\omega}^2}
i^n
}
{\beta_0\sqrt{1+\omega^2}}
\bigg[
iI_{n+1}\left( \frac{{\tilde v}^2}{4 \omega}\right)
+I_n \left( \frac{{\tilde v}^2}{4 \omega}\right)
\bigg]
e^{-i(2n+1)\theta
}
.\end{equation}
We see the response goes beyond simple ohmic behaviour
and the coefficients are proportional to $\epsilon$.
Experimentally, one way of measuring these coefficients is a lock-in
technique\cites{lockin}, which is adept at extracting a signal of
known waveform from even an extremely noisy environment.
To make contact with more standard parameters and satisfy our intuition,
we expand the first two harmonics in terms of $v$.
The fundamental harmonic, $\sigma^{(1)}$ expanded in powers of $v^2$ is
\begin{eqnarray}
\label{s1}
\sigma^{(1)}&=&\frac{a_h}{\beta_A (1-i \omega)}\\\nonumber
&&\mbox{}-
\frac{a_h}{4 \beta_A (1-i \omega)}\frac{v^2}{1+\omega^2}\bigg(\frac{1-\beta_B/\beta_A}
{1+\omega^2}+\frac1{a_h}+\frac{i}{2\omega}\bigg)\\\nonumber
&&\mbox{}+{\mathcal O}(v^4)
\end{eqnarray}
where $a_h=(1-t-b)/2b$.
The first term is the ohmic conductivity denoted as $\sigma_0^{(1)}$,
and is reminiscent of Drude conductivity for free charged particles.
This is not an unexpected parallel, since the Cooper pairs in a
superconducting system can be imagined to behave like a free-particle gas.
Taking this viewpoint, in the small-signal limit the ratio
$\rm Im \ \sigma / \rm Re \ \sigma=\omega=\Omega \tau_s$ determines the relaxation time $\tau_s$ of the
charged particles.
Subsequent higher-order corrections all contain $\omega$
in such a way that their contributions are suppressed at large $\omega$.
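As a sanity check on the Drude analogy, the ohmic term $\sigma_0^{(1)}\propto 1/(1-i\omega)$ indeed has $\rm Im\,\sigma/\rm Re\,\sigma=\omega$; a one-line numerical verification (prefactor $a_h/\beta_A$ dropped, test frequency arbitrary):

```python
import numpy as np

omega = 0.7                      # arbitrary test frequency
sigma0 = 1 / (1 - 1j * omega)    # ohmic term, prefactor a_h/beta_A dropped
print(np.isclose(sigma0.imag / sigma0.real, omega))  # True
```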
The coefficient of the $n=1$ harmonic expanded in powers of $v^2$ is
\begin{equation}
\label{s3}
\sigma^{(3)}=\frac {a_h}{8\beta_A} \frac {v^2/\omega}{\omega(3-\omega^2)-i(3\omega^2-1)}
+{\mathcal O}(v^4)
\end{equation}
which decreases quickly with increasing $\omega$.
In \citefg{GeneratingHarmonics}, we show the generation of higher harmonics
for three different states in the dynamical phase diagram.
For each harmonic labeled by $n$, $|\sigma^{(2n+1)}|$ as a function
of $\nu$ has the same onset as $\epsilon$.
We can see that $|\sigma^{(2n+1)}|$ reaches a maximum and then starts
to decay while $\epsilon$ saturates.
The coefficients of harmonics with $n>0$ decay to zero in the high $\nu$ limit,
where the state is well inside the superconducting region.
We pointed out in \secref{powerlose} and reaffirm here that $\epsilon$ plays a significant
r\^ole in determining the extent of nonlinearity in the system.
In turn, the parameter which controls this is $\omega$. When $\omega$ is large,
$\epsilon$ is brought closer to its saturation value $\epsilon_\infty$,
causing the higher harmonics to be suppressed,
and also lessening distortion of the vortex lattice.
Finally, for a given harmonic, $|\sigma^{(2n+1)}|$ is generally smaller when
$\epsilon_\infty$ is smaller; this can be seen by comparing (a) and
(c) of \citefg{GeneratingHarmonics}.
One might point out that the nonlinear behaviour is decreased at, for example, large $\omega$.
Nevertheless, we view the parameter $\epsilon/\epsilon_\infty$ as more intrinsic to the system,
rather than simply characterising the input signal.
A limited parallel can be drawn between the effect of thermal noise in high-$T_c$
superconducting systems, and the effect of the electromagnetic perturbation in our present case.
It seems that in either case the fluctuation influence can be reduced by moving the state
deeper inside the superconducting region.
\begin{figure*}[ht!]
\psfrag{si1}{$\sigma^{(1)}$}
\psfrag{si3}{$\sigma^{(3)}$}
\psfrag{si5}{$\sigma^{(5)}$}
\psfrag{Abs}[r][r]{$|\sigma|$}
\psfrag{ResS}[r][r]{$\rm Re \ \sigma$}
\psfrag{ImsS}[r][r]{$\rm Im \ \sigma$}
\psfrag{mImsS}[r][r]{$-\rm Im \ \sigma$}
\psfrag{nu}{$\nu$}
\psfrag{eps}{$\epsilon$}
\psfrag{aaa}{(a)}
\psfrag{bbb}{(b)}
\psfrag{ccc}{(c)}
\includegraphics[width=18cm]{HarmonicGeneration.eps}
\caption{Generation of harmonics: $\sigma^{(1)}$, $\sigma^{(3)}$
and $\sigma^{(5)}$ are plotted with respect to frequency $\nu$ at $t=0.2$.
(a) $b=0.3$, $e=0.5$. (b) $b=0.3$, $e=0.7$. (c) $b=0.5$, $e=0.5$.
The distance from the dynamic phase transition boundary $\epsilon$ is shown in the
separate bottom panel. The scale of figure (c) is half that of (a) and (b).
}
\label{GeneratingHarmonics}
\end{figure*}
\subsection{Flux-flow Hall effect}
\label{halleffect}
In contrast to the fully dissipative system we have considered, in
this section we will discuss an effect caused by the non-dissipative
component, namely the Hall effect. In a clean system, vortices move
without dissipation, and an electric field transverse to the current
appears. The non-dissipative dynamics admits a Gross-Pitaevskii
description, a type of nonlinear Schr\"odinger
equation\cites{Kpn02}, obtained by adding a non-dissipative part to
the relaxation coefficient $\gamma$ of \eqref{1TDGL}. The
fully dissipative operator $L$ in our previous discussion
can be generalised by using a complex relaxation
coefficient $r=1+i\zeta$. We thus define
\begin{equation}
{\mathcal L}=r D_{\tau}-\frac12(D^2_x+\partial_y^2+\partial_z^2)
.\end{equation}
The ratio $\zeta=\rm Im \ \gamma /\rm Re \ \gamma$ is typically on the order
of $10^{-3}$ for a conventional superconductor, and $10^{-2}$ for a
high-$T_c$ superconductor\cites{Kpn02}.
The Hall effect is small here. In normal metals, the non-dissipative
part gives the cyclotron frequency. If $\tau_e$ is the relaxation
time of a free electron in a dirty metal, then for typical values of
$\omega_c \tau_e\ll1$ the Hall effect becomes negligible. Because the
supply of conducting electrons is limited, the transverse component
increases at the expense of the longitudinal component as the mean
free path of excitations grows. It is equivalent to an increase in
the imaginary part of the relaxation constant at the expense of the
real part.
The eigenvalues and eigenfunctions of ${\mathcal L}$ can be obtained
easily by replacing the $v$ in previous results with $r v$, $\omega$
with $r \omega$ and $\tau$ with $\tau/r$. The transport current along
the $x$-direction is no longer zero in the presence of the
non-dissipative component; it is proportional to $\zeta$. The
frequency-dependent Hall conductivity can be obtained from the
first-order expansion in $v$,
\begin{eqnarray}
\label{sh}
\sigma^{h(1)}_{0}=\frac{a_h}{\beta_A}\frac{\zeta}{(1-i\omega)^2-\omega^2\zeta^2}
\end{eqnarray}
while the Hall contribution in the $y$ direction is expected to be
negligible, as it is of the order of $\zeta^2$.
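The claimed $\zeta$-linearity of the Hall response at small $\zeta$ can be read off \eqref{sh} directly, since the $\omega^2\zeta^2$ term in the denominator is negligible. A minimal numerical sketch (the prefactor $a_h/\beta_A$ is set to one and the parameter values are placeholders):

```python
import numpy as np

def sigma_hall(zeta, omega):
    """Hall conductivity of Eq. (sh) with the prefactor a_h/beta_A set to one."""
    return zeta / ((1 - 1j * omega)**2 - omega**2 * zeta**2)

omega = 1.0
s1 = sigma_hall(1e-3, omega)
s2 = sigma_hall(2e-3, omega)
# doubling zeta doubles the response in the small-zeta regime
print(np.isclose(abs(s2 / s1), 2.0, rtol=1e-5))  # True
```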
In principle, the crossover between non-dissipative systems and
dissipative systems can be tuned using the ratio $\zeta$. In a
non-dissipative system, which is the clean limit, the Hall effect is
important and taking account of the imaginary part of TDGL is
necessary. By contrast, in a strongly dissipative system where
excitations reach thermal equilibrium via scattering, the TDGL
equation gives satisfactory agreement.
\section{Experimental Comparison}
\label{Discussion}
Far-infrared spectroscopy can be performed using monochromatic
radiation which is pulsed at a high rate, known as fast far-infrared
spectroscopy. This technique has the advantage of avoiding
overheating of the system, making it a very effective tool for observing
the dynamical response of vortices. In particular, one can study the
imaginary part of the conductivity, which is contributed mainly by the
superconducting component.
In \citefg{FIR} is shown a comparison with an NbN experiment measuring
the imaginary part of conductivity.
The sample has the gap energy $2\Delta=5.3$~meV.
The resulting value of $2\Delta/T_c$ is larger than the value expected from
BCS theory\cites{Ikb09}.
We consider frequency-dependent conductivity in the case of linearly
polarised incident light with a uniform magnetic field along the $z$ axis.
The theoretical conductivity contains both a superconducting and a normal contribution.
The total conductivity is obtained from the total current as in
\eqref{Amplaw}, where the normal-part conductivity in the condensate
is the conductivity appearing in the Drude model.
According to our previous discussion, the nonlinear effect of the
input signal on NbN is unimportant in the THz region, which corresponds to $\nu\sim17$.
An approximation where the flux-flow conductivity includes only the term
$\sigma^{(1)}_0$ from \eqref{s1}, and Hall coefficient
$\sigma^{h(1)}_0$ from \eqref{sh} is shown in \citefg{FIR} and the
agreement with experiment is good.
The na\"ive way in which we have treated the normal-part contribution
is essentially inapplicable to the real-part conductivity. This is
because the real-part conductivity contains information about
interactions with the quasi-particles inside the core, making further
consideration necessary\cites{LL09}.
\begin{figure}[ht]
\psfrag{imagsigunits}{$\rm Im \ \sigma$ [$10^3$/$\Omega$cm]}
\psfrag{omegamev}{$\omega$ [meV]}
\psfrag{sigadded}[r][r]{$\sigma_n+\sigma$}
\psfrag{3T}{3~T}
\psfrag{5T}{5~T}
\psfrag{7T}{7~T}
\includegraphics[width=6cm,angle=270]{ExperimentImagConductivity.eps}
\caption{Experimental comparison of imaginary part of conductivity at
high frequency:
The NbN experimental data are from \litfig{6(b)} of \citep{Ikb09}.
Material parameters $\Tnst=15.3$~K and $H^0_{c2}=14.1$~T calculated
using \litfig{3(b)} of \citep{Ikb09}.
Normal-state conductivity is $\sigma_n=20\cdot10^4 (\mu \Omega$~cm$)^{-1}$
and relaxation time of an electron $\tau_e=5$~fs are taken from \citep{Ikb09}.
The theoretical curve has one fitting parameter $\kappa=44.5$.}
\label{FIR}
\end{figure}
\section{Conclusion}
\label{Conclusion}
The time-dependent Ginzburg-Landau equation
has been solved analytically to study the dynamical response of the free vortex lattice.
Based on the bifurcation method, which involves an expansion in the distance to the phase
transition boundary, we obtained a perturbative solution to all
orders. We studied the response of the vortex lattice in the
flux-flow region just below the phase transition, at first order in
this expansion.
We have seen that there are certain parameters which can
be tuned using the applied field and temperature, providing a feasible
superconducting system where one can study precise control of nonlinear
phenomena in vortex matter.
Under a perturbation by electromagnetic waves, the steady-state
solution shows that there is a diamagnetic current circulating the
vortex core, and a transport current parallel to the external electric
field with a frequency-dependent phase shift and amplitude. Vortices
move perpendicularly to the transport current and coherently.
Using a technique of maximising the heat-generation rate, we showed
that the preferred structure based on energy dissipation is
a hexagonal lattice, with a certain level of distortion appearing
as the signal is increased or the frequency is lowered.
Energy flowing into the system via the applied field is dissipated
through the vortex cores.
We showed that the superconducting part may be thought of as having
inductance in space and time.
We have written the transport current beyond a simple linear
expression. A comparison between different harmonics of three
different states in our four-dimensional parameter space
different states in our four-dimensional parameter space
indicated that the nonlinearity becomes unimportant at
high frequency and small amplitude, and the influence of the input
signal is decreased when the system moves deeper inside the
superconducting region, away from the phase-transition boundary.
To observe the configuration of moving vortices, techniques such as
muon-spin rotation\cites{Fg02,Fg06}, SANS\cites{Fg06}, STM\cites{stm} and others\cites{AbrikosovTechniques}
seem to be promising options.
To provide the kind of input signal considered here, methods such as
short-pulse FIR Spectroscopy as used in \citep{Ikb09} might be applied.
The coefficient we defined
in \eqref{hharmonics} corresponds to conductivity. We have also seen
that a simple parametrisation by complex quantities like conductivity
and surface impedance is insufficient to capture the detailed
behaviour of the system; in performing experiments, it should be kept
in mind that the nonlinearity can be measured in terms of more
appropriate variables as we have shown.
We have viewed the forcing of the system by the applied field to be
somewhat analogous to thermal fluctuations, in the sense that they
both result in vibration of the vortex lattice. Hence, the influence
of the electromagnetic fluctuation is stronger at the nucleation
region of superconductivity than deep inside the superconducting
phase. Moreover, since at high frequency the motion of vortices is
limited, the influence from electric field is suppressed, as is the
Hall effect.
\bigskip
\begin{acknowledgments}
Fruitful discussions with B.~Rosenstein, V.~Zhuravlev, and J.~Kol{\'a}{\v c}ek
are greatly appreciated.
The authors kindly thank G.~Bel and P.~Lipavsk{\'y} for critical reading of
the manuscript and many useful comments.
The authors also have benefitted from comments of A.~Gurevich.
We thank J.~R.~Clem for pointing out to us reference \citep{Fiory71}.
\texttt{NSC99-2911-I-216-001}
\end{acknowledgments}
\section{Introduction}
In this paper we are interested in the following question: given a finite
measure $\mu$, at what speed can it be approximated
by finitely supported measures? To give a sense to the question, one needs
a distance on the space of measures; we shall use the Wasserstein distances $W_p$,
with arbitrary exponent $p\in[1,+\infty)$
(definitions are recalled in Section \ref{sec:recalls}).
This problem has been called {\it Quantization for probability distribution},
the case of exponent $p=1$ has also been studied under the name of
{\it location problem}, and
the case $p=2$ is linked with {\it optimal centroidal Voronoi tessellations}.
After submission of the present article, we became aware that the previous works
cover much more of the material presented than we first thought; see
Subsection \ref{sec:biblio} for detailled references.
This problem could be of interest for a numerical study of transportation problems,
where measures can be represented by discrete ones. One would need to know the number of
points needed to achieve some precision in the approximation.
We shall restrict our attention to compactly supported Borelian measures on
Riemannian manifolds.
\subsection{Statement of the results}
First we show that the order of convergence is determined by the dimension
of the measure (see definitions in Section \ref{sec:recalls});
$\Delta_N$ denotes the set of all measures supported in at most $N$ points.
\begin{theo}\label{theo:dimension}
If $\mu$ is compactly supported and Alhfors regular of dimension $s>0$, then
\[W_p(\mu,\Delta_N) \approx \frac1{N^{1/s}}.\]
\end{theo}
Here we write $\approx$ to say that one quantity is bounded above and below by
positive multiples of the other. Examples of Ahlfors regular measures are given
by the volume measures on submanifolds, and self-similar measures (see for example \cite{Falconer}).
Theorem \ref{theo:dimension},
to be proved in a slightly more general and precise form in section \ref{sec:dimension},
is simple and unsurprising; it reminds of ball packing and covering, and indeed
relies on a standard covering argument.
In the particular case of absolutely continuous measures, one can give much finer estimates.
First, it is easily seen that if $\square^d$ denotes the uniform measure on a
Euclidean unit cube of dimension $d$, then there is a constant $\theta(d,p)$ such that
\[W_p(\square^d,\Delta_N)\sim \frac{\theta(d,p)}{N^{1/d}}\]
(Proposition \ref{prop:cube}).
Note that determining the precise value of $\theta(d,p)$ seems difficult; known cases are discussed in Section
\ref{sec:biblio}.
The main result of this paper is the following, where ``$\mathop {\mathrm{vol}}\nolimits$'' denotes
the volume measure on the considered Riemannian manifold and is the default
measure for all integrals.
\begin{theo}\label{theo:cont}
If $\mu=\rho\mathop {\mathrm{vol}}\nolimits$ where $\rho$ is a compactly supported function on a Riemannian manifold
$(M,g)$, then for all $1\leqslant p<\infty$ we have
\begin{equation}
W_p(\mu,\Delta_N) \mathop {\sim}\limits \frac{\theta(d,p)\, |\rho|^{1/p}_{\frac d{d+p}}}{N^{1/d}}
\label{eq:main}
\end{equation}
where $|\rho|_\beta=(\int_M \rho^\beta)^{1/\beta}$ is the $L^\beta$ ``norm'',
here with $\beta<1$ though.
Moreover, if $(\mu_N)$ is a sequence of finitely supported measures such that $\mu_N\in\Delta_N$
minimizes $W_p(\mu,\mu_N)$, then the sequence of probability measures $(\bar\mu_N)$ that are
uniform on the support of $\mu_N$ converges weakly to the multiple of
$\rho^{\frac{d}{p+d}}$ that has mass $1$.
\end{theo}
Theorem \ref{theo:cont} is proved in Section \ref{sec:cont}. Note that the hypothesis
that $\mu$ has compact support
is obviously needed: otherwise, $|\rho|_{d/(d+p)}$ could be infinite.
Even when $\mu$ is in $L^{d/(d+p)}$,
there is the case where it is supported
on a sequence of small balls going to infinity. Then the location of the balls is
important in the quality
of approximation and not only the profile of the density function. However,
this hypothesis could probably be relaxed to a moment condition.
Theorem \ref{theo:cont} has no real analog for measures
of fractional dimension.
\begin{theo}\label{theo:example}
There is a $s$-dimensional Ahlfors regular measure $\kappa$ on $\mathbb{R}$ (namely,
$\kappa$ is the Cantor dyadic measure) such that
$W_p(\kappa,\Delta_N) N^{1/s}$ has no limit.
\end{theo}
Section \ref{sec:examples} is devoted to this example.
Part of the interest of Theorem \ref{theo:cont} comes from the following observation, to be
discussed in Section \ref{sec:CVT}:
when $p=2$, the support of a distance minimizing $\mu_N\in\Delta_N$ generates a centroidal
Voronoi tessellation,
that is, each point is the center of mass (with respect to $\mu$)
of its Voronoi cell. We thus get the asymptotic repartition
of an important family of centroidal Voronoi tessellations, which enables us to
prove some sort of energy equidistribution principle.
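For illustration, the defining property of a centroidal Voronoi tessellation can be reproduced by Lloyd's algorithm, which alternates between computing the Voronoi cells of the current points and moving each point to the centroid of its cell. A one-dimensional sketch for the uniform density on $[0,1]$, where the optimal $p=2$ quantizer is the set of midpoints $(2i-1)/2N$ (the starting configuration and iteration count are arbitrary choices):

```python
import numpy as np

def lloyd_uniform(c, iters):
    """Lloyd's algorithm for the uniform density on [0, 1] (p = 2):
    alternate Voronoi cells and cell centroids."""
    for _ in range(iters):
        # interior cell boundaries are midpoints between adjacent atoms
        edges = np.concatenate(([0.0], (c[:-1] + c[1:]) / 2, [1.0]))
        c = (edges[:-1] + edges[1:]) / 2   # centroid of each cell
    return c

rng = np.random.default_rng(0)
N = 4
c = lloyd_uniform(np.sort(rng.uniform(0, 1, N)), 5000)
target = (2 * np.arange(1, N + 1) - 1) / (2 * N)  # optimal quantizer
print(np.allclose(c, target))  # True
```

At the fixed point every atom is the center of mass of its own Voronoi cell, which is exactly the CVT property discussed above.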
\subsection{Discussion of previously known results}\label{sec:biblio}
There are several previous works closely related to the content of this paper.
\subsubsection{Foundations of Quantization for Probability Distributions}
The book \cite{Graf-Luschgy} by Graf and Luschgy (see also the references
therein), that we only discovered recently,
contains many results on the present problem.
Theorem \ref{theo:dimension} is proved there in section 12, but our proof
seems more direct. Theorem \ref{theo:cont} is proved in the Euclidean
case in Sections 6 and 7 (with a weakening of the compact support assumption).
A generalization of Theorem \ref{theo:example} is proved in Section 14, yet
we present a proof for the sake of completeness.
The case $p=1$, $M=\mathbb{R}^n$ is usually called the {\it location problem}.
In this setting, Theorem \ref{theo:cont}
has also been proved by Bouchitt\'e, Jimenez and Rajesh
\cite{Bouchitte} under the additional assumption that $\rho$ is lower semi-continuous.
Our main motivation to publish this work despite these overlaps
is that the case of measures on manifold
should find applications; for example, good approximations of the curvature
measure of a convex body by discrete measures should give good approximations
of the body by polyhedra.
It seems that the quantization, the
location problem and the study of optimal CVTs, although the last two
are particular cases of the first one, have only been studied independently.
We hope that
noticing this proximity will encourage progress on each question to be
translated to the others.
\subsubsection{Around the main theorem}
Mosconi and
Tilli in \cite{Mosconi-Tilli} have studied (for any exponent $p$, in
$\mathbb{R}^n$) the {\it irrigation problem}, where the approximating measures
are supported on connected sets of length $<\ell$ (the length being the $1$-dimensional
Hausdorff measure)
instead of being supported on $N$ points; the order of approximation is then $\ell^{1/(d-1)}$.
Brancolini, Buttazzo, Santambrogio and Stepanov compare in \cite{Brancolini} the location problem
with its ``short-term planning'' version, where the support of $\mu_N$ is constructed
by adding one point to that of $\mu_{N-1}$, minimizing the cost only locally in $N$.
\subsubsection{Approximation constants for cubes}
Some values of $\theta(d,p)$ have been determined.
First, it is easy to compute them in dimension $1$:
\[\theta(1,p)=\frac{(p+1)^{-1/p}}{2}.\]
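This one-dimensional constant is easy to check numerically: the optimal $N$-point quantizer of the uniform measure on $[0,1]$ places atoms at the midpoints $(2i-1)/2N$, and evaluating the resulting cost reproduces $\theta(1,p)/N$. A sketch (the grid discretization is a numerical convenience, not part of the statement):

```python
import numpy as np

def quantization_error(p, N, grid=200_000):
    """W_p between uniform[0,1] and the midpoint quantizer on N atoms."""
    x = (np.arange(grid) + 0.5) / grid             # fine discretization
    c = (2 * np.arange(1, N + 1) - 1) / (2 * N)    # midpoints of N cells
    d = np.abs(x[:, None] - c[None, :]).min(axis=1)
    return np.mean(d**p)**(1 / p)

N = 8
for p in (1, 2, 3):
    theta = (p + 1)**(-1 / p) / 2                  # theta(1, p)
    print(np.isclose(quantization_error(p, N), theta / N, rtol=1e-4))
```

Each line prints True, matching the formula $W_p(\square^1,\Delta_N)=\theta(1,p)/N$.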
The case $d=2$ has been solved by Fejes T\'oth \cite{FejesToth,FejesToth2},
(and by Newman \cite{Newman} for $p=2$ and Morgan and Bolton \cite{Morgan-Bolton}
for $p=1$), see also \cite{Graf-Luschgy} Section 8.
In particular
\[\theta(2,2)=5\sqrt{3}/54 \qquad \theta(2,1)=2^{-2/3}3^{-7/4}(4+\ln 27).\]
When $d=2$ and for all $p$, the hexagonal lattice is optimal
(that is, the given $\theta$ is the distance
between the uniform measure on a regular hexagon and a Dirac mass at its center).
All other cases are open to our knowledge.
For numerical evidence in the
case $p=2,d=3$ see Du and Wang \cite{Du-Wang}. Note that in the limit case
$p=\infty$, determining $\theta$ amounts to determining the minimal density of a ball covering of $\mathbb{R}^d$,
which is arguably as challenging as determining the maximal density of a ball packing, a well-known
open problem if $d>3$.
\subsubsection{Random variations}
Concerning the order of convergence, it is worth
comparing with the problem of estimating the distance from a measure $\mu$
to empirical measures $\bar\mu_N=N^{-1}\sum_k \delta_{X_k}$ where $X_1,\ldots, X_N$
are independent random variables of law $\mu$. It seems that $\bar\mu_N$ is almost optimal
in the sense that $W_2(\mu,\bar\mu_N)\sim C\, N^{-1/d}$ almost surely (under moment conditions, but here
we take $\mu$ compactly supported so this is not an issue); Horowitz and Karandikar have
shown in \cite{Horowitz-Karandikar} that $W_2(\mu,\bar\mu_N)$
is at most of order $N^{-1/(d+4)}$, and the better exponent above is suggested in the
Mathematical Review of that paper.
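In dimension one the empirical rate is easy to probe, since $W_1(\mu,\bar\mu_N)=\int|F_N-F|$ in terms of the cumulative distribution functions. A Monte Carlo sketch for the uniform measure on $[0,1]$, where the CLT-type scaling $W_1\sim c\,N^{-1/2}$ is expected (sample sizes, grid resolution and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def w1_uniform_empirical(N, grid=20_001):
    """W_1(uniform[0,1], empirical measure of N samples), computed
    from the one-dimensional formula W_1 = integral of |F_N - F|."""
    x = np.sort(rng.uniform(0, 1, N))
    t = np.linspace(0, 1, grid)
    F_emp = np.searchsorted(x, t, side='right') / N   # empirical CDF
    return np.mean(np.abs(F_emp - t))                 # integral over [0, 1]

d_small = np.mean([w1_uniform_empirical(100) for _ in range(50)])
d_large = np.mean([w1_uniform_empirical(10_000) for _ in range(50)])
# scaling check: the ratio should be close to sqrt(10000/100) = 10
print(0.5 * 10 < d_small / d_large < 2 * 10)  # True
```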
Let us also briefly note that the optimal matching problem for random data is
related to our problem. Simply put, one can say that if $\bar\mu'_N$ is another empirical measure
of $\mu$, then $W_2(\bar\mu_N,\bar\mu'_N)$ also has the order $N^{-1/d}$ if $d\geqslant 3$
(see for example Dobri\'c and Yukich \cite{Dobric-Yukich}). In the same
vein, other optimisation problems for random data have been studied
(minimal-length covering tree, travelling salesperson problem, bipartite versions of these, etc.).
\subsubsection{Centroidal Voronoi Tessellations}
In the case $p=2$, the problem is linked to (optimal) {\it centroidal Voronoi Tessellations},
see Section \ref{sec:CVT} and \cite{Du-Faber-Gunzburger}.
In that paper (Section 6.4.1), the principle of energy equidistribution is given in
the $1$-dimensional case for smooth densities $\rho$. Our Corollary \ref{coro:equid}
in the last section generalizes this to non-regular densities, all exponents and
all dimensions; it is however a quite direct consequence
of Theorem \ref{theo:cont}.
\subsection{Related open questions}
The number $N$ of points of the support may be the first measure of complexity of a finitely
supported measure that comes to mind, but it is not necessarily the most relevant.
In the numerical analysis of transportation problems, numbers are usually
encoded in a computer as floating-point numbers. One could therefore define the complexity of a measure
supported on points with decimal coordinates, with a decimal quantity of mass at each point, as the
memory size needed to describe it, and seek to minimize the distance to a given $\mu$ among
measures of given complexity.
Another possible notion of complexity is entropy: one defines
\[h\left(\sum_i m_i \delta_{x_i}\right)=-\sum_i m_i\ln(m_i).\]
A natural question is to look for a $\mu_h$ that minimizes the distance to $\mu$ among the finitely
supported measures of entropy at most $h$, and to study the behavior of $\mu_h$ when we let
$h\to\infty$.
\subsection*{Acknowledgements} I am grateful to Romain Joly, Vincent Mun\-nier,
Herv\'e Pajot, R\'emy Peyre and C\'edric Villani for interesting discussions or comments.
\section{Background and definitions}\label{sec:recalls}
\subsection{Notation}
Given two sequences $(u_n)$, $(v_n)$ of non-negative real numbers, we shall write:
\begin{itemize}
\item $u_n\lesssim v_n$ to mean that there exists a positive real $a$ and an integer $N_0$ such that
$u_n \leqslant a v_n$ for all $n\geqslant N_0$,
\item $u_n\approx v_n$ to mean $u_n\lesssim v_n$ and $u_n\gtrsim v_n$.
\end{itemize}
From now on, $M$ is a given Riemannian manifold of dimension $d$. By a \emph{domain}
of $M$ we mean a compact domain with piecewise smooth boundary (and possibly corners)
and finitely many connected components.
\subsection{Ahlfors regularity and a covering result}
We denote by $B(x,r)$ the closed ball of radius $r$ and center $x$; sometimes,
when $B=B(x,r)$ and $k\in\mathbb{R}$, we denote by $kB$ the ball $B(x,kr)$.
Let $\mu$ be a finite, compactly supported measure on a manifold $M$ of
dimension $d$, and let $s\in(0,+\infty)$.
One says that $\mu$ is \emph{Ahlfors regular} of dimension $s$ if there is a constant $C$
such that for all $x\in\mathop {\mathrm{supp}}\nolimits\mu$ and for all $r\leqslant\mathop {\mathrm{diam}}\nolimits(\mathop {\mathrm{supp}}\nolimits\mu)$,
one has
\[C^{-1} r^s \leqslant \mu(B(x,r)) \leqslant C r^s.\]
This is a strong condition, but
is satisfied for example by self-similar measures, see \cite{Hutchinson,Falconer}
for definitions and Section \ref{sec:examples} for the most famous example of the Cantor measure.
Note that if $\mu$ is Ahlfors regular of dimension $s$,
then $s$ is the Hausdorff dimension of $\mathop {\mathrm{supp}}\nolimits\mu$ (and therefore $s\leqslant d$),
see \cite[Sec. 8.7]{Heinonen}.
We shall need the following classical
covering result.
\begin{prop}[$5\delta$ covering]
If $X$ is a closed set and $\mathscr{F}$ is a family of balls of uniformly bounded diameter such that
$X\subset \bigcup_\mathscr{F} B$, then there is a subfamily $\mathscr{G}$ of $\mathscr{F}$
such that:
\begin{itemize}
\item $X\subset \bigcup_\mathscr{G} 5B$,
\item $B\cap B'=\varnothing$ whenever $B\neq B' \in\mathscr{G}$.
\end{itemize}
\end{prop}
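The greedy argument behind this proposition can be sketched in code (a toy Python implementation for finitely many intervals of the line; the function and example data are ours, not part of the text): scan the balls by decreasing radius, keeping each ball that is disjoint from all balls kept so far; every discarded ball meets a kept ball of at least its own radius, hence is contained in the $5$-fold dilate of that kept ball.

```python
# Greedy selection behind the 5*delta covering lemma, for a finite family of
# one-dimensional balls (intervals) given as (center, radius) pairs.
def greedy_5r(balls):
    kept = []
    for (c, r) in sorted(balls, key=lambda b: -b[1]):      # decreasing radius
        if all(abs(c - c2) > r + r2 for (c2, r2) in kept):  # disjoint from kept
            kept.append((c, r))
    return kept

balls = [(0.1, 0.2), (0.15, 0.1), (0.5, 0.15), (0.8, 0.3), (0.95, 0.05)]
kept = greedy_5r(balls)

# Check: every original ball is contained in 5B for some kept ball B.
covered = all(
    any(abs(c - c2) + r <= 5 * r2 for (c2, r2) in kept)
    for (c, r) in balls
)
print(kept, covered)
```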
\subsection{Wasserstein distances}
Here we recall some basic facts on optimal transportation and Wasserstein distances.
For more information, we refer the reader, for example, to Villani's book \cite{Villani},
which provides a very good introduction to this topic.
First consider the case $p<\infty$, which shall attract most of our attention.
A finite measure $\mu$ on $M$ is said to have \emph{finite $p$-th moment} if for some
(hence all) $x_0\in M$ the following holds:
\[\int_{\mathbb{R}^d} d(x_0,x)^p \mu(\mathrm{d} x) < +\infty.\]
In particular, any compactly supported finite measure has finite $p$-th moment for all $p$.
Let $\mu_0,\mu_1$ be two finite measures having finite $p$-th moment and the same mass.
A \emph{transport plan} between $\mu_0$ and $\mu_1$ is a measure $\Pi$ on $M\times M$
that has $\mu_0$ and $\mu_1$ as marginals, that is:
$\Pi(A\times M)=\mu_0(A)$ and $\Pi(M\times A)=\mu_1(A)$ for every Borel set $A$.
One should think of $\Pi$ as an assignment of mass: $\Pi(A\times B)$ represents
the mass sent from $A$ to $B$.
The $L^p$ \emph{cost} of a transport plan is defined as
\[c_p(\Pi) = \int_{M\times M} d(x,y)^p \Pi(\mathrm{d} x\mathrm{d} y).\]
One defines the $L^p$ \emph{Wasserstein distance} by
\[W_p(\mu_0,\mu_1)=\inf_\Pi c_p(\Pi)^{1/p}\]
where the infimum is over all transport plans between $\mu_0$ and $\mu_1$.
One can show that there is always a transport plan achieving this infimum, and
that $W_p$ defines a distance on the set of measures with finite $p$-th moment and given mass.
Moreover, if $M$ is compact, $W_p$ metrizes the weak topology.
If $M$ is non-compact, it defines a finer topology.
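For finitely supported measures, the infimum above is a finite linear program. The following sketch (assuming NumPy and SciPy; the helper `wasserstein_p` is ours, not part of the text) computes $W_p$ between two weighted point clouds by optimizing over all transport plans:

```python
# W_p between two finitely supported measures of equal mass, computed as a
# linear program over transport plans Pi with prescribed marginals.
import numpy as np
from scipy.optimize import linprog

def wasserstein_p(xs, a, ys, b, p=2):
    """xs, ys: support points (n x d and m x d arrays); a, b: masses."""
    n, m = len(a), len(b)
    cost = np.linalg.norm(xs[:, None, :] - ys[None, :, :], axis=2) ** p
    # Marginal constraints: rows of Pi sum to a, columns of Pi sum to b.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun ** (1 / p)

xs = np.array([[0.0], [1.0]]); a = np.array([0.5, 0.5])
ys = np.array([[0.5]]);        b = np.array([1.0])
w = wasserstein_p(xs, a, ys, b, p=2)
print(w)  # mass 1 moved a distance 1/2, so W_2 = 0.5
```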
Most of the time, one restricts attention to probability measures. Here, we shall
make extensive use of mass transportation between submeasures of the main measures
under study, so that
we need to consider measures of arbitrary mass. Given positive measures $\mu$ and $\nu$,
we write $\mu\leqslant \nu$ if $\mu(A)\leqslant\nu(A)$ for every Borel set $A$, which means
that $\nu-\mu$ is also a positive measure.
It is important to
notice that $c_p(\Pi)$ is homogeneous of degree $1$ in the total mass and of degree
$p$ in distances, so that in the case $M=\mathbb{R}^d$, if $\varphi$ is a similarity of ratio $r$,
we have $W_p(m\, \varphi_\#\mu_0, m\, \varphi_\#\mu_1)=m^{1/p}\, r\, W_p(\mu_0,\mu_1)$.
The case $p=\infty$ is obtained as a limit of the finite case, see \cite{Champion-De_Pascale-Petri}.
Let $\mu_0$ and $\mu_1$ be compactly supported measures of the same mass and
let $\Pi$ be a transport plan between $\mu_0$ and $\mu_1$. The $L^\infty$ \emph{length}
of $\Pi$ is defined as
\[\ell_\infty(\Pi) = \sup\{d(x,y)\,|\, x,y\in\mathop {\mathrm{supp}}\nolimits\Pi\}\]
that is, the maximal distance moved by some infinitesimal amount of mass when applying $\Pi$.
The $L^\infty$ distance between $\mu_0$ and $\mu_1$ then is
\[W_\infty(\mu_0,\mu_1)=\inf_\Pi \ell_\infty(\Pi)\]
where the infimum is over all transport plans from $\mu_0$ to $\mu_1$.
In a sense, the $L^\infty$ distance is a generalisation to measures of the Hausdorff metric
on compact sets. We shall use $\ell_\infty$, but not $W_\infty$. The problem of minimizing
$W_\infty(\mu,\Delta_N)$ is a matter of covering $\mathop {\mathrm{supp}}\nolimits\mu$ (independently of $\mu$ itself),
a problem with quite a different taste than ours.
\section{Preparatory results}
The following lemmas are useful tools we shall need; the first two, at least,
make no claim to originality.
\begin{lemm}[monotony]
Let $\mu$ and $\nu$ be finite measures of equal mass and $\tilde\mu\leqslant\mu$.
Then there is a measure $\tilde\nu\leqslant\nu$
(in particular, $\mathop {\mathrm{supp}}\nolimits\tilde\nu\subset\mathop {\mathrm{supp}}\nolimits\nu$) such that
\[W_p(\tilde\mu,\tilde\nu)\leqslant W_p(\mu,\nu).\]
\end{lemm}
\begin{proof}
Let $\Pi$ be an optimal transportation plan from $\mu$ to $\nu$.
We construct a low-cost transportation plan from $\tilde\mu$ to $\tilde\nu$
by disintegrating $\Pi$.
There is a family of finite measures $(\eta_x)_{x\in M}$ such that
$\Pi = \int \eta_x \mu(\mathrm{d} x)$, that is
\[\Pi(A\times B) = \int_{A} \eta_x(B) \mu(\mathrm{d} x)\]
for all Borel sets $A$ and $B$.
Define
\[\tilde\Pi(A\times B) = \int_A \eta_x(B) \tilde\mu(\mathrm{d} x)\]
and let $\tilde\nu$ be the second factor projection of $\tilde\Pi$.
Since $\tilde\Pi\leqslant \Pi$, we have $\tilde\nu\leqslant \nu$
and $c_p(\tilde\Pi)\leqslant c_p(\Pi)$; moreover $\tilde\Pi$ is a transport plan
from $\tilde\mu$ to $\tilde\nu$ by definition of $\tilde\nu$.
\end{proof}
\begin{lemm}[summing]
Let $(\mu,\nu)$ and $(\tilde\mu,\tilde\nu)$ be finite measures with pairwise
equal masses. Then
\[W_p^p(\mu+\tilde\mu,\nu+\tilde\nu)\leqslant W_p^p(\mu,\nu)+W_p^p(\tilde\mu,\tilde\nu).\]
\end{lemm}
\begin{proof}
Let $\Pi$ and $\tilde\Pi$ be optimal transport plans between respectively $\mu$ and $\nu$, $\tilde\mu$ and $\tilde\nu$.
Then $\Pi+\tilde\Pi$ is a transport plan between $\mu+\tilde\mu$ and $\nu+\tilde\nu$ whose cost
is $c_p(\Pi+\tilde\Pi)=c_p(\Pi)+c_p(\tilde\Pi)$.
\end{proof}
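The summing lemma can also be checked numerically, e.g.\ for $p=1$ on the line (a sketch assuming SciPy; note that `scipy.stats.wasserstein_distance` normalizes weights to total mass $1$, so we rescale by the total mass, using that $c_1$ is homogeneous of degree $1$ in the mass):

```python
# Numerical check of the summing lemma for p = 1 on the line:
# W_1(mu + mu~, nu + nu~) <= W_1(mu, nu) + W_1(mu~, nu~).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def W1(xs, a, ys, b):
    # scipy normalizes the weights, so rescale by the (common) total mass.
    return a.sum() * wasserstein_distance(xs, ys, a, b)

# Two pairs (mu, nu) and (mu~, nu~) with pairwise equal masses.
xs, ys = rng.random(5), rng.random(5)
xs2, ys2 = rng.random(4), rng.random(4)
a  = np.full(5, 0.1); b  = np.full(5, 0.1)   # mass 0.5 each
a2 = np.full(4, 0.2); b2 = np.full(4, 0.2)   # mass 0.8 each

lhs = W1(np.concatenate([xs, xs2]), np.concatenate([a, a2]),
         np.concatenate([ys, ys2]), np.concatenate([b, b2]))
rhs = W1(xs, a, ys, b) + W1(xs2, a2, ys2, b2)
print(lhs <= rhs + 1e-12)
```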
These very simple results have a particularly important consequence for our question.
\begin{lemm}[$L^1$ stability]
Let $\mu$ and $\tilde\mu$ be finite compactly supported measures on $M$,
$\varepsilon\in(0,1)$ and $(\mu_N)$ be any sequence of $N$-supported measures.
There is a sequence of $N$-supported measures $\tilde\mu_N$ such that there are at most
$\varepsilon N$ points in $\mathop {\mathrm{supp}}\nolimits\tilde\mu_N\setminus\mathop {\mathrm{supp}}\nolimits\mu_N$ and
\[W_p^p(\tilde\mu,\tilde\mu_N)\leqslant W_p^p(\mu,\mu_{N_1})
+O\left(\frac{|\mu-\tilde\mu|_{TV}}{(\varepsilon N)^{p/d}}\right)\]
where $N_1$ is equivalent to (and at least) $(1-\varepsilon) N$, $|\cdot|_{TV}$ is the
total variation norm and the constant in the $O$ depends only on the geometry of
a domain where both $\mu$ and $\tilde\mu$ are concentrated.
In particular we get \[W_p^p(\mu,\Delta_N)\leqslant W_p^p(\tilde\mu,\Delta_{N_1})
+O\left(\frac{|\mu-\tilde\mu|_{TV}}{(\varepsilon N)^{p/d}}\right).\]
\end{lemm}
The name of this result has been chosen to emphasize that the total variation distance
between two absolutely continuous measures is half the $L^1$ distance between their densities.
\begin{proof}
We can write $\tilde\mu=\mu' + \nu$ where $\mu'\leqslant\mu$ and $\nu$ is a positive
measure of total mass at most
$|\mu-\tilde\mu|_{TV}$. If $D$ is a domain supporting both $\mu$ and $\tilde\mu$,
it is a classical fact that there is a constant $C$ (depending only on $D$) such
that for every integer $K$,
there are points $x_1,\ldots,x_{K^d}\in D$ such that each point
of $D$ is at distance at most $C/K$ from one of the $x_i$. For example if
$D$ is a Euclidean cube of side length $L$, by dividing it regularly one
can achieve $C=L\sqrt{d}/2$.
Take $K=\lfloor(\varepsilon N)^{1/d}\rfloor$; then
by sending each point of $D$ to a closest $x_i$, one constructs a transport
plan between $\nu$ and a $K^d$-supported measure $\nu_N$ whose cost is at most
$|\mu-\tilde\mu|_{TV} (C/K)^p$.
Let $N_1=N-K^d$. The monotony lemma gives a measure $\mu'_N\leqslant\mu_{N_1}$ (in particular,
$\mu'_N$ is $N_1$-supported) such that
\[W_p(\mu',\mu'_N)\leqslant W_p(\mu,\mu_{N_1}).\]
The summing lemma now shows that
\[W_p^p(\tilde\mu,\mu'_N+\nu_N)\leqslant W_p^p(\mu,\mu_{N_1})+
\frac{C^p |\mu-\tilde\mu|_{TV}}{K^p}.\]
Since $\mu'_N+\nu_N$ is supported on at most $N_1+K^d=N$ points, at most
$K^d\leqslant\varepsilon N$ of which lie outside $\mathop {\mathrm{supp}}\nolimits\mu_{N_1}$,
setting $\tilde\mu_N=\mu'_N+\nu_N$ concludes the proof.
\end{proof}
Note that the presence of the $|\mu-\tilde\mu|_{TV}$ factor will be crucial in the sequel,
but would not be present in the limit case $p=\infty$, which is therefore very different.
\begin{lemm}[metric stability]
Assume $D$ is a compact domain of $M$, endowed with
two different Riemannian metrics $g$ and $g'$ (defined on a open neighborhood of $D$).
Denote by $|g'-g|$ the minimum number $r$ such that
\[e^{-2r} g_x(v,v)\leqslant g'_x(v,v)\leqslant e^{2r} g_x(v,v)\]
for all $x\in D$ and all $v\in T_x M$.
Then, denoting by $W_p$ the Wasserstein metric computed using the distance
$d$ induced by $g$, and by $W'_p$ the one obtained from the distance $d'$
induced by $g'$, one has for all measures $\mu,\nu$ supported on $D$ and of equal mass:
\[e^{-|g'-g|} W_p(\mu,\nu) \leqslant W'_p(\mu,\nu) \leqslant e^{|g'-g|} W_p(\mu,\nu).\]
\end{lemm}
\begin{proof}
For all $x,y\in D$ one has $d'(x,y)\leqslant e^r d(x,y)$
by computing the $g'$-length of a $g$-minimizing (or almost minimizing to avoid
regularity issues on the boundary) curve connecting $x$ to $y$. The same reasoning
applies to transport plans:
if $\Pi$ is optimal from $\mu$ to $\nu$ according to $d$,
then the $d'$ cost of $\Pi$ is at most $e^{pr}$ times the $d$-cost of $\Pi$,
so that $W'_p(\mu,\nu) \leqslant e^r W_p(\mu,\nu)$. The other inequality follows
by symmetry.
\end{proof}
Let us end with a result showing that no mass is moved very far by an optimal
transport plan to an $N$-supported measure when $N$ is large enough.
\begin{lemm}[localization]\label{lem:length}
Let $\mu$ be a compactly supported finite measure.
If $\mu_N$ is a closest $N$-supported measure to $\mu$ in $L^p$
Wasserstein distance
and $\Pi_N$ is a $L^p$ optimal transport plan between $\mu$ and $\mu_N$,
then when $N$ goes to $\infty$,
\[\ell_\infty(\Pi_N)\to 0.\]
\end{lemm}
\begin{proof}
Assume on the contrary that there are sequences $N_k\to\infty$,
$x_k\in\mathop {\mathrm{supp}}\nolimits\mu$ and a number $\varepsilon>0$ such that
$\Pi_{N_k}$ moves $x_k$ by a distance at least $\varepsilon$.
There is a covering of $\mathop {\mathrm{supp}}\nolimits\mu$ by a finite number of balls of radius
$\varepsilon/3$. Up to extracting a subsequence, we can assume that all
$x_k$ lie in one of these balls, denoted by $B$. Since $B$ is a neighborhood
of $x_k$ and $x_k\in\mathop {\mathrm{supp}}\nolimits\mu$, we have $\mu(B)>0$. Since $\Pi_{N_k}$ is
optimal, it moves $x_k$ to a closest point in $\mathop {\mathrm{supp}}\nolimits\mu_{N_k}$, which must be at distance
at least $\varepsilon$ from $x_k$. Therefore, every point in $B$ is at distance
at least $\varepsilon/3$ from $\mathop {\mathrm{supp}}\nolimits\mu_{N_k}$, so that
$c_p(\Pi_{N_k})\geqslant \mu(B) (\varepsilon/3)^p >0$, in contradiction
with $W_p(\mu,\Delta_N)\to 0$.
\end{proof}
\section{Approximation rate and dimension}\label{sec:dimension}
Theorem \ref{theo:dimension} is the union of the two following propositions.
Note that the estimates given do not depend much on $p$, so that in fact
Theorem \ref{theo:dimension} stays true when $p=\infty$.
\begin{prop}\label{prop:majo}
If $\mu$ is a compactly supported probability measure on $M$ and if for some $C>0$,
for all $x\in\mathop {\mathrm{supp}}\nolimits\mu$ and for all $r\leqslant\mathop {\mathrm{diam}}\nolimits(\mathop {\mathrm{supp}}\nolimits\mu)$, one has
\[C^{-1} r^s\leqslant \mu(B(x,r))\]
then for all $N$
\[W_p(\mu,\Delta_N)\leqslant \frac{5C^{1/s}}{N^{1/s}}.\]
\end{prop}
\begin{proof}
The $5\delta$ covering proposition above implies that given any $\delta>0$, there is a subset
$\mathscr{G}$ of
$\mathop {\mathrm{supp}}\nolimits\mu$ such that
\begin{itemize}
\item $\mathop {\mathrm{supp}}\nolimits\mu \subset \bigcup_{x\in \mathscr{G}} B(x,5\delta)$,
\item $B(x,\delta)\cap B(x',\delta)=\varnothing$ whenever $x\neq x' \in\mathscr{G}$.
\end{itemize}
In particular, as soon as $\delta<\mathop {\mathrm{diam}}\nolimits(\mathop {\mathrm{supp}}\nolimits\mu)$ one has
\[1\geqslant \sum_{x\in\mathscr{G}} \mu(B(x,\delta)) \geqslant |\mathscr{G}| C^{-1} \delta^s\]
so that $\mathscr{G}$ is finite, with $ |\mathscr{G}|\leqslant C\delta^{-s}$.
Let $\tilde\mu$ be a measure supported on $\mathscr{G}$, that minimizes the $L^p$ distance
to $\mu$ among those. One way to construct $\tilde\mu$ is to assign to each point $x\in\mathscr{G}$
a mass equal to the $\mu$-measure of its Voronoi cell, that is, of the set of points
nearer to $x$ than to any other point of $\mathscr{G}$. The mass at a point at equal
distance from several elements of $\mathscr{G}$ can be split indifferently between them.
The previous discussion also gives a transport plan from $\mu$ to $\tilde\mu$,
where each bit of mass moves a distance at most $5\delta$, so that
$W_p(\mu,\tilde\mu)\leqslant 5\delta$ (whatever $p$).
Now, let $N$ be a positive integer and choose $\delta = (C/N)^{1/s}$. The family $\mathscr{G}$
obtained from this $\delta$ has at most $N$ elements, so that
$W_p(\mu,\Delta_N)\leqslant 5(C/N)^{1/s}$.
\end{proof}
\begin{prop}
If $\mu$ is a probability measure on $M$ and if for some $C>0$,
for all $x\in M$ and all $r$, one has
\[ \mu(B(x,r))\leqslant C r^s \]
then for all $N$,
\[ \left(\frac{s}{s+p}\right)^{1/p} C^{-1/s}\ \frac1{N^{1/s}}\leqslant W_p(\mu,\Delta_N).\]
\end{prop}
\begin{proof}
Consider a measure $\mu_N\in\Delta_N$ that minimizes the distance
to $\mu$. For all $\delta>0$, the union of the balls centered at $\mathop {\mathrm{supp}}\nolimits\mu_N$ and of radius
$\delta$ has $\mu$-measure at most $NC\delta^s$. In any transport plan from $\mu$ to
$\mu_N$, a quantity of mass at least $1-NC\delta^s$ travels a distance at least $\delta$;
in the best case, the quantity of mass traveling a distance between $\delta$
and $\delta+\mathrm{d}\delta$ (for $\delta<(NC)^{-1/s}$) is $NCs\delta^{s-1}\mathrm{d} \delta$. It follows that
\[W_p(\mu,\mu_N)^p\geqslant \int_0^{(NC)^{-1/s}} sNC\delta^{s-1}\delta^p \mathrm{d} \delta\]
so that $W_p(\mu,\mu_N)\geqslant (s/(s+p))^{1/p}(NC)^{-1/s}$.
\end{proof}
In fact, Theorem \ref{theo:dimension} applies to more general measures, for example
combinations of Ahlfors regular ones, thanks to the following.
\begin{lemm}
If $\mu=a_1\mu^1+a_2\mu^2$ where $a_i>0$ and $\mu^i$ are probability measures
such that
$W_p(\mu^2,\Delta_N)\lesssim W_p(\mu^1,\Delta_N)$
and $W_p(\mu^1,\Delta_N)\lesssim W_p(\mu^1,\Delta_{2N})$
then
$W_p(\mu,\Delta_N)\approx W_p(\mu^1,\Delta_N)$.
\end{lemm}
\begin{proof}
By the monotony lemma, $W_p(a_1\mu^1,\Delta_N)\leqslant W_p(\mu,\Delta_N)$ so that
$W_p(\mu^1,\Delta_N)\leqslant a_1^{-1/p}W_p(\mu,\Delta_N)$.
The summing lemma gives
\[W_p(\mu,\Delta_{2N})\leqslant\big(W_p(a_1\mu^1,\Delta_N)^p+W_p(a_2\mu^2,\Delta_N)^p\big)^{1/p}\]
so that
\[W_p(\mu,\Delta_{2N})\lesssim W_p(a_1\mu^1,\Delta_N)\lesssim W_p(\mu^1,\Delta_{2N}).\]
Since $W_p(\mu,\Delta_{2N+1})\leqslant W_p(\mu,\Delta_{2N})$ we also get
\[W_p(\mu,\Delta_{2N+1})\lesssim W_p(\mu^1,\Delta_{2N}) \lesssim W_p(\mu^1,\Delta_{4N})\leqslant W_p(\mu^1,\Delta_{2N+1}).\]
\end{proof}
The following is now an easy consequence of this lemma.
\begin{coro}
Assume that $\mu=\sum_{i=1}^k a_i \mu^i$ where
$a_i>0$ and $\mu^i$ are probability measures that
are compactly supported and Ahlfors regular of dimension $s_i>0$.
Let $s=\max_i(s_i)$. Then
\[W_p(\mu,\Delta_N)\approx \frac1{N^{1/s}}.\]
\end{coro}
\section{Absolutely continuous measures}\label{sec:cont}
In this section we prove Theorem \ref{theo:cont}.
To prove the Euclidean case, the idea is to approximate (in the $L^1$ sense)
a measure with density by a combination of uniform measures in squares. Then
a measure on a manifold can be decomposed as a combination of measures supported in
charts, and by metric stability the problem reduces to the Euclidean case.
The following key lemma shall be used several times to extend the class
of measures for which we have precise approximation estimates.
\begin{lemm}[Combination]\label{lemm:combination}
Let $\mu$ be an absolutely continuous measure on $M$.
Let $D_i$ ($1\leqslant i\leqslant I$) be domains of $M$ whose interiors
do not overlap, and assume we can decompose $\mu=\sum_{i=1}^I \mu^i$
where $\mu^i$ is non-zero and supported on $D_i$.
Assume moreover that there are
numbers $\alpha=(\alpha_1,\ldots,\alpha_I)$ such that
\[W_p(\mu^i,\Delta_N)\sim \frac{\alpha_i}{N^{1/d}}.\]
Let $\mu_N\in\Delta_N$ be a sequence minimizing the $W_p$ distance to $\mu$
and define $N_i$ (with implicit dependency on $N$)
as the number of points of $\mathop {\mathrm{supp}}\nolimits\mu_N$ that lie on $D_i$,
the points lying on a common boundary of two or more domains being attributed
arbitrarily to one of them.
If the vector $(N_i/N)_i$ has a cluster point $x=(x_i)$, then
$x$ minimizes
\[F(\alpha;x)=\left(\sum_i \frac{\alpha_i^p}{x_i^{p/d}}\right)^{1/p}\]
and if $(N_i/N)_i\to x$ when $N\to\infty$, then
\[W_p(\mu,\mu_N)\sim \frac{F(\alpha;x)}{N^{1/d}}.\]
\end{lemm}
Note that the assumption that none of the $\mu^i$ vanishes is obviously
unnecessary (but convenient). If some of the $\mu^i$ vanish, one only has to
discard them.
\begin{proof}
For simplicity we denote $c_p(N)=W_p^p(\mu,\Delta_N)$.
Let $\varepsilon$ be any positive, small enough number.
We can find a $\delta>0$ and domains $D'_i\subset D_i$
such that: each point of $D'_i$ is at distance at least $\delta$
from the complement of $D_i$; and, if $\mu'^i$ denotes the restriction
of $\mu^i$ to $D'_i$, then $|\mu'^i-\mu^i|_{TV}\leqslant \varepsilon^{1+p/d}$.
Assume $x$ is the limit of $(N_i/N)_i$ when $N\to \infty$.
Let us first prove that none of the $x_i$ vanishes.
Assume the contrary for some index $i$: then $N_i=o(N)$.
For each $N$ choose an optimal transport plan $\Pi_N$ from $\mu$
to $\mu_N$.
Let $\nu_{N}\leqslant \mu^i$ be the part of $\mu^i$
that is sent by $\Pi_N$ to the $N_i$ points of $\mathop {\mathrm{supp}}\nolimits\mu_N$ that lie
in $D_i$, constructed as in the summing lemma,
and let $m_N=\mu^i(D'_i)-\nu_N(D'_i)$ be the mass
that moves from $D'_i$ to the exterior of $D_i$ under $\Pi_N$.
Then the cost of $\Pi_N$ is bounded from below by
$m_N \delta^p+W_p^p(\nu_N,\Delta_{N_i})$. Since it goes to zero,
we have $m_N\to0$ and up to extracting a subsequence $\nu_N\to\nu$
where $\mu'^i\leqslant\nu\leqslant \mu^i$. The cost of $\Pi_N$
is therefore asymptotically bounded from below by
$W_p^p(\nu_N,\Delta_{N_i})\gtrsim N_i^{-p/d} \gg N^{-p/d}$,
a contradiction.
By considering optimal transport plans between each $\mu^i$
and an optimal $N_i$-supported measure of $D_i$, we get that
\begin{eqnarray*}
c_p(N) &\leqslant& \sum_i W_p^p(\mu^i,\Delta_{N_i})\\
&\leqslant& \sum_i \frac{(\alpha_i+\varepsilon)^p}{N_i^{p/d}}
\end{eqnarray*}
when all $N_i$ are large enough, which happens if $N$ itself is large enough given
that $x_i\neq0$.
For $N$ large enough, the localization lemma ensures that no mass is
moved more than $\delta$ by an optimal transport plan between
$\mu$ and $\mu_N$. This implies that the cost $c_p(N)$ is bounded
below by $\sum_i W_p^p(\mu'^i,\Delta_{N_i})$. By $L^1$-stability
this gives the bound
\[c_p(N)\geqslant \sum_i\frac{\alpha_i^p(1-\varepsilon)^{p/d}+O(\varepsilon)}{(x_iN)^{p/d}}.\]
The two inequalities above give us
\[c_p(N) N^{p/d} \to \sum_i \frac{\alpha_i^p}{x_i^{p/d}}=F^p(\alpha;x).\]
Now, if $x$ is a mere cluster point of $(N_i/N)$, this
still holds up to a subsequence. If $x$ did not minimize
$F(\alpha;x)$, then by taking best approximations of $\mu^i$
supported on $x'_iN$ points where $x'$ is a minimizer, we would get by the same computation
a sequence $\mu'_N$ with better asymptotic behavior than $\mu_N$
(note that we used the optimality of $\mu_N$ only to bound from above
each $W_p^p(\mu^i,\Delta_{N_i})$).
\end{proof}
The study of the functional $F$ is straightforward.
\begin{lemm}
Fix a positive vector $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_I)$ and consider the simplex
$X=\{(x_i)_i\,|\, \sum_i x_i =1,\, x_i> 0\}$. The function
$F(\alpha;\cdot)$ has a unique minimizer $x^0=(x_i^0)$ in $X$,
which is proportional to $(\alpha_i^{\frac {dp}{d+p}})_i$, with
\[F(\alpha ; x^0)=\left(\sum_i \alpha_i^{\frac {dp}{d+p}} \right)^{\frac{d+p}{dp}}
=:|\alpha|_{\frac{dp}{d+p}}.\]
As a consequence, in the combination lemma the vector $(N_i/N)$
must converge to $x^0$.
\end{lemm}
\begin{proof}
First, $F(\alpha;\cdot)$ is continuous and goes to $\infty$ on the boundary of $X$,
so that it must have
a minimizer. Any minimizer must be a critical point of $F^p$ and therefore satisfy
\[\sum_i \alpha_i^p x_i^{-p/d-1} \eta_i=0\]
for all vectors $(\eta_i)$ such that $\sum_i \eta_i=0$. This holds only when
$\alpha_i^p x_i^{-p/d-1}$ is constant in $i$, and we get the uniqueness of $x^0$ and its expression:
\[x^0_i = \frac{\alpha_i^{\frac{dp}{d+p}}}{\sum_j\alpha_j^{\frac{dp}{d+p}}}.\]
The value of $F(\alpha; x^0)$ follows.
In the combination lemma, we know by compactness that $(N_i/N)$ must have cluster points,
all of which must minimize $F(\alpha;\cdot)$. Since there is only one minimizer,
$(N_i/N)$ has only one cluster point and must converge to $x^0$.
\end{proof}
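This minimizer can be verified numerically (a sketch assuming SciPy; the values of $d$, $p$ and $\alpha$ below are arbitrary choices of ours):

```python
# Check that F(alpha; x) = (sum_i alpha_i^p / x_i^(p/d))^(1/p) is minimized
# over the simplex at x_i proportional to alpha_i^(dp/(d+p)).
import numpy as np
from scipy.optimize import minimize

d, p = 3, 2.0
alpha = np.array([1.0, 2.0, 0.5])

def F(x):
    return (np.sum(alpha**p / x**(p / d)))**(1 / p)

# Minimize over the (open) simplex {x > 0, sum_i x_i = 1}.
cons = {"type": "eq", "fun": lambda x: x.sum() - 1}
res = minimize(F, np.full(3, 1 / 3), constraints=cons,
               bounds=[(1e-9, 1)] * 3)

x0 = alpha**(d * p / (d + p)); x0 /= x0.sum()   # predicted minimizer
print(res.x, x0)
```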
We are now ready to tackle more and more cases in Theorem \ref{theo:cont}.
As a starting point, we consider the uniform measure $\square^d$ on the unit
cube of $\mathbb{R}^d$ (endowed with the canonical metric).
\begin{prop}\label{prop:cube}
There is a number $\theta(d,p)>0$ such that
\[W_p(\square^d,\Delta_N)\sim \frac{\theta(d,p)}{N^{1/d}}.\]
\end{prop}
The proof is obviously not new, since it is the same argument that shows
that an optimal packing (or covering) of the Euclidean space must have a well-defined
density (its upper and lower densities are equal).
\begin{proof}
Let $c(N)=W_p^p(\square^d,\Delta_N)$.
We already know that $c(N)\approx N^{-p/d}$, so let
$A=\liminf N^{p/d} c(N)$ and consider any $\varepsilon>0$.
Let $N_1$ be an integer such that $c(N_1)\leqslant (A+\varepsilon) N_1^{-p/d}$
and let $\mu_1\in \Delta_{N_1}$ be nearest to $\square^d$.
For any integer $\ell$, we can write $\ell=k^d+q$ where $k=\lfloor\ell^{1/d}\rfloor$
and $q$ is an integer; then $q = O(\ell^{1-1/d})=o(\ell)$ where the $o$ depends only on $d$.
\begin{figure}[tp]\begin{center}
\includegraphics{cube}
\caption{An optimal $N$-supported measure can be used to construct a good
$k^dN$-supported measure for all $k$.}\label{fig:cube}
\end{center}\end{figure}
Divide the cube into $k^d$ cubes of side length $1/k$, and consider the element
$\mu_k$ of $\Delta_{k^dN_1}$ obtained by duplicating $\mu_1$ in each of the cubes,
with scaling factor $k^{-1}$ and mass factor $k^{-d}$ (see Figure \ref{fig:cube}).
The obvious transport plan obtained
in the same way from the optimal one between $\square^d$ and $\mu_1$ has total
cost $k^{-p}c(N_1)$, so that
\[c(\ell N_1)\leqslant k^{-p} \frac{A+\varepsilon}{N_1^{p/d}}=
\left(\frac\ell {k^d}\right)^{p/d} \frac{A+\varepsilon}{(\ell N_1)^{p/d}}.\]
But since $k^d\sim\ell$, for $\ell$ large enough we get
\[c(\ell N_1)\leqslant\frac{A+2\varepsilon}{(\ell N_1)^{p/d}}.\]
Now $N\sim \lfloor N/N_1\rfloor N_1$ so that for $N$ large enough
$c(N)\leqslant(A+3\varepsilon) N^{-p/d}$.
This proves that $\limsup N^{p/d} c(N)\leqslant A+3\varepsilon$ for all
$\varepsilon>0$.
\end{proof}
Note that we used the self-similarity of the cube at many different scales;
the result does not hold with more general self-similar (fractal) measures,
see Section \ref{sec:examples}.
Now, the combination lemma enables us to extend the validity domain of
Equation \eqref{eq:main}.
\begin{lemm}
Let $\mu=\rho\lambda$ where $\lambda$ is the Lebesgue measure on
$\mathbb{R}^d$, $\rho$ is a $L^1$ non-negative function supported on
a union of cubes $C_i$ with non-overlapping interiors, side length $\delta$,
and assume $\rho$ is constant on each cube, with value $\rho_i$.
Then Equation \eqref{eq:main} holds.
\end{lemm}
\begin{proof}
Let $\mu^i$ be the restriction of $\mu$ to $C_i$, removing any cube where $\rho$
vanishes identically. Then from Proposition \ref{prop:cube} we get
$W_p(\mu^i,\Delta_N)\sim \alpha_i N^{-1/d}$ where
\[\alpha_i=\theta(d,p)(\rho_i\delta^d)^{1/p} \delta= \theta(d,p) \rho_i^{1/p} \delta^{\frac{d+p}p}\]
due to the homogeneity of $W_p$:
$\mu_i$ is obtained from $\square^d$ by multiplication by $\rho_i\delta^d$
and dilation of a factor $\delta$. By the combination lemma, we get
$W_p(\mu,\Delta_N)\sim \min F(\alpha;\cdot) N^{-1/d}$
where
\begin{eqnarray*}
\min F(\alpha,\cdot)&=&\theta(d,p)\left|\sum_i \rho_i^{\frac d{d+p}}
\delta^d\right|^{\frac{d+p}{dp}}\\
&=& \theta(d,p) |\rho|_{\frac d{d+p}}^{1/p}.
\end{eqnarray*}
\end{proof}
\begin{lemm}
Equation \eqref{eq:main} holds whenever $\mu$ is an absolutely
continuous measure defined on a compact domain of $\mathbb{R}^d$.
\end{lemm}
\begin{proof}
For simplicity, we denote $\beta=d/(d+p)$.
Let $C$ be a cube containing the support of $\mu$.
Choose some $\varepsilon>0$. Let $\tilde\mu=\tilde\rho\lambda$ be a measure such that
$\tilde\rho$ is constant on each cube of a regular subdivision of $C$, is zero outside $C$,
satisfies $|\rho-\tilde\rho|_1\leqslant 2\varepsilon^{1+p/d}$ and
such that $|\rho-\tilde\rho|_\beta\leqslant \varepsilon|\rho|_\beta$.
The stability lemma shows that
\[W_p^p(\mu,\Delta_N) \leqslant W_p^p(\tilde\mu,\Delta_{(1-\varepsilon)N})
+O\left(\frac{|\rho-\tilde\rho|_1}{2(\varepsilon N)^{p/d}}\right)\]
so that, using the hypotheses on $\tilde\rho$ and the previous lemma,
\[W_p^p(\mu,\Delta_N) \leqslant
\frac{(\theta(d,p)+\varepsilon)^p |\rho|_\beta (1+\varepsilon)(1-\varepsilon)^{-p/d}
+O(\varepsilon)}{N^{p/d}}\]
for $N$ large enough.
Symmetrically, we get (again for $N$ large enough)
\begin{eqnarray*}
W_p^p(\mu,\Delta_N) &\geqslant& W_p^p(\tilde\mu,\Delta_{N/(1-\varepsilon)})
-O\left(\frac{|\rho-\tilde\rho|_1}{2(\varepsilon N)^{p/d}}\right) \\
&\geqslant&
\frac{(\theta(d,p)-\varepsilon)^p|\rho|_\beta(1-\varepsilon)^{1+p/d}-O(\varepsilon)}{N^{p/d}}.
\end{eqnarray*}
Letting $\varepsilon\to0$, the claimed equivalent follows.
\end{proof}
\begin{lemm}
Equation \eqref{eq:main} holds whenever $\mu$ is an absolutely
continuous measure defined on a compact domain of $\mathbb{R}^d$,
endowed with any Riemannian metric.
\end{lemm}
\begin{proof}
Denote by $g$ the Riemannian metric, and let $C$ be a Euclidean cube
containing the support of $\mu$. Let $\varepsilon$ be any positive number,
and choose a
regular subdivision of $C$ into cubes $C_i$ of center $p_i$ such that for all $i$,
the restriction $g_i$ of $g$ to $C_i$ is almost constant:
$|g(p)-g(p_i)|\leqslant \varepsilon/2$ for all $p\in C_i$. Denote
by $\tilde g$ the piecewise constant metric with value $g(p_i)$ on $C_i$.
Note that even if $\tilde g$ is not continuous, at each discontinuity
point $x$ the possible choices for the metric are within a factor
$e^{2\varepsilon}$ one from another, and one defines that $\tilde g(x)(v,v)$
is the least of the $g(p_i)(v,v)$ over all $i$ such that $x\in C_i$. In this
way, $\tilde g$ defines a distance function close to the distance induced
by $g$ and the metric stability lemma holds with the same proof.
If one prefers not using discontinuous metrics, then it is also possible to
consider slightly smaller cubes $C'_i\subset C_i$, endow $C'_i$ with a constant
metric, and interpolate the metric between the various cubes. Then one uses
the $L^1$ stability in addition to the metric stability in the sequel.
Denote by $\rho$ the density of $\mu$ with respect to the volume form defined by
$g$, by $\mu^i$ the restriction of $\mu$ to $C_i$ and by $\rho_i$ the density of
$\mu^i$.
A domain of $\mathbb{R}^d$ endowed with a constant metric is isometric
to a domain of $\mathbb{R}^d$ with the Euclidean metric so that we can apply
the preceding lemma to each $\mu^i$: denoting by $W'_p$ the Wasserstein distance
computed from the metric $\tilde g$,
\[ W'_p(\mu^i,\Delta_N)\sim\frac{\theta(d,p) |\rho_i|_{\frac d{d+p}}^{1/p}}{N^{1/d}}.\]
The combination lemma then ensures that
$W'_p(\mu,\Delta_N)\sim \min F(\alpha;\cdot) N^{-1/d}$
where
\begin{eqnarray*}
\min F(\alpha,\cdot)&=&\theta(d,p)\left(\sum_i \int_{C_i} \rho_i^{\frac d{d+p}}
\right)^{\frac{d+p}{dp}}\\
&=& \theta(d,p) |\rho|_{\frac d{d+p}}^{1/p}.
\end{eqnarray*}
The metric stability lemma gives
\[e^{-\varepsilon} W'_p(\mu,\Delta_N) \leqslant W_p(\mu,\Delta_N)
\leqslant e^\varepsilon W'_p(\mu,\Delta_N)\]
and we only have left to let $\varepsilon\to 0$.
\end{proof}
We can finally end the proof of the main theorem.
\begin{proof}[Proof of Theorem \ref{theo:cont}]
Here $\mu$ is an absolutely
continuous measure defined on a compact domain $D$ of $M$.
Divide the domain into a finite number of subdomains $D_i$, each of which
is contained in a chart. Using this chart, each $D_i$ is identified with
a domain of $\mathbb{R}^d$ (endowed with the pulled-back metric of $M$).
By combination, the previous lemma shows that Equation \eqref{eq:main}
holds.
Let us now give the asymptotic distribution of the support of any
distance minimizing $\mu_N$. Let $A$ be any domain in $M$.
Let $x$ be the limit of the proportion of $\mathop {\mathrm{supp}}\nolimits\mu_N$ that lies inside $A$ ($x$ exists up
to extracting a subsequence). Since such domains generate the Borel $\sigma$-algebra, we
only have to prove that $x=\int_A \rho^\beta / \int_M \rho^\beta$. But this
follows from the combination lemma applied to the restriction of $\mu$
to $A$ and to its complement.
\end{proof}
\section{The dyadic Cantor measure}\label{sec:examples}
In this section we study the approximation problem for the dyadic Cantor measure
$\kappa$ to prove Theorem \ref{theo:example}.
Let $S^0, S^1$ be the dilations of ratio $1/3$ and fixed point $0,1$.
The map defined by
\[\mathscr{S}:\mu\mapsto 1/2\, S^0_\#\mu + 1/2\, S^1_\#\mu\]
is $1/3$-Lipschitz on the complete metric space of probability measures
having finite $p$-th moment endowed with the $L^p$ Wasserstein metric.
It has therefore a unique fixed point, called the dyadic Cantor measure
and denoted by $\kappa$. It can be considered as the ``uniform'' measure on
the usual Cantor set.
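The fixed-point construction is easy to reproduce numerically. The sketch below iterates the map $\mathscr{S}$ on a finitely supported measure, represented as a list of (position, mass) pairs; starting from the Dirac mass at $1/2$, $k$ iterations yield the $2^k$-point approximations used below.

```python
# Iterating the contraction map S : mu -> (S^0_# mu + S^1_# mu) / 2,
# with S^0(x) = x/3 and S^1(x) = (x + 2)/3.
def iterate(atoms, k):
    """atoms: list of (position, mass) pairs; apply the map k times."""
    for _ in range(k):
        atoms = ([(x / 3, m / 2) for x, m in atoms]
                 + [((x + 2) / 3, m / 2) for x, m in atoms])
    return atoms

# Starting from the Dirac mass at 1/2, k iterations give a measure
# supported on the 2^k centers of the generation-k Cantor intervals.
approx = iterate([(0.5, 1.0)], 3)
print(sorted(x for x, _ in approx))
```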
By convexity of the cost function and symmetry, $c_1:=W_p(\kappa,\Delta_1)$
is realized by the Dirac measure at $1/2$. Using the contractivity
of $\mathscr{S}$, we see at once that $W_p(\kappa,\Delta_{2^k})\leqslant 3^{-k} c_1$.
Denote by $s=\log 2/\log 3$ the dimension of $\kappa$. We have
\[ W_p(\kappa,\Delta_{2^k})(2^k)^{1/s}\leqslant c_1\]
for every integer $k$.
To study the case when the number of points is not a power of $2$, and to get
lower bounds in all cases, we introduce a notation to code the regions of $\mathop {\mathrm{supp}}\nolimits\kappa$.
Let $I^0=[0,1]$ and given a word $w=\epsilon_n\ldots\epsilon_1$ where
$\epsilon_i\in\{0,1\}$, define $I_w^n = S^{\epsilon_n} S^{\epsilon_{n-1}} \cdots S^{\epsilon_1} [0,1]$.
The \emph{soul} of such an interval is the open interval of one-third length with the same center.
The \emph{sons} of $I_w^n$ are the two intervals $I_{\epsilon w}^{n+1}$
where $\epsilon\in\{0,1\}$, and an interval is the \emph{father} of its sons.
The two sons of an interval are \emph{brothers}. Finally,
we say that $n$ is the \emph{generation} of the interval $I_w^n$.
Let $N$ be an integer, and $\mu_N\in\Delta_N$ be a measure closest to $\kappa$,
whose support is denoted by $\{x_1,\ldots,x_N\}$.
An interval $I_w^n$ is said to be \emph{terminal} if there is an $x_i$ in its soul.
A point in $I_w^n$ is always closer to the center of $I_w^n$ than to the center of its
father. This and the optimality of $\mu_N$ imply that a terminal interval contains only one $x_i$, at its center.
Since the restriction of $\kappa$ to $I_w^n$ is a copy of $\kappa$
with mass $2^{-n}$ and size $3^{-n}$, it follows that
\[W_p(\kappa,\mu_N)^p=c_1^p\sum_{I_w^n} 2^{-n}3^{-np}\]
where the sum is over terminal intervals. A simple convexity argument shows that the terminal intervals
are of at most two (successive) generations.
Consider the numbers $N_k=3\cdot2^k$. The terminal intervals of an optimal $\mu_{N_k}$
must be in generations $k+1$ (for $2^k$ of them) and $k+2$ (for $2^{k+1}$ of them).
Therefore
\[W_p(\kappa,\mu_{N_k})^p=c_1^p\left(3^{-(k+1)p}+3^{-(k+2)p} \right)/2\]
and finally
\[W_p(\kappa,\Delta_{N_k}) N_k^{1/s} = c_1 \left(\frac{1+3^{-p}}2\right)^{1/p} 3^{\frac{\log3}{\log2}-1}.\]
Note that the precise distribution of the support is immaterial (see figure
\ref{fig:cantor}).
\begin{figure}[tp]\begin{center}
\input{cantor.pstex_t}
\caption{The first four steps of the construction of the Cantor set; the Cantor measure is
equally divided between the intervals of a given step. The bullets show the supports
of two optimal approximations of $\kappa$ by $6$-supported measures. We see that there is
no need for the support to be equally distributed between the intervals of the first
generation.}\label{fig:cantor}
\end{center}\end{figure}
To see that $W_p(\kappa,\Delta_N)N^{1/s}$ has no limit, it is now sufficient to estimate
the factor of $c_1$ in the right-hand side of the above formula. First we remark that
$\left(\frac{1+3^{-p}}2\right)^{1/p}$ is greater than
$1-(1-3^{-p})/(2p)$, which is increasing in $p$ and equals $2/3$ at $p=1$.
Finally, we compute $2/3\cdot3^{\frac{\log3}{\log2}-1}\simeq 1.27 >1$.
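The numeric bound is immediate to verify:

```python
# Numeric check of the bound: the factor of c_1 at p = 1 exceeds 1.
from math import log

def factor(p):
    return ((1 + 3 ** -p) / 2) ** (1 / p) * 3 ** (log(3) / log(2) - 1)

# (1 + 3^-1)/2 = 2/3 exactly, and 2/3 * 3^(log3/log2 - 1) ~ 1.27 > 1,
# so W_p(kappa, Delta_N) N^(1/s) cannot converge.
print(factor(1))
```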
Note that the fundamental property of $\kappa$ we used is that the points in a
given $I_w^n$ are closer to its center than to that of its father. The same
method can therefore be used to study the approximation of sparser Cantor measures, or
of higher-dimensional analogues like the one generated by four
contractions of ratio $1/4$ in the plane, centered at the four vertices of a square.
Moreover, one could study in more detail the variations of the approximations
$W_p(\kappa,\Delta_N)$. As said before, our point here was only to show the limitations
of Theorem \ref{theo:cont}.
\section{Link with Centroidal Voronoi Tessellations}\label{sec:CVT}
Here we explain the link between our optimization problem and the centroidal Voronoi tessellations
(CVTs for short).
For a complete account of CVTs, the reader can consult
\cite{Du-Faber-Gunzburger}, from which all definitions below
are taken. Since we use the concept of barycenter, we consider only the case $M=\mathbb{R}^d$
(with the Euclidean metric). As before, $\lambda$ denotes the Lebesgue measure.
\subsection{A quick presentation}
Consider a compact convex domain $\Omega$ in $\mathbb{R}^d$ and a density (positive, $L^1$)
function $\rho$ on $\Omega$.
Given a $N$-tuple $X=(x_1,\ldots,x_N)$ of so-called \emph{generating points}, one defines
the associated \emph{Voronoi Tessellation} as the collection of convex sets
\[V_i = \big\{x\in\Omega\,\big|\, |x-x_i|\leqslant |x-x_j|\mbox{ for all } j\in\llbracket 1,N \rrbracket \big\}\]
and we denote it by $V(X)$. One says that $V_i$ is the \emph{Voronoi cell} of
$x_i$. It is a tiling of $\Omega$, in particular the cells cover $\Omega$
and have disjoint interiors.
Each $V_i$ has a center of mass, equivalently defined as
\[g_i=\frac{\int_{V_i} x\rho(x) \mathrm{d} x}{\int_{V_i} \rho(x)\mathrm{d} x}\]
or as the minimizer of the energy functional
\[\mathscr{E}_{V_i}(g)=\int_{V_i} |x-g|^2\rho(x)\mathrm{d} x.\]
One says that $(V_i)_i$ is a \emph{centroidal Voronoi tessellation} or \emph{CVT},
if for all $i$, $g_i=x_i$. The existence of CVTs comes easily by considering the following
optimization problem: search for a $N$-tuple of points $X=(x_1,\ldots,x_N)$
and a tiling $V$ of $\Omega$ by $N$ sets $V_1,\ldots,V_N$ which together minimize
\[\mathscr{E}_V(X)=\sum_{i=1}^N \mathscr{E}_{V_i}(x_i).\]
A compactness argument shows that such a minimizer exists, so
let us explain why a minimizer must be a CVT together with its generating set.
First, each $x_i$ must be the center of mass of $V_i$, otherwise one could reduce
the total energy by moving $x_i$ to $g_i$ and changing nothing else. But also, $V_i$ should
be the Voronoi cell of $x_i$, otherwise there is a $j\neq i$ and a set of positive measure in $V_i$
whose points are closer to $x_j$ than to $x_i$. Transferring this set from $V_i$ to $V_j$
would reduce the total cost.
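This alternating minimization, snapping each generator to the centroid of its cell and then recomputing the Voronoi tessellation, is the basis of the Lloyd iteration discussed in \cite{Du-Faber-Gunzburger}. A minimal one-dimensional sketch with uniform density on $[0,1]$, where the Voronoi cells are the intervals delimited by midpoints of consecutive generators:

```python
# Lloyd's iteration in 1-d for a uniform density on [0, 1]: Voronoi
# cells are intervals split at midpoints between consecutive generators,
# and each generator moves to the centroid (here the midpoint) of its cell.
def lloyd_1d(points, iterations=500):
    pts = sorted(points)
    n = len(pts)
    for _ in range(iterations):
        # cell boundaries: 0, the midpoints, 1
        bounds = ([0.0]
                  + [(pts[i] + pts[i + 1]) / 2 for i in range(n - 1)]
                  + [1.0])
        # uniform density => the centroid of [a, b] is (a + b)/2
        pts = [(bounds[i] + bounds[i + 1]) / 2 for i in range(n)]
    return pts

print(lloyd_1d([0.1, 0.2, 0.7, 0.9]))
```

For the uniform density the iteration converges to the unique CVT $x_i=(2i-1)/(2N)$.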
We observe that
this optimization problem is exactly that of approximating the measure
$\rho\lambda$ in $L^2$ Wasserstein distance; more precisely,
finding the $N$-tuple $X$ that minimizes $\inf_V \mathscr{E}_V(X)$ is equivalent
to finding the support of an optimal $\mu_N\in\Delta_N$ closest to $\rho\lambda$,
and then the Voronoi tessellation generated by $X$ gives the mass of $\mu_N$
at each $x_i$ and the optimal transport from $\rho\lambda$ to $\mu_N$.
One says that a CVT is \emph{optimal} when its generating set is a global minimizer
of the energy functional
\[\mathscr{E}(X)=\mathscr{E}_{V(X)}(X).\]
Optimal CVTs are most important in applications, which include
for example mesh generation and image analysis (see \cite{Du-Faber-Gunzburger}).
\subsection{Equidistribution of Energy}
The \emph{principle of energy equidistribution} says that if $X$ generates an optimal
CVT, the energies $\mathscr{E}_{V_i}(x_i)$ of the generating points should be asymptotically
independent of $i$ when $N$ goes to $\infty$.
Our goal here is to deduce a mesoscopic version of this principle from Theorem \ref{theo:cont}.
A similar result holds for any exponent, so we introduce the $L^p$ energy functionals
$\mathscr{E}^p_{V_i}(x_i) = \int_{V_i} |x-x_i|^p\rho(x)\mathrm{d} x$,
$\mathscr{E}^p_V(X) = \sum_i \mathscr{E}^p_{V_i}(x_i)$ and
$\mathscr{E}^p(X) = \inf_V \mathscr{E}^p_V(X) = \mathscr{E}^p_{V(X)}(X)$.
In particular, an optimal $X$ for this last functional is the support
of an element of $\Delta_N$ minimizing the $L^p$ Wasserstein distance to $\rho\lambda$.
Note that for $p\neq2$ an $X$ minimizing $\mathscr{E}^p(X)$ need not generate
a CVT, since the minimizer of $\mathscr{E}^p_{V_i}$ is not always
the center of mass of $V_i$ (though it is unique as soon as $p>1$).
\begin{coro}\label{coro:equid}
Let $A$ be a cube of $\Omega$.
Let $X^N=\{x_1^N,\ldots,x_N^N\}$ be a sequence
of $N$-sets minimizing $\mathscr{E}^p$ for the density $\rho$, and denote
by $\bar{\mathscr{E}}^p_A(N)$ the average energy of the points of $X^N$ that lie in $A$.
Then
\[\bar{\mathscr{E}}^p_A(N) N^{\frac{d+p}d}\]
has a limit when $N\to\infty$, and this limit does not depend on $A$.
\end{coro}
The cube $A$ could be replaced by any domain,
but not by any open set: since the union of the $X^N$ is countable, there are
open sets of arbitrarily small measure containing all the points $(x_i^N)_{N,i}$.
\begin{proof}
Fix some $\varepsilon>0$, let
$A'\subset A$ be the set of points of $A$ at distance at least
$\varepsilon$ from $\Omega\setminus A$, and let $A''\supset A$ be the set of points
at distance at most $\varepsilon$ from $A$.
First, the numbers $N',N''$ of points of $X^N$ in $A',A''$ satisfy
\[N'\sim N \frac{\int_{A'} \rho^{d/(d+p)}}{\int_\Omega \rho^{d/(d+p)}}\qquad
N''\sim N \frac{\int_{A''} \rho^{d/(d+p)}}{\int_\Omega \rho^{d/(d+p)}}.\]
The localization lemma implies that the maximal distance by which mass is moved by the optimal
transport between $\rho\lambda$ and the optimal $X^N$-supported
measure tends to $0$, so that for $N$ large enough the energy of all points in $A$
is at least the minimal cost between $\rho_{|A'}\lambda$ and
$\Delta_{N'}$ and at most
the minimal cost between $\rho_{|A''}\lambda$ and
$\Delta_{N''}$.
Letting $\varepsilon\to 0$ we thus get
that the total energy of all points of $X^N$ lying in $A$ is equivalent
to
\[\theta(d,p)^p\frac{\left(\int_A \rho^{d/(d+p)}\right)^{(d+p)/d}}{\left(N\int_A \rho^{d/(d+p)}\big/\int_\Omega \rho^{d/(d+p)}\right)^{p/d}}
= \theta(d,p)^p N^{-p/d}\left(\int_\Omega \rho^{d/(d+p)}\right)^{p/d}\int_A \rho^{d/(d+p)}. \]
Dividing by the number of points of $X^N$ in $A$, we get
\[\bar{\mathscr{E}}^p_A(N)\sim \theta(d,p)^p\left(\int_\Omega\rho^{d/(d+p)}\right)^{(d+p)/d} N^{-(d+p)/d},\]
which does not depend on $A$.
\end{proof}
\bibliographystyle{smfplain}
\section{Introduction}
\label{Introduction}
Crystallographic methods of structure solution are the gold-standard for determining atomic arrangements in crystals including, in the absence of single crystals, structure solution from powder diffraction data~\cite{pecha;b;fopdascom05,david;b;sdfpdd02}. Here we show that crystal structure solution is also possible from experimentally determined atomic pair distribution functions (PDF) using the Liga algorithm that was developed for nanostructure determination~\cite{juhas;n06,juhas;aca08}. The PDF is the Fourier transform of the properly normalized intensity data from an isotropically scattering sample such as a glass or a crystalline powder. It is increasingly used as a powerful way to study atomic structure in nanostructured materials~\cite{billi;jssc08,egami;b;utbp03}. Such nanostructures do not scatter with well defined Bragg peaks and are not amenable to crystallographic analysis~\cite{billi;s07}, but refinements of models to PDF data yield quantitatively reliable structural information~\cite{proff;jac99,farro;jpcm07,tucke;jpcm07}. Recently \emph{ab initio} structure solution was demonstrated from PDF data of small elemental clusters~\cite{juhas;n06}. Here we show that these methods can be extended to solve the structure of a range of crystalline materials.
Whilst it is unlikely that this kind of structure solution will replace crystallographic methods for well ordered crystals, this work demonstrates both that structure solution from PDF data can be extended to compounds, and that robust structure solutions are possible from the experimentally determined PDFs of a wide range of materials. We also note that there may be an application for this approach when the space-group of the crystal is not known, as the Liga algorithm does not make use of such symmetry information. In fact, the space group can be determined afterwards by analyzing the symmetry of the solved electron density map \cite{palat;jac08}.
However, this approach is promising for the case where the local
structure deviates from the average crystallographic structure, as has
been observed in a number of complex crystals, for example the
magnetoresistive La$_{1-x}$Ca$_x$MnO$_3$ system~\cite{qiu;prl05,bozin;pb06} or ferroelectric lead-based perovskites~\cite{dmows;jpcs00,juhas;prb04}.
The PDF contains this local information due to the inclusion of diffuse scattering intensities in the Fourier transform and it is possible to focus the modeling on a
specific length-scale when searching for matching structure models, allowing in principle structure solutions of local, intermediate, and long-range order to be obtained separately.
The procedure assumes a
periodic system with known lattice parameters and stoichiometry,
but otherwise uses no information about the location or symmetry of the
atom sites in the unit cell. To solve the unit cell structure the technique
constructs a series of trial clusters using the PDF-extracted
distances. Because the tested structures are created with direct use of the
distance information in the experimental data, the method performs significantly better than procedures that search by
random structure updates, such as Monte Carlo based minimization schemes~\cite{juhas;n06,juhas;aca08}.
\section{Experimental procedures}
\label{ExperimentalProcedures}
The extended Liga procedure has been tested with
experimental x-ray PDFs collected from inorganic test materials.
Powder samples of Ag, BaTiO$_{3}$, C-graphite, CaTiO$_3$, CdSe,
CeO$_2$, NaCl, Ni, PbS, PbTe, Si, SrTiO$_3$, TiO$_2$ (rutile),
Zn, ZnS (sphalerite) and ZnS (wurtzite) were obtained from commercial
suppliers. Samples were ground in an agate mortar to decrease their
crystallite size and improve powder averaging.
The experimental PDFs were measured using synchrotron
x-ray diffraction at the 6ID-D beamline of the Advanced Photon Source,
Argonne National Laboratory using the x-ray energies of 87 and 98~keV.
The samples were mounted using thin Kapton tape in a circular,
10~mm hole of a 1~mm thick flat plate holder, which was positioned
in transmission geometry with respect to the beam. The x-ray data were
measured using the ``Rapid Acquisition'' (RA-PDF) setup, where the
diffracted intensities were scanned by a MAR345 image plate
detector, placed about 20~cm behind the sample~\cite{chupa;jac03}.
All measurements were performed at room temperature.
The raw detector images were integrated using the Fit2D program
\cite{hamme;esrf98} to reduce them to a standard intensity
vs.\ $2\theta$ powder data. The integrated data were then
converted by the PDFgetX2 program~\cite{qiu;jac04i} to experimental PDFs.
The conversion to PDF was conducted with corrections for Compton
scattering, polarization and fluorescence effect, as available in
the PDFgetX2 program. The maximum value of the scattering
wavevector $Q_{\max}$ ranged from 19~Å$^{-1}$ to 29~Å$^{-1}$,
based on a visual inspection of the noise in the $F(Q) = Q [S(Q) - 1]$
curves.
The PDF function $G(r)$ was obtained by a Fourier transformation
of $F(Q)$,
\begin{equation}
\label{eq;sqtogr}
G(r) = \frac{2}{\pi}\int_{Q_{\min}}^{Q_{\max}} F(Q) \sin Qr \> \mathrm{d}Q,
\end{equation}
and provides a scaled measure of the probability of finding a pair of
atoms separated by distance $r$
\begin{equation}
\label{eq;grassum}
G(r) = \frac{1}{N r \langle f \rangle^2}
\sum_{i \neq j} f_i f_j \delta(r - r_{ij}) - 4 \pi \rho_0 r.
\end{equation}
The $G(r)$ function has the convenient property that its peak amplitudes
and standard deviations remain essentially constant with $r$, which makes it
suitable for curve fitting. A detailed discussion of the PDF theory,
data acquisition and applications for structure analysis can be found
in~\cite{egami;b;utbp03,farro;aca09}.
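As an illustration of the $\delta$-function sum in Equation~(\ref{eq;grassum}), the sketch below enumerates the pair distances $r_{ij}$ of a single-element \textit{f.c.c.} structure (parameters chosen to resemble Ni, $a \approx 3.52$~Å); the shortest distance, $a/\sqrt 2$, sets the position of the first PDF peak. This is only a toy model, not the PDFgetX2 pipeline.

```python
# Enumerate pair distances r_ij in a small f.c.c. supercell; these are
# the distances entering the delta-function sum of the G(r) formula.
from itertools import product
from math import sqrt

def fcc_distances(a, ncells=2):
    basis = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
    sites = [((i + x) * a, (j + y) * a, (k + z) * a)
             for i, j, k in product(range(ncells), repeat=3)
             for x, y, z in basis]
    return sorted(sqrt(sum((p[t] - q[t]) ** 2 for t in range(3)))
                  for p in sites for q in sites if p != q)

# The first peak of an f.c.c. metal sits at a / sqrt(2).
print(fcc_distances(3.52)[0])
```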
\section{Methods}
\label{Methods}
The structure solution procedure was carried out in three
separate steps, as described in the sections below.
The first step consists of peak search and profile fitting in
the experimental PDF to identify prominent inter-atomic distances up to
a cutoff distance $d_{\mathit{cut}}$.
We have developed an automated
peak extraction method which eases this task. In the second step these
distances are used as inputs for the Liga algorithm, which searches for
unit cell positions that give a structure with the best match to the pair
lengths. If the sample has several chemical species, a final ``coloring'' step
is necessary to assign proper atom species to the unit cell
sites. This can be done by making use of PDF peak amplitude information.
However, we have found that coloring can also be solved by optimizing the
overlap of the empirical atomic radii at neighboring sites, which is
simpler to implement and works with greater reliability.
To verify the quality and uniqueness of the structure, the Liga
algorithm has been run for each sample multiple times (at least 10)
with the same inputs, but different seeds of the random number generator. For most
samples the resulting structures were all equivalent, but sometimes
the program gave several geometries with similar agreement to
the PDF-extracted pair distances. In all these cases the correct
structure could be resolved in the coloring step, where it displayed
significantly lower atom radii overlap and converged to the known
structure. A small number of structures could not be solved by this process
and the reasons for failure are discussed below.
\subsection{Extraction of pair distances from the experimental PDF}
In the PDF, frequent
pair distances generate sharp peaks in the measured $G(r)$ curve
with amplitudes following Equation~(\ref{eq;grassum}). The peaks
are broadened
to approximately Gaussian shape that reflects atom thermal vibrations
and limited experimental resolution. Additional broadening and
oscillations are introduced to the PDF due to the maximum wavevector
$Q_{\max}$ that can be achieved in the measurement. This cutoff in
$Q_{\max}$ in effect convolutes ideal peak profiles with a
sinc function $\sin(Q_{\max} r) / r$ thus creating satellite
termination ripples.
Recovering the underlying peaks from the PDF is not trivial. The
experimental curve can have false peaks due to termination ripples.
Nearby peaks can overlap and produce complicated profiles that are
difficult to decompose. To simplify the process of extracting
inter-atomic distances we have developed an automated method for peak
fitting that adds peak profiles to fit the data to some user-defined
tolerance, while using as few peaks as possible to avoid over-fitting. This
method grows peak-like clusters of data points while fitting one or more
model peaks to each cluster. Adjacent clusters iteratively combine
until there is a single cluster with a model that fits the entire data
set. This allows a steady growth in model complexity by progressively
refining earlier and less accurate models. Furthermore, most adjustable
parameters can be estimated, in principle, from experimentally known quantities. A
full description of the peak extraction method will be presented in a
future paper.
The present work uses the simplest model for peaks, fitting the $G(r)$
data with Gaussian peaks over $r$ and using an assumed value of $\rho_{0}$. This
model ignores the effect of termination ripples, but for our data the
spurious peaks due to these ripples were usually identifiable by
their small size. Furthermore, the Liga algorithm is not required to
use every distance it is given, and should exhibit a limited tolerance
of faulty distances. The peak fitting procedure returns positions,
widths and integrated areas of the extracted peaks, of which only the
peak positions were used for structure determination.
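The clustering peak-extraction method itself is not reproduced here; the sketch below only illustrates the underlying Gaussian model on a synthetic isolated peak, recovering its area, position and width from the zeroth, first and second moments (all numeric values are hypothetical; real data additionally require handling of peak overlap, noise and termination ripples).

```python
# Toy illustration of the Gaussian peak model: recover area, position
# and width of an isolated synthetic G(r) peak from its moments.
from math import exp, pi, sqrt

center, width, area = 2.49, 0.12, 1.0       # hypothetical peak parameters
dr = 0.002
r = [1.8 + dr * i for i in range(700)]      # r-grid enclosing the peak
g = [area / (width * sqrt(2 * pi)) * exp(-(x - center) ** 2 / (2 * width ** 2))
     for x in r]

m0 = sum(g) * dr                                             # integrated area
m1 = sum(x * y for x, y in zip(r, g)) * dr / m0              # position
m2 = sum((x - m1) ** 2 * y for x, y in zip(r, g)) * dr / m0  # variance
print(m0, m1, sqrt(m2))
```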
The peak extraction procedure was implemented in Mathematica 6
and tested on both the experimental and
simulated data. A typical runtime was about 5 minutes.
Since the structures are known, we can compare the results of the peak extraction with
the expected distances. For both experimental and simulated PDFs of the tested structures
these compared qualitatively well to the ideal distances up to
$\sim$10-15~Å, including accurate identification of some
obscured peaks. Past that range the number of distinct, but very close,
distances in the actual structure is so great that reliable peak
extraction is much more difficult. For this reason we only performed
peak extraction up to 10~Å before running the trials
described in Section~\ref{Results}. Apart from removing peaks below a noise
threshold in order to filter termination ripples out, and one difficult
peak in the graphite data, all distances used in the structure solution
trials below come directly from the peak extraction method.
\subsection{Unit cell reconstruction using the Liga algorithm}
In the second step the Liga algorithm searches for the atom
positions in the unit cell that make the best agreement
to the extracted pair distances. The quality of distance match
is expressed by cost $C_d$ defined as a mean square difference between
observed and modeled pair distances.
\begin{equation}
\label{eq;ligacost}
C_d = \frac{1}{P} \sum_{d_k < d_{\mathit{cut}}}
\left( t_{k,\mathit{near}} - d_k \right)^2.
\end{equation}
The index $k$ goes over all pair distances $d_k$ in the model that are
shorter than the cutoff length $d_{\mathit{cut}}$ and compares them with
the nearest observed distance $t_{k,\mathit{near}}$, while $P$ is the
number of model distances. This cost definition considers
only distance values as extracted from the PDF peak positions,
and ignores their relative occurrences.
For multi-component systems there is in fact no
straightforward way of extracting distance multiplicities,
because it is not known what atom pairs are present in each PDF peak.
Nevertheless, the cost definition still imposes strict
requirements on the model structure, as displayed in
Fig.~\ref{fig2dLattice}. A
site in the unit cell must be at a good, matching distance not
only from all other cell sites, but also from all of their translational
images within the cutoff radius.
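A minimal sketch of the cost of Equation~(\ref{eq;ligacost}) for a cubic cell follows; the periodic images are restricted to $\pm1$ cell translations and the code is purely illustrative, not the optimized Liga implementation. In the b.c.c.-like toy cell below, the two observed distances $a\sqrt3/2$ and $a$ match the model exactly.

```python
# Distance cost C_d for a cubic cell of edge a: every model pair distance
# below d_cut (including distances to +/-1 translational images) is
# compared with the nearest extracted distance.
from itertools import product
from math import sqrt

def distance_cost(sites, a, targets, d_cut):
    dists = []
    for (i, p), (j, q) in product(enumerate(sites), repeat=2):
        for shift in product((-1, 0, 1), repeat=3):
            if i == j and shift == (0, 0, 0):
                continue  # skip the zero self-distance
            d = sqrt(sum((p[t] - q[t] + shift[t] * a) ** 2 for t in range(3)))
            if d < d_cut:
                dists.append(d)
    return sum(min((d - t) ** 2 for t in targets) for d in dists) / len(dists)

# b.c.c.-like toy cell (Cartesian coordinates, a = 1): neighbors at
# sqrt(3)/2 and self-images at 1, so these two targets give a cost of
# essentially zero.
sites = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
print(distance_cost(sites, 1.0, targets=[sqrt(3) / 2, 1.0], d_cut=1.05))
```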
\begin{figure}
\includegraphics[clip=true]{figures/fig2dLattice}
\caption{Schematic calculation of the distance cost $C_d$. To achieve
low $C_d$ a unit cell site needs to be at a correct distance from
other cell sites and from their translational images.
}
\label{fig2dLattice}
\end{figure}
To find the optimum atom positions in the unit cell the
Liga algorithm uses the input pair distances in an iterative build-up and
disassembly of partial structures~\cite{juhas;n06}.
The procedure maintains a pool of partial unit cell structures at each
possible size from a single atom up to a complete unit cell.
These ``candidate clusters'' are assigned to ``divisions''
according to the number of sites they contain; there are therefore
as many divisions as there are atoms in a complete unit cell.
The structures at each division compete
against each other in a stochastic process, where the probability of
winning is proportional to the reciprocal of the distance cost $C_d$; a win is thus more
likely for low-cost structures. The winning cluster is selected for
``promotion,''
where it adds one or more atoms to the
structure and thus advances to a higher division. At the new division a
poorly performing high-cost candidate is ``relegated'' to the original
division of the promoted structure, thus keeping the total number of
structures at each division constant. The relegation is accomplished
by removing cell sites that have the largest contributions to the
total cost of the structure. Both promotion and relegation
steps are followed by downhill relaxation of the worst site,
i.e., the site with the largest share of the total cost $C_d$.
The process of promotion and relegation is performed at every division
in a ``season'' of competitions. These seasons are repeated many
times until a full-sized structure
attains a sufficiently low cost or until a user-specified time
limit is reached. A complete description of the Liga algorithm
can be found in~\cite{juhas;aca08}.
\subsection{Atom assignment}
The Liga algorithm used in the structure solution step has no notion of
chemical species and therefore returns only coordinates of the atom
sites in the unit cell. For a multi-component system an
additional step, dubbed coloring, is necessary to assign
chemical elements over known cell sites. To assess the
quality of different assignments we have tested two definitions
for a cost of a particular coloring. The first method uses
a weighted residual, $R_w$, from a least-squares PDF
refinement to the input PDF data~\cite{egami;b;utbp03}.
The PDF refinement was performed with a fully automated PDFfit2 script,
where the atom positions were all fixed and only the atomic displacement
factors, PDF scale factor and $Q$-resolution damping factor were
allowed to vary. The second procedure
defines coloring cost $C_c$ as an average overlap of the empirical
atomic radii, so that
\begin{equation}
\label{eq;coloringcost}
C_c = \frac{1}{N} \sum_{d_{k} < r_{k,1} + r_{k,2}}
\left( r_{k,1} + r_{k,2} - d_{k} \right)^2.
\end{equation}
The index $k$ runs over all atom pairs considering periodic boundary
conditions,
$r_{k,1}$ and $r_{k,2}$ are the empirical radii values of
the first and second atom in the pair $k$, and $N$ is the number of
atoms in the unit cell.
Considering an $N$ atom structure with $s$ different atom species,
the total number of possible assignments is given by the multinomial
expression $N! / (n_1! \: n_2! \: \ldots \: n_s!)$\@.
For a 1:1 binary system the number of possible assignments grows as
$2^N$ with increasing $N$. Such exponential growth in possible
configurations makes it quickly impossible to compare them all in
an exhaustive way.
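The count is immediate to evaluate:

```python
# Counting possible colorings: N! / (n_1! n_2! ... n_s!).
from math import factorial

def colorings(counts):
    total = factorial(sum(counts))
    for n in counts:
        total //= factorial(n)
    return total

# A 1:1 binary cell with 32 sites already admits about 6e8 assignments,
# which motivates the downhill search over exhaustive enumeration.
print(colorings([16, 16]))
```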
We have therefore employed a simple downhill search, which starts with
a random element assignment. The initial coloring cost $C_c$ is calculated
together with a cost change for every possible
swap of two atoms between unit cell sites. The site flip that
results in the largest decrease of the total coloring cost is accepted and all
cost differences are evaluated again. The site swap is then repeated
until a minimum configuration is achieved, where all site flips
increase the coloring cost. The downhill procedure was
verified by repeating it 5 times using different initial assignments.
In nearly all cases these runs converged to the same
atom configurations.
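As a toy illustration (not the production code), the sketch below runs the downhill swap search on a one-dimensional periodic chain with unit spacing, using a nearest-neighbor version of the overlap cost of Equation~(\ref{eq;coloringcost}); the hypothetical radii are chosen so that only like-atom neighbors overlap, making the alternating, rock-salt-like coloring the minimum.

```python
# Downhill coloring search on a periodic 1-d chain (toy model): the best
# pair swap is applied repeatedly until no swap lowers the overlap cost.
def overlap_cost(colors, radii, spacing=1.0):
    n = len(colors)
    cost = 0.0
    for i in range(n):  # nearest neighbors only, periodic chain
        rsum = radii[colors[i]] + radii[colors[(i + 1) % n]]
        if spacing < rsum:
            cost += (rsum - spacing) ** 2
    return cost / n

def downhill(colors, radii):
    colors = list(colors)
    while True:
        best, swap = overlap_cost(colors, radii), None
        for i in range(len(colors)):
            for j in range(i + 1, len(colors)):
                if colors[i] == colors[j]:
                    continue
                colors[i], colors[j] = colors[j], colors[i]
                c = overlap_cost(colors, radii)
                if c < best:
                    best, swap = c, (i, j)
                colors[i], colors[j] = colors[j], colors[i]
        if swap is None:
            return colors  # local minimum: no swap improves the cost
        i, j = swap
        colors[i], colors[j] = colors[j], colors[i]

# hypothetical radii: only like pairs overlap (0.7 + 0.7 > 1 > 0.7 + 0.3)
result = downhill(["A", "A", "B", "B", "A", "A", "B", "B"],
                  {"A": 0.7, "B": 0.3})
print(result)
```

The search terminates in an alternating arrangement with zero overlap cost.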
The downhill procedure was performed using both definitions of the
coloring cost. For the coloring cost obtained by PDF fitting the
procedure was an order of magnitude slower and less reliable, as the
underlying PDF refinements could converge badly for poor atom assignments.
The second method, which calculated cost from radii-overlap, was
considerably
faster and more robust. For all tested materials, the overlap-based
coloring assigned all atoms correctly when run on the correct structure
geometry. The overlap cost was evaluated using either the covalent
radii by~\cite{corde;d08}
or the ionic radii from~\cite{shann;aca76} for more ionic
compounds. For some ions the Shannon table provides
several radii values depending on their coordination number or
spin state. Although these variants in ionic radii can vary by
as much as about 30\%, the choice of particular radius had no
effect on the best assignment for all studied structures.
\section{Results}
\label{Results}
The experimental x-ray PDFs were acquired from 16 test samples
with well known crystal structures. To verify that the measured
PDF data were consistent with established structure results, structure
refinements of the known structures were carried out using the PDFgui program~\cite{farro;jpcm07}. The PDF fits were done with structure data
obtained from the Crystallography Open Database (COD)~\cite{grazu;jac09}.
The structure parameters were all kept constant in the refinements,
which modified only parameters related to the PDF extraction, such
as the PDF scale, the $Q$-resolution damping envelope and a small rescaling
of the lattice parameters. These refinements are summarized in
Table~\ref{tab;PDFrefinements}, where low values of the fitting residual
$R_w$ confirm good agreement between experimental PDFs and expected
structure results.
The PDF datasets were then subjected to the peak search, Liga structure
solution and coloring procedures as described above. To check the
stability of this method, several structures were solved using an enlarged
periodicity of 1$\times$1$\times$2, 1$\times$2$\times$2 or
2$\times$2$\times$2 supercells. The lattice parameters used in the
Liga crystallography step were obtained from the positions of the
nearest PDF peaks. In several cases, such as for BaTiO$_{3}$ where peak
search could not resolve tetragonal splitting, the cell parameters were
taken from the respective CIF reference, as listed in
Table~\ref{tab;PDFrefinements}.
The structure solution was considered successful if the found structure
displayed the same nearest neighbor coordination as its CIF reference and
no site was offset by more than 0.3~Å from its correct
position. The solution accuracy was evaluated by finding the best
overlay of the found structure to the reference CIF data. The
optimum overlay was obtained by an exhaustive search over all symmetry
operations defined in the CIF file and over all mappings of solved atom
sites to all reference sites containing the same element. The
overlaid structures were then compared for the differences in
fractional coordinates and for the root mean square distortion $s_r$ of the solved
sites from their correct positions. Table~\ref{tab;SolvedStructures}
shows a summary of these results for all tested structures.
\begin{table}
\caption{List of measured x-ray PDFs and their fitting residuals
$R_w$ with respect to established structures from the literature.
}
\label{tab;PDFrefinements}
\begin{tabular}{lll}
\hline
sample & $R_w$ & CIF reference \\
\hline
Ag & 0.095 & \cite{wycko;bk63} \\
BaTiO$_{3}$ & 0.123 & \cite{megaw;ac62} \\
C (graphite) & 0.248 & \cite{wycko;bk63} \\
CaTiO$_{3}$ & 0.083 & \cite{sasak;acc87} \\
CdSe & 0.149 & \cite{wycko;bk63} \\
CeO$_{2}$ & 0.098 & \cite{wycko;bk63} \\
NaCl & 0.161 & \cite{jurge;ic00} \\
Ni & 0.109 & \cite{wycko;bk63} \\
PbS & 0.085 & \cite{ramsd;ami25} \\
PbTe & 0.070 & \cite{wycko;bk63} \\
Si & 0.085 & \cite{wycko;bk63} \\
SrTiO$_{3}$ & 0.143 & \cite{mitch;apa02} \\
TiO$_{2}$ (rutile) & 0.146 & \cite{meagh;canmin79} \\
Zn & 0.105 & \cite{wycko;bk63} \\
ZnS (sphalerite) & 0.102 & \cite{skinn;ami61} \\
ZnS (wurtzite) & 0.174$^1$ & \cite{wycko;bk63} \\
\hline
\multicolumn{3}{l}{
$^1$ refined as mixture of wurtzite and sphalerite phases
} \\
\hline
\end{tabular}
\end{table}
The procedure converged to a correct structure for 14 out of 16 studied
samples and failed for the remaining 2. The convergence was
more robust for high-symmetry structures, such as Ag (\textit{f.c.c.}),
NaCl or ZnS sphalerite, which could also be reliably solved in
enlarged [222] supercells. For all successful runs the distance cost $C_d$
of the Liga-solved structure was comparable to the one from the CIF
reference and the atom overlap measure $C_c$ was close to zero.
ZnS sphalerite shows a notable difference between the $C_d$ values of
the solution and its CIF reference; however, this was caused by using
a PDF peak position as a cell parameter for the solved structure.
Apparently the PDF peak extracted at $r \approx a$ was slightly
offset with respect to the other peaks; nevertheless, the Liga algorithm
still produced atom sites with correct fractional coordinates. The mean
displacement $s_r$ for ZnS is 0~Å, because solved structures and CIF
references were compared using lattice parameters rescaled to their
CIF values.
\begin{table}
\caption{Summary of tested structure solutions from x-ray PDF data}
\label{tab;SolvedStructures}
\begin{tabular}{lr*{8}{l}}
\hline
\multicolumn{2}{l}{sample \hfill atoms} &
\multicolumn{2}{l}{cost $C_d$ (0.01~Å$^2$)} &
\multicolumn{2}{l}{cost $C_c$ (Å$^2$)} &
\multicolumn{4}{l}{deviation of coordinates} \\
(supercell) & & Liga & CIF & Liga & CIF &
$s_x$ & $s_y$ & $s_z$ & $s_r$ (Å) \\
\hline
\multicolumn{10}{l}{successful solutions} \\
\hline
Ag [111] & 4 & 0.0232 & 0.136 & 0 & 0.001 &
0 & 0 & 0 & 0 \\
Ag [222] & 32 & 0.0097 & 0.136 & 0 & 0.001 &
0.00025 & 0.00024 & 0.00003 & 0.0014 \\
BaTiO$_3$ [111] & 5 & 0.370 & 0.394 & 0.040 & 0.042 &
0.0057 & 0.0066 & 0.014 & 0.064 \\
BaTiO$_3$ [112] & 10 & 0.392 & 0.394 & 0.058 & 0.042 &
0.00023 & 0.039 & 0.018 & 0.16 \\
C graphite [111] & 4 & 0.396 & 0.574 & 0.010 & 0.016 &
0.0029 & 0.0029 & 0.036 & 0.14 \\
C graphite [221] & 16 & 0.420 & 0.574 & 0.010 & 0.016 &
0.0086 & 0.0065 & 0.036 & 0.15 \\
CdSe [111] & 4 & 0.107 & 0.138 & 0 & 0.001 &
0 & 0 & 0.0055 & 0.027 \\
CdSe [221] & 16 & 0.0856 & 0.138 & 0 & 0.001 &
0.00010 & 0.00013 & 0.0057 & 0.028 \\
CeO$_2$ [111] & 12 & 0.515 & 0.554 & 0 & 0 &
0 & 0 & 0 & 0 \\
NaCl [111] & 8 & 1.75 & 1.71 & 0 & 0 &
0 & 0 & 0 & 0 \\
NaCl [222] & 64 & 1.20 & 1.71 & 0 & 0 &
0.00031 & 0.00031 & 0.00035 & 0.0032 \\
Ni [111] & 4 & 0.0024 & 0.0024 & 0 & 0 &
0 & 0 & 0 & 0 \\
Ni [222] & 32 & 0.0025 & 0.0024 & 0 & 0 &
0.00015 & 0.00013 & 0.00013 & 0.0008 \\
PbS [111] & 8 & 0.0125 & 0.0104 & 0.010 & 0.011 &
0 & 0 & 0 & 0 \\
PbS [222] & 64 & 0.0140 & 0.0104 & 0.010 & 0.011 &
0.00005 & 0.00004 & 0.00005 & 0.0005 \\
PbTe [111] & 8 & 0.0024 & 0.0127 & 0.097 & 0.090 &
0 & 0 & 0 & 0 \\
PbTe [222] & 64 & 0.0022 & 0.0127 & 0.097 & 0.090 &
0.00011 & 0.00011 & 0.00008 & 0.0011 \\
Si [111] & 8 & 0.0045 & 0.0045 & 0 & 0 &
0 & 0 & 0 & 0 \\
Si [222] & 64 & 0.0048 & 0.0045 & 0 & 0 &
0.00010 & 0.00009 & 0.00008 & 0.0009 \\
SrTiO$_3$ [111] & 5 & 0.437 & 0.437 & 0.002 & 0.002 &
0 & 0 & 0 & 0 \\
Zn [111] & 2 & 0.495 & 0.470 & 0 & 0 &
0 & 0 & 0.027 & 0.095 \\
Zn [222] & 16 & 0.564 & 0.470 & 0 & 0 &
0.00010 & 0.00006 & 0.020 & 0.080 \\
ZnS sphalerite [111] & 8 & 0.150 & 0.0647 & 0 & 0 &
0 & 0 & 0 & 0 \\
ZnS sphalerite [222] & 64 & 0.160 & 0.0647 & 0 & 0 &
0.00029 & 0.00033 & 0.00031 & 0.0028 \\
ZnS wurtzite [111] & 4 & 0.141 & 0.152 & 0 & 0 &
0 & 0 & 0.0038 & 0.017 \\
ZnS wurtzite [221] & 16 & 0.165 & 0.152 & 0 & 0 &
0.00003 & 0.00002 & 0.0039 & 0.017 \\
\hline
\multicolumn{10}{l}{failed solutions} \\
\hline
CaTiO$_3$ [111] & 20 & 0.4967 & 0.902 & 0.52 & 0.072 &
0.16 & 0.14 & 0.17 & 1.6 \\
TiO$_2$ rutile [111] & 6 & 0.5358 & 0.758 & 0.40 & 0.009 &
0.081 & 0.24 & 0.00004 & 0.94 \\
\hline
\multicolumn{10}{p{\textwidth}}{
$C_d$, $C_c$ -- distance and atom overlap costs as defined in equations
(\ref{eq;ligacost}) and (\ref{eq;coloringcost});
$s_x$, $s_y$, $s_z$ -- standard deviations in fractional coordinates
normalized to a simple [111] cell;
$s_r$ (Å) -- root mean square displacement of the solved sites from the
reference CIF positions
} \\
\hline
\end{tabular}
\end{table}
The structure determination failed for the 2 lower-symmetry
samples, CaTiO$_3$ and TiO$_2$ rutile. In both of these cases,
the solved structure showed a significantly lower distance cost
$C_d$, while its atom overlap $C_c$ was an order of magnitude higher
than for the correct structure and clearly indicated an unphysical result.
These failures were caused by the poor quality of the extracted
distances, which contained significant errors and omissions with
respect to an ideal distance list. Peak search and distance extraction
are more difficult for lower-symmetry structures, because their pair
distances are more spread out and produce small features that can be
below the resolution of the technique. Because of the poor distance data,
the Liga algorithm converged to incorrect geometries that actually
displayed a better match with the input distances. Both CaTiO$_3$ and
TiO$_2$ were easily solved when run with ideal distances calculated
from the CIF structure.
The results in Table~\ref{tab;SolvedStructures} suggest several ways
to extend the method and improve its success rate. First, the
Liga geometry solution and coloring steps can be performed
together; in other words, the structure coloring step needs to be merged into
a chemistry-aware Liga procedure. Since the atom overlap
cost $C_c$ is meaningful and can be easily evaluated for partial
structures, the total cost minimized by the Liga algorithm should
equal a weighted sum of $C_c$ and distance cost $C_d$. Such a cost
definition would steer the Liga algorithm away from faulty
structures found for CaTiO$_3$ and TiO$_2$ rutile, because both of them
had large atom overlaps $C_c$. Another improvement is to perform a PDF
refinement for a full-sized structure and update its cost formula so
that the PDF fit residuum $R_w$ is used instead of the distance cost $C_d$.
Such a modification would remove the cost advantage of wrong structures
due to errors and omissions in the extracted distances. The assumption
is that the distance data are still good enough to let
the Liga algorithm construct the correct structure in one of its many
trials. Finally, the cost definition for partial structures can be
enhanced with other structural criteria such as bond valence sums (BVS)
agreement~\cite{brese;acb91,norbe;jac09}. Bond valence sums are
not well determined for incomplete intermediate structures and thus
cannot fully match their expected values. However,
BVS only increase as atoms are added; therefore, a BVS of some ion that is
significantly larger than its expected value is a clear sign of
a poor-quality partial structure.
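A minimal sketch of such a combined cost (the function name, the weights and the quadratic BVS penalty are illustrative choices of ours, not taken from the implementation):

```python
def total_cost(C_d, C_c, bvs_excess=(), w_c=1.0, w_b=0.1):
    """Hypothetical combined cost for (partial) structures: distance cost
    plus weighted atom overlap plus a penalty on bond valence sums.  Only
    BVS values *exceeding* their expected sums are penalized, since an
    incomplete structure legitimately falls short of the expected BVS."""
    penalty = sum(max(0.0, x) ** 2 for x in bvs_excess)
    return C_d + w_c * C_c + w_b * penalty
```

With the Table~\ref{tab;SolvedStructures} values, even an unweighted sum $C_d + C_c$ already ranks the correct CaTiO$_3$ and TiO$_2$ rutile structures below the faulty Liga solutions, illustrating why the merged cost would steer the search away from them.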
\section{Conclusions}
We have demonstrated that the Liga algorithm for structure determination from
the PDF can be extended from its original scope of single-element non-periodic
molecules~\cite{juhas;n06,juhas;aca08} to multi-component crystalline
systems. The procedure assumes known lattice parameters and solves
the structure geometry by optimizing pair distances to match the
PDF-extracted values, while the chemical assignment is obtained from
minimization of the atomic radii overlap. The procedure was tested
on x-ray PDF data from 16 samples and gave
the correct structure solution in 14 cases. These are promising results,
considering that the technique is at a prototype stage and will be further
developed to improve its ease of use and rate of convergence. The procedure
can be easily extended with a final PDF refinement step. Such an implementation
could significantly reduce the overhead in PDF analysis of crystalline materials,
because its most difficult step, the design of a suitable structure model,
would become fully automated.
\ack{\textbf{Acknowledgements}}
We gratefully acknowledge Dr.~Emil Božin, Dr.~Ahmad Masadeh and
Dr.~Douglas Robinson for help with x-ray measurements at the
Advanced Photon Source at the Argonne National Laboratory (APS, ANL).
We thank Dr.~Christopher Farrow for helpful suggestions and
consultations and Dr.~Christos Malliakas for providing NaCl and ZnS
wurtzite samples. We appreciate the computing time and support
at the High Performance Computing Center of the Michigan State
University, where we performed all calculations. This work has
been supported by the National Science Foundation (NSF) Division of
Materials Research through grant DMR-0520547. Use of the APS is supported by the U.S. DOE, Office
of Science, Office of Basic Energy Sciences, under Contract
No. W-31-109-Eng-38. The 6ID-D beamline in the MUCAT sector at the APS is
supported by the U.S. DOE, Office of Science, Office of
Basic Energy Sciences, through the Ames Laboratory under
Contract No. W-7405-Eng-82.
\section{Introduction}
The present paper is devoted to the
study of a stochastic process followed by a particle
moving through a scattering thermal bath
while accelerated by an external field.
The field prevents the particle from acquiring the Maxwell distribution of the bath.
Our aim here is not only to establish the precise form of the stationary velocity distribution,
as was, e.g., the case in the analysis presented in \cite{GP86}, but also to answer
the physically relevant question of the dynamics of approach towards the long-time
asymptotic state. The evolution of the distribution in
position space will be thus also discussed.
\bigskip
We consider a one-dimensional dynamics described by
the Boltzmann kinetic equation
\begin{equation}
\label{I1}
\left( \frac{\partial}{\partial t}+v\frac{\partial}{\partial r} +
a\frac{\partial}{\partial v} \right)f(r,v;t)=v_{\text{\tiny int}}^{1-\gamma}
\rho \int \hbox{d} w |v-w|^{\gamma}[\, f(r,w;t)\,\phi(v)-f(r,v;t)\,\phi(w)\,]
\end{equation}
Here $f(r,v;t)$ is the probability density for finding
the propagating particle at point $r$ with velocity $v$ at time $t$. The
thermal bath particles are not coupled to the external field. Before binary encounters with the accelerated particle they are assumed to be in an equilibrium state with uniform temperature $T$ and density $\rho$
\begin{equation}
\label{I2}
\rho \,\phi(v) = \rho \sqrt{\frac{m}{2\pi k_{B}T}}\exp\left(-\frac{mv^{2}}{2k_{B}T} \right)=
\frac{\rho}{v_{\text{\tiny th}}\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{v}{v_{\text{\tiny th}}} \right)^{2}\right]
\end{equation}
Here $\phi(v)$ is the Maxwell distribution, and
\begin{equation}
\label{I3}
v_{\text{\tiny th}} = \sqrt{\frac{k_{B}T}{m}} \,.
\end{equation}
denotes the corresponding thermal velocity.
The differential operator on the left-hand side of (\ref{I1})
generates motion with a constant acceleration $a$.
The accelerated motion is permanently perturbed
by instantaneous exchanges of velocities with thermalized bath particles.
This is modeled by the Boltzmann collision term on the right hand side of equation (\ref{I1}),
which accounts for elastic encounters between equal mass particles. The collision frequency
depends therein on the absolute relative velocity
$|v-w|$ through a simple power law with exponent $\gamma$. Finally $v_{\text{\tiny int}}$ is some characteristic velocity of the underlying interparticle interaction.
\bigskip
In the case of hard rods ($\gamma = 1$) the factor $|v-w|$
is the main source of difficulties in the attemps to rigorously determine
the evolution of $f(r,v;t)$, since it prevents the effective use of
Laplace and Fourier transformations.
It was thus quite remarkable that a stationary velocity distribution could be
analytically determined in that case, leading in particular to an explicit
expression for the current at any value of the external acceleration \cite{GP86}. The full kinetic equation (\ref{I1}), however,
has been solved exactly
only at zero temperature, where $\phi (v)|_{T=0} = \delta(v)$ \cite{JP83}.
Also, when $\phi(v)$ is replaced by the distribution
$[\delta(v-v_{0})+\delta(v+v_{0})]/2$ with a discrete velocity spectrum $\pm v_{0}$, an explicit analytic solution
has been derived and analyzed in \cite{JP1986} and \cite{JPRS2006}.
The physically relevant conclusions from those works can be summarized as follows
\begin{itemize}
\item[(i)] the approach to the asymptotic stationary velocity distribution is exponentially fast
\item[(ii)] in the reference system moving with average velocity,
the hydrodynamic diffusion mode governs the spreading of the distribution in position space
\item[(iii)] the Green-Kubo autocorrelation formula for the diffusion coefficient applies in the non-equilibrium steady state
\end{itemize}
Our aim is to show that the general features (i)-(iii) persist when $\phi(v)$
is the Maxwell distribution with temperature $T>0$. However, in
the present study, we restrict the analysis
to cases $\gamma =0$ and $\gamma =2$,
which are much simpler than the hard-rod one. Indeed, it turns out that
the Fourier-Laplace transformation can then be effectively used to solve
the initial value problem for equation (\ref{I1}).
The simplifications occurring when $\gamma =0$ or $\gamma =2$
have been already exploited in other studies:
for recent applications to granular fluids, see e.g.
\cite{BCG2000}-\cite{MP2007} and references quoted therein.
\bigskip
In terms of dimensionless variables
\begin{equation}
\label{I4}
w = v/v_{\text{\tiny th}}, \;\;\;\;
x=r \, \rho \left(v_{\text{\tiny th}}/v_{\text{\tiny int}}\right)^{\gamma-1},
\;\;\;\; \tau = t \,\rho \, v_{\text{\tiny th}}
\left(v_{\text{\tiny th}}/v_{\text{\tiny int}}\right)^{\gamma-1}\, ,
\end{equation}
the kinetic equation (\ref{I1}) takes the form
\begin{equation}
\left( \frac{\partial}{\partial \tau}+w\frac{\partial}{\partial x} +
\epsilon\frac{\partial}{\partial w} \right)F(x,w;\tau) =
\int \hbox{d} u |w-u|^{\gamma} [F(x,u;\tau)\Phi(w)-F(x,w;\tau)\Phi(u)] \, ,
\label{I5}
\end{equation}
where $\Phi(w)$ is the dimensionless normalized gaussian
\begin{equation}
\label{I7}
\Phi(w)= \frac{1}{\sqrt{2\pi}}e^{-w^{2}/2} \, ,
\end{equation}
and $\epsilon$ is the dimensionless parameter
\begin{equation}
\label{I6}
\epsilon = \left(v_{\text{\tiny th}}/v_{\text{\tiny int}}\right)^{1-\gamma} \,
\frac{am\rho^{-1}}{ k_{B}T}\
\end{equation}
proportional to the ratio between the energy $am\rho^{-1}$ provided
to the particle on a mean free path,
and thermal energy $k_{B}T$. That parameter
can thus be looked upon as a measure of the strength of the field.
Integration of (\ref{I5}) over the position space yields
the kinetic equation for the velocity distribution
\[ G(w;\tau)=\int \hbox{d} x F(x,w;\tau) \; ,\]
which reads
\begin{equation}
\left( \frac{\partial}{\partial \tau} + \epsilon\frac{\partial}{\partial w} \right)G(w;\tau) =
\int \hbox{d} u |w-u|^{\gamma} [G(u;\tau)\Phi(w)-G(w;\tau)\Phi(u)] \, .
\label{I8}
\end{equation}
\bigskip
The paper is organized as follows. In Section II, we
consider the so-called Maxwell gas ($\gamma =0$). The explicit solution of the kinetic
equation (\ref{I5}) enables a thorough discussion of the approach to the stationary state, together with a study of the structure of the stationary velocity distribution. In Section III, we proceed to
a similar analysis for the very hard particle model ($\gamma =2$). Section IV contains conclusions. Some calculations have been relegated to Appendices.
\section{The Maxwell gas}
We consider here the simple version $\gamma =0$ of equation (\ref{I5}). One usually then
refers to the Maxwell gas dynamics, in which the collision frequency
does not depend on the speed of approach (see e.g. \cite{UFM63}).
This case can be viewed as a very crude approximation to the
hard-rod dynamics ($\gamma=1$), obtained by replacing the relative speed $|v-w|$ of the colliding particles
by the constant thermal velocity $v_{\text{\tiny th}}$, while $v_{\text{\tiny int}}$ is identified with $v_{\text{\tiny th}}$.
Here, kinetic equation (\ref{I5}) takes the form
\begin{eqnarray}
\left( \frac{\partial}{\partial \tau}+w\frac{\partial}{\partial x} +
\epsilon\frac{\partial}{\partial w} \right)F(x,w;\tau) & =
& \int \hbox{d} u [F(x,u;\tau)\Phi(w)-F(x,w;\tau)\Phi(u)] \nonumber \\
& = & M_{0}(x;\tau)\Phi(w) - F(x,w;\tau) \label{II1}
\end{eqnarray}
where $M_{0}(x;\tau )$ denotes the zeroth moment
\begin{equation}
\label{II2}
M_{0}(x;\tau) = \int \hbox{d} u F(x,u;\tau) \; .
\end{equation}
\bigskip
Equation (\ref{II1}) can be conveniently rewritten as an integral equation
\begin{multline}
F(x,w;\tau) = e^{-\tau}F(x-w\tau+\epsilon\tau^{2}/2, w-\epsilon\tau;0) \\
+ \int_{0}^{\tau}d\eta e^{-\eta}\Phi(w-\epsilon\eta)M_{0}(x-w\eta+\epsilon\eta^{2}/2; \tau-\eta) \, ,
\label{II3}
\end{multline}
with an explicit dependence on the initial condition $F(x,w;0)$.
Integration of equation (\ref{II3}) over $x$ yields
\begin{equation}
\label{II4}
G(w;\tau) = \int \hbox{d} x F(x,w;\tau) = e^{-\tau}G_{\text{\tiny in}}( w-\epsilon\tau) +
N_{0} \int_{0}^{\tau} \hbox{d} \eta e^{-\eta}\Phi(w-\epsilon\eta) \, ,
\end{equation}
where $G_{\text{\tiny in}}(w) = G(w;0)$ is the initial condition, and $N_{0}=\int \hbox{d} w \int \hbox{d} x F(x,w;\tau) = \int \hbox{d} w G(w;\tau)$ is the conserved normalization factor.
\subsection{Stationary solution and relaxation of the velocity distribution}
Putting $N_{0}=1$ in formula (\ref{II4}) yields the evolution law for the normalized velocity distribution
\begin{equation}
\label{IIG}
G(w;\tau) = \int \hbox{d} x F(x,w;\tau) = e^{-\tau}G_{\text{\tiny in}}( w-\epsilon\tau) +
\int_{0}^{\tau} \hbox{d} \eta e^{-\eta}\Phi(w-\epsilon\eta) \, .
\end{equation}
The first term on the right hand side of (\ref{IIG}) describes the
decaying memory of the initial distribution: $G_{\text{\tiny in}}(w)$
propagates in the direction of the field with constant velocity $\epsilon$,
while its amplitude is exponentially damped. Clearly, for times $\tau \gg 1$
that term can be neglected.
\bigskip
The second term in formula (\ref{IIG}) describes the approach
to the asymptotic stationary distribution
\begin{eqnarray}
\label{II5}
G_{\text{\tiny st}}(w) = G(w;\infty) & = & \int_{0}^{\infty} \hbox{d} \eta
\; e^{-\eta}\;\Phi(w-\epsilon\eta) \nonumber \\
&=& \frac{1}{2\epsilon}\exp{\left(\frac{1}{2\epsilon^{2}}-\frac{w}{\epsilon} \right) }
\left( 1+\text{Erf}\left(\frac{w\epsilon-1}
{\epsilon\sqrt{2}}\right)\right)\, ,
\end{eqnarray}
where
\[ \text{Erf}(\xi)=\frac{2}{\sqrt{\pi}}\int_0^\xi \hbox{d} u \; \exp(-u^2) \]
is the familiar error function.
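As a numerical sanity check, the closed form (\ref{II5}) can be compared against direct quadrature of the defining $\eta$-integral (a stdlib-only Python sketch; the function names are ours):

```python
import math

def phi(u):
    """Dimensionless Maxwell distribution Phi(w)."""
    return math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)

def g_st_closed(w, eps):
    """Closed form (II5) of the stationary velocity distribution."""
    return (1.0 / (2.0 * eps)) * math.exp(1.0 / (2.0 * eps ** 2) - w / eps) \
        * (1.0 + math.erf((w * eps - 1.0) / (eps * math.sqrt(2.0))))

def g_st_integral(w, eps, eta_max=60.0, n=60000):
    """Trapezoidal quadrature of int_0^inf d(eta) e^{-eta} Phi(w - eps*eta)."""
    h = eta_max / n
    s = 0.5 * (phi(w) + math.exp(-eta_max) * phi(w - eps * eta_max))
    for i in range(1, n):
        eta = i * h
        s += math.exp(-eta) * phi(w - eps * eta)
    return s * h
```

Both forms agree pointwise, and the closed form integrates to unity, as a normalized stationary distribution must.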
It is interesting to compare the decay-law of
$G_{\text{\tiny st}}(w)$ at large velocities, to that
corresponding to the case of hard-rod collisions. Using expression (\ref{II5})
we find the asymptotic formula
\begin{equation}
\label{II16}
G_{\text{\tiny st}}(w) \sim \frac{1}{\epsilon}
\exp{\left(\frac{1}{2\epsilon^{2}}-\frac{w}{\epsilon} \right) }
\end{equation}
when $w \to +\infty$. In contradistinction to the hard-rod case, which is governed
by an $\epsilon$-dependent gaussian law (see \cite{GP86}),
we find here a purely exponential decay. The thermal bath is unable
to impose its own gaussian decay via collisions because the collision frequency is too low.
The replacement of the relative speed in the Boltzmann
collision operator by thermal velocity implies thus qualitative changes in
the shape of the stationary velocity distribution. The plot of
$G_{\text{\tiny st}}(w)$ for different values of $\epsilon$ is shown in Fig.~\ref{AP09a}.
\begin{figure}
\includegraphics[width=0.9\textwidth]{AP09a.eps}
\caption{\label{AP09a} Stationary velocity distribution
$G_{\text{\tiny st}}(w)$ for three values of $\epsilon$.}
\end{figure}
\bigskip
Basic properties (i)-(iii) discussed in the
Introduction turn out to be valid. Indeed, the inequality
\begin{equation}
\label{II6}
G_{\text{\tiny st}}(w) - \int_{0}^{\tau}\hbox{d} \eta
\; e^{-\eta}\; \Phi(w-\epsilon\eta)
=\int_{\tau}^{\infty}\hbox{d} \eta \; e^{-\eta}\; \Phi(w-\epsilon\eta)
< \frac{e^{-\tau}}{\epsilon}
\end{equation}
displays a uniform, exponentially fast approach towards the stationary state.
In particular, using formula (\ref{IIG}), we find that the average velocity
$<w>(\tau)$ approaches the asymptotic value
\begin{equation}
\label{II7}
<w>_{\text{\tiny st}} = \epsilon
\end{equation}
according to
\begin{equation}
\label{II8}
<w>(\tau) = \int \hbox{d} w \, w\, G(w;\tau) = \epsilon +
e^{-\tau}[ <w>_{\text{\tiny in}} - \epsilon ]
\end{equation}
We encounter here an exceptional situation
where the linear response is exact for any value of
the external field.
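The relaxation law (\ref{II8}) can be verified numerically from representation (\ref{IIG}): for a Gaussian initial condition centred at $w_{\text{\tiny in}}$, the first moments of the two terms are $e^{-\tau}(w_{\text{\tiny in}}+\epsilon\tau)$ and $\int_0^\tau \hbox{d}\eta\, e^{-\eta}\,\epsilon\eta$, using $\int \hbox{d} w\, w\,\Phi(w-c)=c$. A short sketch (the function names are ours):

```python
import math

def mean_velocity(tau, eps, w_in, n=20000):
    """First moment of G(w;tau) built from (IIG) for a Gaussian initial
    condition centred at w_in: memory term plus collision term."""
    # memory term: the initial distribution drifts to w_in + eps*tau
    mem = math.exp(-tau) * (w_in + eps * tau)
    # collision term: trapezoidal quadrature over eta in [0, tau]
    h = tau / n
    s = 0.5 * (0.0 + math.exp(-tau) * eps * tau)
    for i in range(1, n):
        eta = i * h
        s += math.exp(-eta) * eps * eta
    return mem + s * h

def mean_velocity_exact(tau, eps, w_in):
    """Relaxation law (II8)."""
    return eps + math.exp(-tau) * (w_in - eps)
```

The two expressions coincide for any choice of $\tau$, $\epsilon$ and $w_{\text{\tiny in}}$, confirming the exponential relaxation of the current.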
\bigskip
Equation (\ref{II4}) with $N_{0}$ put equal to zero can be used for the evaluation of the time-displaced velocity autocorrelation
function
\begin{equation}
\label{II9}
\Gamma(\tau) = <[ w(\tau) - <w>_{\text{\tiny st}} ]
[w(0) - <w>_{\text{\tiny st}} ]>_{\text{\tiny st}} \, ,
\end{equation}
where $<...>_{\text{\tiny st}}$ denotes the average over
stationary state (\ref{II5}). The calculation presented in Appendix~\ref{B}
provides the formula
\begin{equation}
\label{II10}
\Gamma(\tau) = e^{-\tau} [ 1 + \epsilon^{2} ] \, ,
\end{equation}
which yields a remarkably simple field dependence of the diffusion coefficient
\begin{equation}
\label{II11}
D(\epsilon) = \int_{0}^{\infty} \hbox{d} \tau \; \Gamma(\tau) = 1 + \epsilon^{2} \, .
\end{equation}
\subsection{Relaxation of density: appearance of a hydrodynamic mode}
Let us turn now to the analysis of the evolution of the normalized density $n(x;\tau)=M_{0}(x;\tau)$
in position space. It turns out that one can solve
the complete integral equation (\ref{II3}) by applying
to both sides Fourier and Laplace transformations. If we set
\begin{equation}
\label{II12bis}
\tilde{F}(k,w;z) = \int_0^{\infty} \hbox{d} \tau\, e^{-z\tau}
\int \hbox{d} x \, e^{-ikx}\, F(x,w;\tau) \, ,
\end{equation}
we find
\begin{multline}
\label{II12}
\tilde{F}(k,w;z) = \int_{0}^{\infty}\hbox{d} \tau \; {\rm exp}
\left[ -ik\left( w\tau - \epsilon\frac{\tau^{2}}{2} \right) -
(z+1)\tau \right] \\
\left\lbrace \hat{F}_{\text{\tiny in}}(k,w-\epsilon\tau)
+ \tilde{n}(k;z) \Phi(w-\epsilon\tau) \right\rbrace \; ,
\end{multline}
where $\tilde{n}(k;z)$ is the Fourier-Laplace transform of $n(x;\tau)$, and
\[ \hat{F}_{\text{\tiny in}}(k,w)= \int \hbox{d} x \, e^{-ikx}\, F(x,w;0) \]
denotes the spatial Fourier transform of the initial condition.
Equation (\ref{II12}) when integrated over the velocity space yields the formula
\begin{equation}
\label{II13}
\tilde{n}(k;z) =\frac{1}{\zeta(k;z)}\int \hbox{d} w \int_{0}^{\infty}\hbox{d} \tau \; {\rm exp}
\left[ -ik\left( w\tau - \epsilon\frac{\tau^{2}}{2} \right) - (z+1)\tau \right]
\hat{F}_{\text{\tiny in}}(k,w-\epsilon\tau)
\end{equation}
with
\begin{equation}
\label{II14}
\zeta(k;z) = 1 - \int_{0}^{\infty}\hbox{d} \tau \; {\rm exp}
\left[ - (z+1)\tau - (ik\epsilon + k^{2})\frac{\tau^{2}}{2} \right] \, .
\end{equation}
The insertion of (\ref{II13}) into (\ref{II12}) provides
a complete solution for $\tilde{F}(k,w;z)$ corresponding to a given initial condition.
\bigskip
Formula (\ref{II13}) shows that the time-dependence of the
spatial distribution is governed by the roots of the function
$\zeta(k;z)$. In order to find the long-time hydrodynamic mode
$z_{\rm{hy}}(k)$, we have to look for the root of $\zeta(k;z)$
which approaches $0$ when $k\to 0$. If we assume the asymptotic form
\[ z_{\rm{hy}}(k) = c_{1}k + c_{2}k^{2} +o(k^2) \;\;\; \text{when}
\;\;\; k\to 0\; , \]
we find a unique self-consistent solution to equation
$\zeta(k;z)=0$ of the form
\begin{equation}
\label{II15}
z_{\rm{hy}}(k)= -i\epsilon k - (1+\epsilon^{2})k^{2} + o(k^2) =
-i\epsilon k - D(\epsilon) k^{2} + o(k^2) \, .
\end{equation}
It has the structure of a propagating diffusive mode.
It is important to note that
the diffusion coefficient $D(\epsilon)$ equals $(1+\epsilon^{2})$
in accordance with the Green-Kubo result (\ref{II11}).
We thus see that, in the reference system moving with
constant velocity $\epsilon$,
a classical diffusion process takes place in position space.
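The root $z_{\rm{hy}}(k)$ can also be located numerically from definition (\ref{II14}) and compared with expansion (\ref{II15}); a sketch using complex trapezoidal quadrature and a secant iteration (the function names and numerical parameters are ours):

```python
import cmath

def zeta(z, k, eps, tau_max=40.0, n=20000):
    """zeta(k;z) of (II14), evaluated by complex trapezoidal quadrature."""
    xi = 1j * k * eps + k * k
    h = tau_max / n
    def f(tau):
        return cmath.exp(-(z + 1.0) * tau - xi * tau * tau / 2.0)
    s = 0.5 * (f(0.0) + f(tau_max))
    for i in range(1, n):
        s += f(i * h)
    return 1.0 - s * h

def hydro_root(k, eps, z0=0.0, z1=-0.01 - 0.02j, tol=1e-12):
    """Secant iteration for the root of zeta(k;z) that vanishes as k -> 0."""
    f0, f1 = zeta(z0, k, eps), zeta(z1, k, eps)
    for _ in range(60):
        z2 = z1 - f1 * (z1 - z0) / (f1 - f0)
        if abs(z2 - z1) < tol:
            return z2
        z0, f0, z1, f1 = z1, f1, z2, zeta(z2, k, eps)
    return z1
```

For small $k$ the numerical root reproduces $-i\epsilon k - (1+\epsilon^2)k^2$ up to the expected $o(k^2)$ corrections, with a negative real part signalling a damped, propagating mode.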
\bigskip
It has been argued in the literature that, in general,
$z_{\rm{hy}}(k)$ is not an analytic function of $k$ at $k=0$
(see \textsl{e.g.} Ref.~\cite{ED75}). Here, that question can be
precisely investigated as follows. According to the integral
expression (\ref{II14}) of $\zeta(k;z)$, the hydrodynamic mode is
a function of $\xi=ik\epsilon + k^2$. By combining differentiations
with respect to $\xi$ under the integral sign with
integration by parts, we find that $z_{\rm{hy}}(\xi)$ satisfies the
second order differential equation
\begin{equation}
\label{IIhyddiff}
\xi \frac{\hbox{d}^2 (z_{\rm{hy}}^2)}{\hbox{d} \xi^2} = 1+
\frac{\hbox{d} z_{\rm{hy}}}{\hbox{d} \xi} \; .
\end{equation}
Then, since $z_{\rm{hy}}(0)=0$, we find that $z_{\rm{hy}}(\xi)$ can be
formally represented by a power series in $\xi$,
\begin{equation}
\label{IIhydTaylor}
z_{\rm{hy}}(\xi) = \sum_{n=1}^{\infty} c_n \xi^n \; ,
\end{equation}
with $c_1=-1$, $c_2=1$ and
\[ |c_{n+1}| \geq 2^{n-1} \; n! \;\;\; \text{for} \;\;\; n \geq 2 \; .\]
Thus, the radius of convergence of the Taylor series (\ref{IIhydTaylor})
is zero, so $\xi=0$ is a singular point of the function $z_{\rm{hy}}(\xi)$,
and likewise $k=0$ is a singular point of $z_{\rm{hy}}(k)$. The nature
of that singularity can be found by rewriting the root equation defining
$z_{\rm{hy}}(\xi)$ as the implicit equation
\begin{equation}
\label{IIhydimp}
1-\text{Erf}\left(\frac{z_{\rm{hy}}+1}{\sqrt{2\xi}}\right) =
\sqrt{\frac{2\xi}{\pi}} \;
\exp\left(-\frac{(z_{\rm{hy}}+1)^2}{2\xi}\right) \; .
\end{equation}
The introduction of the function $\sqrt{\xi}$ requires defining cut-lines
ending at the points $k=0$ and $k=-i\epsilon$, which are the two roots of
the equation $\xi(k)=0$. Since the integral in
expression (\ref{II14}) diverges for imaginary $k$ of the form
$k=iq$ with $q > 0$ or $q < -\epsilon$, it is natural to define
such cut-lines as $[i0, i\infty[$ and $]-i\infty, -i\epsilon]$. The
corresponding choice of determination for $\sqrt{\xi}$ is defined by
$\sqrt{\xi(k^+)}= i\sqrt{q\epsilon+q^2}$ for $k^+=0^+ + iq$ with $q>0$,
where $\sqrt{q\epsilon+q^2}$ is the usual real positive square root of the
real positive number $(q\epsilon+q^2)$. Notice that, when
complex variable $k$ makes a complete tour around point $k=0$ starting
from $k^+=0^+ + iq$ on one side of the cut-line
and ending at $k^-=0^- + iq$ on
the other side (with vanishing difference $k^+-k^-$),
$\sqrt{\xi(k)}$ changes sign from $\sqrt{\xi^+}$ to
$\sqrt{\xi^-}=-\sqrt{\xi^+}$ with obvious notations. As shown by adding
both implicit equations (\ref{IIhydimp}) for
$k^+$ and $k^-$ respectively, $z_{\rm{hy}}^+$ does not reduce to
$z_{\rm{hy}}^-$. The difference $(z_{\rm{hy}}^+-z_{\rm{hy}}^-)$ is of order
$\exp(-1/(2|k|\epsilon))$, so $k=0$ is an essential singularity.
\section{Very hard particles}
Another interesting case is that of the so-called very hard particle model,
where the collision frequency is proportional to the
kinetic energy of the relative motion of the colliding pair.
The corresponding
exponent in the collision term of the Boltzmann equation (\ref{I1})
is now $\gamma=2$. This allows us
to simplify the resolution of the kinetic equation. Owing to this fact,
the very hard particle model, similarly to the Maxwell gas, has been studied in
numerous works (see e.g. \cite{MHE1984}-\cite{CDT2005}, and references given therein).
\bigskip
Using dimensionless variables (\ref{I4}), we thus write
the kinetic equation as
\begin{equation}
\left( \frac{\partial}{\partial \tau}+w\frac{\partial}{\partial x} + \epsilon\frac{\partial}{\partial w} \right)F(x,w;\tau) =
\int \hbox{d} u |w-u|^{2} [F(x,u;\tau)\Phi(w)-F(x,w;\tau)\Phi(u)]
\label{III1}
\end{equation}
\[ = [ w^2 M_{0}(x;\tau)-2w M_{1}(x;\tau)+M_{2}(x;\tau)]\Phi(w)-(w^2+1)F(x,w;\tau)\]
where the moments $M_{j}(x;\tau)$ ($j=0,1,2,\ldots$) are defined by
\begin{equation}
\label{IIIM}
M_{j}(x;\tau) = \int \hbox{d} w \, w^j F(x,w;\tau) \; .
\end{equation}
The evolution equation of the velocity distribution $G(w;\tau)$ becomes
\begin{equation}
\label{III2}
\left( \frac{\partial}{\partial \tau}+\epsilon\frac{\partial}{\partial w} \right)G(w;\tau) =
[ N_{2}(\tau)-2wN_{1}(\tau)+w^{2}N_{0}]\Phi(w)-(w^{2}+1)G(w;\tau) \, ,
\end{equation}
with the integrated moments
\begin{equation}
\label{IIIN}
N_{j}(\tau)=\int \hbox{d} x\,M_{j}(x;\tau), \;\; j=0,1,2 \; .
\end{equation}
Notice that the integrated zeroth moment does not depend on time since the evolution
conserves the initial normalization condition
\[N_{0}(\tau)=\int \hbox{d} w \int \hbox{d} x F(x,w;\tau) = N_{0}\; .\]
Hence, when $F(x,w;\tau)$ is a normalized probability density $N_{0}(\tau)=N_{0}=1$.
\bigskip
The simplification related to the
choice $\gamma=2$, and more generally to any even integer $\gamma$,
is that the collision term in the
general kinetic equation (\ref{I1}) can then be expressed in
terms of a finite number of moments of the distribution function.
The resolution of that equation becomes straightforward within
standard methods (see Appendix~\ref{A}).
\subsection{Laplace transform of the velocity distribution}
The expression for the Laplace transform of the normalized velocity distribution follows directly from the general formula
(\ref{C3}) derived in Appendix~\ref{A} by putting $k=0$, and choosing
$\tilde{M}_{0}(0,z)= \tilde{N}_{0}(z) = 1/z$.
With the definition
\begin{equation}
\label{III7}
S(w;z) =(z+1) w + \frac{w^3}{3} \,
\end{equation}
for the function $S(k,w;z)$ evaluated at $k=0$ (see definition (\ref{S})),
we find
\begin{multline}
\label{III6}
\epsilon \tilde{G}(w;z) = \frac{\epsilon}{z}\Phi(w) +
\int_{-\infty}^{w}\hbox{d} u \exp \{ [S(u;z)-S(w;z)]/\epsilon \}\; \{
G_{\text{\tiny in}}(u) \\
+ [\tilde{N}_{2}(z)-2u\tilde{N}_{1}(z)+ \frac{(\epsilon u -z-1)}{z}]
\Phi(u) \} \; .
\end{multline}
\bigskip
The two functions $\tilde{N}_{1}(z)$ and $ \tilde{N}_{2}(z)$
satisfy the system of equations
\begin{eqnarray}
\label{III8}
0 & = & A^{\text{\tiny (in)}}_{00}(0;z) + [ \tilde{N}_{2}(z) - (z+1)/z]A_{00}(0;z)+[\epsilon/z -2\tilde{N}_{1}(z)]A_{01}(0;z) \nonumber \\
\epsilon \tilde{N}_{1}(z) & = & A^{\text{\tiny (in)}}_{10}(0;z) + [ \tilde{N}_{2}(z) - (z+1)/z]A_{10}(0;z)+[\epsilon/z-2\tilde{N}_{1}(z)]A_{11}(0;z)
\end{eqnarray}
which is identical to (\ref{C7}) taken at $k=0$, while
\begin{equation}
\label{IIIA}
A_{jl}(0;z)= \int \hbox{d} w \int_{-\infty}^{w}\hbox{d} u\; \exp \{ [S(u;z)-S(w;z)]/\epsilon \}\, w^{j}\, u^{l} \Phi(u) \; .
\end{equation}
An analogous formula holds for $A^{\text{\tiny (in)}}_{jl}(0;z)$
with the Maxwell distribution $\Phi(u)$ replaced by the initial
condition $G(u;0)=G_{\text{\tiny in}}(u)$.
Once system (\ref{III8}) has been solved, the insertion of the resulting expressions for $\tilde{N}_{1}(z)$ and $\tilde{N}_{2}(z)$ into
formula (\ref{III6}) yields eventually an explicit solution of the kinetic equation for the velocity distribution
\begin{multline}
\label{III9}
\tilde{G}(w;z) = \frac{\Phi(w)}{z} + \frac{1}{\epsilon}\int_{-\infty}^w \hbox{d} u \, \exp \left\{ [S(u;z)-S(w;z)]/{\epsilon}\right\} \\
\times\left\{ G_{\text{\tiny in}}(u) + [A_{\epsilon}(z)\, u - B_{\epsilon}(z) ]\Phi(u) \right\} \; .
\end{multline}
With the shorthand notations $A_{jl}(z) =
A_{jl}(0;z)$ and $A^{\text{\tiny (in)}}_{jl}(z)=A^{\text{\tiny (in)}}_{jl}(0;z)$, the formulae for coefficients
$A_{\epsilon}(z)$ and $ B_{\epsilon}(z)$ read
\begin{equation}
\label{Aepsilon}
A_{\epsilon}(z)= \frac{1}{\Delta(z)} \left[\frac{\epsilon^2}{z} A_{00}(z)-2A_{00}(z)A_{10}^{\text{\tiny (in)}}(z)
+2A_{10}(z)A_{00}^{\text{\tiny (in)}}(z)\right]
\end{equation}
and
\begin{equation}
\label{Bepsilon}
B_{\epsilon}(z) = \frac{1}{\Delta(z)}
\left[\frac{\epsilon^2}{z} A_{01}(z)+ \epsilon A_{00}^{\text{\tiny (in)}}(z)
+ 2A_{11}(z)A_{00}^{\text{\tiny (in)}}(z)
-2A_{01}(z) A_{10}^{\text{\tiny (in)}}(z)\right] \; ,
\end{equation}
where $\Delta(z)$, in accordance with the definition given in (\ref{C9}),
is
\begin{equation}
\label{III22}
\Delta (z)= \epsilon A_{00}(z) + 2\,\left( A_{00}(z)A_{11}(z)-A_{10}(z)A_{01}(z)\right) \, .
\end{equation}
\subsection{Stationary solution}
At large times, $\tau \to \infty$, we expect the velocity distribution to
reach some stationary state $G_{\text{\tiny st}}(w) = G(w;\infty)$. This
can be easily checked by investigating the behaviour of $\tilde{G}(w;z)$ in the neighbourhood of $z=0$ at fixed velocity $w$.
\bigskip
All integrals over $u$ in formula (\ref{III9}) do converge for any complex
value of $z$. Moreover, all their derivatives with respect to $z$ are also well
defined, as shown by differentiation under the integral sign. Thus, such integrals are entire
functions of $z$. The sole quantities in expression (\ref{III9})
which become singular at $z=0$
are the coefficients $A_{\epsilon}(z)$ and $B_{\epsilon}(z)$, and obviously
the term $\Phi(w)/z$.
In fact, both $A_{\epsilon}(z)$ and $B_{\epsilon}(z)$ exhibit simple poles at
$z=0$. Hence, the stationary solution of the kinetic
equation (\ref{III2}) does emerge when $\tau \to \infty$, and it
is given by the residue of the simple pole of $\tilde{G}(w;z)$
at $z=0$, namely
\begin{equation}
\label{III10}
G_{\text{\tiny st}}(w) = \Phi(w) + \frac{\epsilon }{\Delta(0)}
\int_{-\infty}^w \hbox{d} u \exp \left[\frac{S(u;0)-S(w;0)}{\epsilon}\right][ A_{00}(0)\, u\, - A_{01}(0) ]\Phi(u) \, .
\end{equation}
In that expression, $A_{ij}(0)$ and $\Delta(0)$ are the non-zero values at $z=0$
of the analytic functions $A_{ij}(z)=A_{ij}(0;z)$ and $\Delta(z)=\Delta(0;z)$.
Formula (\ref{III10}) does not depend on the initial condition
$G_{\text{\tiny in}}$: all initial conditions evolve towards
the same unique stationary distribution (\ref{III10}). It can be
checked that directly solving the static version of the
kinetic equation (\ref{III2}), obtained by setting
$\partial G/\partial \tau = 0$, indeed reproduces formula (\ref{III10}).
\bigskip
Since the external field accelerates the particle, the stationary solution
is asymmetric with respect to the reflection $w \rightarrow -w$, and positive
velocities are favoured. This leads to a finite current
\begin{equation}
\langle w \rangle_{\text{\tiny st}} =
\int \hbox{d} w\, w \, G_{\text{\tiny st}}(w) =
\frac{\epsilon }{\Delta(0)} [A_{00}(0)A_{11}(0)-A_{01}(0)A_{10}(0) ] \; .
\label{III11}
\end{equation}
The asymptotic expansion at large velocities
of $G_{\text{\tiny st}}(w)$, inferred
from formula (\ref{III10}), reads
\begin{equation}
G_{\text{\tiny st}}(w) = \frac{1}{\sqrt{2\pi}}e^{-w^{2}/2}
\left[ 1 + \frac{\epsilon^{2} A_{00}(0)}{\Delta(0)w} + O(\frac{1}{w^2})
\right]
\,\,\,\,\, \text{when} \,\,\,\,\, |w| \to \infty \, .
\label{III12}
\end{equation}
Therefore, the external field does not influence the leading large-velocity
behaviour of $G_{\text{\tiny st}}(w)$, which is identical to
that of the thermal bath. Its effects only
arise in the first correction to the leading behaviour which is
smaller by a factor of order $1/w$.
The stationary distribution is drawn in Fig.~\ref{AP09b} for
several increasing field strengths, $\epsilon = 1$, $\epsilon = 10$ and
$\epsilon = 100$.
\begin{figure}
\includegraphics[width=0.9\textwidth]{AP09b.eps}
\caption{\label{AP09b} Stationary velocity distribution
$G_{\text{\tiny st}}(w)$ for three values of $\epsilon$.}
\end{figure}
\bigskip
Let us study now the limit $\epsilon \to 0$ which corresponds
to a weak external field. The main contributions to the
integrals over $u$ in (\ref{III9}) arise from the region close to $w$. That observation
motivates the use of a new integration variable $y=(w-u)/\epsilon$.
Taylor expanding the resulting integrands in powers of $\epsilon$ then generates
power series in $\epsilon$, whose first terms read
\begin{equation}
\label{III13}
\int_{-\infty}^w \hbox{d} u \, u\, \Phi(u)
\exp \left[\frac{S(u;0)-S(w;0)}{\epsilon}\right] =
\epsilon \, \frac{w \Phi(w)}{1+w^2} + O(\epsilon^2)
\end{equation}
and
\begin{equation}
\label{III14}
\int_{-\infty}^w \hbox{d} u \, \Phi(u)
\exp \left[\frac{S(u;0)-S(w;0)}{\epsilon}\right] =
\epsilon \, \frac{\Phi(w)}{1+w^2} + O(\epsilon^2) \, .
\end{equation}
Consequently, the functions
$A_{ij}(0)$ and $\Delta(0)$ can also be represented by power series
in $\epsilon$, since they are obtained by taking appropriate velocity moments of the expansions (\ref{III13}) and (\ref{III14}).
The corresponding small-$\epsilon$
expansion of the stationary velocity distribution reads
\begin{equation}
\label{III15}
G_{\text{\tiny st}}(w) = \Phi(w) + \epsilon \left[ \frac{b\, w }{1+w^2} \right]\Phi(w)
+ O(\epsilon^2) \, ,
\end{equation}
where
\[ b = \left[ 1+2 \int \hbox{d} w\, \frac{w^2}{1+w^2}\Phi(w) \right]^{-1} \; . \]
Of course, at $\epsilon = 0$, $G_{\text{\tiny st}}(w)$ reduces to
the Maxwell distribution. The first correction is
of order $\epsilon$, as expected from linear response theory.
The corresponding current (\ref{III11}) reduces to
\begin{equation}
\langle w \rangle_{\text{\tiny st}} = \sigma \epsilon
+ O(\epsilon^2) \, ,
\label{III16}
\end{equation}
where the conductivity $\sigma$ is given by
\begin{equation}
\sigma = \frac{1}{2}( 1 - b ) \; .
\label{III17}
\end{equation}
It will be shown in the sequel that $\sigma = D_0= D(\epsilon = 0)$,
where $D(\epsilon) $ is the diffusion coefficient given by the Green-Kubo formula.
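As a purely numerical aside (not part of the original derivation), the values of $b$ and $\sigma$ can be evaluated by quadrature, assuming SciPy is available; the result is consistent with the value $\sigma \simeq 0.2039$ used later in Fig.~\ref{AP09c}:

```python
import math
from scipy.integrate import quad

# Maxwellian Phi(w) = exp(-w^2/2) / sqrt(2 pi)
def phi(w):
    return math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)

# b = [1 + 2 Int dw w^2/(1+w^2) Phi(w)]^(-1), then sigma = (1 - b)/2
moment = quad(lambda w: w * w / (1.0 + w * w) * phi(w), -math.inf, math.inf)[0]
b = 1.0 / (1.0 + 2.0 * moment)
sigma = 0.5 * (1.0 - b)

print(round(sigma, 4))  # approximately 0.2039
```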
\bigskip
Consider now the strong field limit $\epsilon \to \infty$. The
corresponding behaviours of $A_{ij}(0)$ and $\Delta(0)$ are derived from the
integral representations obtained in Appendix~\ref{C}. We then
find at fixed $w$
\begin{equation}
\label{III18}
G_{\text{\tiny st}}(w) = \frac{\epsilon^{-1/3}}{\int_0^\infty \hbox{d} y \exp (-y^3/3)}
\int_{-\infty}^w \hbox{d} u \, \Phi(u)
\exp \left[\frac{S(u;0)-S(w;0)}{\epsilon}\right] + O(\epsilon^{-2/3}) \; .
\end{equation}
For $w$ of order 1, the dominant term in the large-$\epsilon$ expansion of the integral in (\ref{III18}) reduces to
\[ \int_{-\infty}^{w}\hbox{d} u \, \Phi(u)=\frac{1}{2}\left(1+\text{Erf}\left(\frac{w}{\sqrt{2}}\right)\right) \]
and thus varies from $0$ to $1$ around the origin $w=0$.
For larger values of the velocity, $w \sim \epsilon^{1/3}$,
that integral behaves as
$\exp(-w^3/(3\epsilon))$. The next term
in the expansion (\ref{III18}) remains of order $\epsilon^{-2/3}$.
Thus, when $\epsilon \to \infty$ at fixed $\epsilon^{-1/3} w$, the stationary solution is given by
\begin{equation}
\label{III19}
G_{\text{\tiny st}}(w) \sim \theta(w)\, \frac{\epsilon^{-1/3}}{\int_0^\infty \hbox{d} y \exp (-y^3/3)}
\, \exp \left[-(\epsilon^{-1/3}w)^3/3 \right] \, ,
\end{equation}
where $\theta$ is the Heaviside step function. The whole distribution is shifted toward
high velocities $w \sim \epsilon^{1/3}$, so that the
resulting current (\ref{III11}) is of the same order of magnitude,
\textsl{i.e.}
\begin{equation}
\langle w \rangle_{\text{\tiny st}} \sim \frac{3^{1/3} \Gamma(2/3)}{\Gamma(1/3)} \; \epsilon^{1/3} \,\,\,\,
\text{when} \,\,\,\, \epsilon \to \infty \, ,
\label{III20}
\end{equation}
where $\Gamma$ is the Euler Gamma function.
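The numerical constant in (\ref{III20}) can be checked with a few lines of Python (an illustrative aside, assuming SciPy), together with a quadrature cross-check against the first moment of the limiting distribution (\ref{III19}):

```python
import math
from scipy.integrate import quad

# closed-form prefactor 3^(1/3) Gamma(2/3) / Gamma(1/3) of eq. (III20)
prefactor = 3.0 ** (1.0 / 3.0) * math.gamma(2.0 / 3.0) / math.gamma(1.0 / 3.0)

# cross-check: first moment of the limiting shape exp(-y^3/3) on y > 0,
# i.e. <w>_st / eps^(1/3) computed directly from distribution (III19)
num = quad(lambda y: y * math.exp(-y**3 / 3.0), 0.0, math.inf)[0]
den = quad(lambda y: math.exp(-y**3 / 3.0), 0.0, math.inf)[0]
ratio = num / den

print(round(prefactor, 4))  # approximately 0.729
```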
That behaviour can be recovered within the following simple interpretation.
At strong fields, the average velocity of the particle becomes large compared
to the thermal velocity of scatterers. Since
at each collision the particle exchanges its velocity with a thermalized
scatterer, the variation of particle velocity between two successive collisions is
of the order of $\langle v \rangle_{\text{\tiny st}}$.
On the other hand, in the stationary state the same velocity variation is due to the acceleration $a$
coming from the external field, so it is of the order
$a \tau_{\text{\tiny coll}}$ where $\tau_{\text{\tiny coll}}$
is the mean time between two successive collisions.
This time can be reasonably
estimated as the inverse collision frequency
for a relative velocity $|v-c|$ of order $\langle v \rangle_{\text{\tiny st}}$.
The consistency of those estimations requires the relation
\begin{equation}
\langle v \rangle_{\text{\tiny st}} \sim a \; \frac{v_{\text{\tiny int}}}
{\rho \, \langle v \rangle_{\text{\tiny st}}^2} \,
\label{III21}
\end{equation}
which indeed implies the $\epsilon^{1/3}$-behaviour
(\ref{III20}) of the average velocity in dimensionless units.
Contrary to the Maxwell case where the current
remains linear in the applied field, here the current deviates
from its linear-response form when the field increases: it grows
more slowly because collisions are more efficient in dissipating
the energy input of the field. In Fig.~\ref{AP09c}, we plot
$\langle w \rangle_{\text{\tiny st}}$ as a function of $\epsilon$.
\begin{figure}
\includegraphics[width=0.9\textwidth]{AP09c.eps}
\caption{\label{AP09c} Average current
$\langle w \rangle_{\text{st}}$ as a function of $\epsilon$.
The dashed line represents the linear Kubo term in the small-$\epsilon$
expansion (\ref{III16}) with
conductivity $\sigma \simeq 0.2039$. The dotted line describes
asymptotic formula (\ref{III20}) with
$3^{1/3} \Gamma(2/3)/\Gamma(1/3) \simeq 0.7290$ valid in the limit
$\epsilon \to \infty$.}
\end{figure}
\subsection{Relaxation towards the stationary solution}
Let us study now the relaxation of the velocity distribution
$G(w;\tau)$ towards the stationary solution $G_{\text{\tiny st}}(w)$.
The decay of $[ G(w;\tau) - G_{\text{\tiny st}}(w) ]$ when $\tau \to \infty$
is controlled by the singularities of $\tilde{G}(w;z)$
in the complex plane, different from the pole at $z=0$.
As already mentioned, all integrals in expression (\ref{III9})
are entire functions of $z$, so the singularities at $z \neq 0$
arise only in the
coefficients $A_{\epsilon}(z)$ and $B_{\epsilon}(z)$. Thus, the first important
conclusion is that the relaxation is uniform for the whole velocity spectrum.
\bigskip
According to expressions (\ref{Aepsilon}) and (\ref{Bepsilon})
defining $A_{\epsilon}(z)$ and $B_{\epsilon}(z)$ respectively,
the singularities of those coefficients at points $z\neq 0$,
correspond to zeros of the function $\Delta(z)$ given by expression
(\ref{III22}). Since the analytic functions $A_{ij}(z)$ and
$\Delta (z)$ do not depend on the initial condition
$G_{\text{\tiny in}}$, the relaxation is an intrinsic dynamical process, as expected.
\bigskip
After some algebra detailed in Appendix~\ref{C},
we find that $\Delta (z)$ reduces to
the Laplace transform
\begin{equation}
\label{III23}
\Delta (z)=\epsilon^{2} \, \int_0^{\infty} \hbox{d} y f_{\epsilon}(y)
\exp (-zy)
\end{equation}
of the real, positive, and monotonically decreasing function
\begin{equation}
\label{III24}
\epsilon^{2} f_{\epsilon}(y)= \frac{\epsilon^{2} (1+3y)}{(1+y)(1+2y)^{1/2}}
\exp \left( -y -\epsilon^2 \frac{y^3(2+y)}{6(1+2y)} \right) \, .
\end{equation}
Owing to the fast
decay of $f_{\epsilon}(y)$, the integral (\ref{III23}) converges for any $z$, so
$\Delta (z)$ is an entire function of $z$. Also, the monotonic decay of $f_{\epsilon}(y)$
and its positivity imply some general properties for the roots of $\Delta (z)$.
First of all, $\Delta (z)$ cannot vanish for $\Re (z) \geq 0$.
Moreover, as $\Delta (z)$ is strictly positive
for $z$ real, the zeros of $\Delta (z)$ appear in complex conjugate pairs,
while they are isolated with strictly negative real parts
and nonvanishing imaginary parts. Consequently, the long-time relaxation of the velocity distribution is governed by
the pair of zeros which is closest to the imaginary axis. Denoting them by
$z^{\pm}=-\lambda \pm i \omega$ with $\omega \neq 0$ and $\lambda > 0$, we conclude that
$G(w;\tau)$ relaxes towards $G_{\text{\tiny st}}(w)$
via exponentially damped oscillations
\begin{equation}
\label{III25}
G(w;\tau) - G_{\text{\tiny st}}(w) \sim C(w) \cos [\omega \tau + \eta(w)]
\exp (-\lambda \tau), \,\,\,\,
\text{when} \,\,\,\, \tau \to \infty \,
\end{equation}
where $C(w)$ and $\eta(w)$ are an amplitude and a
phase respectively. It should be noticed that both functions
$C(w)$ and $\eta(w)$ depend on initial conditions.
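The positivity and monotonic decrease of $\Delta(z)$ along the real axis, which underlie the conjugate-pair structure of the zeros, can be probed numerically from the representation (\ref{III23})--(\ref{III24}). A minimal sketch (assuming SciPy; the choice $\epsilon = 1$ and the sample points are ours):

```python
import math
from scipy.integrate import quad

def Delta(z, eps):
    """Laplace transform (III23) of eps^2 f_eps(y), with f_eps from (III24)."""
    def integrand(y):
        pref = eps**2 * (1.0 + 3.0 * y) / ((1.0 + y) * math.sqrt(1.0 + 2.0 * y))
        expo = -y - eps**2 * y**3 * (2.0 + y) / (6.0 * (1.0 + 2.0 * y)) - z * y
        return pref * math.exp(expo)
    return quad(integrand, 0.0, math.inf)[0]

# Delta is strictly positive and decreasing along the real axis,
# consistent with its zeros coming in complex-conjugate pairs off that axis
vals = [Delta(z, 1.0) for z in (-2.0, -1.0, 0.0, 1.0)]
print(all(v > 0 for v in vals))
```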
\bigskip
At a given value of $\epsilon$, the zeros $z^\pm$ are found
by solving numerically the equation $\Delta (z^\pm)=0$.
In the weak- or strong-field limits, we can derive
asymptotic formulae for such zeros as follows.
First, as indicated by numerically computing $z^\pm$ for small values of $\epsilon$,
$z^\pm$ collapse to $z_0=-1$ when $\epsilon \to 0$. The corresponding asymptotic
behaviour can be derived by noting that, for $z$ close to
$z_0$, the leading contributions to $\Delta(z)$ in integral
(\ref{III23}) arise from large values of $y$. Then, we set
$y=\xi/\epsilon^{2/3}$ and $z=-1 +s\; \epsilon^{2/3}$, which
provides
\begin{equation}
\label{III26bis}
\Delta (-1 +s\; \epsilon^{2/3}) \sim \frac{3\;\epsilon^{5/3}}{\sqrt{2}} \,
\int_0^\infty \hbox{d} \xi \; \xi^{-1/2} \exp(-s\;\xi-\xi^3/12)
\end{equation}
when $\epsilon \to 0$ at fixed $s$. By numerical methods, we find the
pair of complex conjugate zeros $s_0^\pm $ of the integral
\[ \int_0^\infty \hbox{d} \xi \; \xi^{-1/2} \exp(-s\;\xi-\xi^3/12) \]
that are closest to the imaginary axis. Therefore, when $\epsilon \to 0$,
the damping factor $\lambda(\epsilon)$ goes to $1$ according to
\begin{equation}
\label{III26ter}
\lambda(\epsilon)=1-\Re(s_0^\pm)\;\epsilon^{2/3} +o(\epsilon^{2/3})
\end{equation}
with $\Re(s_0^\pm) \simeq -1.169$, while the frequency $\omega(\epsilon)$
vanishes as $\Im(s_0^+)\;\epsilon^{2/3}$ with $ \Im(s_0^+) \simeq 2.026$.
Notice that for fixed $z$, not located on the real half-axis $]-\infty,-1]$,
$\Delta(z)$ behaves as
\begin{equation}
\label{III26}
\Delta (z) \sim \epsilon^{2} \, \Delta_0(z)
\end{equation}
when $\epsilon \to 0$, with
\begin{multline}
\label{III27}
\Delta_0(z) = \sqrt{\frac{\pi}{2(z+1)}} \, e^{(z+1)/2}
\left[ 1 - \text{Erf}\left(\sqrt{(z+1)/2}\right)\right] \\
\times \left[3-\sqrt{2\pi(z+1)} e^{(z+1)/2} \left(1 - \text{Erf}\left(\sqrt{(z+1)/2}\right)\right)\right] \, .
\end{multline}
Here, $\sqrt{(z+1)/2}$ is defined as the usual real positive
square root $\sqrt{(x+1)/2}$
for real $z=x$ belonging to the half axis $x > -1$,
while the complementary half-axis
$z=x \leq -1$ is a cut-line ending at the branch point $z=-1$. That point
is the singular point of $1/\Delta_0(z)$ closest to the imaginary axis, as
strongly suggested by a numerical search of the zeros of $\Delta_0(z)$. Therefore,
both $\lambda(\epsilon)$ and $\omega(\epsilon)$ are continuous functions of
$\epsilon$ at $\epsilon=0$ with $\lambda(0)=1$ and $\omega(0)=0$. At $\epsilon=0$, the exponentially damped oscillating decay (\ref{III25}) becomes
an exponentially damped monotonic decay multiplied by the power law $\tau^{-3/2}$.
That power law arises from the presence of a singular term of order
$\sqrt{(z+1)/2}$ in the expansion of $\tilde{G}(w;z)$ around the branch
point $z=-1$.
\bigskip
When $\epsilon \to \infty$, the zeros of $\Delta (z)$
are obtained by simultaneously changing $y$ to
$\xi/\epsilon^{2/3}$ in the integral (\ref{III23}) and by rescaling
$z$ as $\epsilon^{2/3}s$. This provides
\begin{equation}
\label{III28}
\Delta (\epsilon^{2/3}s) \sim \epsilon^{4/3} \, \Delta_\infty (s)
\,\,\,\,\text{when}\,\,\,\, \epsilon \to \infty\,\,\,\,\text{at fixed}\,\,\,\, s\, ,
\end{equation}
with
\begin{equation}
\label{III29}
\Delta_\infty (s)= \int_0^{\infty} \hbox{d} \xi \exp \left(-s\, \xi -\xi^3/3 \right)\, .
\end{equation}
Therefore, when $\epsilon \to \infty$, $z^{\pm}$ behave as
$z^{\pm} \sim \epsilon^{2/3}s_\infty^\pm$, where $s_\infty^\pm$ are the
zeros of $\Delta_\infty (s)$ closest to the imaginary axis. The corresponding
large-$\epsilon$ asymptotic behaviour of the damping factor
$\lambda(\epsilon)$ is
\begin{equation}
\label{III28bis}
\lambda(\epsilon)=-\Re(s_\infty^\pm)\;\epsilon^{2/3} + o(\epsilon^{2/3})
\end{equation}
with $\Re(s_\infty^\pm) \simeq -2.726$, while the frequency $\omega(\epsilon)$
diverges as $\Im(s_\infty^+)\;\epsilon^{2/3}$ with $ \Im(s_\infty^+) \simeq 6.260$.
Notice that the relaxation time $\lambda^{-1}(\epsilon)$ goes to zero as $\epsilon^{-2/3}$,
like the average time between collisions
$\tau_{\text{\tiny coll}} \sim \langle v \rangle_{\text{\tiny st}}/a$
used in our simple heuristic derivation of the $\epsilon$-dependence
of the stationary current in the strong field limit.
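The zero of $\Delta_\infty(s)$ closest to the imaginary axis can be located numerically; the sketch below (the quadrature grid, cutoff and Newton refinement are our own illustrative choices) refines the quoted value $s_\infty^+ \simeq -2.726 + 6.260\,i$:

```python
import numpy as np

# Complex trapezoidal quadrature on a fixed grid for
# Delta_inf(s) = Int_0^inf exp(-s xi - xi^3/3) d xi, eq. (III29).
# The cutoff xi = 20 is safe: the xi^3/3 term kills the integrand well before.
xi = np.linspace(0.0, 20.0, 200001)
h = xi[1] - xi[0]

def ctrapz(f):
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def delta_inf(s):
    return ctrapz(np.exp(-s * xi - xi**3 / 3.0))

def ddelta_inf(s):  # derivative of Delta_inf with respect to s
    return ctrapz(-xi * np.exp(-s * xi - xi**3 / 3.0))

# Newton refinement starting from the quoted root s_inf^+ = -2.726 + 6.260 i
s = -2.726 + 6.260j
for _ in range(30):
    step = delta_inf(s) / ddelta_inf(s)
    s -= step
    if abs(step) < 1e-12:
        break

print(round(s.real, 3), round(s.imag, 3))  # close to -2.726, 6.260
```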
In Fig.~\ref{AP09e}, we draw the damping factor $\lambda(\epsilon)$ as a function of $\epsilon$.
\begin{figure}
\includegraphics[width=0.9\textwidth]{AP09e.eps}
\caption{\label{AP09e} Damping factor
$\lambda(\epsilon )$ as a function of $\epsilon$. The dashed and
dotted lines represent the asymptotical behaviours
(\ref{III26ter}) and (\ref{III28bis}) at small and large $\epsilon$ respectively.}
\end{figure}
\subsection{Relaxation of density in position space}
In Appendix~\ref{A} we derive an explicit formula for the zeroth moment $\tilde{M}_{0}(k;z)$ of the distribution $\tilde{F}(k,w;z)$, which contains all the information on the evolution of the spatial density of the propagating particle. Formula (\ref{C8}) clearly reveals the presence of a hydrodynamic pole in $\tilde{M}_{0}(k;z)$, namely the root of the equation
\begin{equation}
\label{D1}
z + (k^2 + i\epsilon k)\; U(k;z) = 0
\end{equation}
where
\begin{equation}
\label{U}
U(k;z) = \frac{A_{11}(k;z)A_{00}(k;z)-A_{10}(k;z)A_{01}(k;z)}{\epsilon A_{00}(k;z)+ 2[A_{11}(k;z)A_{00}(k;z)-A_{10}(k;z)A_{01}(k;z)]} \; .
\end{equation}
\bigskip
If we consider the small-$k$ limit and if we assume the asymptotic form
\begin{equation}
\label{D2}
z_{\rm{hy}}(k) = -ic k - D(\epsilon)\, k^2 + o(k^2)
\end{equation}
for the hydrodynamic root,
we find immediately from equation (\ref{D1}) the formula
\begin{equation}
\label{D3}
c = \epsilon\; U(0;0) \; .
\end{equation}
This shows that the mode propagates with the average stationary velocity $ \langle w \rangle_{\text{\tiny st}} =
\epsilon \, U(0;0) $ derived in expression (\ref{III11}).
\bigskip
In order to infer the formula for the diffusion coefficient $D(\epsilon)$,
it is necessary to calculate the term linear in variable $k$
in the expansion of function $U(k;z)$ at $z=-ick$. Indeed, equation
(\ref{D1}) implies the equality
\begin{equation}
\label{D4}
D(\epsilon ) = U(0;0) + i\epsilon \, \frac{\hbox{d}}{\hbox{d} k}U(k;-ick)|_{k=0} \; .
\end{equation}
Taking into account the structure (\ref{U}) of $U(k;z)$ we find the formula
\begin{equation}
\label{D5}
D(\epsilon ) = \frac{\langle w \rangle_{\text{\tiny st}}}{\epsilon} + \frac{ A_{00}[{A}^{\prime}_{11}A_{00}- {A}^{\prime}_{01}A_{10}]+ A_{01}[ {A}^{\prime}_{00}A_{10} -{A}^{\prime}_{10}A_{00}]}{\Delta^{2}}
\end{equation}
where all $A_{jl}$ and $\Delta$ are taken at $k=z=0$, and where
\begin{equation}
\label{D6}
{A}^{\prime}_{jl} = i\epsilon \frac{\hbox{d}}{\hbox{d} k}A_{jl}(k;-ick)|_{k=0} \; .
\end{equation}
A particularly useful representation of the derivative appearing in
expression (\ref{D6}) can be deduced from formulae (\ref{S}) and (\ref{C5}) defining functions $A_{jl}(k;z)$. An integration by parts yields
\begin{equation}
\label{D7}
{A}^{\prime}_{jl}=\int \hbox{d} w \int^{w}_{-\infty} \hbox{d} u\, (u-c) \int^{u}_{-\infty} \hbox{d} v \,
w^j\, v^l\exp \{ [S(0,v;0)-S(0,w;0)]/\epsilon \}\Phi(v) \; .
\end{equation}
It is quite remarkable that equation (\ref{D7}) allows us
to establish a relation between the quantities ${A}^{\prime}_{jl} $
and the stationary velocity distribution $G_{\text{\tiny st}}(w)$.
Indeed, using equation (\ref{III10}), we readily obtain the equalities
\begin{multline}
\int \hbox{d} w \int_{-\infty}^{w} \hbox{d} u \exp \{ [S(0,u;0)-S(0,w;0)]/\epsilon \}(u-c)\; G_{\text{\tiny st}}(u) \\
= A_{01}-c A_{00} + \frac{1}{\Delta} [A_{00}{A}^{\prime}_{01} - A_{01}{A}^{\prime}_{00} ] \equiv J_{01}
\label{D8a}
\end{multline}
and
\begin{multline}
\int \hbox{d} w \int_{-\infty}^{w} \hbox{d} u \exp \{ [S(0,u;0)-S(0,w;0)]/
\epsilon \}\,w\, (u-c)\; G_{\text{\tiny st}}(u) \\
= A_{11}-c A_{10} + \frac{1}{\Delta} [A_{00}{A}^{\prime}_{11} - A_{01}{A}^{\prime}_{10} ]\equiv J_{11} \; .
\label{D9}
\end{multline}
Then, we find that the linear combination $(A_{00}J_{11}-A_{10}J_{01})$
of integrals $J_{11}$ and $J_{01}$ reduces to
\begin{equation}
\label{D10}
A_{11}A_{00}-A_{10}A_{01} + \frac{1}{\Delta} \left\{ A_{00}[A_{00}{A}^{\prime}_{11} - A_{01}{A}^{\prime}_{10}]-
A_{10}[A_{00}{A}^{\prime}_{01} - A_{01}{A}^{\prime}_{00} ] \right\} \; .
\end{equation}
The comparison of that expression with equation (\ref{D5})
leads to the compact final result
\begin{equation}
\label{D11}
D(\epsilon ) = \frac{A_{00}J_{11}-A_{10}J_{01}}{\Delta} \; .
\end{equation}
The above formula involves, \textsl{via} coefficients $J_{11}$
and $J_{01}$, averages over the stationary velocity distribution. In fact, we show in Appendix~\ref{B} that expression (\ref{D11}) follows
by extending, to the present out-of-equilibrium stationary
state, the familiar Green-Kubo relation between the diffusion
coefficient and the velocity fluctuations. That important fact is one of
the main observations of the present study.
\bigskip
When $\epsilon \to 0$, the behaviour of $D(\epsilon)$ is easily inferred
by inserting the small-$\epsilon$ expansion (\ref{III15}) of
the stationary velocity distribution $G_{\text{\tiny st}}(w)$ into
formula (\ref{D11}). We find that $D(\epsilon)$ goes to the conductivity
$\sigma$ given by (\ref{III17}), as quoted above, with a negative
$\epsilon^2$-correction.
When $\epsilon \to \infty$, we can use the large-$\epsilon$ form
(\ref{III19}) of $G_{\text{\tiny st}}(w)$ for evaluating coefficients $J_{11}$ and $ J_{01}$.
Using also the corresponding behaviours of coefficients $A_{00}$ and $A_{10}$,
we eventually obtain that $D(\epsilon)$ goes to the finite
value
\begin{equation}
\label{D12}
D_\infty = \frac{\Gamma^3(1/3)-9\Gamma(1/3)\Gamma(2/3)+6\Gamma^3(2/3)}{2 \Gamma^3(1/3)} \simeq 0.0384 \; .
\end{equation}
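As a quick numerical check of the closed form (\ref{D12}) (an aside, using only the standard library):

```python
import math

# D_inf = [Gamma^3(1/3) - 9 Gamma(1/3) Gamma(2/3) + 6 Gamma^3(2/3)] / [2 Gamma^3(1/3)]
g13 = math.gamma(1.0 / 3.0)
g23 = math.gamma(2.0 / 3.0)
D_inf = (g13**3 - 9.0 * g13 * g23 + 6.0 * g23**3) / (2.0 * g13**3)

print(round(D_inf, 4))  # approximately 0.0384
```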
The external field dependence of the diffusion coefficient $D(\epsilon)$
is shown in Fig.~\ref{AP09d}.
\bigskip
The expansion (\ref{D2}) of $z_{\rm{hy}}(k)$ can be pursued beyond
the $k^2$-diffusion term, by expanding the function $U(k;z)$ in a double
power series with respect to $z$ and $k$. According to the integral
expression of functions $A_{jl}(k;z)$ derived in Appendix~\ref{C}, all
coefficients of those double series are finite. This implies that
the hydrodynamic root $z_{\rm{hy}}(k)$ of equation (\ref{D1}) can be
formally represented by an entire series in $k$, namely
\[ z_{\rm{hy}}(k)=\sum_{n=1}^\infty \alpha_n k^n \; ,\]
with $\alpha_1=-ic$ and $\alpha_2=-D(\epsilon)$. Coefficient $\alpha_n$
($n \geq 3$) can be straightforwardly computed once lowest-order
coefficients $\alpha_p$ with $1 \leq p \leq n-1$ have been determined.
As shown by that calculation, all coefficients are finite.
Therefore, and similarly to what happens in the Maxwell case, only
positive integer powers of $k$ appear in the small-$k$ expansion
of $z_{\rm{hy}}(k)$. Now, we are not able to determine the radius of
convergence of that expansion, so we cannot conclude about the analyticity of
function $z_{\rm{hy}}(k)$. However, we notice that, contrary to the
Maxwell case, the integrals defining $A_{jl}(k;z)$ remain well-defined
for any complex value of $k$, as soon as $\epsilon \neq 0$
(see Appendix~\ref{C}). This suggests
that $z_{\rm{hy}}(k)$ might be an analytic function of $k$ at $k=0$,
except for $\epsilon = 0$, in which case $k=0$ should be a singular point.
\begin{figure}
\includegraphics[width=0.9\textwidth]{AP09d.eps}
\caption{\label{AP09d} Diffusion coefficient
$D(\epsilon )$ as a function of $\epsilon$. The dotted line
represents the constant asymptotic value
$D_{\infty}$.}
\end{figure}
\section{Concluding comments}
The idea of this work was to perform a detailed study of the
approach to an out-of-equilibrium stationary state,
by considering systems for which analytic solutions
can be derived. To this end we solved, within Boltzmann's kinetic theory,
the one-dimensional initial value problem for the distribution of a particle accelerated by a constant external field and suffering elastic collisions with thermalized bath particles. Our exact results for the Maxwell model and
for the very hard particle model
support the general picture mentioned in the Introduction:
\begin{itemize}
\item a uniform exponentially fast relaxation of the velocity distribution
\item diffusive spreading in space in the reference frame moving with the stationary flow
\item equality between the diffusion coefficient appearing in the hydrodynamic mode and the
one given by the generalized Green-Kubo formula
\end{itemize}
\bigskip
Although both models display the same phenomena listed above, the
variations of the respective quantities of interest with respect
to $\epsilon$ are different. First we notice that, as far as deformations
of the equilibrium Maxwell distribution are concerned, the
external field is much less efficient for very hard particles. This is
well illustrated by comparing Figs.~\ref{AP09a} and \ref{AP09b}:
for the Maxwell system, a significant deformation of $\Phi$ is
found for $\epsilon=5$, while for the very hard particle model a similar
deformation is observed for $\epsilon=100$.
This can be easily interpreted as follows.
The collision frequency for very hard particles
becomes much larger than its Maxwell gas counterpart when the
external field increases, so it costs more energy to maintain
a stationary distribution far from the equilibrium one.
That mechanism also explains various related
observations. For instance, the large-velocity behaviour of
$G_{\text{\tiny st}}(w)$ is identical to the equilibrium Gaussian
for very hard particles, while it takes an exponential form in the Maxwell
gas. Also, the average current $\langle w \rangle_{\text{\tiny st}}$
increases more slowly when $\epsilon \to \infty$ for very hard particles,
and the corresponding relaxation time $\lambda^{-1}(\epsilon)$
vanishes instead of remaining constant for the Maxwell gas.
\bigskip
Among the above phenomena, the emergence of a symmetric diffusion process
in the moving reference frame is quite remarkable. In such a frame, there
is some kind of cancellation between the action of the external field and
the effects of collisions induced by the counterflow of bath particles
with velocity $u_{\text{\tiny bath}}^{\ast}=-
\langle v \rangle_{\text{\tiny st}}$. The corresponding diffusion coefficient
$D(\epsilon)$ increases with $\epsilon$
for the Maxwell gas (case $\gamma=0$),
while it decreases and
saturates to a finite value for very hard particles
(case $\gamma=2$). Therefore, beyond the
previous cancellation, it seems that the large number of collisions
for $\gamma=2$ shrinks equilibrium fluctuations. On the contrary, for
$\gamma=0$, since $D(\epsilon)$ diverges when
$\epsilon \to \infty$, the residual effect of collisions
in the reference frame seems to vanish and particles tend to behave
as if they were free.
\bigskip
We expect that the same qualitative picture should be valid in the hard rod case, which corresponds to the intermediate value $\gamma = 1$ of the exponent
$\gamma$ in equation (\ref{I1}). The quantitative behaviours
should interpolate between those described for $\gamma=0$ and
$\gamma=2$. For instance, the stationary distribution
$G_{\text{\tiny st}}(w)$ computed in Ref.~\cite{GP86} displays
a large-velocity asymptotic behaviour which is indeed intermediate between
those derived here for $\gamma=0$ and $\gamma=2$. Also, the
average current $\langle v \rangle_{\text{\tiny st}}$ is of order $\epsilon^{1/2}$ for
large $\epsilon$, which lies between the $\epsilon$- and
$\epsilon^{1/3}$-behaviours found for $\gamma=0$ and $\gamma=2$
respectively. Notice that the $\epsilon^{1/3}$-behaviour for $\gamma=2$
can be retrieved within a self-consistent argument, which uses
in an essential way the existence of the velocity scale related to the particle-particle interaction. Whereas the thermal velocity scale becomes irrelevant when $\epsilon\to\infty$, the interaction scale remains important.
In the case of hard rods such an interaction scale does not appear
in the kinetic equation, and the unique combination of parameters
having the dimension of a velocity is $\sqrt{a/\rho}$,
which does provide a different strong-field behaviour of
$\langle v \rangle_{\text{\tiny st}}$, of order $\epsilon^{1/2}$.
\section{INTRODUCTION}
Group-III nitride semiconductors such as AlN, GaN and InN have drawn considerable attention in recent years because they generate efficient luminescence and their band gaps include the entire visible spectrum and a part of the ultraviolet (UV). As a result of intense research on these compounds, visible light-emitting diodes and UV laser diodes are available nowadays, opening the way to further developments in optoelectronics, especially in optical storage technology. These materials are also advantageous in high-power and high-temperature electronics. We focus on GaN, since it is the most outstanding nitride of this group, widely used and researched.\cite{orton,vurgaftman,morko,reshchikov}
The great interest in semiconductor quantum dots (QD's) is based on the possibility of tailoring their electrooptical properties through the control of size and doping. Nanocrystalline\cite{leppert98,micic99,preschilla00} and self-assembled\cite{APL69-4096,APL80-3937,APL83-984,PRB74-75305,JAP102-24311,PRB75-125306} QD's of GaN have been synthesized and show the confinement-induced blue shift. Recently, Eu-doped GaN nanocrystals (NC's) yielding a bright and narrow Eu$^{3+}$ ion fluorescence line have been shown to be useful as an efficient marker of important biological processes.\cite{fluorescence} However, to our knowledge, no experimental study of donor- or acceptor-doped GaN QD's has yet been reported. Theoretically, spherical NC's containing a substitutional impurity have been investigated previously.\cite{einevoll,chuu,porras,jap74,zhu,deng,ferreira,jap81,yang,janis,ranjan,jap90,movilla,perez,pssb} Both donors and acceptors have been studied within a variational approach (VA)\cite{deng} and the effective-mass approximation (EMA).\cite{chuu,porras,jap74,zhu,ferreira,jap81,yang,janis,ranjan,jap90,movilla,pssb} Acceptors have also been treated in the atomistic framework: Effective bond-orbital\cite{einevoll} and tight-binding (TB)\cite{perez} models.
The electronic structure of a doped QD is highly sensitive to the position of the impurity atom. But most of the theoretical studies reported so far have focused on on-center impurities. The impurity position dependence has been investigated only for donors in the continuous models, VA and EMA, which are known to be inadequate for the electronic structure of small-size QD's and highly localized acceptor states even in the bulk. In the present work we report a TB model calculation of the electronic structure for both on- and off-center donor and acceptor impurities in GaN NC's of diameter up to 13.5~nm. A preliminary study reported previously\cite{perez} was limited to only on-center acceptor impurities in small-size NC's up to 4.5~nm in diameter. Also, it was based on the simple semi-empirical $sp^3s^{\ast}$ model of Vogl \textit{et al}.\cite{vogl} In contrast, the calculations presented here are based on the extended $sp^3d^5s^{\ast}$ model of Jancu \textit{et al}.,\cite{beltram} which provides a more satisfactory description of the band structure in the bulk. For the acceptor impurity we focus on Mg as an example. It substitutes for a Ga cation in the zinc-blende lattice. The hole binding energy in the bulk is known to be $236$~meV.\cite{przybylinska} For the donor we consider O, which replaces a N anion yielding an electron binding energy of $33$~meV in the bulk.\cite{reshchikov} The paper is organized as follows: In Section II we present an outline of the theoretical model; in Sections III and IV we show the binding energy and average radius of the bound carrier as a function of the dopant location and the QD size; in Section V we conclude by summarizing the main results.
\section{THEORY}
The GaN host material is described by a semi-empirical $sp^3d^5s^{\ast}$ TB Hamiltonian which includes the spin-orbit interaction, and was first introduced by
Jancu \textit{et al.}\cite{beltram}:
\begin{equation}
H_{TB}=\sum_{ij}\sum_{\mu\nu}h^{ij}_{\mu\nu}c_{i\mu}^\dagger c_{j\nu}+\sum_{i}\sum_{\mu\nu}\lambda_i\langle\mu|\mathbf{l} \cdot \mathbf{s}|\nu\rangle c_{i\mu}^\dagger c_{i\nu},
\end{equation}
where \textit{i} and \textit{j} denote the atomic sites, $\mu$ and $\nu$ the
single-electron orbitals, and $h^{ij}_{\mu\nu}$ the hopping and on-site energies; $\lambda_i$ is a constant standing for the spin-orbit interaction. The tight-binding parameters account for the electronic structure of bulk GaN.\cite{jancu}
We need to take into consideration the dangling bonds on the surface of the quasi-spherical crystallite, since they introduce states within the band gap. The simplest way to move
these states several eV away from the band gap region consists in binding every dangling bond to a hydrogen atom;\cite{perez110} the Ga-H and N-H distances are taken as one half the respective covalent bond lengths. This simple passivation procedure is detailed further in a previous work\cite{perez110} and references therein.
The effective Bohr radius $a_B^a$ of the hole bound to an acceptor in bulk GaN
is typically small: In the case of Mg (binding energy $E_b^a=236$ meV) it is 4.5~{\AA}. The Bohr radius $a_B^d$ of the electron
bound to a donor is much larger: $a_B^d= 27.7$~{\AA} for O
($E_b^d=33$ meV). These values of the Bohr radii are obtained in the hydrogenic
model: $a_B=\hbar/\sqrt{2E_b(\rm{bulk})\,m^{\ast}}$, where the effective mass is 0.8~$\rm{m}_0$ for the bound hole\cite{perez} and 0.15~$\rm{m}_0$ for the bound electron.\cite{vurgaftman} Since the donor Bohr radius is over six times larger than that of the acceptor, we need to construct large enough QD's to follow the confinement effects on donors. The resulting Hamiltonian matrices are huge and sparse. For example, our largest QD consists of $\sim113,000$ atoms and gives a matrix of size over $2\cdot10^6\times2\cdot10^6$. Furthermore, the new (lower) symmetries introduced by the off-center impurities are difficult to handle within a group-theoretical framework, as in Ref.~\onlinecite{perez}. Hence, we resort to Lanczos algorithms\cite{cullum,matrix,klimeck} to calculate the few dopant levels we are interested in. The numerical libraries that we use are freely available at the ARPACK site.\cite{arpack}
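As a consistency check on the quoted Bohr radii, the hydrogenic estimate $a_B=\hbar/\sqrt{2E_b(\rm{bulk})\,m^{\ast}}$ can be evaluated numerically. The sketch below is illustrative (not part of the original calculation) and uses CODATA constants in SI units:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M0 = 9.1093837015e-31   # free-electron mass, kg
EV = 1.602176634e-19    # 1 eV in J

def bohr_radius_angstrom(e_b_mev, m_eff):
    """Hydrogenic Bohr radius a_B = hbar / sqrt(2 * E_b(bulk) * m*), in Angstrom."""
    e_b_joule = e_b_mev * 1e-3 * EV
    return HBAR / math.sqrt(2.0 * e_b_joule * m_eff * M0) * 1e10

a_acceptor = bohr_radius_angstrom(236.0, 0.80)  # Mg-bound hole: ~4.5 Angstrom
a_donor = bohr_radius_angstrom(33.0, 0.15)      # O-bound electron: ~27.7 Angstrom
```

These reproduce the values quoted above (4.5~{\AA} and 27.7~{\AA}), confirming the roughly six-fold difference in spatial extent that forces the use of large QD's for donors.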
The impurity is modelled by a screened Coulomb potential:
\begin{equation}
U(r_i,R)= \left\{ \begin{array}{lll}
\pm\frac{e^2}{\varepsilon(r_i,R)r_i}&\quad\rm{if}& r_i\neq0\\
U_0&\quad\rm{if}&r_i=0
\end{array}
\right.,
\end{equation}
where $r_i$ is the distance between the lattice site \textit{i} and the impurity, \textit{R} the NC radius, and $\varepsilon(r_i,R)$ the dielectric function within the crystal. Here $U_0$ stands for the central-cell potential, which obviously cannot be described in terms of the long-range Coulomb potential that represents the impurity atom in its crystalline environment as an effective point charge. In the absence of any precise theoretical model, as usual, we shall treat $U_0$ as a phenomenological parameter. Position-dependent dielectric functions have been proposed by Resta\cite{resta} and by Franceschetti and Troparevsky.\cite{franceschetti} The position dependence is, however, restricted to a small volume around the dopant, typically smaller than the distance between two nearest-neighbor atoms,\cite{resta} beyond which the permittivity recovers the long-distance behavior. When restricted to QD's, this simply means a value dependent on the QD size. We then take a size-dependent dielectric function given by the simplified Penn model\cite{tsu}:
\begin{equation}
\varepsilon(R)=1+\frac{\varepsilon_{B}-1}{1+(\xi/R)^\gamma}.
\end{equation}
The parameters entering this model are $\gamma=2$, the material-dependent $\xi=\pi E_F/k_F E_g=7.43$, by using $E_g=3.3$~eV and $E_F=16$~eV (Ref.~\onlinecite{orton}), and the bulk dielectric constant\cite{madelung} $\varepsilon_B(0)=8.9$.
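For illustration, the simplified Penn model above is straightforward to evaluate. The sketch below assumes $R$ and $\xi$ are expressed in the same length units ({\AA}):

```python
def penn_dielectric(r_nc, eps_b=8.9, xi=7.43, gamma=2.0):
    """Size-dependent dielectric function of the simplified Penn model:
    eps(R) = 1 + (eps_B - 1) / (1 + (xi/R)^gamma)."""
    return 1.0 + (eps_b - 1.0) / (1.0 + (xi / r_nc) ** gamma)
```

Screening is weakened (smaller $\varepsilon$) in small crystallites, and the bulk value $\varepsilon_B$ is recovered for $R\gg\xi$.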
The acceptor and donor binding energies are respectively defined as\cite{kohn,perez}
\begin{equation}
\label{eqn:hl}
\begin{array}{l}
E_b^a=E_a-E_{\text{HOMO}},\\
E_b^d=E_{\text{LUMO}}-E_d,
\end{array}
\end{equation}
where $E_{\text{HOMO}}(R)$ is the energy of the valence band-edge state, the so-called highest occupied molecular orbital (HOMO), and $E_{\text{LUMO}}(R)$ the energy of the conduction band-edge state, the lowest unoccupied molecular orbital (LUMO), in the undoped NC. $E_a(R)$ and $E_d(R)$ correspond to the same states, but in the doped NC.
We also compute the average radius of the bound carrier, $\bar{r}(R)$, which is related to the effective Bohr radius at large $R$:
\begin{equation}
\label{eqn:aver}
\bar{r}(R\rightarrow\infty)=\frac{3a_B}{2}\varpropto \frac{1}{\sqrt{E_b(\rm{bulk})}}.
\end{equation}
\section{RESULTS: IMPURITY AT THE NANOCRYSTAL CENTER}
We first study the electronic structure of undoped QD's. The calculated energy levels $E_{\text{HOMO}}(R)$ and $E_{\text{LUMO}}(R)$ are plotted against the NC radius in Fig.~\ref{fig:fig1}. They are the reference levels for calculating the impurity binding energies (see Eq.~(\ref{eqn:hl})). Note the dramatic increase of the band gap with decreasing QD size. This confinement-induced blue shift is in accord with the reported spectroscopic studies of GaN NC's.\cite{leppert98,micic99,APL80-3937} Quantitatively, we obtain a satisfactory agreement with experiment: As an example, the calculated gap for NC's of 30~{\AA} diameter is roughly the same as the experimental value $3.65$~eV reported in Ref.~\onlinecite{micic99}. It is interesting to note that, in the full size range, the HOMO and LUMO degeneracies we obtain are compatible with the $\Gamma_8$ and $\Gamma_6$ representations, respectively, in accord with the symmetry analysis of Ref.~\onlinecite{perez}.
Another feature illustrated in Fig.~\ref{fig:fig1} is a comparison of the electronic structure in two different crystallizations: Ga- and N-centered NC's. As can be seen in the figure, the difference between them is negligible for $R>15$~{\AA}. Even in smaller NC's (see the insets) the small fluctuations do not seem to be significant. This behavior allows us to restrict our considerations to cation-centered NC's for an acceptor impurity and anion-centered NC's for a donor impurity even in the case of off-center impurities (see below) without any loss of generality.
\begin{figure}[!ht]
\includegraphics[scale=0.3]{Fig1.eps}
\caption{\label{fig:fig1}Highest valence band (HOMO) and lowest conduction band
(LUMO) levels in Ga- and N-centered undoped NC's (open and closed symbols, respectively). In the insets the results for the smallest NC's are displayed. The dotted lines are drawn to guide the eye.}
\end{figure}
We next discuss the impurity states starting with the acceptor impurity. We calculate the HOMO in NC's with a Mg atom replacing the Ga at the center and deduce the corresponding hole binding energy $E_b^a$ as a function of the NC radius for different values of the parameter $U_0$. The results are found to fit the simple expression (dropping the superscripts for simplicity)
\begin{equation}
\label{eqn:simple-exp}
E_b(R)=\epsilon_b +\textit{A}\cdot\textit{R}^{-\beta},
\end{equation}
as shown in Fig.~\ref{fig:fig2} for $U_0=11$~eV. Note that this particular value of $U_0$ has been chosen for the Mg impurity in GaN, because the corresponding asymptotic limit $E_b(R\rightarrow\infty)=\epsilon_b=0.239$~eV approximately reproduces the experimental bulk binding energy (0.236 eV). Complementary results concerning the QD acceptor states presented in Figs.~\ref{fig:fig3}, \ref{fig:fig4} and \ref{fig:fig8} correspond to the same value of $U_0$. The fitting parameters $\epsilon_b$, \textit{A} and $\beta$, of course, depend on $U_0$. They may also depend on the QD size range. Generally speaking, the size ranges can be distinguished in terms of two
different regimes of confinement: $R>a_B$ (weak confinement) and $R<a_B$ (strong confinement). In the strong-confinement regime the usual image of the impurity center as a hydrogen-like entity is no longer valid. The asymptotic value $E_b$(bulk) obviously corresponds to $\epsilon_b$ in the weak-confinement regime. Since the radii of all the investigated NC's are larger than the Mg acceptor Bohr radius, all of them belong to the weak-confinement regime in this case, and a single set of fitting parameters is sufficient, as shown in Fig.~\ref{fig:fig2}.
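As an illustration, the scaling law above can be evaluated with the fit parameters quoted in the caption of Fig.~2 (these numbers are specific to $U_0=11$~eV):

```python
def acceptor_binding_energy(r_nc, eps_b=0.239, a=23.149, beta=1.659):
    """Acceptor binding energy E_b(R) = eps_b + A * R**(-beta), in eV with R
    in Angstrom, using the fit parameters quoted in Fig. 2 for U_0 = 11 eV."""
    return eps_b + a * r_nc ** (-beta)
```

The bulk limit $E_b(R\rightarrow\infty)=0.239$~eV reproduces the experimental 236~meV, while small crystallites show a strong confinement-induced enhancement.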
\begin{figure}[!ht]
\includegraphics[scale=0.3]{Fig2.eps}
\caption{\label{fig:fig2}NC size-dependent acceptor binding energy for $U_0=11$~eV and the fitting curve $E_b=0.239+23.149\,R^{-1.659}$ (dashed line).}
\end{figure}
The degree of localization of the bound carriers is characterized through the analysis of wave functions and average radii. In Fig.~\ref{fig:fig3} we show the shell-wise radial probability distribution of the HOMO in an undoped QD of 68~{\AA} radius in the upper panel. The ground-state of the hole in the same QD when doped with a Mg atom is shown in the lower panel. Both are band-edge wave functions, remarkably different from each other: The HOMO is spread over the whole NC with a maximum at $r\simeq R/2$, while the bound hole is essentially concentrated on the five nearest-neighbor atomic shells. This highly localized character of the acceptor hole in the ground state remains unchanged over the whole set of NC's, as illustrated by the small size dependence of the average radius shown in Fig.~4, in agreement with experiment in the bulk. The magnetic resonance spectra in Mg-doped GaN layers are consistent with the same description of the hole, mainly distributed over the four nearest-neighbor atoms surrounding the impurity.\cite{kaufmann} The average radius of the hole orbit plotted as a function of the NC radius in Fig.~\ref{fig:fig4} is found to fit the curve
\begin{equation}
\label{eqn:radio}
\bar{r}_a(R)=\frac{c_1}{\sqrt{E_b(R)}}=\frac{c_1}{\sqrt{\epsilon_b+\textit{c}_2\cdot \textit{R}^{-\textit{c}_3}}},
\end{equation}
where $c_1$, $c_2$ and $c_3$ are the fitting parameters. From (\ref{eqn:aver}), this expression adequately reproduces the EMA bulk radius at large R: $\bar{r}_a(R\rightarrow\infty)=6.3$~{\AA}, which is indeed close to $3a_B^a/2 = 6.7$~{\AA}.
\begin{figure}[!ht]
\includegraphics[scale=0.3]{Fig3.eps}
\caption{\label{fig:fig3}Radial probability distribution of the HOMO in an undoped NC of 68~{\AA} radius (upper part), and of the ground-state acceptor hole in the same NC doped with a Mg atom (lower part). \textit{r} is the radial distance from the QD center. Note that the two probability scales are different. Dotted lines guide the eye.}
\end{figure}
\begin{figure}[!ht]
\includegraphics[scale=0.3]{Fig4.eps}
\caption{\label{fig:fig4}QD size dependence of the average radius of the acceptor hole in the ground state and the fitting curve $\bar{r}_a=3.035/\sqrt{0.235+31.685\,R^{-1.713}}$ (dashed line).}
\end{figure}
The ground state of the donor impurity (O) is investigated in a similar way. In order to obtain the value of $U_0$ suitable for the O impurity in GaN we computed the binding energies for different values of $U_0$ and fit their size dependence to the expression (\ref{eqn:simple-exp}). Two distinct sets of fitting parameters are obtained for the two confinement regimes. In Fig.~\ref{fig:fig5} we show the results for $U_0=-2$ eV: The binding energy is strongly enhanced in the smallest NC's and tends to 0.027 eV in the bulk limit, which is close to the experimental value $E_b^d(\rm{bulk})=0.033$~eV. This value of $U_0$ is then used in further analysis of the donor states: Figs.~6, 7 and 9. In Fig.~\ref{fig:fig6} we show the probability distribution of the LUMO in an undoped QD of 68~{\AA} radius in the upper panel. That of the ground-state donor electron in the same QD when doped with an O atom is shown in the lower panel. They are the same band-edge states, but very different from each other, because of the carrier localization in the latter. From a comparison of Fig.~\ref{fig:fig3} with Fig.~\ref{fig:fig6} we see that the donor electron is much more spread out than the acceptor hole, regardless of the crystal size. The average radius of the bound electron presented in Fig.~\ref{fig:fig7} confirms this difference; it is remarkably larger than that of the bound hole in Fig.~\ref{fig:fig4} for all the NC's. The fitting curve
$\bar{r}_d(R)$ in the weak-confinement regime
tends to 40.7~{\AA} for $R\rightarrow\infty$, in good agreement with the EMA radius in bulk, $3a_B^d/2=41.6$~{\AA}.
\begin{figure}[!ht]
\includegraphics[scale=0.3]{Fig5.eps}
\caption{\label{fig:fig5}QD size-dependent donor binding energy for $U_0=-2$~eV and the fitting curves: $E_b=0.078+22.386\,R^{-1.837}$ in the strong-confinement regime (solid line) and $E_b=0.027+3.860\,R^{-1.115}$ in the weak-confinement (dashed line).}
\end{figure}
\begin{figure}[!ht]
\includegraphics[scale=0.3]{Fig6.eps}
\caption{\label{fig:fig6}Radial probability distribution of the LUMO in an undoped NC of 68~{\AA} radius (upper part) and that of the ground-state donor electron in the same NC when doped with an O atom (lower part). Note that the two probability scales are different.}
\end{figure}
\begin{figure}[!ht]
\includegraphics[scale=0.3]{Fig7.eps}
\caption{\label{fig:fig7}QD size-dependent average radius of the donor electron in the ground state and the fitting curves: $\bar{r}_d=6.697/\sqrt{0.021+81.871\,R^{-1.912}}$ in the strong-confinement regime (solid line) and $\bar{r}_d=7.279/\sqrt{0.032+81.495\,R^{-1.865}}$ in the weak-confinement (dashed line).}
\end{figure}
\section{RESULTS: OFF-CENTER IMPURITY}
We also compute off-center impurity states. As an example we focus on an intermediate-size NC of radius 31.6~{\AA}, slightly larger than the donor Bohr
radius ($\simeq28$~{\AA}). The impurity position dependence of the acceptor ground state is shown in Fig.~\ref{fig:fig8}, where the binding energy and the average radius are plotted against the distance of the impurity atom from the NC center. Notice the slow evolution until the acceptor approaches the QD surface, where dramatic changes occur: The binding energy drops and the average radius rises rather abruptly. This occurs when the Mg atom is placed within a surface layer of thickness $\sim4.5$~{\AA}, of the order of the acceptor Bohr radius. This behavior is related to the strongly localized nature of the bound hole state.
On the other hand, in the case of the donor impurity, the Bohr radius is comparable to the radius of the NC considered, and the bound electron is spread over the whole NC. As a result, the position dependence of the binding energy and the average radius, as shown in Fig.~\ref{fig:fig9}, is quite smooth. The effects are important regardless of the O atom location. The closer the dopant gets to the crystal boundary, the smaller is the binding energy and the larger the average electron radius. This monotonically decreasing binding energy is comparable to that calculated in Ref.~\onlinecite{pssb}. From the experimental point of view, our results show that the acceptor-doped NC's are less sensitive to the impurity position than the donor-doped NC's.
\begin{figure}[!ht]
\centerline{}
\includegraphics[scale=0.28]{Fig8.eps}
\caption{\label{fig:fig8}Binding energy and average radius of the hole ground
state as a function of the acceptor position in a QD of radius 31.6~{\AA} (closed and open symbols, respectively). The dashed and dotted lines are drawn to guide the eye.}
\end{figure}
\begin{figure}[!ht]
\centerline{}
\includegraphics[scale=0.28]{Fig9.eps}
\caption{\label{fig:fig9}Binding energy and average radius of the electron
ground state as a function of the donor position in a QD of radius 31.6~{\AA} (closed and open symbols, respectively).}
\end{figure}
\newpage
\section{CONCLUDING REMARKS}
We have studied acceptor (Mg) and donor (O) states in GaN NC's doped with a single substitutional impurity within the $sp^3d^5s^{\ast}$ tight-binding model. The zinc-blende-structure crystallites are assumed spherical, ranging from 1 to 13.5 nm in diameter. The computed binding energy is highly enhanced with respect to the experimental bulk value when the dopant is placed at the center of the smallest QD's. For larger sizes it decreases following a scaling law that extrapolates to the bulk limit ($236$~meV for Mg and $33$~meV for O). The
degree of localization of the bound carriers is analyzed through their wave functions and average radii. The ground-state acceptor hole is mostly distributed over the nearest-neighbor anion shell. The donor electron, on the contrary, is much less localized. We also investigated off-center impurities in intermediate-size NC's ($R\sim32$~{\AA}). The acceptor binding energy is found to be barely dependent on the Mg impurity position unless it lies within a surface shell of $\sim4.5$~{\AA} thickness ($\sim$ the acceptor Bohr radius), where the ionization energy is drastically reduced. On the contrary, the donor binding energy gradually decreases as the O dopant approaches the surface (the donor Bohr radius $\sim28$~{\AA} is similar to the radius of the NC's considered). This difference arises from the larger spatial extension of the bound electron as compared to the bound hole. Although we have focused on Mg and O impurities, our approach can be straightforwardly extended to other dopant species.
\section{Introduction}
Random Boolean networks (RBNs) are a class of complex systems that
show a well-studied transition between ordered and disordered phases.
The RBN model was initially introduced as an idealization of genetic
regulatory networks. Since then, the RBN model has attracted much
interest in a wide variety of fields, ranging from cell differentiation
and evolution to social and physical spin systems (for a review of
the RBN model see \cite{key-1} and \cite{key-2}, and the references within).
The dynamics of RBNs can be classified as ordered, disordered, or
critical, as a function of the average connectivity $k$ between
the elements of the network and the bias $p$ in the choice of Boolean
functions. For equiprobable Boolean functions, $p=1/2$, the critical
connectivity is $k_{c}=2$. The RBNs operating in the ordered regime
$(k<k_{c})$ exhibit simple dynamics, and are intrinsically robust
under structural and transient perturbations. In contrast, the RBNs
in the disordered regime $(k>k_{c})$ are extremely sensitive to small
perturbations, which rapidly propagate throughout the entire system.
Recently, it has been shown that the pairwise mutual information exhibits
a jump discontinuity at the critical value $k_{c}$ of the RBN model
\cite{key-3}. More recently, similar results have been reported for a related
class of discrete dynamical networks, called random threshold networks
(RTNs) \cite{key-4}.
In this paper we consider a non-linear random network (NLRN) model,
which represents a departure from the discrete-valued state representation
of the RBN and RTN models to a continuous-valued state
representation. We discuss the complex dynamics of the NLRN model,
as a function of the average connectivity (in-degree) $k$. We show
that the NLRN model exhibits an order-chaos phase transition, for
the same critical connectivity value $k_{c}=2$, as the RBN and RTN
models. Also, we show that both, pairwise correlation and complexity
measures are maximized in dynamically critical networks. These results
are in very good agreement with the previously reported studies on
the RBN and RTN models, and show once again that critical networks
provide an optimal coordination of diverse behavior.
\section{NLRN model}
The NLRN model consists of $N$ randomly interconnected variables,
with continuously valued states $-1\leq x_{n}\leq+1$, $n=1,...,N$.
At time $t$ the state of the network is described by an $N$ dimensional
vector\begin{equation}
\mathbf{x}(t)=[x_{1}(t),...,x_{N}(t)]^{T},\end{equation}
which is updated at time $t+1$ using the following map:\begin{equation}
\mathbf{x}(t+1)=f\left(\mathbf{w},\mathbf{x}(t)\right),\end{equation}
where\begin{equation}
f\left(\mathbf{w},\mathbf{x}(t)\right)=[f_{1}\left(\mathbf{w},\mathbf{x}(t)\right),...,f_{N}\left(\mathbf{w},\mathbf{x}(t)\right)]^{T},\end{equation}
and
\begin{equation}
f_{n}\left(\mathbf{w},\mathbf{x}(t)\right)=\tanh\left(\sum_{m=1}^{N}w_{nm}x_{m}(t)+x_{0}\right),\quad n=1,...,N.\end{equation}
Here, $\mathbf{w}$ is an $N\times N$ interaction matrix, with the
following randomly assigned elements: \begin{equation}
w_{nm}=\left\{ \begin{array}{ccc}
-1 & with\: probability & \frac{k}{2N}\\
0 & with\: probability & \frac{N-k}{N}\\
+1 & with\: probability & \frac{k}{2N}\end{array}\right.,\end{equation}
and $k$ is the average in-degree of the network.
The interaction weights can be interpreted as excitatory, if $w_{nm}=1$,
and respectively inhibitory, if $w_{nm}=-1$. Also, we have $w_{nm}=0$,
if $x_{m}$ is not an input to $x_{n}$. Obviously, the threshold
$x_{0}$ can be considered as a constant input, with a fixed weight
$w_{n0}=1$, to each variable $x_{n}$. Therefore, in the following
discussion we do not lose generality by assuming that the threshold
parameter is always set to $x_{0}=0$.
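A minimal sketch of the model (illustrative; the original simulations may differ in implementation details) samples the interaction matrix with the probabilities given above and iterates the map:

```python
import math
import random

def random_interaction_matrix(n_nodes, k, rng):
    """Assign w_nm = -1, 0, +1 with probabilities k/2N, (N-k)/N, k/2N."""
    p = k / (2.0 * n_nodes)
    w = [[0.0] * n_nodes for _ in range(n_nodes)]
    for n in range(n_nodes):
        for m in range(n_nodes):
            u = rng.random()
            if u < p:
                w[n][m] = -1.0
            elif u < 2.0 * p:
                w[n][m] = 1.0
    return w

def nlrn_step(w, x, x0=0.0):
    """One update of the NLRN map: x_n(t+1) = tanh(sum_m w_nm x_m(t) + x0)."""
    return [math.tanh(sum(w_n[m] * x[m] for m in range(len(x))) + x0)
            for w_n in w]

rng = random.Random(0)
n_nodes, k = 64, 2.0
w = random_interaction_matrix(n_nodes, k, rng)
x = [rng.uniform(-1.0, 1.0) for _ in range(n_nodes)]
for _ in range(200):  # discard the transient before analyzing the attractor
    x = nlrn_step(w, x)
```

Because of the $\tanh$ nonlinearity, the states remain confined to $(-1,+1)$ at every step.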
\section{Phase transition }
In order to illustrate the complex dynamics of the NLRN system, we
consider the results of the simulation of three networks, each containing
$N=128$ variables, and having different average in-degrees: $k=1$,
$k=2$ and respectively $k=4$. Also, the continuous values of the
variables $x_{n}(t)$ are encoded in shades of gray, with black and
white corresponding to the extreme values $\pm1$. In Figure 1, one
can easily see the three qualitatively different types of behavior:
ordered $(k=1)$, critical $(k=2)$, and chaotic $(k=4)$.
A quantitative characterization of the transition from the ordered
phase to the chaotic phase is given by the Lyapunov exponents \cite{key-5},
which measure the rate of separation of infinitesimally close trajectories
of a dynamical system. The linearized dynamics in tangent space is
given by:\begin{equation}
\mathbf{\delta x}(t+1)=\mathbf{J}\left(\mathbf{w},\mathbf{x}(t)\right)\delta\mathbf{x}(t),\end{equation}
where $\mathbf{J}$ is the Jacobian of the map $f$, with the elements\begin{equation}
J_{nm}=\frac{\partial f_{n}}{\partial x_{m}}=w_{nm}\left[1-\tanh^{2}\left(\sum_{m'=1}^{N}w_{nm'}x_{m'}(t)\right)\right],\end{equation}
and $\delta\mathbf{x}(t)$ is the separation vector. The dynamics
of $\delta\mathbf{x}(t)$ is typically very complex, involving rotation
and stretching. Therefore, the rate of separation can be different
for different orientations of initial separation vector, such that
one obtains a whole spectrum of Lyapunov exponents. In general, there
are $N$ possible values, which can be ordered: $\lambda_{1}\geq\lambda_{2}\geq...\geq\lambda_{N}$.
These Lyapunov exponents are associated with the Lyapunov vectors,
$v_{1},v_{2},...,v_{N}$, which form a basis in the tangent space.
A perturbation along $v_{n}$ will grow exponentially with a rate
$\lambda_{n}$. Oseledec's theorem \cite{key-6} proves that the following
limit exists:\begin{equation}
\lambda=\lim_{t\rightarrow\infty}\frac{1}{t}\ln\frac{\left\Vert \delta\mathbf{x}(t)\right\Vert }{\left\Vert \delta\mathbf{x}(0)\right\Vert }.\end{equation}
We should note that Oseledec's limit always corresponds to $\lambda_{1}$,
because an initial random perturbation will generally have a component
along the most unstable direction, $v_{1}$, and because of the exponential
growth the contributions of the other exponents are obliterated
over time. Thus, in general, it is enough to consider only the maximal
Lyapunov exponent (MLE), which suffices to characterize the behavior
of the dynamical system \cite{key-5}. A negative MLE corresponds to an
ordered system (fixed points and periodic dynamics), while a positive MLE
is an indication that the system is chaotic. A zero MLE is associated with
quasiperiodic dynamics and corresponds to the critical transition.
Figure 2 shows the MLE as a function of the average in-degree,
$\lambda(k)$. One can see that the critical in-degree is $k_{c}=2$,
such that for $k<k_{c}$ the NLRNs are ordered, and for $k>k_{c}$
the NLRNs become chaotic. The numerical results were obtained by averaging
over the NLRNs ensemble for each $k$, using $M=256$ NLRNs with $N=256$
elements. Also, for each time series we have discarded the first $1024$
steps, in order to eliminate the transient, and the MLE was calculated
from the next $1024$ steps.
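One standard way to estimate the MLE numerically (a Benettin-type sketch, not necessarily the exact procedure used here) is to iterate a tangent vector with the Jacobian above, renormalizing at every step and accumulating the logarithm of the stretching factor:

```python
import math

def max_lyapunov_exponent(w, x, steps=500, transient=200):
    """Benettin-style MLE estimate for the NLRN map with x0 = 0:
    iterate a tangent vector with the Jacobian and renormalize each step."""
    n_nodes = len(x)
    v = [1.0 / math.sqrt(float(n_nodes))] * n_nodes  # generic unit perturbation
    acc = 0.0
    for t in range(transient + steps):
        s = [sum(w[n][m] * x[m] for m in range(n_nodes)) for n in range(n_nodes)]
        x = [math.tanh(sn) for sn in s]
        d = [1.0 - xn * xn for xn in x]  # diagonal factor 1 - tanh^2(s_n)
        v = [d[n] * sum(w[n][m] * v[m] for m in range(n_nodes))
             for n in range(n_nodes)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm == 0.0:
            return float("-inf")  # perturbation dies out completely
        if t >= transient:
            acc += math.log(norm)
        v = [c / norm for c in v]
    return acc / steps
```

For example, a single-node network with $w_{11}=1$ contracts slowly toward the fixed point $x=0$ with derivative approaching one, so the estimate is slightly negative; averaging such estimates over ensembles sampled at different $k$ yields the $\lambda(k)$ curve.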
In order to provide a more detailed characterization of the order-chaos
phase transition we introduce the following spectral complexity measure:
\begin{equation}
Q_{\omega}=H_{\omega}D_{\omega},\end{equation}
where $H_{\omega}$ is the spectral entropy, and $D_{\omega}$ is
the spectral disequilibrium. The complexity is defined by the interplay
of two antagonistic behaviors: the increase of entropy as the system
becomes more and more disordered and the decrease in the disequilibrium
as the system approaches chaos (equiprobability). A similar complexity
measure, evaluated in the direct (time) space, was introduced in \cite{key-7},
for discrete state systems. In contrast, our complexity measure is
defined for continuous state systems, and it is evaluated in the inverse
(frequency) space.
In order to define the spectral entropy \cite{key-8}, we consider the discrete
Fourier transform (DFT): \begin{equation}
\mathbf{X}_{n}(\omega)=F_{\omega}[\mathbf{x}_{n}(t)]=[X_{n}(1),...,X_{n}(\Omega)]^{T},\end{equation}
\begin{equation}
X_{n}(\omega)=\sum_{t=1}^{T}x_{n}(t)\exp(-2\pi i\omega t/T),\quad\omega=1,...,\Omega,\end{equation}
and the power spectrum: \begin{equation}
\mathbf{Y}_{n}(\omega)=[Y_{n}(1),...,Y_{n}(\Omega)]^{T},\end{equation}
\begin{equation}
Y_{n}(\omega)=X_{n}^{*}(\omega)X_{n}(\omega)=\left|X_{n}(\omega)\right|^{2},\quad\omega=1,...,\Omega,\end{equation}
of the time series: \begin{equation}
\mathbf{x}_{n}(t)=[x_{n}(1),...,x_{n}(T)]^{T},\end{equation}
corresponding to the attractor of the variable $n$ of a given NLRN.
Here, $X_{n}^{*}$ stands for the complex conjugate value. Since the
variables $x_{n}(t)$ are real, the DFT result has the following symmetry:\begin{equation}
X_{n}(\omega)=X_{n}^{*}(T-\omega),\end{equation}
and therefore the power spectrum $\mathbf{Y}_{n}(\omega)$ has only
$\Omega=T/2$ values at positive frequencies.
One can normalize the power spectrum such that:\begin{equation}
p_{n}(\omega)=\frac{Y_{n}(\omega)}{\sum_{\omega=1}^{\Omega}Y_{n}(\omega)},\quad\omega=1,...,\Omega,\end{equation}
and\begin{equation}
\sum_{\omega=1}^{\Omega}p_{n}(\omega)=1.\end{equation}
The new variable $p_{n}(\omega)$ can be interpreted as the probability
of having the frequency $\omega$ \textit{embedded} in the time series
$\mathbf{x}_{n}(t)$. Thus, using the spectral probability vector
\begin{equation}
\mathbf{p}_{n}(\omega)=[p_{n}(1),...,p_{n}(\Omega)]^{T},\end{equation}
one can define the spectral entropy of the time series $\mathbf{x}_{n}(t)$,
as follows:\begin{equation}
H_{\omega}[\mathbf{p}_{n}(\omega)]=-\frac{1}{\log_{2}\Omega}\sum_{\omega=1}^{\Omega}p_{n}(\omega)\log_{2}p_{n}(\omega),\end{equation}
where $\log_{2}\Omega$ is the normalization constant, such that
$0\leq H_{\omega}\leq1$.
Obviously, the spectral entropy of the ordered systems will be low,
$H_{\omega}\sim0$, since only a very small number of frequencies
are present, while the spectral entropy of chaotic systems will be
high, $H_{\omega}\sim1$, since a large number of frequencies are
present. The spectral entropy takes the maximum value, $H_{\omega}=1$,
for the equilibrium state, which is defined deep in the chaotic regime,
where all frequencies become equiprobable: $p(\omega)=\Omega^{-1}$,
$\omega=1,...,\Omega$.
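A direct (if naive, $O(T^2)$) implementation of the one-sided power spectrum and the normalized spectral entropy can be sketched as follows, with the time index running $1,\dots,T$ as in the text:

```python
import cmath
import math

def power_spectrum(x):
    """One-sided power spectrum Y(w) = |X(w)|^2 for w = 1, ..., Omega = T/2."""
    t_len = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * w * (t + 1) / t_len)
                    for t in range(t_len))) ** 2
            for w in range(1, t_len // 2 + 1)]

def spectral_entropy(y):
    """Normalized spectral entropy H in [0, 1]; H = 0 for the zero attractor."""
    total = sum(y)
    if total == 0.0:
        return 0.0  # convention for the zero attractor x_n(t) = 0 (see text)
    h = -sum((yi / total) * math.log2(yi / total) for yi in y if yi > 0.0)
    return h / math.log2(len(y))
```

A single-frequency (periodic) attractor gives $H_\omega\approx0$, while a broadband chaotic attractor pushes $H_\omega$ toward 1.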
The spectral disequilibrium of the time series $\mathbf{x}_{n}(t)$,
measures the displacement of the corresponding probability vector
$\mathbf{p}_{n}(\omega)$ from the equilibrium state, and it is defined
as follows:\begin{equation}
D_{\omega}[\mathbf{p}_{n}(\omega),\Omega^{-1}]=\sum_{\omega=1}^{\Omega}[p_{n}(\omega)-\Omega^{-1}]^{2}.\end{equation}
Special attention is necessary in the case when the attractor is
zero: $\mathbf{x}_{n}(t)=0$. In this particular case, the power spectrum
is also zero, $\mathbf{Y}_{n}(\omega)=0$, and the probability vector
$\mathbf{p}_{n}(\omega)$ is undetermined. In order to overcome this
difficulty, we define $H_{\omega}=0$ and $D_{\omega}=1$ for this
particular attractor, such that it has the lowest entropy and the
largest displacement from equilibrium.
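The disequilibrium and the resulting complexity measure can then be sketched as below (illustrative only; the zero-attractor convention of the text is built in):

```python
import math

def spectral_disequilibrium(y):
    """D = sum_w (p(w) - 1/Omega)^2; D = 1 by convention for the zero attractor."""
    total = sum(y)
    if total == 0.0:
        return 1.0
    omega = len(y)
    return sum((yi / total - 1.0 / omega) ** 2 for yi in y)

def spectral_complexity(y):
    """Q = H * D, the product of normalized spectral entropy and disequilibrium."""
    total = sum(y)
    if total == 0.0:
        return 0.0  # H = 0 and D = 1 for the zero attractor
    omega = len(y)
    p = [yi / total for yi in y]
    h = -sum(pi * math.log2(pi) for pi in p if pi > 0.0) / math.log2(omega)
    return h * spectral_disequilibrium(y)
```

$Q_\omega$ vanishes at both extremes, a single spectral line ($H_\omega=0$) and the equiprobable chaotic limit ($D_\omega=0$), and is maximized in between, consistent with the maximum found at criticality.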
Since the spectral disequilibrium measures the distance between two
distributions, one may consider also the spectral Kullback-Leibler
divergence \cite{key-9} as an alternative. However, for the considered NLRN model,
the Kullback-Leibler divergence is simply given by: \begin{equation}
D_{\omega}^{KL}[\mathbf{p}_{n}(\omega)||\Omega^{-1}]=\frac{1}{\log_{2}\Omega}\sum_{\omega=1}^{\Omega}p_{n}(\omega)\log_{2}\left(\frac{p_{n}(\omega)}{\Omega^{-1}}\right)=1-H_{\omega}[\mathbf{p}_{n}(\omega)].\end{equation}
Similarly, one can show that the symmetrical Kullback-Leibler divergence
is given by:\begin{equation}
D_{\omega}^{KL}[\mathbf{p}_{n}(\omega)||\Omega^{-1}]+D_{\omega}^{KL}[\Omega^{-1}||\mathbf{p}_{n}(\omega)]=-H_{\omega}[\mathbf{p}_{n}(\omega)]-\frac{1}{\Omega\log_{2}\Omega}\sum_{\omega=1}^{\Omega}\log_{2}p_{n}(\omega).\end{equation}
Therefore, in this case, the Kullback-Leibler divergence
(or its symmetrical version) can be expressed in terms of entropy. Thus, the
spectral disequilibrium seems to be a more appropriate distance measure,
since it cannot be expressed in terms of entropy.
Another quantity of interest is the pairwise spectral correlation
between the power spectrum of two network variables $n$ and $m$,
which is defined as:\begin{equation}
C_{\omega}[\mathbf{Y}_{n},\mathbf{Y}_{m}]=\frac{(\mathbf{Y}_{n}-\mathbf{\overline{Y}}_{n})^{T}\mathbf{(Y}_{m}-\mathbf{\overline{\mathbf{Y}}_{m}})}{\left\Vert \mathbf{Y}_{n}-\mathbf{\overline{Y}}_{n}\right\Vert \left\Vert \mathbf{Y}_{m}-\overline{\mathbf{Y}}_{m}\right\Vert },\end{equation}
where $\mathbf{\overline{Y}}_{n}$ and $\mathbf{\overline{Y}}_{m}$
represent the mean values.
The average correlation for a given NLRN is: \begin{equation}
C_{\omega}=\frac{1}{N(N-1)}\sum_{n=1}^{N}\sum_{m=1}^{N}[1-\delta(n,m)]C_{\omega}[\mathbf{Y}_{n},\mathbf{Y}_{m}],\end{equation}
where we have excluded the self-correlation terms ($\delta(n,m)=1$
if $m=n$ and $\delta(n,m)=0$ if $m\neq n$).
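The pairwise spectral correlation and its network average can be sketched as below (a sketch assuming non-constant spectra, so the denominator is nonzero):

```python
import math

def spectral_correlation(ya, yb):
    """Pearson correlation coefficient between two power spectra."""
    omega = len(ya)
    ma, mb = sum(ya) / omega, sum(yb) / omega
    num = sum((a - ma) * (b - mb) for a, b in zip(ya, yb))
    den = math.sqrt(sum((a - ma) ** 2 for a in ya)
                    * sum((b - mb) ** 2 for b in yb))
    return num / den

def average_spectral_correlation(spectra):
    """Network-averaged pairwise spectral correlation, self-terms excluded."""
    n = len(spectra)
    acc = sum(spectral_correlation(spectra[i], spectra[j])
              for i in range(n) for j in range(n) if i != j)
    return acc / (n * (n - 1))
```

Averaging this quantity over an ensemble of networks at each $k$ gives the correlation curve discussed below.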
In Figure 3 we give the numerical results for the above spectral measures
(entropy, disequilibrium, complexity and correlation), obtained by
averaging over the NLRNs ensemble ($M=256$ networks with $N=256$
elements and $T=1024$). One can see that both the complexity and
the correlation measures are maximized by the critical NLRNs with
$k_{c}=2$.
As mentioned at the beginning of the paper, the continuous NLRN model
is directly related to the binary RTN model, which has been studied
extensively \cite{key-4}, \cite{key-10}. We have recently investigated
the binary RTN model using similar quantities (complexity, entropy,
and mutual information), which are well defined in the time domain.
The obtained results for both NLRN and RTN models are in very good
agreement, showing a phase transition for the same critical connectivity
$k_{c}=2$. Also, for the RTN model, we have shown that the mutual
information, which is the binary counterpart of the spectral correlation,
is maximized for $k_{c}=2$. Similar results have also been previously
reported for the RBN model \cite{key-3}.
\section{Conclusion}
We have shown numerically that the NLRN model exhibits an order-chaos
phase transition, for the same critical connectivity value $k_{c}=2$,
as the RBN and RTN models. Also, we have shown that both the pairwise
correlation and the complexity measures are maximized in dynamically
critical networks. These results are in very good agreement with the
previously reported studies on the RBN and RTN models, and show once
again that critical networks provide an optimal coordination of diverse
behavior. We would like also to note that these optimal properties
of critical networks are likely to play a major role in biological
systems, perhaps serving as important selective traits. Given the
potential biological implications, it is of interest that recent data
suggest that genetic regulatory networks in eukaryotic cells are dynamically
critical \cite{key-11}. Also, recent experiments conducted on rat brain
slices show that these neural tissues are critical \cite{key-12}. Thus,
it seems plausible that in cells, neural systems, and other tissues,
natural selection will have acted to maximize both the correlation
across the network, and the diversity of complex behaviors that can
be coordinated within a causal network. Ordered networks have convergent
trajectories, and hence forget their past. Chaotic networks show sensitivity
to initial conditions, and thus they too forget their past, and are
unable to act reliably. On the other hand, critical networks, with
trajectories that on average neither diverge nor converge (quasiperiodic dynamics),
seem best able to bind past to future, and therefore to maximize the correlated
complex behavior.
\section{Introduction}
Studies of two-particle azimuthal correlations have become a key tool
in characterizing the evolution of the strongly interacting medium
formed in ultra-relativistic nucleus-nucleus collisions.
Traditionally, the observed two-particle azimuthal correlation
structures are thought to arise from two distinct
contributions. The dominant one is the ``elliptic flow'' term, related
to anisotropic hydrodynamic expansion of the medium from an
anisotropic initial state \cite{Ackermann:2000tr, Adcox:2002ms,
Adler:2003kt, Adams:2003am, Adams:2004bi, Back:2004zg, Back:2004mh,
Alver:2006wh, Adare:2006ti}. In addition, one observes so-called
``non-flow'' contributions from, e.g., resonances and jets, which may
be modified by their interactions with the medium~\cite{Adler:2002tq, Adams:2006yt, :2008cqb}.
The strength
of anisotropic flow is usually quantified with a Fourier decomposition of
the azimuthal distribution of observed particles relative to the
reaction plane~\cite{Voloshin:1994mz}. The experimental observable
related to elliptic flow is the second Fourier coefficient, ``$v_2$.''
The elliptic flow signal has been studied extensively in Au+Au
collisions at RHIC as a function of pseudorapidity, centrality, transverse
momentum, particle species and center of mass energy~\cite{
Adler:2003kt, Adams:2003am, Adams:2004bi, Back:2004zg, Back:2004mh}.
The centrality and transverse momentum dependence of $v_2$ has been
found to be well described by hydrodynamic calculations, which for a
given equation of state, can be used to relate a given initial energy
density distribution to final momentum distribution of produced
particles~\cite{Kolb:2000fha}. In these calculations, the $v_2$ signal
is found to be proportional to the eccentricity, $\ecc$, of the
initial collision region defined by the overlap of the colliding
nuclei~\cite{Ollitrault:1992bk}.
Detailed comparisons of the observed elliptic flow effects
with hydrodynamic calculations have led to the conclusion that a new state
of strongly interacting matter with very low shear viscosity,
compared to its entropy density, has been created in these collisions~\cite{Kolb:2000fha,
Adcox:2004mh, Back:2004je, Adams:2005dq}.
Measurements of non-flow correlations in heavy-ion collisions, in
comparison to corresponding studies in \pp\ collisions, provide
information on particle production mechanisms~\cite{Alver:2008gk} and
parton-medium interactions~\cite{Adler:2002tq, Adams:2006yt,
:2008cqb}. Different methods have been developed to account for the
contribution of elliptic flow to two-particle correlations in these
studies of the underlying non-flow correlations~\cite{Adler:2002tq,
Adams:2004pa, Ajitanand:2005jj, Alver:2008gk, Trainor:2007fu}. The most commonly
used approach is the zero yield at minimum method (ZYAM), where one
assumes that the associated particle yield correlated with the trigger
particle is zero at the minimum as a function of $\dphi$ after
elliptic flow contribution is taken out~\cite{Ajitanand:2005jj}. The
ZYAM approach has yielded rich correlation structures at $\dphi
\approx 0^{\circ}$ and $\dphi\approx120^{\circ}$ for different $\pt$
ranges~\cite{Adams:2005ph, Adare:2007vu, Alver:2009id, Abelev:2009qa}.
These structures, which are not observed in \pp\ collisions at the
same collision energy, have been referred to as the ``ridge'' and
``broad away-side'' or ``shoulder''.
The same correlation structures have been found to be present in Pb+Au
collisions at $\snn=17.4~\gev$ at the SPS~\cite{Adamova:2009ah}.
Measurements at RHIC have shown that these structures extend out to
large pseudorapidity\ separations of $\deta>2$, similar to elliptic flow
correlations~\cite{Alver:2009id}. The ridge and broad
away-side structures have been extensively studied
experimentally~\cite{Adams:2005ph, Adare:2007vu, Alver:2009id,
Abelev:2009qa, :2008cqb, :2008nda} and various theoretical models
have been proposed to understand their origin~\cite{Wong:2008yh,
Pantuev:2007sh, Gavin:2008ev, Dumitru:2008wn, Ruppert:2007mm,
Pruneau:2007ua, Hwa:2009bh, Takahashi:2009na}. A recent review of
the theoretical and experimental results can be found
in~\cite{Nagle:2009wr}.
\begin{figure*}[t]
\centering
\subfigure{
\includegraphics[width=0.31\textwidth]{images/corrs1Dsamepad_Ed_cen13.pdf}
\label{fig:corrs1Dsamepad_Ed_cen13}
}
\subfigure{
\includegraphics[width=0.31\textwidth]{images/corrs1Dsamepad_Wei_cen13.pdf}
\label{fig:corrs1Dsamepad_Wei_cen13}
}
\subfigure{
\includegraphics[width=0.31\textwidth]{images/corrs1Dsamepad_STARLike_cen2.pdf}
\label{fig:corrs1Dsamepad_Trainor_cen1}
}
\caption{Top: azimuthal correlation functions for mid-central
(10-20\%) Au+Au collisions at \snntwo\ obtained from projections
of two-dimensional $\deta,\dphi$ correlation measurements by
PHOBOS~\cite{Alver:2009id, Alver:2008gk} and
STAR~\cite{Abelev:2008un}. The transverse momentum and
    pseudorapidity\ ranges are indicated on the figures. Error bars
are combined systematic and statistical errors. The first three
Fourier components are shown in solid lines. Bottom: the residual
correlation functions after the first three Fourier components are
subtracted.}
\label{fig:corrs1d}
\end{figure*}
In this paper, we propose that the observed ridge and broad away-side
features in two-particle correlations may be due to an average
triangular anisotropy in the initial collision geometry which is
caused by event-by-event fluctuations and which leads to a triangular
anisotropy in azimuthal particle production through the collective
expansion of the medium. It was shown that, in the NEXSPHERIO
hydrodynamic model, ridge and broad away-side structures in two
particle correlations arise if non-smooth initial conditions are
introduced~\cite{Takahashi:2009na}. Sorensen has suggested that
fluctuations of the initial collision geometry may lead to higher
order Fourier components in the azimuthal correlation function through
collective effects~\cite{Sorensen}. An analysis of higher order
components in the Fourier decomposition of azimuthal particle
distributions, including the odd terms, was proposed by
Mishra~et al.\ to probe superhorizon fluctuations in the thermalization
stage~\cite{Mishra:2007tw}. In this work, we show that the second and
third Fourier components of two-particle correlations may be best
studied by treating the components of corresponding initial geometry
fluctuations on equal footing. To
reduce contributions of non-flow correlations, which are most
prominent in short pseudorapidity\ separations, we focus on azimuthal
correlations at long ranges in pseudorapidity. We show that the ridge and broad
away-side structures can be well described by the first three
coefficients of a Fourier expansion of the azimuthal correlation
function
\begin{equation}
\frac{\der N^{\text{pairs}}}{\der \dphi} = \frac{N^{\text{pairs}}}{2\pi} \left(1+\sum_{n} 2\V{n}\cos(n\dphi)\right),
\end{equation}
where the first component, $\V{1}$\footnote{Note the distinction between
$\V{n}$ and $v_n$. See \eqs{eq:vnflow1}{eq:vnflow} for details.}, is
understood to be due to momentum conservation and directed flow and
the second component $\V{2}$ is dominated by the contribution from
elliptic flow. Studies in a multi-phase transport model
(AMPT)~\cite{Lin:2004en} suggest that not only the elliptic flow term,
$\V{2}$, but also a large part of the correlations measured by the
$\V{3}$ term, arises from the hydrodynamic expansion of the medium.
\begin{figure*}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{images/GlauberEccNpart2D.pdf}
\label{fig:glauecc}
}
\subfigure{
\includegraphics[width=0.45\textwidth]{images/GlauberTriaNpart2D.pdf}
\label{fig:glautria}
}
\caption{Distribution of \subref{fig:glauecc} eccentricity, $\ecc$,
and \subref{fig:glautria} triangularity, $\tria$, as a
function of number of participating nucleons, $\Npart$, in
\snntwo\ Au+Au collisions.}
\label{fig:glau1}
\end{figure*}
\begin{figure}[b]
\includegraphics[width=0.45\textwidth]{images/GlauberHighTriaEvent.pdf}
\caption{Distribution of nucleons on the transverse plane for a
\snntwo\ Au+Au collision event with $\tria$=0.53 from
Glauber Monte Carlo.
The nucleons in the two nuclei are shown in gray and
black. Wounded nucleons (participants) are indicated as solid
circles, while spectators are dotted circles.}
\label{fig:glau2}
\end{figure}
\section{Fourier decomposition of azimuthal correlations}
In the existing correlation data, different correlation measures such
as $R(\deta,\dphi)$~\cite{Alver:2008gk},
$N\hat{r}(\deta,\dphi)$~\cite{Abelev:2008un} and
$1/N_{\text{trig}}\der N / \der
\dphi(\deta,\dphi)$~\cite{Alver:2009id} have been used to study
different sources of particle correlations. The azimuthal projection
of all of these correlation functions has the form
\begin{equation}
C(\dphi) = A \frac{\der N^{\text{pairs}}}{\der \dphi} + B,
\label{eq:Cdphi}
\end{equation}
where the scale factor $A$ and offset $B$ depend on the definition of
the correlation function as well as the pseudorapidity\ range of the
projection~\cite{Alver:2009id}. Examples of long range azimuthal
correlation distributions are shown in \fig{fig:corrs1d}
for mid-central Au+Au collisions with different trigger and associated
particle $p_T$ selections obtained by projecting the
two-dimensional correlation functions onto the $\dphi$ axis at pseudorapidity\
separations of $1.2<\deta<1.9$ for STAR data~\cite{Abelev:2008un}
and $2<\deta<4$ for
PHOBOS data~\cite{Alver:2008gk,Alver:2009id}. The correlation function data used in this study are
available at~\cite{Phobosdata1, Phobosdata2, Stardata1}. Also shown
in \fig{fig:corrs1d} are the first three Fourier components of the
azimuthal correlations and the residual after these components are
taken out. The data are found to be very well described by these three
Fourier components.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{images/AMPT_vnVsIni_npartslices_wrtpsi.pdf}
\caption{Top: average elliptic flow, $\mean{v_2}$, as a function of
eccentricity, $\ecc$; bottom: average triangular flow,
$\mean{v_3}$, as a function of triangularity, $\tria$,
in \snntwo\ Au+Au collisions from the AMPT model in bins of
number of participating nucleons. Error bars indicate statistical
errors. A linear fit to the data is shown.}
\label{fig:AMPT_vnVsIni_npartslices_wrtpsi}
\end{figure*}
\section{Participant triangularity and triangular flow}
It is useful to recall that traditional hydrodynamic
calculations start from a smooth matter distribution given by the
transverse overlap of two Woods-Saxon distributions. In such
calculations, elliptic flow is aligned with the orientation of the
reaction plane defined by the impact parameter direction and the beam
axis and by symmetry, no $\V{3}$ component arises in the azimuthal
correlation function. To describe this component in terms of
hydrodynamic flow requires a revised understanding of the initial
collision geometry, taking into account fluctuations in the
nucleon-nucleon collision points from event to event.
The possible influence of initial geometry fluctuations was used to
explain the surprisingly large values of elliptic flow measured for
central Cu+Cu collision, where the average eccentricity calculated
with respect to the reaction plane angle is small~\cite{Alver:2006wh}.
For a Glauber Monte Carlo event, the minor axis of eccentricity of the
region defined by nucleon-nucleon interaction points does not
necessarily point along the reaction plane vector, but may be tilted.
The ``participant eccentricity''~\cite{Alver:2006wh, Alver:2008zza}
calculated with respect to this tilted axis is found to be finite even
for most central events and significantly larger than the reaction
plane eccentricity for the smaller Cu+Cu system. Following this idea,
event-by-event elliptic flow fluctuations have been measured and found
to be consistent with the expected fluctuations in the initial state
geometry with the new definition of eccentricity~\cite{Alver:2010rt}.
In this paper, we use this method of quantifying the initial
anisotropy exclusively.
Mathematically, the participant eccentricity is given as
\begin{equation}
\ecc = \frac{\sqrt{(\sigma_{y}^2-\sigma_{x}^2)^2 + 4(\sigma_{xy})^2}}{\sigma_{y}^2+\sigma_{x}^2},
\end{equation}
where $\sigma_{x}^2$, $\sigma_{y}^2$ and $\sigma_{xy}$, are the
event-by-event (co)variances of the participant nucleon distributions
along the transverse directions $x$ and $y$~\cite{Alver:2006wh}. If
the coordinate system is shifted to the center of mass of the
participating nucleons such that $\mean{x}=\mean{y}=0$, it can be
shown that the definition of eccentricity is equivalent to
\begin{equation}
\ecc = \frac{\sqrt{\mean{r^2\cos(2\phi_{\text{part}})}^2 + \mean{r^2\sin(2\phi_{\text{part}})}^2}}{\mean{r^2}}
\label{eq:ecc}
\end{equation}
in this shifted frame, where $r$ and $\phi_{\text{part}}$ are the
polar coordinate positions of participating nucleons. The minor axis
of the ellipse defined by this region is given as
\begin{equation}
\psi_{2}=\frac{\atantwo\left(\mean{r^2\sin(2\phi_{\text{part}})},\mean{r^2\cos(2\phi_{\text{part}})}\right)
+\pi}{2}.
\label{eq:phiecc}
\end{equation}
Since the pressure gradients are largest along $\psi_{2}$, the
collective flow is expected to be the strongest in this direction. The
definition of $v_2$ has conceptually changed to refer to the second
Fourier coefficient of particle distribution with respect to
$\psi_{2}$ rather than the reaction plane
\begin{equation}
v_2 = \mean{\cos(2(\phi-\psi_2))}.
\label{eq:v2}
\end{equation}
This change does not impact the experimental definition, since neither
the reaction plane angle nor $\psi_{2}$ is a priori known.
Drawing an analogy to eccentricity and elliptic flow, the initial and
final triangular anisotropies can be quantified as participant
triangularity, $\tria$, and triangular flow, $v_3$, respectively:
\begin{eqnarray}
\tria &\equiv& \frac{\sqrt{\mean{r^2\cos(3\phi_{\text{part}})}^2 + \mean{r^2\sin(3\phi_{\text{part}})}^2}}{\mean{r^2}} \label{eq:tria} \\
v_3 &\equiv& \mean{\cos(3(\phi-\psi_3))}
\label{eq:v3}
\end{eqnarray}
where $\psi_3$ is the minor axis of participant triangularity given by
\begin{equation}
\psi_{3}=\frac{\atantwo\left(\mean{r^2\sin(3\phi_{\text{part}})},\mean{r^2\cos(3\phi_{\text{part}})}\right)
+\pi}{3}.
\label{eq:phitria}
\end{equation}
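For concreteness, the anisotropies and minor axes defined in Equations~\ref{eq:ecc}, \ref{eq:phiecc}, \ref{eq:tria} and~\ref{eq:phitria} can be evaluated directly from a list of participant positions. The sketch below (using a hand-built nucleon configuration rather than an actual Glauber sample) computes $\varepsilon_n$ and $\psi_n$ for $n=2,3$:

```python
import numpy as np

def anisotropy(x, y, n):
    """Participant anisotropy eps_n and minor axis psi_n from the
    transverse positions of the participating nucleons."""
    x = x - x.mean()                      # shift to the center of mass,
    y = y - y.mean()                      # so that <x> = <y> = 0
    r2 = x**2 + y**2
    phi = np.arctan2(y, x)
    cn = np.mean(r2 * np.cos(n * phi))
    sn = np.mean(r2 * np.sin(n * phi))
    eps = np.sqrt(cn**2 + sn**2) / np.mean(r2)
    psi = (np.arctan2(sn, cn) + np.pi) / n
    return eps, psi

# Three participants at the vertices of an equilateral triangle:
# maximal triangularity, vanishing eccentricity.
ang = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
x, y = np.cos(ang), np.sin(ang)
print(anisotropy(x, y, 2)[0])   # ≈ 0.0
print(anisotropy(x, y, 3)[0])   # ≈ 1.0
```

Conversely, an elongated configuration such as four nucleons at $(\pm2,0)$ and $(0,\pm1)$ gives $\varepsilon_2=0.6$ and $\varepsilon_3=0$, illustrating that the two moments probe independent features of the shape.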
It is important to note that the minor axis of triangularity is found
to be uncorrelated with the reaction plane angle and the minor axis of
eccentricity in Glauber Monte Carlo calculations. This implies that
the average triangularity calculated with respect to the reaction
plane angle or $\psi_2$ is zero. The participant triangularity defined
in \eq{eq:tria}, however, is calculated with respect to $\psi_3$ and
is always finite.
\begin{figure*}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{images/AMPT_meanv22andv2_2vsNpart_2_1.pdf}
\label{fig:AMPT_meanv22andv2_2vsNpart_2_1}
}
\subfigure{
\includegraphics[width=0.45\textwidth]{images/AMPT_meanv32andv3_2vsNpart_2_1.pdf}
\label{fig:AMPT_meanv32andv3_2vsNpart_2_1}
}
\caption{Dashed lines show
\subref{fig:AMPT_meanv22andv2_2vsNpart_2_1} second Fourier
coefficient, $\V{2}$, and
\subref{fig:AMPT_meanv32andv3_2vsNpart_2_1} third Fourier
coefficient, $\V{3}$, of azimuthal correlations as a function of
number of participating nucleons, $\Npart$, in \snntwo\ Au+Au
collisions from the AMPT model. Solid lines show the contribution to
these coefficients from flow calculated with respect to the minor
axis of (a) eccentricity and (b) triangularity.}
\label{fig:meansquare}
\end{figure*}
The distributions of eccentricity and triangularity calculated with
the PHOBOS Glauber Monte Carlo implementation~\cite{Alver:2008aq} for
Au+Au events at \snntwo\ are shown in \fig{fig:glau1}. The value of
triangularity is observed to fluctuate event-by-event and have an
average magnitude of the same order as eccentricity. The transverse
distribution of nucleons for a sample Monte Carlo event with a high
value of triangularity is shown in \fig{fig:glau2}. A clear
triangular anisotropy can be seen in the region defined by the
participating nucleons.
\section{Triangular flow in the AMPT model}
To assess the connection between triangularity and the ridge and broad
away-side features in two-particle correlations, we study elliptic and
triangular flow in the AMPT model. AMPT is a hybrid model which
consists of four main components: initial conditions, parton cascade,
string fragmentation, and A Relativistic Transport Model for
hadrons. The model successfully describes the main features of the
dependence of elliptic flow on centrality and transverse
momentum~\cite{Lin:2004en}. Ridge and broad away-side features in
two-particle correlations are also observed in the AMPT
model~\cite{Ma:2006fm,Ma:2008nd}. Furthermore, the dependence of
quantitative observables such as away-side RMS width and away-side
splitting parameter $D$ on transverse momentum and reaction plane in
AMPT reproduces the experimental results successfully, where a
ZYAM-based elliptic flow subtraction is applied to both the data and
the model~\cite{Zhang:2007qx,Li:2009ti}.
The initial conditions of AMPT are obtained from Heavy Ion Jet
Interaction Generator (HIJING)~\cite{Gyulassy:1994ew}. HIJING uses a
Glauber Model implementation that is similar to the PHOBOS
implementation to determine positions of participating nucleons. It is
possible to calculate the values of $\ecc$, $\psi_2$, $\tria$ and
$\psi_3$ event-by-event from the positions of these nucleons [see
Equations~\ref{eq:ecc}, \ref{eq:phiecc}, \ref{eq:tria}
and~\ref{eq:phitria}]. Next, we calculate the magnitudes of elliptic
and triangular flow with respect to $\psi_2$ and $\psi_3$ respectively
as defined in \eqs{eq:v2}{eq:v3}.
The average value of elliptic flow, $v_2$, and triangular flow, $v_3$,
for particles in the pseudorapidity\ range \mbox{$\abs{\eta}\!<\!3$} in
\snntwo\ Au+Au collisions from AMPT are shown as a function of $\ecc$
and $\tria$ in \fig{fig:AMPT_vnVsIni_npartslices_wrtpsi} for different
ranges of number of participating nucleons. As previously expected,
the magnitude of $v_2$ is found to be proportional to $\ecc$. We
observe that a similar linear relation is also present between
triangular flow and triangularity.
\begin{figure*}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{images/AMPTv2vspt.pdf}
\label{fig:AMPTv2vspt}
}
\subfigure{
\includegraphics[width=0.45\textwidth]{images/AMPTv3vspt.pdf}
\label{fig:AMPTv3vspt}
}
\caption{\subref{fig:AMPTv2vspt} Elliptic flow, $v_2$, and
\subref{fig:AMPTv3vspt} triangular flow, $v_3$, as a function of
transverse momentum, $\pt$, in bins of number of participating
nucleons, $\Npart$, for particles at mid-rapidity ($\abs{\eta}<1$)
in \snntwo\ Au+Au collisions from the AMPT model. Error bars indicate
statistical errors.}
\label{fig:pt}
\end{figure*}
\begin{figure}[b]
\includegraphics[width=0.45\textwidth]{images/AMPTv3overv2ptnpart.pdf}
\caption{Top: the ratio of triangular flow to elliptic flow,
$\mean{v_3}/\mean{v_2}$, as a function of number of participating
nucleons, $\Npart$, for particles at mid-rapidity ($\abs{\eta}<1$)
in \snntwo\ Au+Au collisions from the AMPT model. Open points show
different transverse momentum bins and the filled points show the
average over all transverse momentum bins. Bottom: the ratio of
different $\pt$ bins to the average value. Error bars indicate
statistical errors.}
\label{fig:AMPTv3overv2ptnpart}
\end{figure}
After establishing that triangular anisotropy in initial collision
geometry leads to a triangular anisotropy in particle production, we
investigate the contribution of triangular flow to the observed ridge
and broad away-side features in two-particle azimuthal correlations.
For a given pseudorapidity\ window, the Fourier coefficients of two-particle
azimuthal correlations, $\V{n}$, can be calculated in AMPT by averaging
$\cos(n\dphi)$ over all particle pairs. Contributions from elliptic
(triangular) flow are present in the second (third) Fourier coefficient
of the $\dphi$ distribution, since
\begin{multline}
\int \frac{1}{4\pi^2} \left\{1+2v_n\cos(n\phi)\right\} \\
\times \left\{1+2v_n\cos(n(\phi+\dphi))\right\} \der \phi \qquad \qquad \\
= \frac{1}{2\pi} \left\{1+2v_n^2 \cos(n\dphi)\right\}.
\label{eq:vnflow1}
\end{multline}
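The relation $\V{n}=v_n^2$ expressed by this identity can be checked on synthetic events. The sketch below (with illustrative flow values, not AMPT output) estimates the pair average $\mean{\cos(n\dphi)}$ with $Q$-vectors, excluding self-pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
v2, v3 = 0.10, 0.04   # illustrative single-particle flow coefficients

def sample_event(m):
    """Draw m azimuthal angles from dN/dphi ~ 1 + 2 v2 cos(2phi) + 2 v3 cos(3phi)
    by accept-reject sampling."""
    phis = np.empty(0)
    fmax = 1 + 2 * v2 + 2 * v3
    while phis.size < m:
        cand = rng.uniform(-np.pi, np.pi, 4 * m)
        f = 1 + 2 * v2 * np.cos(2 * cand) + 2 * v3 * np.cos(3 * cand)
        phis = np.concatenate([phis, cand[rng.uniform(0, fmax, cand.size) < f]])
    return phis[:m]

def pair_vn(phi, n):
    """<cos(n dphi)> over distinct particle pairs, computed with Q-vectors
    (self-pairs subtracted, no explicit pair loop)."""
    sc, ss = np.cos(n * phi).sum(), np.sin(n * phi).sum()
    m = phi.size
    return (sc**2 + ss**2 - m) / (m * (m - 1))

events = [sample_event(1000) for _ in range(200)]
V2 = np.mean([pair_vn(ev, 2) for ev in events])
V3 = np.mean([pair_vn(ev, 3) for ev in events])
print(V2, V3)   # close to v2**2 = 0.0100 and v3**2 = 0.0016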
For a given pseudorapidity\ window, this contribution can be calculated
from average elliptic (triangular) flow values as
\begin{multline}
\V{n}^{\text{flow}}=\frac{\mean{\varepsilon_n^2}}{\mean{\varepsilon_n}^2} \\
\times \frac{\int\frac{\der N}{\der \eta}(\eta_{1}) \frac{\der N}{\der \eta}(\eta_{2}) \mean{v_{n}(\eta_1)} \mean{v_{n}(\eta_2)} \der \eta_1 \der \eta_2}{\int \frac{\der N}{\der \eta}(\eta_{1}) \frac{\der N}{\der \eta}(\eta_{2}) \der \eta_1 \der \eta_2}
\label{eq:vnflow}
\end{multline}
where $n\! = \! 2$ ($n\! = \! 3$) and the integration is over the
pseudorapidity\ range of particle pairs. The average single-particle
distribution coefficients, $\mean{v_{n}(\eta)}$, are used in this
calculation to avoid contributions from non-flow correlations which may
be present if the two-particle distributions, $v_{n}(\eta_1)\times
v_{n}(\eta_2)$, are calculated event by event. The ratio
$\mean{\varepsilon_n^2}/\mean{\varepsilon_n}^2$ accounts for the difference
between $\mean{v_n(\eta_1) \times v_n(\eta_2)}$ and
$\mean{v_n(\eta_1)} \times \mean{v_n(\eta_2)}$ expected from initial
geometry fluctuations.
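The role of the $\mean{\varepsilon_n^2}/\mean{\varepsilon_n}^2$ factor can be illustrated with a toy linear-response model (the response coefficient and the $\varepsilon_n$ distribution below are invented for the illustration, not taken from the model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: event-by-event linear hydrodynamic response v_n = k * eps_n,
# with a fluctuating initial-state anisotropy eps_n.
k = 0.2
eps = rng.gamma(4.0, 0.05, 100_000)     # hypothetical eps_n distribution
v = k * eps

lhs = np.mean(v**2)                     # pair-flow coefficient <v_n^2>
rhs = np.mean(eps**2) / np.mean(eps)**2 * np.mean(v)**2
print(lhs, rhs)  # equal: the eps ratio converts <v_n>^2 into <v_n^2>
```

Under the linear-response assumption the two sides agree identically, which is precisely why the eccentricity ratio corrects the product of single-particle averages for event-by-event fluctuations.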
We have calculated the magnitude of the second and third Fourier
components of two-particle azimuthal correlations and expected
contributions to these components from elliptic and triangular flow
for particle pairs in \snntwo\ Au+Au collisions from AMPT within the
pseudorapidity\ range $\abs{\eta}<3$ and $2<\deta<4$. The results are
presented in \fig{fig:meansquare} as a function of number of
participating nucleons. More than 80\% of the third Fourier
coefficient of azimuthal correlations can be accounted for by
triangular flow with respect to the minor axis of triangularity. The
difference between $\V{3}$ and $\V{3}^{\text{flow}}$ may be due to
two different effects: There might be contributions from correlations
other than triangular flow to $\V{3}$ or the angle with respect to
which the global triangular anisotropy develops might not be given
precisely by the minor axis of triangularity calculated from
positions of participant nucleons, i.e.\
$v_3=\mean{\cos(3(\phi-\psi_3))}$ might be an underestimate of the
magnitude of triangular flow. More detailed studies are needed to
distinguish between these two effects.
We have also studied the magnitudes of elliptic and triangular flow
more differentially as a function of transverse momentum and number of
participating nucleons in the AMPT model. \Fig{fig:pt} shows the results
as a function of transverse momentum for particles at mid-rapidity
($\abs{\eta}<1$) for different ranges of number of participating
nucleons. The dependence of triangular flow on transverse momentum is
observed to show similar gross features as elliptic flow. A more
detailed comparison can be made by taking the ratio of triangular to
elliptic flow, shown in \fig{fig:AMPTv3overv2ptnpart} as a function of
number of participating nucleons for different ranges of transverse
momentum. The relative strength of triangular flow is observed to
increase with centrality and transverse momentum. This observation is
qualitatively consistent with the trends in experimentally measured
ridge yield~\cite{Alver:2009id}.
\section{Triangular flow in experimental data}
\begin{figure*}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{images/V2OverV3VsNpart_110_11.pdf}
\label{fig:V2OverV3VsNpart_a}
}
\subfigure{
\includegraphics[width=0.45\textwidth]{images/V2OverV3VsNpart_1000_1000.pdf}
\label{fig:V2OverV3VsNpart_b}
}
\caption{The ratio of the third to second Fourier coefficients of
azimuthal correlations, $\V{3}/\V{2}$, as a function of number of
participating nucleons, $\Npart$, for Au+Au collisions at \snntwo.
Filled points show values derived from \subref{fig:V2OverV3VsNpart_a}
PHOBOS~\cite{Alver:2009id, Alver:2008gk} and
\subref{fig:V2OverV3VsNpart_b} STAR~\cite{Abelev:2008un}
data. Pseudorapidity\ and transverse momentum
ranges and charge selection of particle pairs for different
measurements are indicated on the figures. Open points show
results from the AMPT model for similar selection of pseudorapidity\
and transverse momentum to the available data. Error bars
indicate statistical errors for AMPT and combined statistical and
systematic errors for the experimental data.}
\label{fig:V2OverV3VsNpart}
\end{figure*}
While AMPT reproduces the expected proportionality of $v_2$ and $\ecc$,
the absolute magnitude of $v_2$ is underestimated compared to data and
hydrodynamic calculations. To allow a comparison of the $\V{3}$ calculations
to data, we therefore use the ratio of the third and second Fourier
coefficients. For data, this ratio is given by
\begin{equation}
\frac{\V{3}}{\V{2}} = \frac{\int C(\dphi) \cos(3\dphi) \der \dphi}{\int C(\dphi) \cos(2\dphi) \der \dphi}.
\end{equation}
The factors $A$ and $B$ in \eq{eq:Cdphi} cancel out in this ratio.
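As a sanity check of this cancellation, one can evaluate the two cosine-weighted integrals on a binned toy correlation function (the values of $A$, $B$, $\V{2}$ and $\V{3}$ below are purely illustrative):

```python
import numpy as np

# Hypothetical binned correlation function C(dphi) = A * dN/d(dphi) + B,
# with invented values for the scale A, offset B and coefficients V2, V3.
A, B, V2, V3 = 1.7, 0.3, 0.010, 0.003
dphi = np.linspace(-np.pi, np.pi, 720, endpoint=False)   # uniform bins
C = A * (1 + 2 * V2 * np.cos(2 * dphi) + 2 * V3 * np.cos(3 * dphi)) + B

# Discrete version of the cosine-weighted integrals; the bin width,
# the scale A and the offset B all drop out of the ratio.
ratio = (C * np.cos(3 * dphi)).sum() / (C * np.cos(2 * dphi)).sum()
print(ratio)  # ≈ 0.3 = V3/V2, independent of A and B
```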
Results for PHOBOS~\cite{Alver:2009id, Alver:2008gk} and
STAR~\cite{Abelev:2008un} measurements are plotted as a function of
number of participating nucleons in
Figures~\ref{fig:V2OverV3VsNpart_a}
and~\ref{fig:V2OverV3VsNpart_b}, respectively. It is observed that
$\V{3}/\V{2}$ increases with centrality and with the transverse momentum
of the particles.
Also shown in \fig{fig:V2OverV3VsNpart} is the magnitude of
$\V{3}/\V{2}$ in the AMPT model with similar $\eta$, $\deta$ and $\pt$
selections to the available experimental data. The calculations from
the model show a qualitative agreement with the data in terms of the
dependence of $\V{3}/\V{2}$ on the pseudorapidity\ region, particle
momenta and centrality. Since the $\V{3}$ component of two-particle
correlations in the model is known to be mostly due to the triangular
anisotropy in the initial collision geometry, this observation
suggests that triangular flow may play an important role in
understanding the ridge and broad away-side structures in data.
A closer look at the properties of the ridge and broad away-side is
possible via studies of three particle correlations. Triangular flow
predicts a very distinct signature in three particle correlation
measurements. Two recent publications by the STAR experiment present
results on correlations in $\dphi_1$-$\dphi_2$ space for
$\abs{\eta}<1$~\cite{:2008nda} and in $\deta_1$-$\deta_2$ space for
$\abs{\dphi}<0.7$~\cite{Abelev:2009jv}. In $\dphi_1$-$\dphi_2$ space,
off diagonal away-side correlations have been observed (e.g.\ first
associated particle at $\dphi_1\approx 120^{\circ}$ and second
associated particle at $\dphi_2\approx -120^{\circ}$) consistent with
expectations from triangular flow. In $\deta_1$-$\deta_2$ space, no
correlation structure between the two associated ridge particles was
detected, also consistent with triangular flow.
\section{Summary}
We have introduced the concepts of participant
triangularity and triangular flow, which quantify the triangular
anisotropy in the initial and final states of heavy-ion collisions.
It has been shown that inclusive and triggered two-particle azimuthal
correlations at large $\deta$ in heavy-ion collisions are well
described by the first three Fourier components. It has been
demonstrated that event-by-event fluctuations lead to a finite
triangularity value in Glauber Monte Carlo events and that this
triangular anisotropy in the initial geometry leads to a triangular
anisotropy in particle production in the AMPT model. The third Fourier
coefficient of azimuthal correlations at large pseudorapidity\ separations has
been found to be dominated by triangular flow in the model. We have
studied the ratio of the third and second Fourier coefficients of
azimuthal correlations in experimental data and the AMPT model as a
function of centrality and pseudorapidity\ and momentum ranges of particle pairs. A
qualitative agreement between data and model has been observed. This
suggests that the ridge and broad away-side features observed in
two-particle correlation measurements in Au+Au collisions contain a
significant, and perhaps dominant, contribution from triangular flow.
Our findings support previous evidence from measurements of the system
size dependence of elliptic flow and elliptic flow fluctuations on the
importance of geometric fluctuations in the initial collision
region. Detailed studies of triangular flow can shed new light on the
initial conditions and the collective expansion of the matter created
in heavy-ion collisions.
The authors acknowledge fruitful discussions with Wei Li, Constantin
Loizides, Peter Steinberg and Edward Wenger. This work was supported
by U.S. DOE grant DE-FG02-94ER40818.
\bibliographystyle{apsrev}
\section{Introduction}
\noindent
Following a classical result of
D'Hoker and Vinet \cite{DV1} (see also \cite{ GvH,DJ-al,HMH,HmonRev,Pl}) a non-relativistic spin-$\frac{1}{2}$ charged particle with gyromagnetic ratio ${g}=2$, interacting with a point magnetic monopole, admits an ${\mathfrak{osp}}(1|2)$
supersymmetry.
It has no Runge-Lenz type
dynamical symmetry, though \cite{FeherO31}.
Another, surprising, result of D'Hoker and Vinet \cite{DV2} says, however, that a
non-relativistic spin-$\frac{1}{2}$ charged particle with \emph{anomalous gyromagnetic ratio} ${g}=\,4\,$, interacting with a point magnetic monopole plus a Coulomb plus a fine-tuned inverse-square potential,
does have such a dynamical symmetry.
This is to be compared with the well-known result on
the ${\rm O}(4)$ symmetry of a scalar particle
in such a combined field \cite{MICZ}.
Replacing the scalar particle by a spin $1/2$ particle
with gyromagnetic ratio ${g}=0$, one can prove that
two anomalous systems, the one with ${g}=4$ and the one with ${g}=0$, are, in fact, superpartners \cite{FH}. Note that
for both particular ${g}$-values, one also
has an additional ${\rm o}(3)$ ``spin'' symmetry.
On the other hand, it has been shown by Spector \cite{Spector} that $\,{\mathcal{N}}=1\,$ supersymmetry only allows $g=2$ and no scalar potential. A Runge-Lenz symmetry and supersymmetry hence appear to be inconsistent.
In this paper, we investigate the bosonic as well as supersymmetries of the Pauli-type Hamiltonian,
\begin{equation}
{\mathcal{H}}_{{g}}=\frac{{\vec{\Pi}}^2}{2}-\frac{e{g}}{2}\,{\vec{S}}\cdot{\vec{B}}+V(r)\,,\label{Hamiltonian}
\end{equation}
which describes the motion of a fermion with spin $\,{\vec{S}}\,$ and electric charge $\,e\,$, in the combined magnetic field, $\,{\vec{B}}\,$, plus a spherically symmetric scalar field $V(r)$, which also includes a Coulomb term (a ``dyon'' in what follows).
In (\ref{Hamiltonian}), $\,{\vec{\Pi}}={\vec{p}}-e\,{\vec{A}}\,$ denotes the gauge covariant momentum and the constant $\,{g}\,$ represents the gyromagnetic ratio of the spinning particle.
Except in Section \ref{S7}, the gauge field is taken that of an Abelian monopole.
We derive the (super)invariants by considering the Grassmannian extension of the algorithm proposed before by one of us \cite{vH}.
The main ingredients are Killing tensors, determined by a linear system of first order partial differential equations.
Our recipe has already been used successfully to derive bosonic symmetries \cite{vH,H-N,Ngome,Visi}; in this paper we systematically extend these results to supersymmetries associated with Grassmann-algebra valued Killing tensors
\cite{vH,vH-al,GvH,Visi2}.
The plan of this paper is as follows~: in Section \ref{S2} we derive the equations of motion of the system. In Section \ref{S3}, we present the general formalism and we analyse the conditions under which conserved quantities are generated. In Sections \ref{S4} and \ref{S5} we investigate the super- resp.\ bosonic symmetries of
the fermion-monopole system. Our investigations confirm Spector's theorem.
In Section \ref{S6}, we show, however, that the obstruction can be overcome by
a \emph{dimensional extension of fermionic space} \cite{Salomonson,Michelson,A-K}. Working with two, rather than just one
Grassmann variable allows us to combine the two anomalous systems into one with $\,{\mathcal{N}}=2\,$ supersymmetry. In Section \ref{S7}, we investigate the SUSY of a spinning particle coupled to a static magnetic field in the plane.
\section{Hamiltonian Dynamics of the spinning system}\label{S2}
\noindent
Let us consider a charged spin-$\frac{1}{2}$ particle moving in a flat manifold $\,{\mathcal{M}}^{D+d}\,$ which is the extension of the bosonic configuration space $\,{\mathcal{M}}^{D}\,$ by a $d$-dimensional internal space carrying the fermionic degrees of freedom. The $(D+d)$-dimensional space $\,{\mathcal{M}}^{D+d}\,$ is described by the local coordinates $\left(\,x^{\mu},\,\psi^{a}\right)$ where $\,\mu=1,\cdots,D\,$ and $\,a=1,\cdots,d\,$. The motion of the spinning particle is, therefore, described by the curve $\tau\rightarrow\left(\,x(\tau),\,\psi(\tau)\right)\,\in\,{\mathcal{M}}^{D+d}\,$. We choose $\,D=d=3\,$ and we focus our attention on the spin-$\frac{1}{2}$ charged particle interacting with the static $\,U(1)\,$ monopole background, $\,\displaystyle{{\vec{B}}={\vec{\nabla}}\times{\vec{A}}=\frac{q}{e}({\vec{x}}/r^3)}\,$, such that the system is described by the Hamiltonian (\ref{Hamiltonian}). In order to deduce, in a classical framework, the supersymmetries and conservation laws, we introduce the covariant Hamiltonian formalism, with basic phase-space variables $\,\left(x^{j},\Pi_{j},\psi^{a}\right)\,$. Here the variables $\,\psi^{a}\,$ transform as tangent vectors and satisfy the Grassmann algebra, $\,\psi^{i}\psi^{j} + \psi^{j}\psi^{i}=0\,$. The internal angular momentum of the particle
can also be described
in terms of vector-like Grassmann variables,
\begin{equation}
S^{j}=-\frac{i}{2}\epsilon^j_{\,\,kl}\psi^{k}\,\psi^{l}\,.
\end{equation}
Defining the covariant Poisson-Dirac brackets for functions $\,f\,$ and $\,h\,$ of the phase-space as
\begin{eqnarray}
\big\{f,h\big\}&=&{\partial}_j f\,\frac{{\partial} h}{{\partial} \Pi_j}-\frac{{\partial} f}{{\partial} \Pi_j}\,{\partial}_j h
+eF_{ij}\,\frac{{\partial} f}{{\partial} \Pi_i}\frac{{\partial} h}{{\partial} \Pi_j}
+i(-1)^{a^{f}}\frac{{\partial} f}{{\partial} \psi^a}\frac{{\partial} h}{{\partial} \psi_{a}}\,,\label{PBrackets}
\end{eqnarray}
where $\,a^{f}=\left(0,1\right)\,$ is the Grassmann parity of the function $\,f$ and the magnetic field reads $\,B_{i}=(1/2)\epsilon_{ijk}F_{jk}\,$. It is straightforward to obtain the non-vanishing fundamental brackets,
\begin{eqnarray}
\big\{x^{i},\,\Pi_{j}\big\}={\delta}^{i}_{j},\quad\big\{\Pi_{i},\,\Pi_{j}\big\}=e\,F_{ij},\quad\big\{\psi^{i},\,\psi^{j}\big\}=-i\,{\delta}^{ij}\,,\\[8pt]
\big\{S^{i},\,G^{j}\big\}=\epsilon^{\;\,ij}_{k}\,G^{k}\quad\hbox{with}\quad G^{k}=\psi^{k},\,S^{k}\,.
\end{eqnarray}
It follows that, away from the monopole's location, the Jacobi identities are satisfied \cite{Jackiw84,Chaichian}.
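As an illustration, the ``spin algebra'' bracket quoted above follows directly from the Grassmann part of (\ref{PBrackets}): since $\,{\partial} S^{i}/{\partial}\psi^{a}=-i\,\epsilon^{i}_{\;al}\psi^{l}\,$, one finds
\begin{equation}
\big\{S^{i},\,S^{j}\big\}=i\,\frac{{\partial} S^{i}}{{\partial}\psi^{a}}\frac{{\partial} S^{j}}{{\partial}\psi^{a}}=-i\,\epsilon^{i}_{\;am}\epsilon^{j}_{\;an}\,\psi^{m}\psi^{n}=i\,\psi^{j}\psi^{i}=\epsilon^{\;\,ij}_{k}\,S^{k}\,,
\end{equation}
where the Grassmann identity $\,\psi^{m}\psi^{m}=0\,$ was used in the middle equality.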
The equations of motion can be obtained in this covariant Hamiltonian framework \footnote{The dot denotes the derivative with respect to the evolution parameter, $\,\dot{\ }=\frac{d}{d\tau}\,$.},
\begin{eqnarray}
\dot{{\vec{G}}}=
\frac{e{g}}{2}\,{\vec{G}}\times{\vec{B}}\,,
\label{EqM}\\[6pt]
\dot{{\vec{\Pi}}}=
e\,{\vec{\Pi}}\times{\vec{B}}-{\vec{\nabla}}{V(r)}+\frac{e{g}}{2}\,{\vec{\nabla}}{\left({\vec{S}}\cdot{\vec{B}}\right)}\,.\label{Lorentz}
\end{eqnarray}
Equation (\ref{EqM}) shows that the fermionic vectors $\,{\vec{S}}\,$ and $\,{\vec{\psi}}\,$ are conserved when the spin and the magnetic field are uncoupled, i.e. for \emph{vanishing gyromagnetic ratio}, $\,{g}=0\,$. Note that, in addition to the magnetic field term, the Lorentz equation (\ref{Lorentz}) also involves a potential term augmented with a spin-field interaction term.
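For the monopole field $\,{\vec{B}}=({q}/{e})\,{\vec{x}}/r^{3}\,$, equation (\ref{EqM}) reads explicitly
\begin{equation}
\dot{{\vec{S}}}=\frac{{g} q}{2}\,\frac{{\vec{S}}\times{\vec{x}}}{r^{3}}\,,
\end{equation}
so that the spin precesses around the instantaneous radial direction, at an $r$-dependent rate; for $\,{g}=0\,$ it is trivially constant.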
\section{Killing tensors for fermion-monopole system}\label{S3}
\noindent
Now we outline the algorithm developed in \cite{vH} to
construct constants of the motion. First, a phase-space function associated with a (super)symmetry can be expanded in powers of the covariant momenta,
\begin{equation}
{\mathcal{Q}}\left({\vec{x}},\,{\vec{\Pi}},\,{\vec{\psi}}\right)=C({\vec{x}},\,{\vec{\psi}})+\sum_{k=1}^{p-1}\,\frac{1}{k!}\,C^{i_1\cdots i_k}({\vec{x}},\,{\vec{\psi}})\,\Pi_{i_1}\cdots\Pi_{i_k}\,.
\label{Exp}
\end{equation}
Requiring that ${\mathcal{Q}}$ Poisson-commutes with the Hamiltonian,
$\big\{{\mathcal{H}}_{{g}},{\mathcal{Q}}\big\}=0\,,$
implies the series of constraints,
\begin{equation}
\begin{array}{llll}\displaystyle{
C^{i}\,{\partial}_i V+\frac{ie{g}}{4}\,\psi^l\psi^m C^j\,{\partial}_j F_{lm}-\frac{e{g}}{2}\,\psi^m\frac{{\partial} C}{{\partial}\psi^a}\,F_{am}
=0},&\hbox{order 0}
\\[10pt]\displaystyle{
{\partial}_jC=C^{jk}{\partial}_{k}V+eF_{jk}C^k+\frac{ie{g}}{4}\psi^l\psi^m C^{jk}{\partial}_k F_{lm}-\frac{e{g}}{2}\psi^m\frac{{\partial} C^j}{{\partial}\psi^a}F_{am}},
&\hbox{order 1}
\\[11pt]\displaystyle{
{\partial}_jC^k+{\partial}_kC^j=C^{jkm}{\partial}_{m}V+e\left(F_{jm}C^{mk}+F_{km}C^{mj}\right)+\frac{ie{g}}{4}\psi^l\psi^m C^{ijk}{\partial}_i F_{lm}-\frac{e{g}}{2}\psi^m\frac{{\partial} C^{jk}}{{\partial}\psi^a}F_{am}},
&\hbox{order 2}\\[16pt]
{\partial}_jC^{kl}+{\partial}_lC^{jk}+{\partial}_kC^{lj}= C^{jklm}{\partial}_{m}V+e\left(F_{jm}C^{mkl}+F_{lm}C^{mjk}+F_{km}C^{mlj}\right)
\\[8pt]
\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+\;\displaystyle{\frac{ie{g}}{4}\psi^m\psi^n C^{ijkl}{\partial}_i F_{mn}-\frac{e{g}}{2}\psi^m\frac{{\partial} C^{jkl}}{{\partial}\psi^a}F_{am}}\,,
&\hbox{order 3}
\\
\vdots\qquad\qquad\qquad\qquad\qquad\vdots&\vdots
\end{array}
\label{constraints}
\end{equation}
This series can be truncated at a finite order, \textit{p}, provided \textit{the constraint of order
$p$ becomes a Killing equation}. The zeroth-order equation can be interpreted as a \textit{consistency condition between the potential and the (super)invariant}. Apart from the zeroth-order constants of the motion, i.e., those which do not depend on the momentum, all other order-\textit{n} (super)invariants are deduced from the systematic method (\ref{constraints}), involving rank-\textit{n} Killing tensors. Each Killing tensor solves the highest-order constraint of (\ref{constraints}) and can generate a conserved quantity.
In this paper, we are interested in (super)invariants which are linear or quadratic in the momenta.
Thus, we have to determine generic Grassmann-valued Killing tensors of rank-one and rank-two.
$\bullet$ Let us first investigate the Killing equation,
\begin{equation}
{\partial}_jC^{k}({\vec{x}},\,{\vec{\psi}})+{\partial}_kC^{j}({\vec{x}},\,{\vec{\psi}})=0\,.\label{KConstraint1}
\end{equation}
Following Berezin and Marinov \cite{B-M}, any tensor which takes its values in the Grassmann algebra may be represented as a finite sum of homogeneous monomials,
\begin{equation}
C^{i}({\vec{x}},\,{\vec{\psi}})=\sum_{k\geq 0}{\mathcal{C}}^{i}_{a_{1}\cdots a_{k}}({\vec{x}})\psi^{a_{1}}\cdots\psi^{a_{k}}\,,\label{ExternalTensor}
\end{equation}
where the coefficient tensors, ${\mathcal{C}}^{i}_{a_{1}\cdots a_{k}}$, are completely anti-symmetric in the fermionic indices $\,\left\lbrace a_{k}\right\rbrace\,$. Requiring the tensors (\ref{ExternalTensor}) to satisfy (\ref{KConstraint1}), we deduce that their coefficients satisfy,
\begin{equation}
{\partial}_j{\mathcal{C}}^{k}_{a_{1}\cdots a_{k}}({\vec{x}})+{\partial}_k{\mathcal{C}}^{j}_{a_{1}\cdots a_{k}}({\vec{x}})=0\quad\Longrightarrow\quad{\partial}_{i}{\partial}_{j}{\mathcal{C}}^{k}_{a_{1}\cdots a_{k}}({\vec{x}})=0\,,\label{KConstraint1bis}
\end{equation}
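The implication in (\ref{KConstraint1bis}) follows from the standard cyclic-permutation argument: writing $\,A_{ijk}={\partial}_{i}{\partial}_{j}{\mathcal{C}}^{k}\,$ (fermionic indices suppressed), the Killing equation makes $\,A_{ijk}\,$ antisymmetric in $\,(j,k)\,$, while it is symmetric in $\,(i,j)\,$ by construction, so that
\begin{equation}
A_{ijk}=A_{jik}=-A_{jki}=-A_{kji}=A_{kij}=A_{ikj}=-A_{ijk}
\quad\Longrightarrow\quad A_{ijk}=0\,.
\end{equation}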
providing us with the most general rank-1 Grassmann-valued Killing tensor
\begin{equation}
C^{i}({\vec{x}},\,{\vec{\psi}})=\sum_{k\geq 0}\big(M^{ij}\,x^{j}+N^{i}\big)_{a_{1}\cdots a_{k}}\psi^{a_{1}}\cdots\psi^{a_{k}}\,,\quad M^{ij}=-M^{ji}\,.\label{Exp2}
\end{equation}
Here $\,N^{i}\,$ and the antisymmetric $\,M^{ij}\,$ are constant tensors.
$\bullet$ Let us now construct the rank-2 Killing tensors which solve the Killing equation,
\begin{equation}
{\partial}_jC^{kl}({\vec{x}},\,{\vec{\psi}})+{\partial}_lC^{jk}({\vec{x}},\,{\vec{\psi}})+{\partial}_kC^{lj}({\vec{x}},\,{\vec{\psi}})=0\,.\label{KConstraint2}
\end{equation}
We consider the expansion in terms of Grassmann degrees of freedom \cite{B-M} and the coefficients $\,{\mathcal{C}} ^{ij}_{a_{1}\cdots a_{k}}\,$ are constructed as symmetrized products \cite{G-R} of Yano-type Killing tensors, ${\mathcal{C}}^{i}_{\,Y}({\vec{x}})\,$, associated with the rank-1 Killing tensors $\,{\mathcal{C}} ^{i}({\vec{x}})\,$,
\begin{equation}
{\mathcal{C}}^{ij}_{a_{1}\cdots a_{k}}({\vec{x}})=\frac{1}{2}\left({\mathcal{C}}^{i}_{\,Y}\widetilde{{\mathcal{C}}}^{jY}+\widetilde{{\mathcal{C}}}^{i}_{\,Y}{\mathcal{C}}^{jY}\right)_{a_{1}\cdots a_{k}}\,.\label{Coef}
\end{equation}
It is worth noting that the Killing tensor (\ref{Coef}) is symmetric in its bosonic indices and anti-symmetric in the fermionic indices. Thus, we obtain
\begin{equation}
C^{ij}({\vec{x}},\,{\vec{\psi}})=\sum_{k\geq 0}\left(
M^{(i}_{\;ln}\widetilde{M}^{j)\,n}_{\;m}x^lx^m+M^{(i}_{\;ln}\widetilde{N}^{j)\,n}x^l+N^{(i}_{\;n}\widetilde{M}^{j)\,n}_{\;m}x^m+N^{(i}_{\;n}\widetilde{N}^{j)\,n}\right)_{a_{1}\cdots a_{k}}\psi^{a_{1}}\cdots\psi^{a_{k}}\,,\label{Exp3}
\end{equation}
where $\,M^{ij}_{\;\;k}\,$, $\,\widetilde{M}^{ij}_{\;\;k}\,$, $\,N^{j}_{\;k}\,$ and $\,\widetilde{N}^{j}_{k}\,$ are skew-symmetric constant tensors. One can then verify by direct calculation that (\ref{Exp2}) and (\ref{Exp3}) satisfy the Killing equations.
\section{SUSY of fermion in magnetic monopole field}\label{S4}
\noindent
Having constructed the generic Killing tensors (\ref{Exp2}) and (\ref{Exp3}) generating constants of the motion, we now describe the supersymmetries of the Pauli-like Hamiltonian (\ref{Hamiltonian}). To start, we search for momentum-independent invariants, i.e.\ invariants not derived from a Killing tensor, $\,C^{i}= C^{ij}= \cdots = 0\,$. In this case, the system of equations (\ref{constraints}) reduces to the constraints,
\begin{eqnarray}\left\lbrace
\begin{array}{lll} \displaystyle{
{g}\psi^m\frac{{\partial} {\mathcal{Q}}_{0}({\vec{x}},{\vec{\psi}})}{{\partial}\psi^a}\,F_{am} =0}\,,\qquad &\hbox{order 0}&
\\[12pt]\displaystyle{
{\partial}_i{\mathcal{Q}}_{0}({\vec{x}},{\vec{\psi}})=0}&\hbox{order 1}\,.&
\end{array}
\right.
\end{eqnarray}
For ${g}=0$ [which means no spin-gauge field coupling], it is straightforward to see that the spin vector, as well as any function $f({\vec{\psi}}\,)$ of the Grassmann variables alone, is conserved.
\textit{For nonvanishing gyromagnetic ratio ${g}$, only the ``chiral'' charge} ${\mathcal{Q}}_{0}={\vec{\psi}}\cdot{\vec{S}}\,$ remains conserved. Hence, the charge $\,{\mathcal{Q}}_{0}\,$ can be considered as the projection of the internal angular momentum, ${\vec{S}}$, onto the internal trajectory $\,\psi(\tau)\,$. Thus ${\mathcal{Q}}_{0}\,$ can be viewed as the internal analogue of the projection of the angular momentum, in the bosonic sector, onto the classical trajectory $\,x(\tau)\,$.
Let us now search for superinvariants which are linear in the covariant momentum. We therefore set $C^{ij}=C^{ijk}=\cdots =0\,$, so that (\ref{constraints}) becomes
\begin{eqnarray}\left\lbrace \begin{array}{lll}
\displaystyle{ C^{i}\,{\partial}_i V+\frac{ie{g}}{4}\,\psi^l\psi^m C^j({\vec{x}},{\vec{\psi}})\,{\partial}_j F_{lm}-\frac{e{g}}{2}\psi^m\frac{{\partial} C({\vec{x}},{\vec{\psi}})}{{\partial}\psi^a}\,F_{am}
=0}\,,&\hbox{order 0}&
\\[7pt]
\displaystyle{
{\partial}_jC({\vec{x}},{\vec{\psi}})=eF_{jk}C^k({\vec{x}},{\vec{\psi}})-\frac{e{g}}{2}\psi^m\frac{{\partial} C^j({\vec{x}},{\vec{\psi}})}{{\partial}\psi^a}\,F_{am}}\,, &\hbox{order 1}&
\\[12pt]
{\partial}_jC^k({\vec{x}},{\vec{\psi}})+{\partial}_kC^j({\vec{x}},{\vec{\psi}})=0&\hbox{order 2}\,.&
\label{AngMom}
\end{array}\right.
\end{eqnarray}
We choose the non-vanishing $\,N^{j}_{a}={\delta}^{j}_{a}\,$ in the general rank-1 Killing tensor (\ref{Exp2}).
This provides us with the rank-1 Killing tensor generating the supersymmetry transformation,
\begin{equation}
\,C^j({\vec{x}},{\vec{\psi}})={\delta}^{j}_{a}\,\psi^{a}\,.
\label{SUSYKILLING}
\end{equation}
By substitution of this Grassmann-valued Killing tensor into the first-order equation of (\ref{AngMom}) we get
\begin{equation}
{\vec{\nabla}} C({\vec{x}},{\vec{\psi}})=\frac{q}{2}\left({g}-2\right)\frac{{\vec{x}}\times{\vec{\psi}}}{r^3}\,.\label{PrExp}
\end{equation}
Consequently, a solution
$C({\vec{x}},{\vec{\psi}})=0$ of (\ref{PrExp}) is only obtained for a fermion with ordinary gyromagnetic ratio
\begin{equation}
{g}=2\,.
\label{g2}
\end{equation}
Thus we obtain, for $\,V(r)=0\,$, the \textit{Grassmann-odd supercharge} generating the ${\mathcal{N}}=1$ supersymmetry of the spin-monopole field system,
\begin{equation}
{\mathcal{Q}}={\vec{\psi}}\cdot{\vec{\Pi}}\,,
\qquad
\big\{{\mathcal{Q}},\,{\mathcal{Q}}\big\}=-2i{\mathcal{H}}_{2}\,.
\label{SC0}
\end{equation}
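The graded bracket in (\ref{SC0}) can be verified directly from (\ref{PBrackets}): for the Grassmann-odd charge $\,{\mathcal{Q}}={\vec{\psi}}\cdot{\vec{\Pi}}\,$, only the field-strength and the Grassmann terms contribute, and using $\,\epsilon_{kij}\psi^{i}\psi^{j}=2iS^{k}\,$,
\begin{equation}
\big\{{\mathcal{Q}},\,{\mathcal{Q}}\big\}=eF_{ij}\,\psi^{i}\psi^{j}-i\,\Pi_{a}\Pi_{a}=2ie\,{\vec{S}}\cdot{\vec{B}}-i\,{\vec{\Pi}}^{2}=-2i\,{\mathcal{H}}_{2}\,,
\end{equation}
for $\,V=0\,$ and $\,{g}=2\,$, as stated.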
For nonvanishing potential, $V(r)\neq 0\,$, the zeroth-order consistency condition of (\ref{AngMom}) is expressed as \footnote{We use the identity
$\,S^kG^j{\partial}_j B^k=\psi^l\psi^m\,G^j{\partial}_j F_{lm}=0\,$.}
$
({V'(r)}/{r})\,{\vec{\psi}}\cdot{\vec{x}}=0\,.$ Consequently, adding \emph{any} spherically symmetric potential $V(r)\,$ breaks the supersymmetry generated by the Killing tensor $C^j={\delta}^{j}_{a}\,\psi^{a}$~:
${\mathcal{N}}=1$ SUSY requires an ordinary gyromagnetic factor, and no additional radial potential
is allowed \cite{Spector}.
Another Killing tensor (\ref{Exp2}) is obtained by considering the particular case with the non-null tensor $ \,N^{j}_{\;a_1a_2}=\epsilon^{j}_{\;a_1a_2}\,
$. This leads to the rank-1 Killing tensor,
\begin{equation}
C^j({\vec{x}},{\vec{\psi}})=\nobreak\epsilon^{j}_{\;ab}\psi^{a}\psi^{b}\,.
\end{equation}
The first-order constraint of (\ref{AngMom}) is solved with $\,C({\vec{x}},{\vec{\psi}})=0\,$, provided the gyromagnetic ratio takes the value $\,{g}=2\,$. For vanishing potential, it is straightforward to verify the zeroth-order consistency constraint and to obtain \textit{the Grassmann-even supercharge},
\begin{equation}
{\mathcal{Q}}_{1}={\vec{S}}\cdot{\vec{\Pi}}\,,\label{evenS1}
\end{equation}
\textit{defining the ``helicity'' of the spinning particle}. As expected, the consistency condition of superinvariance under (\ref{evenS1}) is again violated for $\,V(r)\neq 0\,$, breaking the supersymmetry of the Hamiltonian $\,{\mathcal{H}}_{2}\,$ in (\ref{SC0}).
Let us now consider the rank-1 Killing vector,
\begin{equation}
C^{j}({\vec{x}},{\vec{\psi}})=\big({\vec{S}}\times{\vec{x}}\big)^{j}\,,
\end{equation}
obtained by putting
$M^{ij}_{\;a_1a_2}=(i/2)\epsilon^{kij}\,\epsilon_{ka_1a_2}$ into the generic rank-1 Killing tensor (\ref{Exp2}). The first-order constraint
is satisfied with $\,C({\vec{x}},{\vec{\psi}})=0\,$, provided the particle carries gyromagnetic ratio ${g}=2$. Thus, we obtain the supercharge,
\begin{equation}
{\mathcal{Q}}_{2}=({\vec{x}}\times{\vec{\Pi}})\cdot{\vec{S}}\,,\label{q2}
\end{equation}
which, just like those in (\ref{SC0}) and (\ref{evenS1}), only appears when the potential is absent, $V=0$.
We next consider the supersymmetry obtained for
$\, M^{ij}_{\;a}=\epsilon^{\;\,ij}_{a}\,
$, so that the Killing tensor (\ref{Exp2}) reduces to
\begin{equation}
C^{j}({\vec{x}},{\vec{\psi}})=-\epsilon^{j}_{\;ka}x^{k}\psi^{a}\,.
\end{equation}
The first-order constraint of (\ref{AngMom}) is solved with $\,\displaystyle{C({\vec{x}},{\vec{\psi}})=\frac{q}{2}\left({g}-2\right)\frac{{\vec{\psi}}\cdot{\vec{x}}}{r}}\,$. The zeroth-order consistency condition is, in this case, identically satisfied for an arbitrary radial potential. We have thus constructed the Grassmann-odd supercharge,
\begin{equation}
{\mathcal{Q}}_{3}=({\vec{x}}\times{\vec{\Pi}})\cdot{\vec{\psi}}+\frac{q}{2}\left({g}-2\right)\frac{{\vec{\psi}}\cdot{\vec{x}}}{r}\,,
\end{equation}
which is still conserved for a particle carrying an arbitrary gyromagnetic ratio ${g}\,$; see also \cite{DJ-al}.
Now we turn to superinvariants which are quadratic in the covariant momentum. For this, we solve the reduced series of constraints,
\begin{eqnarray}\left\lbrace
\begin{array}{llll}\displaystyle{C^{i}{\partial}_i V+\frac{ie{g}}{4}\,\psi^l\psi^m C^j{\partial}_j F_{lm}-\frac{e{g}}{2}\psi^m\frac{{\partial} C}{{\partial}\psi^a}F_{am}
=0},&\hbox{order 0}
\\[8pt]\displaystyle{
{\partial}_jC=C^{jk}{\partial}_k V+eF_{jk}C^k+\frac{ie{g}}{4}\psi^l\psi^m\,C^{jk}{\partial}_k F_{lm}-\frac{e{g}}{2}\psi^m\frac{{\partial} C^{j}}{{\partial}\psi^a}\,F_{am}},
&\hbox{order 1}
\\[8pt]\displaystyle{
{\partial}_jC^k+{\partial}_kC^j=e\left(F_{jm}C^{mk}+F_{km}C^{mj}\right)-\frac{e{g}}{2}\psi^m\frac{{\partial} C^{jk}}{{\partial}\psi^a}\,F_{am}}\,,
&\hbox{order 2}\\[8pt]\displaystyle{
{\partial}_{j}C^{km}+{\partial}_{m}C^{jk}+{\partial}_{k}C^{mj}=0} &\hbox{order 3}\,.
\end{array}\right.
\label{RLV}
\end{eqnarray}
We first observe that $\,C^{ij}({\vec{x}},{\vec{\psi}})={\delta}^{ij}$ is a constant Killing tensor. Solving the second- and the first-order constraints of (\ref{RLV}), we obtain $\,C^{j}({\vec{x}},{\vec{\psi}})=0\,$ and $\,\displaystyle{C({\vec{x}},{\vec{\psi}})= V(r)-\frac{e{g}}{2}{\vec{S}}\cdot{\vec{B}}}\,$, respectively. The zeroth-order consistency condition is identically satisfied and we obtain the energy of the spinning particle,
\begin{equation}
{\mathcal{E}}= \frac{1}{2}{\vec{\Pi}}^{2}-\frac{e{g}}{2}{\vec{S}}\cdot{\vec{B}}+V(r)\,.
\end{equation}
Next, introducing the nonvanishing constant tensors,
$\,
M^{ijk}\!=\!\epsilon^{ijk}\,,\;\widetilde{N}^{ij}_{\;\,a}\!=\!-\epsilon_{\;\,a}^{ij}\,$, into (\ref{Exp3}), we derive the rank-2 Killing tensor,
\begin{equation}
C^{jk}({\vec{x}},{\vec{\psi}})=2\,{\delta}^{jk}({\vec{x}}\cdot{\vec{\psi}})-x^j\psi^k-x^k\psi^j\,.\label{KTSUSY}
\end{equation}
Using the Killing tensor (\ref{KTSUSY}), we solve the second-order constraints of (\ref{RLV}) with
$
{\vec{C}}({\vec{x}},{\vec{\psi}})=(q/2)\left(2-{g}\right)\left({\vec{\psi}}\times{\vec{x}}\right)/r\,.
$
In order to deduce the integrability condition of the first-order constraint of (\ref{RLV}), we require the vanishing of the commutator,
\begin{equation}
\left[{\partial}_{i},\,{\partial}_{j}\right]C({\vec{x}})=0\;\Longrightarrow\;\Delta\left( V(r)-\left(2-{g}\right)^2\frac{q^2}{8r^2}\right)=0\,.\label{Laplace0}
\end{equation}
Then the Laplace equation (\ref{Laplace0}) provides us with the \textit{most general form of the potential admitting a Grassmann-odd supercharge quadratic in the velocity}, namely with
\begin{equation}
\displaystyle{V(r)=\left(2-{g}\right)^2\frac{q^2}{8r^2}+\frac{{\alpha}}{r}+{\beta}}\,.\label{PotSUSY}
\end{equation}
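The potential (\ref{PotSUSY}) follows by elementary integration: for a radial function $\,f(r)\,$, away from the origin,
\begin{equation}
\Delta f=\frac{1}{r^{2}}\big(r^{2}f'\big)'=0
\quad\Longrightarrow\quad r^{2}f'=\hbox{const}
\quad\Longrightarrow\quad f(r)=\frac{{\alpha}}{r}+{\beta}\,,
\end{equation}
applied to $\,f(r)=V(r)-\left(2-{g}\right)^{2}q^{2}/(8r^{2})\,$ in (\ref{Laplace0}).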
Thus, we solve the first-order constraint with
\begin{equation}
C({\vec{x}},{\vec{\psi}})=\left(\frac{{\alpha}}{r}-e{g}{\vec{S}}\cdot{\vec{B}}\right){\vec{x}}\cdot{\vec{\psi}}\,,\label{NewSUSY}
\end{equation}
so that the zeroth-order consistency constraint is identically satisfied. Collecting our results leads to the Grassmann-odd supercharge quadratic in the velocity,
\begin{equation}
{\mathcal{Q}}_{4}=\left({\vec{\Pi}}\times({\vec{x}}\times{\vec{\Pi}})\right)\cdot{\vec{\psi}}+\frac{q}{2}\left(2-{g}\right)\frac{{\vec{x}}\times{\vec{\Pi}}}{r}\cdot{\vec{\psi}} +\left(\frac{{\alpha}}{r}-e{g}{\vec{S}}\cdot{\vec{B}}\right){\vec{x}}\cdot{\vec{\psi}}\,.
\end{equation}
This supercharge is \emph{not} a square root of the Hamiltonian $\,{\mathcal{H}}_g\,$; note that $\,{\mathcal{Q}}_4\,$ is conserved without restriction on the gyromagnetic factor, ${g}\,$. We can also remark that for $\,{g}=0\,$, the supercharge coincides with the scalar product of the \textit{separately conserved Runge-Lenz vector for a scalar particle \cite{MICZ} with the Grassmann-odd vector}:
\begin{equation}
\left. {\mathcal{Q}}_{4} \right|_{g = 0} = {\vec{K}}_{s = 0}\cdot{\vec{\psi}}\,.
\end{equation}
The supercharges $\,{\mathcal{Q}}\,$ and $\,{\mathcal{Q}}_j\,$ with $\,j= 0,\cdots,3\,$, previously determined, form together, for ordinary gyromagnetic ratio, the classical superalgebra,
\begin{eqnarray}\begin{array}{lll}\displaystyle{
\big\{{\mathcal{Q}}_0,\,{\mathcal{Q}}_0\big\}=\big\{{\mathcal{Q}}_0,\,{\mathcal{Q}}_1\big\}=\big\{{\mathcal{Q}},\,{\mathcal{Q}}_1\big\}=\big\{{\mathcal{Q}}_1,\,{\mathcal{Q}}_1\big\}=\big\{{\mathcal{Q}}_2,\,{\mathcal{Q}}_2\big\}=0}\,,\\[10pt]\displaystyle{
\big\{{\mathcal{Q}}_0,\,{\mathcal{Q}}\big\}=i{\mathcal{Q}}_{1}\,,\quad\big\{{\mathcal{Q}}_0,\,{\mathcal{Q}}_2\big\}=\big\{{\mathcal{Q}}_2,\,{\mathcal{Q}}_3\big\}=0}\,,\\[10pt]\displaystyle{
\big\{{\mathcal{Q}}_0,\,{\mathcal{Q}}_3\big\}=i{\mathcal{Q}}_{2}\,,\quad
\big\{{\mathcal{Q}},\,{\mathcal{Q}}\big\}=-2i{\mathcal{H}}_{2}}\,,\\[10pt]\displaystyle{
\big\{{\mathcal{Q}},\,{\mathcal{Q}}_2\big\}=\big\{{\mathcal{Q}}_1,\,{\mathcal{Q}}_3\big\}={\mathcal{Q}}_4}\,,\\[10pt]\displaystyle{
\big\{{\mathcal{Q}},\,{\mathcal{Q}}_3\big\}=2i{\mathcal{Q}}_1\,,\quad\big\{{\mathcal{Q}}_1,\,{\mathcal{Q}}_2\big\}=i{\mathcal{Q}}_3{\mathcal{Q}}\,,\quad\big\{{\mathcal{Q}}_3,\,{\mathcal{Q}}_3\big\}=i\left(2{\mathcal{Q}}_{2}-{\mathcal{Q}}_5\right)}\,,
\end{array}
\end{eqnarray}
where ${\mathcal{Q}}_5$ is the charge constructed in Section \ref{S5}, cf. (\ref{Msquare}). From these results it follows that the linear combination
${\mathcal{Q}}_Y = {\mathcal{Q}}_3 - 2 {\mathcal{Q}}_0$ has the special property that its bracket with the standard supercharge ${\mathcal{Q}}$ vanishes:
\begin{equation}
\big\{ {\mathcal{Q}}_Y, {\mathcal{Q}} \big\} = 0.
\end{equation}
Indeed, ${\mathcal{Q}}_Y$ is precisely the Killing-Yano supercharge constructed in \cite{DJ-al}.
\section{Bosonic symmetries of the spinning particle }\label{S5}
\noindent
Let us investigate the bosonic symmetries of the Pauli-like Hamiltonian (\ref{Hamiltonian}). We use the generic Killing tensors constructed in Section \ref{S3} to derive the associated constants of the motion. First, we describe the rotational invariance of the system by solving the reduced series of constraints (\ref{AngMom}).
For this, we consider the Killing vector obtained by the substitution
$\,
M^{ij}=-\epsilon^{ij}_{\;\;k}n^{k}\,
$
in (\ref{Exp2}). Thus we obtain, for any unit vector $\,{\vec{n}}\,$, the generator of space rotations around $\,{\vec{n}}\,$,
\begin{equation}
{\vec{C}}({\vec{x}},{\vec{\psi}})={\vec{n}}\times{\vec{x}}\,.\label{KillingV}
\end{equation}
Inserting the previous Killing vector in the first-order equation of (\ref{AngMom}) yields
$
C({\vec{x}},{\vec{\psi}})=-q\,\left({\vec{n}}\cdot{\vec{x}}\right)/r+c({\vec{\psi}})\,.
$
The zeroth-order consistency condition of (\ref{AngMom}) requires, for arbitrary radial potential, $\,c({\vec{\psi}})={\vec{S}}\cdot{\vec{n}}\,$. Collecting our results provides us with the total angular momentum [which is plainly conserved for arbitrary gyromagnetic ratio],
\begin{equation}
\displaystyle{{\vec{J}}={\vec{L}}+{\vec{S}}=
{\vec{x}}\times{\vec{\Pi}}-q\,{\frac{\vec{x}}{r}}+{\vec{S}}}\,.
\label{AngMomentum}
\end{equation}
In addition to the typical monopole term,
${{\vec{J}}}$ also involves the spin vector, $\,{\vec{S}}\,$. It
generates an $o(3)_{rotations}$ bosonic symmetry algebra, $\,\big\{J^{i},J^{j}\big\}=\epsilon^{ijk}J^{k}$.
In the case of vanishing gyromagnetic factor,
${g}=0$, the orbital part ${\vec{L}}\,$ and the spin angular momentum ${\vec{S}}\,$ are separately conserved, yielding an $o(3)_{rotations}\oplus o(3)_{spin}$ symmetry algebra.
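The conservation of (\ref{AngMomentum}) for \emph{any} $\,{g}\,$ can be checked directly from the equations of motion (\ref{EqM})--(\ref{Lorentz}). With $\,{\vec{B}}=({q}/{e})\,{\vec{x}}/r^{3}\,$ one has
\begin{equation}
\frac{d}{d\tau}\big({\vec{x}}\times{\vec{\Pi}}\big)=q\,\frac{{\vec{x}}\times({\vec{\Pi}}\times{\vec{x}})}{r^{3}}+\frac{{g} q}{2}\,\frac{{\vec{x}}\times{\vec{S}}}{r^{3}}=q\,\frac{d}{d\tau}\Big(\frac{{\vec{x}}}{r}\Big)-\dot{{\vec{S}}}\,,
\end{equation}
since $\,{\vec{x}}\times({\vec{\Pi}}\times{\vec{x}})/r^{3}=d({\vec{x}}/r)/d\tau\,$, $\,{\vec{x}}\times{\vec{\nabla}} V=0\,$ and $\,\dot{{\vec{S}}}=({g} q/2)\,({\vec{S}}\times{\vec{x}})/r^{3}\,$; hence $\,\dot{{\vec{J}}}=0\,$.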
Now we turn to invariants which are quadratic in the velocity. Then, we have to solve the series of constraints (\ref{RLV}). We first observe that for $\,M^{jmk}\!=\!\widetilde{M}^{jmk}\!=\!\epsilon^{jmk}\,$,
the Killing tensor (\ref{Exp3}) reduces to the rank-2 Killing-St\"ackel tensor,
\begin{equation}
C^{ij}({\vec{x}},{\vec{\psi}})=2{\delta}^{ij}\,{\vec{x}}^{\,2}-2x^{i}x^j\,.\label{Momsquare}
\end{equation}
Inserting (\ref{Momsquare}) into the second- and first-order constraints of (\ref{RLV}), we get, for any gyromagnetic factor and arbitrary radial potential,
\begin{equation}
{\vec{C}}({\vec{x}},{\vec{\psi}})= 0\quad\hbox{and}\quad C({\vec{x}},{\vec{\psi}})=-{g} q\,\frac{{\vec{x}}\cdot{\vec{S}}}{r}\,.
\end{equation}
Thus, we obtain the Casimir,
\begin{equation}
{\mathcal{Q}}_{5}={\vec{J}}^{2}-q^2+\left({g}-2\right){\vec{J}}\cdot{\vec{S}}-{g}{\mathcal{Q}}_{2}\,.\label{Msquare}
\end{equation}
The bosonic charge $\,{\mathcal{Q}}_{5}$ is, as expected, \textit{the square of the total angular momentum, augmented with another, separately conserved term}. Indeed, for ${g}=0\,$, it is straightforward to see that the spin,
and hence
$\,{\vec{J}}\cdot{\vec{S}}\,$, are separately conserved. For ${g}=2\,$, we recover the conservation of ${\mathcal{Q}}_{2}$, cf. (\ref{q2}). For the anomalous gyromagnetic ratio ${g}=4\,$, we find that $\,{\vec{J}}\cdot{\vec{S}}-2{\mathcal{Q}}_{2}\,$ is a constant of the motion.
We are now interested in the hidden symmetry generated by conserved Laplace-Runge-Lenz-type vectors; we therefore introduce into the algorithm (\ref{RLV}) the generator,
\begin{eqnarray}\displaystyle{ C^{ij}({\vec{x}},{\vec{\psi}})= 2\,{\delta}^{ij}\,{\vec{n}}\cdot{\vec{x}}-n^{i}x^j-n^{j}x^i}\,,
\label{RL1}
\end{eqnarray}
obtained by choosing the non-vanishing constant tensors
$\,\widetilde{N}^{ij}\!=\!\epsilon^{imj}n^m\;\hbox{and}\;M^{ijm}\!=\!\epsilon^{ijm}\,$ in the generic rank-2 Killing tensor (\ref{Exp3}).
Inserting (\ref{RL1}) into the second-order constraint of (\ref{RLV}), we get
\begin{equation}
{\vec{C}}({\vec{x}},{\vec{\psi}})= q\frac{{\vec{n}}\times{\vec{x}}}{r}+{\vec{C}}({\vec{\psi}}\,)\,.
\label{RL2}
\end{equation}
In order to solve the first-order constraint of (\ref{RLV}) we write the expansion \cite{B-M} in terms of Grassmann variables,
\begin{equation}
C({\vec{x}},\,{\vec{\psi}})=C({\vec{x}})+\sum_{k\geq 1}C_{a_{1}\cdots a_{k}}({\vec{x}})\psi^{a_{1}}\cdots\psi^{a_{k}}\,.\label{Expansion}
\end{equation}
Consequently, the first- and the zeroth-order equations of (\ref{RLV}) can be classified order-by-order in Grassmann-odd variables. Thus, inserting (\ref{RL2}) in the first-order equation, and requiring again the vanishing of the commutator,
\begin{equation}
\left[{\partial}_{i},\,{\partial}_{j}\right]C({\vec{x}})=0\;\Longrightarrow\;\Delta\left( V(r)-\frac{q^2}{2r^2}\right)=0\,,
\label{Laplace}
\end{equation}
we deduce the most general radial potential admitting a conserved Laplace-Runge-Lenz vector in the fermion-monopole interaction, namely
\begin{equation}
V(r)=\frac{q^2}{2r^2}+\frac{\mu}{r}+\gamma\,,\quad\mu\,,\gamma\in{\mathds{R}}\,.\label{Potential}
\end{equation}
We can now find the first term on the r.h.s.\ of (\ref{Expansion}), $\,\displaystyle{C({\vec{x}})=\mu\frac{({\vec{n}}\cdot{\vec{x}})}{r}}\,$. Introducing (\ref{RL2}) and (\ref{Potential}) into the first-order constraint of (\ref{RLV}) leads to $\,\displaystyle{{\vec{C}}({\vec{\psi}}\,)=-\frac{{g}}{2}{\vec{n}}\times{\vec{S}}}\,$ and
\begin{equation}\begin{array}{cc}\displaystyle{
\sum_{k\geq 1}C_{a_{1}\cdots a_{k}}({\vec{x}})\psi^{a_{1}}\cdots\psi^{a_{k}}=-\frac{e{g}}{2}\left({\vec{S}}\cdot{\vec{B}}\right)\left({\vec{n}}\cdot{\vec{x}}\right)-\frac{{g} q}{2}\left(1-\frac{{g}}{2}\right)\frac{{\vec{n}}\cdot{\vec{S}}}{r}+C({\vec{\psi}})}\,,\\[8pt]
\hbox{with}\quad{g}({g}-4)=0\,.
\end{array}
\label{RL3}
\end{equation}
The zeroth-order consistency condition of (\ref{RLV}) is only satisfied for $\,\displaystyle{C({\vec{\psi}})=\frac{\mu}{q}{\vec{S}}\cdot{\vec{n}}}\,$. Collecting our results, (\ref{RL1}), (\ref{RL2}), (\ref{Potential}) and (\ref{RL3}), we get a conserved Runge-Lenz vector
if and only if
\begin{equation}
{g}=0\qquad\hbox{or}\qquad{g}=4\,;
\label{g04}
\end{equation}
namely,
\begin{equation}
{\vec{K}}_{{g}}={\vec{\Pi}}\times{\vec{J}}+\mu\,{\frac{\vec{x}}{r}}+\left(1-\frac{{g}}{2}\right){\vec{S}}\times{\vec{\Pi}}-\frac{e{g}}{2}\left({\vec{S}}\cdot{\vec{B}}\right){\vec{x}}-\frac{{g} q}{2}\left(1-\frac{{g}}{2}\right)\frac{{\vec{S}}}{r}+\frac{\mu}{q}{\vec{S}}\,.
\end{equation}
Note that the spin angular momentum, which generates the extra ``spin'' symmetry for vanishing gyromagnetic ratio, is no longer separately conserved for ${g}=4$. An interesting question is therefore whether the extra ``spin'' symmetry of ${g}=0$
is still present, in some ``hidden'' way, for the anomalous superpartner ${g}=4\,$, cf. Section \ref{S6}.
Let us consider the ``spin'' transformation generated by the rank-2 Killing tensor,
\begin{equation}
C^{mk}({\vec{x}},{\vec{\psi}})=2{\delta}^{mk}\big({\vec{S}}\cdot{\vec{n}}\big)-\frac{g}{2}\big(S^{m}n^{k}+S^{k}n^{m}\big)\,.\label{KT2}
\end{equation}
This rank-$2$ Killing tensor, $\,C^{mk}=C^{mk}_{+}\,+\,C^{mk}_{-}\,$, cf. (\ref{KT2}), is obtained by putting
\begin{eqnarray*}
N^{jk}_{+}=({g}/{2})\epsilon^{\;jk}_{l}\,n^l,
\quad
&\widetilde{N}^{jk}_{+\,\;a}=-({i}/{2})\epsilon^{jk}_{\;\,m}\,\epsilon^{m}_{\;\,a_1 a_2},
\\[6pt]
N^{jkl}_{-}=\big(1-({g}/{2})\big)\epsilon^{jkl},
\quad
&\widetilde{N}^{jkl}_{-\;\,a}=-({i}/{4})\epsilon^{jkl}\,n_{m}\,\epsilon^{m}_{\;\,a_1 a_2}\,
\end{eqnarray*}
into the general rank-2 Killing tensor (\ref{Exp3}). Inserting (\ref{KT2}) into the second-order constraint of (\ref{RLV}) provides us with
\begin{equation}
{\vec{C}}({\vec{x}},{\vec{\psi}})=-\frac{qg}{2}\frac{\big({\vec{S}}\times{\vec{n}}\big)}{r}+{\vec{C}}(\psi)\quad\hbox{and}\quad g(g-4)=0\,.
\end{equation}
We use the potential (\ref{Potential}) to solve the first-order equation of (\ref{RLV}),
\begin{equation}\begin{array}{ll}\displaystyle{
C({\vec{x}},{\vec{\psi}})=\left(2V(r)-\frac{q^{2}g^{2}}{8r^{2}}-\frac{\mu g^2}{4r}\right){\vec{S}}\cdot{\vec{n}}+c(\psi)}\,,
\\[10pt]\displaystyle{
{\vec{C}}(\psi)=\frac{\mu g}{2q}{\vec{n}}\times{\vec{S}}\qquad\hbox{and}\qquad{g}\big({g}-4\big)=0}\,.
\end{array}
\end{equation}
The zeroth-order consistency condition is satisfied with $\displaystyle{c(\psi)=-\frac{{g}^{2}}{8}\frac{\mu^{2}}{q^{2}}{\vec{S}}\cdot{\vec{n}}\,,}$ so that collecting our results leads to the conserved vector,
\begin{equation}\begin{array}{ll}\displaystyle{
{\vec{\Omega}}_{{g}}=\left({\vec{\Pi}}^{2}+\big(2-\frac{{g}^{2}}{4}\big)V(r)\right){\vec{S}}-\frac{{g}}{2}\big({\vec{\Pi}}\cdot{\vec{S}}\big){\vec{\Pi}}+\frac{{g}}{2}\big(\frac{q}{r}+\frac{\mu}{q}\big){\vec{S}}\times{\vec{\Pi}}}\\[8pt]
\displaystyle{
-\frac{{g}^{2}}{4}\big(\frac{\mu^{2}}{2q^{2}}-\gamma\big){\vec{S}}}\quad\displaystyle{\hbox{with}\quad{g}\big({g}-4\big)=0}\,.
\end{array}
\end{equation}
In conclusion, the additional $\,o(3)_{spin}\,$ ``spin'' symmetry is recovered in the same particular cases of anomalous gyromagnetic ratios, ${g}=0$ and ${g}=4$, cf. (\ref{g04}).
$\bullet$ For ${g}=0$, in particular,
\begin{equation}
{\vec{\Omega}}_{0}=2{\mathcal{E}}\,{\vec{S}}\,.
\end{equation}
$\bullet$ For ${g}=4$, we find an expression equivalent to that of D'Hoker and Vinet \cite{DV2}, namely
\begin{equation}
{\vec{\Omega}}_{4}=\left({\vec{\Pi}}^{2}-2V(r)\right){\vec{S}}-2\big({\vec{\Pi}}\cdot{\vec{S}}\big)\,{\vec{\Pi}}+2\big(\frac{q}{r}+\frac{\mu}{q}\big){\vec{S}}\times{\vec{\Pi}}-4\left(\frac{\mu^{2}}{2q^{2}}-\gamma\right){\vec{S}}\,.
\end{equation}
Note that this extra symmetry is generated by a
\emph{Killing tensor}, rather than a Killing vector, as for ``ordinary'' angular momentum. Thus, for sufficiently low energy, the motions are bounded and the conserved vectors ${\vec{J}},\,{\vec{K}}_{{g}}$ and ${\vec{\Omega}}_{{g}}\,$ generate an $o(4)\oplus o(3)_{spin}\,$ bosonic symmetry algebra.
\section{${\mathcal{N}}=2$ Supersymmetry of the fermion-monopole system}\label{S6}
\noindent
So far we have seen that, for a spinning particle with a single Grassmann variable, SUSY and dynamical symmetry are inconsistent, since they require different values of the ${g}$-factor. Now, adapting the idea of D'Hoker and Vinet to our framework, we show that these
two contradictory conditions can be reconciled by doubling the odd degrees of freedom. The
systems with ${g}=0$ and ${g}=4$ will then become
superpartners inside a unified system \cite{FH}.
Hence we consider a charged spin-$\frac{1}{2}$ particle moving in a flat manifold $\,{\mathcal{M}}^{D+2d}\,$, interacting with a static magnetic field $\,{\vec{B}}\,$. The fermionic degrees of freedom are now carried by a $2d$-dimensional internal space. This is to be compared with the $d$-dimensional internal space, which is sufficient to describe the $\,{\mathcal{N}}=1\,$ SUSY of the monopole. In terms of Grassmann-odd variables $\,\psi_{1,2}\;$, the local coordinates of the fermionic extension $\,{\mathcal{M}}^{2d}\,$ read $\left(\psi^{a}_{1},\,\psi^{b}_{2}\right)$ with $\,a,b=1,\cdots,d\,$. The system is still described by the Pauli-like Hamiltonian (\ref{Hamiltonian}). Choosing $\,d=3\,$, we consider the fermion $\xi_{{\alpha}}\,$, which is a two-component spinor, $\,\xi_{{\alpha}}=\left(\begin{array}{c} \psi_{1}\\\psi_{2}\end{array}\right)\,$, with conjugate $\bar{\xi}^{{\alpha}}\,$. Thus, we have a representation of the spin angular momentum,
\begin{equation}
S^{k}=\frac{1}{2}\bar{\xi}^{{\alpha}} \,\sigma^{k\;\,{\beta}}_{\,{\alpha}}\,\xi_{{\beta}}\quad\hbox{with}\quad{\alpha},{\beta}=1,2\,,
\end{equation}
and the $\,\sigma^{k\;\,{\beta}}_{\,{\alpha}}\,$ with $\,k=1,2,3\,$ are the standard Pauli matrices. Defining the covariant Poisson-Dirac brackets as
\begin{eqnarray}
\big\{f,h\big\}&=&{\partial}_j f\,\frac{{\partial} h}{{\partial} \Pi_j}-\frac{{\partial} f}{{\partial} \Pi_j}\,{\partial}_j h
+e\,\epsilon_{ijk}B^k\,\frac{{\partial} f}{{\partial} \Pi_i}\frac{{\partial} h}{{\partial} \Pi_j}
+i(-1)^{a^{f}}\left(\frac{{\partial} f}{{\partial} \xi_{{\alpha}}}\frac{{\partial} h}{{\partial}\bar{\xi}^{{\alpha}}}+\frac{{\partial} f}{{\partial} \bar{\xi}^{{\alpha}}}\frac{{\partial} h}{{\partial}\xi_{{\alpha}}}\right),\quad
\end{eqnarray}
we deduce the non-vanishing fundamental brackets,
\begin{eqnarray}\begin{array}{ll}
\big\{x^{i},\Pi_{j}\big\}={\delta}^{i}_{j},\quad\big\{\Pi_{i},\Pi_{j}\big\}=e\,\epsilon_{ijk}B^k,\quad\big\{\xi_{{\alpha}},\bar{\xi}^{{\beta}}\big\}=-i{\delta}^{\;{\beta}}_{{\alpha}},\\[8pt]\displaystyle{
\big\{S^{k},S^{l}\big\}=\epsilon^{kl}_{\;\;m}S^{m},\quad\big\{S^{k},\bar{\xi}^{{\beta}}\big\}=-\frac{i}{2}\bar{\xi}^{\mu}\sigma^{k\;\,{\beta}}_{\,\mu},\quad\big\{S^{k},\xi_{{\beta}}\big\}=\frac{i}{2}\sigma^{k\;\,\nu}_{\,{\beta}}\xi_{\nu}}\,.
\end{array}
\end{eqnarray}
We also introduce an auxiliary scalar field, $\,\Phi(r)\,$, satisfying \textit{the ``self-duality'' or
``Bogomolny'' relation}\footnote{See \cite{FH} for a justification of the terminology.},
\begin{equation}
\big\{\Pi^{k},\Phi(r)\big\}=\pm eB^{k}\,.\label{SelfDuality}
\end{equation}
This auxiliary scalar field also defines a square root of the external potential of the system, so that $\,\displaystyle{\frac{1}{2}\Phi^{2}(r)=V(r)}\,$. As an example, we obtain the potential\footnote{The constant is $\,\displaystyle{\gamma=\frac{\mu^2}{2q^2}}\,$.} in (\ref{Potential}) by considering the auxiliary field $\,\displaystyle{\Phi(r)=\pm\left(\frac{q}{r}+\frac{\mu}{q}\right)}\,$.
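Indeed, squaring this choice of auxiliary field is a simple algebraic check that the relation $\frac{1}{2}\Phi^{2}(r)=V(r)$ reproduces a Coulomb-plus-constant form, with the constant $\gamma$ of the footnote:
```latex
\begin{equation*}
\frac{1}{2}\Phi^{2}(r)
=\frac{1}{2}\left(\frac{q}{r}+\frac{\mu}{q}\right)^{2}
=\frac{q^{2}}{2r^{2}}+\frac{\mu}{r}+\frac{\mu^{2}}{2q^{2}}
=\frac{q^{2}}{2r^{2}}+\frac{\mu}{r}+\gamma\,.
\end{equation*}
```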
In order to investigate the $\,{\mathcal{N}}=2\,$ supersymmetry of the Pauli-like Hamiltonian (\ref{Hamiltonian}), we outline the algorithm we use to construct supercharges linear in the gauge-covariant momentum,
\begin{eqnarray}\left\lbrace \begin{array}{lll}
\displaystyle{\mp e\Phi(r)\,B^{j}C^j+\frac{ie{g}}{4}B^{k}\left(\bar{\xi}^{\mu}\sigma^{k\,\nu}_{\mu}\frac{{\partial} C}{{\partial}\bar{\xi}^{\nu}}-\frac{{\partial} C}{{\partial}\xi_{\mu}}\sigma^{k\,\nu}_{\mu}\xi_{\nu}\right)
-\frac{e{g}}{4}\,\bar{\xi}^{\mu}\sigma^{k\,\nu}_{\mu}\xi_{\nu}\,C^{j}{\partial}_{j}B^{k}
=0}\,,&\hbox{order 0}&
\\[10pt]
\displaystyle{
{\partial}_{m}C=e\,\epsilon_{mjk}B^kC^j+i\frac{e{g}}{4}B^k\left(\bar{\xi}^{\mu}\sigma^{k\,\nu}_{\mu}\frac{{\partial} C^m}{{\partial}\bar{\xi}^{\nu}}-\frac{{\partial} C^m}{{\partial}\xi_{\mu}}\sigma^{k\,\nu}_{\mu}\xi_{\nu}\right) } \,, &\hbox{order 1}&
\\[12pt]
{\partial}_jC^k(x,\xi,\bar{\xi})+{\partial}_kC^j(x,\xi,\bar{\xi})=0&\hbox{order 2}\,.&
\label{2susy}
\end{array}\right.
\end{eqnarray}
Let us first consider the Killing spinor,
\begin{equation}
C_{{\beta}}^j=\frac{1}{2}\sigma^{j\,{\alpha}}_{{\beta}}\,\xi_{{\alpha}}\,.
\end{equation}
Inserting this Killing spinor into the first-order equation of (\ref{2susy}) provides us with
\begin{equation}
{\partial}_m C_{{\beta}}=-\frac{i}{2}e\,B_m\,\xi_{{\beta}}\quad\hbox{and}\quad{g}=4\,,
\end{equation}
which can be solved using \textit{the self-duality relation (\ref{SelfDuality})}. We get $\,\displaystyle{C_{{\beta}}({\vec{x}},{\vec{\xi}})=\pm\frac{i}{2}\Phi(r)\,\xi_{{\beta}}}\,$, provided the anomalous gyromagnetic factor is $\,g=4\,$. The zeroth-order constraint of (\ref{2susy}) is identically satisfied, so that collecting our results provides us with the supercharge,
\begin{equation}
{\mathcal{Q}}_{{\beta}}=\frac{1}{2}\Pi_j\,\sigma^{j\,{\alpha}}_{{\beta}}\,\xi_{{\alpha}}\pm\frac{i}{2}\Phi(r)\xi_{{\beta}}\,.
\label{2susy2}
\end{equation}
To obtain the supercharge conjugate to (\ref{2susy2}), we consider the Killing spinor,
\begin{equation}
\bar{C}^{k\,{\beta}}=\frac{1}{2}\bar{\xi}^{{\alpha}}\,\sigma_{{\alpha}}^{k\,{\beta}}\,.
\end{equation}
We solve the first-order equation of (\ref{2susy}) for the anomalous value of the gyromagnetic ratio $\,g=4\,$ using the Bogomolny equation (\ref{SelfDuality}). This leads to the conjugate $\displaystyle{\bar{C}^{{\beta}}({\vec{x}},{\vec{\xi}})=\mp\frac{i}{2}\Phi(r)\bar{\xi}^{{\beta}}}$. The zeroth-order consistency constraint is still satisfied and we obtain the odd supercharge,
\begin{equation}
\bar{{\mathcal{Q}}}^{{\beta}}=\frac{1}{2}\bar{\xi}^{{\alpha}}\,\sigma_{{\alpha}}^{k\,{\beta}}\Pi_{k}\mp\frac{i}{2}\Phi(r)\,\bar{\xi}^{{\beta}}\,.
\label{2susy3}
\end{equation}
The supercharges $\,{\mathcal{Q}}_{{\beta}}\,$ and $\,\bar{{\mathcal{Q}}}^{{\beta}}\,$ are both square roots of the Pauli-like Hamiltonian $\,{\mathcal{H}}_{4}\,$ and therefore \textit{generate the ${\mathcal{N}}=2$ supersymmetry of the spin-monopole field system,}
\begin{equation}
\big\{\bar{{\mathcal{Q}}}^{{\beta}},{\mathcal{Q}}_{{\beta}}\big\}=-i{\mathcal{H}}_{4}\,{1\!\!1}\,.
\end{equation}
It is worth noting that, defining the rescaled charges $\,\displaystyle{\bar{{\mathcal{U}}}^{{\beta}}=\bar{{\mathcal{Q}}}^{{\beta}}\frac{1}{\sqrt{{\mathcal{H}}_4}}}\,$ and $\,\displaystyle{{\mathcal{U}}_{{\beta}}=\frac{1}{\sqrt{{\mathcal{H}}_4}}{\mathcal{Q}}_{{\beta}}}\,$, it is straightforward to get,
\begin{equation}\displaystyle{
{\mathcal{H}}_0=\bar{{\mathcal{U}}}^{{\beta}}\,{\mathcal{H}}_4\,{\mathcal{U}}_{{\beta}}}\,,
\end{equation}
which makes manifest the fact that the two anomalous cases $\,{g}=0\,$ and $\,{g}=4\,$ can be viewed as superpartners\footnote{With the scalar $\,\bar{\xi}^{{\beta}}\xi_{{\beta}}=2\,$.}, cf.\ \cite{FH}. Moreover, in our enlarged system, the following bosonic charges
\begin{eqnarray}\begin{array}{lll}\displaystyle{
{\vec{J}}={\vec{x}}\times{\vec{\Pi}}-q\,{\frac{\vec{x}}{r}}+{\vec{S}}}\,,\\[10pt]\displaystyle{
{\vec{K}}={\vec{\Pi}}\times{\vec{J}}+\mu\,{\frac{\vec{x}}{r}}-{\vec{S}}\times{\vec{\Pi}}-2e\left({\vec{S}}\cdot{\vec{B}}\right){\vec{x}}+2q\frac{{\vec{S}}}{r}+\frac{\mu}{q}{\vec{S}}}\,,\\[14pt]\displaystyle{
{\vec{\Omega}}=\bar{{\mathcal{Q}}}^{{\beta}}\,{\vec{\sigma}}_{{\beta}}^{\;{\alpha}}\,{\mathcal{Q}}_{{\alpha}}=\frac{1}{2}\left(\Phi^2(r)-{\vec{\Pi}}^{2}\right){\vec{S}}+\big({\vec{\Pi}}\cdot{\vec{S}}\big){\vec{\Pi}}\mp\Phi(r)\,{\vec{S}}\times{\vec{\Pi}},}
\end{array}
\end{eqnarray}
remain conserved such that they form, together with the supercharges $\,{\mathcal{Q}}_{{\beta}}\,$ and $\,\bar{{\mathcal{Q}}}^{{\beta}}\,$, the classical symmetry superalgebra \cite{DV2,FH},
\begin{eqnarray}\begin{array}{llll}\displaystyle{
\big\{\bar{{\mathcal{Q}}}^{{\beta}},{\mathcal{Q}}_{{\beta}}\big\}= -i{\mathcal{H}}_4\,{1\!\!1}\,,\quad\big\{\bar{{\mathcal{Q}}}^{{\beta}},\bar{{\mathcal{Q}}}^{{\beta}}\big\}=\big\{{\mathcal{Q}}_{{\beta}},{\mathcal{Q}}_{{\beta}}\big\}=0\,,\quad\big\{\bar{{\mathcal{Q}}}^{{\beta}},J^k\big\}=\frac{i}{4}\bar{{\mathcal{Q}}}^{{\alpha}}\sigma^{k\,{\beta}}_{{\alpha}}}\,,\\[12pt]\displaystyle{
\big\{{\mathcal{Q}}_{{\beta}},J^k\big\}=-\frac{i}{4}\sigma^{k\,{\alpha}}_{{\beta}}{\mathcal{Q}}_{{\alpha}}\,,\quad\big\{\bar{{\mathcal{Q}}}^{{\beta}},K^j\big\}=-\frac{i}{4}\,\frac{\mu}{q}\,\bar{{\mathcal{Q}}}^{{\alpha}}\sigma^{j\,{\beta}}_{{\alpha}}\,,\quad\big\{{\mathcal{Q}}_{{\beta}},K^j\big\}=\frac{i}{4}\,\frac{\mu}{q}\,\sigma^{j\,{\alpha}}_{{\beta}}{\mathcal{Q}}_{{\alpha}}}\,,\\[12pt]\displaystyle{
\big\{\bar{{\mathcal{Q}}}^{{\beta}},\Omega^k\big\}=-i{\mathcal{H}}_4\,\bar{{\mathcal{Q}}}^{{\alpha}}\sigma^{k\,{\beta}}_{{\alpha}}\,,\quad\big\{{\mathcal{Q}}_{{\beta}},\Omega^k\big\}=i{\mathcal{H}}_4\,\sigma^{k\,{\alpha}}_{{\beta}}{\mathcal{Q}}_{{\alpha}}\,,\quad\big\{\Omega^i,K^j\big\}=\frac{\mu}{q}\epsilon^{ijk}\,\Omega^k}\,,\\[12pt]\displaystyle{
\big\{K^i,K^j\big\}=\epsilon^{ijk}\left[\left(\frac{\mu^2}{q^2}-2{\mathcal{H}}_4\right)J^k+2\Omega^k\right]\,,\quad\big\{\Omega^i,\Omega^j\big\}=\epsilon^{ijk}\,{\mathcal{H}}_4\,\Omega^k}\,,\\[12pt]\displaystyle{
\big\{J^i,\Lambda^j\big\}=\epsilon^{ijk}\Lambda^k\quad\hbox{with}\quad\Lambda^l=J^l,K^l,\Omega^l}\,.
\end{array}\nonumber
\end{eqnarray}
\section{{\bf Planar System}}\label{S7}
\noindent
In two dimensions the models simplify. The magnetic field is $F_{ij} = \varepsilon_{ij} B = {\partial}_i A_j - {\partial}_j A_i$ and
the spin tensor is actually a scalar
\begin{equation}
S = - \frac{i}{2}\, \varepsilon_{ij} \psi_i \psi_j.
\label{a.2}
\end{equation}
The Hamiltonian takes the form
\begin{equation}
H = \frac{1}{2}\, \vec{\Pi}^2 - \frac{e{g}}{2}\, S B + V(r).
\label{a.1}
\end{equation}
The fundamental brackets remain the same as in (\ref{PBrackets}).
The dynamical quantities (\ref{Exp}) become constants of motion if the constraints (\ref{constraints}) are satisfied:
\begin{eqnarray}\begin{array}{lll}
\displaystyle{ C_i {\partial}_i H + i \frac{{\partial} H}{{\partial} \psi_i} \frac{{\partial} C}{{\partial} \psi_i} = 0}\,,&\hbox{order 0}&\\[14pt]
\displaystyle{ {\partial}_i C = e F_{ij} C_j + i \frac{{\partial} H}{{\partial} \psi_j} \frac{{\partial} C_i}{{\partial} \psi_j} + C_{ij} {\partial}_j H }\,,&\hbox{order 1}&\\[14pt]
\displaystyle{ {\partial}_i C_j + {\partial}_j C_i = e \left( F_{ik} C_{kj} - C_{ik} F_{kj} \right) + i \frac{{\partial} H}{{\partial} \psi_k} \frac{{\partial} C_{ij}}{{\partial} \psi_k}
+ C_{ijk} {\partial}_k H}\,,&\hbox{order 2}&\\[14pt]
{\partial}_i C_{jk} + {\partial}_j C_{ki} + {\partial}_k C_{ij} = C_{ijkl} {\partial}_l H + (\mbox{terms linear in $C_{lmn}$})&\hbox{order 3}&.\end{array}
\end{eqnarray}
Using
\begin{equation}
i \frac{{\partial} H}{{\partial} \psi_i} = - \frac{e{g}}{2}\, F_{ij} \psi_j = - \frac{e{g}}{2}\, B \varepsilon_{ij} \psi_j,
\label{a.5}
\end{equation}
the first (zeroth-order) constraint becomes
\begin{equation}
\frac{e{g}}{2}\, B \varepsilon_{ij} \psi_j \frac{{\partial} C}{{\partial} \psi_i} = C_i \left({\partial}_i V - \frac{e{g}}{2}\, S\, {\partial}_i B \right),
\label{a.4}
\end{equation}
complemented by the first-order equation
\begin{equation}
{\partial}_i C = eB \left( \varepsilon_{ij} C_j + \frac{{g}}{2}\, \varepsilon_{jk} \psi_j \frac{{\partial} C_i}{{\partial} \psi_k} \right)
+ C_{ij} \left( {\partial}_j V - \frac{e{g}}{2}\, S {\partial}_j B \right).
\label{a.7}
\end{equation}
Similarly the second and higher-order equations take the form
\begin{equation}
{\partial}_i C_j + {\partial}_j C_i = eB \left( \varepsilon_{ik} C_{kj} + \varepsilon_{jk} C_{ki} + \frac{{g}}{2}\,\varepsilon_{jk} \psi_j \frac{{\partial} C_i}{{\partial} \psi_k}
\right) + C_{ijk} \left( {\partial}_k V - \frac{e{g}}{2}\, S {\partial}_k B \right),
\label{a.8}
\end{equation}
etc. For radial functions $V(r)$ and $B(r)$:
$
{\partial}_i V = (x_i/r)\,V^{\prime},\
{\partial}_i B = (x_i/r)\,B^{\prime}\,,
$
hence
\begin{equation}
C_{i...j} \left( {\partial}_j V - \frac{e{g}}{2}\, S {\partial}_j B \right) = \frac{C_{i...j}\, x_j}{r} \left( V^{\prime} - \frac{e{g}}{2}\, S B^{\prime} \right).
\label{a.9}
\end{equation}
Let us now consider some specific cases. Universal generalized Killing vectors are
\begin{equation}
C_i = (\gamma_i,\, \varepsilon_{ij} x_j,\, \psi_i,\, \varepsilon_{ij} \psi_j),
\label{a.10}
\end{equation}
with $\gamma_i$ a constant vector. Observe that $S$ is a constant of motion itself:
\begin{equation}
\left\{ H, S \right\} = 0,
\label{a.11}
\end{equation}
and all quantities quadratic in the Grassmann variables are proportional to $S$.
\vskip2mm
$\bullet$
A constant Killing vector $\gamma_i$ gives a constant of motion only if we can find solutions for the equations
\begin{equation}
{\partial}_i C = e B \varepsilon_{ij} \gamma_j, \hspace{2em}
B \varepsilon_{ji} \psi_i \frac{{\partial} C}{{\partial} \psi_j} = \gamma_i \left( \frac{2}{e{g}}\, {\partial}_i V- S {\partial}_i B \right).
\label{a.a1}
\end{equation}
Now for a Grassmann-even function $C = c_0 + c_2 S$ the left-hand side of the second equation vanishes; therefore we
must require $B$ and $V$ to be constant. Taking $V = 0$, this leads to the solution
\begin{equation}
C = - e B \varepsilon_{ij} \gamma_i x_j, \hspace{2em} V = 0, \hspace{2em} B = \mbox{constant}.
\label{a.a2}
\end{equation}
The corresponding constant of motion is $\gamma_i P_i$, with
\begin{equation}
P_i = \Pi_i - e B \varepsilon_{ij} x_j,
\label{a.a3}
\end{equation}
identified with \textit{``magnetic translations''} \cite{Kostel}.
\vskip2mm
$\bullet$
Next we consider the linear Killing vector $C_i = \varepsilon_{ij} x_j$, with all higher-order coefficients $C_{ij...} = 0$.
Again for Grassmann-even $C$ the left-hand side of eq.\ (\ref{a.4}) vanishes, and we get the condition
\begin{equation}
\varepsilon_{ij} x_i {\partial}_j B = \varepsilon_{ij} x_i {\partial}_j V = 0,
\label{a.a4}
\end{equation}
which is automatically satisfied for radial functions $B(r)$ and $V(r)$. Therefore we only have to solve eq.\ (\ref{a.7}):
\begin{equation}
{\partial}_i C = - e B x_i = - \frac{e x_i}{r}\, (rB).
\label{a.a5}
\end{equation}
We infer that $C(r)$ is a radial function, with
$
C^{\prime} = -e rB.
$
Therefore $C$ is given by \textit{the magnetic flux through the disk $D_r$ centered at the origin with radius $r$}:
\begin{equation}
C = - \frac{e}{2\pi}\, \int_{D_r} B(r) d^2x \equiv - \frac{e}{2\pi}\, \Phi_B(r).
\label{a.a7}
\end{equation}
We then find the constant of motion representing angular momentum:
\begin{equation}
L = \varepsilon_{ij} x_i \Pi_j +\frac{e}{2\pi}\, \Phi_B(r).
\label{a.a8}
\end{equation}
$\bullet$
There are two Grassmann-odd Killing vectors, the first one being $C_i = \psi_i$. With this Ansatz, we get for the
scalar contribution to the constant of motion the constraints
\begin{equation}
\frac{e{g}}{2} B\, \varepsilon_{ij} \psi_j\, \frac{{\partial} C}{{\partial} \psi_i} = \psi_i {\partial}_i V\,, \hspace{2em}
{\partial}_i C = \frac{eB}{2} \left( 2 - {g} \right) \varepsilon_{ij} \psi_j\,.
\label{a.b1}
\end{equation}
It follows that either ${g} = 2$ and $(C, V)$ are constant (in which case one may take $C = V = 0$),
or $ {g} \neq 2$ and $C$ is of the form
\begin{equation}
C = \varepsilon_{ij} K_i(r) \psi_j\,
\quad\hbox{with}\quad
{\partial}_i V = - \frac{e{g}}{2}\, B K_i\,, \hspace{2em}
{\partial}_i K_j = \frac{(2-{g})eB}{2}\, \delta_{ij}.
\label{a.b3}
\end{equation}
This is possible only if $B$ is constant and
\begin{equation}
K_i = \frac{eB(2-{g})}{2}\,x_i \equiv \kappa x_i\,, \hspace{2em}
V(r) = \frac{{g}({g}-2)}{8}\, e^2 B^2 r^2 = - \frac{e{g} \kappa}{4\pi}\, \Phi_B(r).
\label{a.b4}
\end{equation}
It follows that we have a conserved supercharge of the form \cite{Kostel}
\begin{equation}
Q = \psi_i \left( \Pi_i - \kappa \varepsilon_{ij} x_j \right).
\label{a.b5}
\end{equation}
The bracket algebra of this supercharge takes the form
\begin{equation}
i \left\{ Q, Q \right\} = 2 H + (2 - g)e B J, \hspace{2em} J = L + S.
\label{a.b6}
\end{equation}
Of course, as $S$ and $L$ are separately conserved, $J$ is a constant of motion as well.
\vspace{1ex}
$\bullet$
Finally we consider the dual Grassmann-odd Killing vector $C_i = \varepsilon_{ij} \psi_j$. Then the
constraints (\ref{a.4}) and (\ref{a.7}) become
\begin{equation}
\frac{eg}{2}\,B\,\frac{{\partial} C}{{\partial} \psi_i} = {\partial}_i V, \hspace{2em}
{\partial}_i C = \frac{(g-2)eB}{2}\, \psi_i,
\label{a.c1}
\end{equation}
implying that $C = N_i(x) \psi_i$ and
\begin{equation}
\frac{eg}{2}\, B\, N_i = {\partial}_i V, \hspace{2em}
{\partial}_i N_j = \frac{(g-2) eB}{2}\, \delta_{ij}.
\label{a.c2}
\end{equation}
As before, $B$ must be constant and the potential is identical to (\ref{a.b4}):
\begin{equation}
N_i = - \kappa x_i, \hspace{2em}
V = - \frac{eg\kappa}{4\pi}\, \Phi_B(r) = \frac{g(g-2)}{8}\, e^2 B^2 r^2.
\label{a.c3}
\end{equation}
Thus we find the dual conserved supercharge \cite{HmonRev},
\begin{equation}
\tilde{Q} = \varepsilon_{ij} \psi_i \left( \Pi_j - \kappa \varepsilon_{jk} x_k \right)
= \psi_i \left( \varepsilon_{ij} \Pi_j + \kappa x_i \right),
\label{a.c4}
\end{equation}
which satisfies the bracket relations
\begin{equation}
i \left\{ \tilde{Q}, \tilde{Q} \right\} = 2 H + (2-g) eB J, \hspace{2em}
i \left\{ Q, \tilde{Q} \right\} = 0.
\label{a.c5}
\end{equation}
Thus the harmonic potential (\ref{a.b4}) with constant magnetic field $B$ allows a classical ${\mathcal{N}} = 2$ supersymmetry
with supercharges $(Q, \tilde{Q})$, whilst the special conditions $g = 2$ and $V = 0$ allow for ${\mathcal{N}} = 2$
supersymmetry for any $B(r)$.
\section{Discussion}\label{S8}
\noindent
In this paper we studied, in the framework of the covariant Hamiltonian dynamics proposed in \cite{vH}, the symmetries and the supersymmetries of a spinning particle coupled to a magnetic monopole field. The gyromagnetic ratio determines the type of (super)symmetry the system can admit: for the Pauli-like Hamiltonian (\ref{Hamiltonian}), $\,{\mathcal{N}}=1\,$ supersymmetry only arises for gyromagnetic ratio ${g}=2$ and with no potential, $V=0$, confirming Spector's observation \cite{Spector}. We also derived additional supercharges, which are, however, not square roots of the Hamiltonian of the system.
A Runge-Lenz-type dynamical symmetry requires instead an anomalous gyromagnetic ratio,
$${g}=0\quad\hbox{or}\quad{g}=4\,,$$
with the additional bonus of an extra ``spin" symmetry.
These particular values of ${g}$ arise from effective couplings of the form $F_{ij}\mp\epsilon_{ijk}D_k\Phi$, which
add or cancel for self-dual fields, $F_{ij}=\epsilon_{ijk}D_k\Phi$ \cite{FH}.
The super- and the bosonic symmetry can be combined; the price to pay is, however, to enlarge the fermionic space, as proposed by D'Hoker and Vinet \cite{DV2} (see also \cite{FH}); this provides us with an $\,{\mathcal{N}}=2\,$ SUSY.
Our recipe also applies to a planar fermion in any planar magnetic field [i.e.\ one perpendicular to the plane]. As an illustration, we have shown, for the ordinary gyromagnetic ratio, that in addition to the usual supercharge (\ref{a.b5}) generating the supersymmetry, the system also admits another square root of the Pauli Hamiltonian $\,H\,$ \cite{HmonRev}. This happens due to the existence
of a dual Killing tensor.
Finally, we remark that confining the spinning particle to a sphere of fixed radius $\,\rho\,$ implies the set of constraints \cite{DJ-al},
\begin{equation}
{\vec{x}}^2=\rho^2\,,\quad{\vec{x}}\cdot{\vec{\psi}}=0\quad\hbox{and}\quad{\vec{x}}\cdot{\vec{\Pi}}=0\,.
\end{equation}
This freezes the radial potential to a constant,
and we recover the $\,{\mathcal{N}}=1\,$ SUSY described by the supercharges $\,{\mathcal{Q}}\,$, $\,{\mathcal{Q}}_1\,$ and $\,{\mathcal{Q}}_2\,$ for ordinary gyromagnetic factor $\,{g}=2\,$.
\begin{acknowledgments}\noindent
One of us (JPN) is indebted to the {\it R\'egion Centre} for a doctoral scholarship, and to the {\it Laboratoire de Math\'ematiques et de Physique Th\'eorique} of Tours University and the {\it NIKHEF (Amsterdam)} for hospitality extended to him.
\end{acknowledgments}
\section{Introduction}\label{sect:intro}
In \cite{AGT:2010} a remarkable proposal, now called the AGT conjecture,
was given on the relation between the Liouville theory conformal blocks
and the Nekrasov partition function.
Among the related investigations,
Gaiotto proposed several degenerate versions of the AGT conjecture
in \cite{G:2009}.
In that paper, he conjectured that the inner product of a certain element
in the Verma module of the Virasoro algebra coincides
with the Nekrasov partition function
for the four dimensional $\mathcal{N}=2$ pure gauge theory \cite{N:2003}.
Actually, the element considered is a kind of Whittaker vector
in the Verma module of the Virasoro algebra.
Whittaker vectors and Whittaker modules have been important gadgets
in representation theory since their emergence in the study of
finite-dimensional Lie algebras \cite{K:1978}.
Although numerous analogues and generalisations have been proposed for
other algebras, such as affine algebras and quantum groups,
relatively few investigations have been devoted to
the Whittaker vectors of the Virasoro algebra.
A general theory on the properties of Whittaker modules
for the Virasoro algebra was recently given in \cite{OW:2009}.
In this paper we give an explicit expression of the Whittaker vector for
the Verma module of the Virasoro algebra in terms of Jack symmetric functions
\cite[VI \S 10]{M:1995:book}.
We use the Feigin-Fuchs bosonization \cite{FF:1982}
to identify the Verma module with the ring of symmetric functions,
and then utilise the split expression of the Calogero-Sutherland Hamiltonian \cite{SKAO:1996} to derive a recursion relation for the coefficients
of the Whittaker vector in its expansion
with respect to Jack symmetric functions.
Our result is related to a conjecture given
by Awata and Yamada in \cite{AY:2009}.
They proposed the five-dimensional AGT conjecture
for pure $\SU(2)$ gauge theory using the deformed Virasoro algebra,
and as a related topic, they also proposed a conjectural formula
on the explicit form of the deformed Gaiotto state
in terms of Macdonald symmetric functions \cite[(3.18)]{AY:2009}.
Our formula is the non-deformed Virasoro, or four-dimensional, counterpart
of their conjectural formula.
The motivation of our study also comes from the work \cite{MY:1995},
where singular vectors of the Virasoro algebra are
expressed by Jack polynomials.
Before presenting the detail of the main statement,
we need to prepare several notations on Virasoro algebra,
symmetric functions and some combinatorics.
The main theorem will be given in \S \ref{subsec:mainthm}.
\subsection{Partitions}\label{subsec:partition}
Throughout this paper, our notation for partitions follows \cite{M:1995:book}.
For a positive integer $n$, a partition $\lambda$ of $n$ is a (finite)
sequence of positive integers $\lambda=(\lambda_1,\lambda_2,\ldots)$ such that
$\lambda_1\ge\lambda_2\ge\cdots$ and $\lambda_1+\lambda_2+\cdots=n$.
The symbol $\lambda \vdash n$ means that $\lambda$ is a partition of $n$.
For a general partition we also define $|\lambda|\seteq\sum_{i} \lambda_i$.
The number $\ell(\lambda)$ is defined to be
the length of the sequence $\lambda$.
The conjugate partition of $\lambda$ is denoted by $\lambda'$.
We also consider the empty sequence $\emptyset$
as the unique partition of the number $0$.
In addition we denote by $\calP$
the set of all the partitions of natural numbers
including the empty partition $\emptyset$, so that
\begin{align*}
\calP=\{\emptyset,(1),(2),(1,1),(3),(2,1),(1,1,1),\ldots\}.
\end{align*}
As usual,
$p(n)\seteq\#\{\lambda\in\calP\mid |\lambda|=n\}
=\#\{\lambda\mid \lambda\vdash n\}$
denotes the number of partitions of $n$.
In the main text, we sometimes use the dominance semi-ordering
on the partitions:
$\lambda\ge\mu$ if and only if
$|\lambda|=|\mu|$ and
$\sum_{k=1}^i \lambda_k \ge \sum_{k=1}^i \mu_k$ ($i=1,2,\ldots$).
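The combinatorial notions above (partitions of $n$, the number $p(n)$, the conjugate $\lambda'$ and the dominance semi-ordering) can be illustrated by the following small Python sketch; it is an aside of ours, with all names chosen for the example:

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def p(n):
    """Number of partitions of n (p(0) = 1 for the empty partition)."""
    return sum(1 for _ in partitions(n))

def conjugate(lam):
    """Conjugate partition: lambda'_j = #{i : lambda_i >= j}."""
    if not lam:
        return ()
    return tuple(sum(1 for part in lam if part >= j)
                 for j in range(1, lam[0] + 1))

def dominates(lam, mu):
    """Dominance semi-ordering: lam >= mu (partial sums comparison)."""
    if sum(lam) != sum(mu):
        return False
    ps_l = ps_m = 0
    for i in range(max(len(lam), len(mu))):
        ps_l += lam[i] if i < len(lam) else 0
        ps_m += mu[i] if i < len(mu) else 0
        if ps_l < ps_m:
            return False
    return True
```

For instance, $p(1),\ldots,p(5)=1,2,3,5,7$, in accordance with the enumeration of $\calP$ above, and $(4,4,2,1,1,1)'=(6,3,2,2)$ for the partition of Figure \ref{fig:442111}.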
We also follow \cite{M:1995:book} for the convention of the Young diagram.
Moreover we will use the coordinate $(i,j)$ on the Young diagram
defined as follows:
the first coordinate $i$ (the row index) increases as one goes downwards,
and the second coordinate $j$ (the column index) increases as one goes
rightwards.
For example, in Figure \ref{fig:442111}
the left-top box has the coordinate $(1,1)$ and
the left-bottom box has the coordinate $(6,1)$.
We will often identify a partition and its associated Young diagram.
\begin{figure}[htbp]
\begin{center}
% [Picture code damaged in the source. The figure shows the Young diagram of
%  the partition $(4,4,2,1,1,1)$: rows of 4, 4, 2, 1, 1 and 1 boxes, with the
%  column index $j$ increasing rightwards and the row index $i$ increasing
%  downwards.]
\end{center}
\caption{The Young diagram for $(4,4,2,1,1,1)$}
\label{fig:442111}
\end{figure}
Let us also use the notation $(i,j)\in\lambda$,
which means that $i,j\in\bbZ_{\ge 1}$,
$1\le i\le \ell(\lambda)$ and $1\le j\le \lambda_i$.
On the Young diagram of $\lambda$ the symbol $(i,j)\in\lambda$ corresponds
to the box located at the coordinate $(i,j)$.
In Figure \ref{fig:442111}, we have $(2,3)\in\lambda\seteq(4,4,2,1,1,1)$
but $(4,3)\notin\lambda$.
\subsection{Virasoro algebra}\label{subsec:vir}
Let us fix notations on Virasoro algebra and its Verma module.
Let $c\in\bbC$ be a fixed complex number.
The Virasoro algebra $\Vir_c$ is a Lie algebra over $\bbC$
with central extension,
generated by $L_n$ ($n\in\bbZ$) with the relation
\begin{align}\label{eq:Vir}
[L_m,L_n]=(m-n)L_{m+n}+\dfrac{c}{12}m(m^2-1)\delta_{m+n,0}.
\end{align}
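For instance, specialising \eqref{eq:Vir} to low modes, the central term vanishes for $m=\pm1$ and first appears at $m=\pm2$:
```latex
\begin{align*}
[L_1,L_{-1}]=2L_0\,,\qquad
[L_2,L_{-2}]=4L_0+\frac{c}{12}\,2\,(2^{2}-1)=4L_0+\frac{c}{2}\,.
\end{align*}
```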
$\Vir_c$ has the triangular decomposition
$\Vir_c=\Vir_{c,+}\oplus\Vir_{c,0}\oplus\Vir_{c,-}$ with
$\Vir_{c,\pm} \seteq \oplus_{\pm n>0}\bbC L_n$ and
$\Vir_{c,0} \seteq \bbC L_0\oplus \bbC$.
Let $h$ be a complex number.
Let $\bbC_{h}$ be the one-dimensional representation
of the subalgebra $\Vir_{c,\ge0} \seteq \Vir_{c,0}\oplus \Vir_{c,+}$,
where $\Vir_{c,+}$ acts trivially and $L_0$ acts as the multiplication by $h$.
Then one has the Verma module $M_h$ by
\begin{align*}
M_h\seteq\Ind_{\Vir_{c,\ge0}}^{\Vir_c}\bbC_{h}.
\end{align*}
Obeying the notation in physics literature,
we denote by $\ket{h}$ a fixed basis of $\bbC_{h}$.
Then one has $\bbC_{h}=\bbC \ket{h}$ and $M_h=U(\Vir_c)\ket{h}$.
$M_h$ has the $L_0$-weight space decomposition:
\begin{align}\label{eq:L0weightdecomp}
M_h=\bigoplus_{n\in\bbZ_{\ge 0}} M_{h,n},\quad \text{with}\quad
M_{h,n}\seteq\{v\in M_h\mid L_0 v=(h+n)v \}.
\end{align}
A basis of $M_{h,n}$ can be described simply by partitions.
For a partition
$\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_k)$ of $n$
we define the abbreviation
\begin{align}\label{eq:L-lambda}
L_{-\lambda}\seteq L_{-\lambda_k} L_{-\lambda_{k-1}}\cdots L_{-\lambda_1}
\end{align}
for the corresponding element of $U(\Vir_{c,-})$, the universal enveloping algebra
of the subalgebra $\Vir_{c,-}$.
Then the set
\begin{align*}
\{L_{-\lambda} \ket{h} \mid \lambda \vdash n\}
\end{align*}
is a basis of $M_{h,n}$.
\subsection{Bosonization}
Next we recall the bosonization of the Virasoro algebra \cite{FF:1982}.
Consider the Heisenberg algebra $\calH$ generated by $a_n$ ($n\in\bbZ$)
with the relation
\begin{align*}
[a_m,a_n]=m\delta_{m+n,0}.
\end{align*}
Consider the correspondence
\begin{align}
L_n \mapsto \calL_n
\seteq\dfrac{1}{2}\sum_{m\in\bbZ}\no{a_m a_{n-m}}-(n+1)\rho a_n,
\label{eq:FF}
\end{align}
where the symbol $\no{\ }$ means the normal ordering.
This correspondence determines a well-defined morphism
\begin{align}\label{eq:FF:phi}
\varphi: U(\Vir_c) \to\widehat{U}(\calH).
\end{align}
Here $\widehat{U}(\calH)$ is
the completion of the universal enveloping algebra $U(\calH)$
in the following sense \cite{FF:1996}.
For $n\in\bbZ_{\ge0}$,
let $I_n$ be the left ideal of the enveloping algebra $U(\calH)$
generated by all polynomials in $a_m$ ($m\in\bbZ_{\ge1})$
of degrees greater than or equal to $n$
(where we defined the degree by $\deg a_m\seteq m$).
Then we define
\begin{align*}
\widehat{U}(\calH)\seteq\varprojlim_n U(\calH)/I_n.
\end{align*}
Next we recall the functorial correspondence of the representations.
First let us define the Fock representation $\calF_\alpha$ of $\calH$.
$\calH$ has the triangular decomposition
$\calH=\calH_{+}\oplus\calH_{0}\oplus\calH_{-}$
with $\calH_{\pm}\seteq \oplus_{\pm n\in\bbZ_{+}}\bbC a_n$ and
$\calH_{0}\seteq \bbC a_0$.
Let $\bbC_\alpha=\bbC\ket{\alpha}_\calF$
be the one-dimensional representation of
$\calH_{0}\oplus\calH_{+}$ with the action
$a_0\ket{\alpha}_\calF=\alpha \ket{\alpha}_\calF$ and
$a_n\ket{\alpha}_\calF=0$ ($n\in\bbZ_{\ge1}$).
Then the Fock space $\calF_\alpha$ is defined to be
\begin{align*}
\calF_\alpha\seteq
\Ind_{\calH_{0}\oplus\calH_{+}}^{\calH}\bbC_\alpha\,.
\end{align*}
It has the $a_0$-weight decomposition
\begin{align}\label{eq:F:deg}
\calF_\alpha=\oplus_{n\ge 0}\calF_{\alpha,n},\quad
\calF_{\alpha,n}
\seteq\{w\in\calF_{\alpha} \mid a_0 w= (n+\alpha)w\}.
\end{align}
Each weight space $\calF_{\alpha,n}$ has a basis
\begin{align}\label{eq:a:base}
\{a_{-\lambda}\ket{\alpha}_\calF \mid \lambda\vdash n \}
\end{align}
with $a_{-\lambda}\seteq a_{-\lambda_k}\cdots a_{-\lambda_1}$
for a partition $\lambda=(\lambda_1,\ldots,\lambda_k)$.
Note also that
the action of $\widehat{U}(\calH)$ on $\calF_\alpha$ is well-defined.
Similarly the dual Fock space $\calF^*_\alpha$ is defined to be
$\Ind_{\calH_{0}\oplus\calH_{-}}^{\calH }\bbC_\alpha^*$,
where $\bbC_\alpha^*=\bbC\cdot{}_\calF\bra{\alpha}$
is the one-dimensional right representation of
$\calH_{0}\oplus\calH_{-}$ with the action
${}_\calF\bra{\alpha}a_0=\alpha \cdot {}_\calF\bra{\alpha}$ and
${}_\calF\bra{\alpha} a_{-n}=0$ ($n\in\bbZ_{\ge1}$).
Then one has the bilinear form
\begin{align*}
\cdot: \calF^*_\alpha \times \calF_\alpha \to \bbC
\end{align*}
defined by
\begin{align*}
&{}_\calF\bra{\alpha}\cdot \ket{\alpha}_\calF =1,\quad
0 \cdot \ket{\alpha}_\calF={}_\calF\bra{\alpha}\cdot 0=0,\\
&{}_\calF\bra{\alpha}u u'\cdot\ket{\alpha}_\calF
={}_\calF\bra{\alpha}u \cdot u'\ket{\alpha}_\calF
={}_\calF\bra{\alpha}\cdot u u'\ket{\alpha}_\calF\
(u,u'\in \calH).
\end{align*}
As in the physics literature, we often omit the symbol $\cdot$ and
simply write ${}_\calF\left<\alpha|\alpha\right>_\calF$,
${}_\calF\bra{\alpha}u\ket{\alpha}_\calF$ and so on.
Now we can state the bosonization at the level of representations:
\eqref{eq:FF} is compatible with the map
\begin{align}\label{eq:FF:psi}
\psi: M_h \to \calF_\alpha,\quad
L_{-\lambda}\ket{h}\mapsto \calL_{-\lambda}\ket{\alpha}_\calF
\end{align}
with $\calL_{-\lambda}\seteq\calL_{-\lambda_1}\cdots\calL_{-\lambda_k}$
for $\lambda=(\lambda_1,\ldots,\lambda_k)\in\calP$ and
\begin{align}\label{eq:FF:ch}
c=1-12\rho^2,\quad h=\dfrac{1}{2}\alpha(\alpha-2\rho).
\end{align}
In other words, we have
\begin{align*}
\psi(x v)=\varphi(x)\psi(v)\quad (x\in \Vir_c,\ v\in M_{h})
\end{align*}
under the parametrisation \eqref{eq:FF:ch}
of highest weights.
\subsection{Fock space and symmetric functions}
\label{subsec:Fs}
The Fock space $\calF_\alpha$ is naturally identified
with the space of symmetric functions.
In this paper the term ``symmetric function" means the
infinite-variable symmetric ``polynomial".
To treat such an object rigorously,
we follow the argument of \cite[\S I.2]{M:1995:book}.
Let us denote by $\Lambda_N$ the ring of $N$-variable symmetric polynomials
over $\bbZ$,
and by $\Lambda_N^d$ the space of homogeneous symmetric polynomials
of degree $d$.
The ring of symmetric functions $\Lambda$ is defined as the inverse limit
of the $\Lambda_N$ in the category of graded rings
(with respect to the gradation defined by the degree $d$).
We denote by $\Lambda_K=\Lambda\otimes_\bbZ K$
the coefficient extension to a ring $K$.
Among several bases of $\Lambda$, the family of
the power sum symmetric functions
\begin{align*}
p_n=p_n(x)\seteq \sum_{i\in\bbZ_{\ge1}} x_i^n,\quad
p_\lambda\seteq p_{\lambda_1}\cdots p_{\lambda_k},
\end{align*}
plays an important role.
It is known that $\{p_\lambda\mid \lambda \vdash d\}$ is a basis of
$\Lambda_\bbQ^d$, the subspace of homogeneous symmetric functions
of degree $d$.
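As a side remark, the dimension of $\Lambda_\bbQ^d$ equals the number of partitions of $d$, since $\{p_\lambda\mid\lambda\vdash d\}$ is a basis. A short Python sketch (an illustration only, not part of the argument) enumerating partitions:

```python
def partitions(d, max_part=None):
    """Yield the partitions of d as non-increasing tuples."""
    if max_part is None:
        max_part = d
    if d == 0:
        yield ()
        return
    for first in range(min(d, max_part), 0, -1):
        for rest in partitions(d - first, first):
            yield (first,) + rest

# dim Lambda_Q^d = #{lambda |- d}, since {p_lambda} is a basis
dims = [len(list(partitions(d))) for d in range(8)]
print(dims)  # [1, 1, 2, 3, 5, 7, 11, 15]
```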
Now following \cite{AMOS:1995},
we define the isomorphism between the Fock space
and the space of symmetric functions.
Let $\beta$ be a non-zero complex number and consider the following map between
$\calF_\alpha$ and $\Lambda_{\bbC}$:
\begin{align}
\begin{array}{c c c c}
\iota_\beta: & \calF_\alpha & \to & \Lambda_{\bbC},
\\
& \rotatebox{90}{$\in$} & & \rotatebox{90}{$\in$}
\\
&v&\mapsto
&{\displaystyle
{}_\calF\bra{\alpha}
\exp\Big(\sqrt{\dfrac{\beta}{2}}\sum_{n=1}^\infty \dfrac{1}{n}p_n a_n\Big)v.}
\end{array}
\label{eq:iota}
\end{align}
Under this morphism,
an element $a_{-\lambda}\ket{\alpha}_\calF$ of the basis \eqref{eq:a:base}
is mapped to
\begin{align*}
\iota_\beta(a_{-\lambda}\ket{\alpha}_\calF)
=(\sqrt{\beta/2})^{\ell(\lambda)} p_\lambda(x).
\end{align*}
Since $\{a_{-\lambda}\ket{\alpha}_\calF\}$ is a basis of $\calF_\alpha$
and $\{p_\lambda\}$ is a basis of $\Lambda_\bbQ$,
$\iota_\beta$ is an isomorphism.
\subsection{Jack symmetric function}
\label{subsec:jack}
Now we recall the definition of the Jack symmetric functions
\cite[\S VI.10]{M:1995:book}.
Let $b$ be an indeterminate
\footnote{Our parameter $b$ is usually denoted by $\alpha$ in the literature, for example, in \cite{M:1995:book}. We avoid using $\alpha$ since it is already defined to be the highest weight of the Heisenberg Fock space $\calF_\alpha$.}
and define an inner product on
$\Lambda_{\bbQ(b)}$ by
\begin{align}\label{eq:inner}
\left<p_\lambda,p_\mu\right>_b \seteq
\delta_{\lambda,\mu}z_\lambda b^{\ell(\lambda)}.
\end{align}
Here the function $z_\lambda$ is given by:
\begin{align*}
z_{\lambda}\seteq\prod_{i\in\bbZ_{\ge1}}i^{m_i(\lambda)} m_i(\lambda) !
\quad \text{ with } \quad
m_i(\lambda)\seteq \#\{1\le j\le \ell(\lambda) \mid \lambda_j=i \}.
\end{align*}
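The quantities $z_\lambda$ and the inner product \eqref{eq:inner} are straightforward to compute; a minimal Python sketch (function names are ours, for illustration only):

```python
from math import factorial, prod
from collections import Counter

def z(lam):
    """z_lambda = prod_i i^{m_i(lambda)} * m_i(lambda)!"""
    return prod(i**m * factorial(m) for i, m in Counter(lam).items())

def inner_p(lam, mu, b):
    """<p_lambda, p_mu>_b = delta_{lambda,mu} z_lambda b^{l(lambda)}"""
    return z(lam) * b**len(lam) if tuple(lam) == tuple(mu) else 0

print(z((2, 1)), z((1, 1, 1)), z((3, 3)))  # 2 6 18
```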
Then the (monic) Jack symmetric function $P_{\lambda}^{(b)}$ is determined
uniquely by the following two conditions:
\begin{description}
\item[(i)]
It has an expansion via the monomial symmetric functions $m_\mu$ in the form
\begin{align*}
P_{\lambda}^{(b)}
=m_\lambda+\sum_{\mu<\lambda}c_{\lambda,\mu}(b)m_\mu.
\end{align*}
Here $c_{\lambda,\mu}(b)\in\bbQ(b)$ and
the ordering $<$ among the partitions is the dominance partial ordering.
\item[(ii)]
The family of Jack symmetric functions is an orthogonal basis
of $\Lambda_{\bbQ(b)}$ with respect to $\left<\cdot,\cdot\right>_b$:
\begin{align*}
\prd{P_\lambda^{(b)},P_\mu^{(b)}}_b=0 \quad
\text{ if } \lambda\neq\mu.
\end{align*}
\end{description}
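In degree $2$ the two conditions can be solved explicitly: $P^{(b)}_{(1,1)}=m_{(1,1)}$ and $P^{(b)}_{(2)}=m_{(2)}+\frac{2}{1+b}m_{(1,1)}$ (a classical formula, quoted here only for illustration). Writing both in the power-sum basis via $m_{(2)}=p_2$ and $m_{(1,1)}=(p_1^2-p_2)/2$, orthogonality can be checked with exact rational arithmetic:

```python
from fractions import Fraction as F

def inner(f, g, b):
    # <.,.>_b in degree 2: <p_2, p_2> = z_(2) b = 2b, <p_{1,1}, p_{1,1}> = 2b^2
    zb = {(2,): 2 * b, (1, 1): 2 * b**2}
    return sum(f[lam] * g[lam] * zb[lam] for lam in zb)

for b in (F(2, 3), F(5), F(7, 2)):
    # coefficients on the power sums {p_2, p_{1,1}}
    P2  = {(2,): b / (1 + b), (1, 1): F(1) / (1 + b)}  # m_2 + 2/(1+b) m_{1,1}
    P11 = {(2,): F(-1, 2), (1, 1): F(1, 2)}            # m_{1,1}
    assert inner(P2, P11, b) == 0  # condition (ii): orthogonality
```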
\subsection{Main Theorem}\label{subsec:mainthm}
Finally we can state our main theorem.
Consider the Verma module $M_{h}$ of the Virasoro algebra $\Vir_c$
with generic complex numbers $c$ and $h$.
Let $a$ be an arbitrary complex number,
and let $v_G$ be an element of the Verma module $M_{h}$ such that
\[L_1 v_G=a v_G,\quad L_n v_G=0\ (n\ge 2).\]
Then $v_G$ exists uniquely up to scalar multiplication
(see Fact \ref{fct:G:ue}).
Introduce the complex numbers $\rho$, $\alpha$ and $\beta$ by the relations
\begin{align*}
c=1-12\rho^2,\quad
h=\dfrac{1}{2}\alpha(\alpha-2\rho),\quad
\rho=\dfrac{\beta^{1/2}-\beta^{-1/2}}{\sqrt{2}}.
\end{align*}
Then by the Feigin-Fuchs bosonization $\psi:M_{h}\to \calF_\alpha$
\eqref{eq:FF:psi}
and the isomorphism
$\iota_\beta:\calF_\alpha \to \Lambda_{\bbC}$
\eqref{eq:iota},
one has an element $\iota_\beta \circ \psi (v_{G})\in\Lambda_\bbC$.
\begin{thm*}
We have
\begin{align}\label{eq:thm:expand}
\iota_\beta \circ \psi (v_{G})=
\sum_{\lambda\in\calP}
a^{|\lambda|} c_\lambda(\alpha,\beta) P_{\lambda}^{(\beta^{-1})},
\end{align}
where $\lambda$ runs over all the partitions and
the coefficient $c_\lambda(\alpha,\beta)$ is given by
\begin{align}\label{eq:thm}
\begin{split}
&c_\lambda(\alpha,\beta)\\
&=\prod_{(i,j)\in\lambda}\dfrac{1}{\lambda_i-j+1+\beta(\lambda_j'-i)}
\prod_{(i,j)\in\lambda}
\dfrac{\beta}{(j+1)+\sqrt{2}\beta^{1/2} \alpha -(i+1)\beta}.
\end{split}
\end{align}
(See \S \ref{subsec:partition} for the symbol ``$(i,j)\in\lambda$".)
\end{thm*}
The proof of this theorem will be given in \S \ref{subsec:coh}.
In the main theorem above, the element $v_G$ is the Whittaker vector
associated to the degenerate Lie algebra homomorphism
$\eta:\Vir_{c,+}\to \bbC$, that is, $\eta(L_2)=0$.
We shall call this element the ``Gaiotto state", following \cite{AY:2009}.
A general theory of Whittaker vectors usually assumes the non-degeneracy
of the homomorphism $\eta$, i.e., $\eta(L_1)\neq0$ and $\eta(L_2)\neq0$.
This non-degenerate case will be treated in Proposition \ref{prp:rec2},
although there seems to be no factored expression for the coefficients
like \eqref{eq:thm}.
The content of this paper is as follows.
In \S \ref{sec:vir} we recall the split expression
of the Calogero-Sutherland Hamiltonian,
which is a key point in our proof.
In \S \ref{sec:whit} we investigate the Whittaker vectors
in terms of symmetric functions.
The main theorem will be proved in \S \ref{subsec:coh},
using some combinatorial identities shown in \S \ref{sec:id}.
The Whittaker vector with respect to the non-degenerate homomorphism
will be treated in \S \ref{subsec:wit}.
In the final \S \ref{sec:rmk} we give some remarks on possible
generalisations and the related works.
We also include Appendix \ref{sec:AGT} concerning the AGT relation
and its connection to our argument.
\section{Preliminaries on Jack symmetric functions and bosonized Calogero-Sutherland Hamiltonian}
\label{sec:vir}
This section is a preliminary for the proof of the main theorem.
We need the following Definition \ref{dfn:fE} and Proposition \ref{prp:split}:
\begin{dfn}\label{dfn:fE}
(1)
Let $\lambda$ be a partition and $b,\beta$ be generic complex numbers.
Define $f_{\lambda}^{(b,\beta)} \in \calF_{\alpha}$
to be the element such that
\begin{align}\label{eq:flbb}
\iota_{\beta} (f_{\lambda}^{(b,\beta)})=P_\lambda^{(b)},
\end{align}
where $\iota_{\beta}$ is the isomorphism given in \eqref{eq:iota}.
(2)
For a complex number $\beta$, define an element of $\widehat{U}(\calH)$ by
\begin{align}\label{eq:E:split}
\widehat{E}_{\beta}
=\sqrt{2 \beta} \sum_{n>0}a_{-n} \calL_n
+\sum_{n>0}a_{-n}a_n\left(\beta-1-\sqrt{2\beta}a_0\right).
\end{align}
Here $\calL_n\in\widehat{U}(\calH)$ is the bosonized Virasoro generator
\eqref{eq:FF}, and we have imposed the relation
\begin{align}\label{eq:prp:split:1}
\rho=(\beta^{1/2}-\beta^{-1/2})/\sqrt{2}.
\end{align}
\end{dfn}
\begin{prp}\label{prp:split}
For a generic complex number $\beta$ we have
\begin{align}
\label{eq:eigen:Ef}
&\widehat{E}_{\beta} f_{\lambda}^{(\beta^{-1},\beta)}
=\ep_{\lambda}(\beta)f_{\lambda}^{(\beta^{-1},\beta)},\\
\label{eq:eigen:E}
&\ep_{\lambda}(\beta)\seteq \sum_i (\lambda_i^2+\beta(1-2i)\lambda_i),
\end{align}
for any partition $\lambda$.
\end{prp}
The proof of this proposition is rather involved,
since we must work with Jack symmetric polynomials in finitely many variables.
\subsection{Jack symmetric polynomials}
Recall that in \S \ref{subsec:Fs} we denoted by $\Lambda_N$
the space of symmetric polynomials of $N$ variables,
and by $\Lambda_N^d$ its degree $d$ homogeneous subspace.
In order to denote $N$-variable symmetric polynomials,
we put the superscript ``$(N)$" on the symbols
for the infinite-variable symmetric functions.
For example, we denote
by $p_\lambda^{(N)}(x)\seteq
p_{\lambda_1}^{(N)}(x) p_{\lambda_2}^{(N)}(x) \cdots$
the product of the power sum polynomials
$p_k^{(N)}(x)\seteq \sum_{i=1}^N x_i^k$,
and by $m_\lambda^{(N)}(x)$
the monomial symmetric polynomial.
Let us fix $N\in\bbZ_{\ge1}$ and an indeterminate
\footnote{In the literature this indeterminate is usually denoted by
$\beta=\alpha^{-1}$, and we will also identify it with our $\beta$ given in
\eqref{eq:prp:split:1} later.
But at this moment we don't use it to avoid confusion.}
$t$.
For a partition $\lambda$ with $\ell(\lambda)\le N$,
the $N$-variable Jack symmetric polynomial
$P_\lambda^{(N)}(x;t)$ is
uniquely specified by the following two properties.
\begin{description}
\item[(i)]
\begin{align*}
P_\lambda^{(N)}(x;t)
=m_\lambda^{(N)}(x)
+\sum_{\mu<\lambda}\widetilde{c}_{\lambda,\mu}(t)m_\mu^{(N)}(x),
\quad
\widetilde{c}_{\lambda,\mu}(t)\in\bbQ(t).
\end{align*}
\item[(ii)]
\begin{align}
\label{eq:eigeneq}
&H_t^{(N)} P_\lambda^{(N)}(x;t)
=\ep_{\lambda}^{(N)}(t) P_\lambda^{(N)}(x;t),
\\
\label{eq:HCS}
&\quad
H_{t}^{(N)}\seteq
\sum_{i=1}^{N}\big(x_i\dfrac{\partial}{\partial x_i}\big)^2
+t\sum_{1\le i<j\le N}\dfrac{x_i+x_j}{x_i-x_j}
(x_i\dfrac{\partial}{\partial x_i}-x_j\dfrac{\partial}{\partial x_j}),
\\
&\quad
\ep_{\lambda}^{(N)}(t)\seteq \sum_i (\lambda_i^2+t(N+1-2i)\lambda_i).
\label{eq:ep:N}
\end{align}
\end{description}
The differential operator \eqref{eq:HCS} is known
to be equivalent to the Calogero-Sutherland Hamiltonian
(see \cite[\S 2]{AMOS:1995} for a detailed explanation).
In (i) we used the dominance partial ordering on the partitions.
If $N\ge d$, then $\{P_\lambda^{(N)}(x;t)\}_{\lambda\vdash d}$ is a basis
of $\Lambda_{N,\bbQ(t)}^{d}$.
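The eigenvalue equation \eqref{eq:eigeneq} can be tested in the smallest nontrivial case $N=3$, $\lambda=(2)$, where $P^{(3)}_{(2)}(x;t)=m_{(2)}+\frac{2t}{1+t}m_{(1,1)}$ and $\ep^{(3)}_{(2)}(t)=4+4t$. The following sketch (ours; the Jack coefficient $2t/(1+t)$ is the classical degree-$2$ value) evaluates both sides at sample points with exact arithmetic, using the closed-form first derivatives of this explicit quadratic:

```python
from fractions import Fraction as F

def check(t, pt):
    """Verify H_t^{(3)} P = (4+4t) P at the point pt, for
    P = P_(2)^{(3)}(x;t) = m_2 + c*m_11 with c = 2t/(1+t)."""
    c = 2 * t / (1 + t)
    x1, x2, x3 = pt
    m2, m11 = x1*x1 + x2*x2 + x3*x3, x1*x2 + x2*x3 + x3*x1
    P = m2 + c * m11
    # (x_i d/dx_i)^2 multiplies a monomial x^a by a_i^2, so the Euler part is:
    euler = 4 * m2 + 2 * c * m11
    # first derivatives of P
    d = {1: 2*x1 + c*(x2 + x3), 2: 2*x2 + c*(x1 + x3), 3: 2*x3 + c*(x1 + x2)}
    x = {1: x1, 2: x2, 3: x3}
    cross = sum((x[i] + x[j]) * (x[i]*d[i] - x[j]*d[j]) / (x[i] - x[j])
                for i, j in [(1, 2), (1, 3), (2, 3)])
    assert euler + t * cross == (4 + 4*t) * P

for t in (F(1), F(2, 5), F(9)):
    for pt in [(1, 2, 3), (2, 5, -1)]:  # points with distinct coordinates
        check(t, pt)
```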
\begin{dfn}
For $M\ge N$,
we denote the restriction map from $\Lambda_M$ to $\Lambda_N$ by
\begin{align*}
\begin{array}{ c c c c}
\rho_{M,N}: &\Lambda_M &\to &\Lambda_N
\\
&\rotatebox{90}{$\in$} & &\rotatebox{90}{$\in$}
\\
& f(x_1,\ldots,x_M)&\mapsto & f(x_1,\ldots,x_N,0,\ldots,0),
\end{array}
\end{align*}
and the induced restriction map from $\Lambda$ to $\Lambda_N$ by
\begin{align*}
\begin{array}{ c c c c}
\rho_N: &\Lambda &\to &\Lambda_N.
\\
&\rotatebox{90}{$\in$} & &\rotatebox{90}{$\in$}
\\
& f(x_1,x_2,\ldots)&\mapsto & f(x_1,\ldots,x_N,0,\ldots).
\end{array}
\end{align*}
We denote the maps on the tensored spaces
$\Lambda_{M,\bbC} \to \Lambda_{N,\bbC}$ and
$\Lambda_\bbC \to \Lambda_{N,\bbC}$
by the same symbols $\rho_{M,N}$ and $\rho_N$.
\end{dfn}
\begin{fct}
For any $\lambda\in\calP$, every $N\in\bbZ_{\ge1}$ with $N\ge\ell(\lambda)$,
and any generic $t\in\bbC$ we have
\begin{align*}
\rho_{N} \big(P_{\lambda}^{(t^{-1})}\big) = P_\lambda^{(N)}(x;t).
\end{align*}
\end{fct}
\subsection{Split form of the Calogero-Sutherland Hamiltonian}
We recall the collective field method in the Calogero-Sutherland model
following \cite[\S 3]{AMOS:1995}.
Recall that the Calogero-Sutherland Hamiltonian $H_t^{(N)}$ \eqref{eq:HCS}
acts on the space of symmetric polynomials $\Lambda_{N,\bbQ(t)}$.
\begin{fct}\label{fct:split}
(1)
Let $t,t'$ be non-zero complex numbers.
Define an element of $\widehat{U}(\calH)$ by
\begin{align*}
\widehat{H}_{t,t'}^{(N)}\seteq
\sum_{n,m>0}\Big(t' a_{-m-n}a_m a_n
&+\dfrac{t}{t'}a_{-m}a_{-n}a_{m+n}\Big)
\\
&+\sum_{n>0}\left(n(1-t)+N t\right) a_{-n}a_n.
\end{align*}
Then for any $v\in\calF_\alpha$ and every $N\in\bbZ_{\ge1}$ we have
\begin{align}\label{eq:fct:H}
\rho_{N}\circ\iota_{t} (\widehat{H}_{t,t'}^{(N)} v)
=H_{t}^{(N)}\big( \rho_{N}\circ \iota_{t}(v)\big).
\end{align}
(2)
Under the relation
\[\rho=\big(t^{1/2}-t^{-1/2}\big)/\sqrt{2}\]
we have
\begin{align}\label{eq:fct:split:2}
\widehat{H}_{t,\sqrt{t/2}}^{(N)}
=\sqrt{2 t} \sum_{n>0}a_{-n} \calL_n
+\sum_{n>0}a_{-n}a_n\left(N t+t-1-\sqrt{2 t} a_0\right).
\end{align}
Here $\calL_n\in\widehat{U}(\calH)$ is the bosonized Virasoro generator
\eqref{eq:FF}.
\end{fct}
\begin{proof}
These are well-known results
(for example, see \cite[Prop. 4.47]{S:2003}, \cite{AMOS:1995}
and the references therein).
We only sketch the proof.
As for (1),
note that $\{a_{-\lambda}\ket{\alpha}_{\calF}\mid \lambda\in\calP\}$
is a basis of $\calF_\alpha$.
So it is enough to show \eqref{eq:fct:H} for each $\lambda$.
One can calculate the left hand side using the commutation
relation of $\calH$ only.
On the right hand side,
one may use $\iota_t(a_{-\lambda}\ket{\alpha}_{\calF})\propto p_\lambda$,
and calculate it using the expression \eqref{eq:HCS}.
(2) is proved by direct calculation.
\end{proof}
\begin{rmk}
The form \eqref{eq:fct:split:2} is called the split expression in \cite[\S 1]{SKAO:1996}.
\end{rmk}
\subsection{Proof of Proposition \ref{prp:split}}
By Fact \ref{fct:split} we have
the left commuting diagram in \eqref{eq:comm:diag}.
Note that we set the parameters $t$ and $t'$
in $\widehat{H}_{t,t'}^{(N)}$
to be $\beta$ and $\sqrt{\beta/2}$,
so that we may use Fact \ref{fct:split} (2).
In the right diagram of \eqref{eq:comm:diag} we show how
the element $f_\lambda^{(b,\beta)}\in\calF_\alpha$
given in \eqref{eq:flbb} behaves under the maps indicated
in the left diagram.
Here we set the parameter $b$ to be $\beta^{-1}$ so that
$\iota_\beta f_\lambda^{(\beta^{-1},\beta)}
=P_\lambda^{(\beta^{-1})} \in\Lambda_\bbC$
and
$\rho_N\circ\iota_\beta f_\lambda^{(\beta^{-1},\beta)}
=P_\lambda^{(N)}(x;\beta) \in\Lambda_{N,\bbC}$.
In the bottom row we used
the eigenvalue equation \eqref{eq:eigeneq} of the Jack symmetric polynomials.
\begin{align}
\xymatrix{
\calF_\alpha
\ar[r]^{\widehat{H}_{\beta,\sqrt{\beta/2}}^{(N)}}
\ar[d]_{\iota_\beta}^{\rotatebox{90}{$\sim$}}
& \calF_\alpha
\ar[d]^{\iota_\beta}_{\rotatebox{90}{$\sim$}}
& f_\lambda^{(\beta^{-1},\beta)}
\ar@{|->}[d]
\ar@{|->}[r]
& \widehat{H}_{\beta,\sqrt{\beta/2}}^{(N)}
\big(f_\lambda^{(\beta^{-1},\beta)} \big)
\ar@{|->}[d]
\\
\Lambda_\bbC
\ar[d]_{\rho_N}
\ar@{}[r]|{\circlearrowright}
& \Lambda_\bbC
\ar[d]^{\rho_N}
& P_\lambda^{(\beta^{-1})}
\ar@{|->}[d]
& \iota_\beta\circ\widehat{H}_{\beta,\sqrt{\beta/2}}^{(N)}
\big(f_\lambda^{(\beta^{-1},\beta)}\big)
\ar@{|->}[d]
\\
\Lambda_{N,\bbC}
\ar[r]_{H_{\beta}^{(N)}}
& \Lambda_{N,\bbC}
& P_\lambda^{(N)}(x;\beta)
\ar@{|->}[r]
& P_\lambda^{(N)}(x;\beta)\cdot \ep_\lambda^{(N)}(\beta)
}
\label{eq:comm:diag}
\end{align}
Since this diagram holds for every $N$ with $N\ge\ell(\lambda)$, we have
\begin{align*}
\widehat{H}_{\beta,\sqrt{\beta/2}}^{(N)} f_\lambda^{(\beta^{-1},\beta)}
=\ep_\lambda^{(N)}(\beta) f_\lambda^{(\beta^{-1},\beta)}.
\end{align*}
Therefore we have
\begin{align*}
&
\big[\sqrt{2 \beta} \sum_{n>0}a_{-n} \calL_n
+\sum_{n>0}a_{-n}a_n\left(N \beta+\beta-1-\sqrt{2 \beta} a_0\right)\big]
f_\lambda^{(\beta^{-1},\beta)}
\\
&=f_\lambda^{(\beta^{-1},\beta)}\cdot
\sum_i (\lambda_i^2+\beta(N+1-2i)\lambda_i).
\end{align*}
Subtracting the $N$-dependent term $N\beta|\lambda|$ from both sides
(note that $\sum_{n>0}a_{-n}a_n$ acts on $f_\lambda^{(\beta^{-1},\beta)}$
as multiplication by $|\lambda|$),
we obtain the desired statement of Proposition \ref{prp:split}.
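In formulas: $\widehat{H}^{(N)}_{\beta,\sqrt{\beta/2}}=\widehat{E}_\beta+N\beta\sum_{n>0}a_{-n}a_n$, so the eigenvalues satisfy $\ep_\lambda(\beta)=\ep^{(N)}_\lambda(\beta)-N\beta|\lambda|$, independently of $N$. A quick numerical confirmation of this relation (an aside, not part of the proof):

```python
from fractions import Fraction as F

def eps_N(lam, t, N):
    """eps_lambda^{(N)}(t) of (eq:ep:N)."""
    return sum(l*l + t*(N + 1 - 2*i)*l for i, l in enumerate(lam, start=1))

def eps(lam, t):
    """eps_lambda(t) of (eq:eigen:E)."""
    return sum(l*l + t*(1 - 2*i)*l for i, l in enumerate(lam, start=1))

t = F(5, 3)
for lam in [(2,), (1, 1), (3, 2, 2), (4, 1)]:
    for N in (3, 5, 8):
        assert eps_N(lam, t, N) - N * t * sum(lam) == eps(lam, t)
```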
\section{Whittaker vectors}\label{sec:whit}
Recall the notion of the Whittaker vector for a finite dimensional
Lie algebra $\frkg$ given in \cite{K:1978}.
Let $\frkn$ be a maximal nilpotent Lie subalgebra of $\frkg$
and $\eta:\frkn\to\bbC$ be a homomorphism.
Let $V$ be any $U(\frkg)$-module.
Then a vector $w\in V$ is called a Whittaker vector with respect to $\eta$ if
$x w=\eta(x)w$ for all $x\in\frkn$.
We shall discuss an analogue of this Whittaker vector
in the Virasoro algebra $\Vir_c$.
In the triangular decomposition
$\Vir_{c}=\Vir_{c,+}\oplus\Vir_{c,0}\oplus\Vir_{c,-}$,
the elements $L_1,L_2\in\Vir_{c,+}$ generate $\Vir_{c,+}$.
Thus if we take $\Vir_{c,+}$ as the $\frkn$ in the above definition,
what we should consider is a homomorphism $\eta:\Vir_{c,+}\to\bbC$,
which is determined by $\eta(L_1)$ and $\eta(L_2)$.
In \cite{OW:2009}, a characterisation of Whittaker vectors
in general $U(\Vir)$-modules is given
under the assumption that $\eta$ is non-degenerate,
i.e. $\eta(L_1)\neq0$ and $\eta(L_2)\neq0$.
In this section we shall express Whittaker vectors
in the Verma module $M_{h}$ using Jack symmetric functions.
Before starting the general treatment,
we first investigate a degenerate version of the Whittaker vector,
i.e. we assume $\eta(L_2)=0$.
We will call this vector the Gaiotto state of the Virasoro algebra,
although the paper \cite{G:2009} treated both degenerate and
non-degenerate Whittaker vectors.
\subsection{Gaiotto state via Jack polynomials}\label{subsec:coh}
\begin{dfn}
Fix a non-zero complex number $a$.
Let $v_G$ be a non-zero element of the Verma module $M_{h}$ satisfying
\begin{align*}
L_1 v_G=a v_G,\quad
L_n v_G=0\ (n\in\bbZ_{\ge 2}).
\end{align*}
We call such an element $v_G$ a Gaiotto state of $M_{h}$.
\end{dfn}
\begin{fct}\label{fct:G:ue}
Assume that $c$ and $h$ are generic.
Then $v_G$ exists uniquely up to constant multiplication.
\end{fct}
\begin{proof}
This statement is shown in \cite{OW:2009}.
\end{proof}
\begin{lem}
Decompose a Gaiotto state $v_G$ as in \eqref{eq:L0weightdecomp}:
\begin{align*}
v_G=\sum_{n\in\bbZ_{\ge 0}}a^n v_{G,n},\quad
v_{G,n}\in M_{h,n}.
\end{align*}
Then we have
\begin{align}\label{eq:cond:g}
v_{G,n}=L_1 v_{G,n+1} \quad (n\in\bbZ_{\ge 0}).
\end{align}
\end{lem}
\begin{proof}
This follows from the commutation relation $[L_0,L_1]=-L_1$.
\end{proof}
Now consider the bosonized Gaiotto state
\begin{align*}
w_{G,n}\seteq \psi(v_{G,n})\in\calF_{\alpha,n}
\end{align*}
where $\psi: M_h \to \calF_{\alpha}$
is the Feigin-Fuchs bosonization \eqref{eq:FF:psi}
and $\calF_{\alpha,n}$ is the degree $n$ subspace in \eqref{eq:F:deg}.
At this moment the Heisenberg parameters $\rho,\alpha$
are related to the Virasoro parameters $c,h$ by the relations
\begin{align*}
c=1-12\rho^2,\quad
h=\dfrac{1}{2}\alpha(\alpha-2\rho).
\end{align*}
From the condition \eqref{eq:cond:g} we have
\begin{align}\label{eq:cond:wg}
\calL_1 w_{G,n+1}\in \calF_{\alpha,n},\quad w_{G,n}=\calL_1 w_{G,n+1}.
\end{align}
Next we map this bosonized state $w_{G,n}$ into a symmetric function
by the isomorphism $\iota_\beta:\calF_{\alpha}\to\Lambda_\bbC$ \eqref{eq:iota}:
\begin{align*}
\iota_\beta(w_{G,n})=\iota_\beta\circ\psi(v_{G,n})\in\Lambda_\bbC^n.
\end{align*}
Here $\Lambda_\bbC^n$ is the space of degree $n$ symmetric functions.
We take the parameter $\beta$ so that
the Heisenberg parameter $\rho$ is expressed by
\begin{align*}
\rho=(\beta^{1/2}-\beta^{-1/2})/\sqrt{2}.
\end{align*}
Recall also that the family of Jack symmetric functions
$\{P_\lambda^{(\beta^{-1})} \mid \lambda\vdash n\}$
is a basis of $\Lambda_{\bbC}^n$ for a generic $\beta\in\bbC$.
Thus we can expand $\iota_\beta(w_{G,n})\in\Lambda_\bbC^n$
by $P_\lambda^{(\beta^{-1})}$'s.
Let us express this expansion as:
\begin{align}\label{eq:expand}
\iota_{\beta}(w_{G,n})=\iota_{\beta}\circ\psi(v_{G,n})
=\sum_{\lambda\vdash n}c_\lambda(\alpha,\beta)P_{\lambda}^{(\beta^{-1})},\quad
c_\lambda(\alpha,\beta)\in\bbC.
\end{align}
Note that this expansion is equivalent to
\begin{align}\label{eq:expand:Fock}
w_{G,n}
=\sum_{\lambda\vdash n}c_\lambda(\alpha,\beta)f_{\lambda}^{(\beta^{-1},\beta)}
\in \calF_\alpha
\end{align}
by \eqref{eq:flbb}.
Now the correspondence of the parameters becomes:
\begin{align}\label{eq:param}
c=1-12\rho^2,\quad
h=\dfrac{1}{2}\alpha(\alpha-2\rho),\quad
\rho=\dfrac{\beta^{1/2}-\beta^{-1/2}}{\sqrt{2}}.
\end{align}
The main result of this paper is
\begin{thm}\label{thm:G}
Assume that $c$ and $h$ are generic.
(Then $v_G$ exists uniquely up to constant multiplication by Fact \ref{fct:G:ue}.)
If $c_{\emptyset}(\alpha,\beta)$ is set to be $1$
in the expansion \eqref{eq:expand},
then the other coefficients are given by
\begin{align}\label{eq:thm:G}
\begin{split}
c_\lambda(\alpha,\beta)
=\prod_{(i,j)\in\lambda}
&\dfrac{1}{\lambda_i-j+1+\beta(\lambda_j'-i)}
\\
&\times
\prod_{(i,j)\in\lambda}
\dfrac{\beta}{(j+1)+\sqrt{2} \beta^{1/2}\alpha -(i+1)\beta}.
\end{split}
\end{align}
Here we used the notation $(i,j)\in\lambda$ as explained in \S \ref{subsec:partition}.
\end{thm}
\subsection{Proof of Theorem \ref{thm:G}}
Before starting the proof, we need to prepare the following
Proposition \ref{prp:rec}.
Recall the Pieri formula for the Jack symmetric functions.
We only need the case of ``adding one box", that is,
multiplication by the degree one power sum $p_1=x_1+x_2+\cdots$.
\begin{dfn}\label{dfn:<_k}
For partitions $\mu$ and $\lambda$,
we denote $\mu<_k\lambda$ if $|\mu|=|\lambda|-k$ and $\mu\subset\lambda$.
\end{dfn}
\begin{fct}[{\cite[p.340 VI (6.24), p.379 VI (10.10)]{M:1995:book}}]
We have
\begin{align}
\label{eq:pieri}
& p_1 P_\mu^{(\beta^{-1})}
=\sum_{\lambda>_1\mu} \psi_{\lambda/\mu}'(\beta) P_\lambda^{(\beta^{-1})},
\\
&\label{eq:pieri:coeff}
\psi_{\lambda/\mu}'(\beta)\seteq
\prod_{i=1}^{I-1}
\dfrac{\lambda_i-\lambda_I+\beta(I-i+1)}{\lambda_i-\lambda_I+1+\beta(I-i)}
\dfrac{\lambda_i-\lambda_I+1+\beta(I-i-1)}{\lambda_i-\lambda_I+\beta(I-i)}.
\end{align}
In the expression in \eqref{eq:pieri:coeff}
the partitions $\lambda$ and $\mu$ are related by $\lambda_I=\mu_I+1$ and
$\lambda_i=\mu_i$ for $i\neq I$.
\end{fct}
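The coefficient \eqref{eq:pieri:coeff} is easy to implement; two consistency checks (ours, not part of the argument): at $\beta=1$ the Jack functions degenerate to Schur functions, whose $p_1$-Pieri coefficients all equal $1$, and for $\lambda=(1,1)$, $\mu=(1)$ the formula evaluates to $2/(1+\beta)$:

```python
from fractions import Fraction as F

def psi1(lam, I, beta):
    """psi'_{lambda/mu}(beta) of (eq:pieri:coeff); mu is lam with the
    I-th part (1-based) decreased by one."""
    r = F(1)
    lI = lam[I - 1]
    for i in range(1, I):
        a = lam[i - 1] - lI
        r *= (a + beta*(I - i + 1)) / (a + 1 + beta*(I - i))
        r *= (a + 1 + beta*(I - i - 1)) / (a + beta*(I - i))
    return r

beta = F(3, 4)
assert psi1((1, 1), 2, beta) == 2 / (1 + beta)
# Schur degeneration: all p_1-Pieri coefficients are 1 at beta = 1
for lam, I in [((3, 2, 1), 3), ((4, 3, 1), 2), ((2, 2), 2)]:
    assert psi1(lam, I, F(1)) == 1
```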
\begin{prp}\label{prp:rec}
$c_\lambda(\alpha,\beta)$ satisfies the following recursion relation:
\begin{align}\label{eq:prp:rec}
\left(\ep_{\lambda}(\beta)+|\lambda|(1+\sqrt{2\beta}\alpha-\beta)\right)
c_\lambda(\alpha,\beta)
=\beta \sum_{\mu<_1\lambda}\psi_{\lambda/\mu}'(\beta)c_\mu(\alpha,\beta).
\end{align}
Here the function $\ep_{\lambda}(\beta)$ is given in \eqref{eq:eigen:E}.
\end{prp}
\begin{proof}
We will calculate $\widehat{E}_\beta w_{G,n}\in\calF_\alpha$ in two ways.
By comparing both expressions we obtain the recursion relation.
First, by the definition of $\widehat{E}_\beta$ given in \eqref{eq:E:split}
and by the condition \eqref{eq:cond:wg} of $w_{G,n}$ we have
\begin{align*}
\widehat{E}_\beta w_{G,n}
&=\Big[\sqrt{2 \beta} \sum_{m\ge 1}a_{-m}\calL_m
+\sum_{m\ge 1} a_{-m}a_m(\beta-1-\sqrt{2 \beta}a_0)
\Big]w_{G,n}\\
&=\Big[\sqrt{2 \beta} a_{-1}\calL_1
+\sum_{m\ge 1} a_{-m}a_m(\beta-1-\sqrt{2 \beta}a_0)
\Big]w_{G,n}\\
&=\sqrt{2\beta}a_{-1}w_{G,n-1}+n(\beta-1-\sqrt{2\beta}\alpha)
w_{G,n} \in \calF_\alpha.
\end{align*}
Now applying the isomorphism $\iota_\beta:\calF_\alpha\to\Lambda_\bbC$
on both sides and
substituting $w_{G,n}$ and $w_{G,n-1}$ by their expansions \eqref{eq:expand},
we have
\begin{align*}
\iota_\beta(\widehat{E}_\beta w_{G,n})
= \beta p_{1}
&\sum_{\mu\vdash n-1} c_{\mu}(\alpha,\beta)P_{\mu}^{(\beta^{-1})}
\\
&+n(\beta-1-\sqrt{2\beta}\alpha)
\sum_{\lambda\vdash n}c_\lambda(\alpha,\beta) P_{\lambda}^{(\beta^{-1})}
\in \Lambda_\bbC.
\end{align*}
Using the Pieri formula \eqref{eq:pieri} in the first term, we have
\begin{align}\label{eq:prp:rec:1}
\begin{split}
\iota_\beta(\widehat{E}_\beta w_{G,n})
=&\beta \sum_{\mu\vdash n-1}c_{\mu}(\alpha,\beta)
\sum_{\lambda>_1\mu}\psi'_{\lambda/\mu}(\beta) P_{\lambda}^{(\beta^{-1})}\\
&+n(\beta-1-\sqrt{2\beta}\alpha)\sum_{\lambda\vdash n}
c_\lambda(\alpha,\beta) P_\lambda^{(\beta^{-1})}.
\end{split}
\end{align}
Next, by \eqref{eq:expand:Fock} and by \eqref{eq:eigen:Ef} we have
\begin{align*}
\widehat{E}_\beta w_{G,n}
=\widehat{E}_\beta \sum_{\lambda\vdash n}c_\lambda(\alpha,\beta)
f_{\lambda}^{(\beta^{-1},\beta)}
=\sum_{\lambda\vdash n}
c_\lambda(\alpha,\beta) \ep_{\lambda}(\beta)
f_{\lambda}^{(\beta^{-1},\beta)}\in\calF_\alpha.
\end{align*}
Therefore we have
\begin{align}\label{eq:prp:rec:2}
\iota_\beta(\widehat{E}_\beta w_{G,n})
=\sum_{\lambda\vdash n}
c_\lambda(\alpha,\beta) \ep_{\lambda}(\beta) P_\lambda^{(\beta^{-1})}
\in\Lambda_\bbC.
\end{align}
Then comparing \eqref{eq:prp:rec:1} and \eqref{eq:prp:rec:2} we have
\begin{align*}
&\sum_{\lambda\vdash n}
\big(\ep_{\lambda}(\beta)+n(1+\sqrt{2\beta}\alpha-\beta)\big)
c_\lambda(\alpha,\beta)
P_\lambda^{(\beta^{-1})}
\\
&=\beta\sum_{\mu\vdash n-1}c_\mu(\alpha,\beta)
\sum_{\lambda>_1\mu}\psi'_{\lambda/\mu}(\beta) P_\lambda^{(\beta^{-1})}
\in\Lambda_{\bbC}^n.
\end{align*}
Since $\{P_\lambda^{(\beta^{-1})} \mid \lambda\vdash n\}$
is a basis of $\Lambda_{\bbC}^n$,
we have
\begin{align*}
\big(\ep_{\lambda}(\beta)+n(1+\sqrt{2\beta}\alpha-\beta)\big)
c_\lambda(\alpha,\beta)
=\beta \sum_{\mu<_1\lambda}c_\mu(\alpha,\beta)\psi'_{\lambda/\mu}(\beta).
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:G}]
The recursion relation \eqref{eq:prp:rec} of Proposition \ref{prp:rec}
determines $c_\lambda(\alpha,\beta)$ uniquely
if we set the value of $c_{\emptyset}(\alpha,\beta)$.
Since the existence and uniqueness of $v_G$ is known by Fact \ref{fct:G:ue},
we only have to show that the ansatz \eqref{eq:thm:G}
satisfies \eqref{eq:prp:rec}.
For partitions $\lambda$ and $\mu$ which are related
by $\lambda_I=\mu_I+1$ and $\lambda_i=\mu_i$ for $i\neq I$,
we have the following two formulas:
\begin{align*}
&\Big[\prod_{(i,k)\in\mu}\dfrac{1}{\mu_i-k+1+\beta(\mu_k'-i)}\Big]
\Big/
\Big[\prod_{(i,k)\in\lambda}\dfrac{1}{\lambda_i-k+1+\beta(\lambda_k'-i)}\Big]
\\
&
=\prod_{i=1}^{I-1}
\dfrac{\lambda_i-\lambda_I+1+\beta(I-i)}
{\lambda_i-\lambda_I+1+\beta(I-1-i)}
\times
\prod_{i=1}^{\lambda_I-1}
\dfrac{\lambda_I-i+1+\beta(\lambda_i'-I)}{\lambda_I-i+\beta(\lambda_i'-I)},
\\
&\Big[\prod_{(i,k)\in\mu}
\dfrac{\beta}{(k+1)+\sqrt{2\beta}\alpha -(i+1)\beta}\Big]
\Big/
\Big[\prod_{(i,k)\in\lambda}
\dfrac{\beta}{(k+1)+\sqrt{2\beta}\alpha -(i+1)\beta}\Big]
\\
&
=\dfrac{(\lambda_I+1)+\sqrt{2\beta}\alpha -(I+1)\beta}{\beta}.
\end{align*}
Substituting the $c_\mu(\alpha,\beta)$ in the right hand side of
\eqref{eq:prp:rec} by the ansatz \eqref{eq:thm:G}
and using the above two equations, we have
\begin{align}
\label{eq:prf:thm:rhs}
\begin{split}
&\text{RHS of \eqref{eq:prp:rec}}
\\
&=
\sum_{(I,\lambda_I)\in C(\lambda)}
\prod_{i=1}^{I-1}
\dfrac{\lambda_i-\lambda_I+\beta(I-i+1)}
{\lambda_i-\lambda_I+\beta(I-i)}
\times
\prod_{i=1}^{\lambda_I-1}
\dfrac{\lambda_I-i+1+\beta(\lambda_i'-I)}{\lambda_I-i+\beta(\lambda_i'-I)}
\\
&\hskip 6em
\times
\left((\lambda_I+1)+\sqrt{2\beta}\alpha -(I+1)\beta\right)
c_\lambda(\alpha,\beta),
\end{split}
\end{align}
where $C(\lambda)$ is the set of boxes $\square$
in the Young diagram of $\lambda$
such that $\lambda\setminus \{\square\}$ is also a partition.
In particular, if $\square=(I,\lambda_I)\in C(\lambda)$, then
$\mu\seteq\lambda\setminus \{\square\}$ is the partition satisfying
$\mu_I=\lambda_I-1$ and $\mu_i=\lambda_i$ for $i\neq I$,
recovering the previous description.
As for the left hand side of \eqref{eq:prp:rec},
we have by \eqref{eq:eigen:E}:
\begin{align}
\label{eq:prf:thm:lhs}
\ep_\lambda(\beta)+|\lambda|(1+\sqrt{2\beta}\alpha-\beta)
=|\lambda|(1+\sqrt{2\beta}\alpha)+\sum_i (\lambda_i^2-2i\lambda_i\beta).
\end{align}
Thus by \eqref{eq:prf:thm:lhs} and \eqref{eq:prf:thm:rhs},
the equation \eqref{eq:prp:rec} under the substitution
\eqref{eq:thm:G} is equivalent to the next one:
\begin{align*}
&|\lambda|(1+\sqrt{2\beta}\alpha)+\sum_i (\lambda_i^2-2i\lambda_i\beta)
\\
&=\sum_{(I,\lambda_I)\in C(\lambda)}
\prod_{i=1}^{I-1}
\dfrac{\lambda_i-\lambda_I+\beta(I-i+1)}
{\lambda_i-\lambda_I+\beta(I-i)}
\times
\prod_{i=1}^{\lambda_I-1}
\dfrac{\lambda_I-i+1+\beta(\lambda_i'-I)}{\lambda_I-i+\beta(\lambda_i'-I)}
\\
&\phantom{=c_\lambda(\alpha,\beta) \sum_{\mu<_1 \lambda}\times }
\times
\left(1+\sqrt{2\beta}\alpha+\lambda_I-(I+1)\beta\right).
\end{align*}
This is verified by Propositions \ref{prp:young:1} and \ref{prp:young:2},
which will be shown in \S \ref{sec:id}.
\end{proof}
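As a numerical aside (not part of the proof), the final identity can be confirmed for concrete partitions by treating $s\seteq\sqrt{2\beta}\alpha$ as an independent rational parameter:

```python
from fractions import Fraction as F

def conj(lam):
    """Conjugate partition lambda'."""
    return [sum(1 for l in lam if l >= j) for j in range(1, lam[0] + 1)]

def identity_holds(lam, beta, s):
    lc = conj(lam)
    lhs = sum(lam) * (1 + s) + sum(l*l - 2*i*l*beta
                                   for i, l in enumerate(lam, start=1))
    rhs = 0
    for I in range(1, len(lam) + 1):
        if I < len(lam) and lam[I - 1] == lam[I]:
            continue  # (I, lam_I) is not a removable box of lambda
        lI = lam[I - 1]
        term = 1 + s + lI - (I + 1) * beta
        for i in range(1, I):
            a = lam[i - 1] - lI
            term *= (a + beta*(I - i + 1)) / (a + beta*(I - i))
        for i in range(1, lI):
            c = lc[i - 1] - I
            term *= (lI - i + 1 + beta*c) / (lI - i + beta*c)
        rhs += term
    return lhs == rhs

beta, s = F(2, 5), F(3, 7)
assert all(identity_holds(lam, beta, s)
           for lam in [(1,), (2, 1), (2, 2), (3, 3, 2)])
```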
\subsection{Non-degenerate Whittaker vector via Jack polynomials}\label{subsec:wit}
\begin{dfn}
Fix non-zero complex numbers $a$ and $b$.
Let $v_W$ be an element of the Verma module $M_{h}$ satisfying
\begin{align*}
L_1 v_W=a v_W,\quad
L_2 v_W=b v_W,\quad
L_n v_W=0\ (n\in\bbZ_{\ge 3}).
\end{align*}
We call such an element $v_W$ a (non-degenerate) Whittaker vector of $M_{h}$.
\end{dfn}
\begin{fct}
Assume that $c$ and $h$ are generic complex numbers.
Then $v_W$ exists uniquely up to scalar multiplication.
\end{fct}
\begin{proof}
This is shown in \cite{OW:2009}.
\end{proof}
\begin{lem}
Let us decompose $v_W$ as
\begin{align*}
v_W=\sum_{n\in\bbZ_{\ge 0}}a^n v_{W,n},\quad
v_{W,n}\in M_{h,n}.
\end{align*}
Then we have
\begin{align*}
&L_1 v_{W,n+1}=v_{W,n},\quad
L_2 v_{W,n+2}=a^{-2} b\cdot v_{W,n}.
\end{align*}
\end{lem}
\begin{proof}
This follows from the commutation relations $[L_0,L_1]=-L_1$ and
$[L_0,L_2]=-2L_2$.
\end{proof}
Now we expand the bosonized Whittaker vector
\begin{align*}
w_{W,n}\seteq \psi(v_{W,n})\in\calF_{\alpha,n}
\end{align*}
by $f_{\lambda}^{(\beta^{-1},\beta)}$'s \eqref{eq:flbb}
and express it as
\begin{align*}
w_{W,n}
=\sum_{\lambda\vdash n}d_\lambda(\alpha,\beta)f_{\lambda}^{(\beta^{-1},\beta)},
\quad d_\lambda(\alpha,\beta)\in\bbC.
\end{align*}
\begin{prp}\label{prp:rec2}
Using the notation $\mu <_k \lambda$ given in Definition \ref{dfn:<_k},
we have the following recursion relation for $d_\lambda(\alpha,\beta)$:
\begin{align}\label{eq:prp:rec2}
\begin{split}
&(\ep_{\lambda}(\beta)+|\lambda|(1+\sqrt{2\beta}\alpha-\beta))
d_\lambda(\alpha,\beta)
\\
&=a^{-2}b\,\beta \sum_{\nu<_2\lambda}
d_\nu(\alpha,\beta){\psi'}_{\lambda/\nu}^{(2)}(\beta)
+\beta \sum_{\mu<_1\lambda}d_\mu(\alpha,\beta)\psi'_{\lambda/\mu}(\beta).
\end{split}
\end{align}
Here ${\psi'}_{\lambda/\nu}^{(2)}(\beta)$ is the coefficient
in the following Pieri formula:
\begin{align*}
p_2 P_{\nu}^{(\beta^{-1})}
=\sum_{\lambda>_2\nu} {\psi'}_{\lambda/\nu}^{(2)}(\beta) P_{\lambda}^{(\beta^{-1})}.
\end{align*}
\end{prp}
\begin{proof}
Similar to the proof of Proposition \ref{prp:rec}.
\end{proof}
\begin{rmk}
The author does not know whether $d_\lambda$ has a good explicit formula,
although $c_\lambda$ has the factored formula \eqref{eq:thm:G}.
\end{rmk}
\section{Combinatorial identities of rational functions}\label{sec:id}
\begin{prp}\label{prp:young:1}
For a partition $\lambda$,
let $C(\lambda)$ be the set of boxes $\square$ of $\lambda$
such that $\lambda\setminus \{\square\}$ is also a partition.
Then
\begin{align}\label{eq:young:1}
\sum_{(I,\lambda_I)\in C(\lambda)}
\prod_{i=1}^{I-1}
\dfrac{\lambda_i-\lambda_I+\beta(I-i+1)}
{\lambda_i-\lambda_I+\beta(I-i)}
\times
\prod_{i=1}^{\lambda_I-1}
\dfrac{\lambda_I-i+1+\beta(\lambda_i'-I)}{\lambda_I-i+\beta(\lambda_i'-I)}
=|\lambda|.
\end{align}
\end{prp}
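Before the proof, \eqref{eq:young:1} can be confirmed numerically with exact rational arithmetic (a sanity check, not used in the argument):

```python
from fractions import Fraction as F

def conj(lam):
    """Conjugate partition lambda'."""
    return [sum(1 for l in lam if l >= j) for j in range(1, lam[0] + 1)]

def sum_over_removable(lam, beta):
    lc = conj(lam)
    total = 0
    for I in range(1, len(lam) + 1):
        if I < len(lam) and lam[I - 1] == lam[I]:
            continue  # box (I, lam_I) is not removable
        lI = lam[I - 1]
        term = F(1)
        for i in range(1, I):
            a = lam[i - 1] - lI
            term *= (a + beta*(I - i + 1)) / (a + beta*(I - i))
        for i in range(1, lI):
            c = lc[i - 1] - I
            term *= (lI - i + 1 + beta*c) / (lI - i + beta*c)
        total += term
    return total

for lam in [(1,), (2,), (2, 1), (3, 3, 2), (5, 3, 3, 1)]:
    assert sum_over_removable(lam, F(2, 7)) == sum(lam)  # = |lambda|
```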
\begin{proof}
Write the partition $\lambda$ in the form
\begin{align}\label{eq:young:1:lambda}
\lambda=(
\stackrel{j_1}{\overbrace{n_1,\ldots,n_1}},
\stackrel{j_2}{\overbrace{n_2,\ldots,n_2}},\ldots,
\stackrel{j_l}{\overbrace{n_l,\ldots,n_l}}).
\end{align}
Then we have
\begin{align}\label{eq:young:1:C}
C(\lambda)=\{(m_1,n_1),(m_2,n_2),\ldots,(m_l,n_l)\}
\end{align}
with $m_k\seteq j_1+\cdots+j_k$ ($k=1,\ldots,l$),
where we used the coordinate $(i,j)$ of Young diagram associated to $\lambda$
as explained in \S \ref{subsec:partition}.
Let us choose an element $\square=(m_k,n_k)$ of $C(\lambda)$,
and calculate the corresponding factor in \eqref{eq:young:1}.
The first product reads
\begin{align*}
&\prod_{1\le i\le m_1}\dfrac{(n_1-n_k)+\beta(m_k-i+1)}{(n_1-n_k)+\beta(m_k-i)}
\prod_{m_1+1\le i\le m_2}
\dfrac{(n_2-n_k)+\beta(m_k-i+1)}{(n_2-n_k)+\beta(m_k-i)}
\\
&\times\cdots\times
\prod_{m_{k-1}+1\le i\le m_{k}-1}\dfrac{m_k-i+1}{m_k-i}
\\
&=\prod_{i=1}^{k-1}
\dfrac{(n_i-n_k)+\beta(m_k-m_{i-1})}{(n_i-n_k)+\beta(m_k-m_i)}
\times(m_k-m_{k-1}).
\end{align*}
Here we used the notation $m_0\seteq0$.
The second product reads
\begin{align*}
&\prod_{1\le j\le n_l}\dfrac{(n_k-j+1)+\beta(m_l-m_k)}{(n_k-j)+\beta(m_l-m_k)}
\prod_{n_l+1\le j\le n_{l-1}}
\dfrac{(n_k-j+1)+\beta(m_{l-1}-m_k)}{(n_k-j)+\beta(m_{l-1}-m_k)}
\\
&\times\cdots\times
\prod_{n_{k+1}+1\le j\le n_{k}-1}
\dfrac{(n_k-j+1)}{(n_k-j)}
\\
&
=(n_k-n_{k+1})\times\prod_{j=k+1}^{l}
\dfrac{(n_k-n_{j+1})+\beta(m_j-m_k)}{(n_k-n_j)+\beta(m_j-m_k)}.
\end{align*}
Here we used the notation $n_{l+1}\seteq0$.
Now let us define
\begin{align*}
&F_1(\{m_k\},\{n_k\},\beta)
\seteq \sum_{k=1}^l F_{1,k}(\{m_k\},\{n_k\},\beta),
\\
&F_{1,k}(\{m_k\},\{n_k\},\beta)
\seteq (m_k-m_{k-1})(n_k-n_{k+1})
\\
&\times
\prod_{i=1}^{k-1}
\dfrac{(n_i-n_k)+\beta(m_k-m_{i-1})}{(n_i-n_k)+\beta(m_k-m_i)}
\prod_{j=k+1}^{l}
\dfrac{(n_k-n_{j+1})+\beta(m_j-m_k)}{(n_k-n_j)+\beta(m_j-m_k)}.
\end{align*}
Then for the proof of \eqref{eq:young:1}
it is enough to show that $F_1$ is equal to $|\lambda|$
if $\{m_k\}$ and $\{n_k\}$ correspond to $\lambda$
as in \eqref{eq:young:1:lambda} and \eqref{eq:young:1:C}.
Hereafter we consider $F_1$ as a rational function of the variables
$\{m_k\}$, $\{n_k\}$ and $\beta$.
As a rational function of $\beta$, $F_1$ has the apparent poles at
$\beta_{j,k}\seteq -(n_j-n_k)/(m_k-m_j)$ ($j=1,2,\ldots,k-1,k+1,\ldots,l$).
We may assume that these apparent poles are mutually different
so that all the poles are at most simple.
Then the residue at $\beta=\beta_{j,k}$ comes from
the summands $F_{1,j}$ and $F_{1,k}$.
Now we may assume $j<k$. Then direct computation yields
\begin{align}
\Res_{\beta=\beta_{j,k}}F_{1,j}
=&\dfrac{(m_j-m_{j-1})(n_j-n_{j+1})(n_k-n_{k+1})}{(m_j-m_k)}
\\
\label{eq:prp:yng:1:j:1}
&\times
\prod_{i=1}^{j-1}
\dfrac{(n_i-n_j)(m_k-m_j)-(n_j-n_k)(m_j-m_{i-1})}
{(n_i-n_j)(m_k-m_j)-(n_j-n_k)(m_j-m_i)}
\\
\label{eq:prp:yng:1:j:2}
&\times
\prod_{i=j+1}^{k-1}
\dfrac{(n_j-n_{i+1})(m_k-m_j)-(n_j-n_k)(m_i-m_j)}
{(n_j-n_i)(m_k-m_j)-(n_j-n_k)(m_i-m_j)}
\\
\label{eq:prp:yng:1:j:3}
&\times
\prod_{i=k+1}^{l}
\dfrac{(n_j-n_{i+1})(m_k-m_j)-(n_j-n_k)(m_i-m_j)}
{(n_j-n_i)(m_k-m_j)-(n_j-n_k)(m_i-m_j)},
\end{align}
and
\begin{align}
\Res_{\beta=\beta_{j,k}}F_{1,k}
=&\dfrac{(m_k-m_{k-1})(n_k-n_{k+1})(n_j-n_k)(m_{j-1}-m_j)}{(m_k-m_j)^2}
\\
\label{eq:prp:yng:1:k:1}
&\times
\prod_{i=1}^{j-1}
\dfrac{(n_i-n_k)(m_k-m_j)-(n_j-n_k)(m_k-m_{i-1})}
{(n_i-n_k)(m_k-m_j)-(n_j-n_k)(m_k-m_i)}
\\
\label{eq:prp:yng:1:k:2}
&\times
\prod_{i=j+1}^{k-2}
\dfrac{(n_i-n_k)(m_k-m_j)-(n_j-n_k)(m_k-m_{i-1})}
{(n_i-n_k)(m_k-m_j)-(n_j-n_k)(m_k-m_i)}
\\
\label{eq:prp:yng:1:k:3}
&\times
\prod_{i=k+1}^{l}
\dfrac{(n_k-n_{i+1})(m_k-m_j)-(n_j-n_k)(m_i-m_k)}
{(n_k-n_i)(m_k-m_j)-(n_j-n_k)(m_i-m_k)}.
\end{align}
Using the identity $(a-b)(x-y)-(c-b)(x-z)=(a-c)(x-y)-(c-b)(y-z)$,
one finds that
the factors \eqref{eq:prp:yng:1:j:1} and \eqref{eq:prp:yng:1:k:1} are equal.
Similarly \eqref{eq:prp:yng:1:j:3} and \eqref{eq:prp:yng:1:k:3} are equal.
Shifting the index $i$ in \eqref{eq:prp:yng:1:j:2}
and using the above identity,
one also finds that
\begin{align*}
\dfrac{\eqref{eq:prp:yng:1:j:2}}{\eqref{eq:prp:yng:1:k:2}}
=-\dfrac{(n_j-n_k)(m_k-m_{k-1})}
{(n_{j+1}-n_j)(m_k-m_j)}.
\end{align*}
Thus we have
\begin{align*}
\dfrac{\Res_{\beta=\beta_{j,k}}F_{1,j}}{\Res_{\beta=\beta_{j,k}}F_{1,k}}
=-\dfrac{(n_j-n_k)(m_k-m_{k-1})}{(n_{j+1}-n_j)(m_k-m_j)}
\dfrac{\eqref{eq:prp:yng:1:j:1}}{\eqref{eq:prp:yng:1:k:1}}
=-1.
\end{align*}
Therefore we have
\begin{align*}
\Res_{\beta=\beta_{j,k}}F_1(\{m_k\},\{n_k\},\beta)=0,
\end{align*}
so that $F_1$ is a polynomial in $\beta$.
Then from the behaviour of $F_1$ in the limit $\beta\to \infty$,
we find that $F_1$ is constant as a function of $\beta$.
This constant can be calculated by setting $\beta=0$, and the result is
\begin{align*}
F_1(\{m_k\},\{n_k\},\beta)
=\sum_{k=1}^l (m_k-m_{k-1})n_k.
\end{align*}
This equals $|\lambda|$ if $\lambda$ is given by \eqref{eq:young:1:lambda}
and $m_k=j_1+\cdots+j_k$,
which is the desired conclusion.
\end{proof}
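The identity \eqref{eq:young:1} can also be checked numerically with exact
rational arithmetic. The following sketch (in Python; the helper names are
ours) evaluates the left hand side for sample partitions and generic rational
values of $\beta$:

```python
from fractions import Fraction

def conjugate(lam):
    """Conjugate partition: lam'_i = #{j : lam_j >= i}."""
    return [sum(1 for p in lam if p >= i) for i in range(1, lam[0] + 1)]

def lhs(lam, beta):
    """Left hand side of the identity: a sum over removable boxes
    (I, lam_I), i.e. over rows I with lam_I > lam_{I+1}."""
    lamc = conjugate(lam)
    total = Fraction(0)
    for I in range(1, len(lam) + 1):
        below = lam[I] if I < len(lam) else 0
        if lam[I - 1] == below:          # box (I, lam_I) not removable
            continue
        term = Fraction(1)
        for i in range(1, I):            # first product, over rows above
            num = lam[i - 1] - lam[I - 1] + beta * (I - i + 1)
            den = lam[i - 1] - lam[I - 1] + beta * (I - i)
            term *= num / den
        for i in range(1, lam[I - 1]):   # second product, over columns
            num = lam[I - 1] - i + 1 + beta * (lamc[i - 1] - I)
            den = lam[I - 1] - i + beta * (lamc[i - 1] - I)
            term *= num / den
        total += term
    return total

# the sum collapses to |lambda| independently of beta
assert lhs([2, 1], Fraction(1, 2)) == 3
```

Since the sum is independent of $\beta$, agreement at a single generic
rational $\beta$ per partition already gives a strong consistency check.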
\begin{prp}\label{prp:young:2}
Using the same notation as in Proposition \ref{prp:young:1}, we have
\begin{align}\label{eq:prp:young:2}
\begin{split}
&\sum_{(I,\lambda_I)\in C(\lambda)}
\prod_{i=1}^{I-1}
\dfrac{\lambda_i-\lambda_I+\beta(I-i+1)}
{\lambda_i-\lambda_I+\beta(I-i)}
\times
\prod_{i=1}^{\lambda_I-1}
\dfrac{\lambda_I-i+1+\beta(\lambda_i'-I)}{\lambda_I-i+\beta(\lambda_i'-I)}
\\
&\phantom{=c_\lambda(\alpha,\beta) \sum_{\mu<_1 \lambda}\times }
\times
\left(\lambda_I-(I+1)\beta\right)
=\sum_i (\lambda_i^2-2i\lambda_i\beta).
\end{split}
\end{align}
\end{prp}
\begin{proof}
As in the proof of Proposition \ref{prp:young:1},
set
$\lambda=(
\stackrel{j_1}{\overbrace{n_1,\ldots,n_1}},
\stackrel{j_2}{\overbrace{n_2,\ldots,n_2}},\ldots,
\stackrel{j_l}{\overbrace{n_l,\ldots,n_l}})$
and $m_k\seteq j_1+\cdots+j_k$ ($k=1,\ldots,l$).
We can write the left hand side of \eqref{eq:prp:young:2} as
\begin{align*}
&F_2(\{m_k\},\{n_k\},\beta)
= \sum_{k=1}^l F_{2,k}(\{m_k\},\{n_k\},\beta),
\\
&F_{2,k}(\{m_k\},\{n_k\},\beta)
\seteq (n_k-(m_k+1)\beta) (m_k-m_{k-1})(n_k-n_{k+1})
\\
&\times
\prod_{i=1}^{k-1}
\dfrac{(n_i-n_k)+\beta(m_k-m_{i-1})}{(n_i-n_k)+\beta(m_k-m_i)}
\prod_{j=k+1}^{l}
\dfrac{(n_k-n_{j+1})+\beta(m_j-m_k)}{(n_k-n_j)+\beta(m_j-m_k)}.
\end{align*}
The residues of $F_2$ are the same as those of $F_1$, and
by a calculation similar to that in the proof of Proposition \ref{prp:young:1},
one can find that $F_2$ is a polynomial in $\beta$.
The behaviour of $F_2$ in the limit $\beta\to\infty$ shows that
$F_2$ is a linear function of $\beta$.
Using the original expression \eqref{eq:prp:young:2},
we find that
\[F_2(\{m_k\},\{n_k\},0)=\sum_{i}\lambda_i^2.\]
In order to determine the coefficient of $\beta$ in $F_2$,
we rewrite $F_2$ as the rational function of $\beta^{-1}$,
and take the limit $\beta^{-1}\to\infty$.
The result is
\begin{align*}
&\lim_{\beta\to\infty} \big(\beta^{-1} F_2(\{m_k\},\{n_k\},\beta)\big)
\\
&=-\sum_{k=1}^l
(m_k+1)(m_k-m_{k-1})(n_k-n_{k+1})\dfrac{m_k}{m_k-m_{k-1}}
\\
&=-\sum_{k=1}^l n_k(m_k-m_{k-1})(m_k+m_{k-1}+1).
\end{align*}
A moment's thought shows that this becomes $-\sum_i 2 i \lambda_i$
if $\{m_k\}$ and $\{n_k\}$ correspond to $\lambda$.
Thus the proof is completed.
\end{proof}
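As for the previous proposition, the identity \eqref{eq:prp:young:2} can be
verified numerically in exact rational arithmetic; the sketch below (Python,
with helper names of our choosing) compares both sides for sample partitions:

```python
from fractions import Fraction

def conjugate(lam):
    """Conjugate partition: lam'_i = #{j : lam_j >= i}."""
    return [sum(1 for p in lam if p >= i) for i in range(1, lam[0] + 1)]

def lhs2(lam, beta):
    """Left hand side: the sum over removable boxes (I, lam_I),
    weighted by (lam_I - (I+1)*beta)."""
    lamc = conjugate(lam)
    total = Fraction(0)
    for I in range(1, len(lam) + 1):
        below = lam[I] if I < len(lam) else 0
        if lam[I - 1] == below:          # box (I, lam_I) not removable
            continue
        term = lam[I - 1] - (I + 1) * beta
        for i in range(1, I):
            num = lam[i - 1] - lam[I - 1] + beta * (I - i + 1)
            den = lam[i - 1] - lam[I - 1] + beta * (I - i)
            term *= num / den
        for i in range(1, lam[I - 1]):
            num = lam[I - 1] - i + 1 + beta * (lamc[i - 1] - I)
            den = lam[I - 1] - i + beta * (lamc[i - 1] - I)
            term *= num / den
        total += term
    return total

def rhs2(lam, beta):
    """Right hand side: sum_i (lam_i^2 - 2 i lam_i beta)."""
    return sum(Fraction(p * p) - 2 * (i + 1) * p * beta
               for i, p in enumerate(lam))

assert lhs2([2, 1], Fraction(1, 2)) == rhs2([2, 1], Fraction(1, 2))
```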
\section{Conclusion and Remarks}\label{sec:rmk}
We have investigated the expansions of Whittaker vectors
for the Virasoro algebra in terms of Jack symmetric functions.
As we have mentioned in \S \ref{sect:intro},
the paper \cite[(3.18)]{AY:2009} proposed a conjecture on
the factored expression for the Gaiotto state of the deformed Virasoro algebra
using Macdonald symmetric functions.
However, our proof cannot be applied to this deformed case.
The main obstruction is that
the zero-mode $T_0$ of the generating field $T(z)$
of the deformed Virasoro algebra behaves badly,
so that one cannot analyse its action on Macdonald symmetric functions,
and cannot obtain a recursive formula similar to the one
in Proposition \ref{prp:rec}.
It is also valuable to consider the $\calW(\frksl_n)$-algebra case.
In \cite{T:2009} a degenerate Whittaker vector is expressed in terms of
the contravariant form of the $\calW(\frksl_3)$-algebra.
At this moment, however, we do not know how to treat
Whittaker vectors for the $\calW(\frksl_n)$-algebra.
It seems to be related to the higher rank analogues of
the AGT conjecture (see \cite{W:2009} for examples).
\section{Introduction}
Turbulence is a state of spatio-temporal chaotic flow generically attainable
for fluids with access to a sufficient source of free energy.
A result of turbulence is enhanced mixing of the fluid which is directed
towards a reduction of the free energy.
Mixing typically occurs by formation of vortex structures on a large
range of spatial and temporal scales, that span between system, energy
injection and dissipation scales \cite{frisch,davidson,sreenivasan}.
Fluids comprise the states of matter of liquids, gases and plasmas
\cite{landau}.
A common free energy source that can drive turbulence in neutral (or more
precisely: non-conducting) fluids is a strong enough gradient (or ``shear'')
in flow velocity, which can lead to vortex formation by Kelvin-Helmholtz
instability. Examples of turbulence occurring through this type of instability
are forced pipe flows, where a velocity shear layer develops at the wall
boundary, or a fast jet streaming into a stationary fluid.
Another source of free energy is a thermal gradient in connection with an
aligned restoring force (as in liquids heated from below in a gravity field)
that leads to Rayleigh-B\'enard convection \cite{gallery}.
Several routes for the transition from laminar flow to turbulence in fluids
have been proposed.
For example, in some specific cases the Ruelle-Takens scenario occurs, in
which a linear instability is followed by a series of a few period-doubling
bifurcations and finally a nonlinear transition to flow chaos as a control
parameter (like the gradient of velocity or temperature) is increased
\cite{ott}. For other scenarios, like in pipe flow, a sudden direct
transition by subcritical instability to fully developed turbulence or an
intermittent transition are possible \cite{grossmann00,drazin}.
The complexity of the flow dynamics is considerably enhanced in a plasma
compared to a non-conducting fluid. A plasma is a macroscopically neutral gas
composed of many electrically charged particles that is essentially determined
by collective degrees of freedom \cite{chen,spatschek}.
Space and laboratory plasmas are usually composed
of positively charged ions and negatively charged electrons that are
dynamically coupled by electromagnetic forces.
Thermodynamic properties are governed by collisional equilibration and
conservation laws like in non-conducting fluids.
The additional long-range collective interaction by spatially and temporally
varying electric and magnetic fields allows for rich dynamical behaviour of
plasmas with the possibility for complex flows and structure formation in the
presence of various additional free energy sources \cite{horton}.
The basic physics of plasmas in space, laboratory and fusion experiments is
introduced in detail in a variety of textbooks (e.g. in
Refs.~\cite{cap,goldston,sturrock}).
Although the dynamical equations for fluid and plasma flows can be
conceptually simple, they are highly nonlinear and involve an infinite number
of degrees of freedom \cite{biskamp}.
Analytical solutions are therefore in general
impossible. The description of fluid and plasma dynamics mainly relies on
statistical and numerical methods.
\begin{figure}
\includegraphics[width=7.0cm]{kendl_fig1.eps}
\caption{\label{f:decay} \sl
Computation of decaying two-dimensional fluid turbulence, showing contours
of vorticity ${\boldsymbol \omega} = \boldsymbol{\nabla} \times {\bf u}$.
}
\end{figure}
\section{Continuum dynamical theory of fluids and plasmas}
Computational models for fluid and plasma dynamics may be broadly classified
into three major categories:
\begin{itemize}
\item
(1) Microscopic models: many body dynamical description by ordinary
differential equations and laws of motion;
\item
(2) Mesoscopic models: statistical physics description (usually by
integro-differential equations) based on probability
theory and stochastic processes;
\item
(3) Macroscopic models: continuum description by partial differential
equations based on conservation laws for the
distribution function or its fluid moments.
\end{itemize}
Examples of microscopic models are Molecular Dynamics (MD) methods for neutral
fluids that model motion of many particles connected by short range
interactions \cite{rapaport}, or Particle-In-Cell (PIC) methods for plasmas
including electromagnetic forces \cite{birdsall,particlesim05}.
Such methods become important when relevant
microscopic effects are not covered by the averaging procedures used to obtain
meso- or macroscopic models, but they are usually intrinsically
computationally expensive.
Complications result from multi-particle or multi-scale interactions.
Mesoscopic modelling treats such effects on the dynamical evolution of
particles (or modes) by statistical assumptions on the interactions
\cite{stochastic}.
These may be implemented either on the macroscale as spectral fluid
closure schemes like, for example, in the Direct Interaction Approximation
(DIA), or on the microscale as advanced collision operators like in
Fokker-Planck models. An example of a mesoscopic computational model for fluid
flow is the Lattice Boltzmann Method (LBM) that combines free streaming
particle motion by a minimalistic discretisation in velocity space with
suitable models for collision operators in such a way that fluid motion is
recovered on macroscopic scales.
Macroscopic models are based on the continuum description of the kinetic
distribution function of particles in a fluid, or of its hydrodynamic moments.
The continuum modelling of fluids and plasmas is introduced in more detail
below.
Computational methods for turbulence simulations have been developed within
the framework of all particle, mesoscopic or continuum models. Each of the
models has both advantages and disadvantages in their practical numerical
application. The continuum approach can be used in situations where discrete
particle effects on turbulent convection processes are negligible.
To some approximation this is also the case for many regimes of interest in
fusion plasma experiments that are dominated by turbulent convective
transport, in particular at the (more collisional) plasma edge.
Within the field of Computational Fluid Dynamics the longest experience
and broadest applications have been obtained with continuum methods
\cite{wesseling}.
Many numerical continuum methods that were originally developed for neutral
fluid simulation have been straightforwardly applied to plasma physics
problems \cite{tajima}.
In continuum kinetics, the time evolution of the single-particle probability
distribution function $f({\bf x}, {\bf v}, t)$ for particles of each species
(e.g. electrons and ions in a plasma) in the presence of a mean
force field ${\bf F}({\bf x}, t)$ and within the binary collision
approximation (modelled by an operator $C$) is described by the Boltzmann
equation \cite{boltzmann}
\begin{equation}
\left( \partial_t + {\bf v} \cdot \partial_{\bf x} + {\bf F} \cdot
\partial_{\bf v} \right) f = C f.
\end{equation}
In a plasma the force field has to be self-consistently determined by
solution of the Maxwell equations. Usually, kinetic theory and computation for
gas and plasma dynamics make use of further simplifying approximations that
considerably reduce the complexity: in the Vlasov equation binary collisions
are neglected ($C=0$), and in the drift-kinetic or gyro-kinetic plasma
equations further reducing assumptions are taken about the time and space
scales under consideration.
The continuum description is further simplified when the fluid can be assumed
to be in local thermodynamic equilibrium. Then a hierarchical set of
hydrodynamic conservation equations is obtained by construction of moments
over velocity space \cite{chapman,sone}.
In lowest orders of the infinite hierarchy, the
conservation equations for mass density $n({\bf x}, t)$, momentum $n {\bf u}
({\bf x}, t)$ and energy density ${\cal E} ({\bf x}, t)$ are obtained. Any
truncation of the hierarchy of moments requires the use of a closure scheme
that relates quantities depending on higher order moments by a constitutive
relation to the lower order field quantities.
An example of a continuum model for neutral fluid flow are the Navier-Stokes
equations. In their most widely used form (in particular for technical and
engineering applications) the assumptions of incompressible
divergence free flow (i.e., $n$ is constant on particle paths) and of an
isothermal equation of state are taken \cite{foias}.
Then the description of fluid flow can be reduced to the solution of the
(momentum) Navier-Stokes equation
\begin{equation}
\left( \partial_t + {\bf u} \cdot \boldsymbol{\nabla} \right) {\bf u} =
- \boldsymbol{\nabla} \! P + \nu \Delta {\bf u}
\label{e:nse}
\end{equation}
under the constraints given by
\begin{equation}
\nabla \cdot {\bf u} \equiv 0
\end{equation}
and by boundary
conditions. Most numerical schemes for the
Navier-Stokes equation require solution of a Poisson type equation for the
normalised scalar pressure $P = p/\rho_0$ in order to guarantee divergence
free flow.
The character of solutions for the Navier-Stokes equation intrinsically
depends on the ratio between the dissipation time scale (determined by the
kinematic viscosity ${\nu}$) and the mean flow time scale (determined by the
system size $L$ and mean velocity $U$), specified by the Reynolds number
\begin{equation}
R_e = { L U / \nu }.
\end{equation}
For small values of $R_e$ the viscosity will dominate the time evolution of
${\bf u}({\bf x})$ in the Navier-Stokes equation, and the flow is
laminar. For higher $R_e$ the advective nonlinearity is dominant and the
flow can become turbulent.
The Rayleigh number has a similar role for the onset of
thermal convective turbulence \cite{ott}.
\section{Drift-reduced two-fluid equations for plasma dynamics}
Flow instabilities as a cause for turbulence, like those driven by flow shear
or thermal convection, do in principle also exist in plasmas similar to
neutral fluids \cite{biskamp-turb}, but are in general found to be less
dominant in strongly magnetised plasmas.
The most important mechanism which results in turbulent transport and enhanced
mixing relevant to confinement in magnetised plasmas
\cite{hasmim78,horton81,wakatani84} is an
unstable growth of coupled wave-like perturbations in plasma pressure
$\tilde p$ and electric fields $\tilde {\bf E}$.
The electric field forces a flow with the ExB (``E-cross-B'') drift velocity
\begin{equation}
{\bf v}_{ExB} = {1 \over B^2} \tilde {\bf E} \times {\bf B}
\end{equation}
of the plasma
perpendicular to the magnetic field {\bf B}. A phase shift, caused by any
inhibition of a fast parallel Boltzmann response of the electrons,
between pressure and electric field perturbation in the presence of a pressure
gradient can lead to an effective transport of plasma across the magnetic
field and to an unstable growth of the perturbation amplitude.
Nonlinear self-advection of the ExB flow and coupling between perturbation
modes (``drift waves'') can finally lead to a fully developed turbulent state
with strongly enhanced mixing.
A generic source of free energy for magnetised laboratory plasma turbulence
resides in the pressure gradient: in the core of a magnetic confinement region
both plasma density and temperature are usually much larger than near the
bounding material wall, resulting in a pressure gradient directed inwards to
the plasma center.
Instabilities which tap this free energy tend to lead to enhanced mixing and
transport of energy and particles down the gradient \cite{lnp-stroth}.
For magnetically confined fusion plasmas, this turbulent convection by
ExB drift waves often dominates collisional diffusive transport mechanisms
by orders of magnitude, and essentially determines energy and particle
confinement properties \cite{itoh99,hinton76}.
The drift wave turbulence is regulated by formation of mesoscopic streamers
and zonal structures out of the turbulent flows \cite{zf-review05}.
Continuum models for drift wave turbulence have to capture the
different dynamics of electrons and ions parallel and perpendicular to the
magnetic field and the coupling between both species by electric and
magnetic interactions \cite{itoh01,itoh03}.
Therefore, a single-fluid magneto-hydrodynamic (MHD)
model can not appropriately describe drift wave dynamics: one has to refer to
a set of two-fluid equations, treating electrons and ions as separate species,
although the plasma on macroscopic scales remains quasi-neutral with nearly
identical ion and electron density, $n_i \approx n_e \equiv n$.
The two-fluid equations require quantities like collisional momentum exchange
rate, pressure tensor and heat flux to be expressed by hydrodynamic moments
based on solution of a kinetic (Fokker-Planck) model. The most widely used set
of such fluid equations has been derived by Braginskii \cite{braginskii} and
is e.g. presented in brief in Ref.~\cite{wesson}.
The most general continuum descriptions for the plasma
species, based either on the kinetic Boltzmann equation or on the hydrodynamic
moment approach like in the Braginskii equations, are covering all time and
space scales, including detailed
gyro-motion of particles around the magnetic field lines, and the fast plasma
frequency oscillations. From experimental observation it is on the other hand
evident \cite{hugill83,liewer85,wootton90,wagner93,stroth98}, that the
dominant contributions to turbulence and transport in
magnetised plasmas originate from time and space scales that are associated
with frequencies of the order of the drift frequency $\omega \sim (\rho_s /
L_{\perp}) \Omega_i$, that are much lower than the ion gyro-frequency
$\Omega_i = q_i B/M_i$ by the ratio between drift scale $\rho_s = \sqrt{T_e
M_i}/(eB)$ to gradient length $L_{\perp}$:
\begin{equation}
\omega \sim \partial_t \ll \Omega_i < \Omega_e.
\end{equation}
Under these assumptions one can apply a ``drift ordering'' based on
the smallness of the order parameter $\delta = \omega / \Omega_i \ll 1$.
This can be introduced either on the kinetic level, resulting in the
drift-kinetic model, or on the level of two-fluid
moment equations for the plasma, resulting in the ``drift-reduced two-fluid
equations'', or simply called ``drift wave equations''
\cite{hinton71,tang78,wakatani84}:
neglect of terms scaling with powers of $\delta$ higher than two
considerably simplifies both the numerical
and analytical treatment of the dynamics, while retaining all effects of the
perpendicular drift motion of guiding centers and nonlinear couplings
that are necessary to describe drift wave turbulence.
For finite ion temperature, the ion gyro-radius $\rho_i = \sqrt{T_i M_i}/(eB)$
can be of the same magnitude as typical fluctuation scales, with wave numbers
found around $k_{\perp} \rho_s \sim 0.3$ in the order of the drift scale
\begin{equation}
\rho_s = {\sqrt{T_e M_i} \over eB}.
\end{equation}
Although the gyro-motion is still fast compared to turbulent time scales, the
ion orbit then is of similar size as spatial variations of the fluctuating
electrostatic potential.
Finite gyro-radius (or ``finite Larmor radius'', FLR) effects are captured by
appropriate averaging procedures over the gyrating particle trajectory and
modification of the polarisation equation, resulting in ``gyrokinetic'' or
``gyrofluid models'' for the plasma.
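To illustrate this scale ordering, the drift scale $\rho_s$, the ion
gyrofrequency $\Omega_i$ and the ordering parameter $\delta$ can be
evaluated numerically. The parameters below ($T_e = 50$ eV, $B = 2$ T,
deuterium ions, $L_\perp = 5$ cm) are representative assumptions for a
tokamak edge, not values from a specific device:

```python
import math

# physical constants (SI)
e = 1.602176634e-19         # elementary charge [C]
m_D = 3.3435837768e-27      # deuteron mass [kg]

# assumed edge parameters (illustrative only)
T_e_eV = 50.0               # electron temperature [eV]
B = 2.0                     # magnetic field [T]
L_perp = 0.05               # assumed gradient length [m]

T_e = T_e_eV * e                        # temperature in Joules
Omega_i = e * B / m_D                   # ion gyrofrequency [rad/s]
rho_s = math.sqrt(T_e * m_D) / (e * B)  # drift scale [m]
delta = rho_s / L_perp                  # drift ordering parameter

# rho_s is of order half a millimetre, Omega_i of order 1e8 rad/s,
# and delta of order 1e-2: drift frequencies lie far below Omega_i.
```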
\section{Turbulent vortices and mean flows}
The prevalent picture of drift wave turbulence is that of small-scale,
low-frequency ExB vortices in the size of several gyro-radii, that determine
mixing and transport of the plasma perpendicular to the magnetic field across
these scales.
Beyond that, turbulence in magnetised plasmas exhibits large-scale structure
formation that is linked to this small-scale eddy motion:
The genesis of mean zonal flow structures out of an initially nearly
homogeneous isotropic vortex field and the resulting shear-flow suppression of
the driving turbulence is a particular example of a self-organising regulation
process in a dynamical system
\cite{hasmim78,haswak87,biglari90,lin98,terry00rmp,terry00pop,hahm00,hahm02}.
The scale of these macroscopic turbulent zonal flows is that of the system
size, setting up a radially localised differential ExB rotation of the whole
plasma on an entire toroidal flux surface.
\begin{figure}
\includegraphics[width=15.0cm]{kendl_fig3.eps}
\caption{\label{f:eddy-flow} \sl Generation of mean sheared flows from drift wave
turbulence within the poloidal cross-section of a magnetised plasma torus.}
\end{figure}
Moreover, the process of self-organisation to zonal flow structures is thought
to be a main ingredient in the still unresolved nature of the L-H transition
in magnetically confined toroidal plasmas for fusion research \cite{connor00}.
The L-H transition is experimentally found to be a sudden improvement
in the global energy content of the fusion plasma from a low to high (L-H)
confinement state when the central heat source power is increased above a
certain threshold \cite{wagner82,gohil94,hugill00,suttrop97}.
The prospect of operation in a high confinement
H-mode is one of the main requirements for successful performance of fusion
experiments like ITER.
The mechanism for spin-up of zonal flows in drift wave turbulence
is a result of the quasi two-dimensional nature of the nonlinear ExB dynamics,
in connection with the double periodicity and parallel coupling in a toroidal
magnetised plasma.
Basic concepts and terminology for the interaction between vortex turbulence
and mean flows have been first developed in the context of neutral fluids.
It is therefore instructive to briefly review these relations in the framework
of the Navier-Stokes Eq.~(\ref{e:nse}) before applying them to plasma dynamics.
Small (space and/or time) scale vortices and mean flows may be separated
formally by an ansatz known as Reynolds decomposition,
\begin{equation}
{\bf u} = \bar{ \bf U} + \tilde {\bf u},
\end{equation}
splitting the flow velocity into a mean part $\bar {\bf U} = \langle {\bf u}
\rangle$, averaged over the separation scale, and small-scale fluctuations
$\tilde {\bf u}$ with $\langle \tilde {\bf u}\rangle =0$. While the averaging
procedure, $\langle ...\rangle$, is mathematically most unambiguous for the
ensemble average, the physical interpretation in fluid dynamics makes a time
or space decomposition more appropriate.
Applying this averaging on the Navier-Stokes Eq.~(\ref{e:nse}), one obtains the
Reynolds equation (or: Reynolds averaged Navier-Stokes equation, RANS):
\begin{equation}
\left( \partial_t + \bar{\bf U} \cdot \boldsymbol{\nabla} \right) \bar{\bf U} =
- \boldsymbol{\nabla} \! \bar{P} - \boldsymbol{\nabla} \cdot {\bf R} + \nu \Delta \bar{\bf U}
\label{e:rans}
\end{equation}
This mean flow equation has the same structure as the original Navier-Stokes
equation with one additional term including the Reynolds stress tensor $R_{ij}
= \langle \tilde u_i \tilde u_j \rangle$. Momentum transport between turbulence
and mean flows can thus be caused by a mean pressure gradient, viscous forces,
and Reynolds stress. A practical application of the RANS is in Large Eddy
Simulation (LES) of fluid turbulence, which efficiently reduces the time and
space scales necessary for computation by {\sl modelling} the Reynolds stress
tensor for the smaller scales as a local function of the large scale flow.
LES is however not applicable for drift wave turbulence computations, as
here in any case all scales down to the effective gyro-radius (or drift scale
$\rho_s$) have to be resolved in Direct Numerical Simulation (DNS).
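For discrete simulation data the Reynolds decomposition and the stress are
straightforward to evaluate. The following sketch (Python/NumPy; the zonal
average as the chosen mean and the synthetic random fields standing in for a
simulation snapshot are our assumptions) computes the mean flow, the
fluctuations, and the stress component $R_{xy} = \langle \tilde u_x \tilde
u_y \rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D velocity fields u(x, y), v(x, y) on a periodic grid
# (stand-ins for a real simulation snapshot).
nx, ny = 64, 128
u = rng.standard_normal((nx, ny))
v = rng.standard_normal((nx, ny))

# Reynolds decomposition with a zonal (y-) average as the mean:
# u = U_bar + u_tilde, with <u_tilde> = 0 by construction.
U_bar = u.mean(axis=1, keepdims=True)
V_bar = v.mean(axis=1, keepdims=True)
u_t = u - U_bar
v_t = v - V_bar

# Reynolds stress component R_xy = <u_tilde v_tilde>: the turbulent
# momentum flux entering the mean-flow (RANS) equation.
R_xy = (u_t * v_t).mean(axis=1)
```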
\section{Two-dimensional fluid turbulence}
Turbulent flows are generically three-dimensional. In some particular
situations the dependence of the convective flow dynamics on one of the
Cartesian directions can be negligible compared to the others and the
turbulence becomes quasi two-dimensional \cite{kraichnan80,tabeling02}.
Examples for such 2D fluid systems are thin films
(e.g. soap films), rapidly rotating stratified fluids, or geophysical flows of
ocean currents and the (thin) planetary atmosphere. In particular, also the
perpendicular ExB dynamics in magnetised plasmas behaves similar to a 2D
fluid \cite{horton94}.
The two-dimensional approximation of fluid dynamics not only simplifies the
treatment, but moreover introduces distinctly different behaviour.
The major difference can be discerned by introducing the vorticity
\begin{equation}
{\boldsymbol \omega} = \boldsymbol{\nabla} \times {\bf u}
\end{equation}
and taking the curl of the Navier-Stokes Eq.~(\ref{e:nse})
to get the vorticity equation
\begin{equation}
\left( \partial_t + {\bf u} \cdot {\boldsymbol{\nabla}} \right) {\boldsymbol \omega} =
\left( {\boldsymbol \omega} \cdot {\boldsymbol{\nabla}} \right) {\bf u} + \nu \Delta {\boldsymbol
\omega}.
\label{e:voreq}
\end{equation}
In a two-dimensional fluid with ${\bf u} = u_x {\bf e}_x + u_y {\bf e}_y$ and
$u_z = 0$ the vorticity reduces to ${\boldsymbol \omega} = w {\bf e}_z$ with
$w= \partial_x u_y - \partial_y u_x$.
The vortex stretching and twisting term $\left( {\boldsymbol \omega} \cdot
{\boldsymbol{\nabla}} \right) {\bf u}$ is zero in 2D, thus eliminating a characteristic
feature of 3D turbulence.
For unforced inviscid 2D fluids we have $\left( \partial_t + {\bf u}
\cdot {\boldsymbol{\nabla}}\right) {\boldsymbol \omega} = 0$, so the vorticity $w$ is
constant along the paths of fluid elements. This implies conservation of total
enstrophy $W = \int (1/2) |{\boldsymbol \omega}|^2 d{\bf x}$ in addition to the
conservation of kinetic flow energy $E = \int (1/2) |{\bf u}|^2 d{\bf x}$.
The 2D vorticity equation can be further rewritten in terms of a scalar stream
function $\phi$ that is defined by $(u_x, u_y) = (-\partial_y, \partial_x)
\phi$ so that $w = \nabla^2 \phi$, to obtain
\begin{equation}
\partial_t w + \left[ \phi, w \right] = \nu \Delta w.
\label{e:vor2d}
\end{equation}
Here the Poisson bracket $[a,b] = \partial_x a \; \partial_y b - \partial_y a
\; \partial_x b$ is introduced. For force driven flows a term given
by the curl of the force adds to the right hand side of Eq.~(\ref{e:vor2d}).
Although the pressure is effectively eliminated from that equation, it is
still necessary to similarly solve a (nonlocal) Poisson equation for the
stream function. For ExB flows in magnetised plasmas the stream function
$\phi$ is actually represented by the fluctuating electrostatic potential.
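For illustration, a minimal pseudo-spectral time step of the vorticity
equation can be sketched as follows (Python/NumPy). The explicit Euler step
and the absence of dealiasing are simplifications for brevity; sign
conventions for the stream function vary between references, and the sketch
fixes $(u_x,u_y)=(-\partial_y,\partial_x)\phi$ with $w=\nabla^2\phi$, so the
advection term is $\partial_x\phi\,\partial_y w - \partial_y\phi\,\partial_x w$:

```python
import numpy as np

def euler_step(w, nu=1e-3, dt=1e-3):
    """One explicit Euler step of d_t w + u.grad(w) = nu Lap(w) on a
    doubly periodic [0, 2*pi)^2 grid, evaluated pseudo-spectrally.
    Conventions: (u_x, u_y) = (-d_y, d_x) phi and w = Lap(phi)."""
    n = w.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2s = k2.copy()
    k2s[0, 0] = 1.0                            # protect the mean mode
    wk = np.fft.fft2(w)
    phik = -wk / k2s                           # solve Lap(phi) = w
    phik[0, 0] = 0.0
    deriv = lambda fk, kk: np.real(np.fft.ifft2(1j * kk * fk))
    advection = (deriv(phik, kx) * deriv(wk, ky)
                 - deriv(phik, ky) * deriv(wk, kx))
    lap_w = np.real(np.fft.ifft2(-k2 * wk))
    return w + dt * (nu * lap_w - advection)
```

In practice one would add dealiasing (e.g.\ the 2/3 rule) and a higher-order
time integrator; the step above is only meant to show the structure of the
spectral Poisson solve and the bracket evaluation.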
The energetics of homogeneous 3D turbulence is usually understood in terms
of a direct cascade of energy from the injection scale down to small
(molecular) dissipation scales \cite{frisch}: large vortices break up into
smaller ones due to mutual stretching and shearing.
In terms of the Reynolds Eq.~(\ref{e:rans}) this means that the Reynolds stress
transfer is usually negative, taking energy out of mean flows into
small scale vortices.
The interaction between scales takes place basically by
three-mode coupling maintained by the convective quadratic nonlinearity.
This leads to the generic Kolmogorov $k^{-5/3}$ power spectrum of Fourier
components $E(k) = \int dx \; (1/2) |{\bf u}(x)|^2 \exp(-ikx)$
in the cascade range of 3D turbulence when energy injection and dissipation
scales are well separated by a high Reynolds number \cite{k41}.
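For discrete simulation data, the spectrum $E(k)$ is obtained by Fourier
transforming the velocity field and binning the spectral energy density in
shells of constant $|{\bf k}|$. A minimal 2D version (Python/NumPy, for a
square doubly periodic grid; the normalisation is one common choice among
several) reads:

```python
import numpy as np

def energy_spectrum(u, v):
    """Shell-averaged kinetic energy spectrum E(k) of a doubly
    periodic 2D velocity field (u, v) on a square n x n grid."""
    n = u.shape[0]
    uk = np.fft.fft2(u) / u.size       # normalised Fourier amplitudes
    vk = np.fft.fft2(v) / v.size
    e2d = 0.5 * (np.abs(uk)**2 + np.abs(vk)**2)
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.hypot(kx, ky)
    kbins = np.arange(0.5, n // 2, 1.0)        # shells around k = 1, 2, ...
    kcent = 0.5 * (kbins[:-1] + kbins[1:])
    ek, _ = np.histogram(kmag, bins=kbins, weights=e2d)
    return kcent, ek
```

With this normalisation the sum of the binned spectrum recovers the mean
kinetic energy of the resolved shells.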
In two dimensions the behaviour is somewhat different: Kraichnan, Leith and
Batchelor \cite {kraichnan67,leith68,batchelor69} have conjectured that the
energy has an inverse cascade property to scales larger than the injection.
Smaller vortex structures self-organise to merge into bigger ones as a result
of the absence of vortex stretching. For unforced turbulence the Reynolds
stress transfer is on the average positive and into the mean flows.
The classical theory of 2D fluid turbulence by Kraichnan et al. predicts a
$k^{-3}$ energy spectrum and a $k^{-1}$ enstrophy spectrum in the inertial
range.
Numerical simulations of 2D Navier-Stokes turbulence, however, find, for
example,
a $k^{-5/3}$ inverse cascade for energy on large scales and a $k^{-3}$ direct
cascade for enstrophy on small scales \cite{tran03}, modifying the classical
predictions due to the existence of intermittency and coherent structures,
although the extent of the modification is still under discussion. The
(limiting or periodic) domain boundary in 2D simulations has also been found
to have a stronger influence than in the 3D case.
Periodicity in one dimension of the 2D system can lead to the spin up of
sustained zonal structures of the mean flow out of the turbulence by inverse
cascade.
Prominent examples of zonal flows in planetary atmospheric dynamics are the
well visible structures spanning around the planet Jupiter approximately along
constant latitude, and jet streams in the earth's atmosphere. Zonal flows are
also observed in fluids rotating in a circular basin.
Drift wave turbulence in magnetised plasmas also has basically a 2D character
and exhibits zonal structure formation in the poloidally and toroidally
periodic domain on magnetic flux surfaces of a torus. These zonal plasma flows
have finite radial extension and constitute a differential, sheared rotation
of the whole plasma on flux surfaces.
\begin{figure}
\includegraphics[width=16.5cm]{kendl_fig4.eps}
\caption{\label{f:lbm} \sl
Example of a fluid simulation of 2D grid turbulence with a Lattice-Boltzmann
code~\cite{C45}: a high Reynolds number flow with $R_e=5000$ enters from
the left of the domain and passes around a grid of obstacles. The shading
depicts vorticity. In the near field directly behind the grid the individual
vortex streets can be distinguished. In the middle of the domain neighbouring
eddies are strongly coupled into a vortex field that is quasi-homogeneous
(statistically, in the perpendicular direction). On the far right side eddies
merge into larger structures in the way characteristic of 2D turbulence. The
simulation agrees well with flowing soap film experiments~\cite{soapfilm}.
}
\end{figure}
\section{Turbulence in magnetised plasmas}
Drift wave turbulence is nonlinear, non-periodic motion involving
disturbances on a background thermal gradient of a magnetised plasma
and eddies of fluid-like motion in which the advecting velocity of all
charged species is the ExB velocity. The
disturbances in the electric field ${\bf E}$ implied by the presence
of these eddies are caused by the tendency of the electron dynamics to
establish a force balance along the magnetic field ${\bf B}$.
Pressure disturbances have their parallel gradients balanced by a parallel
electric field, whose static part is given by the parallel gradient of
the electrostatic potential. This potential in turn is the stream
function for the ExB velocity in drift planes,
which are locally perpendicular to the magnetic field. The turbulence
is driven by the background gradient, and the electron pressure and
electrostatic potential are coupled together through parallel
currents. Departures from the static force balance are mediated
primarily through electromagnetic induction and resistive friction,
but also through electron inertia, which is not negligible.
The dynamical character of cross-field ExB drift wave turbulence in the edge
region of a tokamak plasma is governed by these electromagnetic and dissipative
effects in the parallel response.
The most basic drift-Alfv\'en (DALF) model to capture the drift wave
dynamics
includes nonlinear evolution equations of three fluctuating fields: the
electrostatic potential $\tilde \phi$, electromagnetic potential $\tilde
A_{||}$, and density $\tilde n$. The tokamak edge usually features a more or
less pronounced density pedestal, and the dominant contribution to the free
energy drive to the turbulence by the inhomogeneous pressure background is
thus due to the density gradient.
On the other hand, a steep enough ion
temperature gradient (ITG) does not only change the turbulent
transport quantitatively, but adds new interchange physics into the
dynamics. In addition, more field quantities have to be treated:
parallel and perpendicular temperatures $\tilde T_\parallel$ and $\tilde
T_\perp$ and the associated parallel heat fluxes, for a total of six
moment variables for each species. Finite Larmor radius effects
introduced by
warm ions require a gyrofluid description of the turbulence equations.
Both the resistive DALF and the ITG models can be covered by using the
six-moment electromagnetic gyrofluid model GEM by Scott \cite{gem}, but for
basic studies it is also widely used in its more economical two-moment version
for scenarios where the DALF model is applicable \cite{scott03ppcf}.
The gyrofluid model is based upon a moment approximation of the underlying
gyrokinetic equation.
The first complete six-moment gyrofluid formulation was given for slab
geometry by Dorland et al. \cite{dorland93}, and later extended by Beer et
al. to incorporate toroidal effects \cite{beer96a} using a ballooning-based
form of flux surface geometry \cite{beer95}.
Electromagnetic induction and electron collisionality were then included
to form a more general gyrofluid for edge turbulence by Scott \cite{scott00},
with the geometry correspondingly replaced by the version from the edge
turbulence work, which does not make ballooning assumptions and in
particular represents slab and toroidal mode types equally well and does
not require radial periodicity \cite{scott01}. Energy conservation
considerations were solidified first for the two-moment version
\cite{scott03ppcf}, and recently for the six-moment version in
Ref.~\cite{gem}.
\begin{figure}
\includegraphics[width=16.5cm]{kendl_fig2.eps}
\caption{\label{f:profile} \sl
Left: A cut of a torus shows the poloidal cross section with minor
radius $r$, major radius $R$, poloidal angle $\theta$ and toroidal angle
$\zeta$. Right: typical radial profiles of density $n(r)$ and temperature
$T(r)$ in the toroidal plasma of a tokamak. The density is often nearly
constant in the ``core'' region and shows a pronounced steep-gradient
pedestal in the plasma ``edge'' region. The outer scrape-off layer (``SOL'')
connects the plasma with the material walls.
}
\end{figure}
\section{Basic drift wave instability}
Destabilization of the ExB drift waves occurs when the parallel electron
dynamics deviates from a fast ``adiabatic'' response to potential
perturbations, resulting in a phase shift between the density and potential
fluctuations.
In this section the basic linear instability mechanism is discussed in the
most basic electrostatic, cold ion limit for a straight magnetic field.
Figure~\ref{f:dw} schematically shows a localized perturbation
of plasma pressure $\tilde p$ (left) that results in a positive potential
perturbation $\tilde \phi>0$ (middle) due to ambipolar diffusion.
For typical tokamak parameters it is found that the perturbation scale
$\Delta \gg \lambda_D = \sqrt{\varepsilon_0 T_e / (ne^2)}$ is much larger than
the Debye length $\lambda_D$, so that quasi neutrality $n_i \approx n_e \equiv
n$ can be assumed.
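The quasi-neutrality estimate is easy to check numerically. The sketch below uses illustrative tokamak-edge values (the $T_e = 70\,$eV, $n = 3\cdot10^{13}\,$cm$^{-3}$ quoted later for ASDEX Upgrade, converted to SI); the resulting Debye length is of order $10\,\mu$m, far below typical drift-scale perturbation sizes of order a millimetre:

```python
import math

# Debye length lambda_D = sqrt(eps0 * k T_e / (n e^2)) for assumed
# tokamak-edge parameters; with T_e given in eV one factor of e
# cancels, leaving sqrt(eps0 * Te_eV / (n * e)).
eps0 = 8.854e-12      # vacuum permittivity, F/m
e = 1.602e-19         # elementary charge, C
Te_eV = 70.0          # electron temperature in eV (assumed, AUG-like)
n = 3e19              # electron density in m^-3 (assumed, AUG-like)

lambda_D = math.sqrt(eps0 * Te_eV / (n * e))
print(f"lambda_D = {lambda_D * 1e6:.1f} micrometre")
```

Any perturbation scale $\Delta$ of millimetre order thus satisfies $\Delta \gg \lambda_D$ by roughly two orders of magnitude, justifying $n_i \approx n_e$.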
In accordance with the stationary parallel electron momentum balance equation
\begin{equation}
-en_0{\tilde {\bf E}}_{||} - \boldsymbol{\nabla}_{||} \tilde p_e=0
\label{e:emom}
\end{equation}
with $\tilde p_e = \tilde n_e T_e$, the isothermal electrons try to locally
establish along the field line
a Boltzmann relation $n_e = n_0(r) \exp( e \tilde \phi / T_e)$.
Under quasi neutrality $n_i = n_e = n_0(r) + \tilde n_e$, where $n_0(r)$ is
the (in general radially varying) background density.
Without restrictions on the parallel electron dynamics (like e.g. due
to collisions, Alfv\'en waves or kinetic effects like Landau damping
and particles trapped in magnetic field inhomogeneities) this balance
is established instantaneously on the drift time scale and is usually
termed an ``adiabatic response''.
Even for a homogeneous background density the perturbation convects the
plasma with the ExB drift velocity ${\bf v}_{\perp} = {\bf v}_{ExB} = (B^{-2})
\tilde {\bf E} \times {\bf B}$, which is equal for electrons and ions.
When a perpendicular background pressure gradient $\boldsymbol{\nabla} p$ is present, the
perturbed structure propagates in the electron diamagnetic drift direction
$\sim \boldsymbol{\nabla} p \times {\bf B}$.
In the continuity equation
\begin{equation}
\partial_t n + \boldsymbol{\nabla} \cdot (n {\bf v}) = 0
\label{e:cont}
\end{equation}
for cold ions in a homogeneous magnetic field, and neglecting ion inertia, the
only contribution to the velocity is the perpendicular ExB drift velocity
${\bf v}_E = - B^{-2} (\nabla \tilde \phi \times {\bf B})$. Using the
Boltzmann relation in Eq.~(\ref{e:cont}) one gets
\begin{equation}
\partial_t \; n_0 \exp \left( {e \tilde \phi \over T_e} \right) - \boldsymbol{\nabla} \cdot
\left[
{1 \over B^2} (\nabla \tilde \phi \times {\bf B}) \: n_0 \exp \left({e \tilde
\phi \over T_e} \right) \right] = 0,
\end{equation}
and for the straight field ${\bf B}=B {\bf e}_{\parallel}$ one obtains:
\begin{equation}
\partial_t \tilde \phi - \left( {T_e \over e B} \right) (\partial_r \ln n_0)
\; \partial_{\theta} \tilde \phi = 0.
\end{equation}
Assuming a perturbation periodic in the electron diamagnetic drift coordinate
$\theta$ with $\tilde \phi = \tilde \Phi \exp[-i \omega t + i k_{\theta}
\theta]$, the electron drift wave frequency is found to be
\begin{equation}
\omega_{\ast e} = {T_e \over eB} {1 \over L_n} k_{\theta}
=
{c_s \over L_n} [\rho_s k_{\theta}]
= {\rho_s \over L_n} \Omega_i [\rho_s k_{\theta}].
\end{equation}
Here the density gradient length $L_n = (\partial_r \ln n_0)^{-1}$
and the drift scale $\rho_s= \sqrt{m_i T_e}/(eB)$, representing the gyroradius
of an ion at the electron temperature, have been introduced.
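The magnitudes of these scales can be sketched numerically. The parameter values below (deuterium, $T_e = 70\,$eV, $B = 2.5\,$T, $L_n = 4.25\,$cm, and $k_\theta\rho_s = 0.3$) are illustrative choices consistent with the edge parameters quoted later, not results derived in the text:

```python
import math

# Drift scale rho_s, sound speed c_s and electron drift frequency
# omega_star = (c_s/L_n) * (rho_s * k_theta) for assumed deuterium
# edge parameters; k_theta*rho_s = 0.3 is an illustrative choice.
e = 1.602e-19              # elementary charge, C
M_D = 3.344e-27            # deuteron mass, kg
Te_J = 70.0 * e            # electron temperature, J (assumed 70 eV)
B = 2.5                    # magnetic field, T
L_n = 4.25e-2              # density gradient length, m (assumed)

rho_s = math.sqrt(M_D * Te_J) / (e * B)   # ion gyroradius at T_e
c_s = math.sqrt(Te_J / M_D)               # ion sound speed
omega_star = (c_s / L_n) * 0.3            # for k_theta * rho_s = 0.3

print(f"rho_s = {rho_s * 1e3:.2f} mm, c_s = {c_s / 1e3:.0f} km/s, "
      f"omega_star = {omega_star:.2e} rad/s")
```

This gives $\rho_s$ of about half a millimetre and drift frequencies of order $10^5$--$10^6\,$rad/s, the characteristic space and time scales of edge drift wave turbulence.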
\begin{figure}
\includegraphics[width=16.5cm]{kendl_fig5.eps}
\caption{\label{f:dw} \sl
Basic drift wave mechanism: (1) an initial pressure perturbation
$\tilde p$ leads to an ambipolar loss of electrons along the magnetic
field ${\bf B}$, whereas the heavier ions remain behind. (2)
The resulting electric field ${\bf \tilde E}=-\boldsymbol{\nabla} \tilde \phi$ convects
the whole plasma
with ${\bf v}_{ExB}$ around the perturbation in the plane perpendicular to
${\bf B}$. (3) In the presence of a pressure gradient, $\tilde p$
propagates in electron diamagnetic
drift direction with ${\bf v}_{\ast} \sim \boldsymbol{\nabla} p \times {\bf B}$.
This ``drift wave'' is stable if
the electrons establish $\tilde \phi$ according to the Boltzmann relation
without delay (``adiabatic response'').
A non-adiabatic response due to collisions, magnetic flutter or wave-kinetic
effects causes a phase shift between $\tilde p$ and $\tilde \phi$.
The ExB velocity then effectively transports plasma down
the gradient, enhancing the original perturbation and leading to
unstable growth of the wave amplitude.
}
\end{figure}
The motion of the perturbed structure perpendicular to magnetic field and
pressure gradient in the electron diamagnetic drift direction $k_{\theta} \sim
\boldsymbol{\nabla} p \times {\bf B}$ is in this approximation still stable and so far does
not cause any transport down the gradient.
The drift wave is destabilized only when a phase shift $\delta_{\bf
k}$ between potential and density perturbation is introduced by
``non-adiabatic'' electron dynamics
\begin{equation}
\tilde n_e = n_0 (1-i\delta_{\bf k}) \: {e \tilde \phi \over T_e}
\label{e:nonadiab}
\end{equation}
The imaginary term $i\delta_{\bf k}$ in general is an anti-hermitian
operator and describes dissipation acting on the electrons, which causes the
density perturbations to precede the potential perturbations in $\theta$ by
slowing down the parallel equilibration. This leads to an exponential
growth of the perturbation amplitude by $\exp (\gamma_k t)$ with linear growth
rate $\gamma_k \sim \delta_k \omega_k$.
Parallel electron motion also couples drift waves to shear Alfv\'en
waves, which are parallel propagating perpendicular magnetic field
perturbations. With the vector potential $A_{||}$ as a further dynamic
variable, the parallel electric field $E_{||}$, parallel electron
motion, and nonlinearly the parallel gradient are modified.
The resulting nonlinear drift-Alfv\'en equations are discussed in the following
section.
The stability and characteristics of drift waves and resulting plasma
turbulence are further influenced by inhomogeneities in the magnetic field, in
particular by field line curvature and shear.
The normal and geodesic components of field line curvature have different
roles for drift wave turbulence instabilities and saturation.
The field gradient force associated with the normal curvature, if
aligned with the plasma pressure gradient, can act either to stabilise or to
amplify pressure gradient driven instabilities by compression of the fluid
drifts, depending on the sign of the alignment.
The geodesic curvature describes the compression of the field strength in
perpendicular direction on a flux surface and is consequently related to the
compression of large-scale (zonal) ExB flows.
Transition from stable drift waves to turbulence has been studied
experimentally in linear and simple toroidal magnetic field configurations,
and by direct numerical simulation.
Experimental investigations in a magnetized low-beta plasma with cylindrical
geometry by Klinger {\sl et al.} have demonstrated that the spatiotemporal
dynamics of interacting destabilised travelling drift waves follows a
bifurcation sequence towards weakly developed turbulence according to the
Ruelle-Takens-Newhouse scenario \cite{klinger97}.
The relationship between observations made in linear magnetic geometry,
purely toroidal geometry and magnetic confinement is discussed in
Ref.~\cite{grulke02}, where the role of large-scale fluctuation
structures has been highlighted.
The role of parallel electron dynamics and Alfv\'en waves for
coherent drift modes and drift wave turbulence has been studied in a
collisionality dominated high-density helicon plasma \cite{grulke07}.
Measurements of the phase coupling between spectral components of interchange
unstable drift waves at different frequencies in a basic toroidal magnetic
field configuration have indicated that the transition from a coherent to a
turbulent spectrum is mainly due to three-wave interaction processes
\cite{poli07}.
The competition between drift wave and interchange physics in ExB
drift turbulence has been studied computationally in tokamak geometry with
respect to the linear and nonlinear mode structure by Scott \cite{scott05}.
A quite remarkable aspect of fully developed drift wave turbulence in a
sheared magnetic field lying in closed surfaces is its strong nonlinear
character, which can be self-sustaining even in the absence of linear
instabilities \cite{scott-prl}. This situation of self-sustained plasma
turbulence does not have any analogy in neutral fluid dynamics and, as shown
in numerical simulations by Scott, is mostly applicable to tokamak edge
turbulence, where linear forcing is low enough so that the nonlinear physics
can efficiently operate \cite{scott02}.
\section{Drift-Alfv\'en turbulence simulations for fusion plasmas}
The model DALF3 by Scott \cite{scott02}, in the cold ion approximation without
gyrofluid FLR corrections, represents the four field version of the
dissipative drift-Alfv\'en equations, with disturbances (denoted by the
tilde) in the ExB vorticity
$\tilde\Omega$, electron pressure $\tilde p_e$, parallel current $\tilde
j_\parallel$, and parallel ion velocity $\tilde u_\parallel$ as
dependent variables. The equations are derived
under gyro/drift ordering, in a three dimensional globally consistent
flux tube geometry \cite{scott98,scott01}, and appear (in cgs units as used
in the references) as
\begin{eqnarray} \label{e:eqvor}
{n M_i c^2 \over B^2} \left( \partial_t + {\bf v}_E\cdot\boldsymbol{\nabla}
\right) \tilde\Omega
&=& \nabla_\parallel \tilde j_\parallel
- {\cal K}(\tilde p_e), \\
{1 \over c} \partial_t \tilde A_{\parallel}
+ {m_e \over n e^2} \left( \partial_t + {\bf v}_E\cdot\boldsymbol{\nabla}
\right) \tilde j_{\parallel}
&=& {1 \over ne} \nabla_{\parallel} (p_e+\tilde p_e )
- \nabla_{\parallel} \tilde\phi
- \eta_{\parallel} \tilde j_{\parallel}, \\
\left( \partial_t + {\bf v}_E\cdot\boldsymbol{\nabla} \right) (p_e+\tilde p_e)
&=& {T_e \over e} \nabla_{\parallel}\tilde j_{\parallel}
- p_e \nabla_{\parallel} \tilde u_\parallel
+ p_e {\cal K} (\tilde \phi)
- {T_e \over e} {\cal K} (\tilde p_e), \nonumber \\ \\
n M_i \left( \partial_t + {\bf v}_E\cdot\boldsymbol{\nabla} \right)\tilde u_\parallel &=& -
\nabla_\parallel (p_e+\tilde p_e ),
\end{eqnarray}
with the parallel magnetic potential $\tilde A_\parallel$ given by
$\tilde j_\parallel= - (c / 4 \pi) \nabla_{\perp}^2 \tilde A_\parallel $
through Ampere's law, and the vorticity
$\tilde\Omega=\nabla_\perp^2\tilde\phi.$
Here, $\eta_\parallel$ is the Braginskii parallel resistivity,
$m_e$ and $M_i$ are the electron and ion masses, $n$ is the electron
(and ion) density, and $T_e$ is the electron temperature with pressure
$p_e=n T_e$.
The dynamical character of the system is further determined by a set of
parameters characterising the relative role of dissipative, inertial and
electromagnetic effects in addition to the driving by gradients of
density and temperature.
The flux surface geometry of a tokamak enters into the fluid and gyrofluid
equations
via the curvilinear generalisation of differentiation operators and via
inhomogeneity of the magnetic field strength $B$. The different scales of
equilibrium and fluctuations parallel and perpendicular to the magnetic
field motivate the use of field aligned flux coordinates.
The differential operators in the field aligned frame are the parallel
gradient
\begin{equation}
\nabla_\parallel = (1/B) ({\bf B} + \tilde{\bf B}_\perp) \cdot \boldsymbol{\nabla},
\end{equation}
with magnetic field disturbances $\tilde{\bf B}_\perp = (-1/B) {\bf B}
\times \boldsymbol{\nabla} \tilde A_\parallel$ as additional nonlinearities,
the perpendicular Laplacian
\begin{equation}
\nabla_\perp^2=\boldsymbol{\nabla}\cdot[(-1/B^2){\bf B}\times({\bf B}\times\boldsymbol{\nabla})],
\end{equation}
and the curvature operator
\begin{equation}
{\cal K}=\boldsymbol{\nabla}\cdot[(c/B^2){\bf B}\times\boldsymbol{\nabla}].
\end{equation}
The DALF equations constitute the most basic model containing the
principal interactions of dissipative drift wave physics in a general closed
magnetic flux surface geometry. The drift wave coupling effect is
described by $\nabla_\parallel$ acting upon
$\tilde p_e/p_e-e\tilde\phi/T_e$ and $\tilde j_\parallel$,
while interchange forcing is described by ${\cal K}$
acting upon $\tilde p_e$ and $\tilde\phi$ \cite{scott97a}.
In the case of tokamak edge turbulence, the drift wave effect is qualitatively
more important \cite{scott02}, while the most important role for ${\cal K}$ is
to regulate the zonal flows \cite{scott03pla}.
Detailed accounts on the role of magnetic field geometry shape in tokamaks
and stellarators on plasma edge turbulence can be found in
Refs.~\cite{akpop06,akjpp06} and \cite{akppcf00,jenko02}, in particular with
respect to effects of magnetic field line shear \cite{akprl03} and curvature
\cite{akpop05}.
Typical experimental parameters are those of the ASDEX Upgrade
(AUG) edge pedestal plasma in L-mode near the L-H transition for deuterium
ions with $M_i = M_D$:
electron density $n_e = 3 \cdot 10^{13} \mbox{cm}^{-3}$, temperatures $T_e =
T_i = 70$ eV, magnetic field strength $B = 2.5$ T, major radius $R = 165$ cm,
perpendicular gradient length $L_{\perp} = 4.25$ cm, and safety factor $q =
3.5$.
For the above experimental values, the dimensionless parameters of the
DALF/GEM system introduced above take the values: collisionality $C = 0.51 \hat\epsilon (\nu_e
L_\perp / c_s) (m_e/M_i)=5$, magnetic induction $\hat \beta = \hat\epsilon (4
\pi p_e / B^2)=1$, electron inertia $\hat \mu = \hat\epsilon (m_e/M_i) =5$
and ion inertia $\hat\epsilon = (qR / L_{\perp})^2=18350$.
The normalised values are similar in edge plasmas of other large
tokamaks like JET. The parameters can partially be matched even by smaller
devices like the torsatron TJ-K at the University of Stuttgart \cite{tjk,tjk-sim},
which therefore provides ideal test cases for comparison between
simulations and experiment \cite{tjk07}.
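Most of the quoted dimensionless values can be reproduced directly from the experimental numbers. The sketch below works in SI units, using $\mu_0 p_e/B^2$ as the SI equivalent of the cgs expression $4\pi p_e/B^2$; it recovers $\hat\epsilon$ and $\hat\mu$ closely, while $\hat\beta$ comes out near $1.25$ rather than exactly $1$ due to rounding of the quoted inputs. The collisionality $C$ additionally depends on the Coulomb logarithm entering $\nu_e$ and is therefore omitted here:

```python
import math

# Dimensionless DALF parameters from the quoted AUG edge values.
e, m_e = 1.602e-19, 9.109e-31   # elementary charge, electron mass
M_D = 3.344e-27                 # deuteron mass, kg
Te_eV, n = 70.0, 3e19           # T_e in eV, density in m^-3
B, R, L_perp, q = 2.5, 1.65, 4.25e-2, 3.5   # T, m, m, dimensionless
mu0 = 4e-7 * math.pi

eps_hat = (q * R / L_perp) ** 2          # ion inertia parameter
mu_hat = eps_hat * m_e / M_D             # electron inertia parameter
p_e = n * Te_eV * e                      # electron pressure, Pa
beta_hat = eps_hat * mu0 * p_e / B**2    # = eps_hat * (4 pi p_e/B^2) cgs

print(f"eps_hat = {eps_hat:.0f}, mu_hat = {mu_hat:.2f}, "
      f"beta_hat = {beta_hat:.2f}")
```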
\begin{figure}
\includegraphics[width=11.5cm]{kendl_fig6.eps}
\caption{\label{f:aug} \sl
Computation of plasma edge turbulence in the magnetic field of a divertor
tokamak fusion experiment, using the DALF3 model as described in the text
[Background figure: ASDEX Upgrade, Max-Planck-Institute for Plasma Physics].
}
\end{figure}
A review and introduction on drift wave theory in inhomogeneous
magnetised plasmas has been presented by Horton in Ref.~\cite{horton-rev},
although its main emphasis is placed on linear dynamics.
An excellent introduction and overview on turbulence in magnetised plasma and
its nonlinear properties by Scott can be found in Ref.~\cite{slnp-scott}, and
a very detailed survey on drift wave theory with emphasis on the plasma
edge is given by Scott in Refs.~\cite{habil-scott,scott07}.
However, no tokamak edge turbulence simulation has yet reproduced the
important threshold transition to the high confinement mode known from
experimental fusion plasma operation.
The possibility to obtain a confinement transition within first principle
computations of edge turbulence will have to be studied with models
that at least include full temperature dynamics, realistic flux surface
geometry, global profile evolution including the equilibrium, and a coupling
of edge and SOL regions with realistic sheath boundary conditions. In addition
the model still has to maintain sufficient grid resolution, grid deformation
mitigation, and energy plus enstrophy conservation in the vortex/flow system.
Such ``integrated'' fusion plasma turbulence simulation codes are currently
under development.
The necessary computing power to simulate the extended physics models and
computation domains is going to be available within the next few years.
This may facilitate international activities (for example within the European
Task Force on Integrated Tokamak Modelling) towards a
``computational'' tokamak plasma with a first-principles treatment of both
transport and equilibrium across the whole cross section.
The objective of this extensive project in Computational Plasma Physics is to
provide the means for a direct comparison of our theoretical
understanding with the emerging burning-plasma physics of the next large
international fusion experiment ITER.
\section*{Acknowledgements}
This work was supported by the Austrian Science Fund FWF under contract
P18760-N16, and by the European Communities under the Contract of
Associations between Euratom and the Austrian Academy of Sciences and
carried out within the framework of the European Fusion Development
Agreement. The views and opinions herein do not necessarily reflect those of
the European Commission.
\section{Introduction
\label{sec:intro}}
\citet{Stromgren_1939} idealized photoionized nebulae around
hot stars as static, spherical regions with
a uniform density of ionized gas out to a bounding radius $R$.
The Str\"omgren sphere model continues
to serve as the starting point for studies of \ion{H}{2} regions
around hot stars.
However, a number of physical effects
lead to departures from the simple Str\"omgren sphere model:
dynamical expansion of the \ion{H}{2} region if the pressure in the surrounding
neutral medium cannot confine the ionized gas;
deviations from sphericity due to nonuniform density;
motion of the star relative to the gas;
injection of energy and momentum by a stellar wind;
absorption of H-ionizing photons by dust grains;
and radiation pressure acting on gas and dust.
Each of these effects has been the object
of a number of investigations, beginning with the study of
ionization fronts by \citet{Kahn_1954}.
\citet{Savedoff+Greene_1955} appear to have been the first to discuss
the expansion of a spherical \ion{H}{2} region in an initially uniform
neutral medium.
\citet{Mathews_1967,Mathews_1969} and
\citet{Gail+Sedlmayr_1979}
calculated the dynamical expansion of
an \ion{H}{2} region produced by an O star in a medium that was initially
neutral, including the effects of radiation
pressure acting on the dust.
\citet{Mathews_1967,Mathews_1969}
showed that radiation pressure on dust
would produce low-density central cavities in \ion{H}{2} regions.
More
recently, \citet{Krumholz+Matzner_2009} reexamined the role of
radiation pressure on the expansion dynamics of \ion{H}{2} regions,
concluding that radiation pressure
is generally unimportant for \ion{H}{2} regions ionized
by a small number of stars, but is important for
\newtext{the expansion dynamics of}
giant \ion{H}{2} regions
surrounding clusters containing many O-type stars.
Their study concentrated on the forces acting on the dense shell
of neutral gas and dust bounding the \ion{H}{2} region, hence they
did not consider the density structure within the ionized region.
Dust absorbs
$h\nu > 13.6\eV$ photons that would otherwise be able to ionize hydrogen,
thereby reducing the extent of the ionized zone.
\citet{Petrosian+Silk+Field_1972} developed analytic approximations for
dusty \ion{H}{2} regions.
They assumed the gas density to be uniform, with a constant dust-to-gas
ratio, and found that dust could absorb a substantial fraction of the
ionizing photons in dense \ion{H}{2} regions.
Petrosian et al.\
did not consider the effects of radiation pressure.
\citet{Dopita+Groves+Sutherland+Kewley_2003,
Dopita+Fischera+Crowley+etal_2006}
constructed models of compact \ion{H}{2} regions,
including the effects of radiation pressure
on dust, and presented models for different ionizing stars
and bounding pressures. In these models, radiation pressure produces
a density gradient within the ionized gas.
The present paper provides a systematic discussion of
the structure of dusty \ion{H}{2} regions that are
assumed to be in equilibrium with an external bounding pressure.
The assumptions and governing equations are presented in
\S\ref{sec:equilibrium}, where it is shown that dusty \ion{H}{2}
regions are essentially described by a 3-parameter family
of similarity solutions.
In \S\ref{sec:results} we show density profiles for selected cases,
as well as surface brightness
profiles. The characteristic ionization parameter
$U_{1/2}$ and the fraction $(1-\fion)$
of the ionizing photons absorbed by dust are calculated.
Dust grain drift is examined in \S\ref{sec:dust drift}, where it is shown that
it can alter the dust-to-gas ratio in the centers of high
density \ion{H}{2} regions.
The results are discussed in \S\ref{sec:discussion}, and summarized in
\S\ref{sec:summary}.
\section{\label{sec:equilibrium}
Equilibrium Model}
Consider the idealized problem of a static, spherically-symmetric
equilibrium \ion{H}{2} region
ionized by a point source, representing either a single star or
a compact stellar cluster.
Assume a constant
dust-to-gas ratio (the validity of this assumption will be examined
later).
For simplicity, ignore scattering, and assume $\sigma_d$, the dust
absorption cross section per H nucleon, to be independent of
photon energy $h\nu$
over the $\sim 5\eV$ to $\sim30\eV$ range containing most of the
stellar power.
Let the star have luminosity $L=L_n+L_i=L_{39}10^{39}\erg\s^{-1}$,
where $L_n$ and $L_i$
are the luminosities in $h\nu < 13.6\eV$ and $h\nu > 13.6\eV$ photons,
respectively.
The rate of emission of $h\nu>13.6\eV$ photons is
$Q_0\equiv 10^{49}Q_{0,49}\s^{-1}$
and the mean energy of the ionizing photons is
$\langle h\nu\rangle_i\equiv L_i/Q_0$.
A single main sequence star of spectral type O6V
has $L_{39}=0.80$ and $Q_{0,49}=0.98$
\citep{Martins+Schaerer+Hillier_2005}.
A compact cluster of OB stars might be treated as a point source with
much larger values of $Q_{0,49}$ and $L_{39}$.
Ignore He, and assume the H to be nearly fully ionized, with
photoionization balancing ``Case B'' radiative recombination,
with ``on-the-spot'' absorption of $h\nu>13.6\eV$ recombination radiation.
Take the effective radiative recombination coefficient to be
$\alpha_B\approx2.56\times10^{-13}T_4^{-0.83}\cm^3\s^{-1}$ for
$0.5\ltsim T_4\ltsim 2$, with $T_4\equiv T/10^4\K$, where $T$ is
the gas temperature.
Assume the gas to be in dynamical equilibrium (the neutral gas outside
the ionized zone is assumed to provide a confining pressure).
Static equilibrium then requires that the force per unit volume
from radiation pressure be balanced by the pressure gradient:
\beq \label{eq:dynamical equilibrium}
n\sigma_d \frac{\left[L_n e^{-\tau}+L_i \phi(r)\right]}{4\pi r^2 c} +
\alpha_B n^2\frac{\langle h\nu\rangle_i}{c} -
\frac{d}{dr}\left(2nkT\right)
= 0 ~~~,
\eeq
where $n(r)$ is the proton density,
$L_i \phi(r)$ is the power in $h\nu>13.6\eV$ photons crossing a sphere
of radius $r$, and $\tau(r)$ is the dust absorption optical depth.
Eq.\ (\ref{eq:dynamical equilibrium}) underestimates
the radiation pressure force, because it assumes that recombination radiation
(including Lyman-$\alpha$)
and cooling radiation escape freely.
The functions $\phi(r)$ and $\tau(r)$ are determined by
\beqa \label{eq:dphi/dr}
\frac{d\phi}{dr} &=&
-\frac{1}{Q_0} \alpha_B n^2 4\pi r^2
- n\sigma_d \phi ~~~,~~~
\\ \label{eq:dtau/dr}
\frac{d\tau}{dr} &=& n\sigma_d ~~~,
\eeqa
with boundary conditions $\phi(0)=1$ and $\tau(0)=0$.
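As a minimal numerical sketch of eqs.\ (\ref{e:dphi/dr}) and (\ref{e:dtau/dr}) — made under the simplifying assumption of a \emph{uniform} density $n$, so that the coupling to the equilibrium condition (\ref{eq:dynamical equilibrium}) is ignored — the shrinkage of the ionized zone by dust absorption can be illustrated by forward integration until the ionizing flux fraction $\phi$ reaches zero; the parameter values below are illustrative:

```python
import math

# Forward (Euler) integration of d(phi)/dr and d(tau)/dr for an
# assumed uniform density n; in the full problem n(r) follows from
# the pressure-balance equation, which is ignored here.
Q0 = 1e49          # ionizing photons per second
alphaB = 2.56e-13  # cm^3 s^-1  (T ~ 1e4 K)
n = 1e3            # density in cm^-3, assumed uniform
R_S = (3 * Q0 / (4 * math.pi * alphaB * n**2)) ** (1 / 3)  # Stromgren

def ion_radius(sigma_d, steps=200000):
    """Radius (cm) at which phi -> 0, i.e. the edge of the H II region."""
    dr = 1.5 * R_S / steps
    phi, tau, r = 1.0, 0.0, 0.0
    for _ in range(steps):
        dphi = -(alphaB * n**2 * 4 * math.pi * r**2) / Q0 \
               - n * sigma_d * phi
        phi_new = phi + dphi * dr
        if phi_new <= 0.0:
            return r + dr * phi / (phi - phi_new)  # linear interpolation
        phi = phi_new
        tau += n * sigma_d * dr                    # dust optical depth
        r += dr
    return r

print(f"dust-free radius / R_S = {ion_radius(0.0) / R_S:.3f}")
print(f"dusty radius / R_S     = {ion_radius(1e-21) / R_S:.3f}")
```

With $\sigma_d = 0$ the integration recovers the classical Str\"omgren radius; with $\sigma_d = 10^{-21}\cm^2$ at this density the ionized zone shrinks appreciably, since $n\sigma_d R_S \approx 2$.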
Define a characteristic density and length scale
\beqa \label{eq:define n_0}
n_0
&\equiv& \frac{4\pi\alpha_B}{Q_0}
\left(\frac{2ckT}{\alpha_B\langle h\nu\rangle_i}\right)^3
=
4.54\times10^5 ~\frac{T_4^{4.66}}{Q_{0,49}}
\left(\frac{18\eV}{\langle h\nu\rangle_i}\right)^3\cm^{-3}
~~~,~~~
\\
\lambda_0
&\equiv& \frac{Q_0}{4\pi\alpha_B}
\left(\frac{\alpha_B\langle h\nu\rangle_i}{2ckT}\right)^2
=
2.47\times10^{16}~\frac{Q_{0,49}}{T_4^{2.83}}
\left(\frac{\langle h\nu\rangle_i}{18\eV}\right)^2
\cm
~~~,~~~
\eeqa
and the dimensionless parameters
\beqa \label{eq:define beta}
\beta &\equiv& \frac{L_n}{L_i} = \frac{L}{L_i} - 1
= 3.47~\frac{L_{39}}{Q_{0,49}}
\left(\frac{18\eV}{\langle h\nu\rangle_i}\right) -1
~~~,~~~
\\ \label{eq:define gamma}
\gamma
&\equiv& \left(\frac{2ckT}{\alpha_B \langle h\nu\rangle_i}\right)
\sigma_d
= 11.2 ~T_4^{1.83}\left(\frac{18\eV}{\langle h\nu\rangle_i}\right)
\left(\frac{\sigma_d}{10^{-21}\cm^2}\right)
~~~.~~~
\eeqa
The parameter $\beta$, the
ratio of the power in non-ionizing photons to the power in photons
with $h\nu>13.6\eV$, depends solely on the stellar spectrum.
We take $\beta=3$ as our standard value, corresponding to the
spectrum of a $T_\star=32000\K$ blackbody, but we also
consider $\beta=2$ ($T_\star=45000\K$) and $\beta=5$ ($T_\star=28000\K$);
the latter value might apply to a cluster of O and B stars.
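The numerical coefficients quoted in eqs.\ (\ref{eq:define n_0})--(\ref{eq:define gamma}) can be verified directly from the definitions, using cgs constants and the fiducial values $T_4 = 1$, $\langle h\nu\rangle_i = 18\eV$, $Q_{0,49} = 1$, $L_{39} = 1$, and $\sigma_d = 10^{-21}\cm^2$:

```python
import math

# Reproduce the quoted coefficients for n_0, lambda_0, beta, gamma
# (cgs units throughout; fiducial parameter values as in the text).
c = 2.998e10            # speed of light, cm/s
kB = 1.381e-16          # Boltzmann constant, erg/K
eV = 1.602e-12          # erg per eV
T = 1e4                 # gas temperature, K
alphaB = 2.56e-13 * (T / 1e4) ** -0.83   # case-B coefficient, cm^3/s
hnu = 18.0 * eV         # mean ionizing photon energy, erg
Q0 = 1e49               # ionizing photon rate, s^-1
sigma_d = 1e-21         # dust cross section per H, cm^2

x = 2 * c * kB * T / (alphaB * hnu)      # recurring ratio, cm^-2
n0 = (4 * math.pi * alphaB / Q0) * x**3
lam0 = (Q0 / (4 * math.pi * alphaB)) / x**2
beta = 1e39 / (Q0 * hnu) - 1             # for L = 1e39 erg/s
gamma = x * sigma_d

print(f"n0 = {n0:.3g} cm^-3, lambda0 = {lam0:.3g} cm")
print(f"beta = {beta:.3g}, gamma = {gamma:.3g}")
```

The printed values agree with the quoted $4.54\times10^5\cm^{-3}$, $2.47\times10^{16}\cm$, $\beta = 3.47 - 1$, and $\gamma = 11.2$ to better than a percent.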
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=10cm,angle=270]
{f1.eps}
\caption{\label{fig:sigma_rp}
\capsize
(a) Absorption cross section per H, and
(b) radiation pressure cross section per H, averaged over
blackbody spectra, as functions
of the blackbody temperature $T_\star$, for the
dust models of \citet[][WD01]{Weingartner+Draine_2001a} and
\citet[][ZDA04]{Zubko+Dwek+Arendt_2004}.
Broken lines show averages over $h\nu<13.6\eV$ only, appropriate
for dust in neutral gas.
}
\end{center}
\end{figure}
Momentum can be transferred to a dust grain by photon absorption, but also
by scattering.
The cross section $\sigma_d$
appearing in eq.\ (\ref{eq:dynamical equilibrium})
should be $\langle\sigma_\radpr\rangle$, the
radiation pressure cross section per H,
$\sigma_\radpr(\nu)
\equiv
\sigma_{\rm abs} + (1-\langle\cos\theta\rangle)\sigma_{\rm sca}$,
averaged over
the spectrum of the radiation field at radius $r$, where
$\sigma_{\rm abs}(\nu)$ and $\sigma_{\rm sca}(\nu)$ are the absorption
and scattering cross section per H, and $\langle\cos\theta\rangle$ is
the mean value of the cosine of the scattering angle $\theta$ for photons
of frequency $\nu$.
In eqs.\ (\ref{eq:dphi/dr}) and (\ref{eq:dtau/dr}),
$\sigma_d$
characterizes the effectiveness of the dust in attenuating
the radiation field.
While scattering does not destroy the photon, it does increase the
probability of the photon undergoing subsequent absorption.
Thus, the value of $\sigma_d$ in eqs.\ (\ref{eq:dphi/dr}) and (\ref{eq:dtau/dr})
should exceed $\langle\sigma_{\rm abs}\rangle$.
Figure \ref{fig:sigma_rp}(a) shows the dust absorption cross section per H
nucleon averaged over a blackbody spectrum, for two dust models
\citep{Weingartner+Draine_2001a, Zubko+Dwek+Arendt_2004} that
reproduce the wavelength-dependent extinction in the diffuse interstellar
medium using mixtures of PAHs, graphite, and amorphous silicate grains.
Fig.\ \ref{fig:sigma_rp}(b)
shows that $\langle\sigma_\radpr\rangle$,
the radiation pressure cross section
averaged over blackbody spectra,
is only slightly larger than $\langle\sigma_{\rm abs}\rangle$.
Given the uncertainties in the nature of the dust in \ion{H}{2}
regions, it is reasonable to ignore the distinction between
$\langle\sigma_\radpr\rangle$ and the attenuation cross section
and simply take $\sigma_d=\langle\sigma_\radpr\rangle$ in
eqs.\ (\ref{eq:dynamical equilibrium}--\ref{eq:dtau/dr}).
For dust characteristic of the diffuse ISM, one could
take
$\langle\sigma_\radpr\rangle\approx1.5\times10^{-21}\cm^2{\,\rm H}^{-1}$ for
$2.5\times10^4\K\ltsim T_{\rm rad}\ltsim 5\times10^4\K$.
However, dust within an \ion{H}{2} region
may differ from average interstellar dust.
For example, the small-size end of the size distribution might be
suppressed, in which case $\sigma_d$ would be reduced.
Low-metallicity galaxies will also have lower values of $\sigma_d$, simply
because there is less material out of which to form grains.
In the present work we will assume a factor $\sim$1.5 reduction in $\sigma_d$
relative to the local diffuse ISM, taking
$\sigma_d\approx 1\times10^{-21}\cm^2{\,\rm H}^{-1}$
as the nominal value, but larger and smaller values of
$\sigma_d$ will also be considered.
The dimensionless parameter $\gamma$ (defined in eq.\ \ref{eq:define gamma})
depends also on the gas temperature $T$ and on the
mean ionizing photon energy $\langle h\nu\rangle_i$, but these are not likely
to vary much for \ion{H}{2} regions around OB stars.
We take $\gamma=10$ as a standard value,
but will also consider $\gamma=5$ and
$\gamma=20$.
Low-metallicity systems would be
characterized by small values of $\gamma$.
Switching to dimensionless variables
$y\equiv r/\lambda_0$, $u\equiv n_0/n$, the governing equations
(\ref{eq:dynamical equilibrium}--\ref{eq:dtau/dr})
become
\beqa \label{eq:dudy}
\frac{du}{dy} &=& -1 - \gamma\left(\beta e^{-\tau} + \phi\right)\frac{u}{y^2}
~~~,~~~
\\ \label{eq:dphidy}
\frac{d\phi}{dy} &=& -\frac{y^2}{u^2} - \gamma\frac{\phi}{u}
~~~,~~~
\\ \label{eq:dtaudy}
\frac{d\tau}{dy} &=& \frac{\gamma}{u} ~~~,
\eeqa
with initial conditions
$\phi(0) = 1$ and
$\tau(0) = 0$.
The solutions are defined for $0<y\leq\ymax$, where $\ymax$ is
determined by the boundary condition $\phi(\ymax)=0$.
The actual radius of the ionized zone is $R=\ymax\lambda_0$.
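The system (\ref{eq:dudy}--\ref{eq:dtaudy}) is straightforward to integrate; the following minimal Python sketch marches outward with fixed-step RK4 until $\phi=0$. The starting point $y_0=1$, the value $u_0=10^6$, and the step size are illustrative choices (varying $u_0$ selects different members of the similarity family), and a production calculation would use an adaptive integrator:

```python
# Minimal sketch: integrate the dimensionless equations for (u, phi, tau)
# outward with fixed-step RK4 until phi reaches 0 (the ionization edge).
# y0, u0, and h are illustrative; varying u0 picks out different members
# of the similarity family.
import math

def rhs(y, w, beta, gamma):
    u, phi, tau = w
    du   = -1.0 - gamma * (beta * math.exp(-tau) + phi) * u / y**2
    dphi = -(y**2) / u**2 - gamma * phi / u
    dtau = gamma / u
    return (du, dphi, dtau)

def integrate(beta=3.0, gamma=10.0, y0=1.0, u0=1.0e6, h=1.0e-4):
    """Return (ymax, (u, phi, tau) at the edge); R = ymax * lambda_0."""
    y, w = y0, (u0, 1.0, 0.0)
    while w[1] > 0.0 and w[0] > 0.0 and y < 50.0:
        k1 = rhs(y, w, beta, gamma)
        k2 = rhs(y + 0.5*h, tuple(wi + 0.5*h*ki for wi, ki in zip(w, k1)),
                 beta, gamma)
        k3 = rhs(y + 0.5*h, tuple(wi + 0.5*h*ki for wi, ki in zip(w, k2)),
                 beta, gamma)
        k4 = rhs(y + h, tuple(wi + h*ki for wi, ki in zip(w, k3)),
                 beta, gamma)
        w = tuple(wi + (h/6.0)*(a + 2*b + 2*c + d)
                  for wi, a, b, c, d in zip(w, k1, k2, k3, k4))
        y += h
    return y, w

ymax, (u_edge, phi_edge, tau_edge) = integrate()
```

Quantities such as $\langle n\rangle$, $\nrms$, and $p_{\rm edge}$ then follow by quadrature of the resulting $u(y)$.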
For each solution $u(y)$
the mean density is
\beq
\langle n \rangle = \frac{3n_0}{\ymax^3}\int_0^{\ymax} \frac{1}{u} y^2 dy
~~~,
\eeq%
the root-mean-square density is
\beq
\nrms \equiv
n_0
\left[\frac{3}{\ymax^3}
\int_0^{\ymax} \frac{1}{u^2}y^2 dy\right]^{1/2}
~~~,
\eeq
and the gas pressure at the edge of the \ion{H}{2} region is
\beq
p_{\rm edge} = 2 n(R) kT = \frac{2n_0 kT}{u(\ymax)}
~~~.
\eeq%
Let
\beq
R_{s0}\equiv \left(\frac{3Q_0}{4\pi\nrms^2\alpha_B}\right)^{1/3}
= 2.10\times10^{18} \frac{Q_{0,49}^{1/3}}{n_{\rm rms,3}^{2/3}}
T_4^{0.28}\cm
~~~
\eeq
be the radius of a dustless Str\"omgren sphere with density
$\nrms = 10^3 n_{{\rm rms},3}\cm^{-3}$.
The fraction of the $h\nu>13.6\eV$
photons that are absorbed by H is simply
\beq
\fion = \left(\frac{R}{R_{s0}}\right)^3 ~~~.~~~
\eeq
For given $(\beta,\gamma)$, varying the initial value\footnote{
For $\gamma>0$, $u\propto \exp[(\beta+1)\gamma/y]\rightarrow \infty$
as $y\rightarrow 0$, and the integration must start at
some small $y>0$.}
of $u=n_0/n$ at some fixed $y=r/\lambda_0$
generates solutions with different density profiles.
Therefore the full set of solutions forms a three-parameter family
of similarity solutions $u(y)$, $\phi(y)$, and $\tau(y)$,
parametrized
by $\beta$, $\gamma$, and a third parameter.
The third parameter can be taken to be $Q_0\nrms$.
For dusty \ion{H}{2} regions, an alternative choice for the
third parameter is the dust optical depth on a path $R_{s0}$
with density $\nrms$:
\beqa \label{eq:taud0}
\tau_{d,0} \equiv \nrms R_{s0} \sigma_d &=&
2.10\left(Q_{0,49}n_{\rm rms,3}\right)^{1/3}
T_4^{0.28}
\frac{\sigma_d}{10^{-21}\cm^2}
\\
&=& 0.188 \gamma \left(Q_{0,49}n_{\rm rms,3}\right)^{1/3}
T_4^{-1.55}
\frac{\langle h\nu\rangle_i}{18\eV}
~~~.
\eeqa
The static \ion{H}{2} regions described by
eqs.\ (\ref{eq:dynamical equilibrium}\,--\,\ref{eq:dtau/dr})
are determined by 7 distinct dimensional quantities:
three parameters describing the central star
($Q_0$, $\langle h\nu\rangle_i$, and $L_n$),
the recombination rate coefficient $\alpha_B$,
the thermal energy $kT$,
the dust cross section per nucleon $\sigma_d$,
and the external pressure $p_{\rm edge}$ confining the \ion{H}{2}
region.
According to the present analysis, this 7-parameter family of solutions
actually reduces to a 3-parameter family of similarity solutions.
The dimensionless parameters $\beta$ and $\gamma$, plus choice of an
initial value for
the function $u$ near $y=0$, suffice to determine the scaled density
profile
$n(r)/n_0$ and radius $\ymax=R/\lambda_0$: this is the 3-parameter
family of similarity solutions.
Specifying numerical values for the ratios $Q_0/\alpha_B$ and
$kT/(\alpha_B\langle h\nu\rangle_i)$ fixes the values of $n_0$ and
$\lambda_0$, thus giving $n(r)$ for $r<R$.
Thus far we have invoked 5 independent parameters, but have not actually
specified either $kT$ or $\alpha_B$.
Specifying $kT$
and $\alpha_B$ -- the 6th and 7th parameters -- allows us to
compute the actual values of $Q_0$ and $\langle h\nu\rangle_i$,
and the bounding pressure $p_{\rm edge}=2n(R)kT$.
If the ``initial value'' of $u$ near the origin is taken as a
boundary condition, then
$p_{\rm edge}$ emerges as a derived quantity. However, if we
instead regard $p_{\rm edge}$ as a boundary condition,
then the initial value of
$u$ ceases to be a free parameter, and instead must be found
(e.g., using a shooting technique) so as to give the
correct boundary pressure $p_{\rm edge}$: the initial value of $u$
is thus determined by the
7 physical parameters ($Q_0$, $\langle h\nu\rangle_i$, $L_n$,
$\alpha_B$, $T$, $\sigma_d$, and $p_{\rm edge}$).
Thus we see how the 3-parameter family of dimensionless similarity
solutions corresponds to a 7-parameter family of physical solutions.
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=6cm,angle=270]%
{f2a.eps}
\includegraphics[width=6cm,angle=270]%
{f2b.eps}
\includegraphics[width=6cm,angle=270]%
{f2c.eps}
\includegraphics[width=6cm,angle=270]%
{f2d.eps}
\caption{\label{fig:nprofs gamma=0, 5, 10, 20}
\capsize
Normalized density profiles of static equilibrium \ion{H}{2} regions, as
a function of $r/R$, where $R$ is the radius of the ionized
region. Profiles are shown for 7 values of
$Q_0\nrms$; the numerical values given in the legends assume
$T_4=0.94$ and $\langle h\nu\rangle_i=18\eV$.
(a) Dustless ($\gamma=0$);
(b) $\gamma=5$;
(c) $\gamma=10$;
(d) $\gamma=20$.
}
\end{center}
\end{figure}
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=10cm,angle=0]%
{f3.eps}
\caption{\label{fig:U}
\capsize
Ionization parameter $U_{1/2}$ at the half-ionization
radius in dusty H\,II regions (see text),
calculated assuming $T_4=0.94$ and $\langle h\nu\rangle_i=18\eV$.
}
\end{center}
\end{figure}
\section{\label{sec:results}
Results}
Equations (\ref{eq:dudy}-\ref{eq:dtaudy}) can be integrated numerically.
Figure \ref{fig:nprofs gamma=0, 5, 10, 20}a shows the solution for the
case where no dust is present ($\gamma=0$).
Radiation pressure associated with photoionization produces a
density gradient in the \ion{H}{2} region, but it is
modest unless $Q_0\nrms$ is very large.
The central density is nonzero.
For
$Q_{0,49}\nrms\ltsim 10^3\cm^{-3}$,
the density is uniform to within $\pm15\%$.
As discussed above,
the dust abundance relative to H is characterized by the parameter
$\gamma$.
Density profiles are shown in
Fig.\ \ref{fig:nprofs gamma=0, 5, 10, 20}b-d for $\beta=3$ and
$\gamma=5, 10, 20$, corresponding approximately to
$\sigma_d=0.5, 1, 2 \times10^{-21}\cm^2$.
With dust present,
the density formally goes to zero at $r=0$.
For fixed $\gamma$,
the size of the low-density central cavity
(as a fraction of the radius $R$ of the ionization front)
increases with increasing $Q_0\nrms$.
The enhancement of the density near the ionization front also becomes more
pronounced as $Q_0\nrms$ is increased.
For
$\beta=3$, $\gamma=10$ and $Q_{0,49}\nrms=10^5\cm^{-3}$,
we find $n(R)=2.5\nrms$.
\begin{figure}[bt]
\begin{center}
\includegraphics[width=6cm,angle=270]%
{f4a.eps}
\includegraphics[width=6cm,angle=270]%
{f4b.eps}
\includegraphics[width=6cm,angle=270]%
{f4c.eps}
\includegraphics[width=6cm,angle=270]%
{f4d.eps}
\caption{\label{fig:Iprofs gamma=0, 5, 10, 20}
\capsize
Normalized emission measure (EM) profiles for a cut
across the center of \ion{H}{2} regions with (a) $\gamma=0$ (no dust),
(b) $\gamma=5$, (c) $\gamma=10$, and (d) $\gamma=20$.
Profiles are shown for selected values of $Q_0\nrms$.
Numerical values of $Q_{0,49}\nrms$ assume $T_4=0.94$
and $\langle h\nu\rangle_i=18\eV$.
}
\end{center}
\end{figure}
The state of ionization of the gas is determined by the hardness of the
radiation field, and the value of the dimensionless ``ionization parameter''
\beq
U \equiv \frac{n(h\nu > \IH)}{\nH} ~~~,
\eeq
where $n(h\nu>\IH)$ is the density of photons with $h\nu>\IH$.
Within an \ion{H}{2} region, the value of $U$ varies radially.
As a representative value,
we evaluate $U_{1/2}$, the value at the
``half-ionization'' radius $R_{1/2}$, the radius within
which 50\% of the H ionizations and recombinations take place.\footnote{
Some authors \citep[e.g.,][]{Dopita+Fischera+Crowley+etal_2006}
use the volume-averaged ionization parameter $\langle U\rangle_V$.
For a uniform density dustless H\,II region,
$\langle U\rangle_V=(81/256\pi)^{1/3}(\alpha_B^{2/3}/c)(Q_0\nrms)^{1/3}
= 2.83\, U_{1/2}$.}
In a
uniform density dustless \ion{H}{2} region, $R_{1/2} = 2^{-1/3}R_{s0}$
is the same as the half-mass radius, and
\beq \label{eq:Uhalf, no dust}
U_{1/2}^{\rm (no\,dust)}
=
\frac{\alpha_B^{2/3}}{(72\pi)^{1/3}c} \left(Q_0 \nrms\right)^{1/3} ~~~.
\eeq
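For reference, eq.\ (\ref{eq:Uhalf, no dust}) is easily evaluated numerically; the sketch below assumes the case B recombination coefficient $\alpha_B\approx2.6\times10^{-13}\cm^3\s^{-1}$ appropriate near $10^4\K$:

```python
# Evaluate U_{1/2}^{(no dust)} of eq. (Uhalf, no dust).  ALPHA_B is an
# assumed representative case B value near 1e4 K.
import math

ALPHA_B = 2.6e-13   # cm^3 s^-1, case B recombination near 1e4 K (assumed)
C_LIGHT = 2.998e10  # cm s^-1

def U_half_nodust(Q0_nrms):
    """Q0_nrms in s^-1 cm^-3; returns the dimensionless U_{1/2}."""
    return (ALPHA_B**(2.0/3.0) / ((72.0*math.pi)**(1.0/3.0) * C_LIGHT)
            * Q0_nrms**(1.0/3.0))

print(U_half_nodust(1e52))   # Q_{0,49} n_rms = 1e3 cm^-3; ~5e-3
```

This is the asymptote that the curves in Figure \ref{fig:U} approach at small $Q_0\nrms$.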
For our present models,
\beq
U_{1/2} = \frac{Q_0}{4\pi \lambda_0^2 n_0 c}
\frac{\phi(y_{1/2})u(y_{1/2})}{y_{1/2}^2}~~~,
\eeq
where $y_{1/2}=R_{1/2}/\lambda_0$ is the value of $y$ within which
50\% of the H ionizations and recombinations take place.
Figure \ref{fig:U} shows
$U_{1/2}$ as a function of $Q_0\nrms$ for
static dusty \ion{H}{2} regions with radiation pressure,
for selected values of $\beta$ and $\gamma$.
For small values of $Q_0 \nrms$, dust and radiation pressure are negligible
and $U_{1/2}$ coincides with $U_{1/2}^{\rm (no\,dust)}$
(eq.\ \ref{eq:Uhalf, no dust}).
However, for large values of $Q_0\nrms$, $U_{1/2}$
falls below $U_{1/2}^{\rm (no\,dust)}$.
For $\gamma\approx 10$ -- corresponding to the
dust abundances that we consider to be likely for Galactic \ion{H}{2}
regions -- we see that $U_{1/2} \approx 0.07 \pm 0.02$ for
$Q_{0,49}\nrms \gtsim 10^4 \cm^{-3}$.
The emission measure $EM(b)=\int n_e^2 ds$
is shown
as a function of impact parameter $b$
in Figure \ref{fig:Iprofs gamma=0, 5, 10, 20}.
For small values of $Q_0\nrms$, the intensity profile is close to the
semicircular profile of a uniform density sphere.
As $Q_0\nrms$ is increased, the profile becomes flattened, but,
if no dust is present ($\gamma=0$,
Fig.\ \ref{fig:nprofs gamma=0, 5, 10, 20}a), the ionized gas only begins
to develop an appreciable
central minimum for $Q_{0,49}\nrms\gtsim 10^{4.5}\cm^{-3}$.
When dust is present, however, the profiles are strongly affected.
For standard parameters $\beta=3,\gamma=10$, the emission measure shows
a pronounced central minimum for $Q_{0,49}\nrms \gtsim 10^3\cm^{-3}$,
with a peak-to-minimum ratio $>2$ for $Q_{0,49}\nrms \gtsim 10^4\cm^{-3}$.
As $Q_{0}\nrms$ is
increased,
the ionized gas becomes concentrated in a thin, dense
shell,
the peak intensity near the edge rises, and the
central emission measure changes from $EM(0)=2\nrms^2 R$ for small
$Q_0\nrms$ to
$EM(0)\rightarrow (2/3)\nrms^2 R$ as $Q_0\nrms\rightarrow\infty$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=14.0cm,angle=0,
clip=true,trim=0.0cm 0.0cm 0.0cm 0.0cm]%
{f5abcd.eps}
\caption{\label{fig:model summ}
\capsize
For dusty \ion{H}{2} regions with $\gamma=5, 10, 20$
and $\beta=2, 3, 5$, as a function of $\tau_{d0}$:
(a) ratio $\nrms/\langle n\rangle$
of rms density to mean density;
(b) ratio $n(R)/\nrms$ of the edge density to the rms
density;
(c) center-to-edge dust optical depth $\tau(R)$;
(d) ratio of peak emission measure/central emission measure.
Results are for $\langle h\nu\rangle_i=18\eV$.
}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=11cm,angle=0]%
{f6.eps}
\caption{\label{fig:log fion vs taud0}
\capsize
Fraction $\fion$ of the $h\nu>13.6\eV$ photons that
photoionize H in dusty \ion{H}{2} regions with
radiation pressure, as a function of $\tau_{d0}$,
for $\beta=2,3,5$ and $\gamma=5,10,20$.
Calculated assuming $\langle h\nu\rangle_i=18\eV$.
The dotted lines show the fitting formula
(\ref{eq:fion fit}) for each
case.
Also shown is $\fion$ calculated for assumed uniform density
\citep{Petrosian+Silk+Field_1972}.
}
\end{center}
\end{figure}
When $\tau_{d0}\gg1$, the present models have
the ionized gas concentrated in a thin, dense, shell.
Figure \ref{fig:model summ}a shows the ratio of the rms density
$\nrms$
to the mean density $\langle n\rangle$ as a function of $\tau_{d0}$.
The highest ionized density occurs at the outer edge of the ionized
zone, and Figure \ref{fig:model summ}b shows the ratio
$n_{\rm edge}/\nrms$ as a function of $\tau_{d0}$.
In the low density limit $Q_0\nrms \rightarrow 0$,
the dust optical depth from center to edge $\tau(R)\approx \tau_{d0}$.
The actual dust optical depth from center to edge is shown in
Figure \ref{fig:model summ}c.
When the \ion{H}{2} region develops a dense shell, which occurs for
$\tau_{d0}\gtsim 3$, the actual dust optical depth $\tau(R)$ is significantly
smaller than the value $\tau_{d0}$. Figure \ref{fig:model summ}c shows
that for $\tau_{d0}=40$, for example, the actual dust optical depth
$\tau(R)$ is only in the range 1--2.3, depending on the values of $\beta$ and
$\gamma$.
The shell-like structure is also apparent in the ratio of the peak intensity
to the central intensity.
As seen in Figures \ref{fig:Iprofs gamma=0, 5, 10, 20}(b-d), dust causes
the peak intensity to be off-center.
For fixed $\beta$ and $\gamma$, the ratio of peak intensity
to central intensity $I({\rm peak})/I({\rm center})$
increases monotonically with increasing $\tau_{d0}$, as shown
in Fig.\ \ref{fig:model summ}(d).
Because the shell
is dense, radiative recombination is rapid, the neutral hydrogen
fraction is enhanced, and H atoms can compete with dust to absorb
$h\nu>13.6\eV$ photons.
Figure \ref{fig:log fion vs taud0} shows $\fion$,
the fraction of the $h\nu>13.6\eV$
photons emitted by the star that photoionize H (i.e., are not absorbed
by dust), as a function of the parameter $\tau_{d0}$.
Results are shown for $\beta=2, 3, 5$ and
$\gamma=5, 10, 20$.
For $2\leq\beta\leq5$, $5\leq\gamma\leq20$, and $0\leq\tau_{d0}\leq40$,
the numerical results in Figure \ref{fig:log fion vs taud0} can be
approximated by the fitting formula
\beqa \label{eq:fion fit}
\fion(\beta,\gamma,\tau_{d0})
&\approx&
\frac{1}{1+(2/3+AB)\tau_{d0}} + \frac{AB\tau_{d0}}{1+B\tau_{d0}}
\\ \label{eq:define A}
A &=& \frac{1}{1+0.75\gamma^{0.65}\beta^{-0.44}}
\\ \label{eq:define B}
B &=& \frac{0.5}{1+0.1(\gamma/\beta)^{1.5}}
\eeqa
where $\beta$, $\gamma$, and $\tau_{d0}$ are given by
(\ref{eq:define beta}), (\ref{eq:define gamma}), and
(\ref{eq:taud0}).
The form of eqs.\ (\ref{eq:fion fit}--\ref{eq:define B})
has no physical significance, but
eq.\ (\ref{eq:fion fit}) can be used to estimate the total
H ionization rate $\fion Q_0$ in dusty \ion{H}{2} regions.
Even for large
values of $\tau_{d0}$, Fig.\ \ref{fig:log fion vs taud0} shows that
$\sim$1/3 of the $h\nu>13.6\eV$ photons are
absorbed by hydrogen.
This contrasts with the uniform-density models
of \citet{Petrosian+Silk+Field_1972}, where the fraction of the
$h\nu>13.6\eV$ photons that are absorbed by the gas goes to zero as
$\tau_{d0}$ becomes large.
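The fit is trivial to implement; a minimal Python transcription of eqs.\ (\ref{eq:fion fit}--\ref{eq:define B}):

```python
# Fitting formula (eqs. fion fit / define A / define B) for the fraction
# of h nu > 13.6 eV photons absorbed by hydrogen, as a function of the
# dimensionless parameters beta, gamma, tau_d0 defined in the text.

def f_ion(beta, gamma, tau_d0):
    A = 1.0 / (1.0 + 0.75 * gamma**0.65 * beta**(-0.44))
    B = 0.5 / (1.0 + 0.1 * (gamma / beta)**1.5)
    return (1.0 / (1.0 + (2.0/3.0 + A*B) * tau_d0)
            + A*B*tau_d0 / (1.0 + B*tau_d0))

print(f_ion(3.0, 10.0, 0.0))    # 1.0: all ionizing photons reach H as tau_d0 -> 0
print(f_ion(3.0, 10.0, 40.0))   # ~1/3 for the standard beta = 3, gamma = 10
```

Note that $\fion\rightarrow A$ as $\tau_{d0}\rightarrow\infty$; for $\beta=3$, $\gamma=10$ this limit is $A\approx0.33$, the $\sim$1/3 floor noted above.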
\section{\label{sec:dust drift}
Dust Drift}
\subsection{Gas Drag vs.\ Radiation Pressure}
Eq.\ (\ref{eq:dynamical equilibrium})
assumes the dust to be tightly coupled to the gas,
so that the radiation pressure force on the dust can be considered to
act directly on the gas.
In reality,
radiation pressure will drive the dust grains through the plasma.
If the grains approach their terminal velocities (i.e., acceleration
can be neglected) then, as before,
it can be assumed that the radiation pressure
force is effectively applied to the gas. However, the motion of the
dust grains will lead to changes in the dust/gas ratio, due to
movement of the grains from one zone to another, as well as because of
grain destruction. Here we estimate the drift velocities of grains.
Let $Q_\radpr\pi a^2$ be the radiation pressure cross section
for a grain of radius $a$.
Figure \ref{fig:Q_radpr vs. a} shows $Q_\radpr(a,\lambda)$ averaged over
blackbody radiation fields with $T=25000\K$, 32000\,K, and 40000\,K,
for carbonaceous grains and amorphous silicate grains.
For spectra characteristic of O stars, $\langle Q_\radpr\rangle \approx 1.5$
for $0.02\micron\ltsim a \ltsim 0.25\micron$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,angle=0]%
{f7.eps}
\caption{\label{fig:Q_radpr vs. a}
\capsize
Spectrum-averaged radiation pressure efficiency factor
$\langle Q_\radpr\rangle$ as a function of grain radius $a$,
for $T=25000\K$, 32000~K, and 40000~K blackbody spectra.
For temperatures characteristic of O stars,
$\langle Q_\radpr\rangle\approx 1.5$ to within 20\%
for $0.02\micron \ltsim a \ltsim 0.25\micron$.
}
\end{center}
\end{figure}
If the magnetic field $B=0$, the terminal velocity $v_d$ for a grain
at distance $r$
is determined by balancing the forces due to radiation pressure
and gas drag:
\beqa
\frac{L(r)}{4\pi r^2 c}\pi a^2 \langle Q_\radpr\rangle
&=&
2\pi a^2 nkT\,G(s) ~~~~,~~~~ s \equiv \frac{v_d}{\sqrt{2kT/\mH}} ~~~,
\eeqa
where the drag function $G(s)$,
including both collisional drag and Coulomb drag,
can be approximated by \citep{Draine+Salpeter_1979a}
\beqa
G(s) &\approx& \frac{8s}{3\sqrt{\pi}}\left(1+\frac{9\pi}{64}s^2\right)^{1/2}
+ \left(\frac{eU}{kT}\right)^2 \ln\Lambda \frac{s}{(3\sqrt{\pi}/4+s^3)}
~~~,~~~
\\
\Lambda&=&\frac{3}{2ae}\frac{kT}{|eU|}\left(\frac{kT}{\pi n_e}\right)^{1/2}
= 6.6\times10^6 a_{-5}^{-1} \frac{kT}{|eU|} T_4^{1/2} n_3^{-1/2}
~~~,~~~
\eeqa
where $U$ is the grain potential, $a_{-5}\equiv a/10^{-5}\cm$,
and $n_3\equiv n_e/10^3\cm^{-3}$.
The drag force from the electrons is smaller than that from
the ions by a factor $\sim\sqrt{m_e/m_p}\approx 1/43$, and can be neglected.
The charge on the grains will be determined by collisional charging
and photoelectric emission. Collisional charging would result in
$eU/kT\approx-2.51$ \citep{Spitzer_1968}, or $U\approx-2.16 T_4\Volt$.
Photoelectric charging will dominate close to the star, but is expected to
result in potentials $U\ltsim10\Volt$.
Taking
$|eU/kT|\approx 2.5$ and $\ln\Lambda\approx14.8$
as representative,
\beq
G(s)\approx \left[1.50\left(1+\frac{9\pi}{64}s^2\right)^{1/2}
+\frac{69.5}{1+4s^3/3\sqrt{\pi}}\right]s ~~~.
\eeq
Note that $G(s)$ is not monotonic: as $s$ increases from 0,
$G(s)$ reaches a peak value $\sim42$ for $s\approx 0.89$, but then
begins to decline with increasing $s$ as the Coulomb drag
contribution falls.
At sufficiently large $s$, the direct collisional term becomes large
enough that $G(s)$ rises above $\sim42$ and continues to rise thereafter.
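This non-monotonic structure of $G(s)$ is easily verified numerically; a minimal sketch using the approximation above (the scan and bracketing ranges are arbitrary choices):

```python
# Scan the approximate drag function G(s) (for |eU/kT| = 2.5, ln Lambda = 14.8)
# to locate its local maximum and the larger solution of G(s) = G_peak.
import math

def G(s):
    collisional = 1.50 * math.sqrt(1.0 + (9.0*math.pi/64.0) * s*s)
    coulomb = 69.5 / (1.0 + 4.0*s**3 / (3.0*math.sqrt(math.pi)))
    return (collisional + coulomb) * s

# local maximum on 0 < s < 3 (coarse scan; adequate for illustration)
s_peak = max((i * 1e-3 for i in range(1, 3000)), key=G)

# larger solution of G(s) = G(s_peak): bisect on [3, 10], where G(s) < G_peak
# below the root and G(s) > G_peak above it
lo, hi = 3.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if G(mid) < G(s_peak) else (lo, mid)

print(s_peak, G(s_peak), 0.5*(lo + hi))   # ~0.89, ~42, ~6.2
```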
The drag time for a grain of density
$\rho=3\g\cm^{-3}$ in \ion{H}{2} gas is
\beq
\tau_{\rm drag} = \frac{Mv}{F_{\rm drag}}
= 295 \left(\frac{a_{-5}}{n_3 T_4^{1/2}}\right) ~ \frac{s}{G(s)}\yr ~~~.
\eeq
For $n_3\gtsim0.01$ this is sufficiently short that each grain
can be assumed to be moving at its terminal velocity $v_d$, with
isothermal Mach number
$s\equiv v_d/\sqrt{2kT/\mH}$
determined by the dimensionless equation
\beq \label{eq:eq for G(s)}
G(s) =
\left[\phi(y)+\beta e^{-\tau(y)}\right]\frac{u(y)}{y^2}\langle Q_\radpr\rangle
~~~,~~~ y\equiv \frac{r}{\lambda_0}
~~~.~~~
\eeq
Eq. (\ref{eq:eq for G(s)}) is solved to find
$s(r)$.
For $20 < G < 42$, there are three values of $s$ for which the drag
force balances the radiation pressure force. The intermediate solution
is unstable; we choose the smaller solution,\footnote{
This solution is physically relevant if
the drift speed began with $s\ltsim0.9$
and increased with time.}
which means that $s$ undergoes
a discontinuous jump from $\sim 0.9$ to $6.2$ at $G\approx 42$.
The resulting terminal velocity $v(r)$ is shown in Figure
\ref{fig:vdrift gamma=10} for 7
values of $Q_{0,49}\nrms$.
The velocities in the interior can be very large, but in the outer part
of the bubble, where most of the gas and dust are located
[$\tau(r)/\tau(R)>0.5$], the drift velocities are much more modest.
This is visible in Fig.\ \ref{fig:vdrift gamma=10}a, where the
drift speeds become small as $r\rightarrow R$, and is seen more
clearly in Fig.\ \ref{fig:vdrift gamma=10}b, which shows drift speeds
as a function of normalized optical depth.
The range
$0.5 < \tau(r)/\tau(R) < 1$ contains more than 50\% of the dust, and
throughout this zone the drift speeds are $\ltsim 0.3\kms$ even for
$Q_{0,49}\nrms$ as large as $10^7\cm^{-3}$.
With drift speeds $v_d\ltsim 0.3\kms$, grains will not be destroyed, except
perhaps by shattering in occasional collisions between grains with
different drift speeds. However, for large values of
$\nrms$, these grains
are located close to the boundary, the drift
times may be short, and the
grains may be driven out of the \ion{H}{2} region and into
the surrounding shell of dense neutral gas.
This will be discussed further below.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6cm,angle=270]%
{f8a.eps}
\includegraphics[width=6cm,angle=270]%
{f8b.eps}
\caption{\label{fig:vdrift gamma=10}
\capsize
Radial drift velocities $v_{d,r}$ for six
different \ion{H}{2} regions, all with $\beta=3$ and
$\gamma=10$, for $T_4=0.94$, $\langle h\nu\rangle_i=18\eV$,
$Q_\radpr=1.5$, $|eU/kT|=2.5$, and $B=0$.
(a) $v_{d,r}$ vs $r/R$.
All solutions have large drift velocities
near the center, which will result in removal of the dust
from the interior.
Drift velocities increase with increasing $Q_0\nrms$.
(b) $v_{d,r}$ as a function of dust column density
$\tau(r)$.
Even if $B=0$ or $\bB\parallel\br$,
drift velocities $v_d>75\kms$ occur only in a region
with a small fraction of the dust.
In most of the volume, drift will not result in grain destruction.
}
\end{center}
\end{figure}
\subsection{Magnetic Fields}
Let
$\epsilon_B \equiv B^2/16\pi nkT$ be the ratio of magnetic pressure to
gas pressure.
The importance of magnetic fields for the grain dynamics
is determined by the dimensionless ratio
$\omega \tau_{\rm drag}$, where $\omega\equiv QB/Mc$ is the gyrofrequency
for a grain with charge $Q$ and mass $M$ in a magnetic field $B$:
\beq \label{eq:omega tau}
\left(\omega\tau_{\rm drag}\right)^2 =
17.3 \frac{T_4^2}{n_3 a_{-5}^2}
\left(\frac{\epsilon_B}{0.1}\right)
\left(\frac{eU/kT}{2.5}\right)^2
\left(\frac{71}{G(s)/s}\right)^2
~~~,~~~\epsilon_B\equiv\left(\frac{B^2/8\pi}{2nkT}\right) ~~~.~~~
\eeq
If $|eU/kT|\approx2.5$ and $\ln\Lambda\approx15$, then $(G(s)/s)\approx71$
for
$s\ltsim 0.5$.
Let the local magnetic field be
$\bB=B(\hat{\br}\cos\theta+\hat{\by}\sin\theta)$.
The steady-state drift velocity is
\beqa \label{eq:vdrift}
v_d &=& \left(\frac{F_{\rm rad}\taudrag}{M}\right)
\sqrt{\frac{1+(\omega\taudrag)^2\cos^2\theta}{1+(\omega\taudrag)^2}}
~~~,~~~
\\
v_{d,r} &=& \left(\frac{F_{\rm rad}\taudrag}{M}\right)
\frac{1+(\omega\taudrag)^2\cos^2\theta}{1+(\omega\taudrag)^2}
~~~,~~~
\\
v_{d,y} &=& \left(\frac{F_{\rm rad}\taudrag}{M}\right)
\frac{(\omega\taudrag)^2\sin\theta\cos\theta}{1+(\omega\taudrag)^2}
~~~,~~~
\\
v_{d,z} &=& -\left(\frac{F_{\rm rad}\taudrag}{M}\right)
\frac{\omega\taudrag\sin\theta}{1+(\omega\taudrag)^2}
~~~,~~~
\eeqa
where $v_{d,r}$, $v_{d,y}$, $v_{d,z}$ are the radial and transverse
components.
If $\sin\theta\rightarrow 0$, the magnetic field does not affect
the radiation-pressure-driven drift velocity, but magnetic effects
can strongly suppress the radial drift if
$\omega\taudrag\gg 1$ and $\cos\theta\ll 1$.
The magnetic field strength is uncertain, but it is unlikely that the
magnetic energy density will be comparable to the gas pressure; hence
$\epsilon_B\ltsim 0.1$.
From eq.\ (\ref{eq:omega tau}) it is then apparent that if the magnetic field
is strong ($\epsilon_B\approx 0.1$),
magnetic effects on the grain dynamics can be important
in low density \ion{H}{2} regions, but will not
be important for very high densities:
$(\omega\tau_{\rm drag})^2 \ltsim 1$ for
$n_3\gtsim 170 a_{-5}^{-2}\epsilon_B$.
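A short sketch of the suppression implied by eqs.\ (\ref{eq:omega tau}) and (\ref{eq:vdrift}); the default arguments follow the fiducial values quoted above:

```python
# Magnetic suppression of radial grain drift: (omega tau_drag)^2 from
# eq. (omega tau), and the factor multiplying the B = 0 radial drift
# velocity in eq. (vdrift).  Defaults follow the fiducial values in the text.
import math

def omega_tau_sq(T4=1.0, n3=1.0, a_m5=1.0, eps_B=0.1, eU_kT=2.5,
                 G_over_s=71.0):
    return (17.3 * T4**2 / (n3 * a_m5**2) * (eps_B / 0.1)
            * (eU_kT / 2.5)**2 * (71.0 / G_over_s)**2)

def radial_factor(wt2, theta):
    """v_{d,r} relative to its B = 0 value, for field angle theta."""
    return (1.0 + wt2 * math.cos(theta)**2) / (1.0 + wt2)

wt2 = omega_tau_sq(n3=1.0)           # strong field, n_e ~ 1e3 cm^-3
print(radial_factor(wt2, 0.0))       # B parallel to r: no suppression
print(radial_factor(wt2, math.pi/2)) # B perpendicular to r: 1/(1 + wt2)
```

Setting $(\omega\tau_{\rm drag})^2=1$ with $T_4\approx1$ recovers the criterion quoted above, $n_3\gtsim170\,a_{-5}^{-2}\epsilon_B$.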
\subsection{Drift Timescale}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8cm,angle=0]%
{f9.eps}
\caption{\label{fig:tdrift}
\capsize
Drift timescale
$t_{\rm drift}/Q_{0,49}$ (see eq.\ \ref{eq:tdrift})
for
$\beta=3$ and $\gamma=5$, 10, and 20
(assuming $T_4=0.94$, $\langle h\nu\rangle_i=18\eV$).
The dust grains are assumed to have $\langle Q_\radpr\rangle=1.5$
and $|eU/kT|=2.5$.
Solid lines are for $B=0$ (or $\bB\parallel\br$).
Broken lines are for $a=0.03\micron$,
and $\epsilon_B Q_{0,49}=0.1$ and $10^2$.
}
\end{center}
\end{figure}
When radiation pressure effects are important, the gas and dust are
concentrated in a shell that becomes increasingly thin as
$Q_0 \nrms$ is increased. The drift velocities where most of the
dust is located are not large (see
Fig.\ \ref{fig:vdrift gamma=10}b), but the grains are also not far from
the ionization front.
The timescale on which dust drift would be important can be estimated by
calculating the drift velocity at the radius $r_{0.5}$ defined by
$\tau(r_{0.5})=0.5\tau(R)$. More than 50\% of the dust has $r_{0.5}<r<R$.
Figure \ref{fig:tdrift} shows the characteristic drift time
\beq \label{eq:tdrift}
t_{\rm drift} \equiv \frac{R-r_{0.5}}{v_{d,r}(r_{0.5})}
~~~.~~~
\eeq
If no magnetic field is present, the drift velocity depends only on
$T$ and the dimensionless quantities
$\{\phi,\tau,u,y,\langle Q_\radpr\rangle\}$
(see eq.\ \ref{eq:eq for G(s)}).
For fixed $T$ and $Q_0\nrms$, the dimensionless solution is unchanged
while $\lambda_0\propto Q_0$; hence $R\propto Q_0$ and thus
$t_{\rm drift}\propto Q_0$.
Figure \ref{fig:tdrift} shows $t_{\rm drift}/Q_{0,49}$.
For $Q_{0,49}=1$, \ion{H}{2} regions with $\nrms>10^3\cm^{-3}$ have
$t_{\rm drift}<10^6\yr$ if magnetic effects are negligible.
If a magnetic field is present with $\bB\perp\br$ and
$\epsilon_B=0.1$, then the grain drift is slowed, but
drift times of $<1\Myr$ are found
for $\nrms > 10^4\cm^{-3}$.
Therefore, compact and ultracompact \ion{H}{2} regions around
single O stars are able to lower the dust/gas ratio by means of
radial drift of the dust on time scales $\ltsim 1\Myr$.
However, if the O star is moving relative to the gas cloud with a
velocity of more than a few $\kms$, then individual fluid elements
pass through the ionized zone on timescales that may be shorter
than the drift timescale, precluding substantial changes in the
dust-to-gas ratio.
Grain removal by drift can also occur for giant \ion{H}{2} regions.
As an example, consider a giant \ion{H}{2} region ionized by
a compact cluster of $\sim$$10^3$ O stars
emitting ionizing photons at a rate $Q_0=10^{52}\s^{-1}$.
For $\nrms=10^3\cm^{-3}$, we have $Q_{0,49}\nrms=10^6\cm^{-3}$,
and we see that if $B=0$, the drift timescale is only
$t_{\rm drift}\approx 2\times10^5\yr$.
If a magnetic field is present with $\bB\perp\br$ and
$\epsilon_B=0.1$, then from Figure \ref{fig:tdrift} the
drift timescale $t_{\rm drift}$ is increased, but only to $\sim10^6\yr$.
It therefore appears possible for radiation-pressure driven drift
to remove dust from giant \ion{H}{2} regions provided they are
sufficiently dense.
Aside from magnetic effects, the drift speeds at a given location
depend only on
$\langle Q_\radpr\rangle$ and $T_4$ (see eq.\ \ref{eq:eq for G(s)}).
Figure \ref{fig:Q_radpr vs. a} shows
that $\langle Q_\radpr\rangle$ is constant to within a factor $\sim$1.5
for $a\gtsim 0.010\micron$.
Hence radiation-pressure-driven drift would act to drive
grains with $a\gtsim 0.01\micron$ outwards.
Smaller grains will drift as well, but more slowly.
Because of this, the dust-to-gas ratio in the centers of \ion{H}{2} regions
should in general be lower than in the gas prior to ionization.
The dust-to-gas ratio will first be reduced in the center, where
the drift speeds (see Fig.\ \ref{fig:vdrift gamma=10}) are large.
Dust drift will also alter the dust-to-gas ratio
in the outer ionized material, initially raising it by moving dust
outwards from the center.
In an initially uniform neutral cloud, the ionization front
expands rapidly at early times
\citep[see, e.g., Fig. 37.3 in][]{Draine_2011a} but
in gas with $n_3\gtsim 1$, at late times the ionization front
will slow to velocities small enough for dust grains to actually drift
outward across the ionization front, lowering the overall dust-to-gas
ratio within the \ion{H}{2} region.
\subsection{Grain Destruction}
\citet{Arthur+Kurtz+Franco+Albarran_2004} computed models of uniform density
\ion{H}{2} regions including the effects of dust destruction by
sublimation or evaporation,
finding that the dust/gas ratio can be substantially reduced
near the star.
If the maximum temperature at which a grain can survive is $T_{\rm sub}$,
and the Planck-averaged absorption efficiencies are
$Q_{\rm uv}$ and $Q_{\rm ir}$ for $T=T_\star$ and $T=T_{\rm sub}$, then grains
will be destroyed within a distance $r_{\rm sub}$ with
\beq
\frac{r_{\rm sub}}{R_{s0}} = 2.82\times10^{-3}L_{39}^{1/6}n_{\rm rms,3}^{2/3}
\left(\frac{10^3\K}{T_{\rm sub}}\right)^2
\left(\frac{L_{39}}{Q_{0,49}}\right)^{1/3}
\left(\frac{Q_{\rm uv}/Q_{\rm ir}}{10^2}\right)^{1/2} ~~~.
\eeq
For parameters of interest (e.g., $L_{39}/Q_{0,49}\approx1$,
$L_{39}\ltsim10^2$)
we find
$r_{\rm sub}/R_{s0}\ll 1$ for
$\nrms\ltsim 10^5\cm^{-3}$,
and sublimation
would therefore destroy only a small fraction of the dust.
As we have seen, we expect radiation pressure to
drive grains through the gas, with velocity given by
eq.\ (\ref{eq:vdrift}).
Drift velocities $v_d\gtsim75\kms$ will lead to sputtering by impacting
He ions, with sputtering yield $Y(\He)\approx 0.2$ for
$80\ltsim v \ltsim 500\kms$
\citep{Draine_1995b}.
For hypersonic motion, the grain of initial radius $a$ will be
destroyed after traversing a column density
\beq
\nH\Delta r = \frac{\nH}{n_{\rm He}}\frac{4\rho a}{Y(\He)\mu}
\approx
2\times10^{20}a_{-5}\left(\frac{Y(\He)}{0.2}\right)\cm^{-2}
\eeq
for a grain density $\rho/\mu=1\times10^{23}\cm^{-3}$,
appropriate for either silicates (e.g., FeMgSiO$_4$,
$\rho/\mu\approx3.8\g\cm^{-3}/25\mH=9\times10^{22}\cm^{-3}$)
or carbonaceous material ($2\g\cm^{-3}/12\mH=1.0\times10^{23}\cm^{-3}$).
Therefore the dust grain must traverse material with (initial) dust optical
depth
\beq \label{eq:Delta taud}
\Delta \tau_d = \sigma_d \nH\Delta r =
0.2 \sigma_{d,-21} a_{-5} \left(\frac{Y(\He)}{0.2}\right)
\eeq
if it is to be substantially eroded by sputtering.
However, Fig.\ \ref{fig:vdrift gamma=10}b shows that even in the
absence of magnetic effects, $v_d\gtsim 75\kms$
occurs only in a central region with $\tau_d<0.05$.
Therefore sputtering arising from radiation-pressure-driven drift
will not appreciably affect the dust content.
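The sputtering column and the corresponding dust optical depth above can be checked directly; a sketch, with the He abundance $n_{\rm H}/n_{\rm He}=10$ taken as an assumed standard value:

```python
# Hedged check of the sputtering column N_H*dr and the dust optical depth
# Delta tau_d traversed before grain destruction.
# n_H/n_He = 10 (standard He abundance) is an assumed input.

nH_over_nHe = 10.0       # assumed H/He number ratio
rho_over_mu = 1.0e23     # grain (density / atomic mass), cm^-3, as in the text
a = 1.0e-5               # grain radius in cm (a_-5 = 1)
Y_He = 0.2               # He sputtering yield
sigma_d = 1.0e-21        # dust cross section per H, cm^2 (sigma_d,-21 = 1)

N_H_dr = nH_over_nHe * 4.0 * rho_over_mu * a / Y_He   # column to destroy grain
dtau_d = sigma_d * N_H_dr                              # dust optical depth traversed

print(N_H_dr, dtau_d)  # ~2e20 cm^-2, ~0.2
```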
\section{\label{sec:discussion}
Discussion}
\subsection{Absorption of Ionizing Photons by Dust}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,angle=0]%
{f10.eps}
\caption{\label{fig:inoue}
\capsize
Photoionizing fraction $\fion$ for 12 Galactic \ion{H}{2} regions,
as estimated by \citet{Inoue_2002} from infrared and radio
observations, vs.\ $Q_{0,49}n_e T_4^{0.83}$ (see text).
$\fion$ cannot exceed 1; the high value found for
G298.22-0.34 therefore gives some indication of the uncertainties in the estimation
of $\fion$.
Solid lines: $\fion$ for \ion{H}{2} regions with radiation
pressure for dust characterized by $\gamma=5$, 10, and 20.
Broken line: $\fion$ for uniform \ion{H}{2} regions with
$\sigma_d=10^{-21}\cm^2\,{\rm H}^{-1}$.}
\end{center}
\end{figure}
For a sample of 13 Galactic \ion{H}{2} regions, \citet{Inoue_2002}
used infrared and radio continuum observations to obtain the
values of $\fion$ shown in
Figure \ref{fig:inoue}.
The estimated values of $\fion$ are much larger than would be expected
for uniform \ion{H}{2} regions with dust-to-gas ratios comparable to the
values found in neutral clouds.
\citet{Inoue_2002} concluded that the central regions of
these \ion{H}{2} regions must be dust-free, noting that this was
likely to be due to the combined effects of stellar winds and
radiation pressure on dust.
As seen in Fig.\ \ref{fig:inoue}, the values of $\fion$ found by
Inoue are entirely consistent with what is expected for static
\ion{H}{2} regions
with radiation pressure for $5 \ltsim \gamma \ltsim 20$
(corresponding to $0.5 \ltsim \sigma_{d,-21}\ltsim 2$), with no need
to appeal to stellar winds or grain destruction.
\subsection{The Density-Size Correlation for \ion{H}{2} Regions}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,angle=0]%
{f11a.eps}
\includegraphics[width=8cm,angle=0]%
{f11b.eps}
\caption{\label{fig:ne vs D}
\capsize
Density $\nrms$ vs.\ diameter $D$.
(a) Models with
$(\beta,\gamma)=$ (2,1), (2,5), (3,10), and (5,20).
Results shown were calculated for $T_4=0.94$, $\langle h\nu\rangle_i=18\eV$.
Solid lines show models with $p_{\rm edge}$ fixed, and
$Q_0$ varying from $10^{48}\s^{-1}$ to $10^{54}\s^{-1}$.
Broken lines show models with $Q_0$ fixed, and $p_{\rm edge}/k$
varying from $10^4\cm^{-3}\K$ to $10^{11}\cm^{-3}\K$.
(b)
Model grid for $\beta=3$, $\gamma=10$, together with
observed values for various samples
of Galactic and extragalactic \ion{H}{2} regions.
Cyan open triangles: \citet{Kennicutt_1984}.
Blue diamonds: \citet{Churchwell+Goss_1999}.
Green crosses: \citet{Garay+Lizano_1999}.
Cyan crosses: \citet{Kim+Koo_2001}.
Black open stars: \citet{Martin-Hernandez+Vermeij+vanderHulst_2005}.
Red solid triangles: radio sample from \citet{Hunt+Hirashita_2009}.
Red open circles: HST sample from \citet{Hunt+Hirashita_2009}.
}
\end{center}
\end{figure}
\ion{H}{2} regions come in many sizes, ranging from \ion{H}{2}
regions powered by a single O star, to giant \ion{H}{2} regions ionized
by a cluster of massive stars. The physical size of the \ion{H}{2}
region is obviously determined both by the total ionizing output
$Q_0$ provided by the ionizing stars, and the r.m.s.\ density $\nrms$ of
the ionized gas, which is regulated by the pressure $p_{\rm edge}$ of the
confining medium.
With the balance between photoionization and recombination determining
the size of an \ion{H}{2} region, an anticorrelation between size $D$
and density $\nrms$ is expected, and was observed as soon as large samples of
\ion{H}{2} regions became available
\citep[e.g.,][]{Habing+Israel_1979,Kennicutt_1984}.
For dustless \ion{H}{2} regions, one expects
$\nrms\propto D^{-1.5}$ for fixed $Q_0$, but for various samples
relations close to
$\nrms\propto D^{-1}$ were reported
\citep[e.g.,][]{Garay+Rodriguez+Moran+Churchwell_1993,
Garay+Lizano_1999,
Kim+Koo_2001,
Martin-Hernandez+Vermeij+vanderHulst_2005}.
For ultracompact \ion{H}{2} regions,
\citet{Kim+Koo_2001} attribute the $n_e\propto D^{-1}$ trend
to a ``champagne flow'' and the hierarchical structure of the dense
gas in the star-forming region, but
\citet{Arthur+Kurtz+Franco+Albarran_2004} and
\citet{Dopita+Fischera+Crowley+etal_2006} argue that
the $n_e\propto D^{-1}$ trend is
a result of both absorption by dust and radiation
pressure acting on dust in static \ion{H}{2} regions.
\citet{Hunt+Hirashita_2009} recently reexamined the size-density
relationship. They interpreted the size-density relation for different
observational samples in terms of models with different star formation rates
[and hence different time evolution of the ionizing output $Q_0(t)$],
and differences in the density of the neutral cloud into which the
\ion{H}{2} region expands.
Their models did not include the effects of radiation pressure on dust;
at any time the ionized gas in an \ion{H}{2} region
was taken to have uniform density, resulting in overestimation of
the dust absorption.
Figure \ref{fig:ne vs D}a shows a grid of $\nrms$ vs.\ $D$
for the present models, for
four combinations of $(\beta,\gamma)$.
While differences between the models with different $(\beta,\gamma)$
can be seen, especially for high $Q_0$ and high $p_{\rm edge}$,
the overall trends are only weakly dependent on $\beta$ and $\gamma$, at
least for $1\ltsim\gamma\ltsim 20$.
Figure \ref{fig:ne vs D}b shows the model grid for
$\beta=3$ and $\gamma=5$ together with observed values of $D$ and $\nrms$
from a number of different studies.
It appears that observed \ion{H}{2} regions -- ranging from
\ion{H}{2} regions ionized by one or at most a few O stars
($Q_0<10^{50}\s^{-1}$) to ``super star clusters'' powered by
up to $10^3-10^5$ O stars ($Q_0=10^{52}-10^{54}\s^{-1}$)
can be accommodated by the present static equilibrium models
with external pressures in the range
$10^4
\ltsim p/k \ltsim 10^{10.3}\cm^{-3}\K$.
Note that for diameters $D\gtsim 10^2\pc$, the assumption of
static equilibrium is unlikely to be justified, because the sound-crossing
time $(D/2)/15\kms\gtsim 3 \Myr$ becomes longer than the lifetimes of
high-mass stars.
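The quoted sound-crossing time follows directly; a quick check (assuming 1 pc $=3.086\times10^{13}$ km and 1 Myr $=3.156\times10^{13}$ s):

```python
# Hedged check of the sound-crossing-time estimate for D ~ 100 pc.
pc_km = 3.086e13   # km per parsec
Myr_s = 3.156e13   # seconds per Myr

D_pc = 100.0       # H II region diameter
c_s = 15.0         # km/s, ionized-gas sound speed used in the text

t_sc = (D_pc / 2.0) * pc_km / c_s / Myr_s   # sound-crossing time in Myr
print(t_sc)  # ~3.3 Myr, comparable to or longer than massive-star lifetimes
```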
The fact that some \ion{H}{2} region samples
\citep[e.g.,][]{Garay+Rodriguez+Moran+Churchwell_1993,Kim+Koo_2001}
seem to obey a
$\nrms\propto D^{-1}$ relationship appears to be an artifact of the
sample selection. We see in Fig.\ \ref{fig:ne vs D}b that the overall
sample of \ion{H}{2} regions does not have a single $\nrms$-vs.-$D$
relationship. But the observations appear to be generally consistent
with the current models of dusty \ion{H}{2} regions.
\subsection{Cavities in \ion{H}{2} Regions: N49}
Even without dust present, radiation pressure from photoelectric absorption
by H and He
can alter the density profile in a static \ion{H}{2} region, lowering the
central density and enhancing the density near the edge of the ionized
region (see Fig.\ \ref{fig:nprofs gamma=0, 5, 10, 20}a). As seen in
Figure \ref{fig:Iprofs gamma=0, 5, 10, 20}a, for large values
of $Q_0\nrms$ the surface brightness profile
can be noticeably flattened.
If dust is assumed to be present, with properties typical of the dust
in diffuse clouds, the equilibrium density profile changes dramatically,
with a central cavity surrounded by a high-pressure shell of ionized gas
pushed out by radiation pressure.
In real \ion{H}{2} regions, fast stellar winds will also act to inflate
a low-density cavity, or ``bubble'', near the star; the observed
density profile will be the combined result of the stellar wind bubble
and the effects of radiation pressure.
The GLIMPSE survey \citep{Churchwell+Babler+Meade+etal_2009} has
discovered and catalogued numerous interstellar ``bubbles''.
An example is N49 \citep{Watson+Povich+Churchwell+etal_2008}, with
a ring of free-free continuum emission at 20~cm, surrounded by a ring of
$8\micron$ PAH emission.
An O6.5V star is located near the center of the N49 ring.
The image is nearly circularly symmetric, with only a modest asymmetry
that could be due to motion of the star relative to the gas.
The 20~cm image has a ring-peak-to-center
intensity ratio $I({\rm peak})/I({\rm center})\approx 2$.
Is the density profile in N49 consistent with what is expected
for radiation pressure acting on dust?
From the 2.89~Jy flux from N49 at $\lambda=20$~cm
\citep{Helfand+Becker+White+etal_2006}
and distance $5.7\pm0.6\kpc$
\citep{Churchwell+Povich+Allen+etal_2006}, the stellar source
has $Q_{0,49}\approx (0.78\pm0.16)/\fion$.
If $\fion\approx 0.6$, then $Q_{0,49}\approx (1.3\pm0.3)$.
The \ion{H}{2} region, with radius $(0.018\pm0.02)$~deg,
has $\nrms\approx197\pm63\cm^{-3}$.
Hence
$Q_{0,49}\nrms\approx 260\cm^{-3}$.
If $\sigma_{d,-21}=1$, then $\tau_{d0}\approx 1.3$.
From Fig.\ \ref{fig:log fion vs taud0}a we confirm that $\fion\approx 0.6$ for
$\tau_{d0}\approx 1.3$.
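The chain of estimates for N49 can be verified arithmetically; a sketch using only the numbers quoted above:

```python
# Hedged arithmetic behind the N49 numbers quoted above.
Q049_fion1 = 0.78       # Q_0,49 inferred assuming f_ion = 1
fion = 0.6              # adopted photoionizing fraction
Q049 = Q049_fion1 / fion
n_rms = 197.0           # rms electron density, cm^-3

print(Q049)             # ~1.3
print(Q049 * n_rms)     # ~260 cm^-3
```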
Figure \ref{fig:model summ}d shows that
an \ion{H}{2} region with $\tau_{d0}=1.3$ is expected to have
a central minimum in the emission measure, but with
$I({\rm peak})/I({\rm center})\approx 1.3$ for $\beta=3,\gamma=10$,
whereas the observed
$I({\rm peak})/I({\rm center})\approx 2$.
The central cavity in N49 is therefore significantly
larger than would be expected
based on radiation pressure alone.
While the effects of radiation pressure are not negligible in N49,
the observed
cavity must be the result of the combined effects of radiation pressure
and a
dynamically-important stellar wind (which is of course not unexpected
for an O6.5V star).
\subsection{Lyman-$\alpha$}
The original ionizing photon deposits a radial momentum $h\nu_i/c$ at the
point where it is absorbed by either a neutral atom or a dust grain.
A fraction ($1-\fion$) of the ionizing photons
are absorbed by dust; this energy is
reradiated isotropically, with
no additional force exerted on the emitting material.
Because the infrared optical depth within the \ion{H}{2} region is small,
the infrared emission escapes freely, with no dynamical effect within the
\ion{H}{2} region.
A fraction $\fion$ of the ionizing energy is absorbed by the gas.
Subsequent radiative
recombination and radiative cooling converts this energy to photons, but
the isotropic emission process itself
involves no net momentum transfer to the gas.
We have seen above that the \ion{H}{2} region can have a center-to-edge
dust optical depth
$\tau(R)\approx 1.6$ for $\tau_{d0}\gtsim 5$,
or $Q_{0,49}\nrms\gtsim10^{2}\cm^{-3}$ (cf.\ Fig.\ \ref{fig:model summ}c
with $\beta=3$, $\gamma=10$).
This optical depth applies to the $h\nu>13.6\eV$ ionizing radiation;
the center-to-edge
optical depth for the $h\nu<3.4\eV$ Balmer lines and collisionally-excited
cooling lines emitted by the ionized gas will be significantly smaller,
and much of this
radiation will escape dust absorption or scattering
within the \ion{H}{2} region.
That which is absorbed or scattered will exert a force on the dust at that
point only to the extent that
the diffuse radiation field is anisotropic.
We conclude that momentum deposition from
the Balmer lines and collisionally-excited cooling lines
within the ionized zone will be small compared to the momentum
deposited by stellar photons.
Lyman-$\alpha$ is a special case.
At low densities ($n \ll 10^3\cm^{-3}$),
$\sim70\%$ of Case B recombinations result in emission of
a Ly-$\alpha$ photon, increasing to $>95\%$ for $n>10^5\cm^{-3}$
as a result of collisionally-induced $2s\rightarrow2p$ transitions
\citep{Brown+Mathews_1970}.
After being emitted isotropically, the
photon may scatter many times before either escaping
from the \ion{H}{2} region or being absorbed by dust.
Most of the scatterings take place near the point of emission,
while the photon frequency is still close to line-center.
On average, the net radial momentum transfer per emitted photon
will likely be dominated
by the last scattering event before the photon escapes from the
\ion{H}{2} region, or by the dust absorption event if it does not.
At a given point in the nebula, the incident photons involved in these
final events will be only moderately anisotropic.
Since there is less than one
Ly-$\alpha$ photon created per Case B recombination,
the total radial momentum deposited by these final events will be
a small fraction of the radial momentum of the original ionizing photons.
\citet{Henney+Arthur_1998} estimate that dust limits the Ly-$\alpha$
radiation pressure to $\sim$$6\%$ of the gas pressure.
We conclude that Ly-$\alpha$ has only a minor effect on the density
profile within the ionized zone.
\subsection{\ion{H}{2} Region Expansion}
\ion{H}{2} regions arise when massive stars begin to emit ionizing radiation.
The development of the \ion{H}{2} region over time depends on the growth of the
ionizing output from the central star, and the expansion of the
initially high-pressure ionized gas.
Many authors \citep[e.g.,][]{Kahn_1954,Spitzer_1978}
have discussed the development
of an \ion{H}{2} region in gas that is initially neutral and uniform.
If the ionizing output from the star turns on suddenly,
the ionization front is initially
``strong R-type'', propagating supersonically without affecting the
density of the gas, slowing until it
becomes ``R-critical'', at which point it
makes a transition to ``D-type'',
with the ionization front now preceded by a shock wave producing
a dense (expanding) shell of neutral gas bounding the ionized region.
While the front is R-type, the gas density and pressure are essentially
uniform within the ionized zone.
When the front becomes D-type, a rarefaction wave propagates inward from
the ionization front, but the gas pressure (if radiation pressure effects
are not important) remains
relatively uniform within the ionized region, because the motions
in the ionized gas are subsonic.
When radiation pressure effects are included, the instantaneous density profile
interior to the ionization front is expected to be similar to the profile
calculated for the static equilibria studied here.
Let $V_i$ be the velocity of the ionization front relative
to the star.
When the ionization front is weak D-type, the velocity of the ionization front
relative to the ionized gas just inside the ionization front is
$\sim0.5 V_i$ \citep{Spitzer_1978}.
Given the small dust drift velocities $v_{d,r}$ near the ionization front
(i.e., $\tau(r)\rightarrow \tau(R)$ in Fig.\ \ref{fig:vdrift gamma=10}),
dust is unable to drift outward across the ionization front
as long as the ionization front is propagating outward with a speed
(relative to the ionized gas) $V_i\gtsim 0.1\kms$.
\section{\label{sec:summary}
Summary}
\begin{enumerate}
\item
Dusty \ion{H}{2} regions in static equilibrium
constitute a three-parameter family of similarity solutions,
parametrized by $\beta$, $\gamma$, and a
third parameter, which can be
taken to be $Q_{0,49}\nrms$ or $\tau_{d0}$ (see eq.\ \ref{eq:taud0}).
The $\beta$ parameter (eq.\ \ref{eq:define beta})
characterizes the relative importance of
$h\nu<13.6\eV$ photons, and $\gamma$ (eq.\ \ref{eq:define gamma})
characterizes the dust opacity.
A fourth parameter -- e.g., the value of $\nrms$ or $Q_{0,49}$ --
determines the overall size and density of the \ion{H}{2} region.
\item Radiation pressure acting on both gas and dust can strongly affect
the structure of \ion{H}{2} regions.
For dust characteristic of the diffuse ISM of the Milky Way, static
\ion{H}{2} regions with $Q_{0,49}\nrms\ltsim 10^2\cm^{-3}$ will
have nearly uniform
density, but when $Q_{0,49}\nrms\gg 10^2\cm^{-3}$, radiation pressure acts
to concentrate the gas in a spherical shell.
\item For given $\beta$ and $\gamma$, the importance of radiation pressure
is determined mainly by the parameter $\tau_{d0}$ (see eq.\ \ref{eq:taud0}).
When $\tau_{d0}\gtsim 1$, radiation pressure will produce a noticeable
central cavity.
\item If the dust-to-gas ratio is similar to the
value in the Milky Way, then compression of the ionized gas into a shell
limits the
characteristic ionization parameter: $U_{1/2}\ltsim 0.01$,
even for $Q_0 \nrms\gg 1$
(see Fig.\ \ref{fig:U}).
\item For $\tau_{d0}\gtsim 1$, compression of the gas and dust
into an ionized shell leads to a substantial {\it increase}
\citep[compared to the estimate by][]{Petrosian+Silk+Field_1972}
in the fraction $\fion$ of $h\nu>13.6\eV$ photons
that actually ionize H, relative to what would have been estimated for a uniform
density \ion{H}{2} region, as shown in Fig.\ \ref{fig:log fion vs taud0}.
Eq.\ (\ref{eq:fion fit}) allows $\fion$ to be estimated for given
$Q_0\nrms$, $\beta$, and $\gamma$.
Galactic \ion{H}{2} regions appear to have values of $\fion$ consistent
with the present results for \ion{H}{2} regions with radiation pressure
(see Fig.\ \ref{fig:inoue}).
\item Interstellar bubbles surrounding O stars are the result of
the combined effects of radiation pressure and stellar winds.
For the N49 bubble, as an example, the observed ring-like free-free
emission profile
is more strongly peaked than would
be expected from radiation pressure alone,
implying that a fast stellar wind must be present to help create the
low-density central cavity.
\item For static \ion{H}{2} regions, dust drift would be important on
time scales $\ltsim1\Myr$ for $Q_{0,49}\nrms\gtsim 10^3\cm^{-3}$.
Real \ion{H}{2} regions are not static, and the dust will not drift
out of the ionized gas because the ionization front will generally
be propagating (relative to the ionized gas just inside the ionization
front) faster than the dust drift speed $\ltsim 1\kms$
(see Fig.\ \ref{fig:vdrift gamma=10}).
\end{enumerate}
\acknowledgements
I am grateful to Bob Benjamin and Leslie Hunt for helpful discussions, to
R.H. Lupton for making available the SM graphics package, and
to the anonymous referee for suggestions that improved the paper.
This research made use of NASA's Astrophysics Data System Service,
and was supported in part by NASA through JPL contract 1329088,
and in part by NSF grant AST 1008570.
\section{Introduction}
\subsection{The basic setup}
Let $\cA_d := \mb{C}[z_1, \ldots, z_d]$ be the algebra of complex polynomials in $d$ variables. We use the usual multi-index notation: if $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mb{N}^d$ is a multi-index, then $|\alpha| = \alpha_1 + \ldots + \alpha_d$ and
\bes
z^\alpha = z_1^{\alpha_1} z_2^{\alpha_2} \cdots z_d^{\alpha_d}.
\ees
We denote by $\cA_d \otimes \mb{C}^r$ the finite multiplicity versions of $\cA_d$. In this note we are interested in the case where there is some norm, always denoted $\|\cdot\|$, on $\cA_d$. We will consider in detail two norms.
\emph{The natural $\ell_1$ norm:} For $p(z) = \sum_\alpha c_\alpha z^\alpha$ we define
\be\label{eq:l1}
\|p\| = \sum_\alpha |c_\alpha|.
\ee
\emph{The $H^2$ norm:}
We give $\cA_d$ an inner product by declaring that all monomials are orthogonal one to the other, and a monomial has norm
\be\label{eq:H2}
\|z^\alpha\|^2 = \frac{\alpha_1! \cdots \alpha_d!}{|\alpha|!}.
\ee
$H^2 = H^2_d$ will denote the Hilbert space obtained by completing $\cA_d = \mb{C}[z_1, \ldots, z_d]$ with respect to the above mentioned inner product. This space is also known as ``Symmetric Fock Space'', or ``the Drury--Arveson space''.
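The monomial norms in (\ref{eq:H2}) are straightforward to compute; a minimal sketch:

```python
# Minimal sketch: the H^2 (Drury-Arveson) monomial norms defined above.
from math import factorial

def h2_norm_sq(alpha):
    """||z^alpha||^2 = alpha_1! * ... * alpha_d! / |alpha|!"""
    total = sum(alpha)
    num = 1
    for a in alpha:
        num *= factorial(a)
    return num / factorial(total)

print(h2_norm_sq((1, 1)))   # 0.5:  ||z1*z2||^2 = 1!1!/2!
print(h2_norm_sq((2, 0)))   # 1.0:  ||z1^2||^2  = 2!0!/2!
```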
$\cA_d$ has a natural grading that extends naturally to its finite multiplicity versions and to its completions with respect to the various norms.
We write the grading of
$H^2_d$ as $H_0 + H_1 + H_2 + \ldots$. Thus, $H_k$ will also stand for the space of homogeneous polynomials of degree $k$.
A \emph{homogeneous ideal} (resp., \emph{module}) is an ideal (resp., module) generated by homogeneous polynomials. We say that $M$ is a \emph{graded submodule} of $\arv$ if it is the completion of a homogeneous module. Whenever $M \subseteq H^2_d \otimes \mb{C}^r$ is a graded submodule of $H^2_d \otimes \mb{C}^r$, we write the grading of $M$ as $M = M_0 + M_1 + M_2 + \ldots$.
\subsection{Stable polynomial division}
Let $M$ be a submodule of $\cA_d \otimes \mb{C}^r$ and let $\{f_1, \ldots, f_k\}$ be a generating set. Then every $h \in M$ can be written as a combination
\be\label{eq:h}
h = a_1 f_1 + \ldots + a_k f_k ,
\ee
with $a_i \in \cA_d$, $i=1, \ldots, k$. A natural question that arises is whether this can be done in such a way that the terms $a_i f_i$ are controlled by the size of $h$. That is, does there exist a constant $C$ such that
\be\label{eq:stablediv2}
\sum \|a_i f_i\|^2 \leq C \|h\|^2
\ee
for all $h \in M$?
\begin{definition}
Let $M$ be a submodule of $\cA_d \otimes \mb{C}^r$. We say that $M$ has the \emph{stable division property} if there is a set $\{f_1, \ldots, f_k\} \subset M$ that generates $M$ as a module, and there exists a constant $C$, such that for any polynomial $h \in M$ one can find $a_1, \ldots, a_k \in \cA_d$ such that
(\ref{eq:h}) and (\ref{eq:stablediv2}) hold. In this case, we also say that $M$ has stable division constant $C$. The set $\{f_1, \ldots, f_k\}$ is said to be a \emph{stable generating set} for $M$.
\end{definition}
\begin{remark}\emph{
A generating set for a module with the stable division property is not necessarily a stable generating set (see Example \ref{expl:non_stable_basis}).}
\end{remark}
\begin{remark}
\emph{When $M$ is a graded module it suffices to check (\ref{eq:h}) and (\ref{eq:stablediv2}) for $h$ homogeneous.}
\end{remark}
\begin{remark}\emph{
Note that condition (\ref{eq:stablediv2}) is equivalent to }
\be\label{eq:stablediv1}
\sum \|a_i f_i\| \leq C' \|h\| ,
\ee
\emph{when the finite set of generators is held fixed. }
\end{remark}
Note that any principal submodule of $\cA_d \otimes \mb{C}^r$ has the stable division property. On the other hand, we do not know whether there are submodules of $\cA_d \otimes \mb{C}^r$ that do not enjoy this property. Of greatest interest for our purposes is the case where $M$ is generated by \emph{homogeneous} polynomials, and we shall focus mainly on this case.
Although the literature contains some recent treatment of numerical issues arising in computational algebra (see, e.g., \cite{AFT,KSW,MT}), and although questions of effective computation in algebraic geometry have been considered for some time (see, e.g., the survey \cite{BM}), it does not seem that the problems with which we deal here have been addressed.
Below we will give some additional examples of modules with the stable division property. But before that, let us indicate some difficulties that arise in this context.
\begin{example}\label{ex:x^2+xy}\emph{
In the following discussion we will use some standard terminology from computational algebraic geometry (see the appendix for a review). Consider the ideal $I \subset \mb{C}[x,y]$ generated by the set $B = \{x^2 + 2xy, y^2\}$. One can check that $B$ is a Groebner basis for $I$. There is a standard and well known algorithm that, given $h \in I$, finds coefficients $a_1, a_2 \in \cA_d$ such that $h = a_1 f_1 + a_2 f_2$ \cite[p. 63]{CLO92}. However, this division algorithm is not stable. For example, running the division algorithm on $x^{n+2}$ gives the output
\bes
x^{n+2} = \big[x^{n} - 2x^{n-1}y + 4x^{n-2}y^2 + \ldots + (-2)^{n}y^{n}\big](x^2 + 2xy) + \big[(-2)^{n+1}xy^{n-1} \big] y^2.
\ees
Thus, while the polynomials $x^{n+2}$ have norm $1$, running the division algorithm naively exhibits these polynomials as the sum of two terms of norm $\sim 2^n$. In particular, the division algorithm may be numerically unstable.}
\end{example}
Note that one may also write
\bes
x^{n+2} = \big[x^{n} - 2x^{n-1}y \big](x^2 + 2xy) + \big[4x^n \big] y^2.
\ees
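The unstable division in Example \ref{ex:x^2+xy} can be reproduced with a computer algebra system; a sketch using SymPy's \texttt{reduced}, which implements the standard multivariate division algorithm and tries the divisors in the given order (here lex order with $x > y$):

```python
# Sketch: reproduce the unstable division of x^(n+2) by {x^2 + 2xy, y^2}.
# The divisors are tried in the listed order, as in the division algorithm.
import sympy as sp

x, y = sp.symbols('x y')
n = 4
f1, f2 = x**2 + 2*x*y, y**2

q, r = sp.reduced(x**(n + 2), [f1, f2], x, y, order='lex')

# The remainder vanishes (x^(n+2) lies in the ideal), but the quotient
# coefficients grow like 2^n, so the two terms a_i*f_i nearly cancel:
print(r)   # 0
print(q)   # coefficients up to (-2)^(n+1) = -32 for n = 4
```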
We will show below that in the two variable case, a slight modification of the above mentioned algorithm will always give the desired result. However, it is not clear whether it is possible to design an algorithm that will make the correct choices to produce optimal coefficients in the general $d$-variable case. In Section \ref{sec:H^2} and \ref{sec:ell1} we treat specific classes of modules for which we can show that the stable division property holds. We will show that with respect to the $H^2$ norm ideals generated by linear polynomials, arbitrary ideals in $\mb{C}[x,y]$, finite dimensional ideals, as well as modules generated by monomials, have the stable division property. These classes of modules can be seen (using the same proofs) to have the stable division property with respect to the $\ell^1$ norm too, but with respect to the $\ell^1$ norm we in fact show that every ideal is linearly equivalent to an ideal that has the stable division property.
\subsection{The $d$-shift and essential normality}
We now explain the reason that brought us to study stable division.
On $\arv$ we may define natural multiplication operators $Z_1, \ldots, Z_d$ as follows:
\bes
Z_i f(z) = z_i f(z) \,\, , \,\, f \in \arv.
\ees
The $d$-tuple $(Z_1, \ldots, Z_d)$ is known as \emph{the $d$-shift}, and has been studied extensively in \cite{Arv98} and since. Arveson showed that the commutators $[Z_i,Z_j^*]$ belong to the Schatten class $\cL^p$ for all $p>d$, thus, in particular, they are compact. This is significant - see \cite{Arv98} for ramifications.
Given a graded submodule $M \subseteq \arv$, one may obtain two other $d$-tuples by compressing $(Z_1, \ldots, Z_d)$ to $M$ and to $M^\perp$:
\bes
(A_1, \ldots, A_d) = \left(Z_1\big|_M, \ldots, Z_d\big|_M\right) \,,
\ees
and
\bes
(B_1, \ldots, B_d) = \left(P_{M^\perp}Z_1\big|_{M^\perp}, \ldots, P_{M^\perp}Z_d\big|_{M^\perp}\right)\,.
\ees
If $[A_i,A_j^*] \in \cL^p$ for all $i,j$ then $M$ is said to be $p$-essentially normal, and if $[A_i,A_j^*]$ is compact for all $i,j$ then $M$ is said to be essentially normal. Similarly, the quotient $\arv /M$ is said to be $p$-essentially normal (resp. essentially normal) if the commutators $[B_i,B_j^*]$ are all in $\cL^p$ (resp. compact).
Arveson conjectured that every graded submodule $M$ of $\arv$, as well as its quotient $\arv /M$, are $p$-essentially normal for $p>d$ \cite{Arv05}. This has been verified for modules generated by monomials \cite{Arv05,Doug06}, and also for principal modules as well as arbitrary modules in dimensions $d=2,3$ \cite{GuoWang}. Douglas conjectured further that $\arv/M$ is $p$-essentially normal for all $p>\dim(M)$ \cite{Doug06b}. This has also been verified in several cases. We will not discuss here the varied and important consequences of this conjecture (see \cite{Arv05,Arv07,Doug06,Doug06b,GuoWang}).
In Section \ref{sec:stab_div_ess_nor} we will show that every module that has the stable division property satisfies Douglas' refinement of Arveson's conjecture. Thus, having the results of Sections \ref{sec:H^2} and \ref{sec:ell1} at hand, we obtain a unified proof that principal modules, monomial modules, and arbitrary ideals in $\mb{C}[x,y]$ are $p$-essentially normal for $p>d$, and that their quotients are $p$-essentially normal for $p> \dim(M)$.
\subsection*{Acknowledgments.} The author would like to thank Ken Davidson, J\"org Eschmeier and Chris Ramsey for reading and discussing preliminary versions of these notes. Moreover, the generous and warm hospitality provided by Ken Davidson at the University of Waterloo is greatly appreciated.
\section{Stable division with respect to the $H^2$ norm}\label{sec:H^2}
In this section $\|\cdot\|$ denotes the $H^2$ norm given by (\ref{eq:H2}), though the results here can be shown to be true also for other natural norms, in particular for the $\ell_1$ norm. The following is the simplest example.
\begin{proposition}\label{prop:orthogonal}
Let $I = I_1 + I_2 + \ldots$ be a homogeneous ideal in $\cA_d$ generated by an orthonormal set $\{f_1, \ldots, f_k\}$ of linear polynomials. For every $n\geq 1$, every $g \in I_n$ can be written as $g = a_1 f_1 + \ldots + a_k f_k$, where $a_i \in H_{n-1}$ for $i=1, \ldots, k$, in such a way that $a_i f_i \perp a_{j} f_{j}$ for all $i \neq j$. In particular, $I$ has the stable division property.
\end{proposition}
\begin{proof}
We may assume that $f_i = z_i$, the $i$-th coordinate function, for $i=1,2,\ldots,k$ (see the corollary to Proposition 1.12 in \cite{Arv98}). Every polynomial $g \in I_n$ is a sum of monomials of degree $n$. Take all monomials that contain $z_1$, and gather them up as $a_1 f_1$. None of the remaining monomials in $g-a_1 f_1$ contain $z_1$, so they are orthogonal to $a_1f_1$. Proceeding inductively we are done.
\end{proof}
We note that the conclusion in the above proposition does not hold if $\{f_1, \ldots, f_k\}$ is an orthonormal set of linear, vector valued polynomials in $H_d^2 \otimes \mb{C}^r$.
\subsection{Monomial modules}
A \emph{monomial} is a polynomial of the form $z^\alpha \otimes \xi$, with $\alpha$ a multi-index and $\xi \in \mb{C}^r$ (note that this definition of monomial is more general than that given in \cite{CLO98}).
\begin{proposition}\label{prop:monomials}
Let $M \subset \cA_d \otimes \mb{C}^r$ be a module that is generated by monomials. Then $M$ has the stable division property. Moreover, the constant $C$ in (\ref{eq:stablediv2}) can be chosen to be $1$.
\end{proposition}
\begin{proof}
By Hilbert's Basis Theorem, there is some $m$ and a finite family $B = \{z^{\alpha_i} \otimes \xi_i\}_{i=1}^k \subseteq M_m$
that generates $M_m + M_{m+1} + \ldots $.
A Gram--Schmidt orthogonalization procedure puts us in the situation where whenever $\alpha_i = \alpha_j$ then $\xi_i \perp \xi_j$.
Throwing in finite orthonormal bases of $M_1, \ldots, M_{m-1}$ allows us to restrict attention to stable division in $M_m + M_{m+1} + \ldots$, so let us assume that $B$ generates $M$. Under these assumptions, we proceed by induction on $k$.
We have already noted that a principal submodule has the stable division property, so if $k = 1$ we are done.
Now let $k>1$, and fix $h \in M_n$, $n \geq m$. $h$ can be written as a sum of monomials
\bes
h = \sum_{|\beta| = n} z^\beta \otimes \eta_\beta .
\ees
We re-label the set $\{z^{\alpha_i} \otimes \xi_i\ | \alpha_i = \alpha_1\}$ as $\{z^{\alpha_1} \otimes \zeta_j\}_{j=1}^t$. Remember that by our assumptions, $\{\zeta_1, \ldots, \zeta_t\}$ is an orthonormal set. Let $W = \textrm{span}\{\zeta_1, \ldots, \zeta_t\}$. Put
\bes
S(\alpha_1) = \{\beta : |\beta| = n \, , \, \beta \geq \alpha_1 \}.
\ees
For all $\beta \in S(\alpha_1)$, $\eta_{\beta} = v_\beta + u_\beta$, with $v_\beta \in W$ and $u_\beta \in W^\perp$. Define
\bes
g = \sum_{\beta \in S(\alpha_1)} z^\beta \otimes v_\beta .
\ees
$g$ is in the module generated by $\{z^{\alpha_i} \otimes \xi_i\ | \alpha_i = \alpha_1\}$. Writing $v_\beta = \sum_{j=1}^t c^\beta_j \zeta_j$, we find that
\bes
g = \sum_{j=1}^t \left( \sum_{\beta \in S(\alpha_1)} c^\beta_j z^{\beta-\alpha_1} \right) z^{\alpha_1} \otimes \zeta_j ,
\ees
which gives $g = \sum_j a_j z^{\alpha_1} \otimes \zeta_j$ with $\sum_j \|a_j z^{\alpha_1}\otimes \zeta_j\|^2 \leq \|g\|^2$.
Now, $g \perp h-g$, and $h-g$ is in the module generated by $\{z^{\alpha_i} \otimes \xi_i | \alpha_i \neq \alpha_1\}$. By the inductive hypothesis, we can find a set of polynomials $\{b_i\}$ such that
\bes
h-g = \sum_{\alpha_i \neq \alpha_1} b_i z^{\alpha_i} \otimes \xi_i
\ees
and $\sum \|b_i z^{\alpha_i} \otimes \xi_i\|^2 \leq \|h-g\|^2$. Thus
\bes
h = \sum_j a_j z^{\alpha_1} \otimes \zeta_j + \sum_{\alpha_i \neq \alpha_1} b_i z^{\alpha_i} \otimes \xi_i
\ees
with
\bes
\sum\|a_j z^{\alpha_1} \otimes \zeta_j \|^2 + \sum \| b_i z^{\alpha_i} \otimes \xi_i\|^2 \leq \|h\|^2.
\ees
\end{proof}
\subsection{Ideals in $\mb{C}[x,y]$}
We now consider the case of two variables, that is, $d=2$.
\begin{lemma}\label{lem:stab2}
Let $f_1, \ldots, f_k$ be homogeneous polynomials of the same degree $m$ in $\mb{C}[x,y]$ such that $LT(f_1) > LT(f_2) > \ldots > LT(f_k)$. There is a constant $C$ such that for every polynomial $h \in \mb{C}[x, y]$, division of $h$ by $(f_1,\ldots,f_k)$ gives a representation
\bes
h = a_1 f_1 + \ldots + a_k f_k + r,
\ees
with
\be\label{eq:stabdivr}
\sum_i \|a_i f_i\|^2 \leq C (\|h\|^2 + \|r\|^2) ,
\ee
where $a_i, r \in \cA_d$, and either $r = 0$ or $r$ is a linear combination of monomials, none of which is divisible by any of $LT(f_1), \ldots, LT(f_k)$.
\end{lemma}
\begin{proof}
Note that we need only consider homogeneous $h$; otherwise we apply the result to the homogeneous components of $h$. We may also assume that $\deg h > 4m$.
We will use {\bf Algorithm I} from Appendix \ref{subsec:div_alg} for the division, where in step (\ref{it:choice}) we choose $i_0 = \max I$. We will show that the output of this algorithm satisfies the required conditions, provided the input is arranged so that $LT(f_1) > LT(f_2) > \ldots > LT(f_k)$ and as long as $i_0$ is chosen as above.
The only change from the algorithm given in \cite[p. 63]{CLO92} is the specification of the $f_i$ that is used to reduce $p$ in step (\ref{it:reduce}). The correctness of this algorithm is proved in \cite{CLO92} and is independent of the choice of the dividing $f_i$ in step (\ref{it:reduce}). It remains to prove that there exists $C$ such that (\ref{eq:stabdivr}) holds.
The proof is by induction on $k$, the number of the $f_i$'s given. If $k=1$ the result is trivial.
Assume that $k>1$. Write $f_i = \sum_{j=0}^m a_{ij} x^{m-j}y^j$, and for all $i$, put $j_i = \min\{j | a_{ij} \neq 0\}$. By assumption, $j_1 < j_2 < \ldots < j_k$.
Recall that we may assume that $\deg h = n > 4m$. From the definition of the algorithm it follows that $f_1$ will be used in step (\ref{it:reduce}) to divide $p$ only when the leading term of $p$ is of the form $b_t x^{n-t}y^t$, with $b_t \neq 0$ and $j_1 \leq t < j_2$.
By the triangle inequality, at every iteration in which $a_1$ changes, the quantity $\|a_1 f_1\|$ grows by at most $\|LT(p)/LT(f_1) f_1\|$.
\noindent{\bf Claim:} $\|LT(p)/LT(f_1) f_1\|^2 \leq |a_{1j_1}|^{-2} \|LT(p)\|^2 \sum_j |a_{1j}|^2 $.
\noindent{\bf Proof of Claim:}
\begin{align*}
LT(p)/LT(f_1) f_1 &= \frac{b_t}{a_{1j_1}} x^{n-t-(m-j_1)}y^{t-j_1} \sum_j a_{1j} x^{m-j}y^j \\
&= \sum_j a_{1j} \frac{b_t}{a_{1j_1}} x^{n-t-(j-j_1)}y^{t+j-j_1}.
\end{align*}
Thus, by the definition of the norm in $H^2_2$,
\bes
\|LT(p)/LT(f_1) f_1\|^2 = \left|\frac{b_t}{a_{1j_1}}\right|^2 \sum_{j} |a_{1j}|^2 \frac{(n-(t+j-j_1))!(t+j-j_1)!}{n!}
\ees
But $t \leq t+j-j_1 \leq t+m < 2m < n/2$ (recall that $n > 4m$), and for integers $i,j$ such that $i \leq j<n/2$ we have
\bes
\frac{(n-j)!j!}{n!} \leq \frac{(n-i)!i!}{n!},
\ees
so
\begin{align*}
\|LT(p)/LT(f_1) f_1\|^2 &\leq \left|\frac{b_t}{a_{1j_1}}\right|^2 \sum_j |a_{1j}|^2 \frac{(n-t)!t!}{n!} \\
&= |a_{1j_1}|^{-2} \|LT(p)\|^2 \sum_j |a_{1j}|^2 .
\end{align*}
That establishes the claim.
Now, we have seen that at every iteration where $a_1$ changes, the quantity $\|a_1 f_1\|$ grows by at most $|a_{1j_1}|^{-1}\big(\sum_j |a_{1j}|^2\big)^{1/2}\|LT(p)\|$. At every such iteration, $\|p\|$ also grows by at most $|a_{1j_1}|^{-1}\big(\sum_j |a_{1j}|^2\big)^{1/2}\|LT(p)\|$. At the iterations where $a_1$ does not change, $\|p\|$ becomes smaller.
It follows that after at most $j_2$ iterations, we have the following situation:
\begin{enumerate}
\item $\|a_1 f_1\| \leq C \|h\|$.
\item $\|p\| \leq C \|h\|$.
\item The part of the remainder accumulated so far, call it $r_1$, is a linear combination of monomials, none of which is divisible by any of $LT(f_1), \ldots, LT(f_k)$.
\item $a_2 = \ldots = a_k = 0$.
\end{enumerate}
Here $C$ is a constant that depends only on $f_1$ and $j_2$. From this stage on, the algorithm continues to divide $p$ by $f_2, \ldots, f_k$. It will find the same $a_2, \ldots, a_k$ that it would have found given $p$ instead of $h$ as input, and it will add to $r$ a remainder that is orthogonal to the part of the remainder that was obtained while we were dividing by $f_1$.
By the inductive hypothesis,
\bes
\sum_{i=2}^k \|a_i f_i\|^2 \leq C' (\|p\|^2 + \|r\|^2) \leq C' (C^2\|h\|^2 + \|r\|^2).
\ees
Putting this together with $\|a_1 f_1\| \leq C \|h\|$, and changing $C$, we are done.
\end{proof}
\begin{remark}
\emph{It would be desirable to replace (\ref{eq:stabdivr}) with the stronger $\sum_i \|a_i f_i\|^2 + \|r\|^2 \leq C' \|h\|^2$, but that is impossible. For example, when $k=1$ and $f_1 = x^2 + xy$, running the algorithm with the input $h = x^n$ will give huge remainders $r$ (see Example \ref{ex:x^2+xy}).}
\end{remark}
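To see this concretely (this is only a sketch; the full details are in Example \ref{ex:x^2+xy}, which appears in the appendix), one may carry out the division of $h = x^n$ by $f_1 = x^2+xy$ by hand. A telescoping computation gives the identity
\bes
x^n = \Big( \sum_{k=0}^{n-2} (-1)^k x^{n-2-k} y^k \Big)(x^2+xy) + (-1)^{n-1} x y^{n-1},
\ees
so the algorithm produces a nonzero remainder $(-1)^{n-1}xy^{n-1}$ for every $n$; the cited example quantifies the norms involved.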
\begin{theorem}\label{thm:d=2}
Every homogeneous ideal $I \subseteq \cA_2$ has the stable division property.
\end{theorem}
\begin{proof}
As in the proof of Proposition \ref{prop:monomials}, we may assume that $I$ is generated by a set $F = \{f_1, \ldots, f_k\}$ of homogeneous polynomials of the same degree $m$. Furthermore, we may assume that $F$ is a Groebner basis with respect to the lexicographic order on monomials.
By Lemma \ref{lem:stab2}, there is a $C$ such that every $h \in \cA_2$ can be written as
\bes
h = a_1 f_1 + \ldots + a_k f_k + r,
\ees
with $\sum_i \|a_i f_i\|^2 \leq C(\|h\|^2 + \|r\|^2)$. Now let $h \in I_n$. We may assume that $n > 4m$. Under this assumption, we saw that the $a_i$'s and $r$ can be found by the division algorithm. But by the Corollary on p. 81, \cite{CLO92}, since $F$ is a Groebner basis, we actually get $r = 0$. Thus
\bes
\sum_i \|a_i f_i\|^2 \leq C\|h\|^2
\ees
for all such $h$, and the proof is complete.
\end{proof}
The following example shows that Lemma \ref{lem:stab2} cannot be extended to $d>2$.
\begin{example}\label{expl:non_stable_basis}
\emph{Taking $f_1 = x^2+wy, f_2 = y^2$, and $h = x^4 w^n$, we find that the above algorithm gives}
\bes
h = (x^2 w^n - w^{n+1}y) f_1 + w^{n+2}f_2.
\ees
\emph{But $\|h\|^2 \sim n^{-4}$, while $\|w^{n+2}f_2\|^2 = \|w^{n+2}y^2\|^2 \sim n^{-2}$. In fact, in any presentation of $h$ as a combination $h = a_1 f_1 + a_2 f_2$, the monomial $w^{n+2}y^2$ must appear in both terms $a_1 f_1$ and $a_2 f_2$. That means that we cannot write $h = a_1 f_1 + a_2 f_2$ with $\|a_1 f_1 \|^2 + \|a_2 f_2\|^2 \leq C \|h\|^2$, where $C$ is independent of $h$. So the set of generators $\{x^2 + wy, y^2\}$ is not a stable generating set for the ideal $I = \lel x^2 + wy, y^2 \rir$ that it generates. It is worth noting that $\{x^2 + wy, y^2\}$ is a Groebner basis for $I$. On the other hand, the ideal $I$ \emph{does} have the stable division property. This can be verified by using a Groebner basis with respect to the lexicographic order with $w>x>y$. This Groebner basis is given by $\{y^2, yx^2, x^4, wy + x^2\}$. }
\end{example}
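As an illustration only (not part of the formal argument), the displayed division and the norm asymptotics can be checked mechanically for a concrete $n$. The sketch below uses the \texttt{sympy} library; the helper \texttt{monomial\_norm\_sq}, which encodes $\|z^\alpha\|^2 = \alpha_1!\cdots\alpha_d!/|\alpha|!$, is our own and not part of any library.

```python
from sympy import symbols, reduced, expand, factorial, Rational

x, w, y = symbols('x w y')
n = 5  # a concrete degree; the estimates in the text are asymptotic in n

f1 = x**2 + w*y
f2 = y**2
h = x**4 * w**n

# Divide h by (f1, f2) in lex order with x > w > y.
(a1, a2), r = reduced(h, [f1, f2], x, w, y, order='lex')
assert r == 0
assert expand(a1*f1 + a2*f2 - h) == 0
# The quotients agree with the display: a1 = x^2 w^n - w^{n+1} y, a2 = w^{n+2}.
assert expand(a1 - (x**2*w**n - w**(n + 1)*y)) == 0
assert expand(a2 - w**(n + 2)) == 0

# H^2 norm of a monomial z^alpha: ||z^alpha||^2 = alpha! / |alpha|!.
def monomial_norm_sq(exponents):
    num = 1
    for e in exponents:
        num *= factorial(e)
    return Rational(num, factorial(sum(exponents)))

norm_h_sq = monomial_norm_sq([4, n, 0])         # ||x^4 w^n||^2, of order n^{-4}
norm_term_sq = monomial_norm_sq([0, n + 2, 2])  # ||w^{n+2} y^2||^2, of order n^{-2}
assert norm_term_sq > norm_h_sq  # the term a2*f2 is larger than h itself
```

Increasing $n$ makes the ratio $\|w^{n+2}y^2\|^2/\|h\|^2$ grow like $n^2$, consistent with the claim that no bound $\|a_1f_1\|^2+\|a_2f_2\|^2 \leq C\|h\|^2$ can hold with $C$ independent of $h$.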
\subsection{Zero dimensional ideals}
Recall that an ideal $I \subseteq \cA_d$ is said to be \emph{zero dimensional} if the affine variety associated to $I$,
\bes
V(I):=\{z \in \mb{C}^d : \forall f \in I . f(z) = 0\},
\ees
is finite. Note that for a zero dimensional \emph{homogeneous} ideal $I$ it is always true that $V(I) = \{0\}$.
\begin{theorem}
Let $I$ be any zero dimensional ideal in $\cA_d$. Then $I$ has the stable division property.
\end{theorem}
\begin{proof}
By the theorem on page 232 in \cite{CLO92}, $I$ is a finite co-dimensional subspace of $\cA_d$, and from here it is not hard to prove that it has the stable division property.
\end{proof}
\section{Stable division with respect to the $\ell_1$ norm}\label{sec:ell1}
In this section $\|\cdot\|$ denotes the $\ell_1$ norm given by (\ref{eq:l1}). This norm is perhaps the most natural way to measure the ``size'' of a polynomial, and it also has the feature that it behaves nicely with respect to the division algorithm (roughly speaking, the division algorithm moves coefficients from one coordinate to another, therefore an $\ell_1$ norm is more appropriate than an $\ell_2$ norm). All the classes of modules that were shown in the previous section to have the stable division property with respect to the $H^2$ norm can also be seen (using the same proofs) to have the stable division property with respect to the $\ell_1$ norm. However, for the $\ell_1$ norm we can prove much more. We shall show in this section that every ideal is \emph{linearly equivalent} to an ideal that has the stable division property (see Definition \ref{def:lineq} below).
In this section, unlike the rest of the paper, it will be convenient to use the lexicographic order with $z_d > \ldots > z_1$.
A straightforward calculation gives the following lemma.
\begin{lemma}\label{lem:M_f}
Let $f \in \cA_d$, and let $M_f: \cA_d \rightarrow \cA_d$ be the operator given by
\bes
M_f g = fg.
\ees
Then $\|M_f\| = \|f\|$.
\end{lemma}
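For completeness, here is the calculation behind the lemma. If $f = \sum_\alpha a_\alpha z^\alpha$ and $g = \sum_\gamma b_\gamma z^\gamma$, then each coefficient of $fg$ is a sum of products $a_\alpha b_\gamma$, so by the triangle inequality
\bes
\|fg\| \leq \sum_{\alpha, \gamma} |a_\alpha| |b_\gamma| = \|f\| \|g\| ,
\ees
whence $\|M_f\| \leq \|f\|$. Taking $g = 1$, so that $\|g\|=1$ and $\|M_f g\| = \|f\|$, gives the reverse inequality.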
\begin{proposition}\label{prop:stable_div_cond}
Let $f_1, \ldots, f_k \in \cA_d$ be such that for all $j=1, \ldots,k$, if $f_j(z) = \sum_\alpha c_\alpha z^\alpha$ with $LT(f_j) = c_\beta z^\beta$, then
\be\label{eq:cineq}
|c_\beta| > \sum_{\alpha \neq \beta} |c_\alpha|.
\ee
Then there is a constant $C$ such that for every $h \in \cA_d$, the division algorithm gives a decomposition
\be\label{eq:decomposition}
h = \sum_{i=1}^k a_i f_i + r,
\ee
with $\sum_i \|a_i f_i \| \leq C \|h\|$ and $\|r\| \leq \|h\|$.
\end{proposition}
\begin{proof}
It is convenient to assume that the leading coefficients of the $f_j$'s are all $1$, and we may do so. Thus there is some $\rho\in(0,1)$ such that for all $j$, if $f_j(z) = \sum_\alpha c_\alpha z^\alpha$ with $LT(f_j) = c_\beta z^\beta$, then $\sum_{\alpha \neq \beta} |c_\alpha| < \rho$.
Let $h(z) = \sum b_\alpha z^\alpha$. We run {\bf Algorithm II} from Appendix \ref{subsec:div_alg} (please recall the notation). Using condition (\ref{eq:cineq}), it is easy to see that at every modification of $p$ in Step (\ref{it:reduceII}), $\|p\|$ only gets smaller. Since at the beginning of the algorithm we set $p:=h$, and at the end of the algorithm we set $r:=p$, we get $\|r\| \leq \|h\|$.
Now we must also bound the quantity $\sum \|a_i f_i\|$. By Lemma \ref{lem:M_f}, it is enough to bound $\sum_i \|a_i\|$ by a multiple of $\|h\|$. The rest of the proof is devoted to obtaining the bound
\be\label{eq:bound}
\sum_{i=1}^k \|a_i\| \leq (1-\rho)^{-1} \|h\| .
\ee
We introduce some notation to streamline the slightly technical argument. For a monomial term $h = cz^\gamma$ ($c\neq 0$), we define the \emph{height} of $h$, denoted $Ht(h)$, as
\bes
Ht(h) := |\{\beta : \beta \leq \gamma \}|,
\ees
where $|\cdot|$ denotes cardinality. For a general polynomial $h$ we define $Ht(h) = Ht(LT(h))$.
To algorithmically obtain (\ref{eq:decomposition}) with estimate (\ref{eq:bound}), we need to specify the choice of term made in Step (\ref{it:ChooseTerm}) in {\bf Algorithm II}. The specifications needed will be made clear by the proof below. The reader may later want to check that the procedure implied by the proof below is equivalent to choosing at each iteration of Step (\ref{it:ChooseTerm}) the term $t$ of $p$ that is the \emph{minimal} possible term reducible by any $f_j$.
We will prove (\ref{eq:bound}) by induction on the height of $h$.
\noindent{\bf Claim:} \emph{Division of a polynomial $h$ by $(f_1, \ldots, f_k)$ gives the decomposition (\ref{eq:decomposition}) such that}
\be\label{eq:Htbound}
\sum_{i=1}^k\|a_i\| \leq \sum_{n=0}^{Ht(h)} \rho^n \|h\|.
\ee
\noindent{\bf Proof of claim.} If $Ht(h) = 1$ then $h$ is a nonzero constant. Either it plays the role of the remainder in (\ref{eq:decomposition}), or one of the $f_i$'s is a constant, say $f_1 = c$, and then $h = h/c f_1$. In this case (\ref{eq:Htbound}) trivially holds.
Assume now that $Ht(h) > 1$. Write $h = cz^\gamma + g$, where $cz^\gamma = LT(h)$ and $g = h - LT(h)$. Note that $\|h\| = \|cz^\gamma\| + \|g\|$. Algorithmically, we will first divide $g$ and only then shall we turn to dividing $cz^\gamma$. This is equivalent to dividing $g$ and $cz^\gamma$ separately and then adding the output. Since $Ht(g) < Ht(h)$, the inductive hypothesis gives
\bes
g = \sum_{i=1}^k a_i^1 f_i + r^1
\ees
with $\sum\|a_i^1\| \leq \sum_{n=0}^{Ht(h)-1} \rho^n \|g\|$. Now we consider the term $cz^\gamma$. If it is not divisible by any of the leading terms of $f_1, \ldots, f_k$ then we have equation (\ref{eq:decomposition}) with $a_i = a_i^1$ and $r = r^1 + cz^\gamma$. In this case the required bound holds.
If $cz^\gamma$ is divisible by one of the leading terms of $f_1, \ldots, f_k$, say by $LT(f_{i_0})$, then we reduce the term $t = cz^\gamma$ by $f_{i_0}$ as described in Step \ref{it:reduceII} of {\bf Algorithm II}: $A_{i_0} := cz^\gamma /LT(f_{i_0})$ and $p := cz^\gamma - (cz^\gamma /LT(f_{i_0}))f_{i_0}$. This step produces a polynomial $p$ which we need to continue to divide. Note that $\|p\| \leq \rho \|cz^\gamma\|$ and $Ht(p) < Ht(cz^\gamma)$. By the inductive hypothesis, division of $p$ gives
\bes
p = \sum_{i=1}^k a_i^2 f_i + r^2 ,
\ees
with $\sum\|a^2_i\| \leq \sum_{n=0}^{Ht(h)-1} \rho^n \|p\|$. Thus we have equation (\ref{eq:decomposition}) with $a_i = a_i^1 + a_i^2$ for $i \neq i_0$, $a_{i_0} = a^1_{i_0} + a^2_{i_0} + A_{i_0}$, and $r = r^1 + r^2$. Thus
\begin{align*}
\sum \|a_i\| &\leq \sum \|a_i^1\| + \sum \|a_i^2\| + \|A_{i_0}\| \\
&\leq \sum_{n=0}^{Ht(h)-1} \rho^n \|g\| + \sum_{n=0}^{Ht(h)-1} \rho^n \|p\| + \|cz^\gamma\| \\
&\leq \sum_{n=0}^{Ht(h)-1} \rho^n \|g\| + \sum_{n=1}^{Ht(h)} \rho^n \|cz^\gamma\| + \|cz^\gamma\| \\
&\leq \sum_{n=0}^{Ht(h)} \rho^n (\|g\|+\|cz^\gamma\|) = \sum_{n=0}^{Ht(h)} \rho^n \|h\|.
\end{align*}
That proves the claim, which clearly implies (\ref{eq:bound}). As we noted earlier, this bound together with Lemma \ref{lem:M_f} completes the proof.
\end{proof}
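As an illustration of the contraction estimate $\|r\| \leq \|h\|$, the following sketch runs a division numerically. It uses \texttt{sympy}, whose built-in \texttt{reduced} routine is not literally {\bf Algorithm II}; however, every elementary reduction step it performs also replaces a term $t$ by terms of total $\ell_1$ mass at most $\rho |t|$, so the same estimate applies. The helper \texttt{l1} is our own.

```python
from sympy import symbols, reduced, expand, Poly, Rational

x, y = symbols('x y')

def l1(p):
    """l_1 norm of a polynomial: sum of the absolute values of its coefficients."""
    if p == 0:
        return 0
    return sum(abs(c) for c in Poly(p, x, y).coeffs())

# Leading coefficients strictly dominate the remaining l_1 mass:
# here rho = 1/2 for f1 and rho = 1/3 for f2, in lex order with x > y.
f1 = x**2 + Rational(1, 2)*y
f2 = y - Rational(1, 3)

h = expand((x + y + 1)**3)
(a1, a2), r = reduced(h, [f1, f2], x, y, order='lex')

assert expand(a1*f1 + a2*f2 + r - h) == 0  # correctness of the division
assert l1(r) <= l1(h)                      # the contraction estimate
```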
\begin{definition}\label{def:lineq}
We say that two ideals $I,J \subseteq \cA_d$ are \emph{linearly equivalent} if there is a linear change of variables that sends $I$ onto $J$.
\end{definition}
\begin{lemma}\label{lem:existlambda}
Let $f_1, \ldots, f_k \in \cA_d$. There exist $\lambda_1, \ldots, \lambda_d > 0$ such that the polynomials $g_1, \ldots, g_k$ given by
\bes
g_j(z_1, \ldots, z_d) = f_j(\lambda_1 z_1, \ldots, \lambda_d z_d)
\ees
satisfy the following: for all $j=1, \ldots,k$, if $g_j(z) = \sum_\alpha c_\alpha z^\alpha$ with $LT(g_j) = c_\beta z^\beta$, then
\bes
|c_\beta| > \sum_{\alpha \neq \beta} |c_\alpha|.
\ees
\end{lemma}
\begin{proof}
We may assume that not all the $f_j$'s are monomials. Put $N = \max_j \deg f_j$, and let $M$ be the dimension of the space of polynomials with degree less than or equal to $N$. Define $K = \max\{|c| : c \textrm{ is a coefficient of some } f_j\}$. Define $\lambda_1 = M(K+1)$, and now define $\lambda_2, \ldots, \lambda_d$ recursively by
\bes
\lambda_{j+1} = (\lambda_1 \cdots \lambda_j)^{N+1} \,\, , \,\, j = 1, \ldots, d-1.
\ees
Now let $j\in \{1, \ldots,k\}$, and consider $g_j(z) = \sum_\alpha c_\alpha z^\alpha$. The choice of the $\lambda_i$'s implies that whenever $c_\alpha z^\alpha < c_\beta z^\beta$, then $M (K+1) |c_\alpha| < |c_\beta|$. The result follows.
\end{proof}
\begin{lemma}\label{lem:GrobnerBasis}
Let $I$ be an ideal, let $\{f_1, \ldots, f_k\}$ be a Groebner basis for $I$, and fix $\lambda_1, \ldots, \lambda_d \in \mb{C} \setminus \{0\}$. Define
\be\label{eq:J}
J = \{f(\lambda_1 z_1, \ldots, \lambda_d z_d) : f \in I\},
\ee
and
\be\label{eq:g_j}
g_j(z_1, \ldots, z_d) = f_j(\lambda_1 z_1, \ldots, \lambda_d z_d) \,\, , \,\, j = 1, \ldots, k.
\ee
Then $J$ is an ideal that is linearly equivalent to $I$, and $\{g_1, \ldots, g_k\}$ is a Groebner basis for $J$.
\end{lemma}
\begin{proof}
Note that $LT(J) = LT(I)$, and that for all $j$, up to multiplication by constants, $LT(g_j) = LT(f_j)$. Thus $\lel LT(g_1), \ldots, LT(g_k) \rir = LT(J)$, and hence $\{g_1, \ldots, g_k\}$ is a Groebner basis for $J$.
\end{proof}
\begin{theorem}
Every ideal in $\cA_d$ is linearly equivalent to an ideal that has the stable division property with respect to the $\ell_1$ norm.
\end{theorem}
\begin{proof}
Let $I$ be an ideal in $\cA_d$. Let $\{f_1, \ldots, f_k\}$ be a Groebner basis for $I$. Define $J$ as in (\ref{eq:J}), and define $\{g_1, \ldots, g_k\}$ as in (\ref{eq:g_j}). By Lemma \ref{lem:GrobnerBasis}, $\{g_1, \ldots, g_k\}$ is a Groebner basis for $J$, and $J$ is linearly equivalent to $I$, for any choice of nonzero $\lambda_1, \ldots, \lambda_d$. By Lemma \ref{lem:existlambda}, we can find such $\lambda$'s for which $g_1, \ldots, g_k$ satisfy the condition of Proposition \ref{prop:stable_div_cond}. But every $h \in J$ is divisible by $\{g_1, \ldots, g_k\}$ with remainder zero, so Proposition \ref{prop:stable_div_cond} implies that $J$ has the stable division property.
\end{proof}
This theorem shows that the stable division property, at least with respect to the $\ell_1$ norm, has nothing to do with the \emph{geometry} of an ideal (in the sense of algebraic geometry). That is: either all ideals have the stable division property, or there exists an ideal that does not possess this property, but which is linearly equivalent to one that does.
\section{Stable division and essential normality}\label{sec:stab_div_ess_nor}
Let $M$ be a graded submodule of $H^2_d \otimes \mb{C}^r$. It is known that there exists a univariate polynomial $HP_M(t)$ such that $\dim (M_n^\perp) = HP_M(n)$ for $n$ sufficiently large \cite[Proposition 4.7]{CLO98}. We define the dimension of $M$, denoted $\dim(M)$, to be $\deg (HP_M) + 1$. When $r=1$ and $M$ is an ideal, then $\dim(M)$ is the dimension of the \emph{affine} variety determined by $M$.
Since $\dim(H_n \otimes \mb{C}^r) \sim c n^{d-1}$, we always have that $\dim(M) \leq d$.
\begin{theorem}\label{thm:stabdivessnorm}
Let $M$ be a graded Hilbert submodule of $H^2_d \otimes \mb{C}^r$ that has the stable division property. Then $M$ and $\arv /M$ are $p$-essentially normal for all $p>d$. In fact, $\arv /M$ is $p$-essentially normal for all $p> \dim(M)$.
\end{theorem}
\begin{proof}
It suffices to prove the assertion for $\arv/M$ \cite[Proposition 4.2]{Arv07}. $\arv/M$ is unitarily equivalent, as a Hilbert module, to $M^\perp$, where the coordinate functions are given by compressing $Z_1, \ldots, Z_d$ to $M^\perp$.
Let $P$ be the orthogonal projection onto $M^\perp$. Denote $B_i = P Z_i \big|_{M^\perp}$. Fix $i,j$ and $p>\dim(M)$. What we need to prove is that
\bes
[B_i,B_j^*] = B_iB_j^* - B_j^*B_i \in \cL^p .
\ees
We know that $\|[Z_i,Z_j^*] \big|_{H_n} \| \leq \frac{2}{n+1}$ \cite[Proposition 5.3]{Arv98}, therefore
\bes
\textrm{trace}(|P[Z_i,Z_j^*]P|^p) \leq \sum_n \frac{2^p \dim(M_n^\perp)}{(n+1)^p} < \infty .
\ees
Thus it is equivalent to show that $[B_i,B_j^*] - P[Z_i,Z_j^*]P$ is in $\cL^p$. But
\bes
[B_i,B_j^*] - P[Z_i,Z_j^*]P = PZ_iPZ_j^*P - PZ_j^*PZ_iP - PZ_iZ_j^*P + PZ_j^*Z_i P = PZ_j^* (I - P) Z_i P ,
\ees
where we used $PZ_j^*P = Z_j^*P$ ($M^\perp$ is coinvariant). Letting $E_n$ denote the orthogonal projection $E_n : \arv \rightarrow H_n \otimes \mb{C}^r$, and putting $P_n = E_nP$, we may write
\bes
PZ_j^* (I - P) Z_i P = \sum_n P_n Z_j^* (E_{n+1}-P_{n+1})Z_i P_n.
\ees
The proof will be complete once we show that
\be\label{eq:asnormestimate}
\|P_n Z_j^* (E_{n+1}-P_{n+1})\| \leq C(n+1)^{-1/2} ,
\ee
with $C$ independent of $n$. Indeed, this would imply that
\bes
\textrm{trace}(|PZ_j^* (I - P) Z_i P|^p) \leq C' \sum_n \frac{n^{\dim(M)-1}}{(n+1)^{p}}
\ees
(here, $C'$ is some other constant) which is finite for $p>\dim(M)$.
Let $F = \{f_1, \ldots, f_k\}$ be a stable generating set for $M$. Let $m$ be the maximal degree of an element in $F$. Modifying $F$ if needed, we may assume that $F \subset M_m$ is a stable generating set for $M_m + M_{m+1} + \ldots$.
Now consider $n \geq m$, and let $h \in M_{n+1}$. Because $F$ is a stable generating set, we write
$h = a_1 f_1 + \ldots + a_k f_k$, with $\sum_i \|a_i f_i\| \leq C\|h\|$.
Recalling that $Z^*_j\big|_{H_{n+1}} = (n+1)^{-1} \frac{\partial}{\partial z_j}$, we get
\bes
Z^*_j h = \frac{1}{n+1}\Big( \sum_{i=1}^k a_i \frac{\partial}{\partial z_j} f_i + \sum_{i=1}^k f_i \frac{\partial}{\partial z_j} a_i \Big) ,
\ees
so, because $M$ is a submodule,
\bes
P_n Z_j^* h = \frac{1}{n+1}\, P_n \sum_{i=1}^k a_i \frac{\partial}{\partial z_j} f_i .
\ees
By \cite[Proposition 2.3]{GuoWang} there is a constant $C_1$ such that $\|g \partial/\partial z_j f_i\| \leq C_1\sqrt{n+1}\|gf_i\|$ for $i=1, \ldots, k$, and we get
\begin{align*}
\|P_n Z_j^* h\| &\leq \frac{1}{n+1}\sum_{i=1}^k \|a_i \frac{\partial}{\partial z_j} f_i\| \\
&\leq \frac{C_1 \sqrt{n+1}}{n+1}\sum_{i=1}^k \|a_i f_i\| \\
&\leq \frac{C C_1 \|h\|}{\sqrt{n+1}}.
\end{align*}
That establishes (\ref{eq:asnormestimate}), and completes the proof of the theorem.
\end{proof}
Using the theorem together with the results of Section \ref{sec:H^2}, we obtain a unified proof for the following known results:
\begin{theorem}[Guo-Wang \cite{GuoWang}]
Every principal graded submodule $M \subseteq \arv$, as well as its quotient, is $p$-essentially normal for all $p>d$. $\arv/M$ is $p$-essentially normal for $p>\dim(M)$.
\end{theorem}
\begin{theorem}[Guo-Wang \cite{GuoWang}]
Every homogeneous ideal $I$ in $H^2_2$, as well as its quotient, is $p$-essentially normal for $p>2$.
\end{theorem}
\begin{theorem}[Arveson \cite{Arv05}, Douglas \cite{Doug06}]
Let $f_1, \ldots, f_k$ be homogeneous vector-valued polynomials of the same degree $m$, all of which are monomials. Then the module $M$ generated by $\{f_1, \ldots, f_k\}$, as well as its quotient, is $p$-essentially normal for all $p>d$. $\arv/M$ is $p$-essentially normal for $p>\dim(M)$.
\end{theorem}
\begin{remark}
\emph{In a previous version of this note it was only asserted that $\arv/M$ is $p$-essentially normal for $p>d$, rather than for $p>\dim(M)$. Thanks to a correspondence with J\"org Eschmeier, it was noticed that the proof gives the stronger result.}
\end{remark}
\section{Reduction from linear submodules of $H^2_d \otimes \mb{C}^r$ to quadratic submodules of $H^2_d$.}
The purpose of this section is to show that the problem of proving $p$-essential normality (for $p>d+r$) of submodules of $\arv$ generated by linear polynomials can be reduced to the problem of proving $p$-essential normality (for $p>d$) of submodules of $H^2_d$ generated by quadratic polynomials.
The motivation for this reduction is, of course, Arveson's result that if every homogeneous submodule $M$ of $\arv$ that is generated by linear polynomials is essentially normal, then every graded submodule of $\arv$ (as well as its quotient) is essentially normal \cite[Corollary 8.4]{Arv07}\footnote{We note that it appears that the same proof given in \cite{Arv07} would give the same result for $p$-essential normality.}.
\noindent{\bf Statement:} \emph{If it is true that every homogeneous ideal in $H^2_d$ that is generated by quadratic polynomials is $p$-essentially normal for $p>d$, then every homogeneous submodule of $\arv$ that is generated by linear polynomials is $p$-essentially normal for all $p> d+r$. Similarly, if it is true that every homogeneous ideal in $H^2_d$ that is generated by quadratic polynomials is essentially normal, then every homogeneous submodule of $\arv$ that is generated by linear polynomials is essentially normal.}
\begin{proof}
We prove the statement about $p$-essential normality. The statement about essential normality is proved in a similar way. Fix $p>d+r$.
Write the $d$-dimensional variable as $z = (z_1, \ldots, z_d)$, and denote the coordinate operators by
$S_1, \ldots, S_d$.
Let $M\subseteq \arv$ be a submodule generated by polynomials of degree $1$, and put $T_i = S_i \big|_{M}$, $i=1, \ldots, d$. Let $\{v_1, \ldots, v_r\}$ denote an orthonormal basis of $\mb{C}^r$, and let the generators $\{f_1, \ldots, f_k\}$ of $M_1$ be given by
\bes
f_m(z) = \sum_{i,j}a^{m}_{ij} z_i v_j.
\ees
Now, consider the space $H^2_{d+r}$, with the $(d+r)$-dimensional variable written as $(z,y) = (z_1,\ldots, z_d, y_1, \ldots, y_r)$. We denote the coordinate operators of $H^2_{d+r}$ by $Z_1, \ldots, Z_d, Y_1, \ldots, Y_r$. Note that there is a difference between the tuples $(S_1, \ldots, S_d)$ and $(Z_1, \ldots, Z_d)$: they act on different spaces and in different ways.
Define $k$ quadratic forms $g_1, \ldots, g_k$ by
\bes
g_m(z,y) = \sum_{i,j}a^{m}_{ij} z_i y_j.
\ees
Let $N$ be the graded Hilbert submodule of $H^2_{d+r}$ generated by $\{g_1, \ldots, g_k\}$. By assumption, $N$ is $p$-essentially normal. In particular, letting $A_i = Z_i \big|_{N}$, we have that
\bes
A_i A_j^* - A_j^* A_i \in \cL^p \,\, , \,\, i,j = 1, \ldots, d .
\ees
Now, let $\cA$ be $\mb{C}[z_1, \ldots, z_d]$, considered as the subalgebra of $\mb{C}[z_1, \ldots, z_d, y_1, \ldots, y_r]$ consisting of polynomials depending only on the $z_i$'s. $N$ is also an $\cA$-module. Let $P$ be the completion of the $\cA$-submodule of $N$ generated by $\{g_1, \ldots, g_k\}$. Denote $B_i = A_i\big|_{P}$.
With all these definitions set up, the proof will now be completed in two steps. First, we will show that for all $i=1, \ldots, d$, $P$ reduces $A_i$. As this obviously implies that $[B_i,B_j^*]$ are also in $\cL^p$, the second and final step will be to show that $p$-essential normality of $[B_i,B_j^*]$ implies $p$-essential normality of $[T_i,T_j^*]$.
\noindent {\bf 1. $P$ reduces $A_i$:}
$P$ is invariant for $A_i$ by definition. We need to show that $N \ominus P$ is also invariant under $A_i$. But $P$ consists of all polynomials in $N$ in which the $y$ variables appear in every term with degree precisely one. Thus $N \ominus P$ certainly contains the space of all polynomials in $N$ in which the $y$ variables appear with degree strictly greater than $1$. Call this space $Q$. But $P + Q = N$, hence $N \ominus P = Q$. The definition of $Q$ as the space of polynomials in which the $y$ variables appear with degree strictly greater than $1$ implies that it is invariant under multiplication by $z_i$, i.e., it is invariant under the operator $A_i$.
\noindent{\bf 2. $p$-essential normality of $[B_i,B_j^*]$ implies $p$-essential normality of $[T_i,T_j^*]$:}
Let $R$ be the completion of the $\cA$-submodule of $H^2_{d+r}$ generated by $\{y_1, \ldots, y_r\}$. $R$ can be equivalently defined as
\bes
R = \{f\in H^2_{d+r} : \forall z,y,\lambda . f(z,\lambda y) = \lambda f(z,y)\}.
\ees
Define $U: H^2_d \otimes \mb{C}^r \rightarrow R$ on monomials by
\bes
U (z^{\alpha} v_j) = \sqrt{1+|\alpha|} z^\alpha y_j.
\ees
Using the formula
\bes
\|z^\alpha \|^2 = \frac{\alpha_1 ! \cdots \alpha_d !}{|\alpha|!},
\ees
one sees that $U$ extends to a unitary. From our definitions it follows that $U$ maps $M$ onto $P$. A simple computation shows:
\be\label{eq:almostunitary}
U^* Z_i U (z^{\alpha} v_j) = \sqrt{\frac{|\alpha|+1}{|\alpha| +2 }} S_i (z^\alpha v_j).
\ee
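For the reader's convenience, here is a sketch of these computations. First, computing norms in $H^2_{d+r}$,
\bes
\|U(z^\alpha v_j)\|^2 = (1+|\alpha|)\, \|z^\alpha y_j\|^2 = (1+|\alpha|)\, \frac{\alpha_1! \cdots \alpha_d!}{(|\alpha|+1)!} = \frac{\alpha_1! \cdots \alpha_d!}{|\alpha|!} = \|z^\alpha v_j\|^2 ,
\ees
so $U$ is isometric on an orthogonal basis of $H^2_d \otimes \mb{C}^r$ and maps it onto an orthogonal basis of $R$; thus $U$ extends to a unitary. Next, $Z_i U(z^\alpha v_j) = \sqrt{1+|\alpha|}\, z^{\alpha + e_i} y_j$, while $U(z^{\alpha+e_i} v_j) = \sqrt{2+|\alpha|}\, z^{\alpha+e_i} y_j$; comparing the two yields (\ref{eq:almostunitary}).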
Let $D$ be the graded operator of degree $0$ on $H^2_d \otimes \mb{C}^r$, acting on the space of homogeneous polynomials of degree $n$ as multiplication by $\sqrt{n+1}/\sqrt{n}$. Then we can rewrite (\ref{eq:almostunitary}) as
\bes
D U^* B_i U = T_i .
\ees
Further computations show that
\bes
DU^* B_i U= D' U^* B_i U D ,
\ees
where $D'$ is the operator that multiplies homogeneous polynomials of degree $n \geq 2$ by $\sqrt{(n-1)(n+1)}/n$. Now,
\begin{align*}
T_i T_j^* - T_j^* T_i &= D U^*B_i U U^* B_j^* U D - U^* B_j^* U D D U^*B_i U \\
&= D U^* B_i B_j^* U D - DU^* B_j^* U^* D'^2 U B_i U D \\
&= D U^* B_i B_j^* U D - DU^* B_j^* B_i U D + DU^* B_j^* U^*(I-D'^2)U B_i U D.
\end{align*}
Now, $D U^* B_i B_j^* U D - DU^* B_j^* B_i U D = D U^* [B_i, B_j^*] U D \in \cL^p$. On the other hand, $I - D'^2$ is the operator that multiplies the homogeneous polynomials of degree $n$ by $1 - (n-1)(n+1)/{n^2} = 1/{n^2}$, and it is not hard to see that this operator is in $\cL^q$ for all $q>d/2$. But $p>d+r$, so $DU^* B_j^*U^*(I-D'^2)U B_i U D \in \cL^p$, and we are done.
\end{proof}
\section{Concluding remarks}
The problem of determining whether every homogeneous ideal in $\cA_d$ has the stable division property remains open. Besides being a compelling problem in its own right, and in addition to being directly related to questions of numerical stability in computational algebraic geometry, the consequence to essential normality of Hilbert modules serves as a great motivation for solving this problem. By the result of the previous section, it is already interesting to solve this problem for ideals generated by quadratic forms. But it is possible that even this problem is too hard to solve.
The notion of stable division can be weakened in several ways. One of these ways is to allow for \emph{approximate stable division}. That is, instead of requiring
\bes
\sum_{i=1}^k a_i f_i = h
\ees
with $\sum \|a_i f_i\| \leq C \|h\|$, one requires only
\bes
\|\sum_{i=1}^k a_i f_i - h \| \leq cn^{-1/2} \|h\|
\ees
and $\sum \|a_i f_i\| \leq C \|h\|$, where $n$ is the degree of $h$. It is then easy to see that, under the assumption of \emph{approximate} stable division, the proof of Theorem \ref{thm:stabdivessnorm} goes through. One can also allow $C$ to be a slowly growing function of $n:= \deg h$, although that may affect the interval of those $p$ for which $p$-essential normality is shown (for example $C \leq cn^{1/2-\epsilon}$ would still give $p$-essential normality for sufficiently large $p$, $C \leq c\log n$ would give $p$-essential normality for the same $p$'s, etc.). In fact, an analysis of the proof of Theorem \ref{thm:stabdivessnorm} shows that we may also allow the generating set to vary, and in fact to have (slowly) growing degree.
These weakened notions of stable division are perhaps what one might hope to prove in order to establish Arveson's conjecture in general.
Recently, J\"org Eschmeier developed a different approach to the problem of essential normality \cite{E10}. His approach is related to ours, but somewhat different in spirit. He showed that if an ideal $I$ is generated by homogeneous polynomials $\{f_1, \ldots, f_k\}$ of degree $m$, such that
\be\label{eq:Eschmeier}
\|P_{I^\perp}\sum_{i=1}^k a_i \frac{\partial}{\partial z_j} f_i \| \leq C\sqrt{n} \|\sum_{i=1}^k a_i f_i \|
\ee
holds for all $a_1, \ldots, a_k \in H_{n-m}$, then $H^2_d/I$ is $p$-essentially normal for all $p>\dim(I)$. He also showed that if $\{f_1, \ldots, f_k\}$ is a stable generating set for $I$, then $I$ has the above property.
\section{Introduction} \label{s: Introduction}
Let $S$ be the {\it Lie group} ${{\rr}^n}\ltimes {\mathbb R}$ endowed with the
following {\it product}: for all $(x, t)$, $(x', t')\in S$,
$$(x, t)\cdot(x', t')\equiv (x+e^tx', t+t')\,.$$
The group $S$ is also called an
{\emph{$ax+b$--group}}. Clearly, $o= (0, 0)$ is the {\it identity} of $S$.
We endow $S$ with the {\it left-invariant Riemannian metric}
$$ds^2 \equiv e^{-2t}(dx_1^2+\cdots+dx_n^2)+dt^2\,,$$
and denote by $d$ the {\it corresponding metric}. This coincides with the
metric on the hyperbolic space $H^{n +1}({\mathbb R})$. For all $(x, t)$ in
$S$, we have
\begin{equation}\label{1.1}
\cosh d\big((x, t), o\big)= \frac{ e^t+e^{-t}+e^{-t}|x|^2 }{2}.
\end{equation}
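This formula can be derived from the upper half-space model (we sketch the computation): the substitution $y = e^t$ transforms the metric above into $y^{-2}(dx_1^2+\cdots+dx_n^2+dy^2)$, the standard metric of $H^{n+1}({\mathbb R})$, for which
\bes
\cosh d\big((x,y), (x',y')\big) = 1 + \frac{|x-x'|^2 + (y-y')^2}{2yy'} .
\ees
Taking $(x',y') = (0,1)$, the point corresponding to $o$, and simplifying gives (\ref{1.1}).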
The group $S$ is nonunimodular. The {\it right and left Haar measures} are
given by
$$
d\rho(x, t)\equiv\,dx\,dt \quad {\rm{and}}
\quad d\lambda(x, t)\equiv e^{-nt}\,dx\,dt.
$$
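Indeed, a change of variables verifies these invariance properties (a routine check which we sketch): fixing $(x,t)$ and substituting $(u,s) = (x+e^t x', t+t')$, so that $du\,ds = e^{nt}\,dx'\,dt'$, we get
\bes
\int_S f\big((x,t)\cdot(x',t')\big)\, e^{-nt'}\,dx'\,dt' = \int_S f(u,s)\, e^{-n(s-t)}\, e^{-nt}\,du\,ds = \int_S f(u,s)\, e^{-ns}\,du\,ds ,
\ees
so $\lambda$ is left invariant. Similarly, the substitution $(u,s) = (x'+e^{t'}x, t'+t)$ has Jacobian $1$, which shows that $\rho$ is right invariant.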
Throughout the whole paper, we work on the {\it triple $(S, d, \rho)$},
namely, the group $S$ endowed with the left-invariant Riemannian
metric $d$ and the right Haar measure $\rho$. For all $(x, t)\in S$
and $r>0$, we denote by $B\big((x,t), r\big)$ the {\it ball centered at
$(x,t)$ of radius $r$}. In particular, it is well known that the
right invariant measure of the ball $B(o, r)$ has the following behavior
\begin{eqnarray*}
\rho\big(B(o, r)\big)\sim\left\{\begin{array}{ll}
r^{n+1}\quad\quad&\text{if\quad $r<1$}\\
e^{nr}&\text{if\quad $r\ge 1$}.
\end{array}\right.
\end{eqnarray*}
Thus, $(S, d, \rho)$ is a space of exponential growth.
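The second estimate may be seen directly from (\ref{1.1}) (we only sketch the computation): for $r \geq 1$, a point $(x,t)$ lies in $B(o,r)$ if and only if $e^t + e^{-t} + e^{-t}|x|^2 \leq 2\cosh r$, which forces $|t| \leq r$ and $|x| \lesssim e^{(r+t)/2}$, so that
\bes
\rho\big(B(o,r)\big) \approx \int_{-r}^{r} e^{n(r+t)/2}\,dt \approx e^{nr} ,
\ees
up to multiplicative constants depending only on $n$.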
Throughout this paper, we denote by $L^p$ the {\it Lebesgue space $L^p(\rho)$} and by
$\|\cdot\|_{L^p}$ its {\it quasi-norm}, for all $p\in (0, \infty]$. We also
denote by $L^{1,\,\infty}$ the {\it Lorentz space $L^{1,\,\infty}(\rho)$}
and by $\|\cdot\|_{L^{1,\, \infty}}$ its {\it quasi-norm}.
Harmonic analysis on exponential growth groups has recently attracted a
lot of attention. In particular, many efforts have been made to
study the theory of singular integrals on the space $(S,d,\rho)$.
In the remarkable paper \cite{hs}, Hebisch and Steger developed a new
Calder\'on--Zygmund theory which holds in some spaces of exponential
growth, in particular in the space $(S,d,\rho)$. The main idea of
\cite{hs} is to replace the family of balls which is used in the
classical Calder\'on--Zygmund theory by a suitable family of
rectangles which we call Calder\'on--Zygmund sets (see
Section \ref{s2} for their {definition}). We let ${\mathcal R}$
denote the {\it family of all Calder\'on--Zygmund sets}.
The Hardy--Littlewood maximal function associated with ${\mathcal R}$ is of
weak type $(1,1)$ (see \cite{gs, va1}).
In \cite{hs}, it was proven that every
integrable function on $(S,d,\rho)$ admits a Calder\'on--Zygmund
decomposition involving the family ${\mathcal R}$. As a consequence, a
theory for singular integrals holds in this setting. In particular,
every integral operator bounded on $L^2$ whose kernel satisfies a
suitable integral H\"ormander's condition is of weak type $(1,1)$.
Interesting examples of singular integrals in this setting are
spectral multipliers and Riesz transforms associated with a
distinguished Laplacian $\Delta$ on $S$, which have been studied by
numerous authors in, for example, \cite{as, cghm, gqs, gas, hs, heb3, mt, sjo, sv}.
Vallarino \cite{va} introduced an atomic Hardy space $H^1$ on the
group {$(S, d,\rho)$,} defined by atoms supported in Calder\'on--Zygmund sets
instead of balls, and a corresponding ${\mathop\mathrm{\,BMO}}$ space, which enjoy some
properties of the classical Hardy and ${\mathop\mathrm{\,BMO}}$ spaces (see \cite{cw2, fest,
st93}). More precisely, it was proven that the dual of $H^1$ may be
identified with ${\mathop\mathrm{\,BMO}}$, that singular integrals whose kernel
satisfies a suitable integral H\"ormander's condition are bounded
from $H^1$ to $L^1$ and from $L^{\infty}$ to ${\mathop\mathrm{\,BMO}}$. Moreover, for
every $\theta\in (0,1)$, the real interpolation space
$[H^1,L^2]_{\theta, q}$ is equal to $L^q$ if $\frac{1}{q} =1-\frac{\theta}{2}$,
and $[L^2,{\mathop\mathrm{\,BMO}}]_{\theta, p}$ is equal to $L^p$ if
$\frac{1}{p} = \frac{1-\theta}{2}$. The complex interpolation
spaces between $H^1$ and $L^2$ and between $L^2$ and ${\mathop\mathrm{\,BMO}}$
are not identified in \cite{va}.
In this paper, we introduce a dyadic grid of Calder\'on--Zygmund sets
on $S$, which we denote by $\mathcal{D}$ and which can be considered as the
analogue of the family of classical dyadic cubes (see Theorem
\ref{t3.1} below). Recall that dyadic sets in the context of spaces
of homogeneous type were also introduced by Christ \cite{c}; his
construction used the doubling condition of the considered measure, so it
cannot be adapted to the current setting. {In the $ax+b$\,--groups, the main
tools we use to construct such a dyadic grid are some nice
splitting properties of the Calder\'on--Zygmund sets and an
effective method to construct a ``parent" of a given
Calder\'on--Zygmund set (see Lemma \ref{l3.1} below).
More precisely, given a Calder\'on--Zygmund set $R$, we
find a bigger Calder\'on--Zygmund set $M(R)$ which can be split into
at most $2^n$ sub-Calder\'on--Zygmund sets such that one of these
subsets is exactly $R$ and each of these subsets has measure
comparable to the measure of $R$.} To the best of our knowledge,
this is the first time that a family of dyadic sets appears in a
space of exponential growth. The dyadic grid $\mathcal{D}$ turns out to be a
useful tool to study the analogue of maximal singular integrals (see
\cite{hyy}) on the space $(S,d,\rho)$, which will be investigated
in a forthcoming paper \cite{lvy}.
By means of the dyadic collection $\mathcal{D}$, in Section 4 below, we
prove a relative distributional inequality
involving the dyadic Hardy--Littlewood maximal function and the
dyadic sharp maximal function on $S$, which implies a
Fefferman--Stein type inequality involving those
maximal functions; see Stein's book \cite[Chapter IV, Section
3.6]{st93} and Fefferman--Stein's paper \cite{fest} for
the analogous inequality in the Euclidean setting. The previous
inequality is the main ingredient to prove that the complex
interpolation space $(L^2,{\mathop\mathrm{\,BMO}})_{[\theta]}$ is equal to
$L^{p_{\theta}}$ if $\frac{1}{p_{\theta}}= \frac{1-\theta}{2}$ and
$(H^1,L^2)_{[\theta]}$ is equal to $L^{q_{\theta}}$ if
$\frac{1}{q_{\theta}}= 1-\frac{\theta}{2}$. This implies complex
interpolation results for analytic families of operators
(see Theorems \ref{t5.2} and \ref{t5.3} below). {In particular,}
the complex interpolation result for analytic families of operators
involving $H^1$ could be interesting and useful to obtain endpoint
growth estimates of the solutions to the wave equation associated
with the distinguished Laplacian $\Delta$ on $ax+b$--groups, as was
pointed out {by M\"uller and Vallarino} \cite[Remark 6.3]{mv}.
We remark that the corresponding complex interpolation results for
the classical Hardy and ${\mathop\mathrm{\,BMO}}$ spaces were proven by Fefferman and
Stein \cite{fest}. Recently, an $H^1$--${\mathop\mathrm{\,BMO}}$ theory was developed by
Ionescu \cite{i} for noncompact symmetric spaces of rank $1$ and,
more generally, by Carbonaro, Mauceri and Meda \cite{cmm} for metric
measure spaces which are nondoubling and satisfy suitable geometric
assumptions. In those papers, the authors proved a Fefferman--Stein type
inequality for the maximal functions associated with the family of balls
of small radius: the main ingredient in their proofs is an isoperimetric property
which is satisfied by the spaces studied in \cite{i, cmm}. As a consequence,
the authors in \cite{i, cmm} obtained some complex interpolation results
involving a Hardy space defined only by means of atoms supported in small balls
and a corresponding ${\mathop\mathrm{\,BMO}}$ space. Notice that the space $(S,d,\rho)$
which we study here does not satisfy the
isoperimetric property (\cite[(2.2)]{cmm}). Moreover, we
consider atoms supported both in {``small'' and ``big'' } sets.
Then we have to use different methods to obtain a suitable
Fefferman--Stein inequality and complex interpolation results
involving $H^1$ and ${\mathop\mathrm{\,BMO}}$.
Due to the existence of the dyadic collection $\mathcal{D}$, it makes sense
to define a dyadic ${\mathop\mathrm{\,BMO}}_\mathcal{D}$ space and its predual dyadic Hardy
space $H_\mathcal{D}^1$ on $S$ (see Definitions
\ref{dyadic-hardy} and \ref{dyadic-bmo} below). Though in Theorem
\ref{dyadic-hardy and hardy} below, it is proven that $H_\mathcal{D}^1$ is a
proper subspace of $H^1$, the complex interpolation result given by
$H_\mathcal{D}^1$ and $L^2$ is the same as that given by $H^1$ and $L^2$;
see Remark \ref{dyadic-complex-interpolation} below.
Finally, we make some conventions on notation. Set
${\mathbb Z}_+\equiv\{1,\,2,\,\cdots\}$ and ${\mathbb N}={\mathbb Z}_+\cup\{0\}$. In the
following, $C$ denotes a {\it positive finite constant} which may vary
from line to line and may depend on parameters according to the
context. Constants with subscripts do not change through the whole
paper. Given two quantities $f$ and $g$, by $f\lesssim g$, we mean that
there exists a positive constant $C$ such that $f\le Cg$. If $f\lesssim
g\lesssim f$, we then write $f\sim g$. For any bounded linear operator
$T$ from a Banach space $A$ to a Banach space $B$, we denote by
$\|T\|_{A\to B}$ its {\it operator norm}.
\section{Preliminaries}\label{s2}
We first recall the definition of Calder\'on--Zygmund sets which appears
in \cite{hs} and implicitly in \cite{gs}. In the sequel, we denote
by ${\mathcal Q}$ the {\it collection of all dyadic cubes in ${{\rr}^n}$}.
\begin{definition}\label{d2.1}
A {\it Calder\'on--Zygmund set} is a set $R\equiv Q\times[t-r, t+r)$, where
$Q\in{\mathcal Q}$ with side length $L$, $t\in{\mathbb R}$, $r>0$ and
\begin{eqnarray*}
&&e^2\,e^tr\le L<e^8\,e^tr\quad\quad \mbox{if}\quad r<1,\\
&&e^t\,e^{2r}\le L<e^t\,e^{8r}\quad\quad\mbox{if}\quad r\ge1.
\end{eqnarray*}
We set $t_R\equiv t$, $r_R\equiv r$ and $x_R\equiv(c_Q, t)$, where
$c_Q$ is the {\it center} of $Q$. For a Calder\'on--Zygmund set $R$,
its {\it dilated set} is defined as $R^*\equiv \{x\in S:\, d(x, R)<r_R\}$.
Denote by ${\mathcal R}$ the family of all Calder\'on--Zygmund sets on $S$.
For any $x\in S$, denote by ${\mathcal R}(x)$
the family of the sets in ${\mathcal R}$ which contain $x$.
\end{definition}
\begin{remark}\label{r2.1}\rm
For any set $R\equiv Q\times[t-r, t+r)\in{\mathcal R}$, we have that
$$\rho(R)=\displaystyle\int_Q\displaystyle\int_{t-r}^{ t+r} \,ds\,dx=2r|Q|=2rL^n,$$
where $|Q|$ and $L$ denote the Lebesgue measure and the side length
of $Q$, respectively.
\end{remark}
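To fix ideas, here is a concrete instance of Definition \ref{d2.1} and Remark \ref{r2.1} for $n=1$; the numerical values below are chosen purely for illustration.

```latex
% Take n=1, t=1 and r=1, so that r\ge 1 and the admissible side lengths
% satisfy
%    e^{t}e^{2r} = e^{3}\approx 20.1 \ \le\ L\ <\ e^{t}e^{8r} = e^{9}\approx 8103.
% The dyadic choice L = 32 = 2^{5} is admissible, and
R \equiv [0,32)\times[0,2)\in{\mathcal R},
\qquad
\rho(R) = 2r\,L^{n} = 2\cdot 32 = 64.
```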
The following lemma presents some properties of the
Calder\'on--Zygmund sets.
\begin{lemma}\label{l2.1}
Let all the notation be as in Definition \ref{d2.1}. Then there
exists a constant ${\kappa}_0\in[1, \infty)$ such that for all $R\in{\mathcal R}$,
the following hold:
\begin{itemize}
\vspace{-0.25cm}
\item[(i)] $B(x_R, r_R)\subset R\subset B(x_R, {\kappa}_0r_R)$;
\vspace{-0.25cm}
\item[(ii)] $\rho(R^\ast)\le{\kappa}_0\rho(R)$;
\vspace{-0.25cm}
\item[(iii)] every $R\in{\mathcal R}$ can be decomposed into
mutually disjoint sets $\{R_i\}_{i=1}^k$, with $k=2$ or $k=2^n$,
$R_i\in {\mathcal R}$, such that $R=\cup_{i=1}^k R_i$ and
$\rho(R_i)=\rho(R)/k$ for all $i\in\{1,\cdots, k\}$.
\end{itemize}
\end{lemma}
We refer the reader to \cite{hs, va} for the proof of Lemma \ref{l2.1}.
We recall here the idea of the proof of property (iii) from \cite{hs, va}.
Given a Calder\'on--Zygmund set $R\equiv Q\times [t-r,t+r)$,
when the side length $L$ of $Q$ is sufficiently large
with respect to $e^t$ and $r$, it suffices to decompose $Q$ into
$2^n$ smaller dyadic Euclidean cubes $\{Q_1,\cdots,Q_{2^n}\}$ and
define $R_i\equiv Q_i\times [t-r,t+r)$ for $i\in\{1,\cdots,2^n\}$. Otherwise, it suffices
to split up the interval $[t-r,t+r)$ into two disjoint sub-intervals
$\{I_1, I_2\}$, which have the same measure, and define
$R_i\equiv Q\times I_i$ for $i\in\{1, 2\}$. This construction gives rise
to either $2^n$ or $2$ smaller Calder\'on--Zygmund sets
satisfying the property (iii) above.
\smallskip
The Hardy--Littlewood maximal function associated with
the family ${\mathcal R}$ is defined as follows.
\begin{definition}\label{d2.2}
For any locally integrable function $f$ on $S$, the {\it Hardy--Littlewood
maximal function ${\mathcal M} f$} is defined by
\begin{equation}\label{2.1}
{\mathcal M}
f(x)\equiv\sup_{R\in{\mathcal R}(x)}\dfrac1{\rho(R)}\displaystyle\int_R|f|\,d\rho\qquad
{ \forall\ } x\in S\,.
\end{equation}
\end{definition}
The maximal operator ${\mathcal M}$ has the following boundedness properties
\cite{gs, hs, va1}.
\begin{proposition}\label{p2.1}
The Hardy--Littlewood maximal
operator ${\mathcal M}$ is bounded from $L^1$ to
$L^{1,\,\infty}$, and also bounded on $L^p$ for all $p\in(1, \infty]$.
\end{proposition}
By Proposition \ref{p2.1}, Lemma \ref{l2.1} and a stopping-time argument,
Hebisch and Steger \cite{hs} showed that
for any level $\alpha>0$, every integrable function $f$ on $S$ admits a
Calder\'on--Zygmund decomposition $f=g+\sum_ib_i$, where $|g|$ is a
function almost everywhere bounded by ${\kappa}_0 \alpha$ and the functions
$\{b_i\}_i$ have vanishing integral and are supported in sets of the
family ${\mathcal R}$. This was proven to be a very useful tool
in establishing the boundedness of some multipliers and
singular integrals in \cite{hs}, and the theory of the
Hardy space $H^1$ on $S$ in \cite{va}.
Lemma {\ref{l2.1}(iii)} states that given a Calder\'on--Zygmund set, one
can split it up into a finite number of disjoint subsets which are
still in ${\mathcal R}$. We shall now study how, starting from a given
Calder\'on--Zygmund set $R$, one can obtain a bigger set containing
it which is still in ${\mathcal R}$ and whose measure is comparable to the
measure of $R$.
\begin{definition}\label{d3.1}
For any $R\in{\mathcal R}$, $M(R)\in{\mathcal R}$ is called a \emph{parent} of $R$,
if
\begin{itemize}
\vspace{-0.25cm}
\item[(i)] $M(R)$ can be decomposed into $2$ or $2^n$ mutually disjoint
sub-Calder\'on--Zygmund sets, and one of these sets is $R$;
\vspace{-0.25cm}
\item[(ii)] $3\rho(R)/2\le\rho(M(R))\le\max\{3, 2^n\}\rho(R)$.
\end{itemize}
\end{definition}
For any $R\in{\mathcal R}$, a parent of $R$ always exists, but it may not be
unique. The following lemma gives three different kinds of
extensions for sets $R\equiv Q\times[t{-r},\, t+r)\in{\mathcal R}$ when
$r\ge1$. Precisely, if $Q$ has small side length, then we find a
parent of $R$ by extending $R$ ``horizontally''; if $Q$ has large
side length, then we find a parent of $R$ by extending $R$
either ``vertically up" or ``vertically down".
\begin{lemma}\label{l3.1}
Suppose that $R\equiv Q\times[t{-r},\, t+r)\in{\mathcal R}$, where $t\in\mathbb R$,
$r\equiv r_R\ge1$ and $Q\subset{{\rr}^n}$ is a dyadic cube with side
length $L$ satisfying $e^t\,e^{2r}\le L<e^t\,e^{8r}$. Then the following
hold:
\begin{itemize}
\vspace{-0.25cm}
\item[(i)] {If} $e^t\,e^{2r}\le L<e^t\,e^{8r}/2$, then $R_1\equiv
Q'\times[t-r,\, t+r)$ is a parent of $R$, where $Q'\subset{{\rr}^n}$
is the unique dyadic cube with side length $2L$ that contains $Q$.
Moreover, $\rho(R_1)=2^n\rho(R)$.
\vspace{-0.25cm}
\item[(ii)] If $e^t\,e^{8r}/2\le L<e^t\,e^{8r}$, then
$R_2\equiv Q\times[t{-r},\, t+{3r})$
is a parent of $R$. Moreover, the set $R'\equiv Q\times[t+r,
t+{3r})$ belongs to the family ${\mathcal R}$, $R_2=R\cup R'$ and
$$\rho(R)=\rho(R')=\rho(R_2)/2.$$
The set $R_2$ is also a parent of $R'$.
\vspace{-0.25cm}
\item[(iii)] If $e^t\,e^{8r}/2\le L<e^t\,e^{8r}$, then
$R_3\equiv Q\times[t{-5r},\, t+{r})$
is a parent of $R$. Moreover, the set $R''\equiv Q\times[t{-5r},
t{-r})$ belongs to the family ${\mathcal R}$, $R_3=R\cup R''$ and
$$\rho(R)=\rho(R'')/2=\rho(R_3)/3.$$
The set $R_3$ is also a parent of $R''$.
\end{itemize}
\end{lemma}
\begin{proof}
We first prove (i). Since $2e^te^{2r}\le 2L<e^te^{8r}$, we
have that $R_1\in{\mathcal R}$. Obviously $Q'\times[t{-r},
t+r)$ can be decomposed into $2^n$ sub-Calder\'on--Zygmund sets and
one of these sets is $R$. By Remark \ref{r2.1}, we have
$\rho(R_1)=2^n\rho(R)$. Thus, (i) holds.
To show (ii), notice that $r_{R_2}=2r$ and $t_{R_2}=t+r$. Since
$r\ge 1$ and $e^t\,e^{8r}/2\le L<e^t\,e^{8r}$, we have that $e^{t+r}\,e^{4r}\le
L<e^{t+r}\, e^{16r}$, which implies that $R_2\in{\mathcal R}$. If we set
$$R'\equiv Q\times[t+r,\, t+3r),$$
then $t_{R'}=t+{2r}$ and
$r_{R'}=r$. Since $r\ge 1$ and $e^t\,e^{8r}/2\le
L<e^t\,e^{8r}$, we obtain $e^{t+2r}\,e^{r}\le L<e^{t+2r}\,e^{8r}$, and
hence $R'\in{\mathcal R}$. By Remark \ref{r2.1},
$\rho(R)=\rho(R')=\rho(R_2)/2.$ Thus, (ii) holds.
Finally, we show (iii). Observe that $t_{R_3}=t-2r$ and
$r_{R_3}=3r$. Since $r\ge 1$ and $e^t\,e^{8r}/2\le
L<e^t\,e^{8r}$, we have that $e^{t-2r}\,e^{6r}\le L<e^{t-2r}\,e^{24r}$
and hence $R_3\in{\mathcal R}$. Set $R''\equiv Q\times[t{-5r},\,
t{-r})$. Notice that $t_{R''}=t{-3r}$ and $r_{R''}=2r$. Again by
{$r\ge 1$} and $e^t\,e^{8r}/2\le L<e^t\,e^{8r}$, we obtain that
$e^{t-3r}\,e^{4r}\le L<e^{t-3r}\,e^{16r}$ and hence $R''\in{\mathcal R}$. It
is easy to see that $R_3=R\cup R''$, $\rho(R'')=2\rho(R)$ and
$\rho(R_3)=3\rho(R)$. Therefore, we obtain (iii), which completes
the proof.
\end{proof}
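As a concrete sanity check of the case (ii), take $n=1$, $t=0$ and $r=1$; again, the numerical values are purely illustrative.

```latex
% Here e^{t}e^{8r}/2 = e^{8}/2 \approx 1490 and e^{8}\approx 2981, so the
% dyadic side length L = 2048 = 2^{11} falls in the required range. Then
R_2 = Q\times[-1,3), \qquad t_{R_2}=1, \quad r_{R_2}=2,
% and, since r_{R_2}\ge 1, the membership condition
%    e^{t_{R_2}}e^{2r_{R_2}} \le L < e^{t_{R_2}}e^{8r_{R_2}}
% reads
e^{5}\approx 148 \ \le\ 2048 \ <\ e^{17}\approx 2.4\times 10^{7},
% so R_2\in{\mathcal R}, in accordance with the lemma.
```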
We conclude this section by recalling the definition of the Hardy
space $H^1$ and its dual space ${\mathop\mathrm{\,BMO}}$ (see \cite{va}).
\begin{definition}
An {\emph{$H^1$-atom}} is a function $a$ in $L^1$ such that
\begin{itemize}
\vspace{-0.2cm}
\item [(i)] $a$ is supported in a set $R\in{\mathcal R}$;
\vspace{-0.2cm}
\item [(ii)]$\|a\|_{ L^{\infty} } \le [\rho(R)]^{-1};$
\vspace{-0.2cm}
\item [(iii)] $\int_S a \,d\rho =0$\,.
\end{itemize}
\end{definition}
\begin{definition}
The {\it Hardy space} $H^{1}$ is the space of all functions $g$ in $ L^1$
which can be written as $g=\sum_j \lambda_j\, a_j$, where
{$\{a_j\}_j$} are {$H^1$-atoms } and {$\{\lambda _j\}_j$} are
complex numbers such that $\sum _j |\lambda _j|<\infty$. {Denote} by
$\|g\|_{H^{1}}$ the {\it infimum of $\sum_j|\lambda_j|$ over such
decompositions}.
\end{definition}
In the sequel, for any locally integrable function $f$ and any set
$R\in {\mathcal R}$, we denote by $f_R$ the {\it average of $f$ on $R$}, namely,
$\frac{1}{ \rho(R)} \int_R f d\rho$.
\begin{definition}
For any locally integrable function $f$, its {\it sharp maximal function}
is defined by
$$f^{\sharp}(x)\equiv\sup_{R\in{\mathcal R}(x)}\frac{1}{ \rho(R)} \int_R
|f-f_R|\,d\rho \qquad { \forall\ } x\in S\,.$$
The space $\mathcal{B}\mathcal{M}\mathcal{O}$ is the {\it space of all
locally integrable functions $f$ such that $f^{\sharp}\in L^
{\infty}$}. The {\it space ${\mathop\mathrm{\,BMO}}$} is the quotient of
$\mathcal{B}\mathcal{M}\mathcal{O}$ modulo constant functions. It is
a Banach space endowed with the norm $\|f\|_{{\mathop\mathrm{\,BMO}}}
\equiv\|f^{\sharp}\|_{L^{\infty}}$.
\end{definition}
The space ${\mathop\mathrm{\,BMO}}$ is identified with the dual of $H^1$; see
\cite[Theorem 3.4]{va}. More precisely, for any $f$ in
${\mathop\mathrm{\,BMO}}$, the functional $\ell$ defined by $\ell(g)\equiv\int
fg\,d\rho$ for any finite linear combination $g$ of atoms extends to
a bounded functional on $H^{1}$ whose norm is no more than $C\,\|f\|_{{\mathop\mathrm{\,BMO}}}.$
On the other hand, for any bounded linear functional $\ell$ on
$H^{1}$, there exists a function $f^{\ell}$ in ${\mathop\mathrm{\,BMO}}$
such that $\|f^{\ell}\|_{{\mathop\mathrm{\,BMO}}}\le C\,\|\ell\|_{(H^{1})^*}$ and
$\ell(g)=\int f^{\ell}g\,d\rho$ for any finite linear combination $g$ of
atoms.
\section{A dyadic grid on $(S, d, \rho)$}\label{s3}
The main purpose of this section is to introduce a dyadic grid of
Calder\'on--Zygmund sets on $(S, d,\rho)$, which can be considered
as an analogue of Euclidean dyadic cubes (see \cite[p.\,149]{st93}
or \cite[p.\,384]{g}). The key tools to construct such a grid are
{Lemmas \ref{l2.1} and \ref{l3.1}.}
\begin{theorem}\label{t3.1}
There exists a collection $\{\mathcal{D}_j\}_{j\in{\mathbb Z}}$ of partitions of $S$
such that each $\mathcal{D}_j$ consists of pairwise disjoint
Calder\'on--Zygmund sets, and
\begin{itemize}
\vspace{-0.25cm}
\item[(i)] for any $j\in{\mathbb Z}$, $S=\cup_{R\in\mathcal{D}_j} R$;
\vspace{-0.25cm}
\item[(ii)] if $\ell\le k$, $R\in\mathcal{D}_\ell$ and $R^{\prime}\in\mathcal{D}_k$, then
either $R\subset R^{\prime}$ or $R\cap R^{\prime}=\emptyset$;
\vspace{-0.25cm}
\item[(iii)] for any $j\in{\mathbb Z}$ and $R\in\mathcal{D}_j$, there
exists a unique $R^{\prime}\in\mathcal{D}_{j+1}$ such that $R\subset R^{\prime}$ and
$\rho(R^{\prime})\le 2^n\,\rho(R)$;
\vspace{-0.25cm}
\item[(iv)] for any $j\in{\mathbb Z}$, every $R\in\mathcal{D}_j$ can be decomposed into
mutually disjoint sets $\{R_i\}_{i=1}^k\subset\mathcal{D}_{j-1}$, with $k=2$
or $k=2^n$, such that $R=\cup_{i=1}^k R_i$ and
$\frac{\rho(R)}{2^n}\le\rho(R_i)\le {\frac{2\rho(R)}{3}}$ for all
$i\in\{1,\cdots,k\}$;
\vspace{-0.25cm}
\item[(v)] for any $x\in S$ and for any $j\in{\mathbb Z}$,
let $R_j^x$ be the unique set in $\mathcal{D}_j$ which contains $x$, then
$\lim_{j\to -\infty}\rho(R_j^x)=0$ and $\lim_{j\to
\infty}\rho(R_j^x)=\infty$.
\end{itemize}
\end{theorem}
\begin{proof}
We write $S\equiv\Omega_1\cup\Omega_2$, where $\Omega_1\equiv{{\rr}^n}\times[0,
\infty)$ and $\Omega_2\equiv{{\rr}^n}\times(-\infty,0)$,
and construct a sequence $\{\mathcal{D}_j^1\}_{j\in{\mathbb N}}$
of partitions of $\Omega_1$ as well as a sequence $\{\mathcal{D}_j^2\}_{j\in{\mathbb N}}$
of partitions of $\Omega_2$, respectively.
Let us first construct the desired partitions
$\{\mathcal{D}_j^1\}_{j\in{\mathbb N}}$ of $\Omega_1$ by the following four steps.
{{\bf Step 1.} } Choose a Calder\'on--Zygmund set $R_0\equiv
Q_0\times[t_0-r_0, t_0+{r_0})$, where $t_0=r_0\ge1$ and
$Q_0=[0, \ell_0)^n\in{\mathcal Q}$, with $e^{t_0}\,e^{2
r_0}\le\ell_0<e^{t_0}\,e^{8 r_0}$. To find a parent of $R_0$, we consider
the following two cases, separately.
{\it {Case 1:}} $e^{t_0}\,e^{8 r_0}/2\le\ell_0<e^{t_0}\,e^{8 r_0}$. In this
case, by Lemma \ref{l3.1}(ii),
$$R_1\equiv Q_0\times[t_0{-r_0}, t_0+{3r_0})$$
is a parent of $R_0$ and $\rho(R_1)=2\rho(R_0)$.
{\it {Case 2:}} $e^{t_0}\,e^{2 r_0}\le\ell_0< e^{t_0}\,e^{8 r_0}/2$.
By Lemma \ref{l3.1}(i), $R_1\equiv Q_1\times[t_0{-r_0},
t_0+{r_0})$ is a parent of $R_0$, where $Q_1\subset{{\rr}^n}$ is the
unique dyadic cube with side length $2\ell_0$ that contains $Q_0$,
namely, $Q_1=[0, 2\ell_0)^n$.
We then proceed as above to obtain a parent of $R_1$, which is
denoted by $R_2$. By repeating this process, we obtain a sequence of
Calder\'on-Zygmund sets, $\{R_j\}_{j\in{\mathbb N}}$, such that each
$R_{j+1}$ is a parent of $R_j$.
Without loss of generality, for any $j\in{\mathbb N}$, we may set $R_j\equiv
Q_j\times [t_j-r_j, t_j+r_j)$, where $r_{j+1}\ge r_j\ge1$,
$t_j=r_j$ and $Q_j=[0, \ell_j)^n\in{\mathcal Q}$, with
$e^{t_j}\,e^{2 r_j}\le\ell_j<e^{t_j}\,e^{8 r_j}$. Observe that $R_{j+1}$ is
obtained by extending $R_j$ either ``vertically up" (see {\it {Case
1}}) or ``horizontally" (see {\it {Case 2}}). Notice that the
definition of Calder\'on--Zygmund sets implies that we cannot always
{extend} $R_j$ ``horizontally'' to obtain its parent $R_{j+1}$; in
other words, for some $j$, to obtain $R_{j+1}$, we have to {extend}
$R_j$ ``vertically {up}''. Thus, $\lim_{j\to\infty}(t_j+r_j)=\infty$.
This, combined with the fact that $t_j=r_j$, implies that
\begin{equation}\label{3.1}
\Omega_1 = \bigcup_{j\in{\mathbb N}} \big({{\rr}^n}\times[t_j-r_j, t_j+{r_j})\big).
\end{equation}
{\bf Step 2. } For any $j\in{\mathbb N}$ and $R_j$ as constructed in {\bf
Step 1}, we set
\begin{equation}\label{3.2}
{\mathcal N}_j\equiv\{Q\times[t_j{-r_j}, t_j+{r_j}):\, Q\in{\mathcal Q},\,
\ell(Q)=\ell(Q_j)\}.
\end{equation}
Then ${\mathcal N}_j\subset{\mathcal R}$ and we put all sets of ${\mathcal N}_j$ into
$\mathcal{D}_j^1$. If $R_{j+1}$ is obtained by extending $R_j$ ``vertically
{up}" as in {\it Case 1} of {\bf Step 1}, then we set
\begin{equation}\label{3.3}
\widetilde{{\mathcal N}_j}\equiv\{Q\times[t_j+{r_j}, t_j+{3r_j}):\, Q\in{\mathcal Q},\,
\ell(Q)=\ell(Q_j)\}.
\end{equation}
By
Lemma \ref{l3.1}(ii), $\widetilde{{\mathcal N}_j}\subset{\mathcal R}$.
If $R_{j+1}$ is obtained by extending $R_j$ ``horizontally" as in
{\it Case 2} of {\bf Step 1}, then we set $\widetilde{{\mathcal N}_j}=\emptyset$.
We also put all sets of $\widetilde{{\mathcal N}_j}$ into $\mathcal{D}_j^1$.
We claim that for any fixed $j\in{\mathbb N}$,
\begin{equation}\label{Omega1unione}
\Omega_1=\bigcup_{R\in{\mathcal N}_j\cup (\cup_{\ell=j}^{\infty}\widetilde{{\mathcal N}_{\ell}})} R\,.
\end{equation}
Indeed,
\begin{equation}\label{primaparte}
{\mathbb R}^n\times [0,t_j+{3r_j})=\bigcup_{R\in {\mathcal N}_j\cup \widetilde{{\mathcal N}_j}}R.
\end{equation}
Rewrite the sequence $\{\widetilde{\mathcal N}_k:\, k>j,\, \widetilde{{\mathcal N}}_{k}\neq\emptyset\}$ as
$\{\widetilde{{\mathcal N}}_{\ell_k}\}_{k=1}^\infty$, where $$j+1\le
\ell_1<\ell_2<\dots<\ell_k<\dots \,.$$
We have that
\begin{equation} \label{oss1}
t_j+{3r_j}=t_{\ell_1}+{r_ {\ell_1} } \quad {\rm{and}} \quad
t_{\ell_{k-1}}+{3r_ {\ell_{k-1}} }=t_{\ell_k}+{r_ {\ell_k} } \quad
\forall\ k\ge 2.
\end{equation}
Since
$$
{\mathbb R}^n\times [t_{\ell_k}+{r_ {\ell_k} }, t_{\ell_k}+{3r_ {\ell_k}
})= {\bigcup_ {R\in\widetilde{{\mathcal N}} _{\ell_k} } R }
$$
and $\lim_{k\to \infty} ( t_{\ell_k}+{3r_ {\ell_k}})
={\infty}$, by (\ref{oss1}), we obtain that
\begin{equation}\label{secondaparte}
{ {\mathbb R}^n\times [ t_j+{3r_j},\infty ) =\bigcup_{k\ge
1}\bigcup_{R\in\widetilde{{\mathcal N}}_{\ell_k} } R = \bigcup_{
\ell\ge j+1} \bigcup_{R\in \widetilde{{\mathcal N}}_{\ell} } R.}
\end{equation}
The claim \eqref{Omega1unione} follows by \eqref{primaparte} and
\eqref{secondaparte}.
{\bf Step 3. } Now fix $j\in{\mathbb N}$ and take $\ell\ge j+1$ such that
$\widetilde{{\mathcal N}_{\ell}}\neq\emptyset$.
For any
$R\in\widetilde{{\mathcal N}_{\ell}}$, by Lemma \ref{l2.1}(iii), there exist mutually
disjoint sets $\{R_{i}\}_{i=1}^k\subset{\mathcal R}$ with $k=2$ or $k=2^n$
such that $R=\cup_{i=1}^k R_i$, and
$\rho(R)/2^n\le\rho(R_{i})\le\rho(R)/2$ for all $i\in\{1,\cdots,k\}$.
Denote by $\widetilde{{\mathcal N}_{\ell}}^1$ the collection of all such small
Calder\'on-Zygmund sets $R_i$ obtained by running $R$ over all
elements in $\widetilde{{\mathcal N}_{\ell}}$. Observe that sets in $\widetilde{{\mathcal N}_{\ell}}^1$ are
mutually disjoint. Next, we apply Lemma \ref{l2.1}(iii) to every
$R\in\widetilde{{\mathcal N}_{\ell}}^1$ and argue as above; we then obtain a collection
of smaller Calder\'on--Zygmund sets, which is denoted by
$\widetilde{{\mathcal N}_{\ell}}^2$. By repeating the above procedure $i$ times, we
obtain a collection of Calder\'on--Zygmund sets which we denote by $\widetilde{{\mathcal N}_\ell}^{i}$.
In particular, we put the collection $\widetilde{{\mathcal N}_\ell}^{\ell-j}$ obtained after $\ell-j$
steps into $\mathcal{D}_j^1$.
Thus, for any $j\in{\mathbb N}$, we define
\begin{equation}\label{3.4}
\mathcal{D}_j^1={\mathcal N}_j\bigcup\widetilde{{\mathcal N}_j} \bigcup \left(\bigcup_{\ell\ge
j+1}\widetilde{{\mathcal N}_\ell}^{\ell-j}\right) .
\end{equation}
By construction, the sets in $\mathcal{D}_j^1$ are mutually disjoint. Moreover,
since for all $j\ge 0$ and $\ell\ge j+1$,
$$
\bigcup_{R\in\widetilde{{\mathcal N}}_{\ell}^{\ell-j}}R=\bigcup_{R\in\widetilde{{\mathcal N}}_{\ell}}R\,,
$$
from the formula \eqref{Omega1unione},
we deduce that $\Omega_1=\cup_{R\in\mathcal{D}_j^1}R$. This
shows that $\mathcal{D}_j^1$ satisfies the property (i).
{\bf Step 4.} For any $0\le \ell\le k$, $R\in\mathcal{D}_\ell^1$ and
$R^{\prime}\in\mathcal{D}_k^1$, by \eqref{3.4}, \eqref{3.2}, \eqref{3.3} and the
construction above, it is easy to verify that either
$R\subset R^{\prime}$ or $R\cap R^{\prime}=\emptyset$, namely, the property
(ii) is satisfied.
Let $R$ be in $\mathcal{D}_j^1$ for some $j\in{\mathbb N}$. If $R$ is in
${\mathcal N}_j\cup\widetilde{{\mathcal N}}_j$ and if $R_{j+1}$ is obtained by extending
$R_j$ ``horizontally", then there exists one parent of $R$ in
$\mathcal{D}_{j+1}$ whose measure is {$2^n\rho(R)$}. If $R$ is in
${\mathcal N}_j\cup\widetilde{{\mathcal N}}_j$ and if $R_{j+1}$ is obtained by extending
$R_j$ ``vertically {up}", then there exists one parent of $R$ in
$\mathcal{D}_{j+1}$ whose measure is {$2\rho(R)$.} If $R$ is in
$\widetilde{{\mathcal N}}_{\ell}^{\ell-j}$ for some $\ell\ge j+1$, then it has a
parent in $\widetilde{{\mathcal N}}_{\ell}^{\ell-j-1} \subset \mathcal{D}_{j+1}$ whose
measure is either $2\rho(R)$ or $2^n\rho(R)$. Thus, the property (iii) is
satisfied.
So far, we have proven that there exists a sequence $\{\mathcal{D}_j^1\}_{j\in{\mathbb N}}$
of partitions of $\Omega_1$ whose elements satisfy the
properties (i)--(iii).
To obtain the desired partitions $\{\mathcal{D}_j^2\}_{j\in{\mathbb N}}$ on
$\Omega_2$, we {apply (i) and (iii) of Lemma \ref{l3.1} and} proceed
as for $\Omega_1$: the details are left to the reader.
We define $\mathcal{D}_j\equiv\mathcal{D}_j^1\cup\mathcal{D}_j^2$ for all $j\ge 0$.
We now construct the partitions $\mathcal{D}_j$ for $j<0$. By applying
Lemma \ref{l2.1}(iii) to each $R\in\mathcal{D}_0$, we find mutually disjoint sets
$\{R_{i}\}_{i=1}^k$, with $k=2$ or $k=2^n$, such that $R_i\in{\mathcal R}$,
$R=\cup_{i=1}^k R_i$, and $\rho(R)/2^n\le\rho(R_{i})\le\rho(R)/2$
for all $i\in\{1,\cdots,k\}$. Then we define $\mathcal{D}_{-1}$ to be the
collection of all such small Calder\'on-Zygmund sets $R_i$ obtained
by running $R$ over all elements in $\mathcal{D}_0$. Clearly $\mathcal{D}_{-1}$ is
still a partition of $S$. Again, applying {Lemma \ref{l2.1}(iii)} to
each element of $\mathcal{D}_{-1}$ and using a similar splitting argument,
we obtain a collection of smaller Calder\'on--Zygmund sets, which is
defined to be $\mathcal{D}_{-2}$. By repeating this process, we obtain a
collection $\{\mathcal{D}_j\}_{j<0}$, where each $\mathcal{D}_j$ is a partition of
$S$. By {the} construction of $\{\mathcal{D}_j\}_{j<0}$ and by Lemma \ref{l2.1}(iii),
it is easy to check that the sets in
$\{\mathcal{D}_j\}_{j<0}$ satisfy {the} properties {(i)--(iii).}
It remains to prove the properties (iv) and (v). For a set $R\in \mathcal{D}_j$,
with $j\le 0$, the property (iv) is easily deduced from Lemma \ref{l2.1}(iii).
Take now a set $R$ in $\mathcal{D}_j^1$ for some $j>0$. If $R$ is in ${\mathcal N}_j$,
then it has either $2^n$ disjoint subsets in ${\mathcal N}_{j-1}$ or $2$
disjoint subsets in ${\mathcal N}_{j-1}\cup \widetilde{{\mathcal N}}_{j-1}$. If $R$ is in
$\widetilde{{\mathcal N}}_j$, then it has either $2^n$ or $2$ disjoint subsets in
$\widetilde{{\mathcal N}}_{j}^1\subset \mathcal{D}_{j-1}^1$. Finally, if $R$ is in
$\widetilde{{\mathcal N}}_{\ell}^{\ell-j}$ for some $\ell\ge j+1$, then it has
either $2^n$ or $2$ subsets in $\widetilde{{\mathcal N}}_{\ell}^{\ell-j+1}\subset
\mathcal{D}_{j-1}^1$. In all the previous cases, $R$ satisfies the property (iv).
The case when $R$ is in $\mathcal{D}_j^2$ for some $j>0$ is similar and omitted.
As far as the property (v) is concerned, given a point $x$ in $S$, for any
$j\in{\mathbb Z}$, let $R_j^x$ be the set in $\mathcal{D}_j$ which contains $x$. By
the construction and the property (iv), for any $j\in{\mathbb Z}$,
there exists a set
$R_{j+1}^x\in \mathcal{D}_{j+1}$ which is a parent of $R_j^x$, so that,
for any $j\ge 0$,
$$\rho( R_{j+1}^x)\ge \frac{3}{2}\rho(R_j^x)\ge
\left(\frac{3}{2}\right)^{j+1}\rho(R_0^x);$$
this shows that
$\lim_{j\to \infty} \rho(R_j^x)=\infty$.
For any $j<0$,
we have that
$$\rho(R_j^x)\le {\frac23}\rho(R_{j+1}^x) \le
{\left(\frac23\right)^{-j}}\rho(R_0^x);$$
this shows that $\lim_{j\to -\infty}
\rho(R_j^x)=0$ and concludes the proof of the theorem.
\end{proof}
\begin{remark}\label{r3.1}
\begin{itemize}
\item[(i)] It should be pointed out that a sequence $\{\mathcal{D}_j\}_{j\in{\mathbb Z}}$
satisfying Properties (i)--(v) of Theorem \ref{t3.1} is not unique.
\vspace{-0.25cm}
\item[(ii)] For any given $j\in{\mathbb Z}$, the measures of any two elements
in $\mathcal{D}_j$ may not be comparable. This is an essential difference
between the dyadic sets introduced above and both the collection of
Euclidean dyadic cubes and the dyadic sets in spaces of homogeneous
type \cite{c}.
\end{itemize}
\end{remark}
We now choose one collection $\mathcal{D}\equiv \{\mathcal{D}_j \}_j$ of dyadic sets
in $S$ constructed as in Theorem \ref{t3.1}. In the sequel, $\mathcal{D}$
always denotes this collection.
\section{Dyadic maximal functions}
By using the collection $\mathcal{D}$ introduced above, we define the
corresponding Hardy--Littlewood dyadic maximal function and dyadic
sharp maximal function as follows.
\begin{definition}\label{dyadicmaxfct}
For any locally integrable function $f$ on $(S, d, \rho)$,
the {\it Hardy--Littlewood dyadic maximal function} ${\mathcal M}_{\mathcal{D}} f$
is defined by
\begin{equation}\label{3.6}
{\mathcal M}_{\mathcal{D}} f(x)\equiv\sup_{R\in{\mathcal R}(x),\,
R\in\mathcal{D}}\dfrac1{\rho(R)}\displaystyle\int_R|f|\,d\rho\qquad {\forall\ } x\in
S\,,
\end{equation}
and the {\it dyadic sharp maximal function} $f^{\sharp}_{\mathcal{D}}$ by
\begin{equation}\label{fsharpd}
f^{\sharp}_{\mathcal{D}}(x)\equiv\sup_{R\in{\mathcal R}(x),\,
R\in\mathcal{D}}\dfrac1{\rho(R)}\displaystyle\int_R|f -f_R|\,d\rho\qquad {\forall\ }
x\in S\,.
\end{equation}
Recall that $f_R\equiv\frac{1}{ \rho(R)} \int_R f\, d\rho$.
\end{definition}
It is easy to see that for all locally integrable functions $f$ and
almost every $x\in S$, $ |f(x)|\le{\mathcal M}_{\mathcal{D}} f(x)\le{\mathcal M} f(x)$ {and
$f_\mathcal{D}^\#(x)\le f^\#(x)$.} This combined with Proposition \ref{p2.1}
implies the following conclusion.
\begin{corollary}\label{c3.1}
The operator ${\mathcal M}_{\mathcal{D}}$ is bounded from $L^1$ to $L^{1,\,\infty}$, and
also bounded on $L^p$ for all $p\in(1, \infty]$.
\end{corollary}
\begin{remark}
\begin{itemize}
\item[(i)] It is obvious that ${\mathcal M}_{\mathcal{D}} f(x)\le {\mathcal M} f(x)$
for any locally integrable function $f$ at any point $x\in S$.
However, there exist functions $f$ such that ${\mathcal M} f$ and ${\mathcal M}_{\mathcal{D}}
f$ are not pointwise equivalent. To see this, we take a set
$R\equiv \,Q\times [0, {2r})$ in $\mathcal{D}$ such that
$Q={[0, 2^{\ell_0})}^n$ for some $\ell_0\in{\mathbb Z}$. Then, for all
points $(y_1,\dots,y_n, s)\in S$ such that $y_j<0$ for all $j\in
\{1,\dots,n\}$ and $s<0$, we have ${\mathcal M}_{\mathcal{D}}(\chi_R)(y, s)=0$
and ${\mathcal M}(\chi_R)(y, s)>0.$ So there does not exist a positive
constant $C$ such that ${\mathcal M}(\chi_R)\le C {\mathcal M}_{\mathcal{D}}(\chi_R)$.
\vspace{-0.25cm}
\item[(ii)] It is obvious that $f^{\sharp}_{\mathcal{D}}(x)\le
f^{\sharp}(x)$ for any locally integrable function $f$
at any point $x\in S$. The same counterexample as in (i) shows that
the sharp maximal function and the dyadic sharp maximal function may
be not pointwise equivalent. Indeed, if we take the set
$R\equiv \,Q\times [0, 2r)$ as above, then for all points
$(y_1,\dots,y_n, s)\in S$ such that $y_j<0$ for all $j\in
\{1,\dots,n\}$ and $s<0$, we have
$(\chi_R)^{\sharp}_{\mathcal{D}} (y, s)=0$ and $(\chi_R)^{\sharp} (y, s)>0.$
So there does not exist a positive constant $C$ such that
$(\chi_R)^{\sharp} \le C {(\chi_R)^{\sharp}_\mathcal{D}}$.
\end{itemize}
\end{remark}
We now state a covering lemma for the level sets of ${\mathcal M}_{\mathcal{D}}$,
which is proved in a standard way;
see also \cite[Lemma 1, p.150]{st93}.
\begin{lemma}\label{covering}
Let $f$ be a locally integrable function and $\alpha$ a positive
constant such that
$\Omega_{\alpha}\,\equiv \{x\in S:\, {\mathcal M}_{\mathcal{D}}f(x)>\alpha\}$
has finite measure. Then $\Omega_\alpha$ is a disjoint
union of dyadic sets, $\{R_j\}_j$, with
$\alpha<\frac{1}{\rho(R_j)}\int_{R_j} |f|d\rho\le 2^n\alpha$
for all $j$.
\end{lemma}
\begin{proof}
Since the measure of $\Omega_{\alpha}$ is finite,
for each $x\in\Omega_{\alpha}$
there exists a maximal dyadic set $R_x\in\mathcal{D}$ which contains $x$ such
that $ \alpha<\frac{1}{\rho(R_x)}\int_{R_x} |f|d\rho$. Any two of these
maximal dyadic sets are disjoint. Indeed, {by Theorem \ref{t3.1},}
given two points $x,y\in \Omega_{\alpha}$, either $R_x\cap R_y=\emptyset$
or one is contained in the other; by maximality, this implies
that $R_x=R_y$. We denote by $\{R_j\}_j$ this collection of
dyadic maximal sets. Then it is clear that $\Omega_{\alpha}=\cup_jR_j$.
Moreover, for any $j$, since $R_j$ is maximal, there exists a dyadic
set $\tilde{R}_j\in\mathcal{D}$ which is a parent of $R_j$ and satisfies
$\frac{1}{\rho(\tilde{R}_j)}\int_{\tilde{R}_j} |f|\, d\rho\le \alpha$.
Since $R_j\subset\tilde{R}_j$ and $\rho(\tilde{R}_j)\le 2^n\rho(R_j)$, we obtain
$$\frac{1}{\rho(R_j)}\int_{R_j} |f|\,d\rho \le
2^n\frac{1}{\rho(\tilde{R}_j)}\int_{\tilde{R}_j} |f|\,d\rho\le
2^n{\alpha}.$$
This finishes the proof.
\end{proof}
As a consequence of the previous covering lemma, following closely
the proof of the inequality \cite[(22), p.153]{st93}, we obtain the
following relative distributional inequality. We omit the details.
\begin{proposition}
There exists a positive constant $K$ such that for any locally
integrable function $f$, and for any positive $c$ and $b$ with
$b<1$,
\begin{equation}\label{distr}
\rho\big( \{ x\in S:\,{\mathcal M}_{\mathcal{D}}f(x)>\alpha,\,
f^{\sharp}_{\mathcal{D}}(x)\le c\alpha\}\big) \le K \frac{c}{ 1-b} \,
\rho\big( \{ x\in S:\,{\mathcal M}_{\mathcal{D}}f(x)>b\alpha\}\big)
\end{equation}
for all $\alpha>0$. The constant $K$ only depends on $n$ and on the
norm $\|{\mathcal M}_{\mathcal{D}}\|_{L^1\to L^{1,\infty} }$.
\end{proposition}
By the relative distributional inequality (\ref{distr}) and arguing as
in {\cite[Corollary 1, p.\,154]{st93}}, we obtain the following
Fefferman--Stein type inequality. We also omit the details.
\begin{corollary}\label{FeffStein}
Let $p\in(0,\infty)$. There exists a positive constant $A_p$ such
that, for any locally integrable function $f$ with
$f^{\sharp}_{\mathcal{D}}\in L^p$ and ${\mathcal M}_{\mathcal{D}}f \in L^{p_0}$
for some $p_0\le p$, the function $f$ is in $L^p$ and
$$\| {\mathcal M}_{\mathcal{D}}f \|_{L^p}\le A_p\,\| f^{\sharp}_{\mathcal{D}} \|_{L^p}.$$
\end{corollary}
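For the reader's convenience, we sketch the omitted good-$\lambda$ integration behind Corollary \ref{FeffStein}; this is the standard argument of \cite{st93}, with $c$ chosen so small that $Kcb^{-p}/(1-b)<1$:

```latex
\begin{align*}
\|{\mathcal M}_{\mathcal{D}}f\|_{L^p}^p
&= p\int_0^\infty \alpha^{p-1}\,
   \rho\big(\{{\mathcal M}_{\mathcal{D}}f>\alpha\}\big)\,d\alpha\\
&\le p\int_0^\infty \alpha^{p-1}
   \Big[\rho\big(\{f^{\sharp}_{\mathcal{D}}>c\alpha\}\big)
   + K\,\tfrac{c}{1-b}\,
   \rho\big(\{{\mathcal M}_{\mathcal{D}}f>b\alpha\}\big)\Big]\,d\alpha\\
&= c^{-p}\,\|f^{\sharp}_{\mathcal{D}}\|_{L^p}^p
   + K\,\tfrac{c}{1-b}\,b^{-p}\,\|{\mathcal M}_{\mathcal{D}}f\|_{L^p}^p\,,
\end{align*}
```

after which the last term is absorbed into the left-hand side; the hypothesis ${\mathcal M}_{\mathcal{D}}f\in L^{p_0}$ is what makes the absorption legitimate (one first integrates over $\alpha\in(0,N)$ and then lets $N\to\infty$).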
\begin{remark}
Recall that for any locally integrable function $f$, $|f|\le
{\mathcal M}_{\mathcal{D}}f$ and $f^{\sharp}_{\mathcal{D}}\le f^{\sharp}$. Thus, from
Corollary \ref{FeffStein}, we deduce that if $p\in {(0,\infty)}$,
$f^{\sharp}$ belongs to $L^p$ and $f$ belongs to some $L^{p_0}$
with $p_0\in (0, p]$, then $f$ is in $L^p$ and
\begin{equation}\label{FeffStein2}
\| f \|_{L^p}\le A_p\,\| f^{\sharp} \|_{L^p}\,,
\end{equation}
where $A_p$ is the constant which appears in Corollary
\ref{FeffStein}. This generalizes the classical Fefferman--Stein
inequality {\cite[Theorem 2, p.148]{st93}} to the current setting.
\end{remark}
We shall now introduce a dyadic Hardy space and a dyadic ${\mathop\mathrm{\,BMO}}$
space.
\begin{definition}\label{dyadic-hardy}
The {\it dyadic Hardy space} $H^{1}_{\mathcal{D}}$ is defined
to be the space of all functions $g$ in $ L^1$ which
can be written as $g=\sum_j \lambda_j\, a_j$,
where $\{a_j\}_j$ are $H^1$-atoms supported in dyadic sets and
{$\{\lambda _j\}_j$} are complex numbers such that $\sum _j |\lambda
_j|<\infty$. Denote by $\|g\|_{H^{1}_{\mathcal{D}}}$ the {\it infimum of
$\sum_j|\lambda_j|$ over all such decompositions}.
\end{definition}
\begin{definition}\label{dyadic-bmo}
The space $\mathcal{B}\mathcal{M}\mathcal{O}_{\mathcal{D}}$ is the space of
all locally integrable functions $f$ such that $f^{\sharp}_{\mathcal{D}}\in
L^ {\infty}$. The space ${\mathop\mathrm{\,BMO}}_{\mathcal{D}}$ is the quotient of
$\mathcal{B}\mathcal{M}\mathcal{O}_{\mathcal{D}}$ modulo constant functions.
It is a Banach space endowed with the norm $\|f\|_{{\mathop\mathrm{\,BMO}}_{\mathcal{D}}}
\equiv\|f^{\sharp}_{\mathcal{D}}\|_{L^\infty}$.
\end{definition}
It is easy to follow the proof in \cite[Theorem 3.4]{va1} to show
that the dual of $H^1_{\mathcal{D}}$ is identified with ${\mathop\mathrm{\,BMO}}_{\mathcal{D}}$.
We omit the details.
Obviously, $H^1_{\mathcal{D}}\subset H^1$ and $\|g\|_{H^1}\le
\|g\|_{H^1_{\mathcal{D}}}$ for all $g$ in $H^1_\mathcal{D}$. It is
natural to ask whether the norms $\|\cdot\|_{H^1}$ and
$\|\cdot\|_{H^1_{\mathcal{D}}}$ are equivalent. The analogous problem in the
classical setting was studied by {Abu-Shammala and Torchinsky}
\cite{at}. By following the ideas in \cite{at}, we
obtain the following result.
\begin{theorem}\label{dyadic-hardy and hardy}
The norms $\|\cdot\|_{H^1}$ and $\|\cdot\|_{H^1_{\mathcal{D}}}$ are not
equivalent.
\end{theorem}
\begin{proof}
We give the details of the proof in the case when $n=1$.
By the construction of
$\mathcal{D}$ in Theorem \ref{t3.1}, there exists $[0,2^{\ell_0+1})\times
[0,2{r_0})\in\mathcal{D}_{k_0+1}$ for some $k_0\in{\mathbb Z}$, $\ell_0\in{\mathbb Z}$
and $r_0>0$ such that $R_0\equiv[0,2^{\ell_0})\times
[0,2{r_0})\in \mathcal{D}_{k_0}$ and $E_0\equiv[2^{\ell_0}, 2\cdot
2^{\ell_0})\times [0,2{r_0})\in \mathcal{D}_{k_0}$. Generally, for any
$j<0$, there exist
$R_j=[2^{\ell_0}-2^{\ell_j},2^{\ell_0})\times I_j\in\mathcal{D}_{k_j}$ and
$E_j=[2^{\ell_0},2^{\ell_0}+2^{\ell_j})\times I_j\in\mathcal{D}_{k_j}$ such
that $R_j\cup E_j\in \mathcal{D}_{k_j+1}$, where both $\{k_j\}_{j<0}$ and
$\{\ell_j\}_{j<0}$ are strictly decreasing sequences which tend to
$-\infty$ as $j\to-\infty$, and each $I_j$ is an interval contained in
$[0,\infty)$. Notice that for all $j\le 0$,
$\rho(R_j)=\rho(E_j)=2r_j2^{\ell_j}$ for some $r_j>0$.
Set $a_j \equiv \frac 1{2\rho(R_j)} (\chi_{R_j}-\chi_{E_j})$.
Obviously, each $a_j$
is an $H^1$-atom and $\|a_j\|_{H^1}\le 1$.
Take the function $\phi(x,t)\equiv\chi_{(2^{\ell_0},\infty)}(x)
\log(x-2^{\ell_0})\equiv h(x)$ for all $(x,t)\in S$.
An easy calculation gives that
$$\|\phi\|_{{\mathop\mathrm{\,BMO}}_\mathcal{D}}\le \sup_{\genfrac{}{}{0pt}{}{I\subset{\mathbb R} }
{ I\, \mathrm{is\, a\, dyadic\, interval}} } \frac1{|I|}\int_I
\left|h(x)-\frac1 {|I|}\int_I h(y)\,dy\right|\,dx<\infty.$$ We then have
\begin{equation*}
\begin{aligned}
\|a_j\|_{H^1_{\mathcal{D}}} & = \sup_{\psi\in {\mathop\mathrm{\,BMO}}_{\mathcal{D}}}
\frac 1{\|\psi\|_{{\mathop\mathrm{\,BMO}}_{\mathcal{D}}}}\left|\int_S a_j\,\psi\, d\rho\right|\\
&\ge \frac 1{\|\phi\|_{{\mathop\mathrm{\,BMO}}_{\mathcal{D}}}}\left|\int_S a_j\,\phi\, d\rho\right|\\
&=\frac 1{2\|\phi\|_{{\mathop\mathrm{\,BMO}}_{\mathcal{D}}}}\left|2^{-\ell_j}\int_{2^{\ell_0}}^{
2^{\ell_0}+2^{\ell_j}}\,\log(x-2^{\ell_0})\,dx\right|
=\frac{|1-\log 2^{\ell_j}|}{2\|\phi\|_{{\mathop\mathrm{\,BMO}}_{\mathcal{D}}}} \sim
|\ell_j|\,.
\end{aligned}
\end{equation*}
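The elementary integral in the last display is evaluated by the substitution $u=x-2^{\ell_0}$:

```latex
\begin{equation*}
2^{-\ell_j}\int_{2^{\ell_0}}^{2^{\ell_0}+2^{\ell_j}}
\log\big(x-2^{\ell_0}\big)\,dx
=2^{-\ell_j}\Big[u\log u-u\Big]_{0}^{2^{\ell_j}}
=\log 2^{\ell_j}-1\,,
\end{equation*}
```

so that $|1-\log 2^{\ell_j}|=|1-\ell_j\log 2|$ grows linearly in $|\ell_j|$ as $j\to-\infty$.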
So there exists no positive constant $C$ such that
$\|a_j\|_{H^1_{\mathcal{D}}}\le C\|a_j\|_{H^1}$ for all $j<0$.
\end{proof}
Notice that all the arguments of \cite[Section 5]{va}
can be adapted to the dyadic spaces $H^1_{\mathcal{D}}$ and ${\mathop\mathrm{\,BMO}}_{\mathcal{D}}$
so that all results therein also hold for $H^1_{\mathcal{D}}$ and ${\mathop\mathrm{\,BMO}}_{\mathcal{D}}$. In
particular, one can prove that, though $H_\mathcal{D}^1$ is a proper
subspace of $H^1$, the real interpolation space
$[H^1_{\mathcal{D}},L^2]_{\theta,q}$ is equal to $L^q$, if $\theta\in (0,1)$
and $\frac{1}{q}=1-\frac{\theta}{2}$.
\section{Complex interpolation}
We now formulate an interpolation theorem involving $H^1$ and ${\mathop\mathrm{\,BMO}}$.
In the following, when $A$ and $B$ are Banach spaces and $\theta$ is
in $(0,1)$, we denote by $(A,B)_{[\theta]}$ the complex
interpolation space between $A$ and $B$ with parameter $\theta$,
obtained via Calder\'on's complex interpolation method
(see \cite{Ca, bl}).
\begin{theorem}\label{int1}
Suppose that $\theta$ is in $(0,1)$. Then the following hold:
\begin{itemize}
\vspace{-0.25cm}
\item[(i)] if $\frac{1}{p_{\theta}}= \frac{1-\theta}{2}$,
then $(L^2,{\mathop\mathrm{\,BMO}})_{[\theta]}=L^{p_{\theta}};$
\vspace{-0.25cm}
\item[(ii)] if $\frac{1}{q_{\theta}}=1- \frac{\theta}{2}$,
then $(H^1,L^2)_{[\theta]}=L^{q_{\theta}}$.
\end{itemize}
\end{theorem}
\begin{proof}
The proof of (i) is an easy adaptation of the proof of
{\cite[p.156,\,Corollary 2]{fest}} and of
\cite[Theorem 7.4]{cmm}. We omit the
details.
The proof of (ii) follows from a duality argument
(see \cite[Theorem 7.4]{cmm}). Denote by $X_{\theta}$
the interpolation space $\bigl(H^1, L^2 \bigr)_{[\theta]}$.
Now by the duality theorem \cite[Corollary 4.5.2]{bl}, if
$\frac{1}{q_{\theta}}=1-\frac{\theta}{2}$, then the dual of
$X_{\theta}$ is $\bigl(L^2,{\mathop\mathrm{\,BMO}}\bigr)_{[\theta]}$, which is equal to
$L^{q_\theta'}$ by (i), where $\frac 1{q_\theta}+\frac 1{q_\theta'}=1$.
Furthermore, $X_\theta$ is continuously
included in $L^{q_\theta}$, because $H^1$ is continuously included
in $L^1$ and $\bigl(L^1,L^2\bigr)_{[\theta]} =
L^{q_\theta}$. Since $L^2$ is reflexive, the interpolation space
$X_\theta$ is reflexive (see \cite[Section~4.9]{bl}), so that
$X_{\theta}$ is isomorphic to $X_\theta^{**} =
\bigl(L^{q_\theta'}\bigr)^*=L^{q_\theta}$. This concludes the
proof.
\end{proof}
A consequence of the previous theorem is the following.
\begin{theorem}\label{t5.2}
Denote by $\Sigma$ the closed strip $\{ s\in {\mathbb C}: \Re s\in [0,1] \}
$. Suppose that $\{T_s\}_{s\in\Sigma}$ is a family of uniformly
bounded operators on $L^2$ such that the map $s\to \int_S
T_s(f)g\, d\rho$ is continuous on $\Sigma$ and analytic in the
interior of $\Sigma$, whenever $f,g\in L^2$. Moreover, assume that
there exists a positive constant $A$ such that
$$\|T_{it} f\|_{L^2}\le A \,\|f\|_{L^2}\qquad {\forall\ } f\in L^2,\,
{\forall\ } t\in{\mathbb R}\,,$$
and
$$\|T_{1+it} f\|_{{\mathop\mathrm{\,BMO}}}\le A \,\|f\|_{\infty}\qquad {\forall\ } f\in
L^2\cap L^{\infty},\,{\forall\ } t\in{\mathbb R}\,.$$
Then for every $\theta\in (0,1)$, the operator $T_{\theta}$ is
bounded on $L^{p_{\theta}}$, with
$\frac{1}{p_{\theta}}=\frac{1-\theta}{2}$ and
$$\|T_{\theta} f\|_{L^{p_{\theta}}}\le A_{\theta}
\,\|f\|_{L^{p_{\theta}}}\qquad {\forall \ } f\in L^2\cap
L^{p_{\theta}}\,.$$
Here $A_{\theta}$ depends only on $A$ and $\theta$.
\end{theorem}
\begin{proof}
This follows from Theorem \ref{int1}(i) and \cite[Theorem 1]{cj}.
Alternatively, we may follow the proof of \cite[p.\,175, Theorem 4]{st93}.
We leave the details to the reader.
\end{proof}
\begin{theorem}\label{t5.3}
Denote by $\Sigma$ the closed strip $\{ s\in {\mathbb C}:\ \Re s\in [0,1] \}
$. Suppose that $\{T_s\}_{s\in\Sigma}$ is a family of uniformly
bounded operators on $L^2$ such that the map $s\to \int_S
T_s(f)g\, d\rho$ is continuous on $\Sigma$ and analytic in the
interior of $\Sigma$, whenever $f,g\in L^2$. Moreover, assume that
there exists a positive constant $A$ such that
$$\|T_{it} f\|_{L^1}\le A \,\|f\|_{H^1}\qquad {\forall\ } f\in
L^2\cap H^1,\,{\forall\ } t\in{\mathbb R}\,,$$
and
$$\|T_{1+it} f\|_{{L^2}}\le A \,\|f\|_{{L^2}}\qquad {\forall\ } f\in
L^2,\,{\forall\ } t\in{\mathbb R}\,.$$
Then for every $\theta\in (0,1)$, the operator $T_{\theta}$ is
bounded on $L^{q_{\theta}}$, with
$\frac{1}{q_{\theta}}=1-\frac{\theta}{2}$ and
$$
\|T_{\theta} f\|_{L^{q_{\theta}}}\le A_{\theta}
\,\|f\|_{L^{q_{\theta}}}\qquad {\forall\ } f\in L^2\cap
L^{q_{\theta}}\,.
$$
Here $A_{\theta}$ depends only on $A$ and $\theta$.
\end{theorem}
\begin{proof}
This follows from Theorem \ref{int1}(ii) and \cite[Theorem 1]{cj}.
We omit the details.
\end{proof}
\begin{remark}\label{dyadic-complex-interpolation}\rm
It is easy to see that Theorems \ref{int1}, \ref{t5.2} and \ref{t5.3} still hold if $H^1$
and ${\mathop\mathrm{\,BMO}}$ are replaced by $H_\mathcal{D}^1$ and ${\mathop\mathrm{\,BMO}}_\mathcal{D}$, respectively.
We leave the details to the reader.
\end{remark}
\medskip
{\small\noindent{\bf Acknowledgments}\quad
Maria Vallarino is partially supported by PRIN 2007 ``Analisi Armonica''
and Dachun Yang (the corresponding author) is supported by the National
Natural Science Foundation of China (Grant No.~10871025).
The authors also sincerely thank the referee for her/his very careful
reading and for the many valuable and constructive remarks, which
essentially improved the presentation of this article.}
\section{Introduction}
Post-Newtonian (PN) approximation methods in general relativity
are based on the weak-field limit, in which the metric is close to the
Minkowski metric, and on the assumption that the typical
velocity $v$ in a system divided by the speed of light is
very small. In post-Minkowskian (PM) approximation methods only the weakness
of the gravitational field is assumed but no assumption
about slowness of motion is made. In the PM approximation we obtain\cite{LSB}
the Hamiltonian for gravitationally interacting particles
that includes all terms linear in the gravitational constant $G$.
It thus yields PN approximations to {\it any} order in $1/c$ when terms linear
in $G$ are considered; and it can also describe
particles with ultrarelativistic velocities or with zero rest mass.
We use the canonical formalism of Arnowitt, Deser, and Misner
(ADM) \cite{ADM} where the independent degrees of freedom of the gravitational field are described
by $h_{ij}^{TT}$, the transverse-traceless part of $h_{ij}=g_{ij}-\delta_{ij}$
($h^{TT}_{ii}=0$, $h^{TT}_{ij,j}=0$, $i,j=1,2,3$), and by conjugate
momenta $c^3/(16\pi G) {\pi}^{ij\,TT}$. The field is generated by $N$ particles with rest
masses $m_a$ located at ${\bf x}_a$, $a=1,\dots,N$, and with momenta ${\bf p}_a$.
We start with the Hamiltonian\cite{S86} correct up to $G^2$ found by
the expansion of the Einstein equations (the energy and momentum constraints)
in powers of $G$ and by the use of suitable regularization procedures.
When we consider only terms linear in $G$ and put $c=1$, this Hamiltonian reads
\begin{align}
\label{HlinGS}
H_{\rm lin}=&
\sum_a {\overline{m}}_a - \frac{1}{2}G\sum_{a,b\ne a} \frac{{\overline{m}}_a {\overline{m}}_b }{ r_{ab} }
\left( 1+ \frac{p_a^2}{{\overline{m}}_a^2}+\frac{p_b^2}{{\overline{m}}_b^2}\right)
\\
\nonumber
&+ \frac{1}{4}G\sum_{a,b\ne a} \frac{1}{r_{ab}}\left( 7\, {\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b + ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ab})({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ab}) \right)
-\frac{1}{2}\sum_a \frac{p_{ai}p_{aj}}{{\overline{m}}_a}\,h_{ij}^{TT}({\bf x}={\bf x}_a)
\\\nonumber
\nonumber
&+\frac{1}{16\pi G}\int d^3x~ \left( \frac{1}{4} h_{ij,k}^{TT}\, h_{ij,k}^{TT} +\pi^{ij\,TT} \pi^{ij\,TT}\right)~,
\end{align}
where ${\overline{m}}_a=\left( m_a^2+{\bf p}^2_a \right)^\frac{1}{2}$,
${\bf n}_{ab} r_{ab} = {\bf x}_a-{\bf x}_b$, $|{\bf n}_{ab}|=1$.
The equations of motion for the particles are the standard Hamilton equations; the Hamilton equations for the field read
\begin{equation}
\dot {\pi}^{ij\,TT}~=~-16\pi G~\delta_{kl}^{TT\,ij} \frac{\delta H}{\delta h_{kl}^{TT}}
~,~~~
\dot h_{ij}^{TT}~=~~16\pi G~\delta_{ij}^{TT\,kl} \frac{\delta H}{\delta {\pi}^{kl\,TT}}~;
\end{equation}
here the variational derivatives and the TT-projection operator
$\delta_{kl}^{TT\,ij} = \frac{1}{2}\left( \Delta_{ik}\Delta_{jl}+\Delta_{il}\Delta_{jk}-\Delta_{ij}\Delta_{kl}\right){\Delta^{-2}}$,
$\Delta_{ij} = \delta_{ij}\Delta - \partial_i\,\partial_j$, appear.
These equations imply the equations for the gravitational field
in the first PM approximation to be the wave equations with point-like sources $\sim\delta^{(3)}( {\bf x}-{\bf x}_a)$.
Since both the field and the accelerations $\dot {\bf p}_a$
are proportional to $G$, the changes of the field due to the accelerations of particles are
of the order $O(G^2)$. Thus, in this approximation,
the wave equations can be solved assuming the field to be generated by the unaccelerated motion of the particles,
i.e., the field can be written as a sum of boosted static spherical fields:
\begin{equation}
\label{LiWi4h}
h_{ij}^{TT}({\bf x}) =
\delta_{ij}^{TT\,kl} \sum_b
\frac{4G}{{\overline{m}}_b}
\frac{1}{|{\bf x}-{\bf x}_b|}
\frac{p_{bk}p_{bl}}{\sqrt {1-{\dot {\bf x}_b}^2\sin^2 \theta_b}}~,
\end{equation}
where ${\bf x}-{\bf x}_a={\bf n}_a |{\bf x}-{\bf x}_a|$ and $\cos \theta_a={{\bf n}_a{\hspace{-1.3pt}\cdot\!\,} \dot {\bf x}_a /|\dot {\bf x}_a|}$.
Surprisingly, it is possible to convert the projection $\delta_{ij}^{TT\,kl}$ (which involves solving two Poisson
equations) into an inhomogeneous linear second-order ordinary differential equation and write
\begin{align}
&h_{ij}^{TT}({\bf x}) ~=~ \sum_b
\frac{G}{|{\bf x}-{\bf x}_b|} \frac{1}{{\overline{m}}_b}\frac{1}{y(1+y)^2}
\Big\{
\left[y{\bf p}_b^2-({\bf n}_b{\!\,\cdot\!\,}{\bf p}_b)^2(3y+2)\right]\delta_{ij}
\\\nonumber&
+2\left[
1- \dot {\bf x}_b^2(1 -2\cos^2 \theta_b)\right]{p_{bi}p_{bj}}
+\left[
\left( 2+y\right)({\bf n}_b{\!\,\cdot\!\,}{\bf p}_b)^2
\!-\!\left( 2+{3}y -2\dot {\bf x}_b^2\right){\bf p}_b^2
\right]{n_{bi}n_{bj}}
\\\nonumber&
+2({\bf n}_b{\!\,\cdot\!\,}{\bf p}_b) \left(1-\dot {\bf x}_b^2+2y\right) \left(n_{bi}p_{bj}+p_{bi}n_{bj}\right)
\Big\}
+O({\overline{m}}_b \dot {\bf x}_b-{\bf p}_b)G\!+\!O(G^2)~;
\label{unprojected_h}
\end{align}
here $y = y_b \equiv\sqrt {1-{\dot {\bf x}_b}^2\sin^2 \theta_b}$ and we anticipate $O({\overline{m}}_b \dot {\bf x}_b-{\bf p}_b)\sim G$.
In the next step we use the Routh functional (see, e.g., Ref.\cite{DJS98})
\begin{equation}
R( {\bf x}_a,{\bf p}_a, h_{ij}^{TT}, \dot h_{ij}^{TT} ) =
H - \frac{1}{16\pi G}\int d^3x~ \pi^{TT\,ij}\, \dot h_{ij}^{TT}~,
\end{equation}
which is ``the Hamiltonian for the particles but the Lagrangian for the field.''
Since the functional derivatives of the Routhian vanish if the field equations hold,
the (non-radiative) solution (\ref{LiWi4h}) can be substituted into the Routh functional
without changing the Hamilton equations for the particles.
Using Gauss's law, integration by parts and similar standard steps
(such as dropping total time derivatives, i.e., performing a canonical transformation),
and explicitly substituting for $h_{ij}^{TT}({\bf x}={\bf x}_a)$,
we get the Hamiltonian for an $N$-particle gravitating system
in the PM approximation:
\begin{align}
\label{H1PM}
\nonumber
&H_{\rm lin}=
\sum_a {\overline{m}}_a
+ \frac{1}{4}G\sum_{a,b\ne a} \frac{1}{r_{ab}}\left( 7\, {\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b + ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ab})({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ab}) \right)
- \frac{1}{2}G\sum_{a,b\ne a} \frac{{\overline{m}}_a {\overline{m}}_b }{ r_{ab}}
\\ &
\times\left( 1+ \frac{p_a^2}{ {\overline{m}}_a^2}+\frac{p_b^2}{{\overline{m}}_b^2}\right)
-\frac{1}{4}
G\sum_{a,b\ne a} \frac{1}{r_{ab}}
\frac{({\overline{m}}_a {\overline{m}}_b)^{-1}}{ (y_{ba}+1)^2 y_{ba}}
\Bigg[
2\Big(2
({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b)^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2
\\\nonumber&
\!-\!2 ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba}) ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba}) ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b) {\bf p}_b^2
\!+\!({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 {\bf p}_b^4
\!-\!({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b)^2 {\bf p}_b^2
\Big ) \frac{1}{{\overline{m}}_b^2}
+2 \Big[-\!{\bf p}_a^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2
\\ \nonumber&
+ ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2 +
2 ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba}) ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba}) ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b) +
({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b)^2 - ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 {\bf p}_b^2\Big]
\\ \nonumber&
+
\Big[-3 {\bf p}_a^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2 +({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2
+8 ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba}) ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba}) ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b)
\\ \nonumber&
+ {\bf p}_a^2 {\bf p}_b^2 - 3 ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 {\bf p}_b^2 \Big]y_{ba}
\Bigg]~,~~~~~~~~~~~~~~~~~~~~~~~~~y_{ba} = \frac{1}{{\overline{m}}_b} \sqrt{ m_b^2+ \left ({\bf n}_{ba}{\!\,\cdot\!\,}{\bf p}_b\right)^2}~.
\end{align}
Since the PM approximation can describe ultrarelativistic or zero-rest-mass particles,
we calculated the gravitational scattering of two such particles using the Hamiltonian (\ref{H1PM}).
If the perpendicular separation ${\bf b}$ of the trajectories ($|{\bf b}|$ is the impact parameter)
in the center-of-mass system (${\bf p}_1=-{\bf p}_2\equiv{\bf p}$) is used, so that ${\bf p}{{\!\,\cdot\!\,}}{\bf b}=0$,
we find, after evaluating a few simple integrals, that the exchanged momentum in the
system is given by
\begin{align}
\label{delta_p}
\Delta {\bf p} &= -2\frac{{\bf b}}{{\bf b}^2} \frac{G}{|{\bf p}|}
\frac{{\overline{m}}_1^2 {\overline{m}}_2^2}{{\overline{m}}_1 +{\overline{m}}_2 }
\left[
1+\left(\frac{1}{{\overline{m}}_1^2}+\frac{1}{{\overline{m}}_2^2}+\frac{4}{{\overline{m}}_1{\overline{m}}_2} \right){\bf p}^2
+\frac{{\bf p}^4}{{\overline{m}}_1^2 {\overline{m}}_2^2 }
\right]~.
\end{align}
The quartic term is all that remains from the field part $h^{TT}_{ij}$, in agreement with Westpfahl\cite{W85},
who used a very different approach.
The Hamiltonian (\ref{H1PM}) can also describe a binary system with one massless and one massive particle orbiting around each other. This is not obvious: a second-, fourth- or even sixth-order PN approximation would not be able to describe massless test particles orbiting around a Schwarzschild black hole.
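Formula (\ref{delta_p}) can be checked numerically. The following sketch is our own illustrative code (the function name is not from the source); it works in units $G=c=1$, returns only the magnitude of $\Delta{\bf p}$ (the direction is $-{\bf b}/|{\bf b}|$), and exhibits the massless limit, where ${\overline{m}}_1={\overline{m}}_2=|{\bf p}|$ and the bracket reduces to $1+6+1=8$:

```python
import math

def exchanged_momentum(m1, m2, p, b, G=1.0):
    """|Delta p| from Eq. (delta_p): two-body PM scattering with
    center-of-mass momentum p and impact parameter b (G = c = 1).
    The direction of the exchanged momentum is -b/|b|."""
    e1 = math.hypot(m1, p)   # \bar m_1 = sqrt(m1^2 + p^2)
    e2 = math.hypot(m2, p)
    bracket = (1.0
               + (1.0/e1**2 + 1.0/e2**2 + 4.0/(e1*e2)) * p**2
               + p**4 / (e1**2 * e2**2))
    return 2.0 * G / (b * p) * (e1**2 * e2**2) / (e1 + e2) * bracket

# massless-massless limit: bracket -> 8, so |Delta p| = 8 G p^2 / b;
# nonrelativistic limit (p -> 0): |Delta p| -> 2 G m1^2 m2^2 / (b p (m1+m2)),
# the familiar Newtonian small-angle result
```

In the nonrelativistic limit the bracket tends to $1$ and the prefactor reproduces the Newtonian small-angle deflection, which serves as a consistency check of the closed-form expression.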
\bigskip
We acknowledge support from SFB/TR7 in Jena,
from Grant GA\v CR 202/09/0772 of the Czech Republic, and from Grants No.~LC~06014 and MSM~0021620860
of the Ministry of Education.
\section{Introduction}
\label{sec:introduction}
The dwarf galaxies within our Local Group (LG) have been the
subject of intensive spectroscopic and photometric observations in
different wavelength regimes and thus are well studied
objects. Their study has been facilitated by the proximity of
these dwarfs so that individual stars may even be resolved down to
the main-sequence turn-off, depending on their distance. Thus,
extending such studies to dwarf galaxies in nearby groups with
different environments and comparing their properties is of great
importance in order to understand the main drivers of their
evolution. In addition, the derived properties can help
constrain models of galaxy formation as well as chemical
evolutionary models.
In this respect, the \object{M\,81 group} is an interesting target: despite
several differences, it bears close resemblance to our LG. The
similarity of the M\,81 group to our LG lies in its binary
structure (Karachentsev et al.~\cite{sl_m81distances}), while its
difference is mainly due to the recent interactions between its
dominant consituents as revealed by the formation of tidal tails
and bridges detected in HI observations (Appleton, Davies \&
Stephenson \cite{sl_appleton}; Yun, Ho \& Lo \cite{sl_yun}). With
a mean distance of $\sim$3.7~Mpc (Karachentsev et
al.~\cite{sl_m81distances}), the M\,81 group is one of the nearest
groups to our own LG. It consists of about 40 dwarfs of both
early-type and late-type, with the addition of 12 recently
discovered dwarf candidates (Chiboucas, Karachentsev \& Tully
\cite{sl_chiboucas}).
Here we focus on the dwarf spheroidal galaxies (dSphs) in the
M\,81 group with available Hubble Space Telescope
(HST)\,/\,Advanced Camera for Surveys (ACS) archival data. The
dSphs are gas-poor objects of low surface brightness.
For a summary of their properties we refer to Grebel,
Gallagher \& Harbeck (\cite{sl_grebel}; and references
therein). We use their color-magnitude diagrams (CMDs) to derive
the photometric metallicity distribution functions (MDFs) and
search for the potential presence of population gradients in the
M\,81 group dSphs.
The use of the CMD to infer the star formation histories and MDFs
is a very powerful tool. For nearby groups at distances, where
individual red giants are beyond the reach of spectroscopy even
with 8-10~m class telescopes, CMDs are the best means to derive
evolutionary histories. With the use of HST observations of
adequate depth, the upper part of the red giant branch (RGB) can
be resolved into single stars. Many studies have derived the
photometric MDFs of distant LG dSphs (for example Cetus by
Sarajedini et al.~\cite{sl_sarajedini}; And\,VI and And\,VII by
Grebel \& Guhathakurta \cite{sl_grebel99}) from their CMDs. A
similar work to derive the photometric MDFs for dwarf galaxies in
nearby groups has not been done so far.
The search for radial population gradients in LG dwarf galaxies
has been favoured by the fact that the resolved stellar
populations reach the horizontal branch or extend even below the
main-sequence turn-off depending on the distance of the dwarf,
permitting one to use a variety of different stellar
tracers. Several studies have searched for population gradients in
LG dwarfs; as examples we refer to the photometric work of
Hurley-Keller, Mateo \& Grebel (\cite{sl_hurley-keller99}),
Harbeck et al.~(\cite{sl_harbeck}) and Battaglia et
al.~(\cite{sl_battaglia06}), and the spectroscopic work of Tolstoy
et al.~(\cite{sl_tolstoy}) and Koch et al.~(\cite{sl_koch06}).
No study so far has searched for population gradients in dwarf
galaxies of nearby groups.
This paper is structured as follows. In \S2 we present the
observations, in \S3 we show our results, in \S4 we discuss our
main findings and in \S5 we present our conclusions.
\section{Observations}
\label{sec:observations}
We use HST\,/\,ACS archival data that were retrieved
\begin{table*}
\begin{minipage}[t]{\textwidth}
\caption{Log of Observations.}
\label{table1}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{l c c c c c }
\hline\hline
Galaxy &RA~(J2\,000.0) &Dec~(J2\,000.0) &Program~ID~/~PI &ACS\,/\,WFC Filters &Exposure time \\
& & & & &(s) \\
(1) &(2) &(3) &(4) &(5) &(6) \\
\hline
KDG 61 &09~57~03.10 &$+$68~35~31.0 &GO~9\,884~/~Armandroff &F606W\,/\,F814W &8\,600~/~9\,000 \\
KDG 64 &10~07~01.90 &$+$67~49~39.0 &... &... &... \\
DDO 71 &10~05~06.40 &$+$66~33~32.0 &... &... &... \\
F12D1 &09~50~10.50 &$+$67~30~24.0 &... &... &... \\
F6D1 &09~45~10.00 &$+$68~45~54.0 &... &... &... \\
\hline
HS\,117 &10~21~25.20 &$+$71~06~51.0 &SNAP~9\,771~/~Karachentsev &F606W\,/\,F814W &1\,200~/~900 \\
IKN &10~08~05.90 &$+$68~23~57.0 &... &... &... \\
\hline
DDO\,78 &10~26~28.00 &$+$67~39~35.0 &GO~10\,915~/~Dalcanton &F475W\,/\,F814W &2\,274~/~2\,292 \\
DDO\,44 &07~34~11.50 &$+$66~52~47.0 &... &... &2\,361~/~2\,430 \\
\hline
\end{tabular}
\footnotetext{Note.-- Units of right ascension are hours,
minutes, and seconds, and units of declination
are degrees, arcminutes and arcseconds.}%
\end{minipage}
\end{table*}
through the Multimission Archive at STScI (MAST). The details of
the datasets used are listed in Table~\ref{table1}, where the
columns show: (1) the galaxy name, (2) and (3) equatorial
coordinates of the field centers (J2000.0), (4) the number of the
Program ID and the PI, (5) the ACS\,/\,WFC filters used, and (6)
the total exposure time for each filter.
The data reduction was carried out using Dolphot, a modified
version of the HSTphot photometry package (Dolphin
\cite{sl_dolphin2000}) developed specifically for ACS point source
photometry. The reduction steps followed are the ones described in
the ACS module of the Dolphot manual. In the Dolphot output
photometric catalogue, only objects with $S/N >$~5 and ``type''
equal to 1, which means ``good stars'', were allowed to enter the
final photometric catalogue. The ``type'' is a Dolphot parameter
that is used to distinguish objects that are classified as ``good
stars'', ``elongated objects'', ``too sharp objects'' and so
on. After this first selection, quality cuts were applied so as to
further clean the photometric catalogue. These cuts were based on
the distributions of the sharpness and crowding parameters, as
suggested in the Dolphot manual and also in Williams et
al.~(\cite{sl_angst}). Guided by these distributions, we use for
the sharpness parameter the restriction
$|sharpness_{filter}+sharpness_{F814W}|<\alpha$, where
$1.0<\alpha<1.5$ depending on the dSph, and for the crowding
parameter the requirement
($Crowding_{filter}~+~Crowding_{F814W})<$~1.0, where ``filter''
corresponds to either the F606W or the F475W filter. These
selections were made so as to ensure that only stellar objects
have entered our final photometric catalogue. The number of stars
recovered after applying all the photometric selection criteria
are listed in Table~\ref{table2}, column (3).
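In practice, these selections amount to boolean masks over the photometric catalogue. The following minimal sketch is our own illustration — the column names are our shorthand, not Dolphot's actual output format:

```python
import numpy as np

def apply_quality_cuts(cat, alpha=1.2):
    """Select 'good stars' from a photometric catalogue.

    cat: dict of numpy arrays with (hypothetical) keys 'snr', 'type',
    'sharp_f', 'sharp_814', 'crowd_f', 'crowd_814', where 'f' denotes
    the bluer filter (F606W or F475W).
    alpha: sharpness threshold, chosen per galaxy in 1.0-1.5.
    """
    good = (cat['snr'] > 5) & (cat['type'] == 1)
    good &= np.abs(cat['sharp_f'] + cat['sharp_814']) < alpha
    good &= (cat['crowd_f'] + cat['crowd_814']) < 1.0
    return {k: v[good] for k, v in cat.items()}
```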
The photometry obtained with Dolphot provides magnitudes in both
the ACS\,/\,WFC and the Landolt UBVRI photometric systems using
the transformations provided by Sirianni et
al.~(\cite{sl_sirianni}) for the UBVRI system. In the analysis
presented throughout this work, we chose to use the ACS\,/\,WFC
filter system. Therefore, if we use data from the literature
computed in the UBVRI photometric system, we transform them to the
ACS\,/\,WFC system. There are two cases where this is
necessary. The first case is the extinction. The galactic
foreground extinction in the V-band and I-band, $A_{I}$ and
$A_{V}$, taken from Schlegel, Finkbeiner \& Davis
(\cite{sl_schlegel}) through NED, are transformed into the
ACS\,/\,WFC system. For the transformation, we use the
corresponding extinction ratios $A(P)\,/\,E(B-V)$ for a G2 star,
where $A(P)$ corresponds to the extinctions in the filters F814W
and F606W (or F475W), which are provided by Sirianni et
al.~(\cite{sl_sirianni}; their Table 14). We note that the
assumption of a largely color-independent reddening for the RGB is
justified since theoretical models indicate that the expected
effect of the change of color across the RGB amounts to at most
0.01 in $E(V-I)$ for our data (see Grebel \& Roberts
\cite{sl_grebel95}). We multiply these extinction ratios with the
$E(B-V)$, in order to finally get the extinctions in the ACS
filters. The transformed values, $A_{F814W}$ and $A_{F606W}$ (or
$A_{F475W}$), are listed in Table~\ref{table2}, columns (6) and
(7) respectively.
The second case is the I-band tip of the RGB (TRGB), which we
transform to the F814W-band TRGB in the following way. As already
mentioned, Dolphot provides the magnitudes both in the
instrumental ACS\,/\,WFC system and in the transformed
UBVRI. Thus, in the range of magnitudes near the I-band TRGB, we
compute the difference in magnitudes between the I-band and
F814W-band. This difference is 0.01~mag for all the dSphs except
for DDO\,44 and DDO\,78, where the difference is $-$0.015~mag. The
F814W-band TRGB is then equal to the sum of this difference and
the I-band TRGB. We further confirm this approach of
estimating the F814W-band TRGB by applying a Sobel-filtering
technique (Lee, Freedman \& Madore \cite{sl_lee}; Sakai, Madore
\& Freedman \cite{sl_sakai}) to the luminosity functions of some
of the dSphs in order to locate the F814W-band
TRGB. We find that these approaches give values that are in good
agreement, with a mean difference between them of the order of
0.05~mag.
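The Sobel-filtering check can be sketched as follows: the 1-D kernel $[-1, 0, 1]$ responds to the sharp rise in star counts at the tip of the RGB. The binned luminosity function below is invented for illustration, not our data.

```python
# Sketch of 1-D Sobel edge detection on a binned luminosity function;
# the bin values below are illustrative only.
def sobel_edge(mags, counts):
    """Return the magnitude bin where the Sobel response
    counts[i+1] - counts[i-1] peaks (the candidate TRGB)."""
    responses = [counts[i + 1] - counts[i - 1]
                 for i in range(1, len(counts) - 1)]
    peak = max(range(len(responses)), key=lambda i: responses[i])
    return mags[peak + 1]

mags   = [23.5, 23.6, 23.7, 23.8, 23.9, 24.0, 24.1]
counts = [2, 3, 4, 5, 30, 60, 70]   # sharp rise in counts at the TRGB
print(sobel_edge(mags, counts))     # 23.9
```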
\begin{table*}
\begin{minipage}[t]{\textwidth}
\caption[]{Global Properties (see text for references).}
\label{table2}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{l c c c c c c c c c}
\hline\hline
Galaxy &Type &$N_{*}$ &$M_V$ &$I_{TRGB}$ &$A_{F814W}$ &$A_{F606W}$\footnote{or $A_{F475W}$ in the case of DDO\,44 and DDO\,78}
&$(m-M)_{O}$ &$R$ &$r_{eff}$ \\
& & &(mag) &(mag) &(mag) &(mag) &(mag) &(kpc) &$(\prime\prime)$ \\
(1) &(2) &(3) &(4) &(5) &(6) &(7) &(8) &(9) &(10) \\
\hline
KDG\,61 &dIrr~/~dSph &$53\,543$ &$-13.87$ &$23.86\pm0.15$ &$0.131$ &$0.202$ &$27.78\pm0.15$ &$44$ &$48$ \\
KDG\,64 &dIrr~/~dSph &$38\,012$ &$-13.43$ &$23.90\pm0.15$ &$0.099$ &$0.152$ &$27.84\pm0.15$ &$126$ &$28$ \\
DDO\,71 &dIrr~/~dSph &$37\,291$ &$-13.22$ &$23.83\pm0.15$ &$0.173$ &$0.267$ &$27.72\pm0.15$ &$211$ &$59$ \\
F12D1 &dSph &$39\,519$ &$-12.84$ &$23.95\pm0.15$ &$0.263$ &$0.404$ &$27.71\pm0.15$ &$181$ &$31$ \\
DDO\,78 &dSph &$21\,073$ &$-12.83$ &$23.85\pm0.15$ &$0.040$ &$0.079$ &$27.85\pm0.15$ &$223$ &$38$ \\
DDO\,44 &dSph &$19\,357$ &$-12.56$ &$23.55\pm0.15$ &$0.075$ &$0.149$ &$27.52\pm0.15$ &$901$ &$28$ \\
IKN &dSph &$14\,600$ &$-11.51$ &$23.94\pm0.15$ &$0.111$ &$0.171$ &$27.87\pm0.18$ &$110$ &... \\
F6D1 &dSph &$14\,260$ &$-11.46$ &$23.77\pm0.14$ &$0.144$ &$0.222$ &$27.66\pm0.17$ &$218$ &$32$ \\
HS\,117 &dIrr~/~dSph &$4\,596$ &$-11.31$ &$24.16\pm0.15$ &$0.210$ &$0.323$ &$27.99\pm0.18$ &$204$ &$29$ \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
We list the global properties of the present dSph sample in
Table~\ref{table2}. The columns contain the following information:
(1) the galaxy name, (2) the galaxy type, (3) the number of stars
detected after applying all the photometric selection criteria,
(4) the visual absolute magnitude $M_{V}$ of each galaxy adopted
from Karachentsev et al.~(\cite{sl_wfpc2data},
\cite{sl_f6d1trgb}), Alonso-Garcia, Mateo \& Aparicio
(\cite{sl_garcia}), Georgiev et al.~(\cite{sl_georgiev}), (5) the
I-band TRGB adopted from Karachentsev et al.~(\cite{sl_acsdata},
\cite{sl_f6d1trgb}, \cite{sl_wfpc2data}, \cite{sl_ddo44trgb}), (6)
and (7) the foreground extinction derived by us for the
ACS\,/\,WFC filters F814W, F606W and F475W, as described in
Sec.~\ref{sec:observations}, (8) the true distance moduli adopted
from Karachentsev et al.~(\cite{sl_acsdata}, \cite{sl_f6d1trgb},
\cite{sl_wfpc2data}, \cite{sl_ddo44trgb}), (9) the deprojected
distance of the dSphs from the M\,81 galaxy, R, adopted from
Karachentsev et al.~(\cite{sl_m81distances}), (10) the effective
radius, $r_{eff}$ adopted from Sharina et al.~(\cite{sl_sharina})
and Karachentseva et al.~(\cite{sl_karreff}). The dSphs in
Table~\ref{table2} are sorted according to their $M_{V}$ value.
Finally, the pixel scale of the ACS\,/\,WFC is 0.05\arcsec with a
field of view of 202\arcsec\,$\times$\,202\arcsec or
4\,096\,$\times$\,4\,096~pixels. Thus for the mean distance of
$\sim$3.7~Mpc of the M\,81 group (Karachentsev et
al.~\cite{sl_m81distances}) this field of view corresponds to
3.6~kpc\,$\times$\,3.6~kpc, or simply 1~pixel corresponds to
roughly 1~pc.
\section{Results}
\label{sec:results}
\subsection{Color-Magnitude Diagrams}
\label{sec:cmds}
\begin{figure*}
\centering
\includegraphics[width=16cm,clip]{13364fg1a.eps}
\includegraphics[width=16cm,clip]{13364fg1b.eps}
\caption{Color-magnitude diagrams for the nine dSphs. The
horizontal dashed lines show the location of the
TRGB. The crosses on the left hand side correspond to
the photometric errors as derived from artificial star
tests. The boxes enclose the stars for which we derive
the photometric metallicities.}
\label{sl_figure1}%
\end{figure*}
We show the CMDs of the nine dSphs in Fig.~\ref{sl_figure1}; note
that the x-axis ranges differ from panel to panel. The proximity of the M\,81
group and the depth of the observations allow us to resolve the
upper part of the RGB into individual stars. The most prominent
feature seen in our CMDs is the RGB. We note the presence of stars
above the TRGB, which is indicated in Fig.~\ref{sl_figure1} with a
dashed line. These stars are most likely luminous AGB stars, which
indicate the presence of stellar populations in an age range from
1~Gyr up to less than 10~Gyr. In addition, some of the dSphs
appear to have bluer stars that probably belong to a younger main
sequence. The dwarfs in our sample are classified as dSphs
(Karachentsev et al.~\cite{sl_cng}), with the exception of
KDG\,61, KDG\,64, DDO\,71, and HS\,117 which are classified as
transition-types (dIrr\,/\,dSph) as they have detectable HI
content (Huchtmeier \& Skillman \cite{sl_huch}; Boyce et
al.~\cite{sl_hiblind}) or H$\alpha$ emission (Karachentsev \&
Kaisin \cite{sl_m81halpha}; Karachentsev et
al.~\cite{sl_acsdata}).
\begin{figure*}
\centering
\includegraphics[width=17cm,clip]{13364fg2.eps}
\caption{Metallicity distribution functions for the nine dSphs
sorted by their absolute $M_V$ magnitude, from top to
bottom and from left to right. The solid lines show
the metallicity distribution convolved with the errors
in metallicity. The dashed lines show the fitted
gaussian distributions. The error bars in the upper
right corner, or upper left in the case of DDO\,44,
show the 1~$\sigma$ spread for the weighted mean
metallicity we derive from our data. Note the
different scaling of the individual y-axes.
}
\label{sl_figure2}%
\end{figure*}
\subsection{Photometric Metallicity Distribution Functions}
\label{sec:mdfs}
We show the photometric MDFs for the nine dSphs in
Fig.~\ref{sl_figure2}. These are constructed using linear
interpolation between Dartmouth isochrones (Dotter et
al.~\cite{sl_dartmouth}) with a fixed age of 12.5~Gyr. We use
Dartmouth isochrones, since they give the best simultaneous fit to
the full stellar distribution within a CMD as demonstrated by
e.g. Glatt et al.~(\cite{sl_glatta}, \cite{sl_glattb}). We chose
the fixed age of 12.5~Gyr since the RGB in these dSphs may be
assumed to consist of mainly old stars in an age range of about
10~Gyr to 13~Gyr. The assumption of an old isochrone is also
justified by the omnipresence of old stellar populations in all of
the LG dSphs (Grebel \cite{sl_grebel01}; Grebel \& Gallagher
\cite{sl_grebelgallagher}) and by the comparatively small number
of luminous AGB stars above the TRGB.
The choice of 12.5~Gyr is an assumption we are making in order to
estimate the MDFs for each dwarf, while the choice of another age
in the above range would not considerably affect our
results. Indeed, the colors of the stars on the RGB are mostly
affected by metallicity differences rather than age differences
(see for example Caldwell et al.~\cite{sl_caldwell}; Harris,
Harris \& Poole \cite{sl_harris99} (their Fig.~6); Frayn \&
Gilmore \cite{sl_frayn} (their Fig.~2)). Thus the observed spread
in the RGB color is likely caused by a metallicity spread rather
than an age spread, justifying our choice of a constant-age
isochrone.
The isochrone metallicities we use range from $-$2.50~dex to
$-$0.50~dex in steps of 0.05~dex, a step size chosen to be
smaller than the metallicity uncertainty implied by the photometric
errors. Representative photometric error bars are indicated with
crosses in Fig.~\ref{sl_figure1}. To account for the influence of
crowding and the photometric quality, we conducted artificial star
tests. For that purpose, we used the utilities provided and as
described in Dolphot.
In the absence of spectroscopic information, this method of
deriving the MDFs is in general fairly accurate, as discussed in
Frayn \& Gilmore (\cite{sl_frayn}). In practice, we interpolate
between the two closest isochrones bracketing the color of a star,
in order to find the metallicity of that star.
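A minimal sketch of this interpolation step follows, with a toy isochrone grid standing in for the real Dartmouth colors.

```python
# Linear interpolation in [Fe/H] between the two isochrones bracketing
# a star's color; the color grid below is a toy example, not Dartmouth data.
def interpolate_feh(star_color, iso_colors, iso_fehs):
    """iso_colors: isochrone colors at the star's magnitude, increasing
    with metallicity; iso_fehs: the matching [Fe/H] grid (dex)."""
    for i in range(len(iso_colors) - 1):
        c0, c1 = iso_colors[i], iso_colors[i + 1]
        if c0 <= star_color <= c1:
            frac = (star_color - c0) / (c1 - c0)
            return iso_fehs[i] + frac * (iso_fehs[i + 1] - iso_fehs[i])
    return None  # bluer or redder than the grid: excluded from the MDF

colors = [0.90, 1.00, 1.10, 1.20]   # toy colors at a fixed magnitude
fehs   = [-2.50, -2.00, -1.50, -1.00]
print(interpolate_feh(1.05, colors, fehs))  # halfway between -2.0 and -1.5
```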
\begin{figure}
\centering
\includegraphics[width=7.7cm,clip]{13364fg3.eps}
\caption{Mean error in metallicity versus the F814W-band
magnitude. The circles indicate the mean error in
metallicity in each magnitude bin, while the error
bars indicate the standard deviation of the errors.}
\label{sl_figure3}%
\end{figure}
We only select stars within a box plausibly containing stars on
the upper RGB to construct the galaxies' MDF. The bright magnitude
limit of the box is chosen to exclude the stars brighter than the
TRGB which belong mainly to the luminous AGB phase. The faint
magnitude limit of the box is chosen to fulfil the requirement
that the formal error in the derived [Fe/H] is less than 0.15~dex,
or 0.2~dex in the case of IKN and HS\,117 when the photometric
errors are taken into account. We employ a different selection
criterion for IKN and HS\,117 in order to retain a sample of
significant size, comparable to those of the remaining
dSphs. The selection criterion based on the
metallicity formal error depends on the depth of the
observations. In our data sample we distinguish three categories,
from now on referred to as depth categories. The first depth
category contains \object{KDG\,61}, \object{KDG\,64},
\object{DDO\,71}, F6D1 and F12D1. The second depth category
contains \object{DDO\,44} and \object{DDO\,78}. The third depth
category contains \object{IKN} and HS\,117. Each depth category
contains those dSphs that belong to the same Program~ID and thus
have the same filters and roughly the same exposure times, as
listed in Table~\ref{table1}.
In order to estimate the faint magnitude limit for each dSph's RGB
box as a function of the error in [Fe/H], we proceed as
follows. We extend the faint limit of the bounding box to a
magnitude limit of 26 in F814W for all the dSphs. We compute the
[Fe/H] for all the stars within each dSph's box, as well as the
corresponding errors in metallicity. We show the derived mean
errors in metallicity versus the F814W-band magnitude in
Fig.~\ref{sl_figure3} for KDG\,61, DDO\,44, IKN and HS\,117, which
are chosen here as representative examples of the three depth
categories. In the case of IKN and HS\,117, which belong to the
third depth category, we show the corresponding plots for both
since the requirement of 0.20~dex leads to slightly different
faint limits of the RGB box. Based on these plots and on the
metallicity requirements, we choose 25 and 24.5~mag as faint limit
for the first and second depth category, while in the case of IKN
and HS\,117 we choose 24.4 and 24.6~mag, respectively. The choice
of these limits corresponds to an error in color of less than
0.02~mag for the first depth category and less than 0.07~mag for
the remaining two depth categories. We note that the difference in
the error in color is due to the different exposure times for each
dSph data set, which are listed in column (6) of
Table~\ref{table1}. The RGB boxes used for each dSph are drawn in
Fig.~\ref{sl_figure1}.
\begin{figure}
\centering
\includegraphics[width=8.5cm,clip]{13364fg4.eps}
\caption{\textit{Left panel}: Color-magnitude diagram for
KDG\,64 zoomed in the RGB part to show the stars
selected in the box, shown with the dashed line, for
which we compute their photometric metallicities. In
the same figure we overplot with solid lines a subset
of the isochrones used for the interpolation
method, with metallicities ranging from
$-$2.50~dex to $-$0.80~dex.
\textit{Right panel}: The error in metallicity versus
the [Fe/H] as derived with the use of Monte Carlo
simulations. The circles indicate the mean metallicity in
each metallicity bin, while the error bars indicate
the standard deviation of the mean.}
\label{sl_figure4}%
\end{figure}
In Fig.~\ref{sl_figure4}, left panel, we plot the RGB box used in
the case of the dwarf KDG\,64 as well as a subset of the
isochrones used for the interpolation method, here ranging from
$-$2.50~dex to $-$0.80~dex. The step remains 0.05~dex. The grid of
theoretical isochrones we use is fine enough that the spacing
between adjacent isochrones remains small. We correct the magnitudes
and colors of the theoretical isochrones for foreground Galactic
extinction and for the distance modulus of each dSph. The
$A_{F814W}$ and $A_{F606W}$ (or $A_{F475W}$ in the case of DDO\,44
and DDO\,78) that we calculate and the true distance moduli are
listed in Table~\ref{table2}, columns (6) and (7) for the
extinction and (8) for the distance moduli. The I-band TRGB values
shown in column (5) of the same table were used to compute the
F814W-band TRGB values, as explained already in
Sec.~\ref{sec:observations}.
We note in Fig.~\ref{sl_figure4} the presence of stars within the
RGB box bluewards of the most metal-poor isochrone available from
the Dartmouth set of isochrones. These stars are not used to
construct the MDF. The existence of such stars may indicate the
presence of more metal-poor RGB stars or of old AGB stars, with
ages typically greater or equal to 10~Gyr. Such old AGB stars that
have the same luminosity as RGB stars were also noted, for
example, by Harris, Harris \& Poole (\cite{sl_harris99}) while
constructing the MDF for stars in a halo field of the giant
elliptical NGC\,5128. It is expected that at most 22~\% of the
stars in the RGB selection box, and within 1~mag below the TRGB,
are actually old AGB stars (Durrell, Harris \& Pritchet
\cite{sl_durrell01}; Martinez-Delgado \& Aparicio
\cite{sl_martinez97}; and references therein). In order to
quantify the effect of the presence of such stars, we construct
the MDF of KDG\,61 using Padova isochrones (Girardi et
al.~\cite{sl_girardi08}; Marigo et al.~\cite{sl_marigo08}), which
also include the AGB phase. We run the interpolation code once
using isochrones that only include the RGB phase and once with
isochrones that only include the AGB phase, for a constant age of
12.5~Gyr and a range in metallicities from [Fe/H]\,$=\,-$2.36~dex
(or Z\,$=$\,0.0001) to [Fe/H]\,$=\,-$0.54~dex (or Z\,$=$\,0.006),
with a step of 0.1~dex in [Fe/H]. The derived mean values differ
only by 0.04~dex in [Fe/H] with a mean of
$\langle$[Fe/H]$\rangle\,=\,-$1.24~dex for the MDF constructed
using isochrones that include only the RGB phase. The MDF of the
stars that were fit using isochrones that include only the AGB
phase becomes more metal-rich. Furthermore, the derived 1~$\sigma$
spreads in [Fe/H] have comparable values of 0.26~dex when we
include only the RGB phase and of 0.27~dex when we include only
the AGB phase. In addition, if we randomly assign 22~\% of the
stars within the RGB box with metallicities as derived using only
the AGB phase, while the remaining 78~\% of the stars with
metallicities as derived using only the RGB phase, then the
resulting MDF has a mean of $\langle$[Fe/H]$\rangle\,=\,-$1.24~dex
with 1~$\sigma$ spread in [Fe/H] of 0.29~dex. This mean
metallicity is comparable to the one we compute when we use only
the RGB phase to derive the metallicities. The shape of the MDFs
in all these cases does not change significantly. Thus, we can
safely conclude that the presence of these contaminating old AGB
stars within the RGB box does not affect the derived MDFs'
properties significantly.
\begin{table}
\begin{minipage}[t]{\columnwidth}
\caption[]{Derived Properties. }
\label{table3}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{l c c c c}
\hline\hline
Galaxy &$\langle$[Fe/H]$\rangle\pm\sigma$ &$\langle$[Fe/H]$\rangle_{w}\pm\sigma$ &$K-S$\footnote{The probabilities indicate whether the populations under conside\-ration are from the same distribution.}
&$f_{AGB}$ \\
&(dex) &(dex) &(\%) & \\
(1) &(2) &(3) &(4) &(5) \\
\hline
KDG\,61 &$-1.65\pm0.28$ &$-1.49\pm0.26$ &$16$ &$0.07$ \\
KDG\,64 &$-1.72\pm0.30$ &$-1.57\pm0.23$ &$12$ &$0.09$ \\
DDO\,71 &$-1.64\pm0.29$ &$-1.56\pm0.24$ &$0$ &$0.09$ \\
F12D1 &$-1.56\pm0.27$ &$-1.43\pm0.34$ &$8$ &$0.07$ \\
DDO\,78 &$-1.51\pm0.35$ &$-1.36\pm0.20$ &$0$ &$0.09$ \\
DDO\,44 &$-1.77\pm0.29$ &$-1.67\pm0.19$ &$0$ &$0.11$ \\
IKN &$-1.38\pm0.37$ &$-1.08\pm0.16$ &... &$0.08$ \\
F6D1 &$-1.63\pm0.30$ &$-1.48\pm0.43$ &$0.036$ &$0.03$ \\
HS\,117 &$-1.65\pm0.32$ &$-1.41\pm0.20$ &$55$ &$0.14$ \\
\hline
\end{tabular}
\end{minipage}
\end{table}
In Fig.~\ref{sl_figure2} we overplot the metallicity distribution
convolved with the errors in metallicity (solid line). Also shown
in Fig.~\ref{sl_figure2} (dashed lines) are fits of Gaussian
distributions with the observed mean and dispersion. For each dSph
we compute the mean metallicity, $\langle$[Fe/H]$\rangle$, as well
as the error-weighted mean metallicity,
$\langle$[Fe/H]$\rangle_w$, along with the corresponding intrinsic
1~$\sigma$ dispersions. We show them in Table~\ref{table3},
columns (2) and (3), while the error bars in Fig.~\ref{sl_figure2}
indicate the 1~$\sigma$ dispersion of the error-weighted mean
metallicities. The errors in metallicity are computed from a set
of Monte Carlo simulations, in which each star is varied by its
photometric uncertainties (both in color and magnitude, as given
by the Dolphot output) and re-fit using the identical isochrone
interpolation as described above. The 1~$\sigma$ scatter of the
output random realizations was then adopted as the metallicity
error for each star. In the right panel of Fig.~\ref{sl_figure4}
we show the errors in metallicity computed as described above
versus the metallicities derived for all the stars within the RGB
box, here for KDG\,64 as an example. The error in metallicity
increases towards the metal-poor part, which is due to the spacing
between the isochrones that becomes narrower towards the
metal-poor part.
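The Monte Carlo error estimate and the error-weighted mean can be sketched as follows; the linear color-to-[Fe/H] mapping and all numbers are placeholders standing in for the real isochrone fit.

```python
import random

# Sketch of the per-star Monte Carlo metallicity error and the
# error-weighted mean [Fe/H]; `fit` is a stand-in for the isochrone fit.
def mc_feh_error(color, color_err, fit, n=500, seed=42):
    """Scatter the color by its photometric error, refit [Fe/H] each
    time, and return the 1-sigma scatter of the realizations."""
    rng = random.Random(seed)
    fehs = [fit(rng.gauss(color, color_err)) for _ in range(n)]
    mean = sum(fehs) / n
    return (sum((f - mean) ** 2 for f in fehs) / n) ** 0.5

def weighted_mean(fehs, errs):
    """Error-weighted mean with inverse-variance weights."""
    w = [1.0 / e ** 2 for e in errs]
    return sum(wi * f for wi, f in zip(w, fehs)) / sum(w)

toy_fit = lambda c: -2.0 + 5.0 * (c - 1.0)   # toy linear color -> [Fe/H]
print(mc_feh_error(1.05, 0.02, toy_fit))     # close to 5 * 0.02 = 0.1 dex
print(weighted_mean([-1.5, -1.0], [0.1, 0.3]))
```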
\begin{figure}
\centering
\includegraphics[width=7.8cm,clip]{13364fg5a.eps}
\includegraphics[width=7.8cm,clip]{13364fg5b.eps}
\includegraphics[width=7.8cm,clip]{13364fg5c.eps}
\includegraphics[width=7.8cm,clip]{13364fg5d.eps}
\includegraphics[width=7.8cm,clip]{13364fg5e.eps}
\includegraphics[width=7.8cm,clip]{13364fg5f.eps}
\includegraphics[width=7.8cm,clip]{13364fg5g.eps}
\includegraphics[width=7.8cm,clip]{13364fg5h.eps}
\includegraphics[width=7.8cm,clip]{13364fg5i.eps}
\caption{Metallicity distribution functions for the nine dSphs
computed in the same way using isochrones with two
different ages: \textit{left panels} for
a constant age of 10.5~Gyr and \textit{middle
panels} for a constant age of
8.5~Gyr. The solid lines show the metallicity
distribution convolved with the errors in
metallicity. The error bars correspond to the
1~$\sigma$ spread for the weighted mean metallicity we
derive from our data.
\textit{Right panels:} Star-by-star difference of the
derived metallicities, $\Delta$[Fe/H], using the
10.5~Gyr isochrones minus the 12.5~Gyr
isochrones versus the [Fe/H] of the 10.5~Gyr
isochrones, indicated with the open circles. The
star-by-star differences of the 8.5~Gyr isochrones
minus the 12.5~Gyr isochrones are indicated with the
dots.
}
\label{sl_figure5}%
\end{figure}
\begin{table}
\begin{minipage}[t]{\columnwidth}
\caption[]{Error-weighted mean metallicities for the 10.5~Gyr and 8.5~Gyr isochrones MDFs.}
\label{table4}
\centering
\begin{tabular}{l c c }
\hline\hline
Galaxy &$\langle$[Fe/H]$\rangle_{w,10.5}\pm\sigma$ &$\langle$[Fe/H]$\rangle_{w,8.5}\pm\sigma$ \\
&(dex) &(dex) \\
(1) &(2) &(3) \\
\hline
KDG\,61 &$-1.37\pm0.27$ &$-1.32\pm0.27$ \\
KDG\,64 &$-1.45\pm0.25$ &$-1.39\pm0.26$ \\
DDO\,71 &$-1.41\pm0.25$ &$-1.34\pm0.27$ \\
F12D1 &$-1.31\pm0.35$ &$-1.25\pm0.36$ \\
DDO\,78 &$-1.28\pm0.21$ &$-1.24\pm0.22$ \\
DDO\,44 &$-1.60\pm0.20$ &$-1.53\pm0.20$ \\
IKN &$-1.03\pm0.17$ &$-0.98\pm0.17$ \\
F6D1 &$-1.37\pm0.48$ &$-1.31\pm0.50$ \\
HS\,117 &$-1.31\pm0.21$ &$-1.27\pm0.22$ \\
\hline
\end{tabular}
\end{minipage}
\end{table}
In order to further quantify the effect of the assumption of the
constant age on the MDFs, we apply again the same analysis using
two different constant ages for the isochrones in the
interpolation method. The first constant age for the isochrones is
10.5~Gyr, while the second constant age is 8.5~Gyr. We repeat the
isochrone interpolation with the bounding boxes and the
metallicity ranges being kept the same in all cases. We show the
results for the MDFs of all the dSphs in Fig.~\ref{sl_figure5},
where the MDFs for the 10.5~Gyr isochrones are shown in the left
panels and the ones for the 8.5~Gyr isochrones in the middle
panels. The derived error-weighted mean metallicities
$\langle$[Fe/H]$\rangle_{w,10.5}$ and
$\langle$[Fe/H]$\rangle_{w,8.5}$, for the 10.5~Gyr and 8.5~Gyr
isochrones, respectively, are shown in Table~\ref{table4}, along
with their corresponding dispersions.
In addition, the star-by-star difference in [Fe/H] as derived
using the 10.5~Gyr and 8.5~Gyr isochrones is shown in
Fig.~\ref{sl_figure5}, right panels. The maximum difference in the
derived [Fe/H] using the 10.5~Gyr isochrones minus the isochrones
with a constant age of 12.5~Gyr is less than 0.20~dex in all the
cases, while the maximum difference in the derived [Fe/H] using
the 8.5~Gyr isochrones minus the isochrones with a constant age of
12.5~Gyr is less than 0.40~dex in all the cases. Finally, the
overall shape of the MDFs as derived for the 10.5~Gyr and the
8.5~Gyr isochrones does not change significantly.
\subsection{Population gradients}
\label{sec:gradients}
In order to examine the presence or absence of population
gradients in our dSph sample, we construct the cumulative
histograms of the stars in each dSph selected in two metallicity
ranges, defined as above and below the respective
$\langle$[Fe/H]$\rangle_{w}$. Since the dSphs can, to first order,
be considered elliptical in projection, we define in the
following the elliptical radius $r$ as
\begin{equation}
r = \sqrt {x^2 + \frac{y^2} {(1-\epsilon)^2}},
\end{equation}
where $x$ and $y$ are the distances along the major and minor axes,
and $\epsilon$ is the ellipticity. The major and minor axes are
\begin{figure*}
\centering
\includegraphics[width=6cm,clip]{13364fg6a.eps}
\includegraphics[width=6cm,clip]{13364fg6b.eps}
\includegraphics[width=6cm,clip]{13364fg6c.eps}
\includegraphics[width=6cm,clip]{13364fg6d.eps}
\includegraphics[width=6cm,clip]{13364fg6e.eps}
\includegraphics[width=6cm,clip]{13364fg6f.eps}
\includegraphics[width=6cm,clip]{13364fg6g.eps}
\includegraphics[width=6cm,clip]{13364fg6h.eps}
\includegraphics[width=6cm,clip]{13364fg6i.eps}
\caption{Contour plots, shown in red, for the nine dSphs which
are overlaid on top of density maps. The elliptical
shape chosen for each dSph, with the exception of IKN,
is shown with the white ellipse. The star symbols in
some plots correspond to bright foreground stars,
while in the case of KDG\,64 it corresponds to a
background galaxy. The diamond symbol in the case of
F6D1 corresponds to an extended background galaxy. The
unit of the colorbars is number of stars per
(50~pixels)$^2$.}
\label{sl_figure6}%
\end{figure*}
computed by fitting an ellipse to contours of the number counts of
all stars above the 1~$\sigma$ level. This ellipse is shown in
white in Fig.~\ref{sl_figure6} and represents the elliptical shape
that was chosen for each dSph. In the same Fig.~\ref{sl_figure6}
we show the contours above the 0.5~$\sigma$ to 2~$\sigma$ level,
which are overlaid on top of density maps. In the study of
population gradients we exclude the IKN dSph since the field of
view does not cover the whole extent of the galaxy and is
furthermore contaminated by a bright foreground star. In addition, we do
not show the elliptical shape for IKN.
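For reference, the elliptical radius defined in the equation above reduces to a one-line function:

```python
# Elliptical radius from the equation above: x and y are the offsets
# along the major and minor axes, eps the ellipticity.
def elliptical_radius(x, y, eps):
    return (x ** 2 + y ** 2 / (1.0 - eps) ** 2) ** 0.5

print(elliptical_radius(3.0, 4.0, 0.0))  # eps = 0: ordinary radius, 5.0
print(elliptical_radius(0.0, 2.0, 0.5))  # eps = 0.5: minor offset doubled, 4.0
```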
\begin{figure*}
\centering
\includegraphics[width=6cm,clip]{13364fg7a.eps}
\includegraphics[width=6cm,clip]{13364fg7b.eps}
\includegraphics[width=6cm,clip]{13364fg7c.eps}
\includegraphics[width=6cm,clip]{13364fg7d.eps}
\includegraphics[width=6cm,clip]{13364fg7e.eps}
\includegraphics[width=6cm,clip]{13364fg7f.eps}
\caption{In each panel from left to right and from top to bottom
we show the radial metallicity distributions (top
panels), their cumulative distributions (middle
panels), and the radial mean metallicity profile
(bottom panels). The cumulative histogram is for stars
selected in metallicity above and below the weighted
mean value $\langle$[Fe/H]$\rangle_{w}$ listed in
Table~\ref{table3}.}
\label{sl_figure7}%
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=6cm,clip]{13364fg8a.eps}
\includegraphics[width=6cm,clip]{13364fg8b.eps}
\caption{Same as in Fig.~\ref{sl_figure7} for the remaining two
dSphs. Note that IKN is excluded from this analysis.}
\label{sl_figure8}%
\end{figure*}
We show the cumulative metallicity distributions in
Fig.~\ref{sl_figure7} and Fig.~\ref{sl_figure8} (middle
panels). We show in the same figures the radial metallicity
distributions (upper panels) and the mean radial metallicity
profiles (lower panels). Each radial metallicity profile shows the
mean metallicity within an elliptical annulus versus the
elliptical radius of that annulus in units of the effective radius $r_{eff}$. The
values for $r_{eff}$ are listed in column (10) of
Table~\ref{table2}. The error bars in the metallicity profile
correspond to the standard deviation of the mean metallicity in
each elliptical radius annulus.
\subsection{Density Maps}
\label{sec:maps}
\begin{figure*}
\centering
\includegraphics[width=6cm,clip]{13364fg9a.eps}
\includegraphics[width=6cm,clip]{13364fg9b.eps}
\includegraphics[width=6cm,clip]{13364fg9c.eps}
\includegraphics[width=6cm,clip]{13364fg9d.eps}
\includegraphics[width=6cm,clip]{13364fg9e.eps}
\includegraphics[width=6cm,clip]{13364fg9f.eps}
\includegraphics[width=6cm,clip]{13364fg9g.eps}
\includegraphics[width=6cm,clip]{13364fg9h.eps}
\includegraphics[width=6cm,clip]{13364fg9i.eps}
\includegraphics[width=6cm,clip]{13364fg9j.eps}
\includegraphics[width=6cm,clip]{13364fg9k.eps}
\includegraphics[width=6cm,clip]{13364fg9l.eps}
\caption{From left to right and from top to bottom we show the
density maps for each dSph, after smoothing with a
Gaussian kernel. In each upper and middle panel, the
``metal-rich'' population corresponds to stars having
[Fe/H]$\,\ge\,-$1.30~dex while ``metal-poor'' refers to
[Fe/H]$\,\le\,-$1.80~dex. The bottom panel corresponds to the
density map of the luminous AGB stars, as defined in the
text. The unit of the colorbars is number of stars per
(100 pixels)$^2$.}
\label{sl_figure9}%
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=6cm,clip]{13364fg10a.eps}
\includegraphics[width=6cm,clip]{13364fg10b.eps}
\includegraphics[width=6cm,clip]{13364fg10c.eps}
\includegraphics[width=6cm,clip]{13364fg10d.eps}
\includegraphics[width=6cm,clip]{13364fg10e.eps}
\includegraphics[width=6cm,clip]{13364fg10f.eps}
\caption{The same as in Fig.~\ref{sl_figure9} but for the
remaining three dSphs. Note that the zero density
region in the center of IKN is an artifact due to a
bright star contaminating the field of view.}
\label{sl_figure10}%
\end{figure*}
We now examine the spatial distribution of the stellar populations,
separated into a metal-poor and metal-rich component. For that
purpose, we define two stellar populations, the first includes
stars having a metallicity less than or equal to the value of
$-$1.80~dex (''metal-poor''), while the second includes stars with
a metallicity larger than or equal to $-$1.30~dex
(''metal-rich''). All galaxies have peak values that lie well in
between those cuts so that the metal-poor and the metal-rich tails
are representatively sampled for all dSphs. For these two
populations we construct the gaussian-smoothed density maps, shown
in Fig.~\ref{sl_figure9} and Fig.~\ref{sl_figure10}, upper and
middle panels for each dSph.
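The construction of these maps can be sketched as a 2-D histogram convolved with a Gaussian kernel; the grid size, kernel width, and positions below are arbitrary illustrative choices, not the values used for the figures.

```python
import math

# Toy Gaussian-smoothed density map: bin positions on a grid, then
# convolve with a truncated, normalized Gaussian kernel.
def density_map(xs, ys, nbins=8, sigma=1.0, half=2):
    grid = [[0.0] * nbins for _ in range(nbins)]
    for x, y in zip(xs, ys):                 # positions assumed in [0, 1)
        grid[int(y * nbins)][int(x * nbins)] += 1.0
    kernel = [[math.exp(-(i * i + j * j) / (2.0 * sigma ** 2))
               for j in range(-half, half + 1)]
              for i in range(-half, half + 1)]
    norm = sum(map(sum, kernel))
    out = [[0.0] * nbins for _ in range(nbins)]
    for r in range(nbins):
        for c in range(nbins):
            for i in range(-half, half + 1):
                for j in range(-half, half + 1):
                    rr, cc = r + i, c + j
                    if 0 <= rr < nbins and 0 <= cc < nbins:
                        out[r][c] += grid[rr][cc] * kernel[i + half][j + half] / norm
    return out

smoothed = density_map([0.50, 0.52, 0.48], [0.50, 0.50, 0.51])
print(max(map(max, smoothed)) > 0.0)   # True: peak near the star positions
```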
\subsection{Luminous AGB stars}
\label{sec:lumagb}
As noted above, all the dSphs in our sample contain a
significant number of luminous AGB star candidates. These stars
are located above the TRGB and have ages ranging from 1~Gyr up to
less than 10~Gyr. We broadly refer to stellar populations in this
age range as ``intermediate-age'' populations. Assuming that the
metallicities of dwarf galaxies increase with time as star
formation continues, we may also assume that these
intermediate-age populations are more metal-rich than the old
populations. We note, however, that dwarf galaxies do not
necessarily experience smooth metal enrichment as a function of
time (see, e.g., Koch et al.~\cite{sl_koch07a};
\cite{sl_koch07b}).
In Fig.~\ref{sl_figure9} and Fig.~\ref{sl_figure10}, the density
maps in the lower panels show the spatial distribution of these
intermediate-age stars for each dSph. We consider as luminous AGB
stars the stars that are brighter by 0.15~mag than the TRGB
(Armandroff et al.~\cite{sl_armandroff}) and that lie within 1~mag
above ($I_{TRGB}-0.15$)~mag. In addition, we consider stars within
the color range of $a\,<\,(V-I)_0\,<\,a+2.50$~(mag), where the
left-hand limit $a$ is equal to the color of the TRGB of the most
metal-poor isochrone we use, dereddened for each dSph using the
extinction values listed in columns (6) and (7) of
Table~\ref{table2}. This selection criterion was motivated by the
work of Brewer, Richer \& Crabtree (\cite{sl_brewer95}) and Reid
\& Mould (\cite{sl_reid84}).
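In code, these magnitude and color cuts amount to the following window; the TRGB magnitude and the blue limit $a$ are placeholder values, not the measured ones.

```python
# Sketch of the luminous-AGB selection window; trgb and a are
# placeholder values, not the measured ones.
def is_luminous_agb(f814w, color, trgb=23.9, a=0.8):
    bright = trgb - 0.15                      # 0.15 mag brighter than the TRGB
    in_mag = (bright - 1.0) < f814w < bright  # within 1 mag above that limit
    in_color = a < color < a + 2.50           # 2.5 mag wide color window
    return in_mag and in_color

print(is_luminous_agb(23.2, 1.5))  # True: inside both windows
print(is_luminous_agb(24.1, 1.5))  # False: fainter than the TRGB
```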
\section{Discussion}
\label{sec:discussion}
\subsection{Photometric Metallicity Distribution Functions}
\label{sec:dmdfs}
The photometric MDFs in Fig.~\ref{sl_figure2} indicate that these
dSphs cover a wide range in metallicity. All of them seem to have
a steep cut-off at their metal-rich end. This is easily seen
if we compare the MDFs to the fitted gaussian distributions,
indicated by the dashed lines in Fig.~\ref{sl_figure2}. We do not
expect the MDFs to follow a gaussian distribution since they are
shaped by the star formation histories of each dSph. For instance,
a steep cut-off of the MDF toward the metal-rich tail could be
indicative of the occurrence of strong and continuous galactic
winds (Lanfranchi \& Matteucci \cite{sl_lanfranchi},
\cite{sl_lanfranchi07}; Koch et al.~\cite{sl_koch06}) or of the
effects of extended star formation and SNe Ia ``pockets'' of
localized, inhomogeneous enrichment (Marcolini et al.~\cite{sl_marcolini08}).
The low mean metallicities, $\langle$[Fe/H]$\rangle$, that are
derived from the distribution functions and are shown in
Table~\ref{table3}, columns (2) and (3), indicate that the M\,81
dSphs are metal-poor systems, which points to a low star formation
efficiency in analogy with the LG dSphs (e.g., Lanfranchi \&
Matteucci \cite{sl_lanfranchi}; Grebel, Gallagher \& Harbeck
\cite{sl_grebel}; Koch \cite{sl_koch09}).
One exception is the dSph IKN, which shows a high mean metallicity
for its luminosity. Objects that have high metallicity for their
luminosity and that, most importantly, are dark matter free are
promising candidates for tidal dwarf galaxies. Their properties
are set by their formation mechanism. They are believed to form
out of the dark matter free material that was expelled during the
tidal interaction of the parent galaxies (Bournaud et
al.~\cite{sl_bournaud}; and references therein). One possible
tidal dwarf galaxy candidate, Holmberg\,IX, has been identified
in the M\,81 group (Makarova et al.~\cite{sl_makarova};
Sabbi et al.~\cite{sl_sabbi}); such objects are most likely to be
found in recently interacting groups like this one. These systems
contain a young stellar component, while their older stellar
populations are believed to consist of stars from their parent
galaxies, which is M\,81 in the case of Holmberg\,IX. There is no
information available about the presence or absence of dark matter
in this system.
In the case of IKN, we should consider the fact that its stellar
metallicity bears the imprint of the medium that formed these old
stars, while young stars are not observed in this dwarf. That
makes it distinct from young tidal dwarf candidates like
Holmberg\,IX. A connection with the recent interactions of the
M\,81 group is not obvious. IKN might be an old tidal dwarf galaxy
if such systems exist. Alternatively, IKN may have undergone
substantial mass loss in the past, leaving it as a low-luminosity
but comparatively high-metallicity dSph. Without data on its
detailed structure and kinematics, we cannot distinguish between
these possibilities.
The metallicity spreads of the studied dSphs are large, spanning
1~$\sigma$ ranges from 0.27~dex to 0.37~dex, or intrinsic,
error-weighted 1~$\sigma$ ranges from 0.16~dex to 0.43~dex. These
abundance spreads are comparable to the ones observed in the LG
dSphs (Grebel, Gallagher \& Harbeck \cite{sl_grebel}; Koch
\cite{sl_koch09}) and may indicate the presence of multiple
stellar populations and\,/\,or extended star formation
histories. According to the models of Marcolini et
al.~(\cite{sl_marcolini08}) and Ikuta \& Arimoto
(\cite{sl_ikuta02}), the initial star formation in dSphs may have
lasted as long as 3~Gyr or even longer, which can lead to a large
dispersion in [Fe/H]. For ages older than 10~Gyr, the shape of the
MDF does not depend strongly on age as described in
Sec.~\ref{sec:mdfs} and shown in Fig.~\ref{sl_figure5}.
\subsection{Luminosity-Metallicity Relation}
\label{sec:dmetlum}
\begin{figure}
\centering
\includegraphics[width=5.5cm,clip]{13364fg11a.eps}
\includegraphics[width=5.5cm,clip]{13364fg11b.eps}
\includegraphics[width=5.5cm,clip]{13364fg11c.eps}
\caption{\textit{Upper}: Luminosity-metallicity relation for LG
dwarf galaxies, after Grebel, Gallagher \& Harbeck\
(\cite{sl_grebel}), together with the 13 dSphs of the M\,81
group. LG dSphs are plotted with blue asterisks, LG dIrrs
are shown as green crosses, and red dots indicate the
available M\,81 data. Nine out of the thirteen M\,81 group
dSphs have been studied here, while the remaining four,
marked with a red circled dot, have been adopted from
the literature, as discussed in the text. The red
squared dots and green squared crosses indicate the
transition-types of the M\,81 group and LG, respectively.
\textit{Middle}: Mean metallicities versus the
deprojected distance from M\,81, R, for the 13 M\,81 group
dSphs. The circled dots correspond to the four dSphs
for which the metallicities were adopted from the
literature, as discussed in the text.
\textit{Lower}: Fraction of AGB stars versus the RGB
stars within 1~mag below the TRGB, $f_{AGB}$, versus
the deprojected distance from M\,81, R, for the nine M\,81 group
dSphs studied here. With the circled dot we show the
corresponding fraction $f_{AGB}$ in the case of F6D1.}
\label{sl_figure11}%
\end{figure}
In Fig.~\ref{sl_figure11}, upper panel, we show the
luminosity-metallicity relation compiled for the dwarf galaxies in
our LG as studied by Grebel, Gallagher \& Harbeck\
(\cite{sl_grebel}), with the addition of thirteen dSphs in the
M\,81 group. This compilation includes LG objects with mean
metallicities derived from either spectroscopic or photometric
studies. The mean metallicities for the nine M\,81 group dSphs
listed in Table~\ref{table3}, column (3), are from this work,
while the metallicities for the remaining four dSphs are adopted
from the literature and are computed using the mean $(V-I)_{0}$
color of the RGB stars at the luminosity of $M_{I}=-$3.5~mag (from
Caldwell et al.~(\cite{sl_caldwell}) for BK5N and F8D1 and from
Sharina et al.~(\cite{sl_sharina}) for KKH\,57 and BK6N).
Overall, the M\,81 group dwarfs follow the luminosity-metallicity
relation quite well, although some of them tend to be slightly
more metal-poor than LG dSphs of comparable luminosity. Therefore,
they mainly populate the region defined by
the LG dSphs while a few are located in the border region between
the dSph and dIrr locus defined by the LG dwarfs. The M\,81 group
dSphs that seem to lie in this apparent transition region are
KDG\,61, KDG\,64, DDO\,44 and DDO\,71. Out of these four objects,
three are classified as transition-types, namely, KDG\,61, KDG\,64
and DDO\,71 (Karachentsev \& Kaisin \cite{sl_m81halpha}; Boyce et
al.~\cite{sl_hiblind}) based on HI detections and $H\alpha$
emission. Also among the dwarfs that coincide with the LG dSph
locus, one dwarf is classified as transition-type, namely HS\,117,
with HI associated with it (Huchtmeier \& Skillman \cite{sl_huch};
Karachentsev et al.~\cite{sl_acsdata}).
Transition-type dwarfs are galaxies that share properties of both
morphological types. Their stellar populations and star formation
histories resemble those of dSphs and their gas content and
present-day star formation activity are akin to those of low-mass dIrrs. It
has been suggested that transition-type dwarfs are indeed evolving
from dIrrs to gas-deficient dSphs.
The projected spatial distribution of the dSphs within the M\,81
group is shown in Fig. 1 of Karachentsev et
al.~(\cite{sl_m81distances}), while their three-dimensional view
is shown in their Fig.~6. In Fig.~\ref{sl_figure11}, middle panel,
we plot the deprojected distances from the M\,81 galaxy, R, versus
the mean metallicities for the 13 M\,81 group dSphs. The
deprojected distances from the M\,81 galaxy, R, are adopted from
Karachentsev et al.~(\cite{sl_m81distances}) and are listed in
Table~\ref{table2}, column (9). The most distant dSph is DDO\,44,
which belongs to the NGC\,2403 subgroup. Interestingly, KDG\,61,
which is classified as a transition-type based on HI detections,
is the dSph closest to M\,81 itself, with a deprojected distance
of 44~kpc (Karachentsev et al.~\cite{sl_m81distances}). The remaining
three dwarfs classified as transition-types, namely KDG\,64,
DDO\,71 and HS\,117, lie at deprojected distances of more than
100~kpc. As discussed above, the most metal-rich dSph in our
sample is IKN, although, according to the value that Caldwell et
al.~(\cite{sl_caldwell}) provide for F8D1, the latter is the most
metal-rich dSph studied in this group so far. We see no trend of
the mean metallicity with the deprojected distance.
\subsection{Population Gradients}
\label{sec:dgradients}
By examining the cumulative metallicity distributions in the
middle panels of Fig.~\ref{sl_figure7} and Fig.~\ref{sl_figure8},
we conclude that metallicity gradients are present in DDO\,71,
DDO\,78, DDO\,44 and F6D1, while they are less pronounced or
absent in the remaining dSphs. We again separate the RGB stars in each dSph into two
samples, where we choose to separate the distributions at the
observed weighted mean metallicity. The probabilities from the
two-sided Kolmogorov-Smirnov (K-S) test that the two components
are drawn from the same parent distribution are listed in
Table~\ref{table3}, column (4). The K-S results are consistent
with showing spatially separated populations in the case of
DDO\,71, DDO\,78, DDO\,44 and F6D1. In the case of the remaining
dSphs, the gradients are less pronounced and the metal-rich and
metal-poor populations have different distributions at the 84 --
99.7~\% confidence level ($>$1.5~$\sigma$), except for HS\,117. In
all cases in which we observe such a metallicity segregation, the
sense is that the more metal-rich stars are more centrally
concentrated, as also found in the majority of the LG dSphs
(Harbeck et al.~\cite{sl_harbeck}; Tolstoy et
al.~\cite{sl_tolstoy}; Ibata et al.~\cite{sl_ibata}).
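The two-sided K-S comparison used above can be sketched as follows. This is a minimal illustration with hypothetical [Fe/H] samples, not the measured catalogues:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Hypothetical [Fe/H] samples for an inner (metal-rich) and an outer
# (metal-poor) RGB subsample, split at the weighted mean metallicity.
feh_inner = rng.normal(-1.3, 0.3, 200)
feh_outer = rng.normal(-1.6, 0.3, 200)

# Two-sided K-S test: a small p-value means the two radial subsamples
# are unlikely to be drawn from the same parent distribution,
# i.e. evidence for a metallicity gradient.
stat, p_value = ks_2samp(feh_inner, feh_outer)
print(f"D = {stat:.3f}, p = {p_value:.3g}")
```

A small p-value rejects the hypothesis that the two subsamples share a parent distribution, which is the criterion tabulated in Table~\ref{table3}, column (4).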
\subsection{Density Maps}
\label{sec:dmaps}
The density maps in Fig.~\ref{sl_figure9} and
Fig.~\ref{sl_figure10} are useful to study the spatial
distribution of the metal-rich (upper panels) and metal-poor
(middle panels) population of each dSph. From these density maps
we conclude that each dSph has a distinct spatial
distribution of its metal-rich and metal-poor stellar
components. All of them show either a spatial variation of the
centroids of the two stellar populations, as is the case of F12D1
and KDG\,64, or that the metal-rich population is more centrally
concentrated, as is the case of DDO\,71, DDO\,44 and F6D1. These
findings agree well with the ones from the cumulative histograms,
though we should keep in mind that the metal-poor and metal-rich
stellar populations involved are differently selected. DDO\,78 and
IKN are clearly fairly metal-rich while KDG\,61, KDG\,64 and
DDO\,44 have prominent metal-poor populations.
\subsection{Luminous AGB stars}
\label{sec:dlumagb}
Luminous AGB stars were also detected in two more dSphs in the M\,81
group, namely BK5N and F8D1 (Caldwell et
al.~\cite{sl_caldwell}). These luminous AGB stars may include
carbon stars and may be long-period variables (LPVs) as have been
found in other early-type dwarf galaxies (e.g., Menzies et
al.~\cite{sl_menzies02}; Rejkuba et al.~\cite{sl_rejkuba06};
Whitelock et al.~\cite{sl_whitelock09}), but we cannot establish
this for certain based on our current data. We compute the
fraction of the luminous AGB stars, $f_{AGB}$, defined as the
number of the luminous AGB stars, $N_{AGB}$, counted within the
magnitude bin considered in Sec.~\ref{sec:lumagb}, over the number
of the RGB stars within one magnitude below the TRGB,
$N_{RGB}$. In order to estimate the $N_{RGB}$, we take into
account that approximately 22~\% of the stars we count within one
magnitude below the TRGB are old AGB stars (Durrell, Harris \&
Pritchet \cite{sl_durrell01}). Thus, the $N_{RGB}$ is equal to
78~\% of the stars we count within one mag below the TRGB. The
fractions $f_{AGB}$ we derive in this way are listed in
Table~\ref{table3}, column (5).
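The definition of $f_{AGB}$ above amounts to the following arithmetic. The star counts in the example are hypothetical; the 22~\% old-AGB correction is the one adopted in the text:

```python
# Sketch of the luminous-AGB fraction defined in the text. Roughly 22%
# of the stars within 1 mag below the TRGB are taken to be old AGB
# stars, so N_RGB is 78% of that count (the Durrell et al. 2001
# correction used in the text).
def agb_fraction(n_lum_agb, n_below_trgb, old_agb_frac=0.22):
    n_rgb = (1.0 - old_agb_frac) * n_below_trgb
    return n_lum_agb / n_rgb

# Hypothetical example: 30 luminous AGB stars and 1500 stars counted
# within 1 mag below the TRGB.
print(f"f_AGB = {agb_fraction(30, 1500):.3f}")
```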
\subsubsection{Blends, blue straggler progeny and foreground contamination}
We now discuss the possibility that these luminous AGB stars may
actually be (1) blends of bright RGB stars (Renzini
\cite{sl_renzini98}), (2) blue straggler progeny (Guarnieri,
Renzini \& Ortolani \cite{sl_guarnieri97}; and references
therein), or (3) foreground contamination.
In order to evaluate case (1), we use our artificial star
experiments to quantify the number of blends of bright RGB
stars, $N_{blends}$, that may contribute to the detected number
of observed luminous AGB stars. We only
consider here the case of a blend of two equal-magnitude
stars. The magnitude of the blended star is always $0.75$~mag
brighter than the initial magnitude of the two superimposed
stars.
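The 0.75~mag offset quoted above follows from the doubling of the flux when two equal-magnitude stars blend into one source; a minimal numerical check (illustrative, not part of the original analysis):

```python
import math

# Two equal-flux stars blended into a single source double the flux,
# so the blend is 2.5*log10(2) ~ 0.75 mag brighter than either
# component star.
def blend_magnitude(m):
    return m - 2.5 * math.log10(2.0)

delta = 2.5 * math.log10(2.0)
print(f"blend is {delta:.3f} mag brighter")  # 0.753
```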
We want to determine the location in the CMD of all the RGB stars
that can end up as blends within the location in the CMD of the
luminous AGB stars, as defined in Sec.~\ref{sec:lumagb} and
hereafter called the luminous AGB box. For that purpose, if we assume
that the stars within the luminous AGB box were all blends, then
they would originate from stars that have magnitudes 0.75~mag
fainter. Thus, we shift the luminous AGB box by 0.75~mag towards
fainter magnitudes and, furthermore, we only consider the stars
with magnitudes fainter than the TRGB. This procedure defines the
location of the RGB stars that can end up as blends within the
luminous AGB box and we call this the ``RGB blends box''.
From the stellar catalogue we use as an input for the artificial
star experiments, we select the stars that have such input
magnitudes to place them within the ``RGB blends box''. From these
input stars, we consider as blends the ones that have output
magnitudes that can place them above the TRGB. We normalize the
number of these blends to the number of the total input stars that
are located within the ``RGB blends box''. Finally, the number of
the observed blends is proportional to the number of the observed
RGB stars located within the same ``RGB blends box''. Thus, the
$N_{blends}$ for all the dSphs computed this way is equal to
5~blends, 11~blends, 12~blends and 2~blends in the case of
DDO\,44, DDO\,78, IKN and HS\,117, respectively, while in all the
other cases the number of blends is less than 1. Thus, the
fraction of blends, defined as the number of blends divided by
the number of RGB stars within 1\,mag below the TRGB, is less
than 0.6~\% in all cases except for IKN, where it equals 0.9~\%.
In case (2), Guarnieri, Renzini \& Ortolani
(\cite{sl_guarnieri97}) point out that the number of blue
straggler progeny is of the order of $\sim$2~\% of all stars that
reach the luminous AGB phase.
In case (3), we estimate the foreground contamination using
the TRILEGAL code (Vanhollebeke, Groenewegen \& Girardi
\cite{sl_trilegal1}; Girardi et al.~\cite{sl_trilegal2}). We count
the number of foreground stars that fall within the luminous AGB
box in the CMD and take this as the expected number of foreground
contaminants. In all the
cases, the number of foreground stars is 4, with the exception of
DDO\,71 and DDO\,44 where the number of foreground stars is 3 and
5, respectively. This translates to a fraction of foreground
stars, defined as the number of foreground stars divided by the
number of RGB stars within 1~mag below the TRGB, of less than
0.7~\%, with the exception of F6D1 where this fraction is equal to
2~\%.
We do not consider the case of old AGB LPVs, whose large amplitude
variations may place them above the TRGB (Caldwell et
al.~\cite{sl_caldwell}), as an additional source of contamination
in the luminous AGB box, since the studied dSphs have mean
metallicities of less than $-$1~dex. Such old AGB LPVs, with ages
greater than or equal to 10~Gyr, have been observed above the TRGB
in metal-rich ([Fe/H]\,$> -$1~dex) globular clusters (e.g.,
Guarnieri, Renzini \& Ortolani \cite{sl_guarnieri97}; and
references therein).
We can now add all the contributions estimated in the above three
cases, for each dSph. We call the sum of these three contributions
the number of total contaminants, $N_{cont,tot}$, and compute
their fraction $f_{cont,tot}\,=\,N_{cont,tot}\,/\,N_{RGB}$. The
fraction of the total contaminants is less than 1~\% in all cases
apart from IKN, HS\,117 and F6D1, where the fraction of the total
contaminants is approximately 1.1~\%, 1.3~\% and 2.1~\%,
respectively. We note that in all cases, there is a significant
fraction of luminous AGB stars that cannot be accounted for by
considering the contribution of blends, binaries and foreground
contamination. The dSph F6D1 is an exception to that, where the
$f_{cont,tot}$ is $\sim$\,2~\%, as compared to the $f_{AGB}$ which
is $\sim$\,3~\%. We note though that in the case of F6D1 the
number of stars counted in the luminous AGB box is equal to 6
stars. Thus, we conclude that the luminous AGB stars are a genuine
population for the majority of the dSphs studied here.
\subsubsection{Luminous AGB density maps and fractions}
From the density maps of the luminous AGB stars shown in
Fig.~\ref{sl_figure9} and Fig.~\ref{sl_figure10}, lower panels,
we see that if we consider the peak densities or the bulk of the
luminous AGB stars, then it seems that these are more confined to
the central regions of the dwarfs, a behaviour similar to what is
found for the Fornax dSph (Stetson, Hesser \& Smecker-Hane
\cite{sl_stetson}). If we consider the overall distribution then
we note that for most of the dSphs these stars are rather more
widely distributed, following the distribution of the metal-poor
stars, with the exception of KDG\,64 and DDO\,71 where their AGB
stars' distributions coincide mostly with the metal-rich
population, which in the case of DDO\,71 is centrally
concentrated. We conclude that the intermediate-age stellar
component is well-mixed with the old stellar component.
This behaviour is similar to the AGB stars' spatial distribution
of the LG dwarfs. Indeed, Battinelli \& Demers
(\cite{sl_battinelli04}; and references therein) note that in
the LG dwarfs for which carbon star studies exist, the
carbon-rich stars coincide with the spatial distribution of the
old stellar component. An exception is
the dE NGC\,185, where the AGB stars are concentrated more in the
centre than the old stellar component (Battinelli \& Demers
\cite{sl_battinelli04b}), similar to the behaviour observed in
the two M\,81 group dSphs discussed above.
We plot the $f_{AGB}$ versus the deprojected distance from M\,81,
R, in the lower panel of Fig.~\ref{sl_figure11}. The highest
fraction of luminous AGB stars is observed in HS\,117 and the
lowest one in KDG\,61 and F12D1. We do not see any trend of the
$f_{AGB}$ with increasing deprojected distance from M\,81. If we
compute the net fraction of the luminous AGB stars, by subtracting
the fraction $f_{cont,tot}$, due to the contribution of blends,
binaries and foreground contamination, from the fraction $f_{AGB}$
listed in the column (5) of Table~\ref{table3}, none of the trends
and conclusions change.
\section{Conclusions}
\label{sec:conclusions}
We use the CMDs of nine dSphs in the M\,81 group to construct
their photometric MDFs. These MDFs show populations covering a
wide range in metallicity with low mean metallicities indicating
that these are metal-poor systems. All MDFs show a steeper
fall-off at their high-metallicity end than toward their
low-metallicity end indicating that galactic winds may play a role
in shaping their distribution.
We compute the mean metallicity, $\langle$[Fe/H]$\rangle$, and the
mean metallicity weighted by the metallicity error,
$\langle$[Fe/H]$\rangle_{w}$, along with their corresponding
standard deviations. The most metal-rich dSph in our sample is IKN
even though it is the least luminous galaxy in our sample. IKN's
comparatively high metallicity may indicate that it is a tidal
dwarf galaxy or that it suffered substantial mass loss in the
past. We do not see any correlation between the
$\langle$[Fe/H]$\rangle$ and the deprojected distance from the
M\,81 galaxy, R.
We use the mean metallicity weighted by the metallicity errors,
$\langle$[Fe/H]$\rangle_{w}$, to select two stellar populations
having metallicities above and below that value. For these two
stellar populations we construct cumulative histograms, as a way
to search for population gradients in metallicity. We find that
some dSphs show strong metallicity gradients, while others do
not. In dSphs with radial metallicity gradients the more
metal-rich populations are more centrally concentrated.
Furthermore, we study the spatial (i.e., two-dimensional)
distribution of our defined metal-rich and metal-poor stellar
populations. This refined look no longer assumes radial symmetry,
and we now find that in some dwarfs the metal-rich population is
more centrally concentrated, while others show offsets in the
centroid of the two populations. By examining the distribution of
the luminous AGB stars, we conclude that, for the majority of the
dSphs, these stars have mostly extended distributions, indicating
that they have been well-mixed with the metal-poor stellar
population. We do not find any correlation between the fraction of
luminous AGB stars and the deprojected distance from the M\,81
galaxy. While present-day distances may not be indicative of the
dwarfs' position in the past and while their orbits are unknown,
the apparent lack of a correlation between distance and
evolutionary history may suggest that the evolution of the dwarfs
was determined to a large extent by their internal properties
not so much by their environment.
Finally, there are some M\,81 dSphs that straddle the transition
region between LG dSphs and dIrrs in the metallicity-luminosity
relation. We may be observing low-luminosity transition-type
dwarfs moving toward the dSph locus. Interestingly, these dwarfs
are slightly more luminous than the bulk of the LG transition
dwarfs. Perhaps some of the M\,81 dwarfs experienced gas stripping
during the recent interactions between the dominant galaxies in
the M\,81 group.
\begin{acknowledgements} The authors would like to thank an anonymous
referee for the thoughtful comments. We would
like to thank Rainer Spurzem and Thorsten
Lisker for useful discussions. SL and this
research were supported within the framework
of the Excellence Initiative by the German
Research Foundation (DFG) via the Heidelberg
Graduate School of Fundamental Physics
(HGSFP) (grant number GSC 129/1). SL would
like to acknowledge an EAS travel grant to
participate in the JENAM 2008 conference in
Vienna, where the preliminary results
presented here were shown. AK acknowledges
support by an STFC postdoctoral fellowship
and by the HGSFP of the University of
Heidelberg.
This research has made use of the NASA/IPAC
Extragalactic Database (NED) which is
operated by the Jet Propulsion Laboratory,
California Institute of Technology, under
contract with the National Aeronautics and
Space Administration. This research has made
use of NASA's Astrophysics Data System
Bibliographic Services. This research has
made use of SAOImage DS9, developed by
Smithsonian Astrophysical Observatory. This
research has made use of Aladin.
All of the data presented in this paper were
obtained from the Multimission Archive at the
Space Telescope Science Institute
(MAST). STScI is operated by the Association
of Universities for Research in Astronomy,
Inc., under NASA contract NAS5-26555. Support
for MAST for non-HST data is provided by the
NASA Office of Space Science via grant
NNX09AF08G and by other grants and
contracts.
\end{acknowledgements}
\section{Phases of Gauge Theories}
Models of dynamical breaking of the electroweak symmetry are theoretically appealing and constitute one of the best motivated natural extensions of the standard model (SM). We have proposed several models \cite{Sannino:2004qp,Dietrich:2005wk,Dietrich:2005jn,Gudnason:2006mk,Ryttov:2008xe,Frandsen:2009fs,Frandsen:2009mi,Antipin:2009ks} possessing interesting dynamics relevant for collider phenomenology \cite{Foadi:2007ue,Belyaev:2008yj,Antola:2009wq,Antipin:2010it} and cosmology \cite{Nussinov:1985xr,Barr:1990ca,Bagnasco:1993st,Gudnason:2006ug,Gudnason:2006yj,Kainulainen:2006wq,Kouvaris:2007iq,Kouvaris:2007ay,Khlopov:2007ic,Khlopov:2008ty,Kouvaris:2008hc,Belotsky:2008vh,Cline:2008hr,Nardi:2008ix,Foadi:2008qv,Jarvinen:2009wr,Frandsen:2009mi,Jarvinen:2009mh,Kainulainen:2009rb,Kainulainen:2010pk}. The structure of one of these models, known as Minimal Walking Technicolor, has led to the construction of a new supersymmetric extension of the SM featuring the maximal amount of supersymmetry in four dimensions with a clear connection to string theory, i.e. Minimal Super Conformal Technicolor \cite{Antola:2010nt}. These models are also being investigated via first-principles lattice simulations \cite{Catterall:2007yx,Catterall:2008qk,DelDebbio:2008zf,Hietanen:2008vc,Hietanen:2009az,Pica:2009hc,Catterall:2009sb,Lucini:2009an,Bursa:2009we,DeGrand:2009hu,DeGrand:2008kx,DeGrand:2009mt,Fodor:2008hm,Fodor:2009ar,Fodor:2009nh,Kogut:2010cz} \footnote{Earlier interesting models \cite{Appelquist:2002me,Appelquist:2003uu,Appelquist:2003hn} have contributed to triggering the lattice investigations of the conformal
window with theories featuring fermions in the fundamental representation
\cite{Appelquist:2009ty,Appelquist:2009ka,Fodor:2009wk,Fodor:2008hn,
Deuzeman:2009mh, Fodor:2009rb,Fodor:2009ff}}. An up-to-date review is Ref. \refcite{Sannino:2009za}, while an excellent review covering developments up to 2003 is Ref. \refcite{Hill:2002ap}. These are also among the most challenging models to work with since they require deep knowledge of gauge dynamics in a regime where perturbation theory fails. In particular, it is of utmost importance to gain information on the nonperturbative dynamics of non-abelian four dimensional gauge theories. The phase diagram of $SU(N)$ gauge theories as functions of the number of flavors, colors and matter representation has been investigated in \cite{Sannino:2004qp,Dietrich:2006cm,Ryttov:2007sr,Ryttov:2007cx,Sannino:2008ha}. The analytical tools which will be used here for such an exploration are: i) The conjectured {\it physical} all orders beta function for nonsupersymmetric gauge theories with fermionic matter in arbitrary representations of the gauge group \cite{Ryttov:2007cx}; ii) The truncated Schwinger-Dyson equation (SD) \cite{Appelquist:1988yc,Cohen:1988sq,Miransky:1996pd} (also referred to as the ladder approximation in the literature); iii) The Appelquist-Cohen-Schmaltz (ACS) conjecture \cite{Appelquist:1999hr}, which makes use of the counting of the thermal degrees of freedom at high and low temperature. These are the methods which we have used in our investigations. However, several very interesting and competing analytic approaches \cite{Grunberg:2000ap,Gardi:1998ch,Grunberg:1996hu,Gies:2005as,Braun:2005uj, Poppitz:2009uq,Poppitz:2009tw,Antipin:2009wr,Antipin:2009dz,Jarvinen:2009fe, Braun:2009ns,Alanen:2009na} have been proposed in the literature. What is interesting is that, despite the very different starting points, the various methods agree qualitatively on the main features of the various conformal windows presented here.
\subsection{Physical all orders Beta Function - Conjecture}
\label{All-orders}
Recently we have conjectured an all orders beta function which allows for a bound of the conformal window \cite{Ryttov:2007cx} of $SU(N)$ gauge theories for any matter representation. The predictions of the conformal window coming from the above beta function are nontrivially supported by all the recent lattice results \cite{Catterall:2007yx,DelDebbio:2008wb,Catterall:2008qk,Appelquist:2007hu,
Shamir:2008pb,Deuzeman:2008sc,Lucini:2007sa}.
In \cite{Sannino:2009aw} we further assumed the form of the beta function to hold for $SO(N)$ and $Sp(2N)$ gauge groups, and in \cite{Sannino:2009za} it was extended to chiral gauge theories. Consider a generic gauge group with $N_f(r_i)$ Dirac flavors belonging to the representation $r_i,\ i=1,\ldots,k$ of the gauge group. The conjectured beta function reads:
\begin{eqnarray}
\beta(g) &=&- \frac{g^3}{(4\pi)^2} \frac{\beta_0 - \frac{2}{3}\, \sum_{i=1}^k T(r_i)\,N_{f}(r_i) \,\gamma_i(g^2)}{1- \frac{g^2}{8\pi^2} C_2(G)\left( 1+ \frac{2\beta_0'}{\beta_0} \right)} \ ,
\end{eqnarray}
with
\begin{eqnarray}
\beta_0 =\frac{11}{3}C_2(G)- \frac{4}{3}\sum_{i=1}^k \,T(r_i)N_f(r_i) \qquad \text{and} \qquad \beta_0' = C_2(G) - \sum_{i=1}^k T(r_i)N_f(r_i) \ .
\end{eqnarray}
The generators $T_r^a,\, a=1\ldots N^2-1$ of the gauge group in the
representation $r$ are normalized according to
$\text{Tr}\left[T_r^aT_r^b \right] = T(r) \delta^{ab}$ while the
quadratic Casimir $C_2(r)$ is given by $T_r^aT_r^a = C_2(r)I$. The
trace normalization factor $T(r)$ and the quadratic Casimir are
connected via $C_2(r) d(r) = T(r) d(G)$ where $d(r)$ is the
dimension of the representation $r$. The adjoint
representation is denoted by $G$.
The beta function is given in terms of the anomalous dimension of the fermion mass $\gamma=-{d\ln m}/{d\ln \mu}$ where $m$ is the renormalized mass, similar to the supersymmetric case \cite{Novikov:1983uc,Shifman:1986zi,Jones:1983ip}.
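As an illustrative evaluation (added here, not from the source), the coefficients $\beta_0$ and $\beta_0'$ can be computed directly; the values below are for $SU(3)$ with 12 fundamental flavors:

```python
# First coefficients of the beta function quoted above, for a single
# representation: beta0 = (11/3)C2(G) - (4/3)T(r)Nf and
# beta0' = C2(G) - T(r)Nf. The SU(3) values use C2(G)=3, T(fund)=1/2.
def beta0(c2g, t_r, n_f):
    return 11.0 / 3.0 * c2g - 4.0 / 3.0 * t_r * n_f

def beta0_prime(c2g, t_r, n_f):
    return c2g - t_r * n_f

print(beta0(3.0, 0.5, 12))        # 3.0
print(beta0_prime(3.0, 0.5, 12))  # -3.0
```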
The loss of asymptotic freedom is determined by the change of sign in the first coefficient $\beta_0$ of the beta function. This occurs when
\begin{eqnarray} \label{AF}
\sum_{i=1}^{k} \frac{4}{11} T(r_i) N_f(r_i) = C_2(G) \ , \qquad \qquad \text{Loss of AF.}
\end{eqnarray}
At the zero of the beta function we have
\begin{eqnarray}
\sum_{i=1}^{k} \frac{2}{11}T(r_i)N_f(r_i)\left( 2+ \gamma_i \right) = C_2(G) \ .
\end{eqnarray}
Hence, specifying the value of the anomalous dimensions at the IRFP yields the last constraint needed to construct the conformal window. Having reached the zero of the beta function the theory is conformal in the infrared. For a theory to be conformal the dimension of the non-trivial spinless operators must be larger than one in order not to contain negative norm states \cite{Mack:1975je,Flato:1983te,Dobrev:1985qv}. Since the dimension of the chiral condensate is $3-\gamma_i$ we see that $\gamma_i = 2$, for all representations $r_i$, yields the maximum possible bound
\begin{eqnarray}
\sum_{i=1}^{k} \frac{8}{11} T(r_i)N_f(r_i) = C_2(G) \ , \qquad \gamma_i = 2 \ .
\label{Bound}
\end{eqnarray}
In the case of a single representation this constraint yields
\begin{equation}
N_f(r)^{\rm BF} \geq \frac{11}{8} \frac{C_2(G)}{T(r)} \ , \qquad \gamma = 2 \ .
\end{equation}
The actual size of the conformal window can be smaller than the one determined by the bound above, Eq. (\ref{AF}) and (\ref{Bound}). It may happen, in fact, that chiral symmetry breaking is triggered for a value of the anomalous dimension less than two. If this occurs the conformal window shrinks. Within the ladder approximation \cite{Appelquist:1988yc,Cohen:1988sq} one finds that chiral symmetry breaking occurs when the anomalous dimension is close to one. Picking $\gamma_i =1$ we find:
\begin{eqnarray}
\sum_{i=1}^{k} \frac{6}{11} T(r_i)N_f(r_i) = C_2(G) \ , \qquad \gamma_i = 1 \ .
\end{eqnarray}
In the case of a single representation this constraint yields
\begin{equation}
N_f(r)^{\rm BF} \geq \frac{11}{6} \frac{C_2(G)}{T(r)} \ , \qquad \gamma =1 \ .
\end{equation}
When considering two distinct representations the conformal window becomes a three dimensional volume, i.e. the conformal {\it house} \cite{Ryttov:2007sr}. Of course, we recover the results by Banks and Zaks \cite{Banks:1981nn} valid in the perturbative regime of the conformal window.
We note that the presence of a physical IRFP requires the vanishing of the beta function for a certain value of the coupling. The opposite however is not necessarily
true; the vanishing of the beta function is not a sufficient condition to determine if the theory has a fixed point unless the beta function is {\it physical}. By {\it physical} we mean that the beta function allows to determine simultaneously other scheme-independent quantities at the fixed point such as the anomalous dimension of the mass of the fermions. This is exactly what our beta function does. In fact, in the case of a single representation, one finds that at the zero of the beta function one has:
\begin{eqnarray}
\gamma = \frac{11C_2(G)-4T(r)N_f}{2T(r)N_f} \ .\end{eqnarray}
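Evaluating this expression is straightforward; the $SU(3)$ example below, with 12 fundamental flavors, is an illustration added here and not taken from the source:

```python
# Anomalous dimension at the zero of the conjectured all-orders beta
# function, single representation: gamma = (11*C2G - 4*T*Nf)/(2*T*Nf).
def gamma_fp(c2g, t_r, n_f):
    return (11.0 * c2g - 4.0 * t_r * n_f) / (2.0 * t_r * n_f)

# SU(3) with 12 fundamental flavors (C2(G)=3, T(fund)=1/2):
print(gamma_fp(3.0, 0.5, 12))  # 0.75
```

As a consistency check, the anomalous dimension vanishes as $N_f$ approaches the asymptotic-freedom boundary $N_f=16.5$, as expected for a weakly coupled fixed point.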
\subsection{Schwinger-Dyson in the Rainbow Approximation}
\label{ra}
{}For nonsupersymmetric theories another way to get quantitative estimates is to use the
{\it rainbow} approximation
to the Schwinger-Dyson equation
\cite{Maskawa:1974vs,Fukuda:1976zb}. After a series of approximations (see \cite{Sannino:2009za} for a review) one deduces, for an $SU(N)$ gauge theory with $N_f$ Dirac fermions transforming according to the representation $r$, the critical number of flavors above which chiral symmetry may be unbroken:
\begin{eqnarray}
{N_f^{\rm SD}} &=& \frac{17C_2(G)+66C_2(r)}{10C_2(G)+30C_2(r)}
\frac{C_2(G)}{T(r)} \ . \label{nonsusy}
\end{eqnarray}
Comparing with the previous result obtained using the all orders beta function, we see that it is the coefficient of $C_2(G)/T(r)$ which differs. We note that in \cite{Armoni:2009jn} a coefficient similar to that of the all-orders beta function has been advocated.
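The critical flavor numbers of this and the preceding subsection reduce, for a single representation, to simple functions of the group invariants. The sketch below evaluates them for $SU(N)$ fundamentals, assuming the standard normalizations $C_2(G)=N$, $T(\text{fund})=1/2$ and $C_2(\text{fund})=(N^2-1)/2N$:

```python
# Critical flavor numbers for a single representation, as functions of
# the group invariants quoted in the text.
def nf_asymptotic_freedom(c2g, t_r):
    # Loss of asymptotic freedom: (4/11) T Nf = C2(G)
    return 11.0 / 4.0 * c2g / t_r

def nf_bf(c2g, t_r, gamma):
    # All-orders beta function bound: (2/11) T Nf (2 + gamma) = C2(G)
    return 11.0 * c2g / ((4.0 + 2.0 * gamma) * t_r)

def nf_sd(c2g, c2r, t_r):
    # Ladder (Schwinger-Dyson) estimate from the formula above
    return (17.0 * c2g + 66.0 * c2r) / (10.0 * c2g + 30.0 * c2r) * c2g / t_r

N = 3
c2g, t_r, c2r = float(N), 0.5, (N**2 - 1) / (2.0 * N)
print(nf_asymptotic_freedom(c2g, t_r))  # 16.5
print(nf_bf(c2g, t_r, gamma=2.0))       # 8.25
print(nf_bf(c2g, t_r, gamma=1.0))       # 11.0
print(nf_sd(c2g, c2r, t_r))             # ~11.9
```

For $SU(3)$ fundamentals this reproduces the familiar hierarchy: asymptotic freedom is lost at $N_f=16.5$, the unitarity ($\gamma=2$) bound sits at $N_f=8.25$, the $\gamma=1$ bound at $N_f=11$, and the ladder estimate at $N_f\approx 11.9$.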
\subsection{The $SU$, $SO$ and $Sp$ phase diagrams}
We consider here gauge theories with fermions in any representation of the $SU(N)$ gauge group \cite{Sannino:2004qp,Dietrich:2006cm,Ryttov:2007sr,Ryttov:2007cx,Ryttov:2009yw} using the various analytic methods described above.
Here we plot in Fig.~\ref{PHComparison} the
conformal windows for various representations predicted with the physical all orders beta function and the SD approaches.
\begin{figure}[h]
\begin{center}\resizebox{10cm}{!}{\includegraphics{PhaseDiagramComparisonb}}
\caption{Phase diagram for nonsupersymmetric theories with fermions
in the: i) fundamental representation (black), ii) two-index
antisymmetric representation (blue), iii) two-index symmetric
representation (red), iv) adjoint representation (green) as a
function of the number of flavors and the number of colors. The
shaded areas depict the corresponding conformal windows. Above the
upper solid curve the theories are no longer asymptotically free.
In between the upper and the lower solid curves the theories are
expected to develop an infrared fixed point according to the all orders
beta function. The area between the upper solid curve and
the dashed curve corresponds to the conformal window obtained in the
ladder approximation.} \label{PHComparison}\end{center}
\end{figure}
The ladder result provides a size of the window, for every fermion representation, smaller than the maximum bound found earlier. This is a consequence of the value of the anomalous dimension at the lower bound of the window. The unitarity constraint corresponds to $\gamma =2$ while the ladder result is closer to $\gamma \sim 1$. Indeed, if we pick $\gamma =1$ our conformal window approaches the ladder result. Incidentally, a value of $\gamma$ larger than one, still allowed by unitarity, is a welcome feature when using this window to construct walking technicolor theories. It may allow for the physical value of the top mass while avoiding a large violation of flavor changing neutral currents \cite{Luty:2004ye}, which were investigated in \cite{Evans:2005pu} in the case of the ladder approximation for minimal walking models.
\subsubsection{The $Sp(2N)$ phase diagram}
\label{sp}
$Sp(2N)$ is the subgroup of $SU(2N)$ which leaves the tensor
$J^{c_1 c_2} = ({\bf 1}_{N \times N} \otimes i \sigma_2)^{c_1 c_2}$
invariant. Irreducible tensors of $Sp(2N)$ must be traceless with respect to
$J^{c_1 c_2}$.
Here we consider $Sp(2N)$ gauge theories with fermions transforming according to a given irreducible representation. Since $\pi_4\left[Sp(2N)\right] =Z_2$ there is a Witten topological anomaly \cite{Witten:1982fp} whenever the sum of the Dynkin indices of the various matter fields is odd. The adjoint of $Sp(2N)$ is the two-index symmetric tensor.
In Figure~\ref{Sp-PhaseDiagram} we summarize the relevant zero temperature and matter density phase diagram as a function of the number of colors and Weyl flavors ($N_{Wf}$) for $Sp(2N)$ gauge theories. For the vector representation $N_{Wf} = 2N_f$ while for the two-index theories $N_{Wf} = N_f$. The shapes of the various conformal windows are very similar to those for $SU(N)$ gauge theories \cite{Sannino:2004qp,Dietrich:2006cm,Ryttov:2007cx}, with the difference that in this case the two-index symmetric representation is the adjoint representation and hence there is one less conformal window.
\begin{figure}[ht]
\centerline{
\includegraphics[height=6cm,width=11cm]{SP-PhaseDiagram}}
\caption
{Phase Diagram, from top to bottom, for $Sp(2N)$ Gauge Theories with $N_{Wf}=2N_f$ Weyl fermions in the vector representation (light blue), $N_{Wf}=N_f$ in the two-index antisymmetric representation (light red) and finally in the two-index symmetric (adjoint) (light green). The arrows indicate that the conformal windows can be smaller and the associated solid curves correspond to the all orders beta function prediction for the maximum extension of the conformal windows.}
\label{Sp-PhaseDiagram}
\end{figure}
\subsubsection{The $SO(N)$ phase diagram}
\label{so}
We shall consider $SO(N)$ theories (for $N>5$) since they do not suffer from a Witten
anomaly \cite{Witten:1982fp} and, besides, for $N<7$ they can always be reduced to either an $SU$ or an $Sp$ theory.
In Figure~\ref{So-PhaseDiagram} we summarize the relevant zero temperature and matter density phase diagram as a function of the number of colors and Weyl flavors ($N_{f}$) for $SO(N)$ gauge theories. The shapes of the various conformal windows are very similar to those for $SU(N)$ and $Sp(2N)$ gauge theories, with the difference that in this case the two-index antisymmetric representation is the adjoint representation. We have analyzed only the theories with $N\geq 6$ since the remaining smaller $N$ theories can be deduced from $Sp$ and $SU$ using the fact that
$SO(6)\sim SU(4)$, $SO(5)\sim Sp(4)$,
$SO(4)\sim SU(2)\times SU(2)$, $SO(3)\sim SU(2)$, and $SO(2)\sim U(1)$.
\begin{figure}[t!]
\centerline{
\includegraphics[height=6cm,width=11cm]{SO-PhaseDiagram}}
\caption
{Phase diagram of $SO(N)$ gauge theories with $N_f$ Weyl fermions in the vector representation, in the two-index antisymmetric (adjoint) and finally in the two-index symmetric representation. The arrows indicate that the conformal windows can be smaller and the associated solid curves correspond to the all orders beta function prediction for the maximum extension of the conformal windows. }
\label{So-PhaseDiagram}
\end{figure}
The phenomenological relevance of orthogonal gauge groups for models of dynamical electroweak symmetry breaking has been shown in \cite{Frandsen:2009mi}.
\subsection{Phases of Chiral Gauge Theories}
Chiral gauge theories, in which at least part of the matter field
content is in complex representations of the gauge group, play an
important role in efforts to extend the SM. These
include grand unified theories, dynamical breaking of symmetries,
and theories of quark and lepton substructure. Chiral theories received much attention in the 1980's~\cite{Ball:1988xg,Raby:1979my}.
Here we confront the results obtained in Refs.~\cite{Appelquist:1999vs,Appelquist:2000qg} using the thermal degrees of freedom count with the generalization of the all-orders beta function, useful to constrain chiral gauge theories, which appeared in \cite{Sannino:2009za}. The two important classes of theories we are going to investigate are the Bars-Yankielowicz (BY) \cite{Bars:1981se} model involving fermions in the two-index symmetric tensor
representation, and the other is a generalized Georgi-Glashow (GGG)
model involving fermions in the two-index antisymmetric tensor
representation. In each case, in addition to fermions in complex
representations, a set of $p$ antifundamental--fundamental pairs
are included and the allowed phases are considered as a function of
$p$. An independent relevant study of the phase diagrams of chiral gauge theories appeared in \cite{Poppitz:2009tw}. Here the authors also compare their results with the ones presented below.
\subsubsection{All-orders beta function for Chiral Gauge Theories}
A generic chiral gauge theory always has a set of matter fields for which one cannot provide a mass term, but it can also contain vector-like matter. We hence suggest the following minimal modification of the all-orders beta function \cite{Ryttov:2007cx} for any nonsupersymmetric chiral gauge theory:
\begin{equation}
\beta_{\chi} (g)= -\frac{g^3}{(4\pi)^2} \frac{\beta_0 - \frac{2}{3}\sum_{i=1}^{k}T(r_i)p(r_i) \gamma_i (g^2)}
{1 - \frac{g^2}{8\pi^2}C_2(G)\left(1 + \frac{2\beta^{\prime}_{ \chi}}{\beta_0} \right)} \ ,
\end{equation}
where $p(r_i)$ is the number of vector-like pairs of fermions in the representation $r_i$ for which an anomalous dimension of the mass $\gamma_i$ can be defined. $\beta_0$ is the standard one-loop coefficient of the beta function, while the expression for $\beta^{\prime}_{\chi}$ is readily obtained by imposing that, when expanding $\beta_{\chi}$, one correctly recovers the two-loop coefficient; its explicit form is not relevant here. According to the new beta function, gauge theories without vector-like matter but featuring several copies of purely chiral matter will be conformal when the number of copies is such that the first coefficient of the beta function vanishes identically. Using topological excitations an analysis of this case was performed in \cite{Poppitz:2009uq}.
\subsubsection{ The Bars Yankielowicz (BY) Model}
\label{due}
This model is based on the single gauge group $SU(N\geq 3) $ and
includes fermions transforming as a symmetric tensor
representation, $S=\psi
_{L}^{\{ab\}}$, $a,b=1,\cdots ,N$; $\ N+4+p$ conjugate fundamental
representations: $\bar{F}_{a,i}=\psi _{a,iL}^{c}$, where $i=1,\cdots ,N+4+p$%
; and $p$ fundamental representations, $F^{a,i}=\psi _{L}^{a,i},\ i=1,\cdots
,p$. The $p=0$ theory is the basic chiral theory, free of gauge
anomalies by virtue of the cancellation between the symmetric
tensor and the $N+4$ conjugate fundamentals.
pairs of fundamentals and conjugate fundamentals, in a real
representation of the gauge group, lead to no gauge anomalies.
The global symmetry group is
\begin{equation}
G_{f}=SU(N+4+p)\times SU(p)\times U_{1}(1)\times U_{2}(1)\ .
\label{gglobal3}
\end{equation}
The two $U(1)$'s are the linear combinations of the original $U(1)$'s generated
by $S\rightarrow e^{i\theta _{S}}S$, $\bar{F}\rightarrow e^{i\theta _{\bar{F}}}\bar{F}$ and $F\rightarrow e^{i\theta _{F}}F$ that are left invariant by
instantons, namely those for which $\sum_{j}N_{R_{j}}T(R_{j})Q_{R_{j}}=0$,
where $Q_{R_{j}}$ is the $U(1)$ charge of $R_{j}$ and $N_{R_{j}}$ denotes
the number of copies of $R_{j}$.
Thus the fermionic content of the theory is
\begin{table}[h]
\[ \begin{array}{c | cc c c c } \hline
{\rm Fields} &\left[ SU(N) \right] & SU(N+4+p) & SU(p) & U_1(1) &U_2(1) \\ \hline
\hline
S &\Ysymm &1 &1 & N+4 &2p \\
\bar{F} &\bar{\Yfund} &\bar{\Yfund} & 1 & -(N+2) & - p \\
{F} &\Yfund &1 & \Yfund & N+2 & - (N - p) \\
\hline \end{array}
\]
\caption{The Bars Yankielowicz (BY) Model}
\end{table}
where the first $SU(N)$ is the gauge group, indicated by the square brackets.
From the numerator of the chiral beta function and the knowledge of the one-loop coefficient of the BY perturbative beta function the predicted conformal window is:
\begin{equation}
3\frac{(3N-2)}{2+\gamma^{\ast}}\leq p \leq \frac{3}{2}(3N-2) \ ,
\end{equation}
with $\gamma^{\ast}$ the largest possible value of the anomalous dimension of the mass. The maximum value of the number of $p$ flavors is obtained by setting $\gamma^{\ast} = 2$:
\begin{equation}
\frac{3}{4} (3N-2) \leq p \leq \frac{3}{2}(3N-2) \ , \qquad \gamma^{\ast} = 2 \ ,
\end{equation}
while for $\gamma^{\ast} =1$ one gets:
\begin{equation}
(3N-2) \leq p \leq \frac{3}{2} (3N-2) \ , \qquad \gamma^{\ast} = 1 \ .
\end{equation}
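Numerically (our own sketch), the bounds above give, for the smallest BY gauge group $SU(3)$:

```python
# Conformal-window bounds in p for the BY model:
# 3*(3N - 2)/(2 + gamma) <= p <= (3/2)*(3N - 2).
def by_window(n, gamma_star):
    lower = 3.0 * (3 * n - 2) / (2.0 + gamma_star)
    upper = 1.5 * (3 * n - 2)
    return lower, upper

print(by_window(3, 2.0))  # (5.25, 10.5): unitarity-bound case
print(by_window(3, 1.0))  # (7.0, 10.5): ladder-like case
```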
The chiral beta function predictions for the conformal window are compared with the thermal degree of freedom investigation as shown in the left panel of Fig.~\ref{Chiral}.
\begin{figure}[ht]
\centerline
{\includegraphics[height=4cm,width=16cm]{Chiral}}
\caption
{ {\it Left panel}: Phase diagram of the BY generalized model. The upper solid (blue) line corresponds to the loss of asymptotic freedom; the dashed (blue) curve corresponds to the chiral beta function prediction for the breaking/restoring of chiral symmetry. The dashed black line corresponds to the ACS bound stating that the conformal region should start above this line. We have augmented the ACS method with the Appelquist-Duan-Sannino \cite{Appelquist:2000qg} extra requirement that, among all possible infrared phases a chiral gauge theory can have, the phase with the lowest number of massless degrees of freedom wins. We hence used $f^{\rm brk + sym}_{IR}$ and $f_{UV}$ to determine this curve. According to the all-orders beta function (B.F.) the conformal window cannot extend below the solid (blue) line, as indicated by the arrows. This line corresponds to the anomalous dimension of the mass
reaching the maximum value of $2$. {\it Right panel}: The same plot for the GGG model.}
\label{Chiral}
\end{figure}
In order to derive a prediction from the ACS method we augmented it with the Appelquist-Duan-Sannino \cite{Appelquist:2000qg} extra requirement that, among all possible infrared phases a chiral gauge theory can have, the phase with the lowest number of massless degrees of freedom wins. The thermal critical number is:
\begin{eqnarray}
p^{\rm Therm} = \frac{1}{4}\left[-16 + 3N + \sqrt{208 - 196N + 69 N^2} \right] \ .
\end{eqnarray}
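Evaluating this expression (our own numerical sketch, simply plugging into the displayed formula) for, e.g., $N=3$:

```python
import math

# Thermal critical number of p pairs for the BY model:
# p_Therm = (1/4) * (-16 + 3*N + sqrt(208 - 196*N + 69*N**2)).
def p_therm(n):
    return 0.25 * (-16 + 3 * n + math.sqrt(208 - 196 * n + 69 * n * n))

print(p_therm(3))  # ~2.13
```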
\subsubsection{ The Generalized Georgi-Glashow (GGG) Model}
This model is similar to the BY model just considered. It is an
$SU(N\geq 5)$ gauge theory, but with fermions in the
anti-symmetric, rather than symmetric, tensor representation. The
complete fermion content is $A=\psi _{L}^{[ab]},$ $a,b=1,\cdots
,N$; an additional $N-4+p$ fermions in the conjugate fundamental
representations: $\bar{F}_{a,i}=\psi
_{a,iL}^{c},$ $i=1,\cdots ,N-4+p$; and $p$ fermions in the fundamental
representations, $F^{a,i}=\psi _{L}^{a,i}$, $i=1,\cdots ,p$.
The global symmetry is
\begin{equation}
G_{f}=SU(N-4+p)\times SU(p)\times U_{1}(1)\times U_{2}(1) \ .
\end{equation}
The two $U(1)$'s are anomaly free. With respect to this symmetry, the
fermion content is
\begin{table}[h]
\[ \begin{array}{c | cc c c c } \hline
{\rm Fields} &\left[ SU(N) \right] & SU(N -4+p) & SU(p) & U_1(1) &U_2(1) \\ \hline
\hline &&&&&\\
A &\Yasymm &1 &1 & N- 4 &2p \\
\bar{F} &\bar{\Yfund} &\bar{\Yfund} & 1 & -(N - 2) & - p \\
{F} &\Yfund &1 & \Yfund & N - 2 & - (N - p) \\
\hline \end{array}
\]
\caption{The Generalized Georgi-Glashow (GGG) Model}
\end{table}
Following the analysis for the BY model the chiral beta function predictions for the conformal window are compared with the thermal degree of freedom investigation and the result is shown in the right panel of Fig.~\ref{Chiral}.
\subsection{Conformal Chiral Dynamics}
Our starting point is a nonsupersymmetric non-abelian gauge theory with sufficient massless fermionic matter to develop a nontrivial IRFP. The cartoon of the running of the coupling constant is represented in Fig.~\ref{run1}. In the plot $\Lambda_U$ is the dynamical scale below which the IRFP is essentially reached. It can be defined as the scale for which $\alpha$ is $2/3$ of the fixed point value in a given renormalization scheme.
\begin{figure}[h]
\begin{center}
\includegraphics[width=6cm]{Alpha-Coulomb.pdf}
\caption{ Running of the coupling constant in an asymptotically free gauge theory developing an infrared fixed point for a value $\alpha = \alpha^{\ast}$. }
\label{run1}
\end{center}
\end{figure}
If the theory possesses an IRFP the chiral condensate must vanish at large distances. Here we want to study the behavior of the condensate when a flavor singlet mass term is added to the underlying Lagrangian $
\Delta L = - m\,{\widetilde{\psi}}{\psi} + {\rm h.c.} $
with $m$ the fermion mass and $\psi^f_{c}$ as well as $\widetilde{\psi}_f^c$ left-transforming two-component spinors; $c$ and $f$ represent color and flavor indices. The omitted color and flavor indices, in the Lagrangian term, are contracted.
We consider the case of fermionic matter in the fundamental representation of the $SU(N)$ gauge group.
The effect of such a term is to break the conformal symmetry together with some of the global symmetries of the underlying gauge theory. The composite operator $
{{\cal O}_{\widetilde{\psi}{\psi}}}^{f^{\prime}}_f = \widetilde{\psi}^{f^{\prime}}{\psi}_f $
has mass dimension $\displaystyle{
d_{\widetilde{\psi}{\psi}} = 3 - \gamma}$ with
$\gamma$ the anomalous dimension of the mass term. At the fixed point $\gamma$ is a positive number smaller than two \cite{Mack:1975je}. We assume $m \ll \Lambda_U$. Dimensional analysis demands $
\Delta L \rightarrow
-m\, \Lambda_U^{\gamma} \, {\rm Tr}[{\cal O}_{\widetilde{\psi}{\psi}}] + {\rm h.c.}~$.
The mass term is a relevant perturbation around the IRFP driving the theory away from the fixed point. It will induce a nonzero vacuum expectation value for ${\cal O}_{\widetilde{\psi}{\psi}}$ itself proportional to $\delta^{f^\prime}_f$. It is convenient to define ${\rm Tr}[{\cal O}_{\widetilde{\psi}{\psi}}] = N_f \cal O $ with $\cal O$ a flavor singlet operator. The relevant low energy Lagrangian term is then $
-m\, \Lambda_U^{\gamma} \, N_f \cal O + {\rm h.c.} $.
To determine the vacuum expectation value of $\cal{O}$ we follow \cite{Stephanov:2007ry,Sannino:2008nv}.
The induced physical mass gap is a natural infrared cutoff. We, hence, identify $\Lambda_{IR} $ with the physical value of the condensate. We find:
\begin{eqnarray}
\langle \widetilde{\psi}^f_c \psi^c_f \rangle &\propto& -m \Lambda_U^2 \ , \qquad ~~~~~~0 <\gamma < 1 \ , \label{BZm} \\
\langle \widetilde{\psi}^f_c \psi^c_f \rangle &\propto & -m \Lambda_U^2 \log \frac{\Lambda^2_U}{|\langle {\cal O} \rangle|}\ , ~~~ \gamma \rightarrow 1 \ , \label{SDm} \\
\langle \widetilde{\psi}^f_c\psi^c_f \rangle &\propto & -m^{\frac{3-\gamma} {1+\gamma}}
\Lambda_U^{\frac{4\gamma} {1+\gamma}}\ , ~~~1<\gamma \leq 2 \ .
\label{UBm}
\end{eqnarray}
We used $\langle \widetilde{\psi} \psi \rangle \sim \Lambda_U^{\gamma} \langle {\cal O} \rangle $ to relate the expectation value of ${\cal O}$ to that of the fermion condensate. Via an allowed axial rotation, $m$ is now real and positive.
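The scaling exponents implied by these three regimes can be tabulated directly (a sketch of ours; the $\gamma\to 1$ case carries an extra logarithm and is excluded here):

```python
# Exponents (a, b) in <psi psi> ~ -m**a * Lambda_U**b for the regimes above.
def condensate_exponents(gamma):
    if 0.0 < gamma < 1.0:
        return 1.0, 2.0                              # linear in m
    if 1.0 < gamma <= 2.0:
        return (3.0 - gamma) / (1.0 + gamma), 4.0 * gamma / (1.0 + gamma)
    raise ValueError("gamma -> 1 develops a logarithm; treat separately")

print(condensate_exponents(0.5))  # (1.0, 2.0)
print(condensate_exponents(2.0))  # a = 1/3, b = 8/3 at the unitarity bound
```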
The effects of instantons on the conformal dynamics have been investigated in \cite{Sannino:2008pz}, where it was shown that these effects can be sizable only for a very small number of flavors given that, otherwise, the instanton-induced operators are highly irrelevant.
\subsection {Gauge Duals and Conformal Window}
One of the most fascinating possibilities is that generic asymptotically free gauge theories have magnetic duals. In fact, in the mid-nineties, in a series of groundbreaking papers Seiberg \cite{Seiberg:1994bz,Seiberg:1994pq} provided strong support for the existence of a consistent picture of such a duality within a supersymmetric framework. Arguably, any possible dual of a generic nonsupersymmetric asymptotically free gauge theory able to reproduce its infrared dynamics must match the 't Hooft anomaly conditions.
We have exhibited several solutions of these conditions for QCD in \cite{Sannino:2009qc} and for certain gauge theories with
higher dimensional representations in \cite{Sannino:2009me}. An earlier exploration already appeared in the literature \cite{Terning:1997xy}. The novelties with respect to these earlier results are: i) the requirement that the gauge singlet operators associated with the magnetic baryons should be interpreted as bound states of ordinary baryons \cite{Sannino:2009qc}; ii) the fact that the asymptotic freedom condition for the dual theory matches the lower bound on the conformal window obtained using the all-orders beta function \cite{Ryttov:2007cx}. These extra constraints help restrict further the number of possible gauge duals without diminishing the exactness of the associated solutions with respect to the 't Hooft anomaly conditions.
We will briefly summarize here the novel solutions to the 't Hooft anomaly conditions for QCD. The resulting {\it magnetic} dual allows one to predict the critical number of flavors above which the asymptotically free theory, in the electric variables, enters the conformal regime, as predicted using the all-orders conjectured beta function \cite{Ryttov:2007cx}.
\subsubsection{QCD Duals}
The underlying gauge group is $SU(3)$ while the
quantum flavor group is
\begin{equation}
SU_L(N_f) \times SU_R(N_f) \times U_V(1) \ ,
\end{equation}
and the classical $U_A(1)$ symmetry is destroyed at the quantum
level by the Adler-Bell-Jackiw anomaly. We indicate with
$Q_{\alpha;c}^i$ the two component left spinor where $\alpha=1,2$
is the spin index, $c=1,...,3$ is the color index while
$i=1,...,N_f$ represents the flavor. $\widetilde{Q}^{\alpha ;c}_i$
is the two component conjugated right spinor.
The global anomalies are associated with the triangle diagrams featuring at the vertices three $SU(N_f)$ generators (either all right or all left), or two
$SU(N_f)$ generators (all right or all left) and one $U_V(1)$ charge. We indicate these anomalies for short with:
\begin{equation}
SU_{L/R}(N_f)^3 \ , \qquad SU_{L/R}(N_f)^2\,\, U_V(1) \ .
\end{equation}
For a vector like theory there are no further global anomalies. The
cubic anomaly factor, for fermions in fundamental representations,
is $1$ for $Q$ and $-1$ for $\tilde{Q}$ while the quadratic anomaly
factor is $1$ for both leading to
\begin{equation}
SU_{L/R}(N_f)^3 \propto \pm 3 \ , \quad SU_{L/R}(N_f)^2 U_V(1)
\propto \pm 3 \ .
\end{equation}
If a magnetic dual of QCD does exist, one expects it to be weakly coupled near the critical number of flavors below which large distance conformality is lost in the electric variables. This idea is depicted in Fig.~\ref{Duality}.
\begin{figure}[h!]
\centerline{\includegraphics[width=8cm]{Duality}}
\caption{Schematic representation of the phase diagram as a function of the number of flavors and colors. For a given number of colors, by increasing the number of flavors within the conformal window we move from the lowest line (violet) to the upper (black) one. The upper black line corresponds to the one where one loses asymptotic freedom in the electric variables and the lower line to the one where chiral symmetry breaks and long distance conformality is lost. In the {\it magnetic} variables the situation is reversed and the perturbative line, i.e. the one where one loses asymptotic freedom in the magnetic variables, corresponds to the one where chiral symmetry breaks in the electric ones. }
\label{Duality}
\end{figure}
Determining a possible unique dual theory for QCD is, however, not simple given the few mathematical constraints at our disposal. The saturation of the global anomalies is an important tool but is not able to select out a unique solution. We shall see, however, that one of the solutions, when interpreted as the QCD dual, leads to a prediction of a critical number of flavors corresponding exactly to the one obtained via the conjectured all orders beta function.
We seek solutions of the anomaly matching conditions for a gauge theory $SU(X)$ with global symmetry group $SU_L(N_f)\times SU_R(N_f) \times U_V(1)$ featuring
{\it magnetic} quarks ${q}$ and $\widetilde{q}$ together with $SU(X)$ gauge singlet states identifiable as baryons built out of the {\it electric} quarks $Q$. Since mesons do not directly affect the global anomaly matching conditions we could add them to the spectrum of the dual theory. We study the case in which $X$ is a linear combination of the number of flavors and colors of the type $\alpha N_f + 3 \beta$ with $\alpha$ and $\beta$ integers.
We add to the {\it magnetic} quarks gauge singlet Weyl fermions which can be identified with the baryons of QCD, except that here they are massless.
Having defined the possible massless matter content of the gauge theory dual to QCD one computes the $SU_{L}(N_f)^3$ and $SU_{L}(N_f)^2\,\, U_V(1)$ global anomalies in terms of the new fields.
We have found several solutions to the anomaly matching conditions presented above. Some were found previously in \cite{Terning:1997xy}. Here we display a new solution in which the gauge group is $SU(2N_f - 5N)$ with the number of colors $N$ equal to $3$. It is, however, convenient to keep the dependence on $N$ explicit.
\begin{table}[bh]
\[ \begin{array}{|c| c|c c c|c|} \hline
{\rm Fields} &\left[ SU(2N_f - 5N) \right] & SU_L(N_f) &SU_R(N_f) & U_V(1)& \# ~{\rm of~copies} \\ \hline
\hline
q &\Yfund &{\Yfund }&1& \frac{N(2 N_f - 5)}{2 N_f - 5N} &~~~1 \\
\widetilde{q} & \overline{\Yfund}&1 & \overline{\Yfund}& -\frac{N(2 N_f - 5)}{2 N_f - 5N}&~~~1 \\
A &1&\Ythreea &1&~~~3& ~~~2 \\
B_A &1&\Yasymm &\Yfund &~~~3& -2\\
{D}_A &1&{\Yfund} &{\Yasymm } &~~~3& ~~~2 \\
\widetilde{A} &1&1&\overline{\Ythreea} &-3&~~~2\\
\hline \end{array}
\]
\caption{Massless spectrum of {\it magnetic} quarks and baryons and their transformation properties under the global symmetry group. The last column represents the multiplicity of each state and each state is a Weyl fermion.}
\label{dual}
\end{table}
$X$ must assume a value strictly larger than one, otherwise the gauge theory is abelian. This provides the first nontrivial bound on the number of flavors:
\begin{equation}
N_f > \frac{5N + 1}{2} \ , \end{equation}
which for $N=3$ requires $N_f> 8 $.
Asymptotic freedom of the newly found theory is dictated by the coefficient of the one-loop beta function:
\begin{equation}
\beta_0 = \frac{11}{3} (2N_f - 5N) - \frac{2}{3}N_f \ .
\end{equation}
To this order in perturbation theory the gauge singlet states do not affect the {\it magnetic} quark sector and we can hence determine the critical number of flavors by requiring the dual theory to be asymptotically free, i.e.:
\begin{equation}
N_f \geq \frac{11}{4}N \qquad\qquad\qquad {\rm Dual~Asymptotic~Freedom}\ .
\end{equation}
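This bound can be checked directly (our own numerical sketch): setting the one-loop coefficient of the dual theory to zero reproduces $N_f = 11N/4$:

```python
# One-loop coefficient of the SU(2*Nf - 5*N) dual with Nf magnetic flavors:
# beta0 = (11/3)*(2*Nf - 5*N) - (2/3)*Nf; asymptotic freedom needs beta0 >= 0,
# which solves to Nf >= (11/4)*N.
def beta0_dual(n, nf):
    return 11.0 / 3.0 * (2 * nf - 5 * n) - 2.0 / 3.0 * nf

n = 3
print(abs(beta0_dual(n, 11.0 * n / 4.0)) < 1e-9)  # True: bound saturated at Nf = 8.25
print(beta0_dual(n, 9) > 0)                       # True: asymptotically free at Nf = 9
```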
Quite remarkably this value {\it coincides} with the one predicted by means of the all orders conjectured beta function for the lowest bound of the conformal window, in the {\it electric} variables, when taking the anomalous dimension of the mass to be $\gamma =2 $. We recall that for any number of colors $N$ the all orders beta function requires the critical number of flavors to be larger than:
\begin{equation}
N_f^{BF}|_{\gamma = 2} = \frac{11}{4} N \ .
\end{equation}
{}For $N=3$ the two expressions yield $8.25$\footnote{Actually, given that $X$ must be at least $2$, we must have $N_f \geq 8.5$ rather than $8.25$.}. We consider this a nontrivial and interesting result lending further support to the all-orders beta function conjecture and simultaneously suggesting that this theory might, indeed, be the QCD magnetic dual.
The actual size of the conformal window matching this possible dual corresponds to setting $\gamma =2$. {}We note that although for $N_f = 9$ and $N=3$ the magnetic gauge group is $SU(3)$ the theory is not trivially QCD given that it features new massless fermions and their interactions with massless mesonic type fields.
Recent suggestions to analyze the conformal window of nonsupersymmetric gauge theories based on different model assumptions \cite{Poppitz:2009uq} are in qualitative agreement with the precise results of the all orders beta function conjecture. It is worth noting that the combination $2N_f - 5N$ appears in the computation of the mass gap for gauge fluctuations presented in \cite{Poppitz:2009uq,Poppitz:2008hr}. It would be interesting to explore a possible link between these different approaches in the future.
We have also found solutions for which the lower bound of the conformal window is saturated for $\gamma =1$. The predictions from the gauge duals are, however, entirely and surprisingly consistent with the maximum extension of the conformal window obtained using the all-orders beta function \cite{Ryttov:2007cx}. Our main conclusion is that the 't Hooft anomaly conditions alone do not exclude the possibility that the maximum extension of the QCD conformal window is the one obtained for a large anomalous dimension of the quark mass.
By computing the same gauge singlet correlators in QCD and its suggested dual, one can directly validate or refute this proposal via lattice simulations.
\subsection{Conclusions}
We investigated the conformal windows of chiral and non-chiral nonsupersymmetric gauge theories with fermions in any representation of the underlying gauge group using four independent analytic methods. {}For vector-like gauge theories one observes a universal value, i.e. independent of the representation, of the ratio of the area of the maximum extension of the conformal window, predicted using the all-orders beta function, to the asymptotically free one, as defined in \cite{Ryttov:2007sr}. It is easy to check from the results presented that this ratio is independent not only of the representation but also of the particular gauge group chosen.
The four methods we used to unveil the conformal windows are the all-orders beta function (BF), the truncated SD equation, the thermal degrees of freedom method and possible gauge duality. They have vastly different starting points and there was no a priori reason for them to agree with each other, even at the qualitative level.
Several questions remain open, such as what happens on the right hand side of the infrared fixed point as we increase the coupling further. Does a generic strongly coupled theory develop a new UV fixed point as we increase the coupling beyond the first IR value \cite{Kaplan:2009kr}? If this were the case our beta function would still be a valid description of the running of the coupling constant in the region between the trivial UV fixed point and the neighborhood of the first IR fixed point. One might also consider extending our beta function to take this possibility into account, as done in \cite{Antipin:2009wr}. It is also possible that no non-trivial UV fixed point forms at higher values of the coupling constant for any value of the number of flavors within the conformal window. Gauge duals seem to be in agreement with the simplest form of the beta function. The extension of the all-orders beta function to take into account fermion masses has appeared in \cite{Dietrich:2009ns}.
Our analysis substantially increases the number of asymptotically free gauge theories which can be used to construct SM extensions making use of (near) conformal dynamics. Current Lattice simulations can test our predictions and lend further support or even disprove the emergence of a universal picture possibly relating the phase diagrams of gauge theories of fundamental interactions.
\subsubsection*{Acknowledgments}
I would like to thank the organizers of SCGT09 for providing a very interesting scientific meeting. It is a pleasure to thank M. Antola, T. Appelquist, S. Catterall, L. Del Debbio, S. Di Chiara, D.D. Dietrich, J. Giedt, R. Foadi, M.T. Frandsen, M. J\"arvinen, C. Pica, T.A. Ryttov, M. Shifman, J. Schechter, R. Shrock, and K. Tuominen, who have contributed to getting me interested in this subject, substantially helped develop the ideas and results reported here, and provided discussions and comments.
\section{Introduction}
\label{sec:intro}
All material objects, even if charge neutral, support instantaneous
current fluctuations due to quantum and thermal fluctuations of their
charge distribution. The interaction that results from the
electromagnetic coupling of these currents on different objects is
usually called the Casimir force. Originally, this force was
derived for two parallel perfect metal plates \cite{Casimir48-1} and
atoms \cite{Casimir48-2}, and later generalized to two infinite
dielectric half-spaces with planar and parallel surfaces
\cite{Lifshitz55,Lifshitz56,Lifshitz57,Dzyaloshinskii61}. The
non-additivity of the Casimir force limits the applicability of these
results to objects at very short separations via the so-called
proximity force approximation, which provides only an uncontrolled
lowest-order approximation in the surface curvature at vanishingly
small separations and ignores the global geometrical arrangement of
the objects. Generically, one encounters in practice geometries and
shapes that are rather distinct from infinite, parallel and planar
surfaces. Hence one faces the problem of computing the Casimir force
between objects of general shape, arrangement and material
decomposition.
This article summarizes recent progress that has proved useful
in solving this problem for a variety of geometries. (For an overview
of the development of related approaches, see Ref. \refcite{Rahi:fk}.)
In order to study Casimir forces in more general geometries, it turns
out to be advantageous to describe how fluctuating currents are
induced on the objects by the scattering of electromagnetic waves.
This representation of the Casimir interaction was developed in
Refs.~\refcite{Emig07,Emig08,Rahi:fk}. Each object is characterized by
its on-shell electromagnetic scattering amplitude. The separations
and orientations of the objects are encoded in universal translation
matrices, which describe how a solution to the source-free Maxwell's
equations in the basis appropriate to one object looks when expanded
in the basis appropriate to another. These matrices hence describe the
electrodynamic interaction of the multipole moments associated with
the currents and depend on the displacement and orientation of
coordinate systems, but not on the shape and material of the objects
themselves. The scattering amplitudes and translation matrices are
then combined in a simple formula that allows efficient numerical and,
in some cases, analytical calculations of Casimir forces and torques
for a wide variety of geometries, materials, and external conditions.
The approach applies to any finite number of arbitrarily shaped
objects with arbitrary linear electromagnetic response at zero or
finite temperature.
To illustrate this general formulation, we provide some sample
applications, including results for the interaction between metallic
objects, both for two spheres and for a sphere and a plane, taking
into account the combined effect of shape and material properties at
large distances.
the interaction by considering three objects (two spheres and a plane)
and for the orientation dependence in the case of spheroids. The
results are presented in the form of analytical expressions at large
distances and as numerical results at smaller separations.
\section{Fluctuating currents and T-operators}
\label{sec:T-op}
We consider the Casimir energy for neutral objects with electric and
magnetic susceptibilities. The partition function $Z$ is defined
through the path integral, which sums all configurations of the
electromagnetic field (outside and inside the objects) with periodic
boundary conditions in time between $0$ and $T$. The free energy $F$
of the field at inverse temperature $\beta$ is
\begin{equation}
F(\beta) = -\frac{1}{\beta}\log Z(\beta).
\label{free}
\end{equation}
The unrenormalized free energy generally depends on the ultraviolet
cutoff, but cutoff-dependent contributions arise from the objects
individually and do not depend on their separations or orientations.
Since we are only interested in energy differences, we can remove
these divergences by subtracting the energy of the system
when the objects are in some reference configuration, see below. By
replacing the time $T$ by $-i\hbar\beta$, we obtain the partition
function $Z(\beta)$ in 4D Euclidean space. In $A^0=0$ gauge, the
result is simply to replace the Matsubara frequencies $\omega_n =
\frac{2\pi n}{T}$ by $i\frac{2\pi n}{\hbar \beta}=ic\kappa_n$, where
$\kappa_n$ is the $n^{\rm th}$ Matsubara frequency divided by $c$.
The action is quadratic, so the modes with different $\kappa_n$
decouple and the partition function decomposes into a product of
partition functions for each mode. In the limit
$\beta\to\infty$, the sum $\sum_{n\geq 0}$ turns into an integral
$\frac{\hbar c \beta}{2\pi}\int_{0}^\infty d\kappa$, and we have
the ground state energy
\begin{equation}
\calE_0 = -\frac{\hbar c}{2\pi} \int_0^\infty d\kappa \,
\log Z(\kappa),
\label{EKem}
\end{equation}
with
\begin{equation}
\begin{split}
Z(\kappa) = \int \dA \dA^* \,
\exp & \left[ -\beta \int d\vecx \,
\E^{*}(\kappa,\vecx) \left(\Hzero+\frac{1}{\kappa^2}
\tV(\kappa,\vecx) \right) \, \E (\kappa,\vecx)
\right],
\end{split}
\label{ZKem}
\end{equation}
where we have used $\curl \E = i\frac{\omega}{c} \B$ to eliminate $\B$
in the action, and it is assumed that $\E$ is expressed by $\E= -
c^{-1} \partial_t \A$ in terms of the vector potential $\A$. This
functional integral sums over configurations of $\A$. This sum
must be restricted by a choice of gauge, so that it does not include
the infinitely redundant gauge orbits. We will choose to work in the
gauge $A^0=0$, although of course no physical results depend on this
choice. Here we defined the Helmholtz operator
\begin{equation}
\Hzero(\kappa)=\tI +\frac{1}{\kappa^2} \nabla \times \nabla \times \, ,
\end{equation}
which is inverted by the
Green's function that is defined by
\begin{equation}
\kappa^2 \Hzero(\kappa) \tG_0(\kappa,\vecx,\vecx') = \tI
\delta^{(3)}(\vecx-\vecx') \, .
\end{equation}
The potential operator is
\begin{equation}
\tV(\kappa,\vecx) = \tI \, \kappa^2
\left(\epsilon(ic\kappa,\vecx)-1\right) + \curl
\left(\frac{1}{\mu(ic\kappa,\vecx)} -1 \right) \curl
\,.
\end{equation}
It is nonzero only at those points in space where the objects are
located ($\epsilon \neq 1$ or $\mu \neq 1$). At small frequencies,
typical materials have $\epsilon>1$ and $\mu\approx 1$, and $\V$ can
be regarded as an attractive potential.
Next, we transform to a free field (with kernel $\Hzero$) by
introducing fluctuating currents $\J$ that are confined to the
objects. To perform this Hubbard-Stratonovich-like transformation we
multiply and divide the partition function of Eq.~(\ref{ZKem}) by
\begin{equation}
\begin{split}
W & = \int \left.\dJJ\right|_{\rm obj} \exp\left[-\beta
\int d\vecx \, \J^*(\vecx) \cdot \V^{-1}(\kappa,\vecx)
\J(\vecx) \right] = \det \V \, ,
\end{split}
\end{equation}
where $\left.\right|_{\rm obj}$ indicates that the currents are
defined only over the objects, {\it i.e.\/} the domain where $\V$ is
nonzero (and therefore $\V^{-1}$ exists), and we have represented the
local potential as a matrix in position space, $\V(\kappa,\vecx,
\vecx') = \V(\kappa,\vecx) \delta^{(3)}(\vecx - \vecx')$. We then
change variables in the integration, $\J(\vecx) = \J'(\vecx) +
\frac{i}{\kappa} \V(\kappa,\vecx) \E(\vecx)$ and $\J^{*}(\vecx) =
{\J'}^{*}(\vecx) + \frac{i}{\kappa} \V(\kappa,\vecx) \E^{*}(\vecx)$,
to obtain
\begin{equation}
\label{EtoJ}
\begin{split}
\raisetag{20pt}
Z(\kappa) & = \frac{1}{W} \!\int\! \dA \dA^* \dJJprime \,
\times\\ &
\exp \left[
-\beta \!\!\int \!\!d\vecx \,
\E^{*}(\kappa,\vecx)
\left(\Hzero(\kappa)+
\frac{1}{\kappa^2} \V(\kappa,\vecx)\right)\E(\kappa,\vecx)\right.
\\ & \left.
\!\!+ \left({\J'}^{*}(\vecx) + \frac{i}{\kappa}
\V(\kappa,\vecx) \E^{*}(\kappa,\vecx)\right)
\V^{-1}(\kappa,\vecx)
\left(\J'(\vecx) + \frac{i}{\kappa}
\V(\kappa,\vecx) \E(\kappa,\vecx)\right)
\right],
\\ & = \frac{1}{W} \!\int\! \dA \dA^* \dJJprime \,
\times\\ &
\exp \left[
-\beta \!\!\int \!\!d\vecx \,
\E^{*} \Hzero \E + {\J'}^{*} \V^{-1}\J' +\frac{i}{\kappa} \left(
{\J'}^{*}\E + \J'\E^{*}\right)\right] \, .
\end{split}
\end{equation}
Now the free electromagnetic field can be integrated out
using $\Hzero^{-1} = \kappa^2\tG_0$, yielding
\begin{equation}
\begin{split}
\raisetag{45pt}
Z(\kappa) & = \frac{Z_0}{W} \int
\dJJprime \\
& \exp\left[
-\beta \!\!\int \!\!d\vecx d\vecx' \,
{\J'}^{*}(\vecx) \left(
\tGzero(\kappa,\vecx,\vecx')
+ \V^{-1}(\kappa,\vecx)\delta^{3}(\vecx-\vecx')\right) \J'(\vecx')
\right],
\end{split}
\label{Zemfull}
\end{equation}
with $ Z_0 = \int \dA \dA^* \exp[-\beta \int d\vecx \, \E^*
\Hzero(\kappa)\E]$. Both factors $W$ and $Z_0$ contain
cutoff-dependent contributions but are independent of the separation
of the objects. Hence these factors cancel and can be ignored when we
consider a {\it change} in the energy due to a change of the object's
separations with the shape and the material composition of the objects
fixed. The kernel of the action in Eq.~\eqref{Zemfull} is the inverse
of the T-operator, i.e., $\T^{-1} = \tGzero + \V^{-1}$ which is
equivalent to
\begin{equation}
\label{eq:T-operator-def}
\T = \V(\tI + \tGzero\V)^{-1} \, .
\end{equation}
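The equivalence of the two forms is the standard operator identity $(\tGzero + \V^{-1})^{-1} = \V(\tI + \tGzero\V)^{-1}$, valid whenever the inverses exist. A quick finite-dimensional sanity check, with random matrices standing in for the (discretized) operators:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random matrices standing in for the discretized operators G0 and V.
G0 = rng.standard_normal((n, n))
V = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonal shift keeps V invertible

# T^{-1} = G0 + V^{-1}  versus  T = V (I + G0 V)^{-1}
T_from_inverse = np.linalg.inv(G0 + np.linalg.inv(V))
T_direct = V @ np.linalg.inv(np.eye(n) + G0 @ V)

assert np.allclose(T_from_inverse, T_direct)
```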
The Casimir energy at zero temperature (without the cutoff-dependent
parts) is hence
\begin{equation}
\label{eq:Energy_full}
\calE = -\frac{\hbar c}{2\pi} \int_0^\infty d\kappa \log\det \T \, .
\end{equation}
The determinant is here taken over the spatial indices $\vecx$ and
$\vecx'$, which are restricted to the objects since $\T$ vanishes if
$\vecx$ or $\vecx'$ are not on an object. To compute the determinant
we start from the expression for $\T^{-1}$ which yields the reciprocal
of the determinant. We decompose $\T^{-1}$ by introducing separate
position space basis functions for each object. The projection of the
currents onto this basis defines the object's multipole moments. This
yields a division of $\T^{-1}$ into blocks where each block is labeled by an
object.
The off-diagonal blocks are given by $\tGzero$ only and
describe the interaction of the multipoles on different objects. To
see this we choose for each object individually an eigenfunction basis
to expand the free Green's function,
\begin{equation}
\label{G0-expansion}
\tGzero(\kappa,\vecx,\vecx')
= \sum_{\aindex} \EoutaP(\kappa,\vecx_>) \otimes \E^{\rm
reg*}_\alpha(\kappa,\vecx'_<)
\end{equation}
with regular solutions $\E^{\rm
reg}_\alpha$ and outgoing solutions $\EoutaP$ of the free vector
Helmholtz equation, where $\vecx_<$ and $\vecx_>$ denote the position
with smaller and greater value of the ``radial'' variable of the
separable coordinates. The multipole moments of object $j$ are then
$Q_{j,\alpha}(\kappa)=\int d\vecx \J_j(\kappa,\vecx) \E^{\rm
reg*}_\alpha(\kappa,\vecx) $. Regular solutions form a complete set
and hence outgoing solutions can be expanded in terms of regular
solutions except in a region (enclosed by a surface of constant radial
variable) that contains the origin of the
coordinate system of object $i$. This expansion defines the
translation matrices $\U^{ji}_{\bindex,\aindex}$ via
\begin{equation}
\label{transoutreg}
\EoutaP(\kappa,\vecx_i) = \sum_{\bindex}
\U^{ji}_{\bindex\aindex}(\kappa,\vecX_{ji})
\EregbQ(\kappa,\vecx_j) \, ,
\end{equation}
where the definition of the coordinates is shown in
Fig.~\ref{fig:transboth}. The free Green's function then becomes
\begin{equation}
\tGzero(\kappa,\vecx,\vecx') = \sum_{\aindex,\bindex}
\EregaP(\kappa,\vecx_i) \otimes \U^{ji}_{\abindex}(\kappa,\vecX_{ji})
\EregbQcc(\kappa,\vecx_j')
\end{equation}
so that the off-diagonal blocks of $\T^{-1}$ are given by the
translation matrices. Equivalent translation matrices can be defined
between two sets of regular solutions as is necessary for one object
inside another, see Ref.~\refcite{Rahi:fk}.
\begin{figure}
\begin{center}
\includegraphics[width=0.65\linewidth]{figtransboth}
\end{center}
\caption{ Geometry of the configuration. The dotted lines show
surfaces separating the objects on which the radial variable is
constant. The translation vector $\vecX_{ij} = \vecx_i - \vecx_j =
-\vecX_{ji}$ describes the relative positions of the two origins. }
\label{fig:transboth}
\end{figure}
The diagonal blocks of $\T^{-1}$ are given by the matrix elements of
the T-operators $\T_j$ of the {\it individual} objects. By multiplying
$\T^{-1}$ by the T-operator $\T_\infty$ without the off-diagonal
blocks, which can be interpreted as describing a reference configuration
with infinite separations between the objects, one finds that (for
objects outside each other) the diagonal blocks are given by the
inverse of the matrix representing $\T_j$ in the basis $\EregaP$
\cite{Rahi:fk}. The physical meaning of this matrix follows from the
Lippmann-Schwinger equation for the full scattering solution
$\E_\alpha(\kappa,\vecx)$,
\begin{equation}
\label{eq:lipp-schw}
\E_\alpha(\kappa,\vecx)=\E^\text{reg}_\alpha(\kappa,\vecx) - \tGzero
\V_j \E_\alpha(\kappa,\vecx) = \E^\text{reg}_\alpha(\kappa,\vecx) -
\tGzero \T_j \E^\text{reg}_\alpha(\kappa,\vecx) \, .
\end{equation}
Using the expansion of Eq.~\eqref{G0-expansion}, the solution
sufficiently far away from the object (i.e., for positions that have
a radial variable larger than any point on the object) can be
expressed as
\begin{equation}
\label{eq:scatt_amp}
\E_\alpha(\kappa, \vecx) = \EregaP(\kappa, \vecx) -
\sum_{\bindex} \EoutbQ(\kappa, \vecx) \int
\EregbQcc(\kappa, \vecx')
\T_j(\kappa) \EregaP(\kappa, \vecx') d\vecx'\, ,
\end{equation}
where the integral defines the scattering amplitude $\F_{j,\beta\alpha}(\kappa)$
of object $j$. It can be obtained, e.g., from matching boundary
conditions at the surface of a dielectric object.
The Casimir energy (without cutoff-dependent contributions from $W$
and $Z_0$) can now be expressed as
\begin{equation}
\label{Elogdet}
\mathcal{E} = \frac{\hbar c}{2\pi} \int_0^\infty d\kappa
\log \det (\mathbb{M} \mathbb{M}_\infty^{-1}),
\end{equation}
where
\begin{equation}
\mathbb{M} =
\left(
\begin{array}{c c c c}
\F_1^{-1} & \U^{12} & \U^{13} & \cdots \\
\U^{21} & \F_2^{-1} & \U^{23} & \cdots \\
\cdots & \cdots & \cdots & \cdots
\end{array}
\right)
\end{equation}
and $\mathbb{M}^{-1}_\infty$ is the block diagonal matrix
$\text{diag}(\F_1, \F_2, \cdots)$. For the case of two objects
this expression simplifies to
\begin{equation}
\label{Elogdet2}
\mathcal{E} = \frac{\hbar c}{2\pi} \int_0^\infty d\kappa \log \det
\left(\tI - \F_1\U^{12}\F_2\U^{21}\right) \, .
\end{equation}
In order to obtain the free energy at nonzero temperature instead of
the ground state energy, we do not take the limit $\beta\to\infty$ in
Eq.~(\ref{free}) \cite{Lifshitz55}. Instead, the integral $\frac{\hbar
c}{2\pi} \int_0^\infty d\kappa$ is replaced everywhere by $\frac{1}{\beta}
\sum_{n}'$, where $c \kappa_n=\frac{2\pi n}{\hbar\beta}$ with
$n=0,1,2,3\ldots$ is the $n$th Matsubara frequency. A careful
analysis of the derivation shows that the zero frequency mode is
weighted by $1/2$ compared to the rest of the terms in the sum; this
modification of the sum is denoted by a prime on the summation
symbol.
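The passage between the zero-temperature integral and the primed Matsubara sum can be illustrated with a toy integrand (the exponential below merely stands in for the actual log-determinant, and units $\hbar c = 1$ are assumed):

```python
import math

hbar_c = 1.0  # units with hbar*c = 1

def g(kappa):
    # Toy stand-in for log det(...) under the kappa integral.
    return math.exp(-kappa)

def energy_T0():
    # hbar*c/(2*pi) * integral_0^infinity g(kappa) dkappa (here known exactly: 1)
    return hbar_c / (2 * math.pi) * 1.0

def free_energy(beta, nmax=20000):
    # (1/beta) * primed Matsubara sum: zero mode weighted by 1/2,
    # kappa_n = 2*pi*n/(hbar*c*beta)
    d = 2 * math.pi / (hbar_c * beta)
    s = 0.5 * g(0.0) + sum(g(n * d) for n in range(1, nmax + 1))
    return s / beta

# As beta -> infinity, the primed sum approaches the T = 0 integral.
assert abs(free_energy(beta=1000.0) - energy_T0()) < 1e-3
```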
\section{Applications}
In this section we demonstrate the applicability of the method through
some examples. Due to the lack of space, we only present the final
analytical and numerical results that all follow from
Eq.~\eqref{Elogdet} or Eq.~(\ref{Elogdet2}) by truncation of the
matrices at some order of partial waves, i.e., by considering only a
finite set of basis functions. At asymptotically large distances, the
interaction depends only on the dipole contribution, while with
decreasing distance the number of partial waves has to be increased.
Below we provide results both in the form of asymptotic series in
the inverse separation and as numerical results for a wide range of
distances.
\subsection{Sphere-plane}
First, we consider the sphere-plate geometry that has been
employed in the majority of recent experiments. At large distances,
the energy can be expanded in an asymptotic series in the inverse
separation. For a {\em dielectric sphere} in front
of a {\em perfectly reflecting mirror} with sphere-center to mirror separation
$L$ the Casimir energy is
\begin{equation}
\label{eq:energy-eps-mu-sphere}
\begin{split}
\raisetag{46pt}
{\mathcal E} &= -\frac{\hbar c}{\pi} \left\{
\frac{3}{8} (\alpha_1^\textsc{e} - \alpha_1^\textsc{m}) \frac{1}{L^4}
+\frac{15}{32} (\alpha_2^\textsc{e} - \alpha_2^\textsc{m} +2 \gamma_{13}^\textsc{e}
-2 \gamma_{13}^\textsc{m}) \frac{1}{L^6} \right.\\
& +\left. \frac{1}{1024} \left[ 23 (\alpha_1^\textsc{m})^2 - 14
\alpha_1^\textsc{m} \alpha_1^\textsc{e}
+23 (\alpha_1^\textsc{e})^2
+ 2160 (\gamma_{14}^\textsc{e}- \gamma_{14}^\textsc{m})
\right] \frac{1}{L^7} \right. \\
& +\left.\frac{7}{7200} \left[ 572 (\alpha_3^\textsc{e}-\alpha_3^\textsc{m}) + 675 \left(
9( \gamma_{15}^\textsc{e} - \gamma_{15}^\textsc{m}) -55
( \gamma_{23}^\textsc{e} - \gamma_{23}^\textsc{m})
\right)\right] \frac{1}{L^8} + \dots
\right\} \, ,
\end{split}
\end{equation}
where $\alpha_l^{\textsc{e}}$, $\alpha_l^{\textsc{m}}$ are the static
electric and magnetic multipole polarizabilities of the sphere of
order $l$ ($l=1$ for dipoles), and the coefficients
$\gamma^{\textsc{e}}_{ln}$, $\gamma^{\textsc{m}}_{ln}$ describe
finite-frequency corrections to these polarizabilities, i.e., terms
$\sim \kappa^{2l+n}$ in the low-$\kappa$ expansion of the T-matrix
element for the $l^{\rm th}$ partial wave.
Notice that the first three terms of the contribution at order $L^{-7}$ have
precisely the structure of the Casimir-Polder interaction between two
atoms with static dipole polarizabilities $\alpha_1^\textsc{m}$ and
$\alpha_1^\textsc{e}$ but it is reduced by a factor of $1/2^8$. This
factor and the distance dependence $\sim L^{-7}$ of this term suggest
that it arises from the interaction of the dipole fluctuations inside
the sphere with those inside its image at a distance $2L$. The
additional coefficient of $1/2$ in the reduction factor $(1/2)(1/2^7)$
can be traced back to the fact that the forces involved in bringing
the dipole in from infinity act only on the dipole and not on its
image. If the sphere is also assumed to be a {\em perfect reflector}, the energy
becomes
\begin{equation}
\label{eq:EM-energy-series}
{\mathcal E} = \frac{\hbar c}{\pi} \frac{1}{L} \sum_{j=4}^\infty b_j \left(
\frac{R}{L}\right)^{j-1} \, ,
\end{equation}
where the coefficients up to order $1/L^{11}$ are
\begin{eqnarray}
\label{eq:coeff_EM}
b_4&=&-\frac{9}{16}, \quad
b_5=0, \quad b_6=-\frac{25}{32}, \quad b_7=-\frac{3023}{4096}
\nonumber\\
\quad b_8&=&-\frac{12551}{9600},
\quad b_9=\frac{1282293}{163840},\nonumber \\
b_{10}&=&-\frac{32027856257}{722534400},
\,\,\, b_{11}=\frac{39492614653}{412876800} \, .
\end{eqnarray}
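For orientation, the series of Eq.~\eqref{eq:EM-energy-series} converges rapidly at large separations; a short sketch evaluating its partial sums with the exact coefficients above (energy in units of $\hbar c/\pi L$):

```python
from fractions import Fraction as F

# Coefficients b_j of Eq. (eq:coeff_EM) as exact rationals.
b = {4: F(-9, 16), 5: F(0), 6: F(-25, 32), 7: F(-3023, 4096),
     8: F(-12551, 9600), 9: F(1282293, 163840),
     10: F(-32027856257, 722534400), 11: F(39492614653, 412876800)}

def energy(r_over_l, jmax=11):
    """Partial sum of Eq. (eq:EM-energy-series) in units of hbar*c/(pi*L)."""
    x = F(r_over_l)
    return float(sum(b[j] * x**(j - 1) for j in range(4, jmax + 1)))

# At R/L = 1/10 the dipole term (j = 4) already captures the energy to ~1.5%.
E_leading = energy(F(1, 10), jmax=4)
E_full = energy(F(1, 10))
assert E_full < 0
assert abs(E_full / E_leading - 1) < 0.03
```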
Our method can also be employed to study the material
dependence of the interaction. When the sphere and the mirror
are described by a simple {\em plasma model}, we can obtain
the interaction energy again from Eq.~\eqref{Elogdet2} by
substituting the dielectric function on the imaginary frequency
axis,
\begin{equation}
\label{eq:epsilon_plasma}
\epsilon_p(ic\kappa) = 1+
\left(\frac{2\pi}{\lambda_p\kappa}\right)^2 \, ,
\end{equation}
into the T-matrices of sphere and mirror. From this we get at large
separations
\begin{equation}
\label{eq:E_plasma}
{\mathcal E} = -\frac{\hbar c}{\pi}\left[ f_4(\lambda_p/R) \frac{R^3}{L^4}
+f_5(\lambda_p/R)\frac{R^4}{L^5} +{\cal O}(L^{-6}) \right]
\end{equation}
with the functions
\begin{equation}
\label{eq:E_plasma_fcts}
\begin{split}
f_4(z) & =\frac{9}{16} +\frac{9}{64\pi^2} z^2 -\frac{9}{32\pi} z \coth\frac{2\pi}{z}\\
f_5(z) & = - \frac{13}{20\pi}z-\frac{21}{80\pi^3}
z^3+\frac{21}{40\pi^2} z^2 \coth \frac{2\pi}{z} \, .
\end{split}
\end{equation}
It is interesting that the amplitude $f_4$ of the leading term is not
universal but depends on the plasma wavelength $\lambda_p$. Only in
the two limits $\lambda_p/R\to 0$ and $\lambda_p/R\to\infty$ the
amplitude assumes material independent values, $9/16$ and $3/8$,
respectively. The first limit describes perfect reflection of electric
and magnetic fields at arbitrarily low frequencies and hence agrees
with the result of Eq.~\eqref{eq:EM-energy-series}. The change to the
second amplitude for large $\lambda_p$ can be understood when one
considers a London superconductor that is described at zero
temperature by the plasma dielectric function \cite{Haakh:vn}. If one associates
$\lambda_p$ with the penetration depth, the perfect reflector limit
results from the absence of any field penetration while the second limit
corresponds to a large penetration depth and hence the suppression
of the magnetic mode contribution to the Casimir energy, explaining
the reduced amplitude of $3/8$. The latter result follows also when
the objects are considered to be normal metals, described by the
{\it Drude model} dielectric function
\begin{equation}
\label{eq:eps_drude}
\epsilon_p(ic\kappa)=1+\frac{(2\pi)^2}{(\lambda_p\kappa)^2+\pi c
\kappa/\sigma} \, .
\end{equation}
For a sphere and a mirror made of a Drude metal, this function yields
the asymptotic energy
\begin{equation}
\label{eq:e_drude}
{\mathcal E} = -\frac{\hbar c}{\pi} \left[
\frac{3}{8} \frac{R^3}{L^4} -\frac{77}{384} \frac{R^3}{\sqrt{2\sigma /c} \, L^{9/2}}
- \left(\frac{c}{8\pi \sigma}
-\frac{\pi}{20}\frac{\sigma R^2}{c}\right)\frac{R^3}{L^5}+{\cal
O}(L^{-\frac{11}{2}})\right] \, .
\end{equation}
In fact, one observes that the leading term is universal and agrees
with the $\lambda_p\to\infty$ limit of the plasma model. Note that the
result of Eq.~\eqref{eq:e_drude} does not apply to arbitrarily large
dc conductivity $\sigma$. The conditions for the validity of
Eq.~\eqref{eq:e_drude} can be written as $L\gg R$, $L\gg c/\sigma$ and
$L\gg \sigma R^2/c$. The above results demonstrate strong
correlations between shape and material since for two parallel,
infinite plates, both the plasma and the Drude model yield at large
separations the same (universal) result as a perfect mirror description.
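The two limits of the amplitude $f_4$ quoted above are easy to confirm numerically from Eq.~\eqref{eq:E_plasma_fcts}; a minimal sketch:

```python
import math

def f4(z):
    """Amplitude f_4(z) of Eq. (eq:E_plasma_fcts), with z = lambda_p / R."""
    coth = 1.0 / math.tanh(2 * math.pi / z)
    return 9/16 + 9 * z**2 / (64 * math.pi**2) - 9 * z * coth / (32 * math.pi)

# Perfect-reflector limit lambda_p/R -> 0 gives 9/16 ...
assert abs(f4(1e-3) - 9/16) < 1e-3
# ... while lambda_p/R -> infinity gives the reduced amplitude 3/8.
assert abs(f4(1e3) - 3/8) < 1e-4
```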
In order to study short separations, Eq.~\eqref{Elogdet2} has to be
evaluated numerically by including sufficiently many partial waves.
The result of an extrapolation from $l=29$ partial waves is shown in
Fig.~\ref{fig:EM} in the perfect reflection limit\cite{Emig08-1}. At
small separations the result can be fitted to a power law of the form
\begin{equation}
\label{eq:E-PFA+corrections}
{\mathcal E} = {\mathcal E}_\text{PFA} \left[ 1+ \theta_1 \frac{d}{R} + \theta_2
\left(\frac{d}{R}\right)^2 + \ldots \right]
\end{equation}
with $ {\mathcal E}_\text{PFA} $ and $d$ defined in Fig. \ref{fig:EM}.
The coefficients $\theta_j$ measure corrections to the proximity force
approximation and are obtained from a fit of the function of
Eq.~(\ref{eq:E-PFA+corrections}) to the data points for the four
smallest studied separations. We find $\theta_1=-1.42 \pm 0.02$ and
$\theta_2=2.39 \pm 0.14$. This result is in agreement with numerical
findings in Ref.~\refcite{Maia_Neto08} but is in disagreement with an asymptotic
expansion for small distances \cite{Bordag:kx}. The latter yields
$\theta_1=-5.2$ and very small logarithmic corrections that however
can be ignored at the distances considered here. The origin of this
discrepancy is currently unclear but might be related to the
applicability of the asymptotic expansion to only much smaller
distances than accessible by current numerics.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]{sphere-plate}
\caption{Electromagnetic Casimir energy for the sphere-plate
geometry. The energy is scaled by the proximity force approximation
(PFA) energy ${\mathcal E}_\text{PFA} = -\frac{\pi^3}{720}
\frac{\hbar c R}{d^2} $. The asymptotic expansion of
Eq.~(\ref{eq:EM-energy-series}) is shown as a dashed line. Inset:
Corrections to the PFA at small distances as function of $d=L-R$.}
\label{fig:EM}
\end{center}
\end{figure}
\subsection{Three-body effects}
\begin{figure}[htbp]
\includegraphics[width=0.29\linewidth]{2spheres+plate}
\includegraphics[width=.7\linewidth]{mirror_dipoles}
\caption{\label{fig:2sphere+plate}Left: Geometry of the two-sphere/atom and
sidewall system. Shown are also the mirror images (grey) and two-
and three-body contributions (solid and dashed curly lines,
respectively). Right: Typical orientations of electric (E) and magnetic
(M) dipoles and image dipoles for $H/L\to 0$ and $H/L\to\infty$.}
\end{figure}
Casimir interactions are not pair-wise additive. To study the
consequences of this property, we consider the case of two identical,
general polarizable objects near a perfectly reflecting wall in the
dipole approximation, see Fig.~\ref{fig:2sphere+plate}. This situation
applies to ground state atoms and also to general objects at {\it
large} separations. The separation between the objects is $L$ and
the separation of each of them from the wall is $H$. In dipole
approximation, the retarded limit of the interaction is described by
the static electric ($\alpha_z$, $\alpha_\|$) and magnetic ($\beta_z$,
$\beta_\|$) dipole polarizabilities of the objects which can be
different in the directions perpendicular ($z$) and parallel ($\|$) to
the wall. In the absence of the wall the potential for the two
polarizable objects is given by the well-known Casimir-Polder (CP) potential
\begin{equation}
\label{eq:E_CP}
{ \mathcal E}_{2,|}(L) = -\frac{\hbar c}{8\pi L^7} \!\!
\left[ 33 \alpha_\|^2 +\! 13 \alpha_z^2
- \! 14 \alpha_\|\beta_z + (\alpha \!\leftrightarrow\! \beta) \!\right] \, .
\end{equation}
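As a quick illustration of Eq.~\eqref{eq:E_CP}, one can insert the static dipole polarizabilities of a perfectly reflecting sphere of radius $R$, which are isotropic with $\alpha = R^3$ and $\beta = -R^3/2$ (standard values, quoted here as an assumption since they are not derived above); the bracket then evaluates to $(143/2)R^6$, i.e., ${\cal E} = -(143/16\pi)\,\hbar c R^6/L^7$. A sketch in units $\hbar c = 1$:

```python
import math

def e_cp(L, a_par, a_z, b_par, b_z):
    """Casimir-Polder potential of Eq. (eq:E_CP), with the (alpha <-> beta) term."""
    bracket = (33 * a_par**2 + 13 * a_z**2 - 14 * a_par * b_z
               + 33 * b_par**2 + 13 * b_z**2 - 14 * b_par * a_z)
    return -bracket / (8 * math.pi * L**7)

# Perfectly reflecting sphere (assumed): isotropic alpha = R^3, beta = -R^3/2.
R, L = 1.0, 10.0
E = e_cp(L, R**3, R**3, -R**3 / 2, -R**3 / 2)
assert math.isclose(E, -143 / (16 * math.pi) * R**6 / L**7)
```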
The $L$-dependent part of the interaction energy in the presence of
the wall is
\begin{equation}
\label{eq:E_CP_plane}
{\mathcal E}_{\underline{\circ\circ}}(L,H) = \cE_{2,|}(L) + \cE_{2,\backslash}(D,L) + \cE_3(D,L)
\end{equation}
with $D=\sqrt{L^2+4H^2}$. The change in the relative orientation of the
objects with $\ell=L/D$ leads to the modified 2-body CP potential
\begin{equation}
\label{eq:E_CP_diag}
\begin{split}
\raisetag{35pt}
\cE_{2,\backslash}(D,L) &= -\frac{\hbar c}{8\pi D^7} \!\!\left[ 26\alpha_\|^2
+\! 20 \alpha_z^2 -\! 14 \ell^2 (4\alpha_\|^2-9\alpha_\|\alpha_z
+5\alpha_z^2)\right.\\
& + \left. 63\ell^4 (\alpha_\| - \alpha_z)^2
- 14\!\left(\alpha_\| \beta_\|(1\!-\!\ell^2) +\!\ell^2 \alpha_\| \beta_z \!\right) + (\alpha\!\leftrightarrow \!\beta) \right] \, .
\end{split}
\end{equation}
The 3-body energy $\cE_3(D,L)$ describes the collective interaction
between the two objects and one image object. It is given by
\begin{equation}
\label{eq:E_3}
\begin{split}
\raisetag{15pt}
\cE_3(D,L) &= \frac{4\hbar c}{\pi} \frac{1}{L^3D^4(\ell+1)^5}\left[ \Big(
3\ell^6 +15\ell^5+28\ell^4+20\ell^3+6\ell^2-5\ell-1\right)\\
&\times \left(\alpha_\|^2-\beta_\|^2\right)
- \left(3\ell^6+15\ell^5+24\ell^4-10\ell^2-5\ell-1\right)
\left(\alpha_z^2-\beta_z^2\right)\\
& +4\left(\ell^4+5\ell^3+\ell^2\right)\left(\alpha_z\beta_\|-\alpha_\|\beta_z
\right)\Big] \, .
\end{split}
\end{equation}
It is instructive to consider the two limits $H\ll L$ and $H\gg L$.
For $H\ll L$, $\cE_{\underline{\circ\circ}}$ turns out to be the CP
potential of Eq.~\eqref{eq:E_CP} with the replacements $\alpha_z\to
2\alpha_z$, $\alpha_\|\to 0$, $\beta_z\to 0$, $\beta_\|\to
2\beta_\|$. The 2-body and 3-body contributions add constructively or
destructively, depending on the relative orientation of a dipole and
its image which together form a dipole of zero or twice the original
strength (see Fig.~\ref{fig:2sphere+plate}).
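This limit can be verified directly from Eqs.~\eqref{eq:E_CP}--\eqref{eq:E_3}: at $H=0$ one has $D=L$ and $\ell=1$, and the three contributions combine into the CP potential with the doubled and vanishing polarizabilities stated above. A numerical sketch in units $\hbar c = 1$:

```python
import math

def e2_aligned(L, ap, az, bp, bz):
    # Eq. (eq:E_CP): 2-body CP potential, separation parallel to the wall
    return -(33*ap**2 + 13*az**2 - 14*ap*bz
             + 33*bp**2 + 13*bz**2 - 14*bp*az) / (8*math.pi*L**7)

def e2_diag(D, L, ap, az, bp, bz):
    # Eq. (eq:E_CP_diag): modified 2-body potential, l = L/D
    l = L / D
    def half(ap, az, bp, bz):
        return (26*ap**2 + 20*az**2
                - 14*l**2*(4*ap**2 - 9*ap*az + 5*az**2)
                + 63*l**4*(ap - az)**2
                - 14*(ap*bp*(1 - l**2) + l**2*ap*bz))
    return -(half(ap, az, bp, bz) + half(bp, bz, ap, az)) / (8*math.pi*D**7)

def e3(D, L, ap, az, bp, bz):
    # Eq. (eq:E_3): 3-body energy of the two objects and one image
    l = L / D
    pref = 4 / (math.pi * L**3 * D**4 * (l + 1)**5)
    p1 = 3*l**6 + 15*l**5 + 28*l**4 + 20*l**3 + 6*l**2 - 5*l - 1
    p2 = 3*l**6 + 15*l**5 + 24*l**4 - 10*l**2 - 5*l - 1
    p3 = 4*(l**4 + 5*l**3 + l**2)
    return pref * (p1*(ap**2 - bp**2) - p2*(az**2 - bz**2)
                   + p3*(az*bp - ap*bz))

# H -> 0 (D -> L): total energy equals the CP potential with
# alpha_z -> 2 alpha_z, alpha_par -> 0, beta_z -> 0, beta_par -> 2 beta_par.
L, ap, az, bp, bz = 1.0, 1.0, 2.0, 0.5, 0.25
total = (e2_aligned(L, ap, az, bp, bz)
         + e2_diag(L, L, ap, az, bp, bz)
         + e3(L, L, ap, az, bp, bz))
target = e2_aligned(L, 0.0, 2*az, 2*bp, 0.0)
assert math.isclose(total, target)
```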
For $H \gg L$ the leading correction to the CP potential of
Eq.~\eqref{eq:E_CP} comes from the 3-body energy. The energy then
becomes (up to order $H^{-6}$)
\begin{equation}
\label{eq:E_3_large_H}
\cE_{\underline{\circ\circ}}(L,H) = { \mathcal E}_{2,|}(L)+\frac{\hbar c}{\pi} \!\!\left[ \!\frac{\alpha_z^2-\alpha_\|^2}{4 L^3H^4} +
\frac{9\alpha_\|^2-\alpha_z^2 -2\alpha_\| \beta_z}{8LH^6} - (\alpha\leftrightarrow \beta)\!\right] .
\end{equation}
The signs of the polarizabilities in the leading term $\sim H^{-4}$
can be understood from the relative orientation of the dipole of one
atom and the image dipole of the other atom, see Fig.~\ref{fig:2sphere+plate}.
If these two electric (magnetic) dipoles are almost perpendicular to
their distance vector they contribute attractively (repulsively) to
the potential between the two original objects. If these electric
(magnetic) dipoles are almost parallel to their distance vector they
yield a repulsive (attractive) contribution. For isotropic
polarizabilities the leading term of Eq.~\eqref{eq:E_3_large_H}
vanishes and the electric (magnetic) part $\sim H^{-6}$ of the 3-body
energy is always repulsive (attractive).
Next, we study the same geometry as before but with the objects assumed to
be two perfectly reflecting spheres of radius $R$. The lengths $L$ and
$H$ are measured now from the centers of the spheres, see
Fig.~\ref{fig:2sphere+plate}. Here we do not limit the analysis to
large separations but consider arbitrary distances and include higher
order multipole moments than just dipole polarizability. For $R \ll
L,\, H$ and arbitrary $H/L$ the result for the force can be written as
\begin{equation}
\label{eq:force-of-L}
F = \frac{\hbar c}{\pi R^2} \sum_{j=6}^\infty f_j(H/L) \left(\frac{R}{L}\right)^{j+2} \, .
\end{equation}
The functions $f_j$ can be computed exactly. We have obtained them up to $j=11$
and the first three are (with $s\equiv \sqrt{1+4h^2}$)
\begin{align}
\label{eq:h-fcts}
& f_6(h) = -\frac{1}{16h^8}\Big[s^{-9}(18 + 312 h^2 + 2052 h^4 + 6048 h^6
\nonumber\\
&\! + 5719 h^8) + 18 - 12 h^2 + 1001 h^8\Big] \, , \quad f_7(h)=0\, , \\
& f_8(h) = -\frac{1}{160 h^{12}}\Big[s^{-11} (6210 + 140554 h^2 + 1315364 h^4
\nonumber\\
&\! + 6500242 h^6 + \! 17830560 h^8 + \! 25611168 h^{10} + \! 15000675 h^{12}) \nonumber\\
&\! - 6210 - 3934 h^2 + 764 h^4 - 78 h^6 + 71523 h^{12}\Big] \, .
\end{align}
For $H \gg L$ one has
$f_6(h) = -1001/16 +3/(4h^6)+ {\cal O}(h^{-8})$,
$f_8(h)=-71523/160+39/(80h^6)+ {\cal O}(h^{-8})$ so that the wall
induces weak repulsive corrections. For $H \ll L$,
$f_6(h)=-791/8+6741 h^2/8 +{\cal O}(h^4)$, $f_8(h)=-60939/80 + 582879
h^2/80 +{\cal O}(h^4)$ so that the force amplitude decreases when the spheres are moved a small
distance away from the wall. This proves the existence of a minimum in
the force amplitude as a function of $H/R$ for fixed, sufficiently
small $R/L$. We note that all $f_j(h)$ are finite for
$h\to \infty$ but some diverge for $h\to 0$, e.g., $f_9 \sim f_{11}
\sim h^{-3}$, making them important for small $H$.
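The quoted large-$h$ behavior of $f_6$ can be checked numerically against the closed form of Eq.~\eqref{eq:h-fcts}; a sketch:

```python
import math

def f6(h):
    """f_6(h) of Eq. (eq:h-fcts), with s = sqrt(1 + 4 h^2)."""
    s = math.sqrt(1 + 4 * h**2)
    return -(s**-9 * (18 + 312*h**2 + 2052*h**4 + 6048*h**6 + 5719*h**8)
             + 18 - 12*h**2 + 1001*h**8) / (16 * h**8)

# Large wall distance: f6(h) -> -1001/16 + 3/(4 h^6) + O(h^-8)
h = 10.0
assert abs(f6(h) - (-1001/16 + 3 / (4 * h**6))) < 1e-7
assert f6(h) < 0  # the wall only weakly reduces the attraction at large h
```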
To obtain the interaction at smaller separations or larger radius, we
have computed the energy $\cE_{\underline{\circ\circ}}$ and force
$F=-\partial \cE_{\underline{\circ\circ}} /\partial L$ between the
spheres numerically \cite{Rodriguez-Lopez:ys}. In order to show the
effect of the wall, we plot the energy and force
normalized to the results for two spheres without a wall.
Fig.~\ref{fig:2spheres+plate_num} shows the force between the two
spheres as a function of the wall distance for fixed $L$. When the
spheres approach the wall, the force first decreases slightly if $R/L
\lesssim 0.3$ and then increases strongly under a further reduction of
$H$. For $R/L \gtrsim 0.3$ the force increases monotonically as the
spheres approach the wall. This agrees with the prediction of the
large distance expansion. The expansion of Eq.~\eqref{eq:force-of-L}
with $j=10$ terms is also shown in Fig.~\ref{fig:2spheres+plate_num}
for $R/L\le 0.2$. Its validity is limited to large $L/R$ and not too
small $H/R$; it fails completely for $R/L>0.2$ and hence is not shown
in this range.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.8\linewidth]{2spheres+plate_num}
\caption{\label{fig:2spheres+plate_num}Numerical results for the force (dots) between
two spheres as function of the sidewall separation $H/R$ for
different sphere separations $R/L$. Shown are also the analytical
results of Eq.~\eqref{eq:force-of-L}, including terms up to $j=10$
for $R/L\le 0.2$ (solid curves). Inset: Magnification of the
nonmonotonicity.}
\end{center}
\end{figure}
\subsection{Orientation dependence}
In this section we investigate the shape and orientation dependence of
the Casimir force using Eq.~\eqref{Elogdet2}. As examples we focus on
ellipsoids, computing the orientation dependent force between two
spheroids, and between a spheroid and a plane \cite{Emig:2009zr}. For
two anisotropic objects, the CP potential of Eq.~\eqref{eq:E_CP} must
be generalized. In terms of the Cartesian components of the standard
electric (magnetic) polarizability matrix $\mathbb{\alpha}$
($\mathbb{\beta}$), the asymptotic large distance potential of two
objects (with the $\hat{z}$ axis pointing from one object to the
other), can be written as
\begin{equation}
\label{eq:energy_aniso}
\begin{split}
\cE &= -\frac{\hbar c}{d^7} \frac{1}{8\pi} \bigg\{
13\left( \alpha^1_{xx}\alpha^2_{xx} + \alpha^1_{yy}\alpha^2_{yy}+2 \alpha^1_{xy}\alpha^2_{xy}\right) \\
&+ 20 \, \alpha^1_{zz}\alpha^2_{zz} -30 \left( \alpha^1_{xz}\alpha^2_{xz}
+ \alpha^1_{yz}\alpha^2_{yz}\right) +
\left(\mathbb{\alpha}\to\mathbb{\beta}\right) \\
&- 7 \left( \alpha^1_{xx}\beta^2_{yy} + \alpha^1_{yy}\beta^2_{xx}
-2 \alpha^1_{xy}\beta^2_{xy} \right) +\left( 1\leftrightarrow 2\right)
\bigg\} \, .
\end{split}
\end{equation}
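As a consistency check, Eq.~\eqref{eq:energy_aniso} reduces to the CP potential of Eq.~\eqref{eq:E_CP} for two identical objects with diagonal polarizability tensors: with $\hat z$ along the separation and (say) $\hat x$ along the eventual wall normal, the components are $(\alpha_z, \alpha_\|, \alpha_\|)$. A numerical sketch in units $\hbar c = 1$:

```python
import math

def e_aniso(d, a, b):
    """Eq. (eq:energy_aniso) for two identical objects with diagonal
    polarizability tensors a = (a_xx, a_yy, a_zz), b likewise; z joins them."""
    axx, ayy, azz = a
    bxx, byy, bzz = b
    bracket = (13 * (axx**2 + ayy**2) + 20 * azz**2
               + 13 * (bxx**2 + byy**2) + 20 * bzz**2
               - 7 * (axx*byy + ayy*bxx) * 2)  # the (1 <-> 2) term doubles
    return -bracket / (8 * math.pi * d**7)

# Map wall-frame components (x normal to the wall, separation along z):
ap, az, bp, bz = 1.0, 2.0, 0.5, 0.25
d = 1.0
E = e_aniso(d, (az, ap, ap), (bz, bp, bp))
# Eq. (eq:E_CP) for the same polarizabilities:
E_cp = -(33*ap**2 + 13*az**2 - 14*ap*bz
         + 33*bp**2 + 13*bz**2 - 14*bp*az) / (8*math.pi*d**7)
assert math.isclose(E, E_cp)
```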
For the case of an
ellipsoidal object with static electric permittivity $\epsilon$ and
magnetic permeability $\mu$, the polarizability tensors are diagonal
in a basis oriented to its principal axes, with elements (for
$i\in\{1,2,3\}$)
\begin{equation}
\label{eq:pol-tensor-diag}
\alpha_{ii}^0 = \frac{V}{4\pi} \frac{\epsilon-1}{1+(\epsilon-1)n_i}\, ,\,
\beta_{ii}^0 = \frac{V}{4\pi} \frac{\mu-1}{1+(\mu-1)n_i}\,,
\end{equation}
where $V=4\pi r_1 r_2 r_3/3$ is the ellipsoid's volume. In the case of
spheroids, for which $r_1=r_2=R$ and $r_3 = L/2$, the so-called
depolarizing factors can be expressed in terms of elementary
functions,
\begin{equation}
n_1=n_2=\frac{1-n_3}{2}, \, n_3 = \frac{1-e^2}{2e^3} \left(\log
\frac{1+e}{1-e} - 2 e \right),
\label{eq:depolarizing}
\end{equation}
where the eccentricity $e = \sqrt{1 - \frac{4R^2}{L^2}}$ is real for a
prolate spheroid ($L > 2R$) and imaginary for an oblate spheroid ($L <
2R$). The polarizability tensors for an arbitrary orientation are then
obtained as $\mathbb{\alpha}={\cal R}^{-1}\mathbb{\alpha}^0{\cal R}$,
where ${\cal R}$ is the matrix that rotates the principal axis of the
spheroid to the Cartesian basis, i.e. ${\cal
R}(1,2,3)\to(x,y,z)$. Note that for rarefied media with
$\epsilon\simeq 1$, $\mu\simeq 1$ the polarizabilities are isotropic
and proportional to the volume. Hence, to leading order in
$\epsilon-1$ the interaction is orientation independent at
asymptotically large separations, as we would expect, since pairwise
summation is valid for $\epsilon-1\ll 1$. In the following we focus on
the interesting opposite limit of two identical perfectly reflecting
spheroids. We first consider prolate spheroids with $L \gg R$. The
orientation of each ``needle'' relative to the line joining them (the
initial $z$-axis) is parameterized by the two angles $(\theta,\psi)$,
as depicted in Fig.~\ref{fig:cigars}. Then the energy is
\begin{equation}
\label{eq:energy-cylidenr-general}
\begin{split}
\raisetag{55pt}
{\cal E}(\theta_1,\theta_2,\psi) &= -\frac{\hbar c}{d^7} \bigg\{
\frac{5L^6}{1152 \pi \left( \ln \frac{L}{R} - 1\right)^2}
\bigg[\cos^2\theta_1 \cos^2\theta_2\\
+ &\frac{13}{20}\cos^2\psi \sin^2 \theta_1\sin^2\theta_2
- \frac{3}{8} \cos\psi \sin 2\theta_1 \sin 2\theta_2\bigg]
+{\cal O}\bigg(\frac{L^4R^2}{\ln\frac{L}{R}}\bigg)\bigg\}\, ,
\end{split}
\end{equation}
where $\psi\equiv\psi_1-\psi_2$. It is minimized for two
needles aligned parallel to their separation vector. At almost all
orientations the energy scales as $L^6$, and vanishes logarithmically
slowly as $R\to 0$. The latter scaling changes when one needle is
orthogonal to $\hat{z}$ (i.e. $\theta_1=\pi/2$), while the other is
either parallel to $\hat{z}$ ($\theta_2=0$) or has an arbitrary
$\theta_2$ but differs by an angle $\pi/2$ in its rotation about the
$z$-axis (i.e. $\psi_1-\psi_2=\pi/2$). In these cases the energy
comes from the next order term in
Eq.~(\ref{eq:energy-cylidenr-general}), and takes the form
\begin{equation}
\label{eq:crossed-cigars-finite-theta}
{\cal E}\left(\frac{\pi}{2},\theta_2,\frac{\pi}{2}\right) =
-\frac{\hbar c}{1152 \pi \, d^7} \frac{L^4R^2}{\ln\frac{L}{R} - 1}
\left( 73+7\cos 2\theta_2
\right) \, ,
\end{equation}
which shows that the least favorable configuration corresponds to two
needles orthogonal to each other and to the line joining them.
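The geometry above can be probed with a short numerical sketch (function names and parameter values are ours): the depolarizing factors of Eq.~\eqref{eq:depolarizing} sum to one, and the square-bracket angular factor of Eq.~\eqref{eq:energy-cylidenr-general} attains its maximum value, $1$, for needles parallel to the separation axis, consistent with the stated energy minimum.

```python
import numpy as np

def depolarizing_factors(R, L):
    """n1 = n2 and n3 of Eq. (eq:depolarizing) for a prolate spheroid (L > 2R)."""
    e = np.sqrt(1.0 - 4.0*R**2/L**2)           # eccentricity, real in the prolate case
    n3 = (1.0 - e**2)/(2.0*e**3) * (np.log((1.0 + e)/(1.0 - e)) - 2.0*e)
    return (1.0 - n3)/2.0, (1.0 - n3)/2.0, n3

def needle_angular_factor(t1, t2, psi):
    """Square-bracket factor of Eq. (eq:energy-cylidenr-general);
    larger values mean stronger attraction."""
    return (np.cos(t1)**2 * np.cos(t2)**2
            + (13.0/20.0)*np.cos(psi)**2 * np.sin(t1)**2 * np.sin(t2)**2
            - (3.0/8.0)*np.cos(psi)*np.sin(2.0*t1)*np.sin(2.0*t2))

n1, n2, n3 = depolarizing_factors(R=1.0, L=4.0)    # a 4:1 prolate spheroid
g = np.linspace(0.0, np.pi, 91)
vals = needle_angular_factor(g[:, None, None], g[None, :, None], g[None, None, :])
# the factors sum to one with n3 < 1/3 < n1; the angular factor peaks at 1
# for needles aligned with the separation vector
```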
For perfectly reflecting oblate spheroids with $R\gg L/2$, the
orientation of each ``pancake'' is again described by a pair of angles
$(\theta,\psi)$, as depicted in Fig.~\ref{fig:pancakes}. To leading
order at large separations, the energy is given by
\begin{equation}
\label{eq:energy_oblate}
\begin{split}
\cE &= -\frac{\hbar c}{d^7} \bigg\{
\frac{R^6}{144\pi^3} \bigg[
765 - 5(\cos 2\theta_1+\cos 2\theta_2)
+237 \cos 2\theta_1 \cos 2\theta_2 \\
&+372 \cos 2\psi \sin^2\theta_1\sin^2\theta_2
- 300 \cos \psi\sin 2\theta_1 \sin 2\theta_2 \bigg]
+{\cal O}\big( {R^5L}\big)\bigg\} \, .
\end{split}
\end{equation}
The leading dependence is proportional to $R^6$, and does not
disappear for any choice of orientations. Furthermore, this
dependence remains even as the thickness of the pancake is taken to
zero ($L\to 0$). This is very different from the case of the needles,
where the interaction energy vanishes with thickness as
$\ln^{-1}(L/R)$. The lack of $L$ dependence is due to the assumed
perfect reflectivity. The energy is minimal for two pancakes
lying on the same plane ($\theta_1=\theta_2=\pi/2$, $\psi=0$) and has
energy $-\hbar c \, (173/18\pi^3) R^6/d^7$. When the two pancakes are
stacked on top of each other, the energy is increased to $-\hbar c
\,(62/9\pi^3) R^6/d^7$. The least favorable configuration is when the
pancakes lie in perpendicular planes, i.e., $\theta_1=\pi/2$,
$\theta_2=0$, with an energy $-\hbar c\, (11/3\pi^3) R^6/d^7$.
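The three quoted amplitudes follow directly from Eq.~\eqref{eq:energy_oblate}; a brief numerical check (our sketch; the function name is illustrative) of the square bracket divided by $144$ confirms them:

```python
import numpy as np

def pancake_bracket(t1, t2, psi):
    """Square-bracket factor of Eq. (eq:energy_oblate)."""
    return (765 - 5*(np.cos(2*t1) + np.cos(2*t2))
            + 237*np.cos(2*t1)*np.cos(2*t2)
            + 372*np.cos(2*psi)*np.sin(t1)**2*np.sin(t2)**2
            - 300*np.cos(psi)*np.sin(2*t1)*np.sin(2*t2))

# bracket/144 gives the amplitude of -(hbar*c/pi^3) R^6/d^7 in each configuration
coplanar = pancake_bracket(np.pi/2, np.pi/2, 0.0) / 144   # expect 173/18
stacked  = pancake_bracket(0.0, 0.0, 0.0) / 144           # expect 62/9
perp     = pancake_bracket(np.pi/2, 0.0, 0.0) / 144       # expect 11/3
```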
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.9\linewidth]{cigar_new}
\caption{\label{fig:cigars} (Color online) Orientation of a prolate (cigar-shaped)
spheroid: The symmetry axis (initially the $z$-axis) is rotated by
$\theta$ about the $x$-axis and then by $\psi$ about the $z$-axis.
For two such spheroids, the energy at
large distances is given by Eq.~\eqref{eq:energy-cylidenr-general}.
The latter is depicted at fixed distance $d$, and for
$\psi_1=\psi_2$, by a contour plot as a function
of the angles $\theta_1$, $\theta_2$ for the $x$-axis rotations.
Minima (maxima) are marked by filled (open) dots.}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.9\linewidth]{pancake_new}
\caption{\label{fig:pancakes} (Color online) As in
Fig.~\ref{fig:cigars} for oblate (pancake-shaped) spheroids, with a
contour plot of energy at large separations.}
\end{center}
\end{figure}
For an anisotropic object interacting with a perfectly reflecting
mirror, at leading order the CP potential generalizes to
\begin{equation}
\label{eq:energy_aniso_wall}
\cE = -\frac{\hbar c}{d^4} \frac{1}{8\pi} \tr (\alpha-\beta )
+{\cal O}(d^{-5})\, ,
\end{equation}
which is clearly independent of orientation. Orientation dependence
in this system thus comes from higher multipoles. The next order also
vanishes, so the leading term is the contribution from the partial
waves with $l=3$ for which the scattering matrix is not known
analytically. However, we can obtain the preferred orientation by
considering a distorted sphere in which the radius $R$ is deformed to
$R+\delta f(\vartheta,\varphi)$. The function $f$ can be expanded
into spherical harmonics $Y_{lm}(\vartheta,\varphi)$, and spheroidal
symmetry can be mimicked by choosing $f=Y_{20}(\vartheta,\varphi)$.
The leading orientation dependent part of the energy is then obtained
as
\begin{equation}
\cE_f = - \hbar c \frac{1607}{640 \sqrt{5} \pi^{3/2}} \frac{\delta R^4}{d^6} \cos(2\theta) \,.
\end{equation}
A prolate spheroid ($\delta>0$) thus minimizes its energy by pointing
towards the mirror, while an oblate spheroid ($\delta<0$) prefers to
lie in a plane perpendicular to the mirror. (We assume that the
perturbative results are not changed for large distortions.) These
configurations are also preferred at small distances $d$, since (at
fixed distance to the center) the object reorients to minimize the
closest separation. Interestingly, the latter conclusion is not
generally true. In Ref.~\refcite{Emig:2009zr} it has been shown that there
can be a transition in preferred orientation as a function of $d$ in
the simpler case of a scalar field with Neumann boundary conditions.
The separation at which this transition occurs varies with the
spheroid's eccentricity.
\subsection{Material dependence}
In this section we shall discuss some characteristic effects of the
Casimir interaction between metallic nano-particles by studying two spheres
with {\it finite} conductivity in the limit where their radius $R$ is
much smaller than their separation $d$. We assume further that $R$ is
large compared to the inverse Fermi wave vector $\pi/k_F$ of the
metal. Since typically $\pi/k_F$ is of the order of a few Angstrom,
this assumption is reasonable even for nano-particles.
Theories for the optical properties of small
metallic particles \cite{Wood:1982nl} suggest a Drude dielectric function
\begin{equation}
\label{eq:eps_Drude}
\epsilon(ic\kappa) = 1+4\pi \frac{\sigma(ic\kappa)}{c\kappa} \, ,
\end{equation}
where $\sigma(ic\kappa)$ is the conductivity, which approaches the dc
conductivity $\sigma_{dc}$ as $\kappa\to 0$. For bulk metals
$\sigma_{dc}=\omega_p^2\tau/4\pi$ where $\omega_p=\sqrt{4e^2k_F^3/3\pi
m_e}$ is the plasma frequency with electron charge $e$ and electron
mass $m_e$, and $\tau$ is the relaxation time. With decreasing
dimension of the particle, $\sigma_{dc}(R)$ is reduced compared to its
bulk value due to finite size effects and hence becomes a function of
$R$ \cite{Wood:1982nl}. In analogy to the result for a sphere and a
plate described by the Drude model, we obtain the large-distance
expansion of the energy
\begin{equation}
\label{eq:E_drude_spheres}
\cE = -\hbar c \, \frac{23}{4\pi} \frac{R^6}{L^7} -\left(
\frac{R\sigma_{dc}(R)}{c} -\frac{45}{4\pi^2} \frac{c}{R\sigma_{dc}(R)}\right) \frac{R^7}{L^8} + \ldots \, .
\end{equation}
As in the sphere-plate case, the leading term is material independent
but different from that of the perfect metal limit (where the
amplitude is $143/16\pi$) since only the electric polarization
contributes. At next order, the first and second terms in the
parentheses come from magnetic and electric dipole fluctuations,
respectively. The term $\sim 1/L^8$ is absent in the interaction
between perfectly conducting spheres. The limit of perfect
conductivity, $\sigma_{dc}\to\infty$, cannot be taken in
Eq.~(\ref{eq:E_drude_spheres}) since this limit does not commute with
the large-$L$ expansion.
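The competition between magnetic and electric dipole fluctuations in the $1/L^8$ term can be made explicit with a small sketch (the crossover value is our deduction, obtained by setting the parenthesis of Eq.~(\ref{eq:E_drude_spheres}) to zero):

```python
import math

def next_order_bracket(x):
    """Parenthesis of the 1/L^8 term in Eq. (eq:E_drude_spheres),
    with x = R*sigma_dc(R)/c dimensionless."""
    return x - (45.0/(4.0*math.pi**2))/x

# the bracket changes sign at x* = 3*sqrt(5)/(2*pi) ~ 1.07: below it the
# electric-dipole contribution dominates, above it the magnetic one does
x_star = 3.0*math.sqrt(5.0)/(2.0*math.pi)
```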
\begin{figure}[h]
\includegraphics[width=11cm]{sigma}
\caption{\label{Fig:sigma} Dimensionless dc conductivity
$\hat\sigma_{dc}(R)$ in units of $e^2/2\hbar a_0$ (with Bohr radius
$a_0$) for an aluminum sphere with $\epsilon_F=11.63$\,eV,
$\pi/k_F=1.8$\,\AA{} and
$\tau=0.8\times 10^{-14}$\,s as a function of the radius $R$, measured
in units of $\pi/k_F$. Also shown is the corresponding ratio
$R\sigma_{dc}(R)/c$ that determines the Casimir interaction of
Eq.~(\ref{eq:E_drude_spheres}). The bulk dc conductivity
$\hat\sigma_{dc}(\infty)=17.66$ is indicated by the dashed line.}
\end{figure}
In order to estimate the effect of finite conductivity and its
dependence on the size of the nano-particle, we have to employ a
theory that can describe the evolution of $\sigma_{dc}(R)$ with the
particle size. A theory for the dielectric function of a cubical
metallic particle of dimensions $R \gg \pi/k_F$ has been developed
within the random phase approximation in the limit of low frequencies
$\ll c/R$ \cite{Wood:1982nl}. In this theory it is further assumed
that the discreteness of the electronic energy levels, and not the
inhomogeneity of the charge distribution, is important. This implies
that the particle responds only at the wave vector of the incident
field, which is a rather common approximation for small particles.
From an electron number-conserving relaxation time approximation the
complex dielectric function is obtained which yields the
size-dependent dc conductivity for a cubic particle of volume $a^3$
\cite{Wood:1982nl}. It has been shown that the detailed shape of the
particle does not matter much, and we can set $a=(4\pi/3)^{1/3}R$
which defines the volume-equivalent sphere radius $R$. For
$\pi/k_F\simeq a$ the nano-particle ceases to be conducting,
corresponding to a metal--insulator transition due to the localization
of electrons for particles with a size of the order of the mean free
path. It is instructive to consider the size dependence of
$\sigma_{dc}(R)$ and of the Casimir interaction for a particular
choice of material. Following Ref.~\refcite{Wood:1982nl}, we focus on
small aluminum spheres with Fermi energy $\epsilon_F=11.63$\,eV and
$\tau=0.8\times 10^{-14}$\,s. These parameters correspond to
$\pi/k_F=1.8$\,\AA{} and a plasma
wavelength $\lambda_p=79$nm. It is useful to introduce the
dimensionless conductivity $\hat\sigma_{dc}(R)$, which is measured in
units of $e^2/2\hbar a_0$ with Bohr radius $a_0$, so that the
important quantity of Eq.~(\ref{eq:E_drude_spheres}) can be written as
$R\sigma_{dc}(R)/c=(\alpha/2)(R/a_0)\hat\sigma_{dc}(R)$ where $\alpha$
is the fine-structure constant. The result is shown in
Fig.~\ref{Fig:sigma}. For example, for a sphere of radius $R=10$nm,
the dc conductivity is reduced by a factor $\approx 0.15$ compared to
the bulk Drude value. If the radius of the sphere equals the
plasma wavelength $\lambda_p$, the reduction factor is $\approx
0.8$. These results show that shape and material properties are
important for the Casimir interaction between
nano-particles. Potential applications include the interaction between
dilute suspensions of metallic nano-particles.
\subsection{Further extensions}
The general result of Eq.~\eqref{Elogdet} and its extensions described
in Ref.~\refcite{Rahi:fk} have been recently applied to a number of
new geometries and further applications are under way. Examples
include so-called interior configurations with an object contained
within an otherwise empty, perfectly conducting spherical shell
\cite{Zaheer:ve}. For this geometry the forces and torques on a
dielectric or conducting object, well separated from the cavity walls,
have been determined. Corrections to the proximity force approximation
for this interior problem have been obtained by computing the
interaction energy of a finite-size metal sphere with the cavity walls
when the separation between their surfaces tends to zero.
Eq.~\eqref{Elogdet}, evaluated in parabolic cylinder coordinates, has
been used to obtain the interaction energy of a parabolic cylinder and
an infinite plate (both perfect mirrors), as a function of their
separation and inclination, and the cylinder's parabolic radius
\cite{Graham:2009ly}. By taking the limit of vanishing radius,
corresponding to a semi-infinite plate, the effect of edge and
inclination could be studied.
\section*{Acknowledgments}
The reported results have been obtained in collaboration with
N. Graham, R. L. Jaffe, M. Kardar, S. J. Rahi, P. Rodriguez-Lopez,
A. Shpunt, S.~Zaheer, R. Zandi. This work was supported by the
Deutsche Forschungsgemeinschaft (DFG) through grant EM70/3 and Defense
Advanced Research Projects Agency (DARPA) contract No. S-000354.
\section{Introduction}
The problem of nonparametric statistical inference for jump processes
or, more generally, for semimartingale models has a long history, going
back to the works of \citet{RT} and \citet{BB}. In the past decade,
there has been a revival of interest in this topic, driven mainly by
the wide availability of financial and economic time
series data and by new types of statistical issues that have not been
addressed before. There are two major strands of recent literature
dealing with statistical inference for semimartingale models. The first
strand considers the so-called high-frequency setup, where
the asymptotic properties of the corresponding estimates are studied
under the assumption that the frequency of observations tends to
infinity. In the second strand of literature, the frequency of
observations is assumed to be fixed (the so-called low-frequency setup)
and the asymptotic analysis is done under the premiss that the
observational horizon tends to infinity. It is clear that neither of
the above asymptotic hypotheses can be perfectly realized on real
data; they can only serve as convenient approximations, since in
practice the frequency of observations and the horizon are always
finite. The present paper studies the problem of statistical inference
for a class of semimartingale models in the low-frequency setup.
Let $X = (X_t)_{t\geq0} $ be a stochastic process valued in $ \mathbb
{R}^{d} $ and let $\mathcal{T} = (\mathcal{T}(s))_{s\geq
0} $ be a nonnegative, nondecreasing stochastic process not necessarily
independent of $ X $ with $ \mathcal{T}(0)=0 $. A time-changed process
$Y = (Y_s)_{s\geq0} $ is then
defined as $Y_s = X_{\mathcal{T}(s)}$. The process $\mathcal{T} $ is
usually referred to as time
change. Even in the case of the one-dimensional Brownian motion $ X$,
the class of time-changed processes $ X_{\mathcal{T}} $ is very large
and basically coincides with the class of all semimartingales [see,
e.g., \citet{M}]. In fact, the construction in
\citet{M} is not direct, meaning that the problem of
specifying
different models with specific properties remains an important
issue. For example, the base process $X $ can be assumed to possess
some independence property
(e.g., $ X $ may have independent components), whereas a nonlinear
time change can induce deviations from the independence. Along this
line, the time change can be used to model dependence for stochastic
processes. In this work, we
restrict our attention to the case of time-changed L\'evy processes,
that is,
the case where $X=L$ is a multivariate L\'evy process and
$\mathcal{T} $ is a time change independent of $ L $.
Time-changed L\'evy processes are one step further in increasing
the complexity of models in order to incorporate the so-called stylized features
of the financial time series, like volatility clustering [for more
details, see \citet{CGMY}]. This type of processes
in the case of the one-dimensional Brownian motion was first studied by
\citet{Bo}. \citet{Cl} introduced Bochner's time-changed
Brownian motion into financial economics: he used it to relate future
price returns of cotton to the variations in volume during
different trading periods.
Recently, a number of parametric time-changed L\'evy processes
have been introduced by \citet{CGMY}, who model the stock price $
S_{t} $ by a geometric time-changed L\'evy model
\[
S_{t}=S_{0}\exp\bigl(L_{{\mathcal{T}(t)}}\bigr),
\]
where $ L $ is a L\'evy process and $ \mathcal{T}(t) $ is a time change
of the form
\begin{equation}
\label{TCD}
\mathcal{T}(t)=\int_{0}^{t}\rho(u) \,du
\end{equation}
with $ \{ \rho(u) \}_{u\geq0} $ being a positive mean-reverting process.
\citet{CGMY} proposed to model $ \rho(u) $ via the
Cox--Ingersoll--Ross (CIR) process.
Taking different parametric L\'evy models for $ L $ (such as
the normal inverse Gaussian or the variance Gamma processes) results in
a wide range of processes
with rather rich volatility structure (depending on the rate process
$\rho$) and various distributional properties (depending on the
specification of $ L $).
From a statistical point of view, any parametric model (especially one
using only few parameters) is prone to misspecification problems.
One approach to deal with the
misspecification issue is to adopt the general nonparametric models for
the functional
parameters of the underlying process. This may reduce the estimation
bias resulting
from an inadequate parametric model.
In the case of time-changed L\'evy models, there are two natural
nonparametric parameters: the L\'evy density $ \nu$, which determines the
jump dynamics of the process $ L $, and the marginal distribution of the
process $ \mathcal{T}$.
In this paper, we study the problem of statistical inference on the
characteristics of a multivariate L\'evy process $ L $ with independent
components based on low-frequency observations of the time-changed
process $ Y_{t}=L_{\mathcal{T}(t)}$, where $ \mathcal{T}(t) $ is a
time change process independent of $L$ with strictly stationary
increments. We assume that the distribution of $ \mathcal{T}(t) $
is unknown, except for its mean value.
This problem is rather challenging and has not yet been given
attention in the literature, except for the special case of $ \mathcal
{T}(t)\equiv t $ [see, e.g., \citet{NR} and \citet{CG}].
In particular, the main difficulty in constructing nonparametric
estimates for the L\'evy density $ \nu$ of $ L $
lies in the fact that the jumps are unobservable variables, since in practice
only discrete observations of the process $ Y $ are available. The more
frequent the observations, the more relevant information about
the jumps of the underlying process,
and hence about the L\'evy density $ \nu$, is contained in the
sample. Such a high-frequency-based statistical
approach has played a central role in the recent literature on
nonparametric estimation
for L\'evy type processes. For instance, under discrete observations of
a pure L\'evy process
$ L_{t} $ at times $ t_{j}=j\Delta, j=0,\ldots, n$,
\citet{W} and \citet{FL1}
proposed the quantity
\[
\widehat\beta(f)=\frac{1}{n\Delta}\sum_{k=1}^{n} f (L_{t_{k}}-L_{t_{k-1}})
\]
as a consistent estimator for the functional
\[
\beta(f)=\int f (x)\nu(x) \,dx,
\]
where $ f $ is a given ``test function.''
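As an illustration (the model and test function below are our choices, not from the cited works), the estimator $\widehat\beta(f)$ can be checked by Monte Carlo for a compound Poisson process with unit intensity and standard normal jumps $J$, taking $f(x)=x^2$, for which $\beta(f)=\E[J^2]=1$:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, Delta, n = 1.0, 0.01, 200_000       # jump intensity, sampling step, sample size

# high-frequency increments of a compound Poisson process with N(0,1) jumps:
# conditionally on Nk jumps, the increment is N(0, Nk)
Nk = rng.poisson(lam*Delta, size=n)                  # jumps per observation interval
dL = rng.normal(0.0, 1.0, size=n) * np.sqrt(Nk)

# estimator beta_hat(f) with f(x) = x^2; target value beta(f) = lam * E[J^2] = 1
beta_hat = np.sum(dL**2) / (n*Delta)
```

With these parameters the standard deviation of $\widehat\beta$ is roughly $0.04$, so the estimate concentrates near the target value $1$.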
Turning back to the time-changed L\'evy processes, it was shown in
\citet{FL2} [see also \citet{RTA}] that in the case, where
the rate process $ \rho$ in (\ref{TCD}) is a positive
ergodic diffusion independent of the L\'evy process $ L $, $ \widehat
\beta(f) $ is still a consistent
estimator for $ \beta(f) $ up to a constant, provided the time horizon
$ n \Delta$ and
the sampling frequency $ \Delta^{-1} $ converge to infinite at
suitable rates.
In the case of low-frequency data ($ \Delta$ is fixed), we cannot be
sure to what extent the
increment $ L_{t_{k}}-L_{t_{k-1}} $ is due to one or several jumps or
just to the
diffusion part of the L\'evy process so that at first sight it may
appear surprising that some kind of inference in this situation is
possible at all. The key observation here is that for any bounded
``test function'' $f $
\begin{equation}
\label{CONVERG}
\frac{1}{n}\sum_{j=1}^{n}f\bigl(L_{\mathcal{T}(t_{j})}-L_{\mathcal
{T}(t_{j-1})}\bigr)\to\E_{\pi}\bigl[f\bigl(L_{\mathcal{T}(\Delta)}\bigr)\bigr],\qquad
n\to\infty,
\end{equation}
provided the sequence $\mathcal{T}(t_{j})-\mathcal{T}(t_{j-1}),
j=1,\ldots, n$, is stationary and ergodic with the invariant stationary
distribution $ \pi$.
The limiting expectation in~(\ref{CONVERG}) is then given by
\[
\E_{\pi}\bigl[f\bigl(L_{\mathcal{T}(\Delta)}\bigr)\bigr]
=\int_{0}^{\infty} \E
[f(L_{s})]
\pi(ds).
\]
Taking
$ f(z) = f_{u}(z)=\exp(\ii u^{\top}z), u\in\mathbb{R}^{d}$, and using
the independence of $L$ and~$\mathcal{T}$, we arrive at the following
representation for the c.f. of $ L_{\mathcal{T}(s)} $:
\begin{equation}
\label{EstEq}
\E\bigl[\exp\bigl(\ii u^{\top}L_{\mathcal{T}(\Delta)}
\bigr)\bigr]=\int
_{0}^{\infty} \exp(t\psi(u)) \pi(dt)=\mathcal{L}_{\Delta}(-\psi(u)),
\end{equation}
where $ \psi(u)=t^{-1}\log[\E\exp(\ii u L_{t})] $ is the characteristic
exponent of the L\'evy process $ L $ and $ \mathcal{L}_{\Delta} $
is the Laplace transform of $ \pi$. In fact, the most difficult part
of the estimation procedure comes only now and consists in reconstructing
the characteristics of the underlying L\'evy process $ L $ from an
estimate for $ \mathcal{L}_{\Delta}(-\psi(u))$. As we will see, the
latter statistical problem is closely related to the problem of
composite function estimation, which is known to be highly nonlinear
and ill-posed. The identity (\ref{EstEq}) also reveals the major
difference between high-frequency and low-frequency setups. While in
the case of high-frequency data one can directly estimate linear
functionals of the L\'evy measure $ \nu$, under low-frequency
observations, one has to deal with nonlinear functionals of $ \nu$
rendering the underlying estimation problem nonlinear and ill-posed.
Last but not least, the increments of time-changed L\'evy
processes are no longer independent; hence, advanced tools from
time series analysis have to be used for the estimation of $ \mathcal
{L}_{\Delta}(-\psi(u)) $.
The paper is organized as follows. In Section \ref{TCL}, we introduce
the main object of our study, the time-changed L\'evy processes. In
Section \ref{SP}, our statistical problem is formulated and its
connection to the problem of composite function estimation is
established. In Section \ref{SA}, we impose some restrictions on the
structure of the time-changed L\'evy processes in order to ensure the
identifiability and avoid the ``curse of dimensionality.'' Section \ref
{EST} contains
the main estimation procedure. In Section \ref{ASYMP}, asymptotic
properties of the estimates defined
in Section \ref{EST} are studied. In particular, we derive uniform and
pointwise rates of convergence (Sections \ref{URC} and \ref
{PCR}, resp.) and prove their
optimality over suitable classes of time-changed L\'evy models
(Section~\ref{LBOUNDS}). Section~\ref{DISC} contains some discussion.
Finally, in Section~\ref{SIM} we present a~simulation study. The rest
of the paper contains proofs of the main results and some auxiliary
lemmas. In particular, in Section \ref{EXPBOUNDS} a useful inequality
on the probability of large deviations for empirical processes in
uniform metric for the case of weakly dependent random variables can be
found.\looseness=-1
\section{Main setup}
\subsection{\texorpdfstring{Time-changed L\'evy processes}{Time-changed Levy processes}}
\label{TCL}
Let $ L_{t} $ be a $ d $-dimensional L\'evy process on the probability
space $(\Omega,\mathcal{F},\mathbb{P})$ with the characteristic
exponent $
\psi
(u)$, that is,
\[
\psi(u)=t^{-1}\log\E[\exp(\ii u^{\top} L_{t}
)].
\]
We know by the L\'evy--Khintchine formula that
\begin{equation}
\label{psi}
\psi(u)=\ii\mu^{\top} u-\frac{1}{2}u^{\top}\Sigma u+
\int_{\mathbb{R}^{d}}\bigl( e^{\ii u^{\top}y}-1-\ii u^{\top}y\cdot
\mathbf
{1}_{\{ |y|\leq1 \}} \bigr)\nu(dy),
\end{equation}
where $ \mu\in\mathbb{R}^{d}, \Sigma$ is a positive-semidefinite
symmetric $ d\times d $ matrix
and $ \nu$ is a L\'evy measure on $ \mathbb{R}^{d}\setminus\{ 0 \} $
satisfying
\[
\int_{\mathbb{R}^{d}\setminus\{ 0 \}}(|y|^{2}\wedge1)\nu
(dy)<\infty.
\]
A triplet $ (\mu,\Sigma,\nu) $ is usually called a characteristic
triplet of the $ d $-dimensional L\'evy process $ L_{t}$.
Let $ t\to\mathcal{T}(t), t\geq0 $ be an increasing right-continuous
process with left limits such that $ \mathcal{T}(0)=0 $ and for each
fixed $ t $, the random variable $ \mathcal{T}(t) $ is a stopping time
with respect to the filtration $ \mathcal{F} $. Suppose furthermore
that~$\mathcal{T}(t)$ is finite
$ \mathbb{P}$-a.s. for all $ t\geq0 $ and that $ \mathcal{T}(t)\to\infty$
as $ t\to\infty$. Then the family $ ( \mathcal{T}(t) )_{t\geq0} $
defines a random
time change.
Now consider a $ d $-dimensional process $ Y_{t}:=L_{\mathcal{T}(t)} $.
The process $ Y_{t} $ is called the time-changed L\'evy process. Let us
look at some examples. If $ \mathcal{T}(t) $ is a L\'evy
process, then~$ Y_{t}$ is another L\'evy process. A more general
situation
is when $ \mathcal{T}(t) $ is modeled by a nondecreasing semimartingale
\[
\mathcal{T}(t)=b_{t}+\int_{0}^{t}\int_{0}^{\infty} y\rho(dy,ds),
\]
where $ b $ is a drift and $ \rho$ is the counting measure of jumps in
the time
change. As in \citet{CW}, one can take $ b_{t}=0 $ and consider
locally deterministic time changes
\begin{equation}
\label{TCProcess}
\mathcal{T}(t)=\int_{0}^{t}\rho(s_{-}) \,ds,
\end{equation}
where $ \rho$ is the instantaneous activity rate which is assumed to
be nonnegative.
When $ L_{t} $ is the Brownian motion and $ \rho$ is proportional\vadjust{\goodbreak}
to the instantaneous
variance rate of the Brownian motion, then $ Y_{t} $ is a pure jump L\'
evy process with
the L\'evy measure proportional to $ \rho$.
Let us now compute the characteristic function of $ Y_{t} $.
Since $ \mathcal{T}(t) $ and $ L_{t} $ are independent, we get
\begin{equation}
\label{CFY}
\phi_{Y}(u|t)=\E\bigl(e^{\ii u^{\top}L_{\mathcal{T}(t)}}\bigr)
=\mathcal{L}_{t}(-\psi(u)),
\end{equation}
where $ \mathcal{L}_{t} $ is the Laplace transform of $ \mathcal{T}(t)
$:
\[
\mathcal{L}_{t}(\lambda)=\E\bigl( e^{-\lambda\mathcal{T}(t)}
\bigr).
\]
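Relation (\ref{CFY}) is easy to verify by Monte Carlo. In the following sketch the distributional choices (a Gamma-distributed time change and a Brownian base process) are ours, made so that both sides of the identity are available in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# time change at the fixed time considered: T ~ Gamma(shape=2, scale=1/2),
# so L_t(lam) = (1 + lam/2)^(-2); base process: Brownian motion, psi(u) = -u^2/2
T = rng.gamma(2.0, 0.5, size=n)
Y = rng.normal(0.0, np.sqrt(T))          # L_T given T is N(0, T)

u = 1.3
phi_mc = np.mean(np.exp(1j*u*Y))                 # Monte Carlo c.f. of Y
phi_theory = (1.0 + 0.5*(u**2/2.0))**(-2.0)      # L_t(-psi(u))
```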
\subsection{Statistical problem}
\label{SP}
In this paper, we are going to study the problem of estimating the
characteristics of the L\'evy process $ L $ from low-frequency
observations $ Y_{0}, Y_{\Delta},\ldots,Y_{n\Delta} $ of the process $
Y $ for some fixed $ \Delta>0$. Moving to the spectral domain and
taking into account (\ref{psi}), we can reformulate our problem as the
problem of semi-parametric estimation of the characteristic exponent $
\psi$ under structural assumption (\ref{psi}) from an estimate of $
\phi_{Y}(u|\Delta) $ based on $ Y_{0}, Y_{\Delta},\ldots,Y_{n\Delta
}$.
The formula (\ref{CFY}) shows that the function $ \phi_{Y}(u|\Delta
) $
can be viewed as a composite function and our statistical problem is
hence closely related to the problem of statistical inference on the
components of a composite function. The latter type of problems in
regression setup has gotten much attention recently [see, e.g.,
\citet
{HM} and \citet{JLT}]. Our problem has, however, some features not
reflected in the previous literature. First, the unknown link function
$ \mathcal{L}_{\Delta}$, being the Laplace transform of the r.v. $
\mathcal{T}(\Delta)$, is completely monotone. Second, the
complex-valued function $ \psi$ is of the form~(\ref{psi}) implying,
for example, a certain asymptotic behavior of $ \psi(u) $ as $ u\to
\infty$. Finally, we are not in regression setup and $ \phi
_{Y}(u|\Delta) $ is to be estimated by its empirical counterpart
\[
\widehat\phi(u)=\frac{1}{n}\sum_{j=1}^{n}e^{ \ii u^{\top}(
Y_{\Delta
j}-Y_{\Delta(j-1)})}.
\]
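A minimal implementation of the empirical characteristic function in the one-dimensional case (the function name is ours), together with its elementary properties $\widehat\phi(0)=1$, $|\widehat\phi(u)|\le 1$ and $\widehat\phi(-u)=\overline{\widehat\phi(u)}$:

```python
import numpy as np

def ecf(increments, u):
    """Empirical characteristic function of scalar increments Y_{dj} - Y_{d(j-1)}."""
    increments = np.asarray(increments, dtype=float)
    return np.mean(np.exp(1j * u * increments))

dY = np.array([0.3, -1.2, 0.7, 0.1, -0.4])   # toy increments
# ecf(dY, 0.0) equals 1; |ecf(dY, u)| <= 1; ecf(dY, -u) equals conj(ecf(dY, u))
```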
The contribution of this paper to the literature on composite function
estimation is twofold. On the one hand, we introduce and study a new
type of statistical problems which can be called estimation of a
composite function under structural constraints. On the other hand, we
propose new and constructive estimation approach which is rather
general and can be used to solve other open statistical problems of
this type. For example, one can directly adapt our method to the
problem of semi-parametric inference in distributional Archimedian
copula-based models [see, e.g., \citet{MN} for recent results],
where one faces the problem of estimating a multidimensional
distribution function of the form
\[
F(x_{1},\ldots,x_{d})=G\bigl(f_{1}(x_{1})+\cdots+f_{d}(x_{d})\bigr),\qquad
(x_{1},\ldots,x_{d})\in\mathbb{R}^{d},
\]
with a completely monotone function $ G $ and some functions $
f_{1},\ldots, f_{d}$. Further discussion on the problem of composite
function estimation can be found in Remark \ref{compfuncbounds}.
\subsection{Specification analysis}
\label{SA}
It is clear that without further restrictions on the class of
time-changed L\'evy processes our problem of estimating $ \nu$ is not
well defined, as even in the case of the perfectly known distribution
of the process $ Y $ the parameters of the L\'evy process $ L $ are
generally not identifiable. Moreover, the corresponding statistical
procedure will suffer from the ``curse of dimensionality'' as the
dimension $ d $ increases. In order to avoid these undesirable
features, we have to impose some additional restrictions on the
structure of the time-changed process~$Y$. In the statistical literature,
one can basically find two types of restricted composite models:
additive models and single-index models. While the latter class of
models is too restrictive in our situation, the former one naturally
appears if one assumes the independence of the components of $ L_{t} $.
In this paper,
we study a class of time-changed L\'evy processes satisfying the
following two assumptions:
\begin{longlist}[(ALI)]
\item[(ALI)] The L\'evy process $ L_{t} $ has independent components
such that at least two of them are nonzero, that is,
\begin{equation}
\label{CFYA}
\phi_{Y}(u|t)=\mathcal{L}_{t}\bigl(-\psi_{1}(u_{1})-\cdots-\psi_{d}(u_{d})\bigr),
\end{equation}
where $ \psi_{k}, k=1,\ldots,d$, are the characteristic exponents of
the components of~$L_{t}$ of the form
\begin{eqnarray}
\label{psik}
\psi_{k}(u)&=&\ii\mu_{k}u-\sigma_{k}^{2}u^{2}/2\nonumber\\[-8pt]\\[-8pt]
&&{} +
\int_{\mathbb{R}}\bigl( e^{\ii ux}-1-\ii ux\cdot\mathbf{1}_{\{
|x|\leq
1 \}} \bigr)\nu_{k}(dx),\qquad k=1,\ldots,d,\nonumber
\end{eqnarray}
and
\begin{equation}
\label{IA}
|\mu_{l}|+\sigma^{2}_{l}+\int_{\mathbb{R}} x^{2}\nu_{l}(dx) \neq0
\end{equation}
for at least two different indexes $ l$.
\item[(ATI)] The time change process $ \mathcal{T} $ is independent of
the L\'evy process $L$ and satisfies $ \E[\mathcal{T}(t)]=t$.
\end{longlist}
\subsubsection*{Discussion}
The advantage of the modeling
framework (\ref{CFYA}) is twofold. On the one hand, models of this
type are rather flexible: the distribution of~$Y_{t}$ for a fixed $ t
$ is in general determined by $ d+1 $ nonparametric components and $
2\times d $ parametric ones. On the other hand, these models remain
parsimonious and, as we will see later, admit statistical inference not
suffering from the ``curse of dimensionality'' as $ d $ becomes large.
The latter feature of our model is in accordance with the well-documented
behavior of additive models in the regression setting and
may become particularly important if one is going to use it, for
instance, to model large portfolios of assets. The nondegeneracy
assumption (\ref{IA}) basically excludes one-dimensional models and
is\vadjust{\goodbreak}
not restrictive, since it can always be checked prior to estimation by
testing that
\[
-\partial_{u_{l}u_{l}}\widehat\phi(u)|_{u=0}=\frac{1}{n}\sum
_{j=1}^{n}\bigl(Y_{\Delta j,l}-Y_{\Delta(j-1),l}\bigr)^{2}> 0
\]
for at least two different indexes $ l$.
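Computationally, this pre-test is a single pass over the observed increments; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def nondegeneracy_check(Y):
    """Per-component empirical second moments of the increments,
    -d^2/du_l^2 hat{phi}(u)|_{u=0} = (1/n) sum_j (Y_{Delta j,l} - Y_{Delta(j-1),l})^2;
    assumption (IA) requires a positive value for at least two indexes l."""
    Z = np.diff(Y, axis=0)            # observed increments, shape (n, d)
    return (Z ** 2).mean(axis=0)

# toy path: two diffusive components plus one identically zero (degenerate) one
rng = np.random.default_rng(1)
Y = np.cumsum(rng.normal(size=(101, 3)), axis=0)
Y[:, 2] = 0.0
m2 = nondegeneracy_check(Y)
print((m2 > 0).sum() >= 2, m2[2])    # True 0.0, so the model is admissible
```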
Let us make a few remarks on the one-dimensional case, where
\begin{equation}
\label{CFY1D}
\phi_{Y}(u|t)=\mathcal{L}_{t}(-\psi_{1}(u)),\qquad t\geq0.
\end{equation}
If $ \mathcal{L}_{\Delta} $ is known, that is, the distribution of the
r.v. $ \mathcal{T}(\Delta) $ is known, we can
consistently estimate the L\'evy measure $ \nu_{1} $ by inverting $
\mathcal{L}_{\Delta} $ (see Section \ref{ext} for more details). In the
case when the function $ \mathcal{L}_{\Delta} $ is unknown, one needs
some additional assumptions (e.g., absolute continuity of the time
change) to ensure identifiability. Indeed, consider a class of the
one-dimensional L\'evy processes of the so-called compound exponential
type with the characteristic exponent of the form
\[
\psi(u)=\log\biggl[ \frac{1}{1-\widetilde\psi(u)} \biggr],
\]
where $ \widetilde\psi(u) $ is the characteristic exponent of another
one-dimensional L\'evy process $ \widetilde{L}_{t}$.
It is well known [see, e.g., Section 3 in Chapter 4 of \citet{SV}]
that $ \exp(\psi(u)) $ is the characteristic function of some
infinitely divisible distribution whenever $ \exp(\widetilde\psi(u)) $
is. Introduce
\[
\widetilde{\mathcal{L}}_{\Delta}(z)=\mathcal{L}_{\Delta}\bigl(\log(1+z)\bigr).
\]
As can\vspace*{1pt} be easily seen, the function $ \widetilde{\mathcal{L}}_{\Delta}
$ is completely monotone with
\mbox{$ \widetilde{\mathcal{L}}_{\Delta}(0)\,{=}\,1 $} and $ \widetilde{\mathcal
{L}}'_{\Delta}(0)= \mathcal{L}'_{\Delta}(0)$.
Moreover, $ \widetilde{\mathcal{L}}_{\Delta
}(-\widetilde\psi(u))=\mathcal{L}_{\Delta}(-\psi(u)) $ holds for all $
u\in
\mathbb{R}$.
The existence of the time change (increasing) process $ \mathcal{T} $
with a given marginal $ \mathcal{T}(\Delta) $ can be derived from the
general theory of stochastic partial ordering [see \citet{KK}].
The above construction indicates that the assumption $ \E[\mathcal
{T}(t)]=t, t\geq0$, is not sufficient to ensure
identifiability in the case of one-dimensional time-changed L\'evy models.
\vspace*{-3pt}\section{Estimation}\vspace*{-3pt}
\label{EST}
\subsection{Main ideas}
Assume that the L\'evy measures of the component processes $
L^{1}_{t},\ldots, L_{t}^{d} $ are absolutely continuous with integrable
densities $ \nu_{1}(x),\ldots,\allowbreak\nu_{d}(x) $ that satisfy
\[
\int_{\mathbb{R}} x^{2}\nu_{k} (x) \,dx<\infty,\qquad k=1,\ldots, d.
\]
Consider the functions
\[
\bar\nu_{k}(x)=x^{2}\nu_{k}(x),\qquad k=1,\ldots, d.
\]
By differentiating $ \psi_{k} $ twice, we get
\[
\psi''_{k}(u)=-\sigma_{k}^{2}-\int_{\mathbb{R}} e^{\ii ux} \bar\nu
_{k}(x) \,dx.
\]
For the sake of simplicity, in the sequel we will make the following
assumption:
\begin{longlist}[(ALS)]
\item[(ALS)] The diffusion volatilities $ \sigma_{k}, k=1,\ldots, d$,
of the L\'evy process $L$ are supposed to be known.
\end{longlist}
A way to extend our results to the case of the unknown
$ (\sigma_{k}) $ is outlined in Section \ref{ext}.
Introduce the functions $ \bar\psi_{k}(u)=\psi_{k}(u)+\sigma
_{k}^{2}u^{2}/2 $ to get
\begin{equation}
\label{FourierNu}
\mathbf{F}[\bar\nu_{k}](u) =-\bar\psi''_{k}(u)=-\sigma
_{k}^{2}-\psi''_{k}(u),
\end{equation}
where $\mathbf{F}[\bar\nu_{k}](u)$ stands for the Fourier transform of
$\bar\nu_{k}$.
Denote $ Z=Y_{\Delta}$, $ \phi_{k}(u)=\partial_{u_{k}}\phi_{Z}(u),
\phi
_{kl}(u)=\partial_{u_k u_{l}}\phi_{Z}(u) $ and $\phi
_{jkl}(u)=\partial
_{u_{j}u_k u_{l}}\phi_{Z}(u) $ for $ j,k$, $l\in\{ 1,\ldots, d \}$ with
\begin{equation}
\label{PHIZ}
\phi_{Z}(u)=\E[ \exp(\ii u^{\top} Z) ]=\mathcal
{L}_{\Delta
}\bigl(-\psi_{1}(u_{1})-\cdots-\psi_{d}(u_{d})\bigr).
\end{equation}
Fix some $ k\in\{ 1,\ldots, d \} $ and for any real number $ u $
introduce a vector
\[
u^{(k)}=(0,\ldots, 0,u,0,\ldots,0)\in\mathbb{R}^{d}
\]
with $ u $ being placed at the $ k $th coordinate of the vector $
u^{(k)}$. Choose some $ l\neq k$ such that the component $ L^{l}_{t}
$ is not degenerate. Then we get from~(\ref{PHIZ})
\begin{equation}
\label{PhiRatio1}
\frac{\phi_{k}(u^{(k)})}{\phi_{l}(u^{(k)})}=\frac{\psi
'_{k}(u)}{\psi'_{l}(0)},
\end{equation}
if $ \mu_{l}\neq0 $
and
\begin{equation}
\label{PhiRatio2}
\frac{\phi_{k}(u^{(k)})}{\phi_{ll}(u^{(k)})}=\frac{\psi
'_{k}(u)}{\psi''_{l}(0)}
\end{equation}
in the case $ \mu_{l}=0$.
The identities $ \phi_{l}(\mathbf{0})=-\psi'_{l}(0)\mathcal
{L}'_{\Delta
}(0) $ and $ \phi_{ll}(\mathbf{0})=[\psi'_{ l}(0)]^{2}\times\mathcal
{L}''_{\Delta}(0)-\psi''_{l}(0)\mathcal{L}'_{\Delta}(0) $ imply
$ \psi'_{l}(0)=- [\mathcal{L}'_{\Delta}(0)]^{-1}\phi_{l}(\mathbf
{0})=\Delta^{-1}\phi_{l}(\mathbf{0})$ and
$ \psi''_{l}(0) =- [\mathcal{L}'_{\Delta}(0)]^{-1}\phi_{ll}(\mathbf
{0})=\Delta^{-1}\phi_{ll}(\mathbf{0})$ if $ \psi'_{l}(0)=0$, since
$ \mathcal{L}'_{\Delta}(0)=-\E[\mathcal{T}(\Delta)]=-\Delta$.
Combining this with (\ref{PhiRatio1}) and (\ref{PhiRatio2}), we derive
\begin{eqnarray}\hspace*{26pt}
\label{PhiDeriv21}
\psi''_{k}(u)&=&\Delta^{-1}\phi_{l}(\mathbf{0})\frac{\phi
_{kk}(u^{(k)})\phi_{l}(u^{(k)})-\phi_{k}(u^{(k)})
\phi_{lk}(u^{(k)})}{\phi^{2}_{l}(u^{(k)})},\qquad \mu_{l}\neq0,
\\
\label{PhiDeriv22}
\psi''_{k}(u)&=&\Delta^{-1}\phi_{ll}(\mathbf{0})\frac{\phi
_{kk}(u^{(k)})\phi_{ll}(u^{(k)})-\phi_{k}(u^{(k)})
\phi_{llk}(u^{(k)})}{\phi^{2}_{ll}(u^{(k)})},\qquad \mu_{l}= 0.
\end{eqnarray}
Note that in the above derivations we have repeatedly used
assumption (ATI), which turns out to be crucial for identifiability.
The basic idea of the algorithm we develop in Section \ref
{ALG} is to estimate $ \bar\nu_{k} $
by applying\vspace*{1pt} the regularized Fourier inversion formula to an
estimate of $ \bar\psi''_{k}(u)$.
As formulas (\ref{PhiDeriv21}) and (\ref{PhiDeriv22}) indicate, one
can estimate $ \bar\psi''_{k}(u)$ once estimates
for the functions $ \phi_{k}(u), \phi_{lk}(u) $ and $ \phi_{llk}(u) $
are available.
\begin{rem}
One important issue we would like to comment on is the robustness of
the characterizations (\ref{PhiDeriv21}) and (\ref{PhiDeriv22}) with
respect to the independence assumption for the components of the L\'evy
process $ L_{t}$. First, note that if the components are dependent,
then the key identity (\ref{FourierNu}) is no longer valid for $
\psi''_{k} $ defined as in (\ref{PhiDeriv21}) or (\ref{PhiDeriv22}).
Let us determine how strongly it can be violated. For concreteness,
assume that $ \mu_{l}>0 $ and that the dependence in the components of
$ L_{t} $ is due to a correlation between diffusion components. In
particular, let $ \Sigma(k,l)>0$.
Since in the general case
\[
\partial_{u_{k}} \psi\bigl(u^{(k)}\bigr)=\partial_{u_{l}}\psi\bigl(u^{(k)}\bigr)\frac
{\phi_{k}(u^{(k)})}{\phi_{l}(u^{(k)})}
\]
and $ \partial_{u_{k}u_{k}} \psi(u^{(k)})=-\sigma^{2}_{k}-\mathbf
{F}[\bar\nu_{k}](u)$,
we get
\[
\mathbf{F}[\bar\nu_{k}](u)+\psi''_{k}(u)+\sigma^{2}_{k} =\frac
{\Sigma
(k,l)}{2}\biggl[ u \partial_{u_{k}}\biggl\{ \frac{\phi
_{k}(u^{(k)})}{\phi
_{l}(u^{(k)})} \biggr\}+\frac{\phi_{k}(u^{(k)})}{\phi_{l}(u^{(k)})}
\biggr].
\]
Using the fact that both functions $ u \partial_{u_{k}}\{ \phi
_{k}(u^{(k)})/\phi_{l}(u^{(k)})\} $ and
$ \phi_{k}(u^{(k)})/\allowbreak\phi_{l}(u^{(k)}) $ are uniformly bounded for $
u\in
\mathbb{R}$, we get that
the model ``misspecification bias'' is bounded by $ C\Sigma(k,l) $ with
some constant $ C>0$. Thus, the weaker the dependence between
components $ L^{k} $ and $ L^{l} $, the
smaller the resulting ``misspecification bias.''
\end{rem}
\subsection{Algorithm}
\label{ALG}
Set $ Z_{j}=Y_{\Delta j}-Y_{\Delta(j-1)}, j=1,\ldots, n$, and denote
by $Z_{j}^{k}$ the $k$th coordinate of $Z_{j}$. Note that $Z_{j},
j=1,\ldots, n$, are identically distributed. The estimation procedure
consists basically of three steps:
\begin{longlist}[Step 1.]
\item[Step 1.] First, we are interested in estimating partial
derivatives of the function $ \phi_{Z}(u) $ up to the third order.
To this end, define
\begin{eqnarray}
\label{PhiDeriv1Est}
\widehat\phi_{k}(u)&=&\frac{\ii}{n}\sum_{j=1}^{n}Z^{k}_{j}\exp(\ii
u^{\top
} Z_{j}),
\\
\label{PhiDeriv2Est}
\widehat\phi_{lk}(u)&=&-\frac{1}{n}\sum
_{j=1}^{n}Z^{k}_{j}Z^{l}_{j}\exp
(\ii u^{\top} Z_{j}),
\\
\label{PhiDeriv3Est}
\widehat\phi_{llk}(u)&=&-\frac{\ii}{n}\sum
_{j=1}^{n}Z^{k}_{j}\bigl(Z^{l}_{j}\bigr)^{2}\exp(\ii u^{\top} Z_{j}),
\end{eqnarray}
where the powers of $ \ii $ make these sums estimates of the
corresponding partial derivatives of $ \phi_{Z}$.
\item[Step 2.] In a second step, we estimate the second derivative of
the characteristic exponent $ \psi_{k}(u)$. Set
\begin{equation}
\label{PsiDeriv2Est1}
\widehat\psi_{k,2}(u)=\Delta^{-1}\widehat\phi_{l}(\mathbf
{0})\frac
{\widehat\phi_{kk}(u^{(k)})
\widehat\phi_{l}(u^{(k)})-\widehat\phi_{k}(u^{(k)})\widehat\phi
_{lk}(u^{(k)})}
{[\widehat\phi_{l}(u^{(k)})]^{2}},
\end{equation}
if $|\widehat\phi_{l}(\mathbf{0})|>\kappa/\sqrt{n}$ and
\begin{equation}
\label{PsiDeriv2Est2}
\widehat\psi_{k,2}(u)=\Delta^{-1}\widehat\phi_{ll}(\mathbf
{0})\frac
{\widehat\phi_{kk}(u^{(k)})
\widehat\phi_{ll}(u^{(k)})-\widehat\phi_{k}(u^{(k)})\widehat\phi
_{llk}(u^{(k)})}
{[\widehat\phi_{ll}(u^{(k)})]^{2}}
\end{equation}
otherwise,
where $ \kappa$ is a positive number.
\item[Step 3.] Finally,\vspace*{1pt} we construct an estimate for $ \bar\nu_{k}(x)
$ by applying the Fourier inversion formula
combined with a regularization to $ \widehat\psi_{k,2}(u) $:
\begin{equation}
\label{NUEST}
\widehat{\nu}_{k}(x)=-\frac{1}{2\pi}\int_{\mathbb{R}}e^{-\ii
ux}[\widehat{\psi}_{k,2}(u)+\sigma_{k}^{2}] \mathcal{K}(uh_{n})
\,du,
\end{equation}
where $ \mathcal{K}(u) $ is a regularizing kernel supported on $ [-1,1]
$ and $ h_{n} $ is a sequence of bandwidths which tends to $ 0 $ as $
n\to\infty$.
The choice of the sequence $ h_{n} $ will be discussed later on.
\end{longlist}
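The three steps admit a compact numerical sketch (our own illustration, not the paper's code): we take $ \sigma_{k}=0 $, write out the powers of $ \ii $ so that each empirical sum estimates the corresponding partial derivative of $ \phi_{Z} $, and let a simple trapezoidal kernel stand in for $ \mathcal{K}$. The sanity check at the end uses a compound Poisson component and the degenerate time change $ \mathcal{T}(t)=t$; all parameter values are illustrative assumptions.

```python
import numpy as np

def estimate_nu_bar(Z, k, l, delta, h, kappa=1.0, a_K=0.5, x_grid=None):
    """Steps 1-3 of the algorithm for bar{nu}_k(x) = x^2 nu_k(x), sigma_k = 0.
    Z : (n, d) array of increments Z_j = Y_{j*delta} - Y_{(j-1)*delta}."""
    n = Z.shape[0]
    Zk, Zl = Z[:, k], Z[:, l]
    u = np.linspace(-1.0 / h, 1.0 / h, 201)
    E = np.exp(1j * np.outer(u, Zk))              # exp(i u^T Z_j) along u^(k)

    # Step 1: empirical partial derivatives of phi_Z along u^(k);
    # the powers of i match the order of differentiation
    phi_k = 1j * (E * Zk).mean(axis=1)
    phi_l = 1j * (E * Zl).mean(axis=1)
    phi_kk = -(E * Zk**2).mean(axis=1)
    phi_lk = -(E * Zk * Zl).mean(axis=1)
    phi_ll = -(E * Zl**2).mean(axis=1)
    phi_llk = -1j * (E * Zk * Zl**2).mean(axis=1)

    # Step 2: ratio-type estimate of psi_k''(u); the kappa-threshold selects
    # the representation for mu_l != 0 or for mu_l = 0
    if abs(Zl.mean()) > kappa / np.sqrt(n):
        psi2 = (1j * Zl.mean() / delta) * (phi_kk * phi_l - phi_k * phi_lk) / phi_l**2
    else:
        psi2 = (-(Zl**2).mean() / delta) * (phi_kk * phi_ll - phi_k * phi_llk) / phi_ll**2

    # Step 3: regularized Fourier inversion; a trapezoidal kernel stands in
    # for the regularizing kernel (supported on [-1,1], equal to 1 on [-a_K, a_K])
    K = np.clip((1.0 - np.abs(u * h)) / (1.0 - a_K), 0.0, 1.0)
    if x_grid is None:
        x_grid = np.linspace(-3.0, 3.0, 121)
    integrand = np.exp(-1j * np.outer(x_grid, u)) * psi2 * K
    return x_grid, -integrand.sum(axis=1).real * (u[1] - u[0]) / (2 * np.pi)

# sanity check: component 0 is compound Poisson with N(0,1) jumps (lambda = 1),
# component 1 is a pure unit drift, with deterministic time change T(t) = t
rng = np.random.default_rng(0)
n = 10000
N = rng.poisson(1.0, size=n)                          # jump counts per increment
Z = np.column_stack([rng.normal(size=n) * np.sqrt(N), # sum of N jumps ~ N(0, N)
                     np.ones(n)])
x, nu_hat = estimate_nu_bar(Z, k=0, l=1, delta=1.0, h=0.2)
print(np.sum(nu_hat) * (x[1] - x[0]))                 # total mass, roughly lambda = 1
```

For this model $ \int\bar\nu_{0}(x)\,dx=\lambda\E[X^{2}]=1 $, so the recovered total mass gives a quick consistency check on the three steps.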
\begin{rem}
The parameter $ \kappa$ determines the testing error for the
hypothesis $ H\dvtx\mu_{l}>0$. Indeed, if
$ \mu_{l}=0$, then $ \phi_{l}(\mathbf{0})=0 $ and by the central
limit theorem
\begin{eqnarray*}
\mathbb{P}\bigl(|\widehat\phi_{l}(\mathbf{0})|>\kappa/\sqrt{n}
\bigr)&\leq&\mathbb{P}
\bigl(\sqrt{n}|\widehat\phi_{l}(\mathbf{0})-\phi_{l}(\mathbf
{0})|>\kappa
\bigr)\\
&\to&\mathbb{P}\bigl(|\xi|>\kappa/\sqrt{\Var[Z^{l}]}\bigr),\qquad
n\to
\infty,
\end{eqnarray*}
with $ \xi\sim\mathcal{N}(0,1)$.
\end{rem}
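The effect of the threshold can be illustrated directly (our sketch; the value $ \kappa=5 $ is only an illustration chosen to make the false-positive probability negligible for unit-variance increments):

```python
import numpy as np

def mu_l_nonzero(Zl, kappa):
    """Threshold test of Step 2: use representation (PsiDeriv2Est1) iff
    |hat phi_l(0)|, i.e. the absolute sample mean of the l-th increments,
    exceeds kappa / sqrt(n)."""
    return abs(np.mean(Zl)) > kappa / np.sqrt(len(Zl))

rng = np.random.default_rng(2)
no_drift = mu_l_nonzero(rng.normal(0.0, 1.0, 10000), kappa=5.0)
drift = mu_l_nonzero(rng.normal(0.5, 1.0, 10000), kappa=5.0)
print(no_drift, drift)   # expected: False True
```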
\section{Asymptotic analysis}
\label{ASYMP}
In this section, we are going to study the asymptotic properties of
the estimates $ \widehat{\nu}_{k}(x)$,
$ k=1,\ldots,d$.
In particular, we prove almost sure uniform as well as pointwise
convergence rates for $ \widehat{\nu}_{k}(x)$. Moreover, we will
show the optimality of the above rates over suitable classes of
time-changed L\'evy models.
\subsection{\texorpdfstring{Global vs. local smoothness of L\'evy
densities}{Global vs. local smoothness of Levy densities}}
\label{CLASSLEVY}
Let $ L_{t} $ be a one-dimen\-sional L\'evy process with a L\'evy
density $ \nu$. Denote $ \bar\nu(x)=x^{2}\nu(x)$.
For any two nonnegative numbers $ \beta$ and $ \gamma$ with
$ \gamma\in[0,2] $, consider the following two classes of L\'evy densities
$ \nu$:
\begin{equation}
\label{ClCond1}
\mathfrak{S}_{\beta}=\biggl\{ \nu\dvtx\int_{\mathbb{R}} (1+|u|^{\beta
})\mathbf{F}[\bar\nu](u) \,du<\infty\biggr\}
\end{equation}
and
\begin{equation}
\label{ClCond2}
\mathfrak{B}_{\gamma}=\biggl\{\nu\dvtx\int_{|y|>\epsilon}\nu(y)
\,dy\asymp
\frac{\Pi(\epsilon)}{\epsilon^{\gamma}}, \epsilon\to+0
\biggr\},
\end{equation}
where $ \Pi$ is some positive function on $ \mathbb{R}_{+} $
satisfying $ 0<\Pi(+0)<\infty$.
The parameter $ \gamma$
is usually called the Blumenthal--Getoor
index of $ L_{t} $.
This index~$\gamma$ is related to the ``degree of activity'' of the jumps
of $ L_{t} $. Every L\'evy measure puts finite mass
on the set $(-\infty,-\epsilon]\cup[\epsilon,\infty) $ for any
$\epsilon>0$.
If $\nu([-\epsilon,\epsilon])<\infty$,
the process
has finite activity and $\gamma=0$.
If $ \nu([-\epsilon,\epsilon])=\infty$,
the process
has infinite activity; if, in addition, the mass
$\nu((-\infty,-\epsilon]\cup[\epsilon,\infty))$ diverges
at the rate $\epsilon^{-\gamma} $ for some $\gamma>0$ as $ \epsilon\to+0$, then the
Blumenthal--Getoor
index of $ L_{t} $ equals $\gamma$. The higher $\gamma$ gets, the
more frequent the
small jumps become.
Let us now investigate the connection between classes $ \mathfrak
{S}_{\beta} $ and $ \mathfrak{B}_{\gamma} $. First, consider an
example. Let $ L_{t} $ be a tempered stable L\'evy process
with a L\'evy density
\[
\nu(x)=\frac{2^{\gamma}\cdot\gamma}{\Gamma(1-\gamma)}x^{-(\gamma
+1)}\exp\biggl( -\frac{x}{2} \biggr)\mathbf{1}_{(0,\infty)}(x),\qquad
x>0,
\]
where $ \gamma\in(0,1)$. It is clear that $ \nu\in\mathfrak
{B}_{\gamma}$, but what about $ \mathfrak{S}_{\beta} $? Since
\[
\bar\nu(x)=\frac{2^{\gamma}\cdot\gamma}{\Gamma(1-\gamma
)}x^{1-\gamma
}\exp\biggl( -\frac{x}{2} \biggr)\mathbf{1}_{(0,\infty)}(x),
\]
we derive
\[
\mathbf{F}[\bar\nu](u)=\int_{0}^{\infty} e^{\ii u x}\bar\nu(x)
\,dx\asymp2^{\gamma}\gamma(1-\gamma)e^{\ii\pi(1-\gamma/2)
}u^{-2+\gamma
},\qquad u\to+\infty,
\]
by the Erd\'elyi lemma [see \citet{ER}]. Hence, $ \nu$ cannot belong
to $\mathfrak{S}_{\beta} $ as long as $ \beta>1-\gamma$. The message
of this example is that, given the activity index $ \gamma$, the
parameter $ \beta$ determining the smoothness of
$ \bar\nu$ cannot be taken arbitrarily large. The above example can
be straightforwardly generalized to a class of L\'evy densities
supported on $ \mathbb{R}_{+}$. It turns out that if the L\'evy
density $ \nu$ is supported on $ [0,\infty) $, is infinitely smooth
in $ (0,\infty) $ and $ \nu\in\mathfrak{B}_{\gamma}$ for some $
\gamma\in(0,1)$, then $ \nu\in\mathfrak{S}_{\beta} $ for all $
\beta$ satisfying $ 0\leq\beta<1-\gamma$ and
$ \nu\notin\mathfrak{S}_{\beta}$ for $ \beta>1-\gamma$. As a
matter of fact, in the case $ \gamma=0 $ (the finite activity case) the
situation is different and $ \beta$ can be arbitrarily large.
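As a quick numerical cross-check of the tempered stable example: using $ \Gamma(2-\gamma)=(1-\gamma)\Gamma(1-\gamma) $, the total mass $ \int_{0}^{\infty}\bar\nu(x)\,dx=\int_{0}^{\infty}x^{2}\nu(x)\,dx $ equals $ 4\gamma(1-\gamma)$, which a simple midpoint rule reproduces (the quadrature settings are ours):

```python
import math

def nu_bar(x, gamma):
    """bar{nu}(x) = x^2 * nu(x) for the tempered stable density above."""
    c = 2 ** gamma * gamma / math.gamma(1 - gamma)
    return c * x ** (1 - gamma) * math.exp(-x / 2)

gamma = 0.5
N, hi = 200_000, 80.0                         # midpoint rule on (0, 80]
step = hi / N
mass = sum(nu_bar((i + 0.5) * step, gamma) for i in range(N)) * step
print(mass, 4 * gamma * (1 - gamma))          # both approximately 1.0
```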
The above discussion indicates that in the case $ \nu\in\mathfrak
{B}_{\gamma}$ with some $ \gamma>0 $ it is reasonable to look at the
local smoothness
of $ \bar\nu_{k} $ instead of the global one. To this end, fix a point
$ x_{0}\in\mathbb{R} $ and a positive integer $ s\geq1$. For
any $\delta>0 $ and $D>0$ introduce a class $ \mathfrak
{H}_{s}(x_{0},\delta,D) $ of L\'evy densities $ \nu$ defined as
\begin{eqnarray}
\label{pnormx0}
\mathfrak{H}_{s}(x_{0},\delta,D)
&=&\Bigl\{\nu\dvtx\bar\nu(x)\in
C^{s}(]x_{0}-\delta,x_{0}+\delta[),\nonumber\\[-8pt]\\[-8pt]
&&\hphantom{\Bigl\{}
\sup
_{x\in]x_ {0}-\delta,x_{0}+\delta[}\bigl|\bar\nu^{(l)}(x)\bigr| \leq D \mbox{
for } l=1,\ldots, s\Bigr\}.
\nonumber
\end{eqnarray}
\subsection{Assumptions}
\label{ASS}
In order to prove the convergence of $ \widehat{\nu}_{k}(x)$, we need
the assumptions listed below:
\begin{longlist}[(AL1)]
\item[(AL1)] The L\'evy densities $ \nu_{1},\ldots, \nu_{d} $ are in
the class $ \mathfrak{B}_{\gamma} $
for some $ \gamma>0$.
\item[(AL2)] For some $ p>2$, the L\'evy densities $ \nu_{k},
k=1,\ldots,d$, have finite absolute moments of the order $ p$:
\[
\int_{\mathbb{R}} |x|^{p}\nu_{k}(x) \,dx<\infty,\qquad k=1,\ldots,d.
\]
\item[(AT1)] The time change $\mathcal{T}$ is independent of the L\'evy
process $L$ and the sequence $ T_{k}=\mathcal{T}(\Delta k)-\mathcal
{T}(\Delta(k-1)),
k\in\mathbb{N}$, is strictly stationary, $ \alpha$-mixing with the
mixing coefficients $ (\alpha_{T}(j))_{j\in\mathbb{N}} $ satisfying
\[
\alpha_{T}(j)\leq\bar\alpha_{0}\exp(-\bar\alpha_{1} j),\qquad
j\in
\mathbb{N},
\]
for some positive constants $ \bar\alpha_{0} $ and $ \bar\alpha
_{1}$.
Moreover, assume that
\[
\E[\mathcal{T}^{-2/\gamma}(\Delta)]<\infty,\qquad
\E[\mathcal{T}^{2p}(\Delta)]<\infty
\]
with $ \gamma$ and $ p $ being from assumptions (AL1) and (AL2),
respectively.
\item[(AT2)] The Laplace transform $ \mathcal{L}_{t}(z) $ of $
\mathcal
{T}(t) $ fulfills
\[
\mathcal{L}'_{t}(z)=o(1),\qquad \mathcal{L}''_{t}(z)/\mathcal
{L}'_{t}(z)=O(1),\qquad |z| \to\infty,\qquad \Ree z>0.
\]
\item[(AK)] The regularizing kernel $ \mathcal{K} $ is uniformly
bounded, is supported on $ [-1,1] $ and satisfies
\[
\mathcal{K}(u)=1,\qquad u\in[-a_{K},a_{K}],
\]
with some $ 0<a_{K}<1$.
\item[(AH)] The sequence of bandwidths $ h_{n} $ is assumed to satisfy
\[
h^{-1}_{n}=O(n^{1-\delta}),\qquad M_{n}\sqrt{\frac{\log n }{n}}\sqrt
{\frac{1}{h_{n}}\log\frac{1}{h_{n}}}=o(1),\qquad n\to\infty,
\]
for some positive number $ \delta$ fulfilling $ 2/p<\delta\leq1$, where
\[
M_{n}= \max_{l\neq
k}\sup_{\{|u|\leq1/h_{n}\}}\bigl|\phi^{-1}_{l}\bigl(u^{(k)}\bigr)\bigr|.
\]
\end{longlist}
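A concrete kernel meeting (AK) with $ a_{K}=1/2 $ is the trapezoidal (piecewise-linear flat-top) kernel below: it is uniformly bounded, supported on $ [-1,1] $ and identically one on $ [-a_{K},a_{K}]$. It is only continuous, not the infinitely smooth flat-top variant discussed later, but it satisfies the listed conditions; the sketch is our own.

```python
import numpy as np

def trapezoidal_kernel(u, a_K=0.5):
    """Satisfies (AK): K = 1 on [-a_K, a_K], linear taper to 0 at |u| = 1,
    and K = 0 outside the support [-1, 1]."""
    return np.clip((1.0 - np.abs(np.asarray(u, dtype=float))) / (1.0 - a_K), 0.0, 1.0)

K = trapezoidal_kernel([0.0, 0.5, 0.75, 1.0, 2.0])
print(K)   # values 1, 1, 0.5, 0, 0
```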
\begin{rem}
By requiring $ \nu_{k}\in\mathfrak{B}_{\gamma}, k=1,\ldots, d$,
with some $ \gamma>0$, we exclude from our analysis pure compound
Poisson processes and some infinite activity L\'evy processes with $
\gamma=0$. This is mainly done for the sake of brevity: we would like
to avoid additional technical calculations related to the fact that the
distribution of $ Y_{t} $ is not in general absolutely continuous in
this case.
\end{rem}
\begin{rem} Assumption (AT1) is satisfied if, for example, the process~$ \mathcal{T}(t) $ is of the form (\ref{TCD}),
where the rate process $ \rho(u) $ is strictly\vadjust{\goodbreak} stationary,
geometrically $ \alpha$-mixing and fulfills
\begin{equation}
\label{momassinteg}
\E[\rho^{2p}(u)]<\infty,\qquad u\in[0,\Delta],\qquad \E\biggl(
\int
_{0}^{\Delta}\rho(u) \,du \biggr)^{-2/\gamma}<\infty.
\end{equation}
In the case of the Cox--Ingersoll--Ross process $ \rho$ (see Section
\ref{sec52}), assumptions (\ref{momassinteg}) are satisfied for any
$ p>0
$ and any $ \gamma>0$.
\end{rem}
\begin{rem}
Let us comment on assumption (AH). Note that in order to determine
$M_{n}$, we do not need the characteristic function $ \phi(u) $ itself,
but only a lower bound for its tails. Such a lower bound can be constructed
if, for example, a lower bound for the tail of $ \mathcal{L}'_{t}(z) $
and an upper bound for the Blumenthal--Getoor index $ \gamma$ are
available [see \citet{Bel1} for further discussion]. In practice,
of course, one should prefer adaptive methods for choosing $ h_{n}$.
One such method, based on the so-called ``quasi-optimality'' approach,
is proposed and used in Section \ref{TCGamma}. The theoretical analysis
of this
method is left for future research.
\end{rem}
\subsection{Uniform rates of convergence}
\label{URC}
Fix some $ k $ from the set $ \{ 1,2,\ldots,d \}$. Define a weighting function
$ w(x)=\log^{-1/2}(e+|x|) $ and denote
\[
\|\bar\nu_{k}-\widehat\nu_{k}\|_{L_{\infty}(\mathbb
{R},w)}=\sup_{x\in\mathbb{R}}[w(|x|)|\bar\nu_{k}(x)-\widehat\nu_{k}(x)|].
\]
Let $ \xi_{n} $ be a sequence of positive random variables and $ q_{n} $ a
sequence of positive real numbers.
We shall write $ \xi_{n}=O_{\mathrm{a.s.}}(q_{n}) $ if there is a constant $ D>0
$ such that $ \mathbb{P}(\limsup_{n\to\infty} q_{n}^{-1}\xi_{n}\leq
D)=1$. In
the case $ \mathbb{P}(\limsup_{n\to\infty} q_{n}^{-1}\xi_{n}=0)=1 $, we shall
write $ \xi_{n}=o_{\mathrm{a.s.}}(q_{n})$.
\begin{theorem}
\label{UpperBounds}
Suppose that assumptions \textup{(AL1)}, \textup{(AL2), (AT1), (AT2)}, \textup{(AK)}
and \textup{(AH)}
are fulfilled.
Let $ \widehat{\nu}_{k}(x) $ be the estimate for $\bar\nu_{k}(x) $
defined in Section~\ref{ALG}.
If $ \nu_{k}\in\mathfrak{S}_{\beta} $
for some $ \beta>0$, then
\[
\|\bar\nu_{k}-\widehat\nu_{k}\|_{L_{\infty}(\mathbb
{R},w)}=O_{\mathrm{a.s}.}\Biggl( \sqrt{\frac{\log^{3+\varepsilon} n}{n}\int
_{-1/h_{n}}^{1/h_{n}}\mathfrak{R}^{2}_{k}(u) \,du}+ h_{n}^{\beta
}\Biggr)
\]
for arbitrary small $ \varepsilon>0$, where
\[
\mathfrak{R}_{k}(u)=\frac{(1+|\psi'_{k}(u)|)^{2}}{|\mathcal
{L}'_{\Delta
}(-\psi_{k}(u))|}.
\]
\end{theorem}
\begin{cor}
\label{UPPERCOR1}\label{UPPERCOR2}
Suppose that $ \sigma_{k}=0$, $ \gamma\in(0,1] $ in assumption
\textup{(AL1)}
and
\[
|\mathcal{L}'_{\Delta}(z)|\gtrsim\exp(-a|z|^{\eta}),\qquad |z|\to
\infty,\qquad \Ree z\geq0,
\]
for some $ a>0 $ and $ \eta>0$. If $ \mu_{k}>0$, then
\begin{equation}
\label{UBE1}
\|\bar\nu_{k}-\widehat\nu_{k}\|_{L_{\infty}(\mathbb
{R},w)}=O_{\mathrm{a.s}.}\Biggl(\sqrt{\frac{\log^{3+\varepsilon} n}{n}}\exp
(ac\cdot h_{n}^{-\eta} ) +h_{n}^{\beta} \Biggr)
\end{equation}
with some constant $ c>0$.
In the case $ \mu_{k}=0 $ we have
\begin{equation}
\label{UBE2}
\|\bar\nu_{k}-\widehat\nu_{k}\|_{L_{\infty}(\mathbb
{R},w)}=O_{\mathrm{a.s}.}\Biggl(\sqrt{\frac{\log^{3+\varepsilon} n}{n}}\exp
(ac\cdot h_{n}^{-\gamma\eta} ) +h_{n}^{\beta} \Biggr).
\end{equation}
Choosing $ h_{n} $ in such a way that the r.h.s. of (\ref
{UBE1}) and (\ref{UBE2}) is minimized, we obtain
the rates shown in Table \ref{UCR}.
\begin{table}
\tabcolsep=0pt
\caption{Uniform convergence rates for $ \widehat\nu_{k} $ in the case
$ \sigma_{k}=0 $}\label{UCR}
\begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}llcc@{}}
\hline
\multicolumn{2}{@{}c}{$\bolds{
|\mathcal{L}'_{\Delta}(z)|\bm{\gtrsim}
|z|^{-\alpha}}
$} & \multicolumn{2}{c@{}}{$\bolds{ |\mathcal{L}'_{\Delta}(z)|\bm{\gtrsim}\exp
(-a|z|^{\eta}) }$}\\[-4pt]
\multicolumn{2}{@{}c}{\hrulefill} & \multicolumn{2}{c@{}}{\hrulefill} \\
$\bolds{ \mu_{k}>0} $ & \multicolumn{1}{c}{$ \bolds{\mu_{k}=0} $} & $ \bolds{\mu_{k}>0} $
& $ \bolds{\mu_{k}=0} $ \\
\hline
$ n^{-{\beta}/{(2\alpha+2\beta+1)}}$ & $ n^{-{\beta}/{(2\alpha
\gamma+2\beta+1)}} $ & $ \log^{-\beta/\eta} n $ & $ \log^{-\beta
/\gamma
\eta} n $\\
$\quad{}\times\log^{{(3+\varepsilon
)\beta}/{(2\alpha+2\beta+1)}}(n) $ & $\quad{}\times\log^{{(3+\varepsilon)\beta}/{(2\alpha
\gamma
+2\beta+1)}}(n)$\\
\hline
\end{tabular*}
\end{table}
\begin{table}[b]
\tablewidth=270pt
\caption{Uniform convergence rates for $ \widehat\nu_{k} $ in the case
$ \sigma_{k}>0 $} \label{RCS}
\begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}cc@{}}
\hline
$ \bolds{|\mathcal{L}'_{\Delta}(z)|\bm{\gtrsim}|z|^{-\alpha}} $ & $ \bolds{|\mathcal
{L}'_{\Delta}(z)|\bm{\gtrsim}\exp(-a|z|^{\eta}) }$\\
\hline
$ n^{-{\beta}/{(4\alpha+2\beta+1)}}\log^{{(3+\varepsilon
)\beta}/{(4\alpha+2\beta+1)}}(n) $ & $ \log^{-\beta/2\eta} n $ \\
\hline
\end{tabular*}
\end{table}
If $ \gamma\in(0,1] $ in assumption \textup{(AL1)} and
\[
|\mathcal{L}'_{\Delta}(z)|\gtrsim|z|^{-\alpha},\qquad
|z|\to\infty,\qquad
\Ree z\geq0,
\]
for some $ \alpha>0$, then
\[
\|\bar\nu_{k}-\widehat\nu_{k}\|_{L_\infty(\mathbb{R},w)}
=O_{\mathrm{a.s}.}
\Biggl(\sqrt{\frac{\log^{3+\varepsilon} n}{n}} h_{n}^{-1/2-\alpha}
+h_{n}^{\beta} \Biggr)
\]
provided $ \mu_{k}>0$. In the case $ \mu_{k}=0 $, one has
\[
\|\bar\nu_{k}-\widehat\nu_{k}\|_{L_\infty(\mathbb{R},w)}
=O_{\mathrm{a.s}.}
\Biggl(\sqrt{\frac{\log^{3+\varepsilon} n}{n}} h_{n}^{-1/2-\alpha\gamma}
+h_{n}^{\beta} \Biggr).
\]
The choices $ h_{n}=n^{-1/(2(\alpha+\beta) +1)}\log^{(3+\varepsilon
)/(2(\alpha+\beta) +1)}(n) $
and
\[
h_{n}=n^{-1/(2(\alpha\gamma+\beta) +1)}
\log^{(3+\varepsilon)/(2(\alpha\gamma+\beta) +1)}(n)
\]
for the cases $ \mu_{k}>0 $ and $ \mu_{k}=0$, respectively, lead to
the bounds shown in Table~\ref{UCR}. In the case $ \sigma_{k}>0 $, the
rates of convergence are given in Table \ref{RCS}.\vadjust{\goodbreak}
\end{cor}
\begin{rem}
As one can see, assumption (AH) is always fulfilled for the optimal
choices of $ h_{n} $ given in Corollary \ref{UPPERCOR2}, provided $
\alpha\gamma+\beta>0 $
and $ p>2+1/(\alpha\gamma+\beta)$.
\end{rem}
\subsection{Pointwise rates of convergence}
\label{PCR}
Since the transformed L\'evy density~$ \bar\nu_{k} $ is usually not
smooth at $ 0 $ (see Section \ref{CLASSLEVY}), pointwise rates of
convergence might be more informative than the uniform ones if $ \nu
_{k}\in\mathfrak{B}_{\gamma} $ for some $ \gamma>0$. It is remarkable
that the same estimate $ \widehat\nu_{k} $ as before will achieve the
optimal pointwise convergence rates in the class $ \mathfrak
{H}_{s}(x_{0},\delta,D)$, provided the kernel $ \mathcal{K} $
satisfies (AK) and is sufficiently smooth.
\begin{theorem}
\label{pointwiseupper}
Suppose that assumptions \textup{(AL1), (AL2), (AT1), (AT2), (AK)} and
\textup{(AH)}
are fulfilled.
If $ \nu_{k}\in\mathfrak{H}_{s}(x_{0},\delta,D) $ with $\mathfrak
{H}_{s}(x_{0},\delta,D)$ being defined in (\ref{pnormx0}),
for some $ s\geq1, \delta>0, D>0$, and $ \mathcal{K}\in
C^{m}(\mathbb{R}) $ for some $ m\geq s$, then
\begin{equation}
\label{upperineq}
|\widehat\nu_{k}(x_{0})-\bar\nu_{k}(x_{0})|= O_{\mathrm{a.s}.}\Biggl( \sqrt
{\frac
{\log^{3+\varepsilon} n}{n}\int_{-1/h_{n}}^{1/h_{n}}\mathfrak
{R}^{2}_{k}(u) \,du}+ h_{n}^{s}\Biggr)
\end{equation}
with $ \mathfrak{R}_{k}(u) $ as in Theorem \ref{UpperBounds}.
As a result, the pointwise rates of convergence for different
asymptotic behaviors of the Laplace transform $ \mathcal{L}_{t} $
coincide with the ones given in Tables \ref{UCR} and \ref{RCS}, if we
replace $ \beta$ with $ s$.
\end{theorem}
\begin{rem}
If the kernel $ \mathcal{K} $ is infinitely smooth, then it will
automatically ``adapt'' to the pointwise smoothness of $\bar\nu_{k}$,
that is, (\ref{upperineq}) will hold for arbitrarily large $ s\geq1$,
provided $ \nu_{k}\in\mathfrak{H}_{s}(x_{0},\delta,D) $ with some
$\delta>0$ and $D>0$. An example of infinitely smooth kernels
satisfying (AK) is given by the so-called flat-top kernels (see
Section \ref{TCGamma} for the definition).
\end{rem}
\subsection{Lower bounds}
\label{LBOUNDS}
In this section, we derive a lower bound on the minimax risk of an
estimate $ \widehat\nu(x) $
over a class of one-dimensional time-changed L\'evy processes $
Y_{t}=L_{\mathcal{T}(t)} $ with the known distribution
of $ \mathcal{T}$,
such that the L\'evy measure $ \nu$ of the L\'evy process $ L_{t} $
belongs to the class $ \mathfrak{S}_{\beta}\cap\mathfrak{B}_{\gamma}$
with some $ \beta>0 $ and $ \gamma\in(0,1]$.
The following theorem holds.
\begin{theorem}
\label{LowBounds}
Let $ L_{t} $ be a L\'evy process with zero diffusion part,
a~drift~$\mu$ and a L\'evy density $ \nu$. Consider a time-changed L\'evy
process $ Y_{t}=L_{\mathcal{T}(t)}$, where the Laplace transform of
the time change $ \mathcal{T}(t) $ fulfills
\begin{equation}
\label{LCond}
\mathcal{L}_{\Delta}^{(k+1)}(z)/\mathcal{L}_{\Delta}^{(k)}(z)
=O(1),\qquad |z|\rightarrow\infty,\qquad \Ree z\geq0,
\end{equation}
for $ k=0,1,2$, and uniformly in $ \Delta\in[0,1]$. Then
\begin{equation}
\quad
\label{LB}
\liminf_{n\to\infty}\inf_{\widehat\nu} \sup_{\nu\in\mathfrak
{S}_{\beta}\cap\mathfrak{B}_{\gamma} }\mathbb{P}_{(\nu,\mathcal
{T})}\bigl( \|
\bar\nu-\widehat\nu\|_{L_\infty(\mathbb{R},w)}> \varepsilon
h^{\beta
}_{n}\log^{-1}(1/h_{n})\bigr)>0
\end{equation}
for any $ \varepsilon>0 $ and any sequence $ h_{n} $ satisfying
\[
n\Delta^{-1}\bigl[ \mathcal{L}_{\Delta}^{\prime}\bigl(c\cdot
h_{n}^{-\gamma
}\bigr)
\bigr] ^{2}h_{n}^{2\beta+1 }=O(1),\qquad n\to\infty,
\]
in the case $ \mu=0 $
and
\[
n\Delta^{-1}\bigl[ \mathcal{L}_{\Delta}^{\prime}\bigl(c\cdot h_{n}^{-1
}\bigr)
\bigr] ^{2}h_{n}^{2\beta+1 }=O(1),\qquad n\to\infty,
\]
in the case $ \mu>0$, with some positive constant $ c>0$.
Note that the infimum in (\ref{LB}) is taken over all estimators of $
\nu$ based on $ n $ observations of the r.v.~$Y_{\Delta} $ and $ \mathbb{P}
_{(\nu,\mathcal{T})} $ stands for the distribution of $ n $ copies of $
Y_{\Delta}$.
\end{theorem}
\begin{cor}
\label{ExpL}
Suppose that the underlying L\'evy process is driftless, that is, $ \mu=0
$ and $ \mathcal{L}_{t}(z)=\exp(-azt) $ for some $ a>0$,
corresponding to a~deterministic time change process $ \mathcal{T}(t)=at$. Then by taking
\[
h_{n}=\biggl( \frac{\log n - ((2\beta+1)/\gamma) \log\log
n}{2ac\Delta
} \biggr)^{-1/\gamma},
\]
we arrive at
\[
\liminf_{n\to\infty}\inf_{\widehat\nu} \sup_{\nu\in\mathfrak
{S}_{\beta}\cap\mathfrak{B}_{\gamma} }\mathbb{P}_{(\nu,\mathcal
{T})}\bigl(\|
\bar\nu-\widehat\nu\|_{L_{\infty}(\mathbb{R},w)}> \varepsilon
\cdot
\Delta^{\beta/\gamma} \log^{-\beta/\gamma} n\bigr)>0.\vspace*{-3pt}
\]
\end{cor}
\begin{cor}
\label{PolL}
Again let $ \mu=0$. Take $ \mathcal{L}_{t}(z)=1/(1+z)^{\alpha_{0}
t}, \Ree z>0 $ for some $ \alpha_{0} >0$, resulting in a Gamma
process $ \mathcal{T}(t) $ (see Section \ref{TCGamma} for the
definition). Under the choice
\[
h_{n}=(n\Delta)^{-1/(2\alpha\gamma+2\beta+1)}
\]
we get
\[
\liminf_{n\to\infty}\inf_{\widehat\nu} \sup_{\nu\in\mathfrak
{S}_{\beta}\cap\mathfrak{B}_{\gamma} }\mathbb{P}_{(\nu,\mathcal
{T})}\bigl(\|
\bar\nu-\widehat\nu\|_{L_{\infty,w}(\mathbb{R})}> \varepsilon
\cdot
(n\Delta)^{-\beta/(2\alpha\gamma+2\beta+1)}\log^{-1}n\bigr)>0,\vspace*{-3pt}
\]
where $ \alpha= \alpha_{0} \Delta+1$.
\end{cor}
\begin{rem}
\label{HighFreq}
Theorem \ref{LowBounds} continues to hold for $ \Delta\to0 $ and
therefore can be used to derive minimax lower bounds for the risk of $
\widehat\nu$ in the high-frequency setup. As can be seen from
Corollaries \ref{ExpL} and \ref{PolL}, the rates will strongly depend
on the specification of the time change process $
\mathcal{T}$.\vspace*{-3pt}
\end{rem}
The pointwise rates of convergence obtained in Theorem \ref{pointwiseupper}
turn out to be optimal over the class
$ \mathfrak{H}_{s}(x_{0},\delta,D)\cap\mathfrak{B}_{\gamma} $ with $
s\geq1$,
$\delta>0$, $ x_{0}\in\mathbb{R}$, $D>0$ and $ \gamma\in(0,1] $ as
the next theorem shows.
\begin{theorem}
\label{LowBoundsPW}
$\!\!\!$Let $ L_{t} $ be a L\'evy process with zero diffusion part, a~drift~$
\mu$ and a L\'evy density $ \nu$. Consider a time-changed
L\'evy
process $ Y_{t}=L_{\mathcal{T}(t)}$, where the Laplace transform of
the time change $ \mathcal{T}(t) $ fulfills (\ref{LCond}). Then
\begin{equation}
\label{LBP}
\liminf_{n\to\infty}\inf_{\widehat\nu}\!\sup_{\nu\in\mathfrak
{H}_{s}(x_{0},\delta,D)\cap\mathfrak{B}_{\gamma} }\!\mathbb{P}_{(\nu,\mathcal
{T})}\bigl( |\bar\nu(x_{0})\,{-}\,\widehat\nu(x_{0})|\,{>}\, \varepsilon
h^{s}_{n}\log^{-1}(1/h_{n})\bigr)>0\hspace*{-37pt}\vadjust{\goodbreak}
\end{equation}
for $ s\geq1$, $\delta>0$, $D>0$, any $ \varepsilon>0 $ and any
sequence $ h_{n} $ satisfying
\[
n\Delta^{-1}\bigl[ \mathcal{L}_{\Delta}^{\prime}\bigl(c\cdot
h_{n}^{-\gamma
}\bigr)
\bigr] ^{2}h_{n}^{2s+1 }=O(1),\qquad n\to\infty,
\]
in the case $ \mu=0 $
and
\[
n\Delta^{-1}\bigl[ \mathcal{L}_{\Delta}^{\prime}\bigl(c\cdot h_{n}^{-1
}\bigr)
\bigr] ^{2}h_{n}^{2s+1 }=O(1),\qquad n\to\infty,
\]
in the case $ \mu>0$, with some positive constant $ c>0$.
\end{theorem}
\subsection{Extensions}
\label{ext}
\subsubsection*{\texorpdfstring{One-dimensional time-changed L\'evy models}{One-dimensional
time-changed Levy models}} Let us consider
a class of one-dimensional time-changed L\'evy models (\ref{CFY1D})
with the known time change process, that is, the known function $
\mathcal{L}_{t} $ for all $ t>0$.
This class of models trivially includes L\'evy processes without time
change [by setting $ \mathcal{L}_{t}(z)=\exp(-tz) $] studied in
\citet{NR} and \citet{CG}.
We have in this case
\begin{equation}\quad
\label{PhiDeriv23}
\psi''_{1}(u)=-\frac{\phi''(u)\mathcal{L}'_{\Delta}(-\psi
_{1}(u))-[\phi
'(u)]^{2}\mathcal{L}''_{\Delta}(-\psi_{1}(u))/\mathcal{L}'_{\Delta
}(-\psi
_{1}(u))}{[\mathcal{L}'_{\Delta}(-\psi_{1}(u))]^{2}}
\end{equation}
with
\[
\psi_{1}(u)=-\mathcal{L}_{\Delta}^{-}(\phi(u)),
\]
where $ \mathcal{L}_{\Delta}^{-} $ is an inverse function for $
\mathcal
{L}_{\Delta}$.
Thus, $ \psi''_{1}(u) $ again admits a ratio-type representation involving the
derivatives of the c.f. $\phi$ up to second order, which agrees with
the estimate proposed in \citet{CG} for the case of pure L\'evy
processes. Although we do not study the case of one-dimensional models
in this work, our analysis can be easily adapted to this situation as
well. In particular, the derivation of the pointwise convergence rates
can be directly carried over to this situation.
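As a quick numerical sanity check of the ratio-type formula (\ref{PhiDeriv23}), the following sketch (Python with NumPy; the Gamma time change and the Brownian base process, for which $\psi_{1}(u)=-u^{2}/2$, are illustrative choices of ours and not part of the model) recovers $\psi''_{1}(u)$ from finite-difference derivatives of the observable c.f. $\phi$. Note that the chain rule places the squared first derivative $[\phi'(u)]^{2}$ in front of $\mathcal{L}''_{\Delta}$.

```python
import numpy as np

theta, lam, Delta = 1.0, 1.0, 1.0   # Gamma time change: L_D(z) = (1 + z/lam)^(-theta*D)

def L(z):    return (1.0 + z / lam) ** (-theta * Delta)
def dL(z):   return -(theta * Delta / lam) * (1.0 + z / lam) ** (-theta * Delta - 1.0)
def ddL(z):  return (theta * Delta * (theta * Delta + 1.0) / lam**2) \
                    * (1.0 + z / lam) ** (-theta * Delta - 2.0)
def Linv(y): return lam * (y ** (-1.0 / (theta * Delta)) - 1.0)   # inverse of L_D

# base Levy process: Brownian motion, psi_1(u) = -u^2/2, hence phi(u) = L(u^2/2)
def phi(u):  return L(u * u / 2.0)

u, h = 0.7, 1e-5
phi1 = (phi(u + h) - phi(u - h)) / (2 * h)             # phi'(u), central difference
phi2 = (phi(u + h) - 2 * phi(u) + phi(u - h)) / h**2   # phi''(u)

psi1 = -Linv(phi(u))                                   # recovered exponent psi_1(u)
Lp, Lpp = dL(-psi1), ddL(-psi1)
# ratio-type formula; the chain rule gives (phi'(u))^2 in the second term
psi1_dd = -(phi2 * Lp - phi1**2 * Lpp / Lp) / Lp**2
print(psi1, psi1_dd)
```

For Brownian motion $\psi_{1}(u)=-u^{2}/2$, so the recovered values should be $-0.245$ and $\psi''_{1}(u)=-1$ up to finite-difference error.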
\subsubsection*{The case of the unknown $ (\sigma_{k})$}
One way to proceed in the case of the unknown $ (\sigma_{k}) $ and $
\nu
_{k}\in\mathfrak{B}_{\gamma} $ with $ \gamma<2 $ is to define $
\widetilde\nu_{k}(x)=x^{4}\nu_{k}(x)$.
Assuming $ \int\widetilde\nu_{k}(x) \,dx<\infty$, we get
\[
\psi^{(4)}_{k}(u)=\int_{\mathbb{R}}e^{\ii ux}\widetilde\nu_{k}(x) \,dx.
\]
Hence, in the above situation
one can apply the regularized Fourier inversion formula to an estimate
of $ \psi^{(4)}_{k}(u) $ instead of $ \psi''_{k}(u)$.
\subsubsection*{Estimation of $ \mathcal{L}_{\Delta}$} Let us first
estimate $ \psi_{k}$. Set
\[
\widehat\psi_{k}(u)=\Delta^{-1}\widehat\phi_{l}(\mathbf{0})\int
_{0}^{u}\frac{\widehat\phi_{k}(v^{(k)})}
{\widehat\phi_{l}(v^{(k)})} \,dv.\vadjust{\goodbreak}
\]
Under Assumptions (AL2), (AT1), (AT2), (AK) and (AH) we derive
\begin{equation}
\label{psibound}
\|\psi_{k}-\widehat\psi_{k}\|_{L_{\infty}(\mathbb
{R},w)}=O_{\mathrm{a.s}.}\Biggl( \sqrt{\frac{\log^{3+\varepsilon}
n}{n}}\Biggr)
\end{equation}
with a weighting function
\[
w(u)=\biggl[ \int_{0}^{u}\frac{1+|\psi'_{k}(v)|}{|\mathcal
{L}'_{\Delta
}(-\psi_{k}(v))|} \,dv \biggr]^{-1}.
\]
Now let us define an estimate for $ \mathcal{L}_{\Delta} $ as a
solution of the following optimization problem
\begin{equation}
\label{Lest}
\widehat{\mathcal{L}}_{\Delta}=\arg\inf_{\mathcal{L}\in\mathfrak
{M}_{\Delta}}\sup_{u\in\mathbb{R}}\bigl\{ w(u)\bigl| \mathcal
{L}(-\widehat\psi_{k}(u))-\widehat\phi\bigl(u^{(k)}\bigr) \bigr| \bigr\},
\end{equation}
where $ \mathfrak{M}_{\Delta} $ is the set of completely monotone
functions $ \mathcal{L} $ satisfying $ \mathcal{L}(0)=1 $ and $
\mathcal
{L}'(0)=-\Delta$. Simple calculations and the bound (\ref
{psibound}) yield
\begin{equation}
\label{Lbound}
\sup_{u\in\mathbb{R}}\{ w(u)| \widehat{\mathcal
{L}}_{\Delta
}(-\psi_{k}(u))-\mathcal{L}_{\Delta}(-\psi_{k}(u)) |
\}
=O_{\mathrm{a.s}.}\Biggl( \sqrt{\frac{\log^{3+\varepsilon} n}{n}}\Biggr).
\end{equation}
Since any function $ \mathcal{L} $ from $ \mathfrak{M}_{\Delta} $
has a representation
\[
\mathcal{L}(u)=\int_{0}^{\infty}e^{-u x}\,dF(x)
\]
with some distribution function $ F $ satisfying $ \int x
\,dF(x)=\Delta
$, we can replace the optimization
over $ \mathfrak{M}_{\Delta} $ in (\ref{Lest}) by the optimization over the
corresponding set of distribution functions. The advantage of the
latter approach is that herewith we can directly get an estimate for
the distribution function of the r.v. $ \mathcal{T}(\Delta)$. A
practical implementation of the estimate (\ref{Lest}) is still to be
worked out, as the optimization over the set $ \mathfrak{M}_{\Delta} $
is not feasible and should be replaced by the optimization over
suitable approximation classes (sieves). Moreover, the ``optimal''
weights in (\ref{Lest}) depend on the unknown~$ \mathcal{L}$. However, it turns out that it is possible to use any
weighting function which is dominated by $ w(u)$, that is, one needs only
some lower bounds for~$\mathcal{L}'_{\Delta}$.
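A minimal proof-of-concept sketch of the sieve idea, assuming SciPy and purely illustrative choices of ours (mixing atoms on a fixed grid, the Laplace transform of the $\operatorname{Exp}(1)$ law as target, and a least-squares surrogate in place of the weighted sup criterion in (\ref{Lest})):

```python
import numpy as np
from scipy.optimize import minimize

Delta = 1.0
target = lambda u: 1.0 / (1.0 + u)   # Laplace transform of Exp(1); mean = Delta

grid = np.linspace(0.05, 6.0, 25)    # sieve: atoms x_j of the mixing distribution F
U = np.linspace(0.0, 5.0, 60)        # evaluation points
A = np.exp(-np.outer(U, grid))       # A @ w evaluates L(u) = sum_j w_j exp(-x_j u)

cons = [{'type': 'eq', 'fun': lambda w: w.sum() - 1.0},        # L(0) = 1
        {'type': 'eq', 'fun': lambda w: w @ grid - Delta}]     # -L'(0) = Delta
res = minimize(lambda w: np.sum((A @ w - target(U)) ** 2),
               np.full(grid.size, 1.0 / grid.size),
               bounds=[(0.0, 1.0)] * grid.size, constraints=cons, method='SLSQP')
w = res.x
sup_err = np.max(np.abs(A @ w - target(U)))
print(sup_err)
```

The two equality constraints encode $\mathcal{L}(0)=1$ and $\mathcal{L}'(0)=-\Delta$; the weights $w_{j}$ directly yield a discrete estimate of the distribution of $\mathcal{T}(\Delta)$.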
\begin{rem}
\label{compfuncbounds}
It is interesting to compare (\ref{psibound}) and (\ref{Lbound}) with
Theorem~3.2 in
\citet{HM}. At first sight it may seem strange that, while the
rates of convergence for our ``link'' function $ \mathcal{L}_{\Delta} $
and the ``components'' $ \psi_{k} $ depend on
the tail behavior of $ \mathcal{L}_{\Delta}'$, the rates in
\citet{HM} rely only on the smoothness of
the link function and the components. The main reason for this is that
the derivative of the link function in the above paper is assumed to be
uniformly bounded from below [assumption (A8)], a restriction that is
hardly justifiable in our setting. The convergence analysis in the
unbounded case is, in our opinion, an important contribution of this
paper to the problem of estimating composite functions and can be
carried over to other settings.
\end{rem}
\subsection{Discussion}
\label{DISC}
As can be seen, the estimate $ \widehat\nu_{k} $ can exhibit various
asymptotic behavior
depending on the underlying L\'evy process $ L_{t} $ and the
time-change $ \mathcal{T}(t)$.
In particular, if the Laplace transform $ \mathcal{L}_{t}(z) $ of $
\mathcal{T} $ dies off at exponential rate as $ \Ree z\to+\infty$ and
$ \mu_{k}=0 $, then the rates of convergence of $ \widehat\nu_{k} $
are logarithmic and depend on the Blumenthal--Getoor index of
the L\'evy process $ L_{t}$. The larger the Blumenthal--Getoor
index, the slower the rates and
the more difficult the estimation problem becomes. For polynomially
decaying $ \mathcal{L}_{t}(z) $ one gets polynomial convergence rates
that also depend on the Blumenthal--Getoor index of $ L_{t} $.
Let us also note that the uniform rates of convergence are usually
rather slow, since $ \beta<1-\gamma$
in most situations. The pointwise convergence rates for points
$ x_{0}\neq0 $ can, on the contrary, be very fast.
The rates obtained turn out to be optimal up to a logarithmic factor in
the minimax sense over the classes $ \mathfrak{S}_{\beta}\cap
\mathfrak
{B}_{\gamma} $ and $ \mathfrak{H}_{s}(x_{0},\delta,D)\cap\mathfrak
{B}_{\gamma}$.
\section{Simulation study}
\label{SIM}
In our simulation study, we consider two models based on time-changed
normal inverse Gaussian (NIG) L\'evy processes.
The NIG L\'evy process is a relatively new
class of processes introduced in \citet{BN} as a model for log
returns of stock prices.
The processes of this type are characterized by the property that their
increments have NIG distribution. \citet{BN} considered classes of
normal variance--mean
mixtures and defined the NIG distribution as the case when the
mixing distribution is inverse Gaussian.
Shortly after its introduction, it was shown that the NIG
distribution fits the log returns of German stock market
data very well, making NIG L\'evy processes of great interest for
practitioners. A NIG distribution has in general four parameters: $
\alpha\in\mathbb{R}_{+}$, $ \varkappa\in\mathbb{R}$,
$\delta\in\mathbb{R}_{+} $ and $ \mu\in\mathbb{R} $ with $
|\varkappa
|<\alpha$. Each parameter in the $ \operatorname{NIG}(\alpha, \varkappa,
\delta,\mu) $ distribution can be interpreted
as having a different effect on the shape of the distribution: $ \alpha
$ is responsible for the tail heaviness (steepness), $ \varkappa$ has
to do with symmetry, $ \delta$ scales the distribution and $ \mu$
determines its mean value. The NIG distribution is infinitely divisible
with c.f.
\[
\phi(u)=\exp\bigl\{ \delta\bigl( \sqrt{\alpha^{2}-\varkappa
^{2}}-\sqrt
{\alpha^{2}-(\varkappa+\ii u)^{2}} \bigr)+\ii\mu u \bigr\}.
\]
Therefore, one can define the NIG L\'evy process $ (L_{t})_{t\geq0} $ which
starts at zero and has independent and stationary increments such that
each increment $ L_{t+\Delta}-L_{t} $ has $ \operatorname{NIG}(\alpha,
\varkappa, \Delta\delta,\Delta\mu) $ distribution.
The NIG process has no diffusion component making it a pure jump
process with the L\'evy density
\begin{equation}
\label{NIGNU}
\nu(x)=\frac{2\alpha\delta}{\pi}\frac{\exp(\varkappa
x)K_{1}(\alpha|x|)}{|x|},
\end{equation}
where $ K_{\lambda}(z) $ is the modified Bessel function of the third
kind. Taking into account the asymptotic relations
\[
K_{1}(z)\asymp2/z,\qquad z\to+0,\quad \mbox{and}\quad K_{1}(z)\asymp\sqrt
{\frac{\pi}{2z}} e^{-z},\qquad z\to+\infty,
\]
we conclude that $ \nu\in\mathfrak{B}_{1} $ and $ \nu\in\mathfrak
{H}_{s}(x_{0},\delta,D) $ for arbitrarily large $ s>0 $ and some $\delta
>0, D>0$, if $ x_{0}\neq0$. Moreover, assumption (AL2) is
fulfilled for any $ p>0$.
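The L\'evy density (\ref{NIGNU}) and its behavior near the origin are easy to check numerically; a short sketch assuming SciPy (whose \texttt{scipy.special.kv} implements $K_{\lambda}$):

```python
import numpy as np
from scipy.special import kv   # modified Bessel function K_lambda

def nig_levy_density(x, alpha, varkappa, delta):
    """Levy density nu(x) of the NIG(alpha, varkappa, delta, mu) process
    (the location parameter mu does not enter the Levy density)."""
    ax = np.abs(x)
    return (2.0 * alpha * delta / np.pi) * np.exp(varkappa * x) * kv(1, alpha * ax) / ax

# near the origin K_1(z) ~ 1/z, so x^2 * nu(x) -> 2*delta/pi
print(1e-3**2 * nig_levy_density(1e-3, 1.0, -0.05, 1.0), 2.0 / np.pi)
```

The ratio $\nu(x)/\nu(-x)=e^{2\varkappa x}$ also follows directly from the formula and can serve as a second check.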
Furthermore, the identity
\[
\frac{d^{2}}{du^{2}} \log\phi(u)=-\delta\alpha^{2}/\bigl(\alpha
^{2}-(\varkappa
+\ii u)^{2}\bigr)^{3/2}
implies $ \nu\in\mathfrak{S}_{2-\delta} $ for arbitrarily small $
\delta>0$. In the next sections we are going to study two time-changed NIG
processes: one uses the Gamma process as a~time change and the other
employs an integrated CIR process to model~$ \mathcal{T} $.
\subsection{Time change via a Gamma process}
\label{TCGamma}
A Gamma process is a L\'evy process whose increments have a Gamma
distribution, so that
$ \mathcal{T} $ is a pure-jump increasing L\'evy process with the
L\'evy density
\[
\nu_{\mathcal{T}}(x)=\theta x^{-1}\exp(-\lambda x),\qquad x\geq0,
\]
where the parameter $ \theta$ controls the rate of jump arrivals and
the scaling parameter $ \lambda$ inversely controls the jump size.
The Laplace transform of $ \mathcal{T} $ is of the form
\[
\mathcal{L}_{t}(z)=(1+z/\lambda)^{-\theta t},\qquad \Ree z\geq0.
\]
It follows from the properties of the Gamma and the corresponding
inverse Gamma distributions that assumptions (AT1) and (AT2) are
fulfilled for the Gamma process $ \mathcal{T}$,
provided $ \theta\Delta>2/\gamma$.
Consider now the time-changed L\'evy process $ Y_{t}=L_{\mathcal{T}(t)}
$ where $ L_{t}=(L^{1}_{t},L^{2}_{t},L^{3}_{t}) $ is a
three-dimensional L\'evy process with independent NIG components and $
\mathcal{T} $ is a Gamma process. Note that the process $ Y_{t} $ is a
multidimensional L\'evy process, since $ \mathcal{T} $ is itself a
L\'evy process. Let us be more specific and take the $ \Delta
$-increments of the L\'evy processes $ L^{1}_{t}$,
$ L^{2}_{t} $ and $ L^{3}_{t} $ to have $ \operatorname{NIG}(1, -0.05,
1,-0.5)$, $ \operatorname{NIG}(3, -0.05, 1,-1) $ and $ \operatorname
{NIG}(1, -0.03, 1, 2) $ distributions, respectively. Take also $ \theta
=1 $ and $ \lambda=1 $ for the parameters of the Gamma process $
\mathcal{T}$. Next, fix an equidistant grid on $ [0,10] $ of
$ n=1\mbox{,}000 $ points and simulate a discretized trajectory of the process
$ Y_{t}$.
Let us stress that the dependence structure between the components of $
Y_{t} $ is rather flexible (although they are uncorrelated) and can be
efficiently controlled by the parameters of the corresponding Gamma
process $ \mathcal{T}$.
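A trajectory of $Y_{t}$ can be simulated by drawing the Gamma time-change increments first and then, conditionally on them, NIG increments via the inverse-Gaussian variance--mean mixture representation. A sketch assuming NumPy; the flooring of numerically negligible time increments is a stability safeguard of ours, not part of the model:

```python
import numpy as np

rng = np.random.default_rng(0)

def nig_increments(alpha, beta, delta, mu, times, rng):
    """Increments of a NIG(alpha, beta, delta, mu) Levy process over elapsed
    times t, via the mixture X = mu*t + beta*V + sqrt(V)*N with
    V ~ IG(mean = t*delta/gamma, shape = (t*delta)^2), gamma = sqrt(alpha^2-beta^2)."""
    g = np.sqrt(alpha**2 - beta**2)
    d = delta * times
    V = rng.wald(mean=d / g, scale=d**2)      # inverse-Gaussian mixing variable
    return mu * times + beta * V + np.sqrt(V) * rng.normal(size=times.shape)

n = 1000                                      # equidistant grid on [0, 10]
Delta = 10.0 / n
theta, lam = 1.0, 1.0
# Gamma time-change increments; tiny values are floored, since they are
# statistically indistinguishable from zero but destabilize the IG sampler
T_inc = np.maximum(rng.gamma(shape=theta * Delta, scale=1.0 / lam, size=n), 1e-6)

params = [(1.0, -0.05, 1.0, -0.5), (3.0, -0.05, 1.0, -1.0), (1.0, -0.03, 1.0, 2.0)]
Y_inc = np.column_stack([nig_increments(a, vk, dl, m, T_inc, rng)
                         for (a, vk, dl, m) in params])
Y = np.vstack([np.zeros(3), np.cumsum(Y_inc, axis=0)])   # discretized path of Y_t
print(Y.shape)
```

Since all three components share the same time-change increments, their dependence is driven entirely by the parameters of $\mathcal{T}$, as stressed above.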
Next, we construct an estimate $ \widehat\nu_{1} $ as described in
Section \ref{ALG}.
We first estimate the derivatives $ \phi_{1}$, $ \phi_{2}$, $ \phi_{11}
$ and $ \phi_{12} $ by means of (\ref{PhiDeriv1Est}) and~(\ref
{PhiDeriv2Est}). Then we estimate $ \psi''_{1}(u) $ using the formula
(\ref{PsiDeriv2Est1}) with $ k=1 $ and $ l=2$. Finally, we get $
\widehat\nu_{1} $ from (\ref{NUEST}) where the kernel $ \mathcal{K} $
is chosen to be the so-called flat-top kernel of the form
\[
\mathcal{K}(x)=
\cases{
1, &\quad$|x|\leq0.05$, \vspace*{2pt}\cr
\displaystyle \exp\biggl( -\frac{e^{-1/(|x|-0.05)}}{1-|x|} \biggr), &\quad
$0.05<|x|<1$,\cr
0, &\quad$|x|\geq1$.}
\]
The flat-top kernels obviously satisfy assumption (AK).
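The flat-top kernel above is straightforward to implement in vectorized form (a NumPy sketch; note that outside $(-1,1)$ the kernel is identically zero, so no overflow can occur in the exponentials):

```python
import numpy as np

def flat_top_kernel(x):
    """Flat-top kernel: 1 on [-0.05, 0.05], smooth on 0.05 < |x| < 1, 0 outside."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    out[x <= 0.05] = 1.0
    mid = (x > 0.05) & (x < 1.0)
    out[mid] = np.exp(-np.exp(-1.0 / (x[mid] - 0.05)) / (1.0 - x[mid]))
    return out

print(flat_top_kernel(np.array([0.0, 0.3, 0.9, 1.2])))
```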
Thus, all assumptions of Theorem \ref{UpperBounds} are fulfilled and
Corollary \ref{UPPERCOR2} leads to the following convergence rates for
the estimate $ \widehat\nu_{1} $ of the function $ \bar\nu
_{1}(x)=x^{2}\nu(x) $:
\[
\|\bar\nu_{1}-\widehat\nu_{1}\|_{L_\infty(\mathbb{R},w)}
=O_{\mathrm{a.s}.}
\bigl(n^{-({1-\delta'})/{(\theta\Delta+5/2)}}\log^{({3+\epsilon
'})/{(\theta\Delta+5/2)}}(n) \bigr),\qquad n\to\infty,
\]
with arbitrary small positive numbers $ \delta' $ and $ \epsilon'$,
provided the sequence
$ h_{n} $ is chosen as in Corollary \ref{UPPERCOR2}.
Let us turn to the finite sample performance of the estimate $ \widehat
\nu_{1}$.
It turns out that the choice of the sequence
$ h_{n} $ is crucial for a good performance of~$ \widehat\nu_{1} $. For this choice,
we adopt the so-called ``quasi-optimality'' approach proposed in
\citet{BR}. This approach aims to perform model selection in
inverse problems without taking the noise level into account.
Although one can prove the optimality of this criterion on average
only, it leads in many situations to quite reasonable results. In order
to implement the ``quasi-optimality'' algorithm in our situation, we
first fix a sequence of bandwidths $ h_{1},\ldots, h_{L} $ and
construct the estimates $ \widehat\nu^{(1)}_{1},\ldots, \widehat\nu_{1}^{(L)} $ using
the formula (\ref{NUEST}) with bandwidths
$ h_{1},\ldots, h_{L}$, respectively. Then one finds
$ l^{\star}=\argmin_{l} f(l) $ with
\[
f(l)=\bigl\| \widehat\nu_{1}^{(l+1)}- \widehat\nu_{1}^{(l)} \bigr\|
_{L_{1}(\mathbb{R})},\qquad l=1,\ldots,L.
\]
Denote by $ \widetilde\nu_{1}=\widehat\nu^{(l^{\star})}_{1} $ a new
adaptive estimate for $ \bar\nu_{1}$. In our implementation of the
``quasi-optimality'' approach, we take $ h_{l}=0.5+0.1\times l$, $
l=1,\ldots, 40$.
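The selection rule itself is a few lines of code. In the sketch below (NumPy; the synthetic family of ``estimates'' is an illustrative stand-in of ours that only mimics a bias--variance trade-off, not the actual estimator $\widehat\nu^{(l)}_{1}$), $f(l)$ is the $L_{1}$ distance between consecutive estimates and $l^{\star}$ its minimizer:

```python
import numpy as np

def quasi_optimal_index(estimates, dx):
    """Quasi-optimality choice: minimize f(l) = ||nu^(l+1) - nu^(l)||_{L1},
    with the L1 norm approximated by a Riemann sum on a grid of spacing dx."""
    f = np.array([np.sum(np.abs(estimates[l + 1] - estimates[l])) * dx
                  for l in range(len(estimates) - 1)])
    return int(np.argmin(f)), f

# toy illustration: each "estimate" has noise ~ 1/h and bias ~ h^2
rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 301)
dx = x[1] - x[0]
truth = np.exp(-x**2)
noise = rng.normal(size=x.size)
hs = 0.5 + 0.1 * np.arange(1, 41)            # h_l = 0.5 + 0.1 l, l = 1,...,40
ests = [truth + 0.1 * noise / h + 0.02 * h**2 * np.abs(x) for h in hs]
l_star, f = quasi_optimal_index(ests, dx)
print(hs[l_star])
```

For this synthetic family $f(l)$ is U-shaped, so the minimizer falls in the interior of the bandwidth range, as in the left panel of Figure~\ref{NIGGammaNUEst}.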
\begin{figure}[b]
\includegraphics{901f01.eps}
\caption{Left-hand side: objective function $ f(l)$ for
``quasi-optimality'' approach versus the corresponding bandwidths $
h_{l}$, $l=1,\ldots, 40 $. Right-hand side: adaptive estimate $
\widetilde\nu_{1} $ (dashed line) together with the true function $
\bar\nu_{1}$ (solid line).}
\label{NIGGammaNUEst}
\end{figure}
In Figure~\ref{NIGGammaNUEst}, the sequence $ f(l)$, $l=1,\ldots,
40$, is plotted. On the right-hand side of
Figure \ref{NIGGammaNUEst}, we show the resulting estimate $
\widetilde\nu_{1} $ together with the true function $ \bar\nu_{1}$.
Based on the estimate $ \widetilde\nu_{1}$, one can estimate some
functionals of~$\bar\nu_{1}$. For example, we have $ \int
\widetilde
\nu_{1}(x) \,dx=1.049053 $ [$ \int\bar\nu_{1}(x) \,dx=1.015189 $].
\subsection{Time change via an integrated CIR process}\label{sec52}
Another possibility to construct a time-changed L\'evy process from the
NIG L\'evy process $ L_{t} $
is to use a time change of the form (\ref{TCProcess}) with some rate
process $ \rho(t)$. A possible candidate for the rate of the time
change is given by the Cox--Ingersoll--Ross process (CIR process). The
CIR process is defined as a solution of the following SDE:
\[
dZ_{t} = \kappa(\eta-Z_{t}) \,dt + \zeta\sqrt{Z_{t}} \,dW_{t},\qquad
Z_{0}=1,
\]
where $ W_{t} $ is a Wiener process.
This process is mean reverting with $ \kappa>0 $ being the speed of
mean reversion, $ \eta>0 $ being the long-run mean rate and $ \zeta>0 $
controlling the volatility of $ Z_{t} $. Additionally, if $ 2\kappa
\eta
>\zeta^{2} $ and $ Z_{0} $ has Gamma distribution, then $ Z_{t} $ is
stationary and exponentially $ \alpha$-mixing [see, e.g., \citet{MH}].
The time change $ \mathcal{T} $ is then defined as
\[
\mathcal{T}(t)=\int_{0}^{t}Z_{s} \,ds.
\]
Simple calculations show that the Laplace transform of $ \mathcal{T}(t)
$ is given by
\[
\mathcal{L}_{t}(z)=\frac{\exp(\kappa^{2}\eta t/\zeta^{2})\exp
(-2z/(\kappa+\gamma(z)\coth(\gamma(z)t/2)))}{(\cosh(\gamma
(z)t/2)+\kappa\sinh(\gamma(z)t/2)/\gamma(z) )^{2\kappa\eta/ \zeta
^{2} }}
\]
with $ \gamma(z)=\sqrt{\kappa^{2}+2\zeta^{2}z}$. It is easy to see
that $ \mathcal{L}_{t}(z)\asymp\exp( -\frac{\sqrt{2z}}{\zeta
}[1+t\kappa\eta] ) $ as $ |z|\to\infty$ with $ \Ree z \geq0$.
Moreover, it can be shown that $ \E|\mathcal{T}(t)|^{p}<\infty$ for
any $ p\in\mathbb{R}$.
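The closed-form Laplace transform above can be checked against a Monte Carlo simulation of the integrated CIR process. A sketch assuming NumPy; the Euler scheme with truncation at zero and the trapezoidal quadrature for $\mathcal{T}(t)$ are common discretization choices of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
kappa, eta, zeta = 1.0, 1.0, 0.1
t, m, npaths = 1.0, 1000, 2000
dt = t / m

# Euler scheme (truncated at zero) for dZ = kappa*(eta - Z)dt + zeta*sqrt(Z)dW
Z = np.full(npaths, 1.0)                       # Z_0 = 1
T = np.zeros(npaths)                           # T(t) = int_0^t Z_s ds
for _ in range(m):
    dW = rng.normal(scale=np.sqrt(dt), size=npaths)
    Zn = np.maximum(Z + kappa * (eta - Z) * dt + zeta * np.sqrt(Z) * dW, 0.0)
    T += 0.5 * (Z + Zn) * dt                   # trapezoidal quadrature in s
    Z = Zn

# closed-form Laplace transform of T(t) at z (coth = 1/tanh)
z = 0.5
gam = np.sqrt(kappa**2 + 2 * zeta**2 * z)
Lt = (np.exp(kappa**2 * eta * t / zeta**2)
      * np.exp(-2 * z / (kappa + gam / np.tanh(gam * t / 2)))
      / (np.cosh(gam * t / 2) + kappa * np.sinh(gam * t / 2) / gam)
        ** (2 * kappa * eta / zeta**2))
mc = np.mean(np.exp(-z * T))                   # Monte Carlo estimate of L_t(z)
print(Lt, mc)
```

With $\zeta$ small, $\mathcal{T}(t)$ concentrates around $\eta t$ and the Monte Carlo estimate agrees with the closed form to well within its sampling error.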
Let $ L_{t} $ be again a three-dimensional NIG L\'evy process with
independent components distributed as in Section \ref{TCGamma}.
Construct the time-changed process $ Y_{t}=L_{\mathcal{T}(t)}$. Note
that the process $ Y_{t} $ is no longer a~L\'evy process and has
in general dependent increments. Let us estimate $ \bar\nu_{1} $, the
transformed L\'evy density of the first component of $ L_{t}$. First,
note that according to Theorem \ref{UpperBounds}, the estimate $
\widehat\nu_{1} $ constructed as described in Section~\ref{ALG} has
the following logarithmic convergence rates
\[
\|\bar\nu_{1}-\widehat\nu_{1}\|_{L_\infty(\mathbb{R},w)}
=O_{\mathrm{a.s}.}
\bigl(\log^{-2(2-\delta)} (n) \bigr),\qquad n\to\infty,
\]
for arbitrary small $ \delta>0$, provided the bandwidth sequence is
chosen in the optimal way. Finite sample performance of $ \widehat\nu
_{1} $ with the choice of $ h_{n} $ based on the ``quasi-optimality''
approach is illustrated in
Figure \ref{NIGCIRNUEst} where the sequence of estimates $ \widehat
\nu^{(1)}_{1},\ldots, \widehat\nu^{(L)}_{1} $ was constructed from the
time series $ Y_{\Delta},\ldots, Y_{n\Delta} $ with $ n=5\mbox{,}000 $
and $ \Delta=0.1$.
\begin{figure}
\includegraphics{901f02.eps}
\caption{Left-hand side: objective function $ f(l)$ for the
``quasi-optimality'' approach versus the corresponding bandwidths $
h_{l} $. Right-hand side: adaptive estimate $ \widetilde\nu_{1} $
(dashed line) together with the true function $ \bar\nu_{1}$ (solid line).}
\label{NIGCIRNUEst}
\end{figure}
The parameters of the used CIR process are $ \kappa=1 $, $ \eta=1 $ and
$ \zeta=0.1$. Again we can compute some functionals of $ \widetilde
\nu
_{1}$. We have, for example, the following estimates for the integral and
for the mean of $ \bar\nu_{1}$:
$ \int\widetilde\nu_{1}(x) \,dx=1.081376 $ [$ \int\bar\nu_{1}(x)
\,dx=1.015189 $] and $ \int x\widetilde\nu_{1}(x) \,dx=-0.4772505 $ [$
\int x\bar\nu_{1}(x) \,dx=-0.3057733 $].
Let us now test the performance of the estimation algorithm in the case of
a time-changed NIG process (parameters are the same as before), where
the time change is again given by the integrated CIR process with the
parameters $ \eta=1$, $\zeta=0.1 $ and $ \kappa\in\{0.05,0.1,0.5,1\}$.
Figure \ref{boxplots}(left) shows the boxplots of the resulting error $
\|\bar\nu_{1}-\widetilde\nu_{1}\|_{L_\infty(\mathbb{R},w)} $
computed using $ 100 $ trajectories, each of length $ n=5\mbox{,}000$,
where the time span between observations is $ \Delta=0.1$.
\begin{figure}
\includegraphics{901f03.eps}
\caption{Boxplots of the error
$\|\bar\nu_{1}-\widetilde\nu_{1}\|_{L_\infty(\mathbb{R},w)} $ for
different values of the mean reversion speed parameter $ \kappa$ and
different numbers of observations $ n$. } \label{boxplots}
\end{figure}
Note that if our time units are days, then we get about two years of
observations with about one mean reversion per month in the case $
\kappa=0.05$. As one can see, the performance of the algorithm remains
reasonable for the whole range of $ \kappa$.
In Figure \ref{boxplots}(right), we present the boxplots of the error $
\|\bar\nu_{1}-\widetilde\nu_{1}\|_{L_\infty(\mathbb{R},w)} $
in the case of $ \eta=1$, $\zeta=0.1$, $ \kappa=1$ and $ n\in\{500,
1\mbox{,}000, 3\mbox{,}000, 5\mbox{,}000 \}$. As one can expect, the performance of the
algorithm becomes worse as $ n $ decreases. However,
the quality of the estimation remains reasonable even for $ n=500$.
\section{Proofs of the main results}
\subsection{\texorpdfstring{Proof of Theorem \protect\ref{UpperBounds}}{Proof of Theorem 4.4}}
For simplicity, let us consider the case of $ \mu_{l}>0 $ and $ \sigma
_{k}=0$. By Proposition \ref{ExpBounds}
[take $ G_{n}(u,z)=\exp(\ii u z)$, $ L_{n}=\bar\mu_{n}=\bar\sigma_{n}=1
$, $ a=0, b=1 $]
\[
\mathbb{P}\bigl(|\widehat\phi_{l}(\mathbf{0})|\leq\kappa/\sqrt{n}
\bigr)\leq
\mathbb{P}\bigl(|\widehat\phi_{l}(\mathbf{0})-\phi_{l}(\mathbf{0})|> \mu
_{l}\bigr)\leq B n^{-1-\delta}
\]
for some constants $ \delta>0, B>0 $ and $ n $ large enough.
Furthermore, simple calculations lead to the following representation:
\begin{eqnarray}
\label{psirepr}
\psi_{k}^{\prime\prime}(u)-\widehat{\psi}_{k,2}(u) &=&\frac{\psi
''_{k}(u)}{\psi'_{l}(0)}\bigl(
\phi_{l}(\mathbf{0})-\widehat{\phi}_{l}(\mathbf{0})
\bigr)\nonumber\\[-8pt]\\[-8pt]
&&{}+\mathcal{R}_{0}(u)+
\mathcal{R}_{1}(u)+\mathcal{R}_{2}(u),\nonumber
\end{eqnarray}
where
\begin{eqnarray*}
\mathcal{R}_{0}(u)&=&[V_{1}(u)\psi''_{k}(u)-V_{2}(u)\psi'_{k}(u)
]\bigl(
\phi_{l}\bigl(u^{(k)}\bigr)-\widehat{\phi}_{l}\bigl(u^{(k)}\bigr)\bigr) \\
&&{}+V_{2}(u)\bigl( \phi_{k}\bigl(u^{(k)}\bigr)-\widehat{\phi
}_{k}\bigl(u^{(k)}\bigr)\bigr)
\\
&&{}-V_{1}(u)\bigl( \phi_{kk}\bigl(u^{(k)}\bigr)-\widehat{\phi
}_{kk}\bigl(u^{(k)}\bigr)\bigr) \\
&&{}+V_{1}(u)\psi'_{k}(u)\bigl( \phi_{lk}\bigl(u^{(k)}\bigr)-\widehat{\phi
}_{lk}\bigl(u^{(k)}\bigr)\bigr),
\\
\mathcal{R}_{1}(u)&=&[\widetilde V_{1}(u)\psi
''_{k}(u)-\widetilde
V_{2}(u)\psi'_{k}(u) ]\bigl(
\phi_{l}\bigl(u^{(k)}\bigr)-\widehat{\phi}_{l}\bigl(u^{(k)}\bigr)\bigr) \\
&&{}+\widetilde V_{2}(u)\bigl( \phi_{k}\bigl(u^{(k)}\bigr)-\widehat{\phi
}_{k}\bigl(u^{(k)}\bigr)\bigr)
\\
&&{}-\widetilde V_{1}(u)\bigl( \phi_{kk}\bigl(u^{(k)}\bigr)-\widehat{\phi
}_{kk}\bigl(u^{(k)}\bigr)\bigr) \\
&&{}+\widetilde V_{1}(u)\psi'_{k}(u)\bigl( \phi_{lk}\bigl(u^{(k)}\bigr)-\widehat
{\phi}_{lk}\bigl(u^{(k)}\bigr)\bigr),
\\
\mathcal{R}_{2}(u) &=&\Gamma^{2}(u)\frac{\phi_{l}(\mathbf{0})(
\phi
_{lk}(u^{(k)})-\widehat{\phi}_{lk}(u^{(k)})) }{[ \phi
_{l}(u^{(k)})] ^{2}}\\
&&{}\times\bigl[ \bigl( \phi_{l}\bigl(u^{(k)}\bigr)-\widehat
{\phi}_{l}\bigl(u^{(k)}\bigr)\bigr) \psi_{k}^{\prime}(u)-\bigl( \phi_{k}\bigl(u^{(k)}\bigr)-
\widehat{\phi}_{k}\bigl(u^{(k)}\bigr)\bigr) \bigr]\\
&&{} +\frac{\widehat{\phi}_{l}(\mathbf{0})-\phi_{l}(\mathbf{0})}
{\phi_{l}(u^{(k)})}\biggl[\frac{\mathcal{R}_{0}+\mathcal
{R}_{1}}{\phi
_{l}(\mathbf{0})}\biggr]
\end{eqnarray*}
with
\begin{eqnarray*}
V_{1}(u) &=&\frac{\phi_{l}(\mathbf{0})}{\Delta\phi_{l}(u^{(k)})}=
-\frac{1}{\mathcal{L}_{\Delta}^{\prime}(-\psi_{k}(u))},
\\
V_{2}(u) &=&\frac{\phi_{l}(\mathbf{0})\phi_{lk}(u^{(k)})}{\Delta
[ \phi_{l}(u^{(k)})] ^{2}}=-V_{1}(u)\psi'_{k}(u)\frac
{\mathcal{L}_{\Delta}^{\prime\prime}(-\psi_{k}(u))}{\mathcal{L}_{\Delta}^{\prime}(-\psi_{k}(u))},
\\
\widetilde V_{1}(u)&=&\bigl(\Gamma(u)-1\bigr)V_{1}(u),\qquad \widetilde
V_{2}(u)=\bigl(\Gamma^{2}(u)-1\bigr)V_{2}(u)
\end{eqnarray*}
and
\[
\Gamma(u)=\biggl[ 1-\frac{1}{\phi_{l}(u^{(k)})}\bigl( \phi
_{l}\bigl(u^{(k)}\bigr)-
\widehat{\phi}_{l}\bigl(u^{(k)}\bigr)\bigr) \biggr] ^{-1}.
\]
The representation (\ref{psirepr}) and the Fourier inversion formula
imply the following representation for
the deviation $ \bar\nu_{k}-\widehat\nu_{k} $:
\begin{eqnarray*}
\bar\nu_{k}(x)-\widehat\nu_{k}(x)&=&\frac{1}{2\pi}\frac{(
\phi_{l}(\mathbf{0})-\widehat{\phi}_{l}(\mathbf{0}))}{\psi
'_{l}(0)}\int_{\mathbb{R}} e^{-\ii u x}\psi''_{k}(u)\mathcal
{K}(uh_{n}) \,du\\
&&{}+\frac{1}{2\pi}\int_{\mathbb{R}} e^{-\ii u
x}\mathcal
{R}_{0}(u)\mathcal{K}(uh_{n}) \,du
\\
&&{} +\frac{1}{2\pi}\int_{\mathbb{R}} e^{-\ii u x}\mathcal
{R}_{1}(u)\mathcal{K}(uh_{n}) \,du\\
&&{}+\frac{1}{2\pi}\int_{\mathbb{R}} e^{-\ii u x}\mathcal
{R}_{2}(u)\mathcal
{K}(uh_{n}) \,du
\\
&&{}+\frac{1}{2\pi}\int_{\mathbb{R}} e^{-\ii u x}\bigl(1-\mathcal
{K}(uh_{n})\bigr)\psi''_{k}(u) \,du.
\end{eqnarray*}
First, let us show that
\[
\sup_{x\in\mathbb{R}}\biggl| \int_{\mathbb{R}}e^{-\ii u x}\mathcal
{R}_{1}(u)\mathcal{K}(uh_{n}) \,du \biggr|=o_{\mathrm{a.s}}\Biggl( \sqrt
{\frac
{\log^{3+\varepsilon} n}{n}\int_{-1/h_{n}}^{1/h_{n}}\mathfrak
{R}^{2}_{k}(u) \,du} \Biggr)
\]
and
\[
\sup_{x\in\mathbb{R}}\biggl| \int_{\mathbb{R}}e^{-\ii u x}\mathcal
{R}_{2}(u)\mathcal{K}(uh_{n}) \,du \biggr|=o_{\mathrm{a.s}}\Biggl( \sqrt
{\frac
{\log^{3+\varepsilon} n}{n}\int_{\mathbb{R}}\mathfrak
{R}^{2}_{k}(u)
\,du} \Biggr).
\]
We have, for example, for the first term in $ \mathcal{R}_{1}(u) $
\begin{eqnarray*}
&&\biggl| \int_{\mathbb{R}}e^{-\ii u z} \bigl(\Gamma(u)-1\bigr)V_{1}(u)\psi
''_{k}(u)\bigl(
\phi_{l}\bigl(u^{(k)}\bigr)-\widehat{\phi}_{l}\bigl(u^{(k)}\bigr)\bigr)\mathcal
{K}(uh_{n}) \,du \biggr|
\\
&&\qquad\leq\sup_{|u|\leq1/h_{n}}|\Gamma(u)-1|\sup_{u\in\mathbb{R}}\bigl[
w(|u|)\bigl|\phi_{l}\bigl(u^{(k)}\bigr)-\widehat{\phi}_{l}\bigl(u^{(k)}\bigr)\bigr|
\bigr]w^{-1}(1/h_{n})\\
&&\qquad\quad{}\times\int_{-1/h_{n}}^{1/h_{n}}|V_{1}(u)||\psi''_{k}(u)| \,du
\end{eqnarray*}
with $ w(u)=\log^{-1/2}(e+u), u\geq0$.
Fix some $ \xi>0 $ and consider the event
\[
\mathcal{A}=\Biggl\{ \sup_{\{|u|\leq1/h_{n}\}}\bigl[
w(|u|)\bigl|\widehat
\phi_{l}\bigl(u^{(k)}\bigr)-\phi_{l}\bigl(u^{(k)}\bigr)\bigr| \bigr]
\leq\xi\sqrt{\frac
{\log
n}{n}}\Biggr\}.
\]
By assumption (AH), it holds on $ \mathcal{A} $ that
\begin{eqnarray*}
\sup_{|u|<1/h_{n}}\biggl| \frac{\phi_{l}(u^{(k)})-
\widehat{\phi}_{l}(u^{(k)})}{\phi_{l}(u^{(k)})} \biggr|&\leq&\xi
M_{n}w^{-1}(1/h_{n})\sqrt{\log n/n}\\
&=&o\bigl(\sqrt{h_{n}}\bigr),\qquad
n\to\infty,
\end{eqnarray*}
and hence
\begin{equation}
\label{GammaEst}
\sup_{\{|u|\leq1/h_{n}\}}|1-\Gamma(u)|=o\bigl(\sqrt{h_{n}}
\bigr),\qquad n\to\infty.
\end{equation}
Therefore, one has on $ \mathcal{A} $ that
\begin{eqnarray*}
\sup_{x\in\mathbb{R}}\biggl| \int_{-1/h_{n}}^{1/h_{n}}e^{-\ii u x}
\bigl(\Gamma(u)-1\bigr)V_{1}(u)\psi''_{k}(u)\bigl(
\phi_{l}\bigl(u^{(k)}\bigr)-\widehat{\phi}_{l}\bigl(u^{(k)}\bigr)\bigr)\mathcal
{K}(uh_{n}) \,du \biggr|
\\
= o\Biggl( \sqrt{\frac{h_{n}\log^{2} n}{n}}\int
_{-1/h_{n}}^{1/h_{n}}\mathfrak{R}_{k}(u) \,du \Biggr)
= o\Biggl( \sqrt{\frac{\log^{3+\varepsilon} n}{n}\int
_{-1/h_{n}}^{1/h_{n}}\mathfrak{R}^{2}_{k}(u) \,du} \Biggr)
\end{eqnarray*}
since $ \psi''_{k}(u) $ and $ \mathcal{K}(u) $ are uniformly bounded on
$ \mathbb{R}$.
On the other hand, Proposition~\ref{ExpBounds}
implies [one can take $ G_{n}(u,z)=\exp(\ii u z)$, $ L_{n}=\bar\mu
_{n}=\bar\sigma_{n}=1 $, $ a=0$, \mbox{$b=1 $}]
\[
\mathbb{P}(\bar{\mathcal{A}})\lesssim n^{-1-\delta'},\qquad n\to\infty,
\]
for some $ \delta'>0$.
The Borel--Cantelli lemma yields
\begin{eqnarray*}
&&\sup_{x\in\mathbb{R}}\biggl| \int_{-1/h_{n}}^{1/h_{n}}e^{-\ii u x}
\bigl(\Gamma(u)-1\bigr)V_{1}(u)\psi''_{k}(u)\bigl(
\phi_{l}\bigl(u^{(k)}\bigr)-\widehat{\phi}_{l}\bigl(u^{(k)}\bigr)\bigr)\mathcal
{K}(uh_{n}) \,du \biggr|
\\
&&\qquad=o_{\mathrm{a.s}.}\Biggl( \sqrt{\frac{\log^{3+\varepsilon} n}{n}\int
_{-1/h_{n}}^{1/h_{n}}\mathfrak{R}^{2}_{k}(u) \,du} \Biggr).
\end{eqnarray*}
Other terms in $ \mathcal{R}_{1} $ and $ \mathcal{R}_{2} $ can be
analyzed in a similar way.
Turn now to the rate determining term $ \mathcal{R}_{0}$. Consider,
for instance, the integral
\begin{eqnarray}\qquad
\label{R0}
&&\int_{-1/h_{n}}^{1/h_{n}}e^{-\ii u x}V_{1}(u)\psi''_{k}(u)\bigl(
\phi_{l}\bigl(u^{(k)}\bigr)-\widehat{\phi}_{l}\bigl(u^{(k)}\bigr)\bigr)\mathcal
{K}(uh_{n}) \,du
\nonumber\\[-8pt]\\[-8pt]
&&\qquad=\frac{1}{nh_{n}}\sum_{j=1}^{n}\biggl[ Z_{j}^{l}K_{n}\biggl( \frac
{x-Z_{j}^{k}}{h_{n}}\biggr) -\E\biggl\{ Z^{l}\frac{1}{h_{n}}K_{n}\biggl(
\frac{x-Z^{k}}{h_{n}}\biggr) \biggr\} \biggr]
=\mathcal{S}(x)
\nonumber
\end{eqnarray}
with
\[
K_{n}(z)=\int_{-1}^{1}e^{-\ii uz} V_{1}(u/h_{n})\psi
_{k}^{\prime\prime}(u/h_{n})\mathcal{K}(u) \,du.
\]
Now we are going to make use of Proposition \ref{ExpBounds} to estimate
the term $ \mathcal{S}(x) $ on the r.h.s. of (\ref{R0}). To this end,
let
\[
G_{n}(u,z)= \frac{1}{h_{n}}K_{n}\biggl( \frac{u-z}{h_{n}}\biggr).
\]
Since $ \nu_{k},\nu_{l}\in\mathfrak{B}_{\gamma} $ for some $
\gamma>0
$ [assumption (AL1)],
the L\'evy processes $ L^{k}_{t} $ and $ L^{l}_{t} $ possess infinitely
smooth densities $ p_{k,t} $ and $ p_{l,t} $ which are bounded for $
t>0 $
[see \citet{SA}, Section 28] and fulfill [see \citet{P}]
\begin{eqnarray}
\label{pka}
\sup_{x\in\mathbb{R}} \{ p_{k,t}(x) \}&\lesssim&
t^{-1/\gamma},\qquad t\to0,
\\
\label{pla}
\sup_{x\in\mathbb{R}} \{ p_{l,t}(x) \}&\lesssim&
t^{-1/\gamma},\qquad t\to0.
\end{eqnarray}
Moreover, under assumption (AL2) [see \citet{LP}]
\begin{equation}\quad
\label{mpa1}
\int|x|^{m} p_{k,t}(x) \,dx = O(t),\qquad \int|x|^{m} p_{l,t}(x)
\,dx = O(t),\qquad t\to0,
\end{equation}
and
\begin{eqnarray}
\label{mpa2}
\int|x|^{m} p_{k,t}(x) \,dx &=& O(t^{m}),\nonumber\\[-8pt]\\[-8pt]
\int|x|^{m}p_{l,t}(x) \,dx &=& O(t^{m}),\qquad t\to+\infty,\nonumber
\end{eqnarray}
for any $ 2\leq m\leq p$.
As a result, the distribution of $ (Z^{k},Z^{l}) $ is absolutely
continuous with uniformly bounded density $ q_{kl} $ given by
\[
q_{kl}(y,z)=\int_{0}^{\infty} p_{k,t}(y)p_{l,t}(z) \,\pi(dt),
\]
where $ \pi$ is the distribution of the r.v. $\mathcal
{T}(\Delta)$.
The asymptotic relations (\ref{pka})--(\ref{mpa2}) and assumption
(AT1) imply
\begin{eqnarray*}
\E[| Z^{l} |^{2}|G_{n}(u,Z^{k})|^{2}]&=&
\frac
{1}{h^{2}_{n}}\int_{\mathbb{R}}\biggl| K_{n}\biggl( \frac
{u-y}{h_{n}}\biggr)\biggr|^{2}\biggl\{ \int_{\mathbb
{R}}|z|^{2}q_{kl}(y,z) \,dz \biggr\} \,dy
\\
&\leq& \frac{C_{0}}{h_{n}}\int_{\mathbb{R}}| K_{n}(v)
|^{2} \,dv
\\
&\leq& C_{1}\int_{-1/h_{n}}^{1/h_{n}}|V_{1}(u)|^{2} \,du
\end{eqnarray*}
with some finite constants $ C_{0}>0 $ and $ C_{1}>0$.
Similarly,
\begin{eqnarray*}
\E[|Z^{k}|^{2}|G_{n}(u,Z^{k})|^{2}]& \leq&
C_{2}\int_{-1/h_{n}}^{1/h_{n}}|V_{1}(u)|^{2} \,du,
\\
\E[|Z^{k}|^{4}|G_{n}(u,Z^{k})|^{2}]& \leq&
C_{3}\int_{-1/h_{n}}^{1/h_{n}}|V_{1}(u)|^{2} \,du,
\\
\E[|Z^{k}|^{2}|Z^{l}
|^{2}|G_{n}(u,Z^{k})|^{2}]& \leq& C_{4}\int
_{-1/h_{n}}^{1/h_{n}}|V_{1}(u)|^{2} \,du
\end{eqnarray*}
with some positive constants $ C_{2}, C_{3} $ and $ C_{4}$.
Define
\begin{eqnarray*}
\bar\sigma^{2}_{n}&=&C\int_{-1/h_{n}}^{1/h_{n}} |V_{1}(u)|^{2} \,du,
\\
\bar\mu_{n}&=&\| \mathcal{K} \|_{\infty}\| \psi'' \|_{\infty}\int
_{-1/h_{n}}^{1/h_{n}} |V_{1}(u)| \,du,
\\
L_{n}&=&\| \mathcal{K} \|_{\infty}\| \psi'' \|_{\infty}\int
_{-1/h_{n}}^{1/h_{n}} |u||V_{1}(u)| \,du,
\end{eqnarray*}
where $ C=\max_{k=1,2,3,4}\{ C_{k} \}$.
Since $ |V_{1}(u)|\to\infty$ as $ |u|\to\infty$ and $ h_{n}\to
0, $ we get $ \bar\mu_{n}/\bar\sigma^{2}_{n}=O(1)$. Furthermore,
due to assumption (AH)
\begin{equation}
\label{ARMS}
\bar\mu_{n}\lesssim h_{n}^{-1/2}\bar\sigma_{n}\lesssim
n^{1/2-\delta/2} \bar\sigma_{n},\quad L_{n}\lesssim h_{n}^{-3/2} \bar\sigma_{n}
\lesssim n^{3/2} \bar\sigma_{n},\quad n\to\infty,
\end{equation}
and $ \bar\sigma_{n}=O(h_{n}^{-1/2}M_{n})=O(n^{1/2})$.
Thus, assumptions (AG1) and (AG2) of Proposition \ref{ExpBounds}
are fulfilled.
Assumption (AZ1) follows from Lemma \ref{MIX}
and assumption (AT1).
Therefore, we get by Proposition \ref{ExpBounds}
\[
\mathbb{P}\Biggl( \sup_{z\in\mathbb{R}}[ w(|z|)| \mathcal{S}(z)
| ]\geq\xi\sqrt{\frac{\bar\sigma^{2}_{n}\log
^{3+\varepsilon}n}{n}} \Biggr)\lesssim n^{-1-\delta' }
\]
for some $ \delta'>0 $ and $ \xi>\xi_{0}$. Noting that
\[
\bar\sigma_{n}^{2}\leq C\int_{-1/h_{n}}^{1/h_{n}}\mathfrak
{R}^{2}_{k}(u) \,du,
\]
we derive
\[
\sup_{z\in\mathbb{R}}[ w(|z|)| \mathcal{S}(z) |
]
=O_{\mathrm{a.s}.}
\Biggl( \sqrt{\frac{\log^{3+\varepsilon} n}{n}\int
_{-1/h_{n}}^{1/h_{n}}\mathfrak{R}^{2}_{k}(u) \,du} \Biggr).
\]
Other terms in $ \mathcal{R}_{0} $ can be studied in a similar manner. Finally,
\begin{eqnarray}
\label{nudev}
\| \widehat\nu_{k}-\bar\nu_{k} \|_{L_{\infty}(\mathbb{R},w)}
&=& O_{\mathrm{a.s}.}
\Biggl( \sqrt{\frac{\log^{3+\varepsilon} n}{n}\int
_{-1/h_{n}}^{1/h_{n}}\mathfrak{R}^{2}_{k}(u) \,du} \Biggr)
\nonumber\\[-8pt]\\[-8pt]
&&{}+
\frac{1}{2\pi}\int_{\mathbb{R}} |1-\mathcal{K}(uh_{n})||\psi
_{k}^{\prime\prime}(u)| \,du.\nonumber
\end{eqnarray}
The second, bias term on the r.h.s. of (\ref{nudev}) can easily be
bounded if we recall that $ \nu_{k}\in\mathfrak{S}_{\beta} $ and $
\mathcal{K}(u)=1 $ on $ [-a_{K},a_{K}] $:
\begin{eqnarray*}
\frac{1}{2\pi}\int_{\mathbb{R}} |1-\mathcal{K}(uh_{n})||\psi
_{k}^{\prime\prime}(u)| \,du
&\lesssim& h_{n}^{\beta}\int_{\{|u|>a_{K}/h_{n}\}} |u|^{\beta
}|\mathbf
{F}[\bar\nu_{k}](u)| \,du
\\
&\lesssim& h_{n}^{\beta}\int_{\mathbb{R}} (1+|u|^{\beta})|\mathbf
{F}[\bar\nu_{k}](u)| \,du,\qquad n\to\infty.
\end{eqnarray*}
\subsection{\texorpdfstring{Proof of Theorem \protect\ref{pointwiseupper}}{Proof of Theorem 4.7}}
We have
\begin{eqnarray*}
\widehat\nu_{k}(x_{0})-\bar\nu_{k}(x_{0})&=&\biggl[ \frac{1}{2\pi
}\int
_{\mathbb{R}}e^{-\ii ux_{0}}\psi''_{k}(u)\mathcal{K}(uh_{n})
\,du-\bar
\nu_{k}(x_{0}) \biggr]
\\
&&{}+\frac{1}{2\pi}\int_{\mathbb{R}}e^{-\ii ux_{0}}\bigl(\widehat\psi
_{k,2}(u)-\psi''_{k}(u)\bigr)\mathcal{K}(uh_{n}) \,du\\
&=&J_{1}+J_{2}.
\end{eqnarray*}
Introduce
\[
K(z)=\frac{1}{2\pi}\int_{-1}^{1}e^{\ii u z} \mathcal{K}(u) \,du,
\]
then by the Fourier inversion formula
\begin{equation}
\label{KFInv}
\mathcal{K}(u)=\int_{\mathbb{R}}e^{-\ii uz}K(z) \,dz.
\end{equation}
Assumption (AK) together with the smoothness of $ \mathcal{K} $
implies that $ K(z) $ has finite absolute moments up to order $ m \geq
s$ and it holds that
\begin{equation}
\label{AKM}
\int K(z) \,dz=1,\qquad \int z^{k}K(z) \,dz=0, \qquad k=1,\ldots,m.
\end{equation}
Hence
\[
J_{1}= \int_{-\infty}^{\infty}\bar\nu_{k}(x_{0}+h_{n}v)K(v)
\,dv-\bar
\nu_{k}(x_{0})
\]
and
\begin{eqnarray*}
| J_{1} |&\leq&
\biggl| \int_{|v|>\delta/h_{n}}[\bar\nu_{k}(x_{0})-\bar\nu
_{k}(x_{0}+h_{n}v)]K(v) \,dv \biggr|
\\
&&{} + \biggl| \int_{|v|\leq\delta/h_{n}}[\bar\nu_{k}(x_{0})-\bar
\nu
_{k}(x_{0}+h_{n}v)]K(v) \,dv \biggr|
\\
&=& I_{1}+I_{2}.
\end{eqnarray*}
Since $ \| \bar\nu\|_{\infty}\leq C_{\bar\nu} $ for some constant $
C_{\bar\nu}>0$, we get
\[
I_{1}\leq2C_{\bar\nu} \int_{|v|>\delta/h_{n}}|K(v)| \,dv\leq
2C_{\bar\nu} C_{K}(h_{n}/\delta)^{m}
\]
with $ C_{K}=\int_{\mathbb{R}}|K(v)||v|^{m} \,dv$.
Further, by the Taylor expansion formula,
\begin{eqnarray*}
I_{2}&\leq& \Biggl| \sum_{j=0}^{s-1}\frac{h_{n}^{j}\bar\nu
_{k}^{(j)}(x_{0})}{j!}\int_{|v|\leq\delta/h_{n}}K(v)v^{j} \,dv \Biggr|
\\
&&{} +\biggl| \int_{|v|\leq\delta/h_{n}} K(v) \biggl[ \int
_{x_{0}}^{x_{0}+h_{n}v} \frac{\bar\nu_{k}^{(s)}(\zeta)(\zeta
-x_{0})^{s-1}}{(s-1)!} \,d\zeta\biggr] \,dv \biggr|
\\
&=&I_{21}+I_{22}.
\end{eqnarray*}
First, let us bound $ I_{21} $ from above. Note that, due to (\ref{AKM}),
\[
I_{21}=\Biggl| \sum_{j=0}^{s-1}\frac{h_{n}^{j}\bar\nu
_{k}^{(j)}(x_{0})}{j!}\int_{|v|> \delta/h_{n}}K(v)v^{j} \,dv \Biggr|.
\]
Hence,
\begin{eqnarray*}
I_{21}&\leq& \biggl( \frac{h_{n}}{\delta} \biggr)^{m} \sum
_{j=0}^{s-1}\frac{\delta^{j}|\bar\nu_{k}^{(j)}(x_{0})|}{j!}\int_{|v|>
\delta/h_{n}}|K(v)||v|^{m} \,dv
\\
&\leq& \biggl( \frac{h_{n}}{\delta} \biggr)^{m} L C_{K}\exp(\delta).
\end{eqnarray*}
Furthermore, we have for $ I_{22} $
\[
I_{22} \leq\frac{L
h_{n}^{s}}{s!}\int_{|v|\leq\delta/ h_{n}}|K(v)||v|^{s} \,dv.
\]
Combining all previous inequalities and taking into account the fact
that $ m\geq s $, we derive
\[
|J_{1}|\lesssim h_{n}^{s},\qquad n\to\infty.
\]
The stochastic term $ J_{2} $ can be handled along the same lines as in
the proof of Theorem~\ref{UpperBounds}.
\subsection{\texorpdfstring{Proof of Theorem \protect\ref{LowBounds}}{Proof of Theorem 4.9}}
Define
\[
K_{0}(x)=\prod_{k=1}^{\infty}\biggl( \frac{\sin(a_{k}x)}{a_{k}x}
\biggr) ^{2}
\]
with $ a_{k}=2^{-k}, k\in\mathbb{N}$. Since $ K_{0}(x) $ is
continuous at $ 0 $ and does not vanish there, the function
\[
K(x)=\frac{1}{2\pi}\frac{\sin(2x)}{\pi x}\frac{K_{0}(x)}{K_{0}(0)}
\]
is well defined on $ \mathbb{R}$.
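As a quick numerical sanity check, the infinite product converges rapidly
since $ a_{k}=2^{-k}$, and $ K_{0}(0)=1 $, so the normalization in $ K $
is harmless. A minimal sketch (assuming NumPy; the truncation level of the
product is an arbitrary choice):

```python
import numpy as np

def K0(x, terms=60):
    """Truncated version of K_0(x) = prod_k (sin(a_k x)/(a_k x))^2, a_k = 2^{-k}.
    np.sinc(t) = sin(pi t)/(pi t), so sin(a x)/(a x) = np.sinc(a x / np.pi)."""
    a = 2.0 ** -np.arange(1, terms + 1)
    return np.prod(np.sinc(a * x / np.pi) ** 2)

def K(x):
    """The kernel K(x) built from K_0; sinc handles the removable
    singularity of sin(2x)/(pi x) at x = 0."""
    return (1.0 / (2.0 * np.pi)) * (2.0 / np.pi) * np.sinc(2.0 * x / np.pi) \
        * K0(x) / K0(0.0)
```

In particular $ K(0)=1/\pi^{2} $, and $ 0<K_{0}(x)\leq1 $ on the range of
interest.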
Next, fix two positive numbers $ \beta$ and $ \gamma$ such that $
\gamma\in(0,1) $ and $ 0<\beta<1-\gamma$. Consider a function
\[
\Phi(u)=\frac{e^{\ii x_{0}u}}{(1+u^{2})^{(1+\beta)/2}\log^{2}(e+u^{2})}
\]
for some $x_{0}>0$ and define
\[
\mu_{h}(x)=\int_{-\infty}^{\infty}\mu(x+zh)K(z) \,dz
\]
for any $ h>0$, where
\[
\mu(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-\ii xu}\Phi(u)\,du.
\]
In the next lemma, some properties of the functions $ \mu$ and $ \mu
_{h} $ are collected.
\begin{lem}
\label{MuProp}
Functions $ \mu$ and $ \mu_{h} $ have the following properties:
\begin{longlist}
\item$ \mu$ and $ \mu_{h} $ are uniformly bounded on $ \mathbb
{R}$,
\item for any natural $ n>0 $
\begin{equation}
\label{MuDecay}
\max\{ \mu(x), \mu_{h}(x) \}\lesssim|x|^{-n},\qquad |x|\to\infty,
\end{equation}
that is, both functions $ \mu(x) $ and $ \mu_{h}(x)$ decay faster than
any negative power of $ x $,
\item it holds
\begin{equation}
\label{MuMuh}
x_{0}^{2}\mu(x_{0})-x_{0}^{2}\mu_{h}(x_{0})\geq Dh^{\beta}\log^{-1}(1/h)
\end{equation}
for some constant $ D>0 $ and $ h $ small enough.
\end{longlist}
\end{lem}
Fix some $ \varepsilon>0 $ and consider two functions
\begin{eqnarray*}
\nu_{1}(x) &=&\nu_{\gamma}(x)+\frac{1-\varepsilon}{(1+x^{2})^{2}}
+\varepsilon\mu(x), \\
\nu_{2}(x) &=&\nu_{\gamma}(x)+\frac{1-\varepsilon}{(1+x^{2})^{2}}
+\varepsilon\mu_{h}(x),
\end{eqnarray*}
where $\nu_{\gamma}(x)$ is given by
\[
\nu_{\gamma}(x)=\frac{1}{(1+x^{2})}\biggl[ \frac{1}{x^{1+\gamma
}}1\{
x\geq
0\}+\frac{1}{|x|^{1+\gamma}}1\{x<0\}\biggr].
\]
Due to statements (i) and (ii) of Lemma \ref{MuProp}, one can
always choose $ \varepsilon$ in such a way that
$ \nu_{1} $ and $ \nu_{2} $ stay positive on $ \mathbb{R}_{+} $ and
thus they can be viewed as the L\'evy densities
of some L\'evy processes $L_{1,t}$ and $L_{2,t}$, respectively. It
directly follows from the definition of $ \nu_{1} $ and $ \nu_{2} $
that $ \nu_{1},\nu_{2}\in\mathfrak{B}_{\gamma}$. The next lemma
describes some other properties of $ \nu_{1}(x) $ and $ \nu_{2}(x)$.
Denote $ \bar\nu_{1}(x)=x^{2}\nu_{1}(x) $
and $ \bar\nu_{2}(x)=x^{2}\nu_{2}(x)$.
\begin{lem}
\label{NuProp}
Functions $ \bar\nu_{1}(x) $ and $ \bar\nu_{2}(x) $ satisfy
\begin{equation}
\label{NuNuh}
\sup_{x\in\mathbb{R}}|\bar\nu_{1}(x)-\bar\nu_{2}(x)|\geq
\varepsilon
Dh^{\beta}\log^{-1}(1/h)
\end{equation}
and
\begin{equation}
\label{FNu1Mom}
\int_{-\infty}^{\infty} (1+|u|^{\beta})\vert\mathbf{F}[\bar
{\nu}_{i
}](u)\vert\,du<\infty,\qquad i=1,2,
\end{equation}
that is, both functions $ \nu_{1}(x) $ and $ \nu_{2}(x) $ belong to the
class $ \mathfrak{S}_{\beta}$.
\end{lem}
Let us now perform a time change in the processes $ L_{1,t} $ and $
L_{2,t}$.
To this end, introduce a time change $ \mathcal{T}(t)$, such that
the Laplace transform of
$ \mathcal{T}(t)$ has the following representation
\[
\mathcal{L}_{t}(z)=\E\bigl[e^{-z\mathcal{T}(t)}\bigr]=\int_{0}^{\infty
}e^{-zy}\,dF_{t}(y),
\]
where $ (F_{t}, t\geq0) $ is a family of distribution functions on
$ \mathbb{R}_{+} $ satisfying
\[
1-F_{t}(y)\leq1-F_{s}(y),\qquad y\in\mathbb{R}_{+},
\]
for any $ t\leq s$.
Denote by $ \widetilde{p}_{1,t} $ and $ \widetilde{p}_{2,t} $ the
marginal densities of the resulting time-changed L\'evy processes
$Y_{1,t}=L_{1,\mathcal{T}(t)}$ and $Y_{2,t}=
L_{2,\mathcal{T}(t)}$, respectively. The following lemma provides us
with an upper bound for the $ \chi^{2} $-divergence between
$\widetilde
{p}_{1,t}$ and $\widetilde{p}_{2,t}$, where for any two probability
measures $ P $ and $ Q $ the $ \chi^{2} $-divergence between $ P $ and
$ Q $ is defined as
\[
\chi^{2}(P,Q)=
\cases{\displaystyle
\int\biggl( \frac{dP}{dQ}-1 \biggr)^{2} \,dQ, &\quad if $P\ll Q$, \vspace*{2pt}\cr
+\infty, &\quad otherwise.}
\]
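For discrete distributions the $ \chi^{2} $-divergence reduces to a finite
sum, which makes the definition easy to illustrate; a minimal sketch in
plain Python, with toy distributions chosen purely for illustration:

```python
def chi2_divergence(p, q):
    """chi^2(P, Q) = sum_x (p(x) - q(x))^2 / q(x) for discrete P, Q;
    returns infinity when P is not absolutely continuous w.r.t. Q."""
    if any(qi == 0.0 and pi > 0.0 for pi, qi in zip(p, q)):
        return float('inf')
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q) if qi > 0.0)
```

For example, $ \chi^{2} $ between $(1/2,1/2)$ and $(1/4,3/4)$ equals $1/3$,
and the divergence is $+\infty$ as soon as $ P $ charges a point that
$ Q $ does not.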
\begin{lem}
\label{BoundDiv}
Suppose that the Laplace transform of the time change $ \mathcal{T}(t)
$ fulfills
\begin{equation}
\label{LCond1}
\bigl\vert\mathcal{L}_{\Delta}^{(k+1)}(z)/\mathcal{L}_{\Delta
}^{(k)}(z)\bigr\vert=O(1),\qquad |z|\rightarrow\infty,
\end{equation}
for $ k=0,1,2$, and uniformly in $ \Delta\in[0,1]$. Then
\[
\chi^{2}( \widetilde{p}_{1,\Delta},\widetilde{p}_{2,\Delta} )
\lesssim\Delta^{-1}\bigl[ \mathcal{L}_{\Delta}^{\prime}(ch^{-\gamma})
\bigr]^{2}h^{(2\beta+1)},\qquad h\rightarrow0,
\]
with some constant $ c>0$.
\end{lem}
The proofs of Lemmas \ref{MuProp}, \ref{NuProp} and \ref{BoundDiv} can
be found in the preprint version of our paper \citet{Bel}.
Combining Lemma \ref{BoundDiv} with inequality~(\ref{NuNuh}) and
using the well-known Assouad lemma [see, e.g., Theorem~2.6 in
\citet{TS}], one obtains
\[
\liminf_{n\to\infty}\inf_{\widehat\nu}\sup_{\nu\in\mathfrak
{B}_{\gamma}\cap\mathfrak{S}_{\beta}}\mathbb{P}\Bigl(\sup_{x\in\mathbb
{R}}|\bar\nu(x)-\widehat\nu(x) |>h^{\beta}_{n}\log
^{-1}(1/h_{n})\Bigr)>0
\]
for any sequence $ h_{n} $ satisfying
\[
n\Delta^{-1}[ \mathcal{L}_{\Delta}^{\prime}(c\cdot h_{n}^{-\gamma})
] ^{2}h_{n}^{(2\beta+1)}=O(1),\qquad n\to\infty.
\]
\section{Auxiliary results}
\subsection{\texorpdfstring{Some results on time-changed L\'evy
processes}{Some results on time-changed Levy processes}}
\begin{lem}
\label{MIX}
Let $ L_{t} $ be a $ d $-dimensional L\'evy process with the L\'evy
measure $ \nu$ and let $ \mathcal{T}(t) $
be a time change independent of $ L_{t}$. Fix some $ \Delta>0 $ and
consider two sequences
$ T_{k}=\mathcal{T}(\Delta k)-\mathcal{T}(\Delta(k-1)) $ and $
Z_{k}=Y_{\Delta k}-Y_{\Delta(k-1)}$, $ k=1,\ldots, n$, where $
Y_{t}=L_{\mathcal{T}(t)}$. If the sequence $ (T_{k})_{k\in\mathbb{N}}
$ is strictly stationary and $ \alpha$-mixing with the mixing coefficients
$ (\alpha_{T}(j))_{j\in\mathbb{N}}$, then the sequence $
(Z_{k})_{k\in\mathbb{N}} $ is also strictly stationary and $ \alpha
$-mixing with the mixing coefficients $ (\alpha_{Z}(j))_{j\in\mathbb
{N}}$, satisfying
\begin{equation}
\label{AZAT}
\alpha_{Z}(j)\leq\alpha_{T}(j),\qquad j\in\mathbb{N}.
\end{equation}
\end{lem}
\begin{pf}
Fix some natural $ k,l $ with $ k+l<n$.
Using the independence of increments of the L\'evy process $ L_{t} $
and the fact that $ \mathcal{T} $
is a nondecreasing process, we get $ \E[ \phi(Z_{1},\ldots, Z_{k})
]=\E[ \widetilde\phi(T_{1},\ldots, T_{k}) ] $
and
\begin{eqnarray*}
&&\E[\phi(Z_{1},\ldots, Z_{k})\psi(Z_{k+l},\ldots, Z_{n})]\\
&&\qquad=\E
[\widetilde
\phi(T_{1},\ldots, T_{k}) \widetilde\psi(T_{k+l},\ldots, T_{n})],\qquad
k,l\in\mathbb{N},
\end{eqnarray*}
for any two functions $ \phi\dvtx\mathbb{R}^{k}\to[0,1] $ and $
\psi\dvtx
\mathbb{R}^{n-l-k}\to[0,1]$,
where $ \widetilde\phi(t_{1},\ldots,\allowbreak t_{k})=\E[\phi
(L_{t_{1}},\ldots,
L_{t_{k}})] $ and $ \widetilde\psi(t_{1},\ldots,t_{k})=\E[\psi
(L_{t_{1}},\ldots, L_{t_{k}})]$. This implies that
the sequence $ Z_{k} $ is strictly stationary and $ \alpha$-mixing
with the mixing coefficients satisfying
(\ref{AZAT}).
\end{pf}
\subsection{Exponential inequalities for dependent sequences}
The following theorem can be found in \citet{MPR}.
\begin{theorem}
\label{EIB}
Let $ (Z_k, k\geq1) $ be a strongly mixing sequence of centered
real-valued random variables on the probability space $(\Omega
,\mathcal F,P)$
with the mixing coefficients satisfying
\begin{equation}
\label{ALPHAEXPDECAY}
\alpha(n)\leq\bar\alpha\exp(-cn ),\qquad n\geq1, \bar\alpha
>0, c>0.
\end{equation}
Assume that $\sup_{k\geq1}|Z_k|\leq M$ a.s.
Then there is a positive constant $ C $ depending on $ c $ and $ \bar
\alpha$ such that
\[
\mathbb{P}\Biggl\{ \sum_{i=1}^n Z_i\geq\zeta\Biggr\}\leq\exp
\biggl[-\frac
{C\zeta^2 }{nv^{2}+M^{2} +M\zeta\log^{2}(n)}\biggr]
\]
for all $ \zeta>0 $ and $ n\geq4$,
where
\[
v^{2}=\sup_{i}\biggl( \E[Z_{i}^{2}]+2\sum_{j\geq i}\Cov(Z_{i},Z_{j})
\biggr).
\]
\end{theorem}
\begin{cor}
\label{COVEST}
Denote
\[
\rho_{j}=\E\bigl[ Z_{j}^{2}\log^{2(1+\varepsilon)}(
|Z_{j}|^{2}) \bigr],\qquad j=1,2,\ldots,
\]
with arbitrarily small $ \varepsilon>0 $ and suppose that all $ \rho_{j}
$ are finite. Then
\[
\sum_{j\geq i}\Cov(Z_{i},Z_{j})\leq C\max_{j}\rho_{j}
\]
for some constant $ C>0$, provided (\ref{ALPHAEXPDECAY}) holds.
Consequently, the following inequality holds:
\[
v^{2}\leq\sup_{i}\E[Z_{i}^{2}]+C\max_{j}\rho_{j}.
\]
\end{cor}
The proof can be found in \citet{Bel}.
\subsection{Bounds on large deviations probabilities for weighted sup norms}
\label{EXPBOUNDS}
Let $ Z_{j}=(X_{j},Y_{j})$, $j=1,\ldots, n$, be a sequence of
two-dimensional random vectors and let $ G_{n}(u,z)$, $ n=1,2,\ldots, $
be a sequence of complex-valued functions defined on $ \mathbb{R}^{2}$. Define
\begin{eqnarray*}
\widehat m_{1}(u)&=&\frac{1}{n}\sum_{j=1}^{n}X_{j}G_{n}(u,X_{j}),
\\
\widehat m_{2}(u)&=&\frac{1}{n}\sum_{j=1}^{n}Y_{j}G_{n}(u,X_{j}),
\\
\widehat m_{3}(u)&=&\frac{1}{n}\sum_{j=1}^{n}X^{2}_{j}G_{n}(u,X_{j}),
\\
\widehat m_{4}(u)&=&\frac{1}{n}\sum_{j=1}^{n}X_{j}Y_{j}G_{n}(u,X_{j}).
\end{eqnarray*}
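The four quantities above are plain empirical averages, which a few lines
of code make concrete; a minimal sketch (assuming NumPy, and using
$ G_{n}(u,z)=e^{\ii u z} $ as an illustrative, not prescribed, choice):

```python
import numpy as np

def m_hats(u, X, Y, G):
    """The empirical sums \\hat m_1(u), ..., \\hat m_4(u) for samples
    X_j, Y_j and a weight function G(u, x) applied elementwise."""
    g = G(u, X)
    return (np.mean(X * g), np.mean(Y * g),
            np.mean(X ** 2 * g), np.mean(X * Y * g))
```

At $ u=0 $ with $ G_{n}(u,z)=e^{\ii uz} $ these reduce to the sample means
of $ X$, $Y$, $X^{2} $ and $ XY$, which gives an easy consistency check.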
\begin{prop}
\label{ExpBounds}
Suppose that the following assumptions hold:
\begin{longlist}[(AZ1)]
\item[(AZ1)] The sequence $ Z_{j}$, $ j=1,\ldots, n$, is strictly
stationary and is $ \alpha$-mixing with mixing coefficients $ (\alpha
_{Z}(k))_{k\in\mathbb{N}} $ satisfying
\[
\alpha_{Z}(k)\leq\bar\alpha_{0}\exp(-\bar\alpha_{1} k),\qquad
k\in
\mathbb{N},
\]
for some $ \bar\alpha_{0}>0 $ and $ \bar\alpha_{1}>0$.
\item[(AZ2)] The r.v. $X_{j} $ and $ Y_{j} $ possess finite absolute
moments of order $ p>2$.
\item[(AG1)] Each function $ G_{n}(u,z), n\in\mathbb{N} $ is
Lipschitz in $ u $
with linearly growing (in~$ z $) Lipschitz constant,
that is, for any $ u_{1},u_{2} \in\mathbb{R} $
\[
|G_{n}(u_{1},z)-G_{n}(u_{2},z)|\leq L_{n}(a+b|z|)|u_{1}-u_{2}|,
\]
where $ a,b $ are two nonnegative real numbers
not depending on $ n $ and the sequence $ L_{n} $ does not depend on $
u$.
\item[(AG2)] There are two sequences $ \bar\mu_{n} $ and $ \bar
\sigma
_{n}$,
such that
\[
|G_{n}(u,z)|\leq\bar\mu_{n},\qquad (u,z)\in\mathbb{R}^{2},
\]
and all the functions
\begin{eqnarray*}
&\E[(|X|^{2}+|Y|^{2})|G_{n}(u,X)|^{2}],\qquad
\E[|X|^{4}|G_{n}(u,X)|^{2}],&\\
&\E[|X|^{2}|Y|^{2}|G_{n}(u,X)|^{2}]&
\end{eqnarray*}
are uniformly bounded on $ \mathbb{R} $ by $ \bar\sigma^{2}_{n}$.
Moreover, assume that the sequences $ \bar\mu_{n}, L_{n} $
and $ \bar\sigma_{n} $ fulfill
\begin{eqnarray*}
\bar\mu_{n}/\bar\sigma^{2}_{n}&=&O(1), \qquad\bar\mu_{n}/\bar
\sigma
_{n}=O(n^{1/2-\delta/2}),\qquad \bar\sigma^{2}_{n}=O(n),\\
L_{n}/\bar\sigma_{n}&=&O(n^{3/2}),\qquad n\to\infty,
\end{eqnarray*}
for some $ \delta$ satisfying $ 2/p<\delta\leq1$.\vadjust{\goodbreak}
\end{longlist}
Let $w$ be a symmetric, Lipschitz continuous, positive, monotone
decreasing on $\mathbb{R}_{+}$ function such that
\begin{equation}
\label{decreasingw}
0<w(z)\leq\log^{-1/2}(e+|z|),\qquad z\in\mathbb{R}.
\end{equation}
Then there is $ \delta'>0 $ and $ \xi_{0}>0 $, such that
the inequality
\begin{equation}
\label{MINEQ}
\mathbb{P}\Biggl\{\log^{-(1+\varepsilon)}(1+\bar\mu_{n})\sqrt{\frac
{n}{\bar\sigma
^{2}_{n}\log n}}
\| \widehat m_{k}- \E[\widehat m_{k}] \|_{L_{\infty
}(\mathbb
{R},w)}>\xi\Biggr\} \leq B n^{-1-\delta' }\hspace*{-28pt}
\end{equation}
holds for any $ \xi>\xi_{0}$, any $ k\in\{ 1,\ldots,4 \}$, some
positive constant $ B $ depending on $ \xi$ and arbitrarily small $
\varepsilon>0$.
\end{prop}
The proof of the proposition can be found in \citet{Bel}.
\section{Introduction}
The study of the in-medium properties of hadrons is an active topic of
research in strong interaction physics, both experimentally and
theoretically. The topic is of direct relevance in the context of
heavy ion collision experiments, which probe matter at high
temperatures and/or densities. The medium modifications of the
hadrons have direct consequences on the experimental observables
from the strongly interacting matter produced in heavy ion collision
experiments. The in-medium properties of the kaons and antikaons
in the nuclear medium are of relevance in neutron star phenomenology
where an attractive interaction of antikaon-nucleon can lead
to antikaon condensation in the interior of the neutron stars.
The medium modifications of kaons and antikaons also show in their
production and propagation in the heavy ion collision experiments.
The modifications of the properties of the charm mesons, $D$ and
$\bar D$ as well as the $J/\psi$ mesons and the excited states of
charmonium, can have important consequences on the production of open
charm and the suppression of $J/\psi$ in heavy ion collision experiments.
In high energy heavy ion collision experiments at RHIC as well as LHC,
the suppression of $J/\psi$ can arise from the formation of
the quark-gluon-plasma (QGP) \cite{blaiz, satz}.
The D (${\rm {\bar D}}$) mesons are made up of one heavy charm quark
(antiquark) and one light (u or d) antiquark (quark).
In the QCD sum rule calculations, the mass modifications of
$D$ ($\bar D$) mesons in the nuclear medium arise due to the
interactions of light antiquark (quark) present in the $D$($\bar D$)
mesons with the light quark condensate. There is appreciable change
in the light quark condensate in the nuclear medium and hence
$D$ ($\bar D$) meson mass, due to its interaction with
the light quark condensate, changes appreciably in the hadronic matter.
On the other hand, the charmonium states are made up of a heavy charm quark and
a charm antiquark. Within QCD sum rules, it is suggested that these heavy
charmonium states interact with the nuclear medium through the gluon
condensates. This is contrary to the interaction of the light vector mesons
($\rho$, $\omega$, $\phi$), which interact with the nuclear
medium through the quark condensates.
This is because all the heavy quark condensates can be related to the
gluon condensates via heavy-quark expansion \cite{kimlee}. Also in the
nuclear medium there are no valence charm quarks to leading order in
density and any interaction with the medium is gluonic. QCD sum
rules have been used to study the medium modification of $D$ mesons
\cite{haya1} and light vector mesons \cite{hatsuda}. The QCD sum rule
approach \cite{klingl} and leading order perturbative calculations
\cite{pes1} of the medium modifications of charmonium
show that the mass of $J/\psi$ is reduced only slightly in the nuclear medium.
In \cite{lee1}, the mass modification of charmonium has been studied using
leading order QCD formula and the linear density approximation for
the gluon condensate in the nuclear medium. This shows a small drop
in the $J/\psi$ mass at the nuclear matter density, but a significant
shift in the masses of the excited states of charmonium
($\psi$(3686) and $\psi$(3770)).
In the present work, we study the medium modification of the masses
of $J/\psi$ and excited charmonium states $\psi(3686)$ and $\psi(3770)$
in the nuclear medium due to the interaction with the gluon condensates
using the leading order QCD formula. The gluon condensate in the nuclear
medium is calculated from the medium modification of a scalar
dilaton field introduced within a chiral SU(3) model \cite{papa}
through a scale symmetry breaking term in the Lagrangian density
leading to the QCD trace anomaly. In the chiral SU(3) model, the gluon
condensate is related to the fourth power of the dilaton field $\chi$
and the changes in the dilaton field with density are seen to be small.
The model has been used successfully to study the medium modifications
of kaons and antikaons in isospin asymmetric nuclear matter in
\cite{isoamss} and in hyperonic matter in \cite{isoamss2}. The model
has also been used to study the $D$ mesons in asymmetric nuclear matter
at zero temperature \cite{amarind} and in the symmetric and asymmetric
nuclear matter at finite temperatures in Ref.\cite{amdmeson} and
\cite{amarvind}. The vector mesons have also been studied within
the model \cite{hartree, kristof1}. In the present investigation,
we study the isospin dependence of the in-medium masses
of charmonium obtained from the dilaton field, $\chi$
calculated for the asymmetric nuclear matter at finite temperatures.
This study will be of relevance for the experimental observables
from high density matter produced in the asymmetric nuclear collisions
at the future facility at GSI.
The outline of the paper is as follows: in section II, we give a brief
introduction of the chiral $SU(3)$ model used to study the in-medium
masses of charmonium in the present investigation. The medium
modifications of the charmonium masses arise from the medium
modification of a scalar dilaton field introduced in the hadronic
model to incorporate broken scale invariance of QCD leading to QCD
trace anomaly. In section III, we summarize the results obtained
in the present investigation.
\section{The hadronic chiral $SU(3) \times SU(3)$ model }
We use an effective chiral $SU(3)$ model for the present investigation
\cite{papa}. The model is based on the nonlinear realization of chiral
symmetry \cite{weinberg, coleman, bardeen} and broken scale invariance
\cite{papa,hartree, kristof1}. This model has been used successfully to
describe nuclear matter, finite nuclei, hypernuclei and neutron stars.
The effective hadronic chiral Lagrangian density contains the following terms
\begin{equation}
{\cal L} = {\cal L}_{kin}+\sum_{W=X,Y,V,A,u} {\cal L}_{BW} +
{\cal L}_{vec} + {\cal L}_{0} + {\cal L}_{SB}
\label{genlag}
\end{equation}
In Eq. (\ref{genlag}), ${\cal L}_{kin}$ is the kinetic energy term,
${\cal L}_{BW}$ is the baryon-meson interaction term in which the
baryon-spin-0 meson interaction term generates the vacuum baryon masses.
${\cal L}_{vec}$ describes the dynamical mass generation of the vector
mesons via couplings to the scalar mesons and contain additionally
quartic self-interactions of the vector fields. ${\cal L}_{0}$ contains
the meson-meson interaction terms inducing the spontaneous breaking of
chiral symmetry as well as a scale invariance breaking logarithmic
potential. ${\cal L}_{SB}$ describes the explicit chiral symmetry breaking.
To study the hadron properties at finite temperature and densities
in the present investigation, we use the mean field approximation,
where all the meson fields are treated as classical fields.
In this approximation, only the scalar and the vector fields
contribute to the baryon-meson interaction, ${\cal L}_{BW}$
since for all the other mesons, the expectation values are zero.
The interactions of the scalar mesons and vector mesons with the
baryons are given as
\begin{eqnarray}
{\cal L} _{Bscal} + {\cal L} _{Bvec} = - \sum_{i} \bar{\psi}_{i}
\left[ m_{i}^{*} + g_{\omega i} \gamma_{0} \omega
+ g_{\rho i} \gamma_{0} \rho + g_{\phi i} \gamma_{0} \phi
\right] \psi_{i}.
\end{eqnarray}
The interaction of the vector mesons, of the scalar fields and
the interaction corresponding to the explicitly symmetry breaking
in the mean field approximation are given as
\begin{eqnarray}
{\cal L} _{vec} & = & \frac{1}{2} \left( m_{\omega}^{2} \omega^{2}
+ m_{\rho}^{2} \rho^{2} + m_{\phi}^{2} \phi^{2} \right)
\frac{\chi^{2}}{\chi_{0}^{2}}
\nonumber \\
& + & g_4 (\omega ^4 +6\omega^2 \rho^2+\rho^4 + 2\phi^4),
\end{eqnarray}
\begin{eqnarray}
{\cal L} _{0} & = & -\frac{1}{2} k_{0}\chi^{2} \left( \sigma^{2} + \zeta^{2}
+ \delta^{2} \right) + k_{1} \left( \sigma^{2} + \zeta^{2} + \delta^{2}
\right)^{2} \nonumber\\
&+& k_{2} \left( \frac{\sigma^{4}}{2} + \frac{\delta^{4}}{2} + 3 \sigma^{2}
\delta^{2} + \zeta^{4} \right)
+ k_{3}\chi\left( \sigma^{2} - \delta^{2} \right)\zeta \nonumber\\
&-& k_{4} \chi^{4} - \frac{1}{4} \chi^{4} {\rm {ln}}
\frac{\chi^{4}}{\chi_{0}^{4}}
+ \frac{d}{3} \chi^{4} {\rm {ln}} \Bigg (\bigg( \frac{\left( \sigma^{2}
- \delta^{2}\right) \zeta }{\sigma_{0}^{2} \zeta_{0}} \bigg)
\bigg (\frac{\chi}{\chi_0}\bigg)^3 \Bigg ),
\label{lagscal}
\end{eqnarray}
and
\begin{eqnarray}
{\cal L} _{SB} & = & - \left( \frac{\chi}{\chi_{0}}\right) ^{2}
\left[ m_{\pi}^{2}
f_{\pi} \sigma + \left( \sqrt{2} m_{k}^{2}f_{k} - \frac{1}{\sqrt{2}}
m_{\pi}^{2} f_{\pi} \right) \zeta \right].
\end{eqnarray}
The effective mass of the baryon of species $i$ is given as
\begin{equation}
{m_i}^{*} = -(g_{\sigma i}\sigma + g_{\zeta i}\zeta + g_{\delta i}\delta)
\label{mbeff}
\end{equation}
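Since the effective mass (\ref{mbeff}) is linear in the scalar fields, it
is trivial to evaluate once the fields are known; a minimal numerical
sketch (the coupling and field values in the test are placeholders for
illustration, not the fitted model parameters):

```python
def effective_baryon_mass(g_sigma, g_zeta, g_delta, sigma, zeta, delta):
    """m_i* = -(g_sigma_i * sigma + g_zeta_i * zeta + g_delta_i * delta);
    the scalar fields are negative in this convention, so m_i* is positive."""
    return -(g_sigma * sigma + g_zeta * zeta + g_delta * delta)
```

The in-medium drop of $|\sigma|$ then translates directly into a drop of
$m_i^*$, and a nonzero $\delta$ splits the masses within an isospin
multiplet.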
The baryon-scalar meson interactions, as can be seen from equation
(\ref{mbeff}), generate the baryon masses through
the coupling of baryons to the non-strange $\sigma$, strange $\zeta$
scalar mesons and also to scalar-isovector meson $\delta$. In analogy
to the baryon-scalar meson coupling there exist two independent
baryon-vector meson interaction terms corresponding to the F-type
(antisymmetric) and D-type (symmetric) couplings. Here the antisymmetric
coupling is used because the universality principle \cite{saku69}
and the vector meson dominance model suggest a small symmetric coupling.
Additionally, we choose the parameters \cite{papa,isoamss} so as
to decouple the strange vector field $\phi_{\mu}\sim\bar{s}\gamma_{\mu}s$
from the nucleon, corresponding to an ideal mixing between $\omega$ and
$\phi$ mesons. A small deviation of the mixing angle from ideal mixing
\cite{dumbrajs,rijken,hohler1} has not been taken into account in the
present investigation.
The concept of broken scale invariance leading to the trace anomaly
in (massless) QCD, $\theta_{\mu}^{\mu} = \frac{\beta_{QCD}}{2g}
{G^a}_{\mu\nu} G^{\mu\nu a}$, where $G_{\mu\nu}^{a} $ is the
gluon field strength tensor of QCD, is simulated in the effective
Lagrangian at tree level \cite{sche1} through the introduction of
the scale breaking terms
\begin{equation}
{\cal L}_{scalebreaking} = -\frac{1}{4} \chi^{4} {\rm {ln}}
\Bigg ( \frac{\chi^{4}} {\chi_{0}^{4}} \Bigg ) + \frac{d}{3}{\chi ^4}
{\rm {ln}} \Bigg ( \bigg (\frac{I_{3}}{{\rm {det}}\langle X
\rangle _0} \bigg ) \bigg ( \frac {\chi}{\chi_0}\bigg)^3 \Bigg ),
\label{scalebreak}
\end{equation}
where $I_3={\rm {det}}\langle X \rangle$, with $X$ as the multiplet
for the scalar mesons. These scale breaking terms,
in the mean field approximation, are given by the last two terms
of the Lagrangian density, ${\cal L}_0$ given by equation (\ref{lagscal}).
The effect of these logarithmic terms is to break the scale invariance,
which leads to the trace of the energy momentum tensor as \cite{heide1}
\begin{equation}
\theta_{\mu}^{\mu} = \chi \frac{\partial {\cal L}}{\partial \chi}
- 4{\cal L}
= -(1-d)\chi^{4}.
\label{tensor1}
\end{equation}
Hence the scalar gluon condensate of QCD ($\langle {G^a}_{\mu \nu}
G^{\mu \nu a} \rangle$) is simulated by a scalar dilaton field in the present
hadronic model.
The coupled equations of motion for the non-strange scalar field $\sigma$,
strange scalar field $ \zeta$, scalar-isovector field $ \delta$ and dilaton
field $\chi$, are derived from the Lagrangian density
and are given as
\begin{eqnarray}
&& k_{0}\chi^{2}\sigma-4k_{1}\left( \sigma^{2}+\zeta^{2}
+\delta^{2}\right)\sigma-2k_{2}\left( \sigma^{3}+3\sigma\delta^{2}\right)
-2k_{3}\chi\sigma\zeta \nonumber\\
&-&\frac{d}{3} \chi^{4} \bigg (\frac{2\sigma}{\sigma^{2}-\delta^{2}}\bigg )
+\left( \frac{\chi}{\chi_{0}}\right) ^{2}m_{\pi}^{2}f_{\pi}
-\sum g_{\sigma i}\rho_{i}^{s} = 0
\label{sigma}
\end{eqnarray}
\begin{eqnarray}
&& k_{0}\chi^{2}\zeta-4k_{1}\left( \sigma^{2}+\zeta^{2}+\delta^{2}\right)
\zeta-4k_{2}\zeta^{3}-k_{3}\chi\left( \sigma^{2}-\delta^{2}\right)\nonumber\\
&-&\frac{d}{3}\frac{\chi^{4}}{\zeta}+\left(\frac{\chi}{\chi_{0}} \right)
^{2}\left[ \sqrt{2}m_{k}^{2}f_{k}-\frac{1}{\sqrt{2}} m_{\pi}^{2}f_{\pi}\right]
-\sum g_{\zeta i}\rho_{i}^{s} = 0
\label{zeta}
\end{eqnarray}
\begin{eqnarray}
& & k_{0}\chi^{2}\delta-4k_{1}\left( \sigma^{2}+\zeta^{2}+\delta^{2}\right)
\delta-2k_{2}\left( \delta^{3}+3\sigma^{2}\delta\right) +k_{3}\chi\delta
\zeta \nonumber\\
& + & \frac{2}{3} d \chi^{4} \left( \frac{\delta}{\sigma^{2}-\delta^{2}}\right)
-\sum g_{\delta i}\rho_{i}^{s} = 0
\label{delta}
\end{eqnarray}
\begin{eqnarray}
& & k_{0}\chi \left( \sigma^{2}+\zeta^{2}+\delta^{2}\right)-k_{3}
\left( \sigma^{2}-\delta^{2}\right)\zeta + \chi^{3}\left[1
+{\rm {ln}}\left( \frac{\chi^{4}}{\chi_{0}^{4}}\right) \right]
+(4k_{4}-d)\chi^{3}
\nonumber\\
& - & \frac{4}{3} d \chi^{3} {\rm {ln}} \Bigg ( \bigg (\frac{\left( \sigma^{2}
-\delta^{2}\right) \zeta}{\sigma_{0}^{2}\zeta_{0}} \bigg )
\bigg (\frac{\chi}{\chi_0}\bigg)^3 \Bigg )
+\frac{2\chi}{\chi_{0}^{2}}\left[ m_{\pi}^{2}
f_{\pi}\sigma +\left(\sqrt{2}m_{k}^{2}f_{k}-\frac{1}{\sqrt{2}}
m_{\pi}^{2}f_{\pi} \right) \zeta\right] = 0
\label{chi}
\end{eqnarray}
In the above, ${\rho_i}^s$ are the scalar densities for the baryons,
given as
\begin{eqnarray}
\rho_{i}^{s} = \gamma_{i}\int\frac{d^{3}k}{(2\pi)^{3}}
\frac{m_{i}^{*}}{E_{i}^{*}(k)}
\Bigg ( \frac {1}{e^{({E_i}^* (k) -{\mu_i}^*)/T}+1}
+ \frac {1}{e^{({E_i}^* (k) +{\mu_i}^*)/T}+1} \Bigg )
\label{scaldens}
\end{eqnarray}
where, ${E_i}^*(k)=(k^2+{{m_i}^*}^2)^{1/2}$, and, ${\mu _i}^*
=\mu_i -g_{\omega i}\omega -g_{\rho i}\rho -g_{\phi i}\phi$, are the single
particle energy and the effective chemical potential
for the baryon of species $i$, and,
$\gamma_i$=2 is the spin degeneracy factor \cite{isoamss}.
The above coupled equations of motion are solved to obtain the density
and temperature dependent values of the scalar fields ($\sigma$,
$\zeta$ and $\delta$) and the dilaton field, $\chi$, in the isospin
asymmetric hot nuclear medium. As has been already mentioned, the value
of the $\chi$ is related to the scalar gluon condensate in the hot
hadronic medium, and is used to compute the in-medium masses of charmonium
states, in the present investigation. The isospin asymmetry in the medium
is introduced through the scalar-isovector field $\delta$
and therefore the dilaton field obtained after solving the above
equations is also dependent on the isospin asymmetry parameter,
$\eta$ defined as $\eta= ({\rho_n -\rho_p})/({2 \rho_B})$,
where $\rho_n$ and $\rho_p$ are the number densities of the neutron
and the proton and $\rho_B$ is the baryon density. In the present
investigation, we study the effect of isospin asymmetry of the medium
on the masses of the charmonium states $J/\psi, \psi(3686)$
and $\psi(3770)$.
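For purely nucleonic matter, where $\rho_B=\rho_n+\rho_p$, the asymmetry
parameter is elementary to evaluate; a minimal sketch:

```python
def eta(rho_n, rho_p):
    """Isospin asymmetry eta = (rho_n - rho_p) / (2 rho_B); for nucleonic
    matter rho_B = rho_n + rho_p, so eta ranges over [-1/2, 1/2]."""
    return (rho_n - rho_p) / (2.0 * (rho_n + rho_p))
```

Symmetric nuclear matter corresponds to $\eta=0$ and pure neutron matter
to $\eta=0.5$.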
The comparison of the trace of the energy momentum tensor arising
from the trace anomaly of QCD with that of the present chiral model
gives the relation of the dilaton field to the scalar gluon condensate.
We have, in the limit of massless quarks \cite{cohen},
\begin{equation}
\theta_{\mu}^{\mu} = \langle \frac{\beta_{QCD}}{2g}
G_{\mu\nu}^{a} G^{\mu\nu a} \rangle \equiv -(1 - d)\chi^{4}
\label{tensor2}
\end{equation}
The parameter $d$ originates from the second logarithmic term of equation
(\ref{scalebreak}). To get an insight into the value of the parameter
$d$, we recall that the QCD $\beta$ function at one loop level, for
$N_{c}$ colors and $N_{f}$ flavors is given by
\begin{equation}
\beta_{\rm {QCD}} \left( g \right) = -\frac{11 N_{c} g^{3}}{48 \pi^{2}}
\left( 1 - \frac{2 N_{f}}{11 N_{c}} \right) + O(g^{5})
\label{beta}
\end{equation}
In the above equation, the first term in the parentheses arises from
the (antiscreening) self-interaction of the gluons and the second term,
proportional to $N_{f}$, arises from the (screening) contribution of
quark pairs. Equations (\ref{tensor2}) and (\ref{beta}) suggest the
value of $d$ to be 6/33 for three flavors and three colors, and
for the case of three colors and two flavors, the value of $d$
turns out to be 4/33, to be consistent with the one loop estimate
of QCD $\beta$ function. These values give the order of magnitude
about which the parameter $d$ can be taken \cite{heide1}, since one
cannot rely on the one-loop estimate for $\beta_{\rm {QCD}}(g)$.
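The two quoted values of $d$ follow directly from the quark-loop fraction
in equation (\ref{beta}); as exact fractions:

```python
from fractions import Fraction

def d_one_loop(n_c, n_f):
    """d = 2 N_f / (11 N_c), the screening fraction read off from the
    one-loop QCD beta function."""
    return Fraction(2 * n_f, 11 * n_c)
```

This reproduces $d=6/33$ for $N_c=N_f=3$ and $d=4/33$ for $N_c=3$,
$N_f=2$.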
In the present investigation of the in-medium properties of the
charmonium states due to the medium modification of the dilaton
field within chiral $SU(3)$ model, we use the value of $d$=0.064
\cite{amarind}. This parameter, along with the other parameters
corresponding to the scalar Lagrangian density, ${\cal L}_0$
given by (\ref{lagscal}), are fitted so as to ensure
extrema in the vacuum for the $\sigma$, $\zeta$ and $\chi$ field
equations, to reproduce the vacuum masses of the $\eta$ and $\eta '$
mesons and the mass of the $\sigma$ meson around 500 MeV, and to give
vanishing pressure, $p(\rho_0)=0$,
with $\rho_0$ as the nuclear matter saturation density \cite{papa,amarind}.
The trace of the energy-momentum tensor in QCD, using the
one loop beta function given by equation (\ref{beta}),
for $N_c$=3 and $N_f$=3, is given as,
\begin{equation}
\theta_{\mu}^{\mu} = - \frac{9}{8} \frac{\alpha_{s}}{\pi}
G_{\mu\nu}^{a} G^{\mu\nu a}
\label{tensor4}
\end{equation}
Using equations (\ref{tensor2}) and (\ref{tensor4}), we can write
\begin{equation}
\left\langle \frac{\alpha_{s}}{\pi} G_{\mu\nu}^{a} G^{ \mu\nu a}
\right\rangle = \frac{8}{9}(1 - d) \chi^{4}
\label{chiglu}
\end{equation}
We thus see from the equation (\ref{chiglu}) that the scalar
gluon condensate $\left\langle \frac{\alpha_{s}}{\pi} G_{\mu\nu}^{a}
G^{\mu\nu a}\right\rangle$ is proportional to the fourth power of the
dilaton field, $\chi$, in the chiral SU(3) model.
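Equation (\ref{chiglu}) is a one-line evaluation once $\chi$ is known from
the field equations; a minimal sketch (the default $d=0.064$ is the value
used in this work, while the $\chi$ values in the test are arbitrary
illustrative numbers):

```python
def gluon_condensate(chi, d=0.064):
    """<(alpha_s/pi) G^a_{mu nu} G^{mu nu a}> = (8/9) * (1 - d) * chi**4,
    Eq. (chiglu); chi in the same units for vacuum and medium."""
    return (8.0 / 9.0) * (1.0 - d) * chi ** 4
```

In particular, the ratio of the in-medium to vacuum condensate is simply
$(\chi/\chi_0)^4$, so a small drop of the dilaton field translates into a
modest drop of the gluon condensate.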
As mentioned earlier, the in-medium masses of charmonium states are
modified due to the gluon condensates. Therefore, we need to know the
change in the gluon condensate with density and temperature
of the asymmetric nuclear medium, which is calculated from the
modification of the $\chi$ field, by using equation (\ref{chiglu}).
From the QCD sum rule calculations, the mass shift of the charmonium
states in the medium is due to the gluon condensates \cite{haya1,lee1}.
For heavy quark systems, there are two independent lowest dimension
operators: the scalar gluon condensate ( $\left\langle
\frac{\alpha_{s}}{\pi} G_{\mu\nu}^{a} G^{\mu\nu a}\right\rangle$ )
and the condensate of the twist-2 gluon operator ( $\left\langle
\frac{\alpha_{s}}{\pi} G_{\mu\sigma}^{a} {G_{\nu}}^{\sigma a}\right\rangle$ ).
These operators can be rewritten in terms of the color electric and
color magnetic fields, $\langle \frac{\alpha_s}{\pi} {\vec E}^2\rangle$
and $\langle \frac{\alpha_s}{\pi} {\vec B}^2\rangle$. Additionally,
since the Wilson coefficient for the operator $\langle \frac{\alpha_s}{\pi}
{\vec B}^2\rangle$ vanishes in the non-relativistic limit, the only
contribution from the gluon condensates is proportional to $\langle
\frac{\alpha_s}{\pi} {\vec E}^2\rangle$, similar to the second order
Stark effect. Hence, the mass shift of the charmonium states arises
due to the change in the operator $\langle \frac{\alpha_s}{\pi}
{\vec E}^2\rangle$ in the medium from its vacuum value \cite {lee1}.
In the leading order mass shift formula derived in the large charm
mass limit \cite{pes1}, the shift in the mass of the charmonium
state is given as
\cite{lee1}
\begin{equation}
\Delta m_{\psi} (\epsilon) = -\frac{1}{9} \int dk^{2} \vert
\frac{\partial \psi (k)}{\partial k} \vert^{2} \frac{k}{k^{2}
/ m_{c} + \epsilon} \bigg (
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle-
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle_{0}
\bigg ).
\label{mass1}
\end{equation}
In the above, $m_c$ is the mass of the charm quark, taken as 1.95 GeV
\cite{lee1}, $m_\psi$ is the vacuum mass of the charmonium state
and $\epsilon = 2 m_{c} - m_{\psi}$.
$\psi (k)$ is the wave function of the charmonium state
in the momentum space, normalized as $\int\frac{d^{3}k}{(2\pi)^{3}}
\vert \psi(k) \vert^{2} = 1 $ \cite{leetemp}.
At finite densities, in the linear density approximation, the change
in the value of $\langle \frac{\alpha_s}{\pi} {\vec E}^2\rangle$,
from its vacuum value, is given as
\begin{equation}
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle-
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle_{0}
=
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle _{N}
\frac {\rho_B}{2 M_N},
\end{equation}
and the mass shift in the charmonium states reduces to \cite{lee1}
\begin{equation}
\Delta m_{\psi} (\epsilon) = -\frac{1}{9} \int dk^{2} \vert
\frac{\partial \psi (k)}{\partial k} \vert^{2} \frac{k}{k^{2}
/ m_{c} + \epsilon}
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle _{N}
\frac {\rho_B}{2 M_N}.
\label{masslindens}
\end{equation}
In the above, $\left\langle \frac{\alpha_{s}}{\pi} E^{2}
\right\rangle _{N}$ is the expectation value of
$\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle$
with respect to the nucleon.
The expectation value of the scalar gluon condensate can be expressed
in terms of the color electric field and the color magnetic field
as \cite{david}
\begin{equation}
\left\langle
\frac{\alpha_{s}}{\pi} G_{\mu\nu}^{a} G^{\mu\nu a}\right\rangle
=-2 \left\langle \frac{\alpha_{s}}{\pi} (E^{2} - B^{2}) \right\rangle.
\end{equation}
In the non-relativistic limit, as already mentioned, the contribution
from the magnetic field vanishes and hence, we can write,
\begin{equation}
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle
=-\frac {1}{2}
\left\langle \frac{\alpha_{s}}{\pi}
G_{\mu\nu}^{a} G^{\mu\nu a}\right\rangle
\label{e2glu}
\end{equation}
Using equations (\ref{chiglu}), (\ref{mass1}) and (\ref{e2glu}),
we obtain the expression for the mass shift of the charmonium
states in the hot and dense nuclear medium, arising from
the change in the dilaton field in the present investigation,
as
\begin{equation}
\Delta m_{\psi} (\epsilon) = \frac{4}{81} (1 - d) \int dk^{2}
\vert \frac{\partial \psi (k)}{\partial k} \vert^{2} \frac{k}{k^{2}
/ m_{c} + \epsilon} \left( \chi^{4} - {\chi_0}^{4}\right).
\label{masspsi}
\end{equation}
In the above, $\chi$ and $\chi_0$ are the values of the dilaton field
in the nuclear medium and the vacuum respectively.
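For clarity, the intermediate step may be made explicit. Substituting equation (\ref{chiglu}) into equation (\ref{e2glu}) gives the in-medium change of the color electric condensate directly in terms of the dilaton field,

```latex
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle -
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle_{0}
= -\frac{4}{9}\,(1 - d)\left( \chi^{4} - \chi_{0}^{4} \right),
```

which, inserted into equation (\ref{mass1}), yields equation (\ref{masspsi}). Since $\chi < \chi_0$ in the medium, the right hand side is positive, and the resulting mass shift is negative.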
In the present investigation, the wave functions for the charmonium states
are taken to be Gaussian and are given as \cite{charmwavefn}
\begin{equation}
\psi_{N, l} = {\cal N}\, Y_{l}^{m} (\theta, \phi)\,
(\beta^{2} r^{2})^{\frac{1}{2} l}\, e^{-\frac{1}{2} \beta^{2} r^{2}}\,
L_{N - 1}^{l + \frac{1}{2}} \left( \beta^{2} r^{2}\right)
\label{wavefn}
\end{equation}
where ${\cal N}$ is a normalization constant,
$\beta^{2} = M \omega / \hbar$ characterizes the strength of the
harmonic potential, $M = m_{c}/2$ is the reduced mass of
the charm quark and charm anti-quark system, and $L_{p}^{k} (z)$
is the associated Laguerre polynomial. As in Ref. \cite{lee1},
the oscillator constant $\beta$ is determined from the mean squared
radii $\langle r^{2} \rangle$ as 0.46$^{2}$ fm$^2$, 0.96$^{2}$ fm$^2$
and 1 fm$^{2}$ for the charmonium states $J/\psi(3097) $, $\psi(3686)$ and
$\psi(3770)$, respectively. This gives the value of the parameter
$\beta$ as 0.51 GeV, 0.38 GeV and 0.37 GeV for $J/\psi(3097)$,
$\psi(3686)$ and $\psi(3770)$, assuming that these
charmonium states are the 1S, 2S and 1D states respectively.
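As a cross-check (a sketch, not part of the original text), the quoted values of $\beta$ can be reproduced from the quoted $\langle r^2 \rangle$ values using the standard oscillator expectation value $\langle r^{2}\rangle_{N,l} = \left(2(N-1) + l + \tfrac{3}{2}\right)/\beta^{2}$ for the Gaussian basis; the conversion constant $\hbar c = 0.19733$ GeV fm is assumed.

```python
import math

HBARC = 0.19733  # GeV fm, assumed conversion constant


def beta_GeV(r2_fm2, N, l):
    """Oscillator parameter beta (GeV) from <r^2> (fm^2) for state (N, l),
    using <r^2> = (2(N-1) + l + 3/2)/beta^2 for the Gaussian basis."""
    beta_inv_fm = math.sqrt((2 * (N - 1) + l + 1.5) / r2_fm2)  # beta in fm^-1
    return beta_inv_fm * HBARC                                 # convert to GeV


print(beta_GeV(0.46**2, 1, 0))  # J/psi     (1S): about 0.52 GeV (quoted: 0.51)
print(beta_GeV(0.96**2, 2, 0))  # psi(3686) (2S): about 0.38 GeV
print(beta_GeV(1.00,    1, 2))  # psi(3770) (1D): about 0.37 GeV
```

The small residual differences from the quoted values reflect rounding of the inputs.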
Knowing the wave functions of the charmonium states and
calculating the medium modification of the dilaton field
in the hot nuclear matter, we obtain the mass shifts of the
charmonium states $J/\psi$, $\psi (3686)$ and $\psi (3770)$.
In the next section we shall present the results
of the present investigation of these in-medium charmonium masses
in hot asymmetric nuclear matter.
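Equation (\ref{masspsi}) can also be evaluated numerically as a rough consistency check. The sketch below is not part of the original derivation; it assumes the 1S Gaussian wave function with the quoted $\beta = 0.51$ GeV, $m_c = 1.95$ GeV and $d = 0.064$, together with the dilaton values $\chi_0 = 409.8$ MeV and $\chi = 406.4$ MeV (at $\rho_0$, $T=0$, $\eta=0$) quoted in the Results section.

```python
import math

# All energies in GeV; inputs are the rounded values quoted in the text.
m_c, m_psi = 1.95, 3.097
eps = 2 * m_c - m_psi            # epsilon = 2 m_c - m_psi
beta = 0.51                      # oscillator parameter of the 1S state
d = 0.064
chi0, chi = 0.4098, 0.4064       # dilaton field: vacuum and rho_0 (T=0, eta=0)

# |N|^2 for psi(k) = N exp(-k^2/(2 beta^2)), normalized as
# int d^3k/(2 pi)^3 |psi(k)|^2 = 1
N2 = 8 * math.pi ** 1.5 / beta ** 3


def integrand(k):
    """Integrand of the mass-shift integral, with dk^2 = 2k dk."""
    dpsi_dk_sq = N2 * (k / beta ** 2) ** 2 * math.exp(-(k / beta) ** 2)
    return 2 * k * dpsi_dk_sq * k / (k ** 2 / m_c + eps)


# Composite Simpson's rule on [0, 4] GeV; the Gaussian tail beyond is negligible
n, a, b = 2000, 0.0, 4.0
h = (b - a) / n
integral = integrand(a) + integrand(b) + sum(
    (4 if i % 2 else 2) * integrand(a + i * h) for i in range(1, n))
integral *= h / 3                # units: GeV^-3

dm = (4 / 81) * (1 - d) * integral * (chi ** 4 - chi0 ** 4)   # GeV
print(f"Delta m(J/psi) at rho_0: {1e3 * dm:.1f} MeV")
```

With these rounded inputs the sketch gives roughly $-9$ MeV, consistent within input rounding with the $-8.6$ MeV quoted in the Results section.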
\section{Results and Discussions}
In this section, we first investigate the effects of density,
isospin-asymmetry and temperature of the nuclear medium on the dilaton
field $\chi$ using a chiral SU(3) model. From the medium modification
of the $\chi$ field, we shall then study the in-medium masses of
charmonium states $J/\psi$, $\psi(3686)$ and $\psi(3770)$ using
equation (\ref{masspsi}).
The values of the parameters used in the present investigation
are: $k_{0} = 2.54, k_{1} = 1.35, k_{2} = -4.78, k_{3} = -2.77$,
$k_{4} = -0.22$ and $d = 0.064$, which are the parameters
occurring in the scalar meson interactions defined in equation
(\ref{lagscal}).
The vacuum values of the scalar isoscalar fields, $\sigma$ and $\zeta$
and the dilaton field $\chi$ are $-93.3$ MeV, $-106.6$ MeV and 409.77 MeV
respectively.
The values $g_{\sigma N} = 10.6$ and $g_{\zeta N} = -0.47$ are
determined by fitting to the vacuum baryon masses. The other parameters
fitted to the asymmetric nuclear matter saturation properties
in the mean-field approximation are: $g_{\omega N}$ = 13.3,
$g_{\rho p}$ = 5.5, $g_{4}$ = 79.7, $g_{\delta p}$ = 2.5,
$m_{\zeta}$ = 1024.5 MeV, $ m_{\sigma}$ = 466.5 MeV
and $m_{\delta}$ = 899.5 MeV. The nuclear matter saturation
density used in the present investigation is $0.15$ fm$^{-3}$.
\begin{figure}
\includegraphics[width=16cm,height=16cm]{fig1.eps}
\caption{(Color online)
The dilaton field $\chi$ plotted as a function of the
temperature, at given baryon densities,
for different values of the isospin asymmetry parameter, $\eta$.
}
\label{chitemp}
\end{figure}
The variations of the dilaton field $\chi$ with density, temperature
and isospin asymmetry, within the chiral SU(3) model, are obtained by
solving the coupled equations of motion of scalar fields given by
equations (\ref{sigma}), (\ref{zeta}), (\ref{delta}) and (\ref{chi}).
In figure \ref{chitemp}, we show the variation of dilaton field $\chi$,
with temperature, for both zero and finite baryon densities,
and for selected values of the isospin asymmetry parameter,
$\eta$ = 0, 0.1, 0.3 and 0.5. At zero baryon density, the value of
the dilaton field remains almost constant up to a temperature of
about 130 MeV, above which it drops with increasing temperature.
The drop, however, is very small: the dilaton field changes from
409.8 MeV at $T=0$ to about 409.7 MeV and 409.3 MeV at $T=150$ MeV
and $T=175$ MeV respectively.
As can be seen from the expression for the scalar densities, given
by (\ref{scaldens}), the thermal distribution functions have the
effect of increasing the scalar densities at zero baryon density,
i.e., for $\mu_i ^*$=0, although this effect is negligible up to
a temperature of about 130 MeV. The increase in the scalar densities
leads to a decrease in the magnitudes of the scalar fields,
$\sigma$ and $\zeta$.
This behaviour of the scalar fields is reflected in the value of
$\chi$, which is obtained by solving the coupled equations of motion
of the scalar fields, given by equations (\ref{sigma}), (\ref{zeta}),
(\ref{delta}) and (\ref{chi}), as a drop as the temperature is
raised above about 130 MeV.
The fact that the scalar densities attain nonzero values at high
temperatures, even at zero baryon density, indicates the presence
of baryon-antibaryon pairs in the thermal bath, as has already been
observed in the literature \cite{kristof1,scalartemp}. This causes
the baryon masses to differ from their vacuum masses above this
temperature, through the modifications of the scalar fields $\sigma$
and $\zeta$.
For finite density situations, the behaviour of the $\chi$ field
with temperature is seen to be very different from the zero density
case, as can be seen from the subplots (b),(c) and (d) of figure \ref{chitemp},
where the $\chi$ field is plotted as a function of the temperature
for densities $\rho_0$, 2$\rho_0$ and 4$\rho_0$ respectively.
At finite densities, one observes first a rise and then a decrease of
the dilaton field with temperature. This is related to the fact that
at finite densities, the magnitude of the $\sigma$ field (as well as
that of the $\zeta$ field) first shows an increase and then a drop with
further increase of the temperature \cite{amarvind}, which is reflected in the
behaviour of $\chi$ field, since it is solved from the coupled
equations of the scalar fields. The reason for the different behaviour
of the scalar fields ($\sigma$ and $\zeta$) at zero and finite densities
can be understood in the following manner \cite{kristof1}. As has been
already mentioned, the thermal distribution functions in (\ref{scaldens})
have an effect of increasing the scalar densities at zero baryon density, i.e.,
for $\mu_i ^*$=0. However, at finite densities, i.e., for nonzero values
of the effective chemical potential, ${\mu_i}^*$, for increasing temperature,
there are contributions also from higher momenta, thereby
increasing the denominator of the integrand on the right hand side of
equation (\ref{scaldens}). This leads to a decrease in the scalar
density. The competing effects of the thermal distribution functions
and the contributions of the higher momenta states
give rise to the observed effect of the scalar density
and hence of the $\sigma$ and $\zeta$ fields with temperature
at finite baryon densities \cite{kristof1}. This kind of temperature
dependence of the scalar $\sigma$ field at finite densities
has also been observed in the Walecka model by Li and Ko \cite{liko},
which was reflected as an increase in the mass of the nucleon with
temperature at finite densities in the mean field calculations.
The effects of the behaviour of the scalar fields on the value
of the $\chi$ field, obtained from solving the coupled equations
(\ref{sigma}) to (\ref{chi}) for the scalar fields, are shown in
figure \ref{chitemp}.
In figure \ref{chitemp}, it is observed that for a given value of
isospin asymmetry parameter $\eta$, the dilaton field $\chi$ decreases
with increase in the density of the nuclear medium. The drop
in the value of $\chi$ with density is much larger than
its modification with temperature at a given density.
For isospin symmetric nuclear medium ($\eta = 0$) at temperature $T = 0$,
the reduction in the dilaton field $\chi$ from its vacuum value ($\chi_0$
= 409.8 MeV), is seen to be about 3 MeV at $\rho_{B} = \rho_{0}$
and about 13 MeV, for $\rho_{B} = 4\rho_{0}$.
As we move from the isospin symmetric medium ($\eta = 0$) to the
isospin asymmetric medium, at temperature $T = 0$ and for a given
value of the density, there is an increase in the value of the
dilaton field $\chi$. However, the effect of isospin asymmetry
of the medium on the value of the dilaton field is observed to be
negligible up to about the nuclear matter saturation density,
and is appreciable only at higher values of the density, as can be seen
in figure \ref{chitemp}. At nuclear matter saturation density,
$\rho_{0}$, the value of dilaton field $\chi$ changes from
$406.4$ MeV in symmetric nuclear medium ($\eta = 0$) to $406.5$ MeV in
the isospin asymmetric nuclear medium ($\eta = 0.5$). At a density of
about $4\rho_{0}$, the values of the dilaton field are modified to
396.7 MeV and 398 MeV at $\eta =0$ and $0.5$, respectively. Thus, at
zero temperature, the increase in the dilaton field $\chi$ with the
isospin asymmetry of the medium becomes more pronounced as we move to
higher densities.
At a finite density, $\rho_{B}$, and for given isospin asymmetry
parameter $\eta$, the dilaton field $\chi$ is seen to first increase
with temperature and above a particular value of the temperature,
it is seen to decrease with further increase in temperature. At the nuclear
saturation density $\rho_{B} = \rho_{0}$ and in isospin symmetric
nuclear medium ($\eta = 0$), the value of the dilaton field $\chi$
increases up to a temperature of about $T = 145$ MeV, above which there
is a drop in the dilaton field. For $\rho_B$=$\rho_0$ in the asymmetric
nuclear matter with $\eta = 0.5$, there is a rise in the value
of $\chi$ up to a temperature of about 120 MeV, above which it starts
decreasing. As has already been mentioned, at zero temperature and for a
given value of density, the dilaton field $\chi$ is found to increase
with increase in the isospin asymmetry of the nuclear medium. But
from figure \ref{chitemp}, it is observed that at high temperatures
and for a given density, the value of the dilaton field $\chi$ becomes
higher in the symmetric nuclear medium as compared to the isospin
asymmetric nuclear medium, e.g., at nuclear saturation density $\rho_{B} = \rho_{0}$
and temperature $T = 150$ MeV, the values of the dilaton field $\chi$ are
$407.3$ MeV and $407$ MeV at $\eta = 0$ and $0.5$ respectively.
At density $\rho_{B} = 4 \rho_{0}$, $T = 150$ MeV the values of
dilaton field $\chi$ are seen to be $399.1$ MeV and $398.7$ MeV
for $\eta = 0$ and $0.5$ respectively. This observed behaviour
of the $\chi$ field is related
to the fact that at finite densities and for isospin asymmetric matter,
there are contributions from the scalar isovector $\delta$ field,
whose magnitude decreases at higher temperatures
for given densities, whereas the $\delta$ field gives zero contribution
for isospin symmetric matter.
\begin{figure}
\includegraphics[width=16cm,height=16cm]{fig2.eps}
\caption{(Color online)
The mass shift of J/$\psi$ plotted as a function of the
baryon density in units of nuclear matter saturation density
at given temperatures,
for different values of the isospin asymmetry parameter, $\eta$.
}
\label{mjpsi}
\end{figure}
\begin{figure}
\includegraphics[width=16cm,height=16cm]{fig3.eps}
\caption{(Color online)
The mass shift of $\psi$(3686) plotted as a function of the
baryon density in units of nuclear matter saturation density
at given temperatures,
for different values of the isospin asymmetry parameter, $\eta$.
}
\label{mpsi1}
\end{figure}
\begin{figure}
\includegraphics[width=16cm,height=16cm]{fig4.eps}
\caption{(Color online)
The mass shift of $\psi$(3770) plotted as a function of the
baryon density in units of nuclear matter saturation density
at given temperatures,
for different values of the isospin asymmetry parameter, $\eta$.
}
\label{mpsi2}
\end{figure}
We shall now investigate how the behaviour of the dilaton field $\chi$
in the hot asymmetric nuclear medium affects the in-medium masses of
the charmonium states $J/\psi$, $\psi(3686)$ and $\psi(3770)$.
In figures \ref{mjpsi}, \ref{mpsi1} and \ref{mpsi2},
we show the shifts of the masses of charmonium states $J/\psi$, $\psi(3686)$
and $\psi(3770)$ from their vacuum values, as functions of the baryon density
for given values of temperature $T$, and for different values of the
isospin asymmetry parameter, $\eta$. We have shown the results for the
values of the temperature, T = 0, 50, 100 and 150 MeV. At the nuclear matter
saturation density, $\rho_{B}$ = $\rho_{0}$ at temperature $T = 0$,
the mass-shift for $J/\psi$ meson is observed to be about $-8.6$ MeV
in the isospin symmetric nuclear medium ($\eta = 0$)
and in the asymmetric nuclear medium, with isospin asymmetry parameter
$\eta = 0.5$, it is seen to be $-8.4$ MeV. At $\rho_{B} = 4\rho_{0}$,
temperature $T = 0$, the mass-shift for $J/\psi$ meson is $-32.2$ MeV
in the isospin symmetric nuclear medium ($\eta = 0$) and in isospin
asymmetric nuclear medium ($\eta = 0.5$), it changes to $-29.2$ MeV.
The increase in the magnitude of the mass-shift, with density $\rho_{B}$,
is because of the larger drop in the dilaton field $\chi$ at higher densities.
However, with increase in the isospin asymmetry of the medium the magnitude
of the mass-shift decreases because the drop in the dilaton field $\chi$
is less at a higher value of the isospin asymmetry parameter $\eta$.
At the nuclear matter saturation density $\rho_{B} = \rho_{0}$,
and for temperature $T = 0$, the mass-shift for $\psi(3686)$
is observed to be about $-117$ and $-114$ MeV for
$\eta = 0$ and $0.5$ respectively, and for $\psi(3770)$,
the values of the mass-shift are seen to be about $-155$ MeV and
$-$150 MeV respectively.
At $\rho_{B} = 4\rho_{0}$ and zero temperature, the values
of the mass-shift for $\psi(3686)$ are modified to $-436$ MeV
and $-396$ MeV for $\eta = 0$ and $0.5$ respectively, and,
for $\psi(3770)$, the drop in the masses are about
$-577$ MeV and $-523$ MeV respectively.
As mentioned earlier, the drop in the dilaton field, $\chi$,
at finite temperature is less than at zero temperature and
this behaviour is reflected in the smaller mass-shifts of the
charmonium states at finite temperatures as compared to the zero
temperature case. At nuclear matter saturation density $\rho_{B} = \rho_{0}$,
and for temperature $T = 100$ MeV, the values of the mass-shift for
the $J/\psi$ meson are observed to be about $-6.77$ MeV and
$-6.81$ MeV for isospin symmetric ($\eta = 0$)
and isospin asymmetric ($\eta = 0.5$) nuclear medium respectively.
At baryon density $\rho_{B} = 4\rho_{0}$, temperature $T = 100$ MeV,
the mass-shift for $J/\psi$ is observed to be $-28.4$ MeV and $-27.2$ MeV
for isospin symmetric ($\eta = 0$) and isospin asymmetric ($\eta = 0.5$)
nuclear medium respectively. For the excited charmonium states $\psi(3686)$
and $\psi(3770)$, the mass-shifts at nuclear matter saturation density
$\rho_{B} = \rho_{0}$ and temperature $T = 100$ MeV, are observed to
be $-91.8$ MeV and $-121.4$ MeV respectively, for isospin symmetric nuclear
matter ($\eta = 0$) and $-92.4$ MeV and $-122$ MeV for the isospin
asymmetric nuclear medium with $\eta = 0.5$.
For baryon density $\rho_{B} = 4\rho_{0}$,
and temperature $T = 100$ MeV, mass-shift for the charmonium states
$\psi(3686)$ and $\psi(3770)$ are modified to about $-386$ MeV and $-510$
MeV respectively, for isospin symmetric ($\eta = 0$) and
$-369$ MeV and $-488$ MeV for isospin asymmetric
nuclear medium with $\eta = 0.5$.
For temperature $T = 150$ MeV and at the nuclear matter saturation
density $\rho_{B} = \rho_{0}$, the mass-shifts for the charmonium states
$J/\psi, \psi(3686)$ and $\psi(3770)$ are seen to be $-6.25$, $-85$
and $-112$ MeV respectively in the isospin symmetric nuclear
medium ($\eta = 0$). These values are modified to $-7.2$, $-98$
and $-129$ MeV respectively, in the isospin asymmetric nuclear medium
with $\eta = 0.5$. At a baryon density $\rho_{B} = 4\rho_{0}$,
the values of the mass-shift for $J/\psi, \psi(3686)$ and $\psi(3770)$
are observed to be $-26.4$, $-358$ and $-473$ MeV in isospin
symmetric nuclear medium ($\eta = 0$) and in isospin asymmetric
nuclear medium with $\eta = 0.5$, these values
are modified to $-27.6$, $-375$ and $-494$ MeV respectively.
Note that at high temperatures, e.g., at $T = 150$ MeV, the magnitude
of the mass-shift in the isospin asymmetric nuclear medium ($\eta = 0.5$)
is larger than in the isospin symmetric nuclear medium ($\eta = 0$).
This is opposite to what is observed in the zero temperature case.
The reason is that at high
temperatures the dilaton field $\chi$ drops more in the isospin
asymmetric nuclear medium ($\eta = 0.5$) than in the isospin
symmetric nuclear medium ($\eta = 0$), due to the contributions
from the $\delta$ field for nonzero $\eta$, whose magnitude is
observed to decrease at high temperatures.
The values of the mass-shift for the charmonium states obtained
within the present investigation, at nuclear saturation density
$\rho_{0}$ and temperature $T =0$, are in good agreement with
the mass shifts of $J/\psi$, $\psi (3686)$ and $\psi (3770)$
of $-8$, $-100$ and $-140$ MeV respectively, at the nuclear matter
saturation density, computed in Ref. \cite{lee1} from the
second order QCD Stark effect, with the gluon condensate in the nuclear
medium computed in the linear density approximation. The
mass-shift for $J/\psi$ has also been studied with the QCD sum
rules in \cite{klingl} and the value at nuclear saturation density
was observed to be about $-7$ MeV.
In Ref. \cite{kimlee} the operator product expansion was carried out
up to dimension six and the mass shift for $J/\psi$ was calculated to be
$-4$ MeV at nuclear matter saturation density $\rho_{0}$ and
at zero temperature. The effect of temperature on the $J/\psi$
in the deconfinement phase was studied in \cite{leetemp,cesa}.
In these investigations, it was reported that the $J/\psi$ mass
is essentially constant in a wide range of temperatures and
above a particular value of the temperature, $T$, there was observed
to be a sharp change in the mass of $J/\psi$ in the deconfined phase
\cite{lee3}. In the present work, we have studied the effects of
temperature, density and isospin asymmetry, on the mass modifications
of the charmonium states ($J/\psi, \psi(3686)$ and $\psi(3770)$)
in the confined hadronic phase, arising due to modifications
of a scalar dilaton field which simulates the gluon condensates of
QCD, within a chiral SU(3) model. The effect of temperature was
found to be small for the charmonium states $J/\psi(3097)$,
$\psi(3686)$ and $\psi(3770)$, whereas their masses were observed
to vary considerably with density in the present investigation.
In summary, in the present work, we have investigated the effects
of density, temperature and isospin asymmetry of the nuclear medium on the
masses of the charmonium states $J/\psi, \psi(3686)$
and $\psi(3770)$, arising due to modification of the scalar dilaton
field, $\chi$, which simulates the gluon condensates of QCD,
within the chiral SU(3) model, combined with the second order QCD Stark effect.
The change in the mass of $J/\psi$ with density is observed to be small
at nuclear matter saturation density and is in agreement
with the QCD sum rule calculations. There is seen to be an
appreciable drop in the in-medium masses of excited charmonium
states $\psi(3686)$ and $\psi(3770)$ with the density.
At the nuclear matter saturation density, the mass shifts
of these states are similar to the values obtained using
the QCD second order Stark effect with the modifications
of the gluon condensates computed in the linear density
approximation \cite{lee1}.
For a given value of density and temperature, the effect of the
isospin asymmetry of the medium on the in-medium masses of the
charmonium states is found to be small. This is due to the fact that
the magnitude of the $\delta$ field remains small as compared to
the $\sigma$ and $\zeta$ fields. At finite densities, the effect
of the temperature on the charmonium states is found
to decrease the magnitude of the mass-shift up to a particular
temperature, above which the mass-shift is seen to rise. This is
because of an initial increase in the dilaton field $\chi$ and then
a drop with further increase in the temperature, at a given baryon
density, arising from solving the coupled equations for the scalar fields.
This is related to the fact that the scalar densities of the nucleons
initially drop and then rise with the temperature at finite values
of the baryon densities. The mass drops of the excited charmonium
states ($\psi (3686)$ and $\psi (3770)$) are large enough
to be seen in the dilepton spectra emitted from their
decays in experiments involving $\bar p$-A annihilation
at the future facility at GSI, provided these states decay
inside the nucleus. The lifetime of the $J/\psi$ has been
shown to be almost constant in the nuclear medium,
whereas for these excited charmonium states, the lifetimes
are shown to reduce to less than 5 fm/c, due to the appreciable
increase in their decay widths \cite{charmwavefn}.
Hence a significant fraction of the produced excited charmonium states
in these experiments are expected to decay inside the nucleus
\cite{Golubeva}. The in-medium properties of the excited charmonium
states $\psi(3686)$ and $\psi(3770)$ can be studied in the
dilepton spectra in $\bar p$-A experiments at the future FAIR
facility at GSI. The mass shifts of the charmonium states in the hot
nuclear medium seem to be appreciable at high densities as compared
to the temperature effects on these masses, and should show up in
observables such as the production of these charmonium states, as well
as of the open charm mesons, in the compressed baryonic matter (CBM)
experiment at the future facility at GSI, where baryonic matter
at high densities and moderate temperatures will be produced.
\acknowledgements
One of the authors (AM) is grateful to the Frankfurt Institute for Advanced
Research (FIAS), University of Frankfurt, for warm hospitality and
acknowledges financial support from Alexander von Humboldt Stiftung
when this work was initiated. Financial support from Department of
Science and Technology, Government of India (project no. SR/S2/HEP-21/2006)
is also gratefully acknowledged.
\section{Introduction}
Suppose you have a ruler that is about the length of your hand. With
that ruler, you can measure the size of all the visible objects in
your office. That scaling of objects in your office with the length
of the ruler means that those objects have a natural linear scaling
in relation to your ruler.
Now consider the distances from your office to various galaxies.
Your ruler is of no use, because you cannot distinguish whether a
particular galaxy moves farther away by one ruler unit. Instead,
for two galaxies, you can measure the ratio of distances from your
office to each galaxy. You might, for example, find that one galaxy
is twice as far as another, or, in general, that a galaxy is some
percentage farther away than another.
Percentage changes define a ratio scale of measure, which has
natural units in logarithmic measure \citep{Hand04Measurement}. For
example, a doubling of distance always adds $\log(2)$ to the
logarithm of the distance, no matter what the initial distance.
Measurement naturally grades from linear at local magnitudes to
logarithmic at distant magnitudes when compared to some local
reference scale. The transition between linear and logarithmic
varies between problems. Measures from some phenomena remain
primarily in the linear domain, such as measures of height and
weight in humans. Measures for other phenomena remain primarily in
the logarithmic domain, such as cosmological distances. Other
phenomena scale between the linear and logarithmic domains, such as
fluctuations in the price of financial assets
\citep{Aparicio01Empirical} or the distribution of income and wealth
\citep{dragulescu01exponential}.
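As a small numerical illustration of this point (a sketch, not part of the original text), a doubling adds $\log(2)$ in logarithmic measure regardless of the baseline magnitude:

```python
import math

# Doubling any distance d adds log(2) in logarithmic measure,
# independent of the baseline d -- the hallmark of a ratio scale.
for d in (1.0, 40.0, 2.3e7):
    increment = math.log(2 * d) - math.log(d)
    assert math.isclose(increment, math.log(2))
print("doubling always adds log(2) =", math.log(2))
```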
The second section of this article shows how the characteristic
scaling of measurement constrains the most likely probability
distribution of observations. We use the standard method of maximum
entropy to find the most likely probability distribution
\citep{Jaynes03Science}. But, rather than follow the traditional
approach of starting with the information in the summary statistics
of observations, such as the mean or variance, we begin with the
information in the characteristic scale of measurement. We argue
that measurement sets the fundamental nature of information and
shapes the probability distribution of observations. We present a
novel extension of the method of maximum entropy to incorporate
information about the scale of measurement.
The third section emphasizes the naturalness of the measurement
scale that grades from linear at small magnitudes to logarithmic at
large magnitudes. This linear to logarithmic scaling leads to
observations that often follow a linear-log exponential or Student's
probability distribution. A linear-log exponential distribution is
an exponential shape for small magnitudes and a power law shape for
large magnitudes. Student's distribution is a Gaussian shape for
small fluctuations from the mean and a power law for large
fluctuations from the mean. The shapes correspond to linear scaling
at small magnitudes and logarithmic scaling at large magnitudes.
Many naturally observed patterns follow these distributions. The
particular form depends on whether the measurement scale for a
problem is primarily linear, primarily logarithmic, or grades from
linear to logarithmic.
The fourth section inverts the natural linear to logarithmic scaling
for magnitudes. Because magnitudes often scale from linear to
logarithmic as one moves from small to large magnitudes, inverse
measures often scale from logarithmic to linear as one moves from
small to large magnitudes. This logarithmic to linear scaling leads
to observations that often follow a gamma probability distribution.
A gamma distribution is a power law shape for small magnitudes and
an exponential shape for large magnitudes, corresponding to
logarithmic scaling at small values and linear scaling at large
values. The gamma distribution includes as special cases the
exponential distribution, the power law distribution, and the
chi-square distribution, subsuming many commonly observed patterns.
The fifth section demonstrates that the Laplace integral transform
provides the formal connection between the inverse measurement
scales. The Laplace transform, like its analytic continuation the
Fourier transform, changes a magnitude with dimension $d$ on one
scale into an inverse magnitude with dimension $1/d$ on the other
scale. This inversion explains the close association between the
linear to logarithmic scaling as magnitudes increase and the inverse
scale that grades from logarithmic to linear as magnitudes increase.
We discuss the general role of integral transforms in changing the
scale of measurement. Superstatistics is the averaging of a
probability distribution with a variable parameter over a
probability distribution for the variable parameter
\citep{Beck03Superstatistics}. We show that superstatistics is a
special case of an integral transform, and thus can be understood as
a particular way in which to change the scale of measurement.
In the sixth section, we relate our study of measurement invariance
for continuous variables to previous methods of maximum entropy for
discrete variables. We also distinguish the general definition of
measurement scale by information invariance from our particular
argument about the commonness of linear-log scales.
In the discussion, we contrast our emphasis on the primacy of
measurement with alternative approaches to understanding
measurement, randomness, and probability. One common approach
changes the definition of randomness and entropy to incorporate a
change in measurement scale \citep{Tsallis09Introduction}. We argue
that our method makes more sense, because we directly incorporate
the change in measurement scale as a kind of information, rather
than alter the definition of randomness and entropy to match each
change in measurement scale. It is measurement that changes
empirically between problems rather than the abstract meaning of
randomness and information. Although we focus on the duality between
linear to logarithmic scaling and its inverse logarithmic to linear
scaling, our general approach applies to any type of measure
invariance and measurement scale.
\section{Measurement, Information Invariance, and Probability}
We derive most likely probability distributions. Our method follows
the maximum entropy approach
\citep{Jaynes57Information,Jaynes57Informationb,Jaynes03Science}.
That approach assumes that the most likely distribution has the
maximum amount of randomness, or entropy, subject to the constraint
that the distribution must capture all of the information available
to us. For example, if we know the average value of a sample of
observations, and we know that all values from the underlying
probability distribution are positive, then all candidate
probability distributions must have only positive values and have a
mean value that agrees with the average of the empirically observed
values. By maximum entropy, the most random distribution
constrained to have positive values and a fixed mean is the
exponential distribution.
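As a numerical aside (a sketch, not part of the original derivation, assuming SciPy is available; the two competing distributions are chosen arbitrarily), one can check that among positive-valued distributions constrained to a fixed mean, the exponential attains the largest differential entropy:

```python
import numpy as np
from scipy import stats

# Differential entropy of the exponential distribution with mean 1
# (analytically 1 - log(rate) = 1 for rate 1).
h_exp = stats.expon(scale=1.0).entropy()

# Two competing positive-valued distributions with the same mean 1.
h_gamma = stats.gamma(a=2.0, scale=0.5).entropy()               # mean = a*scale = 1
h_lognorm = stats.lognorm(s=1.0, scale=np.exp(-0.5)).entropy()  # mean = exp(mu + s^2/2) = 1

# The exponential is the maximum entropy distribution under this constraint.
assert h_exp > h_gamma and h_exp > h_lognorm
```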
We express the available information by constraints. Typical
constraints include the average or variance of observations. But we
must use all available information, which may include information
about the scale of measurement itself. Previous studies have
discussed how the scale of measurement provides information.
However, that aspect of maximum entropy has not been fully developed
\citep{Jaynes03Science,Frank09The-common}. Our goal is to develop
the central role of measurement scaling in shaping the commonly
observed probability distributions.
In the following sections, we show how to use information about
measurement invariances and associated measurement scales to find
most likely probability distributions.
\subsection{Maximum entropy}
The method of maximum entropy defines the most likely probability
distribution as the distribution that maximizes a measure of entropy
(randomness) subject to various information constraints. We write
the quantity to be maximized as
\begin{equation}\label{eq:maxEnt}
\Lambda = {\cal{E}} - \alpha C_0 - \sum_{i=1}^n\lambda_iC_i
\end{equation}
where ${\cal{E}}$ measures entropy, the $C_i$ are the constraints to be
satisfied, and $\alpha$ and the $\lambda_i$ are the Lagrange multipliers to
be found by satisfying the constraints. Let $C_0=\int p_y{\hbox{\rm d}} y -1$
be the constraint that the probabilities must total one, where $p_y$
is the probability distribution function of $y$. The other
constraints are usually written as $C_i= \int p_yf_i(y){\hbox{\rm d}} y
-\angb{f_i(y)}$, where the $f_i(y)$ are various transformed
measurements of $y$. Angle brackets denote mean values. A mean value
is either the average of some function applied to each of a sample
of observed values, or an a priori assumption about the average
value of some function with respect to a candidate set of
probability laws. If $f_i(y)=y^i$, then $\angb{y^i}$ are the moments
of the distribution---either the moments estimated from observations
or a priori values of the moments set by assumption. The moments
are often regarded as ``normal'' constraints, although from a
mathematical point of view, any properly formed constraint can be
used.
Here, we confine ourselves to a single constraint of measurement. We
express that constraint with a more general notation, $C_1= \int
p_y\textrm{T}[f(y)]{\hbox{\rm d}} y -\angb{\textrm{T}[f(y)]}$, where $\textrm{T}()$ is a
transformation. We could, of course, express the constraining
function for $y$ directly through $f(y)$. However, we wish to
distinguish between an initial function $f(y)$ that can be regarded
as a normal measurement, in any sense in which one chooses to
interpret the meaning of normal, and a transformation of normal
measurements denoted by $\textrm{T}()$ that arises from information about
the measurement scale.
The maximum entropy distribution is obtained by solving the set of equations
\begin{equation}\label{eq:maxEntSoln}
\povr{\Lambda}{p_y} = \povr{{\cal{E}}}{p_y} - \alpha -
\lambda\textrm{T}[f(y)]=0
\end{equation}
where one checks the candidate solution for a maximum and obtains
$\alpha$ and $\lambda$ by satisfying the constraint on total probability
and the constraint on $\angb{\textrm{T}[f(y)]}$. We assume that we can
treat entropy measures as the continuous limit of the discrete case.
In the standard approach, we define entropy by Shannon information
\begin{equation}\label{eq:shannonDef}
{\cal{E}}=-\int p_y\log(p_y){\hbox{\rm d}} y
\end{equation}
which yields the solution of \Eq{maxEntSoln} as
\begin{equation}\label{eq:shannonSoln}
p_y = ke^{- \lambda\textrm{T}[f(y)]}
\end{equation}
where $k$ and $\lambda$ satisfy the two constraints.
\subsection{Measurement and transformation}\label{sectionInvar}
Maximum entropy, in order to be a useful method, must capture all of
the available information about a particular problem. One form of
information concerns transformations to the measurement scale that
leave the most likely probability distribution unchanged. Suppose,
for example, that we obtain the same information from measurements
of $x$ and transformed measurements, $\textrm{G}(x)$. Put another way, if
one has access only to measurements on the $\textrm{G}(x)$ scale, one has
the same information that would be obtained if the measurements were
reported on the $x$ scale. We say that the measurements $x$ and
$\textrm{G}(x)$ are equivalent with respect to information, or that the
transformation $x \rightarrow \textrm{G}(x)$ is an invariance
\citep{Hand04Measurement,luce08measurement,narens08meaningfulness}.
To capture this information invariance in maximum entropy, we must
express our measurements on a transformed scale. In particular, we
must choose the transformation, $\textrm{T}()$, for expressing measurements
so that
\begin{equation}\label{eq:transDef}
\textrm{T}(x) = \gamma + \delta\textrm{T}[\textrm{G}(x)]
\end{equation}
for some arbitrary constants $\gamma$ and $\delta$. Putting this
definition of $\textrm{T}(x)$ into \Eq{shannonSoln} shows that we get the
same maximum entropy solution whether we use the direct scale $x$ or
the alternative measurement scale, $\textrm{G}(x)$, because the $k$ and
$\lambda$ constants will adjust to the constants $\gamma$ and $\delta$ so that
the distribution remains unchanged.
Given the transformation $\textrm{T}(x)$, the derivative of that
transformation expresses the information invariance in terms of
measurement invariance. In particular, we have the following
invariance of the measurement scale under a change ${\hbox{\rm d}} x$
\begin{equation}\label{eq:scalePropto}
{\hbox{\rm d}}\textrm{T}(x) \propto {\hbox{\rm d}}\textrm{T}[\textrm{G}(x)]
\end{equation}
We may also examine $m_x=\textrm{T}'(x)={\hbox{\rm d}}\textrm{T}(x)/{\hbox{\rm d}} x$ to obtain the
change in measurement scale required to preserve the information
invariance between $x$ and $\textrm{G}(x)$.
If we know the measurement invariance, $\textrm{G}(x)$, we can find the
correct transformation from \Eq{transDef}. If we know the
transformation $\textrm{T}(x)$, we can find $\textrm{G}(x)$ by inverting
\Eq{transDef} to obtain
\begin{equation}\label{eq:GfromT}
\textrm{G}(x) = \Tr^{-1}\left[\ovr{\textrm{T}(x)-\gamma}{\delta}\right]
\end{equation}
Alternatively, we may deduce the transformation $\textrm{T}(x)$ by
examining the form of a given probability distribution and using
\Eq{shannonSoln} to find the associated transformation.
In summary, $x$ and $\textrm{G}(x)$ provide invariant information, and the
transformation of measurements $\textrm{T}(x)$ captures that information
invariance in terms of measurement invariance.
\subsection{Example: ratio and scale invariance}
Suppose the information we obtain from positive-valued measurements
depends only on the ratio of measurements, $y_2/y_1$. In this
particular case, all measurements with the same ratio map to the
same value, so we say that the measurement scale has ratio
invariance. Pure ratio measurements also have scale invariance,
because ratios do not depend on the magnitude or scale of the
observations.
We express the invariances that characterize a measurement scale by
the transformations that leave the information in the measurements
unchanged
\citep{Hand04Measurement,luce08measurement,narens08meaningfulness}.
If we obtain values $x$ and use the measurement scale from the
transformation $\textrm{T}(x) = \log(x)$, the information in $x$ is the
same as in $\textrm{G}(x) = x^c$, because $\textrm{T}[\textrm{G}(x)] = c\log(x)$ and
so $\textrm{T}(x) \propto \textrm{T}[\textrm{G}(x)]$. Thus, the information in the
measurement scale given by $\textrm{T}(x)$ is invariant under the
transformation $\textrm{G}(x)$.
We can express the invariance in a way that captures how measurement
relates to information and probability. The transformation $\textrm{T}(x)
= \log(x)$ compresses increments on the uniform scaling of $x$ so that
each equally spaced increment on the original uniform scale has
length proportional to $1/x$ on the transformed scale. We can in general
quantify the deformation in incremental scaling by the derivative of
the transformation $\textrm{T}(x)$ with respect to $x$. In the case of the
logarithmic measurement scale with ratio invariance, the measure
invariance in \Eq{scalePropto} is
\begin{equation*}
{\hbox{\rm d}}\log(x)\propto{\hbox{\rm d}}\log[\textrm{G}(x)] \Rightarrow \ovr{1}{x} \propto \ovr{c}{x}
\end{equation*}
showing in another way that the logarithmic measure $\textrm{T}(x)$ is
invariant under the transformation $\textrm{G}(x)$. With regard to
probability or information, we can think of the logarithmic scale
with ratio invariance as having an expected density of probability
per increment in proportion to $1/x$, so that the expected density
of observations at scale $x$ decreases in proportion to $1/x$.
Roughly, we may also say that the information value of an increment
decreases in proportion to $1/x$. For example, the increment length
of our hand is an informative measure for the visible objects near
us, but provides essentially no information on a cosmological scale.
If we have measurements $f(y) = y$, and we transform those
measurements in a way consistent with a ratio and scale invariance
of information, then we have the transformed measures $\textrm{T}[f(y)] =
\log(y)$. The constraint for maximum entropy corresponds to
$\angb{\log(y)}$, which is the logarithm of the geometric mean of the
observations on the direct scale $y$. Given that constraint, the
maximum entropy distribution is a power law
\begin{equation*}
p_y=ke^{-\lambda\textrm{T}[f(y)]}=ke^{-\lambda\log(y)}=ky^{-\lambda}
\end{equation*}
For $y\ge 1$, we can solve for the constants $k$ and $\lambda$, yielding
$p_y= \delta y^{-(1+\delta)}$, with $\delta=1/\angb{\log(y)}$.
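These constants can be checked numerically. The following sketch (using SciPy quadrature, with the arbitrary illustrative value $\delta=2$) confirms that $p_y=\delta y^{-(1+\delta)}$ integrates to one on $y\ge 1$ and satisfies the constraint $\angb{\log(y)}=1/\delta$:

```python
import numpy as np
from scipy.integrate import quad

delta = 2.0
p = lambda y: delta * y ** (-(1.0 + delta))

# Normalization over the support y >= 1.
total, _ = quad(p, 1.0, np.inf)

# The constraint <log(y)> should equal 1/delta.
mean_log, _ = quad(lambda y: np.log(y) * p(y), 1.0, np.inf)

assert abs(total - 1.0) < 1e-8
assert abs(mean_log - 1.0 / delta) < 1e-8
```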
\section{The Linear to Logarithmic Measurement Scale}
\subsection{Measurement}
In the previous section, we obtained ratio and scale invariance with
a measure $m_x=\textrm{T}'(x) \propto 1/x$. In this section, we consider
the more general measure
\begin{equation*}
m_x \propto \ovr{1}{1+bx}
\end{equation*}
At small values of $x$, the measure becomes linear, $m_x \propto 1$,
and at large values of $x$, the measure becomes ratio invariant
(logarithmic), $m_x \propto 1/x$. This measure has scale dependence
with ratio invariance at large scales, because the measure changes
with the magnitude (scale) of $x$, becoming ratio invariant at large
values of $x$. The parameter $b$ controls the scale at which the
measure grades between linear and logarithmic.
Given $m_x=\textrm{T}'(x)$, we can integrate this deformation of
measurement to obtain the associated scale of measurement as
\begin{equation}\label{eq:linearlog}
\textrm{T}(x) = \ovr{1}{a}\log(1+bx)=\log(1+bx)^\ovr{1}{a}\propto \log(1+bx)
\end{equation}
where we have expressed the proportionality constant as $1/a$ and we
have dropped the constant of integration. The expression
$\log(1+bx)$ is just a logarithmic measurement scale for positive
values in relation to a fixed origin at $x=0$, because $\log(1) =
0$. The standard logarithmic expression, $\log(x)$, has an implicit
origin for positive values at $x=1$, which is only appropriate for
purely ratio invariant problems with no notion of an origin to set
the scale of magnitudes. In most empirical problems, there is some
information about the scaling of magnitudes. Thus, $\log(1+bx)$ is
more often the natural measurement scale.
Next, we seek an expression $\textrm{G}(x)$ to describe the information
invariance in the measurement scale, such that the information in
$x$ and in $\textrm{G}(x)$ is the same. The expression in
\Eq{scalePropto}, ${\hbox{\rm d}}\textrm{T}(x)\propto{\hbox{\rm d}}\textrm{T}[\textrm{G}(x)]$, sets the
condition for information invariance, leading to
\begin{equation}\label{eq:Gtrans1}
\textrm{G}(x)=\ovr{(1+bx)^\ovr{1}{a} - 1}{b}
\end{equation}
On the measurement scale $\textrm{T}(x)$, the information in $x$ is the same as in $\textrm{G}(x)$, because
\begin{equation*}
{\hbox{\rm d}}\textrm{T}(x)\propto{\hbox{\rm d}}\textrm{T}[\textrm{G}(x)] \Rightarrow \ovr{b/a}{1+bx} \propto \ovr{b/a^2}{1+bx}
\end{equation*}
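This proportionality can be verified numerically. The sketch below (NumPy, with the arbitrary illustrative values $a=3$ and $b=1/2$) checks that the ratio of ${\hbox{\rm d}}\textrm{T}(x)$ to ${\hbox{\rm d}}\textrm{T}[\textrm{G}(x)]$ is the constant $a$ across a range of $x$:

```python
import numpy as np

a, b = 3.0, 0.5
T = lambda x: np.log1p(b * x) / a                     # T(x) = (1/a) log(1 + b x)
G = lambda x: ((1.0 + b * x) ** (1.0 / a) - 1.0) / b  # invariance transformation

x = np.linspace(0.1, 50.0, 200)
eps = 1e-6

# Central finite differences for dT(x)/dx and d T[G(x)]/dx.
dT = (T(x + eps) - T(x - eps)) / (2 * eps)
dTG = (T(G(x + eps)) - T(G(x - eps))) / (2 * eps)

# The ratio is the constant a, so dT(x) is proportional to dT[G(x)].
ratio = dT / dTG
assert np.allclose(ratio, a, rtol=1e-4)
```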
We now use $x=f(y)$ to account for initial normal measures that may
be taken in any way we choose. Typically, we use direct values,
$f(y)=y$, or squared values, $f(y)=y^2$, corresponding to initial
measures related to the first and second moments---the average and
variance. For now, we use $f(y)$ to hold the place of whatever
direct values we will use. Later, we consider the interpretations
of the first and second moments.
\subsection{Probability}
The constraint for maximum entropy corresponds to
$\angb{\textrm{T}[f(y)]}=\angb{\log[1+bf(y)]^\ovr{1}{a}}$, a value that
approximately corresponds to an interpolation between the linear
mean and the geometric mean of $f(y)$. Given that constraint, the
maximum entropy distribution from \Eq{shannonSoln} is
\begin{equation}\label{eq:linearLogResult}
p_y \propto [1+bf(y)]^{-\alpha}
\end{equation}
where $\alpha=\lambda/a$ acts as a single parameter chosen to satisfy the
constraint, and $b$ is a parameter derived from the measurement
invariance that expresses the natural scale of measurement for a particular problem.
From \Eq{linearLogResult}, we can express simple limiting forms in
either the purely linear or the purely logarithmic regime. For small
values of $bf(y)$ we can write $p_y \propto e^{-\alpha bf(y)}$. For
large values of $bf(y)$ we can write $p_y \propto f(y)^{-\alpha}$,
where we absorb $b^{-\alpha}$ into the proportionality constant. Thus,
the probability distribution grades from exponential in $f(y)$ at
small magnitudes to a power law in $f(y)$ at large magnitudes,
corresponding to the grading of the linear to logarithmic
measurement scale.
\subsection{Transition between linear and logarithmic scales}
We mentioned that one can obtain the parameter $\alpha$ in
\Eq{linearLogResult} directly from the constraint
$\angb{\textrm{T}[f(y)]}$, which can be calculated directly from observed
values of the process or set by assumption. What about the
parameter $b$ that sets the grading between the linear and
logarithmic regimes?
When we are in the logarithmic regime at large values of $bf(y)$,
probabilities scale as $p_y \propto f(y)^{-\alpha}$ independently of
$b$. Thus, with respect to $b$, we only need to know the magnitude
of observations above which ratio invariance and logarithmic scaling
become reasonable descriptions of the measurement scale.
In the linear regime, $p_y \propto e^{-\alpha bf(y)}$, so $b$ only
arises as a constant multiplier of $\alpha$ and can be subsumed into
a single combined parameter $\beta=\alpha b$ estimated from the single
constraint. However, it is useful to consider the meaning of $b$ in
the linear regime to provide guidance for how to interpret $b$ in
the mixed regime in which we need the full expression in
\Eq{linearLogResult}.
When $f(y)=y$, the linear regime yields an exponential distribution
$p_y \propto e^{-\alpha by}$. In this case, $b$ weights the intensity
or rate of the process $\alpha$ that sets the scaling of the
distribution.
When $f(y)=y^2$, the linear regime yields a Gaussian distribution
$p_y \propto e^{-\alpha by^2}$, where $2\alpha b$ is the reciprocal of
the variance that defines the precision of measurements---the amount
of information a measurement provides about the location of the
average value. In this case, $b$ weights the precision of
measurement. The greater the value of $b$, the more information per
increment on the measurement scale.
\subsection{Linear-log exponential distribution}
When $f(y)=y$, we obtain from \Eq{linearLogResult} what we will call
the linear-log exponential distribution
\begin{equation}\label{eq:linearlogExp}
p_y \propto [1+by]^{-\alpha}
\end{equation}
for $y>0$. This distribution is often called the generalized type
II Pareto distribution or the Lomax distribution
\citep{Johnson94Distributions}. Small values of $by$ lead to an
exponential shape, $p_y \propto e^{-\alpha by}$. Large values of $by$
lead to power law tails, $p_y \propto y^{-\alpha}$. The parameter $b$
determines the grading from the exponential to the power law. Small
values of $b$ extend the exponential to higher values of $y$,
whereas large values of $b$ move the extent of the power law shape
toward smaller values of $y$. Many natural phenomena follow a
linear-log exponential distribution \citep{Tsallis09Introduction}.
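For working with this distribution numerically, note that the normalized form for $y>0$ and $\alpha>1$ is $p_y = b(\alpha-1)(1+by)^{-\alpha}$, which matches SciPy's Lomax parameterization with shape $\alpha-1$ and scale $1/b$. A sketch with arbitrary illustrative parameter values:

```python
import numpy as np
from scipy import stats

alpha, b = 3.0, 2.0
y = np.linspace(0.0, 10.0, 100)

# Normalized linear-log exponential: p(y) = b*(alpha-1)*(1+b*y)^(-alpha), y > 0.
p = b * (alpha - 1.0) * (1.0 + b * y) ** (-alpha)

# The same density as a Lomax (Pareto type II) with shape alpha-1, scale 1/b.
q = stats.lomax(c=alpha - 1.0, scale=1.0 / b).pdf(y)

assert np.allclose(p, q)
```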
\subsection{Student's distribution}
When $f(y)=y^2$, we obtain from \Eq{linearLogResult} Student's distribution
\begin{equation}\label{eq:students}
p_y \propto [1+by^2]^{-\alpha}
\end{equation}
Here, we assume that $y$ expresses deviations from the average.
Small deviations lead to a Gaussian shape around the mean, $p_y
\propto e^{-\alpha by^2}$. Large deviations lead to power law tails,
$p_y \propto f(y)^{-\alpha}$. The parameter $b$ determines the grading
from the Gaussian to the power law. Small values of $b$ expand the
Gaussian shape far from the mean, whereas large values of $b$ move
the extent of the power law shape closer to the central value at the
average. Many natural phenomena expressed as deviations from a
central value follow Student's distribution
\citep{Tsallis09Introduction}.
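The correspondence with the standard Student's $t$ parameterization can be checked numerically: matching exponents in \Eq{students} gives degrees of freedom $\nu=2\alpha-1$ and scale $1/\sqrt{b\nu}$. The sketch below (SciPy, arbitrary illustrative parameter values) confirms that the two densities are proportional:

```python
import numpy as np
from scipy import stats

alpha, b = 2.5, 0.8
nu = 2.0 * alpha - 1.0        # degrees of freedom: matches the exponent -(nu+1)/2 = -alpha
s = 1.0 / np.sqrt(b * nu)     # scale: matches (y/s)^2/nu = b*y^2

y = np.linspace(-5.0, 5.0, 101)
p = (1.0 + b * y ** 2) ** (-alpha)       # unnormalized maximum entropy form
q = stats.t(df=nu, scale=s).pdf(y)       # Student's t density

# A constant ratio means the two expressions describe the same distribution.
ratio = p / q
assert np.allclose(ratio, ratio[0])
```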
The ubiquity of both Student's distribution and the linear-log
exponential distribution arises from the fact that the grading
between linear measurement scaling at small magnitudes and
logarithmic measurement scaling at large magnitudes is inevitably
widespread. Many cases will be primarily in the linear regime and
so be mostly exponential or Gaussian except in the extreme tails.
Many other cases will be primarily in the logarithmic regime and so
be mostly power law except in the regime of small deviations near
the origin or the central location. Other cases will produce
measurements across both scales and their transition.
\section{The Inverse Logarithmic to Linear Measurement Scale}
We have argued that the linear to logarithmic measurement scale is
likely to be common. Magnitudes such as time or distance naturally
grade from linear at small scales to logarithmic at large scales.
Many problems measure inverse dimensions, such as the reciprocals of
time or distance. If magnitudes of time or space naturally grade
from linear to logarithmic as scale increases from small to large,
then how do the reciprocals scale? In this section, we argue that
the inverse scale naturally grades from logarithmic to linear as
scale increases from small to large.
We first describe the logarithmic to linear measurement scale and
its consequences for probability. We then show the sense in which
the logarithmic to linear scale is the natural inverse of the linear
to logarithmic scale.
\subsection{Measurement}
The transformation
\begin{equation*}
\textrm{T}(x) = x + b\log(x)
\end{equation*}
corresponds to the change in measurement scale $m_x=1+b/x$. As $x$
becomes small, the measurement scaling $m_x\rightarrow1/x$ becomes
the ratio-invariant logarithmic scale. As $x$ increases, the
measurement scaling $m_x\rightarrow1$ becomes the uniform measure
associated with the standard linear scale. Thus, the scaling
$m_x=1+b/x$ interpolates between logarithmic and linear
measurements, with the weighting of the two scales shifting from
logarithmic to linear as $x$ increases from small to large values.
\subsection{Probability}
The constraint for maximum entropy corresponds to
$\angb{\textrm{T}[f(y)]}=\angb{f(y)+b\log[f(y)]}$, a value that
interpolates between the linear mean and the geometric mean of
$f(y)$. Given that constraint, the maximum entropy distribution is
$p_y \propto f(y)^{-\lambda b}e^{-\lambda f(y)}$, with $\lambda$ chosen to
satisfy the constraint.
The direct measure $f(y)=y$ for positive values is the gamma distribution
\begin{equation}\label{eq:gamma}
p_y \propto y^{-\lambda b}e^{-\lambda y}
\end{equation}
As $y$ becomes small, the distribution approaches a power law form,
$p_y\propto y^{-\lambda b}$. As $y$ becomes large, the distribution
approaches an exponential form in the tails, $p_y\propto e^{-\lambda
y}$. Thus, the distribution grades from power law at small scales
to exponential at large scales, corresponding to the measurement
scale that grades from logarithmic to linear as magnitude increases.
Larger values of $b$ extend the power law to higher magnitudes by
pushing the logarithmic to linear change in measure to higher
magnitudes. The combination of power law and exponential shapes in
the gamma distribution is the direct inverse of the linear-log
exponential distribution given in \Eq{linearlogExp}.
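In SciPy's parameterization, \Eq{gamma} is a gamma distribution with shape $1-\lambda b$ (which must be positive, so $\lambda b<1$) and scale $1/\lambda$. A sketch checking this correspondence with arbitrary illustrative parameter values:

```python
import numpy as np
from scipy import stats

lam, b = 1.5, 0.4             # requires lam*b < 1 for integrability at y = 0
y = np.linspace(0.1, 10.0, 100)

# Unnormalized form from the text: p(y) ~ y^(-lam*b) * exp(-lam*y).
p = y ** (-lam * b) * np.exp(-lam * y)

# Gamma distribution with shape 1 - lam*b and scale 1/lam.
q = stats.gamma(a=1.0 - lam * b, scale=1.0 / lam).pdf(y)

# A constant ratio confirms the identification.
ratio = p / q
assert np.allclose(ratio, ratio[0])
```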
The squared values $f(y)=y^2$, which we interpret as squared deviations from the average value, \linebreak lead to
\begin{equation}\label{eq:gammaGauss}
p_y \propto y^{-\lambda b}e^{-\lambda y^2 / 2}
\end{equation}
where the exponent of two on the first power law component is
subsumed in the other parameters. This distribution is a power law
at small scales with Gaussian tails at large scales, providing the
inverse of Student's distribution in \Eq{students}. This
distribution is a form of the generalized gamma distribution
\citep{Johnson94Distributions}, which we call the gamma-Gauss
distribution. This distribution may, for example, arise as the sum
of truncated power laws or L\'evy flights \citep{Frank09The-common}.
\section{Integral Transforms and Superstatistics}
The previous sections showed that linear to logarithmic scaling has
a simple relation to its inverse of logarithmic to linear scaling.
That simple relation suggests that the two inverse scales can be
connected by some sort of transformation of measure. We will now
show the connection.
Suppose we start with a particular measurement scale given by
$\textrm{T}(x)$ and its associated probability distribution given by
\begin{equation*}
p_x \propto e^{-\alpha\textrm{T}(x)}
\end{equation*}
Consider a second measurement scale $\widetilde{\Tr}(\sigma)$ with associated probability distribution
\begin{equation*}
p_\sigma \propto e^{-\lambda\widetilde{\Tr}(\sigma)}
\end{equation*}
What sort of transformation relates the two measurement scales?
Integral transforms often provide a way to connect two measurement scales. For example, we could write
\begin{equation}\label{eq:integralTransform}
p_x \propto \int_{\sigma^-}^{\sigma^+}p_\sigma g_{x|\sigma}{\hbox{\rm d}}\sigma
\end{equation}
This expression is called an integral transform of $p_\sigma$ with
respect to the transformation kernel $g_{x|\sigma}$. If we interpret
$g_{x|\sigma}$ as a probability distribution of $x$ given a parameter
$\sigma$, and $p_\sigma$ as a probability distribution over the variable
parameter $\sigma$, then the expression for $p_x$ is called a
superstatistic: the probability distribution, $p_x$, that arises
when one starts with a different distribution, $g_{x|\sigma}$, and
averages that distribution over a variable parameter with
distribution $p_\sigma$ \citep{Beck03Superstatistics}.
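A concrete instance of \Eq{integralTransform} (a numerical sketch using SciPy, with arbitrary illustrative parameter values; the specific mixture is our example, not taken from the text): averaging an exponential density with rate $\sigma$ over a gamma distribution for $\sigma$ yields a Lomax, i.e., linear-log exponential, distribution:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

k, theta = 2.0, 1.5                         # gamma prior over the rate sigma
prior = stats.gamma(a=k, scale=theta)

def p_x(x):
    # Average the exponential density sigma*exp(-sigma*x) over p_sigma.
    val, _ = quad(lambda s: s * np.exp(-s * x) * prior.pdf(s), 0.0, np.inf)
    return val

xs = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
mixed = np.array([p_x(x) for x in xs])

# The superstatistic is Lomax with shape k and scale 1/theta.
lomax = stats.lomax(c=k, scale=1.0 / theta).pdf(xs)

assert np.allclose(mixed, lomax, rtol=1e-6)
```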
It is often useful to think of a superstatistic as an integral
transform that transforms the measurement scale. In particular, we
can expand \Eq{integralTransform} as
\begin{equation}\label{eq:integralTransform2}
e^{-\alpha\textrm{T}(x)} \propto \int_{\sigma^-}^{\sigma^+}e^{-\lambda\widetilde{\Tr}(\sigma)} g_{x|\sigma}{\hbox{\rm d}}\sigma
\end{equation}
which shows that the transformation kernel $g_{x|\sigma}$ changes the
measurement scale from $\widetilde{\Tr}(\sigma)$ to $\textrm{T}(x)$. It is not necessary
to think of $g_{x|\sigma}$ as a probability distribution---the
essential role of $g_{x|\sigma}$ concerns a change in measurement
scale.
The Laplace transform provides the connection between our inverse
linear-logarithmic measurement scales. To begin, expand the right
side of \Eq{integralTransform2} using the Laplace transform kernel
$g_{x|\sigma}=e^{-\sigma x}$, and use the inverse logarithmic to linear
measurement scale, $\widetilde{\Tr}(\sigma)=\sigma+b\log(\sigma)$. Integrating from
zero to infinity yields
\begin{equation*}
e^{-\alpha\textrm{T}(x)} \propto (1+x/\lambda)^{b\lambda-1}
\end{equation*}
with the requirement that $b\lambda<1$. From this, we have $\textrm{T}(x)
\propto \log(1+x/\lambda)$, which is the linear to logarithmic scale.
Thus, the Laplace transform inverts the logarithmic to linear scale
into the linear to logarithmic scale. The inverse Laplace transform
converts in the other direction.
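The integral can be checked directly. The sketch below (SciPy quadrature, arbitrary illustrative values satisfying $b\lambda<1$) verifies that $\int_0^\infty \sigma^{-\lambda b}e^{-\lambda\sigma}e^{-\sigma x}\,{\hbox{\rm d}}\sigma = \Gamma(1-\lambda b)\,(\lambda+x)^{\lambda b-1} \propto (1+x/\lambda)^{\lambda b-1}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

lam, b, x = 1.0, 0.2, 2.0     # requires b*lam < 1 for convergence at sigma = 0
f = lambda s: s ** (-lam * b) * np.exp(-(lam + x) * s)

# Split at s = 1 so quad handles the integrable singularity at s = 0 cleanly.
lhs = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]
rhs = Gamma(1.0 - lam * b) * (lam + x) ** (lam * b - 1.0)

assert abs(lhs - rhs) < 1e-6 * rhs
```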
If we use $x=y$, then the transform relates the linear-log
exponential distribution of \Eq{linearlogExp} to the gamma
distribution of \Eq{gamma}. If we use $x=y^2$, then the transform
relates Student's distribution of \Eq{students} to the gamma-Gauss
distribution of \Eq{gammaGauss}.
The Laplace transform inverts the measurement scales. This
inversion is consistent with a common property of Laplace
transforms, in which the transform inverts a measure with dimension
$d$ to a measure with dimension $1/d$. One sometimes interprets the
inversion as a change from a direct measure to a rate or frequency.
Here, it is only the inversion of dimension that is significant. The
inversion arises because, in the transformation kernel
$g_{x|\sigma}=e^{-\sigma x}$, the exponent $\sigma x$ is typically
non-dimensional, so that the dimensions of $\sigma$ and $x$ are
reciprocals of each other. The transformation takes a distribution
in $\sigma$ given by $p_\sigma$ and returns a distribution in $x$ given by
$p_x$. Thus, the transformation typically inverts the dimension.
\section{Connections and Caveats}
\subsection{Discrete versus continuous variables}
We used measure invariance to analyze maximum entropy for continuous
variables. We did not discuss discrete variables, because measure
invariance applied to discrete variables has been widely and
correctly used in maximum entropy
\citep{Jaynes03Science,Sivia06Tutorial,Frank09The-common}. In the
Discussion, we describe why previous attempts to apply invariance to
continuous variables did not work in general. That failure
motivated our current study.
Here, we briefly review measure invariance in the discrete case for
comparison with our analysis of continuous variables. We use the
particular example of $N$ Bernoulli trials with a sample measure of
the number of successes $y=0,1,\ldots,N$. Frank
\citep{Frank09The-common} describes the measure invariance for this
case: ``How many different ways can we obtain $y=0$ successes in
$N$ trials? Just one: a series of failures on every trial. How many
different ways can we obtain $y=1$ success? There are $N$ different
ways: a success on the first trial and failures on the others; a
success on the second trial, and failures on the others; and so on.
The uniform solution by maximum entropy tells us that each different
combination is equally likely. Because each value of $y$ maps to a
different number of combinations, we must make a correction for the
fact that measurements on $y$ are distinct from measurements on the
equally likely combinations. In particular, we must formulate a
measure\ldots that accounts for how the uniformly distributed basis
of combinations translates into variable values of the number of
successes, $y$. Put another way, $y$ is invariant to changes in the
order of outcomes given a fixed number of successes. That invariance
captures a lack of information that must be included in our
analysis.''
In this particular discrete case, transformations of order do not
change our information about the total number of successes. Our
measurement scale expresses that invariance, and that invariance is
in turn captured in the maximum entropy distribution.
The nature of invariance is easy to see in the discrete case by
combinatorics. The difficulty in past work has been in figuring out
exactly how to capture the same notion of invariance in the
continuous case. We showed that the answer is perhaps as simple as
it could be: use the transformations that do not change information
in the context of a particular problem. Jaynes
\citep{Jaynes03Science} hinted at this approach, but did not develop
and apply the idea in a general way.
\subsection{General measure invariance versus particular linear-log scales}
Our analysis followed two distinct lines of argument. First, we
presented the general expression for invariance as a form of
information in maximum entropy. We developed that expression
particularly for the case of continuous variables. The general
expression sets the conditions that define measurement scales and
the relation between measurement and probability. But the general
expression does not tell us what particular measurement scale will
arise in any problem.
Our second line of argument claimed that various types of grading
between linear and logarithmic measures arise very commonly in
natural problems. Our argument for commonness is primarily
inductive and partially subjective. On the inductive side, the
associated probability distributions seem to be those most commonly
observed in nature. On the subjective side, the apparently simplest
assumptions about invariance lead to what we called the common
gradings between linear and logarithmic scales. We do not know of
any way to prove commonness or naturalness. For now, we are content
that the general mathematical arguments lead in a simple way to
those probability distributions that appear to arise commonly in
nature.
A different view of what is common or what is simple would of course
lead to different information invariances, measurement scales, and
probability distributions. In that case, our general mathematical
methods would provide the tools by which to analyze the alternative
view.
\section{Discussion}
We developed four topics. First, we provided a new extension to the
method of maximum entropy in which we use the measurement scale as a
primary type of information constraint. Second, we argued that a
measurement scale that grades from linear to logarithmic as
magnitude increases is likely to be very common. The linear-log
exponential and Student's distributions follow immediately from this
measurement scale. Third, we showed that the inverse measure that
grades from logarithmic at small scales to linear at large scales
leads to the gamma and gamma-Gauss distributions. Fourth, we
demonstrated that the two measurement scales are natural inverses
related by the Laplace integral transform. Superstatistics are a
special case of integral transforms and can be understood as changes
in measurement scale.
In this discussion, we focus on measurement invariance, alternative
definitions of entropy, and maximum entropy methods.
Jaynes \citep{Jaynes03Science} summarized the problem of
incorporating measurement invariance as a form of information in
maximum entropy. The standard conclusion is that one should use
relative entropy to account for measurement invariance. In our
notation, for a measurement scale $\textrm{T}(y)$ with measure deformation
$m_y=\textrm{T}'(y)$, the form of relative entropy is the Kullback-Leibler
divergence
\begin{equation*}
{\cal{E}} = -\int p_y\log\left(\ovr{p_y}{m_y}\right){\hbox{\rm d}} y
\end{equation*}
in which $m_y$ is proportional to a prior probability
distribution that incorporates the information from the measurement
scale and leads to the analysis of maximum relative entropy. This
approach works in cases where the measure change, $m_y$, is directly
related to a change in the probability measure. Such changes in
probability measure typically arise in combinatorial problems, such
as a type of measurement that cannot distinguish between the order
of elements in sets.
For continuous deformations of the measurement scale, using $m_y$ as
a relative scaling for probability does not always give the correct
answer. In particular, if one uses the constraint $\angb{f(y)}$ and
the measure $m_y$ in the above definition of relative entropy, the
maximum relative entropy gives the \linebreak probability distribution
\begin{equation*}
p_y \propto m_ye^{-\lambda f(y)}
\end{equation*}
which is often not the correct result. Instead, the correct result
follows from the method we gave, in which the information from
measurement invariance is incorporated by transforming the
constraint as $\angb{\textrm{T}[f(y)]}$, yielding the maximum entropy
solution
\begin{equation*}
p_y \propto e^{-\lambda \textrm{T}[f(y)]}
\end{equation*}
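As a small numerical illustration of this prescription (our construction, not from the original text): choosing $\textrm{T}=\log$ and $f(y)=y$ gives $p_y \propto e^{-\lambda \log y} = y^{-\lambda}$, a power law whose log-log slope is exactly $-\lambda$.

```python
import numpy as np

# Hypothetical illustration: with T = log and f(y) = y, the maximum entropy
# solution p_y ~ exp(-lambda * T[f(y)]) is a discrete power law y**(-lambda).
y = np.arange(1.0, 101.0)
lam = 1.5
p = np.exp(-lam * np.log(y))   # equals y**(-lam)
p /= p.sum()                   # normalize on the grid

# log p is affine in T[f(y)] = log y, with slope -lambda
slope = np.polyfit(np.log(y), np.log(p), 1)[0]
print(round(slope, 6))         # -1.5
```

The recovered slope equals $-\lambda$ because $\log p_y$ is affine in the transformed constraint $\textrm{T}[f(y)]$.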
It is possible to change the definition of entropy, such that
maximum entropy applied to the transformed measure of entropy plus
the direct constraint $\angb{f(y)}$ gives the correct answer
\citep{Tsallis09Introduction}. The resulting probability
distributions are of course the same when transforming the
constraint or using an appropriate matching transformation of the
entropy measure. We discuss the mathematical relation between the
alternative transformations in a later paper.
We prefer the transformation of the constraint, because that
approach directly shows how information about measurement alters
scale and is incorporated into maximum entropy. By contrast,
changing the definition of entropy requires each measurement scale
to have its own particular definition of entropy and information.
Measurement is inherently an empirical phenomenon that is particular
to each type of problem and so naturally should be changed as the
nature of the problem changes. The abstract notions of entropy and
information are not inherently empirical factors that change among
problems, so it seems perverse to change the definition of
randomness with each transformation of measurement.
The Tsallis and R{\'e}nyi\ entropy measures are transformations of
Shannon entropy that incorporate the scaling of measurement from
linear to logarithmic as magnitude increases
\citep{Tsallis09Introduction}. Those forms of entropy therefore
could be used as a common alternative to Shannon entropy whenever
measurements scale linearly to logarithmically. Although
mathematically correct, such entropies change the definition of
randomness to hide the particular underlying transformation of
measurement. That approach makes it very difficult to understand
how alternative measurement scales alter the expected types of
\linebreak probability distributions.
\section{Conclusions}
Linear and logarithmic measurements seem to be the most common
natural scales. However, as magnitudes change, measurement often
grades between the linear and logarithmic scales. That transition
between scales is often overlooked. We showed how a measurement
scale that grades from linear to logarithmic as magnitude increases
leads to some of the most common patterns of nature expressed as the
linear-log exponential distribution and Student's distribution.
Those distributions include the exponential, power law, and Gaussian
distributions as special cases, and also include hybrids between
those distributions that commonly arise when measurements span
the linear and logarithmic regimes.
We showed that a measure grading from logarithmic to linear as
magnitude increases is a natural inverse scale. That measurement
scale leads to the gamma and gamma-Gauss distributions. Those
distributions are also composed of exponential, power law, and
Gaussian components. However, those distributions have the power
law forms at small magnitudes corresponding to the logarithmic
measure at small magnitudes, whereas the inverse scale has the power
law components at large magnitudes corresponding to the logarithmic
measure at large magnitudes.
The two measurement scales are natural inverses connected by the
Laplace transform. That transform inverts the dimension, so that a
magnitude of dimension $d$ on one scale becomes an inverse magnitude
of dimension $1/d$ on the other scale. Inversion connects the two
major scaling patterns commonly found in nature. Our methods of
incorporating information about measurement scale into maximum
entropy also apply to other forms of measurement scaling and
invariance, providing a general method to study the relations
between measurement, information, and probability.
\section*{Acknowledgements}
SAF is supported by National Science Foundation grant EF-0822399,
National Institute of General Medical Sciences MIDAS Program grant
U01-GM-76499, and a grant from the James S.~McDonnell Foundation.
DES thanks Insight Venture Partners for support.
\bibliographystyle{mdpi}
\makeatletter
\renewcommand\@biblabel[1]{#1. }
\makeatother
\section{Introduction}
The continuity equation governs the conservation of mass, charge, or probability in any closed system. It relates the spatial distribution of the flux density to the temporal variation of the particle (charge/mass) density. Ordinarily, this equation is derived from the equation of motion. The motion of any continuous charge or mass distribution can be thought of as a continuum (a field or fluid), and the continuity equation guarantees that no such quantity is lost or gained. This equation provides us with information about the system; that information is carried from one point to another by a particle (field) wave. We therefore expect such an equation to exhibit this wave property. For the electromagnetic field, the information about the motion of the charges is carried away by photons. Because electrons have a particle-wave nature, the flow of electrons is not like the flow of classical objects: the information about the movement of electrons is carried by their wave-particle nature. The current and charge/mass densities are unified if we formulate the basic equations governing a particular system in terms of quaternions. We have recently shown that Maxwell's equations can be written as a single quaternionic equation [1]. Moreover, we have found that the quaternionic Lorentz force leads to the Biot-Savart law. The magnetic field produced by the charged particles of the medium is found to be always perpendicular to the particle's direction of motion.
This magnetic field produces a longitudinal wave, called an \emph{electroscalar} wave, that propagates with the speed of light in vacuum. Moreover, the electromagnetic field travels with the speed of light in vacuum even in the presence of a medium, provided the current and charge densities satisfy the generalized continuity equation.
We aim in this work at investigating a generalized equation expressing a wave-like nature for the charge and current densities, using quaternions, and at seeking a wave-like continuity equation. Such a study yields the ordinary continuity equation together with additional equations. This system of equations reveals the wave nature of the current and charge (mass/probability) densities. We call this set of equations the generalized continuity equation (GCE). The Maxwell, Schrodinger, Klein-Gordon, and diffusion equations are shown to be compatible with this GCE.
Using the GCE, we show that Ohm's law is equivalent to a Schrodinger-like equation. Hence, the electrical properties of metals are consequences of the wave-particle behavior of electrons and not merely of the drift of electrons.
\section{Continuity Equation}
\markright{Arbab I. Arbab On The New Gauge Transformations Of Maxwell's Equations}
The flow of any continuous medium is governed by the continuity equation. We would like here to write the continuity equation in terms of quaternions. The multiplication rule for two quaternions, $\widetilde{A}=(a_0\,, \vec{A})$ and $\widetilde{B}=(b_0\,, \vec{B})$, is given by [2]
\begin{equation}
\widetilde{A}\widetilde{B}=\left(a_0b_0-\vec{A}\cdot\vec{B}\,\,, a_0\vec{B}+\vec{A}b_0+\vec{A}\times\vec{B}\right).
\end{equation}
Accordingly, one can write the quaternion continuity equation in the form
\begin{equation}
\widetilde{\nabla}\widetilde{J}=\left[-\,\left(\vec{\nabla}\cdot
\vec{J}+\frac{\partial \rho}{\partial t}\right) \,,\,
\frac{i}{c}\,\left(\frac{\partial \vec{J}}{\partial
t}+\vec{\nabla}\rho\, c^2\right)+\vec{\nabla}\times
\vec{J}\,\right]=0\,,
\end{equation}
where
\begin{equation}
\widetilde{\nabla}=\left(\frac{i}{c}\frac{\partial}{\partial
t}\, , \vec{\nabla}\right)\,,\qquad \widetilde{J}=\left(i\rho c\, ,
\vec{J}\right)\,.
\end{equation}
This implies that
\begin{equation}
\vec{\nabla}\cdot \vec{J}+\frac{\partial \rho}{\partial
t}=0\,,
\end{equation}
\begin{equation}
\vec{\nabla}\rho+\frac{1}{c^2}\frac{\partial
\vec{J}}{\partial t}=0\,,
\end{equation}
and
\begin{equation}
\vec{\nabla}\times
\vec{J}=0\,.
\end{equation}
We call Eqs.(4) - (6) the \emph{generalized continuity equation} (\textcolor[rgb]{1.00,0.00,0.00}{GCE}). Equation (6) states that the current density $\vec{J}$ is irrotational.
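The component equations above can be recovered by expanding the quaternion product symbolically. The following sketch (ours, using SymPy; not part of the original paper) checks the scalar part and the imaginary vector part of $\widetilde{\nabla}\widetilde{J}$:

```python
import sympy as sp

# Sketch (ours): expand the quaternion product of nabla~ = (i/c d/dt, grad)
# with J~ = (i*rho*c, J) and check it reproduces the quoted components.
t, x, y, z = sp.symbols('t x y z', real=True)
c = sp.symbols('c', positive=True)
rho = sp.Function('rho')(t, x, y, z)
J = sp.Matrix([sp.Function(n)(t, x, y, z) for n in ('Jx', 'Jy', 'Jz')])

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
div = lambda V: sum(sp.diff(V[i], v) for i, v in enumerate((x, y, z)))

# scalar part: a0*b0 - A.B = (i/c) d/dt (i*rho*c) - div J = -(drho/dt + div J)
scalar = sp.I/c * sp.diff(sp.I*rho*c, t) - div(J)
assert sp.simplify(scalar + sp.diff(rho, t) + div(J)) == 0

# imaginary vector part: a0*B + b0*A = (i/c) dJ/dt + grad(i*rho*c)
#                                    = (i/c)(dJ/dt + c^2 grad rho)
vector = sp.I/c * sp.diff(J, t) + grad(sp.I*rho*c)
target = sp.I/c * (sp.diff(J, t) + c**2 * grad(rho))
assert sp.simplify(vector - target) == sp.zeros(3, 1)
print("quaternionic continuity expansion checked")
```

The remaining (purely real) vector term of the product is $\vec{\nabla}\times\vec{J}$, which the GCE sets to zero separately.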
In a covariant form, Eqs.(4) - (6) read
\begin{equation}
\partial_\mu J^\mu=0\,,\qquad N_{\mu\nu}\equiv\partial_\mu J_\nu-\partial_\nu J_\mu=0\,.
\end{equation}
Notice that the tensor $N_{\mu\nu}$ is an antisymmetric tensor. It is evident from Eq.(7) that Eqs.(4) - (6) are Lorentz invariant.
Now, differentiating Eq.(4) partially with respect to time and using Eq.(5), we obtain
\begin{equation}
\frac{1}{c^2}\frac{\partial^2\rho}{\partial t^2}-\nabla\,^2\rho=0\,,
\end{equation}
Similarly, taking the divergence of Eq.(5) and using Eq.(4), we obtain
\begin{equation}
\frac{1}{c^2}\frac{\partial^2\vec{J}}{\partial
t^2}-\nabla\,^2\vec{J}=0\,,
\end{equation}
where $\rho=\rho(\vec{r},t)$ and $\vec{J}=\vec{J}(\vec{r},t)$. Therefore, both the current density and the charge density satisfy a wave equation propagating with the speed of light.
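As a quick consistency check (our sketch, not from the paper), any profile moving at speed $c$ solves the one-dimensional form of the wave equations above:

```python
import sympy as sp

# Any travelling profile f(x - c t) satisfies (1/c^2) d2/dt2 - d2/dx2 = 0.
x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)
f = sp.Function('f')

rho = f(x - c*t)  # arbitrary profile moving at speed c
residual = sp.diff(rho, t, 2)/c**2 - sp.diff(rho, x, 2)
assert sp.simplify(residual) == 0
print("d'Alembert travelling-wave solution verified")
```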
We remark that the \textcolor[rgb]{1.00,0.00,0.00}{GCE} is applicable to any flow whether created by charged particles or neutral ones.
\section{Maxwell's Equations}
Maxwell's equations are given by [3]
\begin{equation}
\hspace{-2.5cm}\vec{\nabla}\cdot \vec{E}=\frac{\rho}{\varepsilon_0}\,\,,\qquad \vec{\nabla}\cdot \vec{B}=0\,,
\qquad
\vec{\nabla}\times \vec{E}+\frac{\partial\vec{B}}{\partial t}=0\,, \qquad \vec{\nabla}\times \vec{B}-\frac{1}{c^2}\frac{\partial\vec{E}}{\partial t}=\mu_0\vec{J}\,.
\end{equation}
Now differentiate Faraday's equation partially with respect to time and employ Ampere's equation and the vector identity $\vec{\nabla}\times\left(\vec{\nabla}\times\vec{B}\right)=\vec{\nabla}(\vec{\nabla}\cdot\vec{B})-\nabla^2\vec{B}$\, to get
\begin{equation}
\frac{1}{c^2}\frac{\partial^2\vec{B}}{\partial
t^2}-\nabla\,^2\vec{B}=\mu_0\left(\vec{\nabla}\times \vec{J}\right)\,.
\end{equation}
Similarly, differentiating Ampere's equation partially with respect to time and employing Gauss and Faraday equations yields
\begin{equation}
\frac{1}{c^2}\frac{\partial^2\vec{E}}{\partial
t^2}-\nabla^2\vec{E}=-\frac{1}{\varepsilon_0}\left(\vec{\nabla}\rho +\frac{1}{c^2}\frac{\partial
\vec{J}}{\partial t}\right)\,,
\end{equation}
According to the \textcolor[rgb]{1.00,0.00,0.00}{GCE}, the right-hand sides of Eqs.(11) and (12) vanish. This entitles us to say that the electromagnetic field travels with the speed of light both in free space (where $\rho=0$, $\vec{J}=0$) and in the presence of charge and current, as long as the current and charge densities satisfy the \textcolor[rgb]{1.00,0.00,0.00}{GCE}.
Now consider a conducting media defined by the conduction current density (Ohm's law)
\begin{equation}
\vec{J}=\sigma\,\vec{E}\,.
\end{equation}
Taking the divergence of Eq.(13) and using the Gauss law, one obtains
\begin{equation}
\vec{\nabla}\cdot\vec{J}=\sigma\,\vec{\nabla}\cdot\vec{E}=\sigma\frac{\rho}{\varepsilon_0}\,.
\end{equation}
Differentiating Eq.(14) partially with respect to time and using Eq.(5) yields
\begin{equation}
\nabla^2\rho=-\frac{\sigma}{c^2\varepsilon_0}\frac{\partial\rho}{\partial t}\,,\qquad \frac{\partial\rho}{\partial t}=-D_c\nabla^2\rho\,,\qquad D_c=\frac{c^2\varepsilon_0}{\sigma}=\frac{1}{\mu_0\sigma}\,.
\end{equation}
Equation (15) is a Schrodinger-like wave equation describing the motion of free electrons. A metal can be viewed as a gas of free electrons (fermions), whose thermal properties are determined by Fermi-Dirac statistics. Thus, Ohm's law satisfies a Schrodinger-like equation. This is a very interesting result, since the electron motion is governed by a Schrodinger-like equation owing to the wave-particle nature of the electrons. Thus, electrons move in a metal as a material (de Broglie) wave.
Drude proposed a model of electrical conduction to explain the transport properties of electrons in materials (especially metals) [4]. The model, an application of kinetic theory, assumes that the microscopic behavior of electrons in a solid may be treated classically and looks much like a pinball machine, with a sea of constantly jittering electrons bouncing and re-bouncing off heavier, relatively immobile positive ions. He further showed that Ohm's law holds, relating the current density to the electric field by a similar linear formula. He also described the current response to a time-dependent electric field by a complex conductivity.
This simple classical Drude model provides a very good explanation of DC and AC conductivity in metals, the Hall effect, and thermal conductivity (due to electrons) in metals. The failure of Drude model to explain specific heat of metal is connected with its disregard to the wave nature of electrons.
Hence, the motion of electrons in metals is not due to diffusion; the electrons are transported quantum mechanically. A quantum Ohm's law should therefore be considered, in which the current density $\vec{J}$ is obtained from the Schrodinger theory of a free electron (Fermi gas).
Thus, quantum mechanically, an electron is viewed as a wave traveling through a medium. When the wavelength of the electrons is larger than the crystal spacing, the electrons will propagate freely throughout the metal without collision, therefore their scattering result only from the imperfections in the crystal lattice of the metal.
In Schrodinger terminology, the diffusion constant is $D_c=\frac{\hbar}{2m\,i}$, which shows that the diffusion constant is complex. This would imply that the conductivity is complex. In the Schrodinger paradigm, the current density and charge density are probabilistic quantities determined by the wave function of the electron, $\psi$, as [5]
\begin{equation}
\rho=\psi^*\psi\,, \qquad \vec{J}=\frac{\hbar}{2mi}(\psi^*\nabla\psi-\psi\nabla\psi^*)
\end{equation}
The current density and the charge here are related by the generalized continuity equation. Therefore, the Ohm's current is a probabilistic wave-like current.
It is thought that electrons move in a metal by drifting, but we have seen here that the electrical transport properties of metals are propagated by a material wave rather than by drifting electrons.
Now, apply Eq.(4) in Eq.(13) and employ Gauss law to get
\begin{equation}
\frac{\partial\rho}{\partial t}=-\frac{\sigma}{\varepsilon_0}\,\rho\,.
\end{equation}
Let us write the charge density in a separable form
\begin{equation}
\rho(r,t)\equiv \rho(r)\rho(t)\,.
\end{equation}
The time dependence of the charge density $\rho(r,t)$ is obtained from Eq.(17) as
\begin{equation}
\rho(t)=\rho_0\exp\!\left(-\frac{\sigma}{\varepsilon_0}\,t\right)\,,\qquad \rho_0=\rm const.
\end{equation}
The constant $\tau\equiv\frac{\varepsilon_0}{\sigma}$ is known as the relaxation time; it is a measure of how fast a conducting medium reaches electrostatic equilibrium.
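For orientation, Eq.(19) implies an e-folding time of $\varepsilon_0/\sigma$. Plugging in a typical metallic conductivity (copper values assumed here, not from the paper) gives an extremely short time, consistent with conductors reaching electrostatic equilibrium almost instantly:

```python
# Illustrative numbers (assumed): e-folding time eps_0/sigma for copper.
eps_0 = 8.854e-12     # vacuum permittivity, F/m
sigma_cu = 5.96e7     # copper conductivity at room temperature, 1/(Ohm m)

tau = eps_0 / sigma_cu
print(f"tau ~ {tau:.2e} s")   # ~1.5e-19 s
```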
The spatial dependence of the charge density is obtained by substituting Eq.(17) in Eq.(15). This yields the equation
\begin{equation}
\nabla^2\rho (r)-\frac{1}{\lambda_c^2}\rho(r)=0\,,\qquad \lambda_c=\frac{\varepsilon_0c}{\sigma}=\frac{1}{\mu_0c\,\sigma}=\frac{D_c}{c}\,.
\end{equation}
The solution of Eq.(20) is given by
\begin{equation}
\rho(r)=A\exp(-\frac{r}{\lambda_c})\,,\qquad A=\rm const.
\end{equation}
Therefore, the charge density distribution (space-time) is given by
\begin{equation}
\rho(r,t)=Be^{-\frac{1}{\lambda_c}(\,r+c\,t)}+Fe^{-\frac{1}{\lambda_c}(-\,r+c\,t)}\,,\qquad B\,, F=\rm const
\end{equation}
This shows that the charge density decays in space and time. However, if $\lambda_c$ is complex, we will have an oscillatory charge-density solution. This is the case if we consider $\sigma$ to be complex.
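The decay length $\lambda_c$ and the constant $D_c$ of Eq.(20) can likewise be evaluated for a typical metal (copper values assumed here, for scale only):

```python
# Assumed copper values, to put lambda_c = eps_0*c/sigma and D_c = c*lambda_c
# on a physical scale.
eps_0 = 8.854e-12       # F/m
c = 2.998e8             # m/s
sigma_cu = 5.96e7       # 1/(Ohm m)

lam_c = eps_0 * c / sigma_cu    # decay length of Eq.(20)
D_c = c * lam_c                 # since lambda_c = D_c / c
print(f"lambda_c ~ {lam_c:.2e} m, D_c ~ {D_c:.2e} m^2/s")
```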
The spreading of wavepackets in quantum mechanics is directly related to the spreading of probability densities in diffusion (see sec. 4 below).
Now take the gradient of Eq.(14) and employ Eq.(5) and the vector identity $\nabla\times(\vec{\nabla}\times \vec{J})=\nabla(\vec{\nabla}\cdot \vec{J})-\nabla^2\vec{J}$\, [6] we obtain
\begin{equation}
\frac{\partial \vec{J}}{\partial t}=-D_c\nabla^2\vec{J}\,.
\end{equation}
This equation shows that $\vec{J}$ is governed by a Schrodinger-like equation. Hence, both the current density and the charge density are governed by Schrodinger-like equations.
Now applying Eq.(13) to Eq.(5) and using Eq.(10), yield
\begin{equation}
\frac{\partial \vec{E}}{\partial t}=-D_c\nabla^2\vec{E}\,,
\end{equation}
where we have used the vector identity $\nabla\times(\vec{\nabla}\times \vec{E})=\nabla(\vec{\nabla}\cdot \vec{E})-\nabla^2\vec{E}$ and the fact that $\vec{\nabla}\times \vec{E}=0$. This equation can also be obtained directly from Eq.(23) by dividing both sides of Eq.(23) by $\sigma$ and using Eq.(13).
Equation (20) may define a limiting conductivity of a material, $\sigma_0$, obtained by equating $\lambda_c$ to the Compton wavelength of the electron of mass $m_e$. In this case one finds
\begin{equation}
\sigma_0=\frac{m_e}{\mu_0h}\,.
\end{equation}
This amounts to a value of $\sigma_0=1.09\times10^{9}\,\Omega^{-1}\,\rm m^{-1}$. This can be compared with the conductivity of the best metallic conductor (silver), which is $6.3\times 10^{7}\,\Omega^{-1}\,\rm m^{-1}$. Thus, the maximum possible conductivity is set by quantum mechanics and governed by Eq.(25). Because of this, the conductivity at zero kelvin is never infinite but is limited to $\sigma_0$.
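The quoted value of $\sigma_0$ can be reproduced directly from Eq.(25) with standard constants:

```python
import math

# Evaluate sigma_0 = m_e / (mu_0 h) from Eq.(25); CODATA-style constants.
m_e = 9.10938e-31            # electron mass, kg
mu_0 = 4e-7 * math.pi        # vacuum permeability, T m/A
h = 6.62607e-34              # Planck constant, J s

sigma_0 = m_e / (mu_0 * h)
print(f"sigma_0 ~ {sigma_0:.3e} / (Ohm m)")   # ~1.09e9, vs 6.3e7 for silver
```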
\subsection{Fluid nature of electromagnetic field}
We have recently found that the magnetic field created by the charged particle to be given by [1]
\begin{equation}
\vec{B}_m=\frac{\vec{v}}{c^2}\times\vec{E}\,.
\end{equation}
In magnetohydrodynamics this field is governed by the magnetohydrodynamic equation
\begin{equation}
\frac{\partial \vec{B}}{\partial t}=\vec{v}\times(\nabla\times \vec{B})\,.
\end{equation}
For any other fluid one has in general
\begin{equation}
\frac{\partial \vec{\omega}}{\partial t}=\vec{v}\times(\nabla\times \vec{\omega})\,,
\end{equation}
where $\vec{\omega}=\vec{\nabla}\times\vec{v}$ is the vorticity of the fluid. This shows that the vorticity of a fluid is an integral part of the fluid flow. When an electromagnetic field encounters a charged particle, the charged particle induces a vorticity in the field (or fluid), just as occurs when a fluid encounters an obstacle. We have recently shown that Eq.(6), i.e., $\vec{\nabla}\times \vec{J}=0$, implies that $\vec{\omega}=\frac{\vec{v}}{c^2}\times (-\frac{\partial \vec{v}}{\partial t})$ [7]. Owing to the magnetic field in Eq.(26), we have shown recently that this field gives rise to a longitudinal (electroscalar) wave traveling at the speed of light in vacuum, besides the electromagnetic fields [1].
Hence, it is evident that there is a genuine one-to-one correspondence between electrodynamics and hydrodynamics, i.e., they are intimately related. This immediately implies that the propagation of electromagnetic fields mimics fluid flow. The electric field resembles the local acceleration, and the magnetic field resembles the vorticity, of the moving fluid.
Using Eq.(13) in Eq.(26), the magnetic field (and vorticity) produced by the electrons in a metal vanishes, since $\vec{B}_m=\frac{\vec{v}\times \vec{J}}{c^2\sigma}=0$, where $\vec{J}=\rho \,\vec{v}$. Hence, the Lorentz force on conduction electrons is purely electric, and no magnetic field inside the conductor can be created by the motion of the electrons. Therefore, even if we write Ohm's law in the general form $\vec{J}=\sigma(\vec{E}+\vec{v}\times\vec{B})$, we would still obtain the form in Eq.(13).
\section{Diffusion Equation}
Diffusion is a transport phenomenon resulting from the random motion of molecules from a region of higher concentration to one of lower concentration. The result of diffusion is a gradual mixing of material. Diffusion is of fundamental importance in physics, chemistry, and biology.
The diffusion equation is given by [8]
\begin{equation}
\vec{J}=-D\vec{\nabla}\rho\,,
\end{equation}
where $D$ is the diffusion constant. This is known as Fick's law.
Taking the divergence of Eq.(29) and using Eq.(4), one finds
\begin{equation}
\frac{\partial \rho}{\partial t}=D\nabla^2 \rho\,.
\end{equation}
This shows that the density $\rho$ satisfies the diffusion equation. The normalized solution of Eq.(30) is
\begin{equation}
\rho (x,t)= \frac{1}{\sqrt{4\pi D\,t}} \exp\,(-\frac{x^2}{4Dt})\,,
\end{equation}
which, when applied in Eq.(29), yields the current density $\vec{J}$. It is obvious that the current in Eq.(29) satisfies the \textcolor[rgb]{1.00,0.00,0.00}{GCE}, viz., Eqs.(4) - (6).
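The Gaussian solution above can be verified symbolically (a sketch using SymPy, not part of the original paper):

```python
import sympy as sp

# Check that the Gaussian profile solves drho/dt = D d2rho/dx2
# and stays normalized to unity.
x = sp.symbols('x', real=True)
t, D = sp.symbols('t D', positive=True)
rho = sp.exp(-x**2 / (4*D*t)) / sp.sqrt(4*sp.pi*D*t)

assert sp.simplify(sp.diff(rho, t) - D*sp.diff(rho, x, 2)) == 0
assert sp.simplify(sp.integrate(rho, (x, -sp.oo, sp.oo)) - 1) == 0
print("diffusion solution verified")
```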
Differentiating Eq.(29) partially with respect to time and employing Eqs.(4) - (5) and the vector identity $\vec{\nabla}(\vec{\nabla}\cdot\vec{J})=\nabla^2\vec{J}+\vec{\nabla}\times(\vec{\nabla}\times \vec{J})$\,[6], one gets
\begin{equation}
\frac{\partial \vec{J}}{\partial t}=D\nabla^2 \vec{J}\,.
\end{equation}
Hence, both $\rho$ and $\vec{J}$ satisfy the diffusion equation.
\section{Dirac Equation}
We would like here to show that the generalized continuity equation, viz., Eq.(7), is compatible with the Dirac equation. To this aim, we apply the current density 4-vector of the Dirac formalism, i.e., $J_\mu=\overline{\psi}\gamma_\mu \psi$, in Eq.(7). This yields [5]
\begin{equation}\label{3}
\partial^\mu J^\nu-\partial^\nu J^\mu =(\partial^\mu\overline{\psi})\gamma^\nu \psi+\overline{\psi}\gamma^\nu \partial^\mu \psi -
(\partial^\nu\overline{\psi})\gamma^\mu \psi-\overline{\psi}(\gamma^\mu \partial^\nu \psi)=0.
\end{equation}
The first term in the above equation can be written as
\begin{equation}\label{3}
(\partial^\mu\overline{\psi})\gamma^\nu \psi=(\partial^\mu\psi^+)\gamma^0\gamma^\nu \psi=(\partial^\mu\psi^+)\gamma^{\nu+}\gamma^0 \psi=(\gamma^\nu\partial^\mu\psi)^+\gamma^0 \psi.
\end{equation}
Hence,
\begin{equation}
(\partial^\mu\overline{\psi})\gamma^\nu \psi=(\overline{\psi}\gamma^\nu \partial^\mu \psi)^+.
\end{equation}
Similarly,
\begin{equation}
(\partial^\nu\overline{\psi})\gamma^\mu \psi=(\overline{\psi}\gamma^\mu \partial^\nu \psi)^+.
\end{equation}
Substituting these terms in Eq.(33) completes the proof, since for a plane wave $\partial^\mu\psi=ik^\mu\psi$.
\section{Klein-Gordon Equation}
It is the equation of motion of scalar particles with integral spin. For such a system the current density 4-vector is given by [5]
\begin{equation}\label{3}
J_\mu=\frac{i\hbar}{2m}\left(\phi^*\partial_\mu\phi-(\partial_\mu\phi^*)\phi\right),
\end{equation}
so that the generalized continuity equation becomes
\begin{equation}\label{3}
\hspace{-2cm}\partial_\mu J_\nu-\partial_\nu J_\mu =\frac{i\hbar}{2m}\left[\partial_\mu(\phi^*\partial_\nu\phi-(\partial_\nu\phi^*)\phi)-\partial_\nu(\phi^*\partial_\mu\phi-(\partial_\mu\phi^*)\phi)\right]=0,
\end{equation}
and hence, the Klein-Gordon equation is compatible with our \textcolor[rgb]{1.00,0.00,0.00}{GCE} too.
\section{Schrodinger Equation}
It is the equation of motion governing non-relativistic particles, e.g., electrons.
The current density and probability density in the Schrodinger formalism are given by [5]
\begin{equation}\label{3}
\rho=\psi^*\psi,\qquad \vec{J}=\frac{\hbar}{2mi}(\psi^*\nabla\psi-(\nabla\psi^*)\psi).
\end{equation}
Applying Eq.(6), one gets
\begin{equation}
\vec{\nabla}\times \vec{J}=\frac{\hbar}{2\,mi}\left[(\vec{\nabla}\times(\psi^*\nabla\psi)-\vec{\nabla}\times(\nabla\psi^*)\psi)\right].
\end{equation}
Using the two vector identities $\vec{\nabla}\times(f\vec{A})=f(\vec{\nabla}\times \vec{A})-\vec{A}\times(\vec{\nabla}f)$ and $\vec{\nabla}\times(\vec{\nabla} f)=0$, Eq.(40) vanishes, and hence one of the \textcolor[rgb]{1.00,0.00,0.00}{GCE} relations is satisfied. We now apply Eq.(5) to Eq.(39) to get
\begin{equation}
\vec{\nabla}(\rho c^2)+\frac{\partial \vec{J}}{\partial t}=c^2\,\vec{\nabla}(\psi^*\psi)+\frac{\hbar}{2\,mi}\frac{\partial}{\partial t}\left[\,\psi^*\nabla\psi-(\nabla\psi^*)\,\psi\,\right]=0.
\end{equation}
Upon using the Schrodinger equation,
\begin{equation}
H\psi=i\hbar\frac{\partial \psi}{\partial t}\,, \qquad \psi^*H=-i\hbar\frac{\partial \psi^*}{\partial t}\,
\end{equation}
Eq.(41) vanishes for a plane wave, i.e., $\psi\,(r,t)=A\exp\,i(\vec{k}\cdot \vec{r}-\omega\,t)\,, A={\rm const.}$ Therefore, the \textcolor[rgb]{1.00,0.00,0.00}{GCE} is satisfied by the Dirac, Klein-Gordon, and Schrodinger equations. This implies that the \textcolor[rgb]{1.00,0.00,0.00}{GCE} is indeed fundamental in formulating any field-theoretic model involving the motion of particles or fluids. This equation will have immense consequences when applied to the theory of electrodynamics. Such an application will lead to better formulations and understanding of astrophysical phenomena pertaining to the evolution of dense objects.
\section{Concluding Remarks}
We have derived in this paper the generalized continuity equation and shown how it applies to particle flow. We have shown that the generalized continuity equation is Lorentz invariant, and that the basic equations of motion are compatible with it. The diffusion equation is in agreement with the \textcolor[rgb]{1.00,0.00,0.00}{GCE}. Moreover, the current and charge densities in the Dirac, Klein-Gordon, and Schrodinger formalisms are compatible with the \textcolor[rgb]{1.00,0.00,0.00}{GCE} too. The classical Ohm's law is found to be compatible with a Schrodinger-like equation. Transport properties of metals are shown to be propagated by a material wave rather than by drift and diffusion.
\section*{Acknowledgments} One of us (AIA) would like to thank the University of Khartoum for financial support of this work and Sultan Qaboos University for hospitality, where this work was carried out.
\section*{References}
$[1]$ Arbab, A. I., and Satti, Z. A., \emph{Progress in Physics}, 2, \textbf{8} (2009).\\
$[2]$ Tait, Peter Guthrie, \emph{An elementary treatise on quaternions}, 2nd ed., Cambridge University Press (1873);
Kelland, P. and Tait, P. G., \emph{Introduction to Quaternions}, 3rd ed. London: Macmillan, (1904).\\
$[3]$ Jackson, J.D., \emph{Classical Electrodynamics}, New York, John Wiley, 2nd ed. (1975).\\
$[4]$ Drude, Paul, \emph{Annalen der Physik} \textbf{306}, (3), 566 (1900).\\
$[5]$ Bjorken, J.D. and Drell, S.D., \emph{Relativistic Quantum Mechanics}, McGraw-Hill Book Company, (1964).\\
$[6]$ Lawden, D.F., \emph{Tensor Calculus and Relativity}, Methuen, London, (1968).\\
$[7]$ Arbab, A. I., \emph{\tt On the analogy between the electrodynamics and hydrodynamics using quaternions}, to appear in the 14th International Conference on Modelling Fluid Flow (CMFF'09), Budapest, Hungary, 9-12 September (2009).\\
$[8]$ Fick A., \emph{Ann. Physik, Leipzig}, \textbf{170}, 59, (1855).\\
\end{document}
\section{INTRODUCTION}
\label{intro}
Because the outer regions of the solar atmosphere are threaded by
a magnetic field, they can support a wide range of oscillatory
phenomena. Theoretical aspects of these waves and oscillations
have been the subject of extensive investigations
\citep[e.g.,][]{Roberts1983,Roberts1984}. These initial studies
were mainly driven by radio observations of short period
oscillations \citep[e.g.,][]{Rosenberg1970}. More recent
detections of coronal oscillatory phenomena have resulted in
considerable additional theoretical work. Recent reviews include
\citet{Roberts2000} and \citet{Roberts2003}. Observational
detections of coronal oscillatory phenomena include detections of
spatial oscillations of coronal structures, which have been
interpreted as fast kink mode MHD disturbances
\citep[e.g.,][]{Aschwanden1999}; intensity oscillations, which
have been interpreted as propagating slow magnetoacoustic waves
\citep[e.g.,][]{DeForest1998,Berghmans1999}; and Doppler shift
oscillations, which have been interpreted as slow mode MHD waves
\citep[e.g.,][]{Wang2002}. The interaction between the
theoretical work and the growing body of oscillation observations
provides fertile ground for testing our understanding of the
structure and dynamics of the corona.
The EUV Imaging Spectrometer (EIS) on the \textit{Hinode}
satellite is an excellent tool for studying oscillatory phenomena
in the corona. \citet{Culhane2007} provides a detailed
description of EIS, and the overall \textit{Hinode} mission is
described in \citet{Kosugi2007}. Briefly, EIS produces stigmatic
spectra in two 40~\AA\ wavelength bands centered at 195 and
270~\AA. Two slits (1\arcsec\ and 2\arcsec) provide line
profiles, and two slots (40\arcsec\ and 266\arcsec) provide
monochromatic images. Moving a fine mirror mechanism allows EIS
to build up spectroheliograms in selected emission lines by
rastering a region of interest. With typical exposure times of 30
to 90~s, however, it can take considerable time to construct an
image. Higher time cadences can be achieved by keeping the EIS
slit or slot fixed on the Sun and making repeated exposures. This
sit-and-stare mode is ideal for searching for oscillatory
phenomena.
EIS Doppler shift data have already been used for a number of
investigations of oscillatory phenomena.
\citet{VanDoorsselaere2008} have detected kink mode MHD
oscillations with a period near 5 minutes. \citet{Mariska2008}
observed damped slow magnetoacoustic standing waves with periods
of about 35 minutes. \citet{Wang2009} have detected slow mode
magnetoacoustic waves with 5 minute periods propagating upward
from the chromosphere to the corona in an active region.
\citet{Wang2009a} have also observed propagating slow
magnetoacoustic waves with periods of 12 to 25 minutes in a large
fan structure associated with an active region. Analysis of
oscillatory data with EIS is still just beginning. The amplitudes
observed have all been very small---typically 1 to 2 km~s$^{-1}$.
Thus a clear picture of the nature of the low-amplitude coronal
oscillations has yet to emerge. In this paper, we add to that
picture by analyzing portions of an EIS sit-and-stare active
region observation that shows evidence for Doppler-shift
oscillations in a number of EUV emission lines. We also use data
from the \textit{Hinode} Solar Optical Telescope (SOT) to relate
the phenomena observed in the corona with EIS to chromospheric
features and the magnetic field.
\section{OBSERVATIONS}
\begin{figure*}
\plotone{f1.eps}
\caption{Context EIS spectroheliograms in 9 wavelength windows
obtained from 13:33:46 to 17:56:57 UT on 2007 August 22 (left)
and from 01:55:43 to 06:18:53 UT on 2007 August 23 (right). The
location of the EIS slit for the sit-and-stare observation is
marked with the vertical line. The horizontal lines along the
slit mark the location of the oscillation data analyzed in this
paper. The post-flare loops discussed in the text are the
bright features east and south of the marked region in the
three bottom right panels.}
\label{fig:context_panel}
\end{figure*}
The observations discussed in this paper were taken in an area of
enhanced EUV and soft X-ray emission just west of NOAA active
region 10969. The complete EIS data set consists of a set of
$256\arcsec \times 256\arcsec$ context spectroheliograms in 20
spectral windows, followed by 7.4~h of sit-and-stare observations
in 20 spectral windows, and finally a second set of context
spectroheliograms. All the EIS sit-and-stare data were processed
using software provided by the EIS team. This processing removes
detector bias and dark current, hot pixels, dusty pixels, and
cosmic rays, and then applies a calibration based on the
prelaunch absolute calibration. The result is a set of
intensities in ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \AA$^{-1}$. The
data were also corrected for the EIS slit tilt and the orbital
variation in the line centroids. For emission lines in selected
wavelength windows, the sit-and-stare data were fitted with
Gaussian line profiles plus a background, providing the total
intensity, location of the line center, and the width.
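As an illustration of this fitting step, the following sketch fits a Gaussian plus a constant background to a synthetic line profile and recovers the total intensity, line center, and width. The wavelengths, count levels, and noise model are illustrative stand-ins, not the EIS team software.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_bg(wvl, peak, cen, sigma, bg):
    """Gaussian line profile on a constant background."""
    return peak * np.exp(-0.5 * ((wvl - cen) / sigma) ** 2) + bg

# Synthetic spectrum around a line with an assumed rest wavelength of 195.12 A
rng = np.random.default_rng(0)
wvl = np.linspace(194.9, 195.35, 24)             # wavelength bin centers (A)
spec = gauss_bg(wvl, 800.0, 195.12, 0.03, 50.0)
spec += rng.normal(0.0, 5.0, wvl.size)           # stand-in for photon noise

p0 = [spec.max() - spec.min(), wvl[np.argmax(spec)], 0.03, spec.min()]
popt, pcov = curve_fit(gauss_bg, wvl, spec, p0=p0)
peak, cen, sigma, bg = popt

total_intensity = peak * sigma * np.sqrt(2.0 * np.pi)   # integrated intensity
doppler = 2.998e5 * (cen - 195.12) / 195.12             # Doppler shift (km/s)
```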
Figure~\ref{fig:context_panel} shows the pre- and
post-sit-and-stare spectroheliograms and provides a more detailed
view of the area on the Sun covered by the sit-and-stare
observations. The spectroheliograms were obtained with the
1\arcsec\ EIS slit using an exposure time of 60~s. Note that the
two sets of spectroheliograms have small differences in the
pointing. Examination of the two sets of images shows that
considerable evolution has taken place in the observed region
between the times each set was taken. In particular, new loop
structures have developed at locations to the south of the bright
core of the emitting region, suggesting that some flare-like
heating may have taken place between the two sets of context
observations. Space Weather Prediction Center data show that
there was one B1.2 class flare between 07:50 and 08:10~UT on 2007
August 22 at an unknown location. Since other flares occurred in
this active region, this one was likely associated with it as
well. There were, however, no events recorded during the time
of the EIS observations.
\begin{figure}[b]
\plotone{f2.eps}
\caption{A portion of an MDI magnetogram taken on 2007 August 22
at 14:27 UT showing the area covered by the EIS
spectroheliograms and the approximate location of the EIS slit
during the sit-and-stare observation. The horizontal lines
along the slit mark the location of the oscillation data
analyzed in this paper, and the area of weak magnetic flux that
is the focus of this analysis is marked with an arrow. The image
has been scaled so that the range of magnetic fluxes displayed
is $\pm100$ Gauss.}
\label{fig:mdi_mag}
\end{figure}
\begin{figure}
\plotone{f3.eps}
\caption{\textit{TRACE} 171~\AA\ (Fe$\;$\small{IX/X}) image taken
on 2007 August 22 at 20:01:19 UT. The box shows the location of
the EIS spectroheliograms taken before and after the
sit-and-stare observations, and the vertical line within the box
shows the location of the sit-and-stare observation. The
horizontal lines along the slit mark the location of the
oscillation data analyzed in this paper.}
\label{fig:trace}
\end{figure}
Both magnetogram data from the \textit{SOHO} Michelson Doppler
Imager (MDI) and \textit{Hinode} Solar Optical Telescope (SOT)
and coronal images taken in the 171 \AA\ (Fe$\;$\small{IX/X})
filter with the \textit{Transition Region and Coronal Explorer}
(\textit{TRACE}) were also examined. The magnetograms, one of
which is shown in Figure~\ref{fig:mdi_mag} with the location of
the EIS slit for the sit-and-stare observation indicated, show an
extended bipolar plage region without any visible sunspots. As
can be seen in the magnetogram, the EIS slit crosses two regions
of positive (white) polarity. The weaker one, which has a size of
about 6\arcsec\ and is marked with an arrow, is the focus of this
analysis.
\begin{figure*}
\plotone{f4.ps}
\caption{Coalignment of EIS and SOT. Top left: EIS
spectroheliogram taken in \ion{O}{6} 184.12 \AA\ at the same
time as the spectroheliograms shown in
Figure~\ref{fig:context_panel}, the dark vertical line near the
center marks the slit location for the sit-and-stare
observations. Top right: MDI full disk magnetogram taken at
14:27 UT. Bottom right: \textit{Hinode} SP magnetogram, taken
between 18:03 UT and 19:01 UT, at the same time the \ion{Ca}{2}
H time sequence was taken. Bottom left: temporal average of the
time sequence in \ion{Ca}{2} H taken between 19:50 and 21:40
UT. The black circles in the bottom images encompass a small
network bright point that overlaps with the EIS slit.}
\label{fig:coalign}
\end{figure*}
The \textit{TRACE} images, one of which is shown in
Figure~\ref{fig:trace}, show that the western part of the region
near the right edge of the box that outlines the edges of the
second EIS spectroheliogram exhibits fan-like coronal structures
extending radially out in all directions, while the eastern part
consists of shorter loop structures that connect the opposite
polarities. The full set of \textit{TRACE} images that cover most
of the time period covered by the context and sit-and-stare
observations are included as a movie. The movie shows generally
quiescent behavior in the fan-like structures, but multiple
brightenings in the eastern loop system. The largest of these
took place at around 01:31~UT on August 23 and shows a filament
eruption, which resulted in a system of post-flare loops that are
visible in the \ion{Fe}{15} 284.16~\AA\ and \ion{Fe}{16}
262.98~\AA\ spectroheliograms in the right panels of
Figure~\ref{fig:context_panel}. Some of the loops in the eastern
part cross the EIS slit location for the sit-and-stare
observations, but they are for the most part north of the area we
focus on.
We have also examined images taken with the Extreme Ultraviolet
Imaging Telescope \citep[EUVI,][]{Wuelser2004} on the Ahead (A)
and Behind (B) spacecraft of the \textit{Solar Terrestrial
Relations Observatory} (\textit{STEREO}). EUVI observes the
entire solar disk and the corona up to 1.4 R$_\sun$ with a pixel
size of 1.59\arcsec. On 2007 August 22, the separation angle of
\textit{STEREO} with Earth was 14.985$\degr$ for spacecraft A and
11.442$\degr$ for spacecraft B. Our target region was rather near
the limb as seen from spacecraft A. EUVI/B, however, provided
continuous on-disk images between 2007 August 22, 11:00 UT and
2007 August 23, 11:30 UT, which allowed us to study the
development of the coronal structures of the region during the
time of the EIS observations.
We produced movies in all four EUVI channels (171~\AA , 195~\AA,
284~\AA, and 304~\AA), with 171~\AA\ having the highest cadence
of 2.5 min. The EUVI movies generally show the same behavior as
that shown in the \textit{TRACE} data. EUVI captured two larger
events during this period, the first one started at around
12:24~UT on August 22, showing multiple brightenings of the
eastern loop system although it is not clear if a CME had been
launched. The second one took place at around 01:31~UT on August
23, involving the western part of the region and shows a filament
eruption with a system of post-flare loops evolving at the time
the second EIS context spectroheliogram was taken.
SOT obtained \ion{Ca}{2} H observations at a one-minute cadence
during the time interval of the sit-and-stare observations that
are the focus of this study. To coalign those observations we use
an EIS \ion{O}{6} 184.12 \AA\ spectroheliogram, MDI magnetograms,
SOT Spectro-Polarimeter (SP) magnetograms, and SOT \ion{Ca}{2} H
line filtergrams. The EIS \ion{O}{6} 184.12 \AA\ spectroheliogram
was part of the scan shown in Figure~\ref{fig:context_panel}
(left), taken several hours before the sit-and-stare
observations. Coalignment was carried out by rescaling the
images, using cross-correlation techniques to overlap the
selected subfields and finally blinking the images to ensure
minimal offsets.
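The cross-correlation step can be sketched with a minimal FFT-based estimator of the integer-pixel offset between two images; the images and the applied shift below are synthetic, and the real coalignment also involved rescaling and visual blinking as described above.

```python
import numpy as np

def xcorr_offset(ref, img):
    """Integer-pixel (dy, dx) shift of `img` relative to `ref`,
    located from the peak of the FFT-based cross-correlation."""
    cc = np.fft.ifft2(np.conj(np.fft.fft2(ref - ref.mean())) *
                      np.fft.fft2(img - img.mean())).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    # Map wrapped indices into a signed offset range
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Illustration: shift a random field by (3, -5) pixels and recover the offset
rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))
img = np.roll(ref, (3, -5), axis=(0, 1))
offset = xcorr_offset(ref, img)   # recovers (3, -5)
```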
Figure~\ref{fig:coalign} displays the \ion{O}{6} 184.12
\AA\ spectroheliogram in the top left corner, with the dark
vertical line representing the slit position of the sit-and-stare
observations. The top right image is from a full-disk MDI
magnetogram taken at 14:27 UT. The three small structures in the
lower left corner of the magnetogram were used to coalign the
\ion{O}{6} image, which shows bright emission structures at the
same locations. In the next step we use SP data to coalign with
the MDI magnetogram. The SP was scanning the target region on
2007 August 22 between 18:03 UT and 19:01 UT, at the same time
the \ion{Ca}{2} H image sequence was taken. The lower right image
in Figure~\ref{fig:coalign} gives the apparent longitudinal field
strength as derived from the Stokes spectra. Comparing the two
magnetograms we can see that the overall structure of the
bi-polar region has hardly evolved over time and that the SP data
show considerable additional fine structure in the magnetic
field. We also note that in the time between the MDI magnetogram
and SP magnetogram flux cancellation has taken place at the
polarity inversion line of the bi-polar region (at around $x =
-610$\arcsec). The eruption observed a few hours later in EUVI
and \textit{TRACE} was probably triggered by this flux
cancellation.
\begin{figure}
\plotone{f5.eps}
\caption{EIS sit-and-stare Doppler-shift data in four emission
lines covering the entire sit-and-stare observation. The
Doppler shift in each window has been adjusted so that the zero
value is the average over the window. The maximum and minimum
values plotted in each case are $+20$ and $-20$ km s$^{-1}$,
respectively. This study focuses on the spatial range from
$-200$\arcsec\ to $-215$\arcsec\ in the time interval from
20:07 to 21:53~UT.}
\label{fig:doppler_local}
\end{figure}
\begin{figure}
\plotone{f6.eps}
\caption{Doppler shift data averaged over the 16 detector rows
from $-200$\arcsec\ to $-215$\arcsec. The emission lines shown
from top to bottom are \ion{Fe}{11} 188.23~\AA, \ion{Fe}{12}
195.12~\AA, \ion{Fe}{13} 202.04~\AA, \ion{Fe}{14} 274.20~\AA,
and \ion{Fe}{15} 284.16~\AA. Time is measured in minutes from
the start of the sit-and-stare data at 18:13:06 UT. The zero
value for Doppler shifts is set to the average Doppler shift in
each line and is shown using the horizontal lines. Data for
each emission line are displaced from the next set by 4 km
s$^{-1}$.}
\label{fig:doppler_average}
\end{figure}
The target area for the oscillation study is in the black circle,
a small magnetic region of positive polarity. In the final
coalignment step the SOT \ion{Ca}{2} H images were coaligned with
the SP scan. The bottom left image in Figure~\ref{fig:coalign}
shows the temporal average of the time sequence taken between
19:50 UT and 21:40 UT. The black circle encompasses a small
network bright point that overlaps with the EIS slit and the
region averaged along the slit in the analysis that is the focus
of this paper. As we have already noted, the EIS
spectroheliograms taken before and after the sit-and-stare
observation show that considerable structural evolution has taken
place. In the \ion{Fe}{14} 264.78~\AA\ spectroheliogram taken
after the sit-and-stare observation, there is significant
emission within the 6\arcsec\ region of the EIS slit that is the
focus of this paper, suggesting that the location is near one end
of one of the coronal loops seen in the lower three panels in the
right side of Figure~\ref{fig:context_panel}. On the other hand
the images in the EIS spectroheliograms taken immediately before
the sit-and-stare observation show much less evidence for a loop
rooted at that location. Thus we can only conclude that the
bright feature we observe in the \ion{Ca}{2} H line may be the
footpoint of a coronal loop. We do note, however, that there are
no other strong magnetic concentrations close to the location of
the region that is the focus of this study. The ones to the
south-east are about 10\arcsec\ away---more than our coalignment
errors. We believe that loop footpoints will be anchored in
strong flux concentrations.
The basic building block for the sit-and-stare observation was
EIS Study ID~56. This study takes 50 exposures of 30~s each in 20
spectral windows with a solar $y$-position coverage of
400\arcsec. The entire sit-and-stare observation consisted of
four executions of the study, with each execution invoked with a
repetition count of four. Thus, the full sit-and-stare data set
consists of 16 repetitions of the basic building block. There was
a brief delay between each invocation of the study, leading to
gaps in the resulting time series.
Over both orbital periods and shorter time intervals, the
\textit{Hinode} pointing fluctuates in both the solar $x$- and
$y$-directions by up to about 3\arcsec\ peak-to-peak. This is due
to both spacecraft pointing variations and changes in the
location of EIS relative to the location of the Sun sensors on
the spacecraft. Studies of the latter variations have shown that
they are well-correlated with similar variations observed in data
from the \textit{Hinode} X-Ray Telescope (XRT). We have therefore
used a modified version of the software provided by the XRT team
to compute the average $y$-position of each pixel along the slit
over each exposure and then interpolated the fitted centroid
positions onto a uniform $y$-position grid.
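A minimal sketch of this regridding step, assuming a single exposure with a hypothetical pointing drift and a toy along-slit velocity profile (the actual correction uses the XRT-derived pointing model):

```python
import numpy as np

# One exposure: line-centroid velocities measured at slit pixels whose true
# solar-y positions are offset by that exposure's pointing drift (hypothetical)
y_nominal = np.arange(-215.0, -199.0, 1.0)   # target uniform grid (arcsec)
drift = 0.8                                  # arcsec, from the pointing model
y_measured = y_nominal + drift
# Toy along-slit velocity profile sampled at the drifted positions
v_measured = 2.0 * np.sin(2.0 * np.pi * (y_measured + 215.0) / 16.0)

# Interpolate the measured profile back onto the uniform y grid
# (np.interp clamps at the ends, so the first grid point is extrapolated)
v_on_grid = np.interp(y_nominal, y_measured, v_measured)
```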
It is not, of course, possible to correct the sit-and-stare
observations for fluctuations in the $x$-position on the Sun.
Plots of the $x$-position pointing fluctuations show that they
tend to be smoother than the $y$-direction fluctuations, and that
they are dominated by the orbital period. If periodic Doppler
shifts are present only in very small structures, we would thus
expect the signal to show a modulation with the orbital period.
Larger structures, $\geq 3$\arcsec\ in the $x$-direction, that
display coherent Doppler shifts should not be affected by the
spacecraft $x$-position pointing variations.
Over much of the EIS slit during the sit-and-stare observation
there is no evidence for interesting dynamical behavior in the
Doppler shift observations. The portion of the slit that covers
the brighter core area of the region does, however, show evidence
for periodic changes in the Doppler shifts.
Figure~\ref{fig:doppler_local} shows the measured Doppler-shift
data in four emission lines over this $y$-position range as a
function of time. The gaps between each invocation of the study
appear as wider pixels near 20:00, 22:00, and 00:00 UT. Note that the data
are also affected by passage of the spacecraft through the South
Atlantic Anomaly (SAA). A small region of compromised data
appears near 19:30 UT, and major SAA passages are evident near
21:00 and 22:45 UT.
The display shows clear evidence for periodic fluctuations in all
the emission lines at $y$-positions of roughly $-170$\arcsec\ to
$-180$\arcsec\ during the first hour shown in the plots. Periodic
fluctuations are also visible over $y$-locations centered near
$-210$\arcsec. Note that there is some evidence of longer period
fluctuations, for example in the \ion{Fe}{15}
284.16~\AA\ emission line. These fluctuations are probably the
result of the correction for orbital line centroid shifts not
fully removing those variations. In the remainder of this paper,
we focus on the area near $-210$\arcsec\ that shows evidence for
oscillatory phenomena.
\section{ANALYSIS}
\subsection{Doppler Shift Oscillations}
As we noted earlier, solar $y$-positions between
$-200$\arcsec\ and $-215$\arcsec\ show considerable oscillatory
behavior, particularly in the set of data taken beginning at
20:07:01 UT. Figure~\ref{fig:doppler_average} shows the averaged
Doppler shift data over the time period of this set of
observations for, from top to bottom, \ion{Fe}{11} 188.23~\AA,
\ion{Fe}{12} 195.12~\AA, \ion{Fe}{13} 202.04~\AA, \ion{Fe}{14}
274.20~\AA, and \ion{Fe}{15} 284.16~\AA. The Doppler shifts have
been averaged over the 16 detector rows from $-200$\arcsec\ to
$-215$\arcsec. Data from 172 to 180 minutes were taken during SAA
passage and have been removed from the plot.
The Doppler shift data over this portion of the EIS slit show
clear evidence for low-amplitude, roughly 2--4 km s$^{-1}$,
oscillatory behavior with a period near 10 minutes. For some of
the time period, particularly after 180 minutes, there appears to
be a clear trend for the oscillations to display increasing
amplitude as a function of increasing temperature of line
formation.
\begin{deluxetable}{lcccccc}
\tablecaption{Periods and Amplitudes Detected in Doppler Shift
and Intensity Data\label{table:doppler_periods}}
\tablewidth{0pt} \tablecolumns{4} \tablehead{ \colhead{} &
\colhead{Wavelength} & \colhead{Log $T$} & \colhead{$P_D$} &
\colhead{$\delta v$} & \colhead{$P_I$} & \colhead{$\delta I/I$}
\\ \colhead{Ion} & \colhead{(\AA)} & \colhead{(K)} &
\colhead{(minutes)} & \colhead{(km s$^{-1}$)} &
\colhead{(minutes)} & \colhead{(\%)} } \startdata \ion{Fe}{11}
& 188.23 & 6.07 & 10.0 & 1.2 & 11.8 & 1.1 \\ \ion{Fe}{12} &
195.12 & 6.11 & 10.1 & 1.1 & 11.4 & 0.9 \\ \ion{Fe}{13} & 202.04
& 6.20 & \phn 9.1 & 1.3 & 11.2 & 1.4 \\ \ion{Fe}{14} & 274.20 &
6.28 & \phn 9.1 & 1.3 & 11.4 & 2.0 \\ \ion{Fe}{15} & 284.16 &
6.32 & \phn 9.0 & 1.4 & \phn 7.6 & 2.4 \\ \enddata
\end{deluxetable}
Because there is a significant gap in the Doppler shift data,
neither Fourier time series analysis nor wavelet analysis is
appropriate. Instead, we examine the time series by calculating
periodograms using the approach outlined in \citet{Horne1986} and
\citet{Press1989}. Figure~\ref{fig:pgram_v} shows the
periodograms calculated from the Doppler shift data shown in
Figure~\ref{fig:doppler_average}. Also plotted on the figure are
the 99\% and 95\% significance levels. All but one of the time
series show a peak in the periodogram at the 99\% confidence
level, and the largest peak in the \ion{Fe}{15}
284.16~\AA\ emission line data is at the 95\% confidence level.
Table~\ref{table:doppler_periods} lists the period of the most
significant peak in each of the panels shown in the figure.
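A minimal sketch of this periodogram approach for a gapped time series; `scipy.signal.lombscargle` is used here as a stand-in for the Horne \& Baliunas implementation, and the series is synthetic.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic gapped series: a 10-minute oscillation with an SAA-like data gap
rng = np.random.default_rng(2)
t = np.arange(0.0, 120.0, 0.5)               # minutes
t = t[(t < 70.0) | (t > 80.0)]               # cut out a 10-minute gap
v = 1.5 * np.sin(2.0 * np.pi * t / 10.0) + rng.normal(0.0, 0.5, t.size)

periods = np.linspace(4.0, 30.0, 500)        # trial periods (minutes)
omega = 2.0 * np.pi / periods                # lombscargle wants angular frequency
power = lombscargle(t, v - v.mean(), omega)

best_period = periods[np.argmax(power)]      # should land near 10 minutes
# Significance would then be assessed via the false-alarm probability,
# following the normalization of Horne & Baliunas (1986)
```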
To estimate the amplitude of the oscillations, we detrend the
Doppler shift time series by subtracting a background computed
as a 10-minute running average centered on each data point and
then compute the rms amplitude as the standard deviation of the
detrended data. For a sine wave, the peak velocity is the
rms value multiplied by $\sqrt 2$. These peak velocity values are
also listed in the table in the $\delta v$ column. Visual
inspection of the data suggests that the numbers in the table are
smaller than what might be obtained by fitting the data. The
numbers do confirm the impression that the oscillation amplitude
increases with increasing temperature of line formation.
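The detrending and amplitude estimate can be sketched as follows; the series is synthetic, with the 10-minute boxcar window taken from the text.

```python
import numpy as np

def running_mean(y, t, window=10.0):
    """Boxcar background: mean of all samples within +/- window/2 of each time."""
    return np.array([y[np.abs(t - ti) <= window / 2.0].mean() for ti in t])

# Toy Doppler series: a 10-minute sine of 1.5 km/s peak amplitude on a slow trend
t = np.arange(0.0, 120.0, 0.5)                   # minutes
v = 1.5 * np.sin(2.0 * np.pi * t / 10.0) + 0.02 * t

detrended = v - running_mean(v, t)
rms = detrended[10:-10].std()                    # interior points; edges are biased
v_peak = rms * np.sqrt(2.0)                      # peak amplitude of a pure sine
```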
It is clear from the Doppler shift plots in
Figure~\ref{fig:doppler_average} that the oscillations are not
always present. Instead, they appear for a few periods and then
disappear. To better understand this behavior, we have fitted the
time intervals where oscillations are obvious with a combination
of a damped sine wave and a polynomial background. Thus, for each
time period where oscillations are present we assume that the
data can be fitted with a function of the form
\begin{equation}
v(t) = A_0 \sin(\omega t + \phi) \exp(-\lambda t) + B(t),
\end{equation}
where
\begin{equation}
B(t) = b_0 + b_1 t + b_2 t^2 + b_3 t^3 + \cdots
\end{equation}
is the trend in the background data. Time is measured from an
initial time $t_0$, which is different for each set of
oscillations we fit. The fits were carried out using
Levenberg-Marquardt least-squares minimization
\citep{Bevington1969}. Generally only two terms in the background
polynomial were necessary.
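A sketch of this fit, applied to synthetic data whose parameters loosely echo the event starting near 142.5 minutes; `scipy.optimize.curve_fit` uses Levenberg-Marquardt by default for unbounded problems.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A0, P, phi, lam, b0, b1):
    """Equation (1) with a two-term polynomial background B(t) = b0 + b1*t."""
    return A0 * np.sin(2.0 * np.pi * t / P + phi) * np.exp(-lam * t) + b0 + b1 * t

# Synthetic Doppler series (amplitudes, period, and decay time illustrative)
rng = np.random.default_rng(3)
t = np.arange(0.0, 30.0, 0.75)                   # minutes after t0
true = (2.2, 8.9, 2.1, 1.0 / 23.3, 0.3, -0.01)
v = damped_sine(t, *true) + rng.normal(0.0, 0.2, t.size)

# curve_fit defaults to Levenberg-Marquardt when no bounds are given
p0 = (2.0, 9.0, 2.0, 0.05, 0.0, 0.0)
popt, pcov = curve_fit(damped_sine, t, v, p0=p0)
A0_fit, P_fit, phi_fit, lam_fit = popt[:4]
```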
\begin{figure}
\plotone{f7.eps}
\caption{Periodograms computed for the Doppler shift data shown
in Figure~\ref{fig:doppler_average}. The solid horizontal line
on each plot indicates the power equivalent to a calculated
false alarm probability of 1\% and the dashed line is for a
false alarm probability of 5\%.}
\label{fig:pgram_v}
\end{figure}
\begin{figure}
\plotone{f8.eps}
\caption{Decaying sine wave fits to the Doppler shift data
beginning roughly 142 minutes after the start of the
sit-and-stare observation. The plotted data have had the
polynomial background removed. Vertical dashed lines show the
time range used in the fitting.}
\label{fig:fit_fig_v}
\end{figure}
Figure~\ref{fig:fit_fig_v} shows the results of this fitting for
the Doppler shift data beginning 142.5 minutes after the start
of the sit-and-stare observation. All the fits show roughly the
same amplitudes, periods, and phases. Emission lines formed at
the higher temperatures (\ion{Fe}{14} and \ion{Fe}{15}) show
clear evidence for more than one full oscillation period. At lower
temperatures, the oscillatory signal damps much more rapidly.
Table~\ref{table:oscillation_d_fits} lists the amplitudes, $A_0$,
periods, $P$, phases, $\phi$, and the inverse of the decay rate,
$\lambda$, that result from fitting all the time periods in the
data for which a reasonable fit to the Doppler shift data could
be obtained. The periods are consistent with the results of the
periodogram analysis for the entire time interval. Generally, the
amplitudes are larger than the $\delta v$ values shown in
Table~\ref{table:doppler_periods}. Note that some of the fits
show negative decay times, indicating that some of the
oscillations show a tendency to grow with time. In these cases,
this is not followed by a decay, but rather a rapid loss of the
oscillatory signal.
All the fits use the start times $t_0$ listed in the table and
thus the phase values for each time interval cannot be directly
compared. When the phases are adjusted to a common start time,
the values do not agree. Thus while the periods are similar for
each time interval in which oscillations are observed, it appears
that each event is being independently excited.
\begin{deluxetable}{llcccc}
\tablecaption{Doppler Shift Oscillation
Properties\label{table:oscillation_d_fits}} \tablewidth{0pt}
\tablehead{
\colhead{$t_0$} & \colhead{} &
\colhead{$A_0$} & \colhead{$P$} & \colhead{$\phi$} &
\colhead{$\lambda^{-1}$} \\
\colhead{(min)} & \colhead{Ion} & \colhead{(km s$^{-1}$)} &
\colhead{(min)} & \colhead{(rad)} & \colhead{(min)}
} \startdata
113.9 & \ion{Fe}{11} & 0.9 & \phn 8.8 & 1.8 & \phs \phn \phn 19.0 \\
113.9 & \ion{Fe}{12} & 0.7 & \phn 9.2 & 2.3 & $-216$ \\
113.9 & \ion{Fe}{13} & 0.7 & \phn 8.8 & 2.3 & \phn \phn $-73.3$ \\
113.9 & \ion{Fe}{14} & 0.3 & 10.7 & 3.9 & \phn \phn $-10.9$ \\
113.9 & \ion{Fe}{15} & 0.5 & \phn 9.5 & 2.9 & \phn \phn $-24.5$ \\
142.5 & \ion{Fe}{11} & 2.4 & 10.4 & 1.0 & \phs \phn \phn 5.1 \\
142.5 & \ion{Fe}{12} & 2.0 & \phn 8.4 & 0.9 & \phs \phn \phn 6.6 \\
142.5 & \ion{Fe}{13} & 1.6 & \phn 8.2 & 1.2 & \phs \phn \phn 15.0 \\
142.5 & \ion{Fe}{14} & 2.0 & \phn 8.7 & 1.9 & \phs \phn \phn 23.9 \\
142.5 & \ion{Fe}{15} & 2.2 & \phn 8.9 & 2.1 & \phs \phn \phn 23.3 \\
190.0 & \ion{Fe}{11} & 0.3 & \phn 8.8 & 4.7 & $-411$ \\
190.0 & \ion{Fe}{12} & 0.8 & \phn 7.5 & 3.5 & \phs \phn \phn 36.8 \\
190.0 & \ion{Fe}{13} & 1.2 & \phn 7.3 & 3.1 & \phs \phn \phn 36.3 \\
190.0 & \ion{Fe}{14} & 1.5 & \phn 7.3 & 3.2 & \phs \phn \phn 43.6 \\
190.0 & \ion{Fe}{15} & 2.2 & \phn 7.5 & 3.4 & \phs \phn \phn 12.8 \\
\enddata
\end{deluxetable}
Examining the amplitude of the oscillations in each time interval
as a function of the temperature of line formation shows no clear
trend. The data set starting at 113.9 minutes seems to show
evidence for a decrease in amplitude with increasing temperature,
while the data set beginning at 190.0 minutes shows the opposite
trend. Similarly, there is no clear trend in the periods of the
oscillations in each data set as a function of temperature.
\subsection{Intensity Oscillations}
If the observed Doppler shift oscillations are acoustic in
nature, then they should also be visible in the intensity data.
For a linear sound wave $v = c_{\textrm{s}} \delta \rho/\rho$,
where $v$ is the amplitude of the wave, $c_{\textrm{s}}$ is the
sound speed, and $\delta \rho$ is the density perturbation on the
background density $\rho$. Taking an amplitude of 2 km s$^{-1}$
yields values of $\delta \rho/\rho$ of around 1\%. Since the
intensity fluctuation, $\delta I/I$, is proportional to $2 \delta
\rho/\rho$, we expect only about a 2\% fluctuation in the
measured intensity. This number could of course increase if the
actual velocity is much larger due to a large difference between
the line-of-sight and the direction of the coronal structure
being measured. Figure~\ref{fig:intensity_average} shows the
measured intensity data averaged over the same locations as the
Doppler shift data shown in Figure~\ref{fig:doppler_average}. The
data show little or no evidence for oscillations with the periods
measured in the Doppler shift data. This is borne out by a
periodogram analysis of the time series in the figure, which show
no significant peaks.
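The $\sim$2\% estimate can be checked numerically; a ratio of specific heats $\gamma = 5/3$ and a mean molecular weight $\mu = 0.6$ are assumed here, since they are not stated in the text.

```python
import numpy as np

# Adiabatic sound speed for a fully ionized coronal plasma; gamma = 5/3 and a
# mean molecular weight mu = 0.6 are assumptions, not values from the paper
k_B, m_p = 1.381e-23, 1.673e-27                  # J/K, kg
gamma, mu = 5.0 / 3.0, 0.6

def c_s(T):
    """Sound speed in km/s for a temperature T in K."""
    return np.sqrt(gamma * k_B * T / (mu * m_p)) / 1.0e3

T = 10.0 ** 6.2                                  # ~Fe XIII formation temperature
v_amp = 2.0                                      # km/s, observed Doppler amplitude
drho_over_rho = v_amp / c_s(T)                   # ~1% density perturbation
dI_over_I = 2.0 * drho_over_rho                  # ~2% expected intensity change
```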
If, however, we detrend the data by subtracting the gradually
evolving background signal, there is some evidence for an
oscillatory signal. Figure~\ref{fig:detrend_i} shows the data in
Figure~\ref{fig:intensity_average} with a background consisting
of a 10-minute average of the data centered on each data point
subtracted. All the emission lines show some evidence for an
oscillatory signal, with the \ion{Fe}{13} 202.04~\AA\ emission
line being the most obvious.
\begin{figure}[b]
\plotone{f9.eps}
\caption{Normalized intensity data averaged over the 16 detector
rows from $-200$\arcsec\ to $-215$\arcsec. The emission lines
shown from top to bottom are \ion{Fe}{11} 188.23~\AA,
\ion{Fe}{12} 195.12~\AA, \ion{Fe}{13} 202.04~\AA, \ion{Fe}{14}
274.20~\AA, and \ion{Fe}{15} 284.16~\AA. Time is measured in
minutes from the start of the sit-and-stare data at 18:13:06
UT. Data for each emission line are displaced from the next
set by 0.2.}
\label{fig:intensity_average}
\end{figure}
\begin{figure}
\plotone{f10.eps}
\caption{Detrended intensity data averaged over the 16 detector
rows from $-200$\arcsec\ to $-215$\arcsec. The emission lines
shown from top to bottom are \ion{Fe}{11} 188.23~\AA,
\ion{Fe}{12} 195.12~\AA, \ion{Fe}{13} 202.04~\AA, \ion{Fe}{14}
274.20~\AA, and \ion{Fe}{15} 284.16~\AA. Time is measured in
minutes from the start of the sit-and-stare data at 18:13:06
UT. Data for each emission line are displaced from the next
set by 100 ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$.}
\label{fig:detrend_i}
\end{figure}
\begin{figure}
\plotone{f11.eps}
\caption{Periodograms computed for the detrended intensity data
shown in normalized form in Figure~\ref{fig:intensity_average}.
The solid horizontal line on each plot indicates the power
equivalent to a calculated false alarm probability of 1\% and
the dashed line is for a false alarm probability of 5\%.}
\label{fig:pgram_i}
\end{figure}
\begin{figure}
\plotone{f12.eps}
\caption{Decaying sine wave fits to the detrended intensity data
beginning roughly 138 minutes after the start of the
sit-and-stare observation. The intensities have been converted
to residual intensities by taking the difference between the
intensity at each data point and the running mean intensity and
dividing it by the running mean. The plotted data have also had
the polynomial background used in the fit removed. Vertical
dashed lines show the time range used in the fitting. Also
plotted as green curves on the panels for the \ion{Fe}{14} and
\ion{Fe}{15} intensity data are the fitting results for the
\ion{Fe}{14} and \ion{Fe}{15} Doppler shift data. Those plots
show $-v(t)$.}
\label{fig:fit_fig_i}
\end{figure}
Figure~\ref{fig:pgram_i} shows periodograms constructed for the
emission lines shown in Figure~\ref{fig:detrend_i}. Each
periodogram shows a significant peak. The periods for the
strongest peak in each periodogram are listed in
Table~\ref{table:doppler_periods}. The periods are generally
consistent with those determined from the Doppler shift data.
Also listed in the table is an estimate of the intensity
fluctuation in each emission line. This was obtained by computing
the standard deviation of the detrended intensity in the time
series for each emission line and then dividing the result by the
average intensity. The values are roughly consistent with those
expected based on the $\delta v$ estimates listed in
Table~\ref{table:doppler_periods}.
While the oscillatory signal is much less strong in the detrended
intensity data than in the Doppler shift data, it is possible to
fit some of the data in roughly the same time intervals that were
used for the results listed in
Table~\ref{table:oscillation_d_fits}.
Table~\ref{table:oscillation_i_fits} shows the resulting fit
information. Note that intensity oscillation fits were not
possible for all the lines for which the Doppler shift data could
be fitted. Figure~\ref{fig:fit_fig_i} shows an example of the
fits to the detrended intensity data. For these plots the
intensity has been converted to a residual intensity expressed in
\% by taking the difference between each data point and the
10-minute average and dividing it by the 10-minute average. The
plotted points have also had the polynomial background
subtracted. To facilitate comparisons with the Doppler shift data
we have also plotted as green curves on the panels for the
\ion{Fe}{14} and \ion{Fe}{15} intensity data the fitting results
for the \ion{Fe}{14} and \ion{Fe}{15} Doppler shift data.
Comparison of the fits in Figure~\ref{fig:fit_fig_i} with those
in Figure~\ref{fig:fit_fig_v} shows some similarities and many
differences between the two data sets. For the lines with a
reasonably strong signal, the periods measured for the residual
intensity data are similar to those measured for the Doppler
shifts. In addition, the residual intensities show the same trend
to larger amplitudes as the temperature of line formation
increases. The intensity oscillations clearly start earlier than
the Doppler shift oscillations. Moreover, while the Doppler shift
oscillations are damped, the intensity oscillations appear to
grow with time over the same interval. This appears to be the
case for the other time intervals as well. Fitting the detrended
intensity signal is more challenging than fitting the Doppler
shift signal. Also, implicit in the fitting is the idea that a
damped sine wave can fully represent what is probably a more
complex signal. Thus, we are reluctant to read too much into this
growth in the intensity until we can determine from additional
data sets if it is a common phenomenon.
\begin{deluxetable}{llcccc}
\tablecaption{Intensity Oscillation
Properties\label{table:oscillation_i_fits}} \tablewidth{0pt}
\tablehead{
\colhead{$t_0$} & \colhead{} &
\colhead{$A_0$} & \colhead{$P$} & \colhead{$\phi$} &
\colhead{$\lambda^{-1}$} \\
\colhead{(min)} & \colhead{Ion} &
\colhead{(\%)} &
\colhead{(min)} & \colhead{(rad)} & \colhead{(min)}
} \startdata
113.9 & \ion{Fe}{12} & 2.1 & \phn 8.0 & 4.2 & \phs \phn 8.2 \\
113.9 & \ion{Fe}{13} & 1.0 & 12.3 & 4.4 & \phs 37.6 \\
137.7 & \ion{Fe}{11} & 0.2 & 12.7 & 0.0 & $-11.4$ \\
137.7 & \ion{Fe}{12} & 0.4 & 10.3 & 3.9 & $-17.8$ \\
137.7 & \ion{Fe}{13} & 1.1 & \phn 9.9 & 3.5 & $-50.2$ \\
137.7 & \ion{Fe}{14} & 2.1 & \phn 9.3 & 3.0 & $-41.8$ \\
137.7 & \ion{Fe}{15} & 2.6 & \phn 9.3 & 3.0 & $-48.6$ \\
190.0 & \ion{Fe}{11} & 0.5 & \phn 8.8 & 5.0 & $-81.1$ \\
190.0 & \ion{Fe}{12} & 0.4 & 13.7 & 3.7 & $-32.6$ \\
190.0 & \ion{Fe}{13} & 1.1 & 13.5 & 3.8 & $-83.6$ \\
190.0 & \ion{Fe}{14} & 0.8 & 12.8 & 2.5 & $-20.3$ \\
190.0 & \ion{Fe}{15} & 1.3 & \phn 6.9 & 5.6 & $-22.8$ \\
\enddata
\end{deluxetable}
An important factor in determining the nature of the oscillations
is the phase difference between the Doppler shift signal and the
intensity signal. Comparing the phases of the fits listed in
Table~\ref{table:oscillation_d_fits} with those in
Table~\ref{table:oscillation_i_fits} is difficult because the
periods are not identical, but, since the periods are close, the
differences do not significantly alter any conclusions that we
might draw. To facilitate this comparison we have plotted as
green curves on the panels for the \ion{Fe}{14} and \ion{Fe}{15}
intensity data in Figure~\ref{fig:fit_fig_i} the fitting results
for the \ion{Fe}{14} and \ion{Fe}{15} Doppler shift data. For the
Doppler shift data, we plot $-v(t)$. The curves show that for
these two ions, the intensity variations are close to $180\degr$
out of phase with the Doppler shift variations. Since we define
the Doppler shift as $c\, \delta \lambda/\!\lambda$, this means
that the peak intensity corresponds to a blueshift, indicating an
upward propagating wave. For the other two time intervals, the
situation is more ambiguous. Examination of the tables shows that
in many cases, the periods are significantly different for the
same Doppler shift and intensity data in the same line. In those
cases where the periods are close (e.g., \ion{Fe}{12} at
$t_0=113.9$ minutes and \ion{Fe}{15} at $t_0=190.0$ minutes),
examination of the plots similar to Figures~\ref{fig:fit_fig_v}
and \ref{fig:fit_fig_i} shows the same $180\degr$ phase shift,
again indicating upwardly propagating oscillations.
Even for the cases where the periods are close, the agreements in
the phases are only approximate. For the \ion{Fe}{14} and
\ion{Fe}{15} intensity and Doppler shift fits shown in
Figure~\ref{fig:fit_fig_i}, the intensity oscillation leads the
Doppler shift oscillation by a small fraction of a period. For
both the \ion{Fe}{12} data at $t_0=113.9$ minutes and the
\ion{Fe}{15} data at $t_0=190.0$ minutes, the Doppler shift
oscillation leads the intensity oscillation by a fraction of a
period. In both cases, this difference is less than the $1/4$
period expected for a standing-mode MHD wave \citep{Sakurai2002}.
\citet{Wang2009} observed propagating waves with periods in the
four to six minute range in EIS active region observations. They
noted that for most cases the Doppler shift and intensity
oscillations were nearly in phase. In the cases where there was a
difference, the phase of the intensity was earlier than the
Doppler shift, as is the case for the data shown in
Figures~\ref{fig:fit_fig_v} and \ref{fig:fit_fig_i}. Theoretical
modeling of propagating slow waves with periods near five minutes
shows that thermal conduction can produce phase shifts between the
intensity and Doppler shifts \citep{Owen2009}. Further study of
EIS data sets where both the Doppler shift and intensity can be
fitted could provide valuable constraints on these models.
\subsection{Density Oscillations}
The electron density is one factor in determining the Alfv\'{e}n
speed in the oscillating plasma. Moreover, for magnetoacoustic
fluctuations, we expect the density to also oscillate. Thus a
direct measurement can aid in disentangling the nature of the
oscillations. The sit-and-stare observations included
density-sensitive line pairs of \ion{Fe}{12}
(186.88~\AA/195.12~\AA) and \ion{Fe}{13} (203.83~\AA/202.04~\AA).
Using data from version 5.2 of the CHIANTI database
\citep{Landi2006,Dere1997}, we computed the electron density at
each time for the row-averaged data. For \ion{Fe}{12}, CHIANTI
uses energy levels and radiative decay rates from
\citet{DelZanna2005}, electron collision strengths from
\citet{Storey2005}, and proton collision rate coefficients from
\citet{Landman1978}. For \ion{Fe}{13}, CHIANTI uses energy levels
from \citet{Penn1994}, \citet{Jupen1993}, and version 1.0 of the
NIST database; radiative decay rates from \citet{Young2004},
electron collision strengths from \citet{Gupta1998}, and proton
collision rate coefficients from \citet{Landman1975}. These
diagnostics are discussed in detail in \citet{Young2009}.
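The mechanics of such a diagnostic are simple: a density-sensitive line ratio is inverted against a precomputed, monotonic ratio-versus-density curve. The sketch below illustrates only the inversion step; the numbers in the curve are invented for the example and are not CHIANTI values.

```python
import numpy as np

# Illustrative ratio-versus-density curve for a density-sensitive pair;
# in practice the curve is computed from CHIANTI, not hard-coded.
log_ne = np.array([8.0, 8.5, 9.0, 9.5, 10.0, 10.5])      # log10 n_e [cm^-3]
ratio = np.array([0.05, 0.09, 0.17, 0.30, 0.46, 0.60])   # line intensity ratio

def density_from_ratio(r):
    # invert the monotonic curve by linear interpolation in log density
    return 10.0 ** np.interp(r, ratio, log_ne)

n_e = density_from_ratio(0.25)   # one measured ratio -> one density
```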
\begin{figure}
\plotone{f13.eps}
\caption{Electron density determined using the \ion{Fe}{12}
186.88~\AA/195.12~\AA\ ratio (top) and the \ion{Fe}{13}
203.83~\AA/202.04~\AA\ ratio (bottom) for the data beginning
roughly 138 minutes after the start of the sit-and-stare
observation.}
\label{fig:density}
\end{figure}
Figure~\ref{fig:density} shows the derived electron densities as
a function of time for the same time interval shown in
Figures~\ref{fig:fit_fig_v} and \ref{fig:fit_fig_i}. Both sets of
derived densities show the same overall time behavior, but the
absolute values differ by nearly a factor of three. Differences
between these diagnostics have been noted before
\citep[e.g.,][]{Young2009}, and are thought to be due to issues
with the atomic data. It is not yet clear which of the values
should be considered the most reliable.
\begin{figure}
\plotone{f14.eps}
\caption{Decaying sine wave fits to the smoothed electron density
determined using the \ion{Fe}{12} 186.88~\AA/195.12~\AA\ ratio
(top) and the \ion{Fe}{13} 203.83~\AA/202.04~\AA\ ratio
(bottom) for the data beginning 190 minutes after the start of
the sit-and-stare observation. The plotted data have had the
polynomial background removed. Vertical dashed lines
show the time range used in the fitting.}
\label{fig:density_det_fit}
\end{figure}
Neither of the density time series shown in the figure displays
any evidence for the oscillations detected in the detrended
intensity data. If we smooth the density time series with a
10-minute running mean, there is some evidence for oscillatory
behavior over the time range beginning at 190 minutes. Decaying
sine wave fits to that region are shown in
Figure~\ref{fig:density_det_fit}. For the \ion{Fe}{12} time
series the fit has an amplitude of $1.9 \times 10^7$~cm$^{-3}$, a
period of 13.5 minutes, a decay time of $-46.4$ minutes, and a
phase of 3.5 radians, all consistent with the values listed for
the \ion{Fe}{12} data in Table~\ref{table:oscillation_i_fits} for
this time interval. For the \ion{Fe}{13} time series the fit has
an amplitude of $5.1 \times 10^6$~cm$^{-3}$, a period of 10.9
minutes, a decay time of $-25.1$ minutes, and a phase of 1.9
radians. With the exception of the phase, these values are
generally consistent with the \ion{Fe}{13} data in
Table~\ref{table:oscillation_i_fits} for this time interval. The
amplitudes for the \ion{Fe}{12} and \ion{Fe}{13} fits are 0.5\%
and 0.4\% of the average density in the time interval. Since we
expect $\delta I/I \approx 2\,\delta \rho/\rho$ for optically thin
emission, these amplitudes are consistent with the observed
intensity fluctuations of about 1\%.
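The consistency argument above is simple arithmetic. In the sketch below the average densities are inferred from the quoted 0.5\% and 0.4\% relative amplitudes (they are not independently measured inputs), and the optically thin scaling $I \propto n_e^2$ gives $\delta I/I \approx 2\,\delta\rho/\rho$.

```python
# For optically thin emission I is proportional to n_e^2, so dI/I ~ 2*dn/n.
amp_fe12 = 1.9e7                 # fitted Fe XII density amplitude [cm^-3]
amp_fe13 = 5.1e6                 # fitted Fe XIII density amplitude [cm^-3]
navg_fe12 = amp_fe12 / 0.005     # average density implied by the 0.5% figure
navg_fe13 = amp_fe13 / 0.004     # average density implied by the 0.4% figure

dI_fe12 = 2.0 * amp_fe12 / navg_fe12   # predicted relative intensity amplitude
dI_fe13 = 2.0 * amp_fe13 / navg_fe13   # both come out near 1%
```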
\subsection{Underlying Chromospheric Behavior}
As we pointed out earlier, SOT obtained a time sequence of
\ion{Ca}{2} H images that is co-temporal with the EIS
sit-and-stare observations starting at 19:50 UT and ending at
21:40 UT, with a constant cadence of 60~s. To further use this
data, we applied the standard reduction procedure provided by the
SOT team that is available in SolarSoft. The images of the
\ion{Ca}{2} H sequence were then carefully aligned using Fourier
cross-correlation techniques to remove residual jitter and the
drift of the SOT correlation tracker.
As can be seen in the lower left panel of
Figure~\ref{fig:coalign}, the EIS slit covers the network bright
point, which is the focus of the chromospheric analysis.
Considering the accuracy of the coalignment and the spatial
averaging applied to the EIS data, we also average the
\ion{Ca}{2} H signal.
Figure~\ref{fig:ca_i} shows the time history of the \ion{Ca}{2}
H-line intensity for three different sized spatial areas centered
on the feature shown in Figure~\ref{fig:coalign}. The sizes of
the regions are given in SOT pixels, which are $0.10896$\arcsec\
in size. As expected for chromospheric lines, all three averaged
data sets show evidence for intensity oscillations with a period
near 5 minutes. The data, however, are quite noisy and
periodograms constructed from them show no significant peaks.
Detrending the data, however, results in periodograms with
significant peaks. Figure~\ref{fig:ca_periodograms} shows
periodograms for the three data sets with each detrended by
subtracting a 9-minute running mean from each data point. The
periodograms show clear evidence for oscillations near 5 minutes.
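The detrend-then-periodogram procedure can be sketched as follows. This is a minimal illustration using a plain FFT periodogram on synthetic data of our own construction; it omits the false-alarm-probability calculation used in the actual analysis.

```python
import numpy as np

def detrend_running_mean(y, window):
    # subtract a centered running mean; window is in samples
    trend = np.convolve(y, np.ones(window) / window, mode='same')
    return y - trend

# synthetic Ca II-like series: 1-minute cadence, 5-minute oscillation on a trend
dt = 1.0                                   # minutes per sample
t = np.arange(0.0, 120.0, dt)
rng = np.random.default_rng(1)
y = 0.02 * t + np.sin(2.0 * np.pi * t / 5.0) + 0.2 * rng.normal(size=t.size)

yd = detrend_running_mean(y, 9)            # 9-minute running mean
power = np.abs(np.fft.rfft(yd)) ** 2
freq = np.fft.rfftfreq(t.size, d=dt)       # cycles per minute
peak_period = 1.0 / freq[np.argmax(power[1:]) + 1]   # skip the zero frequency
```

The running mean suppresses the slow trend while leaving the 5-minute oscillation largely intact, so the periodogram peak lands near the injected period.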
\begin{figure}
\plotone{f15.eps}
\caption{\ion{Ca}{2} H intensity data averaged over different
spatial areas centered on the magnetic feature highlighted in
Figure~\ref{fig:coalign}. Each data set has had the mean value
subtracted. Spatial areas are given in SOT pixels, which are
0.10896\arcsec\ in size, giving areas of $2.18\arcsec \times
2.18\arcsec$, $3.77\arcsec \times 3.77\arcsec$, and
$4.36\arcsec \times 4.36\arcsec$, respectively.}
\label{fig:ca_i}
\end{figure}
\begin{figure}
\plotone{f16.eps}
\caption{Periodograms computed for the detrended \ion{Ca}{2} H
intensity data. The size of the area averaged over, in pixels, is
indicated in each plot. The solid horizontal line on each plot
indicates the power equivalent to a calculated false alarm
probability of 1\% and the dashed line is for a false alarm
probability of 5\%. Spatial areas are given in SOT pixels,
which are 0.10896\arcsec\ in size, giving areas of $2.18\arcsec
\times 2.18\arcsec$, $3.77\arcsec \times 3.77\arcsec$, and
$4.36\arcsec \times 4.36\arcsec$, respectively.}
\label{fig:ca_periodograms}
\end{figure}
There is also some evidence for power at periods between 9 and 10
minutes, but with more than a 5\% false alarm probability. If we
instead detrend the data with an 11-minute running mean, then the
peak between 9 and 10 minutes becomes more prominent and is at or
above the 5\% false alarm probability level for the
$30~\mathrm{pixel} \times 30~\mathrm{pixel}$ and
$20~\mathrm{pixel} \times 20~\mathrm{pixel}$ data sets. Wavelet
analysis of the data sets detrended with both a 9- and 11-minute
running mean shows significant signal near 9 minutes.
Note that the data in Figure~\ref{fig:ca_i} shows three larger
peaks in all three data sets. The first peak, near 110 minutes,
occurs before the beginning of the EIS data plotted in
Figures~\ref{fig:doppler_average} and \ref{fig:detrend_i}. The
other two, at roughly 140 and 170 minutes, come just before
significant oscillatory signal is observed in the EIS Doppler
shift and detrended intensity data. It is tempting to suggest
that these enhancements correspond to chromospheric events that
resulted in the oscillations observed with EIS.
In principle the EIS \ion{He}{2} 256.32~\AA\ data can bridge the
gap in temperature between the SOT \ion{Ca}{2} data and the Fe
lines formed at higher temperatures that are the main focus of
this study. In practice the \ion{He}{2} data are challenging to
analyze. The line is closely blended with a \ion{Si}{10} line at
256.37~\AA\ along with a smaller contribution from \ion{Fe}{10}
and \ion{Fe}{13} ions at slightly longer wavelengths
\citep{Brown2008}. In an effort to see if a connection can be
made, we have made two-component Gaussian fits to the
row-averaged \ion{He}{2} data. Periodograms of the resulting
Doppler shift data show no significant periods. Periodograms of
the \ion{He}{2} fitted intensities detrended with a 10-minute
running mean show no peaks at the 1\% false alarm probability level
and one peak with a period between 12 and 13 minutes at the 5\%
false alarm probability level. Examining the detrended
\ion{He}{2} intensity data, we do not see a peak in the data at
140 minutes, but do see a significant increase at 170 minutes.
Thus the \ion{He}{2} data only weakly support our suggestion that
the enhancements seen in the \ion{Ca}{2} data correspond to
chromospheric events that result in the oscillations observed
with EIS in the Fe lines.
\section{DISCUSSION AND CONCLUSIONS}
As we pointed out in \S\ref{intro}, a number of investigations
have detected Doppler shift oscillations with EIS. Based on the
phase differences between the Doppler shift and the intensity, we
believe that the signals we have detected are upwardly
propagating magnetoacoustic waves. The periods we detect are
between 7 and 14 minutes. For a set of observations that begins
at a particular time, there is considerable scatter in the
measured periods and amplitudes. This is probably due to the
relatively weak signal we are analyzing. But it may also be an
indication that a simple sine wave fit is not a good
representation of the data. It is likely that each line-of-sight
passes through a complex, time-dependent dynamical system. While
a single flux tube may respond to an oscillatory signal by
exhibiting a damped sine wave in the Doppler shift, a more
complex line-of-sight may display a superposition of waves.
Coalignment of the EIS data with both SOT and MDI magnetograms
shows that the portion of the EIS slit analyzed in this study
corresponds to a unipolar flux concentration. SOT \ion{Ca}{2}
images show that the intensity of this feature exhibits 5-minute
oscillations typical of chromospheric plasma, but also exhibits
some evidence for longer period oscillations in the time range
detected by EIS. Moreover, the \ion{Ca}{2} intensity data show
that the oscillations observed in EIS are related to significant
enhancements in the \ion{Ca}{2} intensities, suggesting that a
small chromospheric heating event triggered the observed EIS
response.
\citet{Wang2009} also detected propagating slow magnetoacoustic
waves in an active region observed with EIS, which they
associated with the footpoint of a coronal loop. While the
oscillation periods they measured---5 minutes---were smaller than
those detected here, many of the overall characteristics we see
are the same. In each case, the oscillation only persists for a
few cycles and the phase relationship indicates an upwardly
propagating wave. In contrast with their results, however, we do
not see a consistent trend for the oscillation amplitude to
decrease with increasing temperature of line formation.
Examination of both the Doppler shift data in
Figure~\ref{fig:doppler_average} and the results in
Table~\ref{table:oscillation_d_fits} shows that in one case the
amplitude has a tendency to decrease with increasing temperature
of line formation (oscillation beginning at 113.9 minutes) and in
another case the amplitude clearly increases with increasing
temperature of line formation (oscillation beginning at 190
minutes). Thus it does not appear that the results reported by
\citet{Wang2009} are always the case. \citet{O'Shea2002} noted
that for oscillations observed above a sunspot the amplitude
decreased with increasing temperature until the temperature of
formation of \ion{Mg}{10}, which is formed at roughly 1~MK. They
then saw an increase in amplitude in emission from \ion{Fe}{16}.
All the EIS lines we have included in this study have
temperatures of formation greater than 1~MK.
Combined EIS and SUMER polar coronal hole observations have also
shown evidence for propagating slow magnetoacoustic waves
\citep{Banerjee2009}. These waves have periods in the 10--30
minute range and thus appear to be more like those we observe,
with periods longer than those studied by \citet{Wang2009}.
\citet{Wang2009} suggested that the waves they observed were the
result of leakage of photospheric p-mode oscillations upward into
the corona. The longer periods we and \citet{Banerjee2009}
observe are probably not related to p-modes. Instead, we
speculate that the periods of the waves are related to the
impulsive heating that may be producing them. If an instability
sits near the threshold where, rather than generating a
catastrophic release of energy, it wanders back and forth between
heating and shutting off, waves would be created. The heating
source could be at a single location, or, for example, locations
near each other where instability in one place causes a second
nearby location to go unstable and begin heating plasma. In this
view, the periods provide some insight into the timescale for the
heating to rise and fall and thus may be able to place limits on
possible heating mechanisms.
The behavior of slow magnetoacoustic oscillations as a function
of temperature has been the subject of considerable theoretical
work. It is generally believed that the damping of the waves is
due to thermal conduction
\citep[e.g.,][]{DeMoortel2003,Klimchuk2004}. Because thermal
conduction scales as a high power of the temperature, conductive
damping should be stronger for oscillations detected in higher
temperature emission lines \citep[e.g.,][]{Porter1994,Ofman2002}.
Earlier EIS observations of the damping of standing slow
magnetoacoustic waves, however, show that this is not always the
case \citep{Mariska2008}. Thus the temperature behavior of both
the oscillation amplitude and the damping differs from some
earlier results. We believe that additional observations
will be required to understand fully the physical picture of what
is occurring in the low corona when oscillations are observed.
Given the complex set of structures that may be in the line of
sight to any given solar location under the EIS slit, we are not
entirely surprised that different data sets should yield
different results, which in some cases differ from models. For
example, none of the current models for oscillations in the outer
layers of the solar atmosphere take into account the possibility
that what appear to be single structures in the data might
actually be bundles of threads with differing physical
conditions.
Our observations along with others
\citep[e.g.,][]{Wang2009,Wang2009a,Banerjee2009} show that
low-amplitude upwardly propagating slow magnetoacoustic waves are
not uncommon in the low corona. The periods observed to date
range from 5 minutes to 30 minutes. In all cases, however, the
wave amplitudes are too small to contribute significantly to
coronal heating. But understanding how the waves are generated
and behave as a function of line formation temperature and the
structure of the magnetic field should lead to a more complete
understanding of the structure of the low corona and its
connection with the underlying portions of the atmosphere.
Instruments like those on \textit{Hinode} that can simultaneously
observe both the chromosphere and the corona, should provide
valuable additional insight into these waves as the new solar
cycle rises and more active regions become available for study.
\acknowledgments \textit{Hinode} is a Japanese mission developed,
launched, and operated by ISAS/JAXA in partnership with NAOJ,
NASA, and STFC (UK). Additional operational support is provided
by ESA and NSC (Norway). The authors acknowledge support from the
NASA \textit{Hinode} program. CHIANTI is a collaborative project
involving NRL (USA), RAL (UK), MSSL (UK), the Universities of
Florence (Italy) and Cambridge (UK), and George Mason University
(USA). We thank the anonymous referee for his or her very helpful
comments.
\bibliographystyle{apj}
\section{Introduction}
The main goal of the time scales approach is a unification of the differential and difference calculus \cite{Hi,Hi2,ABOP}.
There are many papers and books where notions and notation concerning time scales are explained in detail \cite{Hi,ABOP,BP-I}, see also \cite{BG-Lap}.
A time scale $\T$ is defined as an arbitrary closed subset of $\R$ \cite{Hi,ABOP}. The forward jump operator $\sigma$ is defined as $\sigma (t) = \inf \{ s \in \T : s > t \}$ (and we assume $\inf \emptyset = \sup \T$). We usually denote $t^\sigma := \sigma (t)$.
A point $t \in \T$ is called right-dense iff $t^\sigma = t$ and right-scattered iff $t^\sigma > t$. Similarly one can define the backward jump operator $\rho$, left-dense points and left-scattered points \cite{Hi}. Sometimes we have to exclude from consideration the left-scattered maximum of $\T$ (if it exists); $\T$ minus the left-scattered maximum will be denoted by $\T^\kappa$ (if such a maximum does not exist, then $\T^\kappa = \T$).
{\it Graininess} \ $\mu = \mu (t)$ is defined, for $t \in \T^\kappa$, as $\mu (t) = t^\sigma - t$.
The delta derivative (an analogue of the derivative for functions defined on time scales) is defined by
\begin{equation} \label{delta}
x^\Delta (t) := \lim_{\stackrel{\displaystyle s\rightarrow t}{s \neq \sigma (t)}} \frac{ x (\sigma (t) ) - x (s) }{\sigma (t) - s } \ , \qquad (t \in \T^\kappa) \ .
\end{equation}
If $t$ is right-dense, then $x^\Delta (t) = \dot x (t)$. If $t$ is right-scattered, then $x^\Delta$ is the corresponding difference quotient.
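As a toy illustration of our own: on the uniform time scale $\T = \varepsilon\Z$ every point is right-scattered with $\sigma(t) = t + \varepsilon$, so \rf{delta} reduces to a forward difference quotient.

```python
# Delta derivative on T = eps*Z: sigma(t) = t + eps, so the definition
# reduces to the forward difference quotient at every (right-scattered) point.
def delta_derivative(x, t, eps):
    return (x(t + eps) - x(t)) / eps

eps = 0.1
d = delta_derivative(lambda u: u ** 2, 1.0, eps)   # ((1.1)^2 - 1^2) / 0.1
```

For $x(t) = t^2$ this gives $2t + \varepsilon$, which tends to the ordinary derivative $2t$ as $\varepsilon \to 0$.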
An important notion is rd-continuity. A function is said to be right-dense continuous (rd-continuous) if it is continuous at right-dense points and has finite left-hand limits at left-dense points.
The graininess $\mu$ is rd-continuous but, in general, is not continuous.
{\it Dynamic equations} \ are time scales analogues of differential equations (i.e., they may contain delta derivatives, jump operators and sometimes the graininess of the considered time scale) \cite{ABOP,BP-I}.
In this paper we consider the problem of defining elementary (or special) functions on time scales. We propose a new approach to exponential, hyperbolic and trigonometric functions.
We point out that the definition of such functions is not unique.
We suggest new definitions (improved exponential, hyperbolic and trigonometric functions) based on the Cayley transformation. The new functions preserve more properties of their continuous counterparts in comparison to the existing definitions. In particular, our exponential function maps the imaginary axis into the unit circle and trigonometric functions satisfy the Pythagorean identity. Dynamic equations satisfied by our improved functions have a natural similarity to the corresponding differential equations.
We also propose the notion of {\it exact} time scales analogues of special functions and we identify Hilger's definition of trigonometric functions \cite{Hi-spec} with exact analogues of these functions. Exact discretizations of differential equations \cite{Po,Mic,Ag} are intended as a way to connect the smooth and discrete cases, but in an apparently different way than the time scales calculus. In this paper we incorporate the exact discretizations in the framework of the time scales approach.
We discuss exact exponential, hyperbolic and trigonometric functions on constant time scales. An extension of these results to more general time scales seems to be rather difficult.
\section{Survey of existing definitions}
The exponential function on time scales has been introduced by Hilger \cite{Hi}, and his definition seems to be commonly accepted \cite{ABOP,BP-I}.
The situation is different in the case of hyperbolic and trigonometric functions, where
Hilger's approach \cite{Hi-spec} differs from Bohner-Peterson's approach \cite{BP12}.
\subsection{Exponential function}
Hilger defined the exponential function as follows:
\begin{equation} \label{exp-Hi}
e_\alpha (t, \tau) := \exp \left( \int_\tau^t \xi_{\mu (s)} ( \alpha (s)) \Delta s \right) \ , \qquad
e_\alpha (t) := e_\alpha (t, 0) \ ,
\end{equation}
where
\begin{equation}
\xi_h (z) := \frac{1}{h} \log (1 + z h) \quad ({\rm for} \ \ h > 0) \quad {\rm and} \ \ \xi_0 (z) := z \ .
\end{equation}
This definition applies to the so-called $\mu$-{\it regressive} functions $\alpha = \alpha (t)$, i.e., those satisfying
\begin{equation}
1 + \mu (t) \alpha (t) \neq 0 \quad {\rm for \ all} \quad t\in \T^\kappa \ .
\end{equation}
Such functions are usually called regressive, but we reserve this name for another class of functions, see Definition~\ref{Def-regr}.
In the constant discrete case ($\T = \varepsilon \Z$, $\alpha = {\rm const}$) we have
\begin{equation}
e_\alpha (t) = (1 + \alpha \varepsilon)^{\frac{t}{\varepsilon}} \ ,
\end{equation}
and in the case $\T = \R$ we have
\begin{equation}
e_\alpha (t) = \exp \int_0^t \alpha (\tau) d \tau \ .
\end{equation}
\begin{Th}[\cite{Hi,BP12}]
If $\alpha, \beta : \T \rightarrow \C$ are $\mu$-regressive and rd-continuous, then
the following properties hold:
\begin{enumerate}
\item $e_\alpha (t^\sigma, t_0) = (1 + \mu (t) \alpha (t) ) \ e_\alpha (t, t_0)$ \ ,
\item $( e_\alpha (t, t_0) )^{-1} = e_{\ominus^\mu \alpha} (t, t_0) $ \ ,
\item $e_\alpha (t, t_0) \ e_\alpha (t_0, t_1) = e_\alpha (t, t_1)$ \ ,
\item $e_\alpha (t, t_0) \ e_\beta (t, t_0) = e_{\alpha\oplus^\mu \beta} (t, t_0) $ \ ,
\end{enumerate}
where \ $\alpha \oplus^\mu \beta := \alpha + \beta + \mu \alpha \beta$ \ and \ $\ominus^\mu \alpha := \frac{- \alpha}{1 + \mu \alpha}$.
\end{Th}
The addition $\oplus^\mu$ is usually denoted by $\oplus$. However, we reserve the notation $\oplus$ for another addition, see Definition~\ref{def-oplus}. The exponential function \rf{exp-Hi} solves the Cauchy problem: $x^\Delta = \alpha x$, $x (0) = 1$, see \cite{Hi}.
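These properties are easy to verify numerically in the constant discrete case $\T = \varepsilon\Z$, $\alpha = {\rm const}$, where $e_\alpha(t) = (1+\alpha\varepsilon)^{t/\varepsilon}$. The following sketch (with arbitrarily chosen parameters) checks properties 1 and 4 and the dynamic equation $x^\Delta = \alpha x$.

```python
# Hilger exponential on T = eps*Z with constant alpha.
eps = 0.5

def e(a, t):
    return (1.0 + a * eps) ** (t / eps)

alpha, beta, t = 0.3, -0.2, 4.0

# property 4: e_alpha * e_beta = e_{alpha (+)^mu beta}
oplus_mu = alpha + beta + eps * alpha * beta
prod = e(alpha, t) * e(beta, t)

# property 1: e_alpha(sigma(t)) = (1 + mu*alpha) * e_alpha(t)
shifted = e(alpha, t + eps)

# dynamic equation: x^Delta = alpha * x for x = e_alpha
x_delta = (e(alpha, t + eps) - e(alpha, t)) / eps
```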
Similar considerations (with the delta derivative replaced by the nabla derivative) lead to the
nabla exponential function \cite{ABEPT}. In the case $\T = \varepsilon \Z$ the nabla exponential function is given by
\begin{equation}
{\hat e}_\alpha (t) = (1 - \alpha \varepsilon)^{- \frac{t}{\varepsilon}}
\end{equation}
and solves the Cauchy problem: $x^\nabla = \alpha x$, $x (0) = 1$. The existence of the above two definitions reflects the duality between delta and nabla calculus.
A linear combination of delta and nabla derivatives (the diamond-$\alpha$ derivative \cite{SFHD}) leads to another definition of the exponential function (the so-called diamond-alpha exponential function \cite{MT}).
\subsection{Hilger's approach to hyperbolic and trigonometric functions}
The first approach to hyperbolic functions has been proposed by Hilger \cite{Hi-spec},
\begin{equation} \label{hyp-Hi}
\cosh_\alpha (t) = \frac{e_\alpha (t) + e_{\ominus^\mu\alpha} (t) }{2} \ , \qquad
\sinh_\alpha (t) = \frac{e_\alpha (t) - e_{\ominus^\mu\alpha} (t) }{2} \ ,
\end{equation}
where $\alpha$ is $\mu$-regressive. Among its advantages we have the identity
\begin{equation} \label{Pyt-hyp}
\cosh_\alpha^2 (t) - \sinh_\alpha^2 (t) = 1 \ .
\end{equation}
Delta derivatives of these functions are linear combinations of both hyperbolic functions, e.g.,
\begin{equation}
\cosh_\alpha^\Delta (t) = \frac{\alpha + [\ominus^\mu \alpha]}{2} \cosh_\alpha (t) + \frac{\alpha - [\ominus^\mu \alpha]}{2} \sinh_\alpha (t) \ .
\end{equation}
In the constant discrete case ($\T = \varepsilon \Z$, $\alpha = {\rm const}$) we have
\begin{equation} \begin{array}{l} \displaystyle
\cosh_\alpha (t) =
\frac{ (1 + \alpha \varepsilon)^{\frac{t}{\varepsilon}} + (1 + \alpha \varepsilon)^{-\frac{t}{\varepsilon}} }{2} \ , \\[3ex] \displaystyle
\sinh_\alpha (t) =
\frac{ (1 + \alpha \varepsilon)^{\frac{t}{\varepsilon}} - (1 + \alpha \varepsilon)^{-\frac{t}{\varepsilon}} }{2} \ .
\end{array} \end{equation}
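The identity \rf{Pyt-hyp} follows from $e_{\ominus^\mu\alpha} = 1/e_\alpha$, and a numerical check in this constant discrete case (parameters chosen arbitrarily) confirms it:

```python
# cosh_alpha^2 - sinh_alpha^2 = 1 for Hilger's hyperbolic functions
# on T = eps*Z with constant alpha.
eps, alpha, t = 0.5, 0.3, 3.0

def e(a, t):
    return (1.0 + a * eps) ** (t / eps)

ominus = -alpha / (1.0 + eps * alpha)    # (-)^mu alpha, so e_ominus = 1/e_alpha

ch = (e(alpha, t) + e(ominus, t)) / 2.0
sh = (e(alpha, t) - e(ominus, t)) / 2.0
ident = ch ** 2 - sh ** 2                # equals e_alpha * e_ominus = 1
```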
The hyperbolic functions \rf{hyp-Hi} evaluated at imaginary $\alpha$ are not real. Thus the definition \rf{hyp-Hi} cannot be extended to trigonometric functions by substituting $i \omega$ for $\alpha$.
In order to treat trigonometric functions Hilger introduced another map $\alpha \rightarrow \overset{\circ}{\iota} \omega$, see \cite{Hi-spec}. In the constant case (i.e., $\mu (t) = {\rm const}$ and $\omega = {\rm const}$) the final result is very simple
\begin{equation}
\cos_\omega (t) = \cos \omega t \ , \quad \sin_\omega (t) = \sin\omega t \ .
\end{equation}
In fact, this is a particular case of the {\it exact discretization} (see Section~\ref{sec-exact}), although Hilger's motivation seems to be different. Exact discretizations have many advantages (compare \cite{Ci-oscyl}) but, unfortunately, their delta derivatives are quite complicated. For instance, at right-scattered points we have:
\begin{equation} \label{delta-sin-ex}
(\sin\omega t)^\Delta = \frac{\sin\omega\mu}{\mu} \cos\omega t + \frac{\cos\omega\mu - 1}{\mu} \sin\omega t \ ,
\end{equation}
compare \cite{Hi-spec}, see also Section~\ref{sec-exact}.
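Formula \rf{delta-sin-ex} is the angle-addition identity in disguise, as a quick numerical check on a uniform time scale with graininess $\mu$ shows (values chosen arbitrarily):

```python
import math

# Delta derivative of sin(w*t) on a uniform time scale with graininess mu:
# the forward difference matches
#   (sin(w*mu)/mu) * cos(w*t) + ((cos(w*mu) - 1)/mu) * sin(w*t).
mu, w, t = 0.2, 1.3, 0.7
lhs = (math.sin(w * (t + mu)) - math.sin(w * t)) / mu
rhs = (math.sin(w * mu) / mu) * math.cos(w * t) \
    + ((math.cos(w * mu) - 1.0) / mu) * math.sin(w * t)
```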
\subsection{Bohner-Peterson's approach to hyperbolic and tri\-gonometric functions}
The second approach has been proposed by Bohner and Peterson \cite{BP12,BP-exp}
\begin{equation} \label{hyp-BP}
\begin{array}{l} \displaystyle
\cosh_\alpha (t) = \frac{e_\alpha (t) + e_{-\alpha} (t) }{2} \ ,
\\[2ex] \displaystyle
\sinh_\alpha (t) = \frac{e_\alpha (t) - e_{-\alpha} (t) }{2} \ ,
\end{array} \end{equation}
where $\alpha$ is $\mu$-regressive. The hyperbolic functions defined by \rf{hyp-BP} satisfy
\begin{equation}
\cosh_\alpha^\Delta (t) = \alpha \sinh_\alpha (t) \ , \qquad
\sinh_\alpha^\Delta (t) = \alpha \cosh_\alpha (t) \ .
\end{equation}
The identity \rf{Pyt-hyp} is not valid. Instead we have
\begin{equation} \label{Pyt-BP-hyp}
\cosh_\alpha^2 (t) - \sinh_\alpha^2 (t) = e_{-\mu\alpha^2} (t) \ .
\end{equation}
Bohner and Peterson define trigonometric functions in a natural way, evaluating hyperbolic functions at the imaginary axis
\begin{equation}
\cos_\omega (t) = \cosh_{i \omega} (t) \ , \qquad i \sin_\omega (t) = \sinh_{i \omega} (t) \ .
\end{equation}
Then,
\begin{equation} \label{Pyt-BP-trig}
\cos_\omega^2 (t) + \sin_\omega^2 (t) = e_{\mu\omega^2} (t) \ .
\end{equation}
In the constant discrete case ($\T = \varepsilon \Z$, $\alpha = {\rm const}$, $\omega = {\rm const}$) we have
\begin{equation}
\begin{array}{l} \displaystyle
\cosh_\alpha (t) =
\frac{ (1 + \alpha \varepsilon)^{\frac{t}{\varepsilon}} + (1 - \alpha \varepsilon)^{\frac{t}{\varepsilon}} }{2} \ , \\[2ex] \displaystyle
\sinh_\alpha (t) =
\frac{ (1 + \alpha \varepsilon)^{\frac{t}{\varepsilon}} - (1 - \alpha \varepsilon)^{\frac{t}{\varepsilon}} }{2} \ .
\end{array} \end{equation}
Moreover,
\begin{equation}
e_{-\varepsilon \alpha^2} (t) = (1 - \varepsilon^2 \alpha^2)^{\frac{t}{\varepsilon}} \ , \quad
e_{\varepsilon \omega^2} (t) = (1 + \varepsilon^2 \omega^2)^{\frac{t}{\varepsilon}} \ .
\end{equation}
Therefore,
\begin{equation}
\lim_{t \rightarrow - \infty} e_{-\varepsilon \alpha^2} (t) = \infty \ , \qquad
\lim_{t \rightarrow \infty} e_{-\varepsilon \alpha^2} (t) = 0 \ ,
\end{equation}
provided that $|\alpha \varepsilon| < 1$. Similarly,
\begin{equation}
\lim_{t \rightarrow - \infty} e_{\varepsilon \omega^2} (t) = 0 \ , \qquad
\lim_{t \rightarrow \infty} e_{\varepsilon \omega^2} (t) = \infty \ .
\end{equation}
Therefore, the definition \rf{hyp-BP} leads to Pythagorean-like identities \rf{Pyt-BP-hyp}, \rf{Pyt-BP-trig} which have essentially different behaviour in the discrete and continuous case.
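The contrast with \rf{Pyt-hyp} can also be checked numerically in the constant discrete case (arbitrary parameters): the left-hand side of \rf{Pyt-BP-hyp} equals $e_\alpha e_{-\alpha} = (1-\varepsilon^2\alpha^2)^{t/\varepsilon}$, which decays in $t$ rather than staying equal to one.

```python
# cosh_alpha^2 - sinh_alpha^2 = e_{-eps*alpha^2} for the Bohner-Peterson
# hyperbolic functions on T = eps*Z with constant alpha.
eps, alpha, t = 0.5, 0.3, 4.0

def e(a, t):
    return (1.0 + a * eps) ** (t / eps)

ch = (e(alpha, t) + e(-alpha, t)) / 2.0
sh = (e(alpha, t) - e(-alpha, t)) / 2.0
lhs = ch ** 2 - sh ** 2
rhs = e(-eps * alpha ** 2, t)        # (1 - eps^2*alpha^2)^(t/eps), not 1
```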
\section{Approach motivated by the Cayley transformation}
\label{sec-main}
In this section we present new definitions of exponential, hyperbolic and trigonometric functions and their properties. We will tentatively refer to them as `improved' functions, because they simulate the behaviour of their continuous counterparts better than the previous definitions. Our new definitions are based on the classical Cayley transformation:
\begin{equation} \label{cay}
z \rightarrow {\rm cay} (z,a) = \frac{1 + a z}{1- a z} \ ,
\end{equation}
see, for instance, \cite{Is-Cay}.
\subsection{New definition of the exponential function}
In order to formulate our definition we need to redefine a notion of regressivity.
\begin{Def} \label{Def-regr}
The function $\alpha : \T \rightarrow \C$ is regressive if \ $\mu (t) \alpha (t) \neq \pm 2$ \ for any $t \in \T^\kappa$.
\end{Def}
\begin{Def}
The improved exponential function (or the Cayley-exponen\-tial function) on a time scale is defined by
\begin{equation} \label{E-for}
E_\alpha (t, t_0) := \exp \left( \int_{t_0}^t \zeta_{\mu (s)} ( \alpha (s)) \Delta s \right) \ , \qquad E_\alpha (t) := E_\alpha (t, 0) \ ,
\end{equation}
where $\alpha = \alpha (t)$ is a given rd-continuous regressive function and
\begin{equation} \label{zeta}
\zeta_h (z) := \frac{1}{h} \log \frac{1 + \frac{1}{2} z h}{1 - \frac{1}{2} z h} \quad ({\rm for} \ \ h > 0) \quad {\rm and} \ \ \zeta_0 (z) := z \ .
\end{equation}
\end{Def}
Here and in what follows the logarithm is understood as the principal branch of the complex logarithm, with imaginary part in $[-\pi, \pi]$.
\begin{lem}
If $\alpha$ is rd-continuous and regressive, then the delta-integral in \rf{E-for} exists.
\end{lem}
\begin{Proof}
The assumption of regressivity of $\alpha$ implies that the logarithm in \rf{E-for} exists (is finite) for any $t \in \T^\kappa $. Thus
the function $t \rightarrow \zeta_{\mu (t)} (\alpha (t))$ has no singularities. To complete the proof we will show that
$\zeta_\mu \circ \alpha $ is rd-continuous (which implies that it has an antiderivative, see \cite{Hi}). At right-dense $t_0$ we have
\begin{equation}
\lim_{t\rightarrow t_0} \alpha (t) = \alpha (t_0) \ , \qquad
\lim_{t\rightarrow t_0} \mu (t) = \mu (t_0) = 0 \ ,
\end{equation}
because $\alpha$ and $\mu$ are continuous at right-dense points. Therefore
\begin{equation}
\lim_{t\rightarrow t_0} \zeta_{\mu (t)} (\alpha (t)) = \lim_{t\rightarrow t_0} \frac{1}{\mu (t)} \log \frac{1 + \frac{1}{2} \alpha (t) \mu (t) }{1 - \frac{1}{2} \alpha (t) \mu (t) } = \alpha (t_0) \ .
\end{equation}
On the other hand, $\zeta_{\mu (t_0)} (\alpha (t_0)) = \zeta_0 (\alpha (t_0)) = \alpha (t_0)$. Therefore, $\zeta_\mu \circ \alpha$ is continuous at right-dense points. At left-dense $s_0$ we denote
\begin{equation}
\alpha (s_0^-) := \lim_{t\rightarrow s_0^-} \alpha (t) \ , \qquad
\mu (s_0^-) := \lim_{t\rightarrow s_0^-} \mu (t) = 0 \ .
\end{equation}
In general $\alpha (s_0^-) \neq \alpha (s_0)$, $\mu (s_0^-) \neq \mu (s_0)$ but rd-continuity guarantees that all these values are finite. Then,
\begin{equation}
\lim_{t\rightarrow s_0^-} \zeta_{\mu (t)} (\alpha (t)) = \lim_{t\rightarrow s_0^-} \frac{1}{\mu (t)} \log \frac{1 + \frac{1}{2} \alpha (t) \mu (t) }{1 - \frac{1}{2} \alpha (t) \mu (t) } = \alpha (s_0^-) \ ,
\end{equation}
and the existence of this finite limit means that $\zeta_\mu \circ \alpha$ is rd-continuous.
\end{Proof}
\par \vspace{0.5cm} \par
In the constant discrete case ($\T = \varepsilon \Z$, $\alpha = {\rm const}$) we have
\begin{equation} \label{exp-dis}
E_\alpha (t) = \left( \frac{1 + \frac{1}{2} \alpha \varepsilon}{1 - \frac{1}{2} \alpha \varepsilon} \right)^{\frac{t}{\varepsilon}} \ ,
\end{equation}
and in the case $\T = \R$ we have, as usual,
\begin{equation}
E_\alpha (t) = \exp \int_0^t \alpha (\tau) d \tau \ .
\end{equation}
The formula \rf{exp-dis} appeared earlier in different contexts, see for instance \cite{Is-Cay,Mer}.
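As an informal numerical illustration (a Python sketch, not part of the formal development; the values of $\alpha$, $t$ and $\varepsilon$ are arbitrary), the discrete Cayley exponential \rf{exp-dis} converges to $e^{\alpha t}$ with second-order accuracy as $\varepsilon \rightarrow 0$: halving the step reduces the error by a factor of about four.

```python
import math

def cayley_exp(alpha, t, eps):
    """Discrete Cayley exponential (exp-dis) on T = eps*Z with constant alpha."""
    n = round(t / eps)
    return ((1 + 0.5 * alpha * eps) / (1 - 0.5 * alpha * eps)) ** n

alpha, t = 0.7, 2.0
exact = math.exp(alpha * t)
e1 = abs(cayley_exp(alpha, t, 0.1) - exact)
e2 = abs(cayley_exp(alpha, t, 0.05) - exact)
print(e1 / e2)   # second-order accuracy: halving eps divides the error by about 4
```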
The new definition \rf{E-for} of the exponential function can be related to Hilger's definition \rf{exp-Hi} with another exponent. Namely,
we are going to prove that $E_\alpha (t, t_0) = e_\beta (t, t_0)$ provided that
\begin{equation} \label{betal}
\beta (t) = \frac{\alpha (t)}{1 - \frac{1}{2} \mu (t) \alpha (t)} \ .
\end{equation}
\begin{Th} \label{Ee}
To any regressive, rd-continuous $\alpha = \alpha (t)$ there corresponds a unique $\mu$-regressive, rd-continuous $\beta = \beta (t)$ (given by \rf{betal}) such that
$E_\alpha (t, t_0) = e_\beta (t, t_0)$.
For $\mu$-regressive, rd-continuous $\beta$ satisfying $\mu \beta \neq -2$ there exists a unique regressive, rd-continuous $\alpha$ given by
\begin{equation} \label{albet}
\alpha (t) = \frac{\beta (t)}{1 + \frac{1}{2} \mu (t) \beta (t)} \ ,
\end{equation}
such that $E_\alpha (t, t_0) = e_\beta (t, t_0)$.
\end{Th}
\begin{Proof}
$E_\alpha (t, t_0) = e_\beta (t, t_0) $ if and only if the integrands in \rf{exp-Hi} and \rf{E-for} coincide, i.e., $\xi_\mu ( \beta) = \zeta_\mu (\alpha) $. Thus $\beta (t) = \alpha (t)$ at right-dense points, and for $\mu \neq 0$ (i.e., at right-scattered points) we get
\begin{equation}
1 + \mu \beta = \frac{1 + \frac{1}{2} \mu \alpha}{1 - \frac{1}{2} \mu \alpha} \ .
\end{equation}
Both cases lead to a single condition \rf{betal} (or, equivalently, to \rf{albet}). Next, we verify that $\mu \beta = -1$ if and only if $\mu \alpha = - 2$. Hence, for any regressive $\alpha$ we have $\mu \beta \neq - 1$. Moreover, $\mu \alpha = 2$ corresponds to $\mu \beta = \pm \infty$, so for any $\mu$-regressive $\beta$ we have $|\mu \alpha| \neq 2$. Finally, we observe that $\mu \beta = -2$ corresponds to $\mu \alpha = \pm \infty$, and for any other value of $\beta$ the value of $\alpha$ is uniquely determined by \rf{albet}.
To complete the proof we have to show that $\alpha$ is rd-continuous iff $\beta$ is rd-continuous.
At right-dense $t_0$ functions $\alpha$, $\mu$ are continuous and also $\mu (t_0) = 0$. Therefore, from \rf{albet} we get
\begin{equation}
\lim_{t\rightarrow t_0} \alpha (t) = \lim_{t\rightarrow t_0} \beta (t) \ , \qquad \alpha (t_0) = \beta (t_0) \ .
\end{equation}
Hence, $\alpha$ is continuous at $t_0$ if and only if $\beta$ is continuous at $t_0$. If $s_0$ is left-dense, then \ $\mu (t) \rightarrow 0$ \ as \ $t \rightarrow s_0^-$. As a consequence, we have
\begin{equation}
\alpha (s_0^-) \equiv \lim_{t\rightarrow s_0^-} \alpha (t) = \lim_{t\rightarrow s_0^-} \beta (t) \equiv \beta (s_0^-) \ .
\end{equation}
Therefore, $\alpha (s_0^-)$ exists if and only if $\beta (s_0^-)$ exists, which ends the proof.
\end{Proof}
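The correspondence of Theorem~\ref{Ee} can be illustrated numerically. The following Python sketch (with arbitrarily chosen constants, assuming $\T = \varepsilon \Z$, where Hilger's exponential with constant $\beta$ reduces to $e_\beta (t) = (1 + \varepsilon \beta)^{t/\varepsilon}$) checks that $E_\alpha$ and $e_\beta$ coincide when $\beta$ is given by \rf{betal}.

```python
alpha, eps = 0.3, 0.25                   # arbitrary constants; T = eps*Z, so mu = eps
beta = alpha / (1 - 0.5 * eps * alpha)   # formula (betal)

def E(t):
    """Cayley exponential (exp-dis) with constant alpha."""
    return ((1 + 0.5 * eps * alpha) / (1 - 0.5 * eps * alpha)) ** round(t / eps)

def e_hilger(t):
    """Hilger exponential e_beta on eps*Z: (1 + eps*beta)^(t/eps)."""
    return (1 + eps * beta) ** round(t / eps)

for n in range(-8, 9):
    assert abs(E(n * eps) - e_hilger(n * eps)) < 1e-12
```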
\subsection{Properties of the Cayley-exponential function}
First of all, we observe a close relation between $\zeta_h$ and the Cayley transformation \rf{cay},
\begin{equation}
e^{h \zeta_h (z)} = {\rm cay} (z, \frac{1}{2} h) \ .
\end{equation}
The transformation inverse to $\zeta_h$ is given by
\begin{equation} \label{zth}
z \equiv \zeta_h^{-1} (\zeta) = \frac{2}{h} \tanh \frac{h \zeta}{2} \quad (h \neq 0) \ , \qquad \zeta_0^{-1} (\zeta) = \zeta \ .
\end{equation}
In particular,
\begin{equation}
z = \frac{2 i}{h} \tan \frac{ h \omega}{2} \qquad {\rm for} \quad
\zeta = i \omega \ .
\end{equation}
Therefore, $\zeta_h^{-1}$ maps the open segment $(-\frac{\pi i}{h}, \frac{\pi i}{h}) \subset i \R$ onto $i \R$, and $\R$ is mapped onto the real segment $(-\frac{2}{h}, \frac{2}{h})$.
\begin{cor}
$\zeta_h$ maps the imaginary axis onto the segment $(-\frac{\pi i}{h}, \frac{\pi i}{h}) \subset i \R$.
\end{cor}
\par \vspace{0.5cm} \par
\begin{lem} Denoting $\zeta = \gamma + i \eta$ and taking into account \rf{zth}, we have
\begin{equation} \begin{array}{l} \displaystyle
|z | < \frac{2}{h} \quad \Longleftrightarrow \quad \cos \eta h > 0 \ , \\[2ex]\displaystyle
|z | = \frac{2}{h} \quad \Longleftrightarrow \quad \cos \eta h = 0 \ , \\[2ex]\displaystyle
|z | > \frac{2}{h} \quad \Longleftrightarrow \quad \cos \eta h < 0 \ .
\end{array} \end{equation}
\end{lem}
\begin{Proof} We compute:
\begin{equation}
\tanh \frac{h \zeta}{2} = \frac{ e^{h \gamma + i h \eta} - 1 }{ e^{h \gamma + i h \eta} + 1 } = \frac{ e^{h \gamma} \cos h \eta - 1 + i e^{h \gamma} \sin h \eta }{ e^{h \gamma} \cos h \eta + 1 + i e^{h \gamma} \sin h \eta } \ ,
\end{equation}
\begin{equation}
\left| \tanh \frac{h \zeta}{2} \right| = \sqrt{ \frac{ e^{2 h \gamma} - 2 e^{h \gamma} \cos h \eta + 1 }{ e^{2 h \gamma} + 2 e^{h \gamma} \cos h \eta + 1 } } \ .
\end{equation}
To complete the proof it is enough to notice that $e^{h \gamma} > 0$ and \ $\frac{h z}{2} = \tanh\frac{h \zeta}{2} $.
\end{Proof}
\par \vspace{0.5cm} \par
\begin{cor}
$\zeta_h$ maps the disc \ $|z| < \frac{2}{h}$ \ onto the strip \ $ - \frac{\pi}{2 h} < \eta < \frac{\pi}{2 h} $.
\end{cor}
\par \vspace{0.5cm} \par
\begin{Def} \label{def-oplus}
Given a time scale $\T$ and two functions $\alpha, \beta: \T \rightarrow \C$, we define
\begin{equation} \label{oplusmu}
\alpha \oplus \beta := \frac{\alpha + \beta}{1 + \frac{1}{4} \mu^2 \alpha \beta } \ ,
\end{equation}
where $\mu = \mu (t)$ is the graininess of $\T$.
\end{Def}
\begin{lem} \label{lem-zeta}
The function $\zeta_\mu$ has the following properties:
\begin{equation}
\overline{\zeta_\mu (\alpha)} = \zeta_\mu (\bar \alpha) \ , \quad \zeta_\mu (-\alpha) = - \zeta_\mu (\alpha) , \quad \zeta_{-\mu} (\alpha) = \zeta_\mu (\alpha) \ ,
\end{equation}
\begin{equation}
\zeta_\mu (\alpha) + \zeta_\mu (\beta) = \zeta_\mu (\alpha\oplus\beta) ,
\end{equation}
where bar denotes the complex conjugate and we assume (in order to avoid infinities)
$\frac{1}{2} \mu \alpha \neq - 1$, $\frac{1}{2} \mu \beta \neq -1 $ and $\frac{1}{2} \mu \beta \neq - \left( \frac{1}{2} \mu \alpha \right)^{-1}$.
\end{lem}
\begin{Proof} The function $\zeta_\mu = \zeta_\mu (\alpha)$ is analytic with respect to $\alpha$ (and $\mu$ is real), hence the first property follows. The other properties can be shown by direct calculation:
\begin{equation} \begin{array}{l} \displaystyle \label{proof-gamma}
\zeta_\mu (-\alpha) = \frac{1}{\mu} \log \frac{1 - \frac{1}{2} \alpha \mu}{1 + \frac{1}{2} \alpha \mu} = - \frac{1}{\mu} \log \frac{1 + \frac{1}{2} \alpha \mu}{1 - \frac{1}{2} \alpha \mu} = - \zeta_\mu (\alpha) \ ,
\\[3ex]\displaystyle
\zeta_{-\mu} (\alpha) = - \frac{1}{\mu} \log \frac{1 - \frac{1}{2} \alpha \mu}{1 + \frac{1}{2} \alpha \mu} = \frac{1}{\mu} \log \frac{1 + \frac{1}{2} \alpha \mu}{1 - \frac{1}{2} \alpha \mu} = \zeta_\mu (\alpha) \ ,
\\[3ex]\displaystyle
\zeta_\mu (\alpha) + \zeta_\mu (\beta) = \frac{1}{\mu} \log \frac{ 1+ \frac{1}{4} \mu^2 \alpha \beta + \frac{1}{2} \mu (\alpha +\beta) }{1+ \frac{1}{4} \mu^2 \alpha \beta - \frac{1}{2} \mu (\alpha +\beta)} = \zeta_\mu (\alpha\oplus\beta) \ ,
\end{array} \end{equation}
provided that $\frac{1}{4} \mu^2 \alpha \beta \neq - 1$.
\end{Proof}
\par \vspace{0.5cm} \par
\begin{Th} \label{Th-exp-properties}
If $\alpha, \beta : \T \rightarrow \C$ are regressive and rd-continuous, then
the following properties hold:
\begin{enumerate}
\item $\displaystyle E_\alpha (t^\sigma, t_0) = \frac{1 + \frac{1}{2} \mu (t) \alpha (t)}{1 - \frac{1}{2} \mu (t) \alpha (t) } \ E_\alpha (t, t_0)$ \ ,
\item $( E_\alpha (t, t_0) )^{-1} = E_{-\alpha} (t, t_0) $ \ ,
\item $ \overline{ E_\alpha (t, t_0)} = E_{\bar \alpha} (t, t_0)$ \ ,
\item $E_\alpha (t, t_0) \ E_\alpha (t_0, t_1) = E_\alpha (t, t_1)$ \ ,
\item $E_\alpha (t, t_0) \ E_\beta (t, t_0) = E_{\alpha\oplus \beta} (t, t_0) $ \ ,
\end{enumerate}
where we use the standard notation $t^\sigma \equiv \sigma (t)$.
\end{Th}
\begin{Proof}
It is sufficient to prove the first property for right-scattered points ($t^\sigma > t$).
\begin{equation} \begin{array}{l} \displaystyle \label{exp11}
E_\alpha (t^\sigma, t_0) =
\exp \left( \int_{t}^{\sigma (t)} \zeta_{\mu (\tau) } (\alpha (\tau)) \Delta\tau \right) \exp \left( \int_{t_0}^t \zeta_{\mu (\tau) } (\alpha (\tau)) \Delta\tau \right)
\end{array} \end{equation}
Then, using \rf{zeta}, we get
\[
\int_{t}^{\sigma (t)} \zeta_{\mu (\tau) } (\alpha (\tau)) \Delta\tau = \mu (t) \zeta_{\mu (t) } (\alpha (t)) = \log \frac{1 + \frac{1}{2} \mu (t) \alpha (t) }{ 1 - \frac{1}{2} \mu (t) \alpha (t) } \ ,
\]
and substituting it into \rf{exp11} we get the first property. The second property follows directly from $\zeta_\mu (-\alpha) = - \zeta_\mu (\alpha)$, see Lemma~\ref{lem-zeta}. Indeed,
\[ \displaystyle
E_\alpha^{-1} (t, t_0) =
\exp \left( - \int_{t_0}^t \zeta_{\mu (\tau) } (\alpha (\tau)) \Delta\tau \right) = \exp \int_{t_0}^t \zeta_{\mu (\tau) } (- \alpha (\tau)) \Delta\tau = E_{-\alpha} (t, t_0) .
\]
The third property follows directly from the analyticity of the exponential function and from Lemma~\ref{lem-zeta}. We recall that $t \in \T \subset \R$. Indeed,
\[
\overline{ E_\alpha (t, t_0)} =
\exp \int_{t_0}^t \overline{ \zeta_{\mu (\tau)} (\alpha (\tau))} \Delta\tau = \exp \int_{t_0}^t \zeta_{\mu (\tau)} ( \overline{\alpha (\tau) } ) \Delta\tau = E_{\bar \alpha} (t, t_0) \ .
\]
The fourth property is derived in a straightforward way:
\[
E_\alpha (t, t_0) \ E_\alpha (t_0, t_1) = \exp \left( \int_{t_0}^{t} \zeta_{\mu (\tau)} (\alpha (\tau)) \Delta \tau + \int_{t_1}^{t_0} \zeta_{\mu (\tau)} (\alpha (\tau)) \Delta \tau \right) = E_\alpha (t, t_1) .
\]
Finally,
\[
E_\alpha (t, t_0) \ E_\beta (t, t_0) = \exp \int_{t_0}^{t} \left( \zeta_{\mu (\tau)} (\alpha (\tau)) + \zeta_{\mu (\tau)} (\beta (\tau)) \right) \Delta \tau = E_{\alpha\oplus\beta} (t, t_0) ,
\]
where we took into account Lemma~\ref{lem-zeta}.
\end{Proof}
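Properties 2 and 5 of Theorem~\ref{Th-exp-properties} can be checked numerically. The Python sketch below (with arbitrary constants, on $\T = \varepsilon \Z$) verifies the inverse property and the addition formula with $\oplus$ given by \rf{oplusmu}.

```python
eps = 0.2   # arbitrary constant graininess; T = eps*Z

def E(a, t):
    """Cayley exponential (exp-dis) with constant alpha = a."""
    return ((1 + 0.5 * eps * a) / (1 - 0.5 * eps * a)) ** round(t / eps)

def oplus(a, b):
    """Circle-plus addition (oplusmu) with mu = eps."""
    return (a + b) / (1 + 0.25 * eps**2 * a * b)

a, b, t = 0.4, -0.9, 1.6
assert abs(E(a, t) * E(-a, t) - 1) < 1e-10                  # property 2
assert abs(E(a, t) * E(b, t) - E(oplus(a, b), t)) < 1e-10   # property 5
```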
\par \vspace{0.5cm} \par
The formula \rf{oplusmu} is identical to the relativistic velocity-addition formula of the special theory of relativity (the role of the speed of light $c$ is played by $\frac{2}{\mu}$).
Denoting
\begin{equation}
\alpha' := \frac{1}{2} \mu \alpha \ , \quad
\beta' := \frac{1}{2} \mu \beta \ ,
\end{equation}
we can rewrite the formula \rf{oplusmu} in a simpler form
\begin{equation} \label{oplus'}
\alpha'\oplus\beta' = \frac{\alpha' + \beta'}{1 + \alpha' \beta'} \ .
\end{equation}
\par \vspace{0.5cm} \par
\begin{lem}
If $\alpha'$ and $\beta'$ are real functions on $\T$ and $\alpha'\oplus\beta'$ is given by \rf{oplus'}, then
\begin{equation} \label{abg<1}
| \alpha' | < 1 \ \ {\rm and} \ \
| \beta' | < 1 \quad \Longrightarrow \quad
| \alpha'\oplus\beta' | < 1 \ ,
\end{equation}
\begin{equation} \label{abg=1}
| \alpha'\oplus\beta' | = 1 \quad \Longleftrightarrow \quad
| \alpha' | = 1 \ \ {\rm or} \ \ | \beta' | = 1 \ .
\end{equation}
\end{lem}
\begin{Proof} Using \rf{oplus'} we compute
\begin{equation}
1 - (\alpha'\oplus\beta')^2 =
\frac{(1-(\alpha')^2)(1 - (\beta')^2)}{(1 + \alpha' \beta')^2 } \ ,
\end{equation}
which immediately yields \rf{abg=1}. Then,
$| \alpha' | < 1$ and $| \beta' | < 1$ imply that the right-hand side is positive. Hence, we have \rf{abg<1}.
\end{Proof}
\begin{Def} \label{Def-posit}
The function $\alpha : \T \rightarrow \R$ is called positively regressive if for all $t \in \T^\kappa $ we have \ $ |\alpha (t) \mu (t) | < 2$.
\end{Def}
\begin{Th}
If $\alpha : \T \rightarrow \R$ is rd-continuous and positively regressive, then the exponential function $E_\alpha$ is positive (i.e., $E_\alpha (t) > 0$ for all \ $t \in \T$).
\end{Th}
\begin{Proof} If $\alpha$ is real and positively regressive, then (for any $t \in \T^\kappa$) we have
\begin{equation}
\frac{1 + \frac{1}{2} \mu (t) \alpha (t) }{1 - \frac{1}{2} \mu (t) \alpha (t) } > 0 \ .
\end{equation}
Thus $\zeta_{\mu (t)} (\alpha (t))$ is real for any $t \in \T^\kappa$ and, as a consequence, the exponential function is positive.
\end{Proof}
The positive regressivity condition $|\alpha (t) \mu (t) | < 2$ is automatically satisfied at right-dense points. At right-scattered points we have
\begin{equation}
E_\alpha (t^\sigma) - E_\alpha (t) = \mu (t) \alpha (t) \frac{ E_\alpha (t^\sigma) + E_\alpha (t) }{2} \ .
\end{equation}
Therefore, the condition $| \alpha (t) \mu (t) | < 2$ is equivalent to
\begin{equation}
| E_\alpha (t^\sigma) - E_\alpha (t) | < | E_\alpha (t^\sigma) + E_\alpha (t) | \ .
\end{equation}
In the real case this means that $E_\alpha (t^\sigma)$ and $E_\alpha (t)$ have the same sign.
\par \vspace{0.5cm} \par
\begin{Th}
The set of positively regressive real functions ${\cal R}_+$ is an abelian group with respect to the addition $\oplus$.
\end{Th}
\begin{Proof} The formula \rf{oplusmu} obviously yields $\alpha\oplus\beta = \beta\oplus\alpha$. The element inverse to $\alpha$ (given simply by $\ominus\alpha = - \alpha$) always exists in ${\cal R}_+$.
Therefore, it is sufficient to show that ${\cal R}_+$ is closed with respect to $\oplus$.
Taking into account \rf{oplusmu} we have:
\begin{equation}
\frac{1 + \frac{1}{2} \mu (\alpha \oplus \beta)}{1 - \frac{1}{2} \mu (\alpha \oplus \beta)} = \frac{ (1 + \frac{1}{2} \mu \alpha)(1 + \frac{1}{2} \mu \beta) }{ (1 - \frac{1}{2} \mu \alpha)(1 - \frac{1}{2} \mu \beta) }
\end{equation}
(compare the last line of \rf{proof-gamma}).
If $\alpha, \beta \in {\cal R}_+$, then all four factors on the right-hand side are positive. Therefore, the left-hand side is positive, which means that $|\mu (\alpha\oplus\beta)| < 2$.
\end{Proof}
The set ${\cal R}$ of all regressive functions
is not closed with respect to $\oplus$. Indeed, suppose that a regressive $\alpha$ is given. The condition $\mu^2 \alpha \beta = - 4$ uniquely determines $\beta$, which is also regressive. However, in this case $\alpha\oplus\beta$ becomes infinite.
\par \vspace{0.5cm} \par
\begin{lem} \label{lem-albet}
\begin{equation}
x^\Delta (t) = \beta (t) x (t) \quad \Longleftrightarrow \quad
x^\Delta (t) = \alpha (t) \av{x (t)} \ ,
\end{equation}
where $\beta$ is given by \rf{betal} and
\begin{equation} \label{avg}
\av{x (t)} := \frac{ x (t) + x (\sigma (t)) }{2} \ .
\end{equation}
\end{lem}
\begin{Proof} Direct computation shows
\[
x^\Delta = \alpha \av{x } \quad \Longleftrightarrow \quad
2 x^\Delta (1 + \frac{1}{2} \mu \beta) = \beta (x + x^\sigma)
\quad \Longleftrightarrow \quad
x^\Delta = \beta x \ ,
\]
where first we used \rf{albet}, and then we substituted $x^\sigma = x + \mu x^\Delta$.
\end{Proof}
\par \vspace{0.5cm} \par
\begin{Th} \label{Th-Cau-Del}
The exponential function $E_\alpha (t, t_0)$, defined by \rf{E-for},
is the unique solution of the following Cauchy problem:
\begin{equation} \label{xca}
x^\Delta (t) = \alpha (t) \av{x (t)} \ , \quad x (t_0) = 1 \ ,
\end{equation}
where $\alpha$ is a regressive rd-continuous function and $\av{x (t)}$ is defined by \rf{avg}.
\end{Th}
\begin{Proof}
By Lemma~\ref{lem-albet} the initial value problem \rf{xca} is equivalent to
\begin{equation} \label{xcb}
x^\Delta (t) = \beta (t) x (t) \ , \quad x (t_0) = 1 \ .
\end{equation}
Taking into account that $\mu$ and $\alpha$ are rd-continuous, $\beta$ is rd-continuous as well (compare Theorem~\ref{Ee}). Therefore, we can use Hilger's theorem concerning the problem \rf{xcb}, which states that its unique solution is $e_\beta (t, t_0)$.
Finally, we use Theorem~\ref{Ee} once more.
\end{Proof}
In the discrete case the equation \rf{xca} (treated as a numerical scheme) can be interpreted as the trapezoidal rule, the implicit midpoint rule, or the discrete gradient method \cite{LaG,MQR2,HLW}. These implicit methods are more accurate than the explicit (forward) Euler scheme (which corresponds to the equation $x^\Delta = \alpha x$) and can
preserve more qualitative, geometrical and physical characteristics of the considered differential equations, e.g., integrals of motion \cite{HLW}.
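The remark above can be illustrated by a short Python sketch (the values of $\alpha$, $\varepsilon$ and the integration horizon are arbitrary choices): on $\T = \varepsilon \Z$ one step of the implicit scheme $x^\Delta = \alpha \av{x}$ amounts to multiplication by the Cayley factor of \rf{exp-dis}, and the resulting approximation of $e^{\alpha t}$ is markedly more accurate than the forward Euler one.

```python
import math

alpha, eps, steps = 1.0, 0.01, 200   # integrate x' = alpha*x up to t = eps*steps = 2

x_trap, x_euler = 1.0, 1.0
for _ in range(steps):
    # trapezoidal step: solve (x_next - x)/eps = alpha*(x + x_next)/2 for x_next,
    # i.e. multiply by the Cayley factor of formula (exp-dis)
    x_trap *= (1 + 0.5 * eps * alpha) / (1 - 0.5 * eps * alpha)
    # forward Euler step, corresponding to x^Delta = alpha*x
    x_euler *= 1 + eps * alpha

exact = math.exp(alpha * eps * steps)
print(abs(x_trap - exact), abs(x_euler - exact))
```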
\subsection{New definitions of hyperbolic and trigonometric functions}
Our definition of improved hyperbolic and trigonometric functions follows in a natural way (as in the continuous case) from the definition of the exponential function.
\begin{Def} \label{def-hyp}
Cayley-hyperbolic functions on a time scale are defined by
\begin{equation} \label{def-chsh}
{\rm Cosh}_\alpha (t) := \frac{E_\alpha (t) + E_{-\alpha} (t) }{2} \ , \quad {\rm Sinh}_\alpha (t) := \frac{E_\alpha (t) - E_{-\alpha} (t) }{2} \ ,
\end{equation}
where the exponential function $E_\alpha$ is defined by \rf{E-for}.
\end{Def}
\begin{Def} \label{def-trig}
Cayley-trigonometric functions on a time scale are defined by
\begin{equation} \label{cos-sin}
{\rm Cos}_\omega (t) := \frac{E_{i \omega} (t) + E_{- i \omega} (t) }{2} \ , \quad
{\rm Sin}_\omega (t) := \frac{E_{i \omega} (t) - E_{- i \omega} (t) }{2 i} \ .
\end{equation}
In other words, ${\rm Cos}_\omega (t) = {\rm Cosh}_{i \omega} (t)$,
\ $i \, {\rm Sin}_\omega (t) = {\rm Sinh}_{i \omega} (t)$.
\end{Def}
Properties of our hyperbolic and trigonometric functions are identical, or almost identical, to those in the continuous case. Note that below we often use the notation \rf{avg}.
\begin{Th} Cayley-hyperbolic functions satisfy
\begin{equation} \label{Pyth-hyp}
{\rm Cosh}_\alpha^2 (t) - {\rm Sinh}_\alpha^2 (t) = 1 \ ,
\end{equation}
\begin{equation} \label{der-chsh}
{\rm Cosh}_\alpha^\Delta (t) = \alpha (t) \av{ {\rm Sinh}_\alpha (t)} \ , \qquad
{\rm Sinh}_\alpha^\Delta (t) = \alpha (t) \av{ {\rm Cosh}_\alpha (t)} \ .
\end{equation}
\end{Th}
\begin{Proof} By Theorem~\ref{Th-exp-properties} we have $E_\alpha (t) E_{-\alpha} (t) = 1$. This is sufficient to directly verify the identity \rf{Pyth-hyp}. By Theorem~\ref{Th-Cau-Del} we have
$E_\alpha^\Delta (t) = \alpha (t) \av{ E_\alpha (t) }$ and
differentiating \rf{def-chsh} we get \rf{der-chsh}.
\end{Proof}
\begin{Th} Cayley-trigonometric functions are real-valued for real $\omega$ and satisfy
\begin{equation} \label{Pyth-trig}
{\rm Cos}_\omega^2 (t) + {\rm Sin}_\omega^2 (t) = 1 \ ,
\end{equation}
\begin{equation} \label{der-cossin}
{\rm Cos}_\omega^\Delta (t) = - \omega (t) \av{ {\rm Sin}_\omega (t)} \ , \qquad
{\rm Sin}_\omega^\Delta (t) = \omega (t) \av{ {\rm Cos}_\omega (t)} \ .
\end{equation}
\end{Th}
\begin{Proof} By Theorem~\ref{Th-exp-properties} we have $\overline{E_{i\omega } (t)} = E_{-i \omega} (t)$. Thus the reality of trigonometric functions follows directly from Definition~\ref{def-trig}. Using $E_{i\omega } (t) E_{-i \omega} (t) = 1$ we get the Pythagorean identity \rf{Pyth-trig} starting from \rf{cos-sin}. Derivatives \rf{der-cossin} can be obtained by straightforward differentiation of the exponential functions using Theorem~\ref{Th-Cau-Del}.
\end{Proof}
\begin{Th} \label{Th-imag}
The function $\C \ni \alpha \rightarrow E_\alpha (t) $ maps the imaginary axis into
the unit circle, i.e., \ ${\rm Re} \alpha (t) \equiv 0 \ \Longrightarrow \ |E_\alpha (t)| \equiv 1$.
\end{Th}
\begin{Proof} We compute
\begin{equation}
|E_\alpha (t)|^2 = E_\alpha (t) \overline{ E_\alpha (t)} = E_\alpha (t) E_{\bar \alpha} (t) = E_{\alpha\oplus{\bar \alpha}} (t) \ ,
\end{equation}
where we used Theorem~\ref{Th-exp-properties} twice. From the formula \rf{oplusmu} we see immediately that
\[
\alpha \oplus \bar\alpha = 0 \quad \Longleftrightarrow \quad \bar\alpha = - \alpha \ ,
\]
i.e., $\alpha = i\omega$, where $\omega (t) \in \R$ for $t \in \T$. Therefore, $E_{\alpha\oplus\bar\alpha} (t) \equiv 1$ for imaginary $\alpha$.
\end{Proof}
Theorem~\ref{Th-imag} is crucial for a proper definition of trigonometric functions. Indeed, $|E_{i\omega}| = 1$ is equivalent to
$ ({\rm Re} E_{i\omega})^2 + ({\rm Im} E_{i\omega})^2 = 1$ and we can identify $ {\rm Re} E_{i\omega}$ and ${\rm Im} E_{i\omega}$ with trigonometric functions.
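This can also be observed numerically. The following Python sketch (constants chosen arbitrarily, on $\T = \varepsilon \Z$) confirms that $E_{i\omega}$ stays on the unit circle up to rounding errors, so that ${\rm Cos}_\omega = {\rm Re}\, E_{i\omega}$ and ${\rm Sin}_\omega = {\rm Im}\, E_{i\omega}$ satisfy the Pythagorean identity \rf{Pyth-trig}.

```python
eps, omega = 0.3, 1.1   # arbitrary step and frequency; T = eps*Z

def E(a, t):
    """Cayley exponential (exp-dis) with constant (possibly complex) alpha = a."""
    return ((1 + 0.5 * eps * a) / (1 - 0.5 * eps * a)) ** round(t / eps)

for n in range(1, 20):
    t = n * eps
    z = E(1j * omega, t)
    assert abs(abs(z) - 1) < 1e-12             # E_{i omega}(t) lies on the unit circle
    Cos, Sin = z.real, z.imag                  # Cos_omega(t) and Sin_omega(t)
    assert abs(Cos**2 + Sin**2 - 1) < 1e-12    # Pythagorean identity
```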
At the end of this section we present second-order dynamic equations satisfied by hyperbolic and trigonometric functions with constant $\alpha$ and $\omega$, respectively.
\begin{lem} \label{lem-av-com}
Averaging commutes with delta differentiation, i.e.,
\begin{equation} \label{av-del-com}
\av{x (t)}^\Delta = \av{ x^\Delta (t) } \ .
\end{equation}
\end{lem}
\begin{Proof} \quad $ \displaystyle
\av{x (t)}^\Delta = \frac{1}{2} \left( x (t) + x(t^\sigma) \right)^\Delta = \frac{1}{2} \left( x^\Delta (t) + x^\Delta (t^\sigma) \right) = \av{ x^\Delta (t) } \ .
$
\end{Proof}
\begin{prop} \label{prop-hyp-osc}
If \ $\alpha (t) = {\rm const}$, then improved hyperbolic functions on a time scale, ${\rm Cosh}_\alpha$ and ${\rm Sinh}_\alpha$, satisfy the equation
\begin{equation}
x^{\Delta\Delta} (t) = \alpha^2 \av{\av{x (t)}} \ ,
\end{equation}
where \ $\av{\av{ x (t) }} \equiv \frac{1}{4} \left( x + 2 x^\sigma + x^{\sigma\sigma} \right)$ \ and \ $x^\sigma (t) := x (t^\sigma)$.
\end{prop}
\begin{Proof} The statement follows immediately from \rf{der-chsh} and \rf{av-del-com}. Indeed,
\[
{\rm Cosh}_\alpha^{\Delta\Delta} (t) = \alpha \av{ {\rm Sinh}_\alpha (t) }^\Delta = \alpha \av{ \alpha \av{ {\rm Cosh}_\alpha (t) } } = \alpha^2 \av{\av{ {\rm Cosh}_\alpha (t) }} \ .
\]
The same calculation can be done for ${\rm Sinh}_\alpha^{\Delta\Delta}$.
\end{Proof}
\begin{prop} \label{prop-osc}
If \ $\omega (t) = {\rm const}$, then improved trigonometric functions on a time scale, ${\rm Cos}_\omega$ and ${\rm Sin}_\omega$, satisfy the equation
\begin{equation} \label{oscyl}
x^{\Delta\Delta} (t) + \omega^2 \av{\av{x (t)}} = 0 \ .
\end{equation}
\end{prop}
\begin{Proof} By a straightforward calculation; compare the proof of Proposition~\ref{prop-hyp-osc}. \end{Proof}
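A numerical check of the oscillator equation \rf{oscyl} is straightforward. In the Python sketch below (arbitrary constants, on $\T = \varepsilon \Z$) we use $x^{\Delta\Delta}(t) = (x(t+2\varepsilon) - 2x(t+\varepsilon) + x(t))/\varepsilon^2$ and the iterated average $\av{\av{x(t)}} = \frac{1}{4}(x(t) + 2x(t+\varepsilon) + x(t+2\varepsilon))$, obtained by applying \rf{avg} twice.

```python
eps, omega = 0.4, 0.9   # arbitrary constants; T = eps*Z

def Cos(t):
    """Cos_omega(t) = Re E_{i omega}(t), with E given by (exp-dis)."""
    return (((1 + 0.5j * eps * omega) / (1 - 0.5j * eps * omega)) ** round(t / eps)).real

for n in range(10):
    t = n * eps
    x0, x1, x2 = Cos(t), Cos(t + eps), Cos(t + 2 * eps)
    second_delta = (x2 - 2 * x1 + x0) / eps**2   # x^{Delta Delta}(t)
    double_avg = (x0 + 2 * x1 + x2) / 4          # <<x(t)>>, (avg) applied twice
    assert abs(second_delta + omega**2 * double_avg) < 1e-10
```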
\section{Exact special functions on time scales}
\label{sec-exact}
The simplest (almost trivial) way to construct time-scale analogues of special functions is to take their {\it exact} values.
\begin{Def}
Given a function $f : \R \rightarrow \C$ we define its {\em exact} analogue $\tilde f : \T \rightarrow \C$ as $\tilde f := f|_{\T}$, i.e.,
\begin{equation}
\tilde f (t) := f (t) \quad ({\rm for}\ t \in \T) \ .
\end{equation}
\end{Def}
Although the path $f \rightarrow \tilde f$ is obvious and unique, the inverse correspondence may cause serious problems. Usually, it is not easy to find or to indicate the most appropriate (or most natural) real function corresponding to a given function on $\T$.
\subsection{Exact exponential function on time scales}
The continuous exponential function is given by $e_a (t) = \exp \int_{t_0}^t a (\tau) d \tau$, where $a : \R \rightarrow \C$ is a given function.
A non-trivial question is to choose a function $a = a(t)$, provided that we intend to define an exact exponential corresponding to a given function $\alpha : \T \rightarrow \C$. In general, the choice seems to be highly non-unique.
In this paper we confine ourselves to the simplest case, $\alpha = {\rm const}$, when it is natural to take \ $a (t) = \alpha = {\rm const}$.
\begin{Def}
The exact exponential function $E^{ex}_\alpha (t, t_0)$ (where $\alpha$ is a complex constant and $t, t_0 \in \T$) is defined by
$E^{ex}_\alpha (t, t_0) := e^{\alpha (t-t_0)}$.
\end{Def}
\begin{Th}
The exact exponential function $E^{ex}_\alpha (t, t_0)$
satisfies the dynamic equation
\begin{equation} \label{dyn-ex}
x^\Delta (t) = \alpha \psi_{\alpha} (t) \av{x (t)} \ ,
\qquad x (t_0) = 1 \ ,
\end{equation}
where $\psi_\alpha (t) = 1$ for right-dense points and
\begin{equation} \label{psi}
\psi_\alpha (t) = \frac{2}{\alpha \mu (t) } \tanh \frac{\alpha \mu (t)}{2}
\end{equation}
for right-scattered points.
\end{Th}
\begin{Proof} For right-dense $t$ the equation \rf{dyn-ex} reduces to $x^\Delta = \alpha x$. Then, still assuming $t$ right-dense, we compute
\[
(e^{\alpha (t-t_0)})^\Delta = \frac{d}{dt} e^{\alpha (t-t_0)} = \alpha e^{\alpha (t-t_0)} \ ,
\]
i.e., $e^{\alpha (t-t_0)}$ satisfies the equation $x^\Delta = \alpha x$.
For right-scattered $t$ we have:
\[ \begin{array}{l} \displaystyle
E^{ex}_\alpha (t^\sigma, t_0) - E^{ex}_\alpha (t, t_0) = e^{\alpha (t^\sigma - t_0) } - e^{\alpha (t - t_0)} = e^{\alpha (t - t_0)} \left( e^{\alpha \mu} - 1 \right) \ , \\[2ex]
E^{ex}_\alpha (t^\sigma,t_0) + E^{ex}_\alpha (t, t_0) = e^{\alpha (t^\sigma - t_0) } + e^{\alpha (t - t_0)} = e^{\alpha (t - t_0)} \left( e^{\alpha \mu} + 1 \right) \ ,
\end{array} \]
and, therefore,
\begin{equation}
E^{ex}_\alpha (t^\sigma, t_0) - E^{ex}_\alpha (t, t_0) = \tanh\frac{\alpha\mu}{2} \left( E^{ex}_\alpha (t^\sigma, t_0) + E^{ex}_\alpha (t, t_0) \right) \ ,
\end{equation}
which is equivalent to \rf{dyn-ex}. The initial condition is obviously satisfied.
\end{Proof}
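The theorem can be tested numerically. In the Python sketch below (with arbitrary constants, on $\T = \varepsilon \Z$) the exact exponential $e^{\alpha t}$ satisfies \rf{dyn-ex} with the correction factor \rf{psi} up to rounding errors.

```python
import math

alpha, eps = 0.8, 0.5   # arbitrary constants; T = eps*Z, so mu(t) = eps
psi = (2 / (alpha * eps)) * math.tanh(alpha * eps / 2)   # formula (psi)

def x(t):
    """Exact exponential E^ex_alpha(t, 0) = exp(alpha*t)."""
    return math.exp(alpha * t)

for n in range(-5, 6):
    t = n * eps
    delta = (x(t + eps) - x(t)) / eps   # x^Delta(t) at a right-scattered point
    avg = (x(t) + x(t + eps)) / 2       # <x(t)>
    assert abs(delta - alpha * psi * avg) < 1e-10
```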
The equation \rf{dyn-ex} can be interpreted as the dynamic equation \rf{xca} with a modified $\alpha$ (i.e., $\alpha \rightarrow \alpha \psi_\alpha $). Another interpretation can be obtained by a modification of the delta derivative. We define
\begin{equation}
x^{\Delta'_\alpha} (t) := \lim_{\stackrel{\displaystyle s \rightarrow t}{s \neq \sigma (t)} }
\frac{ x (t^\sigma) - x (s)}{ \delta_\alpha (t^\sigma - s) }
\end{equation}
where $\delta_\alpha$ is a function given by
\begin{equation}
\delta_\alpha (\mu) : = \frac{2}{\alpha} \tanh \frac{ \alpha \mu}{2} \ .
\end{equation}
\begin{lem}
\begin{equation} \label{delt1}
x^\Delta (t) = \psi_\alpha (t) x^{\Delta'_\alpha} (t)
\end{equation}
\end{lem}
\begin{Proof} We compute
\[ \lim_{\stackrel{\displaystyle s \rightarrow t}{s \neq \sigma (t)} }\frac{ x (t^\sigma) - x (s)}{ t^\sigma - s } =
\lim_{\stackrel{\displaystyle s \rightarrow t}{s \neq \sigma (t)} }
\frac{ x (t^\sigma) - x (s)}{ \delta_\alpha (t^\sigma - s) } \lim_{\stackrel{\displaystyle s \rightarrow t}{s \neq \sigma (t)} }
\frac{ \delta_\alpha (t^\sigma - s) }{ t^\sigma - s} = x^{\Delta'_\alpha} (t) \psi_\alpha (t) \ ,
\]
which yields \rf{delt1}.
\end{Proof}
\begin{cor} The equation \rf{dyn-ex}, satisfied by the exact exponential function, can be rewritten as
\begin{equation}
x^{\Delta'_\alpha} (t) = \alpha \av{ x (t) } \ , \qquad x (t_0) = 1 \ .
\end{equation}
\end{cor}
The equation \rf{dyn-ex} is the exact discretization of the equation $\dot x = \alpha x$. In general,
by the exact discretization of an ordinary differential equation $\dot x = f (x)$, where $x (t) \in \R^N$, we mean the difference equation $X_{n+1} = F (X_n)$, where $X_n \in \R^N$, such that $X_n = x (t_n)$. Any equation has an implicit exact discretization (provided that the solution exists). It is worthwhile to point out that all linear ordinary differential equations with constant coefficients have explicit exact discretizations \cite{Po,Mic,Ag}.
\subsection{Exact hyperbolic and trigonometric functions on time scales}
In order to simplify notation we confine ourselves to $t_0 = 0$. All results can obviously be extended to the general case.
\begin{Def} Given a real constant $\alpha$, exact hyperbolic functions on a time scale $\T$ are defined by
\begin{equation} \begin{array}{l} \displaystyle
\cosh^{ex}_\alpha (t) = \frac{ E^{ex}_{\alpha} (t) + E^{ex}_{-\alpha} (t)}{2} = \cosh \alpha t \ , \\[2ex] \displaystyle
\sinh^{ex}_\alpha (t) = \frac{ E^{ex}_{\alpha} (t) - E^{ex}_{-\alpha} (t)}{2} = \sinh\alpha t \ .
\end{array} \end{equation}
\end{Def}
We point out that in this definition $t\in\T$. The same remark applies to the next definition.
\begin{Def} Given a real constant $\omega$, exact trigonometric functions on a time scale $\T$ are defined by
\begin{equation} \begin{array}{l} \displaystyle
\cos^{ex}_\omega (t) = \frac{ E^{ex}_{i\omega} (t) + E^{ex}_{-i\omega} (t)}{2} = \cos \omega t \ , \\[2ex] \displaystyle
\sin^{ex}_\omega (t) = \frac{ E^{ex}_{i\omega} (t) - E^{ex}_{-i\omega} (t)}{2 i} = \sin\omega t \ .
\end{array} \end{equation}
\end{Def}
It turns out that the dynamic equations satisfied by exact trigonometric and hyperbolic functions are rather awkward in the case of arbitrary time scales. These equations simplify considerably under the assumption of constant graininess. Therefore, from now on, we confine ourselves to the case $\mu (t) = {\rm const}$.
Moreover, we observe that the function $\psi_\alpha$ (defined by \rf{psi}) is symmetric. In particular, $\psi_{i\omega} (t) = \psi_{-i\omega} (t)$. We denote:
\begin{equation} \label{phi}
\phi ( x ) := \frac{2}{ x } \tan \frac{ x}{2} \quad ({\rm for} \ x \neq 0) \ , \quad \phi (0) = 1\ .
\end{equation}
Therefore, for $\mu (t) = {\rm const}$ we have
\begin{equation}
\psi_{i \omega} (t) = \phi (\omega\mu) = {\rm const} \ .
\end{equation}
\begin{Th}
We assume that the time scale $\T$ has constant graininess $\mu$.
Then exact trigonometric functions on $\T$, i.e., $\cos^{ex}_\omega$ and $\sin^{ex}_\omega$,
satisfy the dynamic equation
\begin{equation} \label{osc-avav}
x^{\Delta\Delta} (t) + \omega^2 \phi^2 (\omega\mu) \, \av{\av{ x (t) }} = 0 \ ,
\end{equation}
which is equivalent to
\begin{equation} \label{osc-sinc-ex}
x^{\Delta \Delta} (t) + \omega^2 \left({\rm sinc} \frac{\omega\mu}{2} \right)^2 x (t^\sigma ) = 0 \ ,
\end{equation}
where
\begin{equation} \label{sinc}
{\rm sinc} (x) := \frac{\sin x}{x} \qquad ({\rm for} \ \ x \neq 0) \ , \qquad {\rm sinc}( 0 ) := 1 \ .
\end{equation}
\end{Th}
\begin{Proof} By Theorem~\ref{Th-Cau-Del} we have
\begin{equation}
( E^{ex}_{\pm i\omega} )^\Delta = \pm i\omega \psi_{\pm i\omega} (t) \av{ E^{ex}_{\pm i\omega} } \ .
\end{equation}
Therefore, taking also \rf{phi} into account, we get
\begin{equation} \begin{array}{l} \label{delta-sca}
(\cos^{ex}_\omega (t))^\Delta = \frac{1}{2} \left( E^{ex}_{i\omega} (t) + E^{ex}_{-i\omega} (t) \right)^\Delta = - \omega \phi (\omega\mu) \av{ \sin^{ex}_\omega (t) } \ ,
\\[3ex]
(\sin^{ex}_\omega (t))^\Delta = \frac{1}{2i} \left( E^{ex}_{i\omega} (t) - E^{ex}_{-i\omega} (t) \right)^\Delta = \omega \phi (\omega\mu) \av{ \cos^{ex}_\omega (t) } \ .
\end{array} \end{equation}
We notice that the second formula is equivalent to \rf{delta-sin-ex}. Applying Lemma~\ref{lem-av-com} to \rf{delta-sca}, we obtain
\begin{equation} \begin{array}{l}
( \cos^{ex}_\omega (t) )^{\Delta \Delta} = - \omega^2 \phi^2 (\omega\mu) \av{\av{ \cos^{ex}_\omega (t) }} \ , \\[2ex]
( \sin^{ex}_\omega (t) )^{\Delta \Delta} = - \omega^2 \phi^2 (\omega\mu) \av{\av{ \sin^{ex}_\omega (t) }} \ ,
\end{array} \end{equation}
i.e., we have \rf{osc-avav}. The equivalence between \rf{osc-avav} and \rf{osc-sinc-ex} is obvious for $\mu = 0$. In the case $\mu \neq 0$ we use trigonometric identities:
\begin{equation} \begin{array}{l} \displaystyle
\sin(\omega t) + \sin(\omega t + 2 \omega \mu) + 2 \sin(\omega t + \omega\mu) = 2 (1 + \cos\omega\mu) \sin(\omega t + \omega \mu) , \\[2ex] \displaystyle
\cos(\omega t) + \cos(\omega t + 2 \omega \mu) + 2 \cos(\omega t + \omega\mu) = 2 (1 + \cos\omega\mu) \cos(\omega t + \omega \mu) ,
\end{array} \end{equation}
obtaining
\begin{equation} \begin{array}{l} \displaystyle
\av{\av{ \sin^{ex}_\omega (t) }} = \left( \cos\frac{\omega\mu}{2} \right)^2 \sin(\omega t + \omega\mu) \ , \\[2ex] \displaystyle
\av{\av{ \cos^{ex}_\omega (t) }} = \left( \cos\frac{\omega\mu}{2} \right)^2 \cos(\omega t + \omega\mu) \ .
\end{array} \end{equation}
Taking into account that \ ${\rm sinc} \frac{\omega\mu}{2} = \phi (\omega\mu) \cos \frac{\omega\mu}{2} $ \ and \ $t^\sigma = t + \mu$, we get
\begin{equation}
\phi^2 (\omega\mu) \av{\av{ x (t) }} = \left( {\rm sinc} \frac{\omega\mu}{2} \right)^2 x (t^\sigma) \ ,
\end{equation}
where $x (t)$ is an arbitrary linear combination of $\sin^{ex}_{\omega}$ and $\cos^{ex}_{\omega}$, which ends the proof.
\end{Proof}
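As a sanity check (not part of the proof), the equivalence established above can be verified numerically: on a uniform time scale $\T = \mu\Z$ the delta derivative is the forward difference, and the exact sine satisfies \rf{osc-sinc-ex} to machine precision. The following Python sketch assumes nothing beyond the definitions above.

```python
import math

# On a uniform time scale T = mu*Z the exact sine sin(omega*t) should satisfy
#   x^{DeltaDelta}(t) + omega^2 * sinc(omega*mu/2)^2 * x(t + mu) = 0,
# where x^Delta(t) = (x(t+mu) - x(t))/mu is the forward difference.

def sinc(x):
    return 1.0 if x == 0 else math.sin(x) / x

def delta(f, t, mu):
    # delta derivative on a time scale with constant graininess mu
    return (f(t + mu) - f(t)) / mu

def residual(omega, mu, t):
    x = lambda s: math.sin(omega * s)                  # exact sine
    xdd = delta(lambda s: delta(x, s, mu), t, mu)      # second delta derivative
    return xdd + omega**2 * sinc(omega * mu / 2)**2 * x(t + mu)

print(abs(residual(omega=1.3, mu=0.5, t=0.7)))  # vanishes up to round-off
```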
The exact discretization of the harmonic oscillator equation leads to
another modification of the delta derivative (see
\cite{Ci-oscyl,CR-ade}),
\begin{equation} \label{deltabis}
x^{\Delta''_\omega} (t) = \lim_{\stackrel{\displaystyle s \rightarrow t}{s \neq \sigma (t)} }
\frac{ x (t^\sigma) - x (s) \cos (\omega t^\sigma - \omega s )}{\omega^{-1} \sin (\omega t^\sigma - \omega s)} \ .
\end{equation}
In order to avoid infinite values of $x^{\Delta''_\omega}$ we assume $|\omega \mu (t) | < \pi$. All positively regressive constant functions $\omega$ (see Definition~\ref{Def-posit}) obviously satisfy this requirement.
\begin{prop}
If $x = x (t)$ satisfies $\ddot x + \omega^2 x = 0$ for $t\in \R$, then
\begin{equation} \label{pochodne}
( x (t) |_{t\in \T} )^{\Delta''_\omega } = \dot x (t) |_{t\in \T} \ .
\end{equation}
\end{prop}
\begin{Proof}
By assumption, $x (t) = A \cos\omega t + B \sin \omega t$. Then
\[
x (t^\sigma) = \cos \omega\mu \ (A \cos\omega t + B \sin \omega t) + \sin\omega\mu \left( B \cos\omega t - A \sin\omega t \right) \ ,
\]
because $t^\sigma = t+\mu$. By direct computation we verify
\[
x (t^\sigma ) - x (s) \cos (\omega t^\sigma - \omega s) = ( B \cos \omega s - A \sin \omega s ) \sin (\omega t^\sigma - \omega s) \ .
\]
Therefore,
\[
x^{\Delta''_\omega} (t) = \lim_{\stackrel{\displaystyle s \rightarrow t}{s \neq \sigma (t)} }
\frac{ x (t^\sigma) - x (s) \cos (\omega t^\sigma - \omega s )}{\omega^{-1} \sin (\omega t^\sigma - \omega s)} = \omega (B \cos \omega t - A \sin \omega t ) = \dot x (t) ,
\]
which ends the proof.
\end{Proof}
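The computation in the proof can be illustrated numerically: for constant graininess $\mu \neq 0$ the limit in \rf{deltabis} reduces to evaluating the difference quotient at $s = t$, and applying it to a sampled solution of $\ddot x + \omega^2 x = 0$ returns samples of $\dot x$. A minimal Python sketch:

```python
import math

# For constant graininess mu != 0 the limit defining x^{Delta''_omega} is
# just the difference quotient evaluated at s = t.

def ddelta(x, t, mu, omega):
    ts = t + mu  # t^sigma on a uniform time scale
    return omega * (x(ts) - x(t) * math.cos(omega * (ts - t))) \
        / math.sin(omega * (ts - t))

A, B, omega, mu = 0.7, -1.2, 1.5, 0.3
x    = lambda t: A * math.cos(omega * t) + B * math.sin(omega * t)
xdot = lambda t: omega * (B * math.cos(omega * t) - A * math.sin(omega * t))

t = 0.4
print(ddelta(x, t, mu, omega), xdot(t))  # the two values agree
```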
\begin{lem} \label{lem-delbis}
\begin{equation} \label{delbis}
x^\Delta (t) = {\rm sinc} (\omega\mu) \ x^{\Delta''_\omega} (t) - \frac{1}{2} \mu \omega^2 \left( {\rm sinc} \frac{\omega \mu }{2} \right)^2 x (t) \ .
\end{equation}
\end{lem}
\begin{Proof} Substituting the definition \rf{deltabis} into the right-hand side of the formula \rf{delbis}, and taking into account that
\begin{equation} \begin{array}{l} \displaystyle
\lim_{\stackrel{\displaystyle s \rightarrow t}{s \neq \sigma (t)} } \frac{ \sin (\omega t^\sigma - \omega s) }{\omega t^\sigma - \omega s } = {\rm sinc (\omega\mu) } \ , \\[3ex] \displaystyle
\lim_{\stackrel{\displaystyle s \rightarrow t}{s \neq \sigma (t)} } \frac{ x (s) ( 1 - \cos (\omega t^\sigma - \omega s) )}{ t^\sigma - s} = \frac{1}{2} \mu \omega^2 \left( {\rm sinc} \frac{\omega \mu }{2} \right)^2 x (t)\ ,
\end{array} \end{equation}
we obtain the left-hand side of \rf{delbis}, compare \rf{delta}.
\end{Proof}
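Note that \rf{delbis} is an algebraic identity in the values $x(t)$ and $x(t^\sigma)$, so it holds for an arbitrary function $x$, not only for solutions of the oscillator equation. A quick numerical check (a sketch, assuming constant graininess $\mu$):

```python
import math

# The identity of the lemma relates only x(t) and x(t + mu), so the residual
# below vanishes for any function x, e.g. exp or a cubic polynomial.

def sinc(x):
    return 1.0 if x == 0 else math.sin(x) / x

def delbis_residual(x, t, mu, omega):
    lhs = (x(t + mu) - x(t)) / mu                       # x^Delta(t)
    dd  = omega * (x(t + mu) - x(t) * math.cos(omega * mu)) / math.sin(omega * mu)
    rhs = sinc(omega * mu) * dd \
        - 0.5 * mu * omega**2 * sinc(omega * mu / 2)**2 * x(t)
    return lhs - rhs

print(delbis_residual(math.exp, 0.3, 0.2, 1.1))  # zero up to round-off
```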
\begin{prop} The equation \rf{osc-sinc-ex}, satisfied by exact trigonometric functions, can be rewritten as
\begin{equation} \label{ex-osc}
x^{ \Delta''_\omega \Delta''_\omega} (t) + \omega^2 x (t) = 0 \ .
\end{equation}
\end{prop}
\begin{Proof}
The formula \rf{delbis} can be rewritten as
\begin{equation}
x^\Delta (t) = {\rm sinc} \frac{\omega\mu}{2} \left( \cos \frac{\omega\mu}{2} \ x^{\Delta''_\omega} (t) - \omega \sin \frac{\omega\mu}{2} \ x (t) \right) \ ,
\end{equation}
and, therefore,
\[
x^{\Delta \Delta} (t) = {\rm sinc}^2 \frac{\omega\mu}{2}
\left( \cos^2 \frac{\omega\mu}{2} \ x^{\Delta''_\omega \Delta''_\omega} (t) - \omega (\sin\omega\mu) \, x^{\Delta''_\omega} (t) + \omega^2 \sin^2 \frac{\omega\mu}{2} \, x (t) \right) .
\]
Moreover, $x (t^\sigma) = x (t) + \mu x^\Delta (t)$. Hence
\begin{equation}
x (t^\sigma) = (\cos\omega\mu) \ x (t) + \frac{\sin \omega\mu}{\omega} \ x^{\Delta''_\omega} (t) \ .
\end{equation}
Thus we can easily verify that
\begin{equation}
x^{\Delta\Delta} (t) + \omega^2 {\rm sinc}^2 \frac{\omega\mu}{2} x (t) \equiv
{\rm sinc}^2 \frac{\omega\mu}{2} \cos^2 \frac{\omega\mu}{2} \left(
x^{ \Delta''_\omega \Delta''_\omega} (t) + \omega^2 x (t) \right) ,
\end{equation}
which ends the proof.
\end{Proof}
The exact discretization of the harmonic oscillator equation $\ddot x + \omega^2 x = 0$ was discussed in detail in \cite{Ci-oscyl}. In particular, we presented there the discrete version of the formula \rf{ex-osc} and related results.
\section{Conclusions and future directions}
We proposed two different new approaches to the construction of exponential, hyperbolic and trigonometric functions on time scales.
The resulting functions preserve most of the qualitative properties of the corresponding continuous functions. In particular, Pythagorean trigonometric identities hold exactly on any time scale. Dynamic equations satisfied by Cayley-motivated functions have a natural similarity to the corresponding differential equations.
The first approach is based on the Cayley transformation.
It has important advantages because it simulates the behaviour of the exponential function better, both qualitatively (e.g., the Cayley-exponential function maps the imaginary axis into the unit circle) and quantitatively, because
\begin{equation}
\frac{1 + \frac{1}{2} \mu \alpha}{1 - \frac{1}{2} \mu \alpha} =
1 + \alpha \mu + \frac{1}{2} (\alpha \mu)^2 + \ldots
\end{equation}
i.e., this factor approximates $\exp (\alpha \mu)$ up to second order terms, while $1 + \mu \alpha$ or $(1 - \mu \alpha)^{-1}$ are only first order approximations.
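This order-of-accuracy claim is easy to confirm empirically: halving the step should shrink the Cayley error by a factor of about $8$ (third-order error) and the Euler error by about $4$ (second-order error). A short Python check, illustrative only:

```python
import math

# The Cayley factor (1 + x/2)/(1 - x/2) matches exp(x) through second order
# (error O(x^3)), while the Euler factor 1 + x is only first-order accurate
# (error O(x^2)).

def errors(x):
    cayley = (1 + x / 2) / (1 - x / 2)
    euler  = 1 + x
    return abs(cayley - math.exp(x)), abs(euler - math.exp(x))

e1c, e1e = errors(0.1)
e2c, e2e = errors(0.05)
print(e1c / e2c)  # ~8: the Cayley error scales like x^3
print(e1e / e2e)  # ~4: the Euler error scales like x^2
```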
Our approach has some disadvantages, as well. The promising notion of complete delta differentiability \cite{BG-part,Ci-pst} becomes very difficult or impossible to apply (because integral curves of our new dynamic systems on time scales, like \rf{xca}, are not completely delta differentiable). Moreover,
dynamic equations on time scales become implicit (while equations of the standard delta calculus on time scales are explicit). However, the nabla calculus is also implicit, and it is well known that implicit numerical finite difference schemes often have better properties than explicit schemes, see for instance \cite{HLW}.
The second approach consists in exact discretization.
In this paper we confined ourselves to the simplest case, i.e., to the exponential function $E_\alpha (t)$ with $\alpha = {\rm const}$. It would be interesting to define and study exact exponential functions for a larger class of functions $\alpha$. We leave it as an interesting open problem.
Other problems are associated with finding dynamic systems which correspond to exact discretizations.
In the case of linear equations this subject is well known, but more general results are difficult to obtain.
We point out that definitions of elementary functions on time scales are not unique. We presented some arguments in favour of our definitions, but in principle one can develop several different theories of elementary and special functions on time scales. It seems important to develop and understand different approaches, and, if possible, to find a ``vocabulary'' to translate results.
Similarly, one can develop several different theories of dynamic systems on time scales closely related to different numerical finite difference schemes.
For instance, the standard delta calculus corresponds to the forward (explicit) Euler scheme, the nabla calculus corresponds to the implicit Euler scheme, and my proposition is related to the trapezoidal rule (and to the discrete gradient methods). Therefore there are no unique ``natural'' time scales analogues of dynamic systems. One can choose among many possibilities, including the above three approaches and the exact discretization (which explicitly exists only for a very limited class of differential equations). Another promising possibility is the so-called ``locally exact'' discretization \cite{Ci-oscyl,CR-grad}.
The definitions presented in our paper seem to be entirely new as far as time scales are concerned, but their discrete analogues ($\T = \varepsilon \Z$) have been used for a long time. After completing this work, I found many references in which rational or Cayley-like forms of the exponential function are used in the discrete case, see for instance \cite{Is-Cay,Fer,Duf,ZD,DJM1,NQC,Mer,BMS}. It would be interesting to specialize our results to the quantum calculus case \cite{KC}, where the approach presented in this paper has probably not been applied yet.
It is obvious that the proposed modification of the basic definitions should have an essential influence on many branches of the time scales calculus, including the theory of dynamic equations \cite{ABOP,BP-I}, Hamiltonian systems \cite{ABR}, and the Fourier and Laplace transforms \cite{BG-Lap}.
Finally, we notice that exponential functions on time scales are defined (here and in other papers) on real time scales. It would be important to extend these definitions to the complex domain, which is so natural for continuous exponential functions. Such extensions are well known in the discrete case and in the quantum calculus, see for instance \cite{Fer,Duf,Mer,BMS}.
\par \vspace{0.5cm} \par
{\it Acknowledgements.} I am grateful to Stefan Hilger for encouragement and sending me the paper \cite{Hi-spec}, to Maciej Nieszporski for turning my attention on Mercat's and Nijhoff's papers \cite{NQC,Mer}, and to Adam Doliwa for the reference \cite{DJM1}. Discussions with Maciej Nieszporski concerning discretizations of Lax pairs, see \cite{CMNN}, turned out to be useful also in the context of this paper.
\section{Introduction}
The subsurface structure of sunspots is poorly known.
Previous attempts to use helioseismology to determine the
subsurface properties of sunspots have mainly been done
under the assumptions that the sunspot is non-magnetic
(it is usually treated as an equivalent sound-speed perturbation) and that it is a weak perturbation to the quiet-Sun.
Neither of these two assumptions is justifiable.
The helioseismology results have been\,---\,perhaps unsurprisingly\,---\,confusing and contradictory \citep[e.g.,][]{gizon09}.
The effects of near-surface magnetic and structural perturbations on solar waves
are strong, and they are more easily dealt with in numerical simulations.
Examples of such simulations of wave propagation through prescribed sunspot models
include \cite{cameron08, hanasoge08, khomenko08, parchevsky09}.
Necessary for all these attempts are appropriate model sunspots. There are numerous sunspot models
available, some of which have been
used in helioseismic studies. For recent reviews of sunspot models see, e.g., \citet{Solanki2003}, \citet{Thomas2008}, and \citet{Moradi10}.
Various authors, e.g., \citet{khomenko08a}, \citet{Moradi2008}, and \citet{Moradi2009ApJ}, have constructed magnetohydrostatic parametric sunspot models for use in helioseismology.
In a previous paper \citep{cameron08}, we considered a self-similar magnetohydrostatic model.
Although the deep structure of sunspots is of the utmost interest, it is likely to be swamped in the helioseismic observations unless we are able to accurately model and remove the surface effects \citep[see, e.g.,][]{Gizon10}. The aim of this paper is to construct a simple parametric sunspot model, which captures the main effects of the near-surface layers of sunspots on the waves.
In principle, the 3D properties (magnetic field, pressure, density, temperature, Wilson depression, flows) of sunspots can be inferred in the photosphere and above using spectropolarimetric inversions \citep[e.g.,][]{mathew03}.
However, today, these inversions are unavailable for most sunspots.
In such circumstances, we choose to construct sunspot models that are based upon
semi-empirical models of the vertical structure of typical sunspots.
In producing our models, we treat separately the thermodynamic structure and the magnetic field:
we do not require magnetostatic equilibrium. There are several reasons for
not requiring the model to be hydrostatic. The first is that we are
much more interested in the waves than in whether the background model
is magnetohydrostatic. For this reason, our priority is to get,
for example, the sound speed, density, and magnetic field close to
those which are observed.
A second reason is that we are not including non-axisymmetric structure
in our model, which certainly affects the force balance.
A third reason is that sunspots
have both Evershed and moat flows, the existence of which implies
a net force and so indicates that the sunspot is not strictly
magnetohydrostatic.
In this paper, we take the umbral and penumbral models
to match those of existing semi-empirical models.
In the absence of horizontal magnetic field measurements we assume that the
field inclination at the umbra/penumbra boundary is approximately $45^{\circ}$.
We do not consider the effects of the Evershed or moat flows,
although they could be included in the framework we set out.
The surface model of the sunspot needs to be smoothly connected to the quiet-Sun
model below the surface, and details will be provided in Section 2.
Since we are not including the chromosphere in our simulations, we have chosen
to smoothly transition the sunspot model above the surface to the quiet-Sun model.
The description for constructing models including the surface structure
of sunspots, which will be fleshed out in Section 2, is intended to be
generic. For illustrative purposes we choose parameter values of the sunspot model that are appropriate for the
sunspot in NOAA 9787, which was observed by SOHO/MDI in January 2002 \citep{gizon09}.
In Section~3, we describe the setup of the numerical simulation of the propagation of f, p$_1$, and p$_2$ wave packets through the model sunspot. We use the SLiM code \citep{cameron07}.
We briefly compare the simulations and the SOHO/MDI observations in Section~4.
In Section~5, we conclude that this simple sunspot model, which is intended to
be a good description of the sunspot's surface properties, leads to a good
helioseismic signature and provides a testbed for future studies.
\section{Model sunspot}
The sunspot models that we describe below are cylindrically symmetric and thus we use a
cylindrical-polar coordinate system to describe the spot, where $z$ is the height and $r$ is the horizontal
distance from the axis of the sunspot.
\subsection{Thermodynamical aspects}
For this paper we use the umbral Model-E of \cite{maltby86} and
the penumbral model of \cite{ding89}.
These models give the pressure and density as functions of height.
For the quiet-Sun background in which we embed the sunspot, we use Model S \citep{JCD86}.
Zero height ($z=0$) is defined as in Model S.
There is some freedom in choosing the depths of the umbral (Wilson) and penumbral depressions.
We choose a Wilson depression of 550~km.
More precisely, we place the $\tau_{5000}=1$ surface of the umbra at a height
\begin{equation}
z_{\rm{u}} = z_0 - 550 \;\mbox{km},
\end{equation}
where $z_0$ is the height of the $\tau_{5000}=1$ surface in the quiet-Sun reference model of \cite{maltby86},
approximately $-70$~km. This value of $550$~km for the Wilson depression produces a match between the density of the Maltby
model and that of the quiet Sun approximately 100~km below the $\tau_{5000}=1$ surface (see Figure~\ref{fig_atm_umbra}).
For the penumbra, we place the $\tau_{5000}=1$ surface at a height
\begin{equation}
z_{\rm{p}} = z_0 - 150 \;\mbox{km}.
\end{equation}
A penumbral depression of 150~km is consistent with spectropolarimetric measurements \citep[e.g.,][]{Mathew2004}.
The umbral model of \citet{maltby86} is plotted in Figure~\ref{fig_atm_umbra} for $z>z_{\rm u} - 116\; \rm{km} = -736$~km.
The penumbral model of \citet{ding89} is plotted in Figure~\ref{fig_atm_umbra} for $z>z_{\rm p} - 220\; \rm{km} = -440$~km.
\begin{figure}
\includegraphics[width=0.45\textwidth]{umb_p.eps}
\includegraphics[width=0.45\textwidth]{umb_rho.eps}
\includegraphics[width=0.45\textwidth]{umb_T.eps}
\includegraphics[width=0.45\textwidth]{umb_cs.eps}
\caption{The red-dashed curves show the semi-empirical umbral model from \cite{maltby86} as a function of height.
The black curves are the quiet-Sun values from Model S. The blue curves show the umbral model described in this paper.
The panels show: pressure (top-left panel), density (top-right),
temperature (bottom-left), and sound-speed (bottom-right).
The vertical dashed lines are the heights where $\tau_{5000}=1$
in the quiet-Sun ($z=z_0$) and the umbra ($z=z_{\rm u}$).}
\label{fig_atm_umbra}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{pen_P.eps}
\includegraphics[width=0.45\textwidth]{pen_rho.eps}
\includegraphics[width=0.45\textwidth]{pen_T.eps}
\includegraphics[width=0.45\textwidth]{pen_cs.eps}
\caption{The red-dashed curves show the semi-empirical penumbral model from \cite{ding89}.
The black curves are the quiet-Sun values from Model S.
The blue curves show the penumbral model described in this paper.
Shown are, the pressure (top-left panel), density (top-right),
temperature (bottom-left), and sound-speed (bottom-right).
The vertical dashed lines are the heights where $\tau_{5000}=1$
in the quiet-Sun ($z=z_0$) and the penumbra ($z=z_{\rm p}$).}
\label{fig_atm_penumbra}
\end{figure}
\subsection{Geometrical aspects}
The umbral and penumbral models discussed above need to be smoothly embedded in Model S.
In order to do so, we modify the semi-empirical models above and below the $\tau_{5000}=1$ levels of the umbra and the penumbra, as follows.
In our model, the pressure along the axis of the sunspot, denoted by $P_{\rm u}$, is such that
\begin{equation}
\ln P_{\rm u}(z) = w(z) \ln P_{\mathrm{Maltby}}(z) + (1-w(z)) \ln P_{\mathrm{qs}}(z) .
\end{equation}
In this expression, $P_{\mathrm{Maltby}}(z)$ is the umbral pressure given by \citet{maltby86} for $z>-736$~km, and is extrapolated for $z<-736$~km assuming constant pressure scale height. The quantity $P_{\mathrm{qs}}(z)$ is the quiet-Sun pressure from Model S.
The function $w(z)$ is a weighting function given by
\begin{equation}
w(z)= \left\{
\begin{array}{l l}
0& z \le z_{\rm{u}} -380\;\mbox{km} ,\\
\cos[\pi \frac{(z_{\rm{u}} - 80 \;\mbox{km} - z)}{300 \;\mbox{km}}] /2 + 1/2 &z_{\rm{u}} - 380 \;\mbox{km} < z \le z_{\rm{u}} - 80 \;\mbox{km} ,\\
1 & z_{\rm{u}} - 80 \;\mbox{km} < z \le z_{\rm{u}} + 20 \;\mbox{km} ,\\
\cos[\pi \frac{(z_{\rm{u}} + 20 \;\mbox{km} - z)}{(900 \;\mbox{km} - z_{\rm{u}})}]/2 + 1/2 & z_{\rm{u}} + 20 \;\mbox{km} < z \le 900 \;\mbox{km} , \\
0 & 900 \;\mbox{km}<z .
\end{array} \right.
\end{equation}
The resulting umbral pressure is plotted in Figure~\ref{fig_atm_umbra}. The pressure near $z=z_{\rm u}$ is that of \citet{maltby86}. Below, it smoothly merges with the quiet Sun pressure at $z=z_{\rm u} - 380 \; \rm{km} = - 1000$~km. Above, the pressure tends to the quiet-Sun value and there is a significant departure from \citet{maltby86} for $z \gtrsim -200$~km (by more than a factor of two).
The vertical profile of the density is treated in the same manner as the pressure. The Maltby model does not explicitly contain all the properties that we require, so we use the OPAL tables to derive the sound speed from the pressure and the density.
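The blending procedure above can be sketched in a few lines of Python. This is an illustrative sketch: the value $z_{\rm u} \approx -620$~km is derived from $z_0 \approx -70$~km and the 550~km Wilson depression, and the weight function reproduces the piecewise raised cosine verbatim.

```python
import math

# Sketch of the vertical blending of the umbral pressure: the blend is
# log-linear, ln P_u = w ln P_Maltby + (1 - w) ln P_qs, with w(z) the
# piecewise raised cosine defined in the text.

Z_U = -620.0  # km; assumed from z_0 ~ -70 km and a 550 km Wilson depression

def w_umbra(z, zu=Z_U):
    if z <= zu - 380:
        return 0.0
    if z <= zu - 80:
        return 0.5 * math.cos(math.pi * (zu - 80 - z) / 300.0) + 0.5
    if z <= zu + 20:
        return 1.0
    if z <= 900:
        return 0.5 * math.cos(math.pi * (zu + 20 - z) / (900.0 - zu)) + 0.5
    return 0.0

def blended_pressure(z, p_maltby, p_qs):
    w = w_umbra(z)
    return math.exp(w * math.log(p_maltby) + (1 - w) * math.log(p_qs))
```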
The penumbral pressure and the density of \citet{ding89} are embedded in Model S in a similar way as was done
for the umbra, except that the weighting function $w(z)$ is replaced by
\begin{equation}
w(z)= \left\{\begin{array}{l l}
0& z \le z_{\rm{p}} -280\;\mbox{km},\\
\cos[\pi \frac{(z_{\rm{p}} - 80 \;\mbox{km} - z)}{200 \;\mbox{km}}] /2 + 1/2 &z_{\rm{p}} - 280 \;\mbox{km} < z \le z_{\rm{p}} - 80 \;\mbox{km} ,\\
1 & z_{\rm{p}} - 80 \;\mbox{km} < z \le z_{\rm{p}} + 20 \;\mbox{km} ,\\
\cos[\pi \frac{(z_{\rm{p}} + 20 \;\mbox{km} - z)}{(800 \;\mbox{km} - z_{\rm{p}})}]/2 + 1/2 & z_{\rm{p}} + 20 \;\mbox{km} < z \le 800 \;\mbox{km}\\
0 & 800 \;\mbox{km} < z.
\end{array} \right.
\end{equation}
The resulting penumbral pressure and density are plotted in Figure~\ref{fig_atm_penumbra}.
Thus far we have described umbral, penumbral, and quiet-Sun models.
We use them to form a three-dimensional cylindrically-symmetric sunspot model.
Since we are modeling only the very near-surface layers of the sunspot (top $\sim$1~Mm), we do not
pay excessive attention to the fanning of the interfaces with depth.
In our 3D sunspot model, each thermodynamic quantity is the product of a function of $z$ and a
function of $r$.
The radii of the umbra and the penumbra are denoted by $r_{\rm u}$ and $r_{\rm p}$.
We combine the three model components (umbra, penumbra, quiet-Sun) in the $r$ coordinate
using the weight functions shown in Figure~\ref{fig_weights}.
The transitions between the 1-D atmospheres have a width of $6$~Mm and are described by raised cosines.
For the sunspot in Active Region 9787, we take $r_{\rm{u}}=10$~Mm and $r_{\rm{p}}=20$~Mm for the umbral and penumbral radii.
The sound speed was again reconstructed using the density, pressure, and the OPAL tables to ensure consistency.
We note that the temperature does not appear in the equations and is used only for the purpose of calculating the properties of the spectral lines used in helioseismology (e.g., the MDI Ni 676.8~nm line).
\begin{figure}
\includegraphics[width=0.45\textwidth]{masks.eps}
\caption{Relative weights used to combine the umbral (red), penumbral (black), and quiet-Sun (blue) models in the horizontal radial direction.}
\label{fig_weights}
\end{figure}
\subsection{Observables}
Doppler velocity is the primary observable in helioseismology.
In this section, which is parenthetical, we consider the
question of how to compute a quantity resembling the observations from the sunspot model.
This topic has previously been considered by, e.g., \cite{wachter08}.
Here we use the STOPRO code \citep{solanki87} to synthesize, assuming LTE, the formation
of the Ni 676.8~nm line, which is the line used by the SOHO/MDI instrument.
We obtain a continuum intensity image near the Ni line, shown in Figure~\ref{fig_sim_I}, together with observations. The bright rings at the transitions between the umbra, penumbra, and quiet Sun are undesirable artifacts due to the simplified transitions between the different model components.
This may have to be addressed in a future study by adjusting the weights from Figure~\ref{fig_weights}.
\begin{figure}
\includegraphics[width=0.48\textwidth]{sim_I.eps}
\includegraphics[width=0.48\textwidth]{obs_I.eps}
\caption{Left panel: Model of the sunspot continuum intensity image near the Ni $676.8$~nm line calculated using STOPRO. The red arcs have radii 10 Mm, 20 Mm, and 30 Mm.
Noticeable are bright rings at the transitions between the umbra, penumbra, and quiet Sun.
Right panel: Observed SOHO/MDI intensity image of the sunspot in AR 9787 averaged over 20\,--\,28 January 2002. }
\label{fig_sim_I}
\end{figure}
We compute the response functions for vertical velocity perturbations \citep{beckers75} as functions of height, horizontal position, and wavelength, $\lambda$. We integrate the response function over the wavelength
bandpass [$-100$~m\AA, $-34$~m\AA] measured from line center. This wavelength range was chosen because
it is where the slope of the line is maximal, i.e. where the response to velocity perturbations
is largest. The response function is normalized so that its $z$-integral is unity at each horizontal position. These are the weights (Figure~\ref{fig_sim_rf}) to be multiplied by the vertical velocity and then integrated in the vertical direction to give a Doppler velocity map.
Whether the bright rings affect the Doppler velocity in practice is unclear. We believe, however, that the response function computed here will be useful in studies investigating the diagnostic potential of the helioseismic signal within the sunspot.
\begin{figure}
\includegraphics[width=0.5\textwidth]{sim_rf_norm.eps}
\caption{The weights (being the normalized response function of the Ni 676.8~nm line)
which allow us to create synthetic helioseismic Doppler velocity maps. The height
is with reference to the $\tau_{5000}=1$ surface in the quiet-Sun. }
\label{fig_sim_rf}
\end{figure}
\subsection{Magnetic field}
When observations of the vector magnetic field are available, they can and should be
included in constraining the sunspot model. In this paper, we consider a more generic case and assume a Gaussian dependence of the vertical
magnetic field in the radial direction. The important parameters are then the maximum magnetic field at the surface, $B_0$,
the half-width at half maximum (HWHM) of the Gaussian profile at the surface, $h_0$, and the inclination of the magnetic field at the umbral-penumbral boundary.
Explicitly, the dependence of $B_z$ on $r$ and $z$ is given by
\begin{eqnarray}
B_z(r,z)=B_z(r=0,z) \exp[-(\ln 2) r^2/ h(z)^2]
\label{eq.radspot}
\end{eqnarray}
where $h(z)$ is the half width at half maximum (HWHM) of the Gaussian.
Since the magnetic flux at each height must be conserved, we have
\begin{equation}
h(z)=h_0 \sqrt{B_0/B_z(r=0,z)} ,
\end{equation}
where $h_0$ is the surface value of $h$.
For the sunspot in Active Region 9787 we have chosen $B_0 = 3$~kG and $h_0=10$~Mm.
The function $B_z(r=0,z)$ is unknown. It is a goal of helioseismology to constrain $B_z$ along the sunspot axis.
Here we choose a two-parameter function as follows:
\begin{eqnarray}
B_z(r=0,z)=B_0 e^{-(z/\alpha) L(-z/\alpha)L(z/\alpha_2)} ,
\label{eq.logistic}
\end{eqnarray}
where $L(z)=1/(1+e^{-z})$ is the logistic function.
Since $d B_z /dz(r=0,z=0) = - B_0/(4\alpha)$, the parameter $\alpha$ controls the vertical gradient of $B_z$
near the surface. Together with the condition $\nabla\cdot{\bf B}=0$, $d B_z/dz (r=0,z=0)$ determines the full
surface vector magnetic field. We chose $\alpha=1.25$~Mm so that at $z=0$ the inclination of the magnetic
field is $45^\circ$ at the umbra/penumbra boundary, in agreement with observations. The parameter $\alpha_2$
controls the field strength at depth; we chose $\alpha_2=18.4$~Mm.
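The field model can be sketched and sanity-checked numerically: with the flux-conserving width $h(z)$, the vertical flux $\int_0^\infty B_z \, 2\pi r\, {\rm d}r = \pi h_0^2 B_0/\ln 2$ is the same at every height, and the on-axis gradient at $z=0$ equals $-B_0/(4\alpha)$. The parameter values below are those quoted in the text; the quadrature details are choices of this sketch.

```python
import math

# Sketch of the magnetic field model with the AR 9787 parameters quoted in
# the text. Checks: height-independent vertical flux and on-axis gradient
# dBz/dz(0,0) = -B0/(4*alpha).

B0, H0, ALPHA, ALPHA2 = 3000.0, 10.0, 1.25, 18.4  # G, Mm, Mm, Mm

def L(z):                       # logistic function
    return 1.0 / (1.0 + math.exp(-z))

def bz_axis(z):                 # on-axis vertical field
    return B0 * math.exp(-(z / ALPHA) * L(-z / ALPHA) * L(z / ALPHA2))

def bz(r, z):                   # Gaussian radial profile, flux-conserving width
    h = H0 * math.sqrt(B0 / bz_axis(z))
    return bz_axis(z) * math.exp(-math.log(2) * r**2 / h**2)

def flux(z, rmax=200.0, n=20000):
    dr = rmax / n               # midpoint quadrature of int 2*pi*r*Bz dr
    return sum(2 * math.pi * (i + 0.5) * dr * bz((i + 0.5) * dr, z) * dr
               for i in range(n))

print(flux(0.0), flux(-10.0))   # equal: the tube carries the same flux
```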
Figure~\ref{fig_spot_field} summarizes the magnetic properties of the sunspot model.
The magnetic field strength increases rapidly with depth as a consequence of the choice of a large $\alpha_2$
in Equation~(\ref{eq.logistic}). Equation~(\ref{eq.radspot}) implies in turn that the radius of the tube shrinks fast,
from 10~Mm at the surface down to $1.3$~Mm at $z=-25$~Mm (just spatially resolved in the numerical simulations of Section~3).
This sunspot model---as monolithic as imaginable---is only one possible choice among many. We note that it would be
straightforward to consider model sunspots that fan out as a function of depth, e.g., by decreasing the parameter $\alpha_2$.
Figure~\ref{fig_spot_field} also shows the ratio of the fast-mode speed, $v_{\rm f}$, to the sound speed, $c$, along the
sunspot axis, which is nearly unity for depths greater than 1 Mm, but increases very rapidly near the surface. For example,
we have $v_{\rm f}/c = 1.9$ at $z=z_u$ and $v_{\rm f}/c = 3.9$ at $z=-400$~km. The height at which the Alfv\'en and sound
speeds are equal ($a=c$) is $z=-750$~km.
We comment that the construction of both the thermodynamic and magnetic properties of this
model aims to get the surface properties correct. Additional information about the surface,
when available, could easily be incorporated into our sunspot model. Where information is missing
the assumption that the sunspot has similar properties to those of other sunspots is possibly the
best that can be done. A summary of the choices made to model the sunspot in AR 9787, which was
observed by MDI, is given in Table 1.
\begin{figure}
\includegraphics[width=0.45\textwidth]{Bz_axis.eps}
\includegraphics[width=0.45\textwidth]{VfvsCs.eps}
\includegraphics[width=0.45\textwidth]{B_angle.eps}
\includegraphics[width=0.45\textwidth]{B_surf.eps}
\caption{The upper left panel shows the vertical magnetic field strength along the
axis of the spot. The upper right panel shows the ratio of the fast mode speed to the sound speed also along the
sunspot axis. The lower panels show the inclination (left) and strength (right) of the magnetic field at $z=0$.}
\label{fig_spot_field}
\end{figure}
\begin{table*}
\center
\caption{Basic choices made in constructing the sunspot model for AR 9787.}
\begin{tabular}{ll}\hline
Umbral model & \citet{maltby86} \\
Penumbral model& \citet{ding89} \\
Umbral radius & $r_{\rm{u}} = 10$ Mm \\
Penumbral radius& $r_{\rm{p}} = 20$ Mm \\
Umbral depression ($\tau_{5000}=1$)& $z_0 - z_{\rm{u}}$ = 550 km \\
Penumbral depression ($\tau_{5000}=1$)& $z_0 - z_{\rm{p}}$ = 150 km \\
On-axis surface vertical field& $B_{0}$ = 3 kG \\
Radial profile of $B_z$ & Gaussian with HWHM=10 Mm \\
Field inclination at umbra/penumbra boundary & $45^\circ$ \\ \hline
\end{tabular}
\end{table*}
\section{Numerical simulation of the propagation of waves through the sunspot model}
We want to simulate the propagation of planar wave packets through the model sunspot
and the surrounding quiet-Sun using the SLiM code described in \cite{cameron07}
and \cite{cameron08}. The size of the box is $145.77$~Mm in both horizontal coordinates.
We used 200 Fourier modes in each of the two horizontal directions; almost all of the wave energy resides
in the 35 longest horizontal wavelengths.
The vertical extent of the box is $-25$~Mm $< z < 2.5$~Mm, where the physical domain, $-20$~Mm $< z < 0.5$~Mm, is sandwiched between two ``sponge'' layers that strongly reduce wave reflection from the boundaries \citep{cameron08}. We use 1098 grid points and a finite difference scheme in the vertical direction.
Thus far the quiet-Sun reference model has been Model S which is convectively unstable.
For the purpose of computing the propagation of solar waves through our sunspot models using linear numerical simulations, we need a stable background model in which to embed the sunspot model. We use Convectively Stabilized Model B (CSM\_B) from \cite{schunker10}.
We keep the relative perturbations in pressure, density, and sound speed fixed. For example, this means that the new pressure is given by
\begin{eqnarray}
p(r,z)=P(r,z) \; \frac{P_{\mathrm{CSM\_B}}(z)}{P_{\mathrm{qs}}(z)},
\end{eqnarray}
where $P(r,z)$ is the pressure as discussed above. The same treatment is applied to density, $\rho$, and sound speed, $c$.
Separate simulations are done for f, p$_1$, and p$_2$ wave packets. In each case, a wave
packet is made of approximately 30 Fourier modes. At $t=0$ all the modes are in phase at $x=-43.73$ Mm,
while the sunspot is centered at $x=y=0$. The initial conditions at $t=0$ are such that the wave packet
propagates in the positive $x$ direction, towards the sunspot.
The Alfv\'en velocity increases strongly above the photospheric layers of the spot
(see Figure~\ref{fig_spot_field}). This is a problem for two reasons. The first and most important
of these is that it causes the wavelengths of both the fast-mode and Alfv\'en waves to become
very large. The upper boundary, which is located at $z=2.5$ Mm, becomes only a fraction of a
wavelength away from the surface. This makes the simulation sensitive to the artificial upper
boundary. The second reason is that our code is explicit and hence subject to a CFL condition:
High wave speeds require very small time steps and correspondingly large amounts of
computer power. In order to address this problem, we notice that we expect all solar waves
that reach the region where the Alfv\'en speed is high to continue to propagate upward out of the
domain. We therefore reduce the Alfv\'en speed in this region in the simulation to increase the
time the waves spend in the sponge layer. This reduces the influence of the upper boundary and allows us to have
a larger time step. In practice, we multiply the Lorentz force by $8 c^2 /(8 c^2 +a^2)$, where
$a(r,z)$ is the Alfv\'en speed, and $c(z)$ is the sound speed of the quiet Sun at the same height.
This limits the fast-mode speed to be a maximum of three times the quiet-Sun
sound speed at the same geometric height. Since the sound speed near the
$\tau=1$ layer in the spot is less than half of that of the quiet Sun at the same height,
the maximum value of the fast-mode speed in this critical region is approximately six times the local
sound speed. The functional form chosen to limit the Lorentz force modifies $a$ at and below
the $c=a$ level by less than 2\%. This corresponds to a change in field strength of
less than 30~G or a minimal change in the height of the $c=a$ surface.
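A minimal numerical sketch of this limiter (the function and variable names are ours, not the code's):

```python
import numpy as np

def limited_fast_speed(c_qs, a):
    """Fast-mode speed after scaling the Lorentz force by
    8 c^2 / (8 c^2 + a^2), where c_qs is the quiet-Sun sound speed at
    the same height and a is the unlimited Alfven speed.  The effective
    Alfven speed squared is a^2 * 8 c^2 / (8 c^2 + a^2) < 8 c^2, so the
    fast speed sqrt(c^2 + a_eff^2) always stays below 3 c_qs."""
    a_eff2 = a**2 * 8 * c_qs**2 / (8 * c_qs**2 + a**2)
    return np.sqrt(c_qs**2 + a_eff2)
```

At a height where the local sound speed (and hence $a$ at the $c=a$ level) is less than half of $c_{\rm qs}$, the scaling changes $a$ by under 2\%, consistent with the estimate in the text.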
\section{Preliminary comparison of simulations with observed MDI cross-covariances of the
Doppler velocity}
There exist excellent observations (SOHO/MDI Doppler velocity) of helioseismic waves around the sunspot in AR 9787
\citep{gizon09}. The wave field around the sunspot can be characterized by the temporal cross-covariance
of the observed random wave field. It has been argued \citep[][and references therein]{Gizon10} that the observed cross-covariance
is closely related to the Green's function (the response of the Sun to a localized source).
Hence the observed cross-covariance is comparable to the surface
vertical velocity from initial-value numerical simulations of the wave packet propagation.
In \cite{cameron08} we studied the propagation of an f-mode wave packet through a
simplified magnetohydrostatic sunspot model. We found that we could constrain the
surface magnetic field by comparing the observed f-mode cross-covariance with a simulated f-mode wave packet.
Here we assess the seismic signature of the semi-empirical sunspot model described in Section~2 by comparing the simulations and observations.
The cross-covariance is constructed in the same way as in \cite{cameron08}, to which we refer the reader. In short, it is computed according to
\begin{equation}
C(x,y,t)=\int_0^T {\bar{\phi}} (t')\phi(x,y,t'+t) \, dt',
\end{equation}
where $T=7$~days is the observation time, $\phi$ is the observed Doppler velocity, ${\bar{\phi}}$ is the average of $\phi$ over the line $x=-43.73$~Mm, and $t$ is the time lag. We select three different wave packets (f, p$_1$, and p$_2$) by filtering $\phi$ along particular mode ridges.
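In discrete form, the time integral becomes a lagged sum over the observation window. The sketch below assumes a filtered data cube \texttt{phi[t, y, x]} and sample-unit lags; the array names are ours, not the pipeline's.

```python
import numpy as np

def cross_covariance(phi, ix_ref, lags):
    """Discrete analogue of
        C(x, y, t) = int_0^T phibar(t') phi(x, y, t' + t) dt'
    for a filtered Doppler-velocity cube phi[t, y, x].  phibar is the
    average of phi over the reference line x = x[ix_ref]; lags are
    integer time lags in samples."""
    phibar = phi[:, :, ix_ref].mean(axis=1)       # average over the line
    nt = phi.shape[0]
    C = np.empty((len(lags),) + phi.shape[1:])
    tp = np.arange(nt)
    for i, lag in enumerate(lags):
        ok = (tp + lag >= 0) & (tp + lag < nt)    # stay inside the window
        C[i] = np.tensordot(phibar[tp[ok]], phi[tp[ok] + lag], axes=(0, 0))
    return C
```

For a wave of period $P$, $C$ oscillates in the lag $t$ with the same period, which is why the cross-covariance can stand in for the deterministic wave-packet response.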
As said in Section~3, for the numerical simulations we consider plane wave packets starting at $x=-43.73$~Mm and propagating in the $+x$ direction towards the model sunspot (at the origin). The initial conditions of the simulation are chosen such that, in the far field, the simulated vertical velocity has the same temporal power as the observed cross-covariance.
Figure~\ref{fig_p1_s} shows the observed cross-covariances
and simulated wave packets for the p$_1$ modes at four consecutive time lags.
The main features seen in the cross-covariances, the speedup of the waves across the
sunspot as well as their loss of energy at short wavelengths, are also seen in the simulations.
The details of the perturbed waveforms can only be understood in the context of finite-wavelength scattering \citep[][]{Gizon2006}.
\begin{figure}
\includegraphics[width=0.45\textwidth, trim=2.5cm 2.5cm 3.5cm 0.05, clip=true]{P1_slice_sim0.eps}
\includegraphics[width=0.45\textwidth, trim=2.5cm 2.5cm 3.5cm 0.05, clip=true]{P1_slice_obs0.eps}
\includegraphics[width=0.45\textwidth, trim=2.5cm 2.5cm 3.5cm 0.05, clip=true]{P1_slice_sim140.eps}
\includegraphics[width=0.45\textwidth, trim=2.5cm 2.5cm 3.5cm 0.05, clip=true]{P1_slice_obs140.eps}
\includegraphics[width=0.45\textwidth, trim=2.5cm 2.5cm 3.5cm 0.05, clip=true]{P1_slice_sim144.eps}
\includegraphics[width=0.45\textwidth, trim=2.5cm 2.5cm 3.5cm 0.05, clip=true]{P1_slice_obs144.eps}
\includegraphics[width=0.45\textwidth, trim=2.5cm 0.05 3.5cm 0.05, clip=true]{P1_slice_sim148.eps}
\includegraphics[width=0.45\textwidth, trim=2.5cm 0.05 3.5cm 0.05, clip=true]{P1_slice_obs148.eps}
\caption{(Left panels) Simulated vertical velocity of p$_1$ wave packets at times $t=0,$ 140, 144, and
148~min (from top to bottom). White shades are for positive values, dark shades for negative values.
The gray scale covers the full range of values in each row.
(Right panels) Observed cross-covariance at the corresponding time lags.
The $t=0$ frame from the simulation is the prescribed initial condition.
The red and blue crosses (hereafter points A and B respectively) indicate two particular locations used in the subsequent analysis (Figures~\ref{fig_ts} and \ref{fig_fd}).}
\label{fig_p1_s}
\end{figure}
For a more detailed comparison we have concentrated on the two spatial locations $A=(x,y)=(60,0)$~Mm and $B=(60,60)$~Mm, marked
with crosses in Figure~\ref{fig_p1_s}. The first point lies behind the sunspot, where the effects of the sunspot are easily noticeable.
The second point is away from the scattered field and serves as a quiet-Sun reference.
The corresponding simulated and observed wave packets are plotted as functions of time in Figure~\ref{fig_ts}. The match again looks qualitatively good.
\begin{figure}
\includegraphics[width=0.45\textwidth]{f_sim_ts.eps}
\includegraphics[width=0.45\textwidth]{f_obs_ts.eps}
\includegraphics[width=0.45\textwidth]{P1_sim_ts.eps}
\includegraphics[width=0.45\textwidth]{P1_obs_ts.eps}
\includegraphics[width=0.45\textwidth]{P2_sim_ts.eps}
\includegraphics[width=0.45\textwidth]{P2_obs_ts.eps}
\caption{Time series from the simulations (left panels) and from the
observed cross-covariance
(right panels) for the (top to bottom) f, p$_1$ and p$_2$ wave packets. The red curve corresponds
to point A (the red cross) in Figure~\ref{fig_p1_s}, the blue curve to
point B (the blue cross).}
\label{fig_ts}
\end{figure}
To proceed further we Fourier analyze the time series
at points A and B: we consider the power spectra and phases.
The temporal power spectra are shown in the left panels of Figure~\ref{fig_fd}.
The power spectrum of the simulations is similar to that of the cross-covariance for
the p$_1$ modes, less so for the f and p$_2$ modes. We note that the differences between
the simulated and observed power spectra are seen at both spatial points.
We also analyzed the phase difference between the wave packets at the two points (A minus B) as a function of
frequency (Figure~\ref{fig_fd}). The phase shifts introduced by the sunspot are reasonably well reproduced by the
simulations, although some differences exist especially for the f modes.
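The power-and-phase comparison can be sketched as follows; the windowing and frequency binning of the actual analysis are more careful than this minimal version, and the function name is ours.

```python
import numpy as np

def power_and_phase_difference(sig_a, sig_b, dt):
    """Temporal power spectrum of each time series and the phase of A
    minus the phase of B at each frequency (A behind the sunspot,
    B the quiet-Sun reference)."""
    fa, fb = np.fft.rfft(sig_a), np.fft.rfft(sig_b)
    freqs = np.fft.rfftfreq(len(sig_a), d=dt)
    dphase = np.angle(fa * np.conj(fb))   # radians, in (-pi, pi]
    return freqs, np.abs(fa)**2, np.abs(fb)**2, dphase
```

A negative phase difference at a given frequency means the wave at point A lags the reference at point B at that frequency.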
\begin{figure}
\includegraphics[width=0.45\textwidth]{f_amp.eps}
\includegraphics[width=0.45\textwidth]{f_phase.eps}
\includegraphics[width=0.45\textwidth]{p1_amp.eps}
\includegraphics[width=0.45\textwidth]{p1_phase.eps}
\includegraphics[width=0.45\textwidth]{p2_amp.eps}
\includegraphics[width=0.45\textwidth]{p2_phase.eps}
\caption{Frequency analysis of the curves shown in Figure~\ref{fig_ts}.
The left panels show the power spectra of the time series from Figure~\ref{fig_ts},
with the solid lines indicating the values for the numerical simulations
and the dots indicating the values from the observed cross-covariances.
The red curves are for point A (behind the sunspot) and the blue curves for point B (away from the sunspot).
The top panels are for f mode wave packets, the middle for p$_1$, and the bottom for p$_2$.
The right panels show the phase differences between points A and B, i.e. sunspot minus quiet-Sun.
The solid lines are for the numerical simulations and the dots are for the observations.
The error bars are root-mean-square deviations computed over $0.3$-mHz intervals.
}
\label{fig_fd}
\end{figure}
In both the observations and the simulations we see that the power in the f and p$_1$ wave packets
peaks substantially below $3$~mHz. The reason for this can be understood by noting that
wave attenuation increases sharply with frequency. Even though the initial power spectra have most of their energy near 3~mHz,
very little of the power at and above 3~mHz reaches the points $A$ and $B$. The remaining low-frequency
waves are more sensitive to the deeper layers and are thus the most important in
determining the subsurface structure of the sunspot. It is also apparent that the sunspot more
strongly ``damps'' higher-frequency waves. This is understandable since these waves have more of their
energy in the near-surface region, where mode conversion (from the f and p modes into
Alfv\'en and slow-magnetoacoustic modes that propagate away from the surface) is efficient.
Over the frequency range studied, the power ``absorption'' coefficient increases with frequency \citep[in accordance with][]{Braun95,Braun2008}.
The same trend is also seen in the phase shifts, with low-frequency waves being almost unaffected by the sunspot whilst high-frequency waves have phase shifts of up to 300 degrees (over the frequencies shown). A strong frequency dependence of the travel-time perturbations had already been reported by \citet{Braun2008} for ridges p$_1$ through p$_4$ \citep[see also][]{Braun2006,Birch2009}.
The reason why the phase shifts are strongest for high-frequency waves is again that they have more energy near the surface.
The possibility of such strong phase shifts should be borne in mind when interpreting (and measuring) helioseismic waves
near sunspots.
\section{Conclusion}
We have outlined a method for constructing magnetic models of the near-surface
layers of sunspots by including available observational constraints
and semi-empirical models of the umbra and penumbra.
Applying this type of model to the sunspot of AR 9787, we showed that
the observed helioseismic signature of the sunspot model is reasonably well captured.
Our approach was to compare numerical simulations of wave propagation through the model sunspot
and the observed SOHO/MDI cross-covariances of the Doppler velocity.
Possible improvements of the simulation include a better treatment of wave attenuation (especially in plage),
improved initial conditions, and inclusion of the moat flow. It should also be noted that improved observations
of the surface vector magnetic field by SDO/HMI will help tune the sunspot models.
The dominant influence of the sunspot's surface layers on helioseismic waves means that these layers must be modeled
very well in order to extract information from the much weaker signature of the sunspot's
subsurface structure. The sunspot model used here accounts for most of the
helioseismic signal, although there are substantial differences that remain to be explored.
In any case, the model provides a testbed which is sufficiently similar to a real sunspot to be used for
numerous future studies in sunspot seismology.
Finally, we remark that our numerical simulations of linear waves could potentially be used to interpret other helioseismic observations than the cross-covariance (cf. helioseismic holography or Fourier-Hankel analysis), as well as other magnetic phenomena \citep[e.g., small magnetic flux tubes, see][]{Duvall2006}.
\section*{Acknowledgments}
This work is supported by the European Research Council under the European Community's Seventh Framework Programme/ERC grant agree\-ment \#210949, ``Seismic Imaging of the Solar Interior'', to PI L. Gizon (contribution towards Milestones 3 and 4). SOHO is a project of international collaboration between ESA and NASA.
\bibliographystyle{spr-mp-sola}
\section{Introduction}
Infinite dimensional manifolds $\mathcal M$ such as the loop space Maps$(S^1,M)$ of a manifold or more generally the space Maps$(N,M)$ of maps between
manifolds have interesting geometry. The structure group of these infinite dimensional manifolds
(i.e. of their tangent bundles $\mathcal E$) is a gauge group of a finite rank bundle $E\to N$ over the source space. When the manifolds have
Riemannian metrics, the mapping spaces have Levi-Civita connections with connection and
curvature forms taking values in $\Psi{\rm DO}_{\leq 0} = \Psi{\rm DO}_{\leq 0}(E)$, the algebra of nonpositive integer order classical pseudodifferential operators ($\Psi{\rm DO}$s) acting
on sections of $E$. Thus for geometric purposes, the structure group should be enlarged to
$\pdo_0^*$, the group of invertible zeroth order classical $\Psi{\rm DO}$s, since at least formally Lie$(\pdo_0^*) = \Psi{\rm DO}_{\leq 0}.$
As discussed in \cite{L-P}, \cite{MRT}, \cite{P-R2}, the generalizations of Chern-Weil theory to $\pdo_0^*$-bundles
are classified by the set of traces on $\Psi{\rm DO}_{\leq 0}$, i.e. by the Hochschild cohomology group
$HH^0(\Psi{\rm DO}_{\leq 0}, {\mathbb C}).$ Indeed, given such a trace $T:\Psi{\rm DO}_{\leq 0}\to {\mathbb C}$, one defines $T$-Chern classes of a connection with curvature $\Omega\in\Lambda^2(\mathcal M, \Psi{\rm DO}_{\leq 0})$ by the de Rham class
$c_k^T(\mathcal E) = [T(\Omega^k)]\in H^{2k}(\mathcal M, {\mathbb C}).$
These traces roughly break into two classes: the Wodzicki residue and the integral of the zeroth/leading order symbol
over the unit cosphere bundle. In \S2, we prove that the Wodzicki-Chern classes vanish if
the structure group reduces from $\pdo_0^*$ to the subgroup with leading order symbol given by the identity.
We conjecture that the Wodzicki-Chern classes always vanish, and sketch a possible
superconnection proof.
These vanishing results, which were previously known only for loop spaces, reinforce the importance of the nontrivial Wodzicki-Chern-Simons classes produced in \cite{MRT}.
In the beginning of \S3, we discuss the analytic and topological issues involved with universal bundle calculations of Chern classes associated to the leading order symbol trace.
The main issue is that the classifying space $B\Psi{\rm DO}_0^*$ may not be a manifold, so we want to
extend the leading order symbol trace to an algebra whose corresponding classifying space is clearly a manifold.
\S3 is devoted to the analytic issue of extending the leading order symbol trace to a Lie algebra containing $\Psi{\rm DO}_{\leq 0}$. It is easier to work with the quotient of $\Psi{\rm DO}_{\leq 0}$ by smoothing operators, and in Proposition \ref{prop extend} we find an extension (really a factorization)
of the leading order symbol trace to a
relatively large subalgebra ${\mathcal Q}$ inside a quotient
of the set of all bounded operators on a Hilbert space. These results may not be optimal, so this section should be considered work in progress.
Unfortunately, the classifying space $B\mathcal Q$ associated to $\mathcal Q$ may not be a manifold, so we are unable to construct universal geometric characteristic classes. In \S4, we take a smaller
extension of the leading order symbol trace with corresponding classifying space a manifold. We then
show that the leading order Chern classes of gauge bundles are nontrivial in general.
This implies that there is a topological theory of characteristic classes of $\pdo_0^*$-bundles involving
the cohomology of $B\pdo_0^*$. As a first step toward understanding this cohomology,
we use the nonvanishing of leading order Chern classes on mapping spaces
to show in Theorem \ref{last theorem} that for $E^\ell\to N$,
$H^*(B\pdo_0^*, {\mathbb C})$ surjects onto
the polynomial algebra $H^*(BU(\ell), {\mathbb C}) = {\mathbb C}[c_1(EU(\ell)),\ldots,c_\ell(EU(\ell))]$.
This complements Rochon's work \cite{R} on the
homotopy groups of a certain stabilization of $\pdo_0^*$. The proof shows that $H^*(B\mathcal G,{\mathbb C})$ also surjects onto $H^*(BU(\ell),{\mathbb C})$, where $\mathcal G$ is the gauge group of $E$.
For comparison,
$H^*(B\mathcal G_0,{\mathbb C})$, where $\mathcal G_0$ is the group of based gauge transformations, has been completely determined by different methods, and $H^*(B\mathcal G,{\mathbb C})$ is known if the center of the underlying finite dimensional Lie group is finite
\cite[p.~181]{dk}.
As much as possible, we sidestep the trickier analytic and topological aspects of $B\pdo_0^*$
by working with
de Rham cohomology only. The questions of whether $\pdo_0^*$ is a tame Fr\'echet
space \cite{H}, \cite{Omori} and so has a good exponential map, the relationships among
the cohomology with respect to the norm topology, the Fr\'echet topology and intermediate Banach norm topologies, and whether the de Rham
theorem holds for $B\pdo_0^*$ \cite{beggs} are not addressed.
The role of $\pdo_0^*$ in infinite dimensional geometry was explained to us by Sylvie Paycha, and we gratefully acknowledge our many conversations with her. We also would like to thank Varghese
Mathai for suggesting we consider the closure of $\pdo_0^*$ discussed below.
The referee both pointed out serious errors and gave valuable
suggestions for
simplifying and clarifying this paper, which we gratefully acknowledge.
Finally, this paper is in many ways inspired by the seminal text \cite{shubin} of Misha Shubin, whose clear writing has made a difficult subject accessible to so many mathematicians.
\section{Vanishing of Wodzicki-Chern classes of $\Psi{\rm DO}$-bundles}
Let $\pdo_0^* = \pdo_0^*(E)$ be the group of zeroth order invertible classical $\Psi{\rm DO}$s acting on sections of a fixed finite rank complex bundle $E\to N.$
We recall the setup for $\pdo_0^*$-bundles. Fix a complete Riemannian
metric on $N$ and a Hermitian metric on $E$. For a real parameter $s_0 \gg 0$,
let $H^{s_0}\Gamma(E) = H^{s_0}\Gamma(E\to N)$ be the sections of $E$ of Sobolev class $s_0.$ This space depends on the
Riemannian metric if $N$ is noncompact, and of course can be defined via local charts without choosing a metric. Let $\mathcal E$ be a Banach bundle over a base $B$ such that locally $\mathcal E|_U \simeq U\times H^{s_0}\Gamma(E)$ and such that the transition
functions lie in $\pdo_0^*(E).$ Then we call $\mathcal E$ a $\pdo_0^*$- or $\pdo_0^*(E)$-bundle over $B$.
The role of $s_0$ is not very important. We could take the $C^\infty$ Fr\'echet topology on the sections
of $E$, since this is a tame Fr\'echet space in the sense of \cite{H}.
As explained in \cite{eells}, the tangent bundle to Maps${}^{s_0}(N,M)$, the space of $H^{s_0}$ maps between manifolds $N, M$, is a
$\pdo_0^*$-bundle.
Fix a component Maps${}^{s_0}_f(N,M)$ of a map $f:N\to M$. We can take $H^{s_0}\Gamma(f^*TM \to N)$ as the tangent space $T_f{\rm Maps}^{s_0}(N,M) =
T_f{\rm Maps}_f^{s_0}(N,M).$ Exponentiating sufficiently short
sections \\
$X\in H^{s_0}\Gamma(f^*TM)$ via $n\mapsto \exp_{f(n),M} X_n$ gives a coordinate neighborhood
of $f$ in\\
Maps${}^{s_0}(N,M)$, making Maps${}^{s_0}(N,M)$ into a Banach manifold. The transition maps for
$T{\rm Maps}^{s_0}(N,M)$
for nearby maps are of the form $d\exp_{f_1}\circ d\exp_{f}^{-1}$,
which are easily seen to be isomorphic to gauge transformations of $f^*TM.$ Since gauge
transformations are invertible multiplication operators, $T{\rm Maps}^{s_0}(N,M)$ is a $\pdo_0^*$-bundle, although at this point there is no need to pass from gauge bundles to $\pdo_0^*$-bundles.
Note that the gauge group depends on the component of $f$.
In particular, for the loop space $LM = {\rm Maps}^{s_0}(S^1,M)$, each complexified tangent space $T_\gamma LM$ is $H^{s_0}\Gamma(S^1\times {\mathbb C}^n\to S^1)$ for $M^n$ oriented. For convenience, we
will always complexify tangent bundles.
\begin{remark} These bundles fit into the framework of the families index theorem. Start with a fibration of manifolds with an auxiliary bundle
$$\begin{CD} @. E\\
@.@VVV\\
Z@>>> M\\
@.@VV{\pi}V\\
@. B
\end{CD}
$$
Here $M$ and $B$ are manifolds, and the fiber is modeled on a compact manifold $Z$. The
structure group of the fibration is Diff$(Z)$.
Pushing down the sheaf of sections of $E$ via $\pi$
gives an infinite rank bundle
$$\begin{CD} H^{s_0}\Gamma(E_b) @>>> \mathcal E = \pi_*E\\
@.@VVV\\
@. B
\end{CD}
$$
with the fiber modeled on the $H^{s_0}$ sections of $E_b = E|_{Z_b}$ over $Z_b = \pi^{-1}(b)$ for
one $b$ in each component of $B$. The structure
group of $\mathcal E$ is now a semidirect product $\mathcal G\ltimes {\rm Diff}(Z)$, where $\mathcal G$ is the gauge group of $E_b.$
In particular, for ${\rm ev}:N\times {\rm Maps}^{s_0}(N,M) \to M, {\rm ev}(n,f) = f(n)$, and
$$\begin{CD} @. E = {\rm ev}^*TM @>>> TM\\
@. @VVV @VVV\\
N@>>> N\times {\rm Maps}^{s_0}(N,M) @>{{\rm ev}}>> M\\
@.@VV{\pi = \pi_2}V@.\\
@. {\rm Maps}^{s_0}(N,M) @.
\end{CD}
$$
we get $\mathcal E = \pi_*{\rm ev}^*TM = T{\rm Maps}^{s_0}(N,M).$ Since the fibration is trivial, the structure group is just the gauge group. Defining characteristic classes for nontrivial fibrations is open at present.
\end{remark}
As explained in the introduction, any trace on the gauge group $\mathcal G$ of $E\to N$ will give a Chern-Weil
theory of characteristic classes on $\mathcal E.$ Such a trace will be used in \S3. However,
using a wider class of traces is natural, as we now explain. The
choice of Riemannian metrics on $N, M$ leads to a family of Riemannian metrics on Maps$(N,M).$ Namely, pick $s \gg 0, s \leq s_0$. For $X, Y \in T_f{\rm Maps}^{s_0}(N,M)$, set
\begin{equation}\label{one}
\langle X, Y\rangle_{f,s} = \int_N \langle X_n, (1+\Delta)^{s}Y_n\rangle_f(n)\ {\rm dvol}(n),
\end{equation}
where $\Delta = ({\rm ev}^*\nabla)^*({\rm ev}^*\nabla)$ and $\nabla = \nabla^M$ is the Levi-Civita connection on $M$. Here we assume $N$ is compact. For example, when $N = S^1$,
${\rm ev}^*\nabla$ is covariant differentiation along the loop $f$. Equivalently, we are taking the
$L^2$ inner product of the pointwise $H^{s}$ norms of $X$ and $Y$.
The metric (\ref{one}) gives rise to a Levi-Civita connection $\nabla^{s}$ by the Koszul
formula
\begin{eqnarray}\label{two}
2\langle\nabla^s_XY,Z\rangle_s &=&\ X\langle Y,Z\rangle_s +Y\langle X,Z\rangle_s
-Z\langle X,Y\rangle_s\\
&&\qquad
+\langle [X,Y],Z\rangle_s+\langle [Z,X],Y\rangle_s -\langle[Y,Z],X\rangle_s,\nonumber
\end{eqnarray}
provided the right hand side is a continuous linear functional of $Z\in$\\
$T_f{\rm Maps}^{s_0}(N,M).$ As explained in \cite{MRT}, the only problematic
term $Z\langle X,Y\rangle_s$ is
continuous in $Z$ for $s\in {\mathbb Z}^+$, but this probably fails otherwise.
Restricting ourselves to $s\in {\mathbb Z}^+$, we find that the
connection one-form and curvature two-form of $\nabla^s$ take values in $\Psi{\rm DO}_{\leq 0}({\rm ev}^*TM).$ (This is \cite[Thm. 2.1, Prop. 2.3]{MRT} for $LM$, and the proof generalizes.) Because these natural
connections do not take values in $\Gamma {\rm End}(E,E)$, the Lie algebra of the gauge group Aut$(E)$,
we have to extend the structure group of $T{\rm Maps}^{s_0}(N,M)$ to $\pdo_0^*.$ Note that $\pdo_0^*$ acts as bounded operators on $T_f{\rm Maps}^{s_0}(N,M)$ for all choices of $s_0$,
so the structure group is independent of this choice.
The
zeroth order parts of the connection and curvature forms
are just the connection and curvature forms of ${\rm ev}^*\nabla^M_{f(n)}$,
so only the negative order parts contain new information.
To extract the new information, we pick the unique trace on $\Psi{\rm DO}_{\leq 0}(E\to N)$ that detects negative order terms, namely the Wodzicki residue
\begin{equation}\label{three} {\rm res}^{\rm w}:\Psi{\rm DO}_{\leq 0}\to{\mathbb C},\ {\rm res}^{\rm w}(A) = (2\pi)^{-n}\int_{S^*N}
{\rm tr}\ \sigma_{-n}(A)(x, \xi)
\ d\xi {\rm dvol}(x),
\end{equation}
where $S^*N$ is the unit cosphere bundle of $N^n$.
We now pass to general $\pdo_0^*$-bundles $\mathcal E\to \mathcal M$ which admit connections; for example, if $\mathcal M$ admits a partition of
unity, then $\mathcal E\to\mathcal M$ possesses connections.
Standard Chern-Weil theory extends to
justify the following definition.
\begin{definition} The k${}^{\rm th}$ Wodzicki-Chern class $c_k^{\rm w}(\mathcal E)$ of the $\pdo_0^*$-bundle $\mathcal E\to \mathcal M$ admitting a connection $\nabla$ is
the de Rham cohomology class $[{\rm res}^{\rm w}(\Omega^k)]$, where $\Omega$ is the curvature of $\nabla.$
\end{definition}
As for ${\rm Maps}^{s_0}(N,M)$, the Wodzicki-Chern classes are independent of the choice of Sobolev parameter $s_0$.
These classes easily vanish for $\pdo_0^*$-bundles such as $T{\rm Maps}^{s_0}(N,M)$ which restrict to gauge bundles.
Indeed, such bundles admit a connection taking values in the Lie algebra $\Gamma
{\rm End}(E,E)$ of the gauge group, so the curvature two-form is a multiplication operator with vanishing
Wodzicki residue. Since the Wodzicki-Chern class is independent of the connection, these classes
vanish.
We give a subclass of $\pdo_0^*$-bundles for which the Wodzicki-Chern classes vanish. Recall that
paracompact Hilbert
manifolds admit partitions of unity. Since ${\rm Maps}^{s_0}(N,M)$ is a Hilbert
manifold and a metrizable
space for $M, N$ closed, it is paracompact and so admits partitions of unity. Moreover, by a theorem of Milnor, ${\rm Maps}^{s_0}(N,M)$
has the
homotopy type of a CW complex in the compact-open topology. This carries over to the Sobolev topology by e.g. putting a connection $\nabla$
on $M$, and using the heat operator associated to $\nabla^* \nabla$ to
homotop continuous maps to smooth maps.
\begin{theorem}
Let ${\rm Ell}^*\subset \pdo_0^*$ be the subgroup of invertible zeroth order elliptic operators whose leading order symbol is the identity. Assume $\mathcal M$ is a manifold homotopy equivalent to a CW complex and admitting a cover with a subordinate partition of unity. If $\mathcal E\to\mathcal M$ is an ${\rm Ell}^*$-bundle, then the Wodzicki-Chern classes $c_k^{\rm w}(\mathcal E)$ are zero.
\end{theorem}
\begin{proof}
By \cite[Prop. 15.4]{bw}, for $V = S^n$ or $B^n$, a continuous map $f:V\to {\rm Ell}^*$ is homotopic
within ${\rm Ell}^*$ to a map $g:V\to {\rm GL}_\infty$, the set of operators of the form $I + P$, where $P$ is a
finite rank operator.
For $V = S^n$, we get that the inclusion $i:{\rm GL}_\infty\to{\rm Ell}^*$ induces a surjection
$i_*:\pi_k({\rm GL}_\infty)\to \pi_k({\rm Ell}^*)$ on all homotopy groups, and for $V = B^n$ we get that $i_*$ is
injective. From the diagram
$$\begin{CD}
@>>> \pi_k({\rm GL}_\infty) @>>> \pi_k(E{\rm GL}_\infty)@>>> \pi_k(B{\rm GL}_\infty)@>>>\\
@. @Vi_* VV @V(Ei)_* VV @V(Bi)_*VV\\
@>>> \pi_k({\rm Ell}^*) @>>> \pi_k(E{\rm Ell}^*)@>>> \pi_k(B{\rm Ell}^*)@>>>
\end{CD}$$
we get $(Bi)_*:\pi_k(B{\rm GL}_\infty)\to \pi_k(B{\rm Ell}^*)$ is an isomorphism for all $k$.
These classifying spaces are weakly equivalent to CW complexes \cite[Thm. 7.8.1]{spanier}.
This implies that $[X, B{\rm GL}_\infty] = [X,B{\rm Ell}^*]$ for any CW complex $X$. In particular, any ${\rm Ell}^*$-bundle reduces to a ${\rm GL}_\infty$-bundle.
Thus $\mathcal E\to\mathcal M$ admits a ${\rm GL}_\infty$-connection.
In fact, the proof of \cite[Prop. 15.4]{bw} implicitly
shows that ${\rm GL}_\infty$ is homotopy equivalent to $\Psi_{-\infty}^*$, the group of invertible $\Psi{\rm DO}$s of the form $I + P$, where $P$ is a finite rank operator given by a smooth kernel. Thus we may assume that the connection one-form $\theta$ takes values in
Lie$(\Psi_{-\infty}^*)$, the space of smoothing operators.
The curvature two-form is given locally by
$\Omega_\alpha = d\theta_\alpha +\theta_\alpha\wedge \theta_\alpha$, and hence
$\Omega^k$ also takes values in smoothing operators.
The Wodzicki residue of $\Omega^k$ therefore vanishes,
so $c_k^{\rm w}(\mathcal E) = 0.$
\end{proof}
Based on this result and calculations, the following conjecture seems plausible.
\begin{conjecture} The Wodzicki-Chern classes vanish on any $\pdo_0^*$-bundle over a base manifold
admitting a partition of unity.
\end{conjecture}
We will see in \S3 that ${\rm Ell}^*$ is not a deformation retraction of $\pdo_0^*$ in general, so
the conjecture does not follow from the previous theorem.
We now outline a putative proof of the conjecture based on
the families index theorem setup for
trivial fibrations as in Remark 2.1; details will appear elsewhere.
As noted above, the Wodzicki-Chern classes vanish for these gauge bundles, but the
superconnection techniques given below may generalize to other, perhaps all, $\pdo_0^*$-bundles.
Let $\nabla = \nabla^0 \oplus \nabla^1$ be a graded connection on a graded infinite dimensional
bundle $\mathcal E = \mathcal E^0 \oplus \mathcal E^1$ over a base space $B$. Let $R = R^0 \oplus R^1$ be the corresponding curvature form. The connection and curvature forms take values in $\Psi{\rm DO}$s of
nonpositive order.
We choose a smooth family of nonpositive order $\Psi{\rm DO}$s
$ a : \mathcal E^0 \rightarrow \mathcal E^1$,
and set
$$ A : \mathcal E \rightarrow \mathcal E, \ A = \left(\begin{array}{cc} 0&a^*\\a&0\end{array}\right).$$
We form the superconnection
$B_t = \nabla + t^{1/2}A.$
For convenience, assume that $A$ has constant order zero. Then the heat operator
$\exp(-B_t^2)$
is a smooth family of zero order $\Psi{\rm DO}$s, as seen from an analysis of its asymptotic expansion.
The standard transgression formula for the local families index theorem is of the form
${\rm str}(\exp(-B_{t_1}^2)) - {\rm str}
(\exp(-B_{t_2}^2)) = d\int_{t_1}^{t_2} \alpha_t dt$,
where str is the supertrace
and $\int\alpha_t$ is an explicit Chern-Simons form \cite{B2}. In the $t\to\infty$ limit, the connection becomes a connection on the finite rank index bundle, and defines there a smoothing operator on which the Wodzicki residue trace is zero. As $t \to 0$, the Wodzicki residue of $e^{-B_t^2}$ approaches
the residue of $e^{-R} $.
This limit exists because we are using the Wodzicki residue and not the classical trace, as again follows from an analysis of the symbol asymptotics. This shows that the Wodzicki-Chern character
and hence the $c_k^{\rm w}(\mathcal E)$ vanish
in cohomology.
We can manipulate the
choice of $A$ and the bundle $\mathcal E^1$ to infer the result for a non-graded bundle. That is, we take $\mathcal E^0, \nabla^0, R^0$ to be a fixed $\Psi{\rm DO}_0^*$-bundle with connection
and curvature form, and take $\mathcal E^1$
to be a trivial bundle with a flat connection $\nabla^1$.
Choose $a$ to be an elliptic family of $\Psi{\rm DO}$s of order zero parametrized by $B$.
Then the graded Wodzicki-Chern character reduces to the Wodzicki-Chern character of $E^0$, and we are done.
It may be that a refined version of this argument gives the vanishing of the Wodzicki-Chern
character as a differential form, in which the Wodzicki-Chern-Simons classes of \cite{MRT}
would always be defined.
\section{Extending the leading order symbol trace}
In contrast to the Wodzicki-Chern classes, the leading order Chern classes are often nonzero, and
we can use them to detect elements of $H^*(B\pdo_0^*,{\mathbb C}).$ As above, let $\mathcal E$ be a $\pdo_0^*$-bundle with fiber modeled on $H^{s_0}\Gamma(E\to N).$
Throughout this section, we use the following conventions: (i) the manifold $N$ is closed and
connected; (ii) all cohomology groups $H^*(X,{\mathbb C})$ are de Rham cohomology; (iii)
Maps$(N,M)$ denotes ${\rm Maps}^{s_0}(N,M)$ for a fixed large Sobolev parameter $s_0$; (iv) smooth
sections of a bundle $F$ are denoted by $\Gamma F.$
\begin{definition} The k${}^{\rm th}$ leading order Chern class $c_k^{\rm lo}(\mathcal E)$ of the $\pdo_0^*$-bundle $\mathcal E\to \mathcal M$ admitting a connection $\nabla$ is
the de Rham cohomology class of
$$\int_{S^*N} {\rm tr}\ \sigma_0(\Omega^k)(n,\xi)\ d\xi\ {\rm dvol}(n),$$
where $\Omega$ is the curvature of $\nabla.$
\end{definition}
The point is that the {\it leading order symbol trace} $\int_{S^*N} {\rm tr}\ \sigma_0:\Psi{\rm DO}_{\leq 0}\to{\mathbb C}$ is a trace on this subalgebra, although it does not extend to a trace on all $\Psi{\rm DO}$s.
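To spell out the trace property: for $A, B\in \Psi{\rm DO}_{\leq 0}$, multiplicativity of the leading order symbol gives $\sigma_0([A,B]) = [\sigma_0(A), \sigma_0(B)]$ pointwise on $S^*N$, so
$$\int_{S^*N} {\rm tr}\ \sigma_0([A,B])(n,\xi)\ d\xi\ {\rm dvol}(n)
= \int_{S^*N} {\rm tr}\ [\sigma_0(A), \sigma_0(B)](n,\xi)\ d\xi\ {\rm dvol}(n) = 0,$$
since the fiberwise matrix trace vanishes on commutators.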
An obvious approach to calculating leading order classes would be to find a universal connection on $E\pdo_0^*\to B\pdo_0^*.$ However,
it seems difficult to build a model of $B\pdo_0^*$ more concrete than the general Milnor construction. In particular, it is not clear that $B\pdo_0^*$ is a manifold, so the existence of a connection on $E\pdo_0^*$ may be moot.
Alternatively,
since elements of $\pdo_0^*$ are bounded operators on the Hilbert space $\mathcal H = H^{s_0}\Gamma(E\to N)$, we can
let $\bar\Psi = \overline{\Psi{\rm DO}_0^*}$ be the closure of $\Psi{\rm DO}_0^*$ in $GL(\mathcal H)$ in the norm topology.
$\bar\Psi$
acts freely on the contractible space
$GL(\mathcal H)$, so $E\bar\Psi = GL(\mathcal H)$ and $B\bar\Psi = E\bar\Psi/\bar\Psi$.
$GL( \mathcal H)$ is a
Banach manifold, and since the Frobenius theorem holds in this context, $B\bar\Psi$ is also a Banach
manifold \cite{Omori}. In particular, $E\bar\Psi\to B\bar\Psi$ admits a connection. (It would be interesting to know if $E\bar\Psi$
has a universal connection.) Unfortunately, it is not clear that the leading order symbol trace extends to ${\rm Lie}(\bar\Psi)$, so defining leading order symbol classes for $\bar\Psi$-bundles is problematic.
We separate these problems into two issues. The first, strictly analytic, issue
is to find a large subalgebra of $\gl(\mathcal H)$ with an extension of the leading order symbol trace. The extension constructed in Proposition \ref{prop extend}
is in fact defined on a quotient algebra of a subalgebra of $\gl(\mathcal H)$.
This leads
to
a different version of $\bar\Psi$ such that $\bar\Psi$-bundles with connection have
a good theory of characteristic classes (see Definition \ref{def}). However, the existence
of a universal $\bar\Psi$-bundle with connection is unclear, so we cannot use this theory to
detect elements in $H^*(B\Psi{\rm DO}_0^*,{\mathbb C}).$
The second issue is to find a Lie algebra ${\mathfrak g}$ such that $\pdo_{\leq 0}$ surjects onto
${\mathfrak g}$ and such that the corresponding classifying space $BG$ is a manifold. In fact,
we can reinterpret well known results to show that ${\mathfrak g} = H^{s_0}\Gamma{\rm End}(\pi^*E)$ works for $E\stackrel{\pi}{\to} N$.
Since the leading order symbol trace extends to ${\mathfrak g}$, we can
define characteristic classes of $\Psi{\rm DO}^*_0$-bundles to be pullbacks of the leading order
symbol classes of $EG\to BG$ (Definition \ref{def big}). This approach allows us to
detect elements in $H^*(B\Psi{\rm DO}_0^*,{\mathbb C})$ (Theorem \ref{last theorem}).
In this section, we discuss analytic questions related to extensions of the leading order symbol trace. In \S4, we discuss the
topological questions related to the second issue.
\medskip
To begin the analysis of the first issue, we
first check that at the Lie algebra level, $\pdo_{\leq 0}$ embeds continuously in ${\mathfrak gl}(\mathcal H)$, a result
probably already known.
For a fixed choice of a finite precompact cover $\{U_\ell\}$ of $N$ and a subordinate partition of unity $\{\phi_\ell\},$
we write $A\in \pdo_{\leq 0}$ as
$A = A^1 + A^0,$ where $A^1 =
\sum_{j,k}' \phi_jA\phi_k$, with the sum over $j, k$ with ${\rm supp}
\ \phi_j\cap
{\rm supp}\ \phi_k \neq \emptyset$, and $A^0 = A-A^1.$
Then $A^1$ is properly supported and has the classical local symbol
$\sigma(\phi_jA\phi_k)$ in $U_j$,
and $A^0$ has a smooth kernel $k(x,y)$ \cite[Prop.~18.1.22]{hor}.
The Fr\'echet topology on the classical
$\Psi{\rm DO}$s of nonpositive integer order is given locally by the family of seminorms
$$
\sup_{x,\xi} |\partial_x^\beta\partial_\xi^\alpha\sigma(\phi_jA\phi_k)
(x,\xi)|(1+|\xi|)^{|\alpha|},
$$
\begin{equation}\label{frechet}\sup_{x, |\xi|=1}|\partial_x^\beta\partial_\xi^\alpha \sigma_{-m}(\phi_jA\phi_k)(x,\xi)|,
\end{equation}
$$\sup_{x,\xi} |\partial_x^\beta
\partial_\xi^\alpha(\sigma(\phi_jA\phi_k)(x,\xi) - \psi(\xi)\sum_{m=0}^{T-1}\sigma_{-m}(\phi_j
A\phi_k)(x,\xi))
|(1+|\xi|)^{|\alpha| +T},$$
$$ \sup_{x,y}|\partial_x^\alpha\partial_y^\beta k(x,y)|,$$
where $\psi$ is a smooth function vanishing near zero and identically one outside a small ball centered at the origin \cite[\S18.1]{hor}. The topology is independent of the choices of $\psi$, $\{U_\ell\}$, and $\{\phi_\ell\}.$
Since elements $A$ of the gauge group $\mathcal G$ of $E$ are order zero multiplication
operators with $\sigma_0(A)(x,\xi)$ independent of $\xi$, the gauge group inherits the
usual $C^\infty$ Fr\'echet topology in $x$.
\begin{lemma} \label{cont-incl} For the Fr\'echet topology on $\pdo_{\leq 0}$ and the norm topology on ${\mathfrak gl}(\mathcal H)$, the
inclusion $\pdo_{\leq 0} \to{\mathfrak gl}(\mathcal H)$ is continuous.
\end{lemma}
\begin{proof}
We follow \cite[Lemma 1.2.1]{gilkey}. Let $A_i\to 0$ in $\Psi{\rm DO}_{\leq 0}.$ Since $H^{s_0}$ is isometric
to $L^2 = H^0$ for any $s_0$, it suffices to show that $\Vert A_i\Vert\to 0$ for $A_i:L^2\to L^2.$
As usual, the computations reduce to estimates in local charts. We abuse notation by
writing $\sigma(\phi_jA_i\phi_k)$ as $\sigma(A^1_i) = a_i.$
Then
$$\widehat{A^1_if}(\zeta) = \int e^{ix\cdot(\xi-\zeta)} a_i(x,\xi)\hat f(\xi) d\xi dx
= \int q_i(\zeta-\xi, \xi)\hat f(\xi) d\xi$$
for $q_i(\zeta, \xi) = \int e^{-ix\cdot \zeta} a_i(x,\xi) dx.$ (We are using a normalized version of $dx$ in the Fourier transform.) By \cite[Lemma 1.1.6]{gilkey},
$|A^1_i f|_0 = \sup_g \frac{|(A^1_if,g)|}{|g|_0}$, where $g$ is a Schwartz function and we use the
$L^2$ inner product. By Cauchy-Schwarz,
$$|(A^1_if,g)| \leq
\left(\int |q_i(\zeta-\xi,\xi)| \ |\hat f(\xi)|^2 d\zeta d\xi\right)^{1/2}
\left(\int |q_i(\zeta-\xi,\xi)| \ |\hat g(\zeta)|^2 d\zeta d\xi\right)^{1/2}.$$
We claim that \begin{equation}\label{est}
|q_i(\zeta-\xi, \xi)| \leq C_i, \int |q_i(\zeta-\xi, \xi)| d\xi \leq C'_i,
\int |q_i(\zeta-\xi, \xi)| d\zeta \leq C''_i,
\end{equation}
with $C_i, C'_i, C''_i\to 0$ as $i\to\infty.$ If so,
$|(A^1_if,g)| \leq D_i |f|_0\ |g|_0$ with $D_i\to 0$, and so $\Vert A^1_i\Vert \to 0.$
For the claim, we know $|\partial_x^\alpha a_i(x,\xi)| \leq C_{\alpha, i}$ with $C_{\alpha,i}\to 0.$ Since $a_i$ has compact $x$ support, we get
$$|\zeta^\alpha q_i(\zeta, \xi)| = \left| \int e^{-ix\cdot \zeta} \partial^\alpha_x a_i(x,\xi) dx\right|
\leq C_{\alpha,i}$$ for a different constant decreasing to zero. Taking $|\alpha| \leq n+1$, for $n = {\rm dim}(N)$, this gives
$|q_i(\zeta, \xi)| \leq C_i(1+|\zeta|)^{-n-1}$. This shows that
$q_i(\zeta-\xi, \xi)$ is integrable in each of $\zeta$ and $\xi$ separately and satisfies (\ref{est}).
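To spell out the standard Schur-test step behind the bound $|(A^1_if,g)|\leq D_i|f|_0\,|g|_0$: splitting the weight $|q_i|^{1/2}$ between $\hat f(\xi)$ and $\hat g(\zeta)$ in Cauchy-Schwarz and applying (\ref{est}),
$$\int |q_i(\zeta-\xi,\xi)|\,|\hat f(\xi)|^2\, d\zeta\, d\xi \leq C''_i\, |f|_0^2,\qquad
\int |q_i(\zeta-\xi,\xi)|\,|\hat g(\zeta)|^2\, d\zeta\, d\xi \leq C'_i\, |g|_0^2,$$
so we may take $D_i = (C'_iC''_i)^{1/2}\to 0.$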
It is straightforward to show that $A^0_i\to 0$ in the Fr\'echet topology on smooth kernels
implies $\Vert A^0_i\Vert \to 0.$ Thus $\Vert A_i\Vert \leq \Vert A^0_i\Vert + \Vert A^1_i\Vert \to 0.$
\end{proof}
In order
to extend the leading order Chern class to $\bar\Psi$-bundles,
we associate an operator to the symbol of $A \in\pdo_{\leq 0}.$ Set
$${\rm Op}_1(A) (f)(x) =
\sum_{j,k}{}' \int _{U_j} e^{i(x-y)\cdot\xi} \sigma(\phi_j A\phi_k)(x,\xi) f(y)\,dy\,d\xi.
$$
Then $A - {\rm Op}_1(A)\in\pdo_{-\infty}$, the closed ideal of $\Psi{\rm DO}$s of order $-\infty$, and
$\sigma(A) \stackrel{\rm def}{=} \sigma(A^1) = \sigma({\rm Op}_1(A)).$ Note that ${\rm Op}_1(A)$ is
shorthand for the $\Psi{\rm DO}$ ${\rm Op}(\sigma(A))$
noncanonically associated to $\sigma(A) \in \Gamma({\rm End}(\pi^*E)\to S^*N).$
\begin{definition} $${\rm Op}_1 = \{{\rm Op}_1(A): A\in \pdo_{\leq 0}\}$$
\end{definition}
We emphasize that ${\rm Op}_1$ depends on a fixed atlas and subordinate partition of unity for
$N$.
The closed vector space ${\rm Op}_1$ is not an algebra, but the linear map $o:\pdo_{\leq 0}\to{\rm Op}_1, A\mapsto {\rm Op}_1(A)$ is continuous.
Let $\overline{\rm Op}_1$ be the closure of ${\rm Op}_1$ in $\gl(\mathcal H)$.
Fix $K>0$, and set
\begin{eqnarray} \label{decay} {\rm Op}_1^K &=& \{ {\rm Op}_1(A): |\partial^\alpha_\xi (\sigma(A) - \sigma_0(A))(x,\xi)|
\leq K(1 + |\xi|) ^{-1}, \\
&&\quad |\partial_\xi^\alpha \sigma_0(A)(x,\xi)| \leq K,
\ {\rm for}\
|\alpha| \leq 1,
\forall (x,\xi)\in T^*U_j,
\forall j\}.\nonumber
\end{eqnarray}
Since $\sigma_0$ has homogeneity zero and $S^*N$ is compact, every ${\rm Op}_1(A)\in {\rm Op}_1$
lies in some ${\rm Op}_1^K.$
\begin{lemma} \label{key lemma} $A\mapsto \int_{S^*N}{\rm tr}\ \sigma_0(A)(n,\xi) d\xi\ {\rm dvol}(n)$ extends from a continuous map
on ${\rm Op}_1^K$ to a continuous map on $\overline{{\rm Op}_1^K}.$
\end{lemma}
\begin{proof}
We must show that if $\{{\rm Op}_1(A_i)\}\subset {\rm Op}_1^K$ is Cauchy in the norm topology on
${\rm End}(\mathcal H)$, then $\{{\rm tr}\ \sigma(A_i)\}$ is Cauchy in $L^1(S^*N).$ Fix a finite
cover $\{U_\ell\}$ of $N$ with $\bar U_\ell$ compact.
The hypothesis is
$$\left\Vert \int_{T^*U_k}e^{i n\cdot \xi}\phi_\ell(n)(\sigma(A_i) -\sigma(A_j))(n,\xi)
\phi_k(n) \widehat{ f}(\xi)\ d\xi\right\Vert_{s_0} <
\epsilon \Vert f\Vert_{s_0}$$
for $i, j > N(\epsilon),$ and for each $\ell,k$ with ${\rm supp}\ \phi_\ell \cap {\rm supp}\
\phi_k\neq \emptyset.$
Sobolev embedding implies that
\begin{equation}\label{gij}
n\mapsto g_{i,j}(n) = \frac{1}{ \Vert f\Vert_{s_0}}\int_{T_n^*U_k} e^{in\cdot \xi}\phi_\ell(n)
(\sigma(A_i) -\sigma(A_j))(n,\xi)\phi_k(n) \widehat{ f}(\xi)\ d\xi
\end{equation}
is $\epsilon$-small in $C^r(U_k)$ for any $r < s_0 - ({\rm dim}\ N)/2$ and any fixed $f\in H^{s_0}.$
Fix $U_\ell$, and pick $\xi_0$ in the cotangent space of a point $n_1\in U_\ell$
with $\phi_\ell(n_1)\phi_k(n_1) \neq 0$. We can
identify all cotangent spaces in $T^*U_\ell$ with $T^*_{n_1}U_\ell.$ We claim that
$$h_{i,j}(n, \xi_0) =\phi_\ell(n)( \sigma(A_i) -\sigma(A_j))(n,\xi_0)\phi_k(n)$$
has $|h_{i,j}(n,\xi_0)|<\epsilon$ for all $n\in U_k$, for all $\ell$, and for
$i, j \gg 0$. Since $\sigma_0$ has
homogeneity zero, we may assume that $\xi_0\in S^*N.$
Thus we claim that \\
$\{\phi_\ell(n)\sigma(A_i)(n, \xi_0)\phi_k(n)\}$ is
Cauchy in this fixed chart, and so by compactness
$\{\sum_{\ell, k}'\sigma(\phi_\ell A_i\phi_k) = \sigma(A_i)\}$ will be uniformly Cauchy on all of $S^*N.$
For the moment, we assume that our symbols are scalar valued.
If
the claim fails,
then by compactness of $N$ there exists $n_0$ and $\epsilon >0$ such that there exist $i_k, j_k \to\infty$ with $|h_{i_k j_k}(n_0,\xi_0)| > \epsilon.$
Let $\widehat {f} = \widehat {f}_{\xi_0, \delta}$ be a
bump function of height
$
e^{-in_0\cdot\xi_0}$ concentrated on $B_{r(\delta)}(\xi_0)$, the metric ball in $T_{n_0}^*U_\ell$ centered at $\xi_0$ and of volume $\delta$,
and let $b_{\xi_0, \delta}$ be the corresponding bump function of height one.
Taylor's theorem in the form
\begin{equation}\label{taylor}
a(\xi_0) - a(\xi) = - \sum_k (\xi-\xi_0)^k\int_0^1 \partial^k_\xi a(\xi_0 + t(\xi-\xi_0))\, dt
\end{equation}
applied to $a(\xi) = h_{i_kj_k}(n_0,\xi)$
implies
\begin{eqnarray*}
\lefteqn{
\left| \int_{T_{n_0}^*U_\ell} e^{in_0(\xi-\xi_0)} h_{i_kj_k}
(n_0, \xi)b_{\xi_0, \delta} d\xi \right| }\\
&\geq &
\left| h_{i_kj_k}
(n_0, \xi_0) \int_{T_{n_0}^*U_\ell} e^{in_0\xi} b_{\xi_0, \delta}
d\xi\right| \\
&&\qquad -\left| \int_{T_{n_0}^*U_\ell} e^{in_0(\xi-\xi_0)}( h_{i_kj_k} (n_0, \xi_0) - h_{i_kj_k}(n_0,\xi)) b_{\xi_0, \delta} d\xi\right|\\
&\geq & \frac{1}{2} |h_{i_kj_k}(n_0,\xi_0)| \delta - r(\delta) F(\delta, (n_0, \xi_0))\delta,
\end{eqnarray*}
for some $F(\delta,(n_0,\xi_0) )\to 0$ as $\delta \to 0$. To
produce this $F$, we use $|(\xi-\xi_0)^k|\leq r(\delta)$
and (\ref{decay}) with $|\alpha| = 1$ to bound the partial derivatives of
$ h_{i_kj_k}(n_0,\xi)$ in (\ref{taylor}) by a constant independent of $i_k, j_k.$
For $\delta$ small enough,
$ r(\delta)F(\delta, (n_0, \xi_0)) < \frac{1}{4}|h_{i_kj_k}(n_0, \xi_0)|$
for all $k$, and so
\begin{equation}\label{hij}
\left| \int_{T_{n_0}^*U_\ell}
e^{in_0(\xi-\xi_0)} h_{i_kj_k}
(n_0, \xi)b_{\xi_0, \delta} d\xi \right| > \frac{1}{4}|h_{i_kj_k}(n_0, \xi_0)|\delta
\end{equation}
Similarly, with some abuse of notation, we have
$$\Vert f\Vert_{s_0} =
\left( \sum_\ell \int_{U_\ell}
(1+|\xi|^2)^{s_0} |\widehat{ (\phi_\ell\cdot f)}(\xi)|^2 d\xi\right)^{1/2}.$$
Since $|\widehat{ (\phi_\ell\cdot f)}(\xi)|
\leq C\cdot \hat \phi_\ell(\xi-\xi_0)\delta$ for some constant $C$ which we can take independent of $\delta$ for small $\delta$, we get
\begin{eqnarray}\label{denom}
\Vert f\Vert_{s_0} & \leq & \left( \sum_\ell \int_{U_\ell} (1+|\xi|^2)^{s_0} C \delta^2|\hat\phi_\ell(\xi-\xi_0)|^2\, d\xi
\right)^{1/2}\\
&\leq& C\delta\nonumber
\end{eqnarray}
where $C$ changes from line to line.
Substituting (\ref{hij}) and (\ref{denom}) into (\ref{gij}), we obtain
$$ |g_{i_kj_k}(n_0)| \geq C|h_{i_kj_k}(n_0, \xi_0)| \geq C\epsilon$$
for all $k$, a contradiction. Thus $h_{ij}(n, \xi_0)$ has the claimed estimate.
If the symbol is matrix valued, we replace the bump functions by sections of the
bundle $E$ having the $r^{\rm th}$
coordinate in some local chart given by the bump functions and all other coordinates zero.
The argument above shows that
the $r^{\rm th}$ columns of $\sigma(A_i)$ form a Cauchy sequence, and so each sequence
of entries $\{\sigma(A_i)^s_r\}$ is Cauchy.
If $\{{\rm tr}(\sigma_0(A_i))\}$ is not uniformly Cauchy on $S^*N$, then there exists $\epsilon >0$
with an infinite sequence of $i, j$ and
$(n, \xi)\in S^*N$ such that
\begin{eqnarray*} \lefteqn{
|{\rm tr}(\sigma(A_i))(n , \lambda\xi) - {\rm tr}(\sigma(A_j))(n , \lambda\xi)| }\\
&=&
|{\rm tr}((\sigma-\sigma_0)(A_i))(n , \lambda\xi) + {\rm tr}(\sigma_0(A_i))(n , \lambda\xi)\\
&&\qquad
-{\rm tr}((\sigma-\sigma_0)(A_j))(n , \lambda\xi) - {\rm tr}(\sigma_0(A_j))(n , \lambda\xi)|\\
&\geq & |{\rm tr}(\sigma_0(A_i))(n , \xi) - {\rm tr}(\sigma_0(A_j))(n , \xi)| \\
&&\qquad
- |{\rm tr}((\sigma-\sigma_0)(A_i))(n , \lambda\xi)
- {\rm tr}((\sigma-\sigma_0)(A_j))(n , \lambda\xi)|\\
&\geq & \epsilon - 2K(1+|\lambda|)^{-1}
\end{eqnarray*}
for all $\lambda >0.$ For $\lambda
\gg 0$ this contradicts the fact, established above, that $\{\sigma(A_i)\}$ is uniformly Cauchy on $S^*N.$
This implies that
$\{{\rm tr}(\sigma_0(A_i))\}$ is uniformly Cauchy on $S^*N$, so the claimed extension exists.
The continuity of the extension is immediate.
\end{proof}
Fix $K$, and set
$$\mathcal A =\mathcal A^{K} = \cup_{n\in {\mathbb Z}^+} \overline{{\rm Op}_1^{nK}} \subset {\mathfrak gl}(\mathcal H),$$
where the closure is taken in the norm topology.
Then
$$\bar{\mathcal A} = \overline{\cup_{n\in {\mathbb Z}^+} \overline{{\rm Op}_1^{nK}} }
= \overline {\cup_{n\in {\mathbb Z}^+} {\rm Op}_1^{nK} } = \overline {{\rm Op}_1} \subset {\mathfrak gl}({\mathcal H})$$
is independent of $K$.
\begin{corollary} $A\mapsto \int_{S^*N}{\rm tr}\ \sigma_0(A)(n,\xi) d\xi\ {\rm dvol}(n)$ extends from a continuous map
on ${\rm Op}_1$ to a continuous map on $\overline{{\rm Op}_1} = \bar{\mathcal A}.$
\end{corollary}
\begin{proof} Fix $K$. We first show that the leading order symbol trace extends to $\mathcal A.$
For $n'>n,$ the inclusion $i_{n,n'}:\overline{{\rm Op}_1^{nK}}\to \overline{
{\rm Op}_1^{n'K}}$ has
$\sigma^{n'K}\circ i_{n,n'} = \sigma^{nK}$ for the extensions $\sigma^{nK}, \sigma^{n'K}$ on
$\overline{{\rm Op}_1^{nK}}, \overline{{\rm Op}_1^{n'K}}$.
Thus for any $K$, we
can unambiguously set $\sigma(A) = \sigma^{nK}(A)$ for $A\in \overline{{\rm Op}_1^{nK}}$.
The continuity follows from the previous lemma. Since the extension is linear, it is immediate that the extension is infinitely
Fr\'echet differentiable on $\mathcal A$ inside the Banach space ${\mathfrak gl}(\mathcal H).$
Since $\sigma$ is continuous on $\mathcal A$, it extends to a continuous linear functional on $\bar{\mathcal A}$, which is again smooth.
\end{proof}
To discuss the tracial properties of extensions of $\int_{S^*N}{\rm tr}\ \sigma_0$, we must have
algebras of operators.
Since ${\rm Op}_1(AB) - {\rm Op}_1(A){\rm Op}_1(B)\in \pdo_{-\infty}$, ${\rm Op}_1/\pdo_{-\infty}$ (i.e. ${\rm Op}_1/(\pdo_{-\infty}\cap{\rm Op}_1)$)
is an algebra. Since $A-{\rm Op}_1(A)\in\pdo_{-\infty}$, we have ${\rm Op}_1/\pdo_{-\infty}\simeq \pdo_{\leq 0}/\pdo_{-\infty}.$
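The containment ${\rm Op}_1(AB) - {\rm Op}_1(A){\rm Op}_1(B)\in \pdo_{-\infty}$ is the standard composition calculus: in each chart,
$$\sigma\left({\rm Op}_1(A)\,{\rm Op}_1(B)\right) \sim \sum_\alpha \frac{(-i)^{|\alpha|}}{\alpha!}\,\partial_\xi^\alpha \sigma(A)\ \partial_x^\alpha \sigma(B),$$
which agrees with the local symbols of $AB$ up to symbols of order $-\infty$, so the difference is smoothing.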
Note that if ${\rm Op}_1'(A)$ is defined as for
${\rm Op}_1(A)$ but with respect to a different
atlas and partition of unity, then
${\rm Op}_1(A) - {\rm Op}_1'(A)$ is a smoothing operator. Thus ${\rm Op}_1/\pdo_{-\infty}$ is canonically defined, independent of these choices.
Recall that $\overline{\rm Op}_1$ is the closure of ${\rm Op}_1$ in $\gl(\mathcal H)$; let
$\mathcal C$ denote the closure of $\pdo_{-\infty}$ in $\overline{\rm Op}_1.$ $\mathcal C$ is easily seen to be a closed ideal in $\overline{\rm Op}_1.$
On quotients of normed algebras, we take
the quotient norm $\Vert [A]\Vert = \inf\{\Vert A\Vert: A\in [A]\}.$
\begin{proposition}\label{prop extend} $A\mapsto \int_{S^*N}{\rm tr}\ \sigma_0(A)(n,\xi) d\xi\ {\rm dvol}(n)$ extends from a continuous trace
on $\pdo_{\leq 0}/\pdo_{-\infty}$ to a continuous trace $\sigma$ on $\overline{\rm Op}_1/\mathcal C\subset \gl(\mathcal H)/\mathcal C.$
\end{proposition}
\begin{proof} Since $\pdo_{-\infty}\subset \mathcal C$, it is immediate that the leading order symbol integral descends to a continuous
trace on $\pdo_{\leq 0}/\pdo_{-\infty}\simeq {\rm Op}_1/\pdo_{-\infty}$ and extends to a continuous map on $\overline{\rm Op}_1/\mathcal C$.
To see that the extension is a trace,
take $A, B\in \overline{\rm Op}_1/\mathcal C$ and $A_i, B_i\in {\rm Op}_1/\pdo_{-\infty}$ with
$A_i\to A, B_i\to B$. The leading order symbol
trace
vanishes on $[A_i, B_i]$, so by continuity $\sigma([A,B]) = 0.$
\end{proof}
We can pass from the Lie algebra to the Lie group level via the commutative diagram
\begin{equation}\label{cd}
\begin{CD}
\Gamma{\rm End}(E) @>>> \Psi{\rm DO}_{\leq 0} @>>> \frac{\Psi{\rm DO}_{\leq 0}}{\pdo_{-\infty}} \simeq
\frac{{\rm Op}_1}{\pdo_{-\infty}} @>>> \frac{\overline{\rm Op}_1}{\mathcal C} \\%@>>> \frac{\gl(\mathcal H)}{\mathcal C}\\
@V\exp VV @V\exp VV @V\exp VV @V\exp VV\\% @V\exp VV\\
\mathcal G @>>>\Psi{\rm DO}_0^*@>>> \frac{ \Psi{\rm DO}_0^*}{(I + \pdo_{-\infty})^*} @.
\frac{\exp(\overline{\rm Op}_1)}{(I+\mathcal C)^* }
\end{CD}
\end{equation}
$\mathcal G$ is
the gauge group of $E\stackrel{\pi}{\to} N$, $(I + \pdo_{-\infty})^*$
refers to invertible operators $I + B, B\in \pdo_{-\infty}$, and similarly for other
groups on the bottom line.
The diagram consists of
continuous maps if the spaces in the first three columns have either the norm or
the Fr\'echet topology and the spaces in the last column have the norm topology.
The exponential map is clearly surjective in the first column, and by standard Banach space arguments, it is surjective in the fourth column. The surjectivity of the exponential map in the second and third columns is not obvious. In particular, there is no obvious map from $
\frac{ \Psi{\rm DO}_0^*}{(I + \pdo_{-\infty})^*}$ to
$\frac{\exp(\overline{\rm Op}_1)}{(I+\mathcal C)^* }$.
In any case,
the maps on the bottom line are group
homomorphisms.
As discussed in the beginning of this section, it would be desirable to work with $\bar\Psi = \overline{\Psi{\rm DO}_0^*}$ or the closure of the standard variant $ \frac{ \Psi{\rm DO}_0^*}{(I + \pdo_{-\infty})^*} $.
However, it seems difficult if not impossible
to extend the leading order symbol trace to the corresponding Lie algebras $\overline{\pdo_{\leq 0}},
\overline{\Psi{\rm DO}_{\leq 0}}/\overline{\pdo_{-\infty}}.$ The following re-definition is our substitute for $\overline{\Psi{\rm DO}_0^*}$, as it is the group associated to the largest known Lie algebra with an extension
(strictly speaking, a factorization) of the leading order symbol trace.
\begin{definition}\label{def} $\bar\Psi = \frac{\exp(\overline{\rm Op}_1)}{(I+\mathcal C)^* }$.
\end{definition}
The smooth factorization of the leading order symbol trace $\sigma: \Psi{\rm DO}_{\leq 0}\to{\mathbb C}$
through
$\overline{\rm Op}_1/\mathcal C$ gives us a geometric theory of characteristic classes for $\bar\Psi$-bundles.
\begin{theorem} \label{ext thm} The leading order Chern classes $c_k^{\rm lo}$ extend to $\bar\Psi$-bundles over
paracompact manifolds.
\end{theorem}
\begin{proof} Such bundles $\mathcal E\to B$ admit connections with curvature
$\Omega\in \Lambda^2(B, \overline{\rm Op}_1/\mathcal C).$
By the previous proposition, we may set
$$c_k^{\rm lo}(\nabla)_b \stackrel{\rm def}{=}
\sigma(\Omega^k_b)$$
at each $b\in B.$ Since $\sigma$ is smooth, the corresponding de Rham class
$c_k^{\rm lo}(\mathcal E)$ is closed and
independent of connection,
as in finite dimensions.
\end{proof}
In summary, the leading order trace on $\Psi{\rm DO}_0^*$-bundles trivially factors
through $\Psi{\rm DO}_0^*/(I+\pdo_{-\infty})^*$-bundles, as this quotient just removes the smoothing term
$A^0$ from an invertible $\Psi{\rm DO}$ $A$. By Proposition \ref{prop extend}, this
trace then extends continuously to $\overline{\rm Op}_1/\mathcal C$. This space is morally the closure of
${\rm Op}_1/\pdo_{-\infty}$ in $\gl(\mathcal H)/\mathcal C$, but it is not clear that $\mathcal C$ is an ideal in $\gl(\mathcal H).$ In any case, the work in this section has the feel of extending a continuous trace from a set to
its closure.
\begin{remark} The groups in (\ref{cd}) give rise to different bundle theories, since the homotopy types of $\Psi{\rm DO}_0^*$ and $\Psi{\rm DO}_0^*/(I + \pdo_{-\infty})^*$ differ \cite{R}; the homotopy type of $\mathcal G$, discussed in \cite{A-B1}, is almost surely different from that of $\Psi{\rm DO}_0^*, \Psi{\rm DO}_0^*/(I + \pdo_{-\infty})^*$.
The relationship between the topology of $\Psi{\rm DO}_0^*/(I + \pdo_{-\infty})^*$ and $\bar\Psi = \exp(\overline{\rm Op}_1)/(I+\mathcal C)^*$ is
completely open, so presumably the bundle theories for these groups also differ.
Geometrically, one can construct $\bar\Psi$-connections by using a partition of unity to glue together trivial connections over trivializing neighborhoods in a paracompact base. However, even for finite dimensional Lie groups, it is difficult to compute the corresponding Chern classes of such glued up connections. As a result, we do not have examples of nontrivial leading order Chern classes for $\bar\Psi$-bundles.
\end{remark}
\section{Detecting cohomology of $B\Psi{\rm DO}_0^*$}
We now turn to the second issue
discussed in the beginning of \S3, namely that classifying spaces are not manifolds
in general.
We could obtain information about $H^*(B\bar\Psi,{\mathbb C})$ if $E\bar\Psi\to B\bar\Psi$ admitted a connection, but this presupposes that $B\bar\Psi$ is a manifold. As an alternative, we consider the
exact sequence of algebras associated to $E\stackrel{\pi}{\to} N$:
$$0\to \pdo_{ -1}\to\pdo_{\leq 0}\stackrel{\sigma_0}{\to} \Gamma{\rm End}(\pi^*E\to S^*N)\to 0,$$
which gives $\Gamma{\rm End}(\pi^*E\to S^*N)\simeq \pdo_{\leq 0}/\pdo_{ -1}.$
Here $\pdo_{ -1}$ is the algebra of classical integer order $\Psi{\rm DO}$s of order at most $-1.$
Note that the quotient $\Psi{\rm DO}_{\leq 0}/\pdo_{-\infty}$ considered in \S3 is more complicated topologically than $\Psi{\rm DO}_{\leq 0}/\pdo_{ -1}.$
We obtain the diagram
\begin{equation}
\label{diagone}
\begin{CD}
\Gamma{\rm End}(E) @>>> \Psi{\rm DO}_{\leq 0} @>>>\Gamma {\rm End}(\pi^*E)
\\
@V\exp VV @V\exp VV @V\exp VV \\
\mathcal G(E) @>j>>\Psi{\rm DO}_0^*@>m>> \mathcal G(\pi^*E)\end{CD}
\end{equation}
By Lemma \ref{cont-incl}, $j$ and $m$ are continuous,
where $\mathcal G(E)$ has the Fr\'echet or norm topology,
$\Psi{\rm DO}_0^*$
has the Frech\'et topology, and $\mathcal G(\pi^*E)$ has the Fr\'echet or the norm topology.
The bottom line of this diagram induces
\begin{equation}\label{diag}
\begin{CD}
E\mathcal G(E) @>>> E\pdo_0^*@>>> E\mathcal G(\pi^*E)\\
@VVV@VVV@VVV\\
B\mathcal G(E)@>Bj>> B\pdo_0^* @>Bm>> B\mathcal G(\pi^*E)
\end{CD}
\end{equation}
since $E\pdo_0^* \simeq (Bm)^* E\mathcal G(\pi^*E)$ and similarly for $E\mathcal G(E)$ by the Milnor construction.
We can now define leading order Chern classes of $\pdo_0^*$-bundles, avoiding the question
of the existence of connections on $E\pdo_0^*.$ By \cite{A-B1}, for $E^\ell\to N$ and $\mathcal G = \mathcal G(E)$,
$B\mathcal G= {\rm Maps}_{0}(N,BU(\ell))=\{f:N\to BU(\ell) | f^*EU(\ell) \simeq E\}$, and
$E\mathcal G|_f$ is the subset of ${\rm Maps}(E, EU(\ell))$ covering $f$. Equivalently,
$E\mathcal G = \pi_*{\rm ev}^*EU(\ell).$ Recall that we are using maps in a fixed large
Sobolev class;
these maps uniformly approximate smooth maps, so that the homotopy types of
$E\mathcal G$ and $B\mathcal G$ are the same for smooth maps or maps in this Sobolev
class.
Thus $B\mathcal G(E)$ and
$B\mathcal G(\pi^*E)$ are Banach manifolds. For any topological group $G$, $BG$ admits a partition of unity \cite
[Thm. 4.11.2]{huse}. Thus $E\mathcal G(\pi^*E)\to B\mathcal G(\pi^*E)$ admits a connection
with curvature $\Omega\in \Lambda^2(B\mathcal G(\pi^*E), \Gamma{\rm End}(\pi^*E)).$ Note
that the leading order symbol trace on $\pdo_{\leq 0}$ obviously induces a trace
$\sigma$ on $\pdo_{\leq 0}/\pdo_{ -1}\simeq \Gamma
{\rm End}(\pi^*E).$ Therefore $E\mathcal G(\pi^*E)\to B\mathcal G(\pi^*E)$ has associated de Rham
classes $c_k^{\rm lo}(E\mathcal G(\pi^*E)) = [\sigma(\Omega^k)] \in H^{2k}(B\mathcal G(\pi^*E), {\mathbb C}).$
\renewcommand{\bar\Psi}{\mathcal G(\pi^*E)}
The following definition is natural in light of (\ref{diag}).
\begin{definition}\label{def big} The k${}^{\rm th}$ leading order Chern class $c_k^{\rm lo}(E\pdo_0^*)$ is the de Rham
cohomology class of $(Bm)^*c_k^{\rm lo}(E\bar\Psi) \in H^{2k}(B\pdo_0^*, {\mathbb C}).$
\end{definition}
Set $\mathcal G = \mathcal G(E)$. Let
$\mathcal E\to B$ be a $\mathcal G$-bundle, classified by a map $g:B\to B\mathcal G$.
The maps $j$ and $m\circ j$ in (\ref{diagone}) are injective, so every $\mathcal G(E)$-bundle is
both a $\Psi{\rm DO}_0^*(E)$-bundle and a
$\mathcal G(\pi^*E)$-bundle. We get
$$c_k^{\rm lo}(\mathcal E) = g^*c_k^{\rm lo}(E\mathcal G) = g^*Bj^*c_k^{\rm lo}(E\pdo_0^*) = g^*Bj^*Bm^*c_k^{\rm lo}(E\bar\Psi).$$
This gives an easy criterion to detect cohomology classes for the classifying spaces.
\begin{lemma}\label{lem:one} Let $\mathcal E$ be a $\mathcal G$-bundle with $c_k^{\rm lo}(\mathcal E) \neq 0.$ Then the
cohomology classes
$c_k^{\rm lo}(E\mathcal G)\in H^{2k}(B\mathcal G, {\mathbb C}),\ c_k^{\rm lo}(E\pdo_0^*)\in H^{2k}(B\pdo_0^*, {\mathbb C}),\
c_k^{\rm lo}(E\bar\Psi)\in H^{2k}(B\bar\Psi, {\mathbb C})$
are all nonzero.
\end{lemma}
As in Remark 2.1, let $\pi: {\rm Maps}(N,M) \times N\to
{\rm Maps}(N,M)$ be the projection, and for
$n\in N$, define ${\rm ev}_{n}: {\rm Maps}(N,M)\to M$ by ${\rm ev}_n(f) = {\rm ev}(n,f) = f(n).$
\begin{example} \label{example}
Let $F\to M$ be a complex bundle, and set $E = {\rm ev}^*F\to {\rm Maps}_f(N,M) \times N$,
$ \mathcal E = \pi_*{\rm ev}^*F \to {\rm Maps}_f(N,M).$ Here ${\rm Maps}_f(N,M)$ is the component of a fixed $f:N\to M.$
Then the Lemma applies with $\mathcal G = \mathcal G(f^*F)$, since $\mathcal E_g$ is noncanonically
isomorphic to $H^{s_0}\Gamma(f^*F)$ for all $g\in {\rm Maps}_f(N,M).$
\end{example}
\begin{lemma} \label{lem:ConnEG}
$E\mathcal G$ has a universal connection with connection one-form
$\theta^{E\mathcal G}$ defined on $s\in\Gamma(E\mathcal G)$ by
\begin{equation*}
(\theta^{E\mathcal G}_Z s)(\gamma)
(\alpha)=\left( ({\rm ev}^*\theta^u)_{(Z,0)} u_s\right)(\gamma,\alpha).
\end{equation*}
Here $\theta^u$ is the universal connection on $EU(k)\to BU(k)$, and
$u_s: {\rm Maps}(N,M)\times N \to {\rm ev}^*EU(k)$ is defined by
$u_s(f,n)=s(f)(n)$.
\end{lemma}
\begin{corollary}
The curvature $\Omega^{E\mathcal G}$ of $\theta^{E\mathcal G}$ satisfies
\begin{equation*}\label{eq:pullback}
\Omega^{E\mathcal G}(Z,W)s(f)(n)={\rm ev}^*\Omega^u ((Z,0),(W,0)) u_s(f,n).
\end{equation*}
\end{corollary}
The proofs are in \cite[\S4]{P-R} for loop spaces ($N = S^1$) and easily extend.
As a result, the leading order Chern classes of gauge bundles are pullbacks of finite dimensional
classes.
\begin{lemma}\label{eq:IO}
Fix $n_0\in N$. Then
$$c_k^{\rm lo}(E\mathcal G) = {\rm vol}(S^*N)\cdot {\rm ev}_{n_0}^* c_k(EU(\ell)).$$
If $\mathcal F\to {\rm Maps}(N,M)$ is given by $\mathcal F = \pi_*{\rm ev}^*F$ for a bundle $F\to M$,
then $$c_k^{\rm lo}(\mathcal F)
= {\rm vol}(S^*N)\cdot {\rm ev}_{n_0}^* c_k(F).$$
\end{lemma}
\begin{proof}
For all $n_0\in N$, the maps
${\rm ev}_{n_0} $ are homotopic, so the de Rham class of
\begin{equation*}\label{eq:Inde}
\int_{S^*N}{\rm tr}\ \sigma_0({\rm ev}_{n_0}^*(\Omega^u)^k) \ d\xi\ {\rm dvol}(n_0) =
{\rm vol}(S^*N)\cdot {\rm tr}\ \sigma_0({\rm ev}_{n_0}^* (\Omega^u)^k)
\end{equation*}
is
independent of $n_0$.
Since $\Omega^u$ is a multiplication operator, we get
$$c_k^{\rm lo}(E\mathcal G) ={\rm vol}(S^*N)\cdot \left[{\rm ev}_{n_0}^*\, {\rm tr}\,(\Omega^u)^k\right] =
{\rm vol}(S^*N)\cdot {\rm ev}_{n_0}^* c_k(EU(\ell)).$$
The proof for $\mathcal F$ is identical.
\end{proof}
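As a concrete illustration (assuming the standard round metric), for loop spaces $N = S^1$ of circumference $2\pi$, the unit cotangent bundle $S^*S^1$ is two disjoint circles, so ${\rm vol}(S^*S^1) = 4\pi$ and
$$c_k^{\rm lo}(E\mathcal G) = 4\pi\cdot {\rm ev}_{n_0}^* c_k(EU(\ell)).$$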
Note that $({\rm vol}(S^*N)\cdot (2\pi i)^k)c_k^{\rm lo}(E\mathcal G) \in H^{2k}(B\mathcal G, {\mathbb Z})$. It is
not clear that we can normalize $c_k^{\rm lo}(E\pdo_0^*), c_k^{\rm lo}(E\bar\Psi)$ to be integer classes.
We can produce examples of nontrivial $c_k^{\rm lo}({\rm Maps}(N,M)) = c_k^{\rm lo}(T{\rm Maps}(N,M))$ as well
as other cohomology classes for ${\rm Maps}(N,M).$
\begin{theorem}\label{4.7}
(i) Let $M$ have $c_k(M) = c_k(TM\otimes {\mathbb C})\neq 0$. Then
$$0\neq c_k^{\rm lo}({\rm Maps}(N,M)) \in H^{2k}({\rm Maps}(N,M), {\mathbb C}).$$
(ii) Let $F^\ell\to M$ be a finite rank Hermitian bundle with $c_k(F)\neq 0.$ Then
$$0\neq c_k^{\rm lo}(\pi_*{\rm ev}^*F)\in H^{2k}({\rm Maps}(N,M), {\mathbb C}).$$
\end{theorem}
\begin{proof}
(i)
Let $h:M\to BU(m)$ classify $TM\otimes {\mathbb C}.$ (Strictly speaking, we take a classifying
map into a Grassmannian $BU(m, K)$ of $m$-planes in ${\mathbb C}^K$ for $K\gg 0$, so that the target space is a finite dimensional manifold.)
For fixed $f\in{\rm Maps}(N,M)$, the gauge group $\mathcal G$ of $f^*(TM\otimes {\mathbb C})$ has
\begin{eqnarray*}B\mathcal G &=& \{g\in {\rm Maps}(N, BU(m)): g^*EU(m) \simeq f^*(TM\otimes {\mathbb C})\} \\
&=& \{g\in {\rm Maps}(N, BU(m)): g\sim hf\}.
\end{eqnarray*}
Therefore the map
$$\tilde h:{\rm Maps}(N,M)\to {\rm Maps}(N, BU(m)),\ \tilde h(f) = hf$$
classifies $T{\rm Maps}(N,M),$ and so for fixed $n\in N$,
$$c_k^{\rm lo}({\rm Maps}(N,M)) = {\rm vol}(S^*N)\cdot \tilde h^*{\rm ev}_n^* c_k(EU(m)),$$
by Lemma \ref{eq:IO}.
Let $[a] $ be a $2k$-cycle with $\langle c_k(M),[a]\rangle \neq 0.$ (Here the bracket refers
to integration of forms over cycles.)
For $i:M\to {\rm Maps}(N,M), i(m_0)(n) = m_0$, set
$[\tilde a] = i_*[a].$
Then
$$\langle c_k^{\rm lo}({\rm Maps}(N,M)), [\tilde a]\rangle = {\rm vol}(S^*N)\cdot\langle c_k(EU(m)), {\rm ev}_{n,*}
\tilde h_*[\tilde a]\rangle.$$
It is immediate that ${\rm ev}_n \tilde h i = h$, so
${\rm ev}_{n,*} \tilde h_*[\tilde a] = h_*[a].$ Therefore
\begin{eqnarray*}\langle c_k^{\rm lo}({\rm Maps}(N,M)), [\tilde a]\rangle &=& {\rm vol}(S^*N)\cdot\langle
h^* c_k(EU(m)),[a]\rangle \\
&=& {\rm vol}(S^*N)\cdot\langle c_k(TM\otimes {\mathbb C}), [a]\rangle\\
& \neq& 0.
\end{eqnarray*}
(ii) Let $h:M\to BU(\ell)$ classify $F$. As above, $\tilde h$ classifies $\mathcal F = \pi_*{\rm ev}^*F$. Thus
$$c_k^{\rm lo}(\mathcal F) = \tilde h^* c_k^{\rm lo}(E\mathcal G) = {\rm vol}(S^*N)\cdot \tilde h^*{\rm ev}_n^*c_k(EU(\ell))$$
by Lemma \ref{eq:IO}. As in (i), we get
$$\langle c_k^{\rm lo}(\mathcal F), [\tilde a]\rangle = \langle c_k(F), [a]\rangle \neq 0$$
for some cycle $[a]$. Alternatively, we can use the last statement in Lemma 3.13 and
${\rm ev}_n i = {\rm Id}$ to reach the same conclusion.
\end{proof}
In this proof,
the cycle $[\tilde a]$ has image in ${\rm Maps}_c(N,M)$, the component of the constant maps, so the result is really about bundles over this component. We can improve this to cover all components.
\begin{corollary} \label{cor3} For $f\in{\rm Maps}(N,M)$, let ${\rm Maps}_f(N,M)$ denote the connected component
of $f$. Let $F^\ell\to M$ be a finite rank Hermitian bundle with $c_k(F)\neq 0.$ Assume that $M$ is connected. Then
$$0\neq c_k^{\rm lo}(\pi_*{\rm ev}^*F)\in H^{2k}({\rm Maps}_f(N,M), {\mathbb C}).$$
\end{corollary}
\begin{proof}
We claim that for fixed $n_0\in N$ the map
$${\rm ev}_{n_0, *}:H_*({\rm Maps}_f(N,M), {\mathbb C}) \to H_*(M,{\mathbb C})$$
is
surjective.
As a first step, we show that for a fixed $m_0\in M$, we can homotop $f$ to a map $\tilde f$ with $\tilde f(n_0) = m_0$. By the tubular neighborhood theorem applied
to the one-manifold/path from $f(n_0)$ to $m_0$, there exists a coordinate chart $W = \phi({\mathbb R}^n)$
containing $f(n_0)$ and $m_0.$ Take small coordinate balls $U$ containing $\phi^{-1}(
f(n_0))$ and $V$
containing $\phi^{-1}(m_0)$, such that $V$ is a translate $\vec T + U$ of $U$,
with $\vec T = \phi^{-1}(m_0) - \phi^{-1}(f(n_0))$. We may assume that $U$ is a ball of radius $r$ centered at $\phi^{-1}(f(n_0))$. Let $\psi:[0,r]\to {\mathbb R}$ be a nonnegative bump
function which is one near zero and zero near $r$.
Define $f_t:N\to M$ by
$$f_t(n) = \left\{ \begin{array}{ll} f(n), &
\ \ f(n)\not \in \phi(U),\\
\phi\bigl[ (1-t)\, \phi^{-1}(f(n))&\\
\ \ + t\,\psi\bigl(d(\phi^{-1}(f(n)), \phi^{-1}(f(n_0)))\bigr)\bigl(\vec T + \phi^{-1}(f(n))\bigr)\bigr],
& \ \ f(n)\in \phi(U). \end{array}\right. $$
In other words, $f_t = f$ outside $f^{-1}(\phi(U))$ and moves points $f(n)\in \phi(U)$ towards $\phi(V)$, with $f_t(n_0)$ moving $f(n_0)$ to
$m_0$. Now set $\tilde f = f_1.$
Take a $k$-cycle $\sum r_i \sigma_i$ in $M$. Let $\Delta^k = \{(x_1,\ldots, x_k): x_i\geq 0, \sum x_i \leq 1\}$ be the standard $k$-simplex.
By subdivision, we may assume that each $\sigma_i(\Delta^k)$ is in
a coordinate patch $V'= V'_i = \phi(V_i)$ in the notation above.
Construct the corresponding neighborhood $U' = \phi(U_i)$ of
$f(n_0).$ Set $m_0 = \sigma_i(\vec 0)$. Take
a map $\alpha_i:\Delta^k\to{\rm Maps}_f(N,M)$ with $\alpha_i(\vec 0) = f$ and $\alpha_i(x)(n_0)\in U'$
for all $x\in \Delta^k$. By suitably modifying the bump function to vanish near
$d(\phi^{-1}(\alpha_i(x)(n_0)), \partial U)$, we can form a simplex $\tilde \alpha_i:
\Delta^k\to{\rm Maps}_f(N,M)$ with $\tilde\alpha_i(x)(n_0) = \sigma_i(x)$ and $\tilde\alpha_i(\vec 0)
= \tilde f.$
Clearly $\sum r_i\tilde\alpha_i$ is a cycle in ${\rm Maps}_f(N,M)$ with ${\rm ev}_{n_0,*} [\sum r_i\tilde\alpha_i] =
[\sum_i r_i\sigma_i].$ This finishes the claim. Note that this construction is an {\it ad hoc} replacement for the map $i$ in the last theorem.
Pick a cycle $[b]\in H_{2k}(M, {\mathbb C})$ such that $\langle c_k(F), [b]\rangle \neq 0.$ Pick
$[\tilde b]\in $\\
$H_{2k}({\rm Maps}_f(N,M), {\mathbb C})$ with ${\rm ev}_{n_0,*}[\tilde b] = [b].$ Then
$$ \langle c_k^{\rm lo}(\pi_*{\rm ev}^*F), [\tilde b]\rangle =
{\rm vol}(S^*N)\cdot\langle c_k(F), {\rm ev}_{n_0,*}[\tilde b]\rangle \neq 0$$
as in Theorem \ref{4.7}(ii).
\end{proof}
This gives information about the cohomology rings of the various classifying spaces.
Recall that we are working with either the Fr\'echet or the norm topology on $\pdo_0^*$.
\begin{proposition}\label{last prop} Fix a closed manifold $N$ and a
connected manifold $M$.
Let $F^\ell\to M$ be a finite rank Hermitian bundle, choose $f:N\to M$, and
let $\mathcal G, \pdo_0^*$ refer to
the gauge groups and $\Psi{\rm DO}$ groups acting on sections of $F$. Let $H^*_{F}(M,{\mathbb C})$ be
the subring of $H^*(M,{\mathbb C})$ generated by the Chern classes of $F$.
Then for $X = B\mathcal G, B\pdo_0^*, B\mathcal G(\pi^*F)$, there is a surjective map from
$H^{*}(X,{\mathbb C})$ to an isomorphic copy of $H^*_{F}(M,{\mathbb C})$ in
$H^*( {\rm Maps}_f(N, M),{\mathbb C})$, where ${\rm Maps}_f(N,M)$ is the component of $f$ in ${\rm Maps}(N,M).$
\end{proposition}
\begin{proof}
Set $\mathcal F = \pi_*{\rm ev}^*F.$ The proof of Corollary \ref{cor3} shows that
if a polynomial
$p(c_0^{\rm lo}(\mathcal F), ..., c_\ell^{\rm lo}(\mathcal F))\in H^*({\rm Maps}_f(N,M), {\mathbb C})$ vanishes, then
$p(c_0(F),...,c_\ell(F))$\\
$ = 0.$ Thus $H_F^*(M,{\mathbb C})$ injects
into $H^*({\rm Maps}_f(N,M), {\mathbb C})$, for any $N$, with image the ring generated by
$c_k^{\rm lo}(\mathcal F)$, $k\leq \ell.$
Let $h$ classify $F$. In the notation of (\ref{diag}) and the previous
theorem, we have
$$c_k^{\rm lo}(\mathcal F) = (Bj\circ Bm\circ \tilde h)^*c_k^{\rm lo}(E\mathcal G(\pi^*F)) = (Bm\circ \tilde h)^*c_k^{\rm lo}(E\pdo_0^*)
= \tilde h^*c_k^{\rm lo}(E\mathcal G).$$
The surjectivity of $H^*(X,{\mathbb C}) \to {\rm Im}(H_F^*(M,{\mathbb C}))$ is now immediate.
\end{proof}
This gives the result on the cohomology of $B\mathcal G, B\pdo_0^*, B\bar\Psi$ stated in the Introduction.
\begin{theorem}\label{last theorem}
Let $E^\ell\to N$ be a finite rank Hermitian bundle, and let
$\mathcal G, \pdo_0^*$
refer to
the gauge groups and $\Psi{\rm DO}$ groups acting on sections of $E$. Then
for $X = B\mathcal G, B\pdo_0^*, B\mathcal G(\pi^*E)$, there is a surjective map from
$H^{*}(X,{\mathbb C})$ to the polynomial algebra $H^*(BU(\ell), {\mathbb C}) = {\mathbb C}[c_1(EU(\ell)),\ldots, c_\ell(EU(\ell))].$
\end{theorem}
\begin{proof}
Let $M = BU(\ell, K)$ be the Grassmannian of $\ell$-planes in ${\mathbb C}^K$, for $K \gg 0,$
let $F = EU(\ell, K),$ and let
$f:N\to M$ classify $E$. On the component ${\rm Maps}_f(N,M)$ of $f$, $\mathcal E = \pi_*{\rm ev}^*EU(\ell,K)$
has structure group $\mathcal G(f^*EU(\ell,K)) = \mathcal G(E).$ $H^*(M,{\mathbb C})$ is a polynomial
algebra with generators
$ c_1(EU(\ell, K)),\ldots, $\\
$c_\ell(EU(\ell, K))$ truncated above
dim$(M) = \ell(K-\ell).$ By the previous proposition, $H^*(X,{\mathbb C})$ surjects onto this
algebra. Letting $K$ go to infinity finishes the proof.
\end{proof}
\begin{remark}
(i) A proof of Theorem \ref{last theorem} for $H^*(B\mathcal G,{\mathbb C})$ that avoids most of the analysis
can be extracted from Lemma \ref{eq:IO} through
Proposition \ref{last prop}.
(ii) $E\mathcal G(E)\to B\mathcal G(E)$ is trivial as a $GL(\mathcal H)$-bundle by Kuiper's theorem.
However,
$E\mathcal G(E)\to B\mathcal G(E)$
is nontrivial as a $\mathcal G(E)$-bundle, as it has nontrivial leading order characteristic classes.
\end{remark}
We conclude with a result that complements Rochon's calculations of the homotopy groups of $\pdo_0^*$ \cite{R}.
\begin{corollary} In the setup of Proposition \ref{last prop}, if $H^*_F(M,{\mathbb C})$ is nontrivial,
then ${\rm Ell}^*(F)$
is not a deformation retract of $\pdo_0^*(F).$
\end{corollary}
\begin{proof} Assume ${\rm Ell}^*$ is a deformation retract of $\pdo_0^*$. Then every $\pdo_0^*$-bundle
admits a reduction to an ${\rm Ell}^*$-bundle.
Let $\mathcal E\to B$ be an ${\rm Ell}^*$-bundle admitting a connection. ${\rm Lie}({\rm Ell}^*)$ is the
algebra of negative order $\Psi{\rm DO}$s, so the connection and curvature forms have vanishing
leading order Chern classes. For $B = {\rm Maps}(M,M)$ and $f = {\rm id}$, the
proof of Proposition \ref{last prop} gives an injection of $H_F^*(M,{\mathbb C})$ into the subring of
$H^*({\rm Maps}(M,M),{\mathbb C})$ generated by the leading order Chern classes. This is a contradiction.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
{The success of helioseismology and the promise of asteroseismology
have motivated numerous efforts to measure oscillations in solar-type
stars. These began with ground-based observations \citep[for recent
reviews see][]{B+K2007c,AChDC2008} and now extend to space-based
photometry, particularly with the {\em CoRoT} and {\em Kepler Missions}
\citep[e.g.,][]{MBA2008,GBChD2010}.}
We have carried out a multi-site spectroscopic campaign to measure
oscillations in the F5 star Procyon~A (HR 2943; HD 61421; HIP 37279). We
obtained high-precision velocity observations over more than three weeks
with eleven telescopes, with almost continuous coverage for the central ten
days. In Paper~I \citep{AKB2008PaperI} we described the details of the
observations and data reduction, measured the mean oscillation amplitudes,
gave a crude estimate for the mode lifetime and discussed slow variations
in the velocity curve that we attributed to rotational modulation of active
regions. In this paper we describe the procedure used to extract the mode
parameters, provide a list of oscillation frequencies, and give an improved
estimate of the mode lifetimes.
\section{Properties of Solar-Like Oscillations} \label{sec.solar-like}
We begin with a brief summary of the relevant properties of solar-like
oscillations (for reviews see, for example,
\citealt{B+G94,B+K2003,ChD2004}).
To a good approximation, in main-sequence stars the cyclic frequencies of
low-degree p-mode oscillations are regularly spaced, following the
asymptotic relation \citep{Tas80,Gou86}:
\begin{equation}
\nu_{n,l} \approx \mbox{$\Delta \nu$} (n + {\textstyle\frac{1}{2}} l + \epsilon) - l(l+1) D_0.
\label{eq.asymptotic}
\end{equation}
Here $n$ (the radial order) and $l$ (the angular degree) are integers,
$\mbox{$\Delta \nu$}$ (the large separation) depends on the sound travel time across the
whole star, $D_0$ is sensitive to the sound speed near the core and
$\epsilon$ is sensitive to {the reflection properties of} the surface
layers. It is conventional to define three so-called small frequency
separations that are sensitive to the sound speed in the core: $\dnu{02}$
is the spacing between adjacent modes with $l=0$ and $l=2$ (for which $n$
will differ by 1); $\dnu{13}$ is the spacing between adjacent modes with
$l=1$ and $l=3$ (ditto); and $\dnu{01}$ is the amount by which $l=1$ modes
are offset from the midpoint of the $l=0$ modes on either
side.\footnote{One can also define an equivalent quantity, $\dnu{10}$, as
the offset of $l=0$ modes from the midpoint between the surrounding $l=1$
modes, {so that $\dnu{10} = \nu_{n, 0} - {\textstyle\frac{1}{2}}(\nu_{n-1, 1} +
\nu_{n,1})$.}} {To be explicit, for a given radial order, $n$, these
separations are defined as follows:}
\begin{eqnarray}
\dnu{02} & = & \nu_{n, 0} - \nu_{n-1,2} \label{eq.dnu02} \\
\dnu{01} & = & {\textstyle\frac{1}{2}}(\nu_{n, 0} + \nu_{n+1,0}) - \nu_{n, 1}
\label{eq.dnu01} \\
\dnu{13} & = & \nu_{n, 1} - \nu_{n-1,3}. \label{eq.dnu13}
\end{eqnarray}
If the asymptotic relation (equation~\ref{eq.asymptotic}) were to hold
exactly, it would follow that {all of these separations would be
independent of $n$ and that} $\dnu{02} = 6 D_0$, $\dnu{13} = 10 D_0$ and
$\dnu{01} = 2 D_0$. {In practice,} equation~(\ref{eq.asymptotic}) is only
an approximation. In the Sun and other stars, theoretical models and
observations show that $\mbox{$\Delta \nu$}$, $D_0$ and $\epsilon$ vary somewhat with
frequency, and also with~$l$. Consequently, the small separations also
vary with frequency.
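The identities $\dnu{02} = 6 D_0$, $\dnu{13} = 10 D_0$ and $\dnu{01} = 2 D_0$ follow directly from equation~(\ref{eq.asymptotic}) and can be checked numerically. The following is a minimal sketch in Python; the parameter values are purely illustrative (loosely of Procyon's order of magnitude), not fitted values:

```python
def asymptotic_freq(n, l, dnu, eps, d0):
    """Cyclic frequency nu_{n,l} from the asymptotic relation, in muHz."""
    return dnu * (n + 0.5 * l + eps) - l * (l + 1) * d0

# Illustrative parameters only (dnu ~ 55 muHz as for Procyon; eps, D0 made up).
dnu, eps, d0 = 55.0, 1.4, 0.9
n = 15

# Small separations as defined in equations (2)-(4):
d02 = asymptotic_freq(n, 0, dnu, eps, d0) - asymptotic_freq(n - 1, 2, dnu, eps, d0)
d01 = 0.5 * (asymptotic_freq(n, 0, dnu, eps, d0)
             + asymptotic_freq(n + 1, 0, dnu, eps, d0)) \
      - asymptotic_freq(n, 1, dnu, eps, d0)
d13 = asymptotic_freq(n, 1, dnu, eps, d0) - asymptotic_freq(n - 1, 3, dnu, eps, d0)
```

With the exact asymptotic relation these evaluate to $6D_0$, $2D_0$ and $10D_0$ independently of $n$; in real stars the frequency dependence of $\mbox{$\Delta \nu$}$, $D_0$ and $\epsilon$ breaks these identities.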
The mode amplitudes are determined by the excitation and damping, which are
stochastic processes involving near-surface convection. We typically
observe modes over a range of frequencies, which in Procyon is especially
broad (about 400--1400\,\mbox{$\mu$Hz}; see Paper~I). The observed amplitudes also
depend on $l$ via various projection factors (see Table~1 of
\citealt{KBA2008}). Note in particular that velocity measurements are much
more sensitive to modes with $l=3$ than are intensity measurements. The
mean mode amplitudes are modified for a given observing run by the
stochastic nature of the excitation, resulting in considerable scatter of
the peak heights about the envelope.
Oscillations in the Sun are long-lived compared to their periods, which
allows their frequencies to be measured very precisely. However, the
lifetime is not infinite and the damping results in each mode in the power
spectrum being split into multiple peaks under a Lorentzian profile. The
FWHM of this Lorentzian, which is referred to as the linewidth $\Gamma$, is
inversely proportional to the mode lifetime: $\Gamma = 1/(\pi\tau)$.
{We follow the usual definition that $\tau$ is the time for the mode
amplitude to decay by a factor of~$e$.} The solar value of $\tau$ for the
strongest modes ranges from 2 to 4\,days, as a decreasing function of
frequency \citep[e.g.,][]{CEI97}.
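The conversion between mode lifetime and linewidth is a one-line computation; a minimal sketch (lifetimes in days, linewidths returned in $\mu$Hz):

```python
import math

def linewidth_from_lifetime(tau_days):
    """FWHM of the Lorentzian profile, Gamma = 1/(pi * tau), in muHz."""
    tau_seconds = tau_days * 86400.0
    return 1e6 / (math.pi * tau_seconds)

# Solar values for the strongest modes, tau = 2-4 days:
gamma_2d = linewidth_from_lifetime(2.0)   # about 1.84 muHz
gamma_4d = linewidth_from_lifetime(4.0)   # about 0.92 muHz
```

The inverse relation means that resolving the narrower (longer-lived) modes in a power spectrum requires a correspondingly longer time series.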
Procyon is an evolved star, with theoretical models showing that it is
close to, or just past, the end of the main sequence
\citep[e.g.,][]{G+D93,BMM99,CDG99,DiM+ChD2001,KTM2004,ECB2005,PBM2006,BKP2007,GKG2008}.
As such, its oscillation spectrum may show deviations from the regular
comb-like structure described by equation~(\ref{eq.asymptotic}), especially
at low frequencies. This is because some modes, particularly those with
$l=1$, are shifted by avoided crossings with gravity modes in the stellar
core (also called `mode bumping'; see \citealt{Osa75,ASW77}). These
so-called `mixed modes' have p-mode character near the surface but g-mode
character in the deep interior. Some of the theoretical models of Procyon
cited above indeed predict these mixed modes, depending on the evolutionary
state of the star, and we must keep this in mind when attempting to
identify oscillation modes in the power spectrum. The mixed modes are rich
in information because they probe the stellar core and are very sensitive
to age, but they complicate the task of mode identification.
We should also keep in mind that mixed modes are expected to have
{lower amplitudes and} longer lifetimes (smaller linewidths) than
regular p modes because they have larger mode inertias
\citep[e.g.,][]{ChD2004}. {Hence, for a data series that is many times
longer than the lifetime of the pure p modes, a mixed mode may appear in
the power spectrum as a narrow peak that is higher than the others, even
though its power (amplitude squared) is not especially large. }
Another potential complication is that stellar rotation causes modes with
$l \ge 1$ to split into multiplets. The peaks of these multiplets are
characterized by the azimuthal degree~$m$, which takes on values of~$m = 0,
\pm1, \ldots, \pm l$, with a separation that directly measures the rotation
rate averaged over the region of the star that is sampled by the mode. The
measurements are particularly difficult because a long time series is
needed to resolve the rotational splittings. We argue in
Appendix~\ref{app.rotation} that the low value of $v\sin i$ observed in
Procyon implies that rotational splitting of frequencies is not measurable
in our observations.
\section{Weighting the time series} \label{sec.weights}
The time series of velocity observations was obtained over 25 days using 11
telescopes at eight observatories and contains just over 22\,500 data
points. As discussed in Paper~I, the velocity curve shows slow variations
that we attribute to a combination of instrumental drifts and rotational
modulation of stellar active regions. We have removed these slow
variations by subtracting all the power below 280\,\mbox{$\mu$Hz}{}, to prevent spectral
leakage into higher frequencies that would degrade the oscillation
spectrum. We take this high-pass-filtered time series of velocities,
together with their associated measurement uncertainties, as the starting
point in our analysis.
\subsection{Noise-optimized weights}
Using weights when analyzing ground-based observations of stellar
oscillations \citep[e.g.,][]{GBK93,FJK95} allows one to take into account
the significant variations in data quality during a typical observing
campaign, especially when two or more telescopes are used. The usual
practice, which we followed in Paper~I, is to calculate the weights for a
time series from the measurement uncertainties, $\sigma_i$, according to
$w_i=1/\sigma_i^2$.
These ``raw'' weights can then be adjusted to minimize the noise level in
the final power spectrum by identifying and revising those uncertainties
that are too optimistic, and at the same time rescaling the uncertainties
to be in agreement with the actual noise levels in the data. This
procedure is described in Paper~I and references therein. The time series
of these noise-optimized weights is shown in Figure~\ref{fig.weights}{\em
a}. These are the same as those shown in Figure~1{\em d} of Paper~I, but
this time as weights rather than uncertainties.
The power spectrum of Procyon based on these noise-optimized weights is
shown in Figure~\ref{fig.power}{\em a}. This is the same as shown in
Paper~I (lower panel of Figure~6), except that the power at low
frequencies, which arises from the slow variations, has been removed. As
described in Paper~I, the noise level above 3\,mHz in this noise-optimized
spectrum is 1.9\,\mbox{cm\,s$^{-1}$}{} in amplitude. This includes some degree of
spectral leakage from the oscillations and if we high-pass filter the
spectrum up to 3\,mHz to remove the oscillation signal, the noise level
drops to 1.5\,\mbox{cm\,s$^{-1}$}{} in amplitude.
The task of extracting oscillation frequencies from the power spectrum is
complicated by the presence of structure in the spectral window, which is
caused by gaps or otherwise uneven coverage in the time series.
spectral window using the noise-optimized weights is shown in
Figure~\ref{fig.window}{\em a}. Prominent sidelobes at $\pm 11.57\,\mbox{$\mu$Hz}$
correspond to aliasing at one cycle per day. Indeed, the prospect of
reducing these sidelobes is the main reason for acquiring multi-site
observations. However, even with good coverage the velocity precision
varies greatly, both for a given telescope during the run and from one
telescope to another (see Figure~\ref{fig.weights}{\em a}). As pointed out
in Paper~I, using these measurement uncertainties as weights has the effect
of increasing the sidelobes in the spectral window. We now discuss a
technique for addressing this issue.
\subsection{Sidelobe-optimized weights}
Adjusting the weights allows one to suppress the sidelobe structure; the
trade-off is an increase in the noise level. This technique is routinely
used in radio astronomy when synthesising images from interferometers
\citep[e.g.,][]{H+B74}. {An extreme case is to set all weights to be
equal, which is the same as not using weights at all. This is certainly
not optimal because it produces a power spectrum with greatly increased
noise (by a factor of 2.3) but still having significant sidelobes, as can
be seen in Figure 6{\it a} of Paper~I.} Adjusting the weights on a
night-by-night basis in order to minimize the sidelobes was used in the
analysis of dual-site observations of \mbox{$\alpha$~Cen~A}{} \citep{BKB2004}, \mbox{$\alpha$~Cen~B}{}
\citep{KBB2005}, and \mbox{$\beta$~Hyi}{} \citep{BKA2007}. For our multi-site Procyon
data this is impractical {because of the large number of (partly
overlapping) telescope nights.} We have developed a more general algorithm
for adjusting weights to minimize the sidelobes (H. Kjeldsen et al., in
prep.). {The new method, which is superior because it does not assume the
oscillations are coherent over the whole observing run, is based on the
principle that equal weight is given to all segments of the time series.}
The method produces the cleanest possible spectral window in terms of
suppressing the sidelobes, and we have tested it with good results using
published data for $\alpha$~Cen A and B, and $\beta$~Hyi \citep{AKB2010}.
The new method operates with two timescales, $T_1$ and $T_2$. All data
segments of length $T_1$ (=2\,hr, in this case) are required to have the
same total weight throughout the time series, with the relaxing condition
that variations on time scales longer than $T_2$ (=12\,hr) are retained.
To be explicit, the algorithm works as follows. We adjust the weights so
that all segments of length $T_1$ have the same total weight. That is, for
each point $w_i$ in the time series of weights, define $\{S_i\}$ to be the
set of weights in a segment of width $T_1$ centered at that time stamp, and
divide each $w_i$ by the sum of the weights in~$\{S_i\}$. However, this
adjustment suffers from edge effects, since it gives undue weight to points
adjacent to a gap. To compensate, we also divide by an asymmetry factor
\begin{equation}
R = 1+ \left|\frac{\Sigma_{\rm left} - \Sigma_{\rm right}}{\Sigma_{\rm left} +
\Sigma_{\rm right}}\right|.
\end{equation}
Here, $\Sigma_{\rm left}$ is the sum of the weights in the segment
$\{S_i\}$ that have time stamps less than $t_i$, and $\Sigma_{\rm right}$
is the sum of the weights in the segment $\{S_i\}$ that have time stamps
greater than $t_i$. Note that $R$ ranges from 1, for points that are
symmetrically placed in their segment, up to 2 for points at one edge of
a gap.
Once the above procedure is done for $T_1$, which is the shortest timescale
on which we wish to adjust the weights, we do it again with $T_2$, which is
the longest timescale for adjusting the weights. Finally, we divide the
first set of adjusted weights by the second set, and this gives the weights
that we adopt (Figure~\ref{fig.weights}{\em b}).
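The two-timescale reweighting described above can be rendered schematically as follows. This sketch is our own reading of the procedure (the full algorithm of H.~Kjeldsen et al., in prep., may differ in detail); the function names are ours, and times and timescales are in days:

```python
import numpy as np

def adjust_weights(t, w, T):
    """Rescale weights so every segment of length T has the same total
    weight, dividing also by the asymmetry factor R to compensate for
    points adjacent to gaps (edge effects)."""
    w_new = np.empty_like(w)
    for i, ti in enumerate(t):
        seg = np.abs(t - ti) <= T / 2.0          # segment {S_i} centered on t_i
        total = w[seg].sum()
        left = w[seg & (t < ti)].sum()           # weights before t_i
        right = w[seg & (t > ti)].sum()          # weights after t_i
        R = 1.0 + abs(left - right) / (left + right) if left + right > 0 else 1.0
        w_new[i] = w[i] / (total * R)
    return w_new

def sidelobe_optimized(t, w, T1=2.0 / 24.0, T2=12.0 / 24.0):
    """Divide the short-timescale (T1 = 2 hr) adjustment by the
    long-timescale (T2 = 12 hr) one, so that variations slower than T2
    are retained while segments of length T1 get equal total weight."""
    return adjust_weights(t, w, T1) / adjust_weights(t, w, T2)
```

The final division is what relaxes the equal-weight condition on timescales longer than $T_2$.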
\subsection{The sidelobe-optimized power spectrum}
Figure~\ref{fig.power}{\em b} shows the power spectrum of Procyon based on
the sidelobe-optimized weights. The spectral window has improved
tremendously (Figure~\ref{fig.window}{\em b}), while the noise level at
high frequencies (above 3\,mHz) has increased by a factor of 2.0.
The power spectrum now clearly shows a regular series of peaks, which are
even more obvious after smoothing (Figure~\ref{fig.power}{\em c}). We see
that the large separation of the star is about 55\,\mbox{$\mu$Hz}, confirming the
value indicated by several previous studies
\citep{MMM98,MSL99,MLA2004,ECB2004,R+RC2005,LKB2007,GKG2008}. The very
strong peak at 446\,\mbox{$\mu$Hz}{} appears to be a candidate for a mixed mode,
especially given its narrowness {(see Section~\ref{sec.solar-like})}.
Plotting the power spectrum in \'echelle format using a large separation of
56\,\mbox{$\mu$Hz}\ (Figure~\ref{fig.echelle.image.WIN56}) clearly shows two ridges,
as expected.\footnote{When making an \'echelle diagram, it is common to
plot $\nu$ versus ($\nu \bmod \mbox{$\Delta \nu$}$), in which case each order slopes
upwards slightly. However, for gray-scale images it is preferable to keep
the orders horizontal, {as was done in the original presentation of the
diagram \citep{GFP83}.} We have followed that approach in this paper, and
the value given on the vertical axis indicates the frequency at the middle
of each order.} The upper parts are vertical but the lower parts are
tilted, indicating a change in the large separation as a function of
frequency. This large amount of curvature in the \'echelle diagram goes a
long way towards explaining the lack of agreement between previous studies
on the mode frequencies of Procyon {(see the list of references given
in the previous paragraph)}.
The advantage of using the sidelobe-optimized weights is demonstrated by
Figure~\ref{fig.echelle.image.SNR56}. This is the same as
Figure~\ref{fig.echelle.image.WIN56} but for the noise-optimized weights
and the ridges are no longer clearly defined.
\section{Identification of the ridges}
\label{sec.ridge.id}
We know from asymptotic theory (see equation~\ref{eq.asymptotic}) that one
of the ridges in the \'echelle diagram
(Figure~\ref{fig.echelle.image.WIN56}) corresponds to modes with even
degree ($l=0$ and 2) and the other to modes with odd degree ($l=1$ and 3).
However, it is not immediately obvious which is which. We also need to
keep in mind that the asymptotic relation in evolved stars does not hold
exactly. We designate the two possibilities Scenario~A, in which the
left-hand ridge in Figure~\ref{fig.echelle.image.WIN56} corresponds to
modes with odd degree, and Scenario~B, in which the same ridge corresponds
to modes with even degree. Figure~\ref{fig.collapse3} shows the Procyon
power spectrum collapsed along several orders. We now see double peaks
that suggest the identifications shown, corresponding to Scenario~B.
We can check that the small separation $\dnu{01}$ has the expected sign.
According to asymptotic theory (equation~\ref{eq.asymptotic}), each $l=1$
mode should be at a slightly lower frequency than the mid-point of the
adjacent $l=0$ modes. This is indeed the case for the identifications
given in Figure~\ref{fig.collapse3}, but would not be if the even and odd
degrees were reversed. We should be careful, however, since \dnu{01} has
been observed to have the opposite sign in red giant stars
\citep{CDRB2010,BHS2010}.
The problem of ridge identification in F stars was first encountered by
\citet{AMA2008} when analysing CoRoT observations of HD~49933 and has been
followed up by numerous authors
\citep{BAB2009,BBC2009,GKW2009,M+A2009,Rox2009,KGG2010}. Two other F stars
observed by CoRoT have presented the same problem, namely HD~181906
\citep{GRS2009} and HD~181420 \citep{BDB2009}. A discussion of the issue
was recently given by \citet{B+K2010}, who proposed a solution to the
problem that involves comparing two (or more) stars on a single \'echelle
diagram after first scaling their frequencies.
Figure~\ref{fig.echelle.corot} shows the \'echelle diagram for Procyon
overlaid with scaled frequencies for two stars observed by CoRoT, using the
method described by \citet{B+K2010}. The filled symbols are scaled
oscillation frequencies for the G0 star HD~49385 observed by CoRoT
\citep{DBM2010}. The scaling involved multiplying all frequencies by a
factor of 0.993 before plotting them, with this factor being chosen to
align the symbols as closely as possible with the Procyon ridges. For this
star the CoRoT data gave an unambiguous mode identification, which is
indicated by the symbol shapes. This confirms that the left-hand ridge of
Procyon corresponds to modes with even $l$ (Scenario~B).
The open symbols in Figure~\ref{fig.echelle.corot} are oscillation
frequencies for HD~49933 from the revised identification by
\citet[][Scenario~B]{BBC2009}, after multiplying by a scaling factor of
0.6565. The alignment with HD~49385 was already demonstrated by
\citet{B+K2010}. We show HD~49933 here for comparison and to draw
attention to the different amounts of bending at the bottom of the
right-hand ($l=1$) ridge for the three stars. {The CoRoT target that
is most similar to Procyon is HD~170987, but unfortunately the $S/N$ ratio is
too low to provide a clear identification of the ridges \citep{MGC2010}.}
The above considerations give us confidence that Scenario~B in Procyon is
the correct identification, and we now proceed on that basis.
\section{Frequencies of the Ridge Centroids}
\label{sec.ridge.centroids}
Our next step in the analysis was to measure the centroids of the two
ridges in the \'echelle diagram. We first removed the strong peak at
446\,\mbox{$\mu$Hz}\ (it was replaced by the mean noise level). We believe this to
be a mixed mode and its extreme power means that it would significantly distort
the result. We then smoothed the power spectrum to a resolution of
10\,\mbox{$\mu$Hz}\ (FWHM). To further improve the visibility of the ridges, we
also averaged across several orders, which corresponds to smoothing in the
vertical direction in the \'echelle diagram
\citep{BKB2004,KBB2005,Kar2007}. That is, for a given value of \mbox{$\Delta \nu$}\ we
define the ``order-averaged'' power-spectrum to be
\begin{equation}
{\rm OAPS}(\nu, \mbox{$\Delta \nu$}) = \sum_{j=-4}^4 c_j PS(\nu + j \mbox{$\Delta \nu$}).
\label{eq.OAPS}
\end{equation}
The coefficients $c_j$ are chosen to give a smoothing with a FWHM of
$k\mbox{$\Delta \nu$}$:
\begin{equation}
c_j = c_{-j} = \frac{1}{1 + (2j/k)^2}.
\end{equation}
We show in Figure~\ref{fig.idl9} the OAPS based on smoothing over 4 orders
($k=4.0$), and so we used $(c_0,\ldots, c_4) = (1, 0.8, 0.5, 0.31, 0.2)$.
The OAPS is plotted for three values of the large separations (54, 55 and
56\,\mbox{$\mu$Hz}) and they are superimposed. The three curves are hardly
distinguishable and we see that the positions of the maxima are not
sensitive to the value of \mbox{$\Delta \nu$}.
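Equation~(\ref{eq.OAPS}) and the coefficients $c_j$ are straightforward to evaluate; a minimal sketch, assuming the power spectrum is sampled on a frequency grid (all names here are ours):

```python
import numpy as np

def oaps(nu, power, freqs, dnu, k=4.0, jmax=4):
    """Order-averaged power spectrum at frequency nu:
    OAPS(nu, dnu) = sum_j c_j * PS(nu + j*dnu), with
    c_j = 1 / (1 + (2j/k)^2) giving a smoothing FWHM of k*dnu."""
    j = np.arange(-jmax, jmax + 1)
    c = 1.0 / (1.0 + (2.0 * j / k) ** 2)
    # linear interpolation of the power spectrum at the shifted frequencies
    return np.sum(c * np.interp(nu + j * dnu, freqs, power))

# Coefficients for k = 4 (smoothing over four orders):
j = np.arange(0, 5)
c = 1.0 / (1.0 + (2.0 * j / 4.0) ** 2)   # (1, 0.8, 0.5, ~0.31, 0.2)
```

Taking the maximum of this quantity over a range of trial large separations, as done in the next step, recovers the comb-response approach.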
We next calculated a modified version of the OAPS in which the value at
each frequency is the maximum value of the OAPS over a range of large
separations (53--57\,\mbox{$\mu$Hz}). This is basically the same as the comb
response, as used previously by several authors
\citep{KBV95,MMM98,MSL99,LKB2007}. The maxima of this function define the
centroids of the two ridges, which are shown in
Figure~\ref{fig.echelle.ridges56}.
In Figure~\ref{fig.collapse.ridges} we show the full power spectrum of
Procyon (using sidelobe-optimized weights) collapsed along the ridges.
This is similar to Figure~\ref{fig.collapse3} except that {each order
was shifted before the summation, so as to align the ridge peaks (symbols
in Figure~\ref{fig.echelle.ridges56}) and hence remove the curvature.}
This was done separately for both the even- and odd-degree ridges, as shown
in the two panels of Figure~\ref{fig.collapse.ridges}. The collapsed
spectrum clearly shows the power corresponding to $l=0$--3, as well as the
extra power from the mixed modes (for this figure, the peak at 446\,\mbox{$\mu$Hz}\
has not been removed).
In Section~\ref{sec.clean} below, we use the ridges to guide our
identification of the individual modes. First, however, we show that some
asteroseismological inferences can be made solely from the ridges
themselves. {This is explained in more detail in
Appendix~\ref{app.ridges}.}
\subsection{Large separation of the ridges}
Figure~\ref{fig.seps.ridges}{\it a} shows the variation with frequency of
the large separation for each of the two ridges (diamonds and triangles).
{The smoothing across orders (equation~\ref{eq.OAPS}) means that the
ridge frequencies are correlated from one order to the next and so we used
simulations to estimate uncertainties for the ridge centroids.}
The oscillatory behavior of \mbox{$\Delta \nu$}\ as a function of frequency seen in
Figure~\ref{fig.seps.ridges}{\it a} is presumably a signature of the helium
ionization zone {\citep[e.g.][]{Gou90}}.
The oscillation is also seen in Figure~\ref{fig.seps.ridges}{\it b}, which
shows the second differences for the two ridges, defined as follows
\citep[see][]{Gou90,BTCG2004,H+G2007}:
\begin{eqnarray}
\Delta_2\nu_{n, \rm even} & = & \nu_{n-1,\rm even} - 2 \nu_{n,\rm even} + \nu_{n+1, \rm even}\\
\Delta_2\nu_{n, \rm odd} & = & \nu_{n-1,\rm odd} - 2 \nu_{n,\rm odd} + \nu_{n+1, \rm odd}.
\end{eqnarray}
The period of the oscillation is $\sim$500\,\mbox{$\mu$Hz}, which implies a glitch
at an acoustic depth that is approximately twice the inverse of this value
\citep{Gou90,H+G2007}, namely $\sim$1000\,s. {To determine this more
precisely, we calculated the power spectrum of the second differences for
both the odd and even ridges, and measured the highest peak. We found the
period of the oscillation in the second differences to be
$508\pm18\,\mbox{$\mu$Hz}$.} Comparing this result with theoretical models will be
the subject of a future paper.
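The second differences are a three-point stencil along each ridge. The sketch below uses made-up numbers to illustrate why they isolate a glitch: the smooth, evenly spaced part of the frequencies cancels exactly, while a sinusoidal modulation (such as that imprinted by the helium ionization zone) survives:

```python
import numpy as np

def second_differences(nu):
    """Delta2 nu_n = nu_{n-1} - 2*nu_n + nu_{n+1} along one ridge."""
    nu = np.asarray(nu, dtype=float)
    return nu[:-2] - 2.0 * nu[1:-1] + nu[2:]

# Illustrative ridge: evenly spaced frequencies plus a small sinusoidal
# glitch signature with a ~508 muHz period (values are not measurements).
n = np.arange(8, 25)
dnu = 55.0
nu_smooth = dnu * (n + 1.4)
nu_glitch = nu_smooth + 0.5 * np.sin(2.0 * np.pi * nu_smooth / 508.0)
```

For the evenly spaced ridge the second differences vanish; for the perturbed ridge they oscillate with the glitch period, which is how the $508\pm18\,\mu$Hz signal above was extracted.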
{The dotted lines in Figure~\ref{fig.seps.ridges}{\it a} show the variation
of \mbox{$\Delta \nu$}\ with frequency calculated from the autocorrelation of the time
series using the method of \citet[][see also \citealt{R+V2006}]{M+A2009}.
The mixed mode at 446\,\mbox{$\mu$Hz}\ was first removed and the smoothing filter
had FWHM equal to 3 times the mean large separation. We see general
agreement with the values calculated from the ridge separations. Some of
the differences presumably arise because the autocorrelation analysis of
the time series averages the large separation over all degrees.
\subsection{Small separation of the ridges}
\label{sec.ridge.dnu}
Using only the centroids of the ridges, we can measure a small separation
that is useful for asteroseismology. By analogy with \dnu{01}\ (see
equation~\ref{eq.dnu01}), we define it as the amount by which the odd
ridge is offset from the midpoint of the two adjacent even ridges, with a
positive value corresponding to a leftwards shift (as observed in the Sun).
That is,
\begin{equation}
\delta\nu_{\rm even,odd} =
\frac{\nu_{n,\rm even} + \nu_{n+1,\rm even}}{2} - \nu_{n, \rm odd}.
\label{eq.dnu_even_odd}
\end{equation}
Figure~\ref{fig.seps.ridges}{\it c} shows our measurements of this small
separation.\footnote{We could also define a small separation \dnu{\rm
odd,even} to be the amount by which the centroid of the {\rm even} ridge is
offset rightwards from the midpoint of the adjacent {\rm odd} ridges. This
gives similar results.} It is related in a simple way to the conventional
small separations \dnu{01}, \dnu{02}, and \dnu{13} (see
Appendix~\ref{app.ridges} for details) and so, like them, it gives
information about the sound speed in the core. Our measurements of this
small separation can be compared with theoretical models using the
equations in Appendix~\ref{app.ridges} \citep[e.g., see][]{ChD+H2009}.
\section{Frequencies of individual modes}
\label{sec.clean}
We have extracted oscillation frequencies from the time series using the
standard procedure of iterative sine-wave fitting. Each step of the
iteration involves finding the strongest peak in the sidelobe-optimized
power spectrum and subtracting the corresponding sinusoid from the time
series. Figure~\ref{fig.echelle.clean.sidelobe} shows the result. The two
ridges are clearly visible but the finite mode lifetime causes many modes
to be split into two or more peaks. We might also be tempted to propose
that some of the multiple peaks are due to rotational splitting but, as
shown in Appendix~\ref{app.rotation}, this is unlikely to be the case.
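The prewhitening at the heart of iterative sine-wave fitting can be sketched in a few lines. This is a minimal illustration on a grid of trial frequencies; the production analysis additionally handles the observational weights and the window function.

```python
import numpy as np

def iterative_sine_wave_fit(t, y, trial_freqs, n_peaks):
    """Minimal prewhitening sketch: repeatedly locate the strongest sinusoid
    (least-squares fit at each trial frequency) and subtract it."""
    y = y.astype(float).copy()
    extracted = []
    for _ in range(n_peaks):
        best_f, best_coef, best_pow = None, None, -1.0
        for f in trial_freqs:
            A = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            p = float(coef @ coef)          # squared amplitude of the fit
            if p > best_pow:
                best_f, best_coef, best_pow = f, coef, p
        A = np.column_stack([np.cos(2*np.pi*best_f*t), np.sin(2*np.pi*best_f*t)])
        y -= A @ best_coef                  # prewhiten: remove this sinusoid
        extracted.append((best_f, float(np.hypot(*best_coef))))
    return extracted, y
```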
Deciding on a final list of mode frequencies with correct $l$
identifications is somewhat subjective. To guide this process, we used the
ridge centroids shown in Figure~\ref{fig.echelle.ridges56} as well as the
small separations $\dnu{02}$ and $\dnu{13}$ from the collapsed power
spectrum (see Figures~\ref{fig.collapse3} and~\ref{fig.collapse.ridges}).
Each frequency extracted using iterative sine-wave fitting that lay close
to a ridge was assigned an $l$ value and multiple peaks from the same mode
were averaged. The final mode frequencies are listed in
Table~\ref{tab.freq.matrix}, while peaks with $S/N \ge 3.5$ that we have not
identified are listed in Table~\ref{tab.freq.list.other}.
Figures~\ref{fig.power.zoom} and~\ref{fig.echelle.cleanid} show these peaks
overlaid on the sidelobe-optimized power spectrum.
{Figure~\ref{fig.seps.freq} shows the three small separations
(equations~\ref{eq.dnu02}--\ref{eq.dnu13}) as calculated from the
frequencies listed in Table~\ref{tab.freq.matrix}. }
{The uncertainties in the mode frequencies are shown in parentheses in
Table~\ref{tab.freq.matrix}. These depend on the $S/N$ ratio of the peak
and were calibrated using simulations \citep[e.g., see][]{BKA2007}.}
The entries in Table~\ref{tab.freq.list.other} {are mostly false peaks due
to noise and to residuals from the iterative sine-wave fitting,} but may
include some genuine modes. To check whether some of them may be daily
aliases of each other or of genuine modes, we calculated the differences of
all combinations of frequencies in Tables~\ref{tab.freq.matrix}
and~\ref{tab.freq.list.other}. The histogram of these pairwise differences
was flat in the vicinity of 11.6\,\mbox{$\mu$Hz}\ and showed no excess, confirming
that daily aliases do not contribute significantly to the list of
frequencies in the tables.
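The alias check amounts to counting pairwise differences near the one-cycle-per-day spacing; a sketch (with invented frequencies, not the tabulated ones) is:

```python
import itertools

# Count pairwise frequency differences that fall near the 1 cycle/day alias
# spacing (~11.6 microHz). An excess of such pairs would indicate that some
# listed peaks are daily aliases of others.
def n_alias_pairs(freqs_uhz, alias=11.6, tol=0.3):
    diffs = (abs(a - b) for a, b in itertools.combinations(freqs_uhz, 2))
    return sum(abs(d - alias) < tol for d in diffs)

print(n_alias_pairs([100.0, 111.6, 250.0]))  # 1
```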
{We also checked whether the number of peaks in
Table~\ref{tab.freq.list.other} agrees with expectations. We did this by
analysing a simulated time series that matched the observations in terms of
oscillation properties (frequencies, amplitudes, and mode lifetimes), noise
level, window function and distribution of weights. We extracted peaks
from the simulated power spectrum using iterative sine-wave fitting, as
before, and found the number of ``extra'' peaks (not coinciding with the
oscillation ridges) to be similar to that seen in
Figure~\ref{fig.echelle.clean.sidelobe}. Finally, we remark that the peak
at 408\,\mbox{$\mu$Hz}\ is a candidate for a mixed mode with $l=1$, given that it
lies in the same order as the previously identified mixed mode at
446\,\mbox{$\mu$Hz}\ (note that we expect one extra $l=1$ mode to occur at an
avoided crossing). }
The modes listed in Table~\ref{tab.freq.matrix} span 20 radial orders and
more than a factor of 4 in frequency. This range is similar to that
obtained from long-term studies of the Sun \citep[e.g.,][]{BCD2009} and is
unprecedented in asteroseismology. It was made possible by the unusually
broad range of excited modes in Procyon and the high S/N of our data.
Since the stellar background at low frequencies in intensity measurements
is expected to be much higher than for velocity measurements, it seems
unlikely that even the best data from the {\em Kepler Mission} will return
such a wide range of frequencies in a single target.
\section{Mode lifetimes} \label{sec.lifetimes}
As discussed in Section~\ref{sec.solar-like}, if the time series is
sufficiently long then damping causes each mode in the power spectrum to be
split into a series of peaks under a Lorentzian envelope having FWHM
$\Gamma = 1/(\pi\tau)$, where $\tau$ is the mode lifetime. Our
observations of Procyon are not long enough to resolve the modes into clear
Lorentzians, and instead we see each mode as a small number of peaks
(sometimes one). Furthermore, the centroid of these peaks may be offset
from the position of the true mode, as illustrated in Figure~1 of
\citet{ADJ90}. This last feature allows one to use the scatter of the
extracted frequencies about smooth ridges in the \'echelle diagram,
calibrated using simulations, to estimate the mode lifetime
\citep{KBB2005,BKA2007}. That method cannot be applied to Procyon because
the $l=0$ and $l=2$ ridges are not well-resolved and the $l=1$ ridge is
affected by mixed modes.
Rather than looking at frequency shifts, we have estimated the mode
lifetime from the variations in mode amplitudes (again calibrated using
simulations). This method is less precise but has the advantage of being
independent of the mode identifications
\citep[e.g.,][]{LKB2007,CKB2007,BKA2007}. In Paper~I we calculated the
smoothed amplitude curve for Procyon in ten 2-day segments and used the
fluctuations about the mean to make a rough estimate of the mode lifetime:
$\tau = 1.5_{-0.8}^{+1.9}$\,days. We have attempted to improve on that
estimate by considering the amplitude fluctuations of individual modes, as
has been done for the Sun \citep[e.g.,][]{T+F92,BGG96,C+G98}, but were not
able to produce well-calibrated results for Procyon.
Instead, we have measured the ``peakiness'' of the power spectrum
\citep[see][]{BKA2007} by calculating the ratio between the square of the
mean amplitude of the 15 highest peaks in the range 500--1300\,\mbox{$\mu$Hz}\
(found by iterative sine-wave fitting) and the mean power in the same
frequency range. The value for this ratio from our observations of Procyon
is 6.9. We made a large number of simulations (3600) having a range of
mode lifetimes and with the observed frequency spectrum, noise level,
window function and weights. Comparing the simulations with the
observations led to a mode lifetime for Procyon of
$1.29^{+0.55}_{-0.49}$\,days.
This agrees with the value found in Paper~I but is more precise, confirming
that modes in Procyon are significantly more short-lived than those of the
Sun. As discussed in Section~\ref{sec.solar-like}, the dominant modes in
the Sun have lifetimes of 2--4\,days \citep[e.g.,][]{CEI97}. The tendency
for hotter stars to have shorter mode lifetimes has recently been discussed
by \citet{CHK2009}.
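The ``peakiness'' statistic used above is a simple ratio; the sketch below shows its form (the function name is ours, and the calibration against mode lifetime still requires the simulations described in the text):

```python
# Ratio of the square of the mean amplitude of the strongest extracted peaks
# to the mean power in the same frequency range. Heavily damped (short-lived)
# modes spread their power over more peaks, lowering this ratio; the observed
# value for Procyon was 6.9.
def peakiness(peak_amplitudes, power_values):
    mean_amp = sum(peak_amplitudes) / len(peak_amplitudes)
    return mean_amp**2 / (sum(power_values) / len(power_values))
```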
\section{Fitting to the power spectrum}
\label{sec.fit}
Extracting mode parameters by fitting directly to the power spectrum is
widely used in helioseismology, where the time series extends continuously
for months or even years, and so the individual modes are well-resolved
\citep[e.g.,][]{ADJ90}. Mode fitting has not been applied to ground-based
observations of solar-type oscillations because these data typically have
shorter durations and significant gaps. Global fitting has been carried
out on spacecraft data, beginning with the 50-d time series of \mbox{$\alpha$~Cen~A}\
taken with the WIRE spacecraft \citep{FCE2006} and the 60-d light curve of
HD\,49933 from CoRoT \citep{AMA2008}. Our observations of Procyon are much
shorter than either of these cases but, given the quality of the data and
the spectral window, we considered it worthwhile to attempt a fit.
{Global fits to the Procyon power spectrum were made by several of us.
Here, we present results from} a fit using a Bayesian approach
\citep[e.g.,][]{Gre2005}, which allowed us to include in a straightforward
way our prior knowledge of the oscillation properties. The parameters to
be extracted were the frequencies, heights and linewidths of the modes. To
obtain the marginal probability distributions of these parameters and their
associated uncertainties, we employed an APT MCMC (Automated Parallel
Tempering Markov Chain Monte Carlo) algorithm. It implements the
Metropolis-Hastings sampler by performing a random walk in parameter space
while drawing samples from the posterior distribution \citep{Gre2005}.
Further details of our implementation of the algorithm will be given
elsewhere (T.L. Campante et al., in prep.).
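The core accept/reject step of a Metropolis-Hastings random walk can be sketched as follows. This is a minimal one-parameter illustration, not the authors' APT MCMC implementation, which adds parallel tempering and automated proposal tuning.

```python
import math, random

def metropolis(log_post, x0, step, n_samples, rng=random):
    """Random-walk Metropolis sampler for a posterior known up to normalization."""
    x, lp, samples = x0, log_post(x0), []
    for _ in range(n_samples):
        x_prop = x + rng.gauss(0.0, step)          # symmetric Gaussian proposal
        lp_prop = log_post(x_prop)
        # accept with probability min(1, posterior ratio)
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = x_prop, lp_prop
        samples.append(x)
    return samples
```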
The details of the fitting are as follows:
\begin{itemize}
\item The fitting was performed over 17 orders (5--21) using the
sidelobe-optimized power spectrum. In each order we fitted modes with
$l=0$, 1, and 2, with each individual profile being described by a
symmetric Lorentzian with FWHM~$\Gamma$ and height~$H$. The mode
frequencies were constrained to lie close to the ridges and to have only
small jumps from one order to the next (a Gaussian prior with $\sigma =
3\,\mbox{$\mu$Hz}$). {The S/N ratios of modes with $l=3$ were too low to
permit a fit. In order to take their power into account, we included
them in the model with their frequencies fixed by the asymptotic relation
(equation~\ref{eq.asymptotic}).}
\item The data are not good enough to provide a useful estimate of the
linewidth of every mode, or even of every order. Therefore, the
linewidth was parametrized as a linear function of frequency, defined by
two parameters $\Gamma_{600}$ and $\Gamma_{1200}$, which are the values
at 600 and 1200\,\mbox{$\mu$Hz}. These parameters were determined by the fit, in
which both were assigned a uniform prior in the range 0--10\,\mbox{$\mu$Hz}.
\item The height of each mode is related to the linewidth and amplitude
according to \citep{CHE2005}:
\begin{equation}
H = \frac{2A^2}{\pi \Gamma}.
\end{equation}
The amplitudes $A$ of the modes were determined as follows. For the
radial modes ($l=0$) we used the smoothed amplitude curve measured from
our observations, as shown in Figure~10 of Paper~I. The amplitudes of
the non-radial modes ($l=1$--3) were then calculated from the radial
modes using the ratios given in Table~1 of \citet{KBA2008}, namely
$S_0:S_1:S_2:S_3 = 1.00:1.35:1.02:0.47$.
\item The background was fitted as a flat function.
\item We calculated the rotationally-split profiles of the non-radial
modes using the description given by \citet{G+S2003}. The inclination
angle of the rotation axis was fixed at $31^{\circ}$, which is the
inclination of the binary orbit \citep{GWL2000} and, as discussed in
Paper~I (Section~4.1), is consistent with the rotational modulation of the
velocity curve. The rotational splitting was fixed at 0.7\,\mbox{$\mu$Hz}, which
was chosen to match the observed value of $v\sin i = 3.16$\,\mbox{km\,s$^{-1}$}\
\citep{APAL2002}, given the known radius of the star. As discussed in
Appendix~\ref{app.rotation}, choosing different values for the inclination
(and hence the splitting) does not affect the mode profile, assuming
reasonable values of the linewidth.
\end{itemize}
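The height relation and the amplitude ratios used in the model can be made concrete with a short sketch (the helper names are ours; only the formula and the ratios come from the text):

```python
import math

# H = 2 A^2 / (pi * Gamma): height of a resolved Lorentzian mode profile
# with amplitude A and FWHM linewidth Gamma (Chaplin et al. 2005 convention).
def mode_height(amplitude, linewidth):
    return 2.0 * amplitude**2 / (math.pi * linewidth)

# Non-radial amplitudes scaled from the radial amplitude using the quoted
# ratios S0:S1:S2:S3 = 1.00:1.35:1.02:0.47.
RATIOS = {0: 1.00, 1: 1.35, 2: 1.02, 3: 0.47}

def amplitude_of_degree(a_radial, l):
    return a_radial * RATIOS[l]
```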
We carried out the global fit using both scenarios discussed in
Section~\ref{sec.ridge.id}. The fit for Scenario~B is shown as the smooth
curve in Figure~\ref{fig.power.zoom} {and the fitted frequencies are
given in Table~\ref{tab.freq.scenarioB}. Note that the mixed mode at
446\,\mbox{$\mu$Hz}\ was not properly fitted because it lies too far from the ridge
(see the first bullet point above). To check the agreement with the
results discussed in Section~\ref{sec.clean}, we examined the differences
between the frequencies in Tables~\ref{tab.freq.matrix}
and~\ref{tab.freq.scenarioB}. We found a reduced $\chi^2$ of 0.74, which
indicates good agreement. A value less than 1 is not surprising given that
both methods were constrained to find modes close to the ridges. }
The fitted linewidths (assumed to be a linear function of frequency, as
described above) gave mode lifetimes of
$1.5 \pm 0.4$\,days at 600\,\mbox{$\mu$Hz}\ and
$0.6 \pm 0.3$\,days at 1200\,\mbox{$\mu$Hz}.
These agree with the single value of $1.29^{+0.55}_{-0.49}$\,days found
above (Section~\ref{sec.lifetimes}), and indicate that the lifetime
increases towards lower frequencies, as is the case for the Sun {and
for the F-type CoRoT targets HD~49933 \citep{BBC2009} and HD~181420
\citep{BDB2009}.}
We also carried out the global fit using Scenario~A. We found through
Bayesian model selection that Scenario~A was statistically favored over
Scenario~B {by a factor of 10:1.} This factor classifies as
``significant'' on the scale of \citeauthor{Jef61} (\citeyear{Jef61}; see
Table~1 of \citealt{Lid2009}). On the same scale, posterior odds of at
least $\sim$13:1 are required for a classification of ``strong to very
strong'', and ``decisive'' requires at least $\sim$150:1. In our Bayesian
fit to Procyon, the odds ratio in favor of Scenario~A did not exceed 13:1,
even when different sets of priors were imposed.
{In light of the strong arguments given in Section~\ref{sec.ridge.id} in
favour of Scenario~B, we do not consider the result from Bayesian model
selection to be sufficiently compelling to} cause us to reverse our
identification. {Of course, it is possible that Scenario~A is correct
and, for completeness, we show these fitted frequencies in
Table~\ref{tab.freq.scenarioA}. The fit using Scenario~A gave mode
lifetimes of
$0.9 \pm 0.2$\,days at 600\,\mbox{$\mu$Hz}\ and
$1.0 \pm 0.3$\,days at 1200\,\mbox{$\mu$Hz}.}
\section{{Preliminary comparison with} models}
A detailed comparison of the observed frequencies of Procyon with
theoretical models is beyond the scope of this paper, but we will make some
{preliminary comments on the systematic offset between the two.} It is
well-established that incorrect modeling of the surface layers of the Sun
is responsible for discrepancies between the observed and calculated
oscillation frequencies \citep{ChDDL88,DPV88,RChDN99,LRD2002}.
To address this problem for other stars, \citet{KBChD2008} proposed an
empirical correction to be applied to model frequencies that takes
advantage of the fact that the offset between {observations and models is
independent of $l$} and goes to zero with decreasing frequency. They
measured the offset for the Sun to be a power law with exponent $b=4.9$ and
applied this correction to the radial modes of other stars, finding very
good results that allowed them to estimate mean stellar densities very
accurately (better than 0.5~per cent).
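The form of this empirical correction can be sketched as follows; the amplitude $a$, reference frequency, and scaling factor $r$ are fit parameters (values used here are illustrative), while $b=4.9$ is the solar calibration quoted above.

```python
# Kjeldsen et al. (2008) near-surface correction: the offset between observed
# and scaled model frequencies of radial modes is modeled as a power law,
#   nu_obs - r * nu_model = a * (nu_obs / nu_ref)**b,
# with b = 4.9 calibrated on the Sun (b = 0 corresponds to a constant offset).
def surface_offset(nu_obs, nu_ref, a, b=4.9):
    return a * (nu_obs / nu_ref) ** b

def corrected_model_freq(nu_model, nu_obs, nu_ref, a, b=4.9, r=1.0):
    return r * nu_model + surface_offset(nu_obs, nu_ref, a, b)
```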
We have applied this method to Procyon, comparing our observed frequencies
for the radial modes with various published models to determine the scaling
factor $r$ and the offset (see \citealt{KBChD2008} for details of the
method). The results are shown in Figure~\ref{fig.near.surface}.
Interestingly, the offset between the observations and scaled models does
not go to zero with decreasing frequency. This contrasts with the G and
K-type stars investigated by \citet{KBChD2008}, namely the Sun, \mbox{$\alpha$~Cen~A}\ and
B, and \mbox{$\beta$~Hyi}.
The method of \citet{KBChD2008} assumes the correction to be applied to the
models to have the same form as in the Sun, namely a power law with an
exponent of $b=4.9$. The fit in Figure~\ref{fig.near.surface} is poor and
is not improved by modest adjustments to $b$. Instead, the results seem to
imply an offset that is constant. Setting $b=0$ and repeating the
calculations produces the results shown in Figure~\ref{fig.near.surface0},
where {we indeed see a roughly constant offset between the models and the
observations of about 20\,\mbox{$\mu$Hz}.}
As a check, we can consider the density implied for Procyon. The stellar
radius can be calculated from the interferometric radius and the parallax.
The angular diameter of $5.404 \pm 0.031$\,mas \citep[][Table~7]{ALK2005}
and the revised {\em Hipparcos} parallax of $285.93 \pm 0.88$\,mas\
\citep{vanLee2007} give a radius of $2.041 \pm 0.015 \,R_\sun$.
Procyon is in a binary system (the secondary is a white dwarf), allowing
the mass to be determined from astrometry. \citet{GWL2000} found a value
of $1.497 \pm 0.037\,M_\sun$, while \citet{G+H2006} found $1.431 \pm
0.034\,M_\sun$ {(see \citealt{GKG2008} for further discussion).}
The density obtained using the fits shown in Figure~\ref{fig.near.surface}
is in the range 0.255--0.258\,g\,cm$^{-3}$. Combining with the radius
implies a mass in the range 1.54--1.56\,$M_\sun$. The density obtained
using the fits shown in Figure~\ref{fig.near.surface0} is in the range
0.242--0.244\,g\,cm$^{-3}$, implying a mass of 1.46--1.48\,$M_\sun$. The
latter case seems to be in much better agreement with the astrometrically
determined mass, lending some support to the idea that the offset is
constant.
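These consistency checks are simple unit conversions. A sketch with approximate physical constants is given below; with the quoted angular diameter, parallax, and density it reproduces a radius of $\sim$2.03\,$R_\sun$ (small differences from the quoted 2.041\,$R_\sun$ reflect the adopted constants) and a mass of $\sim$1.46\,$M_\sun$, within the ranges discussed above.

```python
import math

# Radius from angular diameter + parallax, and mass from a mean density.
# Constants are approximate CGS values.
MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)
PC_CM = 3.0857e18
R_SUN_CM = 6.957e10
M_SUN_G = 1.989e33

def radius_rsun(theta_mas, parallax_mas):
    """Radius in solar units from angular diameter (mas) and parallax (mas)."""
    d_cm = (1000.0 / parallax_mas) * PC_CM
    return 0.5 * theta_mas * MAS_TO_RAD * d_cm / R_SUN_CM

def mass_msun(density_g_cm3, r_rsun):
    """Mass in solar units from mean density (g/cm^3) and radius (solar units)."""
    return density_g_cm3 * (4.0 / 3.0) * math.pi * (r_rsun * R_SUN_CM)**3 / M_SUN_G

r = radius_rsun(5.404, 285.93)   # ~2.03 solar radii
m = mass_msun(0.243, 2.041)      # ~1.46 solar masses
```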
We can also consider the possibility that our mode identification is wrong
and that Scenario~A is the correct one (see Sections~\ref{sec.ridge.id}
and~\ref{sec.fit}). With this reversed identification, the radial modes in
Procyon are those in Table~\ref{tab.freq.matrix} listed as having $l=1$.
Assuming these to be radial modes, the offset between them and the model
frequencies is again constant, as we would expect, but this time with a
mean value close to zero. {The implied density for Procyon is again
consistent with the observed mass and radius.}
The preceding discussion makes it clear that the correction that needs to
be applied to models of Procyon is very different from that for the Sun and
other cool stars, {regardless of whether Scenario B or A is correct.
In particular, the substantial nearly-constant offset implied by
Figure~\ref{fig.near.surface} would indicate errors in the modeling
extending well beyond the near-surface layers. We also note that in terms
of the asymptotic expression (equation~\ref{eq.asymptotic}) a constant
offset would imply an error in the calculation of $\epsilon$.}
\section{Conclusion}
We have analyzed results from a multi-site campaign on Procyon that
obtained high-precision velocity observations over more than three weeks
\citep[][Paper~I]{AKB2008PaperI}. We developed a new method for adjusting
the weights in the time series that allowed us to minimize the sidelobes in
the power spectrum that arise from diurnal gaps and so to construct an
\'echelle diagram that shows two clear ridges of power. To identify the
odd and even ridges, we summed the power across several orders. We found
structures characteristic of $l=0$ and 2 in one ridge and $l=1$ and 3 in
the other. This identification was confirmed by comparing our Procyon data
in a scaled \'echelle diagram \citep{B+K2010} with other stars for which
the ridge identification is known. We showed that the frequencies of the
ridge centroids and their large and small separations are easily measured
and are useful {diagnostics} for asteroseismology. In particular, an
oscillation in the large separation appears to indicate a glitch in the
sound-speed profile at an acoustic depth of $\sim$1000\,s.
We identify a strong narrow peak at 446\,\mbox{$\mu$Hz}, which falls slightly away
from the $l=1$ ridge, as a mixed mode. In Table~\ref{tab.freq.matrix} we
give frequencies, extracted using iterative sine-wave fitting, for 55 modes
with angular degrees $l$ of 0--3. These cover 20 radial orders and a
factor of more than 4 in frequency, which reflects the broad range of
excited modes in Procyon and the high S/N of our data, especially at low
frequencies. Intensity measurements will suffer from a much higher stellar
background at low frequencies, making it unlikely that even the best data
from the {\em Kepler Mission} will yield the wide range of frequencies
found here. This is a strong argument in favor of {continuing efforts
towards ground-based Doppler studies, such as} the SONG network (Stellar
Observations Network Group; \citealt{GChDA2008}), {which is currently
under construction, and the proposed Antarctic instrument SIAMOIS (Seismic
Interferometer to Measure Oscillations in the Interior of Stars;
\citealt{MAC2008}). }
We estimated the mean lifetime of the modes by comparing the ``peakiness''
of the power spectrum with simulations and found a value of
$1.29^{+0.55}_{-0.49}$\,days, significantly below that of the Sun. A
global fit to the power spectrum using Bayesian methods confirmed this
result and provided evidence that the lifetime increases towards lower
frequencies. {It also casts some doubt on the mode identifications.
We still favor the identification discussed above, but leave open the
possibility that this may need to be reversed.} Finally, comparing the
observed frequencies of radial modes in Procyon with published theoretical
models showed an offset that {appears to be constant with frequency,
making it very different from that seen in} the Sun and other cool stars.
Detailed comparisons of our results with theoretical models will be carried
out in future papers.
{We would be happy to make the data presented in this paper available on
request.}
\acknowledgments
This work was supported financially by
the Australian Research Council,
the Danish Natural Science Research Council,
the Swiss National Science Foundation,
NSF grant AST-9988087 (RPB) and by SUN Microsystems.
{We gratefully acknowledge support from the European Helio- and
Asteroseismology Network (HELAS), a major international collaboration
funded by the European Commission's Sixth Framework Programme.}
\clearpage
\begin{deluxetable}{rrrrr}
\tablecolumns{5}
\tablewidth{0pc}
\tablecaption{Oscillation Frequencies in Procyon (in \mbox{$\mu$Hz})
\label{tab.freq.matrix}}
\tablehead{
\colhead{Order} & \colhead{$l=0$} & \colhead{$l=1$} & \colhead{$l=2$} & \colhead{$l=3$} }
\startdata
4 &
\multicolumn{1}{c}{\nodata} &
331.3 (0.8) &
\multicolumn{1}{c}{\nodata} &
\multicolumn{1}{c}{\nodata} \\
5 &
\multicolumn{1}{c}{\nodata} &
387.7 (0.7) &
\multicolumn{1}{c}{\nodata} &
\multicolumn{1}{c}{\nodata} \\
6 &
415.5 (0.8) &
445.8 (0.3) &
411.7 (0.7) &
\multicolumn{1}{c}{\nodata} \\
7 &
466.5 (1.0) &
498.6 (0.7) &
464.5 (0.9) &
488.7 (0.9) \\
8 &
\multicolumn{1}{c}{\nodata} &
551.5 (0.7) &
\multicolumn{1}{c}{\nodata} &
544.4 (0.9) \\
9 &
576.0 (0.7) &
608.2 (0.5) &
\multicolumn{1}{c}{\nodata} &
\multicolumn{1}{c}{\nodata} \\
10 &
630.7 (0.6) &
660.6 (0.7) &
627.0 (1.1) &
653.6 (0.8) \\
11 &
685.6 (0.7) &
712.1 (0.5) &
681.9 (0.7) &
\multicolumn{1}{c}{\nodata} \\
12 &
739.2 (0.7) &
766.5 (0.5) &
736.2 (0.5) &
\multicolumn{1}{c}{\nodata} \\
13 &
793.7 (0.9) &
817.2 (0.6) &
792.3 (0.9) &
\multicolumn{1}{c}{\nodata} \\
14 &
849.1 (0.7) &
873.5 (0.6) &
845.4 (0.6) &
869.5 (0.6) \\
15 &
901.9 (0.8) &
929.2 (0.7) &
\multicolumn{1}{c}{\nodata} &
926.6 (0.6) \\
16 &
957.8 (0.6) &
985.3 (0.7) &
956.4 (0.5) &
980.4 (0.9) \\
17 &
1015.8 (0.6) &
1040.0 (0.7) &
\multicolumn{1}{c}{\nodata} &
1034.5 (0.7) \\
18 &
1073.9 (0.7) &
1096.5 (0.7) &
1068.5 (0.7) &
\multicolumn{1}{c}{\nodata} \\
19 &
1126.7 (0.5) &
1154.6 (0.9) &
1124.3 (0.9) &
\multicolumn{1}{c}{\nodata} \\
20 &
1182.0 (0.7) &
1208.5 (0.6) &
1179.9 (1.0) &
\multicolumn{1}{c}{\nodata} \\
21 &
1238.3 (0.9) &
1264.6 (1.0) &
1237.0 (0.8) &
\multicolumn{1}{c}{\nodata} \\
22 &
1295.2 (1.0) &
\multicolumn{1}{c}{\nodata} &
1292.8 (1.0) &
\multicolumn{1}{c}{\nodata} \\
23 &
1352.6 (1.1) &
1375.7 (1.0) &
1348.2 (1.0) &
\multicolumn{1}{c}{\nodata} \\
\enddata
\end{deluxetable}
\begin{deluxetable}{rr}
\tablecolumns{2}
\tablewidth{0pc}
\tablecaption{Unidentified Peaks with $S/N\ge3.5$ \label{tab.freq.list.other}}
\tablehead{
\colhead{$\nu$} & \colhead{$S/N$} \\
\colhead{(\mbox{$\mu$Hz})} & \colhead{} }
\startdata
407.6 (0.8) & 3.5 \\
512.8 (0.8) & 3.6 \\
622.8 (0.6) & 4.3 \\
679.1 (0.7) & 4.0 \\
723.5 (0.6) & 4.7 \\
770.5 (0.7) & 4.1 \\
878.5 (0.6) & 4.4 \\
890.8 (0.7) & 3.6 \\
935.6 (0.7) & 3.9 \\
1057.2 (0.7) & 3.7 \\
1384.3 (0.7) & 3.6 \\
\enddata
\end{deluxetable}
\begin{deluxetable}{rrrr}
\tablecolumns{4}
\tablewidth{0pc}
\tablecaption{Frequencies from global fit using Scenario B (in \mbox{$\mu$Hz}, with
$-$/$+$ uncertainties)
\label{tab.freq.scenarioB}}
\tablehead{
\colhead{Order} & \colhead{$l=0$} & \colhead{$l=1$} & \colhead{$l=2$} }
\startdata
5 &
363.6 (0.8/0.9) &
387.5 (0.6/0.6) &
358.5 (1.3/1.2) \\
6 &
415.3 (3.3/1.0) &
\multicolumn{1}{c}{\nodata} &
408.1 (1.0/3.7) \\
7 &
469.7 (1.6/2.1) &
498.8 (0.7/0.8) &
465.3 (1.1/1.3) \\
8 &
522.3 (1.4/1.4) &
551.6 (0.8/0.7) &
519.0 (1.5/1.6) \\
9 &
577.0 (1.6/2.5) &
607.6 (0.6/0.7) &
573.9 (2.2/2.8) \\
10 &
631.3 (0.8/0.8) &
660.3 (1.0/1.3) &
627.4 (2.1/2.8) \\
11 &
685.6 (1.2/1.6) &
714.7 (1.4/1.2) &
681.2 (2.3/1.9) \\
12 &
740.1 (1.6/1.7) &
768.6 (0.9/1.0) &
737.0 (1.5/1.7) \\
13 &
793.2 (1.3/1.7) &
820.0 (1.7/1.2) &
790.9 (2.0/1.9) \\
14 &
847.3 (1.2/1.4) &
872.7 (1.1/0.9) &
844.7 (1.7/1.5) \\
15 &
901.0 (1.8/1.7) &
927.5 (0.8/0.8) &
898.6 (2.1/2.1) \\
16 &
958.7 (1.4/1.1) &
983.9 (1.0/1.3) &
957.2 (1.0/1.3) \\
17 &
1015.9 (1.5/1.8) &
1039.5 (1.6/1.7) &
1014.0 (1.8/2.4) \\
18 &
1073.2 (1.5/2.2) &
1096.6 (1.1/1.0) &
1070.3 (2.2/2.3) \\
19 &
1127.2 (1.0/1.3) &
1151.8 (1.4/1.4) &
1125.9 (1.3/1.4) \\
20 &
1182.3 (1.5/1.4) &
1207.9 (1.4/1.1) &
1180.5 (1.6/1.6) \\
21 &
1236.9 (1.7/1.6) &
1267.4 (1.7/1.5) &
1235.5 (2.0/1.7) \\
\enddata
\end{deluxetable}
\begin{deluxetable}{rrrr}
\tablecolumns{4}
\tablewidth{0pc}
\tablecaption{Frequencies from global fit using Scenario A (in \mbox{$\mu$Hz}, with
$-$/$+$ uncertainties)
\label{tab.freq.scenarioA}}
\tablehead{
\colhead{Order} & \colhead{$l=0$} & \colhead{$l=1$} & \colhead{$l=2$} }
\startdata
5 &
387.7 (1.9/1.8) &
361.9 (1.8/2.0) &
385.1 (1.9/2.6) \\
6 &
\multicolumn{1}{c}{\nodata} &
412.5 (1.7/2.3) &
439.3 (2.6/2.6) \\
7 &
498.7 (1.1/1.6) &
467.6 (1.4/1.3) &
493.2 (2.6/2.0) \\
8 &
552.2 (1.5/1.5) &
520.7 (1.2/1.3) &
549.3 (2.2/2.0) \\
9 &
607.8 (1.0/0.9) &
576.2 (1.1/1.4) &
605.4 (2.2/2.3) \\
10 &
661.3 (1.3/1.5) &
631.1 (0.7/0.8) &
657.1 (1.7/1.6) \\
11 &
716.8 (1.3/1.7) &
684.7 (1.2/1.2) &
712.6 (1.2/1.2) \\
12 &
769.9 (1.2/1.3) &
739.1 (1.1/1.2) &
766.6 (1.4/1.4) \\
13 &
822.7 (1.9/2.7) &
792.9 (1.3/1.3) &
817.8 (1.3/1.4) \\
14 &
874.5 (1.3/1.3) &
846.4 (0.9/0.8) &
869.9 (1.6/1.3) \\
15 &
928.8 (1.2/1.2) &
900.0 (1.3/1.4) &
925.9 (1.3/1.1) \\
16 &
985.1 (1.0/1.1) &
958.2 (0.8/0.8) &
980.9 (1.9/1.6) \\
17 &
1043.4 (2.8/2.8) &
1015.7 (1.0/0.9) &
1035.2 (1.0/0.8) \\
18 &
1097.6 (1.5/0.9) &
1072.5 (1.1/1.2) &
1091.8 (3.7/4.2) \\
19 &
1153.7 (0.9/0.8) &
1126.9 (0.5/0.6) &
1146.8 (1.3/1.0) \\
20 &
1209.1 (0.8/0.9) &
1181.8 (1.0/0.9) &
1204.8 (1.3/1.4) \\
21 &
1269.2 (1.0/1.1) &
1237.1 (0.9/0.9) &
1264.8 (1.5/1.5) \\
\enddata
\end{deluxetable}
\begin{figure*}
\epsscale{1.0}
\plotone{fig01.eps}
\caption[]{\label{fig.weights} Weights for time series of velocity
observations of Procyon, optimized to minimize: ({\em a})~the noise level
and ({\em b})~the height of the sidelobes. }
\end{figure*}
\begin{figure*}
\epsscale{1.0}
\plotone{fig02a.eps}
\plotone{fig02bc.eps}
\caption[]{\label{fig.power} Power spectrum of oscillations in Procyon:
({\em a})~using the noise-optimized weights;
({\em b})~using the sidelobe-optimized weights;
({\em c})~using the sidelobe-optimized weights and smoothing
by convolution with a Gaussian with FWHM 2\,\mbox{$\mu$Hz}.
}
\end{figure*}
\clearpage
\begin{figure}
\epsscale{1.2}
\plotone{fig03.eps}
\caption[]{\label{fig.window} Spectral window for the Procyon observations
using ({\em a})~noise-optimized weights and ({\em b})~sidelobe-optimized
weights. }
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig04.eps}
\caption[]{\label{fig.echelle.image.WIN56} Power spectrum of Procyon in
echelle format using a large separation of 56\,\mbox{$\mu$Hz}, based on the
sidelobe-optimized weights.
Two ridges are clearly visible. The upper parts are vertical
but the lower parts are tilted, indicating a change in the large separation
as a function of frequency. The orders are numbered sequentially on the
right-hand side.}
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig05.eps}
\caption[]{\label{fig.echelle.image.SNR56} Same as
Fig.~\ref{fig.echelle.image.WIN56}, but for the noise-optimized weights.
The sidelobes from daily aliasing mean that the ridges can no longer be
clearly distinguished. }
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig06.eps}
\caption[]{\label{fig.collapse3}The power spectrum of Procyon collapsed
along several orders. Note that the power spectrum was first smoothed
slightly by convolving with a Gaussian with FWHM 0.5\,\mbox{$\mu$Hz}.
The dotted lines are separated by exactly $\mbox{$\Delta \nu$}/2$, to guide the
eye in assessing the 0--1 small separation.}
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig07.eps}
\caption[]{\label{fig.echelle.corot} \'Echelle diagram for Procyon smoothed
to 2\,\mbox{$\mu$Hz}\ (greyscale) overlaid with scaled frequencies for two
stars observed by CoRoT. The filled symbols are oscillation
frequencies for HD~49385 reported by \citet{DBM2010}, after
multiplying by 0.993. Open symbols are oscillation frequencies for
HD~49933 from the revised identification by
\citet[][Scenario~B]{BBC2009} after multiplying by 0.6565. Symbol
shapes indicate mode degree: $l=0$ (circles), $l=1$ (triangles), and
$l=2$ (squares).}
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig08.eps}
\caption[]{\label{fig.idl9} Order-averaged power spectrum (OAPS), where
smoothing was done with a FWHM of 4.0 orders (see text). The OAPS is
plotted for three values of the large separation (54, 55 and 56\,\mbox{$\mu$Hz})
and we see that the positions of the maxima are not very sensitive to the
value of \mbox{$\Delta \nu$}.}
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig09.eps}
\caption[]{\label{fig.echelle.ridges56} Centroids of the two ridges, as
measured from the comb response. The grayscale shows the
sidelobe-optimized power spectrum from which the peaks were calculated.}
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig10.eps}
\caption[]{\label{fig.collapse.ridges}The power spectrum of Procyon
collapsed along the ridges, over the full range of oscillations (18
orders). {The upper panel shows the left-hand ridge, which we
identify with modes having even degree, and the lower panel shows the
right-hand ridge (odd degree).} Note that the power spectrum was first
smoothed slightly by convolving with a Gaussian with FWHM 0.6\,\mbox{$\mu$Hz}. }
\end{figure}
\begin{figure}
\epsscale{0.6}
\plotone{fig11a.eps}
\plotone{fig11b.eps}
\plotone{fig11c.eps}
\caption[]{\label{fig.seps.ridges} Symbols show the frequency separations
in Procyon as a function of frequency, as measured from the ridge
centroids: (a) large frequency separation, (b) second differences, and
(c) small frequency separation. The dotted lines in panel~{\it a} show
the variation in \mbox{$\Delta \nu$}\ (with $\pm1\sigma$ range) calculated from the
autocorrelation of the time series -- see the text.}
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig12.eps}
\caption[]{\label{fig.echelle.clean.sidelobe} Peaks extracted from
sidelobe-optimized power spectrum using iterative sine-wave fitting.
Symbol size is proportional to amplitude (after the background noise has
been subtracted). The grayscale shows the sidelobe-optimized power
spectrum on which the fitting was performed, to guide the eye.}
\end{figure}
\begin{figure*}
\epsscale{1.0}
\plotone{fig13.eps}
\caption[]{\label{fig.power.zoom}The power spectrum of Procyon at full
resolution, {with the orders in each column arranged from top to
bottom, for easy comparison with the \'echelle diagrams.} Vertical dashed
lines show the mode frequencies listed in Table~\ref{tab.freq.matrix} and
dotted lines show the peaks that have not been identified, as listed in
Table~\ref{tab.freq.list.other}. The smooth curve shows the global fit
to the power spectrum for Scenario~B (see Section~\ref{sec.fit}). }
\end{figure*}
\begin{figure}
\epsscale{1.2}
\plotone{fig14.eps}
\caption[]{\label{fig.echelle.cleanid} The power spectrum of Procyon
overlaid with mode frequencies listed in Table~\ref{tab.freq.matrix}.
Symbols indicate angular degree (squares: $l=0$; diamonds: $l=1$;
crosses: $l=2$; pluses: $l=3$). Asterisks show the peaks that have not
been identified, as listed in Table~\ref{tab.freq.list.other}. }
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig15.eps}
\caption[]{\label{fig.seps.freq} Small frequency separations in Procyon, as
measured from the mode frequencies listed in
Table~\ref{tab.freq.matrix}. }
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig16.eps}
\caption[]{\label{fig.near.surface} The difference between observed
frequencies of radial modes in Procyon and those of scaled models. The
symbols indicate different models, as follows:
squares from \citet[][Table~2]{CDG99},
crosses from \citet[][]{DiM+ChD2001},
asterisks from \citet[][Table~4]{KTM2004}, and
triangles from \citet[][model M1a]{ECB2005}.
In each case, the dotted curve shows the correction calculated using
equation~(4) of \citet{KBChD2008}.}
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{fig17.eps}
\caption[]{\label{fig.near.surface0} Same as
Figure~\ref{fig.near.surface}, but with a constant near-surface
correction ($b=0$). }
\end{figure}
\clearpage
\section{Introduction}
Turbulence of a system is thought to be a condition in which the space and time evolution of the dynamical variables is chaotic, but to which a statistical distribution functional is nonetheless applicable. The time evolution of an initially given probability distribution functional for the dynamical variables is generated by the deterministic evolution equations that govern the system; this is called deterministic chaos.
A fluid environment is a dynamical system, with energy input mechanisms at long distance scales and energy dissipation mechanisms, due to viscosity, at small distance scales. Hence, the fluid has an attractor for its dynamical variables, called the strange attractor in the turbulent regime. These considerations apply to the standard system of interacting, or self-interacting, classical fields that are governed by evolution equations at most first order in the time derivatives, that possess dissipation mechanisms, and that are subject to stationary boundary conditions. It is proposed here that the fluid probability density functional also has an attractor for its time evolution, and an approach to generating this time evolution is presented. The same mechanism that causes the dynamical variables to have an attractor in phase space, namely the tendency for energy input rates and energy output rates to equilibrate, also causes an arbitrary initial statistics to evolve toward an attracting statistics, which is stationary in time.
It is the stationary statistics that allow the Kolmogorov scaling ideas to have applicability. The evolution of the fluid's statistics can be set up as part of a space-time path integral. Ensemble averages of any dynamical variable can be formulated in terms of this path integral. Fluid space-time sampling techniques naturally suggest a useful way, using a relatively arbitrary initial statistics functional, to calculate averages.
\section{Description of the Mathematical Approach to Turbulence}
Let us set up the evolution equations for incompressible fluid dynamics. The extension of the following to compressible fluids will pose no unusual difficulty.
We have,$$\rho \frac{d\vec{v}}{dt}=\vec{f}+\eta \nabla ^{2}\vec{v},$$where $\vec{f} $ is an external force density, such as due to a scalar pressure field, and $\eta $ is the coefficient of viscosity. $\vec{v} $ is the fluid velocity field. We also have from the conservation of mass, $$ \nabla \cdot (\rho \vec{v} ) + \frac{\partial \rho }{\partial t}=0. $$ Here, $\rho $ is the mass density of the fluid. If this mass density is a constant, then the velocity field is divergenceless. Also, $$\frac{d\vec{v}}{dt}=\frac{\partial \vec{v}}{\partial t} + \vec{v}\cdot \nabla \vec{v}.$$ So we have the fluid dynamic system, \begin{eqnarray} \frac{\partial \vec{v}}{\partial t} & = & -\frac{\nabla P}{\rho } - \vec{v} \cdot \nabla \vec{v} + \nu \nabla ^{2} \vec{v} \label{eq:first}\\ \nabla \cdot \vec{v} & = & 0,\label{eq:second} \end{eqnarray} where $P$ is the pressure, and $\nu \equiv \frac{\eta }{\rho }.$
We drop the external force density in what follows. This is not an essential step. What are needed are a set of interacting fields, together with stationary boundary conditions to allow a deterministic time evolution of the set. We also associate with the spatial velocity field a probability density functional, $\rho [v,t].$ The fluid statistics time-evolves according to deterministic chaos ~\cite{Ro:Rosen} ~\cite{Th:Thacker},$$\rho [v_{f},t_{f}]=\int d[v_{0} ]K[v_{f},t_{f};v_{0},t_{0}]\rho [v_{0},t_{0}],$$ where the kernel is the delta functional, $$K[v_{f},t_{f};v_{0},t_{0}]=\delta [v_{f}-f[t_{f};v_{0},t_{0}]].$$ That is, the number, $\rho $, associated with the spatial velocity field $v_{0}$ at time $t_{0}$, will be associated with $v_{f}$ at time $t_{f}$, where $v_{f}$ is the velocity field $v_{0}$ deterministically evolves into from time $t_{0}.$ Given a functional of the spatial velocity fields, $A[v]$, its ensemble average, at time $t_{f}$ is, $$<A[v]>=\int d[v_{f}]A[v_{f}]K[v_{f},t_{f};v_{0},t_{0}]\rho [v_{0},t_{0}]d[v_{0}].$$
We want to propagate the fluid's statistics according to deterministic chaos, even though the detailed fluid orbits are chaotic. Let, \begin{eqnarray*} <A[v]> & = &\int A[v_{f}]\rho [v_{f},t_{f}]d[v_{f}] \\ & = & \int A[v_{f}]K[v_{f},v_{0}]\rho [v_{0},t_{0}]d[v_{f}]d[v_{0}]\\ & = & \int A[f[v_{0}]]\rho [v_{0},t_{0}]d[v_{0}],\end{eqnarray*} where, \begin{eqnarray*} K[v_{f},v_{0}] & = & \delta [v_{f}- f[v_{0}]] \\ & = & \int \delta [v_{f} - f_{1}[v_{1}]]\delta [v_{1} - f_{1}[v_{0}]]d[v_{1}] \\ & = & \int \delta [v_{f} - f_{2}[v_{1}]]\delta [v_{1} - f_{2}[v_{2}]]\delta [v_{2} - f_{2}[v_{0}]]d[v_{1}]d[v_{2}] \\ & = & \int \delta [v_{f} - f_{3}[v_{1}]]\delta [v_{1} - f_{3}[v_{2}]]\delta [v_{2} - f_{3}[v_{3}]]\delta [v_{3} - f_{3}[v_{0}]]d[v_{1}]d[v_{2}]d[v_{3}] \\ & = & \cdots ,\end{eqnarray*} where the velocity fields, $v_{1}$, $v_{2}$, $v_{3}$, ... occur in chronological order, $v_{1}$ being closest to the time $t_{f}.$ Eventually, we have an $f_{M},$ where $M$ is large, such that $v_{M}=f_{M}[v_{0}]$ is infinitesimally different from $v_{0}.$
Hence, $$<A[v]>=\int d[v]A[v_{f}]\delta [v - f_{M}[v]]\rho [v_{0},t_{0}],$$ where the functional integration measure is, $$d[v]=d[v_{f}]d[v_{1}]\cdots d[v_{M}]d[v_{0}],$$ and $$<A[v]>=\int A[f_{M}[\cdots [f_{M}[f_{M}[v_{0}]]]\cdots ]]\rho [v_{0},t_{0}]d[v_{0}].$$
The exact rule, $f_{M}[v],$ requires a solution of the fluid dynamic equations, incorporating the boundary conditions. The exact rule, $f_{M}[v],$ is difficult to find. Let us use an approximate rule, $F_{M}[v].$ Then, we may say, with motivation to follow, \begin{eqnarray} <A[v]> &= & \int d[v]A[v_{f}]\delta [v-F_{M}[v]]\lambda [\nabla \cdot v]\lambda [v - v_{B}]\rho [v_{0},t_{0}]. \end{eqnarray} The functional integration is over all space-time velocity fields within the spatial system, between times $t_{0}$ and $t_{f}.$ $\delta [v - F_{M}[v]]$ is a space-time delta functional. $F_{M}[v]$ generates $v$ from a $v$ an infinitesimal instant earlier. The $\lambda $ functionals are evaluated at a particular instant. The needed properties of the $\lambda $ functionals are $\lambda [0]=1,$ and $\lambda [g \neq 0]=0.$ $\lambda [g] $ could be, $$\lim_{\epsilon _{1} \rightarrow 0^{+}} e^{-g^{2}/\epsilon_{1}}.$$ We have, then, \begin{eqnarray} <A[v]> & = & \int d[v_{0}]A[F_{M}[F_{M}[\cdots [F_{M}[v_{0}]]\cdots ]]]\rho [v_{0},t_{0}] \nonumber \\ & & \lambda [F_{M}[\cdots [F_{M}[v_{0}]]\cdots] - v_{B}]\cdots \lambda [v_{0} - v_{B}] \nonumber \\ & & \lambda [\nabla \cdot F_{M}[ \cdots [F_{M}[v_{0}]]\cdots ]]\cdots \lambda [\nabla \cdot F_{M}[v_{0}]] \lambda [\nabla \cdot v_{0}]. ~\label{eq:average} \end{eqnarray} The right-hand side of (\ref{eq:average}) equals $<A[v]>$, because $\rho [v_{0},t_{0}]$ will be non-zero only for spatial fields $v_{0}$ that make all the $\lambda$'s equal one. This means that we only utilize divergenceless spatial velocity fields satisfying, also, the spatial boundary conditions.
The spatial velocity fields have an attractor, determined by the stationary boundary conditions on the fluid \cite{Ru:Ruelle}. When the boundary conditions allow steady laminar flow to become established, the attractor consists of a single spatial velocity field. When the Reynolds number becomes large enough, bifurcations set in, or the onset of instability occurs, and the attractor begins to consist of more than one spatial velocity field. In the turbulent regime, the attractor consists of many spatial velocity fields, and the fluid accesses these fields according to a probability distribution \cite{Fe:Feigenbaum} \cite{Ka:Kadanoff} \cite{Ka:Chaos}.
Given a functional of the spatial velocity fields, $A[v]$, and the fluid dynamic system of equations, (\ref{eq:first}), (\ref{eq:second}), we will say that its ensemble average when the system has reached its attractor is, \begin{eqnarray} <A[v]> & = & \lim_{t_{f}-t_{0}\rightarrow \infty} \int d[v]A[v_{f}]\delta [v - F_{M}[v]] \lambda [\nabla \cdot v] \lambda [v - v_{B}] \nonumber \\ & & \cdot \rho [v_{0},t_{0}]\label{eq:path}. \end{eqnarray} The delta functional condition, $\delta [v - F_{M}[v]]$ implements equation (\ref{eq:first}). $F_{M}[v]$ is an approximate rule for carrying $v$ from an earlier instant to a later instant. In the first approximation, $$F_{M}[v]=v - \Delta t (\vec{v} \cdot \nabla v -\nu \nabla ^{2} v),$$ where $v$ is evaluated at an instant $\Delta t$ earlier. $F_{M}[v]$ can be expressed to all higher powers of $\Delta t$, with only higher order spatial derivatives being required. This is due to the fact that the hydrodynamic evolution equations are, at most, first-order in the time derivatives. $\lambda [\nabla \cdot v]$ implements a zero divergence condition on the spatial velocity fields, and $\lambda [v - v_{B}]$ requires the spatial velocity fields to have values $v_{B}$ on the spatial boundaries.
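As an illustration of the first-approximation rule above, the following sketch (a hypothetical one-dimensional periodic analogue, not part of the formalism in the text) applies $F_M[v] = v - \Delta t\,(v\,\partial_x v - \nu\,\partial_x^2 v)$ on a lattice with centered finite differences:

```python
import math

def fm_step(v, dt, dx, nu):
    """One step of F_M[v] = v - dt*(v dv/dx - nu d2v/dx2) on a
    periodic 1D lattice, with centered finite differences."""
    n = len(v)
    out = []
    for i in range(n):
        vp, vm = v[(i + 1) % n], v[(i - 1) % n]
        advec = v[i] * (vp - vm) / (2.0 * dx)       # v dv/dx
        diff = nu * (vp - 2.0 * v[i] + vm) / dx**2  # nu d2v/dx2
        out.append(v[i] - dt * (advec - diff))
    return out

npts, nu, dt = 32, 0.1, 1e-3          # illustrative lattice parameters
dx = 2.0 * math.pi / npts
v0 = [math.sin(i * dx) for i in range(npts)]
v1 = fm_step(v0, dt, dx, nu)
```

A constant field is an exact fixed point of the rule, and for a smooth field the viscous term makes the lattice "energy" $\sum_i v_i^2$ decrease, a discrete counterpart of the dissipation that drives the flow toward its attractor.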
Let us consider the path integral (\ref{eq:path}) to be on a space-time lattice. We could use, $$\delta (x)= \lim_{\epsilon \rightarrow 0^{+}} \frac{e^{-x^{2}/\epsilon }}{\sqrt{\pi \epsilon }}.$$ We have for the average of $A[v]$ in the steady-state (attractor), letting $\epsilon_{1} (\Delta x)^{2}=\epsilon $, where $\epsilon _{1}$ is in the $\lambda $-functional for the zero divergence condition, and $\Delta x$ being the Cartesian spatial distance step size, \begin{eqnarray} <A[v]> & = & \lim_{\epsilon \rightarrow 0^{+}} \lim_{t_{f} - t_{0} \rightarrow \infty } \int d[v]e^{-H[v]/\epsilon } A[v_{f}]\rho [v_{0}].\label{eq:path2}\end{eqnarray} Also, $$\rho [v_{0}] \equiv \rho [v_{0},t_{0}],$$ and the functional integration measure, $d[v],$ is, $$d[v]=(\frac{1}{\sqrt{\pi \epsilon}})^{3N(T-1)} \prod_{ijkl} dv_{ijkl}.$$ $H[v]$ is a functional of the lattice space-time velocity field. Also, neglecting boundary effects, \begin{eqnarray*} H[v] & = & \sum ((v_{l}-v_{l-1}-g[v_{l-1}]\Delta t)^{2} + (v_{x,ijkl} - v_{x,i-1,jkl} + \cdots )^{2})\\ & + & \sum^{'} (v_{ijkl} - v_{B,ijkl})^{2}, \end{eqnarray*} where $$ g[v_{l}]=-\vec{v}_{l} \cdot \nabla v_{l} + \nu \nabla ^{2} v_{l},$$ or, \begin{eqnarray*} g[v_{l}] & = & -v_{x,ijkl}\frac{(v_{ijkl} - v_{i-1,jkl})}{\Delta x}+ \cdots +\nu \frac{(v_{ijkl}-2v_{i-1,jkl}+v_{i-2,jkl})}{(\Delta x)^{2}}+\cdots. \end{eqnarray*} $N$ is the number of spatial lattice points in the system, and $T$ is the number of time slices. We have $\sum $ as a sum over all space-time lattice points in the system, and $\sum^{'}$ as a sum over all space-time lattice points on the spatial boundaries. Also, $ijk$ are spatial lattice indices, and $l$ is the index in the time direction.
This discretization technique is expected to get better as one increases the lattice fineness and makes use of higher powers of the time step and higher order finite difference approximations of the spatial derivatives \cite{Wa:Warsi}. A good approximation to the attracting statistics as a starting point will shorten the evolution time required for averages occurring in the steady state. Gaussian statistics, however, could be a good generic starting point. The path integral (\ref{eq:path2}) can be evaluated with Monte Carlo techniques utilizing importance sampling. A calculation of the stationary spatial velocity field that would exist, for the given boundary conditions, if that field were stable, is expected to be a good starting point from which to begin a sampling of the velocity field space-time configuration space. The average values, for the starting Gaussian statistics, can be taken as the values of the velocities of the calculated stationary spatial velocity field.
We have, $$<A[v]>=J\sum_{i=1}^{n}\frac{A[v_{f,i}]\rho [v_{0,i}]}{n},$$ where, $$J=\int e^{-H[v]/\epsilon }d[v],$$ and the space-time configurations are distributed according to the weighting $e^{-H[v]/\epsilon }.$ $n$ is the number of space-time configurations in the sample. $A[v_{f,i}]$ is the observable $A[v]$ evaluated at the final time slice of the $i^{th}$ space-time configuration. The value $\rho [v_{0,i}]$ is attached to the initial time slice of the $i^{th}$ configuration. We need, $$1=J\sum_{i=1}^{n} \frac{\rho [v_{0,i}]}{n}.$$ So, we must do the integral, $J,$ and constrain $\sum \rho [v_{0,i}]$ to equal $n/J,$ possibly by varying the variances in the Gaussians of $\rho [v,t_{0}].$
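The sampling scheme can be illustrated on a toy problem. In the sketch below (purely illustrative: the linear damping law $dv/dt = -\gamma v$ stands in for the fluid dynamics, and all parameter values are arbitrary), space-time paths with a fixed initial value are sampled by a Metropolis walk with weight $e^{-H[v]/\epsilon}$. For small $\epsilon$ the sampled paths concentrate on the deterministic orbit, so the average of the observable $A[v]=v_f$ evaluated on the final time slice approaches its deterministic value.

```python
import math, random

random.seed(1)
gamma, dt, nsteps, eps, v0 = 1.0, 0.1, 10, 1e-4, 1.0

def action(path):
    """Discretized H[v]: squared deviations from the deterministic
    update v_l = v_{l-1} - gamma*v_{l-1}*dt."""
    c = 1.0 - gamma * dt
    return sum((path[l] - c * path[l - 1]) ** 2 for l in range(1, len(path)))

# start from the deterministic orbit (the suggested starting configuration)
path = [v0 * (1.0 - gamma * dt) ** l for l in range(nsteps + 1)]
h = action(path)
samples = []
for it in range(20000):
    l = random.randrange(1, nsteps + 1)    # v_0 is held fixed
    old = path[l]
    path[l] += random.gauss(0.0, 0.005)
    h_new = action(path)
    if h_new <= h or random.random() < math.exp(-(h_new - h) / eps):
        h = h_new                           # accept the move
    else:
        path[l] = old                       # reject and restore
    if it >= 5000:                          # discard burn-in
        samples.append(path[-1])

mean_vf = sum(samples) / len(samples)       # sampled average of A[v] = v_f
det_vf = v0 * (1.0 - gamma * dt) ** nsteps  # deterministic endpoint
```

Since $H$ is minimized (and equal to zero) on the deterministic path, the equilibrium mean of the final slice coincides with the deterministic endpoint; only the statistical scatter depends on $\epsilon$.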
\section{Summary}
We have said that the time evolution of the statistics also has an attractor for the following reasons: (1) It is a way to get a solution to the problem of arriving at the steady state turbulence statistics. One knows that the steady state statistics is stationary with respect to its time-evolver. Postulating that this statistics is the attractor of its time-evolver means one does not have to have the statistics to get the statistics, thereby offering an approach to the closure problem for the determination of correlations.
(2) The statistical physics approach has been successful in equilibrium thermodynamics, where the derivation of the microcanonical ensemble can be taken as the indication that equilibrium is the attractor for the dynamical system when there is a Galilean frame for which the boundary conditions allow no energy input into the system. In the attractor, the mean energy input is equilibrated with the mean energy output, because in the attractor dissipative losses have effectively shut off, and the system becomes effectively conservative. That is, the system becomes describable by a time-independent Hamiltonian. The stationarity of the statistics requires the vanishing of the Poisson bracket of the statistics with this Hamiltonian, resulting in the statistics of equal a priori probability.
(3) The dynamical system, of which a fluid is an example, has an attractor \cite{Ru:Ruelle}. The dynamics of the statistical approach should mirror the dynamics of the actual dynamical system.
(4) The statistics of the dynamical system, prior to reaching the attractor, has no reason to be unique. The statistics of the attractor is expected to be unique, in which the geometry of the system, the stationary boundary conditions, and viscosities, all of which determine the Reynolds number, play a crucial role in determining the attractor.
(5) When the stationary statistics of the fluid is reached, the equilibration of energy input and energy output rates has set in \cite{Fr:Frisch}.
\section{Conclusions}
In the discretized version of the path integral, which uses an approximate rule for the dynamics to generate the ensemble average of an observable, one should find increasing insensitivity to the starting statistics and a reproduction of the stationary statistical effects to within any desired level of approximation. One is trying to use the turbulent transience to get at steady-state turbulence effects. These stationary statistical effects become the backdrop for Kolmogorov's ideas of self-similarity and the resulting scaling exponents.
The potential usefulness of lower dimensional models should not be underestimated \cite{Lo:Lorenz}. However, it is hoped for, here, that a transition to a detailed use of the Navier-Stokes equations can be brought about.
\section{Acknowledgments}
I wish to acknowledge the Department of Chemistry and Physics of the Arkansas State University, and the Department of Physics and Astrophysics at the University of North Dakota for the environments necessary for this work. I wish to thank Professor Leo P. Kadanoff, Professor Joseph A. Johnson, and Professor William A. Schwalm for informative and useful discussions.
\section{Introduction}
\label{s:intro}
Pulsar Timing Arrays (PTAs), such as the Parkes PTA (PPTA) ~\cite{man08}, the European PTA (EPTA)~\cite{jan08}, Nanograv~\cite{NANOGrav}, the International Pulsar Timing Array (IPTA) project~\cite{HobbsEtAl:2009}, and in the future the Square Kilometre Array (SKA)~\cite{laz09} provide a unique means to study the population of massive black hole (MBH) binary systems with masses above $\sim 10^7\,M_\odot$ by monitoring stable radio pulsars: in fact, gravitational waves (GWs) generated by MBH binaries (MBHBs) affect the propagation of electromagnetic signals and leave a distinct signature on the time of arrival of the radio pulses~\cite{EstabrookWahlquist:1975,Sazhin:1978,Detweiler:1979,HellingsDowns:1983}. MBH formation and evolution scenarios~\cite{vhm03, kz06, mal07, yoo07} predict the existence of a large number of MBHBs. Whereas the high redshift, low(er) mass systems will be targeted by the planned Laser Interferometer Space Antenna ({\it LISA}~\cite{bender98})~\cite{enelt,uaiti,ses04,ses05,ses07}, massive and lower redshift ($z\lower.5ex\hbox{\ltsima} 2$) binaries radiating in the (gravitational) frequency range $\sim 10^{-9}\,\mathrm{Hz} - 10^{-6} \,\mathrm{Hz}$ will be directly accessible to PTAs. These systems imprint a typical signature on the time-of-arrival of radio-pulses at a level of $\approx 1-100$ ns~\cite{papII}, which is comparable with the timing stability of several pulsars~\cite{Hobbs:2009}, with more expected to be discovered and monitored in the future. PTAs therefore provide a direct observational window onto the MBH binary population, and can contribute to address a number of astrophysical open issues, such as the shape of the bright end of the MBH mass function, the nature of the MBH-bulge relation at high masses, and the dynamical evolution at sub-parsec scales of the most massive binaries in the Universe (particularly relevant to the so-called ``final parsec problem''~\cite{mm03}).
Gravitational radiation from the cosmic population of MBHBs produces two classes of signals in PTA data: (i) a stochastic GW background generated by the incoherent superposition of radiation from the whole MBHB population~\cite{rr95, phi01, jaffe, jen05, jen06, papI} and (ii) individually resolvable, deterministic signals produced by single sources that are sufficiently massive and/or close so that the gravitational signal stands above the root-mean-square (rms) level of the background ~\cite{papII}. In~\cite{papII} (SVV, hereafter) we explored a comprehensive range of MBH population models and found that, assuming a simple order-of-magnitude criterion to estimate whether sources are resolvable above the background level $\approx 1$-to-10 individual MBHBs could be observed by future PTAs surveys. The observation of GWs from individual systems would open a new avenue for a direct census of the properties of MBHBs, offering invaluable new information about galaxy formation scenarios. The observation of systems at this stage along their merger path would also provide key insights into the understanding of the interaction between MBHBs and the stellar/gaseous environment~\cite{KocsisSesana}, and how these interactions affect the black hole-bulge correlations during the merger process. If an electro-magnetic counterpart of a MBHB identified with PTAs was to be found, such a system could offer a unique laboratory for both accretion physics (on small scales) and the interplay between black holes and their host galaxies (on large scales).
The prospects of achieving these scientific goals raise the question of what astrophysical information could be extracted from PTA data and the need to quantify the typical statistical errors that will affect the measurements, their dependence on the total number and spatial distribution of pulsars in the array (which affects the surveys' observational strategies), and the consequences for multi-band observations. In this paper we estimate the statistical errors that affect the measurements of the source parameters focusing on MBHBs with no spins, in circular orbits, that are sufficiently far from coalescence so that gravitational radiation can be approximated as producing a signal with negligible frequency drift during the course of the observation time, $T \approx 10$ yr (``monochromatic'' signal). This is the class of signals that in SVV we estimated to produce the bulk of the observational sample. The extension to eccentric binaries and systems with observable frequency derivative is deferred to a future work. GWs from monochromatic circular binaries constituted by non-spinning MBHs are described by seven independent parameters. We compute the expected statistical errors on the source parameters by evaluating the variance-covariance matrix -- the inverse of the Fisher information matrix -- of the observable parameters. The diagonal elements of such a matrix provide a robust lower limit to the statistical uncertainties (the so-called Cramer-Rao bound~\cite{JaynesBretthorst:2003, Cramer:1946}), which in the limit of high signal-to-noise ratio (SNR) tend to the actual statistical errors. Depending on the actual structure of the signal likelihood function and the SNR this could underestimate the actual errors, see \emph{e.g.}~\cite{NicholsonVecchio:1998,BalasubramanianDhurandhar:1998,Vallisneri:2008} for a discussion in the context of GW observations.
Nonetheless, this analysis serves as an important benchmark and can then be refined by carrying out actual analyses on mock data sets and by estimating the full set of (marginalised) posterior density functions of the parameters. The main results of the paper can be summarised as follows:
\begin{itemize}
\item At least three (not co-aligned) pulsars in the PTA are necessary to fully resolve the source parameters;
\item The statistical errors on the source parameters, at \emph{fixed} SNR, decrease as the number of pulsars in the array increases. The typical accuracy greatly improves by adding pulsars up to $\approx 20$; for larger arrays, the actual gain becomes progressively smaller because the pulsars ``fill the sky'' and the effectiveness of further triangulation saturates. In particular, for a fiducial case of an array of 100 pulsars randomly and uniformly distributed in the sky with optimal coherent SNR = 10 -- which may be appropriate for the SKA -- we find a typical GW source error box in the sky $\approx 40$ deg$^2$ and a fractional amplitude error of $\approx$ 30\%. The inclination and polarization angles can be determined within an error of $\sim 0.3$ rad, and the (constant) frequency is determined to sub-frequency resolution bin accuracy. These results are independent of the source gravitational-wave frequency.
\item When an anisotropic distribution of pulsars is considered, the typical source sky location accuracy improves linearly with the array sky coverage. The statistical errors on all the other parameters are essentially insensitive to the PTA sky coverage, as long as it covers more than $\sim 1$ srad.
\item The ongoing Parkes PTA aims at monitoring 20 pulsars with a 100 ns timing noise; the targeted pulsars are mainly located in the southern sky. A GW source in that part of the sky could be localized down to a precision of $\lesssim 10$ deg$^2$ at SNR$=10$, whereas in the northern hemisphere, the lack of monitored pulsars limits the error box to $\lower.5ex\hbox{\gtsima} 200$ deg$^2$. The median of the Parkes PTA angular resolution is $\approx 130\,(\mathrm{SNR}/10)^{-2}$ deg$^2$.
\end{itemize}
The paper is organised as follows. In Section II we describe the GW signal relevant to PTA and we introduce the quantities that come into play in the parameter estimation problem. A review of the Fisher information matrix technique and its application to the PTA case are provided in Section III. Section IV is devoted to the detailed presentation of the results, and in Section V we summarize the main findings of this study and point to future work. Unless otherwise specified, throughout the paper we use geometric units $G=c=1$.
\section{The signal}
\label{s:signal}
Observations of GWs using PTAs exploit the regularity of the time of arrival of radio pulses from pulsars. Gravitational radiation affects the arrival time of the electromagnetic signal by perturbing the null geodesics of photons traveling from a pulsar to the Earth. This was realized over thirty years ago~\cite{EstabrookWahlquist:1975,Sazhin:1978,Detweiler:1979,HellingsDowns:1983}, and the number and timing stability of radio pulsars known today and expected to be monitored with future surveys~\cite{man08,jan08,NANOGrav,laz09} make ensembles of pulsars -- PTAs -- ``cosmic detectors'' of gravitational radiation in the frequency range $\sim 10^{-9}\,\mathrm{Hz} - 10^{-6}\,\mathrm{Hz}$. Here we review the signal produced by a GW source in PTA observations.
Let us consider a GW metric perturbation $h_{ab}(t)$ in the transverse and traceless gauge (TT) described by the two independent (and time-dependent) polarisation amplitudes $h_+(t)$ and $h_\times(t)$ that carry the information about the GW source. Let us also indicate with $\hat\Omega$ the unit vector that identifies the direction of GW propagation (conversely, the direction to the GW source position in the sky is $-\hat\Omega$). The metric perturbation can therefore be written as:
\begin{equation}
h_{ab}(t,\hat\Omega) = e_{ab}^+(\hat\Omega) h_+(t,\hat\Omega) + e_{ab}^{\times}(\hat\Omega)\, h_\times(t,\hat\Omega),
\label{e:hab}
\end{equation}
where $e_{ab}^A(\hat\Omega)$ ($A = +\,,\times$) are the polarisation tensors, that are uniquely defined once one specifies the wave principal axes described by the unit vectors $\hat{m}$ and $\hat{n}$ as,
\begin{subequations}
\begin{align}
e_{ab}^+(\hat{\Omega}) &= \hat{m}_a \hat{m}_b - \hat{n}_a \hat{n}_b\,,
\label{e:e+}
\\
e_{ab}^{\times}(\hat{\Omega}) &= \hat{m}_a \hat{n}_b + \hat{n}_a \hat{m}_b\,.
\label{e:ex}
\end{align}
\end{subequations}
Let us now consider a pulsar emitting radio pulses with a frequency $\nu_0$. Radio waves propagate along the direction described by the unit vector $\hat{p}$, and in the background $h_{ab}$ the frequency of the pulse is affected. For an observer at Earth (or at the Solar System Barycentre), the frequency is shifted according to the characteristic two-pulse function~\cite{EstabrookWahlquist:1975}
\begin{eqnarray}
z(t,\hat{\Omega}) & \equiv & \frac{\nu(t) - \nu_0}{\nu_0}
\nonumber\\
& = & \frac{1}{2} \frac{\hat{p}^a\hat{p}^b}{1+\hat{p}^a\hat{\Omega}_a}\Delta h_{ab}(t;\hat{\Omega})\,.
\label{e:z}
\end{eqnarray}
Here $\nu(t)$ is the received frequency (say, at the Solar System Barycentre), and
\begin{equation}
\Delta h_{ab}(t) \equiv h_{ab}(t_\mathrm{p},\hat{\Omega}) - h_{ab}(t,\hat{\Omega})
\label{e:deltah}
\end{equation}
is the difference between the metric perturbation at the pulsar -- with spacetime coordinates $(t_\mathrm{p},\vec{x}_p)$ -- and at the receiver -- with spacetime coordinates $(t,\vec{x})$. The quantity that is actually observed is the time-residual $r(t)$, which is simply the time integral of Eq.~(\ref{e:z}),
\begin{equation}
r(t) = \int_0^t dt' z(t',\hat{\Omega})\,.
\label{e:r}
\end{equation}
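For a monochromatic source, Eq.~(\ref{e:r}) can be checked numerically. In the sketch below (illustrative values only; the effective redshift amplitude $Z=10^{-15}$ is an assumption, not a number from the text), a trapezoidal integration of $z(t) = Z\cos(2\pi f t)$ is compared with the analytic residual $r(t) = Z\sin(2\pi f t)/(2\pi f)$, whose peak $Z/(2\pi f) \approx 3$ ns for $f = 50$ nHz is consistent with the nanosecond residual levels quoted in the Introduction.

```python
import math

f = 50e-9            # GW frequency [Hz], fiducial value used in the text
Z = 1e-15            # assumed effective redshift amplitude (illustrative)
T = 10 * 3.156e7     # 10 yr of observation [s]

def z(t):
    """Monochromatic two-pulse redshift z(t) = Z cos(2 pi f t)."""
    return Z * math.cos(2.0 * math.pi * f * t)

# residual r(t) = int_0^t z(t') dt', via the trapezoidal rule
n = 20000
ts = [T * k / n for k in range(n + 1)]
r = [0.0]
for k in range(1, n + 1):
    r.append(r[-1] + 0.5 * (z(ts[k]) + z(ts[k - 1])) * (ts[k] - ts[k - 1]))

# analytic integral of z(t) for comparison
r_analytic = [Z * math.sin(2.0 * math.pi * f * t) / (2.0 * math.pi * f)
              for t in ts]
peak_ns = max(abs(x) for x in r) * 1e9    # peak timing residual [ns]
```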
We can re-write Eq.~(\ref{e:z}) in the form
\begin{equation}
z(t,\hat{\Omega}) = \sum_A F^A(\hat{\Omega}) \Delta h_{A}(t;\hat{\Omega})\,,
\label{e:z1}
\end{equation}
where
\begin{equation}
F^A(\hat{\Omega}) \equiv \frac{1}{2} \frac{\hat{p}^a\hat{p}^b}{1+\hat{p}^a\hat{\Omega}_a} e_{ab}^A(\hat{\Omega})
\label{e:FA}
\end{equation}
is the ``antenna beam pattern'' (see Eqs.~(\ref{e:hab}), (\ref{e:e+}) and~(\ref{e:ex})); here we use the Einstein summation convention for repeated indices. Using the definitions~(\ref{e:e+}) and~(\ref{e:ex}) for the wave polarisation tensors, it is simple to show that the antenna beam patterns depend on the three direction cosines $\hat{m} \cdot \hat{p}$, $\hat{n} \cdot \hat{p}$ and $\hat{\Omega} \cdot \hat{p}$:
\begin{subequations}
\begin{align}
F^+(\hat{\Omega}) & = \frac{1}{2} \frac{(\hat{m} \cdot \hat{p})^2 - (\hat{n} \cdot \hat{p})^2}{1 + \hat{\Omega} \cdot \hat{p}}\,,\\
F^\times(\hat{\Omega}) & = \frac{(\hat{m} \cdot \hat{p})\,(\hat{n} \cdot \hat{p})}{1 + \hat{\Omega} \cdot \hat{p}}\,.
\end{align}
\end{subequations}
Let us now consider a reference frame $(x,y,z)$ fixed to the Solar System Barycentre. The source location in the sky is defined by the usual polar angles $(\theta,\phi)$. The unit vectors that define the wave principal axes are given by (cf. Eqs. (B4) and (B5) in Appendix B of ~\cite{Anderson-et-al:2001}; here we adopt the same convention used in high-frequency laser interferometric observations)
\begin{subequations}
\begin{align}
\vec{m} & =
(\sin\phi \cos\psi - \sin\psi \cos\phi \cos\theta) \hat{x}
\nonumber\\
& -
(\cos\phi \cos\psi + \sin\psi \sin\phi \cos\theta) \hat{y}
\nonumber\\
& +
(\sin\psi \sin\theta) \hat{z}\,,
\label{e:m}\\
\vec{n} & =
(-\sin\phi \sin\psi - \cos\psi \cos\phi \cos\theta) \hat{x}
\nonumber\\
& +
(\cos\phi \sin\psi - \cos\psi \sin\phi \cos\theta) \hat{y}
\nonumber\\
& +
(\cos\psi \sin\theta) \hat{z}\,,
\label{e:n}
\end{align}
\end{subequations}
where $\hat{x}$, $\hat{y}$ and $\hat{z}$ are the unit vectors along the axis of the reference frame, $x$, $y$, and $z$, respectively.
The angle $\psi$ is the wave polarisation angle, defined as the angle counter-clockwise about the direction of propagation from the line of nodes to the axis described by $\vec{m}$. The wave propagates in the direction $\hat\Omega = \vec{m} \times \vec{n}$, which is explicitly given by:
\begin{equation}
\hat{\Omega} =
- (\sin\theta \cos\phi)\, \hat{x}
- (\sin\theta \sin\phi)\, \hat{y}
- \cos\theta \hat{z}\,.
\end{equation}
Analogously, the unit vector
\begin{equation}
\hat{p}_\alpha =
(\sin\theta_\alpha \cos\phi_\alpha)\, \hat{x}
+ (\sin\theta_\alpha \sin\phi_\alpha)\, \hat{y}
+ \cos\theta_\alpha \hat{z}
\end{equation}
identifies the position in the sky of the $\alpha$-th pulsar using the polar angles $(\theta_\alpha,\phi_\alpha)$.
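The geometry above can be assembled into a short numerical sketch (hypothetical helper names; not code from the paper) that builds $\hat m$, $\hat n$, $\hat\Omega = \hat m \times \hat n$ and evaluates the beam patterns $F^+$ and $F^\times$ for a given pulsar direction. Useful consistency checks are the identities $F^A(\psi + \pi/2) = -F^A(\psi)$ and $F^+(\psi + \pi/4) = F^\times(\psi)$, which follow from the rotation of the principal axes with the polarisation angle.

```python
import math

def wave_frame(theta, phi, psi):
    """Principal axes m, n (as in the text) and propagation
    direction Omega = m x n, for source polar angles (theta, phi)."""
    st, ct = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    s2, c2 = math.sin(psi), math.cos(psi)
    m = (sp * c2 - s2 * cp * ct,
         -(cp * c2 + s2 * sp * ct),
         s2 * st)
    n = (-sp * s2 - c2 * cp * ct,
         cp * s2 - c2 * sp * ct,
         c2 * st)
    omega = (m[1] * n[2] - m[2] * n[1],          # cross product m x n
             m[2] * n[0] - m[0] * n[2],
             m[0] * n[1] - m[1] * n[0])
    return m, n, omega

def beam_patterns(theta, phi, psi, theta_p, phi_p):
    """Antenna beam patterns F^+ and F^x for a pulsar along p_hat."""
    m, n, omega = wave_frame(theta, phi, psi)
    p = (math.sin(theta_p) * math.cos(phi_p),
         math.sin(theta_p) * math.sin(phi_p),
         math.cos(theta_p))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    mp, np_, op = dot(m, p), dot(n, p), dot(omega, p)
    f_plus = 0.5 * (mp**2 - np_**2) / (1.0 + op)
    f_cross = mp * np_ / (1.0 + op)
    return f_plus, f_cross
```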
We will now derive the expression of the PTA signal, Eq.~(\ref{e:z}), produced by a circular, non-precessing binary system of MBHs emitting almost monochromatic radiation, \emph{i.e.} with negligible frequency drift during the observation time, $T\approx 10$ yr. The results are presented in Section~\ref{ss:timing-residuals}. In the next sub-section, we first justify the astrophysical assumptions.
\subsection{Astrophysical assumptions}
\label{ss:astrophysics}
Let us justify (and discuss the limitations of) the assumptions that we have made on the nature of the sources, which lead us to consider circular, non-precessing binary systems generating quasi-monochromatic radiation, before providing the result in Eq.~(\ref{researth}). We derive general expressions for the phase displacement introduced by the frequency drift and by the eccentricity-induced periastron precession, and for the change in the orbital angular momentum direction caused by the precession induced by spin-orbit coupling. The size of each of these effects is then evaluated by considering a realistic (within our current astrophysical understanding) selected population of resolvable MBHBs taken from SVV. Throughout the paper we will consider binary systems with masses $m_1$ and $m_2$ ($m_2 \le m_1$), and \emph{chirp mass} ${\cal M}=m_1^{3/5}m_2^{3/5}/(m_1+m_2)^{1/5}$, emitting at GW frequency $f$. We also define $M = m_1 + m_2$, $\mu = m_1 m_2/M$ and $q=m_2/m_1$, the total mass, the reduced mass and the mass ratio, respectively. Our notation is such that all the quantities are the observed (redshifted) ones, such that \emph{e.g.} the {\it intrinsic} (rest-frame) mass of the primary MBH is $m_{1,r}=m_1/(1+z)$ and the {\it rest frame} GW frequency is $f_r=f(1+z)$. We normalize all the results to
\begin{align}
& M_9 = \frac{M}{10^9\,M_{\odot}}\,,
\nonumber\\
& {\cal M}_{8.5} = \frac{{\cal M}}{10^{8.5}\,M_{\odot}}\,,
\nonumber\\
& f_{50} = \frac{f}{50\,{\rm nHz}}\,,
\nonumber\\
& T_{10} = \frac{T}{10\,{\rm yr}}\,,
\nonumber
\end{align}
which are the typical values for individually resolvable sources found in SVV, and the typical observation timespan.
\subsubsection{Gravitational wave frequency evolution}
A binary with the properties defined above evolves, due to radiation reaction, through an adiabatic in-spiral phase, with \emph{GW frequency} $f(t)$ changing at a rate (at the leading Newtonian order)
\begin{equation}
\frac{df}{dt} = \frac{96}{5}\pi^{8/3} {\cal M}^{5/3} f^{11/3}\,.
\label{e:dfdt}
\end{equation}
The in-spiral phase terminates at the last stable orbit (LSO), which for a Schwarzschild black hole in circular orbit corresponds to the frequency
\begin{equation}
f_\mathrm{LSO} = 4.4\times 10^{-6}\,M_9^{-1}\,\,\mathrm{Hz}\,.
\label{e:flso}
\end{equation}
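As a quick numerical cross-check (illustrative only, not part of the analysis pipeline), the sketch below evaluates $f_\mathrm{LSO} = (6^{3/2}\pi M)^{-1}$ in geometric units, using standard values of $G$, $c$ and $M_\odot$; the function name is ours.

```python
# Illustrative check of the LSO frequency, f_LSO = 1/(6^{3/2} pi M) with G = c = 1.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30  # SI units

def f_lso(M_9):
    """GW frequency of the LSO for a total mass M = M_9 * 1e9 solar masses."""
    M_sec = M_9 * 1e9 * M_sun * G / c**3    # total mass expressed in seconds
    return 1.0 / (6.0**1.5 * math.pi * M_sec)

print(f_lso(1.0))   # ~4.4e-6 Hz, in agreement with Eq. (e:flso)
```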
The observational window of PTAs is set at low frequency by the overall duration of the monitoring of pulsars, $T \approx 10$ yr, and at high frequency by the cadence of the observations, $\approx 1$ week: the PTA observational window is therefore in the range $\sim 10^{-9} - 10^{-6}$ Hz. In SVV we explored the physical properties of MBHBs that are likely to be observed in this frequency range: PTAs will resolve binaries with $m_{1,2} \gtrsim 10^8 M_\odot$ in the frequency range $\approx 10^{-8} - 10^{-7}$ Hz. In this mass-frequency region, PTAs will observe the in-spiral portion of the coalescence of a binary system, and one can ignore post-Newtonian corrections to the amplitude and phase evolution, as the velocity of the binary is:
\begin{align}
v & = (\pi f M)^{2/3}\,,
\nonumber\\
& = 1.73\times 10^{-2} M_9^{2/3}f_{50}^{2/3}\,.
\label{e:v}
\end{align}
Stated in different terms, the systems will be far from plunge, as the time to coalescence for a binary radiating at frequency $f$ is (at the leading Newtonian quadrupole order and for a circular orbit system)
\begin{equation}
t_\mathrm{coal} \simeq 4\times 10^3\,{\cal M}_{8.5}^{-5/3}\,f_{50}^{-8/3}\,\mathrm{yr}\,.
\label{e:tcoal}
\end{equation}
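This estimate follows from integrating Eq.~(\ref{e:dfdt}) up to coalescence, $t_\mathrm{coal} = (5/256)\,\pi^{-8/3}{\cal M}^{-5/3}f^{-8/3}$; the sketch below (illustrative only, standard constants, function name ours) reproduces the quoted number.

```python
# Time to coalescence at Newtonian order for a circular binary,
# t_coal = (5/256) pi^{-8/3} Mc^{-5/3} f^{-8/3} (geometric units, G = c = 1).
import math

G, c, M_sun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7

def t_coal_yr(Mc_85, f_50):
    Mc = Mc_85 * 10**8.5 * M_sun * G / c**3   # chirp mass in seconds
    f = f_50 * 50e-9                          # GW frequency in Hz
    return (5.0 / 256.0) * math.pi**(-8.0 / 3.0) * Mc**(-5.0 / 3.0) * f**(-8.0 / 3.0) / yr

print(t_coal_yr(1.0, 1.0))   # ~4e3 yr, as in Eq. (e:tcoal)
```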
As a consequence, the frequency evolution during the observation time is small and can be neglected. Indeed, it is simple to estimate the total frequency shift of the radiation over the observation period,
\begin{equation}
\Delta f \approx \dot{f} T \approx 0.05\,{\cal M}_{8.5}^{5/3}\,f_{50}^{11/3}\,T_{10}\,\,\, \mathrm{nHz}\,,
\label{e:fdrift}
\end{equation}
which is negligible with respect to the frequency resolution bin $\approx 3 T_{10}^{-1}$ nHz; correspondingly, the additional phase contribution
\begin{equation}
\Delta \Phi \approx \pi \dot{f} T^2 \approx 0.04\,{\cal M}_{8.5}^{5/3}\,f_{50}^{11/3}\,T_{10}^2\,\,\, \mathrm{rad},
\label{e:phasedrift}
\end{equation}
is much smaller than 1 rad. Eqs.~(\ref{e:fdrift}) and~(\ref{e:phasedrift}) clearly show that it is more than legitimate in this initial study to ignore any frequency derivative, and treat gravitational radiation as \emph{monochromatic} over the observational period.
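Both estimates follow directly from Eq.~(\ref{e:dfdt}); as a sanity check, the short sketch below (illustrative only, with standard values of $G$, $c$ and $M_\odot$; the function name is ours) evaluates $\dot{f}T$ and $\pi\dot{f}T^2$ for the fiducial normalisations.

```python
# Frequency drift Delta_f ~ fdot*T and phase drift Delta_Phi ~ pi*fdot*T^2,
# with fdot given by the Newtonian chirp, Eq. (e:dfdt).
import math

G, c, M_sun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7

def fdot(Mc_85, f_50):
    Mc = Mc_85 * 10**8.5 * M_sun * G / c**3   # chirp mass in seconds
    f = f_50 * 50e-9                          # GW frequency in Hz
    return (96.0 / 5.0) * math.pi**(8.0 / 3.0) * Mc**(5.0 / 3.0) * f**(11.0 / 3.0)

T = 10 * yr
df_nHz = fdot(1.0, 1.0) * T * 1e9        # ~0.05 nHz, Eq. (e:fdrift)
dPhi = math.pi * fdot(1.0, 1.0) * T**2   # ~0.04 rad, Eq. (e:phasedrift)
print(df_nHz, dPhi)
```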
\subsubsection{Spin effects}
We now justify our assumption of neglecting the spins in the modelling of the waveform. From an astrophysical point of view, very little precise information about the spins of MBHs can be extracted directly from observations. However, several theoretical arguments support the existence of a population of rapidly spinning MBHs. If coherent accretion from a thin disk \cite{ss73} is the dominant growth mechanism, then MBH spin-up is inevitable \cite{thorne74}; jet production in active galactic nuclei is best explained by the presence of rapidly spinning MBHs \cite{nemmen07}; and in the hierarchical formation context, although MBHB mergers tend to spin down the remnant \cite{hughes03}, detailed growth models that take into account both mergers and accretion lead to populations of rapidly spinning MBHs \cite{vp05,bv08}. Spins have two main effects on the gravitational waveforms emitted during the in-spiral: (i) they affect the phase evolution~\cite{BlanchetEtAl:1995}, and (ii) they cause the orbital plane to precess through spin-orbit and spin-spin coupling~\cite{ApostolatosEtAl:1994,Kidder:1995}. The effect of the spins on the phase evolution is completely negligible for the astrophysical systems observable by PTAs: the additional phase contribution enters at lowest order at the post$^{1.5}$-Newtonian order, i.e. it is proportional to $v^3$, and we have already shown that $v \ll 1$, see Eq.~(\ref{e:v}). Precession would imprint a characteristic signature on the signal through amplitude and phase modulations and, as a consequence, a time-dependent polarisation of the waves as observed by a PTA. It is fairly simple to quantify the change of orientation of the orbital angular momentum unit vector $\hat{L}$ during a typical observation. The rate of change of the precession angle is, at leading order,
\begin{equation}
\frac{d\alpha_p}{dt} = \left(2 + \frac{3 m_2}{2 m_1}\right) \frac{L + S}{a^3}
\label{e:dalphadt}
\end{equation}
where $L = \sqrt{a \mu^2 M}$ is the magnitude of the orbital angular momentum and $S$ is the total intrinsic spin of the black holes. As long as $\mu/M\gg v/c$, we have $L \gg S$. This is always the case for resolvable MBHBs: we find that these systems are in general characterised by $q\gtrsim0.1$ (therefore $\mu/M\gtrsim0.1$), while from Eq.~(\ref{e:v}) we know that in general $v/c\sim0.01$. In this case, from Eq.~(\ref{e:dalphadt}) one obtains
\begin{eqnarray}
\Delta \alpha_p & \approx& 2\pi^{5/3} \left(1 + \frac{3 m_2}{4 m_1}\right) \mu M^{-1/3} f^{5/3} T
\nonumber\\
& \approx & 0.8 \left(1 + \frac{3 m_2}{4 m_1}\right)\left(\frac{\mu}{M}\right)M_9^{2/3}f_{50}^{5/3}T_{10}\,\mathrm{rad}\,,
\label{spin}
\end{eqnarray}
which is independent of $S$. The effect is largest for equal-mass binaries, $m_1 = m_2$, ${\mu}/{M} = 0.25$; in this case $\Delta \alpha_p \approx 0.3$ rad. It is therefore clear that in general spins will not play an important role, and we will neglect their effect in the modelling of the signals at the PTA output. It is, however, interesting to note that for a $10^9 M_\odot$ binary system observed for 10 years at $\approx 10^{-7}$ Hz, which is consistent with astrophysical expectations (see SVV), the orientation of the orbital angular momentum would change by $\Delta \alpha_p \approx 1$ rad. The Square Kilometre Array therefore has a concrete chance of detecting this signature, and of providing direct insight into MBH spins.
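The scaling in Eq.~(\ref{spin}) can be checked numerically; the sketch below (illustrative only; standard constants; the function name and parametrisation by $q$ are ours) evaluates $\Delta\alpha_p$ for the fiducial normalisations.

```python
# Change of the orbital angular momentum direction over the observation,
# Eq. (spin): Delta_alpha_p ~ 2 pi^{5/3} (1 + 3q/4) (mu/M) M^{2/3} f^{5/3} T.
import math

G, c, M_sun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7

def delta_alpha_p(M_9, f_50, T_10, q):
    M = M_9 * 1e9 * M_sun * G / c**3   # total mass in seconds
    f = f_50 * 50e-9
    T = T_10 * 10 * yr
    mu_over_M = q / (1.0 + q)**2       # symmetric mass ratio mu/M
    return (2.0 * math.pi**(5.0 / 3.0) * (1.0 + 0.75 * q) * mu_over_M
            * M**(2.0 / 3.0) * f**(5.0 / 3.0) * T)

print(delta_alpha_p(1.0, 1.0, 1.0, 1.0))   # equal-mass case: a few tenths of a rad
```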
\subsubsection{Eccentricity of the binary}
Let us finally consider the assumption of circular orbits, and the possible effects of neglecting eccentricity in the analysis. A residual eccentricity at orbital separations corresponding to the PTA observational window has two consequences for the observed signal: (i) the power of the radiation is not confined to the harmonic at twice the orbital frequency, but is spread over the (in principle infinite) set of harmonics at integer multiples of the inverse of the orbital period, and (ii) the source periapse precesses in the plane of the orbit at a rate
\begin{align}
\frac{d\gamma}{dt} & = 3\pi f \frac{\left(\pi f M\right)^{2/3}}{\left(1 - e^2\right)}\,,
\nonumber\\
& \simeq 3.9\times10^{-9} \left(1 - e^2\right)^{-1}\,M_{9}^{2/3}\,f_{50}^{5/3}\,\mathrm{rad\,\,s}^{-1}
\label{e:dgammadt}
\end{align}
which introduces additional modulations in phase and (as a consequence) amplitude in the signal recorded at the Earth. In Eq.~(\ref{e:dgammadt}), $\gamma(t)$ is the angle of the periapse measured with respect to a fixed frame attached to the source. We now briefly consider the two effects in turn. The presence of eccentricity ``splits'' each polarisation amplitude $h_+(t)$ and $h_\times(t)$ into harmonics according to (see \emph{e.g.} Eqs.~(5-6) in Ref.~\cite{WillemsVecchioKalogera:2008} and references therein):
\begin{eqnarray}
h^{+}_n(t) & = & A \Bigl\{-(1 + \cos^2\iota)u_n(e) \cos\left[\frac{n}{2}\,\Phi(t) + 2 \gamma(t)\right]
\nonumber \\
& & -(1 + \cos^2\iota) v_n(e) \cos\left[\frac{n}{2}\,\Phi(t) - 2 \gamma(t)\right]
\nonumber \\
& & + \sin^2\iota\, w_n(e) \cos\left[\frac{n}{2}\,\Phi(t)\right] \Bigr\},
\label{e:h+}\\
h^{\times}_{n}(t) & = & 2 A \cos\iota \Bigl\{u_n(e) \sin\left[\frac{n}{2}\,\Phi(t) + 2 \gamma(t)\right]
\nonumber\\
& & + v_n(e) \sin\left[\frac{n}{2}\,\Phi(t) - 2 \gamma(t)\right] \Bigr\}\,,
\label{e:hx}
\end{eqnarray}
where
\begin{equation}
\Phi(t) = 2\pi\int^t f(t') dt'\,,
\label{e:Phi}
\end{equation}
is the GW phase and $f(t)$ the instantaneous GW frequency, corresponding to twice the inverse of the orbital period. The source inclination angle $\iota$ is defined by $\cos\iota = -\hat{\Omega}^a {\hat L}_a$, where ${\hat L}$ is the unit vector that describes the orientation of the source orbital plane, and the amplitude coefficients $u_n(e)$, $v_n(e)$, and $w_n(e)$ are linear combinations of the Bessel functions of the first kind $J_{n}(ne)$, $J_{n\pm 1}(ne)$ and $J_{n\pm 2}(ne)$. For an astrophysically plausible range of eccentricities, $e\lesssim 0.3$ -- see Fig.~\ref{fig1a} and the discussion below -- $|u_n(e)| \gg |v_n(e)|\,,|w_n(e)|$ and most of the power will still be confined to the $n=2$ harmonic at twice the orbital frequency, see \emph{e.g.} Fig.~3 of Ref.~\cite{PetersMathews:1963}. On the other hand, the change of the periapse position, even for low eccentricity values, may introduce significant phase shifts over coherent observations lasting several years. In fact, the phase of the recorded signal is shifted by an additional contribution $2\gamma(t)$. This means that the actual frequency of the signal recorded at the instrument corresponds to $f(t) + {\dot{\gamma}}/{\pi}$ and differs by a measurable amount from $f(t)$. Nonetheless, one can still model the radiation observed at the PTA output as monochromatic, as long as the periapse precession term ${\dot{\gamma}}/{\pi}$ introduces a phase shift $\Delta \Phi_\gamma$, quadratic in time, that is $\ll 1$ rad; this is equivalent to the condition that we have imposed on the change of the phase produced by the frequency shift induced by radiation reaction, see Eqs.~(\ref{e:fdrift}) and~(\ref{e:phasedrift}). From Eqs.~(\ref{e:dgammadt}) and~(\ref{e:dfdt}), this condition yields:
\begin{align}
\Delta \Phi_\gamma & \approx \frac{d^2\gamma}{dt^2} T^2 = \frac{96\pi^{13/3}}{\left(1 - e^2\right)} M^{2/3}{\cal M}^{5/3} f^{13/3} T^2
\nonumber\\
& \approx 2\times10^{-3} \left(1 - e^2\right)^{-1} M_9^{2/3}{\cal M}_{8.5}^{5/3}\,f_{50}^{13/3}\,T_{10}^2\,\mathrm{rad}\,.
\label{e:dgamma}
\end{align}
We therefore see that the effect of the eccentricity will be in general negligible.
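Both Eq.~(\ref{e:dgammadt}) and Eq.~(\ref{e:dgamma}) are straightforward to evaluate numerically; the sketch below (illustrative only, with standard values of $G$, $c$ and $M_\odot$; function names are ours) reproduces the quoted estimates for the fiducial normalisations.

```python
# Periastron precession rate, Eq. (e:dgammadt), and the quadratic-in-time
# phase contribution it induces, Eq. (e:dgamma). Geometric units, G = c = 1.
import math

G, c, M_sun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7

def dgamma_dt(M_9, f_50, e=0.0):
    M = M_9 * 1e9 * M_sun * G / c**3          # total mass in seconds
    f = f_50 * 50e-9
    return 3.0 * math.pi * f * (math.pi * f * M)**(2.0 / 3.0) / (1.0 - e**2)

def dphi_gamma(M_9, Mc_85, f_50, T_10, e=0.0):
    M = M_9 * 1e9 * M_sun * G / c**3
    Mc = Mc_85 * 10**8.5 * M_sun * G / c**3   # chirp mass in seconds
    f = f_50 * 50e-9
    T = T_10 * 10 * yr
    return (96.0 * math.pi**(13.0 / 3.0) * M**(2.0 / 3.0) * Mc**(5.0 / 3.0)
            * f**(13.0 / 3.0) * T**2 / (1.0 - e**2))

print(dgamma_dt(1.0, 1.0))              # ~3.9e-9 rad/s
print(dphi_gamma(1.0, 1.0, 1.0, 1.0))   # ~2e-3 rad
```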
\subsubsection{Tests on a massive black hole population}
\begin{figure}
\centerline{\psfig{file=f1.ps,width=84.0mm}}
\caption{Testing the circular monochromatic non--spinning binary approximation. Upper left panel: distribution of the phase displacement $\Delta \Phi$ introduced by the frequency drift of the binaries. Upper right panel: change in the orbital angular momentum direction $\Delta \alpha_p$ introduced by the spin-orbit coupling. Lower left panel: eccentricity distribution of the systems. Lower right panel: distribution of phase displacement $\Delta \Phi_\gamma$ induced by relativistic periastron precession due to non-zero eccentricity of the binaries. The distributions are constructed considering all the resolvable MBHBs with residuals $>1$ ns (solid lines), 10 ns (long--dashed lines) and 100 ns (short--dashed lines), found in 1000 Monte Carlo realizations of the Tu-SA models described in SVV, and they are normalised so that their integrals are unity.}
\label{fig1a}
\end{figure}
We can quantify more rigorously whether the assumption of a monochromatic signal at the PTA output is justified, by evaluating the distributions of $\Delta \Phi$, $\Delta \alpha_p$ and $\Delta \Phi_\gamma$ over an astrophysically motivated population of resolvable MBHBs. We consider the Tu-SA MBHB population model discussed in SVV (see Section 2.2 of SVV for a detailed description) and we explore the orbital evolution, including a possible non-zero eccentricity of the observable systems. The binaries are assumed to be in circular orbit at the moment of pairing and are self-consistently evolved taking into account stellar scattering and GW emission~\cite{Sesana-prep}. We generate 1000 Monte Carlo realisations of the entire population of GW signals in the PTA band and we collect the individually resolvable sources generating coherent timing residuals greater than 1, 10 and 100 ns, respectively, over 10 years. In Fig.~\ref{fig1a} we plot the distributions relevant to this analysis. We see from the two upper panels that, in general, treating the system as ``monochromatic'' with negligible spin effects is a good approximation. If we consider a 1 ns threshold (solid lines), the phase displacement $\Delta \Phi$ introduced by the frequency drift and the orbital angular momentum direction change $\Delta \alpha_p$ due to spin-orbit coupling are always $<1$ rad, and in $\sim 80$\% of the cases are $<0.1$ rad. The lower left panel of Fig.~\ref{fig1a} shows the eccentricity distribution of the same sample of individually resolvable sources. Almost all the sources are characterised by $e \lesssim 0.1$, with a long tail extending down to $e \lesssim 10^{-3}$ in the PTA band. The typical periastron precession--induced additional phase $2\dot{\gamma}T$ can be larger than 1 rad.
However, this additional contribution grows linearly with time and, as discussed before, results in a measured frequency which differs from the intrinsic one by a small amount, $\dot{\gamma}/\pi\lesssim 1$ nHz. The ``non-monochromatic'' phase contribution $\Delta \Phi_\gamma$, which changes quadratically with time and is described by Eq.~(\ref{e:dgamma}), is instead plotted in the lower right panel of Fig.~\ref{fig1a}. Values of $\Delta \Phi_\gamma$ are typically of the order of $10^{-3}$, completely negligible in the context of our analysis. Note that, as a general trend, when the threshold in the source--induced timing residuals is increased to 10 and 100 ns, all the effects tend to be suppressed. This is because resolvable sources generating larger residuals are usually found at lower frequencies, and all the effects have a steep dependence on frequency -- see Eqs.~(\ref{e:phasedrift}),~(\ref{spin}) and~(\ref{e:dgamma}). This means that none of the effects considered above should be an issue for ongoing PTA campaigns, which aim to reach a total sensitivity of $\gtrsim30$ ns, but they may possibly play a role in recovering sources at the level of a few ns, which is relevant for the planned SKA. Needless to say, a residual eccentricity at the time of pairing may result in larger values of $e$ than those shown in Fig.~\ref{fig1a}~\cite{Sesana-prep}, causing a significant scatter of the signal power among several different harmonics; however, the presence of gas may lead to circularization before the binaries reach a frequency $\approx 10^{-9}$ Hz (see, e.g.,~\cite{dot07}). Unfortunately, little is known about the eccentricity of subparsec massive binaries, and here we tackle the case of circular systems, deferring the study of precessing eccentric binaries to future work.
\subsection{Timing residuals}
\label{ss:timing-residuals}
\begin{figure}
\centerline{\psfig{file=f2.ps,width=84.0mm}}
\caption{Normalized distribution of $\Delta f_{\alpha}$ (see text) for the same sample of MBHBs considered in Fig.~\ref{fig1a}, assuming observations with 100 isotropically distributed pulsars in the sky at a distance of 1 kpc. The vertical dotted line marks the width of the array's frequency resolution bin $\Delta f_r=1/T$ ($\approx 3\times 10^{-9}$ Hz for $T=10$ yr).}
\label{fig1b}
\end{figure}
We have shown that the assumption of a circular, monochromatic, non-precessing binary is astrophysically reasonable, certainly for this initial exploratory study. We now specialise the signal observed at the output, Eq.~(\ref{e:r}), to this approximation. The two independent polarisation amplitudes generated by a binary system, Eqs.~(\ref{e:h+}) and~(\ref{e:hx}), can be written as:
\begin{subequations}
\begin{align}
h_+(t) & = A_\mathrm{gw} a(\iota) \cos\Phi(t)\,,
\label{e:h+1}
\\
h_{\times}(t) &= A_\mathrm{gw} b(\iota) \sin\Phi(t)\,,
\label{e:hx1}
\end{align}
\end{subequations}
where
\begin{equation}
A_\mathrm{gw}(f) = 2 \frac{{\cal M}^{5/3}}{D}\,\left[\pi f(t)\right]^{2/3}
\label{e:Agw}
\end{equation}
is the GW amplitude, $D$ the luminosity distance to the GW source, $\Phi(t)$ is the GW phase given by Eq. (\ref{e:Phi}), and $f(t)$ the instantaneous GW frequency (twice the inverse of the orbital period). The two functions
\begin{subequations}
\begin{align}
a(\iota) & = 1 + \cos^2 \iota
\label{e:aiota}
\\
b(\iota) &= -2 \cos\iota
\label{e:biota}
\end{align}
\end{subequations}
depend on the source inclination angle $\iota$, defined in the previous Section.
As described in Section II, Eqs.~(\ref{e:deltah}) and~(\ref{e:r}), the response function of each individual pulsar $\alpha$ consists of two terms, namely, the perturbation registered at the Earth at the time $t$ of data collection ($h_{ab}(t,\hat{\Omega})$), and the perturbation registered at the pulsar at a time $t-\tau_\alpha$ ($h_{ab}(t-\tau_\alpha,\hat{\Omega})$), where $\tau_\alpha$ is the light-travel-time from the pulsar to the Earth given by:
\begin{eqnarray}
\tau_\alpha & = & L_\alpha (1 + \hat{\Omega} \cdot \hat{p}_\alpha)
\nonumber\\
& \simeq & 1.1\times 10^{11}\,\frac{L_\alpha}{1\,\mathrm{kpc}}\,(1 + \hat{\Omega} \cdot \hat{p}_\alpha)\,\mathrm{s},
\end{eqnarray}
where $L_\alpha$ is the distance to the pulsar. We can therefore formally write the observed timing residuals, Eq.~(\ref{e:r}) for each pulsar $\alpha$ as:
\begin{equation}
r_\alpha(t) = r_\alpha^{(P)}(t) + r_\alpha^{(E)}(t)\,,
\label{e:r1}
\end{equation}
where $P$ and $E$ label the ``pulsar'' and ``Earth'' contribution, respectively. During the time $\tau_\alpha$ the frequency of the source -- although ``monochromatic'' over the observation time $T$ of several years -- changes by
\begin{equation}
\Delta f_{\alpha}=\int_{t-\tau_\alpha}^{t} \frac{df}{dt} dt \sim \frac{df}{dt}\tau_\alpha \approx 15\,{\cal M}_{8.5}^{5/3}f_{50}^{11/3}\tau_{\alpha,1}\,\,\,\mathrm{nHz},
\end{equation}
where $\tau_{\alpha,1}$ is the pulsar-Earth light-travel-time normalized to a distance of 1 kpc. The frequency shift $\Delta f_{\alpha}$ depends both on the parameters of the source (emission frequency and chirp mass) and on the properties of the pulsar (distance and sky location with respect to the source). We can quantify this effect over an astrophysically plausible sample of GW sources by considering the population shown in Fig.~\ref{fig1a}. Let us consider the same set of resolvable sources as above, and assume detection with a PTA of 100 pulsars randomly distributed in the sky, all at a distance of 1 kpc. For each source we compute the $\Delta f_{\alpha}$ related to each pulsar, and we plot the results in Fig.~\ref{fig1b}. The distribution has a peak around $\sim5\times 10^{-8}$ Hz, which is $\sim 10$ times larger than the typical frequency resolution bin for an observing time $T\approx 10$ yr. This means that \emph{the signal associated with each pulsar generates at the PTA output two monochromatic terms at two distinct frequencies.} All the ``Earth terms'' corresponding to the individual pulsars share the same frequency and phase. They can therefore be summed coherently across the array, building up a distinct monochromatic peak which is not affected by the pulsar terms (also known as ``self-noise''), which usually fall at much lower frequencies. The contribution to the Earth term from each individual pulsar can be written as
\begin{eqnarray}
r_\alpha^{(E)}(t) & = & R \,[a\, F^+_\alpha\,(\sin\Phi(t)-\sin\Phi_0)
\nonumber\\
& - & b\, F^\times_\alpha(\cos\Phi(t)-\cos\Phi_0)\,] ,
\label{researth}
\end{eqnarray}
with
\begin{equation}
R=\frac{A_{\rm gw}}{2\pi f}
\label{erre}
\end{equation}
and $\Phi(t)$ given by Eq. (\ref{e:Phi}). The Earth timing residuals are therefore described by a 7-dimensional vector encoding all (and only) the parameters of the source:
\begin{equation}
\vec{\lambda} = \{R,\theta,\phi,\psi,\iota,f,\Phi_0\}\,.
\label{par}
\end{equation}
Conversely, each individual pulsar term is characterised by a different amplitude, frequency and phase, which crucially \emph{depend also on the poorly constrained distance $L_\alpha$ to the pulsar}. In order to take advantage of the power contained in the pulsar terms, one needs to introduce an additional parameter for each pulsar in the PTA, turning a 7-parameter estimation problem into a $(7+M)$-parameter problem. More details about the PTA response to GWs are given in Appendix A. In this paper we consider only the Earth term (at the expense of a modest loss in total SNR), given by Eq.~(\ref{researth}), which is completely specified by the 7-parameter vector~(\ref{par}). At present it is not clear whether it would be advantageous to also include the pulsar terms in the analysis, which requires the addition of $M$ unknown search parameters. This is an open issue that deserves further investigation and will be considered in a future paper.
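As an illustration of how Eq.~(\ref{researth}) translates into a signal template, the sketch below evaluates the Earth-term residual for given antenna-pattern values $F^+_\alpha$, $F^\times_\alpha$ (whose explicit form is given in Appendix A and is not reproduced here); all numerical values are placeholders, not fits to data.

```python
import math

def earth_term_residual(t, R, a_iota, b_iota, Fp, Fx, f, Phi0):
    """Earth-term timing residual of Eq. (researth) for a monochromatic source.

    Fp, Fx are the pulsar's antenna-pattern values (computed elsewhere);
    a_iota = 1 + cos^2(iota) and b_iota = -2 cos(iota), Eqs. (e:aiota)-(e:biota)."""
    Phi = Phi0 + 2.0 * math.pi * f * t   # monochromatic phase, Eq. (e:Phi)
    return R * (a_iota * Fp * (math.sin(Phi) - math.sin(Phi0))
                - b_iota * Fx * (math.cos(Phi) - math.cos(Phi0)))

# placeholder parameter values; the residual vanishes at t = 0 by construction
print(earth_term_residual(0.0, 1e-7, 1.5, -1.0, 0.3, 0.2, 5e-8, 1.0))  # 0.0
```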
\section{Parameter estimation}
\label{s:fim}
In this section we briefly review the basic theory and key equations regarding the estimate of the statistical errors that affect the measurements of the source parameters. For a comprehensive discussion of this topic we refer the reader to~\cite{JaynesBretthorst:2003}.
The whole data set collected using a PTA consisting of $M$ pulsars can be schematically represented as a vector
\begin{equation}
\vec{d} = \left\{d_1, d_2, \dots, d_M\right\}\,,
\label{e:vecd}
\end{equation}
where the data from the monitoring of each pulsar $(\alpha = 1,\dots,M)$ are given by
\begin{equation}
d_\alpha(t) = n_\alpha(t) + r_\alpha(t;\vec{\lambda})\,.
\label{e:da}
\end{equation}
In the previous equation $r_\alpha(t;\vec{\lambda})$, given by Eq.~(\ref{researth}), is the GW contribution to the timing residuals of the $\alpha$-th pulsar (the signal) -- to simplify the notation we have dropped (and will do so from now on) the index ``E'', but it should be understood, as stressed in the previous section, that we consider only the Earth term in the analysis -- and $n_\alpha(t)$ is the noise that affects the observations. For this analysis we make the usual (simplifying) assumption that $n_\alpha$ is a zero-mean, Gaussian and stationary random process characterised by the one-sided power spectral density $S_\alpha(f)$.
The inference problem in which we are interested is how well one can infer the actual value of the unknown parameter vector $\vec\lambda$, Eq.~(\ref{par}), from the data $\vec{d}$, Eq.~(\ref{e:vecd}), and any prior information on $\vec\lambda$ available before the experiment. Within the Bayesian framework, see \emph{e.g.}~\cite{bayesian-data-analysis}, one is therefore interested in deriving the posterior probability density function (PDF) $p(\vec\lambda | \vec d)$ of the unknown parameter vector, given the data set and the prior information. Bayes' theorem yields
\begin{equation}
p(\vec\lambda | \vec d) = \frac{p(\vec\lambda)\,p(\vec d|\vec\lambda)}{p(\vec d)}\,,
\label{e:posterior}
\end{equation}
where $p(\vec d|\vec\lambda)$ is the likelihood function, $p(\vec\lambda)$ is the prior probability density of $\vec\lambda$, and $p(\vec d)$ is the marginal likelihood or evidence. In the neighborhood of the maximum-likelihood estimate value $\hat{{\vec \lambda}}$, the likelihood function can be approximated as a multi-variate Gaussian distribution,
\begin{equation}
p(\vec\lambda | \vec d) \propto p(\vec\lambda)
\exp{\left[-\frac{1}{2}\Gamma_{ab} \Delta\lambda_a \Delta\lambda_b\right]}\,,
\end{equation}
where $\Delta\lambda_a = \hat{\lambda}_a - {\lambda}_a$ and the matrix $\Gamma_{ab}$ is the Fisher information matrix; here the indices $a,b = 1,\dots,7$ label the components of $\vec{\lambda}$. Note that we have used Einstein's summation convention (and we do not distinguish between covariant and contravariant indices). In the limit of large SNR, $\hat{{\vec \lambda}}$ tends to ${{\vec \lambda}}$, and the inverse of the Fisher information matrix provides a lower limit to the error covariance of unbiased estimators of ${{\vec \lambda}}$, the so-called Cramer-Rao bound~\cite{Cramer:1946}. The variance-covariance matrix is simply the inverse of the Fisher information matrix, and its elements are
\begin{subequations}
\begin{eqnarray}
\sigma_a^2 & = & \left(\Gamma^{-1}\right)_{aa}\,,
\label{e:sigma}
\\
c_{ab} & = & \frac{\left(\Gamma^{-1}\right)_{ab}}{\sqrt{\sigma_a^2\sigma_b^2}}\,,
\label{e:cab}
\end{eqnarray}
\end{subequations}
where $-1\le c_{ab} \le +1$ ($\forall a,b$) are the correlation coefficients. We can therefore interpret $\sigma_a^2$ as quantifying the expected uncertainties on the measurements of the source parameters. We refer the reader to~\cite{Vallisneri:2008} and references therein for an in-depth discussion of the interpretation of the inverse of the Fisher information matrix in the context of assessing the prospects for the estimation of source parameters in GW observations. Here it suffices to point out that MBHBs will likely be observed at the detection threshold (see SVV), and the results presented in Section~\ref{s:results} should therefore be regarded as lower limits to the statistical errors that one can expect to obtain in real observations, see \emph{e.g.}~\cite{NicholsonVecchio:1998,BalasubramanianDhurandhar:1998,Vallisneri:2008}.
One of the parameters of particular interest is the source sky location, and we will discuss in the next Section the ability of a PTA to define an error box in the sky. Following Ref.~\cite{Cutler:1998}, we define the PTA angular resolution, or source error box, as
\begin{equation}
\Delta \Omega=2\pi\sqrt{({\rm sin}\theta \Delta \theta \Delta \phi)^2-({\rm sin}\theta c^{\theta\phi})^2}\,;
\label{domega}
\end{equation}
with this definition, the probability for a source to lie \emph{outside} the solid angle $\Delta \Omega_0$ is $e^{-\Delta \Omega_0/\Delta \Omega}$~\cite{Cutler:1998}.
We now turn to the actual computation of the Fisher information matrix $\Gamma_{ab}$. First of all, we note that in observations of multiple pulsars in the array one can safely treat the data from different pulsars as independent, and the likelihood function of $\vec{d}$ is therefore
\begin{eqnarray}
p(\vec d|\vec\lambda) & = & \prod_\alpha p(d_\alpha|\vec\lambda)
\nonumber\\
& \propto & \exp{\left[-\frac{1}{2}\Gamma_{ab} \Delta\lambda_a \Delta\lambda_b\right]}\,,
\end{eqnarray}
where the Fisher information matrix that characterises the \emph{joint} observations in the equation above is simply given by
\begin{equation}
\Gamma_{ab} = \sum_\alpha \Gamma_{ab}^{(\alpha)}\,.
\end{equation}
$\Gamma_{ab}^{(\alpha)}$ is the Fisher information matrix relevant to the observation with the $\alpha-$th pulsar, and is simply related to the derivatives of the GW signal with respect to the unknown parameters integrated over the observation:
\begin{equation}
\Gamma_{ab}^{(\alpha)} = \left(\frac{\partial r_\alpha(t; \vec\lambda)}{\partial\lambda_a} \Biggl|\Biggr.\frac{\partial r_\alpha(t; \vec\lambda)}{\partial\lambda_b}
\right)\,,
\label{e:Gamma_ab_a}
\end{equation}
where the inner product between two functions $x(t)$ and $y(t)$ is defined as
\begin{subequations}
\begin{eqnarray}
(x|y) & = & 2 \int_{0}^{\infty} \frac{\tilde x^*(f) \tilde y(f) + \tilde x(f) \tilde y^*(f)}{S_n(f)} df\,,
\label{e:innerxy}
\\
& \simeq & \frac{2}{S_0}\int_0^{T} x(t) y(t) dt\,,
\label{e:innerxyapprox}
\end{eqnarray}
\end{subequations}
and
\begin{equation}
\tilde x(f) = \int_{-\infty}^{+\infty} x(t) e^{-2\pi i f t}\,dt
\label{e:tildex}
\end{equation}
is the Fourier Transform of a generic function $x(t)$. The second equality, Eq.~(\ref{e:innerxyapprox}) is correct only in the case in which the noise spectral density is approximately constant (with value $S_0$) across the frequency region that provides support for the two functions $\tilde x(f)$ and $\tilde y(f)$. Eq.~(\ref{e:innerxyapprox}) is appropriate to compute the scalar product for the observation of gravitational radiation from MBHBs whose frequency evolution is negligible during the observation time, which is astrophysically justified as we have shown in Section~\ref{s:intro}.
In terms of the inner product $(.|.)$ -- Eqs.~(\ref{e:innerxy}) and~(\ref{e:innerxyapprox}) -- the optimal SNR at which a signal can be observed using the $\alpha$-th pulsar is
\begin{equation}
{\rm SNR}_\alpha^2 = (r_\alpha | r_\alpha)\,,
\label{e:rhoalpha}
\end{equation}
and the total coherent SNR produced by timing an array of $M$ pulsars is:
\begin{equation}
{\rm SNR}^2 = \sum_{\alpha = 1}^M {\rm SNR}_\alpha^2\,.
\label{e:rho}
\end{equation}
\section{Results}
\label{s:results}
\begin{table*}
\begin{center}
\begin{tabular}{ll|cccccc}
\hline
$M$ $\,\,$& $\Delta \Omega_\mathrm{PTA} [{\rm srad}]$ $\,\,$& $\,\,\Delta\Omega$ [deg$^2$] $\,\,$& $\,\,\Delta R/R$ $\,\,$& $\,\,\Delta \iota$ [rad] $\,\,$& $\,\,\Delta \psi$ [rad] $\,\,$& $\,\,\Delta f/(10^{-10}{\rm Hz})$ $\,\,$& $\,\,\Delta \Phi_0$ [rad] $\,\,$\\
\hline
3 & $4\pi$ & $2858^{+5182}_{-1693}$ & $2.00^{+4.46}_{-1.21}$ & $1.29^{+5.02}_{-0.92}$ & $2.45^{+9.85}_{-1.67}$ & $1.78^{+0.46}_{-0.40}$ & $3.02^{+16.08}_{-2.23}$\\
4 & $4\pi$ & $804^{+662}_{-370}$ & $0.76^{+1.19}_{-0.39}$ & $0.55^{+1.79}_{-0.36}$ & $0.89^{+2.90}_{-0.54}$ & $1.78^{+0.41}_{-0.33}$ & $1.29^{+5.79}_{-0.88}$\\
5 & $4\pi$ & $495^{+308}_{-216}$ & $0.54^{+0.84}_{-0.25}$ & $0.43^{+1.35}_{-0.28}$ & $0.65^{+2.10}_{-0.39}$ & $1.78^{+0.36}_{-0.30}$ & $0.98^{+4.27}_{-0.62}$\\
10 & $4\pi$ & $193^{+127}_{-92}$ & $0.36^{+0.57}_{-0.17}$ & $0.30^{+0.93}_{-0.19}$ & $0.42^{+1.49}_{-0.25}$ & $1.78^{+0.26}_{-0.23}$ & $0.71^{+3.01}_{-0.41}$\\
20 & $4\pi$ & $99.1^{+65.3}_{-44.6}$ & $0.31^{+0.51}_{-0.15}$ & $0.27^{+0.83}_{-0.16}$ & $0.35^{+1.34}_{-0.21}$ & $1.78^{+0.22}_{-0.20}$ & $0.65^{+2.66}_{-0.36}$\\
50 & $4\pi$ & $55.8^{+30.5}_{-23.0}$ & $0.30^{+0.49}_{-0.14}$ & $0.25^{+0.80}_{-0.15}$ & $0.31^{+1.26}_{-0.19}$ & $1.78^{+0.17}_{-0.16}$ & $0.60^{+2.56}_{-0.33}$\\
100 & $4\pi$ & $41.3^{+18.4}_{-15.3}$ & $0.29^{+0.48}_{-0.14}$ & $0.25^{+0.77}_{-0.15}$ & $0.31^{+1.24}_{-0.19}$ & $1.78^{+0.13}_{-0.12}$ & $0.60^{+2.49}_{-0.33}$\\
200 & $4\pi$ & $32.8^{+13.5}_{-11.1}$ & $0.29^{+0.48}_{-0.14}$ & $0.24^{+0.75}_{-0.15}$ & $0.29^{+1.21}_{-0.18}$ & $1.78^{+0.13}_{-0.12}$ & $0.59^{+2.50}_{-0.31}$\\
500 & $4\pi$ & $26.7^{+8.4}_{-8.2}$ & $0.29^{+0.48}_{-0.14}$ & $0.24^{+0.75}_{-0.15}$ & $0.29^{+1.21}_{-0.18}$ & $1.78^{+0.08}_{-0.08}$ & $0.59^{+2.50}_{-0.31}$\\
1000 & $4\pi$ & $23.2^{+6.7}_{-6.8}$ & $0.29^{+0.48}_{-0.14}$ & $0.24^{+0.73}_{-0.15}$ & $0.29^{+1.19}_{-0.18}$ & $1.78^{+0.08}_{-0.08}$ & $0.59^{+2.36}_{-0.31}$\\
\hline
100 & $0.21$ & $3675^{+3019}_{-2536}$ & $1.02^{+0.76}_{-0.34}$ & $0.47^{+1.44}_{-0.29}$ & $0.59^{+2.29}_{-0.34}$ & $1.78^{+0.56}_{-0.40}$ & $1.07^{+4.68}_{-0.68}$\\
100 & $0.84$ & $902^{+633}_{-635}$ & $0.51^{+0.44}_{-0.16}$ & $0.29^{+0.88}_{-0.18}$ & $0.34^{+1.44}_{-0.19}$ & $1.78^{+0.31}_{-0.27}$ & $0.68^{+2.87}_{-0.38}$\\
100 & $1.84$ & $403^{+315}_{-300}$ & $0.38^{+0.43}_{-0.13}$ & $0.25^{+0.80}_{-0.15}$ & $0.31^{+1.27}_{-0.18}$ & $1.78^{+0.17}_{-0.16}$ & $0.60^{+2.56}_{-0.32}$\\
100 & $\pi$ & $227^{+216}_{-184}$ & $0.33^{+0.46}_{-0.12}$ & $0.25^{+0.77}_{-0.15}$ & $0.31^{+1.24}_{-0.19}$ & $1.78^{+0.13}_{-0.16}$ & $0.60^{+2.49}_{-0.33}$\\
100 & $2\pi$ & $65.6^{+156.2}_{-38.3}$ & $0.29^{+0.48}_{-0.13}$ & $0.25^{+0.77}_{-0.15}$ & $0.31^{+1.24}_{-0.18}$ & $1.78^{+0.13}_{-0.12}$ & $0.59^{+2.50}_{-0.31}$\\
100 & $4\pi$ & $41.3^{+18.4}_{-15.3}$ & $0.29^{+0.48}_{-0.14}$ & $0.25^{+0.77}_{-0.15}$ & $0.30^{+1.24}_{-0.19}$ & $1.78^{+0.13}_{-0.12}$ & $0.60^{+2.49}_{-0.32}$\\
\hline
\end{tabular}
\end{center}
\caption{Typical uncertainties in the measurement of the GW source parameters as a function of the total number of pulsars in the array $M$ and their sky coverage $\Delta \Omega_\mathrm{PTA}$ (the portion of the sky over which the pulsars are uniformly distributed). For each PTA configuration we consider between $2.5\times10^4$ and $1.6\times10^6$ (depending on the number of pulsars in the array) GW sources with random parameters. The GW source location is drawn uniformly in the sky, the other parameters are drawn uniformly over the full ranges of $\psi$, $\phi_0$ and $\cos\iota$, and $f_0$ is fixed at $5\times10^{-8}$ Hz. In every Monte Carlo realisation, the optimal SNR is equal to 10. The table reports the median of the statistical errors $\Delta \lambda$ -- where $\lambda$ is a generic source parameter -- and the 25$^{{\rm th}}$ and 75$^{{\rm th}}$ percentiles of the distributions obtained from the Monte Carlo samplings. Note that the errors $\Delta R/R$, $\Delta \iota$, $\Delta \psi$, $\Delta f$ and $\Delta \Phi_0$ all scale as SNR$^{-1}$, while the error $\Delta\Omega$ scales as SNR$^{-2}$.}
\label{tab:summary}
\end{table*}
In this section we present and discuss the results of our analysis aimed at determining the uncertainties surrounding the estimates of the GW source parameters. We focus in particular on the sky localisation of a MBHB, which is especially relevant for possible identifications of electromagnetic counterparts, including the host galaxy and/or galactic nucleus in which the MBHB resides. For binaries in circular orbit whose gravitational radiation does not produce a measurable frequency drift, the mass and distance are degenerate and cannot be individually measured: one can only measure the combination ${\cal M}^{5/3}/D_L$. This prevents measurements of MBHB masses, which would be of great interest. On the other hand, the orientation of the orbital angular momentum -- through measurements of the inclination angle $\iota$ and the polarisation angle $\psi$ -- can be determined (although only with modest accuracy, as we will show below), which may be useful in constraining the geometry of the system if a counterpart is detected.
The uncertainties on the source parameters depend on a number of factors, including the actual MBHB parameters, the SNR, the total number of pulsars and their location in the sky with respect to the GW source. It is therefore impossible to provide a single figure of merit that quantifies how well PTAs will be able to do GW astronomy. One can however derive some general trends and scalings, in particular how the results depend on the number of pulsars and their distribution in the sky, which we call the {\em sky coverage of the array}; this is of particular importance for designing observational campaigns and exploring tradeoffs in the observation strategy. In the following subsections, by means of extensive Monte Carlo simulations, we study the parameter estimation accuracy as a function of the number of pulsars in the array, the total SNR of the signal, and the array sky coverage. All our major findings are summarised in Table \ref{tab:summary}.
\subsection{General behaviour}
Before considering the details of the results, we discuss conceptually the process by which the source parameters can be measured. Our discussion is based on the assumption that the data are processed through a coherent analysis. The frequency of the signal is trivially measured, as this is the key parameter that needs to be matched in order for a template to remain in phase with the signal throughout the observation period. Furthermore, the amplitude of the GW signal determines the actual SNR, and is measured in a straightforward way. The amplitude $R$, or equivalently $A_\mathrm{gw}$, see Eqs.~(\ref{e:Agw}) and~(\ref{erre}), provides a constraint on the chirp mass and distance combination ${\cal M}^{5/3}/D_L$. However, in the case of monochromatic signals, these two parameters of great astrophysical interest cannot be measured independently. If the frequency derivative $\dot{f}$ were also observable -- a case not considered in this paper, as it likely pertains only to a small fraction of detectable binaries, see Section~\ref{s:signal} and Fig.~\ref{fig1b} -- then one would be able to measure independently both the luminosity distance and the chirp mass. In fact, from the measurement of $\dot{f} \propto {\cal M}^{5/3} f^{11/3}$, which can be evaluated from the phase evolution of the timing residuals, one can infer the chirp mass, which in turn, combined with the observed amplitude, would yield an estimate of the luminosity distance\footnote{We note that a direct measurement of the chirp mass would be possible if one could detect both the Earth- and pulsar-terms, \emph{provided that the distance to the pulsar was known}. In this case one has the GW frequency at Earth, the GW frequency at the pulsar, and the Earth-pulsar light-travel time, which together provide a direct measure of $\dot{f}$ and, as a consequence, of the chirp mass.}.
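To make this inversion concrete, for a circular binary the quadrupole formula gives $\dot{f} = (96/5)\,\pi^{8/3}\,(G{\cal M}/c^3)^{5/3} f^{11/3}$, which can be solved for ${\cal M}$. The following is a minimal numerical sketch (illustrative only, with our own function names; this step is not part of the monochromatic analysis actually performed in this paper):

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m s^-1]
MSUN = 1.989e30    # solar mass [kg]

def chirp_mass_from_drift(f, fdot):
    """Invert fdot = (96/5) pi^(8/3) (G Mc/c^3)^(5/3) f^(11/3)
    for the chirp mass Mc, returned in solar masses."""
    gm_c3 = ((5.0 / 96.0) * fdot / (math.pi**(8.0 / 3.0)
             * f**(11.0 / 3.0)))**0.6          # = G Mc / c^3, in seconds
    return gm_c3 * C**3 / (G * MSUN)
```

As a check on the monochromatic approximation, a ${\cal M}=10^9\,M_\odot$ binary at $f=5\times10^{-8}$ Hz drifts by only $\dot f \sim 10^{-18}$ Hz s$^{-1}$, far below the frequency resolution of a multi-year dataset.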
The remaining parameters, those that determine the geometry of the binary -- the source location in the sky, and the orientation of the orbital plane -- and the initial phase $\phi_0$ can be determined only if the PTA contains at least three (not co-aligned) pulsars. The source location in the sky is simply reconstructed through geometrical triangulation, because the PTA signal for each pulsar encodes the source coordinates in the sky in the relative amplitude of the sine and cosine terms of the response or, equivalently, the overall phase and amplitude of the sinusoidal PTA output signal, see Eqs.~(\ref{e:r}),~(\ref{e:z1}),~(\ref{e:FA}) and~(\ref{researth}). For the reader familiar with GW observations with {\it LISA}, we highlight a fundamental difference between {\it LISA} and PTAs in the determination of the source position in the sky. With {\it LISA}, the error box decreases as the signal frequency increases (everything else being equal), because the source location in the sky is reconstructed (primarily) through the location-dependent Doppler effect produced by the motion of the instrument during the observation, which is proportional to the signal frequency. This is not the case for PTAs, where the error box is independent of the GW frequency. It depends however on the number of pulsars in the array -- as the number of pulsars increases, one has to select with increasingly higher precision the actual value of the angular parameters, in order to ensure that the same GW signal correctly fits the timing residuals of all the pulsars -- and on the location of the pulsars in the sky.
\begin{figure}
\centerline{\psfig{file=f3.ps,width=84.0mm}}
\caption{The statistical errors that affect the determination of the source location $\Delta\Omega$, see Eq.~(\ref{domega}) (upper panels) and the signal amplitude $R$ (lower panels) for four randomly selected sources (corresponding to the different line styles). We increase the number of pulsars in the array fixing a total SNR$=10$, and we plot the results as a function of the number of pulsars $M$. In the left panels we consider selected edge-on ($\iota=\pi/2$) sources, while in the right panels we plot sources with intermediate inclination $\iota=\pi/4$.}
\label{fig2a}
\end{figure}
We first consider how the parameter estimation depends on the total number of pulsars $M$ at fixed SNR. We consider a GW source with random parameters and evaluate the inverse of the Fisher information matrix as we progressively add pulsars to the array. The pulsars are added randomly from a uniform distribution in the sky, and the noise has the same spectral density for each pulsar. We also keep the total coherent SNR fixed, at the value SNR = 10. It is clear that in a real observation the SNR actually increases approximately as $\sqrt{M}$, and therefore depends on the number of pulsars in the array. However, by normalising our results to a constant total SNR, we are able to disentangle the change in the uncertainty on parameter estimation that depends on the number of pulsars from the change due simply to the SNR. The results are shown in Fig. \ref{fig2a}. The main effect of adding pulsars to the PTA is to improve the power of triangulation and to reduce the correlation between the source parameters. At least three pulsars in the array are needed to formally resolve all the parameters; however, given the strong correlation amongst $R$, $\iota$ and $\psi$ in particular (which will be discussed later in more detail), a SNR $\sim100$ is needed to locate the source in the sky with an accuracy $\lesssim 50$ deg$^2$ in this case. It is clear that the need to maintain phase coherency between the timing residuals from several pulsars leads to a steep (by orders of magnitude) increase in accuracy from $M=3$ to $M\approx 20$ (note that the current Parkes PTA counts 20 pulsars). Adding more pulsars to the array reduces the sky-location uncertainty region $\Delta \Omega$ by a factor of $\approx 5$ going from 20 to 1000 pulsars, but has almost no impact on the determination of the other parameters (the bottom panels of Fig. \ref{fig2a} show that $\Delta R/R$ is essentially constant for $M \lower.5ex\hbox{\gtsima} 20$).
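The machinery behind these error estimates can be sketched with a toy model: for a deterministic signal in white noise of rms $\sigma$, the Fisher matrix is $\Gamma_{ij} = \sum_k \partial_i h(t_k)\,\partial_j h(t_k)/\sigma^2$, and the parameter covariance is its inverse. Below is a minimal numerical sketch using a single sinusoidal residual as a simplified stand-in for the full PTA response (parameter values and function names are ours, purely for illustration):

```python
import numpy as np

def fisher_matrix(model, params, t, sigma):
    """Gamma_ij = sum_k (dh/dp_i)(dh/dp_j) / sigma^2, with central
    finite-difference derivatives of the signal model."""
    p = np.asarray(params, dtype=float)
    derivs = []
    for i in range(p.size):
        h = 1e-6 * abs(p[i]) if p[i] != 0 else 1e-6   # relative step
        dp = np.zeros_like(p)
        dp[i] = h
        derivs.append((model(p + dp, t) - model(p - dp, t)) / (2.0 * h))
    return np.array([[np.dot(a, b) for b in derivs] for a in derivs]) / sigma**2

# Toy monochromatic residual r(t) = A sin(2 pi f t + phi)
def residual(p, t):
    A, f, phi = p
    return A * np.sin(2.0 * np.pi * f * t + phi)

t = np.linspace(0.0, 10.0, 256)                 # arbitrary time units
gamma = fisher_matrix(residual, [1.0, 0.5, 1.0], t, sigma=0.1)
cov = np.linalg.inv(gamma)                      # 1-sigma errors: sqrt(diag(cov))
```

Adding pulsars to the array corresponds to summing the single-pulsar Fisher matrices before inversion, which is what reduces the inter-parameter correlations discussed above.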
\begin{figure}
\centerline{\psfig{file=f4.ps,width=84.0mm}}
\caption{Same as Fig. \ref{fig2a}, but here, as we add pulsars to the PTA, we consistently take into account the effect on the total coherent SNR, and accordingly we plot the results as a function of the SNR. In the left panels we plot selected edge-on ($\iota=\pi/2$) sources, while in the right panels we consider selected sources with intermediate inclination $\iota=\pi/4$. The dotted--dashed thin lines in the upper panels follow the scaling $\Delta\Omega \propto \mathrm{SNR}^{-2}$.}
\label{fig2b}
\end{figure}
Now that we have explored the effect of the number of pulsars alone (at fixed SNR) on the parameter errors, we can consider the case in which we also let the SNR change. We repeat the analysis described above, but now the SNR is not kept fixed and we let it vary self-consistently as pulsars are added to the array. The results, plotted as a function of the total coherent SNR, are shown in Fig. \ref{fig2b}. Once more, we concentrate in particular on the measurement of the amplitude $R$ and the error box in the sky $\Delta\Omega$. For $M \gg 1$, the error box in the sky and the amplitude measurements scale as expected according to $\Delta \Omega\propto\mathrm{SNR}^{-2}$ and $\Delta R/R \propto \mathrm{SNR}^{-1}$ (and so do all the other parameters not shown here). However, for $\mathrm{SNR} \lower.5ex\hbox{\ltsima} 10$ the uncertainties depart quite dramatically from the scalings above, simply due to the fact that with only a handful of pulsars in the array the strong correlations amongst the parameters degrade the measurements. We stress that the results shown here are independent of the GW frequency; we directly checked this property by performing several tests in which the source's frequency is drawn randomly in the range $10^{-8}$--$10^{-7}$ Hz.
\begin{figure}
\centerline{\psfig{file=f5.ps,width=84.0mm}}
\caption{The effect of the source orbital inclination $\iota$ on the estimate of the signal parameters. Upper panels: the correlation coefficients $c^{R\iota}$ (left) and $c^{\psi\Phi_0}$ (right) as a function of $\iota$. Middle and bottom panels: the statistical errors in the measurement of the amplitude $R$, polarisation angle $\psi$, inclination angle $\iota$ and initial phase $\Phi_0$ for a fixed PTA coherent SNR = 10, making clear the connection between inclination, correlation (degeneracy) and parameter estimation. Each asterisk on the plots is a randomly generated source.}
\label{fig3}
\end{figure}
\begin{figure}
\centerline{\psfig{file=f6.ps,width=84.0mm}}
\caption{The distributions of the statistical errors of the source parameter measurements using a sample of 25000 randomly distributed sources (see text for more details), divided into three different inclination intervals: $\iota \in [0,\pi/6]\cup[5/6\pi,\pi]$ (dotted), $\iota \in [\pi/6,\pi/3]\cup[2/3\pi, 5/6\pi]$ (dashed) and $\iota\in [\pi/3, 2/3\pi]$ (solid). In each panel, the sum of the integrals of the distributions over the three $\iota$ bins is unity.}
\label{fig4}
\end{figure}
The source inclination angle $\iota$ is strongly correlated with the signal amplitude $R$, and the polarisation angle $\psi$ is correlated with both $\iota$ and $\Phi_0$. The results are therefore affected by the actual value of the source inclination. The left panels in Figs. \ref{fig2a} and \ref{fig2b} refer to four different edge-on sources (i.e. $\iota=\pi/2$, for which the radiation is linearly polarised). In this case, the parameters have the least correlation, and $\Delta R/R=$SNR$^{-1}$. The right panels in Figs. \ref{fig2a} and \ref{fig2b} refer to sources with an ``intermediate'' inclination $\iota=\pi/4$; here degeneracies start to play a significant role and cause a factor of $\approx 3$ degradation in the $\Delta R/R$ estimate (still scaling as SNR$^{-1}$). Note, however, that the sky position accuracy is independent of $\iota$ (upper panels in Figs. \ref{fig2a} and \ref{fig2b}), because the sky coordinates $\theta$ and $\phi$ are only weakly correlated with the other source parameters. We further explore this point by considering the behaviour of the correlation coefficients ($c^{R\iota}$ and $c^{\psi\Phi_0}$) as a function of $\iota$. Fig. \ref{fig3} shows the correlation coefficients and the statistical errors in the source parameters as a function of $\iota$ for a sample of 1000 individual sources, using a PTA with $M=100$ and total SNR$=10$. For a face-on source ($\iota=0, \pi$), both polarisations contribute equally to the signal, and any polarisation angle $\psi$ can be perfectly ``reproduced'' by tuning the source phase $\Phi_0$; i.e. the two parameters are completely degenerate and cannot be determined individually. Moving towards edge-on sources progressively changes the relative contribution of the two polarisations, breaking the degeneracy with the phase. Fig. \ref{fig4} shows the statistical error distributions for the different parameters over a sample of 25000 sources divided into three different $\iota$ bins.
The degradation in the determination of $R$, $\iota$ and $\psi$ moving towards face-on sources is clear. Conversely, $\theta$ and $\phi$ are only weakly correlated with the other parameters, so the estimate of $\Omega$ is independent of the source inclination (lower right panel in Fig. \ref{fig4}).
\subsection{Isotropic distribution of pulsars}
\begin{figure}
\centerline{\psfig{file=f7.ps,width=84.0mm}}
\caption{Median expected statistical error on the source parameters. Each point (asterisk or square) is obtained by averaging over a large Monte Carlo sample of MBHBs (it ranges from $2.5\times 10^4$ when considering 1000 pulsars to $1.6\times10^6$ when using 3 pulsars). In each panel, solid lines (squares) represent the median statistical error as a function of the total coherent SNR, assuming 100 randomly distributed pulsars in the sky; the thick dashed lines (asterisks) represent the median statistical error as a function of the number of pulsars $M$ for a fixed total SNR$=10$. In this latter case, thin dashed lines label the 25$^{\rm th}$ and the 75$^{\rm th}$ percentile of the error distributions.}
\label{fig5}
\end{figure}
\begin{figure}
\centerline{\psfig{file=f8.ps,width=84.0mm}}
\caption{Distributions normalised to unity of the size of the error-box in the sky assuming an isotropic random distribution of pulsars in the array. Upper panel: from right to left the number of pulsars considered is $M=3, 5, 20, 100, 1000$, and we fixed a total SNR$=10$ in all cases. Lower panel: from right to left we consider SNR$=5, 10, 20, 50, 100$, and we fixed $M=100$.}
\label{fig6}
\end{figure}
In this Section we study the parameter estimation for a PTA whose pulsars are \emph{isotropically} distributed in the sky, and investigate how the results depend on the number $M$ of pulsars in the array and the SNR. Current PTAs have pulsars that are far from isotropically located on the celestial sphere -- the anisotropic distribution of pulsars is discussed in the next Section -- but the isotropic case is useful to develop an understanding of the key factors that affect the performance of PTAs for GW astronomy. It can also be considered representative of future PTAs, such as SKA, where many stable pulsars are expected to be discovered all over the sky.
We begin by fixing the total coherent SNR at which the GW signal is observed, setting SNR$= 10$ regardless of the number of pulsars in the array, and explore the dependence of the results on the number of pulsars $M$ in the range 3 to 1000. We then consider a fiducial `SKA configuration' by fixing the total number of pulsars to $M=100$, and we explore how the results depend on the SNR for values $5 \le \mathrm{SNR} \le 100$. Throughout this analysis we assume that the timing noise is exactly the same for each pulsar and that the observations of each neutron star cover the same time span. The relative contribution of each pulsar in the PTA to the SNR is therefore solely dictated by the geometry of the pulsar-Earth-source system, that is, by the specific value of the beam pattern functions $F^{+,\times}(\theta, \phi,\psi)$. In total we consider 14 $M$-SNR combinations, and for each of them we generate $2.5\times 10^4$-to-$1.6\times10^6$ (depending on the total number of pulsars in the array) random sources in the sky. Each source is determined by the seven parameters described by Eq. (\ref{par}), which, in all the Monte Carlo simulations presented from now on, are chosen as follows. The angles $\theta$ and $\phi$ are randomly sampled from a uniform distribution in the sky; $\Phi_0$ and $\psi$ are drawn from uniform distributions over their relevant intervals, $[0,2\pi]$ and $[0,\pi]$ respectively; $\iota$ is sampled according to a probability distribution $p(\iota)= \sin\iota/2$ in the interval $[0, \pi]$; and the frequency is fixed at $f=5\times 10^{-8}$ Hz. Finally, the amplitude $R$ is set so as to normalise the signal to the pre-selected value of the SNR. For each source we generate $M$ pulsars randomly located in the sky and we calculate the Fisher information matrix and its inverse as detailed in Section~\ref{s:fim}.
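The parameter draws just described can be reproduced with a few lines (an illustrative sketch with our own variable names; $p(\iota)=\sin\iota/2$ is equivalent to drawing $\cos\iota$ uniformly in $[-1,1]$):

```python
import numpy as np

rng = np.random.default_rng()

def draw_source(f0=5e-8):
    """One random GW source: isotropic sky position, uniform Phi_0 and
    psi, and inclination with p(iota) = sin(iota)/2."""
    return {
        "theta": np.arccos(rng.uniform(-1.0, 1.0)),  # uniform in cos(theta)
        "phi":   rng.uniform(0.0, 2.0 * np.pi),
        "Phi0":  rng.uniform(0.0, 2.0 * np.pi),
        "psi":   rng.uniform(0.0, np.pi),
        "iota":  np.arccos(rng.uniform(-1.0, 1.0)),  # p(iota) = sin(iota)/2
        "f":     f0,                                 # fixed at 5e-8 Hz
    }
```

(The amplitude $R$ is then fixed a posteriori by the SNR normalisation, so it is not drawn here.)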
We also performed trial runs considering $f=10^{-7}$ Hz and $f=10^{-8}$ Hz (not shown here) to further cross-check that the results do not depend on the actual GW frequency.
Fig. \ref{fig5} shows the median statistical errors as a function of $M$ and SNR for all six relevant source parameters ($\theta$ and $\phi$ are combined into the single quantity $\Delta\Omega$, according to Eq.~(\ref{domega})). Let us focus on the $M$ dependence at a fixed SNR$=10$. The crucial astrophysical quantity is the sky location accuracy, which ranges from $\approx 3000$ deg$^2$ for $M=3$ -- approximately 10\% of the whole sky -- to $\approx 20$ deg$^2$ for $M=1000$. A PTA of 100 pulsars would be able to locate a MBHB within a typical error box of $\approx 40$ deg$^2$. The statistical errors on the other parameters depend only very weakly on $M$ for $M\lower.5ex\hbox{\gtsima} 20$. The fractional error in the source amplitude is typically $\approx 30\%$, which unfortunately prevents us from constraining an astrophysically meaningful ``slice'' in the ${\cal M}-D_L$ plane. The frequency of the source, which in this case was chosen to be $f = 5\times 10^{-8}$ Hz, is determined at a $\sim 0.1$ nHz level. Errors in the inclination and polarisation angles are typically $\approx 0.3$ rad, which may provide useful information about the orientation of the binary orbital plane.
All the results have the expected scaling with respect to the SNR, i.e. $\Delta\Omega \propto 1/\mathrm{SNR}^2$, and for all the other parameters shown in Fig.~\ref{fig5} the uncertainties scale as $1/\mathrm{SNR}$. A typical source with a SNR$=100$ (which our current astrophysical understanding suggests is fairly unlikely, see SVV) would be located in the sky within an error box $\lower.5ex\hbox{\ltsima} 1\,\mathrm{deg}^2$ for $M \lower.5ex\hbox{\gtsima} 10$, which would likely enable the identification of any potential electromagnetic counterpart.
Distributions (normalised to unity) of $\Delta \Omega$ are shown in Fig. \ref{fig6}. The lower panel shows the dependence on the SNR (at a fixed number of pulsars in the PTA, here set to 100), whose effect is to shift the distributions to smaller values of $\Delta \Omega$ as the SNR increases, without modifying the shape of the distribution. The upper panel shows the effectiveness of triangulation: by increasing the number of pulsars at fixed coherent SNR, not only does the peak of the distribution shift towards smaller values of $\Delta \Omega$, but the whole distribution also becomes progressively narrower. At equal SNR, PTAs containing a larger number of pulsars (sufficiently evenly distributed in the sky) with higher intrinsic noise are more powerful than PTAs containing fewer pulsars with very good timing stability, as they allow a more accurate parameter reconstruction (in particular for the sky position) and they minimise the chance of GW sources being located in ``blind spots'' in the sky (see next Section).
\subsection{Anisotropic distribution of pulsars}
\begin{figure}
\centerline{\psfig{file=f9.ps,width=84.0mm}}
\caption{Median statistical error in the source's parameter estimation as a function of the sky-coverage of the pulsar distribution composing the array. Each triangle is obtained averaging over a Monte Carlo generated sample of $1.6\times10^5$ sources. In each panel, solid lines (triangles) represent the median error, assuming $M=100$ and a total SNR$=10$ in the array; thin dashed lines label the 25$^{\rm th}$ and the 75$^{\rm th}$ percentile in the statistical error distributions.}
\label{fig7}
\end{figure}
\begin{figure*}
\centerline{\psfig{file=f10_color.ps,width=160.0mm}}
\caption{Sky maps of the median sky location accuracy for an anisotropic distribution of pulsars in the array. Contour plots are generated by dividing the sky into 1600 ($40\times40$) cells and considering all the randomly sampled sources falling within each cell; SNR$=10$ is considered. The pulsar distribution progressively fills the sky starting from the top left, eventually reaching an isotropic distribution in the bottom right panel (in this case, no distinctive features are present in the sky map). In each panel, 100 black dots label an indicative distribution of 100 pulsars used to generate the maps, to highlight the sky coverage. Labels on the contours refer to the median sky location accuracy expressed in square degrees, and the color--scale is given by the bars located on the right of each map.}
\label{fig8}
\end{figure*}
\begin{figure}
\centerline{\psfig{file=f11.ps,width=84.0mm}}
\caption{Normalized distributions of the statistical errors in sky position accuracy corresponding to the six sky maps shown in Fig. \ref{fig8}. Each distribution is generated using a random subsample of $2.5\times10^4$ sources.}
\label{fig9}
\end{figure}
\begin{figure*}
\centerline{\psfig{file=f12_color.ps,width=160.0mm}}
\caption{Sky maps of the median sky location accuracy for the Parkes PTA. Contour plots are generated as in Fig. \ref{fig8}. Top panel: we fix the source SNR$=10$ over the whole sky; in this case the sky position accuracy depends only on the different triangulation effectiveness as a function of the source sky location. Bottom panel: we fix the source chirp mass and distance to give a sky and polarization averaged SNR$=10$, and we consistently compute the mean SNR as a function of the sky position. The sky map is the result of the combination of triangulation efficiency and SNR as a function of the source sky location. The color--scale is given by the bars on the right, with solid angles expressed in deg$^2$.}
\label{fig10}
\end{figure*}
The sky distribution of the pulsars in a PTA is not necessarily isotropic. This is in fact the case for present PTAs, and it is likely to remain the norm rather than the exception until SKA comes on-line. It is therefore useful -- as it also sheds new light on how the ability to reconstruct the source parameters depends on the location of the pulsars of the array with respect to a GW source -- to explore the dependence of the results on what we call the ``PTA sky coverage'' $\Delta \Omega_\mathrm{PTA}$, i.e. the minimum solid angle in the sky enclosing the whole population of pulsars in the array. We consider as a study case a `polar' distribution of 100 pulsars; the location in the sky of each pulsar is drawn from a uniform distribution in $\phi$ and $\cos\theta$, with parameters in the ranges $\phi \in [0,2\pi]$ and $\theta \in [0,\theta_{{\rm max}}]$, respectively. We then generate a random population of GW sources in the sky and proceed exactly as described in the previous section. We consider six different values of $\Delta \Omega_\mathrm{PTA}$, progressively increasing the sky coverage. We choose $\theta_\mathrm{max} = \pi/12, \pi/6, \pi/4, \pi/3, \pi/2, \pi$, corresponding to $\Delta \Omega_\mathrm{PTA}=0.21, 0.84, 1.84, \pi, 2\pi, 4\pi$ srad. As we are interested in investigating the geometry effects, we fix in each case the total optimal SNR to 10. We dedicate the next section specifically to the case of the 20 pulsars that are currently part of the Parkes PTA.
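These polar-cap configurations can be generated in the same way as before (a sketch under our own naming; the solid angle of a cap with half-opening $\theta_{\rm max}$ is $\Delta\Omega_{\rm PTA}=2\pi(1-\cos\theta_{\rm max})$, which reproduces the values listed above):

```python
import numpy as np

rng = np.random.default_rng(1)

def polar_cap_pulsars(M, theta_max):
    """M pulsars uniform over the polar cap theta in [0, theta_max]:
    uniform in phi and in cos(theta) over [cos(theta_max), 1]."""
    theta = np.arccos(rng.uniform(np.cos(theta_max), 1.0, size=M))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=M)
    return theta, phi

def cap_solid_angle(theta_max):
    """Sky coverage of the cap: Delta Omega_PTA = 2 pi (1 - cos theta_max)."""
    return 2.0 * np.pi * (1.0 - np.cos(theta_max))
```

For $\theta_{\rm max}=\pi/12$ this gives $\Delta\Omega_{\rm PTA}\approx 0.21$ srad, and for $\theta_{\rm max}=\pi$ the full sky, $4\pi$ srad.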
The median statistical errors on the source parameters as a function of the PTA sky coverage are shown in Fig. \ref{fig7}. As one would expect, the errors decrease as the sky coverage increases, even though the SNR is kept constant. This is due to the fact that as the pulsars in the array populate the sky more evenly, they place increasingly stringent constraints on the relative phase differences amongst the copies of the same GW signal measured at each pulsar, which depend on the geometrical factors $F^{+,\times}$. The most important effect is that the sky position is pinned down with greater accuracy; at the same time, correlations between the sky location parameters and the other parameters, in particular the amplitude and inclination angle, are reduced. $\Delta \Omega$ scales linearly (at fixed SNR) with $\Delta \Omega_\mathrm{PTA}$, but the other parameters do not experience such a drastic improvement. The statistical uncertainty on the amplitude improves as $\sqrt{\Delta \Omega_\mathrm{PTA}}$ for $\Delta \Omega_\mathrm{PTA} \lower.5ex\hbox{\ltsima} 1$ srad, then saturates. All the other parameters are much less sensitive to the sky coverage, showing only a mild improvement (a factor $\lesssim 2$) with increasing $\Delta \Omega_\mathrm{PTA}$ up to $\sim 1$ srad.
When one considers an anisotropic distribution of pulsars, however, the median values computed over a random uniform distribution of GW sources in the sky do not carry the full set of information. In particular, the error box in the sky strongly depends on the actual source location. To show and quantify this effect, we use the outputs of the Monte Carlo runs to build sky maps of the median of $\Delta \Omega$, which we show in Fig. \ref{fig8}. When the pulsars are clustered in a small $\Delta \Omega_\mathrm{PTA}$, the properties of the signals coming from that spot in the sky (and from the diametrically opposite one) are more susceptible to small variations with the propagation direction (due to the structure of the response functions $F^{+}$ and $F^{\times}$); the sky location can then be determined with a much better accuracy, $\Delta \Omega \sim 2$ deg$^2$. Conversely, triangulation is much less effective for sources located at right angles with respect to the bulk of the pulsars. For a polar $\Delta \Omega_\mathrm{PTA}=0.21$ srad, we find a typical $\Delta \Omega \gtrsim 5000$ deg$^2$ for equatorial sources; i.e., their sky location is basically undetermined. Increasing the sky coverage of the array obviously mitigates this effect, and in the limit $\Delta \Omega_\mathrm{PTA}=4\pi$ srad (which corresponds to an isotropic pulsar distribution), we find a smooth homogeneous skymap without any recognisable feature (bottom right panel of Fig. \ref{fig8}). In this case the sky location accuracy is independent of the source sky position and, for $M = 100$ and $\mathrm{SNR} = 10$, we find $\Delta \Omega \sim 40$ deg$^2$. Fig. \ref{fig9} shows the normalised distributions of the statistical errors corresponding to the six skymaps shown in Fig. \ref{fig8}.
It is interesting to note the bimodality of the distribution for intermediate values of $\Delta \Omega_\mathrm{PTA}$, due to the fact that there is a sharp transition between sensitive and insensitive areas in the sky (this is particularly evident looking at the contours in the bottom left panels of Fig. \ref{fig8}).
We also checked another anisotropic situation of potential interest: a distribution of pulsars clustered in the Galactic plane. We considered a distribution of pulsars covering a ring in the sky, with $\phi_\alpha$ randomly sampled in the interval $[0,2\pi]$ and latitude in the range $[-\pi/12, \pi/12]$ around the equatorial plane, corresponding to a solid angle of $\Delta \Omega_\mathrm{PTA}=3.26$ srad. Assuming a source SNR$=10$, the median statistical error in the source sky location is $\sim 100$ deg$^2$, ranging from $\sim 10$ deg$^2$ in the equatorial plane to $\sim 400$ deg$^2$ at the poles. Median errors on the other parameters are basically the same as in the isotropic case.
\subsection{The Parkes Pulsar Timing Array}
We finally consider the case that is most relevant to present observations: the potential capabilities of the Parkes Pulsar Timing Array. The goal of the survey is to monitor 20 millisecond pulsars for five years with timing residuals $\approx 100$ ns~\cite{man08}. This may be sufficient to enable the detection of the stochastic background generated by the whole population of MBHBs~\cite{papI}, but according to our current astrophysical understanding (see SVV) it is unlikely to lead to the detection of radiation from individual resolvable MBHBs, although there is still a non-negligible chance of detection. It is therefore interesting to investigate the potential of such a survey.
In our analysis we fix the location of the pulsars in the PTA to the coordinates of the 20 millisecond pulsars in the Parkes PTA, obtained from~\cite{ATNF-catalogue}; however, for this exploratory analysis we set the noise spectral density of the timing residuals to be the same for each pulsar, \emph{i.e.} we do not take into account the different timing stability of the pulsars. We then generate a Monte Carlo sample of GW sources in the sky with the usual procedure. We consider two different approaches. Firstly, we explore the parameter estimation accuracy as a function of the GW source sky location for selected fixed array coherent SNRs (5, 10, 20, 50 and 100). Secondly, we fix the source chirp mass, frequency and distance (so that the sky and polarisation averaged coherent SNR is 10) and we explore the parameter estimation accuracy as a function of the sky location. Skymaps of the statistical error in the sky location are shown in Fig. \ref{fig10}. In the top panel we fix SNR$=10$, independently of the source position in the sky; the median error in the sky location accuracy is $\Delta \Omega \sim 130$ deg$^2$, but it ranges from $\sim 10$ deg$^2$ to $\sim400$ deg$^2$ depending on the source's sky location. The median statistical errors that affect the determination of all the other source parameters are very similar to those for the isotropic pulsar distribution case with $M=20$, since the pulsar array covers almost half of the sky, see Fig. \ref{fig7}. In the bottom panel, we show the results when we fix the source parameters, so that the total SNR in the array does depend on the source sky location. In the southern hemisphere, where almost all the pulsars are concentrated, the SNR can be as high as 15, while in the northern hemisphere it can easily drop below 6. The general shape of the skymap is mildly affected, and shows an even larger imbalance between the two hemispheres.
In this case, the median error is $\Delta \Omega \sim 160$ deg$^2$, ranging from $\sim 3$ deg$^2$ to $\sim900$ deg$^2$. It is fairly clear that adding a small ($\lower.5ex\hbox{\ltsima} 10$) number of pulsars in the northern hemisphere to those already part of the Parkes PTA would significantly improve the uniformity of the array sensitivity and parameter estimation capability, reducing the risk of potentially detectable GW sources ending up in a ``blind spot'' of the array.
\section{Conclusions}
In this paper we have studied the expected uncertainties in the measurements of the parameters of massive black hole binary systems by means of gravitational wave observations with Pulsar Timing Arrays. We have investigated how the results vary as a function of the signal-to-noise ratio, the number of pulsars in the array and their location in the sky with respect to a gravitational wave source. Our analysis is focused on MBHBs in circular orbit with negligible frequency evolution during the observation time (``monochromatic sources''), which we have shown to represent the majority of the observable sample for sensible models of sub--parsec MBHB eccentricity evolution. The statistical errors are evaluated by computing the variance-covariance matrix of the observable parameters, assuming a coherent analysis of only the Earth-terms of the signal in the timing residuals of the pulsars in the array (see Section II B).
For a fiducial case of an array of 100 pulsars randomly distributed in the sky, assuming a coherent total SNR = 10, we find a typical error box in the sky $\Delta \Omega \approx 40$ deg$^2$ and a fractional amplitude error of $\approx 0.3$. The latter places only very weak constraints on the chirp mass-distance combination ${\cal M}^{5/3}/D_L$. At fixed SNR, the typical parameter accuracy is a very steep function of the number of pulsars in the PTA up to $\approx 20$. For PTAs containing more pulsars, the actual gain becomes progressively smaller because the pulsars ``fill the sky'' and the effectiveness of further triangulation weakens. We also explored the impact of having an anisotropic distribution of pulsars, finding that the typical source sky location accuracy improves linearly with the array sky coverage. For the specific case of the Parkes PTA, where all the pulsars are located in the southern sky, the sensitivity and sky localisation are significantly better (by an order of magnitude) in the southern hemisphere, where the error-box is $\lesssim 10 \,\mathrm{deg}^2$ for a total coherent SNR = 10. In the northern hemisphere, the lack of monitored pulsars prevents a source from being localised to better than $\approx 200\,\mathrm{deg}^2$. The monitoring of a handful of pulsars in the northern hemisphere would significantly increase both the SNR and the parameter recovery of GW sources, and the International PTA~\cite{HobbsEtAl:2009} will provide such a capability in the short term future.
The main focus of our analysis is on the sky localisation because sufficiently small error-boxes in the sky may allow the identification of an electro-magnetic counterpart to a GW source. Even for error-boxes of the order of tens-to-hundreds of square degrees (much larger than \emph{e.g.} the typical {\it LISA} error-boxes~\cite{v04,k07,lh08}), the typical sources are expected to be massive (${\cal M} \lower.5ex\hbox{\gtsima} 10^{8}M_{\odot}$) and at low redshift ($z\lower.5ex\hbox{\ltsima} 1.5$), and therefore the number of associated massive galaxies in the error-box should be limited to a few hundred. Signs of a recent merger, like the presence of tidal tails or irregularities in the galaxy luminosity profile, may help in the identification of potential counterparts. Furthermore, if nuclear activity is present, \emph{e.g.} in form of some accretion mechanism, the number of candidate counterparts would shrink to a handful, and periodic variability \cite{hkm07} could help in associating the correct galaxy host. We are currently investigating the astrophysical scenarios and possible observational signatures, and we plan to come back to this important point in the future. The advantage of a counterpart is obvious: the redshift measurement would allow us, by assuming the standard concordance cosmology, to measure the luminosity distance to the GW source, which in turn would break the degeneracy in the amplitude of the timing residuals $R \propto {\cal M}^{5/3}/(D_L f^{1/3})$ between the chirp mass and the distance, therefore providing a direct measure of ${\cal M}$.
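The amplitude degeneracy just described can be illustrated with a minimal numerical sketch (the overall normalisation of $R$ is omitted, so only ratios are meaningful; the masses, distances and frequencies below are illustrative choices, not fitted values):

```python
# Illustrative sketch of the degeneracy R ~ M^{5/3} / (D_L f^{1/3}).
# The overall normalisation is omitted, so only ratios are meaningful.

def residual_amplitude(chirp_mass, d_l, freq):
    """Timing-residual amplitude up to a constant overall factor."""
    return chirp_mass**(5.0 / 3.0) / (d_l * freq**(1.0 / 3.0))

# A source at distance D_L and one at 2 D_L with the chirp mass rescaled
# by 2^{3/5} produce identical residual amplitudes at the same frequency:
r1 = residual_amplitude(1.0e9, 1.0, 5.0e-8)  # M in M_sun, D_L in Gpc, f in Hz
r2 = residual_amplitude(1.0e9 * 2.0**(3.0 / 5.0), 2.0, 5.0e-8)
print(abs(r1 - r2) / r1 < 1e-9)  # the two sources are indistinguishable in R and f
```

An independent distance measurement, such as a redshift from an electro-magnetic counterpart, is therefore what fixes ${\cal M}$ directly.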
The study presented in this paper deals with monochromatic signals. However, the detection of MBHBs which exhibit a measurable frequency drift would give significant payoffs, as it would allow us to break the degeneracy between distance and chirp mass, and enable the direct measurement of both parameters. Such systems may be observable with the Square Kilometre Array. In the future, it is therefore important to extend the present analysis to these more general signals. However, as the frequency derivative has only modest correlations with the sky position parameters, we expect that the results for the determination of the error-box in the sky discussed in this paper will still hold. A further extension to the work, currently in progress, is to consider MBHBs characterised by non-negligible eccentricity. Another extension to our present study is to consider both the Earth- and pulsar-terms in the analysis of the data and the investigation of the possible benefits of such a scheme, assuming that the pulsar distance is not known to sufficient accuracy. This also raises the issue of possible observation campaigns that could yield an accurate (to better than 1 pc) determination of the pulsar distances used in PTAs. In this case the use of the pulsar-term in the analysis would not require the introduction of (many more) unknown parameters and would have the great benefit of breaking the degeneracy between chirp mass and distance.
A final word of caution concerns the interpretation of the results that we have presented in the paper. The approach based on the computation of the Fisher Information matrix is powerful and straightforward, and is justified at this stage to understand the broad capabilities of PTAs and to explore the impact on astronomy of different observational strategies. However, the statistical errors that we compute are strictly \emph{lower limits} to the actual errors obtained in a real analysis; the fact that, at least until SKA comes on line, a detection of a MBHB will be at moderate-to-low SNR should induce caution in the way in which the results presented here are interpreted. Moreover, in our current investigation, we have not dealt with a number of important effects that in real life play a significant role, such as different calibrations of different data sets, the change of systematic factors that affect the noise, possible non-Gaussianity and non-stationarity of the noise, etc. These (and other) important issues for the study of MBHBs with PTAs should be addressed more thoroughly in the future by performing actual mock analyses and developing suitable analysis algorithms.
\section{Introduction}
The photoproduction of hyperons off the nucleon target, $\gamma N\to KY$, is important in hadron physics because it reveals the strangeness-related interaction structures of hadrons. There have been abundant experimental and theoretical efforts dedicated to it. Many experimental collaborations, such as CLAS at Jefferson Laboratory~\cite{McNabb:2003nf,Bradford:2005pt,Bradford:2006ba} and LEPS at SPring-8~\cite{Niiyama:2009zz,Kohri:2009xe,Hicks:2008yn,Muramatsu:2009zp}, have actively studied the $\Lambda$, $\Sigma$, and $\Xi$ photoproductions.
Up to the resonance region, $\sqrt{s}\lesssim3$ GeV, a simple hadronic model including the tree-level diagrams with the nucleon and certain resonance intermediate states has successfully explained the experimental data~\cite{Janssen:2001pe,Nam:2005uq,Nam:2006cx,Oh:2007jd,Nam:2009cv}.
Although there are more complicated higher-order contributions, such as final-state interactions~\cite{Chiang:2001pw} or hadronic loops~\cite{Ozaki:2007ka}, one can reach agreement between the model results and the data without those higher-order contributions.
However, this simple model can hardly be applied to the high-energy region, since it is only valid at relatively low energies. On the other hand, it is well known that various reactions at high energy and low momentum transfer are well described by Regge theory. Therefore, to extend our model to higher energies without sacrificing the satisfactory description of the low-energy data, mesonic Regge trajectories, corresponding to all the
meson exchanges with the same quantum numbers but different spins in
the $t$ channel at tree level, were employed
~\cite{Corthals:2005ce,Corthals:2006nz,Ozaki:2009wp}. This Regge
description is supposed to be valid in the limit
$(s,|t|\,\mathrm{or}\,|u|)\to(\infty,0)$~\cite{Regge:1959mz}. Even
in the resonance region, it has been argued that the Regge
description is still applicable to a certain
extent~\cite{Ozaki:2009wp,Sibirtsev:2003qh}.
In this article, we investigate the $\Lambda(1520,3/2^{-})\equiv\Lambda^{*}$ photoproduction off the proton target, $\gamma p\to K^{+}\Lambda^{*}$,
beyond the resonance region with an extended model including the original hadronic model and the interpolated Regge contributions.
This reaction has recently been studied intensively by the CLAS and LEPS collaborations.
As shown in Ref.~\cite{Nam:2005uq}, up to the resonance region, this production process is largely dominated by the contact-term contribution.
This interesting feature, supported by the experiments~\cite{Nakano,Muramatsu:2009zp}, is a consequence of gauge invariance in a certain
description of spin-$3/2$ fermions, i.e. the Rarita-Schwinger formalism~\cite{Rarita:1941mf,Nath:1971wp}. The contact-term dominance simplifies
the analyses of the production process to a great extent. For instance, according to it, 1) one can expect a significant difference in the production
strengths between the proton- and neutron-target experiments as long as the coupling strength $g_{K^{*}N\Lambda^{*}}$ is small~\cite{Nam:2005uq},
and 2) the computation of the polarization-transfer coefficients is almost without any unknown parameter~\cite{Nam:2009cv}.
To date, there have been only a few experimental data sets for $\Lambda^{*}$ photoproduction off the proton target~
\cite{Boyarski:1970yc,Barber:1980zv,Muramatsu:2009zp,Kohri:2009xe}. Among them, Barber et al. of the LAMP2 collaboration explored
the process up to $E_{\gamma}\approx4.8$ GeV~\cite{Barber:1980zv}, whereas Muramatsu et al. of the LEPS collaboration did so up to $2.4$
GeV~\cite{Muramatsu:2009zp}. Both focused on the resonance region below $\sqrt{s}\lesssim3$ GeV.
Hence, as done in Ref.~\cite{Nam:2005uq}, a simple model with the Born diagrams accompanied by the contact-term
dominance can reproduce the data qualitatively well. However, at $E_{\gamma}=11$ GeV, as measured by Boyarski et al.~\cite{Boyarski:1970yc},
this simple model fails in the forward region, although it still agrees with the data qualitatively well
beyond $|t|\approx0.2\,\mathrm{GeV}^{2}$. To resolve this discrepancy in
the high-energy forward region, we are motivated to introduce the Regge description, which by construction contributes significantly in the limit $(s,|t|)\to(\infty,0)$.
Assuming that the Regge contributions remain non-negligible in the limit $(s,|t|)\to(s_{\mathrm{threshold}},\mathrm{finite})$,
we introduce an interpolating ansatz between the two ends.
After fixing the parameters including a cutoff mass for the form factors, it is straightforward to calculate all the physical observables.
We then present the energy dependences ($\sigma$ and $d\sigma/d\Omega$ as functions of $E_{\gamma}$) and angular dependences ($d\sigma/dt$ and $d\sigma/d\Omega$
as functions of $\theta$), the photon-beam asymmetry ($\Sigma$), and the polarization-transfer coefficients ($C_{x,1/2}$, $C_{x,3/2}$, $C_{z,1/2}$,
and $C_{z,3/2}$) of the production process. Here, $\theta$ stands for the angle between the incident photon and the outgoing kaon in the center-of-mass frame. Furthermore, the $K^{-}$-angle distribution function $\mathcal{F}_{K^{-}}$ in the Gottfried-Jackson frame is also calculated using $C_{z,1/2}$ and $C_{z,3/2}$.
This article is organized as follows: In Section II, we provide the general formalism for computing $\Lambda^{*}$ photoproduction.
Numerical results are given in Section III. Finally, Section IV is devoted to the summary and conclusion.
\section{General formalism}
\subsection{Feynman amplitudes}
The general formalism of the $\gamma(k_1)+ N(p_1)\to K(k_2)+\Lambda^{*}(p_2)$ reaction process is detailed here.
The relevant Feynman diagrams are shown in Fig.~\ref{FIG1}. Here, we include the nucleon-pole and nucleon-resonance contributions in
the $s$-channel, $\Lambda^{*}$-pole contribution in the $u$-channel, $K$- and $K^{*}$-exchanges in the $t$-channel, and the contact-term contribution.
The relevant interactions are given as follows:
\begin{eqnarray}
\label{eq:GROUND}
{\cal L}_{\gamma KK}&=&
ie_K\left[(\partial^{\mu}K^{\dagger})K-(\partial^{\mu}K)K^{\dagger}
\right]A_{\mu},
\nonumber\\
{\cal L}_{\gamma
NN}&=&-\bar{N}\left[e_N\rlap{/}{A}-\frac{e_Q\kappa_N}{4M_N}\sigma\cdot
F\right]N,
\cr \mathcal{L}_{\gamma\Lambda^*\Lambda^*}&=&
-\bar{\Lambda}^{*\mu}
\left[\left(-F_{1}\rlap{/}{\epsilon}g_{\mu\nu}+F_3\rlap{/}{\epsilon}\frac{k_{1
\mu}k_{1
\nu}}{2M^{2}_{\Lambda^*}}\right)-\frac{\rlap{/}{k}_{1}\rlap{/}{\epsilon}}
{2M_{\Lambda^*}}\left(-F_{2}g_{\mu\nu}+F_4\frac{k_{1\mu}k_{1 \nu}}
{2M^{2}_{\Lambda^*}}\right)\right] \Lambda^{*\nu}+\mathrm{h.c.}, \cr
\mathcal{L}_{\gamma KK^{*}}&=& g_{\gamma
KK^{*}}\epsilon_{\mu\nu\sigma\rho}(\partial^{\mu}A^{\nu})
(\partial^{\sigma}K)K^{*\rho}+\mathrm{h.c.} \cr
\mathcal{L}_{KN\Lambda^*}&=&\frac{g_{KN\Lambda^*}}{M_{\Lambda^*}}
\bar{\Lambda}^{*\mu}\partial_{\mu}K\gamma_5N\,+{\rm h.c.}, \cr
\mathcal{L}_{K^{*}N\Lambda^*}&=&
-\frac{iG_{1}}{M_{V}}\bar{\Lambda}^{*\mu}\gamma^{\nu}G_{\mu\nu}N
-\frac{G_{2}}{M^{2}_{V}}\bar{\Lambda}^{*\mu}
G_{\mu\nu}\partial^{\nu}N
+\frac{G_{3}}{M^{2}_{V}}\bar{\Lambda}^{*\mu}\partial^{\nu}G_{\mu\nu}N+\mathrm{h.c.},
\cr {\cal L}_{\gamma
KN\Lambda^*}&=&-\frac{ie_Ng_{KN\Lambda^*}}{M_{\Lambda^*}}
\bar{\Lambda}^{*\mu} A_{\mu}K\gamma_5N+{\rm h.c.},
\end{eqnarray}
where $e_{h}$ and $e_{Q}$ stand for the electric charge of a hadron $h$ and unit electric charge, respectively. $A$, $K$, $K^{*}$, $N$, and
$\Lambda^{*}$ are the fields for the photon, kaon, vector kaon, nucleon, and $\Lambda^{*}$. As for the spin-$3/2$ fermion field, we use the Rarita-Schwinger (RS)
vector-spinor field~\cite{Rarita:1941mf,Nath:1971wp}. We use the notation $\sigma\cdot F=\sigma_{\mu\nu}F^{\mu\nu}$, where $\sigma_{\mu\nu}=i(\gamma_{\mu}\gamma
_{\nu}-\gamma_{\nu}\gamma_{\mu})/2$ and the EM field strength tensor $F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}$. $\kappa_{N,\Lambda^{*}}$
denote the anomalous magnetic moments of $N$ and $\Lambda^{*}$. Although the spin-$3/2$ $\Lambda^{*}$ has four different electromagnetic form factors
$F_{1,2,3,4}$ as shown in Eq.~(\ref{eq:GROUND}), we only take into account the dipole one ($F_{2}\equiv e_{Q}\kappa_{\Lambda^{*}}$) but ignore the monopole
($F_{1}\equiv e_{\Lambda^{*}}=0$), the quadrupole ($F_{3}$), and the octupole ($F_{4}$) ones, since their contributions are negligible~\cite{Gourdin:1963ub}.
Using the $\gamma KK^{*}$ interaction given in Eq.~(\ref{eq:GROUND}) and experimental data~\cite{Amsler:2008zzb},
one can easily find that $g_{\gamma K^{*\pm}K^{\mp}}=0.254/\mathrm{GeV}$. The strength of $g_{KN\Lambda^{*}}$ can be extracted from the experimental data for
the full and partial decay widths: $\Gamma_{\Lambda^{*}}\approx15.6$ MeV and $\Gamma_{\Lambda^{*}\to\bar{K}N}/\Gamma_{\Lambda^{*}}\approx0.45$~\cite{Amsler:2008zzb}.
The decay width and amplitude for $\Lambda^{*}\to \bar{K}N$ read:
\begin{equation}
\label{eq:decay}
\Gamma_{\Lambda^{*}\to\bar{K}N}=\frac{g^{2}_{KN\Lambda^*}
|{\bm p}_{\bar{K}N}|}
{4\pi M^{2}_{\Lambda^{*}}M^{2}_{K}}
\left(\frac{1}{4}\sum_{\mathrm{spin}}|
\mathcal{M}_{\Lambda^{*}\to\bar{K}N}|^{2}\right)
,\,\,\,\,\,i\mathcal{M}_{\Lambda^{*}\to\bar{K}N}
=\bar{u}(q_{\Lambda^{*}})\gamma_{5}q_{\bar{K}}^{\mu}u_{\mu}(q_{N}),
\end{equation}
where ${\bm p}_{\bar{K}N}$ indicates the three-momentum of the decay products in the rest frame of the decaying particle, which can be obtained from the K\"all\'en function for a decay $1\to2,3$~\cite{Amsler:2008zzb}:
\begin{equation}
\label{eq:kollen}
{\bm p}_{23}=\frac{\sqrt{[M^{2}_{1}-(M_{2}+M_{3})^{2}][M^{2}_{1}-(M_{2}-M_{3})^{2}]}}{2M_{1}}.
\end{equation}
Here $M_{i}$ stands for the mass of the $i$-th particle. Substituting the experimental information into Eq.~(\ref{eq:decay}) and using Eq.~(\ref{eq:kollen}),
one is led to $g_{KN\Lambda^*}\approx11$. As for the $K^{*}N\Lambda^{*}$ interaction, there are three individual terms~\cite{Oh:2007jd}, and we
define the notation $G_{\mu\nu}=\partial_{\mu}K^{*}_{\nu}-\partial_{\nu}K^{*}_{\mu}$. Since the available experimental and
theoretical~\cite{Hyodo:2006uw} information is insufficient to determine all the coupling strengths $G_{1,2,3}$,
we set $G_{2}$ and $G_{3}$ to zero for simplicity. The scattering amplitudes for the reaction process are evaluated
as follows:
\begin{eqnarray}
\label{eq:AMP}
i\mathcal{M}_{s}&=&-\frac{g_{KN\Lambda^*}}{M_{K}}
\bar{u}^{\mu}_2k_{2\mu}{\gamma}_{5}
\left[\frac{e_{N}[(\rlap{/}{p}_{1}+M_{N})F_{c}+\rlap{/}{k}_{1}F_{s}]}
{s-M^{2}_{N}}\rlap{/}{\epsilon}
-\frac{e_{Q}\kappa_{p}}{2M_{N}}
\frac{(\rlap{/}{k}_{1}+\rlap{/}{p}_{1}+M_{N})F_{s}}
{s-M^{2}_{N}}
\rlap{/}{\epsilon}\rlap{/}{k}_{1}\right]u_1,
\cr
i\mathcal{M}_{u}&=&-\frac{e_{Q}g_{KN\Lambda^*}\kappa_{\Lambda^*}F_{u}}
{2M_{K}M_{\Lambda^{*}}}
\bar{u}^{\mu}_{2}\rlap{/}{k}_{1}\rlap{/}{\epsilon}
\left[\frac{(\rlap{/}{p}_{2}-\rlap{/}{k}_{1}+M_{\Lambda^{*}})}
{u-M^{2}_{\Lambda^{*}}}
\right]
k_{2\mu}\gamma_{5}u_{1},
\cr
i\mathcal{M}^{K}_{t}&=&\frac{2e_{K}g_{KN\Lambda^*}F_{c}}{M_K}
\bar{u}^{\mu}_2
\left[\frac{(k_{1\mu}-k_{2\mu})(k_{2}\cdot\epsilon)}
{t-M^{2}_{K}} \right]
\gamma_{5}u_1,
\cr
i\mathcal{M}^{K^{*}}_{t}&=&
-\frac{ig_{\gamma{K}K^*}g_{K^{*}N\Lambda^{*}}F_{v}}{M_{K^{*}}}
\bar{u}^{\mu}_{2}\gamma_{\nu}
\left[\frac{(k^{\mu}_{1}-k^{\mu}_{2})g^{\nu\sigma}-
(k^{\nu}_{1}-k^{\nu}_{2})g^{\mu\sigma}
}{t-M^{2}_{K^*}}\right]
(\epsilon_{\rho\eta\xi\sigma}k^{\rho}_{1}
\epsilon^{\eta}k^{\xi}_{2})u_1,
\cr
i\mathcal{M}_{\mathrm{cont.}}
&=&\frac{e_{K}g_{KN\Lambda^*}F_{c}}{M_K}
\bar{u}^{\mu}_2\epsilon_{\mu}{\gamma}_{5}u_1,
\label{amplitudes}
\end{eqnarray}
where $s$, $t$, and $u$ indicate the Mandelstam variables, while $\epsilon$, $u_{1}$, and ${u}^{\mu}_{2}$
denote the photon polarization vector, nucleon spinor, and RS vector-spinor, respectively.
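As a minimal kinematic cross-check of the amplitudes above, the Mandelstam variables for $\gamma p\to K^{+}\Lambda^{*}$ must satisfy $s+t+u=M_{N}^{2}+M_{K}^{2}+M_{\Lambda^{*}}^{2}$ for a real (massless) photon. A sketch with assumed PDG-like masses:

```python
import math

# Sketch of CM-frame kinematics for gamma(k1) + p(p1) -> K(k2) + Lambda*(p2),
# checking the identity s + t + u = M_N^2 + M_K^2 + M_LST^2 (massless photon).
# Masses in GeV are assumed PDG-like values.
M_N, M_K, M_LST = 0.9383, 0.4937, 1.5195

def mandelstam(s, cos_theta):
    """Return (t, u) for a given s (GeV^2) and CM scattering angle."""
    w = math.sqrt(s)
    e_gam = (s - M_N**2) / (2.0 * w)            # photon CM energy
    e_k = (s + M_K**2 - M_LST**2) / (2.0 * w)   # kaon CM energy
    p_k = math.sqrt(e_k**2 - M_K**2)            # kaon CM momentum
    e_n = (s + M_N**2) / (2.0 * w)              # nucleon CM energy
    t = M_K**2 - 2.0 * (e_gam * e_k - e_gam * p_k * cos_theta)
    u = M_N**2 + M_K**2 - 2.0 * (e_n * e_k + e_gam * p_k * cos_theta)
    return t, u

s = 9.0  # GeV^2, above the threshold (M_K + M_LST)^2 ~ 4.05 GeV^2
t, u = mandelstam(s, 0.3)
print(abs(s + t + u - (M_N**2 + M_K**2 + M_LST**2)) < 1e-12)
```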
Since hadrons are not point-like, it is necessary to introduce form factors representing their spatial distributions. It is rather technical, however, to include the form factors while preserving gauge invariance of the invariant amplitude. For this purpose, we employ the scheme developed in Refs.~\cite{Haberzettl:1998eq,Davidson:2001rk}. This scheme preserves not only the Lorentz invariance but also the crossing symmetry of the invariant amplitude, on top of the gauge invariance. Moreover, it satisfies the on-shell condition for the form factors, $F(q^{2}=M^{2})=1$.
In this scheme, the form factors $F_{s,t,u,v}$ are defined generically as:
\begin{equation}
\label{eq:form}
F_{s}=\frac{\Lambda^{4}}{\Lambda^{4}+(s-M^{2}_{N})^{2}},
\,\,\,\,
F_{t}=\frac{\Lambda^{4}}{\Lambda^{4}+(t-M^{2}_{K})^{2}},
\,\,\,\,
F_{u}=\frac{\Lambda^{4}}{\Lambda^{4}+(u-M^{2}_{\Lambda^{*}})^{2}},
\,\,\,\,
F_{v}=\frac{\Lambda^{4}}{\Lambda^{4}+(t-M^{2}_{K^{*}})^{2}}.
\end{equation}
Here, $M_{s,t,u}$ are the masses of the off-shell particles in the $(s,t,u)$-channels. $\Lambda$ stands for a phenomenological cutoff parameter
determined by matching with experimental data. The {\it common} form factor $F_{c}$, which plays a crucial role in preserving gauge invariance, reads:
\begin{equation}
\label{eq:fc}
F_{c}=F_{s}+F_{t}-F_{s}F_{t}.
\end{equation}
It is clear that $F_{c}$ satisfies the on-shell condition whenever either $F_{s}$ or $F_{t}$ is on shell. We note that there are several different
gauge-invariant form-factor prescriptions, as suggested in Ref.~\cite{Haberzettl:2006bn}; the choice of scheme introduces some uncertainty, which
is numerically negligible.
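The on-shell property of the common form factor $F_{c}$ in Eq.~(\ref{eq:fc}) can be verified directly. A minimal sketch; the cutoff value used below is illustrative, not the fitted one:

```python
# Sketch of the gauge-invariance-preserving form factors, Eqs. (form) and (fc).
# The cutoff LAMBDA (GeV) is an illustrative value, not the fitted one.
LAMBDA, M_N, M_K = 0.675, 0.9383, 0.4937

def f_s(s):
    return LAMBDA**4 / (LAMBDA**4 + (s - M_N**2)**2)

def f_t(t):
    return LAMBDA**4 / (LAMBDA**4 + (t - M_K**2)**2)

def f_c(s, t):
    """Common form factor F_c = F_s + F_t - F_s F_t of Eq. (fc)."""
    return f_s(s) + f_t(t) - f_s(s) * f_t(t)

# F_c = 1 whenever either the s-channel nucleon or the t-channel kaon goes
# on shell, while staying between 0 and 1 away from the poles:
print(abs(f_c(M_N**2, -1.0) - 1.0) < 1e-12,
      abs(f_c(5.0, M_K**2) - 1.0) < 1e-12,
      0.0 < f_c(6.0, -2.0) < 1.0)
```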
\subsection{Resonance contribution}
There is little experimental information on the nucleon resonances coupling to $\Lambda^{*}$.
The situation is even worse for the hyperon resonances decaying
into $\gamma\Lambda^{*}$. Only some theoretical calculations have provided information on the
decays~\cite{Capstick:1998uh,Capstick:2000qj}. Unlike
the ground-state $\Lambda(1116)$ photoproduction, where nucleon and hyperon resonances play
important roles in reproducing the data~\cite{Janssen:2001pe}, the Born terms alone are enough to explain the available experimental data for $\Lambda^{*}$ photoproduction~\cite{Barber:1980zv,Muramatsu:2009zp}.
More dedicated experiments may show otherwise in the future.
Keeping this situation in mind, we attempt to include nucleon resonance contributions using the result of the relativistic constituent-quark model calculation~\cite{Capstick:1998uh}. Among the possible nucleon resonances given in Ref.~\cite{Capstick:1998uh}, we only choose $D_{13}(2080)$ with two-star confirmation ($**$) but neglect $S_{11}(2080)$ and $D_{15}(2200)$: $S_{11}$ is still poorly confirmed ($*$), and for $D_{15}$ there are no experimental data for the helicity amplitudes, which are necessary to determine the strength of the transition $\gamma N\to N^{*}$~\cite{Amsler:2008zzb}. Moreover, the spin-$5/2$ Lorentz structure of $D_{15}$ brings theoretical uncertainties~\cite{Choi:2007gy}.
In the quark model of Ref.~\cite{Capstick:1998uh}, $N^{*}(1945,3/2^{-})$ is identified as $D_{13}$. However, we prefer to adopt the experimental value for the $D_{13}$ mass. Its transition and strong interactions are defined as follows:
\begin{eqnarray}
\label{eq:lreso}
\mathcal{L}_{\gamma NN^{*}}&=&
-\frac{ie_{Q}f_{1}}{2M_{N}}
\bar{N}^{*}_{\mu} \gamma_{\nu}F^{\mu\nu}N
-\frac{e_{Q}f_{2}}{(2M_{N})^{2}}
\bar{N}^{*}_{\mu}F^{\mu\nu}(\partial_{\nu}N)+\mathrm{h.c.},
\cr
\mathcal{L}_{KN^{*}\Lambda^{*}}&=&
\frac{g_{1}}{M_{K}}
\bar{\Lambda}^{*}_{\mu}\gamma_{5}\rlap{/}{\partial}
KN^{*\mu}+
\frac{ig_{2}}{M^{2}_{K}}\bar{\Lambda}^{*}_{\mu}\gamma_{5}
(\partial_{\mu}\partial_{\nu}
K)N^{*\nu}+\mathrm{h.c.},
\end{eqnarray}
where $N^{*}$ denotes the field for $D_{13}$. The coupling constants $f_{1}$ and $f_{2}$ can be computed using the helicity amplitudes~\cite{Oh:2007jd}:
\begin{eqnarray}
\label{eq:f1f2}
A^{p^{*}}_{1/2}&=&\frac{e_{Q}\sqrt{6}}{12}
\left(\frac{|\bm{k}_{\gamma N}|}{M_{D_{13}}M_{N}} \right)^{\frac{1}{2}}
\left[f_{1}+\frac{f_{2}}{4M^{2}_{N}}M_{D_{13}}(M_{D_{13}}+M_{N}) \right],
\cr
A^{p^{*}}_{3/2}&=&\frac{e_{Q}\sqrt{2}}{4M_{N}}
\left(\frac{|\bm{k}_{\gamma N}|M_{D_{13}}}{M_{N}} \right)^{\frac{1}{2}}
\left[f_{1}+\frac{f_{2}}{4M_{N}}(M_{D_{13}}+M_{N}) \right],
\end{eqnarray}
where the superscript $p^{*}$ indicates the positive-charge $D_{13}$, and $|\bm{k}_{\gamma N}|=828$ MeV in the decay process of $D_{13}\to\gamma N$ using Eq.~(\ref{eq:kollen}). The experimental values for $A^{p^{*}}_{1/2}$ and $A^{p^{*}}_{3/2}$ are taken from Ref.~\cite{Amsler:2008zzb}:
\begin{equation}
\label{eq:hel}
A^{p^{*}}_{1/2}=(-0.020\pm0.008)/\sqrt{\mathrm{GeV}},\,\,\,\,
A^{p^{*}}_{3/2}=(0.017\pm0.011)/\sqrt{\mathrm{GeV}}.
\end{equation}
We obtain $e_{Q}f_{1}=-0.19$ and $e_{Q}f_{2}=0.19$. The strong coupling strengths $g_{1}$ and $g_{2}$ are given by~\cite{Oh:2007jd}:
\begin{equation}
\label{eq:qqq}
G_{1}=G_{11}\frac{g_{1}}{M_{K}}+G_{12}\frac{g_{2}}{M^{2}_{K}},\,\,\,\,
G_{3}=G_{31}\frac{g_{1}}{M_{K}}+G_{32}\frac{g_{2}}{M^{2}_{K}}.
\end{equation}
Here the coefficients $G_{11,12,31,32}$ are defined as
\begin{eqnarray}
\label{eq:ggggg}
G_{11}&=&\frac{\sqrt{30}}{60\sqrt{\pi}}\frac{1}{M_{\Lambda^{*}}}
\left(\frac{|\bm{k}_{K\Lambda^{*}}|}{M_{D_{13}}} \right)^{\frac{1}{2}}
\sqrt{E_{\Lambda^{*}}-M_{\Lambda^{*}}}
(M_{D_{13}}+M_{\Lambda^{*}})(E_{\Lambda^{*}}+4M_{\Lambda^{*}}),
\cr
G_{12}&=&-\frac{\sqrt{30}}{60\sqrt{\pi}}\frac{|\bm{k}_{K\Lambda^{*}}|^{2}}{M_{\Lambda^{*}}}\sqrt{|\bm{k}_{\Lambda^{*}}|M_{D_{13}}}
\sqrt{E_{\Lambda^{*}}-M_{\Lambda^{*}}},
\cr
G_{31}&=&-\frac{\sqrt{30}}{20\sqrt{\pi}}\frac{1}{M_{\Lambda^{*}}}
\left(\frac{|\bm{k}_{K\Lambda^{*}}|}{M_{D_{13}}} \right)^{\frac{1}{2}}
\sqrt{E_{\Lambda^{*}}-M_{\Lambda^{*}}}
(M_{D_{13}}+M_{\Lambda^{*}})(E_{\Lambda^{*}}-M_{\Lambda^{*}}),
\cr
G_{32}&=&\frac{\sqrt{30}}{20\sqrt{\pi}}\frac{|\bm{k}_{K\Lambda^{*}}|^{2}}{M_{\Lambda^{*}}}\sqrt{|\bm{k}_{\Lambda^{*}}|M_{D_{13}}}
\sqrt{E_{\Lambda^{*}}-M_{\Lambda^{*}}},
\end{eqnarray}
where $|\bm{k}_{K\Lambda^{*}}|=224$ MeV in the decay process $D_{13}\to K\Lambda^{*}$ and $E^{2}_{\Lambda^{*}}=M^{2}_{\Lambda^{*}}+\bm{k}^{2}_{K\Lambda^{*}}$. Employing the theoretical estimates $G_{1}\approx-2.6\,\sqrt{\mathrm{MeV}}$ and $G_{3}\approx-0.2\,\sqrt{\mathrm{MeV}}$ given in Refs.~\cite{Capstick:1998uh,Capstick:2000qj}, one obtains $g_{1}=-1.07$ and $g_{2}=-3.75$.
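The decay momenta quoted above, $|\bm{k}_{\gamma N}|$ and $|\bm{k}_{K\Lambda^{*}}|$, follow from Eq.~(\ref{eq:kollen}). A minimal numerical sketch; the masses in MeV are assumed PDG-like values:

```python
import math

def two_body_momentum(m1, m2, m3):
    """Decay momentum |p_23| of Eq. (kollen) for the two-body decay 1 -> 2 + 3."""
    kallen = (m1**2 - (m2 + m3)**2) * (m1**2 - (m2 - m3)**2)
    return math.sqrt(kallen) / (2.0 * m1)

# Assumed PDG-like masses in MeV:
M_D13, M_N, M_K, M_LST = 2080.0, 938.3, 493.7, 1519.5

p_gamma_n = two_body_momentum(M_D13, 0.0, M_N)    # D13 -> gamma N
p_k_lst = two_body_momentum(M_D13, M_K, M_LST)    # D13 -> K Lambda*
print(p_gamma_n, p_k_lst)
```

This reproduces $|\bm{k}_{\gamma N}|=828$ MeV and agrees with $|\bm{k}_{K\Lambda^{*}}|\approx224$ MeV to within a few MeV, depending on the adopted masses.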
Now the scattering amplitude for $D_{13}$ in the $s$-channel can be written as follows:
\begin{eqnarray}
\label{eq:re}
i\mathcal{M}^{*}_{s}&=&\bar{u}^{\mu}_{2}\gamma_{5}\Bigg\{
\frac{e_{Q}f_{1}g_{1}}
{2M_{K}M_{N}}\rlap{/}{k}_{2}
\left[\frac{(\rlap{/}{k}_{1}+\rlap{/}{p}_{1}+M_{D_{13}})
(k_{1\mu}\rlap{/}{\epsilon}-\rlap{/}{k}_{1}\epsilon_{\mu})}
{s-M^{2}_{D_{13}}-iM_{D_{13}}
\Gamma_{D_{13}}} \right]
\cr
&+&\frac{e_{Q}f_{2}g_{1}}
{4M_{K}M^{2}_{N}}\rlap{/}{k}_{2}
\left[\frac{(\rlap{/}{k}_{1}+\rlap{/}{p}_{1}+M_{D_{13}})
[k_{1\mu}(p_{1}\cdot\epsilon)-(p_{1}\cdot k_{1})\epsilon_{\mu}]}
{s-M^{2}_{D_{13}}-iM_{D_{13}}
\Gamma_{D_{13}}} \right]
\cr
&-&\frac{e_{Q}f_{1}g_{2}}
{2M^{2}_{K}M_{N}}k_{2\mu}
\left[\frac{(\rlap{/}{k}_{1}+\rlap{/}{p}_{1}+M_{D_{13}})
[(k_{1}\cdot k_{2})\rlap{/}{\epsilon}-\rlap{/}{k}_{1}(\epsilon\cdot k_{2})]}
{s-M^{2}_{D_{13}}-iM_{D_{13}}
\Gamma_{D_{13}}} \right]
\cr
&-&\frac{e_{Q}f_{2}g_{2}}
{4M^{2}_{K}M^{2}_{N}}k_{2\mu}
\left[\frac{(\rlap{/}{k}_{1}+\rlap{/}{p}_{1}+M_{D_{13}})
[(k_{1}\cdot k_{2})(\epsilon\cdot p_{2})-(\epsilon\cdot k_{2})(k_{1}\cdot p_{2})]}
{s-M^{2}_{D_{13}}-iM_{D_{13}}
\Gamma_{D_{13}}} \right]\Bigg\}u_{1}F_{s},
\end{eqnarray}
where $\Gamma_{D_{13}}$ is the full decay width, which has a large experimental uncertainty, $\Gamma_{D_{13}}=(87\sim1075)$ MeV~\cite{Amsler:2008zzb}. The preferred values for the PDG average lie in the range $(180\sim 450)$ MeV. Considering this situation, as a trial, we choose $\Gamma_{D_{13}}\approx 500$ MeV. Actually, there are only small differences in the numerical results even for sizable changes in $\Gamma_{D_{13}}$. We find that the $D_{13}$ resonance contribution becomes pronounced
only if $\Gamma_{D_{13}}$ is far narrower, below $\sim100$ MeV. However, such a narrow nucleon resonance is unlikely to exist unless there are unusual production mechanisms for the resonance, such as exotic states. Therefore, throughout this article we keep $\Gamma_{D_{13}}=500$ MeV.
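The weak sensitivity to $\Gamma_{D_{13}}$ can be made plausible from the resonant denominator in Eq.~(\ref{eq:re}): at the peak $s=M^{2}_{D_{13}}$ its magnitude scales as $1/(M_{D_{13}}\Gamma_{D_{13}})$, so only a drastically narrower width enhances the contribution appreciably. A rough sketch with illustrative widths:

```python
# Rough sketch: magnitude of the resonant factor 1/(s - M^2 - i M Gamma)
# of Eq. (re) at the peak s = M^2, where it scales as 1/(M Gamma).
# Units: GeV; the widths compared are illustrative.
M_D13 = 2.08

def bw_peak(width):
    return abs(1.0 / complex(0.0, -M_D13 * width))

ratio = bw_peak(0.1) / bw_peak(0.5)
print(ratio)  # narrowing the width from 500 MeV to 100 MeV gains a factor ~5
```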
\subsection{Regge contributions}
In this subsection, we explain how the Regge contributions are implemented in the $\Lambda^{*}$ photoproduction. As done in Refs.~\cite{Corthals:2005ce,Corthals:2006nz,Ozaki:2009wp}, considering the pseudoscalar and vector strange-meson Regge trajectories, we replace the $K$ and $K^{*}$ propagators in Eq.~(\ref{eq:AMP}) as follows:
\begin{eqnarray}
\label{eq:RT}
\frac{1}{t-M^{2}_{K}}\to\mathcal{D}_{K}
&=&\left(\frac{s}{s_{0}} \right)^{\alpha_{K}}
\frac{\pi\alpha'_{K}}{\Gamma(1+\alpha_{K})\sin(\pi\alpha_{K})},
\cr
\frac{1}{t-M^{2}_{K^{*}}}\to
\mathcal{D}_{K^{*}}&=&\left(\frac{s}{s_{0}} \right)^{\alpha_{K^{*}}-1}
\frac{\pi\alpha'_{K^{*}}}{\Gamma(\alpha_{K^{*}})\sin(\pi\alpha_{K^{*}})}.
\end{eqnarray}
Here $\alpha'_{K,K^{*}}$ indicate the slopes of the trajectories. $\alpha_{K}$ and $\alpha_{K^{*}}$ are the linear trajectories of the mesons for even and odd spins, respectively, given as functions of $t$ assigned as
\begin{equation}
\label{eq:TR}
\alpha_{K}=0.70\,\mathrm{GeV}^{-2}(t-M^{2}_{K}),
\,\,\,\,
\alpha_{K^{*}}=1+0.85\,\mathrm{GeV}^{-2}(t-M^{2}_{K^{*}}).
\end{equation}
A caveat is in order: in deriving Eq.~(\ref{eq:TR}), all the even- and odd-spin trajectories are assumed to be degenerate, although in reality the vector-kaon trajectories are not degenerate~\cite{Corthals:2005ce,Corthals:2006nz}. Moreover, for convenience, we have set the phase factor of the Reggeized propagators to positive unity, as done in Ref.~\cite{Ozaki:2009wp}. The scale parameter $s_{0}$ is chosen to be $1\,\mathrm{GeV}^{2}$~\cite{Corthals:2005ce,Corthals:2006nz}. Hereafter, we use the notation $i\mathcal{M}^{\mathrm{Regge}}$ for the amplitude with the Reggeized propagators of Eq.~(\ref{eq:RT}).
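Near the kaon pole the Reggeized propagator of Eq.~(\ref{eq:RT}) reduces to the Feynman propagator $1/(t-M^{2}_{K})$, since $(s/s_{0})^{\alpha_{K}}\to1$, $\Gamma(1+\alpha_{K})\to1$, and $\sin(\pi\alpha_{K})\simeq\pi\alpha_{K}$ as $\alpha_{K}\to0$. A numerical sketch of this limit (values are illustrative):

```python
import math

# Sketch of the Reggeized kaon propagator D_K of Eq. (RT) and its Feynman
# limit near the pole t -> M_K^2 (alpha_K -> 0). Illustrative values;
# M_K in GeV, s0 = 1 GeV^2, slope alpha'_K = 0.70 GeV^-2 as in the text.
M_K, S0, ALPHA_P = 0.4937, 1.0, 0.70

def d_k(s, t):
    alpha = ALPHA_P * (t - M_K**2)  # kaon trajectory, Eq. (TR)
    return (s / S0)**alpha * math.pi * ALPHA_P / (
        math.gamma(1.0 + alpha) * math.sin(math.pi * alpha))

# Close to the pole, D_K approaches 1/(t - M_K^2):
t = M_K**2 - 1.0e-4
print(abs(d_k(4.0, t) * (t - M_K**2) - 1.0) < 1e-2)
```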
If we employ these Reggeized propagators in Eq.~(\ref{eq:RT}) for the invariant amplitude in Eq.~(\ref{eq:AMP}), the gauge invariance is broken. Fortunately, the $K^{*}$-exchange contribution is not affected since it is gauge invariant by itself according to the antisymmetric tensor structure: $k_{1}\cdot(i\mathcal{M}^{\mathrm{Regge}}_{K^{*}})=0$. Hence, it is enough to consider the $K$-exchange, electric $s$-channel, and contact-term contributions which are all proportional to $F_{c}$ as shown in Eq.~(\ref{eq:AMP}). This situation can be represented by
\begin{equation}
\label{eq:WT}
k_{1}\cdot(i\mathcal{M}^{\mathrm{Regge}}_{K}+i\mathcal{M}^{E}_{s}+i\mathcal{M}_{c})\ne0,
\end{equation}
resulting in the breakdown of gauge invariance of the scattering amplitude. To remedy this problem, we redefine the relevant amplitudes as follows~\cite{Corthals:2005ce,Corthals:2006nz}:
\begin{equation}
\label{eq:WT1}
i\mathcal{M}_{K}+i\mathcal{M}^{E}_{s}+i\mathcal{M}_{c}
\,\,\to\,\,
i\mathcal{M}^{\mathrm{Regge}}_{K}+(i\mathcal{M}^{E}_{s}
+i\mathcal{M}_{c})(t-M^{2}_{K})\mathcal{D}_{K}
=i\mathcal{M}^{\mathrm{Regge}}_{K}+i\bar{\mathcal{M}}^{E}_{s}
+i\bar{\mathcal{M}}_{c}.
\end{equation}
It is easy to show that Eq.~(\ref{eq:WT1}) satisfies the gauge invariance: $k_{1}\cdot(i\mathcal{M}^{\mathrm{Regge}}_{K}+i\bar{\mathcal{M}}^{E}_{s}+i\bar{\mathcal{M}}_{c})=0$.
Considering that the Reggeized propagators work appropriately for $(s,|t|)\to(\infty, 0)$ and assuming that the Regge contributions survive even in the low-energy region $(s,|t|)\to(s_{\mathrm{threshold}}, \mathrm{finite})$, it is natural to expect a smooth interpolation between the two regions. The meson propagators are supposed to shift smoothly from $\mathcal{D}_{K,K^{*}}$ for $s\gtrsim s_{\mathrm{Regge}}$ to the usual Feynman propagators for $s\lesssim s_{\mathrm{Regge}}$. Here, $s_{\mathrm{Regge}}$ indicates a certain value of $s$ beyond which the Regge contributions become effective. A similar consideration applies to $|t|$, and we can set $t_{\mathrm{Regge}}$ as well. Hence, as a trial, we parametrize the smooth interpolation by redefining the form factors in the relevant invariant amplitudes in Eq.~(\ref{eq:AMP}) as follows:
\begin{equation}
\label{eq:R}
F_{c,v}\to\bar{F}_{c,v}\equiv
\left[(t-M^{2}_{K,K^{*}})\mathcal{D}_{K,K^{*}}\right]
\mathcal{R}+F_{c,v}(1-\mathcal{R}),\,\,\,\,\mathcal{R}=\mathcal{R}_{s}\mathcal{R}_{t},
\end{equation}
where
\begin{equation}
\label{eq:RSRT}
\mathcal{R}_{s}=
\frac{1}{2}
\left[1+\tanh\left(\frac{s-s_{\mathrm{Regge}}}{s_{0}} \right) \right],\,\,\,\,
\mathcal{R}_{t}=
1-\frac{1}{2}
\left[1+\tanh\left(\frac{|t|-t_{\mathrm{Regge}}}{t_{0}} \right) \right].
\end{equation}
Here, $s_{0}$ and $t_{0}$ denote free parameters that make the arguments of $\tanh$ in Eq.~(\ref{eq:RSRT}) dimensionless. It is easy to see that $\mathcal{R}_{s}$ goes to unity as $s\to\infty$ and to zero as $s\to0$ around $s_{\mathrm{Regge}}$, while $\mathcal{R}_{t}$ goes to zero as $|t|\to\infty$ and to unity as $|t|\to0$ around $t_{\mathrm{Regge}}$. These asymptotic behaviors ensure that $\bar{F}_{c,v}$ in Eq.~(\ref{eq:R}) interpolates smoothly between the two regions, as shown in Figure~\ref{FIG2}, where we plot $\mathcal{R}$ as a function of $s$ and $|t|$: $\mathcal{R}$ approaches unity as $s\to\infty$ and $|t|\to0$ for the arbitrary choice $(s,t)_{\mathrm{Regge}}=(s,t)_{0}=(1,1)\,\mathrm{GeV}^{2}$. We will determine the parameters $(s,t)_{\mathrm{Regge}}$ and $(s,t)_{0}$ from experimental data in the next Section.
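The asymptotics of the interpolating factor $\mathcal{R}$ can be checked directly. A minimal sketch with the illustrative choice $(s,t)_{\mathrm{Regge}}=(s,t)_{0}=(1,1)\,\mathrm{GeV}^{2}$ used in Fig.~\ref{FIG2}:

```python
import math

# Sketch of the interpolating factor R = R_s * R_t of Eq. (RSRT) with the
# illustrative choice (s,t)_Regge = (s,t)_0 = (1,1) GeV^2 used in Fig. 2.
S_REGGE = T_REGGE = S0 = T0 = 1.0

def r_s(s):
    return 0.5 * (1.0 + math.tanh((s - S_REGGE) / S0))

def r_t(abs_t):
    return 1.0 - 0.5 * (1.0 + math.tanh((abs_t - T_REGGE) / T0))

def r(s, abs_t):
    return r_s(s) * r_t(abs_t)

# R is largest in the Regge regime (large s, small |t|) and vanishes far
# from it (small s or large |t|):
print(r(100.0, 0.0) > 0.85, r(0.0, 0.0) < 0.2, r(100.0, 10.0) < 1e-4)
```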
\section{Numerical Results}
Here we present our numerical results. First, we define two models in Table~\ref{table1}.
Model A represents our full calculation, including the Regge contributions.
Model B includes only the Born diagrams and the nucleon-resonance contribution from $D_{13}$.
Throughout this article, the numerical results from model A are represented by solid lines and those from model B by dashed lines.
There are several free parameters in our model. One is the vector-kaon coupling constant $g_{K^{*}N\Lambda^{*}}$. Its value was determined from the unitarized chiral model~\cite{Hyodo:2006uw}.
It is considerably smaller than $g_{KN\Lambda^{*}}$. Indeed, the experimental data from Ref.~\cite{Muramatsu:2009zp} showed that the $K^{*}$-exchange contribution must be far smaller than that of the contact term. As discussed in Refs.~\cite{Nam:2005uq,Nam:2006cx,Nam:2009cv}, the effect of the $K^{*}$-exchange contribution, for various choices of $g_{K^{*}N\Lambda^{*}}$, turns out not to be essential.
In contrast, we note that the $K^{*}$ exchange contributes significantly to the photon-beam asymmetry ($\Sigma$)~\cite{Nam:2006cx}.
The experimental data for $\Sigma$ are given in Refs.~\cite{Kohri:2009xe,Muramatsu:2009zp}. For $\theta\lesssim60^{\circ}$, the value of $\Sigma$ was estimated to be $-0.01\pm0.07$, indicating that $g_{K^{*}N\Lambda^{*}}$ is small~\cite{Muramatsu:2009zp} compared with that given in Ref.~\cite{Nam:2006cx}. In Ref.~\cite{Kohri:2009xe}, it was also found that $-0.1\lesssim\Sigma\lesssim0.1$ for $1.75\,\mathrm{GeV}\le E_{\gamma}\le2.4\,\mathrm{GeV}$, and this result supports $g_{K^{*}N\Lambda^{*}}\ll1$.
Taking into account all of these experimental and theoretical results, it is safe to set $g_{K^{*}N\Lambda^{*}}\approx0$.
A similar hybrid approach, based on the quark-gluon string mechanism in the high-energy region, was taken in Ref.~\cite{Toki08}. There it was also found that
the $K^{*}$ exchange in the $t$-channel is very small compared with the $K$ exchange.
Thus, we will drop the $K^{*}$-exchange contribution from now on. Similarly, as shown in Ref.~\cite{Nam:2005uq}, different choices of the anomalous magnetic moment of $\Lambda^{*}$, $\kappa_{\Lambda^{*}}$, do not make any significant numerical impact, since the $u$-channel contribution is suppressed by the form factor $F_{u}$ in Eq.~(\ref{eq:form}). Hence, we set $\kappa_{\Lambda^{*}}$ to zero hereafter.
\begin{table}[b]
\begin{tabular}{c||c|c|c|c|c|c|c}
&$s$ channel&$u$ channel&$t_{K}$ channel&$t_{K^{*}}$ channel& contact term&$D_{13}$ resonance&Regge\\
\hline
Model A&$i\bar{\mathcal{M}}^{E}_{s}$,$i\mathcal{M}^{M}_{s}$&$i\mathcal{M}^{M}_{u}$&$i\mathcal{M}^{\mathrm{Regge}}_{K}$&$i\mathcal{M}^{\mathrm{Regge}}_{K^{*}}$&$i\bar{\mathcal{M}}_{c}$&$i\mathcal{M}^{*}_{s}$
&$\mathcal{R}=\mathcal{R}_{s}\mathcal{R}_{t}$\\
Model B&$i\mathcal{M}^{E}_{s}$,$i\mathcal{M}^{M}_{s}$&$i\mathcal{M}^{M}_{u}$&$i\mathcal{M}_{K}$&$i\mathcal{M}_{K^{*}}$&$i\mathcal{M}_{c}$&$i\mathcal{M}^{*}_{s}$&$\mathcal{R}=0$
\end{tabular}
\caption{Relevant amplitudes in models A and B.}
\label{table1}
\end{table}
\subsection{Angular dependence}
We first study $d\sigma/dt$ for the low- and high-energy experiments~\cite{Barber:1980zv,Boyarski:1970yc}, performed at $E_{\gamma}=(2.4\sim4.8)$ GeV~\cite{Barber:1980zv} and $E_{\gamma}=11$ GeV~\cite{Boyarski:1970yc}, respectively. In (A) of Figure~\ref{FIG3}, the numerical results for $d\sigma/dt$ for models A and B are almost identical except in the very small $|t|$ region, and reproduce the data qualitatively well. This observation indicates that the Regge contributions are
very small in the low-energy region, as expected. In contrast, as shown in (B) of Figure~\ref{FIG3} with the data taken from Ref.~\cite{Boyarski:1970yc} (high energy), the results of models A and B are very different. Note that the sudden change in the data around $|t|\approx0.2\,\mathrm{GeV}^{2}$ is reproduced by model A but not by model B. This shows that the smooth interpolation of the Regge contributions given by Eq.~(\ref{eq:R}) is necessary to explain the experimental data.
The parameters $(s,t)_{\mathrm{Regge}}$ and $(s,t)_{0}$ employed for drawing the curves in Figure~\ref{FIG3} are listed in Table~\ref{table2}. Here, we have chosen $s_{\mathrm{Regge}}=9\,\mathrm{GeV}^{2}$, which means that the Regge contributions become important for $\sqrt{s}>3$ GeV. The value of $t_{\mathrm{Regge}}$ is rather arbitrary; we set $t_{\mathrm{Regge}}=0.1\,\mathrm{GeV}^{2}$ because the physical situation changes drastically around this value. The other parameters, $(s,t)_{0}$, have been fixed to reproduce the data. We will adopt these values hereafter. Although the data of Ref.~\cite{Boyarski:1970yc} in the vicinity of $|t|\approx0$ show a decrease with respect to $|t|$, we did not fine-tune our parameters for it, because of the large experimental errors and the qualitative nature of this work. We also find that the $D_{13}$ contribution is almost negligible as long as we use the input discussed in the previous Section.
Now we take a closer look at the bump structure around $|t|\approx0.2\,\mathrm{GeV}^{2}$ shown in (B) of Figure~\ref{FIG3}. Since the angular dependences of the cross section are largely affected by the common form factor, it is instructive to show $F_{c}$ (left) and $\bar{F}_{c}$ (right) as functions of $s$ and $t$ in Figure~\ref{FIG4}, with the parameters listed in Table~\ref{table2}. For small $|t|\lesssim0.2\,\mathrm{GeV}^{2}$ and large $s\gtrsim4\,\mathrm{GeV}^{2}$, the difference between the two form factors becomes obvious: $F_{c}$ increases monotonically with respect to $|t|$, while $\bar{F}_{c}$ shows a complicated structure as we approach the small-$|t|$ region. Moreover, we can clearly see a bump-like structure around $|t|\approx(0.1\sim0.2)\,\mathrm{GeV}^{2}$ in the large-$s$ region. This behavior of $\bar{F}_{c}$ causes the bump observed in the results for $d\sigma/dt$ depicted in Figure~\ref{FIG3}. In other words, this structure is indeed due to the Regge contributions in the high-$E_{\gamma}$ region. Hence we conclude that the present reaction process is still dominated by the contact-term contribution, as in Refs.~\cite{Nam:2005uq,Nam:2006cx}, and the Regge contributions modify it in the vicinity of $|t|\lesssim0.2\,\mathrm{GeV}^{2}$ beyond the resonance region.
\begin{table}[b]
\begin{tabular}{c|c|c|c|c}
$\Lambda_{\mathrm{A,B}}$
&$s_{\mathrm{Regge}}$&$s_{0}$&$t_{\mathrm{Regge}}$&$t_{0}$\\
\hline
$675$ MeV&$3.0\,\mathrm{GeV}^{2}$&
$1.0\,\mathrm{GeV}^{2}$&
$0.1\,\mathrm{GeV}^{2}$&
$0.08\,\mathrm{GeV}^{2}$
\end{tabular}
\caption{Cutoff mass for models A and B, and input parameters for the function $\mathcal{R}$ in Eqs.~(\ref{eq:R}) and (\ref{eq:RSRT}).}
\label{table2}
\end{table}
In Figure~\ref{FIG5}, we depict the numerical results for
$d\sigma/dt$ as a function of $-t$ for the low- (A) and high-energy
(B) regions for a wider range of energies. One finds that the Regge
contributions become visible beyond $E_{\gamma}\approx4$ GeV in the
small-$|t|$ region. As the photon energy increases, bumps emerge
at $|t|=(0.1\sim0.2)\,\mathrm{GeV}^{2}$, indicating the effects from
the Regge contributions. In Figure~\ref{FIG6}, we plot
$d\sigma/d\Omega$ as a function of $\theta$. As given in (A), we
reproduce the experimental data qualitatively well for
$E_{\gamma}=(1.9\sim2.4)$ GeV, which represents the photon-energy
range of the LEPS collaboration~\cite{Muramatsu:2009zp}, showing
only negligible effects from the Regge contributions. The notations
for the data correspond to the channels ($K\bar{K}$, $KN$, and
$\bar{K}N$) in the $\gamma N\to K\bar{K}N$ reaction process and to
the analysis methods (SB: side band, MC: Monte Carlo). (B) of
Figure~\ref{FIG6} shows the high-energy behavior of $d\sigma/d\Omega$
for $E_{\gamma}=(2.9\sim9.9)$ GeV. The Regge contributions become
obvious beyond $E_{\gamma}\approx4$ GeV, and the bump in the
vicinity of $\theta\approx10^{\circ}$ becomes narrower as
$E_{\gamma}$ increases. The result of model B is not shown here
because it is almost identical to that of model A.
\subsection{Energy dependence}
In Figure~\ref{FIG7}, we present the numerical results for the total cross section as a function of $E_{\gamma}$ from threshold to $E_{\gamma}=5$ GeV. We observe only a small deviation between models A and B beyond $E_{\gamma}\approx4$ GeV, consistent with the angular dependences shown in the previous subsection. Evidently, some unknown contributions appear at $E_{\gamma}\approx3$ GeV and $4$ GeV in the experimental data, which may correspond to nucleon or hyperon resonances not yet measured experimentally. For instance, at $E_{\gamma}\approx3$ GeV, corresponding to $\sqrt{s}\approx(2.5\sim2.6)$ GeV, $N^{*}(2600,11/2^{-})$ has been reported in Ref.~\cite{Amsler:2008zzb} with a $(***)$ confirmation. However, the theoretical estimate of its coupling strength to $\Lambda^{*}$ is very small~\cite{Capstick:1998uh,Capstick:2000qj}.
In Figure~\ref{FIG8}, we plot $d\sigma/d\Omega$ as a function of $E_{\gamma}$ for $\theta=(120\sim150)^{\circ}$ (A) and for $\theta=(150\sim180)^{\circ}$ (B). The results of models A and B coincide with each other because the Regge contribution is negligible in the low-energy region; hence we plot only the numerical results of model A. For $\theta=(120\sim150)^{\circ}$ (A), the theoretical result reproduces the data qualitatively well, whereas it deviates considerably from the experimental data for $\theta=(150\sim180)^{\circ}$ (B). This deviation may signal a strong backward enhancement caused by unknown $u$-channel contributions, which are not included in the present work. This strong backward enhancement is consistent with the increase in $d\sigma/d\Omega$ for $\theta=(100\sim180)^{\circ}$ shown in (A) of Figure~\ref{FIG8}~\cite{Muramatsu:2009zp}. Although we do not show explicit results, we verified that, if we employ a simple Breit-Wigner form for a $u$-channel hyperon-resonance contribution as a trial, the increase in the backward region shown in (B) of Figure~\ref{FIG8} can be reproduced. However, it is rather difficult to reproduce the data of $d\sigma/d\Omega$ in the backward direction simultaneously. Since we lack information on the interaction structure of the trial $u$-channel contribution, we leave this task for future work.
\subsection{Beam asymmetry}
The photon-beam asymmetry defined in Eq.~(\ref{eq:BA}) can be measured in experiments using a linearly polarized photon beam:
\begin{equation}
\label{eq:BA}
\Sigma=\frac{\frac{d\sigma}{d\Omega}_{\perp}
-\frac{d\sigma}{d\Omega}_{\parallel}}
{\frac{d\sigma}{d\Omega}_{\perp}
+\frac{d\sigma}{d\Omega}_{\parallel}},
\end{equation}
where the subscripts $\perp$ and $\parallel$ denote the directions of polarization perpendicular and parallel to the reaction plane, respectively. Here, the reaction plane is defined as the $y$-$z$ plane, on which the incident photon, moving along the $z$ direction, and the outgoing kaon reside. In (A) of Figure~\ref{FIG9}, we show $\Sigma$ as a function of $\theta$ for $E_{\gamma}=(1.9\sim7.9)$ GeV. The low-energy behavior is consistent with the previous work~\cite{Nam:2006cx}. As the energy increases, a deeper valley appears around $\theta\approx100^{\circ}$. This behavior is mainly due to the $K$-exchange contribution, since it is enhanced with $E_{\gamma}$ and contains a term $\propto k_{2}\cdot\epsilon$, which vanishes for $d\sigma/d\Omega_{\perp}$ and remains finite for $d\sigma/d\Omega_{\parallel}$, resulting in $\Sigma\to-1$ as understood from Eq.~(\ref{eq:BA}). Hence, unlike the angular and energy dependences, the photon-beam asymmetry is largely affected by the $K$-exchange contribution. Interestingly, models A and B produce almost the same results for all energies; therefore, we will plot only the numerical results of model A hereafter. Considering that the $s$- and $u$-channel contributions are strongly suppressed in the present framework~\cite{Nam:2005uq,Nam:2006cx,Nam:2009cv}, and taking into account the gauge invariance of the invariant amplitude, the amplitude can be simplified as
\begin{equation}
\label{eq:SIMAMP}
i\mathcal{M}_{\mathrm{total}}\approx(i\mathcal{M}_{c}+i\mathcal{M}^{E}_{s}+i\mathcal{M}_{t})\bar{F}_{c}.
\end{equation}
Hence, in general, the form factor $\bar{F}_{c}$ factorizes from the amplitude. In quantities given by ratios of the squared amplitude $\sim|\mathcal{M}_{\mathrm{total}}|^{2}$, its effect cancels. This cancelation occurs in the photon-beam asymmetry in Eq.~(\ref{eq:BA}).
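The definition in Eq.~(\ref{eq:BA}) and this cancelation can be illustrated with a small numerical sketch (schematic Python; the amplitude and cross-section values below are arbitrary placeholders, not model output):

```python
def beam_asymmetry(dsig_perp, dsig_para):
    """Eq. (BA): Sigma = (perp - para) / (perp + para)."""
    return (dsig_perp - dsig_para) / (dsig_perp + dsig_para)

def sigma_from_amplitudes(m_perp, m_para, common_ff):
    """Sigma built from squared amplitudes |F_c * M|^2."""
    return beam_asymmetry((common_ff * m_perp) ** 2,
                          (common_ff * m_para) ** 2)

# Vanishing perpendicular piece (K exchange near theta ~ 100 deg): Sigma -> -1
print(beam_asymmetry(0.0, 1.0))                          # -1.0
# A common form factor drops out of the ratio:
print(round(sigma_from_amplitudes(0.4, 0.9, 1.0), 6))    # -0.670103
print(round(sigma_from_amplitudes(0.4, 0.9, 0.01), 6))   # -0.670103
```

Rescaling the overall form factor by two orders of magnitude leaves $\Sigma$ untouched, which is the numerical counterpart of the cancelation argument above.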
In Figure~\ref{FIG9}, we show the experimental data from Ref.~\cite{Muramatsu:2009zp}, in which $\Sigma$ was estimated to be $-0.01\pm0.07$ for $\theta=(0\sim60)^{\circ}$ in the LEPS photon-energy range $E_{\gamma}=(1.75\sim2.4)$ GeV. The numerical results are in good agreement with the data. This provides strong experimental support for the assumption $g_{K^{*}N\Lambda^{*}}\ll1$ in the proton-target case, as mentioned already.
In (B) of Figure~\ref{FIG9}, we draw $\Sigma$ as a function of $E_{\gamma}$ for $\theta=45^{\circ}$ and $135^{\circ}$, and $\bar{\Sigma}$ defined as~\cite{Nam:2006cx},
\begin{equation}
\label{eq:IBA}
\bar{\Sigma}(E_{\gamma})=\frac{1}{2}\int^{\pi}_{0} \Sigma(\theta,E_{\gamma})\,\sin\theta\,d\theta,
\end{equation}
where the factor $1/2$ is for normalization. As $E_{\gamma}$ increases, the absolute values of $\Sigma$ and $\bar{\Sigma}$ become larger, since the $K$-exchange contribution is enhanced with respect to $E_{\gamma}$, as discussed above.
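The angle-averaged asymmetry of Eq.~(\ref{eq:IBA}) can be evaluated numerically, e.g. with a simple trapezoidal rule (illustrative Python; `sigma_of_theta` stands for any model curve $\Sigma(\theta,E_{\gamma})$ at fixed $E_{\gamma}$):

```python
import math

def sigma_bar(sigma_of_theta, n=2000):
    """Eq. (IBA): (1/2) * int_0^pi Sigma(theta) sin(theta) dtheta, trapezoid rule."""
    h = math.pi / n
    total = 0.0
    for i in range(n + 1):
        theta = i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * sigma_of_theta(theta) * math.sin(theta)
    return 0.5 * h * total

# The weight sin(theta)/2 integrates to 1, so a constant Sigma is reproduced
print(round(sigma_bar(lambda th: -0.3), 4))   # -0.3
```

The normalization factor $1/2$ is what makes the $\sin\theta$ weight integrate to unity, so $\bar{\Sigma}$ reduces to $\Sigma$ for an angle-independent asymmetry.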
\subsection{Polarization-transfer coefficients}
In this subsection, we present the polarization-transfer coefficients $C_{x}$ and $C_{z}$ for $\Lambda^{*}$ photoproduction. $C_{x}$ and $C_{z}$ are defined as the spin asymmetries along the polarization direction of the recoil baryon for a circularly polarized photon beam. Physically, these quantities indicate how much of the initial helicity is transferred to the recoil baryon polarized in a certain direction. First, we define the polarization-transfer coefficients in the $(x',y',z')$ coordinate system, similarly to those for spin-$1/2$ hyperon photoproduction as in Refs.~\cite{McNabb:2003nf,Anisovich:2007bq}:
\begin{equation}
\label{eq:CXZ}
C_{x',|S_{x'}|}=
\frac{\frac{d\sigma}{d\Omega}_{r,0,+S_{x'}}
-\frac{d\sigma}{d\Omega}_{r,0,-S_{x'}}}
{\frac{d\sigma}{d\Omega}_{r,0,+S_{x'}}
+\frac{d\sigma}{d\Omega}_{r,0,-S_{x'}}},\,\,\,\,
C_{z',|S_{z'}|}=
\frac{\frac{d\sigma}{d\Omega}_{r,0,+S_{z'}}
-\frac{d\sigma}{d\Omega}_{r,0,-S_{z'}}}
{\frac{d\sigma}{d\Omega}_{r,0,+S_{z'}}
+\frac{d\sigma}{d\Omega}_{r,0,-S_{z'}}},
\end{equation}
where the subscripts $r$, $0$, and $\pm S_{x',z'}$ stand for the right-handed photon polarization, the unpolarized target nucleon, and the polarization of the recoil baryon along the $x'$- or $z'$-axis, respectively. Since the photon helicity is fixed to be $+1$ here, $C_{x'}$ and $C_{z'}$ measure the polarization transfer to the recoil baryon. Moreover, $C_{x'}$ and $C_{z'}$ behave as the components of a three-vector, so that they can be rotated to the $(x,y,z)$ coordinate system as:
\begin{equation}
\label{eq:ro}
\left(\begin{array}{c}
C_{x}\\C_{z}\end{array}\right)
=\left(
\begin{array}{cc}
\cos{\theta_{K}}&\sin{\theta_{K}}\\
-\sin{\theta_{K}}&\cos{\theta_{K}}
\end{array}
\right)\left(\begin{array}{c}
C_{x'}\\C_{z'}\end{array}\right),
\end{equation}
where the $(x,y,z)$ coordinate system is defined such that the incident photon momentum is aligned with the $z$ direction. Unlike the usual spin-$1/2$ baryon photoproductions, we have four different polarization-transfer coefficients, $C_{x,1/2}$, $C_{z,1/2}$, $C_{x,3/2}$, and $C_{z,3/2}$, due to the total spin states of $\Lambda^{*}$. Note that, in terms of helicity conservation, $C_{(x,z),1/2}$ and $C_{(x,z),3/2}$ should be zero and unity, respectively, in the collinear limit ($\theta=0$ or $180^{\circ}$). More detailed discussions are given in Refs.~\cite{Nam:2009cv,Anisovich:2007bq,Schumacher:2008xw,Fasano:1992es,Artru:2006xf}.
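The transformation in Eq.~(\ref{eq:ro}) is a plain two-dimensional rotation by the kaon angle $\theta_{K}$ and can be sketched as follows (illustrative Python; the coefficient values are arbitrary):

```python
import math

def rotate_to_lab(c_xp, c_zp, theta_K):
    """Eq. (ro): rotate (C_x', C_z') by theta_K to obtain (C_x, C_z)."""
    c, s = math.cos(theta_K), math.sin(theta_K)
    return (c * c_xp + s * c_zp, -s * c_xp + c * c_zp)

# theta_K = 0: the two frames coincide
print(rotate_to_lab(0.2, 0.9, 0.0))                    # (0.2, 0.9)
# The rotation preserves C_x^2 + C_z^2
cx, cz = rotate_to_lab(0.2, 0.9, math.radians(30.0))
print(round(cx * cx + cz * cz, 6))                     # 0.85
```

Because the rotation is orthogonal, $C_{x}^{2}+C_{z}^{2}=C_{x'}^{2}+C_{z'}^{2}$ at every angle, so the frame choice affects only how the transfer is decomposed, not its overall magnitude.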
In Figure~\ref{FIG10}, we depict the results for the polarization-transfer coefficients as functions of $\theta$ for $E_{\gamma}=2.4$ GeV (A), $2.9$ GeV (B), $3.4$ GeV (C), $3.9$ GeV (D), $4.4$ GeV (E), and $4.9$ GeV (F). Similarly to the photon-beam asymmetry, the Regge contributions are washed away in these physical quantities, for the same reason, as understood from Eq.~(\ref{eq:CXZ}). Therefore, the results of models A and B show only negligible differences. This is quite different from spin-$1/2$ $\Lambda(1116)$ photoproduction, in which a simple Regge model described the experimental data qualitatively well~\cite{Bradford:2006ba}. The difference between the $\Lambda(1116)$ and $\Lambda^{*}$ photoproductions can be understood by the contact-term dominance in the latter. Moreover, the effects of resonances are of greater importance in $\Lambda(1116)$ photoproduction~\cite{Janssen:2001pe} than in that of $\Lambda^{*}$~\cite{Nam:2009cv}.
Consequently, a similar cancelation is unlikely to occur in $\Lambda(1116)$ photoproduction,
owing to the complicated interference between the Born and resonance contributions.
As discussed in Ref.~\cite{Nam:2009cv}, the shapes of the polarization-transfer coefficients are basically generated by the contact-term contribution, which provides symmetric, oscillating curves around zero and unity~\cite{Nam:2009cv}. These symmetric shapes are shifted into those shown in Figure~\ref{FIG10} by the $\theta$-dependent $K$-exchange contribution, which provides complicated structures around $\cos\theta\approx0.5$. Interestingly, the results show that the shapes of the curves remain almost the same for all values of $E_{\gamma}$. Clearly visible differences start to appear for $E_{\gamma}\ge3.9$ GeV in the vicinity of $\cos\theta=-0.5$. We show the results for the polarization-transfer coefficients as functions of $E_{\gamma}$ in Figure~\ref{FIG11} for two different angles, $\theta=45^{\circ}$ (A) and $180^{\circ}$ (B). Again, the Regge contributions are negligible, as expected.
\subsection{$K^{-}$-angle distribution function}
Our last topic is the $K^{-}$-angle distribution function~\cite{Barber:1980zv,Muramatsu:2009zp}, i.e., the angular distribution of the $K^{-}$ from the decay $\Lambda^{*}\to K^{-}p$ in the $t$-channel helicity frame, the Gottfried-Jackson frame~\cite{Schilling:1969um}. From this function, one can tell which meson-exchange contribution dominates the production process. According to spin statistics, the function becomes $\sin^{2}\theta_{K^{-}}$ for $\Lambda^{*}$ in $S_{z}=\pm3/2$, whereas it is $\frac{1}{3}+\cos^{2}\theta_{K^{-}}$ for $\Lambda^{*}$ in $S_{z}=\pm1/2$. As in Refs.~\cite{Muramatsu:2009zp,Barrow:2001ds}, considering all the possible contributions, we can parametrize the function as
\begin{equation}
\label{eq:DF}
\mathcal{F}_{K^{-}}
=A\sin^{2}\theta_{K^{-}}+B\left(\frac{1}{3}+\cos^{2}\theta_{K^{-}}\right),
\end{equation}
where $\mathcal{F}_{K^{-}}$ denotes the distribution function for convenience. The coefficients $A$ and $B$ stand for the strengths of the respective spin states of $\Lambda^{*}$, with the normalization $A+B=1$. In principle, there could be other hyperon contributions besides $\Lambda^{*}$, so that one could add an extra term to Eq.~(\ref{eq:DF}) representing interference effects. However, we ignore this issue here for simplicity.
Before going further, it is worth mentioning the experimental status of the quantity at hand. Note that each experiment provided a somewhat different result for $\mathcal{F}_{K^{-}}$. The LAMP2 collaboration~\cite{Barber:1980zv} showed that the $K^{-}$ decays mostly from $\Lambda^{*}$ in the $S_{z}=\pm3/2$ state, with a curve of $\mathcal{F}_{K^{-}}$ close to $\sin^{2}\theta_{K^{-}}$ for $\theta=(20\sim40)^{\circ}$ ($A=0.880\pm0.076$, taken from Ref.~\cite{Muramatsu:2009zp}). On the contrary, using data on electroproduction of $\Lambda^{*}$, the CLAS collaboration found rather complicated curves for $\mathcal{F}_{K^{-}}$, more or less close to that for the $S_{z}=\pm1/2$ state ($A=0.446\pm0.038$~\cite{Muramatsu:2009zp})~\cite{Barrow:2001ds}. The most recent experiment, performed by the LEPS collaboration, provided $\mathcal{F}_{K^{-}}$ for two different $\theta$-angle regions, $\theta=(0\sim80)^{\circ}$ and $\theta=(90\sim180)^{\circ}$. From their results, $\mathcal{F}_{K^{-}}$ looks similar to that for the $S_{z}=\pm3/2$ state in the backward region ($A=0.631\pm0.106$ via the side-band method~\cite{Muramatsu:2009zp}), whereas it shifts to a considerable mixture of the two states in the forward one ($A=0.520\pm0.063$~\cite{Muramatsu:2009zp}).
Here we provide our estimates of $\mathcal{F}_{K^{-}}$. Since the outgoing kaon ($K^{+}$) carries no spin, all the photon helicity is transferred to $\Lambda^{*}$ through the particle exchanged in the $t$-channel. Hence, it is natural to assume that the polarization-transfer coefficients in the $z$ direction are related to the strength coefficients $A$ and $B$. Therefore, we express $A$ and $B$ in terms of $C_{z,1/2}$ and $C_{z,3/2}$ as follows:
\begin{equation}
\label{eq:AAA}
A=\frac{C_{z,3/2}}{C_{z,1/2}+C_{z,3/2}},\,\,\,\,
B=\frac{C_{z,1/2}}{C_{z,1/2}+C_{z,3/2}}.
\end{equation}
In other words, $A$ denotes the strength of the $\Lambda^{*}$ in its $S_{z}=\pm3/2$ state, and $B$ that for $S_{z}=\pm1/2$. In Figure~\ref{FIG12}, we depict $\mathcal{F}_{K^{-}}$ as a function of $\cos\theta$ and $\cos\theta_{K^{-}}$ at $E_{\gamma}=2.25$ GeV (first row), $2.35$ GeV (second row), and $4.25$ GeV (third row) for $\cos\theta=(0\sim1)$ (right column) and $\cos\theta=(0\sim-1)$ (left column). In the figure, we use the notation $\theta_{K^{+}}=\theta$. In general, we observe complicated structures in the forward region, whereas the backward region shows simple sine-like curves (actually $\propto\sin^{2}\theta_{K^{-}}$) for all photon energies. In the vicinity of $\theta=0$, there is an area in which $\mathcal{F}_{K^{-}}\propto\sin^{2}\theta_{K^{-}}$; this area shrinks as $E_{\gamma}$ increases. Just after this region, there follows a second region where $\mathcal{F}_{K^{-}}\propto1+\cos^{2}\theta_{K^{-}}$; again, this second region becomes narrower as $E_{\gamma}$ increases. Beyond these regions and up to $\theta\approx180^{\circ}$, $\mathcal{F}_{K^{-}}$ behaves as $\sin^{2}\theta_{K^{-}}$. From these observations, we conclude that the shape of $\mathcal{F}_{K^{-}}$ depends strongly on the value of $\cos\theta$ in the forward region, but is insensitive to it in the backward one. In other words, unless we specify $\theta$ in the forward region, the shape of $\mathcal{F}_{K^{-}}$ can hardly be determined.
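Equations~(\ref{eq:DF}) and (\ref{eq:AAA}) can be sketched numerically as follows (illustrative Python; the coefficient values are placeholders rather than model output):

```python
import math

def strength_A(c_z_half, c_z_three_half):
    """Eq. (AAA): A = C_{z,3/2} / (C_{z,1/2} + C_{z,3/2}); B = 1 - A."""
    return c_z_three_half / (c_z_half + c_z_three_half)

def F_K(theta_Km, A):
    """Eq. (DF): A sin^2(theta) + (1 - A)(1/3 + cos^2(theta)), with B = 1 - A."""
    return (A * math.sin(theta_Km) ** 2
            + (1.0 - A) * (1.0 / 3.0 + math.cos(theta_Km) ** 2))

# Pure S_z = +-3/2 (A = 1): the distribution vanishes along the decay axis
print(F_K(0.0, 1.0))               # 0.0
# Pure S_z = +-1/2 (A = 0): maximal along the decay axis, 1/3 + 1 = 4/3
print(round(F_K(0.0, 0.0), 4))     # 1.3333
```

The two limiting curves are what the LAMP2, CLAS, and LEPS analyses quoted above are distinguishing; intermediate $A$ interpolates between them.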
In (A) of Figure~\ref{FIG13}, $\mathcal{F}_{K^{-}}$ is plotted as a function of $\cos\theta_{K^{-}}$ for $E_{\gamma}=2.25$ GeV, $3.25$ GeV, and $4.25$ GeV at $\theta=45^{\circ}$ and $135^{\circ}$. In the backward region, represented by $\theta=135^{\circ}$, the curves for $\mathcal{F}_{K^{-}}$ are similar to each other, behaving as $\sim\sin^{2}\theta_{K^{-}}$, as expected from Figure~\ref{FIG12}. On the contrary, they are quite different in the forward region, represented by $\theta=45^{\circ}$, depending on $E_{\gamma}$. This can be understood easily from Figure~\ref{FIG12}: curves proportional to $\sin^{2}\theta_{K^{-}}$ and to $\frac{1}{3}+\cos^{2}\theta_{K^{-}}$ are mixed, and the portion of each contribution depends on $E_{\gamma}$. In (B), we compare the numerical result for $E_{\gamma}=3.8$ GeV at $\theta=20^{\circ}$ with the experimental data taken from Ref.~\cite{Barber:1980zv} for $E_{\gamma}=(2.8\sim4.8)$ GeV and $\theta=(20\sim40)^{\circ}$. We normalize the experimental data to the numerical result by matching them approximately at $\theta_{K^{-}}=90^{\circ}$. Theory and experiment are in qualitative agreement, with $\mathcal{F}_{K^{-}}\propto\sin^{2}\theta_{K^{-}}$. Although we do not show it explicitly, the theoretical result for $\mathcal{F}_{K^{-}}$ changes drastically around $\theta=25^{\circ}$: at $\theta\approx30^{\circ}$, the curve becomes $\sim\frac{1}{3}+\cos^{2}\theta_{K^{-}}$. This sudden change is consistent with the second row of Figure~\ref{FIG12}.
Similarly, in (C) and (D) we show the comparisons for $\theta=45^{\circ}$ and $135^{\circ}$, respectively, for $E_{\gamma}=2.25$ GeV with the data of Ref.~\cite{Muramatsu:2009zp} for $E_{\gamma}=(1.75\sim2.4)$ GeV and $\theta=(0\sim180)^{\circ}$. Again, we normalized the experimental data to the numerical result in the backward-scattering region (D), as done above, and then used the same normalization for the forward-scattering region (C). As shown in (C), experiment and theory start to deviate from each other beyond $\cos\theta_{K^{-}}\approx-0.2$. In Ref.~\cite{Muramatsu:2009zp}, it was argued that a small destructive interference caused by the $K^{*}$-exchange contribution could explain the experimental data shown in (C). However, this is unlikely, since the $K^{*}$ exchange gives only a negligible effect on $C_{z,1/2}$ and $C_{z,3/2}$~\cite{Nam:2009cv}. Hence, we consider that the large deviation in (C) may come from the interference between $\Lambda^{*}$ and other hyperon contributions, which are not taken into account in the present work. In the backward region, $\mathcal{F}_{K^{-}}$ shows a curve $\sim\sin^{2}\theta_{K^{-}}$, and the experimental data behave similarly. We list numerical values of $A$, calculated using Eq.~(\ref{eq:AAA}), in Table~\ref{TAB3} for $\theta=45^{\circ}$ and $135^{\circ}$. Although we have not considered the interference, for these specific angles the present theoretical estimates of $A$ are very similar to those given in Ref.~\cite{Muramatsu:2009zp}, as seen in the table.
\begin{table}[b]
\begin{tabular}{c||c|c|c||c|c|c}
&$2.25$ GeV&$3.25$ GeV&$4.25$ GeV
&\cite{Muramatsu:2009zp}
&\cite{Barber:1980zv}
&\cite{Barrow:2001ds}\\
\hline
\hline
$45^{\circ}$&$0.528$ &$0.364$ &$0.299$&$0.520\pm0.063$&$0.880\pm0.076$&$0.446\pm0.038$\\
\hline
$135^{\circ}$&$0.648$ &$0.631$ &$0.611$&$0.631\pm0.106$&-&-
\end{tabular}
\caption{Coefficient $A$ in Eq.~(\ref{eq:DF}). The row and column represent $\theta$ and $E_{\gamma}$, respectively. The values for $A$ for Refs.~\cite{Muramatsu:2009zp,Barber:1980zv,Barrow:2001ds} are taken from Ref.~\cite{Muramatsu:2009zp}.}
\label{TAB3}
\end{table}
\section{Discussion and Summary}
In this article, we have investigated $\Lambda^{*}$ photoproduction off the proton target within a hadronic model including
the tree-level diagrams with the nucleon and certain resonance intermediate states,
as well as the Regge contributions. We computed the energy and angular dependences of the cross section and the polarization observables for the production process, employing the gauge-invariant form-factor scheme developed in Refs.~\cite{Haberzettl:1998eq,Davidson:2001rk,Haberzettl:2006bn}. Taking into account the fact that the Regge contributions become important as $(s,|t|)\to(\infty,0)$ and remain non-negligible even for $(s,|t|)\to(s_{\mathrm{threshold}},\mathrm{finite})$, we adopted an interpolating ansatz to incorporate both physical situations.
With the inclusion of the Regge contributions, we followed the prescription of Refs.~\cite{Corthals:2005ce,Corthals:2006nz,Ozaki:2009wp} to preserve the gauge invariance. The common form factor $F_{c}$ is also replaced by the one modified by the Regge contributions. The important observations in the present work are summarized as follows:
\begin{itemize}
\item All the computed physical observables are compatible with the current experimental data, except for the energy-dependence data in the very backward direction.
\item The Regge contributions are necessary to explain the experimental data such as the angular dependences in the high-energy region.
\item From our results, the Regge contributions become significant for $E_{\gamma}\gtrsim4$ GeV in the forward region, $|t|\lesssim0.2\,\mathrm{GeV}^{2}$.
\item Bump structures appear in the angular dependences of the cross section in the forward regions above $E_{\gamma}\approx4$ GeV, which is due to the Regge contributions.
\item The $K^{*}$-exchange contribution can be ignored rather safely. The $D_{13}$ contribution turns out to be very small with the parameters extracted from current data and the quark model calculation~\cite{Capstick:1998uh}.
\item The polarization observables are insensitive to the Regge contributions because of the cancelation of the relevant form factors. This can be understood by the contact-term dominance, a consequence of gauge invariance.
\item The photon-beam asymmetry $\Sigma$ is largely dominated by the $K$-exchange contribution. The polarization-transfer coefficients are determined by the contact-term contribution and the $\theta$-dependent $K$-exchange effect.
\item The $K^{-}$-angle distribution function $\mathcal{F}_{K^{-}}$ shows a mixture of curves proportional to $\sin^{2}\theta_{K^{-}}$ and $\frac{1}{3}+\cos^{2}\theta_{K^{-}}$ in the forward $K^{+}$-scattering region. In the backward region, it remains almost unchanged over $\theta$, showing a curve proportional to $\sin^{2}\theta_{K^{-}}$ and manifestly indicating the spin-$3/2$ state of $\Lambda^{*}$.
\end{itemize}
The LEPS and CLAS collaborations plan to upgrade their photon
energies up to $E_{\gamma}\approx3$ GeV (LEPS2) and $12$ GeV
(CLAS12, especially for GPD physics), respectively. Although the
upgraded LEPS energy is not high enough to see the Regge
contributions, which start to be effective above
$E_{\gamma}\approx4$ GeV, it is still desirable, because the
low-energy data are important for testing whether the Regge
contributions are absent up to that energy, in comparison with the
present theoretical results. Moreover, their linear- and
circular-polarization data will be welcome~\cite{Kohri}. Since the
angular and polarization observables show significantly different
behaviors with respect to the Regge contributions, measurements of
these quantities beyond $E_{\gamma}\approx4$ GeV would be a crucial
test of our model, whose essence is the contact-term dominance in
terms of gauge invariance. In addition, the theoretical estimates of
the coefficient $A$ of $\mathcal{F}_{K^{-}}$ will be a good guide
for analyzing the experiments.
As mentioned already, there is still room to accommodate unknown $s$- and $u$-channel contributions, which may improve the agreement between the present model and the experimental data to a certain extent. In particular, the $u$-channel physics may play an important role in reproducing the rise in the backward region. Related works are underway and will appear elsewhere.
\section*{Acknowledgment}
The authors are grateful to A.~Hosaka, T.~Nakano, H.~-Ch.~Kim, H.~Kohri, and S.~Ozaki for fruitful discussions. S.i.N. appreciates the technical support from Y.~Kwon. S.i.N. was supported by grant NSC 98-2811-M-033-008 and
C.W.K. was supported by grant NSC 96-2112-M-033-003-MY3 from the National Science Council (NSC) of Taiwan.
The support from National Center for Theoretical Sciences (North) of Taiwan (under the grant number NSC 97-2119-M-002-001) is also acknowledged. The numerical calculations were performed using MIHO at RCNP, Osaka University, Japan.
\section{Introduction}
Peer-to-peer file sharing has been a new paradigm for content
distribution over the Internet for many years. It differs from
the traditional client-server architecture in that after a client
starts downloading a file from one or more servers, it itself
becomes a server, able to serve other users' requests. In
other words, each participating node plays the role of both server
and client at the same time. There is no single bottleneck in the
system; the capacity grows when more nodes participate, resulting in
essentially unlimited scalability. The best example is the highly popular
BitTorrent system~\cite{cohen}. It splits the sharing file into
small blocks. Users download missing blocks from their peers. Once
downloaded, those blocks become available to their peers.
Although many peer-to-peer file sharing systems were developed for
wired networks, they may not be suitable for all kinds of wireless
systems. For example, in wireless ad hoc networks, a packet
typically has to traverse multiple hops from source to destination.
It was shown in \cite{gupta} that per-node
throughput scales as $O(1/\sqrt{n \ln n})$, which drops to
zero for large $n$. Consequently, the multi-hop strategy is
intrinsically unscalable, no matter what protocols are used in the
network layer and above. To maintain scalability, a two-hop relay strategy
was considered in~\cite{gross}. It was shown that with node mobility,
per-node throughput becomes $O(1)$. As a result, system capacity scales linearly
with the number of nodes. This drastic difference motivates the design of many
mobility-assisted data transfer protocols. Some examples are
presented in~\cite{papa2,yuen}.
In this work, we focus on vehicular ad hoc networks (VANET), which consist of cars, trucks,
motorcycles, and all sorts of vehicles on the road.
A major characteristic of VANET is its highly dynamic topology.
Nodes are intermittently connected when they encounter one another
on the road. If traffic density is low, the proportion of time that
a node is connected to another node may be small, which may result
in large delay. On the other hand, the instantaneous transmission
rate can be very high, especially if transmission proceeds only when
two nodes are close to each other. Due to its nature, VANET is
particularly suitable for delay tolerant applications with large
bandwidth requirement. An example is that a content provider allows
its subscribers to download movies, music, or news from an
infostation at the roadside when they pass by, and to exchange
contents among themselves when they encounter one another on the
road. A user can simply run an application program in the background,
without being aware of the download schedule. To facilitate the
development of these applications, a content distribution protocol
is needed. A BitTorrent-like protocol called CarTorrent was proposed
in~\cite{nandan}. Two other protocols were designed in~\cite{ahmed,lee} based on the
idea of network coding~\cite{raymond,li}. In this work, we adopt the
fountain code approach~\cite{mackey05}. Encoding is performed at infostations but not
at vehicles. This method can reduce processing time at vehicles, and reduce decoding
complexity if a suitable fountain code is used.
The contributions of this work are twofold. First, the application scenario is modeled, which reveals the relationship between coding, delay, and throughput. Second, exact formulae for throughput are derived, from which insight
into how mobility affects throughput can be gained. Our approach is similar to that in \cite{yuen2}, but with some major differences in modeling.
\section{Content Distribution for Vehicular Network}
We consider a one-dimensional vehicular network, which models the scenario where many cars are running on a highway. Suppose that a portion of car users subscribe to a content distribution network. They are interested in downloading a common file from the content provider. The file is split into $K$ smaller blocks $W_1, W_2, \ldots, W_K$, each of which consists of $L$ bits. These message blocks are cached in infostations~\cite{frenkiel}, placed along the highway. When a car comes close to an infostation, it can download message blocks from it. Besides, a car can exchange message blocks with another car in proximity. We refer to a car or an infostation as a {\em node} and say that a {\em node encounter} occurs when two nodes approach to within a transmit range $r$ of each other. Data exchange between the two nodes then begins. The amount of data exchanged depends on the transmission bit rate $R_b$ and the connection time. We assume that non-adaptive radio is used so that $R_b$ is constant throughout the encounter period. We also assume that channel coding is used so that the probability of decoding error is negligible.
We adopt a fountain code approach~\cite{mackey05} for file distribution at infostations. When a car is within the transmit range of an infostation, the infostation generates and transmits some encoded messages to the car. Each encoded message is obtained by linearly combining the original message blocks:
\begin{equation}
\sum_{k=1}^K c_k W_k, \label{encoded_message}
\end{equation}
where each $c_k$ is either 0 or 1, and the addition is performed over $\mathbb{F}_2$. The vector $\boldsymbol c = (c_1, c_2, \ldots, c_K)$ is called the {\em encoding vector}, which is generated randomly. There are various ways to generate it. One simple way is to pick a vector uniformly at random over $\mathbb{F}_2^K$. Another way is to generate it according to the robust soliton distribution in LT codes~\cite{luby}. Each packet consists of an encoded message as in~\eqref{encoded_message} and the corresponding encoding vector.
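The encoding step described above can be sketched as follows. This is an illustrative Python fragment, not part of the protocol specification; the byte-aligned block representation and all names are our own choices:

```python
import random

def encode_packet(blocks, rng):
    """Form one encoded message: draw an encoding vector c uniformly
    at random over F_2^K and XOR together the selected message blocks."""
    K, L = len(blocks), len(blocks[0])
    c = [rng.randint(0, 1) for _ in range(K)]
    payload = bytes(L)
    for k in range(K):
        if c[k]:
            payload = bytes(a ^ b for a, b in zip(payload, blocks[k]))
    return c, payload  # the packet carries both c and the payload
```

A packet then consists of the pair `(c, payload)`, matching \eqref{encoded_message} with addition over $\mathbb{F}_2$ realized as bytewise XOR.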
The protocol that we propose for packet exchange follows a two-hop strategy. When two cars are within the transmit range of each other, they will exchange those packets that are directly downloaded from infostations. Those packets that are received from other cars will not be forwarded again. In other words, each packet is transmitted in at most two hops: from an infostation to a car, and from that car to another car.
A vehicle can recover the original file if the encoding vectors in the received packets span the vector space~$\mathbb{F}_2^K$, which happens when $K$ linearly independent encoding vectors have been received. Indeed, if $\boldsymbol c_1$, $\boldsymbol c_2, \ldots, \boldsymbol c_K$ are encoding vectors that are linearly independent, the file can be decoded by inverting the $K\times K$ matrix whose $i$th row is $\boldsymbol c_i$ for $i=1,2,\ldots, K$.
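Whether decoding is possible can be checked incrementally by Gaussian elimination over $\mathbb{F}_2$. The sketch below is illustrative (encoding vectors are packed into integer bitmasks, a representation we chose for brevity):

```python
def can_decode(encoding_vectors, K):
    """Return True iff the received encoding vectors span F_2^K.
    Each vector (a 0/1 list of length K) is packed into an int and
    reduced against a pivot basis; rank K means the file is decodable."""
    basis = {}  # pivot bit position -> basis row
    for vec in encoding_vectors:
        v = sum(bit << k for k, bit in enumerate(vec))
        while v:
            pivot = v.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = v  # new independent vector
                break
            v ^= basis[pivot]     # eliminate the leading bit
    return len(basis) == K
```

Once the rank reaches $K$, the same elimination steps, applied to the payloads in parallel, recover $W_1,\ldots,W_K$.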
\section{Throughput Analysis}
We assume that cars arrive at the highway according to a Poisson process with rate~$\lambda$. Each of them travels along the highway at constant velocity. Those coming from the left have positive velocities and are collectively called the {\em forward traffic}. Those coming from the right have negative velocities and are called the {\em reverse traffic}.
In the highway, two nodes are connected if their distance is less than or equal to the transmit range, denoted by $r$.
The connection time between two cars, $T_c$, depends on their relative speed and is given by
\begin{equation}
T_c = \frac{r}{|v - v'|}.
\end{equation}
Note that the difference of velocity $v-v'$ may be negative, and the sign
depends on their directions. The maximum number of packets that can be
exchanged during an encounter is $R_p T_c$, where $R_p$ is the transmission rate in packets per second and is equal to $R_b$ divided by the packet size in bits. Likewise, a car of velocity $v$ can download $R_p r/|v|$ packets from an infostation in one encounter.
We assume that there is an infostation at every entrance of the highway. When a vehicle enters the highway system, it collects some encoded message blocks from the infostation. As cars usually enter the highway at low speed, they should have picked up enough packets to be exchanged during any future encounter. When two nodes of velocities $v$ and $v'$ meet each other, we assume that the number of packets transmitted in each direction is $R_p r/(2|v-v'|)$.
Since the velocity of each node is assumed constant, two nodes meet each other at most once as they travel along the highway. We can therefore guarantee that any packet newly received by a car is statistically independent of the packets already stored in its buffer. Consequently, the packets received by a vehicle are all statistically independent.
The number of packets that must be received before $K$ linearly independent encoding vectors are obtained depends on the probability distribution of the encoding vectors. Based on the assumption that the received encoding vectors are statistically independent, the following results apply: If the distribution of encoding vectors is uniform, the original file can be decoded with probability $1-\epsilon$, for some small constant $\epsilon$, after $K+\log_2(1/\epsilon)$ packets are received. If we use an LT code with the robust soliton distribution, the number of packets needed is $K+2S\log_2(S/\epsilon)$, where $S=c \sqrt{K}\log_e(K/\epsilon)$ and $c$ is a parameter of order~1~\cite{mackey05}. Given the probability of decoding failure $\epsilon$, the downloading time is obtained by
dividing the required number of packets by the packet rate. Our objective is to estimate the average downloading time of the file in VANET by analyzing the packet rate. In the sequel, we will call it {\em throughput}. We will first consider the case where the velocity distribution is discrete, and then extend the results to the continuous case at the end of this section.
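A small helper makes the relation between overhead and downloading time concrete; this sketch uses the uniform-encoding-vector expression above, and the numerical values in the comments are illustrative only:

```python
import math

def packets_needed_uniform(K, eps):
    """Packets needed to decode with probability 1 - eps when encoding
    vectors are drawn uniformly over F_2^K: K + log2(1/eps)."""
    return K + math.log2(1.0 / eps)

def download_time(K, eps, throughput):
    """Estimated downloading time = required packets / packet rate."""
    return packets_needed_uniform(K, eps) / throughput
```

For instance, $K=1000$ and $\epsilon=2^{-10}$ require only $1010$ packets, an overhead of one per cent.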
\subsection{Discrete Velocity Distribution}
Suppose that the velocity $V$ of
a vehicle can take on values from a finite set, $\{v_1, v_2, \ldots,
v_M\}$, with probability $p_1, p_2, \ldots, p_M$ respectively, where
$\sum_{m=1}^M p_m = 1$. Denote the set $\{1, 2, \ldots, M\}$ by
${\cal M}$. This model is applicable to the scenario where the
traffic is heavy and nodes in different lanes travel at different
speeds. A node, when entering the highway, can choose a suitable
lane.
We consider a specific node, called the {\em
observer node}, or simply the {\em observer}, traveling along a segment of highway between two consecutive infostations \texttt{A} and~\texttt{B}. We will analyze the throughput of the observer in this segment of the highway.
Suppose that the observer belongs to class $i$ for some $i\in\mathcal{M}$, and moves at speed $v_i$ in the forward direction from \texttt{A} to~\texttt{B}. Assume that the length of this segment of the highway is~$d$. The traveling time of the observer in this segment is given by
$t_i=d/v_i$. Let $N_i$ denote the number of {\em node encounters}
of the observer while traveling in this segment of the highway.
Furthermore, for $k=1,2,\ldots, N_i$, let $B_i(k)$ denote the
number of packets received in the $k$-th encounter. Assuming that the observer does not encounter two other nodes at the same time, the total number of packets received by the observer in this highway segment is
\begin{equation}
B_i = \frac{R_p r}{v_i} + \sum_{k=1}^{N_i} B_i(k).
\end{equation}
The first term corresponds to the packets directly downloaded from infostation \texttt{A}, and the second to the total number of packets received from other vehicles.
In order to find the expected value of $B_i$, we split the
Poisson arrival process into $M$ independent Poisson streams with
rate $p_m \lambda$, where $m=1, 2, \ldots, M$. Let $\tilde{N}_{i,m}$ be the
number of encounters of the observer with nodes in class~$m$, so that
\begin{equation}
N_i = \tilde{N}_{i,1} + \tilde{N}_{i,2} + \ldots + \tilde{N}_{i,M}.
\end{equation}
The following lemma gives the expected value of $\tilde{N}_{i,m}$.
\begin{myle} \label{le:N_m}
$\tilde{N}_{i,m}$ is Poisson distributed with mean
\begin{equation}
E[\tilde{N}_{i,m}] = \lambda p_m |t_m - t_i|,
\end{equation}
where $t_m = d/ v_m$.
\end{myle}
\begin{proof}
Without loss of generality, suppose the observer enters the highway
segment at time 0 and departs at time $t_i$. We consider its encounter
with forward traffic and reverse traffic separately.
For forward traffic, consider a node of
velocity $v_m > 0$, which enters the highway segment at time $t$ and
departs at time $t + t_m$. Suppose the speed of the node is
lower than that of the observer, that is, $t_m > t_i$. It will
encounter the observer if and only if it enters the highway before
the observer does (i.e., $t<0$) and it departs the highway after the
observer does (i.e., $t + t_m > t_i$). In other words, an encounter
occurs if and only if $-(t_m - t_i) < t < 0$. Since the arrival
process is Poisson with rate $\lambda p_m$, the number of encounters
is Poisson distributed with mean $\lambda p_m (t_m - t_i)$. Next
suppose $t_m < t_i$. An encounter occurs if and only if the node
enters after time 0 (i.e., $t>0$) and it departs before $t_i$ (i.e.,
$t + t_m < t_i$). Again the number of encounters is Poisson
distributed with mean $\lambda p_m |t_m - t_i|$.
For reverse traffic, consider a node of velocity $v_m < 0$.
If it enters the highway before time 0 (i.e., $t < 0$), it will
encounter the observer if $t+ |t_m| > 0$. If it enters the highway
after time 0 (i.e., $t > 0$), it will encounter the observer if it
enters before $t_i$ (i.e., $t < t_i$). Combining the two cases, we
can see that an encounter occurs if $-|t_m| < t < t_i$. Hence, the
number of encounters is Poisson distributed with mean also equal to
$\lambda p_m |t_m - t_i|$.
\end{proof}
Let $\mathcal{M}_{-i}$ be the set ${\cal M} \setminus \{i\}$. We next obtain an
expression for the mean of $B_i$.
\begin{myle}
\begin{equation}
E[B_i] = \frac{R_p r t_i}{d}\Big[ 1 + \frac{\lambda}{2} \sum_{m \in \mathcal{M}_{-i}} p_m
|t_m| \Big].
\end{equation}
\end{myle}
\begin{proof}
The observer only encounters nodes in class $m$ for $m
\neq i$. When the observer meets another node of velocity $v_m$, $v_m\neq v_i$, the number of packets received is equal to $R_p r/(2|v_m - v_i|)$. Summing over all $m\in \mathcal{M}_{-i}$, we obtain
\begin{equation}
B_i = \frac{R_p r}{v_i} + \sum_{m \in \mathcal{M}_{-i}} \frac{\tilde{N}_{i,m} R_p r}{2|v_m - v_i|}.
\end{equation}
Taking expectation and using Lemma~\ref{le:N_m}, we have
\begin{align}
E[B_i]&= \frac{ R_p r}{v_i} + \sum_{m \in \mathcal{M}_{-i}} \frac{E[\tilde{N}_{i,m}] R_p r}{2|v_m - v_i|} \\
& = R_p r \Big[\frac{1}{v_i} + \sum_{m \in \mathcal{M}_{-i}} \frac{\lambda p_m |t_m - t_i|}{2 |v_m - v_i|} \Big]\\
& = R_p r \Big[\frac{t_i}{d} + \sum_{m \in \mathcal{M}_{-i}} \frac{\lambda p_m t_i |t_m|}{2d} \Big].
\end{align}
\end{proof}
Define $C_i= B_i/ t_i$ as the average throughput of the observer during its
traveling time on the highway segment. Then we have
\begin{equation} \label{eq:main1}
E[C_i] = \frac{R_p r}{d}\Big[ 1 + \frac{\lambda}{2} \sum_{m \in \mathcal{M}_{-i}} p_m
|t_m| \Big].
\end{equation}
Consider a particular time instant $t$. A car of velocity $v_m$ will
be on this highway segment if it enters this segment within the interval $[t-|t_m|, t]$. Therefore, the number of cars of velocity $v_m$ that are on the highway is Poisson distributed with mean equal to $\lambda p_m |t_m|$. The above equation can be rewritten in terms of car density as follows:
\begin{myth} Let $\rho_m \triangleq \lambda p_m |t_m|/d = \lambda p_m/|v_m|$ be the density of cars of velocity $v_m$. Then
\begin{equation}
E[C_i] = R_p r \left( \frac{1}{d} + \frac{1}{2}\sum_{m\in\mathcal{M}_{-i}} \rho_m \right). \label{eq:discrete1}
\end{equation} \label{thm:discrete1}
\end{myth}
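Equation~\eqref{eq:discrete1} is straightforward to evaluate numerically. The following illustrative fragment (all parameter values in the example are hypothetical) computes $E[C_i]$:

```python
def per_node_throughput(R_p, r, d, lam, p, v, i):
    """E[C_i] = R_p * r * (1/d + 0.5 * sum_{m != i} rho_m),
    with class densities rho_m = lam * p_m / |v_m|."""
    rho = [lam * pm / abs(vm) for pm, vm in zip(p, v)]
    other = sum(rho[m] for m in range(len(rho)) if m != i)
    return R_p * r * (1.0 / d + 0.5 * other)
```

With two equiprobable classes at $\pm 20$ m/s, $\lambda=0.1$ cars/s, $d=1000$ m, $R_p=100$ packets/s and $r=200$ m, each class sees only the density of the other class, as the theorem states.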
The first term within the parentheses in Theorem~\ref{thm:discrete1} can be regarded as the density of infostations in the highway segment, and the second term is the sum of car densities over all classes except the observer's class. It is interesting that the individual throughput depends only on the density of other nodes. Note that the density of nodes belonging to the same class is irrelevant because there will not be any intra-class encounter.
The per-node throughput can also be expressed as
\begin{align}
E[C_i] &= R_p r \left( \frac{1}{d} -\frac{1}{2} \rho_i + \frac{1}{2}\sum_{m=1}^M \rho_m \right) \\
&= R_p r \left( \frac{1}{d} -\frac{\lambda p_i }{2 |v_i|} + \frac{1}{2}\sum_{m=1}^M \rho_m \right).
\end{align}
We observe the following:
\begin{itemize}
\item {\em Low-Density Gain:} The class of cars with the lowest density gets the largest
average per-node throughput.
\item {\em High-Speed Gain:} If the speed distribution is equiprobable, i.e., $p_1= p_2=\cdots = p_M$, then the faster the car, the higher the average throughput it gets.
\end{itemize}
Now let $C$ be the average per-node throughput. By averaging the per-node throughput in Theorem~\ref{thm:discrete1} over all velocity classes, we have
\begin{align}
E[C] &= \sum_{i=1}^M p_i E[C_i] \\
& = R_p r \Big[ \frac{1}{d} + \sum_{i=1}^M p_i \Big( \frac{1}{2}
\sum_{m \in \mathcal{M}_{-i}} \rho_m \Big)\Big] \label{eq:main2},
\end{align}
which can be rewritten as follows:
\begin{myth}
\begin{align}
E[C] &= R_p r \left( \frac{1}{d} -\frac{\bar{\rho}}{2} + \frac{1}{2}\sum_{m=1}^M \rho_m \right) \\
&= R_p r\Big[ \frac{1}{d} + \frac{\lambda}{2} \sum_{i \neq j} p_i p_j \Big(
\frac{1}{|v_i|} + \frac{1}{|v_j|} \Big) \Big],
\end{align} \label{thm:discrete2}
where $\bar{\rho} = \sum_i p_i \rho_i$.
\end{myth}
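The two expressions in Theorem~\ref{thm:discrete2} can be checked against each other numerically; in the pair form, $i \neq j$ runs over unordered pairs. A small illustrative self-check (hypothetical parameter values):

```python
def avg_throughput_density_form(R_p, r, d, lam, p, v):
    """E[C] = R_p*r*(1/d - rho_bar/2 + 0.5*sum_m rho_m)."""
    rho = [lam * pm / abs(vm) for pm, vm in zip(p, v)]
    rho_bar = sum(pm * rm for pm, rm in zip(p, rho))
    return R_p * r * (1.0 / d - rho_bar / 2 + 0.5 * sum(rho))

def avg_throughput_pair_form(R_p, r, d, lam, p, v):
    """E[C] = R_p*r*(1/d + lam/2 * sum over unordered pairs i != j
    of p_i*p_j*(1/|v_i| + 1/|v_j|))."""
    M = len(p)
    s = sum(p[i] * p[j] * (1.0 / abs(v[i]) + 1.0 / abs(v[j]))
            for i in range(M) for j in range(i + 1, M))
    return R_p * r * (1.0 / d + 0.5 * lam * s)
```

Both forms agree for any class configuration, which is a useful sanity check when implementing the formulae.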
Note that system throughput varies linearly with $C$ and can be obtained by multiplying $C$ by the number of users.
Based on the above result, the following facts can be observed:
\begin{itemize}
\item
{\em Incrementally Linear Scalability:} The average per-node throughput increases with the node arrival rate,~$\lambda$, in an incrementally linear fashion.
\item
{\em Mobility Reduces Throughput:} If all cars move faster, then the average per-node throughput decreases.
For example, suppose all cars double their speeds. Then the car density of each velocity class decreases by one half. According to Theorem~\ref{thm:discrete1}, the throughput of all users decreases. Hence the system throughput decreases.
\end{itemize}
Although the velocity of the cars cannot be controlled by the system, it is interesting to know which probability mass function maximizes system throughput, for a given velocity vector $\{v_1, v_2, \ldots, v_M\}$. We answer this question in the Appendix.
\subsection{Continuous Velocity Distribution}
The analysis for discrete velocity can be extended to the case where the velocity distribution is continuous. This model, called the {\em wide motorway model} in~\cite{kingman}, is applicable to the scenario where there are multiple lanes and moderate traffic. Since nodes can overtake others at different lanes, there is no interaction among the nodes even if they travel in the same direction. A node can have any speed the driver likes, subject to the speed limit.
Suppose that the velocity $V$ is a continuous random variable, whose probability density function
is $f_V(v)$, defined for $v \in [a,b]$. We divide the interval $[a,b]$ into many intervals, each of length $\Delta v$.
On each interval, $f_V$ is approximated by a constant. We assume that $f_V$ is Lipschitz continuous, so that we can approximate $f_V$ as closely as we like by increasing the number of intervals. The next theorem is analogous to Theorems~\ref{thm:discrete1} and~\ref{thm:discrete2}.
\begin{myth} Let $C_i$ denote the throughput of a particular observer node with velocity $v_i$, and $C$ the average per-node throughput. Let $N$ be the number of cars in a highway segment of length~$d$. Then, for all $i$,
\begin{align}
E[C] = E[C_i] &= R_p r\Big(\frac{1}{d} + \frac{\lambda}{2} E\left[\frac{1}{|V|}\right] \Big) \label{eq:main3a}\\
&= R_p r\Big(\frac{1}{d} + \frac{1}{2} \frac{E[N]}{d} \Big). \label{eq:main3b} \end{align}
\end{myth}
\begin{proof}
As the number of intervals that partition $[a,b]$ approaches infinity, we can rewrite \eqref{eq:discrete1} as
\begin{equation}
E[C_i] = R_p r\Big( \frac{1}{d} + \frac{\lambda}{2} \int_a^b f_V(v) \frac{1}{|v|} dv \Big),
\end{equation}
which is equal to the right hand side of~\eqref{eq:main3a}. Let $T$ be the random variable $d/V$, which is the duration that a car of velocity $V$ stays in this highway segment.
Conditioned on the velocity $V=v$, the number of cars of that velocity in the segment is Poisson distributed with mean $E[N|V=v] = \lambda |d/v|$. Hence
\begin{align}
E[C_i] &= R_p r\Big( \frac{1}{d} + \frac{1}{2} \int_a^b f_V(v) \frac{\lambda |d/v|}{d} dv \Big) \\
&= R_p r\Big( \frac{1}{d} + \frac{1}{2} \frac{E[ E[N|V]]}{d}\Big),
\end{align}
which is~\eqref{eq:main3b}. Since $E[C_i]$ is independent of the velocity of the observer, we have $E[C] = E[C_i]$.
\end{proof}
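For a concrete velocity law, \eqref{eq:main3a} reduces to a closed form; e.g.\ for $V$ uniform on $[a,b]$ with $0<a<b$, $E[1/V]=\ln(b/a)/(b-a)$. The sketch below pairs this closed form with a Monte Carlo cross-check; the specific parameter values are hypothetical:

```python
import math
import random

def avg_throughput_uniform_speed(R_p, r, d, lam, a, b):
    """E[C] = R_p*r*(1/d + lam/2 * E[1/V]) for V ~ Uniform(a, b),
    0 < a < b, where E[1/V] = ln(b/a)/(b - a)."""
    return R_p * r * (1.0 / d + 0.5 * lam * math.log(b / a) / (b - a))

def avg_throughput_monte_carlo(R_p, r, d, lam, a, b, n=200000, seed=0):
    """Monte Carlo estimate of the same quantity, for cross-checking."""
    rng = random.Random(seed)
    mean_inv_v = sum(1.0 / rng.uniform(a, b) for _ in range(n)) / n
    return R_p * r * (1.0 / d + 0.5 * lam * mean_inv_v)
```

The agreement of the two estimates illustrates that, in the continuous case, the result depends on the velocity distribution only through $E[1/|V|]$.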
Note that $E[N]/d$ is the car density on the highway.
Based on this theorem, we have the following observations.
The first one is the same as that in the case of discrete speed.
The second one is similar but not exactly the same. The last two observations are different.
\begin{itemize}
\item
{\em Incrementally Linear Scalability:} The average per-node throughput increases with the node arrival rate,~$\lambda$, in an incrementally linear fashion.
\item
{\em Mobility Reduces Throughput:} The average per-node throughput
changes at a rate~$O(E[1/|V|])$. It means that the higher the
mobility, the lower the car density, and the lower the
average per-node throughput.
\item
{\em Perfect Fairness:} $E[C_i]$ is independent of $v_i$. This means
that, given the same background traffic on the highway, the throughput
of a node is independent of its own speed. In other words, all
nodes achieve the same average throughput, in contrast to the
case of discrete speeds.
\item
{\em Equivalence of Forward and Reverse Traffic:} The average
throughput that a particular node obtains from encounters with forward traffic is
the same as that obtained from encounters with reverse traffic,
provided that the arrival rates and speed distributions in the two
directions are the same.
\end{itemize}
\section{Concluding Remarks}
We have analyzed the effect of mobility on the performance of a VANET.
Based on the Poisson arrival process, we have derived simple formulae for throughput under
both discrete and continuous velocity distributions. There are two major results:
First, system throughput increases linearly with the arrival rate of vehicles. In
other words, the system is linearly scalable. Second, system throughput decreases when
all vehicles increase their speeds, implying that higher overall mobility is not beneficial.
We have also investigated the throughput of individual users. For the discrete velocity case,
the class of users having higher mobility and lower density has higher throughput. In contrast, for the continuous velocity case, all users have the same throughput.
In our analysis, we assume that at most two cars meet each other at
any time instant. If the traffic density is high, this assumption may not
hold. For example, a node can overhear the transmission of
other nodes. However, the performance of the system in such a
scenario depends on the details of a particular transmission
protocol, such as how transmission is initiated and how transmission
conflicts are resolved. This is not within the scope of our
framework and we leave it for future research.
\section{Introduction}
\setcounter{equation}{0}
Jet production in $\mathrm{e}^+\mathrm{e}^-$ annihilation provides an ideal
environment for studies of the dynamics of the strong interaction,
described by the theory of quantum chromodynamics (QCD)~\cite{qcd}.
The kinematical distribution of jets closely reflects the parton-level
kinematics of the event. Consequently, the first observation of
three-jet final states at DESY PETRA~\cite{tasso}, produced through
quark--antiquark--gluon final states~\cite{ellis}, provided conclusive
evidence for the existence of the gluon. Jets are defined through a
jet algorithm, which is a procedure to recombine individual hadrons
into jets using a distance measure, resolution criterion and
recombination prescription. The theoretical description applies the
same jet algorithm to partons in the final state. Closely related to
jet cross sections are event-shape distributions. Event-shape
variables measure certain geometrical properties of hadronic final
states, and can equally be calculated in perturbative QCD from
partonic final states.
Jet cross sections and event-shape distributions were studied very
extensively at $\mathrm{e}^+\mathrm{e}^-$ colliders~\cite{reviews}, and
high-precision data are available from the LEP experiments
ALEPH~\cite{alephqcd}, OPAL~\cite{opal}, L3~\cite{l3},
DELPHI~\cite{delphi}, from SLD~\cite{sld} at SLAC and from
JADE~\cite{jade} at DESY PETRA. The theoretical description of these
data in perturbative QCD contains only a single parameter: the strong
coupling constant $\alpha_{\mathrm{s}}$. By comparing experimental results with
the theoretical description, one can thus perform a measurement of
$\alpha_{\mathrm{s}}$ from jet cross sections and event shapes. For a long
period, the perturbative description of these observables was based on
next-to-leading order (NLO)~\cite{ERT,kunszt,event} in perturbative
QCD, improved by the resummation of next-to-leading-logarithmic
corrections~\cite{nlla} to all orders. The uncertainty on these
theoretical predictions from missing higher-order terms results in a
theory error on the extraction of $\alpha_{\mathrm{s}}$, which was quantified to
be around five per cent, and thus larger than any source of
experimental uncertainty.
Owing to recent calculational progress, the QCD predictions for event
shapes~\cite{ourevent,weinzierlevent} and three-jet
production~\cite{our3j,weinzierl3j} are now accurate to
next-to-next-to-leading order (NNLO, $\alpha^2\alpha_\mathrm{s}^3$) in
QCD perturbation theory. Inclusion of these corrections results in an
estimated residual uncertainty of the QCD prediction from missing
higher orders at the level of well below five per cent for the
event-shape distributions, and around one per cent for the three-jet
cross section. Using these results (combined~\cite{gionata} with the
previously available resummed expressions), new determinations of
$\alpha_{\mathrm{s}}$ from event-shape and jet production data were performed,
resulting in a considerable improvement of the theory uncertainty to
three per cent from event shapes~\cite{asevent} and below two per cent
from jet rates~\cite{asjets}. A further improvement can be anticipated
for the event shapes from the resummation of subleading logarithmic
corrections~\cite{becherschwartz}.
At this level of theoretical precision, higher-order electroweak
effects could be of comparable magnitude. Until recently, only partial
calculations of electroweak corrections to three-jet production and
event shapes have been available~\cite{CarloniCalame:2008qn}, which
can not be compared with experimental data directly. In a previous
work~\cite{Denner:2009gx}, we briefly reported our results on the
first calculation of the NLO electroweak ($\alpha^3\alpha_{\mathrm{s}}$)
corrections to three-jet observables in $\mathrm{e}^+\mathrm{e}^-$ collisions
including the quark--antiquark--photon ($q\bar{q}\gamma$) final
states.
Here, we describe this calculation in detail and perform extensive
numerical studies on the impact of the electroweak corrections to
three-jet-like observables at different $\mathrm{e}^+\mathrm{e}^-$ collider
energies.
The full NLO electroweak ($\alpha^3\alpha_{\mathrm{s}}$) corrections to jet
observables contain four types of contributions: genuine weak
corrections from virtual exchanges, photonic corrections
to quark--antiquark--gluon ($q\bar qg$) final states, gluonic
corrections to $q\bar{q}\gamma$ final states, and QCD/electroweak
interference effects in $q\bar q q\bar q$ final states of identical
quark flavour. The latter were not included in our previous work, but
turn out to be numerically negligible as anticipated.
Any jet-like observable at NLO in the electroweak theory receives
virtual one-loop corrections and contributions from real photon
radiation. Experimental cuts on isolated hard photons in the final
state make it possible to suppress these real photon contributions. However,
photons radiated inside hadronic jets can often not be distinguished
from hadrons (like neutral pions), and are thus not removed by
experimental cuts. The real photon contribution at NLO thus results
from a complicated interplay of jet reconstruction and photon
isolation cuts.
Through the isolated-photon veto, these observables are sensitive to
final-state particle identification, and thus to fragmentation
processes. In our case, we must include a contribution from
quark-to-photon fragmentation~\cite{Koller:1978kq} to obtain a
well-defined and infrared-safe observable. Since our calculation is
among the very first to perform electroweak corrections to jet
observables with realistic photon isolation cuts, we describe the
relevant calculational aspects in detail below.
To define the observables considered here, we describe in
Section~\ref{jetrate} the jet-clustering algorithms used in
$\mathrm{e}^+\mathrm{e}^-$ annihilation and the standard set of event-shape
variables. In this section, we also review the current description of
these observables in perturbative QCD. The calculation of NLO
electroweak corrections is outlined in Section~\ref{sec:struc}, where
we describe the calculation of the virtual and real corrections in
detail. The real corrections contain infrared
divergences from unresolved photon and
gluon radiation. These infrared divergences cancel against similar
divergences in the virtual corrections. To accomplish this
cancellation, it is, however, necessary to extract them analytically
from the real corrections, which is done by a slicing or subtraction
procedure (described in Section~\ref{sec:realcorr}). Our results are
implemented into a parton-level event generator (described in
Section~\ref{sec:num}), which allows the simultaneous evaluation of
all event-shape variables and jet cross sections. Numerical results
for jet production and event-shape distributions for $\mathrm{e}^+\mathrm{e}^-$
collision energies at LEP1, LEP2 and a future linear collider are
presented in Section~\ref{sec:results}. At energies above the
$\mathrm{Z}$ peak, we observe non-trivial kinematical structures in the
distributions. It is shown that these structures are a remnant of the
radiative-return phenomenon, resulting from a complicated interplay of
event-selection and photon isolation cuts applied in the experimental
definition of the observables. Finally, we conclude with an outlook on
the impact of electroweak effects on future precision QCD studies at
$\mathrm{e}^+\mathrm{e}^-$ colliders in Section~\ref{sec:conc}.
\section{Jet observables}
\setcounter{equation}{0}
\label{jetrate}
A commonly used method for reconstructing jets was originally
introduced by the JADE group \cite{Bethke:1988zc}. The algorithm is
based on successive combinations. In a first step, each observed
particle is listed as a jet. In the next step, a resolution parameter
$y_{ij}$ is calculated for each particle pair, and the particle pair
leading to the smallest value of $y_{ij}$ is combined into a single
pseudo-particle. This yields a new list of jets, and the algorithm
proceeds with step two. The procedure is repeated until no pair of
particles is left with $y_{ij}<y_{\mathrm{cut}}$, where
$y_{\mathrm{cut}}$ is a preset cut-off.
Different proposals exist in the literature for how to define $y_{ij}$
(see e.g.\ \citere{Dissertori:2003pj}).
The original JADE definition reads
\begin{equation}
y_{ij,\mathrm{J}}=\frac{2E_iE_j\lrb1-\cos\theta_{ij}\right)}{E_{\mathrm{vis}}^2},
\label{yijJ}
\end{equation}
where $E_i$ is the energy of the $i$-th particle, $\cos\theta_{ij}$
the angle between the particles, and $E_{\mathrm{vis}}$ the total
visible hadronic energy in the event. Improving upon this definition,
different jet resolution parameters have been proposed. Most widely
used at LEP was the $k_\mathrm{T}$ or Durham algorithm
\cite{Brown:1990nm,Catani:1991hj}, which defines
\begin{equation}
y_{ij,\mathrm{D}}=\frac{2\min\left( E_i^2,E_j^2\right)\lrb1-\cos\theta_{ij}\right)}{E_{\mathrm{vis}}^2}\,.
\label{yijD}
\end{equation}
In addition to the choice of jet resolution parameter, there also
exist different ways of combining the four-momenta of the two
particles with the lowest $y_{ij}$ to one four-momentum $p_{ij}$. In
the so-called $E$-scheme one simply adds the two four-momenta, leading
to $p_{ij}=p_i+p_j$. In the $P$-scheme the invariant mass of the
pseudo-particle is set to zero by rescaling the energy
\begin{equation}
{\vec{p}}_{ij}={\vec{p}}_{i}+{\vec{p}}_{j},\qquad
E_{ij}=\vert{\vec{p}}_{i}+{\vec{p}}_{j}\vert.
\label{Pscheme}
\end{equation}
In the $P_0$-scheme, \refeq{Pscheme} is used to construct the
resulting four-momentum; however, after each recombination
$E_{\mathrm{vis}}$ is recalculated. Finally, in the $E_0$-scheme the
three-momentum rather than the energy is rescaled.
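As a purely illustrative sketch (our own, not part of the analysis), the successive-recombination procedure described above, with the Durham measure of \refeq{yijD} and $E$-scheme recombination, can be coded as follows; the function names, momentum layout $(E,p_x,p_y,p_z)$, and toy inputs are hypothetical:

```python
import math

def y_durham(p_i, p_j, e_vis):
    # Durham measure: y_ij = 2 min(E_i^2, E_j^2) (1 - cos th_ij) / E_vis^2,
    # with four-momenta stored as (E, px, py, pz).
    dot = sum(a * b for a, b in zip(p_i[1:], p_j[1:]))
    n_i = math.sqrt(sum(a * a for a in p_i[1:]))
    n_j = math.sqrt(sum(a * a for a in p_j[1:]))
    cos_th = dot / (n_i * n_j)
    return 2.0 * min(p_i[0]**2, p_j[0]**2) * (1.0 - cos_th) / e_vis**2

def cluster_durham(momenta, y_cut):
    # Successive recombination with E-scheme combination p_ij = p_i + p_j;
    # returns the jets surviving at resolution y_cut.
    jets = [list(p) for p in momenta]
    e_vis = sum(p[0] for p in jets)
    while len(jets) > 1:
        pairs = [(y_durham(jets[i], jets[j], e_vis), i, j)
                 for i in range(len(jets)) for j in range(i + 1, len(jets))]
        y_min, i, j = min(pairs)
        if y_min >= y_cut:
            break
        jets[i] = [a + b for a, b in zip(jets[i], jets[j])]
        del jets[j]
    return jets
```

Lowering $y_{\mathrm{cut}}$ resolves more jets; the $P$-, $P_0$-, and $E_0$-schemes would differ only in the recombination step.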
Since an event containing three jets is due to the emission of a gluon
off an \mbox{(anti-)quark} at a large angle and with significant
energy, the ratio of the number of observed three-jet to two-jet
events is, in leading order, proportional to the strong coupling
constant. In general, the $n$-jet rate $R_n(y)$, which depends on the
choice of the jet resolution parameter $y=y_{\mathrm{cut}}$, is
defined through the respective cross sections for $n\ge 2$ jets
\begin{equation}
R_n(y,\sqrt{s})=\frac{\sigma_{\mbox{\scriptsize$n$-jet}}}{\sigma_{\mathrm{had}}},
\label{n-jetrate}
\end{equation}
such that
\begin{equation}
\sum_{n=1}^\infty R_n(y) = 1
\end{equation}
and $\sqrt{s}$ is the centre-of-mass (CM) energy.
In order to characterise the topology of an event, a large number of
observables has been developed. Most of them require at least three
final-state momenta to be non-trivial. In the following
we introduce six variables which have been extensively used in
experimental analyses: thrust $T$ \cite{Brandt:1964sa,Farhi:1977sg},
the normalised heavy-jet mass $\rho$ \cite{Clavelli:1981yh}, the wide
and total jet broadenings $B_\mathrm{W}$ and $B_\mathrm{T}$
\cite{Rakow:1981qn,Catani:1992jc},
the $C$-parameter \cite{Parisi:1978eg,Donoghue:1979vi}, and the
transition from three-jet to two-jet final-state using
$y_{ij,\mathrm{D}}$
\cite{Catani:1991hj,Brown:1990nm,Brown:1991hx,Bethke:1991wk}.
\begin{itemize}
\item[$\bullet$]
Thrust is defined through
\begin{equation}
T=\max_{\vec{n}}\frac{\sum_i\vert \vec{p}_i\cdot \vec{n}\vert}{\sum_i\vert \vec{p}_i\vert},
\label{Thrust}
\end{equation}
where $\vec{p}_i$ is the three-momentum of the $i$-th particle, and
$\vec{n}$ is varied to maximise the momentum flow in its direction,
yielding the thrust axis.
\item[$\bullet$]
Every event can be divided into two hemispheres $H_1$ and $H_2$ by a
plane perpendicular to the thrust axis. In each hemisphere $H_i$ one
can calculate the invariant mass $M_i^2$, the larger of which yields
the heavy-jet mass
\begin{equation}
M_\mathrm{had}^2=\max\left( M_1^2,M_2^2\right),
\end{equation}
and the normalised heavy-jet mass
\begin{equation}
\rho=\frac{M_\mathrm{had}^2}{E_{\mathrm{vis}}^2}.
\end{equation}
\item[$\bullet$]
Using the definition of the hemispheres from above, one can calculate the hemisphere broadenings
\begin{equation}
B_i=\frac{\sum_{j\in H_i}\vert \vec{p}_j\times \vec{n}\vert}
{2\sum_j\vert \vec{p}_j\vert},
\quad i=1,2.
\end{equation}
The wide and total jet broadenings $B_\mathrm{W}$ and $B_\mathrm{T}$ are then obtained through
\begin{equation}
B_\mathrm{W}=\max\left( B_1,B_2\right), \qquad
B_\mathrm{T}=B_1+B_2.
\end{equation}
\item[$\bullet$]
Starting from the linearised momentum tensor
\begin{equation}
\Theta^{\alpha\beta}=\frac{1}{\sum_i\vert \vec{p}_i\vert}\sum_j\frac{p_j^\alpha p_j^\beta}{\vert
\vec{p}_j\vert}
,\quad \alpha,\beta=1,2,3,
\end{equation}
and its three eigenvalues $\lambda_1,\lambda_2,\lambda_3$, the $C$-parameter is defined through
\begin{equation}
C=3\left( \lambda_1\lambda_2 +\lambda_2\lambda_3 +\lambda_3\lambda_1 \right).
\end{equation}
\item[$\bullet$]
The jet transition variable $Y_3$ is defined as the value of the jet resolution parameter for which
an event changes from a three-jet-like to a two-jet-like configuration.
\end{itemize}
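For concreteness, the thrust and $C$-parameter definitions above can be evaluated numerically. The following sketch (ours; the function names are hypothetical) exploits the fact that for a momentum-balanced event the exact thrust maximum is attained on a subset of the momenta:

```python
import itertools
import numpy as np

def thrust(momenta):
    # Exact thrust for a momentum-balanced event (sum_i p_i = 0):
    # T = 2 max_S |sum_{i in S} p_i| / sum_i |p_i|, maximised over subsets S.
    p = [np.asarray(v, dtype=float) for v in momenta]
    norm = sum(np.linalg.norm(v) for v in p)
    best = 0.0
    for r in range(1, len(p)):
        for subset in itertools.combinations(p, r):
            best = max(best, np.linalg.norm(sum(subset)))
    return 2.0 * best / norm

def c_parameter(momenta):
    # C = 3 (l1 l2 + l2 l3 + l3 l1) from the eigenvalues of the
    # linearised momentum tensor Theta.
    p = [np.asarray(v, dtype=float) for v in momenta]
    norm = sum(np.linalg.norm(v) for v in p)
    theta = sum(np.outer(v, v) / np.linalg.norm(v) for v in p) / norm
    eig = np.linalg.eigvalsh(theta)
    return 3.0 * (eig[0] * eig[1] + eig[1] * eig[2] + eig[2] * eig[0])
```

A symmetric three-jet ("Mercedes") configuration gives $T=2/3$ and $C=3/4$, while a back-to-back two-particle event gives $T=1$ and $C=0$.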
In the following we often denote event-shape observables generically by $y$.
Taking $(1-T)$ instead of $T$, the two-particle configuration is located
at $y=0$ for all of the event-shape variables defined above.
\subsection{Event shapes and jet rates in perturbation theory}
\label{esinpt}
At leading order (LO), $\mathcal{O}{\left(\alpha^2\alpha_{\mathrm{s}}\right)}$, the only
process contributing at tree level in $\mathrm{e}^+\mathrm{e}^-$ annihilation is
gluon radiation off a quark or antiquark (see
\reffig{fi:borndiags_qqg}).
\begin{figure}
\centerline{\footnotesize
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(5.5,10.)(0.,){/Straight}{1}
\FALabel(1,13)[tr]{$\mathrm{e}$}
\FAProp(0.,5.)(5.5,10.)(0.,){/Straight}{-1}
\FALabel(-.2,8)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(15.5,13.5)(0.,){/Straight}{-1}
\FALabel(17.5,16)[br]{$q$}
\FAProp(20.,10.)(15.5,13.5)(0.,){/Cycles}{0}
\FALabel(15,9)[bl]{$\mathrm{g}$}
\FAProp(20.,3.)(12.,10.)(0.,){/Straight}{1}
\FALabel(16,5.5)[tr]{$q$}
\FAProp(5.5,10.)(12.,10.)(0.,){/Sine}{0}
\FALabel(8.75,8.93)[t]{$\gamma,\mathrm{Z}$}
\FAProp(15.5,13.5)(12.,10.)(0.,){/Straight}{-1}
\FALabel(13.134,12.366)[br]{$q$}
\FAVert(5.5,10.){0}
\FAVert(15.5,13.5){0}
\FAVert(12.,10.){0}
\end{feynartspicture}
\hspace{2em}
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(5.5,10.)(0.,){/Straight}{1}
\FALabel(1,13)[tr]{$\mathrm{e}$}
\FAProp(0.,5.)(5.5,10.)(0.,){/Straight}{-1}
\FALabel(-.2,8)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(11.5,10.)(0.,){/Straight}{-1}
\FALabel(15,14)[br]{$q$}
\FAProp(20.,10.)(15.5,6.5)(0.,){/Cycles}{0}
\FALabel(15.5,8.5)[bl]{$\mathrm{g}$}
\FAProp(20.,3.)(15.5,6.5)(0.,){/Straight}{1}
\FALabel(17,4)[tr]{$q$}
\FAProp(5.5,10.)(11.5,10.)(0.,){/Sine}{0}
\FALabel(8.75,8.93)[t]{$\gamma,\mathrm{Z}$}
\FAProp(11.5,10.)(15.5,6.5)(0.,){/Straight}{-1}
\FALabel(12.9593,7.56351)[tr]{$q$}
\FAVert(5.5,10.){0}
\FAVert(11.5,10.){0}
\FAVert(15.5,6.5){0}
\end{feynartspicture}
}
\vspace*{-2em}
\caption{Lowest-order diagrams for $\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}\mathrm{g}$.}
\label{fi:borndiags_qqg}
\end{figure}%
As mentioned above, by comparing the measured three-jet rate and
event-shape observables with theoretical predictions, one can
determine $\alpha_{\mathrm{s}}$.
In perturbation theory up to next-to-next-to-leading order (NNLO) in
QCD, the expansion of a distribution in the generic observable $y$ at
CM energy $\sqrt{s}$ for renormalisation scale $\mu=\sqrt{s}$ and
$\alpha_{\mathrm{s}}=\alpha_{\mathrm{s}}(s)$, normalised to the Born cross section
$\sigma_0(s)$ of the process $\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}$ is given
by
\begin{equation}
\frac{1}{\sigma_0}\frac{{\mathrm{d}}\sigma}{{\mathrm{d}} y}=\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\frac{{\mathrm{d}} A}{{\mathrm{d}} y}+
\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)^2\frac{{\mathrm{d}} B}{{\mathrm{d}} y}+\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)^3\frac{{\mathrm{d}} C}{{\mathrm{d}} y}
+\mathcal{O}\left(\alpha_{\mathrm{s}}^4\right),
\end{equation}
where $A$, $B$, and $C$ denote the QCD contributions at LO,
next-to-leading order (NLO), and NNLO, respectively. The experimentally measured
event-shape distribution is normalised to the total hadronic cross
section $\sigma_{\mathrm{had}}$, which for massless quarks reads
\begin{equation}
\sigma_{\mathrm{had}}=\sigma_0\lrb1+\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right) K_1+\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)^2 K_2
+\mathcal{O}\left(\alpha_{\mathrm{s}}^3\right)\rrb,
\end{equation}
such that
\begin{equation}
\frac{1}{\sigma_{\mathrm{had}}}\frac{{\mathrm{d}}\sigma}{{\mathrm{d}} y}=\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\frac{{\mathrm{d}}
\bar{A}}{{\mathrm{d}} y}+
\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)^2\frac{{\mathrm{d}} \bar{B}}{{\mathrm{d}} y}+\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)^3\frac{{\mathrm{d}} \bar{C}}{{\mathrm{d}} y}
+\mathcal{O}\left(\alpha_{\mathrm{s}}^4\right),
\end{equation}
where
\begin{eqnarray}
\bar{A}=A,
\qquad
\bar{B}=B - A K_1,
\qquad
\bar{C}=C - B K_1 + A K_1^2 - A K_2.
\label{QCDycoeff}
\end{eqnarray}
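The relations in \refeq{QCDycoeff} follow from re-expanding the ratio of the two perturbative series in $\alpha_{\mathrm{s}}/(2\pi)$. A minimal numerical cross-check (our own sketch; the function name is hypothetical) reads:

```python
def normalised_coeffs(A, B, C, K1, K2):
    # Re-expand (A a + B a^2 + C a^3) / (1 + K1 a + K2 a^2) to O(a^3),
    # where a = alpha_s/(2 pi), using the inverse series
    # 1/(1 + K1 a + K2 a^2) = 1 - K1 a + (K1**2 - K2) a^2 + O(a^3).
    inv = [1.0, -K1, K1**2 - K2]
    num = [0.0, A, B, C]  # numerator coefficients by power of a
    return [sum(num[m] * inv[n - m] for m in range(max(0, n - 2), n + 1))
            for n in (1, 2, 3)]  # [Abar, Bbar, Cbar]
```

For any numerical input this reproduces $\bar A=A$, $\bar B=B-AK_1$, and $\bar C=C-BK_1+AK_1^2-AK_2$.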
The coefficients in \refeq{QCDycoeff} up to NLO have been calculated
in
\citeres{ERT,kunszt,event}.
Furthermore, kinemati\-cal\-ly-dominant leading and next-to-leading
logarithms have been resummed \cite{Catani:1991kz,nlla}, and
non-perturba\-ti\-ve models of power-suppressed hadronisation effects
have been included
\cite{Korchemsky:1994is,Dokshitzer:1995zt,Dokshitzer:1997ew,Dokshitzer:1998pt}
to increase the theoretical accuracy. Recently the first NNLO
calculations have been completed \cite{ourevent,weinzierlevent,our3j,weinzierl3j},
and the matching of next-to-leading logarithms and
next-to-next-to-leading logarithms to the fixed-order NNLO calculation
has been performed \cite{becherschwartz,gionata}. These results
have subsequently been used in precision
determinations~\cite{asevent,asjets,becherschwartz}
of the strong coupling constant $\alpha_{\mathrm{s}}$.
With regard to jet rates, fixed-order calculations are known up to
next-to-next-to-next-to-leading order ($\mathrm{N^3LO}$) in QCD for
the two-jet rate
\cite{Anastasiou:2004qd,GehrmannDeRidder:2004tv,Weinzierl:2006ij,our3j},
up to NNLO for the three-jet rate
\cite{ERT,kunszt,event,our3j,weinzierl3j},
and up to NLO for the four-jet rate
\cite{Signer:1996bf,Dixon:1997th,Nagy:1997yn,Campbell:1998nn,Weinzierl:1999yf}.
NLO electroweak (EW) corrections could be comparable in magnitude to
the NNLO QCD corrections and are therefore worth further
consideration. The factorisable EW corrections have been calculated
in \citere{Maina:2002wz} and a further step towards the full NLO EW
corrections has been made in \citere{CarloniCalame:2008qn}. In this
work we describe the first calculation of the complete NLO EW
corrections to the normalised event-shape distributions. First results
of this calculation on the thrust distribution and the three-jet rate
at $\sqrt{s} = M_{{\rm Z}}$ have been presented in
\citere{Denner:2009gx}.
In analogy to the QCD corrections, we write the total hadronic cross
section including $\mathcal{O}{\left(\alpha\right)}$ corrections as
\begin{equation}
\sigma_{\mathrm{had}}=\sigma_0\lrb1+\left(\frac{\alpha}{2\pi}\right)\delta_{\sigma,1}
+\mathcal{O}\left(\alpha^2\right)\rrb,
\label{sig0NLO}
\end{equation}
and the expansion of the observable ${\mathrm{d}}\sigma/{\mathrm{d}} y$ as
\begin{equation}
\frac{1}{\sigma_0}\frac{{\mathrm{d}}\sigma}{{\mathrm{d}} y}=\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\frac{{\mathrm{d}} A}{{\mathrm{d}} y}+
\left(\frac{\alpha}{2\pi}\right)\frac{{\mathrm{d}} \delta_\gamma}{{\mathrm{d}} y}+
\left(\frac{\alpha}{2\pi}\right)\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\frac{{\mathrm{d}} \delta_A}{{\mathrm{d}} y}+\mathcal{O}\left(\alpha^2\right),
\label{dsdy_EW}
\end{equation}
where the LO purely electromagnetic contribution $\delta_\gamma$
arises from tree-level quark--antiquark--photon ($q\bar{q}\gamma$)
final states without a gluon%
\footnote{Since the event-shape observables are calculated from parton
momenta, the $q\bar{q}\gamma$ final states contribute if the photon
is clustered with a quark into a jet and the event is no longer
removed by the photon cuts.} and $\delta_A$ comprises the NLO EW
corrections to the distribution ${\mathrm{d}}\sigma/{\mathrm{d}} y$.
Normalising \refeq{dsdy_EW} to $\sigma_{\mathrm{had}}$ yields
\begin{equation}
\frac{1}{\sigma_{\mathrm{had}}}\frac{{\mathrm{d}}\sigma}{{\mathrm{d}} y}=
\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)
\frac{{\mathrm{d}} A}{{\mathrm{d}} y}+\left(\frac{\alpha}{2\pi}\right)
\frac{{\mathrm{d}} \delta_\gamma}{{\mathrm{d}} y}+ \left(\frac{\alpha}{2\pi}\right)
\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\left(\frac{{\mathrm{d}} \delta_A}{{\mathrm{d}} y}-
\frac{{\mathrm{d}} A}{{\mathrm{d}} y}\delta_{\sigma,1}\right)+\mathcal{O}\left(\alpha^2\right).
\label{dsdyhad_EW}
\end{equation}
Hence, the full $\mathcal{O}{\left(\alpha\right)}$ EW corrections%
\footnote{In \citere{Denner:2009gx} the definition of $\delta_{\mathrm{EW}}$
was somewhat different and, in particular, did not explicitly contain
the effect of $\delta_\gamma$.}
are given by
\begin{equation}
\frac{{\mathrm{d}}\delta_{\mathrm{EW}}}{{\mathrm{d}} y}=\left(\frac{\alpha}{2\pi}\right)\frac{{\mathrm{d}} \delta_\gamma}{{\mathrm{d}} y}+ \left(\frac{\alpha}{2\pi}\right)\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)
\left(\frac{{\mathrm{d}} \delta_A}{{\mathrm{d}} y}-
\frac{{\mathrm{d}} A}{{\mathrm{d}} y}\delta_{\sigma,1}\right).
\label{deltaEW}
\end{equation}
In order to obtain a sensible ratio, all three contributions have to
be evaluated using the same event-selection cuts.
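The structure of \refeq{dsdyhad_EW} can be checked numerically by comparing the un-expanded ratio with its $\mathcal{O}(\alpha)$ expansion; in the following sketch (ours; variable names are hypothetical, with $e=\alpha/(2\pi)$ and $a=\alpha_{\mathrm{s}}/(2\pi)$) the difference is suppressed by two powers of $\alpha$:

```python
def exact_ratio(e, a, A, d_gam, d_A, d_s1):
    # Un-expanded ratio of eq. (dsdy_EW) to the hadronic cross section
    # of eq. (sig0NLO), in units of sigma_0.
    return (a * A + e * d_gam + e * a * d_A) / (1.0 + e * d_s1)

def expanded_ratio(e, a, A, d_gam, d_A, d_s1):
    # O(alpha) expansion of eq. (dsdyhad_EW): normalising to sigma_had
    # replaces d_A by d_A - A * d_s1 at mixed order alpha * alpha_s.
    return a * A + e * d_gam + e * a * (d_A - A * d_s1)
```

Shrinking $e$ by a factor of ten reduces the residual difference by roughly a factor of one hundred, confirming that the neglected terms are $\mathcal{O}(\alpha^2)$.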
The EW corrections to both $\sigma_{\mathrm{had}}$ and the
distribution in $y$ contain large corrections due to initial-state
radiation (ISR). Since these are universal, they partially cancel in
the third term in \refeq{dsdyhad_EW}, leaving only a small remainder.
If we include higher-order leading-logarithmic (LL) ISR effects in both
$\sigma_{\mathrm{had}}$ and the distribution in $y$, this leads to
\begin{equation}
\sigma_{\mathrm{had}}=\sigma_0\lrb1+\left(\frac{\alpha}{2\pi}\right)\delta_{\sigma,1}
+\left(\frac{\alpha}{2\pi}\right)^2\delta_{\sigma,\ge 2,\mathrm{LL}}
+\mathcal{O}\left(\alpha^2\right)\rrb,
\label{sig0LL}
\end{equation}
and
\begin{equation}
\frac{1}{\sigma_0}\frac{{\mathrm{d}}\sigma}{{\mathrm{d}} y}=\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)
\frac{{\mathrm{d}} A}{{\mathrm{d}} y}+
\left(\frac{\alpha}{2\pi}\right)\frac{{\mathrm{d}} \delta_\gamma}{{\mathrm{d}} y}+
\left(\frac{\alpha}{2\pi}\right)\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\frac{{\mathrm{d}} \delta_{A}}{{\mathrm{d}} y}+
\left(\frac{\alpha}{2\pi}\right)^2\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\frac{{\mathrm{d}} \delta_{A,\ge 2,\mathrm{LL}}}{{\mathrm{d}} y}
+\mathcal{O}\left(\alpha^2\right),
\label{dsdy_EW_log}
\end{equation}
where $\delta_{\sigma,\ge 2,\mathrm{LL}}$ and $\delta_{A,\ge
2,\mathrm{LL}}$ contain leading-logarithmic (LL) terms proportional to
$\alpha^n\ln^n({s}/{m_\mathrm{e}^2})$ with $n\ge 2$, as defined in
\refsec{hoisr}.
Here $\mathcal{O}\left(\alpha^2\right)$ stands for
two-loop electroweak effects without the enhancement of leading ISR logarithms.
For the normalised distribution this results in
\begin{eqnarray}
\frac{1}{\sigma_{\mathrm{had}}}\frac{{\mathrm{d}}\sigma}{{\mathrm{d}} y}&=&
\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\frac{{\mathrm{d}} A}{{\mathrm{d}} y}+\left(\frac{\alpha}{2\pi}\right) \frac{{\mathrm{d}} \delta_\gamma}{{\mathrm{d}} y}+\left(\frac{\alpha}{2\pi}\right)\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\left(\frac{{\mathrm{d}} \delta_A}{{\mathrm{d}} y}-
\frac{{\mathrm{d}} A}{{\mathrm{d}} y}\delta_{\sigma,1}\right)
\\
&&{}+\left(\frac{\alpha}{2\pi}\right)^2\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\left[
\left(\frac{{\mathrm{d}} \delta_{A,\ge 2,\mathrm{LL}}}{{\mathrm{d}} y}-\frac{{\mathrm{d}} A}{{\mathrm{d}} y}\delta_{\sigma,\ge 2,\mathrm{LL}}
\right)
-\frac{{\mathrm{d}} \delta_{A,1,\mathrm{LL}}}{{\mathrm{d}} y}\delta_{\sigma,1,\mathrm{LL}}+\frac{{\mathrm{d}} A}{{\mathrm{d}}
y}\delta_{\sigma,1,\mathrm{LL}}^2
\right]
+\mathcal{O}\left(\alpha^2\right),\nonumber
\label{dsdyhad_EW_log}
\end{eqnarray}
where $\delta_{A,1,\mathrm{LL}}$ and $\delta_{\sigma,1,\mathrm{LL}}$
denote the LL contributions contained in the NLO results.
Their second-order effect results from taking the ratio of two NLO-corrected
quantities.
Therefore, the higher-order LL corrections read
\begin{equation}
\frac{{\mathrm{d}}\delta_{\mathrm{EW,LL}}}{{\mathrm{d}} y}=
\left(\frac{\alpha}{2\pi}\right)^2\left(\frac{\alpha_{\mathrm{s}}}{2\pi}\right)\left[
\left(\frac{{\mathrm{d}} \delta_{A,\ge 2,\mathrm{LL}}}{{\mathrm{d}} y}-\frac{{\mathrm{d}} A}{{\mathrm{d}} y}\delta_{\sigma,\ge 2,\mathrm{LL}}
\right)
+\left(\frac{{\mathrm{d}} A}{{\mathrm{d}}
y}\delta_{\sigma,1,\mathrm{LL}}^2-\frac{{\mathrm{d}} \delta_{A,1,\mathrm{LL}}}{{\mathrm{d}} y}\delta_{\sigma,1,\mathrm{LL}}\right)\right].
\label{deltaEWLL}
\end{equation}
Due to the universality of ISR, the terms in the first and in the
second parenthesis in \refeq{deltaEWLL} separately cancel each other
numerically to a large extent.
The same decomposition as applied here for event-shape distributions
holds also for the three-jet rate, normalised to $\sigma_\mathrm{had}$.
\subsection{Particle identification}
\label{PI}
One of the virtues of $\mathrm{e}^+\mathrm{e}^-$ colliders is the precise knowledge
of the energy of the initial state. However, ISR of photons can lead
to difficulties in the determination of the total energy of the final
state. Therefore event-selection cuts have been devised to suppress
effects due to ISR. In the following we describe the procedure
employed by the ALEPH collaboration at LEP \cite{Barate:1996fi}, which
we use in our numerical evaluations.
First, particles are clustered into jets according to the Durham
algorithm with $y_{\mathrm{cut,D}}=0.002$ and $E$-scheme recombination.
Jets where the fraction of
energy carried by charged hadrons is less than 10\% are identified as
dominantly electromagnetic and are removed. In the next step the
remaining particles are clustered into two jets and the visible
invariant mass $M_{\mathrm{vis}}$ of the two-jet system is calculated.
Using total momentum conservation the reduced CM energy
$\sqrt{s'}$ is calculated. The event is rejected if $s'/s<0.81$. This
two-step procedure is later referred to as hard-photon cut procedure
(note that it is called anti-ISR cut procedure in
\citere{Barate:1996fi}).
Removing events in which the photonic energy in a jet exceeds a
certain value, as is done in the hard-photon cuts, causes potential
problems in the perturbative calculation of EW corrections.
There one relies on the cancellation of infrared (IR) singularities between
virtual and real corrections when calculating an IR-safe
observable. Removing events where a photon is close to a final-state
charged fermion leads to non-IR-safe observables and spoils this
cancellation in the collinear region. This feature is common to all
observables with identified particles in the final state, and
IR-finiteness is restored by taking into account a contribution
from fragmentation processes, in our case from the quark-to-photon
fragmentation function.
\section{Structure of the calculation}
\setcounter{equation}{0}
\label{sec:struc}
To obtain the full EW ${\cal O}(\alpha^3\alpha_{\mathrm{s}})$
corrections to normalised event-shape distributions and jet cross
sections we need to derive the NLO EW corrections to the total
hadronic cross section and to the three-jet production cross section.
The total hadronic cross section is decomposed as:
\begin{eqnarray}
\sigma_{{\rm
had}} = \int{\mathrm{d}}\sigma^{\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}}_{\mathrm{Born}}+
\int{\mathrm{d}}\sigma^{\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}}_{\mathrm{virtual,EW}}+
\int{\mathrm{d}}\sigma_{\mathrm{real}}^{\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}\gamma},
\label{master_had}
\end{eqnarray}
where the first and second terms are the Born and one-loop EW
contributions to the process $\mathrm{e}^+\mathrm{e}^-\rightarrow q\overline{q}$,
while the last term is the real radiation contribution from the process
$\mathrm{e}^+\mathrm{e}^-\rightarrow q\overline{q}\gamma$.
Likewise, we decompose the total cross section for three-jet production
according to
\begin{eqnarray}
\int{\mathrm{d}}\sigma&=&
\int{\mathrm{d}}\sigma^{\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}g}_{\mathrm{Born}}+
\int{\mathrm{d}}\sigma^{\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}\gamma}_{\mathrm{Born}}\nonumber\\
&&{}+
\int{\mathrm{d}}\sigma^{\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}g}_{\mathrm{virtual,EW}}+
\int{\mathrm{d}}\sigma^{\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}\gamma}_{\mathrm{virtual,QCD}}
+\int{\mathrm{d}}\sigma_{\mathrm{real}}^{\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}g\gamma}
+\int{\mathrm{d}}\sigma_{\mathrm{interference}}^{\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}q\bar q},
\label{master_eq}
\end{eqnarray}
where the first and third terms are the Born and NLO EW
contributions of the process $\mathrm{e}^+\mathrm{e}^-\rightarrow q\overline{q}g$,
the second and fourth terms are the Born and one-loop QCD
contributions of the process $\mathrm{e}^+\mathrm{e}^-\rightarrow
q\overline{q}\gamma$, the fifth term is the contribution from the real
radiation process $\mathrm{e}^+\mathrm{e}^-\rightarrow q\overline{q}g\gamma$, and
the sixth term results from the contribution of the real radiation process
$\mathrm{e}^+\mathrm{e}^-\rightarrow q\overline{q}q\bar q$ with identical quark
flavours. In this work, we are interested in the virtual and real
radiation corrections of $\mathcal{O}\left(\alpha^3\alpha_{\mathrm{s}}\right)$, which
lead to the production of three or four jets when photons
and hadrons are treated democratically. For the $q\bar q q\bar q$ final state,
this order corresponds to the interference of the EW amplitude
with the QCD amplitude. We do not include the squares of the
EW and the QCD amplitudes for this process.
The former is of $\mathcal{O}\left(\alpha^4\right)$ and thus beyond the
considered accuracy, the latter is part of the NLO QCD corrections not
considered in this work.
We have performed two independent calculations each for the virtual
and real corrections, the results of which are in mutual numerical
agreement.
\subsection{Conventions and lowest-order cross section}
At the parton level we consider the processes
\begin{eqnarray}
\mathrm{e}^+(k_1,\sigma_1)+\mathrm{e}^-(k_2,\sigma_2)&\rightarrow&
q(k_3,\sigma_3)+\bar{q}(k_4,\sigma_4),\label{born_qq}\\
\mathrm{e}^+(k_1,\sigma_1)+\mathrm{e}^-(k_2,\sigma_2)&\rightarrow&
q(k_3,\sigma_3)+\bar{q}(k_4,\sigma_4)+\mathrm{g}(k_5,\lambda),\label{born_qqg}\\
\mathrm{e}^+(k_1,\sigma_1)+\mathrm{e}^-(k_2,\sigma_2)&\rightarrow&
q(k_3,\sigma_3)+\bar{q}(k_4,\sigma_4)+\gamma(k_5,\lambda),
\label{born_qqa}
\end{eqnarray}
where $q$ can be an up, down, charm, strange, or bottom quark. The
momenta $k_i$ of the corresponding particles as well as their
helicities $\sigma_i$ and $\lambda$ are given in parentheses. The
helicities of the fermions take the values $\sigma_i=\pm1/2$, and the
helicity of the gluon or the photon assumes the values $\lambda=\pm
1$.
We neglect the masses of the external fermions wherever possible and
keep them only as regulators of the mass-singular logarithms.
Therefore all amplitudes vanish unless $\sigma_1=-\sigma_2$ and
$\sigma_3=-\sigma_4$, and we define $\sigma=\sigma_2=-\sigma_1$ and
$\sigma'=\sigma_3=-\sigma_4$.
For later use, the following set of kinematical invariants is introduced:
\begin{equation}
s=\left(k_1+k_2\right)^2,\quad s_{ij}=\left(k_i+k_j\right)^2,\quad
s_{ai}=s_{ia}=\left(k_a -k_i\right)^2,\quad a=1,2,\quad i,j=3,4,5.
\end{equation}
For later convenience we employ the convention that indices $a,b=1,2$
refer to the initial and indices $i,j=3,4,5$ to the
final state, while the generic indices $I,J=1,\dots,5$
label all external particles.
The tree-level Feynman diagrams contributing to the process
\refeq{born_qq} are shown in \reffig{fi:borndiags_qq}, those
contributing to the process \refeq{born_qqg} in
\reffig{fi:borndiags_qqg}, and the ones contributing to the process
\refeq{born_qqa} in \reffig{fi:borndiags_qqa}.
\begin{figure}
\centerline{\footnotesize
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(5.5,10.)(0.,){/Straight}{1}
\FALabel(1,13)[tr]{$\mathrm{e}$}
\FAProp(0.,5.)(5.5,10.)(0.,){/Straight}{-1}
\FALabel(-.2,8)[tl]{$\mathrm{e}$}
\FAProp(5.5,10.)(12.,10.)(0.,){/Sine}{0}
\FALabel(8.75,8.93)[t]{$\gamma,\mathrm{Z}$}
\FAProp(17.5,15)(12,10.)(0.,){/Straight}{-1}
\FALabel(17.5,11.5)[br]{$q$}
\FAProp(17.5,5)(12,10.)(0.,){/Straight}{1}
\FALabel(17.5,8.5)[tr]{$q$}
\FAVert(5.5,10.){0}
\FAVert(12.,10.){0}
\end{feynartspicture}
}
\vspace*{-2.5em}
\caption{Lowest-order diagrams for $\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}$.}
\label{fi:borndiags_qq}
\end{figure}
\begin{figure}
\centerline{\footnotesize
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(5.5,10.)(0.,){/Straight}{1}
\FALabel(1,13)[tr]{$\mathrm{e}$}
\FAProp(0.,5.)(5.5,10.)(0.,){/Straight}{-1}
\FALabel(-.2,8)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(15.5,13.5)(0.,){/Straight}{-1}
\FALabel(17.5,16)[br]{$q$}
\FAProp(20.,10.)(15.5,13.5)(0.,){/Sine}{0}
\FALabel(15,9)[bl]{$\gamma$}
\FAProp(20.,3.)(12.,10.)(0.,){/Straight}{1}
\FALabel(16,5.5)[tr]{$q$}
\FAProp(5.5,10.)(12.,10.)(0.,){/Sine}{0}
\FALabel(8.75,8.93)[t]{$\gamma,\mathrm{Z}$}
\FAProp(15.5,13.5)(12.,10.)(0.,){/Straight}{-1}
\FALabel(13.134,12.366)[br]{$q$}
\FAVert(5.5,10.){0}
\FAVert(15.5,13.5){0}
\FAVert(12.,10.){0}
\end{feynartspicture}
\hspace{2em}
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(5.5,10.)(0.,){/Straight}{1}
\FALabel(1,13)[tr]{$\mathrm{e}$}
\FAProp(0.,5.)(5.5,10.)(0.,){/Straight}{-1}
\FALabel(-.2,8)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(11.5,10.)(0.,){/Straight}{-1}
\FALabel(15,14)[br]{$q$}
\FAProp(20.,10.)(15.5,6.5)(0.,){/Sine}{0}
\FALabel(15.5,8.5)[bl]{$\gamma$}
\FAProp(20.,3.)(15.5,6.5)(0.,){/Straight}{1}
\FALabel(17,4)[tr]{$q$}
\FAProp(5.5,10.)(11.5,10.)(0.,){/Sine}{0}
\FALabel(8.75,8.93)[t]{$\gamma,\mathrm{Z}$}
\FAProp(11.5,10.)(15.5,6.5)(0.,){/Straight}{-1}
\FALabel(12.9593,7.56351)[tr]{$q$}
\FAVert(5.5,10.){0}
\FAVert(11.5,10.){0}
\FAVert(15.5,6.5){0}
\end{feynartspicture}
\hspace{2em}
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,17.)(4.25,13.5)(0.,){/Straight}{1}
\FALabel(1,15)[tr]{$\mathrm{e}$}
\FAProp(4.25,13.5)(8.5,10.)(0.,){/Straight}{1}
\FALabel(5.5,11)[tr]{$\mathrm{e}$}
\FAProp(0.,3.)(8.5,10.)(0.,){/Straight}{-1}
\FALabel(-.2,6)[tl]{$\mathrm{e}$}
\FAProp(20.,15)(14.5,10.)(0.,){/Straight}{-1}
\FALabel(20,11.5)[br]{$q$}
\FAProp(4.25,13.5)(8.75,17)(0.,){/Sine}{0}
\FALabel(7.5,13)[bl]{$\gamma$}
\FAProp(20,5)(14.5,10.)(0.,){/Straight}{1}
\FALabel(20,8.5)[tr]{$q$}
\FAProp(8.5,10.)(14.5,10.)(0.,){/Sine}{0}
\FALabel(10.75,8.93)[t]{$\gamma,\mathrm{Z}$}
\FAVert(8.5,10.){0}
\FAVert(14.5,10.){0}
\FAVert(4.25,13.5){0}
\end{feynartspicture}
\hspace{2em}
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,17.)(8.5,10)(0.,){/Straight}{1}
\FALabel(1,15)[tr]{$\mathrm{e}$}
\FAProp(0.,3.)(4.25,6.5)(0.,){/Straight}{-1}
\FALabel(-.2,6)[tl]{$\mathrm{e}$}
\FAProp(4.25,6.5)(8.5,10.)(0.,){/Straight}{-1}
\FALabel(5.5,10)[tr]{$\mathrm{e}$}
\FAProp(20.,15)(14.5,10.)(0.,){/Straight}{-1}
\FALabel(20,11.5)[br]{$q$}
\FAProp(4.25,6.5)(8.75,3)(0.,){/Sine}{0}
\FALabel(4.5,1.5)[bl]{$\gamma$}
\FAProp(20,5)(14.5,10.)(0.,){/Straight}{1}
\FALabel(20,8.5)[tr]{$q$}
\FAProp(8.5,10.)(14.5,10.)(0.,){/Sine}{0}
\FALabel(10.75,8.93)[t]{$\gamma,\mathrm{Z}$}
\FAVert(8.5,10.){0}
\FAVert(14.5,10.){0}
\FAVert(4.25,6.5){0}
\end{feynartspicture}
}
\vspace{-2em}
\caption{Lowest-order diagrams for $\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}\gamma$.}
\label{fi:borndiags_qqa}
\end{figure}
The lowest-order partonic cross section for the processes given in
\refeqs{born_qqg} and (\ref{born_qqa}) reads
\begin{equation}
\int{\mathrm{d}}\sigma_{\mathrm{Born}} = \frac{1}{2s}
F_{\mathrm{C}}\sum_{\substack{\sigma,\sigma'=\pm\frac{1}{2}\\ \lambda=\pm 1}}
\frac{1}{4}(1-2P_1\sigma)(1+2P_2\sigma)\,
\int\mathrm{d}\Phi_3 \,
\vert\mathcal{M}_{0}^{\sigma\sigma'\lambda}\vert
^2\,\Theta_{\mathrm{cut}}(\Phi_3),
\label{eq:sigma0}
\end{equation}
where $F_{\mathrm{C}}$ is a colour factor, $P_{1,2}$ are the degrees of beam
polarisation of the incoming $\mathrm{e}^+$ and $\mathrm{e}^-$,
$\mathcal{M}_{0}^{\sigma\sigma'\lambda}$ is the colour-stripped Born
matrix element of the respective process, and the integral over the
three-particle phase space is defined by
\begin{equation}
\int\mathrm{d}\Phi_3=
\left( \prod_{i=3}^5 \int\frac{\mathrm{d}^3 \vec{k}_i}{(2\pi)^3 2k_i^0} \right)\,
(2\pi)^4 \delta\Biggl(k_1+k_2-\sum_{j=3}^5 k_j\Biggr).
\label{eq:dG3}
\end{equation}
For the process \refeq{born_qqa}, $F_{\mathrm{C}}=3$, and for the process
\refeq{born_qqg} $F_{\mathrm{C}}=4$. The dependence of the cross section on the
event-selection cuts is reflected by the step function
$\Theta_{\mathrm{cut}}\left( \Phi_3
\right)$. For the lowest-order cross sections of \refeq{born_qqg} and
\refeq{born_qqa} and the virtual corrections,
$\Theta_{\mathrm{cut}}$ depends on three-particle kinematics. It is
equal to $1$ if the event passes the cuts and equal to $0$ otherwise.
The formula corresponding to the process \refeq{born_qq} can be
obtained from \refeq{eq:sigma0} by omitting the dependence on and the
sum over the polarisation $\lambda$ of photon or gluon, using only the
two-particle phase space $\Phi_2$, and setting $F_{\mathrm{C}}=3$.
\subsection{Virtual corrections}
\label{sec:virt}
\subsubsection{Survey of diagrams and setup of the loop calculation}
We calculate the one-loop EW corrections to the processes given in
\refeqs{born_qq} and (\ref{born_qqg}), and the one-loop QCD
corrections to the process given in \refeq{born_qqa}. For
\refeq{born_qqg} and \refeq{born_qqa} their contributions to the cross
section are generically given by
\begin{equation}
\int{\mathrm{d}}\sigma_{\mathrm{virtual}} = \frac{1}{2s}
F_{\mathrm{C}}\sum_{\substack{\sigma,\sigma'=\pm\frac{1}{2}\\ \lambda=\pm 1}}
\frac{1}{4}(1\!-\!2P_1\sigma)(1\!+\!2P_2\sigma)
\int\mathrm{d}\Phi_3\,
2\mathop{\mathrm{Re}}\nolimits\left[\mathcal{M}_{0}^{\sigma\sigma'\lambda}\left(\mathcal{M}_{1}^{\sigma\sigma'\lambda}\right)^*\right]\!
\Theta_{\mathrm{cut}}\left( \Phi_3 \right),
\label{eq:sigma_virtual}
\end{equation}
where the notation is the same as in the previous section and
$\mathcal{M}_{1}^{\sigma\sigma'\lambda}$ denotes the contributions of
the virtual corrections to the matrix element after splitting off
the colour factor of the corresponding lowest-order amplitude.
The NLO EW virtual corrections to \refeq{born_qq} and
\refeq{born_qqg} receive contributions from self-energy,
vertex, and box diagrams and, in the case with a gluon in the final state,
also from pentagon diagrams. The structural diagrams for the process with
gluon emission containing the generic contributions of all possible
vertex functions are shown in \reffig{fi:genericvertex}. The
structural diagrams for the process without gluon emission can be
obtained by taking the first four and the sixth diagrams of
\reffig{fi:genericvertex} and discarding the outgoing gluon.
\begin{figure}
\centerline{\footnotesize
\begin{tabular}{ccc}
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(5.,10.)(0.,){/Straight}{1}
\FALabel(2.69678,13.1968)[bl]{$\mathrm{e}$}
\FAProp(0.,5.)(5.,10.)(0.,){/Straight}{-1}
\FALabel(2.69678,6.80322)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(16.5,13.5)(0.,){/Straight}{-1}
\FALabel(18.3032,15.6968)[br]{$q$}
\FAProp(20.,10.)(16.5,13.5)(0.,){/Cycles}{0}
\FALabel(17.3032,10.3032)[tr]{$\mathrm{g}$}
\FAProp(20.,3.)(13.,10.)(0.,){/Straight}{1}
\FALabel(16.48,5.98)[tr]{$q$}
\FAProp(9.,10.)(5.,10.)(0.,){/Sine}{0}
\FALabel(7.,8.93)[t]{$\!\gamma,\!\mathrm{Z}$}
\FAProp(9.,10.)(13.,10.)(0.,){/Sine}{0}
\FALabel(11.,8.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAProp(13.,10.)(16.5,13.5)(0.,){/Straight}{1}
\FALabel(14.8032,12.1968)[br]{$q$}
\FAVert(5.,10.){0}
\FAVert(16.5,13.5){0}
\FAVert(13.,10.){0}
\FAVert(9.,10.){-1}
\end{feynartspicture}
&
\hspace{1em}
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(5.,10.)(0.,){/Straight}{1}
\FALabel(3.11602,13.116)[bl]{$\mathrm{e}$}
\FAProp(0.,5.)(5.,10.)(0.,){/Straight}{-1}
\FALabel(3.11602,6.88398)[tl]{$\mathrm{e}$}
\FAProp(20.,16.)(17.,14.)(0.,){/Straight}{-1}
\FALabel(18.1202,15.8097)[br]{$q$}
\FAProp(20.,11.5)(17.,14)(0.,){/Cycles}{0}
\FALabel(17.5,11.6)[tr]{$\mathrm{g}$}
\FAProp(20.,3.)(11.,10.)(0.,){/Straight}{1}
\FALabel(15.8181,7.04616)[bl]{$q$}
\FAProp(14.,12.)(17.,14.)(0.,){/Straight}{0}
\FALabel(15.1202,13.8097)[br]{$q$}
\FAProp(14.,12.)(11.,10.)(0.,){/Straight}{0}
\FALabel(12.1202,11.8097)[br]{$q$}
\FAProp(5.,10.)(11.,10.)(0.,){/Sine}{0}
\FALabel(8.,8.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAVert(5.,10.){0}
\FAVert(17.,14.){0}
\FAVert(11.,10.){0}
\FAVert(14.,12.){-1}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,10.)(0.,){/Straight}{1}
\FALabel(3.18005,13.2121)[bl]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,10.)(0.,){/Straight}{-1}
\FALabel(3.18005,6.78794)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(16.,13.5)(0.,){/Straight}{-1}
\FALabel(17.8154,16.2081)[br]{$q$}
\FAProp(20.,10.)(16.,13.5)(0.,){/Cycles}{0}
\FALabel(17,10.7919)[tr]{$\mathrm{g}$}
\FAProp(20.,3.)(12.,10.)(0.,){/Straight}{1}
\FALabel(15.98,5.48)[tr]{$q$}
\FAProp(6.,10.)(12.,10.)(0.,){/Sine}{0}
\FALabel(9.,8.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAProp(16.,13.5)(12.,10.)(0.,){/Straight}{-1}
\FALabel(13.4593,12.4365)[br]{$q$}
\FAVert(6.,10.){0}
\FAVert(16.,13.5){0}
\FAVert(12.,10.){-1}
\end{feynartspicture}
\\[-1.5em]
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(5.5,10.)(0.,){/Straight}{1}
\FALabel(1,13)[tr]{$\mathrm{e}$}
\FAProp(0.,5.)(5.5,10.)(0.,){/Straight}{-1}
\FALabel(-.2,8)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(15.5,13.5)(0.,){/Straight}{-1}
\FALabel(17.5,16)[br]{$q$}
\FAProp(20.,10.)(15.5,13.5)(0.,){/Cycles}{0}
\FALabel(15,9)[bl]{$\mathrm{g}$}
\FAProp(20.,3.)(12.,10.)(0.,){/Straight}{1}
\FALabel(16,5.5)[tr]{$q$}
\FAProp(5.5,10.)(12.,10.)(0.,){/Sine}{0}
\FALabel(8.75,8.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAProp(15.5,13.5)(12.,10.)(0.,){/Straight}{-1}
\FALabel(13.134,12.366)[br]{$q$}
\FAVert(5.5,10.){-1}
\FAVert(15.5,13.5){0}
\FAVert(12.,10.){0}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,10.)(0.,){/Straight}{1}
\FALabel(3.18005,13.2121)[bl]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,10.)(0.,){/Straight}{-1}
\FALabel(3.18005,6.78794)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(16.,13.5)(0.,){/Straight}{-1}
\FALabel(17.8154,16.2081)[br]{$q$}
\FAProp(20.,10.)(16.,13.5)(0.,){/Cycles}{0}
\FALabel(17.5,10.)[tr]{$\mathrm{g}$}
\FAProp(20.,3.)(12.,10.)(0.,){/Straight}{1}
\FALabel(15.98,5.48)[tr]{$q$}
\FAProp(6.,10.)(12.,10.)(0.,){/Sine}{0}
\FALabel(9.,8.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAProp(16.,13.5)(12.,10.)(0.,){/Straight}{-1}
\FALabel(13.4593,12.4365)[br]{$q$}
\FAVert(6.,10.){0}
\FAVert(12.,10.){0}
\FAVert(16.,13.5){-1}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(10.,10.)(0.,){/Straight}{1}
\FALabel(4.6318,13.2436)[bl]{$\mathrm{e}$}
\FAProp(0.,5.)(10.,10.)(0.,){/Straight}{-1}
\FALabel(4.6318,6.75639)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(15.,13.5)(0.,){/Straight}{-1}
\FALabel(18.3366,16.2248)[br]{$q$}
\FAProp(20.,10.)(15.,13.5)(0.,){/Cycles}{0}
\FALabel(17,10.7752)[tr]{$\mathrm{g}$}
\FAProp(20.,3.)(10.,10.)(0.,){/Straight}{1}
\FALabel(14.98,5.98)[tr]{$q$}
\FAProp(15.,13.5)(10.,10.)(0.,){/Straight}{-1}
\FALabel(12.3366,12.2248)[br]{$q$}
\FAVert(15.,13.5){0}
\FAVert(10.,10.){-1}
\end{feynartspicture}
\\[-1.5em]
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,10.)(0.,){/Straight}{1}
\FALabel(3.18005,13.2121)[bl]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,10.)(0.,){/Straight}{-1}
\FALabel(3.18005,6.78794)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(16.,13.5)(0.,){/Straight}{-1}
\FALabel(17.8154,16.2081)[br]{$q$}
\FAProp(20.,10.)(16.,13.5)(0.,){/Straight}{1}
\FALabel(18,10.7919)[tr]{$q$}
\FAProp(20.,3.)(12.,10.)(0.,){/Cycles}{0}
\FALabel(15.,5.48)[tr]{$\mathrm{g}$}
\FAProp(6.,10.)(12.,10.)(0.,){/Sine}{0}
\FALabel(9.,8.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAProp(16.,13.5)(12.,10.)(0.,){/Sine}{0}
\FALabel(13.4593,12.4365)[br]{$\gamma,\mathrm{Z}$}
\FAVert(6.,10.){0}
\FAVert(16.,13.5){0}
\FAVert(12.,10.){-1}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,10.)(0.,){/Straight}{1}
\FALabel(3.18005,13.2121)[bl]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,10.)(0.,){/Straight}{-1}
\FALabel(3.18005,6.78794)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(14.,10.)(0.,){/Straight}{-1}
\FALabel(17.2902,14.1827)[br]{$q$}
\FAProp(20.,3.)(14.,10.)(0.,){/Straight}{1}
\FALabel(17.2902,5.8173)[tr]{$q$}
\FAProp(20.,10.)(14.,10.)(0.,){/Cycles}{0}
\FALabel(19.,10.72)[b]{$\mathrm{g}$}
\FAProp(6.,10.)(14.,10.)(0.,){/Sine}{0}
\FALabel(10.,8.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAVert(6.,10.){0}
\FAVert(14.,10.){-1}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,17.)(10.,10.)(0.,){/Straight}{1}
\FALabel(5.16337,14.2248)[bl]{$\mathrm{e}$}
\FAProp(0.,3.)(10.,10.)(0.,){/Straight}{-1}
\FALabel(4.66337,5.77519)[tl]{$\mathrm{e}$}
\FAProp(20.,17.)(10.,10.)(0.,){/Straight}{-1}
\FALabel(14.8366,14.2248)[br]{$q$}
\FAProp(20.,3.)(10.,10.)(0.,){/Straight}{1}
\FALabel(14.8366,5.77519)[tr]{$q$}
\FAProp(20.,10.)(10.,10.)(0.,){/Cycles}{0}
\FALabel(17.5,10.72)[b]{$\mathrm{g}$}
\FAVert(10.,10.){-1}
\end{feynartspicture}
\end{tabular}
}
\vspace{-1.5em}
\caption{Contributions of all possible vertex functions to $\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}\mathrm{g}$.}
\label{fi:genericvertex}
\end{figure}
The Feynman diagrams contributing to the 3-point vertex functions are
shown in \reffig{fi:three-point}, those contributing to the 4-point
and 5-point vertex functions in \reffig{fi:boxpenta}. The diagrams
for the Z-boson, photon, and quark self-energies can be found for
example in \citere{Bohm:1986rj}.
\begin{figure}
\centerline{\footnotesize
\begin{tabular}{cccc}
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$\mathrm{e}$}
\FAProp(20.,10.)(14.,10.)(0.,){/Sine}{0}
\FALabel(17.5,10.52)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,15.)(6.,5.)(0.,){/Sine}{0}
\FALabel(5.23,10.)[r]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,15.)(14.,10.)(0.,){/Straight}{1}
\FALabel(10.2451,14)[b]{$\mathrm{e}$}
\FAProp(6.,5.)(14.,10.)(0.,){/Straight}{-1}
\FALabel(10.2451,4.81991)[b]{$\mathrm{e}$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,10.){0}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$\mathrm{e}$}
\FAProp(20.,10.)(14.,10.)(0.,){/Sine}{0}
\FALabel(17.5,10.52)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,15.)(6.,5.)(0.,){/Sine}{0}
\FALabel(5.23,10.)[r]{$\mathrm{W}$}
\FAProp(6.,15.)(14.,10.)(0.,){/Straight}{1}
\FALabel(10.2451,13.1801)[b]{$\nu_e$}
\FAProp(6.,5.)(14.,10.)(0.,){/Straight}{-1}
\FALabel(10.2451,4.81991)[b]{$\nu_e$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,10.){0}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$\mathrm{e}$}
\FAProp(20.,10.)(14.,10.)(0.,){/Sine}{0}
\FALabel(17.5,10.52)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{1}
\FALabel(5.23,10.)[r]{$\nu_e$}
\FAProp(6.,15.)(14.,10.)(0.,){/Sine}{0}
\FALabel(11,13.1801)[b]{$\mathrm{W}$}
\FAProp(6.,5.)(14.,10.)(0.,){/Sine}{0}
\FALabel(11,4.81991)[b]{$\mathrm{W}$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,10.){0}
\end{feynartspicture}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$q$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$q$}
\FAProp(20.,10.)(14.,10.)(0.,){/Sine}{0}
\FALabel(17.5,10.52)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,15.)(6.,5.)(0.,){/Sine}{0}
\FALabel(5.23,10.)[r]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,15.)(14.,10.)(0.,){/Straight}{1}
\FALabel(10.2451,13.1801)[b]{$q$}
\FAProp(6.,5.)(14.,10.)(0.,){/Straight}{-1}
\FALabel(10.2451,4.81991)[b]{$q$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,10.){0}
\end{feynartspicture}
\hspace{1em}
\\[-1.5em]
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$q$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$q$}
\FAProp(20.,10.)(14.,10.)(0.,){/Sine}{0}
\FALabel(17.5,10.52)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,15.)(6.,5.)(0.,){/Sine}{0}
\FALabel(5.23,10.)[r]{$\mathrm{W}$}
\FAProp(6.,15.)(14.,10.)(0.,){/Straight}{1}
\FALabel(10.2451,13.1801)[b]{$q'$}
\FAProp(6.,5.)(14.,10.)(0.,){/Straight}{-1}
\FALabel(10.2451,4.81991)[b]{$q'$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,10.){0}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$q$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$q$}
\FAProp(20.,10.)(14.,10.)(0.,){/Sine}{0}
\FALabel(17.5,10.52)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{1}
\FALabel(5.23,10.)[r]{$q'$}
\FAProp(6.,15.)(14.,10.)(0.,){/Sine}{0}
\FALabel(11,13.1801)[b]{$\mathrm{W}$}
\FAProp(6.,5.)(14.,10.)(0.,){/Sine}{0}
\FALabel(11,4.81991)[b]{$\mathrm{W}$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,10.){0}
\end{feynartspicture}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Cycles}{0}
\FALabel(3.,16.5)[b]{$\mathrm{g}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{1}
\FALabel(3.,1.8)[b]{$q$}
\FAProp(20.,10.)(14.,10.)(0.,){/Straight}{-1}
\FALabel(17.5,10.52)[b]{$q$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3,10.)[l]{$q$}
\FAProp(6.,15.)(14.,10.)(0.,){/Straight}{1}
\FALabel(10.5,13.1801)[b]{$q$}
\FAProp(6.,5.)(14.,10.)(0.,){/Sine}{0}
\FALabel(12,4.81991)[b]{$\gamma,\!\mathrm{Z}$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,10.){0}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Cycles}{0}
\FALabel(3.,16.5)[b]{$\mathrm{g}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{1}
\FALabel(3.,1.8)[b]{$q$}
\FAProp(20.,10.)(14.,10.)(0.,){/Straight}{-1}
\FALabel(17.5,10.52)[b]{$q$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3,10.)[l]{$q'$}
\FAProp(6.,15.)(14.,10.)(0.,){/Straight}{1}
\FALabel(10.5,13.1801)[b]{$q'$}
\FAProp(6.,5.)(14.,10.)(0.,){/Sine}{0}
\FALabel(12,4.81991)[b]{$\mathrm{W}$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,10.){0}
\end{feynartspicture}
\end{tabular}
}
\vspace{-1.5em}
\caption{Diagrams for the $\gamma q\bar{q}$, $\mathrm{Z} q\bar{q}$, $\gamma \mathrm{e}^+\mathrm{e}^-$, $\mathrm{Z} \mathrm{e}^+\mathrm{e}^-$,
and $\mathrm{g} q\bar{q}$ vertex functions.}
\label{fi:three-point}
\end{figure}
\begin{figure}
\centerline{\footnotesize
\begin{tabular}{cccc}
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Sine}{0}
\FALabel(3.,16.07)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$q$}
\FAProp(20.,15.)(14.,15.)(0.,){/Cycles}{0}
\FALabel(17.,16.07)[b]{$\mathrm{g}$}
\FAProp(20.,5.)(14.,5.)(0.,){/Straight}{1}
\FALabel(17.,4.18)[t]{$q$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{1}
\FALabel(6.77,10.)[l]{$q$}
\FAProp(6.,15.)(14.,15.)(0.,){/Straight}{-1}
\FALabel(10.,16.07)[b]{$q$}
\FAProp(6.,5.)(14.,5.)(0.,){/Sine}{0}
\FALabel(10.,3.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAProp(14.,15.)(14.,5.)(0.,){/Straight}{-1}
\FALabel(14.77,10.)[l]{$q$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,15.){0}
\FAVert(14.,5.){0}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Sine}{0}
\FALabel(3.,16.07)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$q$}
\FAProp(20.,15.)(14.,15.)(0.,){/Cycles}{0}
\FALabel(17.,16.07)[b]{$\mathrm{g}$}
\FAProp(20.,5.)(14.,5.)(0.,){/Straight}{1}
\FALabel(17.,4.18)[t]{$q$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{1}
\FALabel(6.77,10.)[l]{$q'$}
\FAProp(6.,15.)(14.,15.)(0.,){/Straight}{-1}
\FALabel(10.,16.07)[b]{$q'$}
\FAProp(6.,5.)(14.,5.)(0.,){/Sine}{0}
\FALabel(10.,3.93)[t]{$\mathrm{W}$}
\FAProp(14.,15.)(14.,5.)(0.,){/Straight}{-1}
\FALabel(14.77,10.)[l]{$q'$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,15.){0}
\FAVert(14.,5.){0}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Sine}{0}
\FALabel(3.,16.07)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$q$}
\FAProp(20.,15.)(14.,15.)(0.,){/Straight}{1}
\FALabel(17.,16.07)[b]{$q$}
\FAProp(20.,5.)(14.,5.)(0.,){/Cycles}{0}
\FALabel(17.,3)[t]{$\mathrm{g}$}
\FAProp(6.,15.)(6.,5.)(0.,){/Sine}{0}
\FALabel(6.77,10.)[l]{$\mathrm{W}$}
\FAProp(6.,15.)(14.,15.)(0.,){/Sine}{0}
\FALabel(10.,16.07)[b]{$\mathrm{W}$}
\FAProp(6.,5.)(14.,5.)(0.,){/Straight}{-1}
\FALabel(10.,3.93)[t]{$q'$}
\FAProp(14.,15.)(14.,5.)(0.,){/Straight}{1}
\FALabel(14.77,10.)[l]{$q'$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,15.){0}
\FAVert(14.,5.){0}
\end{feynartspicture}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$\mathrm{e}$}
\FAProp(20.,15.)(14.,15.)(0.,){/Straight}{1}
\FALabel(17.,16.07)[b]{$q$}
\FAProp(20.,5.)(14.,5.)(0.,){/Straight}{-1}
\FALabel(17.,4.18)[t]{$q$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{1}
\FALabel(6.77,10.)[l]{$\mathrm{e}$}
\FAProp(6.,15.)(14.,15.)(0.,){/Sine}{0}
\FALabel(10.,16.07)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,5.)(14.,5.)(0.,){/Sine}{0}
\FALabel(10.,3.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAProp(14.,15.)(14.,5.)(0.,){/Straight}{1}
\FALabel(14.77,10.)[l]{$q$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,15.){0}
\FAVert(14.,5.){0}
\end{feynartspicture}
\hspace{1em}
\\[-1.5em]
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$\mathrm{e}$}
\FAProp(20.,15.)(14.,15.)(0.,){/Straight}{1}
\FALabel(17.,16.07)[b]{$q$}
\FAProp(20.,5.)(14.,5.)(0.,){/Straight}{-1}
\FALabel(17.,4.18)[t]{$q$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{1}
\FALabel(6.77,10.)[l]{$\nu_\mathrm{e}$}
\FAProp(6.,15.)(14.,15.)(0.,){/Sine}{0}
\FALabel(10.,16.07)[b]{$\mathrm{W}$}
\FAProp(6.,5.)(14.,5.)(0.,){/Sine}{0}
\FALabel(10.,3.93)[t]{$\mathrm{W}$}
\FAProp(14.,15.)(14.,5.)(0.,){/Straight}{1}
\FALabel(14.77,10.)[l]{$q'$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,15.){0}
\FAVert(14.,5.){0}
\end{feynartspicture}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$\mathrm{e}$}
\FAProp(20.,15.)(14.,15.)(0.,){/Straight}{-1}
\FALabel(17.,16.07)[b]{$q$}
\FAProp(20.,10.)(14.,10.)(0.,){/Cycles}{0}
\FALabel(17.,8.4)[t]{$\mathrm{g}$}
\FAProp(20.,5.)(14.,5.)(0.,){/Straight}{1}
\FALabel(17.,4.18)[t]{$q$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{1}
\FALabel(5.23,10.)[r]{$\mathrm{e}$}
\FAProp(6.,15.)(14.,15.)(0.,){/Sine}{0}
\FALabel(10.,16.07)[b]{$\gamma,\!\mathrm{Z}$}
\FAProp(6.,5.)(14.,5.)(0.,){/Sine}{0}
\FALabel(10.,3.93)[t]{$\gamma,\!\mathrm{Z}$}
\FAProp(14.,15.)(14.,10.)(0.,){/Straight}{-1}
\FALabel(13.23,12.5)[r]{$q$}
\FAProp(14.,10.)(14.,5.)(0.,){/Straight}{-1}
\FALabel(13.23,7.5)[r]{$q$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,15.){0}
\FAVert(14.,10.){0}
\FAVert(14.,5.){0}
\end{feynartspicture}
\hspace{1em}
&
\begin{feynartspicture}(82,82)(1,1)
\FADiagram{}
\FAProp(0.,15.)(6.,15.)(0.,){/Straight}{1}
\FALabel(3.,16.07)[b]{$\mathrm{e}$}
\FAProp(0.,5.)(6.,5.)(0.,){/Straight}{-1}
\FALabel(3.,3.93)[t]{$\mathrm{e}$}
\FAProp(20.,15.)(14.,15.)(0.,){/Straight}{-1}
\FALabel(17.,16.07)[b]{$q$}
\FAProp(20.,10.)(14.,10.)(0.,){/Cycles}{0}
\FALabel(17.,8.4)[t]{$\mathrm{g}$}
\FAProp(20.,5.)(14.,5.)(0.,){/Straight}{1}
\FALabel(17.,4.18)[t]{$q$}
\FAProp(6.,15.)(6.,5.)(0.,){/Straight}{1}
\FALabel(5.23,10.)[r]{$\nu_\mathrm{e}$}
\FAProp(6.,15.)(14.,15.)(0.,){/Sine}{0}
\FALabel(10.,16.07)[b]{$\mathrm{W}$}
\FAProp(6.,5.)(14.,5.)(0.,){/Sine}{0}
\FALabel(10.,3.93)[t]{$\mathrm{W}$}
\FAProp(14.,15.)(14.,10.)(0.,){/Straight}{-1}
\FALabel(13.23,12.5)[r]{$q'$}
\FAProp(14.,10.)(14.,5.)(0.,){/Straight}{-1}
\FALabel(13.23,7.5)[r]{$q'$}
\FAVert(6.,15.){0}
\FAVert(6.,5.){0}
\FAVert(14.,15.){0}
\FAVert(14.,10.){0}
\FAVert(14.,5.){0}
\end{feynartspicture}
\end{tabular}
}
\vspace{-1.5em}
\caption{Diagrams for the $\gamma \mathrm{g} q\bar{q}$, $\mathrm{Z} \mathrm{g} q\bar{q}$, $\mathrm{e}^+\mathrm{e}^- q\bar{q}$, and
$\mathrm{e}^+\mathrm{e}^- q\bar{q}\mathrm{g}$
vertex functions.}
\label{fi:boxpenta}
\end{figure}
The symbol $q$ stands for the quarks appearing in
\refeq{born_qq}--\refeq{born_qqa}, the symbol $q'$ for their
weak-isospin partners. Since we neglect the masses of the external
fermions wherever possible, there are no contributions involving the
physical Higgs boson coupling to those particles. For b quarks in the
final state, diagrams with W~bosons also have counterparts where the
W~bosons are replaced by would-be Goldstone bosons, which are included
in the calculation but not shown explicitly in the figures. We also
do not depict diagrams that can be obtained by reversing the charge
flow of the external quark lines in the first six diagrams of
\reffig{fi:genericvertex}.
In total we have $\mathcal{O}(200)$ contributing diagrams in the 't
Hooft--Feynman gauge for the process with gluon emission and
$\mathcal{O}(80)$ for the process without gluon emission, counting
closed-fermion-loop diagrams for each family only once.
The NLO QCD virtual corrections to \refeq{born_qqa} receive
contributions from self-energy, vertex, and box diagrams. The
corresponding Feynman diagrams are shown in
\reffig{fi:genericvertex_qqa}, where we have omitted quark
self-energy contributions. We do not depict diagrams that can be
obtained by either reversing the charge flow of the external
lepton lines in the first diagram or of the external quark lines in
the last three diagrams of \reffig{fi:genericvertex_qqa}. In total we
have $\mathcal{O}(20)$ contributing diagrams in this case.
\begin{figure}
\centerline{\footnotesize
\begin{feynartspicture}(90,90)(1,1)
\FADiagram{}
\FAProp(0.,16.5)(7.,10.)(0.,){/Straight}{1}
\FALabel(4.0747,13.9058)[bl]{$\mathrm{e}$}
\FAProp(0.,2.5)(4.,6.5)(0.,){/Straight}{-1}
\FALabel(2.61602,3.88398)[tl]{$\mathrm{e}$}
\FAProp(20.,18.5)(16.5,14.5)(0.,){/Straight}{-1}
\FALabel(17.5635,17.0407)[br]{$q$}
\FAProp(20.,1.5)(16.5,5.5)(0.,){/Straight}{1}
\FALabel(17.5635,2.95932)[tr]{$q$}
\FAProp(8.,2.5)(4.,6.5)(0.,){/Sine}{0}
\FALabel(6.61602,5.11602)[bl]{$\gamma$}
\FAProp(7.,10.)(4.,6.5)(0.,){/Straight}{1}
\FALabel(4.80315,8.77873)[br]{$\mathrm{e}$}
\FAProp(7.,10.)(12.,10.)(0.,){/Sine}{0}
\FALabel(9.5,11.07)[b]{$\gamma,\mathrm{Z}$}
\FAProp(16.5,14.5)(16.5,5.5)(0.,){/Cycles}{0}
\FALabel(18.27,10.)[l]{$\mathrm{g}$}
\FAProp(16.5,14.5)(12.,10.)(0.,){/Straight}{-1}
\FALabel(13.634,12.866)[br]{$q$}
\FAProp(16.5,5.5)(12.,10.)(0.,){/Straight}{1}
\FALabel(13.634,7.13398)[tr]{$q$}
\FAVert(7.,10.){0}
\FAVert(4.,6.5){0}
\FAVert(16.5,14.5){0}
\FAVert(16.5,5.5){0}
\FAVert(12.,10.){0}
\end{feynartspicture}
\hspace{2em}
\begin{feynartspicture}(90,90)(1,1)
\FADiagram{}
\FAProp(0.,15.)(4.,10.)(0.,){/Straight}{1}
\FALabel(1.26965,12.0117)[tr]{$\mathrm{e}$}
\FAProp(0.,5.)(4.,10.)(0.,){/Straight}{-1}
\FALabel(2.73035,7.01172)[tl]{$\mathrm{e}$}
\FAProp(20.,18.5)(16.,15.5)(0.,){/Straight}{-1}
\FALabel(17.55,17.76)[br]{$q$}
\FAProp(20.,1.5)(14.,6.5)(0.,){/Straight}{1}
\FALabel(15.151,4.26558)[tr]{$q$}
\FAProp(20.,11.5)(16.,15.5)(0.,){/Sine}{0}
\FALabel(18,10)[l]{$\gamma$}
\FAProp(4.,10.)(9.,10.)(0.,){/Sine}{0}
\FALabel(6.5,11.07)[b]{$\gamma,\mathrm{Z}$}
\FAProp(16.,15.5)(14.,14.)(0.,){/Straight}{-1}
\FALabel(14.55,15.51)[br]{$q$}
\FAProp(14.,6.5)(9.,10.)(0.,){/Straight}{1}
\FALabel(11.0911,7.46019)[tr]{$q$}
\FAProp(14.,6.5)(14.,14.)(0.,){/Cycles}{0}
\FALabel(15.07,10.25)[l]{$\mathrm{g}$}
\FAProp(9.,10.)(14.,14.)(0.,){/Straight}{1}
\FALabel(11.0117,12.7303)[br]{$q$}
\FAVert(4.,10.){0}
\FAVert(16.,15.5){0}
\FAVert(14.,6.5){0}
\FAVert(9.,10.){0}
\FAVert(14.,14.){0}
\end{feynartspicture}
\hspace{2em}
\begin{feynartspicture}(90,90)(1,1)
\FADiagram{}
\FAProp(0.,15.)(3.,9.)(0.,){/Straight}{1}
\FALabel(2.09361,12.9818)[bl]{$\mathrm{e}$}
\FAProp(0.,4.)(3.,9.)(0.,){/Straight}{-1}
\FALabel(0.650886,6.81747)[br]{$\mathrm{e}$}
\FAProp(20.,18.5)(16.,15.5)(0.,){/Straight}{-1}
\FALabel(17.55,17.76)[br]{$q$}
\FAProp(20.,1.5)(9.,9.)(0.,){/Straight}{1}
\FALabel(12.1553,5.65389)[tr]{$q$}
\FAProp(20.,8.)(17.,10.5)(0.,){/Sine}{0}
\FALabel(19.3409,7.34851)[tr]{$\gamma$}
\FAProp(3.,9.)(9.,9.)(0.,){/Sine}{0}
\FALabel(6.,7.93)[t]{$\gamma,\mathrm{Z}$}
\FAProp(9.,9.)(11.5,11.5)(0.,){/Straight}{1}
\FALabel(9.63398,10.866)[br]{$q$}
\FAProp(16.,15.5)(17.,10.5)(0.,){/Straight}{-1}
\FALabel(17.5399,13.304)[l]{$q$}
\FAProp(16.,15.5)(11.5,11.5)(0.,){/Cycles}{0}
\FALabel(13.2002,14.1785)[br]{$\mathrm{g}$}
\FAProp(17.,10.5)(11.5,11.5)(0.,){/Straight}{-1}
\FALabel(13.9727,9.955)[t]{$q$}
\FAVert(3.,9.){0}
\FAVert(16.,15.5){0}
\FAVert(9.,9.){0}
\FAVert(17.,10.5){0}
\FAVert(11.5,11.5){0}
\end{feynartspicture}
\hspace{2em}
\begin{feynartspicture}(90,90)(1,1)
\FADiagram{}
\FAProp(0.,15.)(3.,10.)(0.,){/Straight}{1}
\FALabel(0.650886,12.1825)[tr]{$\mathrm{e}$}
\FAProp(0.,5.)(3.,10.)(0.,){/Straight}{-1}
\FALabel(2.34911,7.18253)[tl]{$\mathrm{e}$}
\FAProp(20.,19.)(16.,15.5)(0.,){/Straight}{-1}
\FALabel(17.4593,17.9365)[br]{$q$}
\FAProp(20.,10.)(16.,10.)(0.,){/Straight}{1}
\FALabel(18.,8.93)[t]{$q$}
\FAProp(20.,1.5)(16.,5.)(0.,){/Sine}{0}
\FALabel(17.4593,2.56351)[tr]{$\gamma$}
\FAProp(3.,10.)(10.,10.)(0.,){/Sine}{0}
\FALabel(6.5,11.07)[b]{$\gamma,\mathrm{Z}$}
\FAProp(16.,15.5)(16.,10.)(0.,){/Cycles}{0}
\FALabel(17.92,13.7)[l]{$\mathrm{g}$}
\FAProp(16.,15.5)(10.,10.)(0.,){/Straight}{-1}
\FALabel(12.4326,13.4126)[br]{$q$}
\FAProp(16.,10.)(16.,5.)(0.,){/Straight}{1}
\FALabel(14.93,7.5)[r]{$q$}
\FAProp(16.,5.)(10.,10.)(0.,){/Straight}{1}
\FALabel(13.2559,6.14907)[tr]{$q$}
\FAVert(3.,10.){0}
\FAVert(16.,15.5){0}
\FAVert(16.,10.){0}
\FAVert(16.,5.){0}
\FAVert(10.,10.){0}
\end{feynartspicture}
}
\vspace{-1.5em}
\caption{Sample diagrams for virtual QCD corrections to the process
$\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}\gamma$.}
\label{fi:genericvertex_qqa}
\end{figure}
We treat the gauge-boson widths using the complex-mass scheme, which
has been worked out at the Born level in \citere{Denner:1999gp} and at
the one-loop level in \citere{Denner:2005fg}. In this framework the
masses of the Z and the W boson are complex quantities, defined at the
pole of the corresponding propagator in the complex plane. As a
consequence, derived quantities like the weak mixing angle also become
complex, and the renormalisation procedure has to be slightly
modified. Introducing complex masses everywhere in the Feynman rules
preserves all algebraic relations like Ward identities and therefore
also gauge invariance. Terms that break unitarity are beyond the
one-loop level (see \citere{Denner:2005fg}).
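The core of the scheme can be sketched in a few lines. The following Python fragment (our illustration, with illustrative mass and width values rather than the actual input parameters of this calculation) shows how the complex pole masses induce a complex weak mixing angle:

```python
# Sketch of the complex-mass scheme (illustrative values, not the
# input parameters of this calculation): the squared gauge-boson
# masses acquire a negative imaginary part fixed by the width, and
# the weak mixing angle derived from them becomes complex as well.
MW, GammaW = 80.4, 2.1   # W mass and width in GeV (illustrative)
MZ, GammaZ = 91.2, 2.5   # Z mass and width in GeV (illustrative)

mu2_W = MW**2 - 1j * MW * GammaW   # complex squared pole mass of the W
mu2_Z = MZ**2 - 1j * MZ * GammaZ   # complex squared pole mass of the Z

cos2_theta_w = mu2_W / mu2_Z       # complex cosine^2 of the mixing angle
sin2_theta_w = 1 - cos2_theta_w    # enters all couplings as a complex number
```

Using the complex masses and the couplings derived from them consistently in all Feynman rules is what preserves the algebraic relations mentioned above.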
\subsubsection{Algebraic reduction of spinor chains}
We have performed two independent calculations of the virtual corrections.
In {\it version 1} of our calculation
we generated the amplitudes using {\sc FeynArts}~3.2
\cite{Hahn:2000kx} and employed {\sc FormCalc}~5
\cite{Hahn:1998yk} to algebraically manipulate the amplitudes, which
led to 150 different spinor structures. In order to reduce the number
of spinor structures, we applied the algorithm described in
\citere{Denner:2005fg} and extended it to the case with one external
gauge boson. In this way, we reduced all occurring spinor chains to
$\mathcal{O}(20)$ standard structures, the standard matrix elements
(SMEs), without creating coefficients that lead to numerical problems.
After the reduction of the spinor structures, we separate the matrix
elements into invariant coefficients $F_n$ and
SMEs $\hat{\mathcal{M}}$. The $F_n$ are linear
combinations of one-loop integrals with coefficients depending on
scalar kinematical variables, particle masses, and coupling factors,
while the $\hat{\mathcal{M}}$ contain all spinorial objects and
the dependence on the helicities of the external particles (see
e.g.\ \citere{Denner:1991kt}):
\begin{equation}
\mathcal{M}^{\sigma\sigma' \lambda}=\sum_{n} F_n^{\sigma\sigma' \lambda}\left(\{s,s_{ij},s_{ai}\}\right)
\hat{\mathcal{M}}^{\sigma\sigma' \lambda}_n \left(k_1,k_2,k_3,k_4,k_5\right).
\end{equation}
All contributions to the matrix element involve the product of two
spinor chains corresponding to the incoming leptonic and the outgoing
hadronic current. These spinor chains can be contracted with one
another, with external momenta, or with the polarisation vector
of the outgoing photon or gluon.
In the following we describe the strategy to reduce all occurring
products of spinor chains and polarisation vectors to a few standard
structures. In this section we choose all particles incoming and
use the short-hand notation
\begin{equation}
[A]^\pm_{ab}=\bar{v}_a(k_a)\,A\,\omega_\pm\,u_b(k_b)
\end{equation}
for a spinor chain, where $\bar{v}_a(k_a)$ and $u_b(k_b)$ are spinors
for antifermions and fermions, respectively, with the chirality
projectors $\omega_\pm=(1\pm\gamma_5)/2$. We denote the external
polarisation vector by $\varepsilon$. Since we work with massless
external fermions, only odd numbers of Dirac matrices occur inside the
spinor chains.
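These conventions can be checked numerically. The sketch below is our own helper code; it assumes the chiral representation and the metric $\mathrm{diag}(+,-,-,-)$, neither of which is fixed by the text. It constructs massless helicity spinors and verifies the massless Dirac equation and the mass-shell condition used in the reduction:

```python
import numpy as np

# Numerical check of the spinor conventions (our helper code; the
# chiral representation and metric diag(+,-,-,-) are assumptions).
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def slash(k):
    """gamma^mu k_mu in the chiral representation."""
    kdotsig = k[0] * I2 - (k[1] * sx + k[2] * sy + k[3] * sz)
    kdotsigbar = k[0] * I2 + (k[1] * sx + k[2] * sy + k[3] * sz)
    return np.block([[Z2, kdotsig], [kdotsigbar, Z2]])

def u_spinor(k, hel):
    """Massless spinor u_pm(k) built from helicity eigenstates."""
    khat = np.array(k[1:]) / k[0]
    H = khat[0] * sx + khat[1] * sy + khat[2] * sz
    vals, vecs = np.linalg.eigh(H)            # eigenvalues sorted: -1, +1
    chi = vecs[:, 1] if hel > 0 else vecs[:, 0]
    parts = ([0, 0] + list(chi)) if hel > 0 else (list(chi) + [0, 0])
    return np.sqrt(2 * k[0]) * np.array(parts, dtype=complex)

gamma0 = slash(np.array([1.0, 0.0, 0.0, 0.0]))
gamma5 = np.diag([-1, -1, 1, 1]).astype(complex)
omega_p = (np.eye(4) + gamma5) / 2            # chirality projector omega_+
omega_m = (np.eye(4) - gamma5) / 2            # chirality projector omega_-

k = np.array([5.0, 3.0, 0.0, 4.0])            # massless: 5^2 = 3^2 + 4^2
for hel in (+1, -1):
    u = u_spinor(k, hel)
    ubar = u.conj() @ gamma0
    assert np.allclose(slash(k) @ u, 0)       # slash(k) u(k) = 0
    assert np.allclose(ubar @ slash(k), 0)    # ubar(k) slash(k) = 0
assert np.allclose(slash(k) @ slash(k), 0)    # mass shell: slash(k)^2 = k^2 = 0
```

A spinor chain $[A]^\pm_{ab}$ is then simply the complex number `vbar @ A @ omega_p @ u` for matrices `A` built from `slash`.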
The objects we want to simplify are of the form
\begin{equation}
\bar{v}_1(k_1)\,A\,\omega_\rho\,u_2(k_2)\,\times\,\bar{v}_3(k_3)\,B\,
\omega_\tau\,u_4(k_4)=[A]^{\rho}_{12}[B]^{\tau}_{34}.
\end{equation}
We make use of the Dirac algebra, the Dirac equation for the external
fermions, transversality of the polarisation vector, momentum
conservation, and relations resulting from the four-dimensionality of
space--time, which can be exploited after the cancellation of
UV divergences, which are dimensionally regularised in our work.%
\footnote{IR divergences, which may be regularised dimensionally, do
not pose problems in this context, as shown explicitly in the
appendix of \citere{Bredenstein:2008zb}. This fact is confirmed in
our calculations where we alternatively used a four-dimensional mass
regularisation scheme or dimensional regularisation, leading to the
same IR-finite sum of virtual and real corrections.} In four
dimensions one can relate a product of three Dirac matrices to a sum
where each term only consists of a single Dirac matrix multiplied by
either the metric tensor $g^{\mu\nu}$ or $\gamma_5$ and the totally
antisymmetric tensor $\epsilon^{\mu\nu\rho\sigma}$ through the
Chisholm identity as
\begin{equation}
\gamma^\mu\gamma^\nu\gamma^\rho=g^{\mu\nu}\gamma^\rho - g^{\mu\rho}\gamma^\nu + g^{\nu\rho}\gamma^\mu
+{\mathrm{i}}\epsilon^{\mu\nu\rho\sigma}\gamma_\sigma\gamma_5.
\end{equation}
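The sign conventions in this identity are easy to get wrong, so a brute-force numerical check is useful. The following sketch is our own code and assumes the chiral representation, $g=\mathrm{diag}(+,-,-,-)$, $\gamma_5=\mathrm{i}\gamma^0\gamma^1\gamma^2\gamma^3$, and $\epsilon^{0123}=+1$; it verifies the identity for all index combinations:

```python
import numpy as np

# Brute-force check of the Chisholm identity (our code; chiral
# representation, metric diag(+,-,-,-), eps^{0123} = +1 assumed).
I2 = np.eye(2, dtype=complex)
sig = [np.array(m, dtype=complex) for m in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
Z2 = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z2, I2], [I2, Z2]])]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sig]
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
g = np.diag([1.0, -1.0, -1.0, -1.0])

def eps(*idx):
    """Totally antisymmetric tensor with eps^{0123} = +1."""
    if len(set(idx)) < 4:
        return 0
    sign, lst = 1, list(idx)
    for a in range(4):
        for b in range(a + 1, 4):
            if lst[a] > lst[b]:
                sign = -sign
    return sign

for mu in range(4):
    for nu in range(4):
        for rho in range(4):
            lhs = gamma[mu] @ gamma[nu] @ gamma[rho]
            rhs = (g[mu, nu] * gamma[rho] - g[mu, rho] * gamma[nu]
                   + g[nu, rho] * gamma[mu])
            for si in range(4):  # gamma_sigma = g_{sigma sigma} gamma^sigma
                rhs = rhs + 1j * eps(mu, nu, rho, si) * g[si, si] \
                      * gamma[si] @ gamma5
            assert np.allclose(lhs, rhs)
```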
A further consequence of the four-dimensionality of space--time is
the fact that one can decompose the metric tensor $g^{\mu\nu}$ in
terms of four linearly independent orthonormal basis vectors $n_l$
(see e.g.\ \citeres{Denner:2003iy,Denner:2003zp})
\begin{equation}
g^{\mu\nu}
=\sum_{k,l=0}^{3} g_{kl}\, n_k^\mu n_l^\nu
=n_0^\mu n_0^\nu-\sum_{l=1}^3 n_l^\mu n_l^\nu,
\label{decomp-metric-tensor}
\end{equation}
with $(n_k n_l)=g_{kl}$, where
$(g_{kl})=\mathrm{diag}(1,-1,-1,-1)$. A convenient choice of the four
vectors $n_l$ in terms of three linearly independent massless external
momenta $k_i,k_j,k_k$ is given by
\begin{eqnarray}
n_0^\mu(k_i,k_j,k_k)&=&\frac{1}{\sqrt{2(k_ik_j)}}\left(k_i^\mu+k_j^\mu\right),\quad
n_1^\mu(k_i,k_j,k_k)=\frac{1}{\sqrt{2(k_ik_j)}}\left(k_i^\mu-k_j^\mu\right),\quad\nonumber\\
n_2^\mu(k_i,k_j,k_k)&=&-\frac{1}{\sqrt{2(k_ik_j)(k_ik_k)(k_jk_k)}}\Big\lbrack(k_jk_k)k_i^\mu
+(k_ik_k)k_j^\mu-(k_ik_j)k_k^\mu\Big\rbrack,\nonumber\\
n_3^\mu(k_i,k_j,k_k)&=&-\frac{1}{\sqrt{2(k_ik_j)(k_ik_k)(k_jk_k)}}
\epsilon^{\mu\nu\rho\sigma} k_{i,\nu}k_{j,\rho}k_{k,\sigma}.
\label{eq:nvec}
\end{eqnarray}
In particular, the construction of the fourth independent momentum via
the totally antisymmetric tensor $\epsilon^{\mu\nu\rho\sigma}$
avoids the appearance of inverse Gram determinants.
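The construction in \refeq{eq:nvec} can be verified numerically. The sketch below is our own code (the metric $\mathrm{diag}(+,-,-,-)$ and $\epsilon^{0123}=+1$ are assumed conventions); it builds the basis from three massless momenta and checks both the orthonormality relation $(n_k n_l)=g_{kl}$ and the decomposition \refeq{decomp-metric-tensor}:

```python
import numpy as np
from itertools import permutations

# Numerical check of the basis vectors n_0..n_3 (our code; metric
# diag(+,-,-,-) and eps^{0123} = +1 are assumed conventions).
g = np.diag([1.0, -1.0, -1.0, -1.0])

def mdot(p, q):
    """Minkowski product p.q."""
    return p @ g @ q

def massless(three_momentum):
    """Light-like four-momentum with the given spatial part."""
    v = np.asarray(three_momentum, dtype=float)
    return np.concatenate(([np.linalg.norm(v)], v))   # k^2 = 0

def perm_sign(p):
    sign, p = 1, list(p)
    for a in range(4):
        for b in range(a + 1, 4):
            if p[a] > p[b]:
                sign = -sign
    return sign

def basis(ki, kj, kk):
    nij = np.sqrt(2 * mdot(ki, kj))
    n0 = (ki + kj) / nij
    n1 = (ki - kj) / nij
    N = np.sqrt(2 * mdot(ki, kj) * mdot(ki, kk) * mdot(kj, kk))
    n2 = -(mdot(kj, kk) * ki + mdot(ki, kk) * kj - mdot(ki, kj) * kk) / N
    n3 = np.zeros(4)
    for mu, nu, rho, si in permutations(range(4)):    # eps contraction
        n3[mu] += (perm_sign((mu, nu, rho, si)) * g[nu, nu] * ki[nu]
                   * g[rho, rho] * kj[rho] * g[si, si] * kk[si])
    n3 = -n3 / N
    return np.array([n0, n1, n2, n3])

ki = massless([1.0, 2.0, 2.0])
kj = massless([3.0, -1.0, 0.5])
kk = massless([-2.0, 1.0, 4.0])
n = basis(ki, kj, kk)

gram = np.array([[mdot(a, b) for b in n] for a in n])
assert np.allclose(gram, g)                       # (n_k n_l) = g_kl
recon = sum(g[l, l] * np.outer(n[l], n[l]) for l in range(4))
assert np.allclose(recon, g)                      # metric decomposition
```

Note that `n[3]` is orthogonal to all three input momenta by construction, so no inverse Gram determinant enters at this stage.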
For the reduction of spinor chains we use the following algorithm. In
the first step we disconnect two spinor chains which are contracted
with each other using the decomposition \refeq{decomp-metric-tensor},
\begin{equation}
\gamma_\mu \otimes\gamma^\mu=\mathpalette\make@slash{n}_0\otimes\mathpalette\make@slash{n}_0-\sum_{l=1}^3\mathpalette\make@slash{n}_l\otimes\mathpalette\make@slash{n}_l.
\end{equation}
The choice of the external momenta in the above decomposition strongly
depends on the position of the contracted Dirac matrices inside the
spinor chain. It is advantageous to choose them in such a way that one
can make use of the Dirac equations
\mbox{$\bar{v}(k_i)\mathpalette\make@slash{k}_i=0$}
and \mbox{$\mathpalette\make@slash{k}_i\,u(k_i)=0$} and the
mass-shell condition $\mathpalette\make@slash{k}_i^2=k_i^2=0$ in a very direct manner,
avoiding unnecessary anticommutations. We follow the algorithm
described in detail in \citere{Denner:2005fg}. After simplifying the
expressions using the identities above, there are remaining
contributions from the contraction of a basis vector $n_3$ with a
Dirac matrix inside the spinor chains, which can be eliminated by
employing the Chisholm identity as
\begin{equation}
\mathpalette\make@slash{n}_3(k_i,k_j,k_k)=-\frac{{\mathrm{i}}\big[\mathpalette\make@slash{k}_i\mathpalette\make@slash{k}_j\mathpalette\make@slash{k}_k-(k_ik_j)\mathpalette\make@slash{k}_k+(k_ik_k)\mathpalette\make@slash{k}_j
-(k_jk_k)\mathpalette\make@slash{k}_i\big]\gamma_5}{\sqrt{2(k_ik_j)(k_ik_k)(k_jk_k)}}.
\label{N3slash}
\end{equation}
In the calculation at hand, we have to deal with a maximum of three
contractions between the two spinor chains. After disconnecting them,
we are left with objects of the form
$\left[\mathpalette\make@slash{p}\right]_{ab}^\pm\,$, where the vector $p$ can either
be an external momentum $k_j$, $j\neq a,b$, or the polarisation vector
$\varepsilon$ of the external gluon or photon.
In the next step, we reduce the spinor chains that do not involve
$\varepsilon$ using the relation
\begin{eqnarray}
\mathpalette\make@slash{k}_m&=
k_{m,\mu} g^{\mu\nu}\gamma_\nu
\stackrel{\mbox{\scriptsize{(\ref{decomp-metric-tensor})}}}{=}
k_{m,\mu}n_0^\mu\mathpalette\make@slash{n}_0-\sum_{l=1}^3
k_{m,\mu}n_l^\mu\mathpalette\make@slash{n}_l.
\label{loosep}
\end{eqnarray}
By choosing the indices $i,j,k$ in \refeq{eq:nvec} appropriately, we
can eliminate all external momenta in the spinor chains apart from one
for each chain via
\begin{eqnarray}
\Bigl[ \mathpalette\make@slash{k}_m \Bigr]_{ab}^{\pm}
&\stackrel{\mbox{\scriptsize{(\ref{loosep})}}}{=} &
k_{m,\mu} \,
\sum_{i=0}^3 g^{ii} \,
n^{\mu}_i(k_a,k_b,k_n) \,
\Bigl[ \mathpalette\make@slash{n}_i(k_a,k_b,k_n)
\Bigr]_{ab}^{\pm} \qquad \nonumber \\
& \stackrel{\mbox{\scriptsize{(\ref{N3slash})}}}{=} &
\frac{ (k_a k_n)\,(k_b k_m)
-(k_a k_b)\,(k_n k_m)
+(k_a k_m)\,(k_n k_b)
\pm{\mathrm{i}}\,\epsilon^{\mu\nu\rho\sigma}
k_{a,\mu}k_{n,\nu}k_{b,\rho}k_{m,\sigma} }
{2\,(k_a k_n)\,(k_b k_n)} \,
\Bigl[ \mathpalette\make@slash{k}_n \Bigr]_{ab}^{\pm} ,
\label{Step2Case1}
\hspace{3em}
\end{eqnarray}
where $m\ne a,b,n$.
The described reduction allows us to express all occurring spinor
structures in terms of a linear combination of 20 SMEs
\begin{equation}
\left[\varepsilon\right]^\sigma_{12}\left[k_1\right]^\tau_{34},\,\,
\left[k_3\right]^\sigma_{12}\left[\varepsilon\right]^\tau_{34},\,\,
\mysp[\varepsilon,,k,1]\left[k_3\right]^\sigma_{12}\left[k_1\right]^\tau_{34},\,\,
\mysp[\varepsilon,,k,2]\left[k_3\right]^\sigma_{12}\left[k_1\right]^\tau_{34},\,\,
\mysp[\varepsilon,,k,3]\left[k_3\right]^\sigma_{12}\left[k_1\right]^\tau_{34}.
\label{SMEeps}
\end{equation}
The reduction to this basis introduces at most two summands per spinor
chain.
Inserting the SMEs \refeq{SMEeps} into the amplitude
reduces its size by a factor of two. Since different reduction
strategies did not lead to more concise results, we chose to use the
SMEs given in \refeq{SMEeps} in this calculation.
For the virtual corrections to $\sigma_{\mathrm{had}}$,
where $k_5$ and $\varepsilon$ are absent, we use the four SMEs
\begin{equation}
\left[k_3\right]^\sigma_{12}\left[k_1\right]^\tau_{34}.
\label{SMEsigmahad}
\end{equation}
{\it Version 2} of our algebraic calculation starts from diagrammatic
expressions for the one-loop corrections generated by {\sc FeynArts}
1.0 \cite{Kublbeck:1990xc} and proceeds with the algebraic evaluation
of the loop amplitudes with an in-house {\sc Mathematica} program.
For the $2\to3$ process, the algebraic manipulations do not make use
of four-dimensional identities for the Dirac chains, so that a larger
set of 64 SMEs had to be introduced.
corresponding invariants $F_n$ UV~finite upon adding the counterterms
from the renormalisation and IR~finite upon including the ``endpoint
contributions'' from the subtraction function (see
\refsec{subtraction}), the SMEs are evaluated in four space--time
dimensions using the spinor formalism described in
\citere{Dittmaier:1998nn}. For the $2\to2$ process the only Dirac
structure that involves divergences is
$[\gamma^\mu]^\sigma_{12}[\gamma_\mu]^\tau_{34}$, and all other SMEs
are proportional to this structure, as can be deduced from the
identities given in (3.10) of \citere{Denner:2005fg}.
\subsubsection{Evaluation of the loop integrals}
The tensor integrals are evaluated as in the calculation of
\citeres{Denner:2005es,Denner:2005fg}, i.e.\ they are numerically
reduced to scalar master integrals. The master integrals are computed
using complex masses according to
\citeres{'tHooft:1978xw,Beenakker:1988jr,Denner:1991qq}, using two
independent Fortran implementations which are in mutual agreement.
Results for different regularisation schemes are translated into each
other with the method of \citere{Dittmaier:2003bc}. Tensor and scalar
5-point functions are directly expressed in terms of 4-point integrals
\cite{Denner:2002ii,Denner:2005nn}. Tensor 4-point and 3-point
integrals are reduced to scalar integrals with the Passarino--Veltman
algorithm \cite{Passarino:1978jh}. Although we already find sufficient
numerical stability with this procedure, we apply the dedicated
expansion methods of \citere{Denner:2005nn} in exceptional phase-space
regions where small Gram determinants appear.
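The numerical danger posed by small Gram determinants can be made explicit in a toy setting: the Passarino--Veltman reduction solves a linear system whose coefficient matrix is built from the invariants $2k_ik_j$, and this matrix degenerates when external momenta become collinear. The following sketch is our own illustration (not code from the calculation) of how the $2\times2$ Gram determinant of two massless momenta collapses:

```python
import math

def minkowski_dot(p, q):
    # Metric convention (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def massless_momentum(E, theta):
    # massless four-momentum in the x-z plane
    return (E, E*math.sin(theta), 0.0, E*math.cos(theta))

def gram_det(k1, k2):
    # 2x2 Gram determinant det(2 k_i.k_j) entering the reduction
    g11 = 2.0*minkowski_dot(k1, k1)
    g12 = 2.0*minkowski_dot(k1, k2)
    g22 = 2.0*minkowski_dot(k2, k2)
    return g11*g22 - g12*g12

k1 = massless_momentum(50.0, 0.0)
for theta in (1.0, 1e-2, 1e-4):
    # as k2 becomes collinear with k1, the determinant vanishes ~ theta^4
    print(theta, gram_det(k1, massless_momentum(50.0, theta)))
```

Dividing by such a vanishing determinant in the reduction amplifies rounding errors, which is why dedicated expansions are needed in these phase-space regions.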
UV divergences are regularised dimensionally. For the IR, i.e.\ soft
or collinear, divergences we either use pure dimensional
regularisation with massless gluons, photons, and fermions (except for
the top quark), or pure mass regularisation with infinitesimal photon,
gluon, and small fermion masses, which are only kept in the
mass-singular logarithms. When using dimensional regularisation, the
rational terms of IR origin are treated as described in Appendix~A of
\citere{Bredenstein:2008zb}.
\subsection{Real Corrections}
\label{sec:real}
In this section we describe how we evaluate the last two terms in
\refeq{master_eq}. The real radiation corrections to the total
hadronic cross section [last term in \refeq{master_had}] are computed
along the same lines.
The processes we consider are given by
\begin{align}
&\mathrm{e}^+(k_1,\sigma_1)+\mathrm{e}^-(k_2,\sigma_2)\rightarrow
q(k_3,\sigma_3)+\bar{q}(k_4,\sigma_4)
+\mathrm{g}(k_5,\lambda_1)+\gamma(k_6,\lambda_2),\quad q=\mathrm{u,d,c,s,b},
\label{process-brems}
\\
&\mathrm{e}^+(k_1,\sigma_1)+\mathrm{e}^-(k_2,\sigma_2)\rightarrow
q(k_3,\sigma_3)+\bar{q}(k_4,\sigma_4)
+q(k_5,\sigma_5)+\bar{q}(k_6,\sigma_6)
,\quad q=\mathrm{u,d,c,s,b}.
\label{process-4q}
\end{align}
The corresponding cross section is obtained as
\begin{eqnarray}
\int{\mathrm{d}}\sigma_{\mathrm{real}} &=& \frac{1}{2s} \,
F_{\mathrm{C}}\sum_{\sigma,\sigma'=\pm\frac{1}{2}}
\frac{1}{4}(1-2P_1\sigma)(1+2P_2\sigma)\,
\biggl\{
\sum_{\lambda_1,\lambda_2=\pm 1}
\int\mathrm{d}\Phi_4 \,
\vert\mathcal{M}_{\mathrm{real},q\bar q
g\gamma}^{\sigma\sigma'\lambda_1\lambda_2}\vert^2 \,
\Theta_{\mathrm{cut}}\left( \Phi_4 \right) \nonumber \\
& & {}+ \sum_{\sigma''=\pm\frac{1}{2}}
\int\mathrm{d}\Phi_4 \,
2\mathop{\mathrm{Re}}\nolimits\left\{
\left(\mathcal{M}_{\mathrm{real},q\bar q q\bar q}^{\sigma\sigma'\sigma'',\mathrm{EW}}\right)^*
\mathcal{M}_{\mathrm{real},q\bar q q\bar q}^{\sigma\sigma'\sigma'',\mathrm{QCD}}
\right\}
\Theta_{\mathrm{cut}}\left( \Phi_4 \right)\biggr\} ,
\label{eq:sigma_real}
\end{eqnarray}
where we have used helicity conservation to simplify the helicity sum,
such that $\sigma$ denotes the helicity of the incoming electron, and
$\sigma'$ and $\sigma''=\sigma_5=-\sigma_6$ the helicities of the
outgoing quarks.
The four-particle phase-space volume element reads
\begin{equation}
{\mathrm{d}}\Phi_4=\frac{1}{\left(2\pi\right)^{12}}\frac{{\mathrm{d}}^3 \vec k_3}{2k_3^0}
\frac{{\mathrm{d}}^3 \vec k_4}{2k_4^0}\frac{{\mathrm{d}}^3 \vec k_5}{2k_5^0}
\frac{{\mathrm{d}}^3 \vec k_6}{2k_6^0}
\left(2\pi\right)^{4}\delta^{(4)}\left(k_1+k_2-k_3-k_4-k_5-k_6\right),
\label{ps4}
\end{equation}
and $F_{\mathrm{C}}=4$ for both contributions. As in \refeq{eq:sigma0},
$\Theta_{\mathrm{cut}}\left( \Phi_4 \right)$ represents cuts used in the
event selection.
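For orientation, a flat massless $n$-particle phase space of the form \refeq{ps4} can be sampled with a RAMBO-type algorithm: isotropic massless momenta are generated with unconstrained total momentum, then boosted and rescaled so that they sum to the fixed total momentum. The sketch below is a minimal illustration only and not the parametrisation used in our Monte Carlo integration:

```python
import math
import random

def rambo(n, sqrt_s, rng=None):
    """Flat massless n-particle phase space (RAMBO-style sketch).

    Returns n four-momenta (E, px, py, pz) summing to (sqrt_s, 0, 0, 0)."""
    rng = rng or random.Random(12345)
    # step 1: isotropic massless momenta with unconstrained total momentum
    q = []
    for _ in range(n):
        c = 2.0*rng.random() - 1.0                  # cos(theta)
        phi = 2.0*math.pi*rng.random()
        e = -math.log((1.0 - rng.random())*(1.0 - rng.random()))
        s = math.sqrt(1.0 - c*c)
        q.append((e, e*s*math.cos(phi), e*s*math.sin(phi), e*c))
    # step 2: boost to the overall CM frame and rescale to the required energy
    Q = [sum(p[mu] for p in q) for mu in range(4)]
    M = math.sqrt(Q[0]**2 - Q[1]**2 - Q[2]**2 - Q[3]**2)
    b = [-Q[i]/M for i in (1, 2, 3)]
    gamma, a, x = Q[0]/M, 1.0/(1.0 + Q[0]/M), sqrt_s/M
    out = []
    for (e, px, py, pz) in q:
        bq = b[0]*px + b[1]*py + b[2]*pz
        out.append((x*(gamma*e + bq),
                    x*(px + b[0]*(e + a*bq)),
                    x*(py + b[1]*(e + a*bq)),
                    x*(pz + b[2]*(e + a*bq))))
    return out

momenta = rambo(4, 91.1876)
total = [sum(p[mu] for p in momenta) for mu in range(4)]
print(total)   # approximately [91.1876, 0, 0, 0]
```

The boost and rescaling preserve the masslessness of each momentum, so momentum conservation and the on-shell conditions hold by construction.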
We only consider the ${\cal O}(\alpha^3\alpha_{\mathrm{s}})$
corrections in our analysis. At this order,
the process (\ref{process-4q}) receives a contribution only from the
interference of EW with QCD amplitudes, as illustrated in
Fig.~\ref{fi:interference-eeqqqq}. Owing to colour conservation, this
interference term is only non-zero for identical quark flavours
and for $\sigma'=\sigma''$. It is
non-singular over the full phase space defined by the event-selection
cuts.
\begin{figure}
\centerline{\footnotesize
\begin{feynartspicture}(200,90)(2,1)
\FADiagram{}
\FAProp(0.,15.)(8.,14.5)(0.,){/Straight}{1}
\FALabel(4.09669,15.817)[b]{$\mathrm{e}$}
\FAProp(0.,5.)(8.,5.5)(0.,){/Straight}{-1}
\FALabel(4.09669,4.18302)[t]{$\mathrm{e}$}
\FAProp(20.,18.)(14.,15.)(0.,){/Straight}{-1}
\FALabel(16.7868,17.4064)[br]{$q$}
\FAProp(20.,12.)(14.,15.)(0.,){/Straight}{1}
\FALabel(16.7868,12.5936)[tr]{$q$}
\FAProp(20.,8.)(14.,5.)(0.,){/Straight}{-1}
\FALabel(16.7868,7.40636)[br]{$q$}
\FAProp(20.,2.)(14.,5.)(0.,){/Straight}{1}
\FALabel(16.7868,2.59364)[tr]{$q$}
\FAProp(8.,14.5)(8.,5.5)(0.,){/Straight}{1}
\FALabel(6.93,10.)[r]{$\mathrm{e}$}
\FAProp(8.,14.5)(14.,15.)(0.,){/Sine}{0}
\FALabel(10.8713,15.8146)[b]{$\gamma/\mathrm{Z}$}
\FAProp(8.,5.5)(14.,5.)(0.,){/Sine}{0}
\FALabel(10.8713,4.18535)[t]{$\gamma/\mathrm{Z}$}
\FAVert(8.,14.5){0}
\FAVert(8.,5.5){0}
\FAVert(14.,15.){0}
\FAVert(14.,5.){0}
\FAProp(44.,15.)(40.,10.)(0.,){/Straight}{-1}
\FALabel(44.,13.0117)[tr]{$\mathrm{e}$}
\FAProp(44.,5.)(40.,10.)(0.,){/Straight}{1}
\FALabel(44.,7.91172)[tr]{$\mathrm{e}$}
\FAProp(20.,18.)(30.,15.)(0.,){/Straight}{1}
\FAProp(24.,10.)(30.,15.)(0.,){/Cycles}{0}
\FALabel(28.,10.6)[b]{$\mathrm{g}$}
\FAProp(40.,10.)(34.,10.)(0.,){/Sine}{0}
\FALabel(37.,11.07)[b]{$\mathrm{Z}/\gamma$}
\FAProp(30.,15.)(34.,10.)(0.,){/Straight}{1}
\FAProp(20.,2.)(34.,10.)(0.,){/Straight}{-1}
\FAProp(24.,10.)(20.,12.)(0.,){/Straight}{1}
\FAProp(24.,10.)(20.,8.)(0.,){/Straight}{-1}
\FAVert(40.,10.){0}
\FAVert(30.,15.){0}
\FAVert(24.,10.){0}
\FAVert(34.,10.){0}
\DashLine(96,5)(96,85){2}
\end{feynartspicture}
}
\vspace{-1.0em}
\caption{Sample diagram for a non-trivial interference
between EW and QCD diagrams in
$\mathrm{e}^+\mathrm{e}^-\rightarrow q\bar{q}q\bar{q}$.}
\label{fi:interference-eeqqqq}
\end{figure}
The integral of the process (\ref{process-brems}) over the
four-particle phase space contains IR divergences due to the emission
of a soft or collinear photon or gluon. Prior to numerical
implementation, one has to isolate these divergences and combine them
with the corresponding contributions from the virtual corrections. In
our implementation, we use three different methods for this task: two
variants of the phase-space slicing method
\cite{Baer:1988ux,Giele:1991vf,Giele:1993dj,Bohm:1993qx,Dittmaier:1993da,Wackeroth:1996hz,Baur:1998kt},
and the dipole subtraction method
\cite{Catani:1996vz,Dittmaier:1999mb,Dittmaier:2008md}.
In phase-space slicing, the phase space is split into singular and
non-singular regions. In the singular regions the integration over the
singular variables is performed analytically, while the non-singular
regions are integrated over fully numerically. In the dipole
subtraction method, a subtraction function that mimics the singular
behaviour of the integrand is added and subtracted, leaving a finite
four-particle phase-space integration and a remainder, where the
integration that leads to singularities is carried out analytically.
Both methods rely on factorisation properties of the matrix elements
and of the phase space in the soft and collinear limits. The
singularities are treated analytically so that a numerical integration
does not pose any problems.
Both methods described above are valid for NLO calculations. We employ
them separately for the photon and the gluon in the calculation at
hand. However, they are not sufficient in the region where both the
photon and the gluon become soft or collinear at the same time. This
region corresponds to two-jet production where the photon and the
gluon are unresolved. In this region close to the kinematic endpoint
in the event-shape distributions fixed-order calculations are in any
case not appropriate. To prevent problems with numerical stability from
this region, we impose a lower cut-off on each event-shape variable in
the first bin of the distribution that corresponds to two-jet
production.
To be able to compare the results of our calculation to experimental
measurements and to improve the accuracy of the theoretical prediction
relevant for the $\alpha_{\mathrm{s}}$ determination, we have to
incorporate the kinematical cuts used in the specific experiment. For
the event-selection cuts, $\Theta_{\mathrm{cut}}$, we apply the
procedure used by the ALEPH experiment, as described in \refsec{PI}.
Electromagnetic jets are removed by imposing an upper cut on the
photon energy in the jet. However, the cut on the photon energy also
removes events with a highly energetic photon collinear to a quark or
antiquark in the final state that lead to a configuration where the
photon and quark are in the same jet. The left-over collinear
singularity associated with this isolated-photon rejection is properly
accounted for by a contribution from the quark-to-photon fragmentation
function
(see next section).
Collinear photon emission in the initial state is regulated by the
mass of the electron and leads to large contributions of the form
$\alpha^n\ln^n(m_\mathrm{e}^2/s)$. The cut on the visible invariant mass
$M_{\mathrm{vis}}$ removes part of the hard photon emission collinear
to the incoming beam particles and therefore suppresses large
mass-singular logarithms.
\section{Treatment of soft and/or collinear singularities}
\label{sec:realcorr}
\setcounter{equation}{0}
In this section we describe the treatment of the soft and collinear
singularities for the process \refeq{process-brems}.
\subsection{Phase-space slicing}
\label{sec::softcoll}
In the phase-space slicing method, technical cut parameters are
introduced to
decompose the real radiation phase space into regions corresponding to
soft or collinear configurations (unresolved regions), and a resolved
region that is free from singularities. Consequently, the cross
section decomposes into a soft, a collinear, and a finite part. Since
we apply an isolation cut on the final-state photons (i.e.\ on a
specific, identified final-state particle), we must also include a
fragmentation contribution, so that the total real contribution
reads
\begin{equation}
{\mathrm{d}}\sigma_{\mathrm{real}}
={\mathrm{d}}\sigma_{\mathrm{soft}}+{\mathrm{d}}\sigma_{\mathrm{coll}}
-{\mathrm{d}}\sigma_{\mathrm{frag}}
+{\mathrm{d}}\sigma_{\mathrm{finite}}.
\label{eq::realsum}
\end{equation} In the following sections we first review the method for
collinear-safe observables; the necessary modifications for
non-collinear-safe event-selection cuts are described in
\refsecs{ncsslice} and \ref{quark-to-photonfragfac}.
\subsubsection{Collinear-safe observables}
\label{csslice}
Soft and collinear contributions contain IR divergences which are
evaluated analytically, while the finite contribution is evaluated
numerically. Since no quark-mass singularities from final-state
radiation remain for collinear-safe observables, no fragmentation
contribution is necessary in this case, i.e.\
${\mathrm{d}}\sigma_{\mathrm{frag}}=0$ in \refeq{eq::realsum}. We implemented
two different variants of phase-space slicing: the two-cut-off slicing
according to \citeres{Baur:1998kt,Denner:2000bj}, which uses mass
regularisation, and the one-cut-off slicing of
\citeres{Giele:1991vf,Giele:1993dj} within dimensional regularisation.
The application of both methods is described in detail below for
IR-divergent contributions due to unresolved photons. The unresolved
gluon contributions are obtained by an appropriate replacement of the
coupling constants.
In two-cut-off slicing, the splitting of the phase space into singular
and non-singular parts is achieved by introducing a cut $\delta_{\mathrm{s}}$ on
the photon energy $E_\gamma<\delta_{\mathrm{s}}\sqrt{s}/2=\Delta E$ in the CM
frame. The collinear region is defined by $E_\gamma>\Delta E$ and
$1>\cos\theta>1-\delta_{\mathrm{c}}$, where $\theta$ is the
smallest angle between the photon and any charged fermion in the CM system.
In the soft region the squared matrix element
$\vert\mathcal{M}_{\mathrm{real}}\vert^2$ and the phase-space element
${\mathrm{d}}\Phi_4$ factorise so that we can apply the soft-photon
approximation, e.g.\ given in \citeres{Yennie:1961ad,Denner:1991kt},
and introduce an infinitesimal photon mass $m_\gamma$ as a regulator.
The resulting soft-photon contribution is
\begin{eqnarray}
{\mathrm{d}}\sigma_{\mathrm{soft}}&=&{\mathrm{d}}\sigma_{\mathrm{Born}} \,
\frac{\alpha}{2\pi}\sum_{I=1}^{4}\sum_{J=I+1}^{4}(-1)^{I+J}Q_IQ_J
\left\{
2\ln\left(\frac{2\Delta E}{m_\gamma}\right)\left[
2-\ln\left(\frac{s_{IJ}^2}{m_I^2m_J^2}\right)
\right] \right.
\nonumber\\
&&{}
\left.
{}-2\ln\left(\frac{4E_IE_J}{m_Im_J}\right)
+\frac{1}{2}\ln^2\left(\frac{4E_I^2}{m_I^2}\right)+
\frac{1}{2}\ln^2\left(\frac{4E_J^2}{m_J^2}\right)+\frac{2\pi^2}{3}+
2\mathop{\mathrm{Li}_2}\nolimits\left(1-\frac{4E_IE_J}{s_{IJ}}\right)
\right\},
\hspace{2em}
\end{eqnarray}
where we keep the masses of the fermions as regulators for the collinear
singularities ($E_I\gg m_I$),
i.e.\ we have $m_1=m_2=\mathswitch {m_\Pe}$ and $m_3=m_4=m_q$,
and ${\mathrm{d}}\sigma_{\mathrm{Born}}$ denotes the
Born cross section for $\mathrm{e}^+\mathrm{e}^- \to q\bar q g$.
Hard collinear contributions arise from the limit where the photon
becomes collinear either with one of the incoming beams or with the
outgoing (anti-)quark. These contributions contain mass-singular
logarithms, which are regularised by the fermion masses. Their
integrated forms are again proportional to the Born cross section for
$\mathrm{e}^+\mathrm{e}^- \to q\bar q g$.
We split the collinear photon
contributions into those from initial-state radiation and those from
final-state radiation:
\begin{equation} {\mathrm{d}}\sigma_{\mathrm{coll}}
={\mathrm{d}}\sigma^{{\rm initial}}_{\mathrm{coll}}
+ {\mathrm{d}}\sigma^{{\rm final}}_{\mathrm{coll}}.
\end{equation}
In the case of initial-state photon emission, the available
$\mathrm{e}^+\mathrm{e}^-$ CM energy is reduced, and the Born process is probed at
this reduced energy. Moreover, the mass regularisation introduces a
spin-flip term. We indicate these two features in the argument of the
Born cross section, where $k_i$ and $P_i$ are the momenta and
the degrees of polarisation of the respective beams. The integrated
initial-state and final-state collinear contributions thus read
\begin{eqnarray}
{\mathrm{d}}\sigma_{\mathrm{coll}}^{\mathrm{initial}} &=&\sum_{a=1}^2
\frac{\alpha}{2\pi}Q_a^2\int_0^{1-\delta_{\mathrm{s}}} {\mathrm{d}}
z\biggl\{{\mathrm{d}}\sigma_{\mathrm{Born}}\left(k_a\to zk_a\right)
P_{ff}(z)\left[\ln\left(\frac{{s}}{m_a^2}\frac{\delta_{\mathrm{c}}}{2}\frac{1}{z}\right)
-1\right]
\nonumber\\
&&\hspace*{33.5mm}{}+{\mathrm{d}}\sigma_{\mathrm{Born}} \left(k_a\to
zk_a,P_a\to -P_a\right)(1-z)\biggr\}
\label{eq.:collin}
\end{eqnarray}
and
\begin{eqnarray}
{\mathrm{d}} \sigma_{\mathrm{coll}}^{\mathrm{final}}&=&
\sum_{i=3}^4
\frac{\alpha}{2\pi}Q_i^2\, {\mathrm{d}}\sigma_{\mathrm{Born}}
\left\{
\left[
\frac{3}{2}+2\ln\left(\frac{\Delta E}{E_i}\right)
\right]
\left[
1-\ln\left(\frac{2E_i^2\delta_{\mathrm{c}}}{m_i^2}\right)
\right]
+3-\frac{2\pi^2}{3}\right\},
\label{slicing_final}
\end{eqnarray}
where
\begin{equation}
P_{ff}(z)=\frac{1+z^2}{1-z}
\end{equation}
denotes the $f\to f\gamma$ splitting function. The parameters
$\delta_{\mathrm{s}}$ and $\delta_{\mathrm{c}}$ govern the splitting
of the phase space in the different regions, but the final result does
not depend on them. They have to be chosen small enough to guarantee
that the applied approximations are valid. Therefore, varying these
parameters in a certain range and showing independence of the results
serves as an important check of the calculation.
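The mechanics of this check can be mimicked in a one-dimensional toy model with a single soft singularity: the resolved region $x>\delta$ is integrated numerically, the unresolved region is added in its analytically integrated soft approximation, and the sum must be independent of $\delta$ up to terms of ${\cal O}(\delta)$. The sketch below is purely illustrative; $g(x)$ plays the role of the squared matrix element with the $1/x$ singularity factored off, and the soft regulator is assumed to have already cancelled against the virtual part:

```python
import math

def midpoint(f, a, b, n=100000):
    # midpoint rule: never evaluates the (singular) endpoints
    h = (b - a)/n
    return h*sum(f(a + (i + 0.5)*h) for i in range(n))

def g(x):
    # smooth stand-in for the hard part of the squared matrix element
    return math.exp(x)

def sliced_total(delta):
    # resolved region (numerical) + analytic soft piece g(0)*log(delta)
    resolved = midpoint(lambda x: g(x)/x, delta, 1.0)
    return resolved + g(0.0)*math.log(delta)

for delta in (1e-2, 1e-3, 1e-4):
    print(delta, sliced_total(delta))
```

Printing the totals for decreasing $\delta$ shows a plateau; residual $\delta$-dependence signals a cut chosen too large for the soft approximation to hold, exactly as in the full calculation.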
In the one-cut-off slicing, soft and collinear regions are defined by a
cut $y_{{\rm min}}$ on two-particle invariants $s_{IJ} = (k_I+k_J)^2$
normalised to the $\mathrm{e}^+\mathrm{e}^-$
CM energy squared,
$y_{IJ} = s_{IJ}/s$. The soft region of parton $I$ is defined by
$|y_{IJ}| < y_{{\rm min}}$ for all $J$,
while the collinear region of partons $I$ and $J$
is defined by $|y_{IJ}| < y_{{\rm min}}$, with all other invariants
$|y_{IK}| \ge y_{{\rm min}}$ for $K\neq J$.
Following \citere{Giele:1993dj}, the soft and collinear divergent
contributions are first calculated in the unphysical kinematical situation
of all particles outgoing, and later on continued to the physical kinematics
by means of a crossing function,
such that
\begin{equation}
{\mathrm{d}}\sigma_{\mathrm{coll}}
={\mathrm{d}}\sigma^{{\rm out}}_{\mathrm{coll}}
+ {\mathrm{d}}\sigma^{{\rm cross}}_{\mathrm{coll}}
.\end{equation}
The combination of the first term with the soft contribution
yields \cite{Giele:1991vf}
\begin{eqnarray}
{\mathrm{d}}\sigma_{\mathrm{soft}}
+{\mathrm{d}}\sigma^{{\rm out}}_{\mathrm{coll}}&=&{\mathrm{d}}\sigma_{\mathrm{Born}}
\frac{\alpha}{2\pi} \, \frac{(4\pi)^\epsilon}{\Gamma(1-\epsilon)}
\sum_{I,J=1 \atop I\ne J}^{4}(-1)^{I+J}Q_IQ_J\nonumber\\
&&\times
\left[\frac{1}{\epsilon^2} \left(\frac{\mu^2}{|s_{IJ}|}\right)^\epsilon + \frac{3}{2\epsilon}
\left(\frac{\mu^2}{|s_{IJ}|}\right)^\epsilon
-\ln^2 \left( \frac{|y_{IJ}|}{y_{{\rm min}}}\right)
+ \frac{3}{2} \ln \left( \frac{|y_{IJ}|}{y_{{\rm min}}}\right)
+ \frac{7}{2} - \frac{\pi^2}{3}
\right]\,,\quad
\label{eq:sliceff}
\end{eqnarray}
where we used dimensional regularisation in $d=4-2\epsilon$ dimensions with
mass parameter $\mu$ required to maintain a dimensionless coupling.
Kinematical crossing of $\mathrm{e}^+\mathrm{e}^-$ to the initial state introduces
crossing functions, which were derived for QCD using factorisation
of parton distributions in the $\overline{{\rm MS}}$ scheme in
\citere{Giele:1993dj}. For photon radiation off incoming electrons,
this $\overline{{\rm MS}}$ expression is converted using
\citere{Baur:1998kt} to a mass-regularised form, involving the
physical electron mass. This results in the following crossing term,
\begin{eqnarray}
\label{eq:onecutcoll}
{\mathrm{d}} \sigma_{\mathrm{coll}}^{\mathrm{cross}}
&=&\sum_{a=1,2}\frac{\alpha}{2\pi}Q_a^2\int_0^{1}
{\mathrm{d}} z \, {\mathrm{d}}\sigma_{\mathrm{Born}}\left(k_a\to zk_a\right) \, \Biggl\{
\left[ \frac{\pi^2}{3} - \frac{5}{4} \right] \delta(1-z)
\nonumber \\
&& + \left[ P_{ff}(z) \left( \ln (y_{{\rm min}}) +
\ln\left(\frac{s}{m_a^2(1-z)}\right) -1\right) +1-z\right]_+ \Biggr\},
\end{eqnarray}
where we use the usual $\left[\dots\right]_+$ prescription
\begin{equation}
\int_0^1{\mathrm{d}} x \left[ f(x)\right]_+g(x)=\int_0^1{\mathrm{d}} x f(x)\left[ g(x)-g(1)\right].
\end{equation}
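Numerically, the $[\dots]_+$ prescription can be implemented literally as the subtracted integral on the right-hand side; since the subtraction $g(x)-g(1)$ vanishes at the endpoint, the integrand stays finite even where $f(x)$ diverges. A minimal sketch (our illustration only):

```python
def plus_integral(f, g, n=100000):
    """int_0^1 dx [f(x)]_+ g(x) = int_0^1 dx f(x)*(g(x) - g(1)).

    Midpoint rule: the endpoint x=1, where f may diverge, is never
    sampled, and g(x)-g(1) damps the singularity there."""
    h = 1.0/n
    return h*sum(f((i + 0.5)*h)*(g((i + 0.5)*h) - g(1.0)) for i in range(n))

# example: [1/(1-x)]_+ against two weight functions
print(plus_integral(lambda x: 1.0/(1.0 - x), lambda x: 1.0))  # 0.0 exactly
print(plus_integral(lambda x: 1.0/(1.0 - x), lambda x: x))    # approx. -1.0
```

The second example can be checked by hand: $(x-1)/(1-x)=-1$, so the integral equals $-1$.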
The results \refeq{eq:sliceff} and \refeq{eq:onecutcoll} apply to
unpolarised cross sections, but the polarisation effects of the incoming
particles can be easily restored
by the spin-flip terms of \refeq{eq.:collin},
\begin{equation}
{\mathrm{d}} \sigma_{\mathrm{coll}}^{\mathrm{pol}} =
{\mathrm{d}} \sigma_{\mathrm{coll}}
+\sum_{a=1,2}\frac{\alpha}{2\pi}Q_a^2\int_0^{1} {\mathrm{d}} z \,
\Bigl[{\mathrm{d}}\sigma_{\mathrm{Born}}\left(k_a\to zk_a,P_a\to -P_a\right)
-{\mathrm{d}}\sigma_{\mathrm{Born}}\left(k_a\to zk_a\right) \Bigr] (1-z).
\label{eq:onecutcoll_pol}
\end{equation}
\subsubsection{Non-collinear-safe observables}
\label{ncsslice}
The upper cut on the photon energy inside jets affects the slicing
procedure. Imposing this cut in the soft and finite parts of
\refeq{eq::realsum} is trivial. Since the cut acts
outside the soft region, this part is not affected at all.
Only the collinear singular part needs some non-trivial
adjustments.
More precisely, only collinear final-state radiation needs to be
considered, since photons collinear to the incoming electrons or
positrons can never appear inside jets within detector coverage.
In \refsec{csslice} we parametrised the collinear region in
terms of the energy fraction $z$ carried by the (anti-)quark that
results from the splitting. The experimental cut, however, is a cut on
the energy fraction $1-z$ of the photon after the splitting, i.e.\
the cut on the energy fraction of the photon reads
$1-z< z_{\mathrm{cut}}$.
In the two-cut-off slicing approach, imposing this cut
generalises \refeq{slicing_final} to
\begin{equation}
\mathrm{d}\sigma_{\mathrm{coll}}^{\mathrm{final}}(z_{\mathrm{cut}})
=\sum_{i=3}^4
\frac{\alpha}{2\pi}Q_i^2
{\mathrm{d}}\sigma_{\mathrm{Born}} \,
\int_{1-z_{\mathrm{cut}}}^{1-\Delta E/E_i}\mathrm{d}z \Bigg\{
P_{ff}(z)\left[\ln\left(\frac{2E_i^2\delta_{\mathrm{c}}}{m_i^2}z^2\right)-1\right]
+ (1-z)\Bigg\}.
\end{equation}
Performing the integration yields
\begin{align}
\label{cutint}
\mathrm{d}\sigma_{\mathrm{coll}}^{\mathrm{final}}(z_{\mathrm{cut}})
=&\sum_{i=3}^4\frac{\alpha}{2\pi}Q_i^2 \,
{\mathrm{d}}\sigma_{\mathrm{Born}}\left\{
\left[
-2 z_{\mathrm{cut}} + \frac{z_{\mathrm{cut}}^2}{2} - 2\ln\left(\frac{\Delta E/E_i}{z_{\mathrm{cut}}}\right)
\right]\ln\left(\frac{2E_i^2\delta_{\mathrm{c}}}{m_i^2}\right)
+2\ln\left(\frac{\Delta E/E_i}{z_{\mathrm{cut}}}\right)
\right. \nonumber
\\
&\left.{}
-4\mathop{\mathrm{Li}_2}\nolimits\left(z_{\mathrm{cut}}\right)
+(1-z_{\mathrm{cut}})(3-z_{\mathrm{cut}}) \ln(1-z_{\mathrm{cut}})\right.
+5z_{\mathrm{cut}}-\frac{z_{\mathrm{cut}}^2}{2}
\Bigg\}.
\end{align}
In \refeq{cutint} we have the original
dependence on the slicing parameters and the mass regulators, already
present in \refeq{slicing_final}, plus an additional term that depends
on the slicing parameters, the mass regulators, and the cut on the
photon energy. It is exactly this term that gives rise to left-over
singularities.
Within the one-cut-off approach, a cut on the
final-state photon energy in jets can be conveniently taken into
account by including
a collinear photon contribution in ${\mathrm{d}}\sigma_{\mathrm{coll}}$,
\begin{equation} {\mathrm{d}}\sigma_{\mathrm{coll}}
={\mathrm{d}}\sigma^{{\rm out}}_{\mathrm{coll}}
+ {\mathrm{d}}\sigma^{{\rm cross}}_{\mathrm{coll}}
- {\mathrm{d}}\sigma^{{\gamma}}_{\mathrm{coll}}(z_{\mathrm{cut}}),
\end{equation}
which subtracts the contributions of collinear photons with energies
above $z_{\mathrm{cut}}$.
This contribution reads
\begin{eqnarray}
\mathrm{d}\sigma_{\mathrm{coll}}^{\gamma}(z_{\mathrm{cut}})
&=& \sum_{i=3}^4
\frac{\alpha}{2\pi}Q_i^2
{\mathrm{d}}\sigma_{\mathrm{Born}} \,
\int^{1-z_{\mathrm{cut}}}_{0}\mathrm{d}z \,
\Biggl\{\frac{(4\pi\mu^2)^\epsilon}{\Gamma(1-\epsilon)}\,
\frac{P_{ff}^{(\epsilon)}(z)}{\left[z(1-z)\right]^\epsilon}
\int_0^{s_{\mathrm{min}}}
{\mathrm{d}} s_{q\gamma}\frac{1}{s_{q\gamma}^{1+\epsilon}}
\Biggr\}\nonumber \\
&=& -\sum_{i=3}^4
\frac{\alpha}{2\pi}Q_i^2
{\mathrm{d}}\sigma_{\mathrm{Born}} \,
\int^{1-z_{\mathrm{cut}}}_{0}\mathrm{d}z \,\Biggl\{
\frac{1}{\epsilon} \,\left(\frac{4\pi\mu^2}{s_{\mathrm{min}}}\right)^\epsilon\frac{1}{\Gamma(1-\epsilon)}
\frac{P^{(\epsilon)}_{ff}(z)}{\left[z(1-z)\right]^\epsilon} \Biggr\}\,,
\label{eq:sliceint}
\end{eqnarray}
where
the $\epsilon$-dependent splitting function $P^{(\epsilon)}_{ff}$ is given by
\begin{equation}
P^{(\epsilon)}_{ff}(z)=\frac{1+z^2-\epsilon\,(1-z)^2}{1-z}.
\end{equation}
The derivation of this collinear unresolved photon factor is described in
detail in~\citeres{Glover:1993xc,Poulsen:2006}.
In both slicing approaches, we observe that the introduction of the cut on the
final-state photon energy results in uncompensated collinear singularities
in (\ref{cutint}) and (\ref{eq:sliceint}). These are properly accounted
for by factorisation of the quark-to-photon fragmentation function,
explained in detail in
Section~\ref{quark-to-photonfragfac} below.
\subsection{Dipole subtraction}
\label{subtraction}
The basic idea of any subtraction method is to subtract an auxiliary
function from the integrand that features the same singular behaviour
in the soft and collinear limits. The partially integrated auxiliary
function is then added to the virtual corrections (and counterterms
from the factorisation of parton distributions and fragmentation
functions) resulting in analytic cancellation of IR singularities. We
use the dipole subtraction method, first introduced in
\citere{Catani:1996vz} for massless QCD and later generalised to
massive fermions for collinear-safe and non-collinear-safe observables
in \citere{Dittmaier:1999mb} and \citere{Dittmaier:2008md}, respectively.
Since we regulate IR
divergences with particle masses, we follow the description of
\citeres{Denner:2000bj,Dittmaier:1999mb,Dittmaier:2008md}.
Again we first describe the method
for the case of an unresolved photon; the gluon case is obtained by
a suitable replacement of couplings. In this section we suppress the
weighted sum over initial-state polarisations.
\subsubsection{Collinear-safe observables}
\label{cssub}
Suppressing flux and colour factors as well as the
$\Theta$-function related to the phase-space cuts,
dipole subtraction is based on the general formula
\begin{equation}
\int{\mathrm{d}}\Phi_4\sum_\lambda\vert\mathcal{M}_{\mathrm{real}}\vert^2=
\int{\mathrm{d}}\Phi_4\left(\sum_\lambda\vert\mathcal{M}_{\mathrm{real}}\vert^2-
\vert\mathcal{M}_{\mathrm{sub}}\vert^2\right)
+\int{\mathrm{d}}\Phi_4\,\vert\mathcal{M}_{\mathrm{sub}}\vert^2,
\label{master_sub}
\end{equation}
where $\lambda$ labels the photon polarisation.
The subtraction function $\mathcal{M}_{\mathrm{sub}}$ is
constructed from ordered pairs $IJ$ of charged fermions, where fermion
$I$ is called the emitter and fermion $J$ the spectator. Only the
kinematics of the emitter leads to singularities. The spectator fermion
is used to balance energy-momentum conservation when combining the
momentum of the photon with the momentum of the emitter. Making the
dependence of the subtraction function on the emitter--spectator pair
explicit, we can write
\begin{equation}
\vert\mathcal{M}_{IJ,\mathrm{sub}}\left(\Phi_4\right)\vert^2=
-(-1)^{I+J}Q_IQ_Je^2\sum_{\tau=\pm}g_{IJ,\tau}^{(\mathrm{sub})}
\left( k_I,k_J,k\right)\vert\mathcal{M}_{\mathrm{Born}}(\tilde{\Phi}_{3,IJ},
\tau\sigma_I)\vert^2,
\label{def_subtractionij}
\end{equation} where $\sigma_I$ is the helicity of the emitter and $k=k_6$ the
photon momentum.
The sum over $\tau=\pm$ takes care of a possible spin-flip resulting from
collinear photon emission.
The subtraction function
$\mathcal{M}_{IJ,\mathrm{sub}}$ depends on the whole four-particle
phase space $\Phi_4$, whereas the Born matrix elements depend only on
three-particle phase spaces $\tilde{\Phi}_{3,IJ}$. The mappings of
the four-particle on the three-particle phase spaces, which are
different for each emitter--spectator pair $IJ$ and explicitly given
in \citere{Dittmaier:1999mb}, ensure proper factorisation in each
singular limit.
In the massless case, the dipole factors are explicitly given by
\begin{eqnarray}
g_{ij,+}^{(\mathrm{sub})}(k_i,k_j,k)&=&\frac{1}{(k_ik)\lrb1-y_{ij}\right)}
\left[\frac{2}{1-z_{ij}\lrb1-y_{ij}\right)}-1-z_{ij}\right],\nonumber\\
g_{ia,+}^{(\mathrm{sub})}(k_i,k_a,k)&=&\frac{1}{(k_ik)x_{ia}}
\left[\frac{2}{2-x_{ia}-z_{ia}}-1-z_{ia}\right],\nonumber\\
g_{ai,+}^{(\mathrm{sub})}(k_a,k_i,k)&=&\frac{1}{(k_ak)x_{ia}}
\left[\frac{2}{2-x_{ia}-z_{ia}}-1-x_{ia}\right],\nonumber\\
g_{ab,+}^{(\mathrm{sub})}(k_a,k_b,k)&=&\frac{1}{(k_ak)x_{ab}}
\left[\frac{2}{1-x_{ab}}-1-x_{ab}\right],\nonumber\\
g_{ij,-}^{(\mathrm{sub})}(k_i,k_j,k)&=&
g_{ia,-}^{(\mathrm{sub})}(k_i,k_a,k)=
g_{ai,-}^{(\mathrm{sub})}(k_a,k_i,k)=
g_{ab,-}^{(\mathrm{sub})}(k_a,k_b,k)=0,
\end{eqnarray}
where we denote final-state particles with the letters $i,j$ and
initial-state particles with the letters $a,b$ and use the definitions
\begin{eqnarray}
x_{ab}&=&\frac{k_ak_b-k_ak-k_bk}{k_ak_b}, \qquad
x_{ia}=\frac{k_ak_i+k_ak-k_ik}{k_ak_i+k_ak}, \qquad
y_{ij}=\frac{k_ik}{k_ik_j+k_ik+k_jk},
\nonumber\\
z_{ia}&=&\frac{k_ak_i}{k_ak_i+k_ak}, \qquad
z_{ij}=\frac{k_ik_j}{k_ik_j+k_jk}.
\label{sub_invariants}
\end{eqnarray}
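As a cross-check of these kinematic definitions, the dipole factors can be evaluated for explicit momentum configurations; the factor $g_{ij,+}^{(\mathrm{sub})}$ must grow like $1/(k_ik)$ as the photon becomes collinear with the emitter. A small numerical sketch of the final--final case, with arbitrary illustrative momenta of our own choosing:

```python
import math

def dot(p, q):
    # Minkowski product with metric (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def massless(E, theta):
    # massless four-momentum in the x-z plane
    return (E, E*math.sin(theta), 0.0, E*math.cos(theta))

def g_sub_ij_plus(ki, kj, k):
    # final-final dipole factor g_{ij,+} for massless kinematics
    kik, kjk, kikj = dot(ki, k), dot(kj, k), dot(ki, kj)
    y = kik/(kikj + kik + kjk)       # y_ij
    z = kikj/(kikj + kjk)            # z_ij
    return (2.0/(1.0 - z*(1.0 - y)) - 1.0 - z)/(kik*(1.0 - y))

# emitter along +z, spectator along -z, photon at angle theta to the emitter
ki, kj = massless(45.0, 0.0), massless(45.0, math.pi)
for theta in (0.5, 1e-1, 1e-3):
    print(theta, g_sub_ij_plus(ki, kj, massless(10.0, theta)))
```

The printed values grow roughly like $1/\theta^2$, reflecting the collinear $1/(k_ik)$ singularity that the subtraction function is designed to reproduce.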
The finite part of the real corrections thus reads
\begin{equation}
\int{\mathrm{d}}\sigma^{\mathrm{finite}}_{\mathrm{real}}=
\frac{1}{2s}\int{\mathrm{d}}\Phi_4\left[
\sum_\lambda\vert\mathcal{M}_{\mathrm{real}}\vert^2 \,
\Theta_{\mathrm{cut}}(\Phi_4) -
\sum_{I,J=1 \atop I\neq J}^4\vert\mathcal{M}_{IJ,\mathrm{sub}}\vert^2 \,
\Theta_{\mathrm{cut}}(\tilde{\Phi}_{3,IJ})
\right],
\label{eq:sigmasubfinite}
\end{equation}
where from now on we suppress the spin sums of the fermions in the notation.
The cuts on the momenta of the final-state particles are included in
terms of the functions $\Theta_{\mathrm{cut}}(\Phi_4)$ and
$\Theta_{\mathrm{cut}}(\tilde{\Phi}_{3,IJ})$, where the arguments
signal which momenta are subject to the cuts. Note that already at
this point collinear safety is assumed, because the emitter momentum
in $\tilde{\Phi}_{3,IJ}$ tends to the sum of emitter and photon
momenta in the collinear limit by construction, i.e.\ a recombination
of these two particles in the collinear limit is understood. This has
to be changed in the treatment of non-collinear-safe observables
considered below.
Note that the vanishing of the spin-flip parts $g_{IJ,-}$ only holds
in the difference \refeq{eq:sigmasubfinite}, where the collinear singularities cancel;
the spin-flip parts contribute when the collinear singular region in
$\vert\mathcal{M}_{IJ,\mathrm{sub}}\vert^2$ is integrated over, as will become
apparent in the following.
We turn now to the treatment of the singular contributions. Splitting
the four-particle phase space into a three-particle phase space and
the photonic part,
\begin{equation}
\int{\mathrm{d}}\Phi_4=\int_0^1{\mathrm{d}}
x\int{\mathrm{d}}\tilde{\Phi}_{3,IJ}(x)\int{\mathrm{d}}\Phi_{\gamma,IJ},
\end{equation}
where the photonic part of the phase space, ${\mathrm{d}}\Phi_{\gamma,IJ}$,
depends on the mass regulators $m_I$ and $m_\gamma$ of the fermions
and of the photon, we have to compensate for the reduction of the CM
energy in the case of initial-state radiation, which is indicated by the
$x$-dependence of the three-particle phase space. The integration
over $x$, thus, becomes process dependent and cannot be carried out
analytically in general. Instead, it is possible to split off
the singular contribution resulting from the endpoint of this
integration at $x\to1$ upon introducing a $[\dots]_+$ distribution.
Leaving details of the analytic integrations over the singular phase
spaces ${\mathrm{d}}\tilde{\Phi}_{3,IJ}(x)$ to \citere{Dittmaier:1999mb}, the
result for the integrated singular part of the real corrections reads
\begin{eqnarray} \int{\mathrm{d}}\sigma^{\mathrm{sing}}_{\mathrm{real}}
&=&-\frac{\alpha}{2\pi}\sum_{{I,J=1} \atop {I\neq
J}}^{4}\sum_{\tau=\pm}(-1)^{I+J}Q_IQ_J
\nonumber\\
&& \times\biggl\{ \int_0^1\frac{{\mathrm{d}} x}{2sx}\int{\mathrm{d}}
\tilde{\Phi}_{3,IJ}(x)\left[\mathcal{G}_{IJ,\tau}^{(\mathrm{sub})}(\tilde{s}_{IJ},x)\right]_+
\Bigl\vert\mathcal{M}_{\mathrm{Born}}
\Bigl(\tilde{\Phi}_{3,IJ}(x),\tau\sigma_I\Bigr)\Bigr\vert^2 \,
\Theta_{\mathrm{cut}}\Bigl(\tilde{\Phi}_{3,IJ}(x)\Bigr)
\nonumber\\
&&{} +\frac{1}{2s} \int{\mathrm{d}}\Phi_3
\,G_{IJ,\tau}^{(\mathrm{sub})}(s_{IJ})\bigl\vert\mathcal{M}_{\mathrm{Born}}(\Phi_3,\tau\sigma_I)\bigr\vert^2
\, \Theta_{\mathrm{cut}}(\Phi_3) \biggr\}, \end{eqnarray} where the functions
$\mathcal{G}_{IJ,\tau}^{(\mathrm{sub})}(\tilde{s}_{IJ},x)$ and
$G_{IJ,\tau}^{(\mathrm{sub})}(s_{IJ})$ result from the analytic
integration over the photonic part of the phase space.
For the final--final emitter--spectator case, there is no convolution
part $\mathcal{G}_{ij,\tau}^{(\mathrm{sub})}$, i.e.\
\begin{equation}
\mathcal{G}_{ij,\tau}^{(\mathrm{sub})}(s_{ij},x)=0, \qquad
G_{ij,\tau}^{(\mathrm{sub})}(s_{ij}) =
8\pi^2\int{\mathrm{d}}\Phi_{\gamma,ij} \,
g_{ij,\tau}^{(\mathrm{sub})}\left( k_i,k_j,k\right),
\end{equation}
in contrast to the other emitter--spectator cases, where
\begin{equation}
\mathcal{G}_{IJ,\tau}^{(\mathrm{sub})}(s_{IJ},x)=
8\pi^2\, x \int{\mathrm{d}}\Phi_{\gamma,IJ} \,
g_{IJ,\tau}^{(\mathrm{sub})}\left( k_I,k_J,k\right),
\qquad
G_{IJ,\tau}^{(\mathrm{sub})}(s_{IJ}) = \int_0^1{\mathrm{d}} x\,
\mathcal{G}_{IJ,\tau}^{(\mathrm{sub})}(s_{IJ},x).
\end{equation}
It should be noted that the invariant $\tilde{s}_{IJ}$ in the
integration over
$\left[\mathcal{G}_{IJ,\tau}^{(\mathrm{sub})}(\tilde{s}_{IJ},x)\right]_+$
consistently takes the values
$\tilde{s}_{IJ}=2\tilde{k}_I\tilde{k}_J$, i.e.\ in the ``endpoint'' at
$x=1$ this variable corresponds to the invariant $s_{IJ}=2k_Ik_J$ of
the three-particle phase space $\Phi_3=\tilde{\Phi}_{3,IJ}(x=1)$
corresponding to the phase space without photon. The explicit results
for the functions $\mathcal{G}$ and $G$ read
\begin{eqnarray}
\mathcal{G}_{ia,+}^{(\mathrm{sub})}(\tilde{s}_{ia},x)&=&\frac{1}{1-x}
\lsb2\ln\left(\frac{2-x}{1-x}\right)-\frac{3}{2}\right],
\nonumber\\
\mathcal{G}_{ai,+}^{(\mathrm{sub})}(\tilde{s}_{ai},x)&=&P_{ff}(x)
\left[\ln\left(\frac{\vert \tilde{s}_{ai}\vert}{m_a^2 x}\right)-1\right]
-\frac{2}{1-x}\ln(2-x)+(1+x)\ln(1-x),
\nonumber\\
\mathcal{G}_{ab,+}^{(\mathrm{sub})}(\tilde{s}_{ab},x)&=&P_{ff}(x)
\left[\ln\left(\frac{s}{m_a^2}\right)-1\right],
\nonumber\\
\mathcal{G}_{ij,\pm}^{(\mathrm{sub})}(\tilde{s}_{ij},x)&=&
\mathcal{G}_{ia,-}^{(\mathrm{sub})}(\tilde{s}_{ia},x)=0, \quad
\mathcal{G}_{ab,-}^{(\mathrm{sub})}(\tilde{s}_{ab},x)=
\mathcal{G}_{ai,-}^{(\mathrm{sub})}(\tilde{s}_{ai},x)=1-x
\label{mathcalG}
\end{eqnarray}
and
\begin{eqnarray}
G_{ij,+}^{(\mathrm{sub})}({s}_{ij})&=&
\mathcal{L}\left({s}_{ij},m_i^2\right)-\frac{\pi^2}{3}+1, \qquad
G_{ia,+}^{(\mathrm{sub})}({s}_{ia})=
\mathcal{L}\left(\vert{s}_{ia}\vert,m_i^2\right)-\frac{\pi^2}{2}+1,
\nonumber\\
G_{ai,+}^{(\mathrm{sub})}({s}_{ai})&=&
\mathcal{L}\left(\vert{s}_{ai}\vert,m_a^2\right)+\frac{\pi^2}{6}-\frac{3}{2},
\qquad
G_{ab,+}^{(\mathrm{sub})}({s}_{ab})=
\mathcal{L}\left( s_{ab},m_a^2\right)-\frac{\pi^2}{3}+\frac{3}{2},
\nonumber\\
G_{ij,-}^{(\mathrm{sub})}({s}_{ij})&=&
G_{ia,-}^{(\mathrm{sub})}({s}_{ia})=
G_{ai,-}^{(\mathrm{sub})}({s}_{ai})=
G_{ab,-}^{(\mathrm{sub})}({s}_{ab})=\frac{1}{2},
\label{subG}
\end{eqnarray}
where the auxiliary function
\begin{equation}
\mathcal{L}\left({s},m_i^2\right)=\ln\left(\frac{m_\gamma^2}{{s}}\right)
\left[\ln\left(\frac{m_i^2}{{s}}\right)+1\right]
-\frac{1}{2}\ln^2\left(\frac{m_i^2}{{s}}\right)
+\frac{1}{2}\ln\left(\frac{m_i^2}{{s}}\right)
\end{equation}
contains the soft and collinear singularities of the endpoint parts.
These $G$ terms, which have the same kinematics as the lowest-order
contribution, exactly cancel
the corresponding singular contribution from the virtual
corrections.
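As an elementary consistency check of \refeq{mathcalG} and \refeq{subG}: for the finite $\tau=-$ kernels the relation $G_{IJ,-}=\int_0^1{\mathrm{d}} x\,\mathcal{G}_{IJ,-}(\tilde{s}_{IJ},x)$ can be verified directly, since no $[\dots]_+$ prescription is needed there. A trivial numerical sketch (purely illustrative, not part of the calculation):

```python
# Trivial check: the finite tau = - kernels are
# calG_{ab,-} = calG_{ai,-} = 1 - x, so the relation
# G = int_0^1 dx calG must reproduce G_{ab,-} = G_{ai,-} = 1/2.
n = 100_000
# midpoint rule on [0, 1]
G_minus = sum(1.0 - (k + 0.5) / n for k in range(n)) / n
print(G_minus)  # 0.5 up to rounding
```

The midpoint rule is exact for a linear integrand, so the result agrees with the $\tau=-$ entries of \refeq{subG} up to floating-point rounding.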
\subsubsection{Non-collinear-safe observables}
\label{ncssub}
In \citere{Dittmaier:2008md} the dipole subtraction method for
collinear-safe photon radiation~\cite{Dittmaier:1999mb}, as briefly
described above, has been generalised to non-collinear-safe radiation.
There, in the collinear photon radiation cone around a charged
final-state particle, the fraction $z$ of the charged particle's
energy is not fully integrated over, but the cut procedure affects the
range of the $z$~integration. In the auxiliary three-particle phase
spaces $\tilde{\Phi}_{3,ij}(x)$ introduced in
\refsec{cssub} the role of $z$ is played by the variables
$z_{ij}$ and $z_{ia}$ defined in \refeq{sub_invariants}. The
transition from the collinear-safe to the non-collinear-safe case
requires both a modification in the subtraction procedure and a
non-trivial change in the analytical integration of the subtracted
parts that are subsequently re-added. We briefly describe these
generalisations in the following and refer to the original formulation
in \citere{Dittmaier:2008md} for more details.
In the subtraction procedure, as given in \refeq{eq:sigmasubfinite},
the cut prescription $\Theta_{\mathrm{cut}}(\tilde{\Phi}_{3,iJ})$
is modified in such a way that the auxiliary momentum
$\tilde k_{iJ}$ for the emitter--photon system is replaced
by the two collinear momenta $\tilde k_i = z_{iJ}\tilde k_{iJ}$
and $\tilde k=(1-z_{iJ})\tilde k_{iJ}$ for the emitter and the
photon, respectively. This procedure concerns only contributions
of final-state emitters.
On the side of the re-added subtraction function this implies that
the cut on $z_{iJ}$ has to be respected as well. In practice this
means that in most cases at least the non-singular parts of
this integration have to be performed numerically.
The detailed procedure is described in the following separately
for the two cases of
final--final and final--initial emitter--spectator pairs.
\paragraph{Final-state emitter and final-state spectator}
\mbox{}\\[.3em]
The integration of the radiator functions for a final-state emitter
$i$ and a final-state spectator $j$ is of the form
\begin{equation}
G_{ij,\tau}^{(\mathrm{sub})}(\tilde{s}_{ij}) = \frac{\tilde{s}_{ij}}{2}
\int{\mathrm{d}} y_{ij}\,(1-y_{ij}) \int{\mathrm{d}} z_{ij} \,
g_{ij,\tau}^{(\mathrm{sub})}(k_i,k_j,k),
\label{ff_G_coll}
\end{equation}
where $y_{ij}$ and $z_{ij}$ are given in \refeq{sub_invariants}.
The limits of integration, which can be explicitly found in
\citere{Dittmaier:1999mb},
depend on the regulator masses and Lorentz invariants of the
emitter, spectator, and photon system.
The explicit integration leads to the results given in \refeq{subG}
in the massless limit. In the non-collinear-safe case
we want to use information on the photon momentum in the collinear cone,
which is controlled by the variable $z_{ij}$, i.e.\
we have to interchange the order of the integrations in
\refeq{ff_G_coll} and leave the integration over $z_{ij}$ open.
To this end, we consider
\begin{equation}
\bar{\mathcal{G}}_{ij,\tau}^{(\mathrm{sub})}(\tilde{s}_{ij},z_{ij})
= \frac{\tilde{s}_{ij}}{2}
\int_{y_1(z_{ij})}^{y_2(z_{ij})}{\mathrm{d}} y_{ij} \, (1-y_{ij}) \,
g_{ij,\tau}^{(\mathrm{sub})}(k_i,k_j,k),
\label{ff_mathcalG_int}
\end{equation}
where the limits $y_{1,2}(z_{ij})$ of the $y_{ij}$~integration depend on the
mass regulators (see \citere{Dittmaier:2008md} for details).
The soft singularity contained in \refeq{ff_mathcalG_int} can be split off
by employing a $[\dots]_+$ distribution,
\begin{equation}
\bar{\mathcal{G}}_{ij,\tau}^{(\mathrm{sub})}(\tilde{s}_{ij},z)=
G_{ij,\tau}^{(\mathrm{sub})}(\tilde{s}_{ij})\delta(1-z)+\left[\bar{\mathcal{G}}_{ij,\tau}^{(\mathrm{sub})}(\tilde{s}_{ij},z)\right]_+,
\end{equation}
so that this singularity appears only in the quantity
$G_{ij,\tau}^{(\mathrm{sub})}(\tilde{s}_{ij})$ already given in
\refeq{subG}. In the limit of small fermion masses, the integral in
\refeq{ff_mathcalG_int} can be carried out explicitly, resulting in
\begin{equation}
\bar{\mathcal{G}}_{ij,+}^{(\mathrm{sub})}(\tilde{s}_{ij},z)=P_{ff}(z)\left[
\ln\left(\frac{\tilde{s}_{ij}z}{m_i^2}\right)-1\right]+(1+z)\ln(1-z),
\qquad
\bar{\mathcal{G}}_{ij,-}^{(\mathrm{sub})}(\tilde{s}_{ij},z)=1-z.
\end{equation}
The explicit form of the $ij$ contribution
$|\mathcal{M}_{\mathrm{sub},ij}\left(\Phi_1\right)|^2$ then reads
\begin{eqnarray}\label{final-final-gen}
\int {\mathrm{d}} \Phi_1 \,
|\mathcal{M}_{\mathrm{sub},ij}\left(\Phi_1;\sigma_i\right)|^2&=&
-\frac{\alpha}{2\pi}\sum_{\tau=\pm}(-1)^{i+j}Q_i Q_j \int {\mathrm{d}}
\tilde{\Phi}_{0,ij}\int_0^1{\mathrm{d}} z
\nonumber\\
&&{}\times\left\{ G_{ij,\tau}^{(\mathrm{sub})}\left( \tilde{s}_{ij}\right)\delta(1-z)
+\left[\bar{\mathcal{G}}_{ij,\tau}^{(\mathrm{sub})}\left(
\tilde{s}_{ij},z\right)\right]_+\right\}
\nonumber\\
&&{}\times \left|\mathcal{M}_0\left(\tilde{k}_i,\tilde{k}_j;\tau\sigma_i\right)\right|^2 \,
\Theta_{\mathrm{cut}}\left( k_i=z\tilde{k}_i,
k=(1\!-\!z)\tilde{k}_i,\tilde{k}_j,\left\{ k_n\right\}\right).
\hspace{2em}
\end{eqnarray}
The term in curly brackets in \refeq{final-final-gen} consists of a
term proportional to a $\delta$-function in $z$, which is the usual
endpoint contribution, and a $\left[\dots\right]_+$ prescription, acting
only on $\Theta_{\mathrm{cut}}$. In our case $\Theta_{\mathrm{cut}}$
just provides a lower cut-off on the $z$-integration,
$\Theta_{\mathrm{cut}}=\theta(z-1+z_{\mathrm{cut}})$, and we find
\begin{eqnarray}
\int {\mathrm{d}} \Phi_1 |\mathcal{M}_{\mathrm{sub},ij}\left(\Phi_1;\sigma_i\right)|^2
&=&-\frac{\alpha}{2\pi}\sum_{\tau=\pm}(-1)^{i+j}Q_i Q_j \int {\mathrm{d}}
\tilde{\Phi}_{0,ij} \,
\left|\mathcal{M}_0\left(\tilde{k}_i,\tilde{k}_j;\tau\sigma_i\right)\right|^2
\nonumber\\
&& {}
\times
\left\{ G_{ij,\tau}^{(\mathrm{sub})}\left( \tilde{s}_{ij}\right)
-\int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z \,\,\bar{\mathcal{G}}_{ij,\tau}^{(\mathrm{sub})}
\left(\tilde{s}_{ij},z\right)\right\}.
\label{eq:ffint}
\end{eqnarray}
The $z$-integration in the second term of \refeq{eq:ffint} can be carried out explicitly,
and the sum over $\tau$ can be performed, because we consider only unpolarised final states.
In this way we obtain
\begin{align}
\sum_{\tau=\pm}\int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z \,
\bar{\mathcal{G}}_{ij,\tau}^{(\mathrm{sub})}\left(
\tilde{s}_{ij},z\right)
=&-\frac{\pi^2}{3}+ \left[ \frac{1}{2}
-2\ln\left( \frac{\tilde{s}_{ij}}{m_i^2} \right)\right] \ln\left( z_{\mathrm{cut}}\right)
\nonumber\\
&{}+\frac{1}{2}(1-z_{\mathrm{cut}})\left[ 3
-(3-z_{\mathrm{cut}})\ln\left( \frac{\tilde{s}_{ij}(1-z_{\mathrm{cut}})}{m_i^2 z_{\mathrm{cut}}}
\right)\right]
+ 2\mathop{\mathrm{Li}_2}\nolimits\left( z_{\mathrm{cut}}\right).
\label{eq:Gijbarint}
\end{align}
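The closed form \refeq{eq:Gijbarint} can be cross-checked numerically against a direct integration of $\sum_\tau\bar{\mathcal{G}}_{ij,\tau}^{(\mathrm{sub})}$. The following sketch (purely illustrative; the ratio $\tilde{s}_{ij}/m_i^2$ and the value of $z_{\mathrm{cut}}$ are arbitrary test inputs) uses a simple midpoint rule and a power-series dilogarithm:

```python
import math

def li2(x, terms=400):
    """Dilogarithm Li2(x) via its power series (valid for |x| < 1)."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

def P_ff(z):
    return (1.0 + z * z) / (1.0 - z)

def Gbar_ij_sum(z, S):
    """sum over tau of Gbar_{ij,tau}(z), with S = s_ij / m_i^2."""
    return (P_ff(z) * (math.log(S * z) - 1.0)
            + (1.0 + z) * math.log(1.0 - z)   # tau = +
            + (1.0 - z))                      # tau = -

S, zcut = 1.0e4, 0.1   # arbitrary test inputs

# midpoint rule on [0, 1 - zcut]; the integrand has only an
# integrable log singularity at z -> 0
n = 200_000
h = (1.0 - zcut) / n
numeric = h * sum(Gbar_ij_sum((k + 0.5) * h, S) for k in range(n))

# closed form quoted in the text
analytic = (-math.pi**2 / 3.0
            + (0.5 - 2.0 * math.log(S)) * math.log(zcut)
            + 0.5 * (1.0 - zcut) * (3.0 - (3.0 - zcut)
              * math.log(S * (1.0 - zcut) / zcut))
            + 2.0 * li2(zcut))
print(abs(numeric - analytic))  # small
```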
\paragraph{Final-state emitter and initial-state spectator}
\mbox{}\\[.3em]
In the case of a final-state emitter $i$ and an initial-state spectator $a$,
the integration of
$\vert\mathcal{M}_{\mathrm{sub},ia}\vert^2$ over
$x=x_{ia}$ is performed using a $\left[\dots\right]_+$ prescription,
\begin{equation}
\frac{-\tilde{s}_{ia}}{2}
\int_{0}^{x_1}{\mathrm{d}} x_{ia}\int{\mathrm{d}} z_{ia} \,
g_{ia,\tau}^{(\mathrm{sub})}(k_i,k_a,k)\dots
=\int_{0}^{1}{\mathrm{d}} x\left\{
G_{ia,\tau}^{(\mathrm{sub})}(\tilde{s}_{ia})\delta(1-x)+\left[\mathcal{G}_{ia,\tau}^{(\mathrm{sub})}(\tilde{s}_{ia},x)\right]_+
\right\}\dots,
\end{equation}
where the ellipses stand for $x$-dependent, process-specific functions
like the Born matrix element squared or flux factors. Here we used
the fact that the upper limit $x_1$ of the $x$-integration is equal to
one up to mass terms that are only relevant for the regularisation of
the singularities which eventually appear in
$G_{ia,\tau}^{(\mathrm{sub})}$. The integration over $x$ is usually
done numerically. Details of the derivation of the functions
$G_{ia,\tau}^{(\mathrm{sub})}$ and
$\mathcal{G}_{ia,\tau}^{(\mathrm{sub})}$, which are given in
\refeq{subG} and \refeq{mathcalG}, can be found in
\citere{Dittmaier:1999mb}.
In the non-collinear-safe case (see again \citere{Dittmaier:2008md}
for details), the ellipses also involve terms like the cut function
that depend on $z_{ia}$. Therefore the whole integration has to be
done in general numerically. To this end, a procedure is employed that
isolates the occurring singularities in the endpoint. The basic idea
in this procedure is the use of a generalised $\left[\dots\right]_+$
prescription that acts on multiple variables. Denoting the usual
$\left[\dots\right]_+$ prescription in an $n$-dimensional integral over the
variables $r_i$, $i=1,\ldots,n$, by
\begin{equation}
\int{\mathrm{d}}^n {\bf{r}}\left[ g({\bf{r}})\right]_{+,(a)}^{(r_i)}f({\bf{r}})\equiv\int{\mathrm{d}}^n {\bf{r}}\,g({\bf{r}})
\left( f({\bf{r}})-f({\bf{r}})\vert_{r_i=a}\right),
\end{equation}
the natural generalisation to two variables reads
\begin{eqnarray}
\int{\mathrm{d}}^n {\bf{r}}\left[ g({\bf{r}})\right]_{+,(a,b)}^{(r_i,r_j)}f({\bf{r}})&\equiv&
\int{\mathrm{d}}^n {\bf{r}}\left[\lsb g({\bf{r}})\right]_{+,(a)}^{(r_i)}\right]_{+,(b)}^{(r_j)}f({\bf{r}})
\nonumber\\
&=&\int{\mathrm{d}}^n {\bf{r}}\,g({\bf{r}})
\left( f({\bf{r}})-f({\bf{r}})\vert_{r_i=a}-f({\bf{r}})\vert_{r_j=b}+
f({\bf{r}})\vert_{{r_i=a} \atop {r_j=b}}\right).
\end{eqnarray}
To recover the usual notation, we drop the subscripts $a$ and/or $b$
if they are equal to one.
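To illustrate these prescriptions concretely (a minimal numerical sketch with arbitrary test functions, not part of the calculation), one can verify the one- and two-dimensional subtractions against exactly known integrals:

```python
# Numerical illustration of the [.]_+ prescriptions: the subtraction
# renders the kernels integrable, and simple test functions give
# exactly known results.

def midpoint(h, n=200_000):
    """Midpoint rule on the unit interval."""
    return sum(h((k + 0.5) / n) for k in range(n)) / n

# 1D: int_0^1 [1/(1-x)]_+ x^2 dx = int_0^1 (x^2 - 1)/(1 - x) dx
#    = -int_0^1 (1 + x) dx = -3/2
one_d = midpoint(lambda x: (x**2 - 1.0) / (1.0 - x))

# 2D: with g(x,z) = 1/((1-x)(1-z)) and f(x,z) = x z^2, the doubly
# subtracted integrand g * (f(x,z) - f(1,z) - f(x,1) + f(1,1))
# collapses to (1 + z), so the integral is 3/2.
def two_d_integrand(x, z):
    g = 1.0 / ((1.0 - x) * (1.0 - z))
    f = lambda a, b: a * b**2
    return g * (f(x, z) - f(1.0, z) - f(x, 1.0) + f(1.0, 1.0))

n = 400
two_d = sum(two_d_integrand((i + 0.5) / n, (j + 0.5) / n)
            for i in range(n) for j in range(n)) / n**2

print(one_d)  # close to -1.5
print(two_d)  # close to  1.5
```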
The generic form of the integral we want to perform is
\begin{equation}
I[f]\equiv \frac{-\tilde{s}_{ia}}{2}
\int_{0}^{x_1}{\mathrm{d}} x\int_{z_1(x)}^{z_2(x)}{\mathrm{d}} z \,
g_{ia,\tau}^{(\mathrm{sub})}(k_i,k_a,k) f(x,z),
\end{equation}
with $f(x,z)$ denoting an integrable function of $x=x_{ia}$
and $z=z_{ia}$. After introducing the multiple $\left[\dots\right]_+$
distribution, all soft and collinear singularities are integrated
out, and the result contains only regular integrations
over $x$ and $z$ within unit boundaries,
\begin{eqnarray}
I[f]&=&
\int_0^1{\mathrm{d}} x\int_0^1{\mathrm{d}} z \left[ \bar{g}_{ia,\tau}^{(\mathrm{sub})}(x,z)\right]_+^{(x,z)}f(x,z)+
\int_0^1{\mathrm{d}} x f(x,1)\left[ \mathcal{G}_{ia,\tau}^{(\mathrm{sub})}(\tilde{s}_{ia},x)\right]_+\nonumber\\
&&{}+
\int_0^1{\mathrm{d}} z f(1,z)\left[ \bar{\mathcal{G}}_{ia,\tau}^{(\mathrm{sub})}(\tilde{s}_{ia},z)\right]_+ +
f(1,1)G_{ia,\tau}^{(\mathrm{sub})}(\tilde{s}_{ia}),
\end{eqnarray}
with the additional integration kernels
\begin{eqnarray}
\bar{g}_{ia,+}^{(\mathrm{sub})}(x,z)&=&\frac{1}{1-x}\left(\frac{2}{2-x-z}-1-z\right),
\qquad
\bar{g}_{ia,-}^{(\mathrm{sub})}(x,z)=0,
\nonumber\\
\bar{\mathcal{G}}_{ia,+}^{(\mathrm{sub})}(\tilde{s}_{ia},z)&=&P_{ff}(z)\left[
\ln\left(\frac{\vert\tilde{s}_{ia}\vert z}{m_i^2}\right)-1
\right]-\frac{2\ln(2-z)}{1-z}+(1+z)\ln(1-z),
\nonumber\\
\bar{\mathcal{G}}_{ia,-}^{(\mathrm{sub})}(\tilde{s}_{ia},z)&=&1-z.
\end{eqnarray}
Using this result, the explicit form of the $ia$ contribution
$|\mathcal{M}_{\mathrm{sub},ia}\left(\Phi_1\right)|^2$ reads
\begin{eqnarray}
\lefteqn{
\int {\mathrm{d}} \Phi_1 \,
|\mathcal{M}_{\mathrm{sub},ia}\left(\Phi_1;\sigma_i\right)|^2
=
-\frac{\alpha}{2\pi}\sum_{\tau=\pm}
(-1)^{a+i}Q_a Q_i\int_0^1 {\mathrm{d}} x \int {\mathrm{d}}
\tilde{\Phi}_{0,ia}\left( \tilde{s}_{ia},x\right)\int_0^1{\mathrm{d}} z
} &&
\nonumber\\
&&{}\times\frac{1}{x}\left\{ G_{ia,\tau}^{(\mathrm{sub})}\left( \tilde{s}_{ia}\right)\delta(1-x)\delta(1-z)
+\left[\mathcal{G}_{ia,\tau}^{(\mathrm{sub})}\left(
\tilde{s}_{ia},x\right)\right]_+\delta(1-z)\right.
\nonumber\\
&&\hspace*{10mm}{}\left.+
\left[\bar{\mathcal{G}}_{ia,\tau}^{(\mathrm{sub})}\left(
\tilde{s}_{ia},z\right)\right]_+\delta(1-x)+
\left[\bar{g}_{ia,\tau}^{(\mathrm{sub})}\left(
x,z\right)\right]_+^{(x,z)}\right\}
\nonumber\\
&&{}\times
\left|\mathcal{M}_0\left(\tilde{k}_i(x),\tilde{k}_a(x);\tau\sigma_i\right)\right|^2 \,
\Theta_{\mathrm{cut}}\left( k_i=z\tilde{k}_i(x),
k=(1\!-\!z)\tilde{k}_i(x),\left\{ k_n(x)\right\}\right).
\label{eq:figen}
\end{eqnarray}
Inserting the explicit form of the cut function,
$\Theta_{\mathrm{cut}}=\theta(z-1+z_{\mathrm{cut}})$, this yields
\begin{eqnarray}
\lefteqn{
\int {\mathrm{d}} \Phi_1 \,
|\mathcal{M}_{\mathrm{sub},ia}\left(\Phi_1;\sigma_i\right)|^2
=-\frac{\alpha}{2\pi}\sum_{\tau=\pm}(-1)^{a+i}
Q_a Q_i} &&
\nonumber\\
&& {}
\times\biggl\{
\int {\mathrm{d}}
\tilde{\Phi}_{0,ia}\left( \tilde{s}_{ia}\right)
\left[ G_{ia,\tau}^{(\mathrm{sub})}\left(
\tilde{s}_{ia}\right) - \int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z\,
\bar{\mathcal{G}}_{ia,\tau}^{(\mathrm{sub})}\left( \tilde{s}_{ia},z\right)
\right]\left|\mathcal{M}_0\left(\tilde{k}_i,\tilde{k}_a;\tau\sigma_i\right)\right|^2
\\
&&{}
+\int_0^1 {\mathrm{d}} x \int {\mathrm{d}}\tilde{\Phi}_{0,ia}\left( \tilde{s}_{ia},x\right)
\frac{1}{x}\left[\mathcal{G}_{ia,\tau}^{(\mathrm{sub})}\left(
\tilde{s}_{ia},x\right)
-\int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z\,\bar{g}_{ia,\tau}^{(\mathrm{sub})}\left(
x,z\right)
\right]_+
\left|\mathcal{M}_0\left(\tilde{k}_i(x),\tilde{k}_a(x);\tau\sigma_i\right)\right|^2\biggr\}.
\nonumber
\label{eq:fiint}
\end{eqnarray}
The $z$-integrations and the sum over $\tau$ can again be performed,
resulting in
\begin{eqnarray}
\lefteqn{
\sum_{\tau=\pm}\int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z \,
\bar{\mathcal{G}}_{ia,\tau}^{(\mathrm{sub})}\left(
\tilde{s}_{ia},z\right)
=
-\frac{\pi^2}{2}+ \left[ \frac{1}{2}
-2\ln\left( \frac{\vert\tilde{s}_{ia}\vert}{m_i^2} \right)\right]
\ln\left( z_{\mathrm{cut}}\right)
} &&
\nonumber\\
&&{}+\frac{1}{2}(1-z_{\mathrm{cut}})\left[ 3
-(3-z_{\mathrm{cut}})\ln\left( \frac{\vert \tilde{s}_{ia}\vert(1-z_{\mathrm{cut}})}
{m_i^2z_{\mathrm{cut}}} \right)\right]
+ 2\mathop{\mathrm{Li}_2}\nolimits\left( z_{\mathrm{cut}}\right) - 2\mathop{\mathrm{Li}_2}\nolimits\left( -z_{\mathrm{cut}}\right)
\label{eq:Giabarint}
\end{eqnarray}
and
\begin{equation}
\sum_{\tau=\pm}\int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z \,
\bar{g}_{ia,\tau}^{(\mathrm{sub})}\left( x,z\right)
=\frac{1}{1-x} \left[
-\frac{1}{2}(1-z_{\mathrm{cut}})(3-z_{\mathrm{cut}})
+2\ln\left(\frac{2-x}{1-x+z_{\mathrm{cut}}}\right) \right].
\label{eq:giaint}
\end{equation}
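Both closed forms, \refeq{eq:Giabarint} and \refeq{eq:giaint}, can be cross-checked numerically in the same way as in the final--final case. The sketch below (illustrative only; $\vert\tilde{s}_{ia}\vert/m_i^2$, $z_{\mathrm{cut}}$, and the fixed value of $x$ are arbitrary test inputs) compares midpoint-rule integrals with the analytic expressions:

```python
import math

def li2(x, terms=400):
    """Dilogarithm Li2(x) via its power series (valid for |x| < 1)."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

def P_ff(z):
    return (1.0 + z * z) / (1.0 - z)

def midpoint(f, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

S, zcut, x = 1.0e4, 0.1, 0.3   # arbitrary; S = |s_ia| / m_i^2

# check of the z-integral of sum_tau Gbar_{ia,tau}
def Gbar_ia_sum(z):
    return (P_ff(z) * (math.log(S * z) - 1.0)
            - 2.0 * math.log(2.0 - z) / (1.0 - z)
            + (1.0 + z) * math.log(1.0 - z)   # tau = +
            + (1.0 - z))                      # tau = -

num1 = midpoint(Gbar_ia_sum, 0.0, 1.0 - zcut)
ana1 = (-math.pi**2 / 2.0
        + (0.5 - 2.0 * math.log(S)) * math.log(zcut)
        + 0.5 * (1.0 - zcut) * (3.0 - (3.0 - zcut)
          * math.log(S * (1.0 - zcut) / zcut))
        + 2.0 * li2(zcut) - 2.0 * li2(-zcut))

# check of the z-integral of sum_tau gbar_{ia,tau} at fixed x
def gbar_ia_sum(z):
    return (2.0 / (2.0 - x - z) - 1.0 - z) / (1.0 - x)

num2 = midpoint(gbar_ia_sum, 0.0, 1.0 - zcut)
ana2 = (-0.5 * (1.0 - zcut) * (3.0 - zcut)
        + 2.0 * math.log((2.0 - x) / (1.0 - x + zcut))) / (1.0 - x)

print(abs(num1 - ana1), abs(num2 - ana2))  # both small
```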
\subsection{The quark-to-photon fragmentation function}
\label{quark-to-photonfragfac}
In the previous sections we described how we can deal with identified
particles in the final state that lead to non-IR-safe
observables using phase-space slicing and the subtraction method. To
restore IR safety one factorises the resulting left-over
singularities into an experimentally determined fragmentation
function, in our case the quark-to-photon fragmentation function.
In \citere{Glover:1993xc} a method has been proposed for measuring
the quark-to-photon fragmentation function, i.e.~the probability of a
quark splitting into a quark and a photon, using the
$\mathrm{e}^+\mathrm{e}^-\rightarrow n\,\mathrm{jet}+\mathrm{photon}$ cross section.
The method has been extended to next-to-leading order in QCD
in \citere{GehrmannDeRidder:1997wx}.
The key feature of the proposed method is the democratic clustering of
both hadrons and photons into jets, where one keeps track of the
fraction of photonic energy in the jet. This treatment of the photon
in the jet enhances the non-perturbative part of the quark-to-photon
fragmentation function \cite{Koller:1978kq,Glover:1993xc}, which in
turn can be measured in $\mathrm{e}^+\mathrm{e}^-$ annihilation.
In \citere{Glover:1993xc} the quark-to-photon fragmentation function
was theoretically defined using dimensional regularisation and
one-cut-off slicing. To be able to use the results obtained in this
way in our calculation, we need to translate this definition to mass
regularisation. We first summarise the results in dimensional
regularisation.
Since fragmentation is a long-distance process, it cannot be
calculated entirely in perturbation theory. The fragmentation
function $D_{q\rightarrow\gamma}(z_\gamma)$ describes the probability
of a quark fragmenting into a jet containing a photon carrying $z_\gamma$ of
the jet energy. Photons inside hadronic jets can have two origins: (a) the
perturbatively calculable radiation of a photon off a quark, which
contains a collinear divergence, described by a
perturbative contribution $\hbox{d}\sigma_{{\rm coll}}^\gamma$,
dependent on the method used to regulate the collinear singularity;
(b) the non-perturbative production of
a photon in the hadronisation process of the quark into a hadronic jet,
which is described by a bare non-perturbative fragmentation function
$D^{{\rm bare}}_{q\rightarrow\gamma}(z_\gamma)$.
Both processes refer to the infrared dynamics inside the quark jet, and
can a priori not be separated from each other.
Within dimensional regularisation and one-cutoff-slicing, the photon
fragmentation contribution was studied in detail in
\citeres{Glover:1993xc,Poulsen:2006}. Exploiting the universal
factorisation of matrix elements and phase space in the collinear
limit, one obtains for the cross section for the emission of one
collinear photon with energy fraction $z_\gamma$ above
$z_{\mathrm{cut}}$ off a quark $q$ and invariant mass of the
photon--quark pair below $s_{\mathrm{min}}$,
\begin{eqnarray}
\hbox{d}\sigma_{{\rm coll}}^\gamma(z_{\mathrm{cut}})
&=&
\frac{\alpha Q_q^2}{2\pi} \,\hbox{d} \sigma_0
\int_{z_{\mathrm{cut}}}^1 \hbox{d} z_\gamma
\frac{\left({4\pi\mu^2}\right)^\epsilon}{\Gamma(1-\epsilon)}
\frac{P^{(\epsilon)}_{ff}(1-z_\gamma)}{\left[z_\gamma(1-z_\gamma)\right]^\epsilon}
\int_0^{s_{\mathrm{min}}}
{\mathrm{d}} s_{q\gamma}\frac{1}{s_{q\gamma}^{1+\epsilon}}
\nonumber\\
&=&-
\frac{\alpha Q_q^2}{2\pi} \,\hbox{d} \sigma_0
\int_{z_{\mathrm{cut}}}^1 \hbox{d} z_\gamma
\frac{1}{\epsilon}\left(
\frac{4\pi\mu^2}{s_{\mathrm{min}}}\right)^\epsilon\frac{1}{\Gamma(1-\epsilon)}
\frac{P^{(\epsilon)}_{ff}(1-z_\gamma)}{\left[z_\gamma(1-z_\gamma)\right]^\epsilon}
\nonumber\\
&=&
- \frac{\alpha Q_q^2}{2\pi} \,\hbox{d} \sigma_0
\int_{z_{\mathrm{cut}}}^1 \hbox{d} z_\gamma
\nonumber\\&&{}
\times\left[
\frac{1}{\epsilon}\left(
\frac{4\pi\mu^2}{s_{\mathrm{min}}}\right)^\epsilon\frac{1}{\Gamma(1-\epsilon)}
{P_{ff}(1-z_\gamma)}
- P_{ff}(1-z_\gamma) \ln\left(z_\gamma(1-z_\gamma)\right) - z_\gamma \right]
\,,\qquad
\label{frag_p_eps}
\end{eqnarray}
where $\hbox{d} \sigma_0$ is a reduced cross section with the quark--photon
system replaced by its parent quark. The same expression was obtained in
(\ref{eq:sliceint}) above for the hard-photon cut contribution,
where $z=1-z_\gamma$.
The infrared singularity present in this perturbative contribution
must be balanced by a divergent piece in the bare fragmentation
function, which contributes to the photon-emission cross section as
\begin{equation}
\hbox{d} \sigma_{{\rm frag}}(z_{\mathrm{cut}}) =
\hbox{d} \sigma_0
\int_{z_{\mathrm{cut}}}^1 \hbox{d} z_\gamma
D^{{\rm bare}}_{q\rightarrow\gamma}(z_\gamma)\,.
\end{equation}
To make this cancellation explicit, one introduces a
factorisation scale $\mu_{\mathrm{F}}$ into the bare fragmentation function, which
then reads in dimensional regularisation
\begin{eqnarray}
D_{q\rightarrow\gamma}^{\mathrm{bare,DR}}(z_\gamma)&=&D_{q\rightarrow\gamma}(z_\gamma,\mu_{\mathrm{F}})+
\frac{\alpha Q_q^2}{2\pi}
\frac{1}{\epsilon}\left(
\frac{4\pi\mu^2}{\mu_{\mathrm{F}}^2}\right)^\epsilon\frac{1}{\Gamma(1-\epsilon)}
P_{ff}(1-z_\gamma)\,.
\label{eq:massfactDR}
\end{eqnarray}
The factorised non-perturbative fragmentation function
$D_{q\rightarrow\gamma}(z_\gamma,\mu_{\mathrm{F}})$ is then finite, and can be
determined from experimental data. Its variation with the scale $\mu_{\mathrm{F}}$
is described by the Altarelli--Parisi evolution equation, which reads to
leading order
\begin{equation}
\frac{{\rm d}D_{q \to \gamma}(z_\gamma,\mu_{\mathrm{F}})}{{\rm d}\ln \mu_{\mathrm{F}}^2}
=\frac{\alpha Q_{q}^2}{2 \pi}P_{ff}(1-z_\gamma).
\label{evolution}
\end{equation}
The fixed-order exact solution at ${\cal O}(\alpha)$ is given by
\begin{equation}
D_{q \to \gamma}(z_\gamma,\mu_{\mathrm{F}})=\frac{\alpha Q_{q}^2}{2 \pi}
\,P_{ff}(1-z_\gamma)
\ln \left(\frac{\mu_{\mathrm{F}}^2}{\mu_{0}^2}\right) + D_{q \to \gamma}
(z_\gamma,\mu_{0}),
\label{eq:npFF}
\end{equation}
where $D_{q \to \gamma}(z_\gamma,\mu_{0})$ is the quark-to-photon
fragmentation function at some initial scale $\mu_{0}$. This function
and the initial scale $\mu_0$ cannot be calculated and have to be
determined from experimental data. The first determination of $D_{q
\to \gamma}(z,\mu_{0})$ was performed by the ALEPH
collaboration~\cite{aleph} using the ansatz
\begin{equation}
D_{q\rightarrow\gamma}^{\mathrm{ALEPH}}(z_\gamma,\mu_{\mathrm{F}})=
\frac{\alpha Q_q^2}{2\pi}
\left[
P_{ff}(1-z_\gamma)\ln\left(\frac{\mu_{\mathrm{F}}^2}{\mu_0^2}
\frac{1}{(1-z_\gamma)^2}\right)+C
\right]
,
\label{ff_ALEPH}
\end{equation}
with fitting parameters $C$ and $\mu_0$. The fit to the
photon-plus-one-jet rate~\cite{aleph} yielded
\begin{equation}
\mu_0 = 0.22~\mbox{GeV};\qquad C=-12.1\,.
\end{equation}
Through the factorisation formula (\ref{eq:massfactDR}), a
relation between the bare fragmentation function and the collinear-photon
contribution is established. As a result, the sum of both
contributions is finite, but depends on the slicing parameter
$s_{\mathrm{min}}$:
\begin{eqnarray}
\lefteqn{\hbox{d}\sigma_{{\rm coll}}^\gamma(z_{\mathrm{cut}})
+ \hbox{d}\sigma_{{\rm frag}}(z_{\mathrm{cut}})
=
\hbox{d} \sigma_0
\int_{z_{\mathrm{cut}}}^1 \hbox{d} z_\gamma }\qquad
\nonumber\\&&{} \times
\left( D_{q\rightarrow\gamma}(z_\gamma,\mu_{\mathrm{F}})
+
\frac{\alpha Q_q^2}{2\pi}
\left[
P_{ff}(1-z_\gamma)\ln\left(\frac{s_{\mathrm{min}}}{\mu_{\mathrm{F}}^2}(1-
z_\gamma)z_\gamma\right)+z_\gamma
\right]\right) \,
.
\label{frag_MSbar}
\end{eqnarray}
Inserting the solution
(\ref{eq:npFF}) into (\ref{frag_MSbar}), $\hbox{d}\sigma_{{\rm coll}}^\gamma
+ \hbox{d}\sigma_{{\rm frag}}$ becomes independent of the factorisation
scale $\mu_{\mathrm{F}}$.
We can now determine the contribution of the quark-to-photon
fragmentation function to 3-jet production. Since photons above
$z_{{\rm cut}}$ are vetoed, we have to subtract the
photon-fragmentation contribution from the real corrections, with the
hard-photon cut procedure included, and the virtual corrections,
resulting in the IR-safe cross section
\begin{equation}
\int {\mathrm{d}}\sigma^{\mbox{\scriptsize IR-safe}}=\int{\mathrm{d}}\sigma_{\mathrm{virt}}+
\int{\mathrm{d}}\sigma_{\mathrm{real}}(z_{\mathrm{cut}})
-\int{\mathrm{d}}\sigma_{\mathrm{frag}}(z_{\mathrm{cut}}).
\label{master_real_frag}
\end{equation}
Taking into account all quarks (and antiquarks) in the final state, the
photon-fragmentation contribution reads:
\begin{equation}
{\mathrm{d}}\sigma_{\mathrm{frag}}(z_{\mathrm{cut}})=\sum_{i=3}^4
{\mathrm{d}}\sigma_{\mathrm{Born}}\int_{z_{\mathrm{cut}}}^1{\mathrm{d}} z_\gamma \,
D^{\mathrm{bare}}_{q_i\rightarrow\gamma}(z_\gamma).
\label{eq:dsigfinalzcut}
\end{equation}
Physically, we can motivate this approach as follows. The hard-photon cut removes
collinear quark-photon pairs above a predefined photon energy fraction. This
results in an incomplete cancellation of collinear singularities between the real
photon radiation and the virtual
QED-type corrections (where the hard-photon cut does not act, since the virtual photon is not observed). By
subtracting ${\mathrm{d}}\sigma_{\mathrm{frag}}$ from the
$\mathcal{O}\left(\alpha\right)$ corrections, we correct for the effect of
the hard-photon cut and compensate for excess terms related to
collinear photon emission. In this way we can define an IR-safe
quantity even in the presence of the hard-photon cuts.
In order to use the fragmentation function in our calculation with
mass regularisation, we have to translate \refeq{eq:massfactDR} to
this scheme. For consistency with previous sections, we use from
now on the quark energy fraction $z=1-z_\gamma$ as collinear variable.
Using results of \citere{Baur:1998kt}, the collinear photon
contribution in mass regularisation and one-cutoff slicing is obtained
as
\begin{eqnarray} \hbox{d}\sigma_{{\rm coll}}^\gamma(z_{\mathrm{cut}}) &=& \frac{\alpha Q_q^2}{2\pi}
\,\hbox{d} \sigma_0
\int_0^{1-z_{\mathrm{cut}}} \hbox{d} z
\int_{m_q^2/z}^{s_{\mathrm{min}}} \frac{{\mathrm{d}} s_{q\gamma}}{s_{q\gamma}-m_q^2}
\left[P_{ff}(z)-\frac{2m_q^2}{s_{q\gamma}-m_q^2}\right] \nonumber\\&=&
\frac{\alpha Q_q^2}{2\pi}\,\hbox{d} \sigma_0
\int_0^{1-z_{\mathrm{cut}}} \hbox{d} z
\left[P_{ff}(z)\ln\left(\frac{s_{\mathrm{min}}}{m_q^2}\frac{z}{1-z}\right)-\frac{2z}{1-z}\right]\,.
\label{ff_m}
\end{eqnarray}
Using this result and the fact that \refeq{frag_MSbar} is independent
of the regularisation scheme, we find the bare fragmentation function
in mass regularisation
\begin{eqnarray}
D_{q\rightarrow\gamma}^{\mathrm{bare,MR}}(1-z)&=&
D_{q\rightarrow\gamma}(1-z,\mu_{\mathrm{F}})+
\frac{\alpha Q_q^2}{2\pi}
P_{ff}(z)\left[\ln\left(
\frac{m_q^2}{\mu_{\mathrm{F}}^2}(1-z)^2
\right)+1
\right],
\label{bareff_mass}
\end{eqnarray}
where the finite terms ensure that the $\overline{{\rm MS}}$ scheme
factorised fragmentation function
$D_{q\rightarrow\gamma}(z_\gamma,\mu_{\mathrm{F}})$ is identical in the
dimensionally regularised and the mass-regularised expressions. After
inserting the ALEPH ansatz \refeq{ff_ALEPH} for $D_{q \to
\gamma}(z_\gamma,\mu_{\mathrm{F}})$, we obtain
\begin{equation}
D_{q\rightarrow\gamma}^{\mathrm{bare,MR}}(1-z)=
\frac{\alpha Q_q^2}{2\pi}
\left[
P_{ff}(z)\left[\ln\left(\frac{m_q^2}{\mu_0^2}\frac{(1-z)^2}{z^2}\right)+1\right]+C
\right]\,
.
\label{bareff_ALEPH}
\end{equation}
Integrating \refeq{bareff_ALEPH} over $z$ results in
\begin{eqnarray}
\int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z \,D_{q\rightarrow\gamma}^{\mathrm{bare,MR}}(1-z)&=&
\frac{\alpha Q_q^2}{2\pi}
\biggl[
C(1-z_{\mathrm{cut}}) - \frac{(1-z_{\mathrm{cut}})^2}{2} + \ln\left(z_{\mathrm{cut}}\right)+4 \mathop{\mathrm{Li}_2}\nolimits\left(1-z_{\mathrm{cut}}\right)\nonumber\\
&&{}-2\ln\left(z_{\mathrm{cut}}\right) \ln\left(\frac{m_q^2}{\mu_0^2}
\frac{z_{\mathrm{cut}}^2}{(1 -z_{\mathrm{cut}})^2}\right)
+2\ln^2\left( z_{\mathrm{cut}}\right)\nonumber\\
&&{}-
\frac{1}{2}(1-z_{\mathrm{cut}}) (3-z_{\mathrm{cut}})
\ln\left(\frac{m_q^2}{\mu_0^2}
\frac{z_{\mathrm{cut}}^2}{(1-z_{\mathrm{cut}})^2}\right)
\biggr]
.
\label{bareff_ALEPH_z_int}
\end{eqnarray}
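The integral \refeq{bareff_ALEPH_z_int} can again be cross-checked numerically. The sketch below (illustrative only; the ratio $M=m_q^2/\mu_0^2$, the constant $C$, and $z_{\mathrm{cut}}$ are arbitrary test inputs, and the overall prefactor $\alpha Q_q^2/(2\pi)$ is stripped off) compares a midpoint-rule integration of \refeq{bareff_ALEPH} with the closed form:

```python
import math

def li2(x, terms=600):
    """Dilogarithm Li2(x) via its power series (valid for |x| < 1)."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

def P_ff(z):
    return (1.0 + z * z) / (1.0 - z)

# arbitrary test inputs: M = m_q^2 / mu_0^2 and an ALEPH-type constant C
M, C, zcut = 1.0e-3, -12.1, 0.1

def D_bare(z):
    """(2 pi / alpha Q_q^2) * D^{bare,MR}(1-z) as a function of z."""
    return P_ff(z) * (math.log(M * (1.0 - z)**2 / z**2) + 1.0) + C

# midpoint rule on [0, 1 - zcut]; only an integrable log
# singularity at z -> 0
n = 200_000
h = (1.0 - zcut) / n
numeric = h * sum(D_bare((k + 0.5) * h) for k in range(n))

L = math.log(M * zcut**2 / (1.0 - zcut)**2)
analytic = (C * (1.0 - zcut) - (1.0 - zcut)**2 / 2.0 + math.log(zcut)
            + 4.0 * li2(1.0 - zcut)
            - 2.0 * math.log(zcut) * L + 2.0 * math.log(zcut)**2
            - 0.5 * (1.0 - zcut) * (3.0 - zcut) * L)
print(abs(numeric - analytic))  # small
```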
In the case of the two-cut-off phase-space-slicing method,
subtracting ${\mathrm{d}}\sigma_{\mathrm{frag}}(z_{\mathrm{cut}})$, i.e.\
\refeq{eq:dsigfinalzcut} with \refeq{bareff_ALEPH_z_int}, from
${\mathrm{d}}\sigma_{\mathrm{coll}}^{\mathrm{final}}(z_{\mathrm{cut}})$,
\refeq{cutint}, leads to
\begin{align}
{\mathrm{d}}\sigma_{\mathrm{coll}}^{\mathrm{final}}(z_{\mathrm{cut}})
&{}-\mathrm{d}\sigma_{\mathrm{frag}}(z_{\mathrm{cut}})
=\sum_{i=3}^4
\frac{\alpha}{2\pi}Q_i^2{\mathrm{d}}\sigma_{\mathrm{Born}}\biggl[
\frac{1}{2} - \frac{2\pi^2}{3} + 4 z_{\mathrm{cut}} - C (1-z_{\mathrm{cut}})\nonumber\\
&{}-\left(\frac{3}{2}+2\ln\left(\frac{\Delta E}{E_i}\right)\right)\ln\left(\frac{2E_i^2\delta_{\mathrm{c}}}{m_i^2}\right)
+2\ln\left(\frac{\Delta E}{E_i}\right)
\nonumber\\&{}
+\left[\ln\left(\frac{2E_i^2\delta_{\mathrm{c}}}{\mu_0^2}z_{\mathrm{cut}}\right)
-\frac{3}{2}\right]\ln\left(z_{\mathrm{cut}}^2\right)
+\frac{1}{2}(1-z_{\mathrm{cut}}) (3-z_{\mathrm{cut}})
\ln\left(\frac{2E_i^2\delta_{\mathrm{c}}}{\mu_0^2}z_{\mathrm{cut}}^2\right)
\biggr]\nonumber\\
=&\,\,{}{\mathrm{d}}\sigma_{\mathrm{coll}}^{\mathrm{final}}-\sum_{i=3}^4
\frac{\alpha}{2\pi}Q_i^2{\mathrm{d}}\sigma_{\mathrm{Born}}
\biggl\{(4+C)(1-z_{\mathrm{cut}})\nonumber\\
&{}
-\left[\ln\left( \frac{2E_i^2\delta_{\mathrm{c}}}{\mu_0^2} z_{\mathrm{cut}}\right)
- \frac{3}{2}\right]
\ln\left( z_{\mathrm{cut}}^2\right)
-\frac{1}{2}(1-z_{\mathrm{cut}}) (3-z_{\mathrm{cut}})
\ln\left( \frac{2E_i^2\delta_{\mathrm{c}}}{\mu_0^2}
z_{\mathrm{cut}}^2\right)
\biggr\}
\label{dsigfinalzcut_aleph}
\end{align}
with ${\mathrm{d}}\sigma_{\mathrm{coll}}^{\mathrm{final}}$ from \refeq{slicing_final}.
This result consists of the original collinear contribution that
cancels against the virtual corrections, and an additional, finite
contribution, depending on the cut on the quark energy
$z_{\mathrm{cut}}$. All collinear singularities have cancelled.
In the case of the subtraction method, we can use charge conservation
\begin{equation}
Q_i^2=-\sum_{J=1\atop J\ne i}^4 (-1)^{(i+J)}Q_iQ_J
=-\sum_{a=1}^2 (-1)^{(i+a)}Q_iQ_a
-\sum_{j=3\atop j\ne i}^4 (-1)^{(i+j)}Q_iQ_j
\end{equation}
in \refeq{eq:dsigfinalzcut} to split
${\mathrm{d}}\sigma_{\mathrm{frag}}(z_{\mathrm{cut}})$ into a contribution from
final-state emitter and final-state spectator and one from final-state
emitter and initial-state spectator:
\begin{equation}
{\mathrm{d}}\sigma_{\mathrm{frag}}(z_{\mathrm{cut}})=-\sum_{i=3}^4
\Biggl( \sum_{a=1}^2 (-1)^{i+a}\frac{Q_iQ_a}{Q_i^2}
+\sum_{{j=3}\atop {j\ne i}}^4 (-1)^{i+j}\frac{Q_iQ_j}{Q_i^2} \Biggr)
{\mathrm{d}}\sigma_{\mathrm{Born}}\int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z\,
D_{q_i\rightarrow\gamma}^{\mathrm{bare,MR}}(1-z).\;
\label{frag_split}
\end{equation}
In the case of a final-state emitter and final-state spectator, we can
subtract the second part of \refeq{frag_split} from the contribution
of \refeq{eq:Gijbarint} to find
\begin{align}
\frac{1}{2 s}\sum_{i=3}^4\sum_{{j=3}\atop {j\ne i}}^4
&(-1)^{i+j}\frac{Q_iQ_j}{Q_i^2}
\int{\mathrm{d}}\tilde{\Phi}_{0,ij}\left|\mathcal{M}_0\left(\tilde{k}_i,\tilde{k}_j\right)\right|^2
\nonumber\\&{}\times
\int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z \left\{
\frac{\alpha}{2\pi}Q_i^2 \sum_{\tau^\pm}\bar{\mathcal{G}}_{ij,\tau}^{(\mathrm{sub})}\left(
\tilde{s}_{ij},z\right) + D_{q_i\rightarrow\gamma}^{\mathrm{bare,MR}}(1-z)\right\} \nonumber\\
=\frac{\alpha}{4\pi s}&\sum_{i=3}^4\sum_{{j=3}\atop {j\ne i}}^4
(-1)^{i+j}\frac{Q_iQ_j}{Q_i^2}
\int{\mathrm{d}}\tilde{\Phi}_{0,ij}\left|\mathcal{M}_0\left(\tilde{k}_i,\tilde{k}_j\right)\right|^2
\,\biggl\{
\lrb1+C+\frac{z_{\mathrm{cut}}}{2}\right)
(1-z_{\mathrm{cut}})
\nonumber\\&\quad{}
-\left[
\frac{1}{2}(1-z_{\mathrm{cut}})(3-z_{\mathrm{cut}})
+2\ln\left( z_{\mathrm{cut}}\right)\right]
\ln\left(\frac{\tilde{s}_{ij}}{\mu_0^2}\frac{z_{\mathrm{cut}}}{1-z_{\mathrm{cut}}}\right)
\nonumber\\&\quad{}
+2\mathop{\mathrm{Li}_2}\nolimits\left( 1-z_{\mathrm{cut}}\right)
+\frac{3}{2}\ln\left( z_{\mathrm{cut}}\right)
\biggr\}.
\label{ncssubplusfrag_ij}
\end{align}
Analogously, in the case of a final-state emitter and initial-state
spectator using \refeq{eq:Giabarint}, we are left with
\begin{align}
\frac{1}{2 s}\sum_{i=3}^4\sum_{a=1,2}
&(-1)^{i+a}\frac{Q_iQ_a}{Q_i^2}
\int{\mathrm{d}}\tilde{\Phi}_{0,ia}\left|\mathcal{M}_0\left(\tilde{k}_i,\tilde{k}_a\right)\right|^2
\nonumber\\&\times
\int_0^{1-z_{\mathrm{cut}}}{\mathrm{d}} z
\left\{\frac{\alpha}{2\pi}Q_i^2\sum_{\tau^\pm}\bar{\mathcal{G}}_{ia,\tau}^{(\mathrm{sub})}\left(
\tilde{s}_{ia},z\right) + D_{q_i\rightarrow\gamma}^{\mathrm{bare,MR}}(1-z)\right\}\nonumber\\
=\frac{\alpha}{4\pi s}&\sum_{i=3}^4\sum_{a=1,2}
(-1)^{i+a}Q_iQ_a
\int{\mathrm{d}}\tilde{\Phi}_{0,ia}\left|\mathcal{M}_0\left(\tilde{k}_i,\tilde{k}_a\right)\right|^2
\biggl\{
-\frac{\pi^2}{6}+\left( 1+C+\frac{z_{\mathrm{cut}}}{2}\right)
(1-z_{\mathrm{cut}})\nonumber\\
&\quad{}-\left[
\frac{1}{2}(1-z_{\mathrm{cut}})(3-z_{\mathrm{cut}})
+2\ln\left( z_{\mathrm{cut}}\right)\right]
\ln\left(\frac{|\tilde{s}_{ia}|}{\mu_0^2}\frac{z_{\mathrm{cut}}}{1-z_{\mathrm{cut}}}\right)
\nonumber\\&\quad{}
+2\mathop{\mathrm{Li}_2}\nolimits\left( 1-z_{\mathrm{cut}}\right)-2\mathop{\mathrm{Li}_2}\nolimits\left( -z_{\mathrm{cut}}\right)
+\frac{3}{2}\ln\left( z_{\mathrm{cut}}\right)
\biggr\}.
\label{ncssubplusfrag_ia}
\end{align}
Both \refeq{ncssubplusfrag_ij} and \refeq{ncssubplusfrag_ia} are
finite and only depend on the value of $z_{\mathrm{cut}}$, but not on
the mass regulators.
\subsection{Higher-order initial-state radiation}
\label{hoisr}
In order to achieve an accuracy at the per-mille level, we also
include effects stemming from higher-order ISR
using the structure-function approach as described in
\citeres{Altarelli:1996gh,Denner:2000bj}. The factorisation
theorem states that the leading-logarithmic (LL) initial-state QED
correction can be written as a convolution of the lowest-order cross
section with structure functions according to
\begin{equation}
\int{\mathrm{d}}\sigma^{\mathrm{LL}}=\int_0^1{\mathrm{d}} x_1\int_0^1{\mathrm{d}} x_2 \,
\Gamma_{\mathrm{e}\Pe}^{\mathrm{LL}}(x_1,Q^2)\Gamma_{\mathrm{e}\Pe}^{\mathrm{LL}}(x_2,Q^2)
\int{\mathrm{d}} \sigma_{\mathrm{Born}}(x_1k_1,x_2k_2),
\label{LL_cross}
\end{equation}
where $x_1$ and $x_2$ denote the fractions of the incoming momenta just before the hard scattering, $Q^2$
is the typical scale at which the scattering occurs, and the structure functions up to
$\mathcal{O}(\alpha^3)$ are given by \cite{Altarelli:1996gh}
\begin{eqnarray}
\Gamma^{\mathrm{LL}}_{\mathrm{e}\Pe}&=&\frac{\exp\left(-\frac{1}{2}\beta_{\mathrm{e}}\gamma_{\mathrm{E}}
+\frac{3}{8}\beta_{\mathrm{e}}\right)}{\Gamma\lrb1+\frac{1}{2}\beta_{\mathrm{e}}\right)}\frac{\beta_{\mathrm{e}}}{2}(1-x)
^{\frac{\beta_{\mathrm{e}}}{2}-1}-\frac{\beta_{\mathrm{e}}}{4}(1+x)\nonumber\\
&&{}-\frac{\beta_{\mathrm{e}}^2}{32}\left\{\frac{1+3x^2}{1-x}\ln(x)+4(1+x)\ln(1-x)+5+x\right\}\nonumber\\
&&{}-\frac{\beta_{\mathrm{e}}^3}{384}\biggl\{(1+x)\lsb6\mathop{\mathrm{Li}_2}\nolimits(x)+12\ln^2(1-x)-3\pi^2\right]\nonumber\\
&&{}+\frac{1}{1-x}\biggl[\frac{3}{2}(1+8x+3x^2)\ln(x)+6(x+5)(1-x)\ln(1-x)\nonumber\\
&&{}+12(1+x^2)\ln(x)\ln(1-x)-\frac{1}{2}(1+7x^2)\ln^2(x)
+\frac{1}{4}(39-24x-15x^2)\biggr]\biggr\},
\label{structure_function}
\end{eqnarray}
where
\begin{equation}
\beta_{\mathrm{e}}=\frac{2\alpha}{\pi}\left[\ln\left(\frac{Q^2}{m_\mathrm{e}^2}\right)-1\right],
\end{equation}
$\Gamma$ is the Gamma function, and $\gamma_{\mathrm{E}}$ the
Euler--Mascheroni constant. In the calculation at hand we use
$Q^2=s$.
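For orientation, the size of the expansion parameter $\beta_{\mathrm{e}}$ can be evaluated directly from the input values quoted in \refsec{se:input}; the following minimal Python sketch (illustrative only, not part of the generator) gives $\beta_{\mathrm{e}}\approx 0.108$ at $Q^2=s=\mathswitch {M_\PZ}^2$:

```python
import math

# beta_e = (2 alpha / pi) * [ln(Q^2 / m_e^2) - 1], evaluated at Q^2 = s,
# with alpha(0) and m_e as listed in the input parameters of the text.
def beta_e(Q, alpha=1.0 / 137.03599911, m_e=0.51099892e-3):
    return 2.0 * alpha / math.pi * (math.log(Q**2 / m_e**2) - 1.0)

M_Z = 91.1876  # GeV
print(beta_e(M_Z))  # ~0.108
```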
When we add \refeq{LL_cross} to the one-loop result, we have to
subtract the lowest-order and one-loop contributions
${\mathrm{d}}\sigma^{\mathrm{LL},1}$ already contained in
\refeq{structure_function} to avoid double counting. They read
\begin{eqnarray}
\int{\mathrm{d}}\sigma^{\mathrm{LL},1}&=&\int_0^1{\mathrm{d}} x_1\int_0^1{\mathrm{d}} x_2\left[
\delta(1-x_1)\delta(1-x_2)+\Gamma_{\mathrm{e}\Pe}^{\mathrm{LL},1}(x_1,Q^2)\delta(1-x_2)\right.\nonumber\\
&&\left.{}+\Gamma_{\mathrm{e}\Pe}^{\mathrm{LL},1}(x_2,Q^2)\delta(1-x_1)\right]\int{\mathrm{d}}
\sigma_{\mathrm{Born}}(x_1k_1,x_2k_2),
\end{eqnarray}
where the one-loop structure functions are given by
\begin{equation}
\Gamma_{\mathrm{e}\Pe}^{\mathrm{LL},1}=\frac{\beta_{\mathrm{e}}}{4}\left(\frac{1+x^2}{1-x}\right)_+.
\end{equation}
\section{Implementation}
\label{sec:num}
\setcounter{equation}{0}
The real and virtual corrections described in the two previous sections are
implemented in a parton-level event generator. This program generates
final states with two and three particles for the hadronic cross sections, and
with three and four particles for event-shape distributions. It allows for
arbitrary, infrared-safe cuts on the final-state particles to be applied.
\subsection{Event selection}
\label{sec:es}
The infrared-safety requirement has one important implication that
has, up to now, not always been taken into account in experimental studies. In ISR
events, where a photon is radiated close to the beam-pipe and not
observed, the event-shape variables must be computed in the CM system
of the observed hadrons. Only this transformation to the hadronic CM
system ensures that two-jet-like configurations are correctly
identified with the kinematic limit $y\to 0$ of the event-shape
distributions, as can be seen on the example of thrust: a partonic
final state with a quark--antiquark pair and an unobserved photon
yields two jets which are not back-to-back in the $\mathrm{e}^+\mathrm{e}^-$ CM
frame. The reconstructed thrust axis in this frame is in the direction
of the difference vector of the two jet momenta, resulting in $T<1$,
even for an ideal two-particle final state.
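This effect can be made concrete with a small numerical sketch (Python; the momenta below are illustrative and not taken from the calculation): a massless quark pair recoiling against an unobserved ISR photon has $T<1$ in the $\mathrm{e}^+\mathrm{e}^-$ CM frame, while boosting to the hadronic CM frame restores $T=1$.

```python
import math

def boost_to_rest(p, P):
    """Boost the four-vector p = (E, px, py, pz) into the rest frame of P."""
    E, px, py, pz = P
    m = math.sqrt(E*E - px*px - py*py - pz*pz)
    bx, by, bz = px / E, py / E, pz / E
    b2 = bx*bx + by*by + bz*bz
    gamma = E / m
    e, x, y, z = p
    bp = bx*x + by*y + bz*z
    f = ((gamma - 1.0) * bp / b2 if b2 > 0.0 else 0.0) - gamma * e
    return (gamma * (e - bp), x + f*bx, y + f*by, z + f*bz)

def thrust_two_jets(p1, p2):
    """For a two-particle state, T = max(|p1+p2|, |p1-p2|) / (|p1|+|p2|);
    the maximising axis lies along the sum or the difference vector."""
    norm = lambda v: math.sqrt(sum(c*c for c in v))
    s = [a + b for a, b in zip(p1[1:], p2[1:])]
    d = [a - b for a, b in zip(p1[1:], p2[1:])]
    return max(norm(s), norm(d)) / (norm(p1[1:]) + norm(p2[1:]))

# Massless q qbar pair recoiling against an unobserved 20-GeV ISR photon:
q    = (42.0,  math.sqrt(864.0), 0.0,  30.0)
qbar = (58.0, -math.sqrt(864.0), 0.0, -50.0)
P    = tuple(a + b for a, b in zip(q, qbar))   # hadronic system, pz = -20

T_lab = thrust_two_jets(q, qbar)               # < 1: jets not back-to-back
T_cm  = thrust_two_jets(boost_to_rest(q, P), boost_to_rest(qbar, P))  # = 1
```

The two-jet configuration is thus mapped onto the kinematic endpoint $T=1$ only after the boost, as required by infrared safety.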
Unfortunately, most experimental studies of event-shape distributions
at LEP computed the shape variables in the $\mathrm{e}^+\mathrm{e}^-$ CM frame.
These distributions were then corrected bin-by-bin, with the ISR
photonic corrections modelled by standard parton-shower programs;
the correction factors are often large, shifting the two-jet peak
from within the distribution back to the kinematical edge. It is
therefore very difficult in practice to compare our results with data
from LEP. Despite its importance, this task goes beyond the scope of
the present paper.
In our implementation, we proceed as follows:
\begin{enumerate}
\item For all final-state particles, an acceptance cut is applied on
the polar angle $\theta_i$ of the particle $i$ with respect to the
beam direction: $|\cos\theta_i|\leq \cos\theta_{\mathrm{cut}}$.
Particles that do not pass this cut are unobserved and thus
discarded, i.e.\ their momenta are set to zero.
\item For the remaining final-state particles, the observed
final-state invariant mass squared $s^\prime$ is computed. The event
is rejected if $s' < s_{{\rm cut}}$.
\item If the event is accepted, it is boosted to the CM frame of
the observed final-state particles.
\item To identify isolated-photon events, all observed final-state
particles (including the photon) are clustered into jets using the
Durham algorithm with $E$ recombination and $y_{{\rm cut}} =
0.002$. After the clustering, the photon is inside one of the jets
(or makes up one jet), carrying a fraction $z$ of the jet energy. If
$z>z_{{\rm cut}}$, the photon is considered isolated and the event
is rejected.
\item All remaining events are accepted.
\end{enumerate}
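Steps 1 and 2 of the list above can be sketched as follows (a minimal Python illustration with hypothetical four-momenta; the actual generator of course operates on its internal event record):

```python
import math

def select_event(particles, s, cos_theta_cut=0.965, s_cut_frac=0.81):
    """Steps 1 and 2 of the event selection.
    particles: four-momenta (E, px, py, pz). Returns the observed
    particles, or None if the event fails the s' cut."""
    observed = []
    for E, px, py, pz in particles:
        p_abs = math.sqrt(px*px + py*py + pz*pz)
        # step 1: angular acceptance |cos theta_i| <= cos theta_cut
        if p_abs > 0.0 and abs(pz) / p_abs <= cos_theta_cut:
            observed.append((E, px, py, pz))
    # step 2: observed invariant mass squared s'
    tot = [sum(c) for c in zip(*observed)] if observed else [0.0] * 4
    s_prime = tot[0]**2 - tot[1]**2 - tot[2]**2 - tot[3]**2
    return observed if s_prime >= s_cut_frac * s else None

# A q qbar pair inside the acceptance plus a photon along the beam axis:
event = [(50.0, 0.0, 30.0, 40.0), (50.0, 0.0, -30.0, -40.0),
         (10.0, 0.0, 0.0, 10.0)]
kept = select_event(event, s=110.0**2)  # photon discarded, s' passes the cut
```

Here the beam-collinear photon fails the angular cut and is discarded, while the remaining pair still satisfies $s'\geq s_{\mathrm{cut}}$, so the event is accepted.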
Once an event passes the above set of cuts, we proceed with the
calculation of the event-shape variables, defined in \refsec{jetrate},
in the CM frame of the observed final-state particles. We impose an
additional cut individually for each histogram such that the
singularity in the two-jet region is avoided, typically as a lower
cut-off for the respective observables (see \refsec{se:input}).
The cut affects only the first bin of the histogram and does not cause
a distortion of the shape of the distribution.
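The isolated-photon veto of step 4 can be illustrated with a minimal Durham ($k_{\mathrm{T}}$) clustering in the $E$ scheme, where $y_{ij}=2\min(E_i,E_j)^2(1-\cos\theta_{ij})/Q^2$ and four-momenta are simply added on recombination (a Python sketch with illustrative momenta, not the generator code):

```python
import math

def durham_y(p, q, Q2):
    """Durham distance y_ij = 2 min(E_i, E_j)^2 (1 - cos theta_ij) / Q^2."""
    ni = math.sqrt(sum(c*c for c in p[1:]))
    nj = math.sqrt(sum(c*c for c in q[1:]))
    cos_ij = sum(a*b for a, b in zip(p[1:], q[1:])) / (ni * nj)
    return 2.0 * min(p[0], q[0])**2 * (1.0 - cos_ij) / Q2

def photon_jet_fraction(momenta, photon_idx, Q2, ycut):
    """Cluster with the Durham algorithm and E-scheme recombination;
    return the final jets and the energy fraction z carried by the
    photon inside its jet."""
    jets = [list(p) for p in momenta]
    e_gam = [p[0] if k == photon_idx else 0.0 for k, p in enumerate(momenta)]
    while len(jets) > 1:
        y, i, j = min((durham_y(jets[i], jets[j], Q2), i, j)
                      for i in range(len(jets))
                      for j in range(i + 1, len(jets)))
        if y >= ycut:
            break
        jets[i] = [a + b for a, b in zip(jets[i], jets[j])]
        e_gam[i] += e_gam[j]
        del jets[j], e_gam[j]
    k = max(range(len(jets)), key=lambda n: e_gam[n])
    return jets, e_gam[k] / jets[k][0]

# A soft photon nearly collinear to the quark clusters into the quark jet
# and carries only a small fraction z of the jet energy.
momenta = [(49.0, 0.0, 0.0, 49.0),                   # quark
           (50.0, 0.0, 0.0, -50.0),                  # antiquark
           (1.0, 0.1, 0.0, math.sqrt(1.0 - 0.01))]   # photon, almost along q
jets, z = photon_jet_fraction(momenta, photon_idx=2, Q2=1.0e4, ycut=0.002)
```

With $z=0.02<z_{\mathrm{cut}}=0.9$ the photon is not isolated, and the event is kept.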
\subsection{Input parameters}
\label{se:input}
For the numerical calculation, we
use the following set of input parameters \cite{Amsler:2008zzb}:
\begin{eqnarray}
\hspace*{-5mm}
\begin{array}[b]{r@{\,}l@{\qquad }r@{\,}l@{\qquad }r@{\,}l@{\qquad}}
G_{\mu} &= 1.16637\times 10^{-5}\unskip\,\mathrm{GeV}^{-2}, &
\alpha(0) &= 1/137.03599911,&\alpha_{G_\mu} &= 1/132.49395637 \\
\alpha_{\mathrm{s}}(\mathswitch {M_\PZ}) &= 0.1176,\\
\mathswitch {M_\PW}^{{\mathrm{LEP}}} &= 80.403\unskip\,\mathrm{GeV}, & \Gamma_{\PW}^{{\mathrm{LEP}}} &= 2.141\unskip\,\mathrm{GeV}, && \\
\mathswitch {M_\PZ}^{{\mathrm{LEP}}} &= 91.1876\unskip\,\mathrm{GeV},& \Gamma_{\PZ}^{{\mathrm{LEP}}} &= 2.4952\unskip\,\mathrm{GeV}, && \\
\mathswitch {m_\Pe} &= 0.51099892 \unskip\,\mathrm{MeV}, & \mathswitch {m_\mathswitchr t} &= 171.0\unskip\,\mathrm{GeV},
&\mathswitch {M_\PH} &= 120\unskip\,\mathrm{GeV}.
\end{array}\hspace*{-10mm}
\end{eqnarray}
We employ the complex-mass scheme \cite{Denner:2005es}, where a fixed
width enters the resonant W- and Z-boson propagators, in contrast to the
approach used at LEP to fit the W~and Z~resonances, where running
widths are taken. Therefore, we have to convert the ``on-shell''
values of $M_V^{{\mathrm{LEP}}}$ and $\Gamma_V^{{\mathrm{LEP}}}$ ($V=\mathrm{W},\mathrm{Z}$), resulting
from LEP, to the ``pole values'' denoted by $M_V$ and $\Gamma_V$. The
relation between the two sets is given by
\cite{Bardin:1988xt}
\begin{equation}\label{eq:m_ga_pole}
M_V = M_V^{{\mathrm{LEP}}}/
\sqrt{1+(\Gamma_V^{{\mathrm{LEP}}}/M_V^{{\mathrm{LEP}}})^2},
\qquad
\Gamma_V = \Gamma_V^{{\mathrm{LEP}}}/
\sqrt{1+(\Gamma_V^{{\mathrm{LEP}}}/M_V^{{\mathrm{LEP}}})^2},
\end{equation}
leading to
\begin{eqnarray}
\begin{array}[b]{r@{\,}l@{\qquad}r@{\,}l}
\mathswitch {M_\PW} &= 80.375\ldots\unskip\,\mathrm{GeV}, & \Gamma_{\PW} &= 2.140\ldots\unskip\,\mathrm{GeV}, \\
\mathswitch {M_\PZ} &= 91.1535\ldots\unskip\,\mathrm{GeV},& \Gamma_{\PZ} &= 2.4943\ldots\unskip\,\mathrm{GeV}.
\label{eq:m_ga_pole_num}
\end{array}
\end{eqnarray}
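The conversion \refeq{eq:m_ga_pole} is easily checked numerically; the following Python sketch reproduces the pole values above from the LEP input values:

```python
import math

def to_pole(M_lep, G_lep):
    """Convert 'on-shell' (running-width) mass and width to pole values:
    M = M_lep / sqrt(1 + (G_lep/M_lep)^2), and likewise for the width."""
    r = math.sqrt(1.0 + (G_lep / M_lep)**2)
    return M_lep / r, G_lep / r

M_W, G_W = to_pole(80.403, 2.141)     # -> 80.375 GeV, 2.140 GeV
M_Z, G_Z = to_pole(91.1876, 2.4952)   # -> 91.1535 GeV, 2.4943 GeV
```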
The scale dependence of $\alpha_{\mathrm{s}}$ is determined according to the
two-loop running. The number of active flavours at $\mathswitch {M_\PZ}$ is $n_\mathrm{F}=5$,
resulting in $\Lambda_5=0.221$~GeV. The scale dependence is matched
to two-loop order at the top threshold~\cite{Chetyrkin:2000yt}.
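For illustration, the quoted value $\Lambda_5=0.221$~GeV indeed reproduces $\alpha_{\mathrm{s}}(\mathswitch {M_\PZ})=0.1176$ with the standard two-loop formula; the Python sketch below uses the common approximate two-loop solution (and ignores the flavour-threshold matching), so it is a cross-check, not our running routine:

```python
import math

def alpha_s_2loop(mu, Lambda=0.221, n_f=5):
    """Two-loop running coupling in the standard approximate form
    alpha_s = 4 pi / (b0 L) * [1 - (b1/b0^2) ln(L)/L], L = ln(mu^2/Lambda^2),
    with b0 = 11 - 2 n_f/3 and b1 = 102 - 38 n_f/3."""
    b0 = 11.0 - 2.0 * n_f / 3.0
    b1 = 102.0 - 38.0 * n_f / 3.0
    L = math.log(mu**2 / Lambda**2)
    return 4.0 * math.pi / (b0 * L) * (1.0 - b1 / b0**2 * math.log(L) / L)

print(alpha_s_2loop(91.1876))  # ~0.1176
```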
We neglect effects due to quark mixing and set the CKM matrix to
unity. Throughout this work, we parametrise the couplings appearing at
LO in the $G_\mu$ scheme, i.e.\ we use $\alpha_{G_\mu}$, whereas we
fix the electromagnetic coupling appearing in the relative corrections
by $\alpha=\alpha(0)$. If not stated otherwise, we use the parameters
\begin{equation}
\cos\theta_{\mathrm{cut}}=0.965,\quad s_{\mathrm{cut}}=0.81s,\quad z_{\mathrm{cut}}=0.9,\quad y_{\mathrm{cut}}=0.002
\label{esparas}
\end{equation}
in accordance with the event-selection criteria used in the ALEPH
analysis \cite{Barate:1996fi} and employ the Durham jet algorithm
together with the $E$ recombination scheme for the reconstruction of
isolated photons (see \refsec{jetrate}).
As mentioned in \refsec{sec:es}, we implement a cut such that the
singularity in the two-jet region is avoided. This cut requires the
variables $T$, $\rho$, $B_{\mathrm{T}}$, $B_{\mathrm{W}}$, and $C$ to
be greater than $0.005$, whereas $Y_3$ and $y_\mathrm{cut}$ for
$\sigma_{\mbox{\scriptsize3-jet}}$ are required to be greater than $0.00005$.
\subsection{Checks of the implementation}
The reliability of our results is ensured as follows:
\begin{itemize}
\item \emph{UV finiteness} is checked by varying the reference scale $\mu$ of dimensional regularisation
and finding that our results are independent of this variation.
\item \emph{IR finiteness} is verified by varying the infinitesimal photon mass $m_\gamma$
in mass regularisation and observing
that the sum of the virtual and the soft-photonic corrections is invariant in both the slicing and the subtraction
approach.
In dimensional regularisation, the independence of $\mu$ is checked as for the UV divergences.
\item \emph{Mass singularities} related to collinear photon emission or exchange are shown to cancel between
the virtual and the subtraction endpoint contributions by varying the small
masses of the external fermions.
\item \emph{Two completely independent} calculations have been
performed within our collaboration. We find complete agreement of
the results for $\sigma_{\mathrm{had}}$, jet rates, and event-shape
distributions at the level of the Monte Carlo integration error
which typically is at the per-mille level.
\end{itemize}
Furthermore, we compare the results obtained with phase-space slicing
and the subtraction method, which are completely independent
techniques. In \reffig{fig:sighad_slpara} we show the mutual agreement
of both techniques for the NLO EW results for $\sigma_{\mathrm{had}}$,
in \reffig{fig:sig3j_slpara} for the full
$\mathcal{O}{\left(\alpha\right)}$ results for the three-jet rate with
$y_\mathrm{cut}=0.0006$ at $\sqrt{s}=\mathswitch {M_\PZ}$, and in
\reffig{fig:T_slpara} for the full $\mathcal{O}{\left(\alpha\right)}$
results for the thrust distribution at $\sqrt{s}=206\unskip\,\mathrm{GeV}$. Note that
\reffig{fig:sighad_slpara} and \reffig{fig:sig3j_slpara} refer to the
corrections to $\mathrm{e}^+\mathrm{e}^-\to q\bar{q}$ and
$\mathrm{e}^+\mathrm{e}^-\to q\bar{q}\mathrm{g}$, respectively, i.e.\ to two independent
implementations.
\begin{figure}
\begin{center}
\epsfig{file=slicing_s.ps,width=6cm}
\quad\epsfig{file=slicing_c.ps,width=6cm}
\end{center}
\vspace*{-2em}
\caption{Slicing cut dependence of
$\sigma_{\mathrm{had}}\left(M_\mathrm{Z}\right)$.
For comparison, the results of the subtraction method are also shown.}
\label{fig:sighad_slpara}
\end{figure}%
\begin{figure}
\begin{center}
\epsfig{file=slicing_s_y3.eps,width=6cm}
\quad\epsfig{file=slicing_c_y3.eps,width=6cm}
\end{center}
\vspace*{-1em}
\caption{Dependence of the three-jet rate on the slicing parameters at
$\sqrt{s}=M_\mathrm{Z}$ for $y_{\mathrm{cut}}=0.0006$. For comparison, the results of the subtraction method are also shown.}
\label{fig:sig3j_slpara}
\end{figure}%
\begin{figure}
\begin{center}
\epsfig{file=slicing_s_206.eps,width=6cm}
\quad
\epsfig{file=slicing_c_206.eps,width=6cm}
\end{center}
\vspace*{-1em}
\caption{Dependence of the differential thrust distribution
on the slicing parameters at $\sqrt{s}=206\unskip\,\mathrm{GeV}$. For comparison, the results of the subtraction method are also shown.}
\label{fig:T_slpara}
\end{figure}%
We find that within integration errors the slicing results become
independent of the cut-offs for $\delta_s\lesssim 10^{-3}$ and
$\delta_c\lesssim 10^{-4}$ and fully agree with the results obtained
using the subtraction method. For the sake of clarity, we show only
curves for values of the slicing parameters that lie on the plateau in
\reffig{fig:T_slpara}. For values of the slicing parameters outside
the plateau, the behaviour follows the same pattern as in
Figs.~\ref{fig:sighad_slpara} and \ref{fig:sig3j_slpara}.
It turns out that the subtraction method is more efficient in terms of
run-time than phase-space slicing. To obtain results of comparable statistical quality, the
phase-space slicing method required about one order of magnitude more events than the subtraction
method. We therefore used the
subtraction method to obtain the results presented in the following
sections, both for $\sigma_{\mathrm{had}}$ and jet rates and
event-shape distributions.
\section{Numerical results}
\label{sec:results}
\setcounter{equation}{0}
The numerical parton-level event generator described above can be used
to compute the ${\cal O}(\alpha)$ electroweak corrections to the total
hadronic cross section, to event-shape distributions, and to the
three-jet rate.
\subsection{Results for the total hadronic cross section}
\label{se:results_sighad}
\begin{table}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}rcccc}
\toprule
&$\sqrt{s}=M_\mathrm{Z}$&$\frac{{\mathrm{d}}\sigma_i-{\mathrm{d}}\sigma_{\mathrm{Born}}}{{\mathrm{d}}\sigma_{\mathrm{Born}}}[\%]$&$\sqrt{s}=133\unskip\,\mathrm{GeV}$&$\frac{{\mathrm{d}}\sigma_i-{\mathrm{d}}\sigma_{\mathrm{Born}}}{{\mathrm{d}}\sigma_{\mathrm{Born}}}[\%]$\\
\midrule
$\sigma_{\mathrm{had}}^{\mathrm{Born}}/\unskip\,\mathrm{nb}$&$38.2845(15)$&&$0.068858(2)$&\\
$\sigma_{\mathrm{had}}^{\mathrm{weak}}/\unskip\,\mathrm{nb}$&$37.8541(2)$&$~{-1.1}$&$0.068348(2)$&$-0.7$\\
$\sigma_{\mathrm{had}}^{\mathrm{NLO}}/\unskip\,\mathrm{nb}$&$25.729(3)$&$-32.8$&$0.06269(2)$&$-9.0$\\
$\sigma_{\mathrm{had}}^{\mathrm{NLO+h.o.LL}}/\unskip\,\mathrm{nb}$&$27.341(3)$&$-28.6$&$0.06208(2)$&$-9.8$\\
\bottomrule
\end{tabular*}
\\[0.7cm]
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}rcccc}
\toprule
&$\sqrt{s}=172\unskip\,\mathrm{GeV}$&$\frac{{\mathrm{d}}\sigma_i-{\mathrm{d}}\sigma_{\mathrm{Born}}}{{\mathrm{d}}\sigma_{\mathrm{Born}}}[\%]$&$\sqrt{s}=206\unskip\,\mathrm{GeV}$&$\frac{{\mathrm{d}}\sigma_i-{\mathrm{d}}\sigma_{\mathrm{Born}}}{{\mathrm{d}}\sigma_{\mathrm{Born}}}[\%]$\\
\midrule
$\sigma_{\mathrm{had}}^{\mathrm{Born}}/\unskip\,\mathrm{nb}$&$0.0276993(8)$&&$0.0170486(5)$&\\
$\sigma_{\mathrm{had}}^{\mathrm{weak}}/\unskip\,\mathrm{nb}$&$0.0276780(8) $&$~{-0.1}$&$0.0170626(5)$&$~0.1$\\
$\sigma_{\mathrm{had}}^{\mathrm{NLO}}/\unskip\,\mathrm{nb}$&$0.024770(7) $&$-10.6$&$0.015197(4)$&$-10.9$\\
$\sigma_{\mathrm{had}}^{\mathrm{NLO+h.o.LL}}/\unskip\,\mathrm{nb}$&$0.024633(7) $&$-11.1$&$0.015127(4)$&$-11.3$\\
\bottomrule
\end{tabular*}
\\[0.7cm]
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}rcccc}
\toprule
&$\sqrt{s}=500\unskip\,\mathrm{GeV}$&$\frac{{\mathrm{d}}\sigma_i-{\mathrm{d}}\sigma_{\mathrm{Born}}}{{\mathrm{d}}\sigma_{\mathrm{Born}}}[\%]$&$\sqrt{s}=1000\unskip\,\mathrm{GeV}$&$\frac{{\mathrm{d}}\sigma_i-{\mathrm{d}}\sigma_{\mathrm{Born}}}{{\mathrm{d}}\sigma_{\mathrm{Born}}}[\%]$\\
\midrule
$\sigma_{\mathrm{had}}^{\mathrm{Born}}/\unskip\,\mathrm{nb}$&$0.00241881(7)$&&$0.00059139(2)$&\\
$\sigma_{\mathrm{had}}^{\mathrm{weak}}/\unskip\,\mathrm{nb}$&$0.00238722(7)$&$~{-1.3}$&$0.00056838(2)$&$~{-3.9}$\\
$\sigma_{\mathrm{had}}^{\mathrm{NLO}}/\unskip\,\mathrm{nb}$&$0.0020665(7)$&$-14.6$&$0.0004856(2)$&$-17.9$\\
$\sigma_{\mathrm{had}}^{\mathrm{NLO+h.o.LL}}/\unskip\,\mathrm{nb}$&$0.0020585(7)$&$-14.9$&$0.0004836(2)$&$-18.2$\\
\bottomrule
\end{tabular*}
\caption{Total hadronic cross section $\sigma_{\mathrm{had}}\left(\sqrt{s}\right)$ for LEP1 and LEP2 energies, and for $\sqrt{s}=500\unskip\,\mathrm{GeV}$ and $\sqrt{s}=1\unskip\,\mathrm{TeV}$.}
\label{tab:sighad}
\end{table}
The total hadronic cross section as a function of the CM energy and
the corresponding NLO electroweak corrections have been shown in
\citere{Denner:2009gx}. Here we list some numbers that have been used
to extract $\delta_{\sigma,1}$ and $\delta_{\sigma,\ge2,\mathrm{LL}}$, as
defined in \refeq{sig0NLO} and \refeq{sig0LL}, which enter the
calculation of normalised event-shape distributions and jet rates.
We use the event selection as described in \refsec{sec:es} with the cut
parameters given in \refeq{esparas}. In the following, `weak' refers to
the electroweak NLO corrections without purely photonic corrections.
\reftab{tab:sighad} shows the Born contribution to
$\sigma_{\mathrm{had}}$ in the first row, the weak
$\mathcal{O}(\alpha)$ contribution in the second row, the full
$\mathcal{O}(\alpha)$ contribution in the third row, and the full
$\mathcal{O}(\alpha)+$ h.o. LL contribution in the fourth row for LEP1
and LEP2 energies, as well as for $\sqrt{s}=500\unskip\,\mathrm{GeV}$ and
$\sqrt{s}=1000\unskip\,\mathrm{GeV}$. We show the absolute results in nanobarn in the
second and fourth columns and the relative corrections in per cent in
the third and fifth columns. The numbers in parentheses give the
uncertainties from Monte Carlo integration in the last digits of the
predictions.
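The relative corrections quoted in \reftab{tab:sighad} are simply $({\mathrm{d}}\sigma_i-{\mathrm{d}}\sigma_{\mathrm{Born}})/{\mathrm{d}}\sigma_{\mathrm{Born}}$; for instance, for the $\sqrt{s}=\mathswitch {M_\PZ}$ column (a short Python sketch using the table values):

```python
sigma_born = 38.2845          # nb, at sqrt(s) = M_Z
sigma = {"weak": 37.8541, "NLO": 25.729, "NLO+h.o.LL": 27.341}

# Relative corrections (sigma_i - sigma_Born)/sigma_Born in per cent:
rel = {k: 100.0 * (v / sigma_born - 1.0) for k, v in sigma.items()}
# -> weak: -1.1, NLO: -32.8, NLO+h.o.LL: -28.6
```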
For most energies, the electroweak corrections are sizeable and
negative, ranging from about $-33\%$ at the $\mathrm{Z}$ peak to between
$-9\%$ and $-18\%$ at the other energies considered. The numerically
largest contribution is always due to ISR.
Above $120\unskip\,\mathrm{GeV}$ the magnitude of the corrections is increased due to
LL resummation of ISR, whereas it is decreased in the resonance
region. The virtual one-loop weak corrections (from fermionic and
massive bosonic loops) are moderate, at the level of a few per cent, and
always negative for $\mathswitch {M_\PZ}<\sqrt{s}<1\unskip\,\mathrm{TeV}$.
\subsection{Results for the event-shape distributions and jet rates}
\label{se:results_distri}
In the following we present the results of our calculation for the
three-jet rate as well as for event-shape distributions as described
in \refsec{jetrate}. We show our findings for $\sqrt{s}=\mathswitch {M_\PZ}$ as used
at LEP1 and the selected LEP2 energies $172\unskip\,\mathrm{GeV}$ and $206\unskip\,\mathrm{GeV}$. To
stress the relevance of our work for future linear colliders, we also
show results for $\sqrt{s}=500\unskip\,\mathrm{GeV}$.
The precise size and shape of the corrections depend on the observable
$y$ in question. However, they share the common feature that $q\bar
q\gamma$ final states contribute only in the two-jet region, typically
for small values of $y$.
\begin{figure}
\begin{center}
\epsfig{file=plot.T.MZ.eps,width=6.3cm}
\quad
\epsfig{file=plot.em2h.MZ.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Bsum.MZ.eps,width=6.3cm}
\quad
\epsfig{file=plot.Bmax.MZ.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Cpar.MZ.eps,width=6.3cm}
\quad
\epsfig{file=plot.ycutJ.MZ.eps,width=6.3cm}
\end{center}
\vspace*{-2em}
\caption{The event-shape distributions normalised to $\sigma_0$ at $\sqrt{s}=M_\mathrm{Z}$.}
\label{fig:distri_MZ_nonorm}
\end{figure}
In a first step, we show our results for the distributions normalised
to $\sigma_0$ for $\sqrt{s}=\mathswitch {M_\PZ}$ in \reffig{fig:distri_MZ_nonorm}.
The Born contribution is given by the $A$ term of \refeq{dsdy_EW},
while the full ${\cal O}(\alpha)$ corrections contain the tree-level
$q\bar q \gamma$ contribution $\delta_\gamma$ and the NLO electroweak
contribution $\delta_A$ of \refeq{dsdy_EW}. The $T$, $\rho$,
$B_{\mathrm{T}}$, $B_{\mathrm{W}}$, and $C$ distributions are weighted
by the respective variable $y$, evaluated at each bin centre. The
relative corrections in the lower boxes are obtained by dividing the
respective contributions to the corrections by the Born distribution
given by the $A$ term. We observe large negative corrections due to
ISR, and moderate weak corrections in all distributions. The
corrections are mainly constant for large $y$ (note that we plot $1-T$
instead of $T$), where the isolated-photon veto rejects all
contributions from $q\bar q\gamma$ final states. Near the two-jet
limit, the contribution from $q\bar q\gamma$ final states dominates
the relative corrections. Moreover, it turns out that the
electromagnetic corrections depend non-trivially on the
event-selection cuts (see \refsec{se:results_paradep} for a more
detailed discussion). We observe a significant decrease from the
second bin to the first bin in all distributions, caused by the lower
cut-off that we impose individually for all distributions. Since the
cut-off acts both in the Born and the NLO contribution, we find a
meaningful result for the relative corrections in the first bin. In
the $Y_3$ distribution we clearly see the onset of the $q\bar q\gamma$
final states for $Y_3=0.002$. Since we always cluster photons with
$y<y_\mathrm{cut}=0.002$ in the event selection (see \refsec{sec:es}),
the contribution from $q\bar q\gamma$ final states is removed if
$Y_3>0.002$ and only plays a role for $Y_3< 0.002$.
\begin{figure}
\begin{center}
\epsfig{file=plot.T.MZ.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.em2h.MZ.norm.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Bsum.MZ.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.Bmax.MZ.norm.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Cpar.MZ.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.ycutJ.MZ.norm.eps,width=6.3cm}
\end{center}
\vspace*{-2em}
\caption{The event-shape distributions normalised
to $\sigma_{\mathrm{had}}$ at $\sqrt{s}=M_\mathrm{Z}$.}
\label{fig:distri_MZ_1}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=plot.T.172.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.em2h.172.norm.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Bsum.172.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.Bmax.172.norm.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Cpar.172.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.ycutJ.172.norm.eps,width=6.3cm}
\end{center}
\vspace*{-2em}
\caption{The event-shape distributions normalised
to $\sigma_{\mathrm{had}}$ at $\sqrt{s}=172\unskip\,\mathrm{GeV}$.}
\label{fig:distri_172_1}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=plot.T.206.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.em2h.206.norm.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Bsum.206.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.Bmax.206.norm.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Cpar.206.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.ycutJ.206.norm.eps,width=6.3cm}
\end{center}
\vspace*{-2em}
\caption{The event-shape distributions normalised
to $\sigma_{\mathrm{had}}$ at $\sqrt{s}=206\unskip\,\mathrm{GeV}$.}
\label{fig:distri_206_1}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=plot.T.500.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.em2h.500.norm.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Bsum.500.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.Bmax.500.norm.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.Cpar.500.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.ycutJ.500.norm.eps,width=6.3cm}
\end{center}
\vspace*{-2em}
\caption{The event-shape distributions normalised
to $\sigma_{\mathrm{had}}$ at $\sqrt{s}=500\unskip\,\mathrm{GeV}$.}
\label{fig:distri_500_1}
\end{figure}
In expanding the corrections according to \refeq{dsdyhad_EW}, and
retaining only terms up to LO in $\alpha_{\mathrm{s}}$, we obtain the genuine
electroweak corrections to normalised event-shape distributions, which
we display at $\sqrt{s}=\mathswitch {M_\PZ}$ in \reffig{fig:distri_MZ_1}. The Born
contribution is given by the $A$ term of \refeq{dsdyhad_EW}, while the
${\cal O}(\alpha)$ corrections are now given by $\delta_{\mathrm{EW}}$
of \refeq{deltaEW}. It can be seen very clearly that the large ISR
corrections cancel between the event-shape distributions and the
hadronic cross section when expanding the normalised distributions
properly, resulting in electroweak corrections of a few per cent.
Moreover, effects from ISR resummation are largely reduced as well,
and the difference between $\mathcal{O}{\left(\alpha\right)}$ and
$\mathcal{O}{\left(\alpha\right)}+{}$ h.o. LL is very small.
The purely weak corrections are below the per-mille level.
For the thrust distribution, the full $\mathcal{O}{\left(\alpha\right)}$
corrections are almost constant around $0.5\%$ for $(1-T)>0.05$. The
coefficient $\delta_\gamma$ starts to emerge for $(1-T)=0.04$ and
contributes up to $2.6\%$. The full $\mathcal{O}(\alpha)$ corrections
peak for $(1-T)=0.02$ at $2.5\%$ and amount to $1.8\%$ for
$(1-T)=0.01$ in the first bin. The full $\mathcal{O}(\alpha)$
corrections are flat and around $0.5\%$ for $\rho>0.05$,
$B_{\mathrm{W}}>0.05$, $C>0.1$, and $Y_3>0.002$. For the
$B_{\mathrm{T}}$ distribution they are around $1\%$ and almost flat
for $B_{\mathrm{T}}>0.05$. The full $\mathcal{O}{\left(\alpha\right)}$
corrections reach a maximum between 1\% and 3\% typically for
small values of the event-shape variables and drop towards the first
bin down to between $1.8\%$ and $-4.1\%$. Only for $B_{\mathrm{T}}$
do we also find a maximum (of $4\%$) in the last bin. The LO
$q\bar{q}\gamma$ channel contributes only for small $y$ and can amount
to up to 4\%.
In \reffig{fig:distri_172_1} we show our results for
$\sqrt{s}=172\unskip\,\mathrm{GeV}$. The behaviour is similar as for $\sqrt{s}=\mathswitch {M_\PZ}$,
but in the middle of the distributions a peaking structure emerges. It
is located at $1-T,\rho,B_\mathrm{W}\approx 0.2$, $B_\mathrm{T}\approx
0.25$, $C\approx 0.65$, and $Y_3\approx 0.1$.
We investigate this behaviour in detail in the next section. In all
event-shape distributions the LO $q\bar q\gamma$ contribution ranges
between $3\%$ and $8\%$. Outside the two-jet region and apart from
the domain of the peaking structure, the full
$\mathcal{O}{\left(\alpha\right)}$ corrections are flat and of the order of
$5\%$.
They peak near the onset of the $q\bar q\gamma$ final states between
$4\%$ and $10\%$, and drop in the first bin down to between $1.5\%$
and $-10\%$.
Results for $\sqrt{s}=206\unskip\,\mathrm{GeV}$, are displayed in
\reffig{fig:distri_206_1}. In all event-shape distributions the LO
$q\bar q\gamma$ contribution ranges between $4\%$ and $9\%$. Outside
the two-jet region and outside the domain where the peaking structure
is located, the full $\mathcal{O}{\left(\alpha\right)}$ corrections are
flat, between $0.1\%$ and $2\%$; they peak near the onset of the $q\bar
q\gamma$ final states, between $5\%$ and $9\%$, and drop in the first
bin down to between $2\%$ and $-8\%$. The peaking structure is now
situated at smaller values of $y$; it is less pronounced and,
especially for $B_\mathrm{W}$, $B_\mathrm{T}$, $C$, and $Y_3$, it
extends over a larger range of $y$. Additionally, for large values of
$y$, the weak contribution slightly increases.
Finally, in \reffig{fig:distri_500_1} we show our results for
$\sqrt{s}=500\unskip\,\mathrm{GeV}$. In the event-shape distributions the LO $q\bar
q\gamma$ contribution ranges between $2\%$ and $8\%$. Outside the
two-jet region, the full $\mathcal{O}{\left(\alpha\right)}$ corrections are
flat, between $2\%$ and $3\%$; they peak near the onset of the $q\bar
q\gamma$ final states, between $2\%$ and $9\%$, and drop in the first
bin down to between $2\%$ and $-6\%$. The weak corrections here
contribute up to $+1\%$ in all observables for large values of $y$.
The peaking structure as observed for $\sqrt{s}=172\unskip\,\mathrm{GeV}$ and
$\sqrt{s}=206\unskip\,\mathrm{GeV}$ has completely disappeared.
\begin{figure}[t]
\begin{center}
\epsfig{file=plot.sig3j_a.MZ.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.sig3j_a.172.norm.eps,width=6.3cm}\\[2mm]
\epsfig{file=plot.sig3j_a.206.norm.eps,width=6.3cm}
\quad
\epsfig{file=plot.sig3j_a.500.norm.eps,width=6.3cm}
\end{center}
\vspace*{-2em}
\caption{The three-jet rate normalised
to $\sigma_{\mathrm{had}}$ at different CM energies.}
\label{fig:sig3}
\end{figure}
Figure~\ref{fig:sig3} displays the corrections to the three-jet rate
at various CM energies. As above, the corrections are
appropriately normalised to $\sigma_{\mathrm{had}}$. At
$\sqrt{s}=M_\mathrm{Z}$, the full $\mathcal{O}{\left(\alpha\right)}$ corrections
to the three-jet rate are about $0.5\%$ for $y_\mathrm{cut}\gtrsim
0.002$. Because we use $y_\mathrm{cut}=0.002$ in the event selection,
$q\bar q\gamma$ final states contribute only if
$y_\mathrm{cut}\lesssim 0.002$. The LO $q\bar q\gamma$ contribution
amounts to $1\%$. For $y_\mathrm{cut}< 0.002$, the full
$\mathcal{O}{\left(\alpha\right)}$ corrections become negative, reaching
$-1.5\%$ in the first bin. For $y_\mathrm{cut}\lesssim 0.0005$ the
three-jet rate becomes larger than $\sigma_{\mathrm{had}}$. This
behaviour indicates the breakdown of the perturbative expansion in
$\alpha_{\mathrm{s}}$ due to large logarithmic corrections proportional to
$\log(y_\mathrm{cut})$ at all orders. Inclusion of higher-order QCD
corrections, which are large and negative
\cite{our3j,weinzierl3j}
in this region, yields a ratio of $\sigma_{\mbox{\scriptsize 3-jet}}$ to
$\sigma_{\mathrm{had}}$ less than unity for an extended range of
$\log(y_\mathrm{cut})$. At the higher CM energies, the
corrections to the three-jet rate are larger than at $\sqrt{s}=M_\mathrm{Z}$.
For $y_\mathrm{cut}\gtrsim 0.002$, they are almost constant and amount
to about 4\% at $172\unskip\,\mathrm{GeV}$, and about 2\% at $206\unskip\,\mathrm{GeV}$ and $500\unskip\,\mathrm{GeV}$.
In the region $y_\mathrm{cut}< 0.002$, we find a negative contribution
of up to $-5\%$ for very small values of $y_\mathrm{cut}$.
\subsection{Impact of the event-selection cuts on the event-shape distributions}
\label{se:results_paradep}
In the above results, we could clearly observe that the electroweak
corrections to event-shape distributions are not flat, but display
peak structures. These structures are most pronounced at $\sqrt{s} =
172\unskip\,\mathrm{GeV}$. They are discussed here for thrust as an example. In
\reffig{fig:distri_172_1}, we see that the relative corrections show a
peaking structure for $1-T\approx 0.2$. To understand the origin of
these structures, we extensively studied how variations of the
event-selection cuts, especially the hard-photon cut, influence the
event-shape distributions.
We employ three different cuts in our calculation which
depend on four parameters:
\begin{itemize}
\item[1)] A cut on the production angle $\theta_i$ of all particles,
such that only particles $i$ with
$|\cos\theta_i|<\cos\theta_\mathrm{cut}$ are used in the
reconstruction of the event-shape variables. By default, we use the
value $\cos\theta_\mathrm{cut}=0.965$.
\item[2)] A cut on the visible energy squared $s'$ of the final state,
such that only events with $s'>s_{\mathrm{cut}}$ are accepted. By
default, we use the value $s_{\mathrm{cut}}=0.81s$.
\item[3)] A cut on the maximum photon energy fraction $z$ in a jet, for which we require
$z<z_{\mathrm{cut}}$, where the particles are clustered according to
the Durham jet algorithm with parameter $y_{\mathrm{cut}}$. By
default, we use the values $z_\mathrm{cut}=0.9$ and
$y_{\mathrm{cut}}=0.002$.
\end{itemize}
\begin{figure}[t]
\begin{center}
\epsfig{file=plot_172_s.eps,width=7.2cm}\quad
\epsfig{file=plot_172_c.eps,width=7.2cm}\\[2mm]
\epsfig{file=plot_172_z.eps,width=7.2cm}\quad
\epsfig{file=plot_172_y.ps,width=7.2cm}
\end{center}
\vspace*{-2em}
\caption{Dependence of the thrust distribution on different values of the phase-space cuts at $\sqrt{s}=172\unskip\,\mathrm{GeV}$.}
\label{fig:para_172}
\end{figure}
In \reffig{fig:para_172} we show the full
$\mathcal{O}{\left(\alpha\right)}$ corrections to the thrust distribution
normalised to the Born contribution for $\sqrt{s}=172\unskip\,\mathrm{GeV}$, where the
peak structures are most striking. We plot the results for three
different values of a single cut parameter while we set the other
three cut parameters to their default value. Going from left to right
and top to bottom, we vary $s_{\mathrm{cut}}$,
$\cos\theta_\mathrm{cut}$, $z_{\mathrm{cut}}$, and $y_{\mathrm{cut}}$.
By changing $s_{\mathrm{cut}}$ we observe a change in normalisation of
about $25\%$, while the shape of the distribution stays the same.
Varying $\cos\theta_\mathrm{cut}$ from larger to smaller values leads
to a more and more pronounced peak at $(1-T)\simeq0.2$. The
corrections grow below the peak but are only slightly changed above.
The different slopes for $1-T<0.03$ result from the changing
acceptance of ISR photons. Modifying $z_{\mathrm{cut}}$ has a dramatic
effect on the peak. For $z_{\mathrm{cut}}=0.99$ we find a very
pronounced resonance for $(1-T)\simeq0.28$. By reducing the cut, the
resonance gets strongly suppressed and moves to smaller values of
$(1-T)$. By increasing $y_{\mathrm{cut}}$, we observe an enhancement
of the peak, as well as a slight shift towards larger values of $1-T$.
In \reffig{fig:para_z} we study the change of the
$\mathcal{O}{\left(\alpha\right)}$ corrections to the thrust distribution
normalised to the Born contribution with $z_{\mathrm{cut}}$
for $\sqrt{s}=\mathswitch {M_\PZ}$, $133\unskip\,\mathrm{GeV}$, $206\unskip\,\mathrm{GeV}$, and $500\unskip\,\mathrm{GeV}$.%
\begin{figure}[t]
\begin{center}
\epsfig{file=plot_MZ_z.eps,width=7.2cm}\quad
\epsfig{file=plot_133_z.eps,width=7.2cm}\\[2mm]
\epsfig{file=plot_206_z.ps,width=7.2cm}\quad
\epsfig{file=plot_500_z.eps,width=7.2cm}
\end{center}
\vspace*{-1em}
\caption{Dependence of the thrust distribution on different values of
the cut $z_{\mathrm{cut}}$ at $\sqrt{s}=M_\mathrm{Z}$, $133\unskip\,\mathrm{GeV}$,
$206\unskip\,\mathrm{GeV}$, and $500\unskip\,\mathrm{GeV}$.}
\label{fig:para_z}
\end{figure}
Varying $z_{\mathrm{cut}}$ for $\sqrt{s}=M_\mathrm{Z}$ leaves the
distribution basically unchanged. By increasing $z_{\mathrm{cut}}$
from 0.5 to 0.99 for $\sqrt{s}=133\unskip\,\mathrm{GeV}$ we find a growth of the peak
to almost $100\%$. We also observe that the peak moves from
$1-T\approx0.28$ for $z_{\mathrm{cut}}=0.9$ to $1-T\approx0.25$ for
$z_{\mathrm{cut}}=0.99$. For $\sqrt{s}=206\unskip\,\mathrm{GeV}$ we see the same
features as for $\sqrt{s}=172\unskip\,\mathrm{GeV}$, but now with the peak at
$(1-T)\simeq0.19$ for large $z_{\mathrm{cut}}$. Finally, for
$\sqrt{s}=500\unskip\,\mathrm{GeV}$ a peaking structure seems to appear at
$(1-T)\simeq0.03$, while varying $z_{\mathrm{cut}}$
basically changes only the normalisation.
Through analysing events at the level of the Monte Carlo
generator, we find that the enhancement in the region of the peaking
structure always stems from $q\bar q\mathrm{g}\gamma$ final states,
where a soft gluon is clustered with a hard photon. Increasing
$\cos\theta_{\mathrm{cut}}$ leads to a logarithmic enhancement of
collinear ISR photons, increasing $z_{\mathrm{cut}}$ generally results
in a larger acceptance of photons inside jets, and increasing
$y_{\mathrm{cut}}$ causes more photons to be clustered together with
other partons, resulting in fewer events with isolated photons being
removed. We can therefore conclude that the peaking structure results
from the ISR photon contribution, where a soft gluon is clustered
together with the photon.
More precisely, the peaking structure can be explained by the
radiative-return phenomenon. Since we do not remove all energetic
photons, it is possible that a hard photon and a soft gluon are
clustered together, such that the energy fraction of the photon in the
jet does not exceed $z_\mathrm{cut}$ and the invariant mass of the
quark--antiquark--gluon system $s_{q{\bar q}g}$ is equal to the mass
of the $\mathrm{Z}$ boson. Such a configuration leads on the one hand to an
enhancement due to radiative return but also to a logarithmic
enhancement due to the soft gluon.
In order to analyse this effect further, we consider events where the
photon and the soft gluon are clustered together, such that we have a
three-particle final state that consists of a quark, an antiquark, and
a photonic jet. Assume that the quark, antiquark, and the photonic jet
have the three-momenta $\vec{p}_q,\vec{p}_{\bar q},\vec{p}_\gamma$ and
the energies $E_q,E_{\bar q},E_\gamma$, respectively. We use
energy-momentum conservation
\begin{eqnarray}
\vec{p}_q+\vec{p}_{\bar q}+\vec{p}_\gamma&=&0,\nonumber\\
E_q+E_{\bar q}+E_\gamma&=&\sqrt{s},
\label{res_mom_cons}
\end{eqnarray}
and demand that the invariant mass of the quark--antiquark pair is
equal to $\mathswitch {M_\PZ}$, such that
\begin{equation}
s_{q{\bar q}}=\left( E_q+E_{\bar q}\right)^2-\left(\vec{p}_q+\vec{p}_{\bar q}\right)^2=
\left( E_q+E_{\bar q}\right)^2-\left(\vec{p}_\gamma\right)^2=\left( E_q+E_{\bar q}\right)^2-E_\gamma^2=\mathswitch {M_\PZ}^2.
\label{res_inv_mass_qq}
\end{equation}
Energy conservation \refeq{res_mom_cons} and the mass-shell condition
\refeq{res_inv_mass_qq} imply
\begin{equation}
E_q+E_{\bar q}=\frac{s+\mathswitch {M_\PZ}^2}{2\sqrt{s}},\qquad E_\gamma=\frac{s-\mathswitch {M_\PZ}^2}{2\sqrt{s}}.
\end{equation}
It can be shown that in a three-jet configuration with massless
partons, thrust is always determined by the energy of the most
energetic particle $E_{\mathrm{max}}$
\cite{Dissertori:2003pj}, i.e.~
\begin{equation}
T=\frac{2 E_{\mathrm{max}}}{\sqrt{s}}.
\end{equation}
If we now assume that the quark and the antiquark carry the same
energy, we can calculate the energies of all three jets in the final
state at different CM energies:
\begin{eqnarray}
E_q\left( 133\unskip\,\mathrm{GeV} \right)= E_{\bar{q}}\left( 133\unskip\,\mathrm{GeV} \right)\approx ~49\unskip\,\mathrm{GeV},\quad E_\gamma\left( 133\unskip\,\mathrm{GeV} \right)&\!\approx\! &~35\unskip\,\mathrm{GeV},\nonumber\\
E_q\left( 172\unskip\,\mathrm{GeV} \right)= E_{\bar{q}}\left( 172\unskip\,\mathrm{GeV} \right)\approx ~55\unskip\,\mathrm{GeV},\quad E_\gamma\left( 172\unskip\,\mathrm{GeV} \right)&\!\approx\! &~62\unskip\,\mathrm{GeV},\nonumber\\
E_q\left( 206\unskip\,\mathrm{GeV} \right)= E_{\bar{q}}\left( 206\unskip\,\mathrm{GeV} \right)\approx ~61\unskip\,\mathrm{GeV},\quad E_\gamma\left( 206\unskip\,\mathrm{GeV} \right)&\!\approx\! &~84\unskip\,\mathrm{GeV},\nonumber\\
E_q\left( 500\unskip\,\mathrm{GeV} \right)= E_{\bar{q}}\left( 500\unskip\,\mathrm{GeV} \right)\approx 129\unskip\,\mathrm{GeV},\quad E_\gamma\left( 500\unskip\,\mathrm{GeV} \right)&\!\approx\! &242\unskip\,\mathrm{GeV}.
\end{eqnarray}
This leads to the following thrust values where the radiative-return
phenomena should appear:
\begin{eqnarray}
(1-T_{\mathrm{RR}})(\sqrt{s}=133\unskip\,\mathrm{GeV})\approx 0.27,\nonumber\\
(1-T_{\mathrm{RR}})(\sqrt{s}=172\unskip\,\mathrm{GeV})\approx 0.28,\nonumber\\
(1-T_{\mathrm{RR}})(\sqrt{s}=206\unskip\,\mathrm{GeV})\approx 0.19,\nonumber\\
(1-T_{\mathrm{RR}})(\sqrt{s}=500\unskip\,\mathrm{GeV})\approx 0.03.
\end{eqnarray}
These values coincide perfectly with the peaks in
\reffigs{fig:para_172} and \ref{fig:para_z}. Relaxing the assumption
that the quark and the antiquark carry the same energy only results in
a broadening of the peaking structure. Varying the value of
$z_{\mathrm{cut}}$ leads to different energies of the photonic jet and
therefore changes the allowed energies in the above analysis. For
decreasing values of $z_{\mathrm{cut}}$, we therefore either only
observe the tail of the peak or cut it away completely, which
effectively looks like a shift of the position or the disappearance of
the peak. For $\sqrt{s}=133\unskip\,\mathrm{GeV}$ we observe the tail of the peak for
$E_q<E_{q,\mathrm{peak}}$ such that for decreasing $z_{\mathrm{cut}}$
the peak seems to move to larger values of $(1-T)$. For
$\sqrt{s}=172\unskip\,\mathrm{GeV}$ and $\sqrt{s}=206\unskip\,\mathrm{GeV}$ we observe the tail of the
peak for $E_\gamma>E_{\gamma,\mathrm{peak}}$ such that for decreasing
$z_{\mathrm{cut}}$ the peak seems to move to smaller values of
$(1-T)$.
The study in this section clearly illustrates the non-trivial effect
of realistic photon isolation criteria on the electroweak corrections
to jet observables. The accidental clustering of a soft gluon with a
hard photon results in a photon jet with a photon energy fraction
below the rejection cut. In these events, the distributions of the
final-state jets and their reconstructed pair invariant masses no
longer reflect the underlying parton dynamics. A similar
misreconstruction could also happen for electroweak corrections to
final states involving jets at hadron colliders, and clearly deserves
further study.
\begin{figure}[t]
\begin{center}
\epsfig{file=plot.T.MZ.4q.eps,width=6.5cm}\hspace{1cm}
\epsfig{file=plot.T.200.4q.eps,width=6.5cm}\\[2mm]
\epsfig{file=plot.T.500.4q.eps,width=6.5cm}\hspace{1cm}
\epsfig{file=plot.T.1000.4q.eps,width=6.5cm}
\end{center}
\vspace*{-1em}
\caption{Electroweak corrections to the thrust distribution at
different CM energies. The interference contribution between
electroweak and QCD diagrams for the four-quark final state is
scaled by a factor 1000.}
\label{fig:fourq}
\end{figure}
\subsection{Contribution from four-quark final states}
At ${\cal O}(\alpha^3\alpha_{\mathrm{s}})$, event-shape distributions and jet
cross sections receive a contribution from the process $\mathrm{e}^+\mathrm{e}^- \to
q\bar q q\bar q$ through the interference of QCD and electroweak
amplitudes [see (\ref{process-4q})]. Compared to other contributions at
this order, this four-quark interference contribution is very small.
Its typical magnitude amounts to about one per mille of the
electroweak correction, and is thus within the integration error of
the four-particle contribution. To illustrate the smallness of this
contribution, \reffig{fig:fourq} compares the four-quark
contribution (scaled by a factor 1000) to the total electroweak
correction to the normalised thrust distribution at different
CM energies. The relative magnitude of the four-quark
contribution is always at the per-mille level.
\section{Conclusions and outlook}
\label{sec:conc}
\setcounter{equation}{0}
In this paper, we have derived the NLO electroweak corrections to three-jet
production and event-shape distributions in $\mathrm{e}^+\mathrm{e}^-$ annihilation. At this
order, contributions arise from virtual corrections from weak gauge bosons
(which were evaluated in the complex-mass scheme to take proper account of
the gauge-boson widths), from fermionic loops, from real and virtual photonic
corrections and from interferences between electroweak and QCD diagrams for
four-quark final states.
Our calculation is one of the first to address electroweak corrections
to jet production observables. For this type of observable, one has
to take proper account of the experimental event-selection cuts,
which aim to reject events containing isolated photons. An
infrared-safe definition of isolated photons must permit some amount
of hadronic energy around the photon; for jet observables, this can be
realized by a democratic clustering procedure, used by the LEP
experiments. In this approach, the photon is clustered like any other
hadron by the jet algorithm, resulting in a photon jet. If the photon
carries more than a predefined large fraction of the jet energy, it is
called isolated and the event is discarded. In our calculation, we
have implemented this isolated-photon-veto procedure. Since it
involves cuts on a specific, identified particle in the final state,
the resulting observable is no longer collinear-safe, and the
left-over collinear singularity is compensated by a further
contribution from the quark-to-photon fragmentation function. We have
documented this part of the calculation in detail in different
schemes.
The NLO electroweak corrections to absolute cross sections and
event-shape distributions turn out to be numerically substantial: for
example for thrust at
$\sqrt{s}=\mathswitch {M_\PZ}$, they amount to a correction of $-32\%$, which is
largely dominated by initial-state radiation. Beyond the NLO
electroweak corrections, we also included higher-order leading
logarithmic corrections, which are sizable. Their inclusion at
$\sqrt{s}=\mathswitch {M_\PZ}$ results in a total correction of $-28\%$. Normalizing
these results to the total hadronic cross section (as is done in the
experimental measurement), corrected to the same order, only very
moderate corrections remain in the normalized jet cross sections and
event distributions, and practically no difference is observed between
the fixed-order NLO electroweak results and the results including
higher-order logarithmic corrections.
At LEP1, we find that NLO electroweak effects on event-shape
distributions amount to a correction of about one to two per cent. The
corrections are not uniform over the kinematical range, but tend to
increase towards the two-jet region, where the isolated-photon veto
becomes less efficient. The corrections to the three-jet rate are
below one per cent. Purely weak contributions form a gauge-invariant
subset of the electroweak corrections. At LEP1, these corrections are
below the per-mille level.
At LEP2 energies, the NLO electroweak corrections to event-shape
distributions and to the three-jet rate are typically at the level of two
to eight per cent.
The largest contribution comes again from the photonic
corrections. These are
very sensitive to the precise photon isolation cuts applied to select
the events, and are not uniform over the range of event-shape variables.
The NLO electroweak event-shape distributions
display peaks at LEP2 energies. These peaks are due to
a remnant of the radiative-return phenomenon, which is not fully suppressed
by the photon isolation cuts. The position and energy dependence of these
peaks can be explained quantitatively.
Event-shape and jet cross section data from LEP1 and LEP2 have been
corrected for photonic radiation effects using multi-purpose
leading-logarithmic event generator programs. To compare our results
with the experimental data would first require undoing these
corrections. A further complication in the comparison with data arises
from the fact that event-shape distributions at LEP2 were determined
in the $\mathrm{e}^+\mathrm{e}^-$ centre-of-mass frame for all events, including
initial-state-radiation events. Most of the event-shape variables are
not boost invariant, and should thus be reconstructed in the
centre-of-mass frame of the observed hadrons. If reconstructed in the
$\mathrm{e}^+\mathrm{e}^-$ centre-of-mass frame, ideal two-jet events with
initial-state radiation will not be placed at the kinematical edge of
the distribution, thus violating infrared-safety criteria. Within a
perturbative calculation, it is not possible to apply the event
reconstruction in the $\mathrm{e}^+\mathrm{e}^-$ centre-of-mass frame. The purely
weak corrections at LEP1 and LEP2 were previously not accounted for in
the interpretation of event-shape and jet cross section data. Our
study shows that they are at the level of one per mille or below for
appropriately normalized distributions. At the current level of
experimental and theoretical precision, they are thus not yet relevant
to precision QCD studies of LEP data.
While the magnitude of electroweak corrections decreases towards
higher LEP2 energies, we observe them to increase again when going to
even higher energies, corresponding to a future linear collider. In
part, this increase comes from the fact that purely weak corrections
(which were negligible throughout the LEP2 energy range) become
sizable at high energies. At $\sqrt{s}=500$~GeV, NLO electroweak
corrections to event-shape distributions and jet cross sections amount
to two to four per cent, and thus have a potentially sizable impact on
precision QCD studies at a future linear collider. The purely weak
corrections reach up to one per cent at this energy. Most importantly,
our findings on the interplay of photon isolation and event-selection
cuts, and on the appropriate frame for the reconstruction of
initial-state radiation events will help to optimize precision QCD
studies at future high-energy $\mathrm{e}^+\mathrm{e}^-$ colliders from event shapes
and jet cross sections.
\section*{Acknowledgements}
This work was supported in part by the Swiss National Science
Foundation (SNF) under contracts 200020-116756, 200020-124773 and
200020-126691 and by the European Community's Marie-Curie Research
Training Network HEPTOOLS under contract MRTN-CT-2006-035505.
\section{INTRODUCTION}
\label{intro}
During an eruptive event comprising a solar flare and a coronal mass ejection (CME), energy is believed to be converted into the heating of plasma and the kinetic energy of particles and the CME itself through the process of magnetic reconnection. The standard reconnection models (\citealt{park57,swee58,pets64}) state that newly connected field lines expel plasma from the reconnection site due to the Lorentz force. The pressure gradient across the diffusion region then forces new plasma inwards, along with the field lines frozen to it where they change connectivity and dissipate energy. \citet{lin00} stated that these inflows are concurrent with the eruption of a CME, which remains connected to the magnetic neutral point by an extended current sheet. Initially the CME rises slowly until no neighboring equilibrium state is available. After reaching this point the CME begins to rise at an increasing rate. Energy release and particle acceleration continue due to sustained reconnection as the CME accelerates.
To date, there has been little observational evidence for the predicted inflows associated with reconnection. The most cited example is that of \cite{yoko01}, who found inflow velocities of 1.0--4.7~km~s$^{-1}$ by tracing the movement of threadlike patterns above the limb in a series of {\it SOHO}/EIT 195\AA~images (\ion{Fe}{12}; \citealt{dela95}). Evidence for sustained energy release during CME acceleration has been reported in a recent study of two fast ($>$1000~km~s$^{-1}$) halo CMEs by \cite{temm08,temm10}, who found a strong correlation between the CME acceleration and flare hard X-ray (HXR) time profiles. The bremsstrahlung hard X-rays are signatures of thick-target collisions between the accelerated electrons and the ambient chromospheric material. The authors interpret this correlation as strong evidence for a feedback relationship occurring between CME acceleration and the energy released by magnetic reconnection in the current sheet formed behind the CME. In cases where the current sheet becomes sufficiently longer than its width, it is possible for multiple X-points to form due to the tearing mode instability which can result in the formation of plasmoids \citep{furt63}. Both upward- \citep{ko03,lin05} and downward-directed \citep{shee02} plasmoids associated with CME eruption have been commonly observed in white light coronagraph images, in agreement with MHD simulations (e.g., \citealt{forb83}). Further evidence for plasmoid motions has been presented through radio observations of drifting pulsating structures \citep[DPS;][]{klie00,karl04,rile07,bart08b}.
\begin{figure}[!t]
\begin{center}
\includegraphics[height=8.5cm,angle=90]{f1.eps}
\caption{The CME on 2007 January 25 as seen by the COR1 coronagraph (blue) at 06:53:24~UT as well as the associated EUVI field of view (green). The expanded box shows the coronal loops that form part of active region NOAA 10940 with 40\% and 80\% contours of the 5--10~keV emission seen by RHESSI at the same time (solid line).}
\label{euvi_cor1}
\end{center}
\end{figure}
Observational X-ray evidence for the formation of a current sheet has been presented by \cite{sui03} using data from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI; \citealt{lin02}). The authors were able to show that an above-the-looptop coronal X-ray source (or plasmoid) increased in altitude as a lower-lying X-ray loop decreased in altitude during the initial stages of a solar flare. They concluded that magnetic reconnection occurred between the two sources as the current sheet formed. This interpretation was strengthened by evidence that the mean photon energy decreased with distance in both directions away from the reconnection site (see also \citealt{liu08}). The authors attribute the downward-moving looptop to the collapse of the X-point to a relaxed magnetic loop during the reconfiguration of the magnetic field. The same conclusions were reached by \cite{sui04} and \cite{vero06}, who observed similar motions of rising plasmoids concurrent with shrinking looptop sources in other events imaged with RHESSI.
A recent numerical simulation by \cite{bart08a} shows that by invoking variable reconnection rates along the current sheet, {\it downward} propagating plasmoids should also be visible in X-rays below $\sim$2~R$_{\odot}$ (see also \citealt{rile07}). The condition for this scenario is met when the reconnection rate above the plasmoid is greater than that below resulting in a net downward tension in the newly connected magnetic field lines. Furthermore, this model shows that the interaction of such a plasmoid with the underlying loop system can result in a substantial increase in dissipated energy, more so than during the initial ejection of the rising plasmoid or coalescing plasmoid pairs. To date, there has only been one report of such an interaction by \cite{kolo07} using Yohkoh/SXT data. They found an increase in X-ray and decimetric radio flux and an increase in temperature at the interaction site.
\begin{figure}[!t]
\begin{center}
\includegraphics[height=8.5cm,angle=90]{f2.eps}
\caption{Lightcurves of the flare in the 3--6, 6--12, and 12--25~keV energy bands as observed by RHESSI, as well as the GOES 1--8~\AA~lightcurve. The horizontal bars at the top of the plot denote RHESSI's attenuator state (A0, A1), nighttime (N) and SAA passes (S).}
\label{hsi_goes_ltc}
\end{center}
\end{figure}
\begin{figure*}[]
\begin{center}
\includegraphics[height=\textwidth,angle=90]{f3.eps}
\caption{RHESSI images in the 5--10~keV energy band formed over 60~s integrations during the onset of the flare, although only alternate images are shown here. Contours mark the 40\% and 80\% levels. The plasmoid (source A) and looptop (source B) sources are labeled. The grey patterns around the sources are the CLEAN residuals and reflect the background noise level of the images.}
\label{multi_hsi_plot}
\end{center}
\end{figure*}
\begin{figure}[!b]
\begin{center}
\includegraphics[height=8.5cm]{f4.eps}
\caption{The two sources observed by RHESSI imaged over 2~keV wide energy bins (3--5, 5--7, 7--9~keV) for a single time interval.}
\label{hsi_ht_vs_en}
\end{center}
\end{figure}
In this paper, observational evidence is presented for increased hard X-ray and radio emission during the coalescence of a downward-moving coronal source with a looptop kernel at the onset of a flare observed with RHESSI. Coordinated observations from the Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI; \citealt{howa08}) suite of instruments onboard the Solar Terrestrial Relations Observatory (STEREO; \citealt{kais08}) show that this interaction was concurrent with the acceleration phase of the associated CME. Using wavelet-enhanced images from the EUV Imager (EUVI), evidence is also presented for inflowing magnetic field lines that persisted for several hours prior to reconnection. Section~\ref{obs} describes the event as viewed by RHESSI and STEREO and the techniques used to determine the motion of the coronal X-ray sources and the CME. Section~\ref{conc} discusses the findings in the context of numerical simulations, and summarizes the conclusions.
\section{OBSERVATIONS AND ANALYSIS}
\label{obs}
The event presented here occurred on 2007 January 25 in active region NOAA 10940, which was just behind the east limb at the time as seen from the Earth. Several CMEs from the same active region were observed around this period shortly after the launch of STEREO, and have been the focus of many studies \citep{attr07,luga08,luga09,grig09}. As STEREO-Behind was within 0.2$^{\circ}$ of the Sun-Earth line at this time, no corrections were required to align the images with those from RHESSI. Figure~\ref{euvi_cor1} shows the CME as it passed through the COR1 field-of-view at 06:53:24~UT along with the associated active region as seen by EUVI. Also overlaid on the inset EUVI image are the 5--10~keV source contours observed with RHESSI at the same time. Figure~\ref{hsi_goes_ltc} shows the X-ray lightcurves in the 3--6, 6--12, and 12--25~keV energy bands from RHESSI, along with the 1--8~\AA~lightcurve from GOES. The GOES C6.3 class flare began at 06:33:00~UT and peaked at 07:15:00~UT. Emission in the 3--6 and 6--12~keV energy bands observed by RHESSI began to increase $\sim$5 minutes earlier. At the time of this event, there was another active region on the western limb that was the focus of instruments not capable of observing the full solar disk, such as TRACE and those onboard Hinode. Data for the event presented here were, therefore, only obtainable from those instruments capable of observing the entire solar disk. This included radio data from the Learmonth Solar Observatory in Western Australia at eight discrete frequencies (245, 410, 610, 1415, 2695, 4995, 8800, and 15400 MHz).
\subsection{Coronal X-Ray Source Motions}
\label{x_ray_sources}
RHESSI images were formed using CLEAN \citep{hurf02} in the 5--10~keV energy band over 60~s integrations using detectors 4, 6, 8, and 9. Detectors \#2 and 7 were omitted from this analysis due to their reduced sensitivity to low-energy photons. The calibration for detector \#5 was poorly known at this time and was, therefore, also excluded. Detectors \#1 and 3 also introduced noise in the images in this case by over-resolving the sources due to their higher spatial resolution and were therefore also omitted. The 5--10~keV range was chosen to give the best signal-to-noise ratio below the instrumental background Ge line at $\sim$11~keV during the onset of the flare when the count rate was low. The earliest images revealed a single, high-altitude coronal source (source A; Figure~\ref{multi_hsi_plot}$a$--$f$). At 06:37~UT a second source (source B; Figure~\ref{multi_hsi_plot}$e$--$h$) appeared, apparently at a lower altitude than the initial source. Source B was observed to lie above the post-flare loop arcade that later rose from behind the limb in EUVI images (see Figures 3 and 4 in \citealt{grig09}), and was therefore assumed to be a looptop kernel associated with newly formed loops that lay above those emitting in EUV. Source A, on the other hand, resembled an above-the-looptop source or plasmoid. From the bottom row of Figure~\ref{multi_hsi_plot} it can be seen that these two sources merged between 06:37:00 and 06:41:00~UT. After 06:41:00~UT the amalgamated source displayed a cusp-like structure extending to the southeast until RHESSI went into the Earth's shadow at 07:09~UT.
Figure~\ref{hsi_ht_vs_en} shows the structure of the two sources in finer energy bins at a time when each of the sources could be clearly resolved (06:37~UT). It is shown that source A displayed an energy gradient in terms of height, with higher energy emission emanating from higher altitudes. Source B, on the other hand, showed no discernible displacement in terms of energy. This is in contrast to what was observed in the event presented by \cite{kolo07}, who found a thermal stratification in the looptop source, but no clear displacement for the higher altitude source. Similarly, \cite{sui03} found that higher energy emission emanated from higher altitudes for their looptop source, as expected. However, the reverse was found to be true for the associated rising plasmoid with mean photon energy decreasing with height, consistent with the idea that reconnection occurred in between the two sources. Similar conclusions were reached by \cite{liu08}, who stated that higher energy emission should be observed closer to the reconnection site. With this in mind, the case presented in Figure~\ref{hsi_ht_vs_en} suggests that in forming the plasmoid, the reconnection rate above the source must have been greater than that below. This would not only explain the reverse energy stratification as a function of height, but also the downward motion due to the resulting net tension exerted by the magnetic field, as surmised by \cite{bart08a}.
In order to track the motion of each source, the coordinates of the peak emission were identified and used to triangulate their height above the solar limb. The peak emission, rather than the centroid, was chosen to exclude the possibility of interpreting the relative change in intensity of the two sources as a motion. It was found that source A had an initial height of 45~Mm at 06:29~UT and decreased in altitude during the subsequent 12 minutes (Figure~\ref{hsi_ltc_ht_radio}$c$). A linear least-squares fit to these data points yielded a mean downward velocity of 12~km~s$^{-1}$, similar to the value of 16~km~s$^{-1}$ found by \citet{kolo07} for their downward-moving plasmoid. Source B was observed to rise continuously throughout the event, which is characteristic of a post-flare arcade. Its mean velocity was found to be $\sim$5~km~s$^{-1}$. After 06:41:00~UT the individual sources could no longer be resolved; therefore, the time interval over which the two sources merged was estimated to be from 06:37 to 06:41~UT.
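The velocity estimate amounts to a straight-line least-squares fit of source height against time. The following sketch (plain Python) illustrates the procedure; the height--time values below are purely synthetic stand-ins chosen to mimic a steadily descending source, not the measured RHESSI data:

```python
# Linear least-squares fit of source height vs. time to extract a mean
# velocity. The data points are illustrative only (synthetic, not measured).
times_s   = [0.0, 120.0, 240.0, 360.0, 480.0, 600.0, 720.0]  # s since first image
height_Mm = [45.0, 43.6, 42.1, 40.7, 39.2, 37.8, 36.4]       # Mm (synthetic)

n = len(times_s)
mean_t = sum(times_s) / n
mean_h = sum(height_Mm) / n
slope_Mm_per_s = (sum((t - mean_t) * (h - mean_h)
                      for t, h in zip(times_s, height_Mm))
                  / sum((t - mean_t) ** 2 for t in times_s))
velocity_km_s = slope_Mm_per_s * 1000.0   # 1 Mm = 1000 km
print(f"mean velocity = {velocity_km_s:.1f} km/s")   # prints: mean velocity = -12.0 km/s
```

A negative slope corresponds to a downward-moving source; with real measurements, the scatter of the points about the fitted line would also give an uncertainty on the velocity.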
\subsection{Evidence for Enhanced Nonthermal Emission}
\label{xray_spec}
According to \cite{bart08b}, a plasmoid-looptop interaction as described in Section~\ref{x_ray_sources} should have a distinct observational signature. The authors predict that the resulting increase in energy dissipation should manifest itself as enhanced chromospheric or HXR emission. In the event presented by \cite{kolo07}, the authors observed a concurrent increase in both HXRs (14--23~keV) and radio emission (1--2~GHz), both indicators of nonthermal electrons. The authors also detected an increase in temperature at the interaction site in the corona during the merging. Figure~\ref{hsi_ltc_ht_radio}$a$ shows the RHESSI lightcurves (in raw counts) in 3~keV wide energy bins (3--6, 6--9, 9--12, 12--15, and 15--18~keV) over the flare onset using the front segment of detector \#1 only. Between 06:38 and 06:44~UT (shown by the two vertical dotted lines in Figure~\ref{hsi_ltc_ht_radio}) there is a pronounced enhancement in the higher energy bands (12--15 and 15--18~keV, in particular). A similar enhancement is also visible in the 245~MHz channel of the Learmonth radio data (Figure~\ref{hsi_ltc_ht_radio}$b$). The increase in emission (from 06:38--06:41~UT) corresponds to the approximate time over which the two X-ray sources were observed to merge in Figure~\ref{multi_hsi_plot}$e$--$g$. From 06:41--06:44~UT (after the plasmoid source was no longer visible) HXR and radio emissions both appeared to decrease briefly. This episode of increased nonthermal emission is therefore believed to be a result of a secondary phase of particle acceleration due to magnetic reconnection within the current sheet formed between the two merging sources. Unfortunately there was no radio spectrograph data available at the time of this event to search for evidence of drifting pulsating structures.
A RHESSI spectrum taken around the time of the merging (06:41:00~UT; Figure~\ref{spec_fits}) also shows that emission from 9--20~keV is predominantly nonthermal, consistent with the idea that enhancements in both the HXR and radio lightcurves are evidence for an increase in the number of accelerated particles. This spectrum was also generated using only detector \#1 to remain consistent with the lightcurve shown in Figure~\ref{hsi_ltc_ht_radio}. This increased nonthermal emission is consistent with the simulations of \cite{bart08b} but is clearly coronal in nature, rather than chromospheric as predicted. Chromospheric rebrightening cannot be ruled out, however; it may simply be difficult to detect both coronal plasmoids and footpoint emission simultaneously during on-disk events due to RHESSI's limited dynamic range.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{f5.eps}
\caption{$a$) RHESSI lightcurves in the 3--6, 6--9, 9--12, 12--15, and 15--18~keV energy bands from the front segment of detector \#1 only. The horizontal bars marked A0 and A1 at the top of the plot denote the attenuator state. $b$) Emission in the 245, 410 and 610~MHz channels from the Learmonth radio telescope. The fluxes in the 410 and 610~MHz channels have been scaled by factors of 2 and 2.5 for clarity, respectively. $c$) Height-time plots of the two 5--10~keV sources as observed by RHESSI. The plasmoid source is denoted by crosses while the looptop source is given by diamonds, both with error bars. The solid line represents a least-squares fit to the downward moving source. The two vertical dotted lines mark the approximate time of enhanced HXR and radio emission during which the two RHESSI sources appeared to merge.}
\label{hsi_ltc_ht_radio}
\end{center}
\end{figure}
\subsection{CME Kinematics}
\label{cme_acc}
One limitation of many previous studies of CMEs is the absence of data below $\sim$3~R$_{\odot}$, where most of the CME acceleration takes place. This is due in part to the loss of the C1 coronagraph on SOHO/LASCO in 1998. With the launch of STEREO in 2006, this gap has been filled with the SECCHI suite of instruments. EUVI captures full-disk observations out to 1.7~R$_{\odot}$, while the COR1 and COR2 coronagraphs cover 1.4--4~R$_{\odot}$ and 2--15~R$_{\odot}$, respectively.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{f6.eps}
\caption{RHESSI photon spectra with the associated residuals using the front segment of detector \#1 integrated over 06:41:40--06:42:00~UT during the merging phase. The dotted line represents the best fit to the thermal distribution while the dashed line represents the thick-target component. The solid line shows the sum of the two components and the dot-dashed line marks the background.}
\label{spec_fits}
\end{center}
\end{figure}
The data used in this study are exclusively from the STEREO-Behind (STEREO-B) spacecraft and were prepped using the standard {\sc secchi\_prep} routine inside SSWIDL. For EUVI, this entails standard corrections for de-bias, vignetting, and exposure time normalization, along with rotating the images with a cubic convolution interpolation to place solar north at the top of the image. For COR1, this involved the extra step of background-subtracting each polarization state individually before combining them using a Mueller matrix to form polarized brightness images. For COR2, total brightness images were created and then studied as base difference images. Both COR1 and COR2 images were further enhanced using a wavelet technique \citep{byrn09}.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=\textwidth,angle=90]{f7.eps}
\caption{Six select EUVI images in the 195~\AA~passband covering the time range 04:56--06:51~UT. Panels $a$--$c$ show the initial gradual rise of the CME front ({\it triangles}). A structure believed to be the southern leg of the CME is also noted. Overplotted on panels $e$ and $f$ are contours of the concurrent 5--10~keV emission observed by RHESSI: the plasmoid (source A) and looptop (source B), respectively. Note that the leg of the CME is no longer visible in these panels.}
\label{euvi_cme_front}
\end{center}
\end{figure*}
The CME front was first detected in the EUVI 195~\AA~passband at 04:56~UT at a height of $\sim$150~Mm above the eastern limb and gradually increased in height over the subsequent $\sim$1.5 hours (see Figures~\ref{euvi_cme_front}$a$--$c$). After 06:30~UT, when the CME became visible in COR1 images (as shown in Figure~\ref{euvi_cor1}), it began to expand more rapidly. At the same time a structure believed to be one leg of the CME (Figures~\ref{euvi_cme_front}$a$--$d$) was observed to sweep northwards to the site of the RHESSI 5--10~keV emission as noted in Figure~\ref{euvi_cme_front}e. This motion is interpreted as evidence for the predicted inflows associated with reconnection and will be discussed further in Section~\ref{inflows}.
The maximum height of the CME front above the solar limb was measured in each frame to create a height-time profile. The assigned uncertainty in height of the CME front was taken to be five pixels, corresponding to uncertainties of 5, 50, and 200~Mm for EUVI, COR1 and COR2, respectively. From these, velocity and acceleration profiles along with their associated uncertainties were numerically derived using a three-point Lagrangian interpolation (see Figures~\ref{hsi_cme_ht}$a$--$c$) similar to that used by \cite{gall03}. This technique is not as sensitive to the uncertainties in the height-time measurements as a standard two-point numerical differentiation and can give an accurate representation of the acceleration profile. However, by smoothing the data in this way the magnitude of the values can only be taken as upper limits, at best.
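The differentiation step described above can be sketched numerically. In the snippet below, the height-time profile is synthetic (a gradual rise plus a constant acceleration standing in for the measured data), and `np.gradient` applies the same three-point formula: a second-order central difference in the interior of the grid with one-sided differences at the endpoints:

```python
import numpy as np

# Synthetic height-time profile (h in Mm, t in s) standing in for the
# measured CME front: gradual rise plus a constant acceleration.
t = np.linspace(0.0, 9000.0, 31)
h = 150.0 + 0.03*t + 2e-6*t**2

# Three-point differentiation: central differences in the interior,
# one-sided differences at the two endpoints.
v = np.gradient(h, t)                  # velocity, Mm/s
a = np.gradient(v, t)                  # acceleration, Mm/s^2

print(f"v(mid) = {v[15]*1e3:.1f} km/s, a(mid) = {a[15]*1e6:.1f} m/s^2")
```

For this quadratic test profile the interior central differences are exact, recovering $v(t) = 0.03 + 4\times10^{-6}\,t$~Mm~s$^{-1}$ and a constant acceleration of 4~m~s$^{-2}$.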
Figures~\ref{hsi_cme_ht}$a$--$c$ show that the CME front rose gradually for 1.5 hours with a mean velocity of $<$100~km~s$^{-1}$ before beginning to accelerate at $\sim$06:15~UT, when it was at a height of 250~Mm (1.35~R$_{\odot}$). The acceleration profile peaks some 20 minutes later, when the CME was at a height of 400~Mm above the limb (1.57~R$_{\odot}$). Subsequently it continued to increase in height and velocity, but at a decreasing rate. It attained its maximum velocity of 1400~km~s$^{-1}$ at a height of 7000~Mm ($\sim$11~R$_{\odot}$) at $\sim$08:00~UT, after which it began to decelerate. Figures~\ref{hsi_cme_ht}$d$ and \ref{hsi_cme_ht}$e$ show the height-time plot and lightcurves of the associated X-ray emission, respectively. The downward motion of the plasmoid observed by RHESSI occurred during the acceleration phase of the CME. This lends further support to the idea that the CME front and the plasmoid were connected by a mutual current sheet and that the primary episode of reconnection both accelerated the CME and generated the magnetic tension in the field lines necessary to drive the plasmoid downwards. However, it is also possible that the CME acceleration was driven by external forces (e.g., a kink instability in the flux rope) which led to filamentation of the current sheet and subsequent reconnection and plasmoid motion.
\subsection{Reconnection Inflows}
\label{inflows}
During the initial gradual rise of the CME front, a linear structure believed to be the southern `leg' of the CME can be seen in panels $a$--$c$ of Figure~\ref{euvi_cme_front}. At 06:26~UT (Figure~\ref{euvi_cme_front}$d$) this structure was observed to sweep northwards towards the location of the RHESSI emission visible in the subsequent panel. It was no longer observed at its original location. Unfortunately, the northern leg was not visible, presumably obscured by the multitude of bright loops of the active region.
To track the motion of this structure, a vertical one-pixel slice at Solar X = -1010$\arcsec$ was taken through a series of wavelet-enhanced EUVI images and stacked together in sequence to form a time series. The left-hand panel of Figure~\ref{euvi_time_slice} shows one such image taken at 06:51:47~UT with a solid vertical line denoting the pixel column used in the time series. The dotted line indicates the position of the CME leg some 3 hours earlier and the arrow denotes its inferred direction of motion. The solid contours mark the concurrent 5--10~keV emission observed by RHESSI (source B), which appeared elongated with a cusp to the southeast. A long narrow structure extending from the looptop to the southeast was also apparent in EUVI images. Such features are also often attributed to the reconnection process (e.g. \citealt{mcke99}). The emission associated with the plasmoid when it was first observed by RHESSI at 06:29~UT is also overlaid (source A; dashed contours) and is also located along the narrow structure in the EUVI image.
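The construction of such a time series is simple to sketch: the same pixel column is extracted from every frame and the columns are stacked in time order. The data below are synthetic noise; in the paper the frames are the wavelet-enhanced EUVI images and the column sits at Solar X = $-1010\arcsec$:

```python
import numpy as np

# Minimal sketch of the time-slice technique: extract one pixel column
# from each frame of an image cube and stack the columns in time order.
cube = np.random.default_rng(2).random((40, 256, 256))  # (n_frames, ny, nx)
x_col = 128                                             # chosen pixel column
stack = cube[:, :, x_col].T                             # (ny, n_frames): y vs. time
# Features moving along y appear as slanted tracks in `stack`; the slope
# of a track gives the plane-of-sky speed (pixel scale / frame cadence).
```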
The right-hand panel of Figure~\ref{euvi_time_slice} shows the time series of the one-pixel wide column through the series of wavelet-enhanced EUVI images. A feature was observed to propagate northwards from Solar Y $\approx-$195$\arcsec$ at 03:30~UT to Solar Y $\approx-$175$\arcsec$ at $\sim$06:50~UT, which was close to the site of the emission observed by RHESSI at that time. This time period also corresponds to the gradual rise phase of the CME front (noted in Figure~\ref{hsi_cme_ht}$a$). This feature is interpreted as evidence for the inflowing magnetic field lines associated with the slow reconnection prior to the main eruption. From this time series, an inflow velocity of 1.5~km~s$^{-1}$ was inferred, comparable to the 1.0--4.7~km~s$^{-1}$ values found by \cite{yoko01} using a similar method. Knowledge of the inflow velocity during a flare can provide information on the rate of reconnection and hence the energy release rate. The reconnection rate, $M_A$, is defined as the ratio of the inflow speed to the local Alfv\'{e}n speed. Taking a typical coronal Alfv\'{e}n speed of 1000~km~s$^{-1}$, the inflow velocity measured here results in a value of $M_A \approx 0.001$. This is also consistent with the lower end of the range of values for $M_A$ found by \citet{yoko01}. The brighter feature in the figure originating at Solar Y $\approx-$200$\arcsec$ and moving south is likely to be one of the active region loops being displaced as the CME erupts.
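The reconnection rate quoted above is simply the ratio of the two speeds; with the assumed coronal Alfv\'{e}n speed of 1000~km~s$^{-1}$:

```python
# Reconnection rate M_A = v_inflow / v_Alfven, using the values in the text.
v_in = 1.5        # km/s, inflow speed inferred from the EUVI time series
v_A = 1000.0      # km/s, assumed typical coronal Alfven speed
M_A = v_in / v_A
print(M_A)        # 0.0015, quoted as M_A ~ 0.001 after rounding
```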
\section{DISCUSSION AND CONCLUSIONS}
\label{conc}
Rare observations are presented of a downward-propagating X-ray plasmoid appearing to merge with a looptop kernel during an eruptive event seen above the solar limb; the first case observed with RHESSI and perhaps only the second ever. Although the majority of above-the-looptop sources observed (in both white light and X-rays) tend to rise due to the weaker magnetic field and decreasing density above the flare loops, in certain instances conditions can also be right for downward-moving plasmoids to form. Enhanced HXR emission detected with RHESSI and radio emission observed by the Learmonth radio telescope suggest that this merging resulted in a secondary episode of particle acceleration (see Figure~\ref{hsi_ltc_ht_radio}). Images of the plasmoid formed over finer energy bins (as shown in Figure~\ref{hsi_ht_vs_en}) show that higher energy emission was observed at higher altitudes. This is consistent with the idea that the reconnection rate above the source was greater than that below, unlike rising plasmoids previously observed with RHESSI which show mean photon energy decreasing with height (e.g. \citealt{sui03}). Complementary observations from STEREO show that the plasmoid-looptop merging was concurrent with the period of the most rapid acceleration of the associated CME (Figures~\ref{hsi_cme_ht}$c$ and $d$). These observations are in agreement with a recent numerical simulation that predicts an increase in liberated energy during the merging of a plasmoid with a looptop source \citep{bart08a}. The formation of plasmoids is attributed to the tearing-mode instability during current sheet formation in the wake of an erupting CME \citep{furt63,lin00}.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=8.5cm]{f8.eps}
\caption{Summary of the kinematics of both the CME observed with STEREO and the coronal X-ray sources observed with RHESSI. {\it a}) Height-time plot of the CME front from EUVI (diamonds), COR1 (triangles), and COR2 (squares). {\it b}) and {\it c}) The associated velocity and acceleration profiles, respectively. {\it d}) Height-time plot of the 5--10~keV sources as observed by RHESSI. The downward-moving coronal source is shown as crosses with error bars. The solid line denotes a least-squares fit to the data points and has been extended beyond the data points for clarity. The rising looptop source is represented by diamonds, also with error bars. {\it e}) Observing summary profiles for RHESSI in the 3--6, 6--12 and 12--25~keV energy bands. Horizontal bars marked N and S denote RHESSI nighttimes and SAA passes, respectively.}
\label{hsi_cme_ht}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[height=8.5cm,angle=90]{f9.eps}
\caption{{\it Left}: A wavelet-enhanced EUVI image taken at 06:51:47~UT. The solid contours overlaid mark the position of the HXR looptop source at the time of the image. The dashed contour marks the position of the plasmoid at 06:29~UT. The dotted line shows the location of the CME leg at 04:46~UT and the arrow points in the direction of its motion. The solid vertical line denotes the pixel column used for the time-slice plot on the right. {\it Right}: The temporal evolution of a vertical slice through a series of EUVI images. The dotted line marks the time of the image in the left-hand panel. The structure believed to be the inflowing CME leg is identified between the two parallel lines.}
\label{euvi_time_slice}
\end{center}
\end{figure}
\cite{bart07,bart08a} have shown theoretically that the deceleration of the plasmoid as it collides with the looptop source can lead to significant episodes of energy release. During this deceleration, antiparallel magnetic field lines begin to pile up between the two sources and a secondary current sheet is formed. This in turn leads to a secondary episode of magnetic reconnection that is driven by the magnetic tension of the field lines that govern the plasmoid motion. The authors also claim that the merging process triggers the excitation of large amplitude waves which can carry with them some of the stored magnetic energy. Although it is not possible to detect any acceleration or deceleration from the RHESSI images presented, a mean downward velocity of 12~km~s$^{-1}$ was calculated. This value is commensurate with the previous observation of \cite{kolo07}, who measured 16~km~s$^{-1}$ during a similar event observed with Yohkoh. However, both these observed values are considerably lower than the value predicted by \cite{bart08a} of $\sim$40\% of the local Alfv\'{e}n speed (i.e. $\sim$400~km~s$^{-1}$). Similar values of $\sim$200~km~s$^{-1}$ were predicted by \cite{rile07} for downward-moving white-light plasmoids. The low velocity measured here may be attributed to the low value of the reconnection rate as estimated from the inflows observed with EUVI (assuming that these field lines converged above the plasmoid). The value of $M_A \approx$ 0.001 is an order of magnitude lower than that used in the numerical simulation. As the amount of tension exerted on the plasmoid is sensitive to the net reconnection rate, this would result in a lower tension force and therefore lower downward velocity. This in turn may also affect the amount of energy liberated in the subsequent collision with the looptop. 
It is possible that the model of \cite{bart08a} may overestimate the velocity (and the subsequently dissipated energy) given that the simulation is two-dimensional and does not take into account 3D structures, such as a twisted flux rope. Similarly, the plasmoid detected with RHESSI was observed for more than 10 minutes before merging with the looptop source, whereas the simulations which yielded higher velocities predict that the source should exist for only $\sim$1 minute before merging. While the simulations of \cite{bart08a} predict a rebrightening of the loop footpoints in HXRs and/or chromospheric emission, both the analysis presented here and that of \citet{kolo07} show a distinct increase in coronal emission. A recent analysis by Miklenic et al. (2010; submitted) appears to refute the idea that plasmoid-looptop interactions could be responsible for chromospheric rebrightenings. These observations provide further evidence that the particle acceleration process occurs in the corona rather than at the footpoints, as recently suggested by \cite{flet08}, although acceleration at the footpoints, as suggested by \cite{brow09}, cannot be ruled out.
While plasmoid-looptop interactions are rarely observed, it is possible that they occur more often but are difficult to observe due to the brighter emission from the flare itself and RHESSI's limited dynamic range. A newly developed technique of deep integrations using RHESSI visibility-based X-ray imaging \citep{sain09} may help to identify faint X-ray sources in the corona during eruptive limb events. By comparing other similar events it may be possible to determine how great an effect the CME acceleration (and magnetic reconnection rate, if possible) has upon the resulting HXR and radio flux. It would therefore be useful to find events observed at a time when RHESSI's detector calibration was better known, in order to perform a more rigorous spectral analysis than was possible for this event. This could reveal more detailed information on the energetics of the resulting accelerated particles.
\acknowledgements
ROM would like to thank Gordon Holman and Jack Ireland for their very helpful and insightful discussions, and Kim Tolbert for modifications made to the RHESSI software. We also thank the anonymous referee for their constructive comments which improved the quality of this paper. RTJMA is funded by a Marie Curie Intra European Fellowship. CAY acknowledges support from NASA Heliophysics Guest Investigator grant NNG08EL33C.
\bibliographystyle{apj}
\end{comment}
\newpage
\section{Introduction}
\label{sec:Introduction}
In a bulk of material, Fourier's law is said to hold if the flux of energy $J$ is proportional to the gradient of temperature, i.e.,
\begin{equation}\label{Fourier}
J \,=\, -\kappa\nabla T
\,,
\end{equation}
where $ \kappa $ is called the conductivity of the material.
This phenomenological law has been widely verified in practice. Nevertheless, the mathematical understanding of thermal conductivity starting from a microscopic model is still a challenging question \cite{Challange2000,Dhar-review-2008} (see also \cite{Lepri-Livi-Politi-2} for a historical perspective).
Since the work of Peierls \cite{Peierls-1929,Peierls-book-1955}, it has been understood that anharmonic interactions between atoms should play a crucial role in the derivation of Fourier's law for perfect crystals.
It has been known for a long time that the conductivity of perfect harmonic crystals is infinite. Indeed, in this case, phonons travel ballistically without any interaction. This yields a wave-like transport of energy across the system, which is qualitatively different from the diffusion predicted by the Fourier law \eqref{Fourier}. For example, in \cite{Rieder-Lebowitz-Lieb-1967}, it is shown that the energy current in a one-dimensional perfect harmonic crystal, connected at each end to heat baths, is proportional to the difference of temperature between these baths, and not to the temperature gradient.
In addition to the non-linear interactions, the presence of impurities also causes scattering of phonons and may therefore strongly affect the thermal conductivity of the crystal.
Thus, by studying disordered harmonic systems, one can learn about the role of disorder in heat conduction while avoiding the formidable technical difficulties associated with anharmonic potentials.
Moreover, many problems arising with harmonic systems can be stated in terms of random matrix theory, or can be reinterpreted in the context of disordered quantum systems.
Indeed, in \cite{Dhar_Spect_Dep-01} Dhar considered a one-dimensional harmonic chain of $ n $ oscillators connected to their nearest neighbors via identical springs and coupled at the boundaries to rather general heat baths, parametrized by a function $ \mu: \R \to \C $ and the temperatures $ T_1 $ and $ T_n $ of the left and right baths, respectively. Dhar expressed the steady-state heat current $ J^{(\mu)}_n $ as an integral over the oscillation frequency $ w $ of the modes:
\bels{intro:general stationary current as w-integral}{
J^{(\mu)}_n
\;=\;
(T_1-T_n)
\int_\R
\absb{
v_{\mu,n}^\mathrm{T}\msp{-1}(w) A_n(w)\cdots A_1(w)v_{\mu,1}\msp{-1}(w)
}^{-2}
\dif w
\,.
}
Here $ A_k(w) \in \R^{2 \times 2} $ is the random transfer matrix corresponding to the mass of the $ k $\textsuperscript{th} oscillator, while $ v_{\mu,1}(w) $ and $ v_{\mu,n}(w) $ are $\C^2$-vectors determined by the bath function $ \mu $ and the masses of the leftmost and rightmost oscillators, respectively.
Standard multiplicative ergodic theory \cite{RDS-Arnold-1998} implies that asymptotically the norm of $ Q_n(w) := A_n(w)\cdots A_1(w) $ grows almost surely like $ \nE^{\gamma(w)n} $, where the non-random function $ \gamma(w) \ge 0 $ is the associated Lyapunov exponent. In the context of heat conduction this corresponds to the localization of the eigenmodes of one-dimensional chains, while in disordered quantum systems one speaks of one-dimensional Anderson localization \cite{Anderson-58}.
However, in the absence of an external potential (pinning), the Lyapunov exponent scales like $ w^2 $, when $ w $ approaches zero, and this makes the scaling behavior of \eqref{intro:general stationary current as w-integral} non-trivial as well as highly dependent on the properties of the bath.
Indeed, only those modes for which the localization length $ 1/\gamma(w) $ is of equal or higher order than the length of the chain, $ n $, make a contribution to \eqref{intro:general stationary current as w-integral} that is not exponentially small. Thus the heat conductance of the chain depends crucially on how the bath vectors $ v_{\mu,1}(w), v_{\mu,n}(w) $ weight the critical frequency range $ w^2n \lesssim 1 $.
In other words, explaining the scaling of the heat current in disordered harmonic chains reduces to understanding the limiting behavior of the matrix product $ Q_n(w) $ when $ w \leq n^{-1/2+\epsilon} $ for some $ \epsilon > 0 $.
The evolution of $ n \mapsto Q_n(w) $ reaches stationarity only when $w^2n \sim 1 $ while the components of $ Q_n(w) $ oscillate in the scale $ wn \sim 1 $ with a typical amplitude of $ w^{-1}\nE^{\gamma_0w^2n} $ as observed numerically in \cite{Dhar_Spect_Dep-01}.
Thus the challenge when working in this small-frequency regime is that the analysis falls back neither on classical asymptotic estimates for large $ n $, nor on estimates of the Lyapunov exponent for small $ w $.
Of course, the difficulty of this analysis depends also on the exact form of the vectors $ v_{\mu,k} $ in \eqref{intro:general stationary current as w-integral}, i.e., on the choice of the heat baths. Besides some rather recent developments, most of the studies so far have concentrated on two particular models.
In the first model, introduced by Rubin and Greer \cite{Rubin-Greer-71}, the heat baths themselves are semi-infinite ordered harmonic chains distributed according to Gibbs equilibrium measures of temperatures $ T_1 $ and $ T_n $, respectively.
Rubin and Greer were able to show that $ \tE\msp{1}J^{\text{RG}}_n \gtrsim n^{-1/2} $ with $ \tE[\genarg] $ denoting the expectation over the masses. Later Verheggen \cite{Verheggen-1979} proved that $ \tE\msp{1}J^{\mathrm{RG}}_n \sim n^{-1/2} $.
In the second model the heat baths are modeled by adding stochastic Ornstein-Uhlenbeck terms to the Hamiltonian equations of the chain (see \eqref{dynamics of the chain} below). This model, first analyzed by Casher and Lebowitz \cite{Casher-Lebowitz-71} in the context of heat conduction, was conjectured by Visscher (see ref. 9 in \cite{Casher-Lebowitz-71}) to satisfy $ \tE\msp{1}J^{\mathrm{CL}}_n \sim n^{-3/2} $.
Moreover, already in \cite{Casher-Lebowitz-71} it was argued that $ \tE\msp{1}J^{\mathrm{CL}}_n \gtrsim n^{-3/2} $. However, the line of reasoning there contains an error which invalidates this lower bound (see Section \ref{sec:Proof of Theorem}), and therefore no rigorous upper nor lower bounds have been published for $ \tE\msp{1}J^{\mathrm{CL}}_n $ until now.
\subsection{Casher-Lebowitz model and results}
The Hamiltonian of the isolated one-dimensional disordered chain is
\begin{equation}
\label{Hamiltonian}
H(q_1, \dots q_n, p_1, \dots p_n)
\;=\;
\sum_{k=1}^n \frac{p_k^2}{2m_k}
\,+\, \frac{1}{2}\sum_{k=0}^n (q_{k+1}-q_k)^2
\,,
\end{equation}
where $ q_k \in \R $ is the displacement of the $ k $\textsuperscript{th}
mass $ m_k $ from its equilibrium position and $ p_k $ is the associated momentum.
We consider fixed boundaries, i.e., $q_0=q_{n+1}=0$.
The usual Hamilton's equations are modified at the endpoints in order to include an interaction with heat baths.
In the Casher-Lebowitz model, this interaction consists of adding white-noise and viscous-friction terms to the Hamiltonian equations of $ p_1 $ and $ p_n $: suppose $ \lambda > 0 $ is the coefficient of viscosity, let $T_1\ge T_n > 0$ be the respective temperatures of the reservoirs, and let $ W_1, W_n $ be two independent Brownian motions.
The equations of motion for the Casher-Lebowitz chain then take the form of the stochastic differential equation
\bels{dynamics of the chain}{
\dif q_k
\;&=\;
\frac{\partial H}{\partial p_k} \dif t
\,,
\\
\dif p_k \;&=- \frac{\partial H}{\partial q_k} \dif t \,+\, (\delta_{k,1}+\delta_{k,n})(-\lambda p_k \dif t + \sqrt{2\lambda T_k m_k}\,\dif W_k)
\,,
}
with $1\le k\le n $.
If $ \sett{e_1, e_2} $ is the canonical basis of $ \C^2 $, then, as far as the scaling behavior goes, the choice \eqref{dynamics of the chain} of heat baths corresponds (see \cite{Casher-Lebowitz-71}, and \eqref{v_CLpm sim v_pm} below) to setting $ v_{\mu,1}(w) = \abs{w}^{-1/2} e_1 + \cI \abs{w}^{1/2}e_2 $ and $ v_{\mu,n}(w) = \abs{w}^{-1/2} e_1 - \cI \abs{w}^{1/2}e_2 $ in \eqref{intro:general stationary current as w-integral}.
The resulting current, denoted by $ J^{\mathrm{CL}}_n(m_1,\dots,m_n) $, is then by definition the average rate at which energy is carried from the left to the right heat bath over the stationary measure of \eqref{dynamics of the chain} for fixed masses $ m_k $.
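A direct way to make \eqref{dynamics of the chain} concrete is to integrate it numerically and estimate the current as the mean power injected by the left bath, $\lambda\,(T_1 - \langle p_1^2\rangle/m_1)$. The Euler--Maruyama sketch below is only an illustration of the model, not part of the proof; the mass distribution, parameters, and chain length are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

n, lam, T1, Tn = 6, 1.0, 2.0, 1.0
m = rng.uniform(0.5, 1.5, n)          # i.i.d. masses with a compactly supported law
dt, steps, burn = 1e-3, 200_000, 50_000

q, p = np.zeros(n), np.zeros(n)
acc, cnt = 0.0, 0
for s in range(steps):
    # Harmonic force -dH/dq_k = q_{k+1} - 2 q_k + q_{k-1}, with q_0 = q_{n+1} = 0
    f = np.roll(q, -1) + np.roll(q, 1) - 2.0*q
    f[0], f[-1] = q[1] - 2.0*q[0], q[-2] - 2.0*q[-1]
    dp = f*dt
    # Ornstein-Uhlenbeck heat baths acting on the two end oscillators
    dp[0]  += -lam*p[0]*dt  + np.sqrt(2.0*lam*T1*m[0]*dt)*rng.standard_normal()
    dp[-1] += -lam*p[-1]*dt + np.sqrt(2.0*lam*Tn*m[-1]*dt)*rng.standard_normal()
    p += dp
    q += (p/m)*dt
    if s >= burn:                     # accumulate <p_1^2>/m_1 after relaxation
        acc += p[0]**2/m[0]
        cnt += 1

J = lam*(T1 - acc/cnt)                # mean power injected by the left bath
print(f"estimated current J = {J:.3f}")
```

For $T_1 > T_n$ the estimated current is positive up to statistical noise, and the kinetic temperature $\langle p_1^2\rangle/m_1$ of the first oscillator settles between the two bath temperatures.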
\begin{comment}
For fixed masses, the steady-state current of energy through the chain is given by the expression (see (2.18) in \cite{Casher-Lebowitz-71})
\bels{current}{
J_n (m_1,\dots ,m_n) \;& =\; \lim_{t\to \infty} \lambda\, \left\{ T_1 \,-\, \frac{\la p_1^2(t)\ra}{m_1} \right\}
\; = \; \lambda\, \left\{ T_1 \,-\, \frac{\la p_1^2\ra_{\infty }\msp{-6}}{m_1} \right\}
\,,
}
where $\langle \genarg \rangle$ denotes an average over the paths of the Brownian motions $W_1$ and $W_n$, while $ \langle \genarg \rangle_{\infty } $ denotes the average over the stationary measure of the stochastic process \eqref{dynamics of the chain}.
\end{comment}
Now, suppose that the masses are random variables $ M_k $. Our main result is the following sharp scaling relation for the mass-averaged stationary current.
\Theorem{the scaling of the average current}{
Assume that the masses $ (M_k:k\in\N) $ are independent and identically distributed.
Suppose that the common probability distribution of the masses $ M_k $ admits a density,
compactly supported on $\rbrack 0,\infty\lbrack$,
continuously differentiable inside its support,
with a uniformly bounded derivative.
Denote by $ \tE[\genarg] $ the expectation over the masses. Then there exist $K,K'>0$ such that the heat current $ J^\mathrm{CL}_n $ satisfies the relation
\bels{the result}{
K\, \frac{T_1-T_n}{n^{3/2}}
\;\leq\;
\tE\bigl[
J^{\mathrm{CL}}_n\msp{-1}(M_1,\dots M_n)
\bigr]
\;\leq\;
K'\, \frac{T_1-T_n}{n^{3/2}}
\,.
}
}
The proof is based on a new representation of the matrix $ Q_n(w) $ in terms of a discrete time Markov chain on a circle. From this representation we obtain good control of the joint behavior of the matrix elements of $ Q_n(w) $ in the most important regime $ w \leq n^{-1/2+\epsilon} $, where $ \epsilon > 0 $ is small.
Moreover, together with O'Connor's decay estimates \cite{O'Connor-75} for high frequencies, we have good control of the exponential decay of $ \norm{Q_n(w)} $ whenever $ w \ge n^{-1/2+\epsilon} $.
It therefore seems possible to generalize Theorem \ref{thr:the scaling of the average current} to a rather large class of heat baths by extending our analysis.
Indeed, in Subsection \ref{ssec:Other heat baths} we sketch how one can derive the scaling behavior of the stationary heat current for Dhar's modified version of the Casher-Lebowitz model as well as to prove the analogue of Theorem \ref{thr:the scaling of the average current} for the Rubin-Greer model.
The organization of the paper is as follows. In Section \ref{sec:conventions and outline}, we present the practical expression for the current $ J^{\mathrm{CL}}_n $, after first introducing some conventions and notation to be used in the rest of the paper. At the end of Section \ref{sec:conventions and outline}, our strategy for obtaining Theorem \ref{thr:the scaling of the average current} is outlined.
Sections \ref{sec:Representation of matrix elements} to \ref{sec:Potential theory} contain the three main technical results needed for the proof.
The actual proof of Theorem \ref{thr:the scaling of the average current} is then presented in Section \ref{sec:Proof of Theorem}.
\section{Conventions and outline of paper}
\label{sec:conventions and outline}
For the rest of this manuscript we are going to \emph{assume that the conditions of Theorem \ref{thr:the scaling of the average current} hold.} In particular, this means that the zero mean random variables
\bels{def of B_k}{
B_k \;:=\; \frac{M_k-\tE\msp{1}M_k}{\tE\msp{1}M_k}
\,,
}
are i.i.d., have a (Lebesgue) probability density $ \tau $ that satisfies $ \spt(\tau) \subset [b_-,b_+] $, and $ \tau \in \Cont^1([b_-,b_+]) $, for some constants $ -1 < b_- < b_+ < \infty $. Here $ \Cont^k([a,b]) $ denotes the set of continuous functions $ f: [a,b] \to \R $ whose derivatives $ \frac{\dif^j f}{\dif x^j} $, $ j \leq k $, exist and are bounded and continuous on $ ]a,b[ $.
The transfer matrices appearing in \eqref{intro:general stationary current as w-integral} are related to $ B_k $:
\bels{def of matrix A_k}{
A_k \;\equiv\;
A_k(w) \;=\; \mat{2 - \pi^2 w^2 (1+B_k) & -1 \\ 1 & 0}
\,,
}
where the frequency variable $ w $ is related to the frequency variable $ \omega $ in \cite{Casher-Lebowitz-71} by $ \omega = \pi^{-1} (\tE\msp{1}M_k)^{1/2} w $.
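As an illustrative aside (not part of the formal development), two basic algebraic facts about the transfer matrices, namely $ \det A_k = 1 $ and, for small $ w $, that the eigenvalues of the averaged matrix lie on the unit circle, can be checked numerically. The following Python sketch assumes numpy:

```python
import numpy as np

def A(w, b):
    """Transfer matrix A_k(w) for a mass fluctuation B_k = b."""
    return np.array([[2.0 - np.pi**2 * w**2 * (1.0 + b), -1.0],
                     [1.0, 0.0]])

rng = np.random.default_rng(0)
w = 0.05
for b in rng.uniform(-0.5, 0.5, size=10):
    # det A_k = 1, so the product Q_n = A_n ... A_1 also has determinant 1.
    assert abs(np.linalg.det(A(w, b)) - 1.0) < 1e-12

# For small w the averaged matrix (b = 0) has |trace| < 2, hence its
# eigenvalues form a complex conjugate pair of modulus one.
eig = np.linalg.eigvals(A(w, 0.0))
assert np.allclose(np.abs(eig), 1.0)
```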
As already pointed out in the introduction, O'Connor has shown (see Theorem 6 and its proof in \cite{O'Connor-75}) that for any reasonable heat baths the frequencies above any fixed $ w_0 > 0 $ have exponentially small contribution to the total current \eqref{intro:general stationary current as w-integral} as $ n $ grows.
\begin{comment}
that for any $ w_0 > 0 $ there exist numbers $ \alpha, \beta(w_0), K(w_0) > 0 $ such that for any unit vectors $ u, v \in \C^2 $ one has:
\bels{O'Connor: high frequencies do not matter}{
\tP\Bigl(\, \absb{u^\mathrm{T} A_n(w)\cdots A_1(w)v} \leq \nE^{-\alpha \sqrt{n}}\,\Bigr)
\;\leq\;
K(w_0)\msp{1} \nE^{-\beta(w_0) \sqrt{n}}
\,.
}
\end{comment}
Therefore, one may \emph{consider an arbitrarily small but fixed interval $ ]0,w_0] $ of frequencies $ w $ in order to prove Theorem \ref{thr:the scaling of the average current}.}
We write $ \N = \sett{1,2,3,\dots} $, $ \N_0 = \sett{0,1,2,\dots} $ and $ \R_+ = \;]0,\infty[\, $ and $ \bar{\R} = \R \cup \sett{\infty} $ with $ \infty = \pm \infty $.
Additionally, the following conventions are used frequently.
\vspace{0.2cm}
\noindent{\bf Probability:}
Since all the randomness of the stationary state current $ J^\text{CL}_n $ originates from the random masses we define the probability space $ (\Omega,\mathcal{F},\tP) $ as the semi-infinite countable product of spaces $ ([b_-,b_+],\mathcal{B}[b_-,b_+], \tau(b) \dif b) $. Here $ \mathcal{B}(S) $ denotes the Borel $ \sigma$-algebra of the topological space $ S $.
The filtration generated by the sequence $ B \equiv (B_k:k\in\N) $ is denoted by $ \mathbb{F} = (\mathcal{F}_k:k\in\N) $, $ \mathcal{F}_k = \sigma(B_j:1 \leq j\leq k) \subset \mathcal{F} $.
As a convention, the names of new random variables on $ (\Omega,\mathcal{F},\tP) $ will generally be written in capital letters. A discrete time stochastic process $ (Z_n:n\in \K) $ is denoted by $ Z \equiv (Z_n) $ when the index set $ \K $ is known or not relevant. Finally, we write $ \Delta Z_n = Z_n-Z_{n-1} $.
\vspace{0.2cm}
\noindent{\bf Constants and scaling:}
Because we are interested only in scaling relations, many expressions can be made more manageable by using the following conventions.
First, we use letters $ C, C', C_1, C_2,\dots $ to denote strictly positive finite constants, whose value may vary from place to place.
Except otherwise stated, these values depend only on $ \tau, \lambda, T_1 - T_n $ and $ w_0 $, but never on $ w $ or $ n $.
Secondly, for functions $ f, g, h $ we write $ f \lesssim g $, or equivalently $ g \gtrsim f $, provided $ f \leq C\, g $ pointwise, i.e., $ f(x,y) \leq C g(y,z) $ for all possible arguments $ x,y,z $.
If $ f \lesssim g $ and $ f \gtrsim g $ then we write $ f \sim g $.
Moreover, for functions $ f,g,h $ the expression $ f = g + \mathcal{O}(h) $ means $ \abs{f-g} \leq C \abs{h} $.
\vspace{0.2cm}
\noindent{\bf Periodicity:}
In the following we are going to deal with functions that are defined and/or take values on the unit circle $ \T = \R/\Z $. The following conventions are practical on such occasions.
When $ x \in \R $ write $ \abs{x}_\T = \min(x-\floor{x},\ceil{x}-x) $, where $ \floor{x} $ ($ \ceil{x} $) denotes the largest (smallest) integer not exceeding (not less than) $ x $.
We identify 1-periodic functions on $\R$ with functions on $\T$. Similarly, a function $g:\R \to \R $ of the form $ g(x) = x + f(x) $, where $ f $ is 1-periodic, is identified with a function from $ \T $ to itself.
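For illustration, $ \abs{x}_\T $ is simply the distance from $ x $ to the nearest integer; a minimal Python helper (hypothetical, for the reader's own experiments):

```python
def dist_T(x):
    """Distance |x|_T from x to the nearest integer."""
    frac = x % 1.0          # fractional part in [0, 1), also for negative x
    return min(frac, 1.0 - frac)

assert dist_T(0.75) == 0.25
assert abs(dist_T(-0.1) - 0.1) < 1e-12
assert dist_T(3.0) == 0.0
```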
\subsection{Heat current in terms of matrix elements}
Let $ v = [v_0\msp{8}v_{-1}]^\mathrm{T} \in \C^2 $, and denote by $ D(v) \equiv (D_n(v):n \in \N) $ the discrete time stochastic process that solves for $ n \in \N $:
\bels{def of process D_n(v)}{
D_n(v) \;&=\; (2-\pi^2w^2(1+B_n)) \msp{1} D_{n-1}(v) \,-\, D_{n-2}(v)
\\
D_0(v) \;&=\; v_0\,,
\\
D_{-1}\msp{-1}(v) \;&=\; v_{-1}
\,.
}
By definition one then has for $ n \in \N $
\bels{solution of Q_n in terms of D_n(v)}{
Q_n \;=\; A_n A_{n-1} \cdots A_1 \;=\; \mat{D_n(e_1)\; & D_n(e_2)\; \\ D_{n-1}(e_1) & D_{n-1}(e_2)}
\,,
}
where $ A_k $ is the transfer matrix \eqref{def of matrix A_k} and $ e_1 = [1\msp{8}0]^\mathrm{T} $ and $ e_2 = [0\msp{8}1]^\mathrm{T} $.
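The identification \eqref{solution of Q_n in terms of D_n(v)} of the columns of $ Q_n $ with solutions of the scalar recursion can be verified numerically; a self-contained Python sketch (assuming numpy, with arbitrary illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
w, n = 0.1, 20
B = rng.uniform(-0.4, 0.4, size=n)          # B_1, ..., B_n

def transfer(b):
    return np.array([[2.0 - np.pi**2 * w**2 * (1.0 + b), -1.0],
                     [1.0, 0.0]])

def D_process(v0, vm1):
    """Iterate D_k = (2 - pi^2 w^2 (1 + B_k)) D_{k-1} - D_{k-2}."""
    prev2, prev1 = vm1, v0                   # D_{-1}, D_0
    for b in B:
        prev2, prev1 = prev1, (2.0 - np.pi**2 * w**2 * (1.0 + b)) * prev1 - prev2
    return prev1, prev2                      # D_n, D_{n-1}

Q = np.eye(2)
for b in B:
    Q = transfer(b) @ Q                      # Q_n = A_n ... A_1

# Columns of Q_n are (D_n(e_1), D_{n-1}(e_1)) and (D_n(e_2), D_{n-1}(e_2)).
assert np.allclose(Q[:, 0], D_process(1.0, 0.0))
assert np.allclose(Q[:, 1], D_process(0.0, 1.0))
```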
As a remark it is worth noting that in the derivation of the stationary heat current one actually starts with \eqref{def of process D_n(v)} where $ D_n(e_k) $ are certain real valued (sub-)determinants of a semi-infinite matrix and then expresses the final formula conveniently in terms of the product \eqref{solution of Q_n in terms of D_n(v)}.
Now, in \cite{Casher-Lebowitz-71} it was proven that the Casher-Lebowitz model corresponds to setting the bath vectors $ v_{\mu,1} $ and $ v_{\mu,n} $ in the general expression \eqref{intro:general stationary current as w-integral} of $ J^{(\mu)}_n $ equal to
\bels{v_CLpm sim v_pm}{
v_{\text{CL},1}(w) \;=\; \mat{ (\alpha M_1\abs{w})^{-1/2}\, \\ +\cI (\alpha M_1 \abs{w})^{1/2}}
\quad\text{and}\quad
v_{\text{CL},n}(w) \;=\; \mat{ (\alpha M_n\abs{w})^{-1/2}\, \\ -\cI (\alpha M_n \abs{w})^{1/2}}
\,.
}
Here the constant $ \alpha > 0 $ depends on the units of the frequency variable $ w $, etc.
Since the masses have a compact support $ [m_-,m_+] \subset \,]0,\infty[ $, and the bath vectors are symmetric in $ w $, one has
\bels{CL-current and def of J_n}{
J^{\text{CL}}_n
\,\sim\;
(T_1-T_n) \int_\R \abs{v_{\text{CL},n}^\mathrm{T}(w) Q_n(w) v_{\text{CL},1}(w)}^{-2} \dif w
\;\sim\;
\int_0^\infty j_n(w) \dif w \;=:\;J_n
\,,
}
where $ j_n(w) := \abs{v_n^\mathrm{T}(w)Q_n(w)v_1(w)}^{-2} $, with $ v_1(w) = w^{-1/2}e_1 + \cI w^{1/2} e_2 $ and $ v_n(w) = w^{-1/2}e_1 - \cI w^{1/2}e_2$.
By using $ D_n(e_1) D_{n-1}(e_2) - D_{n-1}(e_1) D_n(e_2) = \det(A_n\cdots A_1) = 1^n = 1 $ to get rid of the mixed terms of $ D_n(e_k) \equiv D_n(e_k;w) $ one obtains:
\bels{def of jn}{
j_n(w)
\;&=\; \bigl\{2 \,+\, w^{-2}D_{n}(e_1)^2 + D_{n-1}(e_1)^2 + D_{n}(e_2)^2 + w^2D_{n-1}(e_2)^2\,\bigr\}^{-1}
\,.
}
This is the form we are going to use for the proof of Theorem \ref{thr:the scaling of the average current}.
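The elimination of the mixed terms via $ \det Q_n = 1 $ can likewise be checked numerically. The Python sketch below (assuming numpy, with arbitrary illustrative parameters) compares $ \abs{v_n^\mathrm{T}(w) Q_n(w) v_1(w)}^{-2} $ with the resulting sum-of-squares form:

```python
import numpy as np

rng = np.random.default_rng(2)
w, n = 0.2, 15
Q = np.eye(2)
for b in rng.uniform(-0.4, 0.4, size=n):
    Q = np.array([[2.0 - np.pi**2 * w**2 * (1.0 + b), -1.0],
                  [1.0, 0.0]]) @ Q

v1 = np.array([w**-0.5, 1j * w**0.5])        # v_1(w)
vn = np.array([w**-0.5, -1j * w**0.5])       # v_n(w)
jn = np.abs(vn @ Q @ v1) ** -2.0

# det Q_n = 1 removes the mixed terms, leaving squares plus the constant 2.
squares = (2.0 + w**-2 * Q[0, 0]**2 + Q[1, 0]**2
           + Q[0, 1]**2 + w**2 * Q[1, 1]**2)
assert np.isclose(jn, 1.0 / squares)
```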
\subsection{Outline of the proof}
\label{ssec:Outline}
It follows from \eqref{CL-current and def of J_n} and \eqref{def of jn} that the scaling bounds of $ \tE(J^{\text{CL}}_n) \sim \tE(J_n) $ rely on a good understanding of the processes $ D(v) $ defined in \eqref{def of process D_n(v)}.
Thus, the first natural step towards the proof of the theorem is the derivation of an easier representation for $ D_n(v) $. This is the purpose of Section \ref{sec:Representation of matrix elements} where one constructs (Proposition \ref{prp:Fundamental decomposition of D_n(v)} and Corollary \ref{crl:Fundamental decompositions of D_1n and D_2n}) the representations:
\bels{outline:decomposition D_1n and D_2n}{
D_n(e_1) \;\sim\; w^{-1}\Gamma_n^\vartheta \cdot \sin \pi X^\vartheta_n
\,,\qquad\text{and}\qquad
D_n(e_2) \;\sim\; w^{-1}\Gamma_n^0 \cdot \sin \pi X^0_n
\,.
}
Here $ \vartheta = w + \mathcal{O}(w^3) $ is a constant, and the phases $ (X^x_n: n \in \N_0) $ form a Markov process on $ \T $
\bels{outline:recursion for X^x}{
X^x_n \;&=\; X^x_{n-1} +\, w \,+\, w \msp{1}\phi(X^x_{n-1})B_n \,+\, \mathcal{O}(w^2)
\qquad\text{with}\qquad X^x_0 = x
\,,
}
and the amplitude $ \Gamma^x_n \in \,]0,\infty[ $ is an exponential functional of $ (x,B_k:1\leq k\leq n)$:
\bels{outline: Gamma^x_n}{
\Gamma_n^x \;&=\; \nE^{w \sum_{k=1}^n s(X^x_{k-1})B_k \;+\; w^2 \sum_{k=1}^n r(X^x_{k-1})B_k^2 \;+\; \mathcal{O}(w^3n)}
\,.
}
The smooth functions $ \phi,s,r : \T \to \R $ are explicitly known. The process $ X \equiv X^x $ is specified precisely in Definition \ref{def:Process X^x} and Lemma \ref{lmm:functions fb and Phi}, and its most important qualitative properties are listed in Corollary \ref{crl:the three qualitative properties of X}.
The main advantage of the representation \eqref{outline:decomposition D_1n and D_2n} is that, unlike the recursion relations \eqref{def of process D_n(v)} of $ D(v)$, it allows us to treat both the scaled noise $ wB_n $ and the initial values $ e_2 $ of $ D_n(e_2) $ as small perturbations around $ 0 $ and $ e_1 $, respectively.
Based on the representation \eqref{outline:decomposition D_1n and D_2n}, let us now carry out heuristic computations which form the outline for the actual proof of $ \tE(J_n) \sim n^{-3/2} $.
Along these calculations we will point out the properties of $ X^x_n $ and $ \Gamma^x_n $ which must be proven to make these calculations rigorous.
We start with the upper bound. By Theorem 6 of \cite{O'Connor-75} we may restrict the integration domain of \eqref{CL-current and def of J_n} to $ [0,w_0]$. Dropping positive terms from the denominator in \eqref{def of jn} then yields
\begin{subequations}
\begin{align}
\tE\msp{1}J^\mathrm{CL}_n
\;\sim\;
\tE\msp{1}J_n
\;&=\;
\tE \int_{0}^\infty j_n(w) \dif w
\;\leq\;
\int_{0}^{w_0} \tE\left\{ \frac{1}{1+w^{-2}D_n(e_1;w)^2}\right\}\msp{1} \dif w
\,
\label{outline:upper bound first lines}
\\
&=\;
\int_{0}^{w_0} \tE \left\{\int_\T\frac{1}{1+(w^{-2}\Gamma_n \sin x)^2}\,\tP(X_n \in \dif x|\Gamma_n)\right\} \dif w
\label{outline:upper bound need for pot theory}
\,.
\end{align}
\end{subequations}
Now comes the first crucial step. By standard martingale central limit theorems \cite{Hall1980} one expects that $ X_n $, if properly centered, scaled, and considered as a process on $ \R $, should converge to a Gaussian with unit variance.
Unfortunately, such weak convergence results do not suffice since we need to deal with very unlikely events.
Indeed, from \eqref{outline:upper bound need for pot theory} one sees that the crucial contribution of the terms inside the curly brackets comes when $ \abs{X_n} \lesssim w^2 / \Gamma_n $. The probability of this happening is typically very small, e.g., of order $ n^{-1} $ when $ w^2n \sim 1 $.
Moreover, we would also like to be able to consider $ X_n $ and $ \Gamma_n $ effectively independent in \eqref{outline:upper bound need for pot theory}.
In other words, we would like to have:
\begin{itemize}
\item[(a)]
Pointwise bound:
$
\chi_{\Ball(wn,C w\sqrt{n})}(x) \cdot
\frac{\dif x}{\min(1,w\sqrt{n})}
\;\lesssim\;
\tP(X_n \in \dif x)
\;\lesssim\;
\frac{\dif x}{\min(1,w\sqrt{n})} $, $ x \in \T $;
\item[(b)]
Independence: $ \tP(X_n \in \dif x|\Gamma_n) \;\sim\; \tP(X_n \in \dif x)\; $, $ x \in \T $.
\end{itemize}
The purpose of Section \ref{sec:Potential theory} is to prove Proposition \ref{prp:Potential theory}, which together with the bounds in Subsection \ref{ssec:Proof of the upper bound} implies that, as far as \eqref{outline:upper bound need for pot theory} is concerned, one may proceed as if both (a) and (b) held literally.
So by using (a) and (b) and then parametrizing $ \T $ with $ [-1/2,1/2] $ in \eqref{outline:upper bound need for pot theory} one gets
\begin{align}
\tE(J_n)\;
&\lesssim\;
\int_0^{w_0} \tE \left\{ \int_{-1/2}^{1/2} \frac{1}{1+(w^{-2}\Gamma_n x)^2}\cdot\frac{\dif x}{\min(1,w\sqrt{n})} \right\} \dif w
\notag
\\
&\lesssim\;
\int_0^{w_0}
\frac{1}{\min(1,w\sqrt{n})} \tE\left\{ \frac{\arctan (w^{-2}\Gamma_n)}{w^{-2}\Gamma_n} \right\} \dif w
\notag
\\
&\lesssim\;
\int_0^{n^{-1/2}}\msp{-7}\frac{w}{\sqrt{n}}\,\tE\bigl\{1/\Gamma_n(w)\bigr\}\,\dif w \,+\, \int_{n^{-1/2}}^{w_0} w^2\,\tE\bigl\{ 1/\Gamma_n(w)\bigr\}\, \dif w
\label{outline:Expectation of 1/Gamma appears}
\,.
\end{align}
Here we have used the upper bound in (a), approximated $ \sin z \sim z $, and then performed the change of variables $ x \mapsto w^{-2}\Gamma_n x $. To get the last line we have bounded $ \arctan r \lesssim 1 $, for $ r \in \R_+ $.
In Section \ref{sec:Expectation of 1/Gamma_n} we bound the only unknown term in \eqref{outline:Expectation of 1/Gamma appears} by showing that there exists a constant $ \alpha > 0 $ such that
\begin{equation}
\label{outline:expectation of 1/Gamma}
\tE\{1/\Gamma_n(w)\} \;\lesssim\; \nE^{-\alpha w^2n}\,,
\quad\text{when}\quad 0 < w \leq w_0
\,.
\end{equation}
The sum over $ r $-terms in \eqref{outline: Gamma^x_n} is then shown to produce a factor $ \nE^{-\gamma(w)n} $, where $ \gamma(w) \sim w^2 $ is the Lyapunov exponent associated with the transfer matrices $ A_k $ in \eqref{def of matrix A_k}, with explicit value given in \eqref{Lyapunov exponent explicitly solved}.
The challenge in Section \ref{sec:Expectation of 1/Gamma_n} is to bound the large deviations of the first sum in \eqref{outline: Gamma^x_n} so much that \eqref{outline:expectation of 1/Gamma} still holds for some $ \alpha > 0$.
Applying the bound \eqref{outline:expectation of 1/Gamma} in \eqref{outline:Expectation of 1/Gamma appears} yields the upper bound for the total current:
\bea{
\tE(J_n)\;
&\lesssim\;
\int_0^{n^{-1/2}}\msp{-15}\frac{w}{\sqrt{n}}\cdot 1\, \dif w
\,+\,
\int_{n^{-1/2}}^{w_0} w^2 \nE^{-\alpha w^2 n} \dif w \;\sim\; n^{-3/2}
\,.
}
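For the reader's convenience, here is the elementary computation behind the last step, with $ \alpha $ the constant from \eqref{outline:expectation of 1/Gamma}: substituting $ u = w\sqrt{n} $ gives
\[
\int_{n^{-1/2}}^{w_0} w^2\, \nE^{-\alpha w^2 n} \dif w
\;=\; n^{-3/2} \int_{1}^{w_0 \sqrt{n}} u^2\, \nE^{-\alpha u^2} \dif u
\;\leq\; n^{-3/2} \int_{0}^{\infty} u^2\, \nE^{-\alpha u^2} \dif u
\;\lesssim\; n^{-3/2}
\,,
\]
while the first integral equals $ \frac{1}{\sqrt{n}}\cdot \frac{(n^{-1/2})^2}{2} = \frac{1}{2}\, n^{-3/2} $.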
To prove the lower bound, it suffices to show that for $ w \in I := [(2n)^{-1/2},n^{-1/2}] $ one has $ \tP\bigl(j_n(w) \ge C w^2 \bigr) \gtrsim 1 $. Indeed, if this bound is verified then
\[
\tE(J_n) \gtrsim \int_I \tE \msp{1} j_n(w)\msp{1} \dif w
\;\ge\;
n^{-1/2}
\cdot
C\msp{1}(n^{-1/2})^2
\cdot
\tP\bigl(\msp{1}j_n(w) \ge C w^2 \bigr)
\;\sim\;
n^{-3/2}
\,.
\]
Just like for the upper bound, the main contribution to $ \tE\msp{1}j_n(w) $ comes from unlikely events, e.g., when $ \abs{X_n} \lesssim w^2 $.
For this reason one needs again the pointwise bounds (a) and (b).
However, unlike in \eqref{outline:upper bound first lines} the lower bound depends in a non-trivial way also on $ D_n(e_2) $ since by \eqref{def of jn} one has
\bels{outline:importance of D_2n for the lower bound}{
\tP\bigl(\msp{1}j_n(w) \ge C_1w^2 \bigr)
\;\sim\;
\tP\bigl(\abs{D_n(e_1;w)} \leq w^2,\, \abs{D_{n}(e_2;w)} \leq w\bigr)
\,.
}
Thus, to prove the lower bound one has to be able to analyze the joint behavior of the matrix elements $(D_n(e_1),D_n(e_2)) $, or equivalently, $ (X^\vartheta_n,X^0_n,\Gamma^\vartheta_n, \Gamma^0_n) $.
These dependencies are first addressed in Subsection \ref{ssec:Joint behavior} by deriving martingale exponent representations for both $ X^\vartheta_n-X^0_n $ and $ \Gamma^0_n/\Gamma^\vartheta_n $.
In Subsection \ref{ssec:Proof of the lower bound} these representations are used to extract (Lemma \ref{lmm:bound for K_n and L_n probabilities}) the typical joint behavior of the processes $ D(e_k) $, $ k=1,2$.
Based on this typical behavior one is then able to construct the final bound for the right side of \eqref{outline:importance of D_2n for the lower bound}.
\section{Representation of matrix elements}
\label{sec:Representation of matrix elements}
The purpose of this section is to derive the representation \eqref{outline:decomposition D_1n and D_2n} of processes $ D(v) $, $ v \in \R^2 $, (Proposition \ref{prp:Fundamental decomposition of D_n(v)} and Corollary \ref{crl:Fundamental decompositions of D_1n and D_2n}) in terms of the Markov process $ (X_n) $ on the unit circle $ \T $.
The first step of this derivation is to use the M\"obius transformation associated to the averaged transfer matrix $ \tE(A_n) $ to construct a $ w $-dependent change of coordinates $ g $ which maps the evolution of the quotients $ \xi_n = D_n/D_{n-1} $ bijectively from $ \cR $ to $ \T $.
It turns out that in these new coordinates $ x = g^{-1}(\xi ) $ the noise, $ wB_n $, can be considered as a small perturbation around the zero noise evolution, which in turn reduces to the simple shift $ x \mapsto x + \vartheta $. This is unlike the original coordinates $ \xi \in \cR $, where the effect of the noise is typically of order $ \mathcal{O}(1) $ regardless of how small $ w $ is.
The Markov process $ (X_n) $ is now defined by $ X_n := g^{-1}(D_n/D_{n-1}) $ while the representation for the matrix elements is obtained by first writing $ D_n = g(X_n) \cdots g(X_1)\cdot D_0 $ and then using the explicit knowledge of $ g $ for expanding the resulting expression w.r.t. the small disorder $ (wB_n:n\in\N)$.
The representation \eqref{outline:decomposition D_1n and D_2n} is new. Besides the benefits already mentioned, it also has the nice property of reducing in the zero noise case to the explicit expression $ D_{1,n} \equiv D_n = \frac{\sin \pi \vartheta (n+1)}{\sin \pi \vartheta} $ which was already discovered by Casher and Lebowitz (consider the $1$-periodic chain in equation (3.5) in \cite{Casher-Lebowitz-71}).
The change-of-coordinates $ g $, on the other hand, is not really new as it was already discovered in a slightly different form by Matsuda and Ishii \cite{Matsuda-Ishii-1970}.
However, since our method of deriving $ g $ is different than in \cite{Matsuda-Ishii-1970} we have decided to include it here for the convenience of the reader.
In a more general context, our representation \eqref{outline:decomposition D_1n and D_2n} is similar to some standard decomposition of products on Markov chains.
Indeed, since $ D_n = \xi_n \cdots \xi_1 D_0 $ with $ \xi_k = D_k/D_{k-1}$, and since the transfer operator of the chain $(\xi_n) $ admits a spectral gap \cite{O'Connor-75}, a general argument \cite{Guivarch-School2002} allows us to write the decomposition $\abs{D_n} = \nE^{\gamma n + M_n}u(\xi_n)$, where $\gamma $ is a Lyapunov exponent, $ (M_n) $ is a martingale, and $ u $ is a function on $ \R $.
Although one is not in general able to determine $ M_n $ and $ u $, it turns out that, in the special case of random matrices, Raugi \cite{Raugi-1997} has been able to compute them explicitly, up to the knowledge of the invariant measure of the chain $(\xi_n)$.
Still, the derivation of our formula \eqref{outline:decomposition D_1n and D_2n} is much more straightforward than the use of Raugi's formula.
\subsection{Expansion around zero noise evolution}
Let us associate a M\"obius transformation $ \mathcal{M}_A : \C \to \C $ to a $ 2 \times 2 $ to a square matrix $ A $ by setting
\[
\mathcal{M}_A(z) \;:=\; \frac{az+b}{cz+d} \qquad \text{for}\quad A \;=\; \mat{a & b \\ c & d}
\,.
\]
The association $ A \mapsto \mathcal{M}_A $ preserves the matrix multiplication
\bels{Mobius maps preserve matrix multiplication}{
\mathcal{M}_A \circ \mathcal{M}_B \,=\, \mathcal{M}_{AB}\,,
\qquad (A,B \in \C^{2 \times 2}\,)
}
so that $ (\mathcal{M}_A)^{-1} = \mathcal{M}_{A^{-1}} $ whenever either side of the equality exists.
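The multiplicativity property \eqref{Mobius maps preserve matrix multiplication} can be illustrated with a short Python check (the helper `mobius` and the concrete matrices are ours, chosen only for illustration):

```python
import numpy as np

def mobius(A, z):
    """M_A(z) = (a z + b) / (c z + d) for A = [[a, b], [c, d]]."""
    (a, b), (c, d) = A
    return (a * z + b) / (c * z + d)

A = np.array([[2.0, -1.0], [1.0, 0.0]])
B = np.array([[1.0, 3.0], [0.5, 2.0]])
for z in [0.3, -1.7, 2.5 + 1.0j]:
    # Composition corresponds to matrix multiplication: M_A(M_B(z)) = M_{AB}(z).
    assert np.isclose(mobius(A, mobius(B, z)), mobius(A @ B, z))
```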
By writing $ D_n \equiv D_n(v) $, $ v = [v_0\msp{6}v_{-1}]^\mathrm{T} \in \C^2 $, and using \eqref{def of process D_n(v)} one sees that the ratios
\bels{def of xi_n}{
\xi_n \;:=\; \frac{D_n}{D_{n-1}}
\,,
}
form a Markov process $ \xi \equiv (\xi_n:n\in \N_0) $ which satisfies a simple recursion relation:
\begin{subequations}
\label{system for xi_n}
\begin{align}
\label{recursion for xi_n}
\xi_n \;&=\; \mathcal{M}_{A_n}(\xi_{n-1})
\qquad (n\in \N)
\\
\label{initial condition for xi_n}
\xi_{0} &=\; \frac{v_0}{v_{-1}}
\,.
\end{align}
\end{subequations}
Here the random matrices $ A_n $ depend on $ B_n $ through the relation \eqref{def of matrix A_k}.
Since $ \mathcal{M}_{A_n}(\pm \infty) = 2 - \pi^2 w^2 (1+B_n)$ we identify $ \pm \infty = \infty $.
By using \eqref{def of xi_n} and \eqref{system for xi_n} we get
\bels{D_n as product of iterated fractions}{
D_{n} \;&=\; \xi_{n} \xi_{n-1} \cdots \xi_{1} D_0
\,,
}
provided no $ \xi_k \in \sett{0,\infty} $.
Recall that $ \cR $ denotes $ \R \cup \sett{\infty} $. In the following we shall consider \eqref{system for xi_n} on $ \cR $ instead of on $ \C \cup \sett{\infty} $.
\Lemma{mean diagonal coordinates}{
There exists a coordinate transformation $ g: \T \to \cR $ such that
\bels{mean evolution is shift}{
(g^{-1} \circ \mathcal{M}_{\tE(A_k)} \circ g)(x) \;=\; x \,+\, \vartheta
\msp{50}(x \in \T)
\,,
}
where $ A_k $ is the random matrix \eqref{def of matrix A_k}, and the constant shift is given by
\bels{def of average shift}{
\vartheta
\;\equiv\;
\vartheta(w)
\;=\; \frac{1}{\pi}\arccos \left[1-\frac{\pi^2w^2}{2}\right] \;=\; w \,+\, \mathcal{O}(w^3)
\,.
}
The function $ g $ and its inverse $ g^{-1} $ are given by
\begin{align}
\label{def of g}
g(x) \;&=\; \bigl(\mathcal{M}_{G} \circ E^{-1}\bigr)(x)
\;=\;
\frac{ \tan \pi x}{\cos \pi \vartheta \msp{1}\tan \pi x \,-\, \sin \pi \vartheta}
\\
\label{def of inverse of g}
g^{-1}(\xi) \;&=\; \bigl(E \circ \mathcal{M}_{G^{-1}}\bigr)(\xi)
\;=\;
\frac{1}{\pi} \arctan\left[ \frac{(\sin \pi\vartheta)\, \xi}{(\cos \pi\vartheta)\, \xi \,- 1 } \right]
\,,
\end{align}
where $ E : \partial D := \sett{z \in \C : \abs{z}=1} \to \T $ is the bijection $ \nE^{\cI \phi} \mapsto \frac{\phi}{2\pi} $, and the columns of $ G $ consist of eigenvectors of $ \tE(A_k) $.
}
\begin{Proof}
By diagonalizing, we get $ \tE(A_l) = G \Lambda G^{-1} $ where
\bels{matrices}{
\Lambda \;=\; \mat{\,\nE^{\cI \pi \vartheta} & 0 \\ 0 & \,\nE^{-\cI \pi \vartheta}}
\,,
\quad
G \;=\; \mat{\,1 & -1 \\ \nE^{-\cI \pi \vartheta} & -\nE^{\cI \pi \vartheta}}
\,,
\quad
G^{-1} =\; \frac{1}{2\cI \sin \pi \vartheta}\mat{\nE^{\cI \pi \vartheta} & -1\, \\ \,\nE^{-\cI \pi \vartheta} & -1}
\,,
}
and $ \vartheta $ is given in \eqref{def of average shift}.
From \eqref{matrices} we see that $ \mathcal{M}_{G^{-1}}(\cR) = \partial D $. Since the matrix $ G $ is invertible, the property \eqref{Mobius maps preserve matrix multiplication} implies that the associated M\"obius transformation is also invertible. In particular, the restrictions $ \mathcal{M}_{G}|_{\partial D} $ and $ \mathcal{M}_{G}^{-1}|_{\cR} = \mathcal{M}_{G^{-1}}|_{\cR} $ are bijections mapping $ \partial D $ into $ \cR $ and $ \cR $ into $ \partial D $, respectively.
Using these observations we identify the coordinate transformation $ g : \T \to \cR $ and its inverse $ g^{-1} : \cR \to \T $ by regrouping as follows:
\bels{main step in identifying evolution as a shift}{
\mathcal{M}_{\tE(A_l)} \;
&=\;
\mathcal{M}_{G} \circ \mathcal{M}_{\Lambda} \circ \mathcal{M}_{G^{-1}}
\\
&=\;
\bigl( \mathcal{M}_{G} \circ E^{-1} \bigr)
\circ
\bigl( E \circ \mathcal{M}_{\Lambda} \circ E^{-1}\bigr)
\circ
\bigl( E \circ \mathcal{M}_{G}^{-1} \bigr)
\\
&=\;
g \circ \lambda \circ g^{-1}
\,,
}
where $ \lambda $ equals the shift function on the right of \eqref{mean evolution is shift}.
In order to derive \eqref{def of g} and \eqref{def of inverse of g} the easiest way is to first solve $ g^{-1} $ using $ E(z/z^\ast) = 2E(z) = \pi^{-1}\arctan\bigl[\Im(z)/\Re(z)\bigr] $:
\[
x \;=\; g^{-1}(\xi) \;\equiv\; E\!\left( \frac{\nE^{\cI \pi \vartheta} \xi - 1}{\nE^{-\cI \pi \vartheta} \xi - 1} \right)
\;=\;
\pi^{-1} \arctan\left[ \frac{ \xi \sin (\pi \vartheta) }{\xi \cos (\pi \vartheta) - 1}\right]
\,.
\]
The formula for $ g $ follows now by simply inverting the above function.
\end{Proof}
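The conjugation identity \eqref{mean evolution is shift} lends itself to a direct numerical check, writing $ g(x) = \sin \pi x/\sin \pi(x - \vartheta) $ as in the proof and using a quadrant-aware arctangent for $ g^{-1} $. A Python sketch (illustrative values of $ w $ and $ x $ are our own):

```python
import math

w = 0.1
theta = math.acos(1.0 - math.pi**2 * w**2 / 2.0) / math.pi

def g(x):
    # g(x) = M_G(exp(2*pi*i*x)) = sin(pi x) / sin(pi (x - theta)).
    return math.sin(math.pi * x) / math.sin(math.pi * (x - theta))

def g_inv(xi):
    # atan2 picks the branch so that the result lies on T = [0, 1).
    num = math.sin(math.pi * theta) * xi
    den = math.cos(math.pi * theta) * xi - 1.0
    return (math.atan2(num, den) / math.pi) % 1.0

def mobius_mean_A(xi):
    # M_{E(A)}(xi) with E(A) = [[2 - pi^2 w^2, -1], [1, 0]].
    return ((2.0 - math.pi**2 * w**2) * xi - 1.0) / xi

for x in [0.2, 0.45, 0.8]:
    shifted = g_inv(mobius_mean_A(g(x)))
    # Signed distance (mod 1) between the result and the pure shift x + theta.
    err = abs((shifted - x - theta + 0.5) % 1.0 - 0.5)
    assert err < 1e-9
```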
Suppose $ \xi \in \cR $ and $ \xi' = \mathcal{M}_{\tE(A_l)}(\xi) $. The important property of the new coordinates $ x $ is that even though the step $ \abs{\xi' - \xi} $ can be arbitrary large\footnote{Jumps $ \abs{\xi'-\xi} $ become arbitrary large as $ \xi $ approaches $ 0 $.} regardless of how small $ w $ is, in the new coordinates every step $ g^{-1}(\xi')-g^{-1}(\xi) = \vartheta $ is of size $ w $. The next lemma says that this property remains true even when $ \mathcal{M}_{\tE(A_l)} $ is replaced by the random evolution $ \mathcal{M}_{A_l} $.
\Lemma{functions fb and Phi}{
Let $ w > 0 $ be fixed and let $ g : \T \to \cR $ be the $ w $-dependent coordinate transformation \eqref{def of g}.
Then for any $ b \in \R $ the function
\bels{definition of f_b}{
f_b \;:=\; g \circ \mathcal{M}_{A} \circ g^{-1} : \T \to \T
\qquad\text{where}\qquad
A \;\equiv\;
A(b) \;:=\; \mat{2 - \pi^2 w^2 (1+b) & -1 \\ 1 & 0}
\,,
}
is a bijection, that can be written as
\begin{subequations}
\label{f_b and its inverse in terms of Phi}
\begin{align}
\label{f_b in terms of Phi}
f_b(x) \;&=\; x \,+\, \vartheta \,+\, \Phi(x,b)
\\
\label{inverse of f_b in terms of Phi}
f^{-1}_b(y) \;&=\; y \,-\, \vartheta \,+\, \Phi(y-\vartheta,-b )
\,,
\end{align}
\end{subequations}
where the constant $ \vartheta = w + \mathcal{O}(w^3)$ is given in \eqref{def of average shift} and the smooth function $ \Phi : \T \times \R \to \T $ is specified by
\begin{subequations}
\label{def of Phi}
\begin{align}
\label{Phi as arctan}
\Phi(x,b)
\;&=\;
\frac{1}{\pi}\arctan\left\{ \frac{ (\pi w/2) \bigl[1 - \cos (2\pi x)\bigr]\, b }{\sqrt{1-(\pi w/2)^2} \,-\, (\pi w/2) \sin (2\pi x)\,b}\right\}
\\
\label{Phi in product form}
&=\;
\sin^2 (\pi x) \Bigl[ wb \,+\, w^2b^2 (\pi/2)\sin (2\pi x) \,+\,w^3b\,R_3(w,x,b)\Bigr]
\,.
\end{align}
\end{subequations}
The remainder term $ R_3 : [0,w_0] \times \T \times [b_-,b_+] \to \R $ is a smooth and bounded function.
}
The lemma says that in the $ x $-coordinates the system $ \xi_n = \mathcal{M}_{A_n}\msp{-2}(\xi_{n-1}) $, $ n \in \N $, with $ \xi_0 = g(x) $, is described by the following process on the circle. The proof, which is a mechanical calculation, can be found in Appendix \ref{assec:Proof of fb-lemma}.
\Definition{Process X^x}{
Let $ x \in \T $. Markov process $ X^x \equiv (X^x_n:n\in\N_0) $ on $ \T $ is defined by setting
\bels{def of general process X^x}{
X^x_n \;&=\; f_{B_n}\msp{-2}(X^x_{n-1}) \qquad (n \in \N)
\\
X^x_0 \;&=\; x
\,.
}
When the starting point $ x $ is known from the context or its specific value is not relevant we write simply $ X $ and $ X_n $ instead of $ X^x $ and $ X^x_n $, respectively.
}
The main properties of $ f_b(x) $ are best seen by expanding it into the power series w.r.t. $ w $. Indeed, by using \eqref{def of average shift}, \eqref{f_b in terms of Phi} and \eqref{def of Phi} one gets:
\begin{subequations}
\label{expansion of fb, def of phi and psi}
\begin{align}
f_b(x) \;&=\; x \,+\, w \,+\,w \phi(x)\,b \,+\,w^2\psi(x)b^2\,+\,\mathcal{O}(w^3)
\,,
\\
\label{def of phi}
\phi(x) \;&=\; \sin^2\msp{-1} \pi x
\,,
\\
\label{def of psi}
\psi(x) \;&=\; \pi \sin^3\msp{-1} \pi x\, \cos \pi x
\,.
\end{align}
\end{subequations}
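The expansion \eqref{expansion of fb, def of phi and psi} can be checked against the exact formula \eqref{Phi as arctan}. A Python sketch, where the tolerance $10^{-5}$ is an ad hoc bound on the $ \mathcal{O}(w^3) $ remainder at $ w = 10^{-2} $:

```python
import math

w = 0.01
theta = math.acos(1.0 - math.pi**2 * w**2 / 2.0) / math.pi

def Phi(x, b):
    # Exact Phi(x, b) from the arctan formula.
    a = math.pi * w / 2.0
    num = a * (1.0 - math.cos(2.0 * math.pi * x)) * b
    den = math.sqrt(1.0 - a * a) - a * math.sin(2.0 * math.pi * x) * b
    return math.atan(num / den) / math.pi

def f_exact(x, b):
    return x + theta + Phi(x, b)

def f_expanded(x, b):
    phi = math.sin(math.pi * x) ** 2
    psi = math.pi * math.sin(math.pi * x) ** 3 * math.cos(math.pi * x)
    return x + w + w * phi * b + w**2 * psi * b**2

for x in [0.1, 0.3, 0.7, 0.9]:
    for b in [-0.5, 0.25, 0.5]:
        # The two expressions agree up to the O(w^3) remainder.
        assert abs(f_exact(x, b) - f_expanded(x, b)) < 1e-5
```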
Recall the notation $ \Delta Z_k := Z_k - Z_{k-1} $ for a stochastic process $ (Z_k) $. By using the expansion \eqref{expansion of fb, def of phi and psi} together with $ \tE(B_k) = 0 $ and $ B_k \ge b_- > -1 $, the following qualitative properties of $ X $ emerge.
\Corollary{the three qualitative properties of X}{
The process $ X $ has the following three useful properties:
\begin{itemize}
\titem{i} Uniform monotonicity:
$ 0 \;<\; (1+b_-)w + \mathcal{O}(w^2) \;\leq\; \Delta X_k \;\leq\; (1+b_+)w + \mathcal{O}(w^2)\,; $
\titem{ii} $ \mathcal{O}(w^1) $-martingale property modulo constant shift:
$ \tE\bigl[\Delta X_k - w\big|\mathcal{F}_{k-1}\bigr] \;=\; \mathcal{O}(w^2)\,;
$
$
\titem{iii} Uniform diffusion outside any neighborhood of zero: There are constants $ \alpha(\varepsilon),\beta > 0 $ such that $\tE\bigl[(\Delta X_k-w)^2\big|X_{k-1}=x\bigr] \,\in\, [\alpha(\varepsilon) w^2, \beta w^2] $ for $ \abs{x}_\T \ge \varepsilon $.
\end{itemize}
}
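Property (i) is easy to observe in simulation. A Python sketch iterating the exact map $ f_b $ (the margins $0.8$ and $1.2$ are loose, chosen to absorb the $ \mathcal{O}(w^2) $ corrections; parameters are illustrative):

```python
import math, random

random.seed(4)
w = 0.01
theta = math.acos(1.0 - math.pi**2 * w**2 / 2.0) / math.pi
b_minus, b_plus = -0.5, 0.5

def step(x, b):
    # One step x -> f_b(x) on T, using the exact arctan formula for Phi.
    a = math.pi * w / 2.0
    num = a * (1.0 - math.cos(2.0 * math.pi * x)) * b
    den = math.sqrt(1.0 - a * a) - a * math.sin(2.0 * math.pi * x) * b
    return (x + theta + math.atan(num / den) / math.pi) % 1.0

x = 0.0
for _ in range(2000):
    b = random.uniform(b_minus, b_plus)
    x_next = step(x, b)
    dx = (x_next - x) % 1.0
    # Uniform monotonicity (property (i)): dx lies between ~(1+b_-)w and ~(1+b_+)w.
    assert 0.8 * (1.0 + b_minus) * w < dx < 1.2 * (1.0 + b_plus) * w
    x = x_next
```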
Having found good coordinates $ x = g^{-1}(\xi) $ in which $ \xi_n = D_n/D_{n-1} $ evolves in $ w$-sized steps in a relatively simple manner, our next step is to express the matrix elements of $ Q_n $ in terms of these new coordinates.
\Proposition{Fundamental decomposition of D_n(v)}{
Let $ v = [v_0 \msp{6} v_{-1}]^\mathrm{T} \in \bar{\R}^2 $ with $ v_0 \neq 0 $.
Then there is a constant $ w_0 > 0 $ such that for $ w \in \;]0,w_0] $ the solution of \eqref{def of process D_n(v)} is
\bels{fundamental decomposition of D_n(v)}{
D_n(v)
\;&=\; v_0 \cdot \Gamma_n^x \cdot \frac{\sin \pi X^x_n}{\sin \pi[x + \Phi(x,B_1)]}
\qquad\text{with}\quad
x = g^{-1}(v_0/v_{-1})
\,,
}
almost surely.
Here the random amplitude $ \Gamma^x_n : \Omega \to \;]0,\infty[\,$ has an exponential representation
\bels{def of Gamma^x_n}{
\Gamma^x_n \;=\;
\exp\Biggl[ w \sum_{l=1}^n s(X_{l-1}^x)B_l \,+\, w^2 \sum_{l=1}^n r(X^x_{l-1})B_l^2 \,+\,\mathcal{O}(w^3n) \Biggr]
\,,
}
where the smooth functions $ r,s : \T \to \R $ are specified by
\begin{subequations}
\label{s and r of Gamma}
\begin{align}
\label{function s}
s(x) \;&=\; - \frac{\pi}{2} \sin 2\pi x
\,,
\\
\label{function r}
r(x) \;&=\; \frac{\pi^2}{4} ( \cos^2 2\pi x \,-\, \cos 2\pi x )
\,.
\end{align}
\end{subequations}
}
\begin{Proof}
Denote $ D_n := D_n(v) $, $ \xi_n = D_n/D_{n-1} $ and set $ x := g^{-1}(\xi_0) \equiv g^{-1}(v_0/v_{-1}) $.
By definition \eqref{system for xi_n} the process $ (\xi_n) $ is described in $ x$-coordinates by the process $ (X^x_n) $.
Set $ X_n := X^x_n $ and use \eqref{def of g} to write
\bels{D_l ratio in terms of X_l}{
\xi_{l} \;=\; g(X_{l}) \;=\; \bigl(\mathcal{M}_{G} \circ E^{-1}\bigr)(X_{l}) \;=\; \mathcal{M}_{G}(\nE^{\cI 2\pi X_{l}})
\,.
}
By using \eqref{matrices} to write out the M\"obius transformation we obtain:
\[
\mathcal{M}_G(\nE^{\cI \phi}) \;=\; \frac{\nE^{\cI \phi} - 1}{\nE^{\cI (\phi-\pi\vartheta)} - \nE^{\cI \pi \vartheta}}
\;=\;
\frac{\sin \frac{\phi}{2}}{\sin \bigl(\frac{\phi}{2}-\pi \vartheta\bigr)}
\,.
\]
By combining this with \eqref{D_l ratio in terms of X_l}, reorganizing the resulting product and then using \eqref{f_b in terms of Phi} to write $ f $ in terms of $ \Phi $ yields
\begin{align}
\notag
\frac{D_n}{v_0} \;&=\; \frac{\xi_{n} \xi_{n-1}\cdots \xi_{1} v_0}{v_0}
\;=\;
\prod_{l=1}^n \frac{\sin \pi X_l}{\sin \pi(X_{l}-\vartheta)}
\;=\;
\frac{\sin \pi X_n}{\sin \pi(X_1 - \vartheta)}
\prod_{l=1}^{n-1}
\frac{\sin \pi X_{l}}{\sin \pi (X_{l+1}-\vartheta)}
\\
\label{D_n as a product of sin-ratios}
&=\;
\frac{\sin \pi X_{n}}{\sin \pi [x + \Phi(x,B_1)]}
\prod_{l=1}^{n-1}
\frac{\sin \pi X_{l}}{\sin \pi [X_{l} + \Phi(X_{l},B_{l+1})]}
\,.
\end{align}
Here the possible extreme values $ \xi_k \in \sett{0,\infty} $ do not cause problems because we assumed $ \xi_0 = v_0/v_{-1} \neq 0 $ and \eqref{system for xi_n} implies
\[
\tP\bigl(\xi_k \in \sett{0,\infty} \text{ for some }k \in \N \msp{1}\big| \xi_0 \neq 0\bigr) \;=\; 0
\,.
\]
We must now show that the product of sine ratios in \eqref{D_n as a product of sin-ratios} equals the exponential $ \Gamma^x_n $. Since the terms in the product are all similar, let us consider only one such factor.
From \eqref{Phi in product form} one sees that $ \Phi(x,b) = \mathcal{O}(w) $. This suggests expanding the denominators on the last line of \eqref{D_n as a product of sin-ratios} as a power series in $ \pi \Phi(x,b) $ around zero:
\bels{first expansion of sin-denominators}{
\sin \pi( x + \Phi(x,b))
\;&=\;
\sin \pi x\, \cos \pi\Phi(x,b) \,+\, \cos \pi x \,\sin \pi \Phi(x,b)
\\
&=\;
\sin \pi x\, \bigl\{1 -\frac{1}{2}\pi^2\Phi^2(x,b)\bigr\} \,+\, \pi \Phi(x,b) \cos \pi x \,+\, \mathcal{O}\bigl(\Phi^3(x,b)\bigr)
\,.
}
The expression \eqref{Phi in product form} also shows that $ \Phi^k(x,b)/\sin \pi x = \mathcal{O}(w^k) $ for $ k \ge 1/2 $.
Thus using \eqref{first expansion of sin-denominators} to rewrite the denominators in \eqref{D_n as a product of sin-ratios}, and then dividing the numerator and the denominator by $ \sin \pi x $, yields a geometric series in the variable $ q = -\pi \Phi(x,b) \cot \pi x + \frac{\pi^2}{2}\Phi^2(x,b) + \mathcal{O}(w^3) \,=\, \mathcal{O}(w) $. Expanding this geometric series gives the first line of
\bea{
\frac{\sin \pi x}{\sin \pi(x +\Phi(x,b))}\;
&=\;
1 \,-\,\pi \Phi(x,b)\cot \pi x + \frac{\pi^2}{2}\Phi^2(x,b) \,+\,\pi^2 \Phi^2(x,b) \cot^2 \!\pi x
\,+\,\mathcal{O}(w^3)
\\
&=\;
1 \,-\, w\frac{\pi}{2} \sin 2\pi x\;b \,+\,w^2 \frac{\pi^2}{8}(1-\cos 2\pi x)^2\,b^2 + \mathcal{O}(w^3)
\,,
}
where the last line follows from \eqref{Phi in product form} and the trigonometric double-angle formulae. Applying $ 1+z = \exp \circ \ln \msp{1} (1+z) = \exp\bigl[ z - \frac{1}{2}z^2 + \mathcal{O}(z^3) \bigr] $, with $ \abs{z} \leq Cw_0 $, to the last expression we get
\bea{
\frac{\sin \pi x}{\sin \pi (x +\Phi(x,b))}\;
&=\;
\exp\Bigl[ -w (\pi/2) \sin 2\pi x\; b \,+\,w^2 (\pi/2)^2\bigl( \cos^2 2\pi x - \cos 2\pi x \bigr) b^2 + \mathcal{O}(w^3) \Bigr]
\,.
}
Identifying the functions $ s $ and $ r $ on the right side and then applying this expansion factor by factor in the product \eqref{D_n as a product of sin-ratios} yields the expression on the right side of \eqref{def of Gamma^x_n}.
\end{Proof}
It is worth remarking that the proposition does not apply directly for $ v \in \C^2 $, since it relies on Lemmas \ref{lmm:mean diagonal coordinates} and \ref{lmm:functions fb and Phi}, which apply only when $ (\xi_n) $ takes values in $ \R $. Of course, by the linearity of the system \eqref{def of process D_n(v)} one still has $ D_n(v_R + \cI v_I) = D_n(v_R) + \cI D_n(v_I) $ for any $ v_R,v_I \in \R^2 $.
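As a numerical sanity check of the sine-ratio expansion used in the proof above, one may replace $ \Phi(x,b) $ by a generic small shift $ \epsilon $ (its precise form does not enter the algebra) and verify the claimed $ \mathcal{O}(\epsilon^3) $ accuracy of the second-order expansion. A minimal sketch, where the point $ x = 0.3 $ is an arbitrary choice:

```python
import math

def ratio(x, eps):
    """Exact value of sin(pi x) / sin(pi (x + eps))."""
    return math.sin(math.pi * x) / math.sin(math.pi * (x + eps))

def expansion(x, eps):
    """Second-order expansion from the proof:
    1 - pi*eps*cot(pi x) + (pi^2/2) eps^2 + pi^2 eps^2 cot^2(pi x)."""
    cot = math.cos(math.pi * x) / math.sin(math.pi * x)
    return (1 - math.pi * eps * cot
            + (math.pi ** 2 / 2) * eps ** 2
            + math.pi ** 2 * eps ** 2 * cot ** 2)

x = 0.3   # arbitrary point away from 0
errs = [abs(ratio(x, e) - expansion(x, e)) for e in (1e-2, 1e-3)]
print(errs)   # the error shrinks by roughly a factor 10^3, i.e. like eps^3
```

Shrinking $ \epsilon $ by a factor $ 10 $ shrinks the discrepancy by roughly $ 10^3 $, consistent with a remainder of order $ \epsilon^3 $.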
The next corollary shows that the generic choices $ D_n(v) $ with $ v = e_k $, $ k =1,2$, are often convenient, as $ D(e_2)$ can be treated as a perturbation of $ D(e_1) $.
\Corollary{Fundamental decompositions of D_1n and D_2n}{
There is a constant $ w_0 > 0 $ such that for $ w \in \;]0,w_0] $:
\begin{subequations}
\label{fundamental decompositions of D_1n and D_2n}
\begin{align}
\label{fundamental decomposition of D_1n}
D_n(e_1)
\;&=\; \Gamma_n^\vartheta \cdot \frac{\sin \pi X^{\vartheta}_n}{\sin \pi[\vartheta + \Phi(\vartheta,B_1)]}
\;\sim\;
w^{-1} \Gamma_n^\vartheta\cdot \sin \pi X^\vartheta_n
\,,
\\
\label{fundamental decomposition of D_2n}
D_n(e_2)
\;&=\; \Gamma_n^0 \cdot \frac{\sin \pi X^0_n}{\sin \pi[\vartheta + \Phi(\vartheta,B_2)]}
\;\sim\;
w^{-1} \Gamma_n^0 \cdot \sin \pi X^0_n
\,.
\end{align}
\end{subequations}
}
\begin{Proof}
By \eqref{def of inverse of g} we get $ g^{-1}(\xi_0) = g^{-1}(1/0) = \vartheta $ and thus \eqref{fundamental decomposition of D_1n} follows directly from Proposition \ref{prp:Fundamental decomposition of D_n(v)}.
In order to prove \eqref{fundamental decomposition of D_2n} one cannot apply the proposition directly, since the first component of $ e_2 $ is zero. However, from \eqref{def of process D_n(v)} one sees that $ [D_1(e_2) \msp{8}D_0(e_2)]^\mathrm{T} = [-1 \msp{8}0]^\mathrm{T} = -e_1 $ and $ D_n(-v) = -D_n(v) $.
Thus, by defining $ \theta : \Omega \to \Omega $ by $ \theta\omega = (b_2,b_3,\dots) $ for $ \omega = (b_1,b_2,\dots) $ and denoting the associated pullback $ \theta_\ast $ on random variables $ Z $ by $ \theta_\ast Z(\omega) = Z(\theta \omega) $, one can write
\bels{representation for D_2n - step 1}{
D_n(e_2)
\,=\; - \theta_\ast D_{n-1}(e_1)
\;=\;
\theta_\ast\Gamma^\vartheta_{n-1} \cdot \frac{\sin \pi \msp{1}\theta_\ast\msp{-1} X^\vartheta_{n-1}}{\sin \pi[\vartheta + \Phi(\vartheta,\theta_\ast\msp{-1} B_1)]}
\,,
}
where, by definition:
\bels{finding Gamma^0_n - step 1}{
\theta_\ast\Gamma^\vartheta_{n-1}
\;=\;
\exp\Biggl[ w \sum_{l=1}^{n-1} s(\theta_\ast X^\vartheta_{l-1})\,\theta_\ast \msp{-1}B_l \,+\, w^2 \sum_{l=1}^{n-1} r(\theta_\ast X^\vartheta_{l-1})(\theta_\ast\msp{-1} B_l)^2 \,+\,\mathcal{O}(w^3n) \Biggr]
\,.
}
Now, since $ \Phi(0,b) = 0 $ it follows that $ X^0_1 = f_{B_1}(0) = \vartheta + \Phi(0,B_1) = \vartheta = \theta_\ast X^\vartheta_0 $ regardless of the value of $ B_1 $. But $ (X^0_n:n\in\N) $ and $ (\theta_\ast X^\vartheta_{n-1}:n\in\N) $ also satisfy the same recursion relations for $ n \ge 2 $ and therefore $ \theta_\ast X^\vartheta_n = X^0_{n+1} $, $ n \in \N_0 $. Also, by definition $ \theta_\ast B_l(\omega) = b_{l+1} = B_{l+1}(\omega) $.
Thus we may replace $ \theta_\ast X^\vartheta_{l-1} $ with $ X^0_l $ and write $ \theta_\ast B_l = B_{l+1} $ in \eqref{representation for D_2n - step 1} and \eqref{finding Gamma^0_n - step 1}. Moreover, if we also reindex the sums in \eqref{finding Gamma^0_n - step 1} we obtain an exponential representation for $ \theta_\ast \Gamma^\vartheta_{n-1} $ that is, up to the missing first terms $ w\msp{1}s(X^0_0)B_1 $ and $ w^2r(X^0_0)B_1^2 $, equal to $ \Gamma^0_n $. However, these missing terms both vanish due to the ``coincidence'' $ s(0) = r(0) = 0 $, and thus we get $ \theta_\ast\Gamma^\vartheta_{n-1} = \Gamma^0_n $. This proves \eqref{fundamental decomposition of D_2n}.
\end{Proof}
\subsection{Joint behavior}
\label{ssec:Joint behavior}
In order to prove $ n^{-3/2} \lesssim J_n $ we analyze the current density $ j_n $ defined in \eqref{def of jn}. This leads us to consider the properties of the quadruple $ (X^\vartheta_n,X^0_n,\Gamma^\vartheta_n,\Gamma^0_n) $.
Since $ X^\vartheta_0-X^0_0 = \vartheta \sim w $ one can consider $ X^0_n $ and $ \Gamma^0_n $ as perturbations around $ X^\vartheta_n $ and $ \Gamma^\vartheta_n $, respectively. Based on this simple idea one proves the following.
\Lemma{Representations for the lower bound}{
Let us treat $ X^x $, $ x \in \R $, as real-valued processes. Then for all $ n \in \N $ and $ w \in \;]0,w_0] $:
\begin{align}
\label{R-distance between X^vartheta_n and X^0_n}
X^\vartheta_n -\, X^0_n \;=&\; w\, \nE^{M_n+\,L_n+\,\mathcal{O}(w^2n)}
\\
\label{Gamma_2n from Gamma_1n}
\Gamma^0_n/\Gamma^\vartheta_n \;=&\; \nE^{K_n +\, \mathcal{O}(w+w^2n)}
\,,
\end{align}
where $ (M_n), (L_n), (K_n) $ are $ \R$-valued $ \mathbb{F} $-martingales such that $ M_0 = L_0 = K_0 = 0 $ and, for $ n \in \N $:
\begin{align}
\label{def of dM_n}
\Delta M_n \;&=\; w\msp{1} \phi'(X^\vartheta_{n-1}) B_n
\\
\label{def of dL_n}
\Delta L_n \;&=\;
w^2 \nE^{M_{n-1}+\,L_{n-1}+\,\mathcal{O}(w^2n)} H_{n-1} B_n
\\
\label{def of dK_n}
\Delta K_n \;&=\;
w^2 \nE^{M_{n-1}+\,L_{n-1}+\,\mathcal{O}(w^2n)} U_{n-1} B_n
\,.
\end{align}
The processes $ (H_n) $ and $ (U_n) $ are $ \mathbb{F} $-adapted and bounded such that:
\bels{increments bounded by constant}{
\sup \;\setb{\abs{H_n},\,\abs{U_n},\,w^{-1}\msp{-1}\abs{\Delta L_n},\, w^{-1}\msp{-1}\abs{\Delta K_n}\,:\,n \in \N} \;&\leq\; C\,.
}
}
\begin{Proof}
From \eqref{Phi in product form} and \eqref{def of phi} one sees that $ \Phi(x,b) = w \phi(x)b + w^2 R_2(x,b) $ where $ R_2 $ is a smooth and bounded function. Using \eqref{f_b in terms of Phi} we get
\bels{f_b difference 1}{
f_b(x)\,-\,f_b(x-z) \;
&=\;z \,+\,\Phi(x,b)-\Phi(x-z,b)
\\
&=\;\biggl\{ 1 \,+\,w \frac{\phi(x)-\phi(x-z)}{z}\,b \,+\,w^2 \frac{R_2(x,b)-R_2(x-z,b)}{z}\biggr\}\, z
\,,
}
for any $ z \in \R $. By the mean value theorem there are functions $ \zeta_1(x,z), \zeta_2(x,z,b) \in [x-z,x] $ such that for any $ x \in \R $, $ z \ge 0 $ and $ b \in [b_-,b_+]$ we have
\bels{f_b difference 2}{
f_b(x)\,-\,f_b(x-z)
\;&=\;
\biggl\{1 \,+\, w \phi'(x)b \,-\,w z\,\frac{1}{2}\phi''(\zeta_1(x,z))\msp{1} b\,+\,
w^2 \partial_x R_2(\zeta_2(x,z,b),b)
\biggr\}\, z
\\
&=\; \exp \biggl[w \phi'(x)b \,-\, wz\,\frac{1}{2}\phi'' \!\circ \zeta_1(x,z)\msp{1} b \,+\, \mathcal{O}(w^2) \biggr]\, z
\,.
}
Now, set
\bels{Theta_n and H_n explicitly}{
\Theta_n \;:=\;
(X^{\vartheta}_n - X^0_n) / w
\qquad\text{and}\qquad
H_n \;:=\; -\frac{1}{2} \phi'' \!\circ \zeta_1(X^\vartheta_n,w\msp{1}\Theta_n)
\,.
}
Then \eqref{f_b difference 2} and \eqref{def of general process X^x} yield
\begin{align}
\notag
\Theta_n
&=\; \frac{1}{w}\bigl\{f_{B_n}\!(X^\vartheta_{n-1}) \,-\, f_{B_n}\!(X^\vartheta_{n-1}\!-w\Theta_{n-1})\bigr\}
\\
\notag
&=\; \exp \Bigl[ w \phi'(X^\vartheta_{n-1}) B_n \,-\, w^2\Theta_{n-1}\frac{1}{2}\phi''\! \circ \zeta_1(X^\vartheta_{n-1},w\Theta_{n-1})\msp{1} B_n \,+\,\mathcal{O}(w^2)\Bigr] \cdot \Theta_{n-1}
\\
\label{exponent representation 1 for Theta_n}
&=\; \exp \Biggl[ w \sum_{j=1}^n \phi'(X^\vartheta_{j-1}) B_j \,+\, w^2 \sum_{j=1}^n \Theta_{j-1}\,H_{j-1}B_j \,+\, \mathcal{O}(w^2n) \Biggr] \cdot \Theta_0
\,.
\end{align}
By using \eqref{def of dM_n} and \eqref{def of dL_n} we identify the two sums inside the exponent in \eqref{exponent representation 1 for Theta_n} as $ M_n $ and $ L_n $, respectively. Together with $ \Theta_0 = (X^\vartheta_0-X^0_0)/w = \vartheta/w = 1 + \mathcal{O}(w^2) $ this gives $ \Theta_n = \nE^{M_n + L_n + \mathcal{O}(w^2n)} $ and by the definition \eqref{Theta_n and H_n explicitly} this equals \eqref{R-distance between X^vartheta_n and X^0_n}.
Moreover, $ w^{-1} \Delta L_{n+1} = w \Theta_nH_n B_{n+1} $, where using \eqref{f_b difference 2}, \eqref{Theta_n and H_n explicitly} and the definition of $ \zeta_1 $ we get
\[
w \msp{1} \Theta_n H_n
\;=\;
\frac{\phi(X^\vartheta_n)-\phi(X^0_n)}{X^\vartheta_n-X^0_n} \,-\, \phi'(X^\vartheta_n)
\;=:\;
\phi'(\zeta_0) \,-\, \phi'(X^\vartheta_n)
\,,
\]
for some $ \zeta_0 \in [X^0_n,X^\vartheta_n] $, and therefore $ w^{-1} \abs{\Delta L_{n+1}} \leq 2 \norm{\phi'}_\infty \!\cdot \max\sett{-b_-,b_+} =: C $.
In order to prove \eqref{Gamma_2n from Gamma_1n} we use again the mean value theorem to write
\bels{s at X^0_n as an expansion around X^vartheta_n}{
s(X^0_n)
\;=\;
s(X^\vartheta_n-w\msp{1}\Theta_n)
\;=\;
s(X^\vartheta_n) \,-\, w\msp{1}\Theta_n\cdot s' \circ \zeta_3(X^\vartheta_n,w\msp{1}\Theta_n)
\,,
}
where $ X^\vartheta_n - w\Theta_n \leq \zeta_3(X^\vartheta_n,w\Theta_n) \leq X^\vartheta_n $. Using this in \eqref{def of Gamma^x_n} yields
\bea{
\Gamma^0_n \;
&=\; \exp\left[w\sum_{l=1}^n s(X^0_{l-1})B_l \,+\, \mathcal{O}(w^2n)\right]
\\
&=\; \exp\left[w\sum_{l=1}^n s(X^\vartheta_{l-1})B_l \,-\,w^2\sum_{l=1}^n \Theta_{l-1} \msp{-2}\cdot s' \circ \zeta_3(X^\vartheta_{l-1},w\Theta_{l-1})\msp{1}B_l \,+\, \mathcal{O}(w^2n)\right]
\\
&=:\;
\Gamma^\vartheta_n\, \nE^{K_n \,+\, \mathcal{O}(w+w^2n)}
\,.
}
Above, we have identified $ U_n = -s' \circ \zeta_3(X^\vartheta_n,w\msp{1}\Theta_n) $ in \eqref{def of dK_n}. Finally, by equation \eqref{s at X^0_n as an expansion around X^vartheta_n}
$ w^{-1} \Delta K_{n+1} = w\Theta_n U_n B_{n+1} = [s(X^0_n)-s(X^\vartheta_n)]\, B_{n+1} $. Since $ s $ is a bounded function (see \eqref{function s}), this implies $ w^{-1}\abs{\Delta K_n} \leq C $.
\end{Proof}
\section{Expectation of $ 1/\Gamma_n $}
\label{sec:Expectation of 1/Gamma_n}
In this section we prove the following result.
\Proposition{expectation of 1/Gamma_n decays exponentially}{
For sufficiently small $ w_{0} \sim 1 $ there exists $ \alpha \equiv \alpha(w_0) > 0 $ such that for $ n \in \N $,
\bels{expectation of 1/Gamma_n decays exponentially}{
\sup_{x\in\T} \tE\bigl(1/\Gamma^x_n\bigr)
\;\lesssim\;
\nE^{-\alpha w^2 n}
\,,\qquad w \in \;]0,w_0]
\,.
}
}
The content of this result is best understood by using \eqref{def of Gamma^x_n} to write $ 1/\Gamma_n $ in the exponential form $ \nE^{-R_n w^2n \,+\, wn^{1/2}S_n \,+\, \mathcal{O}(w^3n)} $, where the normalized random variables
\[
S_n \;=\; \frac{-1}{n^{1/2}}\sum_{k=1}^n s(X_{k-1})B_k
\qquad\text{and}\qquad
R_n \;=\; \frac{1}{n}\sum_{k=1}^n r(X_{k-1})B_k^2
\,,
\]
are on average of order $ 1 $. Our proof of Proposition \ref{prp:expectation of 1/Gamma_n decays exponentially} consists of two steps which both rely on the fact that during any consecutive sequence of $ \floor{1/w} $ steps the random set $ \{ X_j(w) : j=k,\dots,k+\floor{1/w} \} $, $ k \in \N $, typically samples $ \T $ evenly.
First, Lemma \ref{lmm:noisy Lp-ergodic over one round} is used to show that $ R_n \equiv R_n(w) $ can be replaced by the constant $ \gamma(w)/w^2 $ without introducing too large an error in $ \tE(1/\Gamma_n) $, provided $ wn \to \infty $. Here
\bels{Lyapunov exponent explicitly solved}{
\gamma(w)
\;=\;
\left\{ \tE(B^2_1) \cdot \!\int_\T r(x)\dif x \right\} w^2 +\, \mathcal{O}(w^3)
\;=\; \frac{\pi^2\tE(B^2_1)}{8}w^2 + \mathcal{O}(w^3)
\,,
}
is the Lyapunov exponent associated to the norm of $ Q_n $ in \eqref{solution of Q_n in terms of D_n(v)}.
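For the reader's convenience, the numerical value in \eqref{Lyapunov exponent explicitly solved} follows from the explicit form of $ r $ identified in the proof of Proposition \ref{prp:Fundamental decomposition of D_n(v)}, namely $ r(x) = (\pi/2)^2 (\cos^2 2\pi x - \cos 2\pi x) $; a one-line computation gives

```latex
\[
\int_\T r(x) \dif x
\;=\;
\Bigl(\frac{\pi}{2}\Bigr)^{\!2} \int_0^1 \bigl( \cos^2 2\pi x \,-\, \cos 2\pi x \bigr) \dif x
\;=\;
\frac{\pi^2}{4} \cdot \frac{1}{2}
\;=\;
\frac{\pi^2}{8}
\,,
\]
```

since $ \int_0^1 \cos^2 2\pi x \dif x = 1/2 $ and $ \int_0^1 \cos 2\pi x \dif x = 0 $.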
Secondly, the uniform monotonicity (property (i) of Corollary \ref{crl:the three qualitative properties of X}) of the process $ X $ is used to bound the conditional variance (see \eqref{def of V_n}) of the martingale $ n^{1/2}S_n $ so that Freedman's powerful exponential martingale bound, i.e., Lemma \ref{lmm:Freedman's and Azuma's martingale exponent bounds}, can be applied to obtain a bound $ \tE \msp{1}\nE^{wn^{1/2}S_n} \leq \nE^{\beta w^2n} $, where $ \gamma(w)/w^2 -\beta =: \alpha \sim 1 $.
The following lemma provides two powerful exponential martingale bounds due to Freedman \cite{Freedman-75} and Azuma \cite{Azuma-1967}.
\Lemma{Freedman's and Azuma's martingale exponent bounds}{
Let $ (M_i) $ be a $ (\mathcal{F}_i) $-martingale, and define a process $ (V_n) $ by setting
$ V_0 = 0 $ and
\bels{def of V_n}{
V_n \;:=\; \sum_{i=1}^n\tE\bigl[(M_i-M_{i-1})^2\big| \mathcal{F}_{i-1}\bigr]
\,,\qquad n\in\N
\,.
}
Suppose there exists a constant $ m $ and a sequence $ (v_n) \subset [0,\infty[\, $ such that $\abs{M_n -M_{n-1}} \leq m $ and $ V_n \leq v_n $ for all $ n \in \N $.
Then for any $ t \in \R $ and $ n \in \N $:
\begin{align}
\label{Freedman's and Azuma's exponent bounds}
\tE\, \nE^{t M_n}
\;\leq\;
\begin{cases}
\nE^{\msp{1}\kappa_m(t)\msp{1}v_n}
\,,\quad
&\text{``Freedman's bound'';}
\\
\nE^{\frac{t^2}{2}m^2n}
\,,
&\text{``Azuma's bound'';}
\end{cases}
\end{align}
where
\bels{def of kappa_m}{
\kappa_m(t) \;=\; \frac{\nE^{mt} - 1 - mt}{m^2} \;\leq\; \frac{t^2}{2} \,+\,\frac{m}{6}\nE^{m\abs{t}}\abs{t}^3
\,.
}
}
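The inequality in \eqref{def of kappa_m} is elementary: expanding the exponential termwise and using $ 6\,(k-3)! \leq k! $ for $ k \ge 3 $,

```latex
\[
\kappa_m(t)
\;=\;
\sum_{k=2}^\infty \frac{m^{k-2}\msp{1}t^k}{k!}
\;\leq\;
\frac{t^2}{2} \,+\, \frac{m\abs{t}^3}{6} \sum_{k=3}^\infty \frac{(m\abs{t})^{k-3}}{(k-3)!}
\;=\;
\frac{t^2}{2} \,+\, \frac{m}{6}\,\nE^{m\abs{t}}\abs{t}^3
\,.
\]
```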
For the convenience of the reader, the proofs of these bounds are included in Appendix \ref{assec:Proofs of Freedman's and Azuma's bounds}. The next inequality, \eqref{Azuma's inequality}, is often referred to as Azuma's inequality.
\Corollary{Azuma's inequality}{
Suppose $ (M_k) $ satisfies the hypothesis of Lemma \ref{lmm:Freedman's and Azuma's martingale exponent bounds}. Then for any $ n \in \N $ and $ r > 0 $:
\bels{Azuma's inequality}{
\tP( \abs{M_n} \ge r ) \;
&\leq\;
2\, \nE^{-\frac{r^2}{2m^2n}}
\,.
}
}
\begin{Proof}
The proof follows by using Markov's inequality, $ \tP(\abs{M_n} \ge r) = \tP( M_n \ge r) + \tP( -M_n \ge r) \leq \nE^{-sr} \tE\, \nE^{sM_n} + \nE^{-sr} \tE \,\nE^{-sM_n} $, and then applying Azuma's bound \eqref{Freedman's and Azuma's exponent bounds} with $ t = s = r/(m^2n) $.
\end{Proof}
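Azuma's inequality is easy to illustrate numerically (this is not part of the proof): for a hypothetical martingale with Rademacher increments, so that $ m = 1 $, the empirical tail probability at two standard deviations stays well below the bound $ 2\nE^{-r^2/(2m^2n)} $. A minimal sketch:

```python
import math
import random

random.seed(0)
n, trials, m = 400, 5000, 1.0    # Rademacher increments: |M_k - M_{k-1}| <= m = 1
r = 2 * math.sqrt(n)             # deviation threshold: two standard deviations

hits = 0
for _ in range(trials):
    # martingale M_n = sum of n independent +/-1 steps
    M = sum(random.choice((-1, 1)) for _ in range(n))
    if abs(M) >= r:
        hits += 1

p_emp = hits / trials
azuma = 2 * math.exp(-r ** 2 / (2 * m ** 2 * n))   # Azuma: 2 e^{-r^2/(2 m^2 n)} = 2/e^2
print(p_emp, "<=", azuma)
```

The bound is far from tight here (the true tail is close to the Gaussian value $ 2(1-\Phi(2)) \approx 0.046 $), which is typical of exponential martingale bounds.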
\Lemma{noisy Lp-ergodic over one round}{
Suppose $ u $ is a Lipschitz function on $ \T $, i.e., there is a constant $ L_u > 0 $ such that $ \abs{u(x)-u(y)} \leq L_u \abs{x-y}_\T $ for all $ x,y \in \T $. Then:
\bels{Lp-ergodic average}{
\sup_{x\in\T}\tE \Biggl\{ \biggl| w \sum_{j=0}^{\floor{1/w}} u(X^x_j) \,-\, \int_\T u(y) \dif y \biggr|^p \Biggr\} \;\leq\; C_p L_u^p w^{p/2}
\,,
}
where $ C_p $ does not depend on $ u $.
}
\begin{comment}
Recall the \emph{generalized H\"older's inequality}: Let $ \alpha \in \,[1,\infty[\, $ and $ \alpha_j \in \,[1,\infty] $, $ n \in \N $, such that $ \sum_{j=1}^n \frac{1}{\alpha_j} = \frac{1}{\alpha} $.
Suppose $ f_j $ are measurable functions on some measure space $ (S,\mathcal{S},\mu)$. Then $ \normb{ \prod_{j=1}^n f_j }_\alpha \leq \prod_{j=1}^n \norm{f_j}_{\alpha_j} $ where $ \norm{\genarg}_\alpha $ is the $ \Lspace^\alpha(S,\mu) $-norm.
\end{comment}
\begin{Proof}
Fix $ x $ and set $ X := X^x $ and $ I_j := [x+w\,(j-1),x+w\,j[\, $. Define for each $ j $ some $ \tilde{x}_j \in I_j $ by requiring $ \int_{I_j} u(y) \dif y = w\, u(\tilde{x}_j) $, and set $ \bar{x}_j := \tE(X_j) $. The properties \eqref{expansion of fb, def of phi and psi} of the chain $ X $ imply $ \abs{\bar{x}_j-\tilde{x}_j} \leq w $ for all $ j \leq \floor{1/w}$.
By writing the integral on the left side of \eqref{Lp-ergodic average} as a sum over $ u(\tilde{x}_j) $ and then applying the Lipschitz property of $ u $ one gets
\bels{Lp-erg A}{
\tE \Biggl\{ \biggl| w \sum_{j=0}^{\floor{1/w}} \bigl[u(X_j) -u(\tilde{x}_j)\bigr] \biggr|^p \Biggr\}
\;\leq\;
L_u^p\, w^p \sum_{j_1,\dots,j_p} \tE \Biggl\{ \prod_{l=1}^p \abs{X_{j_l}-\tilde{x}_{j_l}}\Biggr\}
\,.
}
Now, $ X_j = x + wj + w^{1/2}M_j + \mathcal{O}(w) $ with $ M_j = w^{1/2}\sum_{i=1}^j \phi(X_{i-1})B_i $ uniformly for any $ 0 \leq j \leq \floor{1/w} $.
This means $ X_j - \tilde{x}_j = w^{1/2}(M_j + \mathcal{O}(w^{1/2})) $. By applying the generalized H\"older's inequality one has
\bels{Lp-erg B}{
\tE \Biggl\{ \prod_{l=1}^p \abs{X_{j_l}-\tilde{x}_{j_l}}\Biggr\}\;
&=\;
w^{p/2}
\tE \Biggl\{ \prod_{l=1}^p \absb{M_{j_l} + \mathcal{O}(w^{1/2})}\Biggr\}
\\
&\leq\;
w^{p/2}
\left(\prod_{l=1}^p \tE \Bigl\{ \absb{M_{j_l} + \mathcal{O}(w^{1/2})}^p \Bigr\}\right)^{1/p}
.
}
The last expectations in \eqref{Lp-erg B} can be bounded with Azuma's inequality \eqref{Azuma's inequality}. Indeed, $ \abs{M_j-M_{j-1}} \leq w^{1/2}\max(-b_-,b_+) \norm{\phi}_\infty \equiv Cw^{1/2} $ for each $ j $. This implies $ \tP\bigl(\abs{M_j} \in [k,k+1[\,\bigr) \leq 2 \tP(\abs{M_j}\ge k) \leq 2 \nE^{-k^2/(2C^2w\floor{1/w})} \leq 2\msp{1}\nE^{-k^2/C'} $ which, in turn, yields
\[
\tE \Bigl\{ \absb{M_j + \mathcal{O}(w^{1/2})}^p \Bigr\}
\;\leq\;
\sum_{k=0}^\infty (k+1+\mathcal{O}(w^{1/2}))^p \tP\bigl(\abs{M_j} \in [k,k+1[\,\bigr)
\;\leq\; 2\sum_{k = 0}^\infty k^p \nE^{-k^2/C'} \;=: C_p
\,.
\]
Since this bound holds uniformly for all $ j = 0,1, \dots, \floor{1/w} $ we may apply it term by term in \eqref{Lp-erg B}. Using the resulting bound again term by term in \eqref{Lp-erg A} yields the bound \eqref{Lp-ergodic average}.
\end{Proof}
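The even-sampling mechanism behind the lemma can be illustrated with a toy computation in which the noise is switched off, so the chain reduces to the deterministic drift $ X_j = x + wj $ (mod $1$) and the average is a plain Riemann sum; the error is then even of order $ w $ rather than $ w^{1/2} $. The test function and starting point below are arbitrary choices:

```python
import math

def riemann_gap(w, x=0.123):
    """|w * sum_{j=0}^{floor(1/w)} u(x + w*j mod 1) - integral of u over T|
    for a smooth periodic u whose integral over the circle equals 2."""
    N = round(1 / w)
    u = lambda y: math.sin(2 * math.pi * y) + 2.0
    s = w * sum(u((x + w * j) % 1.0) for j in range(N + 1))
    return abs(s - 2.0)

print(riemann_gap(1e-2), riemann_gap(1e-3))   # both gaps are of order w
```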
\begin{Proof}[Proof of Proposition \ref{prp:expectation of 1/Gamma_n decays exponentially}]
Since $ \Gamma_n(w) \ge C $ for $ wn \sim 1 $ it is enough to show $ \tE(1/\Gamma^x_n) \leq C\,\nE^{-\alpha w^2 n} $ for $ n = \floor{1/w} m $, $ m \in \N $. Since $ \Delta X_n \ge Cw $ we may for the same reason fix some arbitrary starting point $ x \in \T $ and denote $ X^x_n $ and $ \Gamma^x_n $ by $ X_n $ and $ \Gamma_n $, respectively.
We begin the proof by decomposing the second sum in the exponent of \eqref{def of Gamma^x_n} into the double sum
\bels{decomposition of r sum}{
w^2 \sum_{i=1}^n r(X_{i-1})B_i^2 \;
&=\;w\sum_{k=1}^m \;w \msp{-16}\sum_{i= i_{k-1}+1}^{i_{k}} \msp{-10}r(X_{i-1})B_i^2
\;=\;
w \sum_{k=1}^m \gamma(X_{i_{k-1}}) \,+\,w\sum_{k=1}^m Z_k
\,,
}
where $ i_k = \floor{1/w}k + 1 $, $ k = 1,2,\dots,m $, is roughly the time at which the averaged process $ \bar{x}_j := \tE_x(X_j) = x + wj + \mathcal{O}(w^3j) $ has passed its starting point for the $ k $\textsuperscript{th} time.
In the rightmost expression of \eqref{decomposition of r sum} we have further divided the inner sums into the conditional expectations and the fluctuation parts:
\begin{subequations}
\begin{align}
\label{def of Z_k}
Z_k \;:=&\; w\msp{-10}\sum_{i= i_{k-1}+1}^{i_{k}} \!r(X_{i-1})B_i^2 \,-\, \gamma(X_{i_{k-1}})
\\
\label{def of gamma}
\gamma(y)
\,:=&\;
\tE\Biggl\{ w \sum_{i=1}^{\floor{1/w}} r(X_{i-1}^y) B_i^2 \Biggr\}
\,.
\end{align}
\end{subequations}
The motivation behind the decomposition \eqref{decomposition of r sum} is twofold. First, Lemma \ref{lmm:noisy Lp-ergodic over one round} tells us that the function $ \gamma $ is almost constant for small $ w $; in particular,
\bels{bound of gamma_-}{
\gamma(y) \;
&=\;
\tE(B^2)\,
\tE\Biggl\{ w\!\sum_{i=1}^{\floor{1/w}} r(X^y_{i-1})\Biggr\}
\;\ge\;
\tE(B^2)\,\int_\T r(z) \dif z \,-\, \beta_0 w^{1/2}
\;=:\; \tilde{\gamma}_-
\,,
}
where $ \beta_0 > 0 $ is a finite constant that does not depend on $ y $.
Here the first equality follows from $ \tE\bigl(r(X_{i-1})B_i^2\bigr) = \tE(B^2)\, \tE\bigl( r(X_{i-1}) \bigr)$, while the last expression comes from Lemma \ref{lmm:noisy Lp-ergodic over one round} with $ p = 1 $ and $ L_u := \norm{r'}_\infty $.
Using \eqref{bound of gamma_-} to bound each term $ \gamma(X_{i_{k-1}}) $ in \eqref{decomposition of r sum} yields the bound:
\bels{1/Gamma_n two martingales left}{
\tE\bigl(1/\Gamma_n\bigr) \;
&\leq\;
\nE^{-\gamma_- w^2n} \tE \exp\Biggl[ -w \sum_{i=1}^n s(X_{i-1})B_i \,-\, w\sum_{k=1}^m Z_k \Biggr]
\,,
\quad\text{with}\quad
\gamma_- := \tilde{\gamma}_- + \mathcal{O}(w)
\,,
}
where the $ \mathcal{O}(w^3n) $-term inside the exponent \eqref{def of Gamma^x_n} of $ \Gamma_n $ has also been absorbed into the constant $ \gamma_- $.
The second property of the decomposition \eqref{decomposition of r sum} is that $ (Z_k:k\in\N) $ constitutes a sequence of bounded martingale increments in the \emph{sparse filtration} $ \mathbb{F}' = (\mathcal{F}'_k) $, $ \mathcal{F}'_k := \mathcal{F}_{i_k} \equiv \sigma(B_1,B_2,\dots,B_{i_k}) $: the boundedness of $ Z_k $ is obvious as it is an average of $ \floor{1/w} $ uniformly bounded increments, while the martingale property holds, since $ X $ is Markov:
\[
\tE \Biggl( w\msp{-18}\sum_{\msp{12}i=i_{k-1}+1}^{i_k} \msp{-10}r(X_{i-1}) B_i^2 \Bigg|\mathcal{F}'_{k-1} \Biggr)(\omega)
\;=\;
\tE\Biggl\{w\sum_{i=1}^{\floor{1/w}} r\Bigl(X^{X_{i_{k-1}}\!(\omega)}_{i-1}\Bigr) B_i^2\, \Biggr\}
\;\equiv\; \gamma(X_{i_{k-1}}(\omega))
\,,
\]
for a.e. $ \omega \in \Omega $.
We want to consider both sums on the right side of \eqref{1/Gamma_n two martingales left} as martingales. Since this is not possible under the same expectation, we apply H\"older's inequality to split the expectation into a product of separate expectations
\bels{expectation of Gamma by Holder}{
\tE (1/\Gamma_n) \;
&\leq\;
\nE^{-\gamma_{-} w^2n}
\Biggl\{\tE \exp \biggl[ -pw \sum_{i=1}^n s(X_{i-1})B_i\biggr]\Biggr\}^{1/p}
\Biggl\{\tE \exp \biggl[ -p'w\sum_{k=1}^m Z_k\biggr] \Biggr\}^{1/p'}
\,,
}
where $ p,p' \ge 1 $ and $ 1/p + 1/p' = 1 $.
We can now bound both of these expectations with the help of Lemma \ref{lmm:Freedman's and Azuma's martingale exponent bounds}. Azuma's exponential bound \eqref{Freedman's and Azuma's exponent bounds} is sufficient for the second factor: if $ \abs{Z_k} \leq C_Z $, then
\bels{final bound for Z-exponent}{
\Biggl\{\tE \exp \biggl[ -p'w\sum_{k=1}^m Z_k\biggr] \Biggr\}^{1/p'}
\msp{-6}\leq\;
\Biggl\{\exp\biggl[\frac{(-p'w)^2}{2}C_Z^2\floor{nw}\biggr]\Biggr\}^{1/p'}
\msp{-6}\leq\;
\nE^{\beta_2 p' w^3n}
\,,
}
for some constant $ \beta_2 $.
In order to handle the first expectation of \eqref{expectation of Gamma by Holder} we note that the martingale $ (M_j) $, defined by $ \Delta M_j := s(X_{j-1})B_j $, $ j \in \N $ and $ M_0 = 0 $, has bounded increments. Moreover, since $ \tE[(\Delta M_i)^2|\mathcal{F}_{i-1}] = \tE(B^2) \cdot s^2(X_{i-1}) $, we see that for sufficiently small $ \varepsilon > 0 $:
\bea{
V_n
\;:=\;
\sum_{i=1}^n\tE\bigl[(M_i-M_{i-1})^2\big| \mathcal{F}_{i-1}\bigr]
\;=\;
\tE(B^2) \sum_{i=1}^n s^2(X_{i-1}) \;\leq\; (1-\varepsilon)\tE(B^2)\norm{s}_\infty^2 n
\,.
}
In order to get the last bound above, one uses the property (i) of Corollary \ref{crl:the three qualitative properties of X}, the continuity of $ s $ and $ s(0)= 0 $, to conclude that there must exist $ \varepsilon > 0 $ such that
\[
\absb{\bigl\{0 \leq i \leq n-1: \abs{s^2(X_i)} \leq \norm{s}_\infty^2/2 \bigr\}} \;\ge\; 2\varepsilon \msp{1}n
\,.
\]
This, by definition, implies the bound of $ V_n $ above.
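Indeed, splitting the sum defining $ V_n $ over these at least $ 2\varepsilon n $ indices and the remaining ones gives

```latex
\[
V_n
\;\leq\;
\tE(B^2)\,\Bigl[ (1-2\varepsilon)\,\norm{s}_\infty^2 \,+\, 2\varepsilon \cdot \tfrac{1}{2}\norm{s}_\infty^2 \Bigr]\, n
\;=\;
(1-\varepsilon)\,\tE(B^2)\,\norm{s}_\infty^2\, n
\,.
\]
```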
Applying Freedman's bound of Lemma \ref{lmm:Freedman's and Azuma's martingale exponent bounds} with $ v_n := \tE(B^2)(1-\varepsilon)\norm{s}_\infty^2 n $ and $ \abs{M_i-M_{i-1}} \leq C_M =: m $ yields
\bels{final bound for s-exponent}{
\Biggl\{\tE \exp \biggl[ (-wp) \sum_{i=1}^n s(X_{i-1})B_i\biggr]\Biggr\}^{1/p}
\;&\leq\;
\Biggl\{ \exp\biggl[\kappa_{C_M}(-wp)\tE(B^2)(1-\varepsilon)\norm{s}_\infty^2 n \biggr]\Biggr\}^{1/p}
\\
&\leq\;
\nE^{\frac{1}{2}pw^2(1-\varepsilon)\tE(B^2)\norm{s}_\infty^2n \,+\,
\beta_1 \msp{1}p^2w^3n
}
\,,
}
where $ \beta_1 > (1/6)(1-\varepsilon)\tE(B^2)C_M \nE^{C_Mpw} \sim 1 $.
Plugging \eqref{final bound for s-exponent} and \eqref{final bound for Z-exponent}, along with the estimate \eqref{bound of gamma_-} for $ \gamma_- $, into \eqref{expectation of Gamma by Holder} results in the total bound
\bels{collection of terms for 1/Gamma_n expectation}{
\tE(1/\Gamma_n)
\;\leq\;
\nE^{-\tE(B^2)\bigl\{\int_\T r(y) \dif y \,-\, p\msp{1}(1-\varepsilon)\frac{\norm{s}_\infty^2}{2} \bigr\}w^2n \,+\, \beta_0 w^{5/2}n \,+\,\beta_1 p^2w^3n\,+\, \beta_2p'w^3n+ Cw^3n}
\,.
}
Here the term inside the curly brackets would vanish if $ p=1 $ and $ \varepsilon = 0 $, because
$ \int_\T r(y) \dif y = \norm{s}_\infty^2/2 = \pi^2/8 $. However, since $ \varepsilon > 0 $ we can take $ p > 1 $ such that the term remains positive.
Moreover, by taking $ w_0 $ sufficiently small, the last four terms, regardless of the size of $ p' $ or $ \beta_0,\beta_1,\beta_2,C $, can be made arbitrarily small compared to the first part.
\end{Proof}
\section{Potential theory}
\label{sec:Potential theory}
This section is devoted to the statement and the proof of Proposition \ref{prp:Potential theory} below.
The derivation of the inequalities \eqref{upper bound} and \eqref{lower bound} constitutes a relatively classical problem in potential theory for Markov chains.
However, it does not seem possible to apply classical results (see e.g. \cite{Coulhon-1990} and \cite{Coulhon-1993}), since the chain $X$ is neither reversible, nor uniformly diffusive.
In particular, little appears to be known on lower bounds of the type \eqref{lower bound} for non-reversible Markov chains.
Results for Markov chains on a lattice \cite{Mustapha-2006}, or for differential equations in non-divergence form \cite{Escaurazia-2000}, do not adapt straightforwardly (and maybe not at all) to our case.
Instead, since we consider only the case $w\to 0$,
it has been possible to treat the left hand side of \eqref{upper bound} and \eqref{lower bound}
as a perturbation of quantities that can be computed explicitly.
We are then able to handle both of these bounds with a single method.
\Proposition{Potential theory}{
Let $\kappa>0$, and let $ h \in \Cont^1(\T) $. There exist $ K,K',w_0 > 0 $ such that,
for every $ w \in \,]0, w_0] $,
for every function $ u \in \Lspace^1(\T;\R_+)$,
for every $ x \in \R $,
and for every $ n\in\N$,
one has
\begin{subequations}
\label{Potential theory}
\begin{align}
\tE\big( \nE^{w \sum_{k=1}^n h(X_{k-1}^x) B_k}\, u( X_n^x) \big) \;
&\leq\;
\frac{K}{w\sqrt{n}} \int_\T u(y) \dif y
\qquad
(wn\ge \kappa, \, w^2 n \le 1)
\,,
\label{upper bound}
\\
\tE \big( \nE^{w \sum_{k=1}^n h(X_{k-1}^x) B_k}\, u(X_n^x) \big) \;
&\ge\;
K' \int_\T u(y) \dif y
\qquad
(1/2 \leq w^2n \leq 1)
\,.
\label{lower bound}
\end{align}
\end{subequations}
}
Before starting the proof let us make a few definitions. First, for $ A \subset \T $ and $ 1 \leq p \leq \infty $ we define the space
\[
\Lspace^p_A(\T) \;:=\; \{ u\in\Lspace^p(\T):\spt(u)\subset A \}
\,.
\]
Secondly, let $ S $ be a continuous operator from $ \Lspace^p(\T) $ to $\Lspace^q(\T)$, for $ 1\leq p,q \leq \infty$, and denote the associated operator norm by $ \norm{S}_{p \to q } = \sup \{ \norm{Su}_q:u\in\Lspace^p(\T), \, \norm{u}_p \leq 1 \} $.
The content of Proposition \ref{prp:Potential theory} is twofold.
First, it describes the approach to equilibrium of the chain $X$.
To see this, let us consider the case $h=0$, and let us take some subset $A\subset \T$.
Equation \eqref{upper bound} implies that $\Prob(X_n^x \in A) \lesssim \max \lbrace 1/(w\sqrt n), 1 \rbrace\mathrm{Leb}(A)$ when $wn\ge \kappa$,
whereas \eqref{upper bound} and \eqref{lower bound} imply that $\Prob(X_n^x \in A)\sim \mathrm{Leb}(A)$ when $w^2n\ge 1/2$.
This is obvious when $w^2n \le 1$.
But, if $w^2n>1$, one can write $n=n_1 +n_2$ such that $\ceil{n_2 w^2} =1$, and
\[
\Expectation\msp{1}u(X_n^x) \;=\;
\int_{\mathbb T } \Expectation \big( u(X_n^x)| X_{n_1}^x = y \big) \, \Prob (X_{n_1}^x \in\dif y)
\,.
\]
The result follows since, if $ y \in \T$, one has $\Expectation \bigl[ u(X_n^x)\big| X_{n_1}^x = y \bigr] = \Expectation\msp{1}u(X_{n_2}^y) \sim \norm{u}_1 $.
Secondly, Proposition \ref{prp:Potential theory} asserts that the result obtained for $h=0$ is not destroyed when some specific perturbation is added ($h\ne 0$).
If $h\ne 0$ but $u=1$, the results \eqref{upper bound} and \eqref{lower bound} are trivial.
Indeed, by Azuma's inequality \eqref{Azuma's inequality}, one finds some $C>0$ such that, for every $n\in\N$ and for every $a>0$,
one has
\[
\Prob \Big(
\nE^{-a} \le \nE^{w \sum_{k=1}^n h(X_{k-1}^x) B_k} \le \nE^{a} \Big)
\;\ge\;
1 \,-\, 2\msp{1} \nE^{-\frac{Ca^2}{w^2n}}
\,.
\]
So, in general, one sees that the rare events where
$\nE^{w \sum_{k=1}^n h(X_{k-1}^x) B_k} $ is very large or very close to zero may essentially be neglected.
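For completeness: the display above follows by applying \eqref{Azuma's inequality} to the martingale $ M_n := \sum_{k=1}^n h(X_{k-1}^x) B_k $, whose increments satisfy $ \abs{M_k - M_{k-1}} \leq \norm{h}_\infty \max(-b_-,b_+) =: m $. Taking $ r = a/w $ gives

```latex
\[
\Prob\biggl( \Bigl| w \sum_{k=1}^n h(X_{k-1}^x) B_k \Bigr| \ge a \biggr)
\;\leq\;
2\, \nE^{-\frac{a^2}{2 m^2 w^2 n}}
\,,
\]
```

so the claim holds with $ C = 1/(2m^2) $.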
In the sequel, one assumes that
\begin{itemize}
\item[(A1)] $ \kappa>0$ and $h\in\Cont^1(\T)$ are given,
\item[(A2)] $ w\in\,\rbrack 0, w_0\rbrack$, where $w_0$ is small enough to make all our assertions valid.
\end{itemize}
All the constants introduced below may depend on $\kappa$ and $h$.
In order to prove Proposition \ref{prp:Potential theory}, let us introduce a continuous operator $T$ on $\Lspace^p(\T)$, $1 \leq p\leq \infty $, by setting
\bels{T}{
T u (x) \;:=\;
\Expectation \big[ (1 + wh(x)B)\, u\circ f_B(x) \big]
\;=\;
\int_{b_-}^{b_+} u\circ f_b(x) \, (1 + wh(x)b)\, \tau (b)\dif b
\,.
}
Since $\Expectation (B) = \int b \, \tau (b)\dif b = 0$, one has $T1 = 1$ and $\norm{T}_{\infty\to\infty } = 1$.
The operator $T$ is thus, formally, the transition operator of some Markov chain on the circle.
But, for every $b\in\lbrack b_-,b_+\rbrack$ and every $x\in\T$, one has
\[
\nE^{wh(x)b} \;=\; (1 + wh(x)b) \cdot \nE^{\mathcal O(w^2) }
\,.
\]
Therefore, for every $u\in\Lspace^1(\T;\R_+)$, for every $n\in\N$ satisfying $w^2 n\le 1$,
and for almost every $x\in\T$, one has
\bels{Ex Tn}{
T^n u (x) \;\sim\; \Expectation\bigl( \nE^{w \sum_{k=1}^n h(X_{k-1}^x) B_k}\, u( X_n^x)\bigr)
\,.
}
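One way to see \eqref{Ex Tn} is to iterate the definition \eqref{T} along the chain: for a.e. $ x \in \T $,

```latex
\[
T^n u(x)
\;=\;
\Expectation \Biggl[\, \prod_{k=1}^n \bigl( 1 + w\msp{1}h(X^x_{k-1}) B_k \bigr)\, u(X^x_n) \Biggr]
\,,
\]
```

and each factor equals $ \nE^{w h(X^x_{k-1})B_k} \cdot \nE^{\mathcal{O}(w^2)} $, so the product differs from $ \nE^{w \sum_{k=1}^n h(X^x_{k-1}) B_k} $ only by a factor $ \nE^{\mathcal{O}(w^2 n)} \sim 1 $ when $ w^2 n \leq 1 $.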
Let $ y \in \T $. The proof of Proposition \ref{prp:Potential theory} rests on the fact that, when $T^n$ acts on a function $u\in\Lspace^1_{\Ball(y,w^2)}(\T)$, it can be well approximated by an operator $S_{y,n}$ which can be explicitly studied.
In order to define $S_{y,n}$, let us first introduce the convolution operator $ T_y $ on $ \Lspace^p(\T) $, $1\le p\le \infty$, by setting
\bels{Ty}{
T_y u (x) := \int u\circ g_b(x,y) \, (1 + wh(y)b) \, \tau (b)\dif b
\,,
}
where
\bels{gbxy}{
g_b(x,y) \;:=\; x \,+\, \vartheta \,+\, \Phi(y,b) \;=\; x \,+\, w \,+\, w\msp{1}\phi(y)b \,+\, w^2 \psi (y) b^2 \,+\, \mathcal O(w^3)
\,,
}
with $ \Phi $ defined as in \eqref{def of Phi}, and $\phi$ and $\psi$ defined as in \eqref{def of phi} and \eqref{def of psi}.
Then, one sets $S_{y,0}:= \mathrm{Id}$ and defines, for each $ n \in \N $,
\begin{subequations}
\begin{align}
\label{Syn}
S_{y,n} &:=\; T_{y-nw} \cdots T_{y-w}.
\\
\label{Ry}
R_y &:=\; T - T_y.
\end{align}
\end{subequations}
The core of our approximation scheme is described by equation \eqref{approximation scheme} below,
but let us now describe it heuristically.
Let $z\in\T$, and let $u\in \Lspace^1_{\Ball(z,w^2) }(\T; \R_+)$.
The support of $u\circ f_b$ should be centered at $z-w$, and so $g_b(\genarg, z-w)$ is likely to be the best approximation of $f_b$, among all the maps $g_b(\genarg, y)$ ($y\in\T$).
Therefore, one can think of $T_z$ as one of the best approximations of $T$ among all the operators $T_y$ ($y\in\T$).
One writes
\bels{loc heuristique 2}{
T^n u \;=\; T^{n-1}\msp{-2}R_{z-w} u \,+\, T^{n-1}T_{z-w} u
\,,
}
where $R_{z-w}$ is defined by \eqref{Ry}. The first term in the right hand side of \eqref{loc heuristique 2} can be bounded by means of our estimates on $R_{y}$ ($y\in\T$), in Lemmas \ref{lmm:estimates Rk} or \ref{lmm:estimates Rk L1} below.
One is thus left with the second term.
From the definition \eqref{Ty} of $T_y$ ($y\in\T$), the function $T_{z-w}u$ will be approximately centered at $z-w$. One now approximates $T$ by $T_{z-2w}$ and one obtains
\[
T^{n-1}T_{z-w} u \;=\; T^{n-2}R_{z-2w} T_{z-w} u \,+\, T^{n-2}T_{z-2w}T_{z-w} u
\,.
\]
Again, one is left with the second term.
But, continuing that way, one finally needs to handle the term $T T_{z-(n-1)w}\dots T_{z-w} u$, and one arrives at
\bels{loc heuristique 1}{
T\, T_{z-(n-1)w} \cdots T_{z-w}\, u \;=\; R_{z - nw}\,T_{z-(n-1)w} \cdots T_{z-w}\, u \;+\; T_{z-nw} \cdots T_{z-w}\, u
\,.
}
By the definition \eqref{Syn}, one has $T_{z-nw} \cdots T_{z-w} u = S_{z,n} u$. So, this time, the second term in \eqref{loc heuristique 1} can be bounded from above and below by the explicit estimates contained in Lemma \ref{lmm:approximate kernels} below. By means of Lemmas \ref{lmm:estimates Rk} and \ref{lmm:estimates Rk L1}, one thus needs to show that the sum of the terms containing an operator of the form $R_y$ ($y\in\T$) does not destroy the estimate on $S_{z,n}u$.
The rest of the section is organized as follows.
In Lemma \ref{lmm:approximate kernels}, one obtains some bounds on the functions $S_{y,n}u$ for $u\in\Lspace^1_{\Ball(y,w^2)}(\T)$.
The same bounds should be obtained for a Gaussian of variance $nw^2$ centered at $y$.
The proof turns out to be a straightforward computation, since the operators $T_y$ are diagonal in Fourier space.
Next, Lemmas \ref{lmm:estimates Rk} and \ref{lmm:estimates Rk L1} give us bounds on $R_y$.
Lemma \ref{lmm:estimates Rk L1} is not crucial, and only needs to be used when $n < 8$, since then
the function $S_{y,n}u$ may not be smooth enough for Lemma \ref{lmm:estimates Rk} to be applied.
Some easy results about the localization of the functions $T^n u$ and $S_{y,n} u$, for $u\in\Lspace^1_{\Ball(y,Cw)}(\T)$, are then given in Lemma \ref{lmm:localization measures}.
Finally, the proof of Proposition \ref{prp:Potential theory} is given.
Let us notice that, in Lemma \ref{lmm:approximate kernels}, and consequently in the proof of Proposition \ref{prp:Potential theory},
one has to distinguish between the case where $y\sim 0$, and the case where $y$ is away from 0.
This comes from the lack of diffusivity of the chain $X$ around 0 (see property (iii) of Corollary \ref{crl:the three qualitative properties of X}).
\Lemma{approximate kernels}{
Let $\epsilon>0$.
There exists $K>0$ such that,
for every $n\in\N$ satisfying $8 \le n \le w^{-2}$,
for every $y\in\T-\Ball(0,\epsilon)$,
and for every $u\in\Lspace ^1_{\Ball(y,w^2)}(\T; \R_+)$,
one has $S_{y,n} u \in \Cont^2(\T)$ and, for every $x\in\T$,
\begin{subequations}
\begin{align}
&\abs{\partial_x^l S_{y,n} u(x)}
\;\leq\;
\frac{\msp{-3}K\msp{1}\norm{u}_1}{(w\sqrt{n})^{(l+1)}}
\,,
\qquad\quad l=0,1,2,
\label{approximate kernels 1}
\\
&\absb{\sin^k \pi (x+wn-y) \cdot \partial_x^k S_{y,n} u (x)}
\;\leq\;
\frac{K \norm{u}_1}{w\sqrt{n}}\,,
\quad k=1,2
\,.
\label{approximate kernels 2}
\end{align}
\end{subequations}
Moreover, when $ \epsilon $ is small enough,
there exists $K'(\epsilon) >0$, with $K'(\epsilon)\to \infty$ as $\epsilon\to 0$, such that,
for every $n\in\N$ satisfying $\epsilon\le w^2n\le 2 \epsilon$,
for every $x,y\in\T$,
and for every $u\in\Lspace ^1_{\Ball(y,w^2)}(\T; \R_+)$,
\bels{approximate kernels 3}{
\abs{S_{y,n} u (x)} \;\ge\; K'(\epsilon)\norm{u}_1
\quad\text{when}\quad \abs{x+nw-y}_\T \leq\, 10\msp{1} \epsilon
\,.
}
}
The proof is deferred to Appendix \ref{assec:proof for approximate kernels}.
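Estimate \eqref{approximate kernels 1} matches the scaling of a Gaussian density of standard deviation $\sigma = w\sqrt n$, for which $\sup_x \abs{\partial_x^l}$ equals a constant times $\sigma^{-(l+1)}$. A quick numerical check of this scaling (an illustration only, independent of the operators above):

```python
import math

def sup_deriv(sigma, l, pts=20001, span=8.0):
    # sup over a grid of |d^l/dx^l| of the N(0, sigma^2) density, l = 0, 1, 2
    def d(x):
        g = math.exp(-x * x / (2.0 * sigma**2)) / (sigma * math.sqrt(2.0 * math.pi))
        u = x / sigma**2
        if l == 0:
            return g
        if l == 1:
            return -u * g
        return (u * u - 1.0 / sigma**2) * g  # l == 2
    xs = (-span * sigma + 2.0 * span * sigma * i / (pts - 1) for i in range(pts))
    return max(abs(d(x)) for x in xs)

for l in range(3):
    # sigma^{l+1} * sup|d^l| should be a sigma-independent constant
    r1 = sup_deriv(0.1, l) * 0.1 ** (l + 1)
    r2 = sup_deriv(0.01, l) * 0.01 ** (l + 1)
    assert abs(r1 - r2) < 1e-6
```

In the lemma, $\sigma = w\sqrt n$ and $l = 0,1,2$ give exactly the powers $(w\sqrt n)^{-(l+1)}$ of \eqref{approximate kernels 1}.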
\Lemma{estimates Rk}{
There exists $K>0$ such that, for every $u\in\Cont^2 (\T)$ and every $y\in \T$, one has
\bels{estimates Rk}{
\norm{R_y u}_\infty
\,\leq\; K w^2 \Bigl\{ \,&\normb{\sin \pi(\genarg - y-w) \cdot u'}_\infty
+\, w \msp{1}\norm{u'}_\infty \\
+\, &\normb{\sin^2 \pi (\genarg - y-w) \,\cdot\, u''}_\infty
+\, w \msp{1}\norm{u''}_\infty\Bigr\}
\,.
}
}
\begin{Proof}
One takes some $u\in\Cont^2(\T)$, and one fixes $x,y\in\T$.
From the definitions \eqref{T} and \eqref{Ty}, one has
\bea{
R_y u (x) \;\equiv\; (T- T_y) u (x)
\;=&\; \int (u\circ f_b(x) - u\circ g_b(x,y))\, (1 + wh(x)b)\, \tau(b)\dif b
\\
&+\, w\msp{1}(h(x) - h(y))\int u \circ g_b (x,y)\, b \, \tau (b)\dif b
\\
=:&\; A_1 \,+\, A_2
\,.
}
It is enough to bound $|A_1|$ and $|A_2|$ by the right hand side of \eqref{estimates Rk}.
Let us first bound $ \abs{A_1} $. By the mean value theorem, and the definitions \eqref{expansion of fb, def of phi and psi} and \eqref{gbxy} of $ f_b $ and $ g_b $, one has
\[
u\circ f_b (x) \,-\, u\circ g_b (x,y) \;=\;
u'(x + w + \xi_1)\,
\Bigl(
w \bigl[\phi(x)-\phi(y)\bigr]b \,+\, w^2 \bigl[\psi(x)-\psi(y)\bigr] b^2 \,+\, \mathcal O(w^3)
\Bigr)
\,,
\]
where $\xi_1 \equiv \xi_1(b)$ is such that
\begin{equation}
\label{bound-xi-1}
|\xi_1 | \;\leq\; w\msp{1}|\phi(x)-\phi(y)| \,+\, \mathcal O(w^2)
\,.
\end{equation}
By the mean value theorem again, one has
\[
u'(x+w+\xi_1) \;=\; u'(x+w) \,+\, u''(x+w+\xi_2)\, \xi_1
\,,
\]
where $ \xi_2 \equiv\xi_2(b)$ is such that $|\xi_2 |\le |\xi_1 |$.
Therefore, setting $\tilde\tau (b) = (1 + wh(x)b)\, \tau(b)$, one can write $A_1$ as
\bea{
A_1 \;=\; u'(x+w)\int \Big( w \bigl[ \phi(x)-\phi(y)\bigr] b \,+\, w^2 \bigl[\psi(x)-\psi(y)\bigr] b^2 \,+\, \mathcal O(w^3)\Big) \tilde\tau(b)\dif b&
\\
+
\int u''(x+w+\xi_2(b))\, \xi_1(b)\, \bigl( w \bigl[\phi(x)-\phi(y)\bigr] b \,+\, \mathcal O(w^2)\bigr) \tilde\tau(b)\dif b&
\,.
}
One has
\[
|\phi(x)-\phi(y) |
\;\lesssim\;
|\sin\pi(x-y)|
\qquad \text{and}\qquad
|\psi(x) - \psi(y) |
\;\lesssim\;
|\sin\pi (x-y)|
\,.
\]
So, taking into account the bound \eqref{bound-xi-1} and the fact that $\int b\tau(b)\dif b = 0$,
one gets
\bels{Rk loc 1}{
|A_1|
\;\lesssim\;
&w^2 |u'(x+w)| \, |\sin \pi(x-y)| \,+\, w^3 \norm{u'}_\infty
\\
+\; &w^2 \!\int |u''(x+w+\xi_2(b))|\,\sin^2\pi(x-y) \, \tilde\tau (b)\dif b \,+\, w^3 \norm{u''}_\infty
\,.
}
But one has $\sin^2\pi (x-y) \le \sin^2 \pi (x + \xi_2-y) + \mathcal O(w)$.
So, inserting this last bound in \eqref{Rk loc 1}, one sees that $|A_1|$ is bounded by the right hand side of \eqref{estimates Rk}.
Let us then bound $|A_2|$.
By the mean value theorem and the definition \eqref{gbxy} of $g_b$, one writes
\[
u\circ g_b(x,y) \;=\; u(x + w) \,+\, u'(x+w + \xi)\, \mathcal{O}(w)
\,,
\]
where $\xi \equiv \xi (b) = \mathcal O(w)$. Therefore, taking into account that $\int b\tau(b)\dif b=0$ and that
$ |h(x) - h(y)| \lesssim |\sin\pi(x-y)| $, one obtains
\bea{
|A_2| \;&\lesssim\; w^2 \int |\sin\pi(x-y)|\cdot |u'(x+w+\xi(b))|\cdot |b| \tau (b)\msp{1}\dif b
\\
&\lesssim\;
w^2 \bigl(\, \normb{\sin \pi(\genarg - y-w) \cdot u'}_\infty +\, w\norm{u'}_\infty \bigr)
\,.
}
This finishes the proof.
\end{Proof}
\Lemma{estimates Rk L1}{
Let $ K,\epsilon >0 $. Let $ y\in\T $ be such that $ \abs{y}_\T \ge \epsilon $.
Then there exists $ K'>0 $ such that, for every $u\in \Lspace^1_{\Ball(y,Kw) }(\T)$, one has
\bels{estimates Rk L1}{
\norm{R_y u}_1 \le K' w \norm{u}_1
\,.
}
Moreover $ Tu \in \Lspace^\infty(\T) $, and one has
\bels{estimates T L1}{
\norm{Tu}_\infty \le K' \, w^{-1}\, \norm{u}_1
\,.
}
}
\begin{Proof}
The constants introduced in this proof may depend on $K$ and $\epsilon$.
Let $u\in \Lspace^1_{\Ball(y,Kw) }(\T)$.
One writes
\bels{T Ty noyaux}{
Tu(x) = \int_{\Ball(y,Kw) } t(x,z) u(z)\dif z
\quad\text{and}\quad
T_yu(x) = \int_{\Ball(y,Kw)} t_y(x,z) u(z)\dif z
\,,
}
where the functions $t$ and $t_y$ are obtained by performing a change of variables in the definitions \eqref{T} and \eqref{Ty} of $T$ and $T_y$.
Setting $F_x(b):=f_b(x)$ and $G_x(b):=g_b(x,y)$, where $f_b$ and $g_b$ are defined in \eqref{expansion of fb, def of phi and psi} and \eqref{gbxy},
one obtains
\bels{pot th l1 2}{
t(x,z) \;&=\; (1 + w h(x)F_x^{-1}(z))\, \tau(F_x^{-1}(z))\, \partial_z F_x^{-1}(z)
\,,
\\
t_y(x,z) \;&=\; (1 + w h(y)G_x^{-1}(z))\, \tau(G_x^{-1}(z))\, \partial_z G_x^{-1}(z)
\,.
}
Let $z\in \Ball(y,Kw)$ be given.
Let us see that $t(\genarg, z)$ and $t_y(\genarg, z)$ are well defined functions. The support of $t(\genarg, z)$ (respectively of $t_y(\genarg,z)$) is the support of $\tau\circ F_{(\genarg)}^{-1}(z)$ (resp. of $\tau \circ G_{(\genarg)}^{-1}(z)$). The support of $\tau\circ F_{(\genarg)}^{-1}(z)$ is made of all the $x$ such that
\[
b_- \le F_x^{-1}(z)\le b_+ \; \Leftrightarrow\; f_{b_-}(x) \le z \le f_{b_+}(x)\; \Leftrightarrow\; f_{b_+}^{-1}(z) \le x \le f_{b_-}^{-1}(z)
\,.
\]
One obtains a similar relation for the support of $\tau \circ G_{(\genarg)}^{-1}(z)$ and one gets therefore
\[
\spt(t(\genarg,z))\,, \; \spt(t_y(\genarg , z)) \,\subset\, \Ball (z, Cw) \,\subset\, \Ball(y, C'w).
\]
The hypothesis $|y|_\T \ge \epsilon$ ensures that the maps $F_x$ and $G_x$ are invertible when $x\in \Ball(y, C'w)$, and actually that
\bels{pot th l1 1}{
\partial_b F_x(b) \,\gtrsim\, w
\quad\text{and}\quad \partial_b
G_x(b) \,\gtrsim\, w
\,.
}
This shows in particular that $t(\genarg, z)$ and $t_y(\genarg, z)$ are bounded functions.
Let us now show \eqref{estimates Rk L1}.
Taking \eqref{pot th l1 2} into account, one has, from the definition \eqref{Ry} of $R_y$,
\bels{}{
\norm{R_y u}_1 \le \int_{\Ball(y,Kw) } |u(z)|\dif z \int_{\Ball(y,C'w) }|t(x,z) - t_y(x,z)|\dif x.
}
It is therefore enough to show that, for every $z\in \Ball(y,Kw)$, one has
\bels{pot th l1 a}{
\int_{\Ball(y,C'w) }|t(x,z) - t_y(x,z)|\dif x \;=\; \mathcal O(w)
\,.
}
Let us take some $z\in \Ball(y,Kw)$ and some $x\in\Ball(y,C'w)$.
Since $b_-\le F_x^{-1}(z),G_x^{-1}(z)\le b_+$, since $\tau$ is bounded, and since \eqref{pot th l1 1} holds, one finds, starting from \eqref{pot th l1 2}, that
\bels{pot th l1 b}{
|t(x,z) - t_y(x,z)|
\;\lesssim\;
|\partial_z F_x^{-1}(z) - \partial_z G_x^{-1}(z) | \,+ w^{-1} |\tau (F_x^{-1}(z)) - \tau (G_x^{-1}(z)) | \,+\, C.
}
For every $b\in\lbrack b_-,b_+\rbrack$, one has $\partial_b F_x(b) = w\phi(x) + \mathcal O(w^2)$ and $\partial_b G_x(b) = w\phi(y) + \mathcal O(w^2)$.
Therefore
\bels{pot th l1 c}{
|\partial_z F_x^{-1}(z) - \partial_z G_x^{-1}(z) | \;&\leq\; \Big|\frac{1}{w\phi (x) + \mathcal O(w^2) } - \frac{1}{w\phi (y) + \mathcal O(w^2) } \Big|
\\
&\lesssim\; w^{-1} |\phi (y) - \phi (x) + \mathcal O(w) | \;\lesssim\; 1
\,,
}
since $|y - x| = \mathcal O(w)$.
Inserting thus \eqref{pot th l1 c} in \eqref{pot th l1 b}, and then \eqref{pot th l1 b} in \eqref{pot th l1 a}, one finds
\bels{pot th l1 z}{
\int_{\Ball(y,C'w) }|t(x,z) - t_y(x,z)|\dif x\;
&\lesssim\;
w^{-1} \int_{\Ball(y,C'w) } |\tau (F_x^{-1}(z)) - \tau (G_x^{-1}(z)) | \dif x \,+\, \mathcal O(w)
\\
&=: \;w^{-1} I \,+\, \mathcal O(w)
\,.
}
It thus remains to show that $I = \mathcal O (w^2)$.
For this, let us define
\[
D_1 \,:=\; \lbrace x\in\T : b_-\le F_x^{-1}(z)\le b_+ \rbrace\,,
\quad\text{and}\quad
D_2 \,:=\; \lbrace x\in\T : b_-\le G_x^{-1}(z) \le b_+ \rbrace
\,.
\]
One writes
\[
I \;= \int_{D_1\cap D_2 } (\dots ) + \int_{(D_1\cap D_2)^c}(\dots) \;=:\, I_1 \,+\, I_2
\,.
\]
First, when $x\in D_1 \cap D_2$, one uses the fact that $\tau\in\Cont^1(\lbrack b_-,b_+\rbrack)$, that
\[
|F_x^{-1}(z)-G_x^{-1}(z)|
\;=\;
\Big| \frac{z-x-w}{w\phi(x) } - \frac{z-x-w}{w\phi(y)} + \mathcal O(w)\Big|
\;=\;
\mathcal O(w)
\,,
\]
since $|z-x-w|=\mathcal O(w)$ and $|\phi (y)-\phi(x) |=\mathcal O(w)$,
and that $\mathrm{Leb}(D_1\cap D_2)= \mathcal O(w)$, to conclude that $I_1=\mathcal O(w^2)$.
Next, when $x\in (D_1\cap D_2)^c$, one has $t(x,z)=t_y(x,z)=0$, except on $D_1\, \Delta\, D_2$.
But, for every $b\in\lbrack b_-,b_+\rbrack$, one has $|f_b(x)-g_b(x,y)| = \mathcal O(w^2)$, since $|x-y|=\mathcal O(w)$.
So, one has $\mathrm{Leb}(D_1\, \Delta\, D_2) = \mathcal O(w^2)$, and thus $I_2= \mathcal O(w^2)$.
Let us finally show \eqref{estimates T L1}.
From \eqref{T Ty noyaux}, one has that $|Tu(x)| \le \sup_{z\in\Ball(y,Kw) }|t(x,z)| \cdot \norm{u}_1$.
The relations \eqref{pot th l1 2} and \eqref{pot th l1 1} allow us to obtain the result.
\end{Proof}
In order to prove the next lemma, we introduce the adjoint $T^*$ of $T$ with respect to the Lebesgue measure.
This operator is defined on $\Lspace^p(\T)$ ($1\le p \le \infty$) and is
such that,
for every $u\in\Lspace^p(\T)$ and every $v\in\Lspace^{p'}(\T)$, with $1/p + 1/p' = 1$,
one has
\bels{def adjoint}{
\int_{\T} v\, T^* u\, \dif x = \int_{\T} u\, Tv\, \dif x
\,.
}
From the definition \eqref{T} of $T$, one concludes that
\bels{adjoint}{
T^* u (x) \;= \int_{b_-}^{b_+} u\circ f_b^{-1}(x)\, \bigl[1 + w \, h\circ f_b^{-1}(x)\msp{1}b\bigr]\, \partial_x f_b^{-1}(x)\, \tau (b)\dif b
\,.
}
Therefore, when $u\ge 0$, one has
\bels{approximation adjoint}{
T^* u (x) \;\ge\;
\nE^{-\mathcal O(w)} \int u\circ f_b^{-1}(x)\, \tau (b)\dif b
\,.
}
For $z\in\R$, let us define the chain $Y =(Y_{n}^z : n\in\N_0)$ by $Y_{0}^z := z $ and
\bels{chaine Yzn}{
Y_{n}^z \;:=\; f_{B_n}^{-1}(Y_{n-1}^z) \;=\; Y_{n-1}^z -\, w \,-\, w \phi (Y_{n-1}^z)B_n \,+\, \mathcal{O}(w^2)
\,.
}
\Lemma{localization measures}{
Let $K>0$. There exist $K_2 \ge K_1>0$ such that, for every $n\in\N$, for every $y\in\T$,
and for every $u\in\Lspace^1_{\Ball(y,Kw)}(\T)$, one has
\bels{support measures}{
\spt(T^n u),\; \spt(S_{y,n}u) \,\subset\, \bigl[ y- K_2wn, y- K_1 wn \bigr]
\,.
}
Moreover, for every $ R>0 $ large enough, there exists $K'>0$ such that, for every $n\in\N$ satisfying $wn \leq 1 $,
for every $y\in\T$, and for every $u\in \Lspace^1_{\Ball(y,w)}(\T;\R_+)$, one has
\bels{concentration measure}{
\int_{\Ball(y-nw, R\sqrt w)} T^n u (z) \dif z \;\ge\; K' \norm{u}_1
\,.
}
}
\begin{Proof}
Let us first show \eqref{support measures}.
Let us consider the case of $T^n u$; the case of $S_{y,n}u$ is strictly analogous.
From the definition \eqref{T}, one sees that
\[
\spt(T^n u )
\;\subset\; \bigl[ f_{b_+}^{-n}(y-Kw/2), f_{b_-}^{-n} (y+Kw/2) \bigr]
\,.
\]
This implies the result, since, by the definition \eqref{expansion of fb, def of phi and psi} of $f_b$, one has, for every $x\in\T$ and every $b\in\lbrack b_-,b_+ \rbrack$,
\[
(1+b_-)w - \mathcal O(w^2)
\;\leq\;
x - f_b^{-1}(x)
\;\leq\;
(1+b_+)w + \mathcal O(w^2)
\,.
\]
Let us then show \eqref{concentration measure}.
Let $u\in\Lspace^1_{\Ball(y,w)}(\T;\R_+)$, let $R>0$, and let $n\in\N$ be such that $nw \le 1$.
From the definition \eqref{def adjoint} of the adjoint $T^*$, one has
\[
\int_{\Ball(y-nw, R\sqrt w) } T^n u (z) \dif z \;=\; \int_{\Ball(y,w) } T^{*n}\chi_{\Ball(y-nw,R\sqrt w) }(z)
\, u (z)\dif z
\,.
\]
It is therefore enough to show that, for every $z\in \Ball(y,w)$, one has $T^{*n}\chi_{\Ball(y-nw,R\sqrt w) }(z) \gtrsim 1$, if $R$ is large enough.
But, since $wn\le 1$, \eqref{approximation adjoint} implies that
\bels{pot th locc 2}{
T^{*n}\chi_{\Ball(y-nw,R\sqrt w) }(z) \;
&\gtrsim\;
\Expectation\bigl(\chi_{\Ball(y-nw,R\sqrt w)}\circ f_{B_n}^{-1}\circ \dots \circ f_{B_1}^{-1}(z) \bigr)
\\
&=\; 1 \,-\; \Prob\bigl( \abs{Y_{n}^z - (y-nw)} \ge R\sqrt{w}\bigr)
\,,
}
where $Y$ is defined in \eqref{chaine Yzn}.
Therefore, since $|z-y|=\mathcal O(w)$ and since $w^2n = \mathcal O(w)$,
one obtains, from the definition \eqref{chaine Yzn} of $Y$, and from Azuma's inequality \eqref{Azuma's inequality}, that
\bels{pot th locc 1}{
\Prob\bigl( \abs{Y_{n}^z - (y-nw)} \ge R \sqrt{w}\bigr) \;=\; \Prob\Big( \Big| w\sum_{k=1}^n \phi (Y_{k-1}^z)B_k + \mathcal O(w)\Big| \ge R\sqrt w \Big)
\;\leq\; 2 \msp{1} \nE^{-\frac{CR^2}{nw}}
\,.
}
The proof is finished by taking $R$ large enough, and inserting \eqref{pot th locc 1} in \eqref{pot th locc 2}.
\end{Proof}
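Azuma's inequality, which is used in \eqref{pot th locc 1} above and again in the next section, states that a martingale with increments bounded by $c$ satisfies $\Prob(\abs{M_n}\ge t)\le 2\nE^{-t^2/2nc^2}$. A minimal Monte Carlo illustration with $\pm 1$ increments (the parameters below are hypothetical, chosen only to make the comparison visible):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 200, 20000

# Simple martingale with increments bounded by c = 1: M_n = sum of signs.
eps = rng.choice([-1.0, 1.0], size=(trials, n))
M = eps.sum(axis=1)

t = 3.0 * np.sqrt(n)
empirical = np.mean(np.abs(M) >= t)          # empirical tail probability
azuma = 2.0 * np.exp(-t**2 / (2.0 * n))      # Azuma bound, here 2 e^{-9/2}
assert empirical <= azuma
```

In \eqref{pot th locc 1} the increments are of size $\mathcal O(w)$ and $t \sim R\sqrt w$, which produces the exponent $R^2/nw$.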
\begin{Proof}[Proof of Proposition \ref{prp:Potential theory}]
Let $n\ge 9$ be such that $nw^2 \le 1$.
Let us make three observations.
First, by \eqref{Ex Tn},
it is enough to show the proposition with $\Expectation_x (\nE^{w \sum_{k=1}^n h(X_{k-1}) B_k}\, u( X_n))$ replaced by $T^n u(x)$ in \eqref{upper bound} and \eqref{lower bound}.
Second, it is enough to prove the proposition for functions in $\Lspace^1_{\Ball(y,w^2) }(\T; \R_+)$ for every $y\in\T$.
So, throughout the proof, one assumes that $y\in\T$ is given, and the symbol $ v $ denotes a function in $\Lspace^1_{\Ball(y,w^2) }(\T; \R_+)$.
Third, it is enough to show \eqref{lower bound} for some $n'$ satisfying $w^2 n' \le 1/2$.
Indeed, let us now assume that \eqref{lower bound} is shown for this $n'$, and let $n$ be such that $1/2 \le w^2 n \le 1$.
From the definition \eqref{T}, one sees that, if $u_1\ge u_2$, one has $Tu_1\ge Tu_2$.
So, one writes $n = n' + n''$ and, for every $u\in\Lspace^1(\T,\R_+)$, one gets
$T^n u (x) = T^{n''}T^{n'}u(x) \gtrsim \norm{u}_1 T^{n''}1(x) \sim \norm{u}_1$,
where the fact that $T^{n''}1 \sim 1$ directly follows from the definition \eqref{T} of $T$, Azuma's bound \eqref{Freedman's and Azuma's exponent bounds}, and the hypothesis $ w^2 n\le 1 $.
The proof is now divided into three steps, but the core is entirely contained in the first one.
\textbf{Step 1: approximating $T^n$ by $S_{y,n}$:}
One here shows the bounds \eqref{upper bound} and \eqref{lower bound} under two particular assumptions:
\begin{enumerate}
\item One supposes that $|y|_\T \ge \epsilon_1$, for some $\epsilon_1>0$. The constants introduced below may depend on $\epsilon_1$.
\item Only for \eqref{lower bound},
one assumes that $n$ is such that $\epsilon_2\le w^2 n \le 2 \epsilon_2$ and that $|x+nw-y|_\T \le 10 \epsilon_2$ for some $\epsilon_2>0$ small enough.
\end{enumerate}
By the definition \eqref{Syn} of $S_{y,n}$, one can write
\bels{approximation scheme}{
T^n v \;&=\; S_{y,n}v \,+\, \sum_{k=1}^8 T^{n-k}R_{y-kw}S_{y,k-1}v \,+\, \sum_{k=9}^{n} T^{n-k}R_{y-kw}S_{y,k-1}v
\\
&=:\; S_{y,n}v \,+\, Q_1 \,+\, Q_2
\,.
}
Let us bound $\norm{Q_1}_\infty$.
Let $k\in\N$ be such that $ 1\le k\le8 $.
By \eqref{support measures}, one has
\bels{pot th loc abc}{
\spt(R_{y-kw}S_{y,k-1}v) \,\subset\, \Ball(y,Cw)
\,.
}
Remembering that $\norm{T}_{\infty\to\infty}=1$, one uses \eqref{estimates Rk L1} and \eqref{estimates T L1} to obtain that
\bels{pot th S1}{
\norm{Q_1}_\infty &\leq\, \sum_{k=1}^8 \norm{T^{9-k}R_{y-kw}S_{y,k-1}v}_\infty
\;\lesssim\;
w^{-1} \sum_{k=1}^8 \norm{R_{y-kw}S_{y,k-1}v}_1
\\
&\lesssim\, \sum_{k=1}^8 \norm{ S_{y,k-1}v}_1 \;\lesssim\; \norm{v}_1
\,,
}
where, for the last inequality, one has used the fact that $\norm{T_y}_{1\to 1 } = 1$ for every $y\in\T$.
Let us bound $\norm{ Q_2}_\infty$. By Lemma \ref{lmm:estimates Rk} and estimates \eqref{approximate kernels 2} and \eqref{approximate kernels 1} in Lemma \ref{lmm:approximate kernels}, one has, for $8\le k\le w^{-2}$,
\[
\norm{ T^{n-k}R_{y-kw}S_{y,k-1}v}_\infty
\;\leq\;
\norm{R_{y-kw}S_{y,k-1}v }_\infty
\;\lesssim\;
w^2 \biggl\{ \frac{1}{w\sqrt k } + \frac{w}{w^2k} +\frac{1}{w\sqrt k } +\frac{w}{w^3 k^{3/2}} \biggr\} \cdot \norm{v}_1.
\]
Therefore, since $w^2n \le 1$ by hypothesis, one gets
\bels{pot th S2}{
\norm{Q_2}_\infty
\;\lesssim\;
( w\sqrt n + w\log n + C)\norm{v}_1 \;\lesssim\; \norm{v}_1
\,.
}
So, from \eqref{approximation scheme}, \eqref{pot th S1} and \eqref{pot th S2},
one has
\[
\norm{T^n v - S_{y,n}v}_\infty \;\leq\; C\, \norm{v}_1
\,,
\]
where the constant $C$ is independent of $\epsilon_2$.
Therefore, in the particular case considered, \eqref{upper bound} follows from \eqref{approximate kernels 1} with $l=0$,
and \eqref{lower bound} follows from \eqref{approximate kernels 3}, if $\epsilon_2$ has been chosen small enough.
\textbf{Step 2: proof of \eqref{upper bound}:}
By Step 1, \eqref{upper bound} is known to hold when $|y|_\T \ge \epsilon_1$,
and one may now assume that $|y|_\T < \epsilon_1$.
Moreover, one still has the freedom to take $\epsilon_1$ as small as one wants.
One now uses the hypothesis $nw \ge \kappa$.
Let $m\in\N$ be such that $mw = \epsilon'$, for some $\epsilon'\in \rbrack 0,c/2\rbrack$.
If $\epsilon_1$ is small enough,
it follows from \eqref{support measures} that one can choose $\epsilon'$ such that $\spt(T^m v )\cap \Ball(0,\epsilon_1)=\emptyset$.
But
the particular case considered in Step 1 implies that \eqref{upper bound} is valid for any function in $\Lspace^1_{\T - \Ball(0,\epsilon_1)}(\T)$, and thus one has
\[
T^n v (x) \;=\; T^{n-m}T^m v (x)
\;\lesssim\;
\frac{\norm{T^m v }_1}{w\sqrt{n-m} }
\;\lesssim\;
\frac{\norm{v}_1}{w\sqrt{n}}
\,,
\]
where the last inequality follows from the fact that $\norm{T}_{1\to 1} \le \nE^{\mathcal O(w) }$, as can be seen from the definition \eqref{T}.
\textbf{Step 3: proof of \eqref{lower bound}:}
One will first establish \eqref{lower bound} for $n$ such that $n=\lfloor \epsilon_2w^{-2} \rfloor$, and for $x$ such that $|x+nw-y|_\T \le 10 \epsilon_2$.
By Step 1, it is now enough to consider the case $|y|_\T < \epsilon_1$.
Let now $m = \lfloor \frac{1}{2}w^{-1}\rfloor$, and let $R>0$.
If $R$ is taken large enough,
it follows from \eqref{concentration measure},
and from the particular case of \eqref{lower bound} already established in Step 1,
that
\[
T^n v (x) \,\ge\, T^{n-m} \msp{-1}\big( \chi_{\Ball(y-mw,R\sqrt w)} T^m v \big)(x)
\;\gtrsim\;
\int_{\Ball(y-mw,R\sqrt w)} T^m v (z)\dif z
\;\gtrsim\;
\norm{v}_1
\,.
\]
One finally needs to get rid of the assumption $|x+nw-y|_\T \le 10 \epsilon_2$.
One uses a classical technique \cite{Coulhon-1993}.
One shows \eqref{lower bound} for $n=kq$, with $k\ge 1/(18 \epsilon_2)$ and $q = \lfloor \epsilon_2 w^{-2} \rfloor$.
One already knows that
\bels{pot th: Tq v}{
T^{q}v
\;\gtrsim\;
\chi_{\Ball(y-qw,10 \epsilon_2) }\norm{v}_1
\,.
}
But one now will show that, for every $z\in\T$, and for every $s\in\lbrack \epsilon_2, 1\rbrack$,
one has
\bels{pot th: Tq chi}{
T^q \chi_{\Ball(z,s)} \gtrsim \epsilon_2\, \chi_{\Ball(z-qw,s+9\epsilon_2) }.
}
This will imply the result:
\bea{
T^{n}v \;&=\; T^{kq}v
\;\gtrsim\;
T^{(k-1)q} \chi_{\Ball(y-qw,10 \epsilon_2) } \, \norm{v}_1
\\
&\gtrsim\, \dots \,\gtrsim\;
\epsilon_2^{k-1}\chi_{\Ball(y-kqw, (10 + 9(k-1))\epsilon_2) } \, \norm{v}_1
\;\gtrsim\;
\epsilon_2^{k-1} \norm{v}_1
\,.
}
Let us thus show \eqref{pot th: Tq chi}.
Let $z\in\T$ and $s\in\lbrack \epsilon_2, 1\rbrack$.
Let us write $T^q u (x) = \int t_q (x,z')u(z')\dif z'$ for any $u\in\Lspace^1(\T)$.
Relation \eqref{pot th: Tq v} implies in fact that $t_q (x, \genarg ) \gtrsim \chi_{\Ball(x+qw,10 \epsilon_2) }(\genarg)$
(which may be formally checked by taking $u(x) = \delta (y-x)$).
Therefore
\bea{
T^q \chi_{\Ball(z,s)}(x)\;
&\gtrsim\; \int \chi_{\Ball(x + qw,10\epsilon_2) }(z') \cdot \chi_{\Ball(z,s) }(z') \dif z'
\\
&\gtrsim\; \epsilon_2\, \chi_{\Ball(z,s+9\epsilon_2) }(x+qw)
\;=\; \epsilon_2\, \chi_{\Ball(z-qw,s+9\epsilon_2) }(x)
\,.
}
This finishes the proof.
\end{Proof}
\section{Putting everything together}
\label{sec:Proof of Theorem}
In \cite{Casher-Lebowitz-71} p.~1710, Casher and Lebowitz derive the lower bound $\tE(J_n) \gtrsim (T_1 - T_n) n^{-3/2}$. However, their argument contains a gap, and consequently this lower bound still remains to be proven. Indeed, their proof is based on the following estimate of $ D_n(e_1) $ ($ K_{1,n} $ in their notation):
\bels{cas leb estimate}{
\tE\bigl[ D_n(e_1)^2\bigr] \;\sim\; \nE^{Cnw^2} \quad\text{as}\quad w \searrow 0
\,.
}
This bound is obtained by computing the eigenvalues of a $ 4 \times 4 $ matrix $ F $, defined in \cite{Casher-Lebowitz-71} p. 1710.
But this estimate cannot hold.
Indeed, we know for example, from Corollary \ref{crl:Fundamental decompositions of D_1n and D_2n} and Proposition \ref{prp:Potential theory}, that $\Expectation(D_{1,n}^2)\sim w^{-2}$ when $w^2n \sim 1 $.
Although the computation of the eigenvalues of $ F $ is correct, the authors do not take into account the fact that a $ w $-dependent change of variables is needed to obtain a correct estimate on $\Expectation[D_n(e_1)^2]$.
\subsection{Proof of the lower bound}
\label{ssec:Proof of the lower bound}
We begin with a lemma. Let $(L_n)$ and $(K_n)$ be the processes defined in Lemma \ref{lmm:Representations for the lower bound}.
\Lemma{bound for K_n and L_n probabilities}{
For every $\alpha >0$, there exists $C(\alpha)>0$, such that, for every $ a>0 $, and every $ n \in \N $ satisfying $ w^2n \leq 1 $, one has
\bels{bound for K_n and L_n probabilities}{
\tP(\abs{K_n} \ge a),\, \tP (\abs{L_n} \ge a) \;\leq\; C(\alpha) w^\alpha
\,.
}
}
\begin{Proof}
Let $ (A_n:n\in\N_0)$ be the $\mathbb F$-adapted process defined by
\bels{def of A_n}{
A_n \;:=\; \nE^{M_n +\, L_n +\, \mathcal{O}(w^2n) }
}
for every $n \in \N_0 $, with $ M_n $ as defined in Lemma \ref{lmm:Representations for the lower bound}. From the expressions \eqref{def of dL_n} and \eqref{def of dK_n}, both $ K_n $ and $ L_n $ are of the form
\bea{
R_n \;:=\; w^2 \sum_{j=1}^n A_{j-1}S_{j-1}B_j
\,,
}
where $ (S_j) $ is $\mathbb{F}$-adapted, and satisfies $ \abs{S_j} \lesssim 1 $ for $ j\in \N_0 $.
Let $a>0$. One writes
\bea{
\tP (R_n \ge a) \;=\;
\tP\biggl( &w^{3/2} \sum_{j=1}^n w^{1/2} A_{j-1}S_{j-1}B_j \ge a, \; \max_{1\leq j\leq n}w^{1/2}\!A_{j-1} \leq 1\biggr)
\\
+\;\tP\biggl( &w^{3/2} \sum_{j=1}^n w^{1/2} A_{j-1}S_{j-1}B_j \ge a, \; \max_{1\leq j\leq n}w^{1/2}\!A_{j-1} > 1 \biggr)
\,.
}
Let us now define a process $ (\tilde A_n :n\in\N_0)$ by setting $\tilde{A}_n := A_n \cdot \chi_{[0,1]}\msp{-1}(w^{1/2}A_n) $.
One has
\bels{final proof lemm loc 01}{
\tP( R_n \ge a )
\;\leq\;
\tP\Big( w^{3/2} \sum_{j=1}^n w^{1/2}\tilde{A}_{j-1}S_{j-1}B_j \ge a \Big)
\,+\,
\sum_{j=1}^n \tP(w^{1/2}A_{j-1}> 1)
\,.
}
First, by Azuma's inequality \eqref{Azuma's inequality}, and since $ w^2n \leq 1$, one has
\bels{final proof lemm loc 02}{
\tP \biggl(w^{3/2} \sum_{j=1}^n w^{1/2}\tilde{A}_{j-1}S_{j-1}B_j \ge a \biggr)
\;\leq\;
2 \, \nE^{- Ca^2 /w^3n}
\;\leq\;
\nE^{- Ca^2w^{-1}}
\,.
}
Next, it follows from \eqref{def of dM_n}, \eqref{def of dL_n} and \eqref{increments bounded by constant} that $ A_n $ defined in \eqref{def of A_n} is also of the form $ A_n = \nE^{w\sum_{j=1}^n G_{j-1} B_j + \mathcal{O}(w^2 n)} $, where $(G_j) $ is $\mathbb{F}$-adapted, and $ \abs{G_j} \lesssim 1 $ for $ j \in \N_0 $.
So, applying again Azuma's inequality, one gets
\[
\tP(w^{1/2} A_{j-1}>1) \;=\; \tP\Big(w\sum_{k=1}^{j-1}G_{k-1}B_k + \mathcal{O}(w^2 n) > \frac{1}{2}\log \frac{1}{w}\Big)
\;\lesssim\;
\nE^{-\frac{ C \log^2 (1/w) }{(j-1)w^2}}
\;\leq\;
\nE^{- C' \log^2 (1/w)}
\,.
\]
Therefore $ \sum_{j=1}^n \tP(w^{1/2}A_{j-1}> 1) \lesssim w^{-2} \nE^{-C \log^2 (w^{-1}) } $. The proof is finished by inserting this last bound and \eqref{final proof lemm loc 02} in \eqref{final proof lemm loc 01}.
\end{Proof}
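The final step of the proof rests on the elementary fact that $w^{-2}\nE^{-C\log^2(1/w)} = w^{C\log(1/w)-2}$ decays faster than any power of $w$, which yields \eqref{bound for K_n and L_n probabilities} for every $\alpha$. A small numerical illustration (the constant $C=1$ below is hypothetical), comparing logarithms to avoid floating-point underflow:

```python
import math

C = 1.0  # hypothetical constant from the Azuma step

def log_tail(w):
    # log of w^{-2} exp(-C log^2(1/w)), i.e. (C log(1/w) - 2) * log(w)
    return (C * math.log(1.0 / w) - 2.0) * math.log(w)

for alpha in [1.0, 5.0, 20.0]:
    w = 1e-12
    # for w small enough, the tail is below w^alpha for any fixed alpha
    assert log_tail(w) <= alpha * math.log(w)
```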
With the help of this lemma we can now prove the lower bound $ \tE\msp{1}J^\mathrm{CL}_n \gtrsim n^{-3/2} $ of Theorem \ref{thr:the scaling of the average current}.
Indeed, from \eqref{CL-current and def of J_n}, it follows that
\bea{
\tE\msp{1}J^\text{CL}_n \;\gtrsim\; \int_{(2n)^{-1/2}}^{n^{-1/2}} \tE\msp{1} j_n (w) \msp{1}\dif w
\,,
}
with $j_n$ defined in \eqref{def of jn}. It is therefore enough to show that when $ 1/2 \leq w^2n \leq 1 $ the bound $ \Expectation j_n(w) \gtrsim w^2 \sim n^{-1} $ holds.
So let $ 1/2 \leq w^2n \leq 1 $, and use Corollary \ref{crl:Fundamental decompositions of D_1n and D_2n} in \eqref{def of jn} to write
\bels{final proof: jn lower bound}{
j_n(w)
\;\gtrsim\;
\Bigg\{
1 \,+\,
\frac{(\Gamma_{n}^\vartheta\sin X_{n}^\vartheta)^2 }{w^4} + \frac{( \Gamma^\vartheta_{n-1} \sin X_{n-1}^\vartheta)^2 }{w^2} + \frac{(\Gamma_n^0\sin X_{n}^0)^2 }{w^2}+ (\Gamma_{n-1}^0\sin X_{n-1}^0)^2
\Biggr\}^{-1}
\msp{-10}.
}
Let us take some $ R, c>1 $. The constants introduced below may depend on $R$ and $c$.
Let us observe that, by point (i) of Corollary \ref{crl:the three qualitative properties of X}, one has $ \abs{X_{n-1}}_\T \lesssim w $ provided $ \abs{X_n}_\T \lesssim w^2 $, and that, from the definition \eqref{def of Gamma^x_n}, one has $\Gamma_{n-1}\in\lbrack 0,2R\rbrack$ when $\Gamma_n \in\lbrack 0,R\rbrack$. It follows therefore from \eqref{final proof: jn lower bound} that
\bels{final proof: jn proba 1}{
\Expectation\msp{1}j_n(w)
\;\gtrsim\;
\Prob\bigl(
\abs{X_n^\vartheta}_\T\le w^2, \, \Gamma_n^\vartheta \le R, \, \abs{X_n^0}_\T \le cw, \, \Gamma_n^0 \leq cR
\bigr)
\,.
}
We now use Lemma \ref{lmm:Representations for the lower bound}.
First, by \eqref{R-distance between X^vartheta_n and X^0_n}, one has
\bels{final proof: with Mn and Ln}{
\chi_{\Ball(0,cw) }(X_n^0) \, \ge\, \chi_{\lbrack 0, R\rbrack }(\nE^{M_n}) \cdot \chi_{\Ball(0,1) }(L_n) \cdot \chi_{\Ball(0,w^2) }(X_n^\vartheta)
\,,
}
provided $ c $ is large enough.
Secondly, by \eqref{Gamma_2n from Gamma_1n}, one has
\bels{final proof: with Kn}{
\chi_{\lbrack 0, cR\rbrack }(\Gamma_n^0)
\;\ge\;
\chi_{\Ball(0,1) }(K_n) \cdot \chi_{\lbrack 0,R\rbrack }(\Gamma_n^\vartheta)
\,,
}
again, provided $ c $ is large enough.
Using then \eqref{final proof: with Mn and Ln} and \eqref{final proof: with Kn} in \eqref{final proof: jn proba 1}, one obtains
\bea{
\Expectation\msp{1}j_n(w)
\;\gtrsim\;
&\Prob(
|X_n^\vartheta |_\T\le w^2, \, \Gamma_n^\vartheta \le R, \, \nE^{M_n} \le R, \, |L_n|\le 1, \, |K_n| \le 1)
\\
\;\ge\;
&\Prob(|X_n^\vartheta |_\T\le w^2, \, |L_n|\le 1, \, |K_n| \le 1)
\\
&-
\Prob(|X_n^\vartheta |_\T\le w^2, \, \Gamma_n^\vartheta > R)
\,-\,
\Prob(|X_n^\vartheta |_\T\le w^2, \, \nE^{M_n} > R)
\\
\; \ge &\;
\Prob(|X_n^\vartheta |_\T\le w^2)
\,-\, \Prob(|L_n| > 1) - \Prob(|K_n| > 1)
\\
&-\
\Prob(|X_n^\vartheta |_\T\le w^2, \, \Gamma_n^\vartheta > R)
\,-\,
\Prob(|X_n^\vartheta |_\T\le w^2, \, \nE^{M_n} > R)
\,.
}
Applying then Markov's inequality to the two last terms, one gets
\bea{
\Expectation\msp{1}j_n(w)
\;\gtrsim\;
&\Prob(\msp{1}\abs{X_n^\vartheta}_\T \leq w^2)
\,-\, \Prob(|L_n| > 1)
\,-\, \Prob(|K_n| > 1)
\\
&-\frac{1}{R} \Expectation\Bigl[
\chi_{\Ball(0,w^2)}(X_n^\vartheta) \cdot \Gamma_n^\vartheta
\Bigr]
\,-\,
\frac{1}{R} \Expectation\Bigl[
\chi_{\Ball(0,w^2)}(X_n^\vartheta) \cdot \nE^{M_n}
\Bigr]
\,.
}
Proposition \ref{prp:Potential theory} and Lemma \ref{lmm:bound for K_n and L_n probabilities} then allow one to conclude that $ \tE(j_n(w))\gtrsim w^2$ if $R$ is chosen large enough. This finishes the proof.
\subsection{Proof of the upper bound}
\label{ssec:Proof of the upper bound}
Let $ n \in \N$, and let $ c>0 $ be a constant to be fixed later. Starting from \eqref{CL-current and def of J_n}, one writes
\begin{equation}
\label{preuve upper}
\Expectation\msp{1}J^{\text{CL}}_n
\;\sim
\int_0^{c/n} \Expectation\msp{1}j_n(w) \dif w
\,+
\int_{c/n}^{w_0} \Expectation\msp{1} j_n(w) \dif w
\,+
\int_{w_0}^{\infty } \Expectation\msp{1}j_n(w)\dif w
\;=:\; \mathcal{J}_1 + \mathcal{J}_2 + \mathcal{J}_3
\,,
\end{equation}
with $ j_n $ defined in \eqref{def of jn}. Using the crude bounds $ D^2_{n-1}(e_1), D^2_n(e_2), D^2_{n-1}(e_2) \ge 0 $ in the definition of $ j_n $, and applying then Corollary \ref{crl:Fundamental decompositions of D_1n and D_2n}, one obtains
\bels{final proof: jn upper}{
j_n(w)
\;\lesssim\;
\frac{1}{1 + w^{-2}D_n^2(e_1)}
\;\lesssim\;
h(\Gamma_n^\vartheta\, \sin \pi X_n^\vartheta)
\quad\text{with}\quad
h(r) \;=\; \frac{1}{1\,+w^{-4}r^2}
\,.
}
Let us first bound $ \mathcal{J}_1 $. Let $w\in \lbrack 0 , c/n\lbrack$. First, $\Gamma_{n}^\vartheta \gtrsim 1 $, as can be checked from its definition \eqref{def of Gamma^x_n}. Next, if $ c $ is small enough, one has, by point (i) of Corollary \ref{crl:the three qualitative properties of X}, that
\[
wn \;\lesssim\; X_n \;\leq\; \frac{1}{2}wn \;\leq\; \frac{1}{2}
\,.
\]
Therefore one has $\sin^2\pi X_n^\vartheta \,\gtrsim w^2n^2$, and thus
\bels{preuve upper J1}{
\mathcal{J}_1 \;\lesssim\; \int_0^{c/n}\frac{\dif w}{1 \,+\, w^{-2} n^2 }
\;\lesssim\;
n^{-3}
\,.
}
Let us next bound $ \mathcal{J}_2$. Let $ w \in \lbrack c/n, w_0 \lbrack$, and $ m = \min \lbrace n, \lfloor w^{-2}\rfloor \rbrace$. One writes
\bels{final proof: decomposition jn}{
\Expectation\msp{1}j_n(w)
\;=\;
\int_\R \int_\T \Expectation \big( j_n(w) | X_{n-m}^\vartheta = x, \Gamma_{n-m}^\vartheta = a \big) \,
\Prob (X_{n-m}^\vartheta \in \dif x, \, \Gamma_{n-m}^\vartheta \in \dif a)
\,.
}
To simplify notation, set $ \tE(\genarg|x,a) := \Expectation (\genarg | X_{n-m}^\vartheta = x, \Gamma_{n-m}^\vartheta = a) $.
If $x\in \T$ and $a \in\R$ are given, it follows from \eqref{final proof: jn upper} that
\bels{final proof:conditional jn}{
\Expectation(j_n(w)|x,a)
\;\lesssim\;
\Expectation
\msp{1}
h(a\msp{1} \Gamma_m^x\sin \pi X_m^x)
\,,
}
since, by the definition \eqref{def of Gamma^x_n}, one may write $ \Gamma_n^\vartheta = \prod_{l=1}^n g(X_{l-1}^\vartheta, B_l) = \Gamma_{n-m}^\vartheta \prod_{l=n-m+1}^n g(X_{l-1}^\vartheta, B_l)$, for some function $g$.
Because $h(r) \le 1$ and $h(r)\le w^{4}r^{-2}$ for every $r\in\R$, one has, for every event $A$, the bound
\bels{final proof:decomposition hr}{
h(a\, \Gamma_m^x \sin \pi X_m^{x} )
\;\leq\;
1_A \,+\, 1_{A^c} \cdot w^4 \cdot (a\, \Gamma_m^x \sin \pi X_m^{x})^{-2} \, .
}
So, taking $ 1_A = \chi_{\lbrack0,1\rbrack }(w^{-4}a^2\sin^2 \pi X_m^x)$, and using \eqref{final proof:decomposition hr} in \eqref{final proof:conditional jn} one obtains
\[
\Expectation(j_n(w)|x,a)
\;\lesssim\;
\Expectation\Bigl\{
\chi_{\lbrack 0,1\rbrack} (w^{-4}a^2\sin^2 \pi X_m^x) \,+ \, \chi_{\rbrack 1,\infty\lbrack} (w^{-4}a^2\sin^2 \pi X_m^x) \cdot w^{4} \cdot (a\, \Gamma_m^x \sin \pi X_m^x)^{-2}
\Big\}
\,.
\]
Therefore, Proposition \ref{prp:Potential theory} implies
\bea{
\Expectation(j_n(w)|x,a)
\;&\lesssim\;
\frac{1}{w\sqrt{m}}\int_\T
\bigl\{
\chi_{\lbrack 0,1\rbrack} (w^{-4}a^2\sin^2\pi y) \,+\, \chi_{\rbrack 1,\infty\lbrack}(w^{-4}a^2\sin^2 \msp{-2}\pi y) \, w^{4}a^{-2} \sin^{-2} \msp{-3}\pi y
\bigr\}
\dif y
\\
&\lesssim\;
\frac{1}{w\sqrt{m}} \int_\T \frac{\dif y}{1 \,+\, w^{-4}a^2\sin^2 \pi y}
\;\lesssim\;
\frac{1}{w\sqrt{m}} \int_{-1/2}^{1/2} \frac{\dif y}{1 \,+\, (w^{-2}a \msp{1}y)^2}
\\
&\leq\;
\frac{w^2 a^{-1}}{w\sqrt m }\int_{-\infty }^{+\infty } \frac{\dif z}{1 + z^2} \;\lesssim \;
\frac{w}{\sqrt m}\,a^{-1}
\,,
}
where one has used the change of variables $z = w^{-2}ay$ to get the third line.
One now inserts this last bound in \eqref{final proof: decomposition jn}.
Applying Proposition \ref{prp:expectation of 1/Gamma_n decays exponentially}, one gets
\bea{
\Expectation\msp{1}j_n (w)
\;&\lesssim\;
\int_\R \int_\T \frac{w}{\sqrt m} \,a^{-1} \, \Prob (X_{n-m}^\vartheta \in \dif x, \,\Gamma_{n-m}^\vartheta \in \dif a)
\\
&=\;\frac{w}{\sqrt m }\Expectation(1/\Gamma_{n-m}^\vartheta) \lesssim \frac{w}{\sqrt m } \nE^{-\alpha w^2 (n-m) }
\;\lesssim\;
\max \!\left\{\!\frac{w}{\sqrt {n}}, w^2\right\} \nE^{-\alpha w^2 n }.
}
Therefore
\bels{final proof: J2 upper}{
\mathcal{J}_2 \;\lesssim\;
\frac{1}{\sqrt n} \int_0^{ n^{-1/2} } w\msp{1}\nE^{-\alpha w^2 n} \dif w
\,+\, \int_{n^{-1/2}}^\infty w^2 \nE^{-\alpha w^2 n } \dif w
\;\lesssim\;
n^{-3/2}
\,.
}
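The $n^{-3/2}$ scaling of the right-hand side of \eqref{final proof: J2 upper} can be confirmed numerically. The Python sketch below (with the unspecified constant $\alpha$ set to $1$ and the second integral truncated at $20/\sqrt{n}$, beyond which the integrand is negligible) rescales the sum by $n^{3/2}$ and finds a constant:

```python
import numpy as np

ALPHA = 1.0  # the constant alpha of the exponential bound is unspecified; set to 1

def j2_bound(n, steps=200000):
    """Midpoint-rule evaluation of the right-hand side of the J_2 estimate."""
    a = ALPHA
    # n^{-1/2} * int_0^{n^{-1/2}} w exp(-a w^2 n) dw
    h1 = n**-0.5 / steps
    w1 = (np.arange(steps) + 0.5) * h1
    i1 = np.sum(w1 * np.exp(-a * n * w1**2)) * h1 / np.sqrt(n)
    # int_{n^{-1/2}}^{infinity} w^2 exp(-a w^2 n) dw, tail cut at 20 n^{-1/2}
    h2 = 19.0 * n**-0.5 / steps
    w2 = n**-0.5 + (np.arange(steps) + 0.5) * h2
    i2 = np.sum(w2**2 * np.exp(-a * n * w2**2)) * h2
    return float(i1 + i2)

# rescaling by n^{3/2} gives (numerically) the same constant for every n
scaled = [j2_bound(n) * n**1.5 for n in (10, 100, 1000)]
```

Both pieces scale exactly like $n^{-3/2}$ after the substitution $u = w\sqrt{n}$, which is what the constancy of the rescaled values reflects.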
It has already been shown by O'Connor \cite{O'Connor-75} that $ \mathcal{J}_3 \lesssim \nE^{-C n^{1/2}}$.
One thus finishes the proof by inserting this last estimate, together with \eqref{preuve upper J1} and \eqref{final proof: J2 upper} in \eqref{final proof: jn upper}.
\subsection{On other heat baths}
\label{ssec:Other heat baths}
Associate a heat bath to a function $ \mu : \R \to \C $ as described by Dhar \cite{Dhar_Spect_Dep-01}. One may then obtain, at least formally, a new heat bath by replacing $ \mu $ with a function $ \tilde{\mu} : \R \to \C $ defined by scaling $ \tilde{\mu}(w) \sim \mu(\mathrm{sgn}(w)\abs{w}^s) $, $ s > 0 $.
In \cite{Dhar_Spect_Dep-01} Dhar argued, based on numerics and a non-rigorous approximation, that the Casher-Lebowitz and Rubin-Greer bath functions $ \mu_{\mathrm{CL}}(w) \sim \cI w $ and $ \mu_{\mathrm{RG}}(w) \sim \nE^{-\cI \pi \vartheta(w)} $, with $ \vartheta(w) $ given in \eqref{def of average shift}, yield $ \tE\msp{1} J^{\wti{\mathrm{CL}}}_n \sim n^{-(1+s/2)} $ and $ \tE \msp{1}J^{\wti{\mathrm{RG}}}_n \sim n^{-(1+\abs{s-1})/2} $, respectively.
The first of these statements can be proven rigorously by directly adapting the proof of Theorem \ref{thr:the scaling of the average current}.
The second case, however, does not follow directly from the proof of $ \tE\msp{1}J^{\mathrm{RG}} \sim n^{-1/2} $, even though we believe it should not be too difficult to prove by using our results.
To see where the difficulties in this second case lie, as well as to further demonstrate our approach, let us sketch how $ \tE\msp{1}J^{\mathrm{RG}} \sim n^{-1/2} $, first proven by Verheggen \cite{Verheggen-1979}, can be obtained by using our representation of $ D_n(v) $.
Indeed, the choices $ \tilde{e}_1 := 2^{-1/2}(e_1 + e_2)$ and $ \tilde{e}_2 := 2^{-1/2}(e_1 - e_2) $ yield (Proposition \ref{prp:Fundamental decomposition of D_n(v)}) $ D_n(\tilde{e}_1) \sim \Gamma^{x_1}_n \sin \pi X^{x_1}_n $ and $ D_n(\tilde{e}_2) \sim w^{-1}\Gamma^{x_2}_n \sin \pi X^{x_2}_n $ with $ x_1 = 1/2 + \mathcal{O}(w) $ and $ x_2 = w/2 + \mathcal{O}(w^2) $, respectively.
If one substitutes these in the expression for the current density $ j^{\mathrm{RG}}_n(w) $ of the Rubin-Greer model (the equation between 3.1 and 3.2 in \cite{Verheggen-1979}) one ends up with an estimate
\bels{RG-current current density}{
(1+(\Gamma^{x_1}_n)^2 + (\Gamma^{x_2}_n)^2)^{-1}
\;\lesssim\;
j^{\mathrm{RG}}_n(w)
\;\lesssim\;
(1+(\Gamma^{x_2}_n)^2)^{-1}\,,
\quad\text{for}\quad w \leq w_0
\,,
}
after making use of the basic properties of $ X $-processes (Corollary \ref{crl:the three qualitative properties of X}). This reveals that the Rubin-Greer model is special in the sense that the random phases $ X^{x_k}_n $ in the expressions $ D_n(\tilde{e}_k) \sim \Gamma^{x_k}_n \sin \pi X^{x_k}_n $ do not have any direct role in the scaling behavior of the current.
The reason why proving $ \tE\msp{1} J^{\wti{\mathrm{RG}}}_n \sim n^{-(1+\abs{s-1})/2} $, $ s \neq 1 $, is again more difficult is that the bounds analogous to \eqref{RG-current current density} become explicitly dependent on $ X^{x_k} $ again.
Now continuing with the RG-model, based on \eqref{RG-current current density} one can prove $ \tE\msp{1}j^{\mathrm{RG}}_n(w) \sim \nE^{-C w^2 n} $ which then implies the scaling: $ \tE J^\mathrm{RG}_n = \int_\R \tE \msp{1}j^{\mathrm{RG}}_n(w) \dif w \sim n^{-1/2} $.
Indeed, for the lower bound $ \tE\msp{1}j^{\mathrm{RG}}_n(w) \gtrsim \nE^{-C w^2 n} $ one considers the typical behavior, which is easier to analyze than in the Casher-Lebowitz model since $ X $-processes are not present.
The respective upper bound follows from Proposition \ref{prp:expectation of 1/Gamma_n decays exponentially}.
\newcommand{\chapterl}[1] {\chapter{#1} \label{chp:#1}}
\newcommand{\sectionl}[1] {\section{#1} \label{sec:#1}}
\newcommand{\subsectionl}[1] {\subsection{#1} \label{ssec:#1}}
\newcommand{\abs}[1]{\lvert #1 \rvert}
\newcommand{\absb}[1]{\big\lvert #1 \big\rvert}
\newcommand{\norm}[1]{\lVert #1 \rVert}
\newcommand{\normb}[1]{\big\lVert #1 \big\rVert}
\newcommand{\tE} {\mathsf{E}}
\newcommand{\sE}[1] {\tE \!\left( {#1} \right)}
\newcommand{\E} {\tE}
\newcommand{\Expectation} {\tE}
\newcommand{\la} {\langle}
\newcommand{\ra} {\rangle}
\newcommand{\lla} {\left \langle}
\newcommand{\rra} {\right \rangle}
\newcommand{\tP} {\mathsf{P}}
\newcommand{\Prob} {\tP}
\newcommand{\floor} [1] { \lfloor {#1} \rfloor}
\newcommand{\floors}[1] {\left \lfloor {#1} \right \rfloor}
\newcommand{\ceil} [1] { \lceil {#1} \rceil}
\newcommand{\ceils} [1] {\left \lceil {#1} \right \rceil}
\newcommand{\w} {\wedge}
\renewcommand{\v} {\vee}
\newcommand{\R} {\mathbb{R}}
\newcommand{\cR} {\bar{\R}}
\newcommand{\C} {{\mathbb{C}}}
\newcommand{\K} {\mathbb{K}}
\newcommand{\N} {\mathbb{N}}
\newcommand{\Z} {\mathbb{Z}}
\newcommand{\Q} {\mathbb{Q}}
\newcommand{\T} {\mathbb{T}}
\newcommand{\Lspace} {\mathrm{L}}
\newcommand{\Leb} {\mathrm{Leb}}
\newcommand{\Cont} {\mathrm{C}}
\newcommand{\Ball} {B}
\newcommand{\sett}[1] { \{ {#1} \} }
\newcommand{\set}[1] { \sett{\,{#1}\,}}
\newcommand{\sset}[1] { \left\{\, {#1} \,\right\} }
\newcommand{\setb}[1] { \bigl\{ {#1} \bigl\} }
\newcommand{\setB}[1] { \Bigl\{\, {#1} \,\Bigr\} }
\newcommand{\cmpl} {\mathrm{c}}
\newcommand{\titem}[1] {\item[\emph{(#1)}]}
\newcommand{\msp}[1] {\mspace{#1 mu}}
\newcommand{\genarg} {{\,\bullet\,}}
\newcommand{\sprod} {\otimes}
\DeclareMathOperator{\arccot} {arccot}
\newcommand{\spt} {\mathrm{supp}}
\newcommand{\rv}[1] {\tilde{#1}}
\newcommand{\mat}[1]{\begin{bmatrix} #1 \end{bmatrix}}
\newcommand{\wti}[1] {\widetilde{#1}}
\newcommand{\wht}[1] {\widehat{#1}}
\newcommand{\bs}[1] {\boldsymbol{#1}}
\newcommand{\dif} {\mathrm{d}}
\newcommand{\cI} {\mathrm{i}}
\newcommand{\nE} {\mathrm{e}}
\newcommand{\Const} {C}
\newcommand{\bigO} {\mathcal{O}}
\newcommand{\diag} {\mathrm{diag}}
\newcommand{\imply} {\Longrightarrow}
\newcommand{\trace} {\mathrm{Tr}\,}
\newcommand{\Id} {\mathrm{Id}}
\newcommand{\Cov} {\mathrm{Cov}}
\section{Introduction}
The existence of entanglement between more than two parties for discrete or continuous quantum variables is one of the most striking
predictions of quantum mechanics. The interest in multipartite entanglement is
motivated not only by fundamental questions but also by its potential for
applications in quantum communication technologies.
Possible applications of continuous-variable (CV) multipartite entanglement include quantum teleportation networks, quantum telecloning, controlled
quantum dense coding, and quantum secret sharing (see
Ref.~\cite{Peng2007} and references therein).
Characterizing multipartite entanglement has raised much interest since the pioneering work of
Coffman, Kundu and Wootters \cite{Coffman2000}.
They have established for a three-qubit system and
conjectured for $N$-qubit systems the so-called monogamy of quantum
entanglement, constraining the maximum entanglement between partitions of
a multiparty system. More recently, Adesso, Serafini and Illuminati \cite{ASI04}-\cite{Cerf2007} have introduced the
continuous-variable tangle as a measure of multipartite entanglement for
continuous-variable multimode Gaussian states. In particular, they have demonstrated
that this tangle satisfies the Coffman-Kundu-Wootters monogamy inequality.
The conjecture of Ref. \cite{Coffman2000} has been proven by Osborne and Verstraete \cite{OV2006}. The corresponding proof for Gaussian states has been obtained by Hiroshima, Adesso and Illuminati \cite{HAI2007}.
A number of schemes to generate CV multipartite entanglement have
been proposed theoretically and realized experimentally
in recent years. There were first the passive optical
scheme using squeezed states mixed with beam
splitters~\cite{vanLoock2000}. Then came the active schemes in which
multipartite entanglement is created as a result of parametric interaction
of several optical waves such as
cascaded/concurrent~\cite{Pfister2004,Guo2005,Yu2006},
interlinked~\cite{Ferraro2004,Olsen2006}, or consecutive parametric
interactions~\cite{Rodionov2004}.
These schemes generally neglect
the spatial structure of the electromagnetic field.
It is natural to investigate whether the spatial modes
can also serve for the creation of CV multipartite entanglement.
This question has been addressed recently in Ref. \cite{DBCK10} where a simple active scheme was proposed for the
creation of tripartite entanglement between spatial modes of the
electromagnetic field in the process of parametric interaction.
It consists in pumping a
nonlinear parametric medium with a coherent combination of several tilted
plane monochromatic waves, called a spatially structured pump. Since
the pump photon can be extracted from any of these waves, and the pair of
down-converted photons is emitted in different directions according to the
phase-matching condition, this scheme allows for the creation of tripartite
entanglement between spatial modes of the down-converted field.
The aim of this paper is to generalize the results of Ref. \cite{DBCK10} to the generation of spatial multipartite entanglement.
For that purpose, we present a scheme with an arbitrary number $2N$ of tilted plane waves pumping a parametric medium.
An interesting feature of this realistic proposal is the
possibility of localizing the created spatial multipartite entanglement
in just two well-defined spatial modes formed as a linear combination of all
the modes participating in the down-conversion process.
The
possibility of entanglement localization was introduced by
Serafini, Adesso and Illuminati \cite{ASI05}, and a physical implementation in terms of $2N-1$ beam splitters and $2N$ single-mode squeezed inputs was also given based on \cite{VF03}.
The paper is organized as follows. Section~II is devoted to spatial multipartite entanglement. We present the process of parametric down-conversion with a spatially structured pump consisting of $N$ pairs of symmetrically-tilted
plane waves in Subsec.~II~A. The evolution equations for the field operators are given and solved in the rotating wave approximation for possibly non-zero constant phase mismatch. The genuine multipartite entanglement is then studied in Subsec.~II~B. Section III is dedicated to the phenomenon of entanglement localization. It is presented explicitly in Subsec.~III~A and a quantitative characterization of the spatial entanglement generated by this process is given in Subsec.~III~B. The scaling with $N$ as well as the dependence on the phase mismatch are explicitly provided. Conclusions are drawn in Section~IV.
\section{Spatial multipartite entanglement}
\subsection{Parametric down-conversion with spatially-structured pump}
We consider a system of pumps which consists of $N$ pairs of symmetrically tilted plane waves:
\begin{equation}
E_p({\bf r}) = \frac{\alpha}{4\pi^2} \sum_{d=1}^N \left(e^{i{\bf q}_{d}.{\bf r}}+e^{-i{ \bf q}_{d}.{\bf r}}\right),
\end{equation}
with $\bf{r}=(x,y)$, the vector of coordinates in the plane of the crystal entering face, and $\bf{q}=(k_x,k_y)$, the projection of the three-dimensional wave vector ${\bf k}$ in that plane.
Its Fourier transform in the $xy$ plane of the (infinite) crystal (entering face) corresponds to Dirac deltas centered at ${\bf q}_d$ and $-{\bf q}_d$,
\begin{equation}
E_p(\bf{q}) = \alpha \sum_{d=1}^N \left[\delta({\bf q}-{\bf q}_{d}) + \delta({\bf q}+{\bf q}_{d}) \right].
\label{TFN}
\end{equation}
This scheme is illustrated in Fig. 1 for $N=2$ and 4. The little circles represent the projections of the pump wave vectors in the $xy$ plane of the crystal entering face.
The plane depicted in Fig. 2 is the one going through one of the $N$ pairs of opposed pumps on Fig. 1, for example the $yz$ plane.
\begin{center}
\begin{figure}
\includegraphics[scale=0.45]{figN1.eps}
\caption{(Color online) Scheme for the generation of spatial multipartite entanglement with $2N$ symmetrically tilted pump waves (illustrated for $N=2$ and 4). The little circles represent the projections of the pump wave vectors in the $xy$ plane of the crystal entering face.
}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\includegraphics[scale=0.45]{figN2.eps}
\caption{(Color online) The plane depicted is the one passing through one of the $N$ pairs of opposed pumps on Fig. 1 (e.g., the $yz$ plane).
The tilted pumps have wave vectors ${\bf k}_p (\pm {\bf q}_n)$ with $n$ ranging from $1$ to $N$. The transverse (vertical) components are $\pm {\bf q}_n$.
}
\end{figure}
\end{center}
The evolution of the creation and annihilation operators, $\hat{a}$ and $\hat{a}^\dag$, associated to the propagation and diffraction of the quantized electromagnetic field through the nonlinear parametric medium is described by \cite{Kolobov1999}
\begin{equation}
\frac{\partial}{\partial z}\hat{a}(z,{\bf q}) = \lambda \int {\bf d q^\prime} E_p({\bf q}-{\bf q^\prime}) \hat{a}^\dag(z,-{\bf q^\prime}) e^{i\Delta({\bf q},-{\bf q^\prime})z}.
\label{evol4}
\end{equation}
Here $z$ is the longitudinal coordinate and $\Delta$ is the phase mismatch defined as
\begin{equation}
\Delta({\bf q},-{\bf q^\prime}) = k_z({\bf q})+k_z(-{\bf q^\prime})-k_{pz}({\bf q}-{\bf q^\prime}),
\end{equation}
where $k_z({\bf q})$ and $k_z(-{\bf q^\prime})$ are the longitudinal components of the two incoming wave vectors and $k_{pz}$ is the longitudinal component of the pump wave vector.
The conservation of energy and momentum implies that we have to consider a set of $2N+1$ coupled equations, obtained upon substitution of (\ref{TFN}) into (\ref{evol4}), for the waves exiting the crystal with wave vector transverse components ${\bf q}$, ${\bf q}_d-{\bf q}$ and $-{\bf q}_d-{\bf q}$ with $d=1,\cdots,N$
\begin{widetext}
\begin{eqnarray}
\frac{\partial}{\partial z}\hat{a}(z,{\bf q}) & = & \alpha \lambda \sum_{d=1}^N \left\{\hat{a}^\dag(z,{\bf q}_d-{\bf q})e^{i\Delta({\bf q},{\bf q}_d-{\bf q})z}+\hat{a}^\dag(z,-{\bf q}_d-{\bf q})e^{i\Delta({\bf q},-{\bf q}_d-{\bf q})z} \right\} \label{NeqFULL} \\
\frac{\partial}{\partial z}\hat{a}(z,\pm {\bf q}_j+{\bf q}) & = & \ \alpha \lambda \sum_{d=1}^N \left\{\hat{a}^\dag(z,{\bf q}_d \mp {\bf q_j}-{\bf q})e^{i\Delta({\bf q},{\bf q}_d \mp {\bf q_j}-{\bf q})z}+
\hat{a}^\dag(z,-{\bf q}_d \mp {\bf q_j}-{\bf q})e^{i\Delta({\bf q},-{\bf q}_d \mp {\bf q_j}-{\bf q})z}\right\} \ . \nonumber
\end{eqnarray}
\end{widetext}
In the second equation, the phase mismatch is higher for the contributions with $d \neq j$ which shall therefore be neglected.
This corresponds to the usual rotating wave approximation.
The other phase mismatches are all taken equal to $\Delta$, which amounts to imposing some symmetries on $\Delta(\pm {\bf q}_d\pm {\bf q},\pm {\bf q})$.
One may introduce a renormalized phase mismatch $\delta$ and a new relevant variable $\tilde r\equiv \sqrt{2} \alpha \lambda z$ which combines the interaction strength $\alpha \lambda$ and the longitudinal coordinate $z$:
\begin{eqnarray}
\delta&\equiv& \frac{\Delta}{2 \sqrt{2} \alpha \lambda} \nonumber\\
\tilde r&\equiv& \sqrt{2} \alpha \lambda z.
\end{eqnarray}
We shall use the following notation
\begin{eqnarray}
\hat{a}_0(\tilde r)&\equiv&\hat{a}(z,{\bf q})\\
\hat{a}_{n_\pm}(\tilde r)&\equiv& \hat{a}(z,\pm {\bf q}_n+{\bf q}) \qquad n=1,\cdots,N,\nonumber
\end{eqnarray}
and shall consider only the zeroth spatial Fourier components of the field, ${\bf q}=0$. Physically, this corresponds to photodetection of the light field by a single large photodetector without spatial resolution. These modes are depicted in Fig.~2: the long (blue) arrows pertain to $\hat{a}_{n_\pm}(0)$, the short one to $\hat{a}_0^\dag(0)$.
In this setting, one can then rewrite (\ref{NeqFULL}) as
\begin{eqnarray}
\frac{d}{d\tilde r}\hat{a}_0 & = & \frac{e^{2i\delta \tilde r}}{\sqrt{2}} \sum_{d=1}^N \left( \hat{a}_{d_+}^\dag + \hat{a}_{d_-}^\dag \right) \nonumber\\
\frac{d}{d\tilde r}\hat{a}_{n_\pm} & = & \frac{e^{2i\delta \tilde r} }{\sqrt{2}} \hat{a}_0^\dag \qquad n=1,\cdots,N.
\label{3eq}
\end{eqnarray}
We may solve this set of equations and obtain for the fields at the output of the crystal, $r=\tilde r|_{z=\ell}$:
\begin{eqnarray}\label{solneq}
\hat{a}_0(r) &=& U(r)\hat{a}_0(0) + \frac{V(r) }{\sqrt{2N}} \sum_{d=1}^N \left\{ \hat{a}_{d_+}^\dag(0) + \hat{a}_{d_-}^\dag(0) \right\}\nonumber \\
\hat{a}_{n_\pm}(r) &=&\hat{a}_{n_\pm}(0)+ \frac{U(r)-1}{2N} \sum_{d=1}^N \left\{ \hat{a}_{d_+}(0) + \hat{a}_{d_-}(0) \right\} \nonumber\\
&+& \frac{V(r)}{\sqrt{2N}}\hat{a}_0^\dag(0)
\qquad n=1,\cdots,N.
\end{eqnarray}
The functions $U(r)$ and $V(r)$, which satisfy $|U(r)|^2-|V(r)|^2=1$, are given by
\begin{eqnarray}\label{UV}
U(r) & = & e^{i \delta r} \left( \cosh(\sqrt{N}\gamma r) - i\frac{\delta}{\sqrt{N}\gamma}\sinh(\sqrt{N}\gamma r) \right) \nonumber\\
V(r) & = & e^{i \delta r} \frac{1}{\gamma} \sinh(\sqrt{N}\gamma r) ,
\end{eqnarray}
where
$\gamma$ depends on the reduced phase mismatch $\delta$,
\begin{equation}
\gamma = \sqrt{1-\frac{\delta^2}{N}}.
\label{Gam}
\end{equation}
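The relation $|U(r)|^2-|V(r)|^2=1$ can be checked numerically. The following Python sketch implements (\ref{UV})-(\ref{Gam}) for $\delta^2<N$ (so that $\gamma$ is real) and evaluates the combination over a few admissible parameter values:

```python
import numpy as np

def uv(N, delta, r):
    """U(r) and V(r) of Eq. (UV); assumes delta^2 < N so that gamma is real."""
    gamma = np.sqrt(1.0 - delta**2 / N)
    s = np.sqrt(N) * gamma
    phase = np.exp(1j * delta * r)
    U = phase * (np.cosh(s * r) - 1j * (delta / s) * np.sinh(s * r))
    V = phase * np.sinh(s * r) / gamma
    return U, V

# |U|^2 - |V|^2 should equal 1 identically
errors = [abs(abs(U)**2 - abs(V)**2 - 1.0)
          for N in (1, 2, 4) for delta in (0.0, 0.5) for r in (0.3, 1.0, 2.0)
          for U, V in [uv(N, delta, r)]]
```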
In the next subsection we shall characterize these solutions for the fields at the output of the crystal.
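As an independent check of the closed-form solution (\ref{solneq}), one may integrate the linear system (\ref{3eq}) numerically, viewing it as an ordinary differential equation for the coefficient vector $(a_0, a_{1_+}^\dag,\dots,a_{N_-}^\dag)$, and compare with $U$ and $V$. A Python sketch with a fourth-order Runge-Kutta integrator:

```python
import numpy as np

def uv(N, delta, r):
    """Closed-form U(r), V(r) of Eq. (UV) (delta^2 < N assumed)."""
    g = np.sqrt(1.0 - delta**2 / N)
    s = np.sqrt(N) * g
    ph = np.exp(1j * delta * r)
    return (ph * (np.cosh(s * r) - 1j * (delta / s) * np.sinh(s * r)),
            ph * np.sinh(s * r) / g)

def propagate(N, delta, r, v0, steps=4000):
    """RK4 integration of (3eq) for v = (a_0, a_{1+}^dag, a_{1-}^dag, ...)."""
    def f(t, v):
        dv = np.empty_like(v)
        dv[0] = np.exp(2j * delta * t) / np.sqrt(2) * v[1:].sum()
        dv[1:] = np.exp(-2j * delta * t) / np.sqrt(2) * v[0]
        return dv
    h, t = r / steps, 0.0
    v = np.asarray(v0, dtype=complex)
    for _ in range(steps):
        k1 = f(t, v)
        k2 = f(t + h / 2, v + h / 2 * k1)
        k3 = f(t + h / 2, v + h / 2 * k2)
        k4 = f(t + h, v + h * k3)
        v = v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return v

N, delta, r = 2, 0.4, 1.0
U, V = uv(N, delta, r)
eye = np.eye(2 * N + 1, dtype=complex)
col0 = propagate(N, delta, r, eye[0])  # response to a_0(0) = 1
col1 = propagate(N, delta, r, eye[1])  # response to a_{1+}^dag(0) = 1
```

Conjugating the solution (\ref{solneq}) predicts $a_0(r)=U$ and $a_{n_\pm}^\dag(r)=\bar{V}/\sqrt{2N}$ for the first column, and $a_0(r)=V/\sqrt{2N}$, $a_{1_+}^\dag(r)=1+(\bar{U}-1)/(2N)$, $a_{1_-}^\dag(r)=(\bar{U}-1)/(2N)$ for the second; the integrator reproduces these to near machine precision.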
\subsection{Genuine multipartite entanglement}
The parametric down-conversion process preserves the Gaussian character of incoming modes. Hence, the outgoing modes are Gaussian states which are thus completely characterized by the covariance matrix associated to their quadrature components:
\begin{eqnarray}
\hat{x}_0(r)& \equiv& 2 {\rm Re}\, \hat{a}_0(r)\nonumber\\
\hat{p}_0(r) &\equiv& 2 {\rm Im}\, \hat{a}_0(r)\nonumber\\
\hat{x}_{n_\pm}(r)& \equiv& 2 {\rm Re}\, \hat{a}_{n_\pm}(r) \qquad n=1,\cdots,N \nonumber\\
\hat{p}_{n_\pm}(r) &\equiv& 2 {\rm Im}\, \hat{a}_{n_\pm}(r).
\end{eqnarray}
From the solution (\ref{solneq}) one obtains
\begin{widetext}
\begin{eqnarray}\label{6quad}
\hat{x}_0(r) & = &{\rm{Re} \,} U(r) \hat{x}_0(0) - {\rm{Im} \,} U(r) \hat{p}_0(0)+ \sum_{d=1}^N \left\{ {\rm{Re} \,} V(r) \frac{ \hat{x}_{d_+}(0) + \hat{x}_{d_-}(0)} {\sqrt{2N}} + {\rm{Im} \,} V(r) \frac{\hat{p}_{d_+}(0) + \hat{p}_{d_-}(0)}{\sqrt{2N}} \right\}\nonumber\\
\hat{p}_0(r) & = &{\rm{Im} \,} U(r) \hat{x}_0(0)+ {\rm{Re} \,} U(r) \hat{p}_0(0) + \sum_{d=1}^N \left\{ {\rm{Im} \,} V(r) \frac{\hat{x}_{d_+}(0) + \hat{x}_{d_-}(0)}{\sqrt{2N}} - {\rm{Re} \,} V(r) \frac{ \hat{p}_{d_+}(0) + \hat{p}_{d_-}(0)} {\sqrt{2N}} \right\}\nonumber\\
\hat{x}_{n_\pm}(r) & = & \frac{{\rm{Re} \,} V(r)}{\sqrt{2N}} \hat{x}_0(0) +\frac{{\rm{Im} \,} V(r)}{\sqrt{2N}} \hat{p}_0(0) + \hat{x}_{n_\pm}(0) + \sum_{d=1}^N \left\{ ({\rm{Re} \,} U(r) -1) \frac{\hat{x}_{d_+}(0) + \hat{x}_{d_-}(0) }{2N} - {\rm{Im} \,} U(r)\frac{\hat{p}_{d_+}(0) + \hat{p}_{d_-}(0) }{2N} \right\} \nonumber\\
\hat{p}_{n_\pm}(r) & = & \frac{{\rm{Im} \,} V(r) }{\sqrt{2N}} \hat{x}_0(0) - \frac{{\rm{Re} \,} V(r) }{\sqrt{2N}} \hat{p}_0(0)+ \hat{p}_{n_\pm}(0) + \sum_{d=1}^N \left\{ {\rm{Im} \,} U(r) \frac{\hat{x}_{d_+}(0) + \hat{x}_{d_-}(0) }{2N} + ({\rm{Re} \,} U(r) -1) \frac{\hat{p}_{d_+}(0) + \hat{p}_{d_-}(0) }{2N} \right\} . \nonumber\\
\label{quad}
\end{eqnarray}
\end{widetext}
We can now determine the covariance matrix elements $\sigma_{ij} \equiv \left\langle \left(\Delta\hat{\xi}_i \Delta\hat{\xi}_j + \Delta\hat{\xi}_j \Delta\hat{\xi}_i\right)/2 \right\rangle $ of the output state $\rho$ with $ \left\langle . \right\rangle \equiv \textrm{Tr}\left[{\rho} . \right] $ and
$\Delta\hat{\xi}_i \equiv \hat{\xi}_i-\left\langle\hat{\xi}_i\right\rangle$ where $\hat{\xi}_i$ is some component of the vector $\hat{{\bf \xi}} = \left(\hat{x}_0,\hat{p}_0,\hat{x}_{1_+},\hat{p}_{1_+},\hat{x}_{1_-},\hat{p}_{1_-},\hat{x}_{2_+},\hat{p}_{2_+},\cdots, \hat{x}_{N_-},\hat{p}_{N_-}\right)$.
Taking into account that all inputs are vacuum states we obtain from (\ref{quad})
\begin{eqnarray}\label{paramCM3}
\!\!\!&\!\!\!&\!\!\! \left\langle [\hat{x}_0(r)]^2 \right\rangle= \left\langle [\hat{p}_0(r)]^2 \right\rangle= a\nonumber\\
\!\!\!&\!\!\!&\!\!\! \left\langle [\hat{x}_{n_\pm}(r)]^2 \right\rangle=\left\langle [\hat{p}_{n_\pm}(r)]^2\right\rangle=1+\frac{a-1}{2N}\nonumber\\
\!\!\!&\!\!\!&\!\!\! \left\langle \hat{x}_{n_\pm} (r) \hat{x}_{d_\pm}(r)\right\rangle=\left\langle \hat{x}_{n_\pm} (r) \hat{x}_{d_\mp}(r)\right\rangle=\frac{a-1}{2N}\nonumber\\
\!\!\!&\!\!\!&\!\!\! \left\langle \hat{p}_{n_\pm} (r) \hat{p}_{d_\pm}(r)\right\rangle=\left\langle \hat{p}_{n_\pm} (r) \hat{p}_{d_\mp}(r)\right\rangle=\frac{a-1}{2N}\nonumber\\
\!\!\!&\!\!\!&\!\!\! \left\langle \hat{x}_0(r) \hat{x}_{n_\pm}(r)\right\rangle = -\left\langle \hat{p}_0 (r)\hat{p}_{n_\pm}(r)\right\rangle = \frac{b}{\sqrt{2N}}\nonumber \\
\!\!\!&\!\!\!&\!\!\! \left\langle \hat{x}_0(r) \hat{p}_{n_\pm}(r)\right\rangle = \left\langle \hat{p}_0 (r)\hat{x}_{n_\pm}(r)\right\rangle = \frac{c}{\sqrt{2N}}\nonumber\\
\!\!\!&\!\!\!&\!\!\! \left\langle \hat{x}_{0} (r) \hat{p}_{0}(r)\right\rangle=\left\langle \hat{x}_{n_\pm} (r)\hat{p}_{d_\pm}(r)\right\rangle=\left\langle \hat{x}_{n_\pm} (r) \hat{p}_{d_\mp}(r)\right\rangle=0, \nonumber\\
\label{cov}
\end{eqnarray}
for $ n,d=1,\cdots,N,$ with
\begin{eqnarray}\label{abc}
&&a \equiv |U|^2+|V|^2 \nonumber \\
&&b \equiv 2({\rm Re}\, U \ {\rm Re}\, V - {\rm Im}\, U \ {\rm Im}\, V) \nonumber \\
&& c \equiv 2({\rm Re}\, U \ {\rm Im}\, V + {\rm Im}\, U {\rm Re}\, V) .\end{eqnarray}
Ordering the lines and columns according to $\hat{x}_0$, $\hat{p}_0$, $\hat{x}_{1_+}$, $\hat{p}_{1_+}$, $\hat{x}_{1_-}$, $\hat{p}_{1_-}$, $\hat{x}_{2_+}$, $\hat{p}_{2_+}$,
$\cdots$ $\hat{x}_{N_-}$, $\hat{p}_{N_-}$, the covariance matrix reads then
\begin{equation}
\sigma =
\left(\begin{array}{cccccccc}
A & D & D & D & \cdots & D &D & D \\
D & B & C & C & \cdots & C& C & C\\
D & C & B& C & \cdots & C & C & C \\
D & C & C& B & \cdots & C & C & C \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
D & C & C & C & \cdots & B & C & C\\
D & C & C & C & \cdots & C & B & C\\
D & C & C & C & \cdots & C &C & B\\
\end{array}\right) ,
\label{CM3}
\end{equation}
where $A$, $B$, $C$ and $D$ are the following $2 \times 2$ matrices
\begin{eqnarray}
&&A=a I_2, \qquad B=\Bigl(1 +\frac{a-1}{2N}\Bigr) I_2, \qquad C= \frac{a-1}{2N} I_2\nonumber\\
&&D=\frac{1}{\sqrt{2N}} \left(\begin{array}{cr}
b & c \\
c & -b
\end{array}\right), \qquad I_2= \left(\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}\right).
\end{eqnarray}
Its symplectic eigenvalues $\{\nu_i\}$ are $\sqrt{a^2-b^2-c^2}$ (with a fourfold degeneracy) and $1$ (with a $(2N-3)$-fold degeneracy). Since $b^2+c^2=4|U|^2 |V|^2$, one finds $\sqrt{a^2-b^2-c^2}=|U|^2-|V|^2=1$.
The product of symplectic eigenvalues is thus unity and, as expected for a unitary evolution, the output state remains pure.
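This purity can also be verified numerically: building $\sigma$ from (\ref{cov})-(\ref{CM3}) and computing the moduli of the eigenvalues of $i\Omega\sigma$, with $\Omega$ the standard symplectic form, must give $1$ for every mode. A Python sketch for $N=2$:

```python
import numpy as np

def uv(N, delta, r):
    """U(r), V(r) of Eq. (UV), for delta^2 < N."""
    g = np.sqrt(1.0 - delta**2 / N)
    s = np.sqrt(N) * g
    ph = np.exp(1j * delta * r)
    return (ph * (np.cosh(s * r) - 1j * (delta / s) * np.sinh(s * r)),
            ph * np.sinh(s * r) / g)

def covariance(N, delta, r):
    """Covariance matrix (CM3), ordering x_0, p_0, x_{1+}, p_{1+}, ..., p_{N-}."""
    U, V = uv(N, delta, r)
    a = abs(U)**2 + abs(V)**2
    b, c = 2.0 * (U * V).real, 2.0 * (U * V).imag   # matches Eq. (abc)
    m = 2 * N + 1
    S = np.zeros((2 * m, 2 * m))
    S[:2, :2] = a * np.eye(2)                        # block A
    D = np.array([[b, c], [c, -b]]) / np.sqrt(2 * N)
    for j in range(1, m):
        S[:2, 2*j:2*j+2] = D                         # blocks D
        S[2*j:2*j+2, :2] = D
        for k in range(1, m):                        # blocks B (j = k) and C
            S[2*j:2*j+2, 2*k:2*k+2] = (a - 1) / (2 * N) * np.eye(2)
        S[2*j:2*j+2, 2*j:2*j+2] += np.eye(2)
    return S

N = 2
sigma = covariance(N, delta=0.5, r=0.8)
omega = np.kron(np.eye(2 * N + 1), np.array([[0.0, 1.0], [-1.0, 0.0]]))
nu = np.abs(np.linalg.eigvals(1j * omega @ sigma))   # doubled symplectic spectrum
```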
The covariance matrix (\ref{CM3}) is bisymmetric, i.e., it is invariant under the permutation of quadratures
$\hat{x}_{n_\pm}(r),\hat{p}_{n_\pm}(r) \leftrightarrow \hat{x}_{d_\pm}(r),\hat{p}_{d_\pm}(r)$ or $\hat{x}_{d_\mp}(r),\hat{p}_{d_\mp}(r)$. As a consequence, it has the multipartite entanglement structure of covariance matrices associated to bisymmetric (1+2N)-mode Gaussian states considered in Ref. \cite{ASI05}. We now show that the output state obtained here exhibits genuine multipartite entanglement in the sense of Ref. \cite{VF03}. For that purpose, we have to verify that the following condition on covariance matrix elements is violated
\begin{eqnarray}
Q&\equiv &\left\langle\left( \hat{x}_0(r)- \frac{1}{\sqrt{ 2N}}\sum_{n=1}^N\left\{ \hat{x}_{n_+}(r)+\hat{x}_{n_-}(r)\right\}\right)^2 \right\rangle \nonumber\\
&+& \left\langle \left( \hat{p}_0(r)+ \frac{1}{\sqrt{ 2N}}\sum_{n=1}^N\left\{ \hat{p}_{n_+}(r)+\hat{p}_{n_-}(r)\right\}\right)^2 \right\rangle \geq \frac{1}{2N}.\nonumber\\ \label{C}
\end{eqnarray}
From (\ref{cov})-(\ref{abc}) one deduces that
\begin{eqnarray}
Q&=&4 (a-b) \label{Q}\\
&=& 4\left( [{\rm{Re} \,} U(r) -{\rm{Re} \,} V(r)]^2 + [{\rm{Im} \,} U(r) + {\rm{Im} \,} V(r)]^2 \right) . \nonumber
\end{eqnarray}
When the phase matching condition is satisfied, $\delta=0$, one deduces from (\ref{UV}) that $U(r)=\cosh \sqrt{N}r$ and $V(r)=\sinh \sqrt{N}r$. It follows that (\ref{Q}) reduces to
\begin{equation}
Q=4 \{\cosh (\sqrt{N}r)-\sinh(\sqrt{N}r)\}^2=4 e^{-2\sqrt{N}r}.
\end{equation}
This quantity is smaller than $1/2N$ if the squeezing parameter $r$ satisfies
\begin{equation}
r= \sqrt{2}\alpha \lambda \ell > \frac{3 \ln (2N)}{2 \sqrt{N}}.
\end{equation}
Under this condition on the pump amplitude $\alpha$, the coupling parameter $\lambda$ and the crystal length $\ell$, the output state $\rho$ produced by the above parametric down-conversion process with $2N$ symmetrically-tilted
plane waves therefore exhibits genuine multipartite entanglement.
This generalizes the result obtained in Ref. \cite{DBCK10} in the case of tripartite entanglement ($N=1$).
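At $\delta=0$ the threshold can be verified directly: the sufficient condition above is tight at $N=1$, where $4\,\nE^{-3\ln 2}=1/2$ exactly, and has increasing margin for larger $N$. In Python:

```python
import numpy as np

def Q(N, r):
    """Violation quantity Q = 4 exp(-2 sqrt(N) r) at zero phase mismatch."""
    return 4.0 * np.exp(-2.0 * np.sqrt(N) * r)

# just above the threshold r_min = 3 ln(2N) / (2 sqrt(N)), Q drops below 1/(2N)
results = []
for N in (1, 2, 5, 10):
    r_min = 3.0 * np.log(2.0 * N) / (2.0 * np.sqrt(N))
    results.append(Q(N, 1.01 * r_min) < 1.0 / (2.0 * N))
```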
For a small nonzero phase mismatch, we can expand (\ref{Q}) around $\delta=0$, which yields
\begin{eqnarray}
Q&=&4 e^{-2\sqrt{N}r} +\frac{\delta^2}{N}\left( [3-4Nr^2] e^{-2\sqrt{N}r} \right. \nonumber\\
&+& \left. 4[ 2 \sqrt{N} r-1] +[2 \sqrt{N} r-1]^2 e^{2\sqrt{N} r} \right) + O(\delta^4).\nonumber\\
\end{eqnarray}
Owing to the last term, this quantity increases exponentially with $\sqrt{N}r$.
As a consequence, $Q$ remains smaller than $1/2N$ only for low values of the phase mismatch.
This suggests that the genuine multipartite entanglement, at least when it is estimated with the criterion (\ref{C}), is very sensitive to $\delta$. This situation is in contrast with the phenomenon that we study in the next section, namely the localization of entanglement, which will be shown to be robust with respect to the phase mismatch.
\section{Localization of entanglement}
\subsection{Beamsplitting the output state}
It has been shown \cite{ASI05} that the entanglement of bisymmetric $(m+n)$-mode Gaussian states is unitarily {\it localizable}, i. e., that, through local unitary operations, it may be fully concentrated in a single pair of modes. We shall study explicitly this phenomenon here.
For that purpose we shall perform the following unitary transformation based on discrete Fourier series,
\begin{eqnarray}\label{unitary}
\hat{a}'_0(\tilde r)&=& \hat{a}_0(\tilde r) \nonumber\\
\hat{a}'_k(\tilde r) &=&\frac{1}{\sqrt{2N}} \sum_{n=1}^N e^{-\pi i (k-1) \frac{n-1}{N}} \{ \hat{a}_{n_+} (\tilde r) -e^{-\pi i k} \hat{a}_{n_-}(\tilde r) \}\nonumber\\
& & k=1,\cdots,2 N. \
\end{eqnarray}
Physically, this corresponds to beamsplitting the quantized fields at the output of the crystal.
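One can check that (\ref{unitary}) is indeed unitary on the $2N$ modes $\hat{a}_{n_\pm}$, and that its $k=1$ output is the symmetric combination $(2N)^{-1/2}\sum_n(\hat{a}_{n_+}+\hat{a}_{n_-})$. A Python sketch of the mixing matrix:

```python
import numpy as np

def mixing_matrix(N):
    """Matrix of Eq. (unitary): rows k = 1..2N, columns (1+, 1-, ..., N+, N-)."""
    W = np.zeros((2 * N, 2 * N), dtype=complex)
    for k in range(1, 2 * N + 1):
        for n in range(1, N + 1):
            ph = np.exp(-1j * np.pi * (k - 1) * (n - 1) / N) / np.sqrt(2 * N)
            W[k - 1, 2 * n - 2] = ph                              # a_{n+}
            W[k - 1, 2 * n - 1] = -np.exp(-1j * np.pi * k) * ph   # a_{n-}
    return W

W = mixing_matrix(3)
unitarity_error = np.max(np.abs(W @ W.conj().T - np.eye(6)))
symmetric_row = W[0]          # k = 1: all entries equal to 1/sqrt(2N)
```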
As a consequence, one deduces from (\ref{quad}) that the fields at the output of the crystal are transformed into
\begin{eqnarray}\label{sol3eq}
\hat{a}'_0(r) & = & U(r)\hat{a}'_0(0) + V(r){\hat{a}'_1}{^\dag}(0)\nonumber \\
\hat{a}'_1(r) & = & U(r)\hat{a}'_1(0) + V(r){\hat{a}'_0}{^\dag}(0) \nonumber\\
\hat{a}'_k(r) & = &\hat{a}'_k(0) \qquad k=2,\cdots, 2 N.
\end{eqnarray}
Accordingly, the quadratures are now given by
\begin{widetext}
\begin{eqnarray}\label{6quad2}
\hat{x}'_0(r) & = &{\rm Re} \,U(r) \hat{x}'_0(0)- {\rm Im}\, U(r) \hat{p}'_0(0)+{\rm Re} \,V(r) \hat{x}'_1(0)+ {\rm Im}\, V(r) \hat{p}'_1(0)\nonumber\\
\hat{p}'_0(r) & = & {\rm Re}\, U(r) \hat{p}'_0(0)+ {\rm Im}\, U(r) \hat{x}'_0(0)-{\rm Re}\, V(r) \hat{p}'_1(0)+ {\rm Im}\, V(r) \hat{x}'_1(0)\nonumber\\
\hat{x}'_1(r) & = & {\rm Re}\, U(r) \hat{x}'_1(0)- {\rm Im}\, U(r) \hat{p}'_1(0)+{\rm Re}\, V(r) \hat{x}'_0(0)+ {\rm Im}\, V(r) \hat{p}'_0(0) \nonumber\\
\hat{p}'_1(r) & = & {\rm Re}\, U(r) \hat{p}'_1(0)+ {\rm Im}\, U(r) \hat{x}'_1(0)-{\rm Re}\, V(r) \hat{p}'_0(0)+ {\rm Im}\, V(r) \hat{x}'_0(0)\\
\hat{x}'_k(r) & = & \hat{x}'_k(0) \qquad k=2,\cdots,2N\nonumber\\
\hat{p}'_k(r) & = & \hat{p}'_k(0)\nonumber.
\end{eqnarray}
\end{widetext}
Hence, the covariance matrix elements associated to the density matrix $\rho'$ are
\begin{eqnarray}\label{paramCM32}
&&\left\langle[ \hat{x}'_0(r)]^2\right\rangle= \left\langle[ \hat{p}'_0(r)]^2\right\rangle=\left\langle [\hat{x}'_1(r)]^2\right\rangle=\left\langle [\hat{p}'_1(r)]^2\right\rangle= a\nonumber \\
&&\left\langle \hat{x}'_0(r) \hat{x}'_1(r)\right\rangle = -\left\langle \hat{p}'_0(r) \hat{p}'_1(r)\right\rangle = b\nonumber \\
&& \left\langle \hat{x}'_0(r) \hat{p}'_1(r)\right\rangle = \left\langle \hat{p}'_0(r) \hat{x}'_1(r)\right\rangle = c\nonumber\\
&& \left\langle [\hat{x}'_k(r)]^2\right\rangle= \left\langle [\hat{p}'_k(r)]^2\right\rangle=1 \qquad k=2,\cdots,2 N,
\end{eqnarray}
and the other ones are zero.
Ordering the lines and columns according to $\hat{x}'_0$, $\hat{p}'_0$, $\hat{x}'_1$, $\hat{p}'_1$, $\hat{x}'_2$, $\hat{p}'_2$,
$\cdots$ $\hat{x}'_{2N}$, $\hat{p}'_{2N}$, the covariance matrix reads
\begin{equation}
\sigma' =
\left(\begin{array}{cccc}
a & 0 & b & c \\
0 & a & c & -b\\
b & c & a & 0 \\
c & -b & 0 & a \\
\end{array}\right) \bigoplus I_{2(2N-1)},
\label{CM4}
\end{equation}
where $I_{k}$ is the $k\times k$ unity matrix.
Its symplectic eigenvalues $\{\nu'_i\}$ are the same as those of $\sigma$ as $\sigma'$ is obtained by congruence,
\begin{equation}
\sigma'=S^{\rm T} \sigma S,
\end{equation}
where the elements of the symplectic transformation $S$ are given by (\ref{unitary}).
In the next subsection, we investigate the non-separability properties of the pertaining state $\rho'$.
\subsection{Logarithmic negativity}
The covariance matrix (\ref{CM4}) is bisymmetric, i.e., the local exchange of any pairs of modes within its two diagonal blocks leaves the matrix invariant \cite{ASI05}. Notice that here the unitary transformation (\ref{unitary}) is such that the covariance matrix $\sigma'$ is block-diagonal, and invariant under the exchange of modes $0$ and $1$.
The separability of the state $\rho'$ can be determined as in Ref. \cite{ASI05} by (i) considering a partition of the system in two subsystems and (ii) investigating the positivity of the partially transposed matrix $\tilde \rho'$ obtained upon transposing the variables of only one of the two subsystems.
The positivity of partial transposition (PPT) is a necessary condition for the separability of any bipartite quantum state \cite{P96},\cite{H96}.
It is also sufficient for the separability of $(1+n)-$mode Gaussian states \cite{S00},\cite{WW01}.
The covariance matrix $\tilde \sigma'$ of the partially transposed state $\tilde \rho'$ with respect to one subsystem is obtained \cite{S00} by changing the signs of the quadratures $p'_j$ belonging to that subsystem.
Here, because of the block-diagonal structure of $\sigma'$ we can restrict the analysis to the first block and consider the transposition of mode 1, i.e. change the sign of $p'_1$:
\begin{equation}
\tilde \sigma' =
\left(\begin{array}{cccc}
a & 0 & b & c \\
0 & a & -c & b\\
b & -c & a & 0 \\
c & b & 0 & a \\
\end{array}\right) \bigoplus I_{2(2N-1)}.
\label{TCM3}
\end{equation}
Its symplectic eigenvalues $\{\tilde \nu'_j\}$ are $1$ and
\begin{equation}
\tilde \nu'_\pm \equiv a \pm \sqrt{b^2+c^2}=(|U| \pm |V|)^2. \label{nu}
\end{equation}
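The eigenvalues (\ref{nu}) of the $4\times 4$ block can be verified numerically with the standard two-mode symplectic invariants, $\tilde\Delta=\det A+\det B+2\det \tilde C$ and $\det\tilde\sigma'$, for which $2\tilde\nu'^2_\pm=\tilde\Delta\pm\sqrt{\tilde\Delta^2-4\det\tilde\sigma'}$. A Python sketch (the values of $a$, $b$, $c$ below are illustrative, not taken from (\ref{abc})):

```python
import math

# Illustrative covariance-matrix entries (any a > sqrt(b^2 + c^2) > 0 works)
a, b, c = 2.0, 1.2, 0.7

# 2x2 blocks of the 4x4 part of (TCM3): A = a*I, B = a*I, C = [[b, c], [-c, b]]
detA = a * a
detB = a * a
detC = b * b + c * c                          # det [[b, c], [-c, b]]
det_sigma = (a * a - (b * b + c * c)) ** 2    # determinant of the 4x4 block

# Symplectic eigenvalues from the two-mode invariants
Delta = detA + detB + 2 * detC
nu_minus = math.sqrt((Delta - math.sqrt(Delta**2 - 4 * det_sigma)) / 2)
nu_plus = math.sqrt((Delta + math.sqrt(Delta**2 - 4 * det_sigma)) / 2)

s = math.sqrt(b * b + c * c)
print(nu_minus, a - s)   # agree: nu'_- = a - sqrt(b^2 + c^2)
print(nu_plus, a + s)    # agree: nu'_+ = a + sqrt(b^2 + c^2)
```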
The necessary and sufficient PPT condition for the separability of the state $\rho'$ amounts to having $\tilde \nu'_j \geq 1$ for all $j$. We can therefore focus on the smallest eigenvalue $\tilde \nu'_-$.
The extent to which this criterion is violated is measured by $\mathcal{E_N}(\rho')$, the logarithmic negativity of $\rho'$ defined as the logarithm of the trace norm of $\tilde \rho'$:
\begin{equation}
\mathcal{E_N}(\rho')=\ln || \tilde \rho' ||_1=\max( 0, -\ln \tilde \nu'_-), \label{logneg}
\end{equation}
where $\tilde \nu'_-$ is obtained explicitly from (\ref{nu}) and (\ref{UV}),
\begin{widetext}
\begin{equation}
\tilde \nu'_-= \frac{1}{ \gamma^2}\left( \sqrt{1-\frac{\delta^2}{N \cosh^2\left(\sqrt{N} \gamma r\right)}} \cosh\left(\sqrt{N} \gamma r\right) - \sinh\left(\sqrt{N} \gamma r\right) \right)^2 . \label {logneg1}
\end{equation}
\end{widetext}
We first consider the case of zero phase mismatch,
$\delta=0$, for which (\ref{logneg1}) reduces to
\begin{eqnarray}
\tilde \nu'_-&=& \left( \cosh(\sqrt{N} r) - \sinh(\sqrt{N} r)\right)^2\nonumber\\
&=&e^{-2\sqrt{N}r}.
\end{eqnarray}
The logarithmic negativity is thus positive and, furthermore, scales as the square root of the number of modes,
\begin{equation}
\mathcal{E_N}(\rho')=2\sqrt{N}r. \label{lnN}
\end{equation}
Note that the covariance matrix (\ref{CM4}) is block-diagonal since (\ref{abc}) entails that
$a=\cosh(2\sqrt{N}r)$, $b=\sinh(2\sqrt{N}r)$ and $c=0$.
It is readily recognized that its nontrivial part pertains to two
entangled modes with a squeezing parameter $2\sqrt{N}r$ and the remainder to
$2N-1$ modes in the vacuum state.
This is one instance of the localization of
entanglement as introduced in Ref.~\cite{ASI05}.
The result (\ref{lnN}) generalizes the one obtained in Ref.~\cite{DBCK10} for a single pair of tilted pumps ($N=1$) and shows that the effective squeezing is enhanced by a factor $\sqrt{N}$.
When the phase mismatch takes on a finite value, the logarithmic negativity (\ref{logneg}) is governed by the smallest symplectic eigenvalue (\ref{nu}).
It is instructive to obtain a more explicit expression for a small phase mismatch by expanding around the case $\delta=0$,
\begin{equation}
\mathcal{E_N}(\rho')=2\sqrt{N}r-
\frac{\delta^2 r }{\sqrt{N}} \left(1- \frac{\tanh{\sqrt{N}r}}{\sqrt{N}r}\right) +O(\delta^4) . \label{lognegD2}
\end{equation}
The correction is of second order in $\delta$ and negative. It is a bounded function of $N$ whose magnitude decreases for large $N$.
The positivity of the logarithmic negativity implies that the central mode is entangled with the uniform superposition of the tilted modes. This entanglement localization decreases only slowly when the phase mismatch $\delta$ increases.
This confers some robustness to the entanglement localization process and stands in contrast with the genuine multipartite entanglement (\ref{C}), which is more sensitive to the phase mismatch.
Recalling that the squeezing parameter $r=\sqrt{2} \alpha \lambda \ell$, equation
(\ref{lognegD2}) also quantifies explicitly the entanglement localization in terms of the pump amplitude $\alpha$, the coupling parameter $\lambda$, the crystal length $\ell$ and provides the scaling with $N$.
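As a numerical consistency check, the exact expression (\ref{logneg1}) can be compared with expansion (\ref{lognegD2}); a Python sketch, in which we take $\gamma=\sqrt{1-\delta^2/N}$ (the definition of $\gamma$ is given earlier in the paper; this form is assumed here and reproduces the $\delta=0$ limit):

```python
import math

def log_negativity(N, r, delta):
    """Exact E_N from Eq. (logneg1), assuming gamma = sqrt(1 - delta^2/N)."""
    gamma = math.sqrt(1.0 - delta**2 / N)
    x = math.sqrt(N) * gamma * r
    root = math.sqrt(1.0 - delta**2 / (N * math.cosh(x) ** 2))
    nu_minus = (root * math.cosh(x) - math.sinh(x)) ** 2 / gamma**2
    return max(0.0, -math.log(nu_minus))

def log_negativity_expanded(N, r, delta):
    """Small-delta expansion, Eq. (lognegD2)."""
    x = math.sqrt(N) * r
    return 2 * x - (delta**2 * r / math.sqrt(N)) * (1 - math.tanh(x) / x)

N, r = 4, 0.5
print(log_negativity(N, r, 0.0))   # = 2*sqrt(N)*r = 2.0
for delta in (0.1, 0.2):
    exact = log_negativity(N, r, delta)
    approx = log_negativity_expanded(N, r, delta)
    print(delta, exact, approx)    # the two differ only at O(delta^4)
```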
\section{Conclusions}
We have presented a simple active optical scheme for the generation of spatial multipartite entanglement.
It consists in using $2N$ symmetrically tilted plane waves as pump modes in the spatially structured parametric down-conversion process taking place in a nonlinear crystal. We have found the analytical solution of the corresponding model in the rotating wave approximation for arbitrary $N$ and possibly nonzero phase mismatch. We have studied quantitatively the entanglement of the $2N+1$ coupled modes obtained at the output of the crystal.
When the phase matching condition is satisfied, the system exhibits genuine multipartite entanglement. It has also been shown to subsist for nonzero, albeit small, values of the phase mismatch.
In addition, our scheme provides a realistic proposal for the experimental realization of entanglement localization.
By mixing different spatial modes with beam splitters, we can localize the
entanglement distributed initially among all the $2N+1$ spatial modes
in only two well-defined modes, formed by the linear combinations of the
initial ones. Interestingly, this entanglement localization results in an enhancement
of the entanglement by a factor $\sqrt{N}$. Moreover, this process has been shown to be robust with respect to the phase mismatch.
\section{Acknowledgments}
The authors are grateful to M. I. Kolobov for stimulating and helpful discussions.
This work was supported by the FET programme COMPAS FP7-ICT-212008.
\section{Introduction}
This work deals with dynamics of populations experiencing intrinsic noise caused by the discreteness of individuals and stochastic character of their interactions. When the average size $N$ of such a (stationary or quasi-stationary) population is large, the noise-induced fluctuations in the observed number of individuals are typically small, and only rarely large. In many applications, however, the rare large fluctuations can be very important. This is certainly true when their consequences are catastrophic, such as in the case of extinction of an isolated self-regulating
population after having maintained a long-lived metastable state, with applications ranging from population biology \cite{bartlett,assessment} and epidemiology \cite{bartlett,epidemic}
to genetic regulatory networks in living cells \cite{bio}. Another example of a catastrophic transition driven by a rare large intrinsic fluctuation is population explosion \cite{MS}. Rare large fluctuations may also induce stochastic switches between different metastable states \cite{dykman}; these appear in genetic regulatory networks \cite{switches} and in other contexts. Less dramatic but still important examples involve large fluctuations in the production rates of molecules on the surfaces of micron-sized dust grains in the interstellar medium, where the number of atoms, participating in the chemical reactions, can be relatively small \cite{green,biham}. As stochastic population dynamics is far from equilibrium and therefore
defies standard methods of equilibrium statistical mechanics, large fluctuations of stochastic populations are of much interest to physics \cite{kampen,gardiner}.
In this paper we consider single-species populations which are well mixed, so that spatial degrees of freedom are irrelevant. To account for stochasticity of gain-loss processes (in the following -- reactions) and discreteness of the individuals (in the following -- particles), we assume a Markov process and employ a master equation which describes the time evolution of the probability $P_n(t)$ to have a population size $n$ at (continuous) time $t$. If the population exhibits neither extinction, nor a switch to another metastable state or to an infinite population size, a natural goal is to determine the stationary probability distribution of the population size \cite{kubo}. For metastable populations -- populations experiencing either extinction, or switches between different metastable states -- one is usually interested in the mean time to extinction or escape (MTE), and in the long-lived \textit{quasi}-stationary distribution (QSD) of the population size. For single-step
processes these quantities can be calculated by standard methods. The MTE can be calculated exactly by employing the backward master
equation \cite{kampen,gardiner}. This procedure yields an exact but unwieldy analytic expression for the MTE which, for a large population size in the metastable state, can be simplified via a saddle-point approximation \cite{Doering}. In its turn, the QSD of a single-step process can be found, in many cases, through a recursion.
For multi-step processes neither the MTE, nor QSD can be calculated exactly. Many practitioners have used, in different contexts of physics, chemistry, population biology, epidemiology, cell biology, \textit{etc}, what is often called ``the diffusion approximation": an approximation of the master equation by a Fokker-Planck equation. The latter can be
obtained via the van Kampen system size expansion, or other related prescriptions. With a Fokker-Planck equation at hand, the MTE and QSD can again be evaluated by standard methods \cite{kampen,gardiner}. Unfortunately, this approximation is in general uncontrolled, and fails in its description of
the tails of the QSD. As a result, it gives exponentially large errors in the MTE~\cite{Doering,gaveau,kessler,Assaf2}.
Until recently, the MTE and QSD had been calculated accurately only for a few model problems involving multi-step processes. Recently, Escudero and Kamenev \cite{EK} and Assaf and Meerson \cite{Assaf4} addressed quite a general set of reactions and developed controlled WKB approximations for the MTE and QSD for population switches \cite{EK} and population extinction \cite{Assaf4}. When necessary, the WKB approximation must be supplemented by a recursive solution of the master equation at small population sizes \cite{MS,Assaf4}, and by the van Kampen system size expansion in narrow regions where the ``fast" and ``slow" WKB modes are coupled and the WKB approximation fails~\cite{MS,EK,Assaf4}.
The techniques developed in Refs.~\cite{EK} and \cite{Assaf4} (see also Refs. \cite{dykman,kessler,MS}) were formulated directly in the space of population size $n$. An alternative approach invokes a complementary space which can be interpreted as a momentum space.
The momentum-space representation is obtained when the master equation -- an infinite set of linear ordinary differential equations -- is transformed into a single evolution equation -- a linear partial differential equation -- for the probability generating function $G(p,t)$. Here $p$, a complementary variable, is conjugate to the population size $n$ and plays the role of ``momentum" in an effective Hamiltonian system which encodes, in the leading order of $1/N$-expansion, the stochastic population dynamics. One can then perform spectral decomposition of this linear partial differential equation for $G(p,t)$. In order to describe the stationary or metastable states, it suffices to consider the ground state and the lowest excited state of this spectral decomposition, whereas higher modes only contribute to short-time transients \cite{Assaf1,Assaf2}.
The ordinary differential equations for the ground state and the lowest excited state are determined by the specific set of reactions the population undergoes. The order of these equations is equal to the highest order of inter-particle reactions. For example, for two- (three-) body reactions the equations are of the second (third) order, \textit{etc}. In general, these ordinary differential equations cannot be solved exactly, and some perturbation techniques, employing the small parameter $1/N \ll 1$, need to be used.
The momentum-space spectral theory was developed~\cite{Assaf1,Assaf2,Assaf} for two-body reactions. Here we extend the theory to any many-body reactions. We also determine, in the general case, the previously unknown boundary conditions for the above-mentioned eigenvalue problems. If there is no absorbing state at infinity, the boundary conditions are ``self-generated" by the demand that the probability generating function $G(p,t)$ be, at any $t$, an entire function on the \textit{complex} plane $p$ \cite{entire}. We show that, for two-body reactions, the population extinction problem can always be solved by matching the exact solution of a quasi-stationary equation for the lowest excited state (see below) with a perturbative solution of a \textit{non}-quasi-stationary equation for the same state. This procedure always works when $N$ is sufficiently large. For three-, four-, $\dots$ body reactions the spectral decomposition can be used in conjunction with a $p$-space
WKB (Wentzel-Kramers-Brillouin) approximation which employs the same small parameter $1/N$ but does not rely on exact solution of the quasi-stationary equation. We find that there is a region of $p$ where the WKB approximation breaks down, and a region of $p$ where its accuracy is insufficient. In the former region a boundary-layer solution can be found and matched with the WKB solution. In the latter region a simple \textit{non}-WKB perturbative solution can be obtained. The theory extensions presented here turn the momentum-space spectral theory of large fluctuations into a more general tool.
As the evolution equation for $G(p,t)$ is \textit{equivalent} to the master equation, the $p$-space approach is clearly advantageous, compared to the $n$-space approach, when the problem in the $p$ space admits an exact solution, see Refs.~\cite{green,gardiner}. Otherwise,
the technical advantages of the $p$-space approach are not \textit{a priori} obvious. In any case, it provides a viable alternative, and an interesting perspective, to theory of large fluctuations of stochastic populations.
Here is the layout of the rest of the paper. Section~II briefly introduces the momentum-space spectral formalism, whereas in sections~III and IV we describe the methods of solution and illustrate them on several model examples. Sec.~III deals with a well-studied prototypical chemical reaction scheme which describes a stationary production of hydrogen molecules on interstellar dust grains. Here we show that the WKB approximation not only gives accurate results for the production rate (including its fluctuations) of hydrogen molecules, but also yields a complete stationary probability distribution function of the number of hydrogen atoms, including its non-Gaussian tails. In Sec.~IV we deal with isolated populations undergoing intrinsic-noise-driven extinction after maintaining a long-lived metastable state. Here, after some general arguments, we consider two different examples -- one studied previously and one new -- and determine the MTE and QSD. Throughout the paper we compare our analytical results with numerical solutions of the pertinent master equation and, when possible, with previous analytical results. Section~V summarizes our findings and discusses the advantages and disadvantages of the $p$-space method compared with the ``real" space WKB method \cite{dykman,kessler,MS,EK,Assaf4}.
\section{Master equation, probability generating function and spectral formulation}
Populations consist of discrete ``particles" undergoing stochastic gain and loss reactions. To account for both discreteness and stochasticity,
we assume the Markov property, see \textit{e.g.} Refs. \cite{kampen,gardiner}, and employ the master equation
\begin{equation}\label{master0}
\dot{P}_n(t)=\sum_{n^{\prime}\neq n} \left[ W_{n^{\prime} n} P_{n^{\prime}}
- W_{n n^{\prime}} P_{n} \right]
\end{equation}
which describes the time evolution of the probability distribution function $P_n(t)$ to
have $n$ particles at time $t$. Here $W_{n n^{\prime}}$ is the
transition rate matrix; it is assumed that $P_{n<0}=0$.
The probability generating function, see \textit{e.g.} Refs. \cite{kampen,gardiner}, is defined as
\begin{equation}\label{genprob}
G(p,t)=\sum_{n=0}^{\infty} p^n P_n(t)\,.
\end{equation}
Here $p$ is an auxiliary variable which is conjugate to the number of particles $n$.
Once $G(p,t)$ is known, the probability distribution function $P_n(t)$ is given by the Taylor coefficients
\begin{equation}\label{prob}
P_n(t)=\left.\frac{1}{n!}\frac{\partial ^n G}{\partial
p^n}\right|_{p=0}
\end{equation}
or, alternatively, by employing the Cauchy theorem
\begin{equation}\label{cauchy}
P_n(t)=\frac{1}{2\pi i}\oint \frac{G(p,t)}{p^{n+1}}dp,
\end{equation}
where the integration has to be performed over a closed contour in
the \textit{complex} $p$-plane around the singular point $p=0$. For stochastic populations which do not exhibit
population explosion \cite{MS}, the probability $P_n(t)$ decays faster than exponentially at large $n$. Therefore, $G(p,t)$ is an entire function of $p$ on the complex
$p$-plane \cite{entire}.
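As a simple illustration of (\ref{genprob})--(\ref{cauchy}): for a Poisson-distributed population with mean $\lambda$ one has $G(p)=e^{\lambda(p-1)}$, an entire function, and the contour integral (\ref{cauchy}) over the unit circle recovers $P_n$. A Python sketch (the trapezoidal rule on the unit circle is spectrally accurate for entire functions):

```python
import cmath
import math

lam = 3.0
G = lambda p: cmath.exp(lam * (p - 1.0))   # Poisson generating function

def P(n, M=256):
    """Cauchy coefficient formula (cauchy): unit circle, trapezoidal rule."""
    s = sum(G(cmath.exp(2j * math.pi * k / M)) * cmath.exp(-2j * math.pi * k * n / M)
            for k in range(M))
    return (s / M).real

for n in range(5):
    exact = math.exp(-lam) * lam**n / math.factorial(n)
    print(n, P(n), exact)   # the two columns agree
```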
If the reaction rates are polynomial in $n$, one can transform the master equation (\ref{master0}) into a single linear partial differential equation for the probability generating function,
\begin{equation}\label{geneq}
\frac{\partial G}{\partial t}= \hat{{\cal L}} G\,,
\end{equation}
where $\hat{{\cal L}}$ is a linear differential operator which includes powers of the partial differentiation operator $\partial/\partial p$. Equation~(\ref{geneq}) is exact and equivalent to the master equation (\ref{master0}). If only one-body reactions are present, $\hat{{\cal L}}$ is of first order in $\partial/\partial p$, and Eq.~(\ref{geneq}) can be solved by characteristics \cite{gardiner}. For many-body reactions one can proceed by expanding $G(p,t)$ in the yet unknown eigenmodes and eigenvalues of the problem \cite{Assaf,Assaf1,Assaf2}:
\begin{equation}\label{genexpansion}
G(p,t)= G_{st}(p)+\sum_{k=1}^{\infty} a_k \phi_{k}(p) e^{-E_{k}t}\,.
\end{equation}
As a result, partial differential equation~(\ref{geneq}) is transformed into an infinite set of ordinary differential equations: for the (stationary) ground-state mode $G_{st}(p)$ and for the eigenmodes of excited states $\{\phi_k(p)\}_{k=1}^{\infty}$. By virtue of Eq.~(\ref{prob}) or (\ref{cauchy}), the ground state eigenmode determines the \textit{stationary} probability distribution function of the system. If a long-lived population ultimately goes extinct, the stationary distribution is trivial: $P_n=\delta_{n,0}$, where $\delta_{n,0}$ is the Kronecker delta. What is of interest in this case is the \textit{quasi-stationary} distribution and its (exponentially long) decay time which yields an accurate approximation to the MTE. These quantities are determined by the lowest excited eigenmode $\phi(p)$ and the eigenvalue $E_1$, respectively \cite{Assaf1,Assaf2}. The higher modes only contribute to short-time transients. Therefore, in the following we will focus on determining $G_{st}$ or solving the eigenvalue problem for $\phi_1(p)$ and $E_1$.
\section{Stationary distributions: Ground-state calculations}
As a first example, we consider a simple model of production of $H_2$ molecules on micron-sized dust grains in the interstellar medium. This model was investigated by Green \textit{et al.} \cite{green}, who computed the stationary probability distribution function of the number of hydrogen atoms by finding an exact solution to the ordinary differential equation for $G_{st}$. The same results were obtained, by a different method, by Biham and Lipshtat \cite{biham}. We will use this problem as a benchmark of the ground-state calculations using the momentum-space WKB approach. As we will see, this approach gives, for $N\gg 1$, an accurate \textit{approximate} solution for $G_{st}(p)$, and so it can be employed for many other models where no exact solutions are available.
Consider the following set of reactions: absorption of $H$-atoms by the grain surface $\emptyset \stackrel{\alpha}{\rightarrow} H$, desorption of $H$-atoms, $H\stackrel{\beta}{\rightarrow} \emptyset$, and formation of $H_2$-molecules from pairs of $H$-atoms which can be formally described as annihilation $2H\stackrel{\gamma}{\rightarrow} \emptyset$.
To calculate the production rate of $H_2$-molecules, one needs to determine the stationary probability distribution function of the $H$-atoms, $P_n(t\to\infty)$. For convenience, we rescale time and reaction rates by the desorption rate $\beta$ and denote $N=2\beta/\gamma$ and $R=\alpha\gamma/(2\beta^2)$. Ignoring fluctuations, one can write down the following (rescaled) deterministic rate equation:
\begin{equation}
\dot{\bar{n}}=NR-\bar{n}-\frac{2}{N}\bar{n}^2\,,
\label{rateH}
\end{equation}
where $\bar{n}(t) \gg 1$ is the average population size. The only positive fixed point of this equation,
\begin{equation}\label{rateeq1}
\bar{n}=\frac{N}{4}(\sqrt{1+8R}-1),
\end{equation}
is attracting, and the stationary probability distribution function $P_n$ is expected to be peaked around it. The master equation describing the stochastic dynamics of this system in rescaled time is
\begin{eqnarray}
\label{master1}
\frac{d}{dt}{P}_{n}(t)=\frac{1}{N}\left[(n+2)(n+1)P_{n+2}(t)-n(n-1)P_{n}(t)\right]+\left[(n+1)P_{n+1}(t)-nP_{n}(t)\right]+NR(P_{n-1}- P_n)\,.
\end{eqnarray}
This yields the following partial differential equation for $G(p,t)$ \cite{green}:
\begin{equation}\label{pde1}
\frac{\partial G}{\partial t} =
\frac{1}{N}(1-p^2)\frac{\partial^2 G}{\partial p^2}+(1-p)\frac{\partial G}{\partial p}+NR(p-1)G\,.
\end{equation}
The steady-state solution $G_{st}$ obeys the ordinary differential equation
\begin{equation}\label{ode1}
\frac{1}{N}(1+p)G_{st}^{\prime\prime}+G_{st}^{\prime}-NRG_{st}=0\,,
\end{equation}
where primes denote the $p$-derivatives. The boundary conditions are ``self-generated''. Indeed, the equality $G(p=1,t)=1$ holds at all times. This reflects conservation of probability, see Eq.~(\ref{genprob}). Therefore,
\begin{equation}\label{at1}
G_{st}(1)=1\,.
\end{equation}
Furthermore, Eq.~(\ref{ode1}) has a singular point at $p=-1$. As $G_{st}(p)$ must be analytic at $p=-1$,
we demand
\begin{equation}\label{secbc}
G_{st}^{\prime}(-1)-NRG_{st}(-1)=0.
\end{equation}
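Before solving the boundary-value problem analytically, one can check it by brute force: truncate master equation (\ref{master1}) at a large population size and relax it to the stationary state, whose mean must lie close to the fixed point (\ref{rateeq1}). A Python sketch (truncation size, time step, and parameters are illustrative):

```python
import math

N, R = 20.0, 1.0
M = 60                       # truncation: P_n for n > M is negligible here
P = [1.0] + [0.0] * M        # start from the empty grain

def deriv(P):
    """Right-hand side of master equation (master1), reflecting at n = M."""
    dP = [0.0] * (M + 1)
    for n in range(M + 1):
        d = -n * (n - 1) * P[n] / N - n * P[n]       # losses: 2H->0 and H->0
        if n + 2 <= M:
            d += (n + 2) * (n + 1) * P[n + 2] / N    # gain from 2H -> 0
        if n + 1 <= M:
            d += (n + 1) * P[n + 1]                  # gain from desorption
        if n >= 1:
            d += N * R * P[n - 1]                    # gain from adsorption
        if n < M:
            d -= N * R * P[n]    # adsorption loss; switched off at n = M
        dP[n] = d                # so that total probability is conserved
    return dP

dt, steps = 5e-4, 20000          # relax to t = 10; relaxation time is O(1)
for _ in range(steps):
    P = [p + dt * d for p, d in zip(P, deriv(P))]

mean = sum(n * p for n, p in enumerate(P))
nbar = N / 4 * (math.sqrt(1 + 8 * R) - 1)   # fixed point (rateeq1): 10.0
print(mean, nbar)            # close: fluctuation corrections are O(1/N)
```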
The boundary-value problem (\ref{ode1})-(\ref{secbc}) is exactly solvable in special functions \cite{green}. For a general set of reactions, however, one cannot expect an exact solution. Still, one can employ the small parameter $1/N$ to develop an accurate analytical approximation. To illustrate this point we will proceed as if we were unaware of the exact solution, and then compare the approximate solution with the exact one. As the small parameter $1/N$ appears in the coefficient of the highest derivative, it is natural to use (a dissipative variant of) the stationary WKB approximation in the $p$-space \cite{Assaf}. The WKB ansatz is
\begin{equation}\label{ansatz}
G_{st}(p)=a(p)e^{-NS(p)},
\end{equation}
where the action $S(p)$ and amplitude $a(p)$ are non-negative functions of $p$. Using this ansatz in Eq.~(\ref{pde1}) with a zero left hand side, we obtain
\begin{eqnarray}\label{wkbfull1}
\frac{1}{N}(1-p^2)\left[a^{\prime\prime}-2NS^{\prime}a^{\prime}-NS^{\prime\prime}a+N^2(S^{\prime})^2 a\right]+(1-p)(a^{\prime}-NS^{\prime}a-N R\,a)=0\,.
\end{eqnarray}
In the leading order ${\cal O}(N)$ we obtain a stationary Hamilton-Jacobi equation
$H[p,-S^{\prime}(p)]=0$ with zero energy, \textit{cf}. Ref. \cite{Kamenev1}. The effective Hamiltonian is
\begin{equation}\label{Ham0}
H(p,q)=(1-p)[(1+p)q^2+q-R],
\end{equation}
where we have introduced $q(p)=-S^{\prime}(p)$: the reaction coordinate conjugate to the momentum $p$. The trivial zero-energy phase orbit $p=1$ is an invariant line of the Hamiltonian; it corresponds to the deterministic dynamics \cite{Kamenev1}. Indeed, the Hamilton's equation for $\dot{q}$,
$$
\dot{q}=R-q-2 q^2,
$$
coincides, in view of the relation $q=n/N$, with the deterministic rate equation~(\ref{rateH}). Hamiltonian (\ref{Ham0}) also has two nontrivial invariant zero-energy lines which are composed of the two solutions, $q_-(p)$ and $q_+(p)$, of
the quadratic equation $(1+p)q^2+q-R=0$:
\begin{equation}\label{spr}
q_-(p)=\frac{-1-v(p)}{2(1+p)}\,,\;\;\;q_+(p)=\frac{-1+v(p)}{2(1+p)}\,.
\end{equation}
Here we have denoted
\begin{equation}\label{vp}
v(p)=\sqrt{1+4R(1+p)}.
\end{equation}
The phase plane of this system is shown in Fig.~\ref{phasehyd}. The phase orbits $q=q_{-}(p)$ must be discarded. This is because $q_{-}(p)$ diverges at $p=-1$, whereas $G_{st}(p)$, and therefore $S(p)$, must be analytic everywhere.
The remaining nontrivial zero-energy phase orbit $q_+(p)\equiv q(p)$ has a special role. It describes the most probable path along which the system evolves, (almost) with certainty, in the course of a fluctuation bringing the system from the fixed point $(1,q_1)$ in the phase space $(p,q)$ to a given point, see Fig.~\ref{phasehyd}. Here $q_1=(1/4)(\sqrt{1+8R}-1)$ is the attracting point of the deterministic rate equation, see Eq.~(\ref{rateeq1}).
\begin{figure}
\includegraphics[width=9.0cm,height=6.6cm,clip=]{fig1.eps}
\caption{Molecular hydrogen production on a grain. Shown are zero-energy
orbits of Hamiltonian (\ref{Ham0}) on the phase plane $(p,q)$. The
thick solid line corresponds to the instanton $q=q_+(p)$, see Eq.~(\ref{spr}). The motion along the vertical line $p=1$ is described by the deterministic rate equation ~(\ref{rateH}). The dashed lines depict the branch $q=q_-(p)$. It is non-physical at $q<0$ and does not contribute
to the WKB solution at $q>0$.} \label{phasehyd}
\end{figure}
Integrating the equation $S^{\prime}(p)=-q_+(p)$, we obtain
\begin{eqnarray}\label{act1}
S(p)=-v(p)+v(1)+\ln\frac{v(p)+1}{v(1)+1}\,,
\end{eqnarray}
where we have fixed the definitions of $a(p)$ and $S(p)$ by demanding
$S(p=1)=0$.
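As a quick check, the action (\ref{act1}) indeed satisfies $S^{\prime}(p)=-q_+(p)$; a Python sketch comparing a central finite difference of (\ref{act1}) with (\ref{spr}):

```python
import math

R = 1.0   # illustrative value of the rescaled adsorption rate
v = lambda p: math.sqrt(1 + 4 * R * (1 + p))                 # Eq. (vp)
S = lambda p: -v(p) + v(1) + math.log((v(p) + 1) / (v(1) + 1))   # Eq. (act1)
q_plus = lambda p: (-1 + v(p)) / (2 * (1 + p))               # Eq. (spr)

h = 1e-6
for p in (-0.5, 0.0, 0.7):
    dS = (S(p + h) - S(p - h)) / (2 * h)   # central finite difference
    print(p, dS, -q_plus(p))               # agree
```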
To calculate the amplitude $a(p)$ we proceed to the subleading ${\cal O}(1)$ order in Eq.~(\ref{wkbfull1}):
\begin{equation}
-2(1+p)S^{\prime}a^{\prime}-(1+p)S^{\prime\prime}a+a^{\prime}=0.
\end{equation}
Using $S(p)$ from Eq.~(\ref{act1}), we arrive at a first-order ordinary differential equation for $a(p)$,
\begin{equation}
\frac{a^{\prime}(p)}{a(p)}=\frac{4R^2(1+p)}{v(p)^2[1+v(p)]^2}\,.
\end{equation}
Solving this equation, we obtain the WKB solution
\begin{equation}\label{gwkb}
G_{st}^{WKB}(p)=\frac{v(1)^{1/2}\left[1+v(p)\right]}{v(p)^{1/2}[1+v(1)]}e^{-NS(p)},
\end{equation}
where the integration constant is chosen so as to obey boundary condition~(\ref{at1}). As one can easily check, WKB solution (\ref{gwkb}) also obeys boundary condition (\ref{secbc}).
As expected, the pre-exponential factor $a(p)$ of the WKB solution (\ref{gwkb}) diverges at the turning point $p_{tp}=-1-1/(4R)<-1$ of the zero-energy phase orbit, see Fig.~\ref{phasehyd}. As a result, the WKB solution breaks down in a close vicinity of this point. At $p<p_{tp}\,$ a WKB solution of a different nature appears: it exhibits decaying oscillations as a function of $p$. The oscillating WKB solution can be found by treating $S(p)$ as a complex-valued, rather than real, function. We will not need the oscillating solution, because the non-oscillating one, Eq.~(\ref{gwkb}), turns out to be sufficient for the purpose of calculating the probabilities $P_n$, see below.
Now we can compare WKB solution (\ref{gwkb}) with the exact solution of the problem (\ref{ode1})-(\ref{secbc}), derived by Green \textit{et al.} \cite{green}:
\begin{equation}\label{gex}
G_{st}^{exact}(p)=\left(\frac{2}{1+p}\right)^{\frac{N-1}{2}}\frac{I_{N-1}[2N \sqrt{R(1+p)}]}{I_{N-1}(2N \sqrt{2R})}\,,
\end{equation}
where
$I_k(w)$ is the modified Bessel function. To this end let us calculate the large-$N$ asymptote of $I_{N-1}[2N \sqrt{R(1+p)}]$ by using the integral definition of the modified Bessel function \cite{Abramowitz}
\begin{eqnarray}
I_{N-1}[2N\sqrt{R(1+p)}]=\frac{\left[N^2 R(1+p)\right]^{\frac{N-1}{2}}}{\sqrt{\pi}\,\Gamma(N-1/2)}\int_{-1}^1\frac{(1-t^2)^N e^{-2N \sqrt{R(1+p)} t}}{(1-t^2)^{3/2}}dt,
\end{eqnarray}
where $\Gamma(\dots)$ is the Euler Gamma function.
As $N\gg 1$, we can evaluate the integral by the saddle point approximation \cite{orszag}. Denoting $f(t)=\ln (1-t^2)-2\sqrt{R(1+p)} t,$
we find the relevant saddle point
$$
t_*(p)=\frac{1-\sqrt{1+4R(1+p)}}{2\sqrt{R(1+p)}}=-\sqrt{\frac{v(p)-1}{v(p)+1}}\,,
$$
with $v(p)$ from Eq.~(\ref{vp}). Then, expanding $f(t)\simeq f(t_*)+(1/2)f^{\prime\prime}(t_*)(t-t_*)^2$ with $f^{\prime\prime}(t_*)=-v(p)[1+v(p)]$, and performing the Gaussian integration, we obtain the $N\gg 1$ asymptote
\begin{eqnarray}\label{fp}
I_{N-1}[2N\sqrt{R(1+p)}]\simeq \frac{1+v(p)}{2\sqrt{2}\,\Gamma(N-1/2)\sqrt{N v(p)}}\left[N^2 R(1+p)\right]^{\frac{N-1}{2}}e^{N\{v(p)-1+\ln 2-\ln[1+v(p)]\}}\,.
\end{eqnarray}
Note that the saddle point approximation is valid on the entire segment $-1\leq p\leq 1$. In particular, Eq.~(\ref{fp}) with $p=1$ yields the $N\gg 1$ asymptote of the denominator of Eq.~(\ref{gex}). Now one can see that the large-$N$ asymptote of Eq.~(\ref{gex}) exactly coincides with WKB solution (\ref{gwkb}). Actually, the WKB result is indistinguishable from the exact result already for $N=10$, see Fig.~\ref{gcomp}.
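The agreement shown in Fig.~\ref{gcomp} is easy to reproduce; a Python sketch evaluating the modified Bessel function by its power series (parameters as in the figure; the series truncation length is illustrative):

```python
import math

def bessel_i(k, w, terms=400):
    """Modified Bessel function I_k(w) via its power series (log-domain terms)."""
    total = 0.0
    for m in range(terms):
        log_t = ((2 * m + k) * math.log(w / 2)
                 - math.lgamma(m + 1) - math.lgamma(m + k + 1))
        total += math.exp(log_t)
    return total

N, R = 10, 1.0
v = lambda p: math.sqrt(1 + 4 * R * (1 + p))                      # Eq. (vp)
S = lambda p: -v(p) + v(1) + math.log((v(p) + 1) / (v(1) + 1))    # Eq. (act1)

def G_wkb(p):      # Eq. (gwkb)
    return (math.sqrt(v(1)) * (1 + v(p))
            / (math.sqrt(v(p)) * (1 + v(1)))) * math.exp(-N * S(p))

def G_exact(p):    # Eq. (gex)
    return ((2 / (1 + p)) ** ((N - 1) / 2)
            * bessel_i(N - 1, 2 * N * math.sqrt(R * (1 + p)))
            / bessel_i(N - 1, 2 * N * math.sqrt(2 * R)))

for p in (-0.5, 0.0, 0.5, 1.0):
    print(p, G_exact(p), G_wkb(p))   # agree to within a few per cent
```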
\begin{figure}
\includegraphics[width=9.0cm,height=6.6cm,clip=]{fig2.eps}
\caption{Molecular hydrogen production on a grain. Shown is a comparison of WKB result (\ref{gwkb}) for $G_{st}(p)$ (dashed line) and exact result (\ref{gex}) (solid line) for $N=10$ and $R=1$. The agreement is excellent even for this moderate $N$.} \label{gcomp}
\end{figure}
Of primary interest in the context of astrochemistry are the mean and the variance of the steady-state production rate of $H_2$ molecules. Going back to physical units, we can write the mean steady-state production rate as
$$
{\cal R}(H_2)=\frac{\gamma}{2}\displaystyle\sum_{n=0}^{\infty}n (n-1) P_n = \frac{\gamma}{2} \langle n(n-1)\rangle=\frac{\gamma}{2}G_{st}^{\prime\prime}(1),
$$
\begin{equation}
{\cal R}(H_2)\simeq \frac{2\gamma N^2 R^2}{[v(1)+1]^2}\left[1-\frac{1}{N v^2(1)}\right]\,,
\label{prodrate}
\end{equation}
where $v(p)$ is given by Eq.~(\ref{vp}). One can check that this expression coincides with that obtained from the exact result, see Eq.~(22) in Ref.~\cite{green}, in the leading and subleading orders at $N\gg 1$. The leading term in Eq.~(\ref{prodrate}) is what the deterministic
rate equation (\ref{rateH}) predicts.
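The last statement is easily confirmed numerically: the leading term of (\ref{prodrate}) coincides with the deterministic prediction $(\gamma/2)\bar{n}^2$ evaluated at the fixed point (\ref{rateeq1}). A Python sketch (parameter values are illustrative):

```python
import math

gamma, N = 1.0, 50.0   # gamma scales out; illustrative values

for R in (0.5, 1.0, 3.0):
    v1 = math.sqrt(1 + 8 * R)                 # v(1) from Eq. (vp)
    nbar = N / 4 * (v1 - 1)                   # fixed point (rateeq1)
    rate_det = gamma / 2 * nbar**2            # deterministic (gamma/2) <n(n-1)> ~ n^2
    rate_wkb = 2 * gamma * N**2 * R**2 / (v1 + 1) ** 2   # leading term of (prodrate)
    print(R, rate_det, rate_wkb)              # identical up to rounding
```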
Now consider the variance of the steady-state production rate of $H_2$ molecules:
$$
V(H_2)=\frac{\gamma}{2} \left[\langle n^2 (n-1)^2\rangle - \langle n (n-1)\rangle^2\right]\,.
$$
Using the identity
\begin{eqnarray}
n^{2}(n-1)^{2} = n(n-1)(n-2)(n-3) + 4n(n-1)(n-2) + 2n(n-1)\,, \nonumber
\end{eqnarray}
we obtain the exact relation
$$
V(H_2) =\frac{\gamma}{2}\left\{G^{IV}(1)+4G^{\prime\prime\prime}(1)+
2G^{\prime\prime}(1)-[G^{\prime\prime}(1)]^2\right\}.
$$
From WKB solution (\ref{gwkb}) we obtain in the leading order
\begin{equation}
V(H_2)\simeq \frac{16 \gamma N^3 R^3 \left[v(1)+6R+1\right]}{v(1) \left[v(1)+1\right]^4}\,.
\end{equation}
The relative fluctuations of the production rate, $\sqrt{V}/{\cal R}$, scale with $N$ as $N^{-1/2}$, as expected.
Actually, the WKB approximation yields the whole stationary probability distribution function of the number of $H$ atoms. Green \textit{et al.} \cite{green} obtained this distribution exactly from Eqs.~(\ref{gex}) and (\ref{prob}):
\begin{equation}\label{pdfex}
P_n=2^{\frac{N-1}{2}}\frac{(N^2
R)^{n/2}}{n!}\frac{I_{N+n-1}(2N\sqrt{R})}{I_{N-1}(2N\sqrt{2R})}\,.
\end{equation}
The $N\gg 1$, $n\gg 1$ asymptote of (\ref{pdfex}) can be written as
\begin{eqnarray}\label{pdfap}
P_n\simeq \frac{\sqrt{(1+q)\,v(1)}}{\sqrt{2\pi q N\,u(q)}}\,\frac{1+u(q)}{1+v(1)}e^{N\left\{\ln[1+v(1)]-v(1)+q+(1+q)u(q)-\ln
[(1+q)(1+u(q))]-q\ln[q(1+q)(1+u(q))/(2R)]\right\}},
\end{eqnarray}
where $q=n/N$, $v(p)$ is given by Eq.~(\ref{vp}) and
\begin{equation}\label{un}
u(q)=\sqrt{1+4R/(1+q)^2}\,.
\end{equation}
Now we compare Eq.~(\ref{pdfap}) with the WKB result, obtained from
Eqs.~(\ref{cauchy}) and (\ref{gwkb}):
\begin{equation}\label{pdfwkb}
P_n^{WKB}=\frac{1}{2\pi i}\oint dp \frac{v(1)^{1/2}\left[1+v(p)\right]}{p
\,v(p)^{1/2}[1+v(1)]}e^{-NS(p)-n\ln p}\,,
\end{equation}
where $S(p)$ is given by Eq.~(\ref{act1}). As $n\gg 1$, we can evaluate the
integral via the saddle point approximation. Let
$f(p)=-NS(p)-n\ln p$. The saddle point is at
$p_*=q(1+q)[1+u(q)]/(2R)$, where $u(q)$ is given by Eq.~(\ref{un}).
As $f^{\prime\prime}(p_*)>0$, the
integration contour in the vicinity of the saddle point must be chosen
perpendicular to the real axis. This adds an additional phase of
$e^{i\pi/2}$ to the
solution \cite{orszag}, which cancels $i$ in the denominator
of Eq.~(\ref{pdfwkb}). After the Gaussian integration and some algebra Eq.~(\ref{pdfwkb}) coincides with Eq.~(\ref{pdfap}). Finally, one can calculate $P_n$ at $N\gg 1$ but $n={\cal O}(1)$ by directly differentiating the WKB result (\ref{gwkb}) for $G_{st}(p)$, see Eq.~(\ref{prob}). The resulting probability distribution function is shown in Fig.~\ref{probhyd}. As one can see, the agreement between the WKB distribution and the exact distribution is excellent for all $n$.
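A light consistency check of asymptote (\ref{pdfap}): its exponent must vanish at the distribution peak $q=q_1=(\sqrt{1+8R}-1)/4$, where $P_n$ is dominated by the prefactor. A Python sketch:

```python
import math

def exponent(q, R):
    """The curly-bracket exponent of Eq. (pdfap)."""
    v1 = math.sqrt(1 + 8 * R)                  # v(1) from Eq. (vp)
    u = math.sqrt(1 + 4 * R / (1 + q) ** 2)    # Eq. (un)
    return (math.log(1 + v1) - v1 + q + (1 + q) * u
            - math.log((1 + q) * (1 + u))
            - q * math.log(q * (1 + q) * (1 + u) / (2 * R)))

for R in (0.5, 1.0, 3.0):
    q1 = (math.sqrt(1 + 8 * R) - 1) / 4
    print(R, exponent(q1, R))   # vanishes up to rounding
```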
\begin{figure}
\includegraphics[width=9.0cm,height=6.6cm,clip=]{fig3.eps}
\caption{Molecular hydrogen production on a grain. Shown is the natural logarithm of the stationary distribution $P_n$ versus $n$ for $N=50$ and $R=1$. The solid line is the WKB approximation (\ref{pdfap}), the dashed line is the exact solution (\ref{pdfex}), and the dash-dotted line is the Gaussian
approximation. The WKB approximation and the exact solution are indistinguishable for all $n$. The non-Gaussian tails of the distribution cannot be described correctly by the van Kampen system size expansion. The inset shows, by different symbols, the small-$n$ asymptote of the distribution obtained analytically and numerically.}
\label{probhyd}
\end{figure}
\section{Metastability and extinction: First-excited-state calculations}\label{sec2}
Now we switch to isolated stochastic populations, so that there is no influx of particles into the system. If there is no population explosion, isolated populations ultimately undergo extinction with probability one. The deterministic rate equation for such a population can be written as
\begin{equation}\label{genrateeq}
\dot{\bar{n}}=\bar{n}\psi(\bar{n}),
\end{equation}
where $\psi(\bar{n})$ is a smooth function. In the following we assume $\psi(0)>0$, so that $\bar{n}=0$ is a \textit{repelling} fixed point of Eq.~(\ref{genrateeq}). The deterministically stable population size corresponds to an \textit{attracting} fixed point $\bar{n}=n_1>0$. According to the classification of Ref.~\cite{Assaf4}, such populations exhibit scenario A of extinction.
Let $n_1={\cal O}(N)\gg 1$. After a short relaxation time $t_r$, the population typically converges into a long-lived metastable state whose population size distribution is peaked around $n=n_1$. This metastable probability distribution function is encoded in the lowest excited eigenmode $\phi(p)\equiv\phi_1(p)$ of the probability generating function $G(p,t)$ (\ref{genexpansion}). Indeed, at $t\gg t_r$, the higher eigenmodes in the spectral expansion (\ref{genexpansion}) have already decayed, and $G(p,t)$ can be approximated as \cite{Assaf1,Assaf2}
\begin{equation}\label{G2}
G(p,t)\simeq 1-\phi(p)e^{-E t},
\end{equation}
where the lowest excited eigenfunction is normalized so that $\phi(0)=1$. The (exponentially small) lowest excited eigenvalue $E\equiv E_1$ determines the mean time to extinction (MTE) of the population, $E\simeq\tau_{ex}^{-1}$. The slowly time-dependent probability distribution function of the population size, at $t\gg t_r$, is
\begin{equation}\label{qsd}
P_{n>0}(t)\simeq \pi_n e^{-t/\tau_{ex}}\;,\;\;P_0(t)\simeq 1-e^{-t/\tau_{ex}}\,.
\end{equation}
That is, the metastable probability distribution function decays exponentially slowly in time, whereas the extinction probability $P_0(t)$ grows exponentially slowly and reaches $1$ at $t\to \infty$. The shape function $\pi_n$ of the metastable distribution is called the quasi-stationary distribution (QSD). The QSD and the MTE of a metastable population can be obtained by solving the eigenvalue problem for $\phi(p)$ and $E$, respectively. We now discuss some general properties of the solution to this eigenvalue problem, whereas in the following subsections we will illustrate the method of solution on two examples.
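The exponential decay law implied by Eq.~(\ref{qsd}) can be illustrated by a minimal Gillespie (stochastic simulation) sketch; the toy reaction set $A\to 2A$, $A\to\emptyset$, $2A\to\emptyset$, the small parameter values, and the initial condition below are assumptions chosen for demonstration only. The measured extinction times come out close to exponentially distributed, so their coefficient of variation is close to one:

```python
import math, random

random.seed(1)
N, R0 = 5.0, 2.0                  # illustrative: small N keeps runs short

def extinction_time():
    n, t = 6, 0.0                 # start above the metastable peak n1 = 2.5
    while n > 0:
        wb, wd, wa = n, n / R0, n * (n - 1) / (2.0 * N)
        wtot = wb + wd + wa
        t += random.expovariate(wtot)
        r = random.random() * wtot
        if r < wb:
            n += 1                # branching A -> 2A
        elif r < wb + wd:
            n -= 1                # decay A -> 0
        else:
            n -= 2                # annihilation 2A -> 0
    return t

times = [extinction_time() for _ in range(2000)]
mean = sum(times) / len(times)
cv = math.sqrt(sum((x - mean) ** 2 for x in times) / len(times)) / mean
print(mean, cv)                   # coefficient of variation close to 1
```

A coefficient of variation near unity is the fingerprint of the single-exponential decay of Eq.~(\ref{qsd}); deviations are controlled by the short relaxation time $t_r\ll \tau_{ex}$.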
\subsection{General considerations}
Plugging Eq.~(\ref{G2}) into Eq.~(\ref{geneq}), we arrive at an ordinary differential equation for $\phi(p)$:
\begin{equation}\label{odeeq}
\hat{{\cal L}} \phi+E\phi=0\,.
\end{equation}
As $G(p,t)$ is an entire function on the complex $p$-plane \cite{entire}, $\phi(p)$ must be analytic at all singular points of the differential operator $\hat{{\cal L}}$. If the order of this operator is $K$, this requirement yields $K$ ``self-generated'' boundary conditions for $\phi(p)$. In view of the equality $G(p=1,t)=1$, the operator $\hat{{\cal L}}$ vanishes at $p=1$, which yields a universal boundary condition: $\phi(1)=0$. The remaining $K-1$ self-generated boundary conditions are problem-specific, see examples below.
What is the general structure of differential operator $\hat{{\cal L}}$? For populations that experience extinction, $\hat{{\cal L}}\phi$ cannot include a term proportional to $\phi$, as such a term would correspond to influx of particles into the system, $\emptyset\to A$, and would prevent extinction. In general, $\hat{{\cal L}}$ includes first-order derivative terms (corresponding to branching and decay processes) and higher-order derivative terms. For extinction scenario A one has $\psi(0)>0$, see Eq.~(\ref{genrateeq}). Let $b_0$ denote the rate of decay $A\to\emptyset$, and $b_m$, $m=2,3, \dots, M,$ denote the rates of branching reactions $A\to mA$. One has $\psi(0)\equiv b_2+2b_3+\dots+(M-1)b_M-b_0>0$. Rescaling time by $\psi(0)$, we see that the (rescaled) coefficient of the term $\bar{n}^j$ (for $j=1,2,\dots$) in Eq.~(\ref{genrateeq}) must scale as $N^{1-j}$ to ensure that $n_1={\cal O}(N)$. As a result, the (rescaled) coefficient of the $j$th-order derivative term in $\hat{{\cal L}}$ scales as $N^{1-j}$, and $\hat{{\cal L}}$ can be written as
\begin{equation}\label{L}
\hat{{\cal L}}=f_1(p) \frac{d}{dp}+\frac{1}{N}\,f_2(p)\frac{d^2}{dp^2}+ \dots
+\frac{1}{N^{K-1}}\,f_K(p)\frac{d^{K}}{dp^{K}}\,.
\end{equation}
For reaction rates that are polynomial in $n$, the functions $f_j(p)$ are polynomial in $p$. Notably, all functions $f_j(p)$ vanish at $p=1$. What does the solution of Eq.~(\ref{odeeq}) look like at $N\gg 1$? As $E$ turns out to be exponentially small in $N$, the simplest approximation for Eq.~(\ref{odeeq}) would be to discard all terms except $f_1(p) d\phi/dp$, arriving at a constant solution $\phi(p)=1$ (according to our choice of normalization). Indeed, as $n_1={\cal O}(N)\gg 1$, the probability to observe $n\ll n_1$ particles in the metastable state is exponentially small. These probabilities are proportional to low-order derivatives of $\phi$ at $p=0$, see Eqs.~(\ref{prob}) and (\ref{G2}), so $\phi(p)$ must indeed be almost constant there. This solution, however, does not obey the zero boundary condition at $p=1$. The true solution, therefore, must rapidly fall to $0$ in a close vicinity of $p=1$, see Fig. \ref{phipic}. The point $p=1$ is a singular point of Eq.~(\ref{odeeq}). Actually, when approaching $p=1$ from the left, the almost constant solution breaks down even earlier: in the vicinity of another point $p=p_f<1$ where $f_1(p)$ vanishes, see the next paragraph. In the vicinity of $p=p_f$, the first-order derivative term ceases to be dominant, and all terms in Eq.~(\ref{odeeq}), including $E\phi$, are comparable. Although $\phi(p)$ deviates from a constant value in the vicinity of $p=p_f$, one can still treat this deviation perturbatively: $\phi(p)\simeq 1+\delta\phi(p)$, where $\delta\phi\ll 1$. When $p$ becomes distinctly larger than $p_f$, $\phi(p)$ already varies strongly. Here, the $E\phi$-term [which comes from the time derivative of $G(p,t)$] can again be neglected, and so the (nontrivial) solution which is sought in this region is \textit{quasi-stationary}. 
The quasi-stationary solution can be found in the WKB approximation, as the typical length scale $1/N$, over which $\phi(p)$ varies, is much smaller here than $1-p_f$ (a more accurate criterion will appear later).
Why does the root $p_f$ of the function $f_1(p)$ exist? After some algebra, the function $f_1(p)$ can be written as
\begin{equation}\label{f1p}
f_1(p)=\sum_{m=0}^M \tilde{b}_m (p^m-p)\,,
\end{equation}
where $\tilde{b}_m=b_m/\psi(0)$. The polynomial equation $f_1(p)=0$ has appeared in the context of $n$-space description of stochastic population extinction~\cite{Assaf4}. It has been shown in Ref.~\cite{Assaf4} that this equation
has exactly two real roots: $p=1$ and $p=p_f$, where in general $0\leq p_f<1$.
\begin{figure}
\includegraphics[width=9.0cm,height=6.6cm,clip=]{fig4.eps}
\caption{Shown is a sketch of the eigenfunction $\phi(p)$ of the lowest excited state at $N\gg 1$ for a typical problem of population extinction. $\phi(p)$
is almost constant in the region $p<1$ except close to $p=1$, where it rapidly goes to
zero.} \label{phipic}
\end{figure}
Now we can summarize the general scheme of solution of the eigenvalue problem for the lowest excited state.
One has to consider \textit{three} separate regions: (i) the region to the left of, and sufficiently far from, the point $p=p_f$, where one can put $\phi(p)=1$ up to exponentially small corrections, (ii) the boundary-layer region $|p-p_f|\ll p_f$, where $\phi(p)$ is still very close to $1$ and can be sought perturbatively, and (iii) the quasi-stationary region $p_f<p\leq 1$, where $\phi(p)$ varies strongly, and the WKB approximation can be used. The solutions in neighboring regions can be matched in their joint regions of validity. This procedure holds, at $N\gg 1$, for a broad class of systems exhibiting extinction.
There is a convenient shortcut to this general procedure when the highest-order reaction in the problem is two-body. Here the quasi-stationary equation [Eq.~(\ref{odeeq}) with the $E\phi$ term neglected] is always solvable exactly. There is no need to apply the WKB approximation in such cases, and it suffices to consider only two, rather than three, regions, see the next subsection. Finally, regardless of the order of $\hat{{\cal L}}$, it is simpler to deal with $u(p)\equiv\phi^{\prime}(p)$ rather than with $\phi(p)$ itself, as this enables one to reduce the order of the ordinary differential equation by one everywhere.
\subsection{Branching-annihilation-decay}
The first example deals with a population of ``particles" which undergoes three stochastic reactions: branching
$A\stackrel{\lambda}{\rightarrow} 2A$, decay
$A\stackrel{\mu}{\rightarrow} \emptyset$ and annihilation
$2A\stackrel{\sigma}{\rightarrow} \emptyset$. As the state $n=0$ is absorbing, the population ultimately goes extinct. This example was solved by Kessler and Shnerb \cite{kessler} via the ``real-space'' WKB approximation, where the calculations are done in the space of population size. Here we solve it in momentum space. Because of the presence of the linear decay reaction $A \rightarrow \emptyset$, this example exhibits a generic transcritical bifurcation as a function of the control parameter $R_0$ introduced below, and generalizes simple single-parameter models \cite{Assaf1,Assaf2} considered earlier. The deterministic rate equation reads
\begin{equation}\label{rateeq}
\dot{\bar{n}} = (\lambda-\mu) \bar{n}-\sigma\,\bar{n}^2\,.
\end{equation}
For $\lambda>\mu$ Eq.~(\ref{rateeq}) has, in addition to the trivial fixed point $\bar{n}=0$, also a positive fixed point $n_1= (\lambda-\mu)/\sigma$. When starting from any
$\bar{n}(t=0) > 0$, the population size flows to the attracting
fixed point $\bar{n}=n_1$, with characteristic relaxation time
$t_r=(\lambda-\mu)^{-1}$, and stays there forever. Rescaling time $\lambda t \to t$, and introducing rescaled parameters, $N=\lambda/\sigma$ and
$R_0=\lambda/\mu$, the attracting fixed point becomes $n_1=N(1-R_0^{-1})$. We assume that $N\gg 1$ and that $R_0>1$ is not too close to $1$ (the exact criterion will appear later). As $R_0$ crosses $1$, the deterministic system undergoes a transcritical bifurcation.
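The deterministic flow toward $n_1$ is easy to see in a minimal numerical sketch (not part of the original analysis; parameter values are illustrative), which integrates the rescaled form of Eq.~(\ref{rateeq}), $\dot{\bar{n}}=(1-R_0^{-1})\bar{n}-\bar{n}^2/N$:

```python
# Forward-Euler integration of the rescaled rate equation
# dn/dt = (1 - 1/R0) n - n^2/N; the flow converges to n1 = N(1 - 1/R0).
N, R0 = 100.0, 1.5
n, dt = 1.0, 1e-3
for _ in range(int(50 / dt)):      # t = 50 is many relaxation times t_r = 3
    n += dt * ((1.0 - 1.0 / R0) * n - n * n / N)
n1 = N * (1.0 - 1.0 / R0)
print(n, n1)   # n has relaxed to the attracting fixed point n1
```

For $R_0<1$ the same flow would instead converge to $\bar{n}=0$, which is the deterministic face of the transcritical bifurcation.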
To account for intrinsic noise we consider the master equation
\begin{eqnarray}
\label{p10}
\frac{d}{dt}{P}_{n}(t)=\frac{1}{2N}\left[(n+2)(n+1)P_{n+2}(t)-n(n-1)P_{n}(t)\right]+(n-1)P_{n-1}(t)-nP_{n}(t)+\frac{1}{R_0}\left[(n+1) P_{n+1}- n P_n\right]\,,
\end{eqnarray}
where time is rescaled, $\lambda t \to t$. The evolution equation for the probability generating function $G(p,t)$ is
\begin{equation}\label{pde2}
\frac{\partial G}{\partial t}=\frac{1}{2N}(1-p^2)\frac{\partial^2
G}{\partial p^2}+(p-1)\left(p-\frac{1}{R_0}\right)\frac{\partial
G}{\partial p}\,.
\end{equation}
At $t\gg t_r=(1-1/R_0)^{-1}$ the metastable probability distribution function, peaked at
$n\simeq n_1$, sets in, and Eq.~(\ref{G2}) holds.
To determine the QSD and the MTE we turn to the Sturm-Liouville problem for the lowest excited eigenmode $\phi(p)$ and eigenvalue $E$:
\begin{equation}\label{ode2}
\frac{1}{2N}(1-p^2)\phi^{\prime\prime}+(p-1)\left(p-\frac{1}{R_0}\right)\phi^{\prime}+E\phi=0\,.
\end{equation}
Here, the self-generated boundary conditions for $\phi(p)$ are: $\phi(1)=0$ and $2(1+R_0^{-1})\phi^{\prime}(-1)+E\phi(-1)=0$. Because of the expected exponential smallness of $E$, the latter condition can be safely approximated by $\phi^{\prime}(-1)\simeq 0$.
We now apply the solution procedure of the previous subsection to Eq.~(\ref{ode2}). Using $u(p)= \phi^{\prime}(p)$, the exact solution of the quasi-stationary equation [Eq.~(\ref{ode2}) without the $E\phi$ term],
\begin{equation}\label{ode2wkb}
\frac{1}{2N}(1-p^2)u^{\prime}+(p-1)\left(p-\frac{1}{R_0}\right)u=0\,,
\end{equation}
can be written as
\begin{equation}\label{phiwkb0}
u (p)=C e^{-NS(p)}.
\end{equation}
Here
\begin{equation}\label{S2}
S(p)=2\left[1-p+\left(1+\frac{1}{R_0}\right)\ln
\left(\frac{1+p}{2}\right)\right]\,.
\end{equation}
To determine the arbitrary constant $C$ we need a boundary condition for $u(p)$ at $p=1$.
It follows from Eq.~(\ref{G2}) that, at $t \gg t_r$,
\begin{equation}\label{dGdp}
\frac{\partial G}{\partial p}(1,t)\simeq -u(1) e^{-Et}\,.
\end{equation}
On the other hand, by virtue of Eq.~(\ref{genprob}), the left hand side of Eq.~(\ref{dGdp})
is equal to $\bar{n}(t)$ which behaves as $n_1\, \exp(-Et)$, see \textit{e.g.} Ref.~\cite{Assaf2}. As a result, $u(1)\simeq-n_1$ and,
by using Eq.~(\ref{phiwkb0}), we obtain $C=-N(1-R_0^{-1})$. Therefore,
\begin{equation}\label{phiwkb2}
u(p)=-N\left(1-\frac{1}{R_0}\right)e^{-NS(p)}
\end{equation}
with $S(p)$ from Eq.~(\ref{S2}). This yields the sought solution, $\phi=\int_1^{p}u(s)ds$, which satisfies the boundary condition $\phi(1)=0$. One can check now that neglecting
the $E\phi$ term in Eq.~(\ref{ode2}) demands $pR_0-1\gg N^{-1/2}$.
Although there is no need for the WKB approximation in this case of a two-body reaction, it is still instructive to re-derive Eq.~(\ref{phiwkb2}) by using the WKB approximation for $\phi(p)$. To this end we consider the quasi-stationary version of Eq.~(\ref{ode2}),
\begin{equation}\label{ode2qsd}
\frac{1}{2N}(1-p^2)\phi^{\prime\prime}+(p-1)\left(p-\frac{1}{R_0}\right)
\phi^{\prime}=0\,,
\end{equation}
and make a WKB ansatz $\phi(p)= a(p) \exp[-N S(p)]$. In the leading order in $N\gg1$ we obtain a stationary Hamilton-Jacobi equation
$H[p,-S^{\prime}(p)]=0$ with effective Hamiltonian \cite{Kamenev2}
\begin{equation}\label{Ham1}
H(p,q)=\left[p-\frac{1}{R_0}-\frac{(1+p)q}{2}\right]q(p-1).
\end{equation}
Here, as in Sec. III, $q(p)=-S^{\prime}(p)$ is the reaction coordinate conjugate to the momentum $p$. There are two trivial zero-energy orbits of this Hamiltonian: the deterministic orbit $p=1$ and the ``extinction orbit'' $q=0$. The action along the extinction orbit is zero: $S(p)=0$, so the corresponding WKB mode can be called ``slow''. There is also a nontrivial zero-energy orbit $q(p)=2(p-R_0^{-1})/(1+p)$. It includes a heteroclinic orbit exiting, at $t=-\infty$, the fixed point $(p=1,q=q_1\equiv n_1/N)$ and entering, at $t=\infty$, the fixed point $(p=R_0^{-1},q=0)$ of the phase plane $(p,q)$, see Fig. \ref{phasedb}. This orbit is the ``extinction instanton'' \cite{Kamenev2,Kamenev1}. It describes the most probable path of the system from the long-lived metastable state to extinction. Integrating along this orbit and choosing $S(p=1)=0$, we recover Eq.~(\ref{S2}). This solution can be called the ``fast'' WKB mode.
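As a quick numerical sanity check (a sketch, not part of the derivation; the value of $R_0$ is illustrative), one can verify that the nontrivial orbit $q(p)=2(p-R_0^{-1})/(1+p)$ lies on the zero-energy manifold of the Hamiltonian (\ref{Ham1}) and coincides with $-S^{\prime}(p)$ for the action (\ref{S2}):

```python
import math

R0 = 1.5   # illustrative value of the control parameter

def H(p, q):
    # Effective Hamiltonian, Eq. (Ham1)
    return (p - 1.0 / R0 - (1.0 + p) * q / 2.0) * q * (p - 1.0)

def q_inst(p):
    # Nontrivial zero-energy (instanton) orbit
    return 2.0 * (p - 1.0 / R0) / (1.0 + p)

def S(p):
    # Action accumulated along the instanton, Eq. (S2)
    return 2.0 * (1.0 - p + (1.0 + 1.0 / R0) * math.log((1.0 + p) / 2.0))

h = 1e-6   # step for the central-difference derivative of S(p)
res_H = max(abs(H(p, q_inst(p))) for p in [0.7, 0.8, 0.9, 0.99])
res_S = max(abs(q_inst(p) + (S(p + h) - S(p - h)) / (2 * h))
            for p in [0.7, 0.8, 0.9, 0.99])
print(res_H, res_S)   # both residuals vanish to numerical accuracy
```

The first residual vanishes identically because the bracket in Eq.~(\ref{Ham1}) is zero on the instanton; the second confirms $q=-S^{\prime}(p)$.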
In the subleading order of the WKB approximation one obtains $a(p)=\left(1-R_0^{-1}\right)(1+p)/\left[2\left(p-R_0^{-1}\right)\right]$ for the fast, and $a(p)=const$ for the slow WKB modes. The general WKB solution is a superposition of the two modes,
\begin{equation}\label{alternative}
\phi(p)=1-\frac{\left(1-R_0^{-1}\right)(1+p)}{2\left(p-R_0^{-1}\right)}e^{-N S(p)}\,,
\end{equation}
with $S(p)$ from Eq.~(\ref{S2}). Here we have already imposed the boundary condition $\phi(1)=0$ and the normalization condition $\phi(0)\simeq 1$. The $p$-derivative of $\phi(p)$ from Eq.~(\ref{alternative}) yields, in the leading order, Eq.~(\ref{phiwkb2}). As is clear from Eq.~(\ref{alternative}), the WKB solution breaks down in a vicinity of the point $p=R_0^{-1}$, where the slow and fast WKB modes become strongly coupled. Here the quasi-stationarity does not hold.
\begin{figure}
\includegraphics[width=9.0cm,height=6.6cm,clip=]{fig5.eps}
\caption{Branching-annihilation-decay. Shown are zero-energy lines of Hamiltonian (\ref{Ham1}) on the $(p,q)$ phase plane. The
thick solid line corresponds to the instanton $q=-S^{\prime}(p)$ (\ref{S2}). Here $q_1=n_1/N=1-R_0^{-1}$, and the
area of the shaded region is equal to $S_0$ from
Eq.~(\ref{S0}).} \label{phasedb}
\end{figure}
We now proceed, therefore, to the non-quasi-stationary region $-1\leq p\lesssim p_f$ (a more restrictive condition will appear \textit{a posteriori}). It is easier to deal with it in terms of $u(p)$, rather than $\phi(p)$. Here we can treat the $E\phi$ term in
Eq.~(\ref{ode2}) perturbatively:
$\phi(p)=1+\delta\phi(p)$, where
$\delta\phi\ll 1$ \cite{Assaf1,Assaf2}. As a result,
Eq.~(\ref{ode2}) becomes an inhomogeneous first-order
equation for $u(p)=\delta \phi^{\prime}(p)$:
\begin{equation}\label{odebulk}
\frac{1}{2N}(1-p^2)u^{\prime}+(p-1)\left(p-\frac{1}{R_0}\right)u=-E\,,
\end{equation}
which can be solved by variation of parameters. For two-body reactions the corresponding \textit{homogeneous} equation, which coincides with the quasi-stationary equation (\ref{ode2wkb}), is exactly solvable. As a result,
one can solve Eq.~(\ref{odebulk}) in the entire non-quasi-stationary region which includes both $p<p_f$ and $|p-p_f|\ll p_f$. The solution
is
\begin{eqnarray}
u(p)=-2NE e^{2N\left[p-\left(1+\frac{1}{R_0}\right)\ln(1+p)\right]}\int_{-1}^{p}\frac{\exp\left\{-2N\left[s-\left(1+\frac{1}{R_0}\right)\ln
(1+s)\right]\right\}}{1-s^2}ds\,, \label{ubulk}
\end{eqnarray}
where the arbitrary constant is chosen so as to obey the boundary condition $u(-1)\simeq 0$. Note that the integrand in Eq.~(\ref{ubulk}) is regular at $s=-1$, so the perturbative solution is well-behaved. Solution (\ref{ubulk}) remains valid as long as
$\phi$ is close to $1$. As one can check, this holds for $1-p\gg N^{-1/2}$, \textit{cf}. Refs.~\cite{Assaf1,Assaf2}. The
perturbative solution (\ref{ubulk}) can be matched with the quasi-stationary solution (\ref{phiwkb2}), \textit{e.g.} at $N^{-1/2}\ll pR_0-1 \ll 1$ \cite{matching}.
Solution (\ref{ubulk}) simplifies in the ``left region'' $p<p_f$, not too close to $p_f$. By Taylor-expanding the integrand in Eq.~(\ref{ubulk})
(which is a monotone increasing function of $p$ for $p<p_f$)
in the vicinity of $s=p$, we obtain
\begin{equation}\label{bulkleft}
u^{left}(p)\simeq -\frac{E}{(p-1)(p-R_0^{-1})}\,.
\end{equation}
This result (which holds in the region $1-pR_0\gg N^{-1/2}$) has a simple meaning: here the first-derivative term in Eq.~(\ref{odebulk}) is negligible. To neglect this term in Eq.~(\ref{odebulk}) [or the term proportional to $\phi^{\prime\prime}(p)$ in Eq.~(\ref{ode2})] is the same as to disregard the two-body reaction $2A\to \emptyset$ compared with the one-body reactions of branching and decay. This is indeed a legitimate approximation at small $n$ \cite{kessler,Assaf4}. Note that, not too close to $p=p_f$, $u^{left}(p)$ is exponentially small in $N$, so that $\phi \simeq 1$ up to an exponentially small correction. Putting $\phi=1$ in the left region, however, would be too crude an approximation, as it would only give a trivial left tail of the QSD: $\pi_1=\pi_2=\dots=0$ \cite{leftQSD}. Correspondingly, the solution in the left region cannot be obtained from the WKB approximation.
We can now find the eigenvalue $E$ by matching the quasi-stationary solution (\ref{phiwkb2}) and the perturbative non-quasi-stationary solution (\ref{ubulk}) in their joint validity region $N^{-1/2}\ll pR_0-1 \ll 1$ \cite{matching}. For $pR_0-1\gg N^{-1/2}$, the integral
in Eq.~(\ref{ubulk}) can be evaluated by the saddle point
approximation. The saddle point is at $p=p_f=R_0^{-1}$, and the result is
\begin{eqnarray}
u(p)\simeq-\frac{2E\sqrt{\pi N}R_0^{3/2}}{\sqrt{R_0+1}(R_0-1)}e^{-2N\left[\frac{1}{R_0}-\left(1+\frac{1}{R_0}\right)\ln
\left(1+\frac{1}{R_0}\right)\right]+2N\left[p-\left(1+\frac{1}{R_0}\right)\ln
(1+p)\right]}\,. \label{ubulkmatch}
\end{eqnarray}
Matching this result with the quasi-stationary solution (\ref{phiwkb2}), we
find
\begin{equation}\label{e12}
E=\sqrt{\frac{N(R_0+1)}{4\pi}}\frac{(R_0-1)^2}{R_0^{5/2}}e^{-NS_0}\,,
\end{equation}
where
\begin{equation}\label{S0}
S_0= 2\left[1-\ln2 -\frac{1+\ln 2}{R_0}
+\left(1+\frac{1}{R_0}\right)\,\ln\left(1+\frac{1}{R_0}\right)
\right].
\end{equation}
The MTE, in physical units, is $\tau_{ex}=(\lambda E)^{-1}$ with $E$ from Eq.~(\ref{e12}), in agreement with Ref. \cite{kessler}. As $R_0\to \infty$ the decay reaction
$A \to \emptyset$ becomes irrelevant, and one recovers the result for the branching-annihilation model \cite{Assaf2,kessler,turner}. When $R_0-1\ll 1$, the system is close to the transcritical bifurcation of the deterministic rate equation. Here the Fokker-Planck approximation to the master equation is applicable \cite{Doering,kessler}. The corresponding asymptote of Eq.~(\ref{e12}),
$$
E=\sqrt{\frac{N}{2 \pi}}\,(R_0-1)^2\,e^{-\frac{N}{2} (R_0-1)^2}\,,
$$
is valid when $R_0-1\gg N^{-1/2}$, so that $E$ is still exponentially small in $N$.
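As an independent numerical check (a sketch, not from the original text; the parameters, truncation, and time step are illustrative), one can extract $E$ directly from a truncated version of the master equation (\ref{p10}): iterate the dynamics on $n\geq 1$, renormalize at each step to obtain the QSD, and read off the probability flux into $n=0$, which equals the lowest eigenvalue. The result can then be compared with the asymptote (\ref{e12}):

```python
import math

N, R0 = 30, 1.5
nmax = 4 * N                          # truncation of the state space
P = [0.0] * (nmax + 1)
P[round(N * (1 - 1 / R0))] = 1.0      # start near the metastable peak n1
dt, T = 1e-3, 20.0

for _ in range(int(T / dt)):
    dP = [0.0] * (nmax + 1)
    for n in range(1, nmax + 1):
        wb = float(n) if n < nmax else 0.0    # branching A -> 2A (rate n)
        wd = n / R0                           # decay A -> 0 (rate n/R0)
        wa = n * (n - 1) / (2.0 * N)          # annihilation 2A -> 0
        dP[n] -= (wb + wd + wa) * P[n]
        if n < nmax:
            dP[n + 1] += wb * P[n]
        if n >= 2:
            dP[n - 1] += wd * P[n]            # n = 1 decays into n = 0
        if n >= 3:
            dP[n - 2] += wa * P[n]            # n = 2 annihilates into n = 0
    for n in range(1, nmax + 1):
        P[n] += dt * dP[n]
    s = sum(P[1:])
    for n in range(1, nmax + 1):
        P[n] /= s

E_num = P[1] / R0 + P[2] / N          # probability flux into n = 0 from the QSD
S0 = 2.0 * (1.0 - math.log(2.0) - (1.0 + math.log(2.0)) / R0
            + (1.0 + 1.0 / R0) * math.log(1.0 + 1.0 / R0))
E_wkb = (math.sqrt(N * (R0 + 1) / (4.0 * math.pi))
         * (R0 - 1.0) ** 2 / R0 ** 2.5 * math.exp(-N * S0))
print(E_num, E_wkb)
```

Since the renormalized iteration converges to the dominant eigenvector of the conditioned dynamics, the flux estimate carries no time-discretization bias; at moderate $N$ the remaining discrepancy with Eq.~(\ref{e12}) reflects the ${\cal O}(1/N)$ corrections to the WKB asymptote.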
Having found $E$, we have a complete solution for $u(p)$, given by Eqs.~(\ref{phiwkb2}) and (\ref{ubulk}). Now one can find the QSD by using Eq.~(\ref{prob}) for $n={\cal O}(1)$ and Eq.~(\ref{cauchy}) for $n\gg 1$. The results coincide with those obtained by Kessler and Shnerb \cite{kessler} by the ``real-space'' WKB approximation, so we will not present them here. The large-$n$ tail of the QSD decays faster than exponentially, thus justifying our \textit{a priori} assumption that $\phi(p)$ is an entire function in the complex $p$-plane. Shown in Fig.~\ref{genfun} is a comparison between the analytical and numerical solutions for $\partial_p G \simeq -u(p) e^{-Et}$ at a time $t_r\ll t\ll 1/E$, when $\partial_p G\simeq -u(p)$.
\begin{figure}
\includegraphics[width=9.0cm,height=6.6cm,clip=]{fig6.eps}
\caption{Branching-annihilation-decay. Shown is the $p$-derivative $\partial_p G$ of the probability generating function $G$
at $t_r\ll t\ll 1/E$ for $N=10^3$ and $R_0=1.5$. The solid line is the absolute value of the perturbative solution~(\ref{ubulk}); the dashed line is the absolute value of the quasi-stationary
solution~(\ref{phiwkb2}). In the joint
region of their validity the two lines are indistinguishable. The crosses indicate the values obtained by a numerical solution of Eq.~(\ref{pde2}) for $n_0=20$
particles at $t=0$ and boundary conditions $G(1,t)=1$ and $\partial_p G(-1,t)=0$.} \label{genfun}
\end{figure}
\subsection{Branching and triple annihilation}\label{sec3}
Here we again consider a metastable population on the way to extinction, but now a three-body reaction is present. Our model system includes two reactions: the branching
$A\stackrel{\lambda}{\rightarrow} 2A$ and the triple annihilation $3A\stackrel{\mu}{\rightarrow} \emptyset$. The deterministic rate equation,
\begin{equation}
\label{rate3}
\dot{\bar{n}}=\lambda \bar{n}-\frac{\mu}{2}\bar{n}^3,
\end{equation}
has two relevant fixed points: the repelling point $\bar{n}=0$ and the attracting point
$n_1=(2\lambda/\mu)^{1/2}\equiv N\gg 1$. According to Eq.~(\ref{rate3}),
the system size approaches $\bar{n}=n_1$ after the relaxation time
$t_r=\lambda^{-1}$, and stays there forever. Contrary to this prediction, fluctuations drive the population to extinction. Upon rescaling time $\lambda t\to t$, the master equation reads
\begin{eqnarray}
\frac{dP_n(t)}{dt}=(n-1)P_{n-1}-n P_n +\frac{1}{3 N^2}\left[(n+3)(n+2)(n+1)P_{n+3}-n(n-1)(n-2)P_n\right],\nonumber\\\label{master3}
\end{eqnarray}
whereas the evolution equation for $G(p,t)$ is
\begin{equation}\label{pde3}
\frac{\partial G}{\partial t}=\frac{1}{3N^2}(1-p^3)\frac{\partial^3
G}{\partial p^3}+p(p-1)\frac{\partial G}{\partial p}\,.
\end{equation}
At $t\gg t_r$, Eq.~(\ref{G2}) holds, and the ordinary differential equation for the lowest excited eigenfunction $\phi(p)$ is
\begin{equation}\label{ode3}
\frac{1}{3N^2}(1-p^3)\phi^{\prime\prime\prime}+p(p-1)\phi^{\prime}+E\phi=0\,.
\end{equation}
This equation has three singular points in the complex $p$-plane. These are the roots of $1-p^3$: one real, $p_1=1$, and two complex,
$p_2=e^{2\pi i/3}$ and
$p_3=e^{4\pi i/3}$. Since $\phi(p)$ must be analytic at all of these points, it must satisfy three conditions:
\begin{equation}\label{boundcon}
p_i (p_i-1)\phi^{\prime}(p_i)+E\phi(p_i)=0\,\;\;\;i=1,2,3.
\end{equation}
Here the $p$-derivative is in the complex plane. For $i=1$ Eq.~(\ref{boundcon}) yields $\phi(p=1)=0$. As $E$ turns out to be exponentially small in $N\gg 1$, we can neglect small terms proportional to $E$ in the conditions for $i=2$ and $3$ and obtain $\phi^{\prime}(p=e^{2\pi i/3})\simeq 0$ and $\phi^{\prime}(p=e^{4\pi i/3})\simeq 0$.
In the quasi-stationary region (the exact location of which will be determined later) Eq.~(\ref{ode3}) becomes
\begin{equation}\label{ode30}
\frac{1}{3N^2}(1-p^3)\phi^{\prime\prime\prime}+p(p-1)\phi^{\prime}=0\,.
\end{equation}
This equation is of second order for $u(p)=\phi^{\prime}(p)$, but it is not exactly solvable in terms of known special functions, and this is a typical situation for three-body, four-body, $\dots$, reactions. The presence of the large parameter $N\gg 1$ justifies the WKB ansatz $\phi(p)=a(p) e^{-N S(p)}$. It yields, in the leading order of $N\gg1$, a stationary Hamilton-Jacobi equation $H[p,-S^{\prime}(p)]=0$ with Hamiltonian
\begin{equation}\label{Ham2}
H(p,q)=\left[p-\frac{(1+p+p^2)q^2}{3}\right]q(p-1).
\end{equation}
Here again, in addition to the trivial zero-energy lines $q=0$ and $p=1$, one obtains an instanton orbit
\begin{equation}\label{sprime}
q=\psi(p)\equiv \left(\frac{3p}{1+p+p^2}\right)^{1/2}
\end{equation}
which connects the fixed points $(1,q_1=n_1/N=1)$ and $(0,0)$ in the $(p,q)$ plane, see Fig.~\ref{phase3}. The instanton corresponds to the fast-mode WKB solution, whereas the orbit $q=0$ corresponds to the slow-mode WKB solution, similarly to the previous example.
Again, it is simpler to do the actual calculations for $u(p)=\phi^{\prime}(p)$, rather than for $\phi(p)$. Using the WKB ansatz $u(p)=b(p)e^{-N S(p)}$ in the quasi-stationary equation
\begin{equation}\label{u3wkb}
\frac{1}{3N^2}(1+p+p^2)u^{\prime\prime}-p u=0\,,
\end{equation}
we obtain
\begin{equation}
\frac{(1+p+p^2)}{3N^2}[N^2(S^{\prime})^2
b-2NS^{\prime}b^{\prime}-N S^{\prime\prime}b]-pb=0,\label{pert3}
\end{equation}
where we have neglected the sub-subleading term proportional to $b^{\prime\prime}/N^2$. In the leading order we obtain
$S^{\prime}(p)=-\psi(p)$ [the solution with $S^{\prime}(p)=\psi(p)$ is non-physical and must be discarded]. The arbitrary constant can be fixed by putting $S(1)=0$, and we obtain
\begin{equation}\label{action3}
S(p)=-\int_{1}^{p}\psi(x)dx,
\end{equation}
with $\psi(x)$ from Eq.~(\ref{sprime}). This result can be expressed via elliptic integrals, but we will not need these cumbersome formulas.
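The action at $p=0$, $S(0)=\int_0^1\psi(x)\,dx$, is nevertheless easy to evaluate numerically; the short sketch below (not part of the original text) substitutes $x=u^2$ to remove the square-root behavior of the integrand at the origin and applies Simpson's rule:

```python
import math

# Simpson's-rule evaluation of S(0) = int_0^1 sqrt(3x/(1+x+x^2)) dx.
# With x = u^2 the integrand becomes smooth:
# S(0) = int_0^1 2*sqrt(3)*u^2 / sqrt(1 + u^2 + u^4) du
def f(u):
    return 2.0 * math.sqrt(3.0) * u * u / math.sqrt(1.0 + u * u + u ** 4)

M = 2000                      # even number of subintervals
h = 1.0 / M
s = f(0.0) + f(1.0)
for k in range(1, M):
    s += (4.0 if k % 2 else 2.0) * f(k * h)
S0 = s * h / 3.0
print(S0)   # 0.836367...
```

This number sets the exponential scaling of the extinction rate found below.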
\begin{figure}
\includegraphics[width=9.0cm,height=6.6cm,clip=]{fig7.eps}
\caption{Branching and triple annihilation. Shown are zero-energy lines of Hamiltonian (\ref{Ham2}) on the $(p,q)$ phase plane. The
thick solid line is the instanton $q=-S^{\prime}(p)=\psi(p)$, given by
Eq.~(\ref{sprime}). Here $q_1=1$, and the area of the shaded region is equal to
$S_0$ from Eq.~(\ref{s0}). The dashed line denotes a non-physical orbit.} \label{phase3}
\end{figure}
In the subleading order Eqs.~(\ref{pert3}) and
(\ref{action3}) yield a first-order ordinary differential equation for $b(p)$
whose general solution is
\begin{equation}
b(p)=\frac{C(1+p+p^2)^{1/4}}{p^{1/4}}\,.
\end{equation}
We demand $u(1)\simeq -n_1=-N$ [see Eq.~(\ref{dGdp})] and obtain the quasi-stationary WKB solution for $u(p)$:
\begin{equation}\label{phiwkb3}
u^{WKB} (p)=-\frac{N(1+p+p^2)^{1/4}}{(3p)^{1/4}}e^{-NS(p)}
\end{equation}
with $S(p)$ from Eq.~(\ref{action3}). Now one can check that asymptote~(\ref{phiwkb3}) is valid
when $p\gg N^{-2/3}$; otherwise it is not justified to neglect the term $b^{\prime\prime}(p)$ in Eq.~(\ref{pert3}). Again, the quasi-stationarity and the WKB approximation break down in a vicinity of the point where the fast and slow WKB modes are strongly coupled. In this example this point is at $p=0$, whereas in the previous example it was at $p=p_f\neq 0$. That the WKB approximation breaks down here at $p=0$ is a special, non-generic situation resulting from the absence of the linear decay process $A\to \emptyset$ from the set of reactions $A\to 2A$ and $3A\to \emptyset$ we are dealing with.
To remedy the divergence of the WKB solution at $p=0$, one needs to account for a deviation from quasi-stationarity. The corresponding non-quasi-stationary solution of Eq.~(\ref{ode3}) is perturbative in $E$, as in the previous example, so the equation we need to solve is
\begin{equation}\label{pert10}
\frac{1}{3N^2}(1-p^3)u^{\prime\prime}+p(p-1)u=-E\,.
\end{equation}
The corresponding homogeneous equation, Eq.~(\ref{ode30}), is not solvable in terms of known special functions. Therefore, we will solve Eq.~(\ref{pert10})
approximately in two separate regions and match the solutions in their joint region of validity.
The first region, which we call ``left'', is $p<0$ (and not too close to zero, see below).
Here we can neglect the $u^{\prime\prime}$-term in Eq.~(\ref{pert10}) and obtain
\begin{equation}
u^{left}(p)\simeq \frac{E}{p(1-p)}\,.
\label{left}
\end{equation}
This asymptote, valid when $-p\gg N^{-2/3}$, corresponds to neglecting the higher-order reaction $3A \to \emptyset$ at small population sizes. As in the previous example, $u^{left}(p)$ is exponentially small. By choosing an exponentially small solution for $u(p)$ in the left region, we have effectively discarded two other linearly independent solutions of Eq.~(\ref{ode30}) which are singular at $p=e^{2\pi i/3}$ and $p=e^{4\pi i/3}$.
As $p_f=0$ here, one can actually put $u=0$ in the left region and still accurately determine the QSD \cite{leftQSD}.
The second region is the boundary layer $|p|\ll 1$, where Eq.~(\ref{pert10}) becomes
\begin{equation}
\label{oder3}
\frac{1}{3N^2}u^{\prime\prime}-pu=-E\,.
\end{equation}
The general solution of this equation is
\begin{eqnarray}
u^{bl} (p)=\left[c_1+\alpha^2 \pi E \int_0^pBi(\alpha s)ds\right] Ai(\alpha p)+\left[c_2-\alpha^2 \pi E \int_0^pAi(\alpha
s)ds\right] Bi(\alpha p)\,,\label{phibl3u}
\end{eqnarray}
where $Ai(y)$ and $Bi(y)$ are the Airy functions of the first and
second kind, respectively \cite{Abramowitz}, and $\alpha=(3N^2)^{1/3}$.
Now we can find the unknown constants $c_1$ and $c_2$ (assuming for a moment that $E$ is known) by matching the asymptotes (\ref{left}) and (\ref{phibl3u}) in their common region
$N^{-2/3}\ll -p \ll 1 $. As $u^{left}(p)$ is exponentially small at $N^2|p|^3\gg 1$, the boundary layer solution $u^{bl}(p)$ from Eq.~(\ref{phibl3u}) must also be exponentially small there. Evaluating the integrals in Eq.~(\ref{phibl3u}) at $p=-\infty$ and using the identities
$\int_{-\infty}^0 Bi(s)ds=0$ and $\int_{-\infty}^0 Ai(s)ds=2/3$,
we arrive at
\begin{equation}\label{coeff3}
c_1\simeq 0\;,\;\;\;c_2\simeq -\frac{2\pi E N^{2/3}}{3^{2/3}}\,.
\end{equation}
Now we can find the extinction rate $E$ by matching the asymptotes of
$u^{WKB}(p)$ and $u^{bl}(p)$ in their common region $N^{-2/3}\ll p\ll 1$.
The $p \ll 1$ asymptote of the WKB solution (\ref{phiwkb3}) is
\begin{equation}\label{wkbapprox3}
u^{WKB}\simeq
-\frac{N}{(3p)^{1/4}}e^{-NS_0}e^{(2/\sqrt{3})Np^{3/2}},
\end{equation}
where
\begin{eqnarray}
S_0=\int_{0}^{1}\left(\frac{3x}{1+x+x^2}\right)^{1/2}dx=0.836367\dots,\label{s0}
\end{eqnarray}
is the shaded area in Fig.~\ref{phase3}. Let us obtain the $p\gg N^{-2/3}$
asymptote of $u^{bl}(p)$ (\ref{phibl3u}). First, for $z\gg 1$
\cite{Abramowitz}
\begin{equation}
\label{largearg}
Ai(z)\simeq \frac{e^{-(2/3)z^{3/2}}}{2\pi^{1/2} z^{1/4}}\;,\;\;Bi(z)\simeq \frac{e^{(2/3)z^{3/2}}}{\pi^{1/2} z^{1/4}}\,.
\end{equation}
Now we need to evaluate the integrals in Eq.~(\ref{phibl3u}). As we are interested in the region of $N^2p^3\gg 1$, the integral of $Ai(\alpha s)$ can be evaluated by putting $p=\infty$ and using the saddle point approximation, arriving at
$$\int_0^{\infty} Ai\left[(3N^2)^{1/3}s\right]ds=\frac{1}{3^{4/3}N^{2/3}}\,.$$
The main contribution to the integral of $Bi(\alpha s)$ at $N^2p^3\gg 1$ comes from a vicinity of $s=p$, where $Bi(\alpha s)$ is exponentially large, see Eq.~(\ref{largearg}). Expanding the exponent in a Taylor series around $s=p$, we obtain in the leading order
$$\int_0^{p} Bi
\left[(3N^2)^{1/3}s\right]ds\simeq\frac{e^{(2/\sqrt{3})Np^{3/2}}}
{3^{7/12}\pi^{1/2} N^{7/6}p^{3/4}}\,.$$
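Both integral evaluations are easy to check numerically. The following sketch (Python with NumPy/SciPy; the values $N=200$ and $p=0.3$ are our illustrative choices, not taken from the text) verifies the exact identity for the $Ai$ integral and the Laplace-type estimate for the $Bi$ integral in the regime $N^2p^3\gg 1$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy          # airy(x) -> (Ai, Ai', Bi, Bi')

N = 200.0
alpha = (3 * N**2) ** (1 / 3)

# int_0^infty Ai(alpha*s) ds = 1/(3*alpha) = 3^(-4/3) N^(-2/3);
# the tail beyond s = 1 is utterly negligible, so a finite cutoff suffices
ai_int, _ = quad(lambda s: airy(alpha * s)[0], 0, 1)

# Laplace estimate of int_0^p Bi(alpha*s) ds, dominated by s near p
p = 0.3
bi_int, _ = quad(lambda s: airy(alpha * s)[2], 0, p, limit=300)
laplace = np.exp((2 / np.sqrt(3)) * N * p**1.5) \
    / (3**(7 / 12) * np.sqrt(np.pi) * N**(7 / 6) * p**0.75)

assert abs(ai_int * 3**(4 / 3) * N**(2 / 3) - 1) < 1e-6
assert abs(bi_int / laplace - 1) < 0.15
```

The relative error of the Laplace estimate is of order $(Np^{3/2})^{-1}$, a few percent for these parameters.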
Now one can see from Eq.~(\ref{coeff3}) that the main contribution to $u^{bl}(p)$ (\ref{phibl3u}) comes from the $Bi(\alpha p)$ term, and we obtain
\begin{equation}\label{uapprox}
u^{bl}\simeq-\frac{3^{1/4}\sqrt{\pi N}E}{p^{1/4}}
e^{(2/\sqrt{3})Np^{3/2}}\,.
\end{equation}
Matching Eqs.~(\ref{wkbapprox3}) and (\ref{uapprox}), we obtain
\begin{equation}\label{exttime3}
E=\sqrt{\frac{N}{3\pi}}e^{-NS_0}\,.
\end{equation}
The MTE in physical units is given by $\tau_{ex}=(\lambda E)^{-1}$,
which is exponentially large in $N$, as expected. A comparison between the analytical result for the extinction rate (\ref{exttime3}) and a
numerical result, obtained by solving (a truncated version of) master equation (\ref{master3}), is shown in Fig.~\ref{extrate}.
For $N\gg 1$ the agreement is excellent.
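A comparison of this type can be sketched in a few lines (Python with NumPy/SciPy). The branching and annihilation rates, $n$ and $n(n-1)(n-2)/(3N^2)$ in units of $\lambda$, are those implied by the stationary balance leading to Eq.~(\ref{smalln3}); the truncation size $M=6N$, the value $N=14$, and the factor-of-two tolerance are our own illustrative choices:

```python
import numpy as np
from scipy.integrate import quad

# Action S0 = int_0^1 sqrt(3x/(1+x+x^2)) dx, Eq. (s0)
S0, _ = quad(lambda x: np.sqrt(3 * x / (1 + x + x**2)), 0, 1)

def extinction_rate_numeric(N, M=None):
    """Slowest relaxation rate of the truncated generator on states
    n = 1..M (n = 0 is absorbing).  Rates, in units of lambda:
    branching A -> 2A at rate n; annihilation 3A -> 0 at rate
    n(n-1)(n-2)/(3 N^2)."""
    if M is None:
        M = 6 * N
    Q = np.zeros((M + 1, M + 1))        # Q[i, j] = rate of the jump j -> i
    for n in range(1, M + 1):
        if n < M:                       # branching, switched off at the cutoff
            Q[n + 1, n] += n
            Q[n, n] -= n
        if n >= 3:                      # triple annihilation
            r = n * (n - 1) * (n - 2) / (3.0 * N**2)
            Q[n - 3, n] += r
            Q[n, n] -= r
    ev = np.linalg.eigvals(Q[1:, 1:])   # drop the absorbing state n = 0
    return -np.max(ev.real)

N = 14
E_wkb = np.sqrt(N / (3 * np.pi)) * np.exp(-N * S0)
E_num = extinction_rate_numeric(N)
```

Already at $N=14$ the WKB rate agrees with the numerical decay rate of the QSD to well within a factor of two.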
\begin{figure}
\includegraphics[width=9.0cm,height=6.6cm,clip=]{fig8.eps}
\caption{Branching and triple annihilation. Shown is a comparison between the extinction rate (\ref{exttime3}) (solid line) and the extinction rate $-[\ln(1-P_0^{num}(t))]/t$ (crosses) found from a
numerical solution of the master
equation (\ref{master3}) at
different $N$. The inset shows the ratio of the two rates.} \label{extrate}
\end{figure}
Now let us calculate the QSD. Combining Eq.~(\ref{cauchy}) with Eqs.~(\ref{G2}) and (\ref{qsd}), we obtain
\begin{equation}\label{exact}
\pi_{n\geq 1}=-\frac{1}{2\pi n
i}\oint\frac{u(p)}{p^n}dp\,.
\end{equation}
For $n\gg 1$ we can use the WKB asymptote~(\ref{phiwkb3}):
\begin{eqnarray}
\pi_{n\gg1}\simeq-\frac{1}{2\pi n i}\oint\frac{u^{WKB}(p)}{p^n}dp =\frac{N}{2\pi n i}\oint
\frac{(1+p+p^2)^{1/4}}{(3p)^{1/4}}\frac{\exp\left[N \int_{1}^{p}\psi(x)dx\right]}{p^{n}}\,dp,
\end{eqnarray}
with $\psi(x)$ given by Eq.~(\ref{sprime}). As $N\gg 1$ and $n\gg 1$, this integral can be
evaluated via the saddle point approximation \cite{orszag}. Let us denote
$f(p)=N\int_{1}^{p}\psi(x)dx-n\ln p$. The saddle point equation $f^{\prime}(p_*)=0$ reduces to a cubic equation
\begin{equation}\label{saddle}
\frac{3 p^3}{1+p+p^2}=\left(\frac{n}{N}\right)^2,
\end{equation}
which has one and only one real root $p_*=p_*(n/N)$. As $f^{\prime\prime}(p_*)>0$, we must choose a
contour in the complex $p$-plane which goes through this root perpendicularly to the
real axis. The Gaussian integration yields
\begin{equation}\label{wkb3b}
\pi_{n\gg1}=\frac{N(1+p_*+p_*^2)^{1/4}}{n \sqrt{2\pi
f^{\prime\prime}(p_*)}\,(3p_*)^{1/4}}\frac{\exp\left[N
\int_{1}^{p_*}\psi(x)dx\right]}{p_*^{n}};
\end{equation}
we omit a cumbersome expression for $f^{\prime\prime}(p)$. Note that for $n\gg 1$, the saddle point $p_*$ is always obtained in the region where $u^{WKB}(p)$ is valid, see below. Let us calculate the $1\ll n\ll N$ and $n\gg N$ asymptotes of Eq.~(\ref{wkb3b}) with exponential accuracy,
$\ln \pi_n \simeq f(p_*)$. For $n\ll N$ the saddle point,
given by Eq.~(\ref{saddle}), is obtained at $p_*=[n/(\sqrt{3}N)]^{2/3}\ll 1$. Here it suffices, in the leading order in $n/N$, to put $p_*=0$ in the upper bound of the integral in Eq.~(\ref{action3}). Then the integral yields $S_0$ from Eq.~(\ref{s0}). For $n\gg N$ we obtain $p_*=[n/(\sqrt{3}N)]^2\gg 1$. Here a dominant contribution to the integral in Eq.~(\ref{action3}) comes from the region of $p_*\gg 1$ which enables one to simplify the integrand. The resulting asymptotes are
\begin{eqnarray}\label{probleftright}
\ln \pi_n&\simeq& N\left[-S_0+\frac{2n}{3N}\ln \frac{N}{n}+{\cal
O}\left(\frac{n}{N}\right)\right],\,\;n\ll N \nonumber\\
\ln \pi_n&\simeq& N\left[-\frac{2n}{N}\left(\ln\frac{n}{N}-1-\ln
\sqrt{3}\right)+{\cal O}(1)\right],\,\;n\gg N\,.\nonumber\\
\end{eqnarray}
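The saddle-point equation (\ref{saddle}) and the limiting positions of its real root quoted above are easy to confirm numerically. A minimal sketch (Python with SciPy; the sample ratios $n/N$ are our choices):

```python
import numpy as np
from scipy.optimize import brentq

def p_star(ratio):
    """Unique real root of 3 p^3/(1+p+p^2) = ratio^2, with ratio = n/N."""
    g = lambda p: 3 * p**3 / (1 + p + p**2) - ratio**2
    return brentq(g, 0.0, 10 * (1 + ratio**2))   # g < 0 at 0, g > 0 at the right end

# n << N:  p* -> [n/(sqrt(3) N)]^(2/3)
assert abs(p_star(1e-3) / (1e-3 / np.sqrt(3))**(2 / 3) - 1) < 0.01
# n >> N:  p* -> [n/(sqrt(3) N)]^2
assert abs(p_star(1e3) / (1e3 / np.sqrt(3))**2 - 1) < 0.01
# |n - N| << N:  p* ~ 1 + (n - N)/N
assert abs(p_star(1.001) - 1.001) < 1e-4
```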
Notice that each of these tails of the QSD is non-Gaussian. The $n\gg N$ tail decays faster than exponentially, thus justifying \textit{a posteriori} our assumption that $\phi(p)$ is an entire function in the complex $p$-plane.
At $|n-N|\ll N$ the saddle point, given by Eq.~(\ref{saddle}), is obtained at
$p_*=1+(n-N)/N$, and we arrive at a Gaussian asymptote
\begin{equation}\label{gauss3}
\pi_n\simeq \frac{1}{\sqrt{2\pi N}}e^{-(n-N)^2/(2N)};
\end{equation}
the preexponent is fixed by normalization. Equation~(\ref{gauss3}) holds for $|n-N|\ll N^{2/3}$; this condition is tighter than $|n-N|\ll N$. Note that the Gaussian asymptote of the QSD can also be found by directly calculating the mean and variance of the distribution. These (and other higher cumulants of the distribution) can be found by using derivatives of $G(p,t)$ with respect to $p$ at $p=1$, see Eq.~(\ref{genprob}). Indeed, from Eqs.~(\ref{genprob}) and (\ref{genfun}), the mean of the QSD (at times $t_r\ll t\ll \tau_{ex}$) is given by $\bar{n}=\partial_p
G|_{p=1}\simeq -u(p=1)=N$, where we have used $u=u^{WKB}(p)$ given by Eq.~(\ref{phiwkb3}). In turn, the variance in the leading order is
\begin{eqnarray}
V=\bar{n^2}-\bar{n}^2=\sum_{n=0}^{\infty}n^2P_n(t)-\left(\sum_{n=0}^{\infty}nP_n(t)\right)^2 =\left.\left[\partial_{pp} G+\partial_p G -(\partial_p G)^2\right]\right|_{p=1} \simeq -u^{\prime}(1)-u(1)-[u(1)]^2\simeq N\nonumber\,,
\end{eqnarray}
recovering the Gaussian asymptote (\ref{gauss3}).
\begin{figure}
\includegraphics[width=9.0cm,height=6.6cm,clip=]{fig9.eps}
\caption{Branching and triple annihilation. Shown is the natural logarithm of the QSD versus $n$ for $N=20$. The dashed line is the
WKB solution (\ref{wkb3b}), the dash-dotted line is the Gaussian
approximation (\ref{gauss3}), and the solid line is the numerical
solution of the (truncated) master equation
(\ref{master3}). Inset: the $n\ll N$ asymptote of the QSD obtained analytically [Eqs.~(\ref{pi123}) and (\ref{smalln3})] ($\times$'s) and numerically (fat dots).} \label{prob3b}
\end{figure}
At $n={\cal O}(1)$ the QSD can be evaluated directly from
$$
\pi_n=-\left.\frac{1}{n!}\frac{d^{n-1} u(p)}{d p^{n-1}}\right|_{p=0},\;\;\;\;\;\; n\geq1.
$$
Here one should use the boundary-layer solution around $p=0$, given by Eqs.~(\ref{phibl3u}) and (\ref{coeff3}). This yields
\begin{equation}\label{pi123}
\pi_1=\frac{\Gamma(1/3) E N^{2/3}}{3^{1/3}}\;,\;\;\pi_2=\frac{\pi EN^{4/3}}{3^{1/6}\Gamma(1/3)}\;,\;\;\mbox{and}\;\;
\pi_3=\frac{E N^2}{2}.
\end{equation}
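These closed forms follow from the values $Bi(0)=3^{-1/6}/\Gamma(2/3)$ and $Bi^{\prime}(0)=3^{1/6}/\Gamma(1/3)$, together with the boundary-layer ODE itself, which gives $u^{\prime\prime}(0)=-3N^2E$ and hence $\pi_3=EN^2/2$ directly. A quick numerical confirmation (Python with SciPy; $N$ is an arbitrary choice and $E$ only sets the overall scale):

```python
import numpy as np
from scipy.special import airy, gamma

N, E = 50.0, 1.0                       # E is a common scale factor; set E = 1
alpha = (3 * N**2) ** (1 / 3)
c2 = -2 * np.pi * E * N**(2 / 3) / 3**(2 / 3)   # Eq. (coeff3), with c1 = 0

Ai0, Aip0, Bi0, Bip0 = airy(0.0)
pi1 = -c2 * Bi0                 # pi_1 = -u(0) = -c2*Bi(0)
pi2 = -0.5 * c2 * alpha * Bip0  # pi_2 = -(1/2)u'(0), u'(0) = c2*alpha*Bi'(0)
pi3 = 0.5 * N**2 * E            # from u''(0) = 3N^2(p*u - E)|_{p=0} = -3N^2 E

assert np.isclose(pi1, gamma(1 / 3) * E * N**(2 / 3) / 3**(1 / 3))
assert np.isclose(pi2, np.pi * E * N**(4 / 3) / (3**(1 / 6) * gamma(1 / 3)))
```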
To calculate other $n={\cal O}(1)$ terms, one can use a recursion relation obtainable from the master equation (\ref{master3}) with $\dot{P}_n=0$. Indeed, at $n\ll N$ one can neglect
the terms $n(n-1)(n-2)P_n$ and $(n-1)P_{n-1}$ compared with the terms $(n+3)(n+2)(n+1)P_{n+3}$ and $nP_n$, respectively, and arrive at the following relation:
\begin{equation}\label{smalln3}
\pi_{n+3}=\frac{3N^2 n }{(n+3)(n+2)(n+1)}\pi_n\,.
\end{equation}
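In practice the whole small-$n$ tail is generated by iterating Eq.~(\ref{smalln3}) along the three chains seeded by Eq.~(\ref{pi123}). A minimal sketch (Python; the values of $N$, $E$ and the cutoff are illustrative):

```python
def qsd_small_n(N, E, n_max):
    """Small-n tail of the QSD: seeds from Eq. (pi123), then the
    recursion pi_{n+3} = 3 N^2 n /((n+3)(n+2)(n+1)) * pi_n, Eq. (smalln3).
    Valid only for n << N.  Returns {n: pi_n} for 1 <= n <= n_max."""
    from math import gamma, pi as PI
    piq = {1: gamma(1 / 3) * E * N**(2 / 3) / 3**(1 / 3),
           2: PI * E * N**(4 / 3) / (3**(1 / 6) * gamma(1 / 3)),
           3: E * N**2 / 2}
    for n in range(1, n_max - 2):
        piq[n + 3] = 3 * N**2 * n / ((n + 3) * (n + 2) * (n + 1)) * piq[n]
    return piq

N, E = 30, 1.0
piq = qsd_small_n(N, E, 15)
# consistency: pi_4 = N^2 pi_1 / 8, and the tail grows while n is well below N
assert abs(piq[4] - N**2 * piq[1] / 8) < 1e-9 * piq[4]
assert all(piq[n] < piq[n + 1] for n in range(1, 14))
```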
Note that the small-$n$ (\ref{smalln3}) and the WKB (\ref{wkb3b}) segments of the QSD have a joint region of validity at $1\ll n\ll N$.
A comparison between the WKB result (\ref{wkb3b}) and a numerical solution of (a truncated version of) master equation (\ref{master3}) is shown in
Fig.~\ref{prob3b}. The inset compares the $n \ll N$ analytical asymptote [see Eqs.~(\ref{pi123}) and (\ref{smalln3})] with numerical results. Excellent agreement is observed in both cases. It can be also seen that the Gaussian approximation (\ref{gauss3}) strongly overestimates the QSD in the low-$n$ region, and underestimates it in the high-$n$ region.
\section{Discussion}\label{discussion}
The $p$-space representation offers a unique perspective on the theory of large fluctuations in populations undergoing Markovian stochastic gain-loss processes. The stationary distribution of the population size is encoded in the ground-state eigenfunction of a Sturm-Liouville (spectral) problem for the probability generating function. In the case of a long-lived metastable population on the way to extinction, the MTE and the quasi-stationary distribution of population size are encoded in the eigenfunction of the lowest excited state. The uniqueness of the solution in these problems is guaranteed by the condition that the probability generating function is an entire function on the whole complex $p$-plane except at infinity. As this work has demonstrated (see also Refs. \cite{Assaf2,Assaf1,Assaf,Kamenev1,Kamenev2,AKM}), the $p$-space representation in conjunction with the WKB approximation and other perturbation tools
employing a large parameter $N\gg 1$ (the mean population size in the stationary or metastable state) yields accurate results for extreme statistics in a broad class of problems of stochastic population dynamics. Such accuracy is usually impossible to attain via the van Kampen system-size expansion, which approximates the exact master equation by a Fokker-Planck equation.
How does the $p$-space approach compare with the ``real" space WKB method of Refs. \cite{dykman,kessler,MS,EK,Assaf4} when the stationary or metastable population size is large, $N\gg 1$? One advantage of the $p$-space representation is that, for two-body reactions, there is no need for the WKB approximation, as the quasi-stationary equation in this case is always solvable exactly. Another advantage appears when the WKB solution for $G(p,t)$ is valid for every $p\gtrsim 0$, as occurs in the molecular hydrogen production problem, Sec. III.
In such cases one directly finds the entire probability distribution function, including the region of
small $n={\cal O}(1)$. In the real-space approach,
a separate (non-WKB) treatment of the $n={\cal O}(1)$ region, and a matching with the WKB solution valid at $n\gg 1$, would be needed \cite{Assaf4}.
Still, in our experience, every problem which includes the large parameter $N\gg 1$ and can be solved
in the $p$-space can also be solved in the ``real" space. Furthermore, for populations exhibiting escape to infinity \cite{MS}, escape to another metastable state \cite{EK}, or Scenario B of extinction \cite{Assaf4}, the $p$-space representation meets significant difficulties. One difficulty is that one should account for a constant-current WKB solution in these cases \cite{MS,EK,Assaf4}. The constant-current solution comes from the deterministic line $p=1$ of the phase plane of the underlying classical Hamiltonian. In the $p$-representation this line is vertical, as in Figs. \ref{phasehyd}, \ref{phasedb} and \ref{phase3}, and so the constant-current solution cannot be easily accounted for. In addition, it is unclear how to deal with the region of non-uniqueness of $q=q(p)$ which is inherent, in the $p$-representation, in these cases. There are two WKB solutions in this region, one of them exponentially small compared with the other. The real-space approach avoids these difficulties, and the solution in these cases can be worked out in a straightforward manner \cite{MS,EK,Assaf4}.
An important advantage of the $p$-space representation stems from the fact that the evolution equation for $G(p,t)$ is \textit{exactly} equivalent to the original master equation. Therefore, the $p$-space approach is especially valuable for exact analysis, as illustrated by the example of molecular hydrogen production, see Ref. \cite{green} and Section III.
Finally, generalization of the $p$-representation to interacting multi-species populations is quite straightforward, see Ref.~\cite{KM}. The resulting multi-dimensional evolution equation for the probability generating function can be analyzed in the WKB approximation. At present, only the leading-order WKB approximation for population extinction is available, and this is regardless of whether one uses the $p$- or $n$-space approach. In the leading WKB order the problem again reduces to finding a nontrivial zero-energy trajectory of the corresponding classical Hamiltonian, and the action along this special trajectory. This problem can be solved numerically. If additional small parameters are present, the problem may become solvable analytically, again in both $p$- and $n$-spaces~\cite{KM,DSL,MS2}.
\section*{Acknowledgments}
We are grateful to Alex Kamenev for fruitful discussions. This work was supported by the Israel Science Foundation (Grant No. 408/08).
\section{Introduction}
In \cite{Stallings}, J. Stallings proved that finitely generated
groups with more than one end split non-trivially as an
amalgamated product $A\ast_CB$ (where non-trivial means $A\ne C\ne B$) or an HNN-extension $A\ast_C$ with $C$ a finite group. In about 1970, C. T. C. Wall raised questions about whether or not one could begin with a group $A_0$ and, for $i>0$, produce an infinite sequence of non-trivial splittings, $A_i\ast _{C_i}B_i$ or $A_i\ast _{C_i}$ of $A_{i-1}$, with $C_i$ finite. When no such sequence exists, Wall called the group $A_0$ {\it accessible} over such splittings. In
\cite{Dunwoody} M. Dunwoody proved that finitely presented groups
are accessible with respect to splittings over finite groups.
This implies that for a finitely presented group $G$ there is no
infinite sequence $\Lambda_1, \Lambda_2, \ldots $ of graph of
groups decompositions of $G$ such that $\Lambda_1$ is the trivial
decomposition (with one vertex) and for $i>1$, $\Lambda_i$ is
obtained from $\Lambda _{i-1}$ by non-trivially splitting a
vertex group over a finite group, replacing this vertex group
by the splitting and then reducing. (For splittings over finite groups there is
never a compatibility problem.) Instead any such sequence of
decompositions must terminate in one in which each vertex group
is either 1-ended or finite and all edge groups are finite. The
class of {\it small} groups is defined in terms of actions on
trees and is contained in the class of groups that contain no
non-abelian free group as a subgroup. In \cite{BestvinaFeighn}, M.
Bestvina and M. Feighn show that for a finitely presented group
$G$ there is a bound $N(G)$ on the number of edges in a
reduced graph of groups decomposition of $G$, when edge groups
are small. Limits of this sort are generally called
``accessibility" results. If $\mathcal C$ is a class of groups then call a graph of groups decomposition of a group $G$ with edge groups in $\mathcal C$ a $\mathcal C$-{\it decomposition} of $G$. A group $G$ is called {\it strongly
accessible} over $\cal C$ if there is a bound on the number of terms in a sequence $\Lambda _1, \Lambda _2, \ldots , \Lambda_n$ of
$\mathcal C$-decompositions of $G$, such that $\Lambda_1$
is the trivial decomposition, and for $i> 1$, $\Lambda
_i$ is obtained from $\Lambda _{i-1}$ by replacing a vertex group of $\Lambda _{i-1}$ with a compatible splitting $A\ast_CB$ or $A\ast_C$ ($C\in \cal C$) and then reducing. We call a group $G$
{\it accessible} over a class of groups $\cal C$ if there is a bound $N(G)$ on the number of edge groups in a reduced graph of groups decomposition of $G$ with edge groups in $\cal C$. Certainly strong accessibility implies accessibility. Dunwoody's theorem is a strong accessibility result for finitely presented groups over the class of finite groups. We know of no example where accessibility and strong accessibility are different.
In this paper, we produce accessibility results for
finitely generated Coxeter groups. In analogy with the 1-ended
assumptions of Rips-Sela \cite{RipsSela}, and the minimality
assumptions of \cite{DunwoodySageev}, we consider the class $M(W)$
of minimal splitting subgroups of $W$. If $H$ and $K$ are
subgroups of a group $W$ then $H$ is {\it smaller} than $K$ if
$H\cap K$ has finite index in $H$ and infinite index in $K$. If
$W$ is a group, then define $M(W)$, the set of {\it minimal
splitting subgroups of $W$}, to be the set of all subgroups $H$
of $W$, such that $W$ splits non-trivially (as an amalgamated
product or HNN-extension) over $H$ and for any other splitting
subgroup $K$ of $W$, $K$ is not smaller than $H$.
\begin{remark}
A minimal splitting subgroup of a finitely generated Coxeter group $W$ is finitely generated. This follows from remark 1 of \cite{MTVisual}. Suppose $A\ast_CB$ is a non-trivial splitting of $W$ and $C$ is not finitely generated. There is a reduced visual decomposition of $W$ with (visual and hence finitely generated) edge group $E$ such that a conjugate of $E$ is a subgroup of $C$. Hence some conjugate of $E$ is smaller than $C$.
\end{remark}
Finite
splitting subgroups are always minimal and if a group is 1-ended,
then any 2-ended splitting subgroup is minimal. Our main theorem
is:
\begin{theorem}\label{Main}
Finitely generated Coxeter groups are strongly accessible over
minimal splittings.
\end{theorem}
Our basic reference for Coxeter groups is Bourbaki
\cite{Bourbaki}. A {\it Coxeter presentation} is given by
$$\langle S: m(s,t)\ (s,t\in S,\ m(s,t)<\infty )\rangle$$ where
$m:S^2\to \{1,2,\ldots ,\infty \}$ is such that $m(s,t)=1$ iff
$s=t$ and $m(s,t)=m(t,s)$. The pair $(W,S)$ is called a {\it
Coxeter system}. In the group with this presentation, the elements
of $S$ are distinct elements of order 2 and a product $st$ of
generators has order $m(s,t)$. Distinct generators commute if and
only if $m(s,t)=2$. A subgroup of $W$ generated by a subset $S'$
of $S$ is called {\it special} or {\it visual}, and the pair
$(\langle S'\rangle, S')$ is a Coxeter system with $m':(S')^2\to \{1,2,\ldots ,\infty\}$ the
restriction of $m$. A simple analysis of a Coxeter
presentation allows one to construct all decompositions of $W$
with only visual vertex and edge groups from that Coxeter
presentation. In \cite{MTVisual}, the authors show that for any finitely generated
Coxeter system $(W,S)$ and any graph of groups decomposition $\Lambda$ of $W$, there is an associated ``visual" graph
of groups decomposition $\Psi$ of $W$ with edge and vertex groups
visual, and such that each vertex (respectively edge) group of $\Psi$ is contained in a conjugate of a vertex
(respectively edge) group of $\Lambda$. This result
is called ``the visual decomposition theorem for finitely
generated Coxeter groups", and we say $\Psi$ is {\it a visual decomposition for $\Lambda$}. Clearly accessibility of finitely generated Coxeter groups is not violated by only visual decompositions. But, we give an example in \cite{MTVisual}, of a
finitely generated Coxeter system $(W,S)$ and a sequence
$\Lambda_i$ ($i\geq 1$) of (non-visual) reduced graph of groups
decompositions of $W$, such that $\Lambda _i$ has $i$-edge groups
and, for $i>1$, $\Lambda _i$ is obtained by compatibly splitting
a vertex group of $\Lambda _{i-1}$. Hence, even in the light of
the visual decomposition theorem and our accessibility
results here, there is no accessibility for Coxeter groups
over arbitrary splittings.
Theorem \ref{Main} implies there are irreducible decompositions of finitely generated Coxeter groups, with minimal splitting edge groups. Our next result implies that any such irreducible decomposition
has an ``equivalent" visual counterpart.
\begin{theorem}\label{Close}
Suppose $(W,S)$ is a Coxeter system and $\Lambda$ is a reduced
graph of groups decomposition of $W$ with $M(W)$ edge groups. If
$\Lambda$ is irreducible with respect to $M(W)$ splittings, and
$\Psi$ is a reduced graph of groups decomposition such that each edge group of $\Psi$ is in $M(W)$, each vertex group of $\Psi$ is a subgroup of a conjugate of a vertex group of
$\Lambda$, and each edge group of $\Lambda$ contains a conjugate of an edge group of $\Psi$ (in particular if $\Psi$ is a reduced visual graph of groups decomposition for $(W,S)$ derived from $\Lambda$ as in the main theorem of \cite{MTVisual}), then
\begin{enumerate}
\item $\Psi$ is irreducible with respect to $M(W)$ splittings
\item There is a (unique) bijection $\alpha$ of the vertices
of $\Lambda$ to the vertices of $\Psi$ such that for each vertex
$V$ of $\Lambda$, $\Lambda(V)$ is conjugate to $\Psi(\alpha (V))$
\item When $\Psi$ is visual, each edge group of $\Lambda$ is conjugate to a visual
subgroup for $(W,S)$.
\end{enumerate}
\end{theorem}
The vertex groups of $\Lambda$ in theorem \ref{Close} are Coxeter, and when $W$ is not indecomposable, they have fewer generators than there are in $S$. Hence they have irreducible decompositions of the same type. As the number of Coxeter generators decreases each time we pass from a non-indecomposable vertex group to a vertex group of an irreducible decomposition with minimal splitting edge groups for that vertex group, eventually this process must terminate with (up to conjugation) irreducible visual subgroups of $(W,S)$. These terminal groups are maximal FA subgroups of $W$ and must be conjugate to the visual subgroups of $W$ determined by maximal complete subsets of the presentation diagram $\Gamma (W,S)$ (see \cite{MTVisual}).
The paper is laid out as follows: in \S 2 we state the visual
decomposition theorem and review the basics of graphs of groups
decompositions.
In \S 3, we list several well-known technical facts about Coxeter
groups. \S 3 concludes with an argument that shows an infinite
subgroup of a finitely generated Coxeter group $W$ (with Coxeter
system $(W,S)$), containing a visual finite index subgroup
$\langle A\rangle$ ($A\subset S$) decomposes as $\langle
A_0\rangle \times F$ where $A_0\subset A$ and $F$ is a finite
subgroup of a finite group $\langle D\rangle$ where $D\subset S$
and $D$ commutes with $A_0$. This result makes it possible for us
to understand arbitrary minimal splitting subgroups of $W$ in our
analysis of strong accessibility.
In \S 4, we begin our analysis of $M(W)$ by classifying the visual
members of $M(W)$ for any Coxeter system $(W,S)$.
Proposition \ref{L24N} shows that for a non-trivial splitting $A\ast_CB$ of a finitely generated Coxeter group $W$ over a non-minimal group $C$, there is a splitting of $W$ over a minimal splitting subgroup $M$, such that $M$ is smaller than $C$. I.e., all non-trivial splittings of a finitely generated Coxeter group are ``refined" by minimal splittings. Theorem \ref{T1N} is the analogue of theorem \ref{artificial} (from \cite{MTVisual}), when edge groups of a graph of groups decomposition of a finitely generated Coxeter group are minimal splitting subgroups. The implications with this additional ``minimal splitting" hypothesis far exceed the conclusions of theorem \ref{artificial} and supply one of the more important technical results of the paper. Roughly speaking, proposition \ref{P3N} says that any graph of groups decomposition of a finitely generated Coxeter group with edge groups equal to minimal splitting subgroups of the Coxeter group is, up to ``artificial considerations", visual. Proposition \ref{P3N} gives another key idea towards the proof of the main theorem. It allows us to define a descending sequence of positive integers corresponding to a given sequence of graphs of groups as in the main theorem.
Finally, theorem \ref{vismin} is a
minimal splitting version of the visual decomposition theorem of \cite{MTVisual}.
In \S 5, we define what it means for a visual decomposition of a
Coxeter group $W$, with $M(W)$ edge groups, to look irreducible
with respect to $M(W)$ subgroups. We show that a visual
decomposition looks irreducible if and only if it is irreducible.
This implies that all irreducible visual decompositions of a
Coxeter group can be constructed by an elementary algorithm.
Our main results, theorems \ref{Main} and \ref{Close} are proved
in \S5.
In the final section, \S 6, we begin with a list of generalizations of our
results that follow from the techniques of the paper. Then, we give an analysis of minimal splitting subgroups of ascending HNN extensions, followed by a complete analysis of minimal splittings of general finitely generated groups that contain no non-abelian free group. This includes an analysis of Thompson's group $F$. We conclude
with a list of questions.
\section{Graph of Groups and Visual Decompositions}
Section 2 of \cite{MTVisual} is an introduction to graphs of
groups that is completely sufficient for our needs in this paper.
We include the necessary terminology here. A graph of groups
$\Lambda$ consists of a set $V(\Lambda)$ of vertices, a set
$E(\Lambda)$ of edges, and maps $\iota,\tau:E(\Lambda)\to
V(\Lambda)$ giving the initial and terminal vertices of each edge
in a connected graph, together with vertex groups $\Lambda(V)$
for $V\in V(\Lambda)$, edge groups $\Lambda(E)$ for $E\in
E(\Lambda)$, with $\Lambda(E)\subset\Lambda(\iota(E))$ and an injective
group homomorphism $t_E:\Lambda(E)\to\Lambda(\tau(E))$, called
the edge map of $E$ and denoted by $t_E:g\mapsto g^{t_E}$. The
fundamental group $\pi(\Lambda)$ of a graph of groups $\Lambda$
is the group with presentation having generators the disjoint
union of $\Lambda(V)$ for $V\in V(\Lambda)$, together with a
symbol $t_E$ for each edge $E\in E(\Lambda)$, and having as
defining relations the relations for each $\Lambda(V)$, the
relations $gt_E=t_Eg^{t_E}$ for $E\in E(\Lambda)$ and
$g\in\Lambda(\iota(E))$, and relations $t_E=1$ for $E$ in a given
spanning tree of $\Lambda$ (the result, up to isomorphism, is
independent of the spanning tree taken).
If $V$ is a vertex of a graph of groups decomposition $\Lambda$ of
a group $G$ and $\Phi$ is a decomposition of $\Lambda(V)$ so that
for each edge $E$ of $\Lambda$ adjacent to $V$, $\Lambda(E)$ is
$\Lambda (V)$-conjugate to a subgroup of a vertex group of
$\Phi$, then $\Phi$ is {\it compatible} with $\Lambda$. Then $V$
can be replaced by $\Phi$ to form a finer graph of groups
decomposition of $G$.
A graph of groups is {\it reduced} if no edge between distinct
vertices has edge group the same as an endpoint vertex group. If
a graph of groups is not reduced, then we may collapse a vertex
across an edge, giving a smaller graph of groups decomposition of
the group.
If there is no non-trivial homomorphism of a group to the infinite
cyclic group $\mathbb Z$, then a graph of groups decomposition of
the group cannot contain a loop. In this case, the graph is a
tree. In particular, any graph of groups decomposition of a
Coxeter group has underlying graph a tree.
Suppose $\langle S: m(s,t)\ (s,t\in S,\ m(s,t)<\infty )\rangle$
is a Coxeter presentation for the Coxeter group $W$. The {\it
presentation diagram} $\Gamma(W,S)$ of $W$ with respect to $S$
has vertex set $S$ and an undirected edge labeled $m(s,t)$
connecting vertices $s$ and $t$ if $m(s,t)<\infty$. It is evident
from the above presentation that if a subset $C$ of $S$ separates
$\Gamma (W,S)$, $A$ is $C$ union some of the components of
$\Gamma -C$ and $B$ is $C$ union the rest of the components, then
$W$ decomposes as $\langle A\rangle \ast _{\langle C\rangle } \langle B\rangle$. This generalizes to graphs of
groups decompositions of Coxeter groups where each vertex and
edge group is generated by a subset of $S$. We say that $\Psi$ is a {\it visual graph of groups decomposition of} $W$ (for a given $S$), if each vertex and edge group of $\Psi$ is a special subgroup of $W$, the injections of each edge group into its endpoint vertex groups are given simply by inclusion, and the fundamental group of $\Psi$ is isomorphic to $W$ by the homomorphism induced by the inclusion map of vertex groups into $W$. If $C$ and $D$ are subsets of $S$, then we say $C$ {\it separates} $D$ {\it in} $\Gamma$ if there are points $d_1$ and $d_2$ of $D-C$, such that any path in $\Gamma$ connecting $d_1$ and $d_2$ contains a point of $C$.
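The passage from a separating subset $C$ of the presentation diagram to a splitting $\langle A\rangle \ast_{\langle C\rangle}\langle B\rangle$ is entirely combinatorial, so it can be sketched as a small program. The following Python sketch (names and the toy diagram are ours; for brevity it emits one grouping of the components per separator, rather than all ways of distributing components between $A$ and $B$) enumerates visual splittings read off a presentation diagram:

```python
from itertools import combinations

def components(vertices, edges, removed):
    """Connected components of the diagram with the vertices in `removed` deleted."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    removed = set(removed)
    left, comps = set(vertices) - removed, []
    while left:
        stack, comp = [next(iter(left))], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp - removed)
        comps.append(comp); left -= comp
    return comps

def visual_splittings(vertices, edges):
    """Splittings <A> *_<C> <B> read off the presentation diagram:
    C runs over vertex subsets separating the diagram (>= 2 components of
    Gamma - C); A = C plus one component, B = C plus the remaining ones."""
    out = []
    for k in range(len(vertices) - 1):
        for C in combinations(vertices, k):
            comps = components(vertices, edges, C)
            if len(comps) >= 2:
                A = set(C) | comps[0]
                B = set(C) | set().union(*comps[1:])
                out.append((frozenset(A), frozenset(C), frozenset(B)))
    return out

# toy diagram: path a - b - c, i.e. m(a,b), m(b,c) finite and m(a,c) infinite;
# removing b separates the diagram, giving W = <a,b> *_<b> <b,c>
splits = visual_splittings(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')])
assert len(splits) == 1 and splits[0][1] == frozenset({'b'})
```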
The following lemma of \cite{MTVisual} makes it possible to understand when a graph of groups with special subgroups has fundamental group $W$.
\begin{lemma}\label{MT2}
Suppose $(W,S)$ is a Coxeter system. A graph of groups $\Psi$
with graph a tree, where each vertex group and edge group is a
special subgroup and each edge map is given by inclusion, is a
visual graph of groups decomposition of $W$ iff each edge in the
presentation diagram of $W$ is an edge in the presentation diagram
of a vertex group and, for each generator $s\in S$, the set of
vertices and edges with groups containing $s$ is a nonempty
subtree in $\Psi$.
\end{lemma}
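The two conditions of the lemma are finitely checkable, which we illustrate with a hedged Python sketch (the data format and the example are our own; the sketch assumes, as the lemma does, that each edge group is contained in both endpoint vertex groups):

```python
def is_visual_decomposition(S, gamma_edges, tree_vertices, tree_edges):
    """Check the two conditions of the lemma for a candidate tree of special
    subgroups.  tree_vertices: {name: frozenset of generators};
    tree_edges: list of (name1, name2, frozenset of edge-group generators)."""
    # (i) every edge of the presentation diagram lies in some vertex group
    for s, t in gamma_edges:
        if not any(s in g and t in g for g in tree_vertices.values()):
            return False
    # (ii) for each generator s, the vertices and edges containing s form a
    # nonempty subtree: nonempty and connected (a connected subgraph of a tree
    # is automatically a subtree)
    for s in S:
        Vs = {v for v, g in tree_vertices.items() if s in g}
        if not Vs:
            return False
        Es = [(a, b) for a, b, g in tree_edges if s in g]
        parent = {v: v for v in Vs}      # union-find over the support of s
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a, b in Es:
            parent[find(a)] = find(b)
        if len({find(v) for v in Vs}) != 1:
            return False
    return True

# diagram a - b - c again: the tree <a,b> --<b>-- <b,c> passes both conditions
S = {'a', 'b', 'c'}
ok = is_visual_decomposition(
    S, [('a', 'b'), ('b', 'c')],
    {'U': frozenset({'a', 'b'}), 'V': frozenset({'b', 'c'})},
    [('U', 'V', frozenset({'b'}))])
assert ok
```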
In section 4 we describe when visual graph of groups decompositions with minimal splitting edge groups are irreducible with respect to splittings over minimal splitting subgroups. The next lemma follows easily from lemma \ref{MT2} and helps make that description possible.
\begin{lemma} \label{MT3}
Suppose $\Psi$ is a visual graph of groups decomposition for the finitely generated Coxeter system $(W,S)$, $V\subset S$ is such that $\langle V\rangle$ is a vertex group of $\Psi$ and $E\subset V$ separates $V$ in $\Gamma (W,S)$. Then $\langle V\rangle$ splits over $\langle E\rangle$, non-trivially and compatibly with $\Psi$ to give a finer visual decomposition for $(W,S)$ if and only if there are subsets $A$ and $B$ of $S$ such that $A$ is equal to $E$ union (the vertices of) some of the components of $\Gamma -E$, $B$ is $E$ union the rest of the components of $\Gamma-E$, $A\cap V\ne E\ne B\cap V$, and for each edge $D$ of $\Psi$ which is adjacent to $V$, and $D_S\subset S$ such that $\langle D_S\rangle =\Psi(D)$, we have $D_S\subset A$ or $D_S\subset B$. The $\Psi$-compatible splitting of $\langle V\rangle$ is $\langle A\cap V\rangle\ast _{\langle E\rangle } \langle B\cap V\rangle$.
\end{lemma}
The main theorem of \cite{MTVisual} is ``the visual decomposition theorem for finitely generated Coxeter groups":
\begin{theorem}\label{MT1}
Suppose $(W,S)$ is a Coxeter system and $\Lambda$ is a graph of
groups decomposition of $W$. Then $W$ has a visual graph of
groups decomposition $\Psi$, where each vertex (edge) group of $\Psi$ is
a subgroup of a conjugate of a vertex (respectively edge) group of $\Lambda$.
Moreover, $\Psi$ can be taken so that each special subgroup of
$W$ that is a subgroup of a conjugate of a vertex group of
$\Lambda$ is a subgroup of a vertex group of $\Psi$.
\end{theorem}
If $(W,S)$ is a finitely generated Coxeter system, $\Lambda$ is a graph of groups decomposition of $W$ and $\Psi$ satisfies the conclusion of theorem \ref{MT1} (including the moreover clause), then $\Psi$ is called {\it a visual decomposition from} $\Lambda$ (see \cite{MTVisual}). In remark 1 of \cite{MTVisual}, it is shown that if $\Lambda$ is reduced and $\Psi$ is a visual decomposition from $\Lambda$ then for any edge $E$ of $\Lambda$ there is an edge $D$ of $\Psi$ such that $\Psi (D)$ is conjugate to a subgroup of $\Lambda (E)$.
If a group $G$ decomposes as $A\ast _CB$ and $H$ is a subgroup of
$B$, then the group $\langle A\cup H\rangle$ decomposes as $A\ast
_C\langle C\cup H\rangle$. Furthermore, $G$ decomposes as $\langle
A\cup H\rangle \ast_{\langle C\cup H\rangle}B$, giving a somewhat
``artificial" decomposition of $G$. In \cite{MTVisual}, this idea is used on a
certain Coxeter system $(W,S)$ to produce reduced graph of groups
decompositions of $W$ with arbitrarily large numbers of edges.
The following theorem of \cite{MTVisual} establishes limits on how
far an arbitrary graph of groups decomposition for a finitely
generated Coxeter system can stray from a visual decomposition
for that system.
\begin{theorem}\label{artificial}
Suppose $(W,S)$ is a finitely generated Coxeter system, $\Lambda$
is a graph of groups decomposition of $W$ and $\Psi$ is a reduced
graph of groups decomposition of $W$ such that each vertex group of $\Psi$ is a subgroup of a conjugate of a vertex group of $\Lambda$. Then for each vertex
$V$ of $\Lambda$, the vertex group $\Lambda (V)$, has a graph of
groups decomposition $\Phi_V$ such that each vertex group of
$\Phi _V$ is either
(1) conjugate to a vertex group of $\Psi$ or
(2) a subgroup of $v\Lambda (E)v^{-1}$ for some $v\in \Lambda
(V)$ and $E$ some edge of $\Lambda$ adjacent to $V$.
\end{theorem}
When $\Psi$ is visual, vertex groups of the first type in theorem \ref{artificial} are
visual and those of the second type seem somewhat artificial. In section 4 we prove theorem \ref{T1N} which shows that if the edge groups of the decomposition $\Lambda$ in theorem \ref{artificial} are minimal splitting subgroups of $W$, then the decompositions $\Phi_V$ are compatible with $\Lambda$ and part (2) of the conclusion can be significantly enhanced.
\begin{lemma}\label{Sub}
If $\Lambda$ is a reduced graph of groups decomposition of a
group $G$, $V$ and $U$ are vertices of $\Lambda$ and $g\Lambda
(V)g^{-1}\subset\Lambda (U)$ for some $g\in G$, then $V=U$. If
additionally $\Lambda$ is a tree, then $g\in \Lambda(V)$.
$\square$
\end{lemma}
If $W$ is a finitely generated Coxeter group then since $W$ has a
set of order 2 generators, there is no non-trivial homomorphism
from $W$ to $\mathbb Z$. Hence any graph of groups decomposition
of $W$ is a tree. If $C\in M(W)$ and $W$ is finitely generated,
then theorem \ref{MT1} implies that $C$ contains a subgroup of
finite index which is isomorphic to a Coxeter group and so there
is no non-trivial homomorphism of $C$ to $\mathbb Z$.
The following is an easy exercise in the theory of graphs of
groups or, more practically, a direct consequence of the
exactness of the Mayer-Vietoris sequence for a pair of groups.
\begin{lemma}\label{Z}
Suppose the group $W$ decomposes as $A\ast _CB$ and there is no
non-trivial homomorphism of $W$ or $C$ to $\mathbb Z$. Then there
is no non-trivial homomorphism of $A$ or $B$ to $\mathbb Z$.
$\square$
\end{lemma}
\begin{corollary}\label{CZ}
Suppose $W$ is a finitely generated Coxeter group and $\Lambda$
is a graph of groups decomposition of $W$ with each edge group in
$M(W)$. Then any graph of groups decomposition of a vertex group
of $\Lambda$ is a tree. $\square$
\end{corollary}
\section{Preliminary results}
We list some results used in this paper. Most can be found in
\cite{Bourbaki}.
\begin{lemma} \label{Order}
Suppose $(W,S)$ is a Coxeter system and $P=\langle
S:(st)^{m(s,t)}$ for $m(s,t)<\infty\rangle$ (where $m:S^2\to
\{1,2,\ldots ,\infty\}$) is a Coxeter presentation for $W$. If
$A$ is a subset of $S$, then $(\langle A\rangle, A)$ is a Coxeter
system with Coxeter presentation $\langle A:(st)^{m'(s,t)}$ for
$m'(s,t) <\infty \rangle$ (where $m'=m\vert _{A^2}$). In
particular, if $\{s,t\}\subset S$, then the order of $(st)$ is
$m(s,t)$. $\square$
\end{lemma}
The following result is due to Tits:
\begin{lemma} \label{Tits}
Suppose $(W,S)$ is a Coxeter system and $F$ is a finite subgroup
of $W$. Then there is $A\subset S$ such that $\langle A\rangle$ is
finite and some conjugate of $F$ is a subgroup of $\langle
A\rangle$. $\square$
\end{lemma}
If $A$ is a set of generators for a group $G$, the {\it Cayley
graph} ${\cal K}(G,A)$ of $G$ with respect to $A$ has $G$ as
vertex set and a directed edge labeled $a$ from $g\in G$ to $ga$
for each $a\in A$. The group $G$ acts on the left of $\cal K$.
Given a vertex $g$ in $\cal K$, the edge paths in $\cal K$ at $g$
are in 1-1 correspondence with the words in the letters $A^{\pm
1}$ where the letter $a^{-1}$ is used if an edge labeled $a$ is
traversed opposite its orientation. Note that for a Coxeter
system $(W,S)$, and $s\in S$, $s=s^{-1}$. It is standard to
identify the edges labeled $s$ at $x$ and $s$ at $xs$ in ${\cal
K}(W,S)$ for each vertex $x$ of $\cal K$ and each $s\in S$, and to
ignore the orientation on the edges. Given a group $G$ with
generators $A$, an $A$-{\it geodesic} for $g\in G$ is a shortest
word in the letters $A^{\pm 1}$ whose product is $g$. A geodesic
for $G$ defines a geodesic in $\cal K$ for each vertex $g\in G$.
Cayley graphs provide an excellent geometric setting for many of
the results in this section.
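For example, if $W=\langle s,t\rangle$ with $m(s,t)=\infty$, then ${\cal K}(W,\{s,t\})$ (with the identifications above) is a line, its edges alternately labeled $s$ and $t$; each $w\in W$ has a unique geodesic, namely the alternating word
$$stst\cdots \quad \hbox{or}\quad tsts\cdots$$
representing it.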
The next result is called the deletion condition for Coxeter
groups. An elementary proof of this fact, based on Dehn diagrams,
can be found in \cite{MTVisual}.
\begin{lemma}\label{Del}
{\bf The Deletion Condition} Suppose $(W,S)$ is a Coxeter system
and $a_1\cdots a_n$ is a word in $S$ which is not geodesic. Then
for some $i<j$, $a_i\cdots a_j=a_{i+1}\cdots a_{j-1}$. I.e. the
letters $a_i$ and $a_j$ can be deleted. $\square$
\end{lemma}
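As a sample computation, consider the dihedral system with $S=\{s,t\}$ and $m(s,t)=3$. The word $(s,t,s,t,s)$ represents $t$ and so is not geodesic. Taking $i=1$ and $j=4$ in the deletion condition,
$$a_1a_2a_3a_4=stst=(st)^{-1}=ts=a_2a_3,$$
so the letters $a_1$ and $a_4$ can be deleted, leaving $(t,s,s)$; a second application (deleting the adjacent pair $s,s$) yields the geodesic $(t)$.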
The next collection of lemmas can be derived from the deletion
condition.
\begin{lemma} \label{Double}
Suppose $(W,S)$ is a Coxeter system and $A$ and $B$ are subsets
of $S$. Then for any $w\in W$ there is a unique shortest element,
$d$, of the double coset $\langle A\rangle w\langle B\rangle$. If
$\delta$ is a geodesic for $d$, $\alpha$ is an $A$-geodesic, and
$\beta$ is a $B$-geodesic, then $(\alpha, \delta)$ and $(\delta,
\beta)$ are geodesic. $\square$
\end{lemma}
\begin{lemma} \label{Kil}
Suppose $(W,S)$ is a Coxeter system, $w\in W$, $I$ and $J\subset
S$, and $d$ is the minimal length double coset representative in
$\langle I\rangle w\langle J\rangle$. Then $\langle I\rangle\cap
d\langle J\rangle d^{-1}=\langle K\rangle$ for $K=I\cap
(dJd^{-1})$ and, $d^{-1}\langle K\rangle d=\langle J\rangle \cap
(d^{-1}\langle I\rangle d)=\langle K'\rangle$ for $K'=J\cap
d^{-1}Id=d^{-1}Kd$. In particular, if $w=idj$ for $i\in \langle
I\rangle$ and $j\in \langle J\rangle$ then $\langle I\rangle \cap
w\langle J \rangle w^{-1}=i\langle K\rangle i^{-1}$ and $\langle
J\rangle \cap w^{-1}\langle I\rangle w=j^{-1}\langle K'\rangle
j$. $\square$
\end{lemma}
\begin{lemma}\label{Back}
Suppose $(W,S)$ is a Coxeter system, $A$ is a subset of $S$ and
$\alpha$ is an $S$-geodesic. If for each letter $a\in A$, the word
$(\alpha ,a)$ is not geodesic, then the group $\langle A\rangle$
is finite. $\square$
\end{lemma}
\begin{lemma}\label{Extend}
Suppose $(W,S)$ is a Coxeter system and $x\in S$. If $\alpha$ is a
geodesic in $S-\{x\}$, then the word $(\alpha, x)$ is geodesic.
$\square$
\end{lemma}
If $(W,S)$ is a Coxeter system and $w\in W$, then the deletion
condition implies that the set of letters of $S$ used to compose an
$S$-geodesic for $w$ is independent of which geodesic one
composes for $w$. We define $lett(w)_S$ to be the subset of $S$
used to compose a geodesic for $w$, or when the system is
evident we simply write $lett(w)$.
\begin{lemma}\label{L56}
Suppose $(W,S)$ is a Coxeter system, $w\in W$, $b\in S-lett(w)$,
and $bwb\in \langle lett(w)\rangle$. Then $b$ commutes with $lett
(w)$. $\square$
\end{lemma}
The next lemma is technical but critical to the main
results of the section.
\begin{lemma} \label{L54}
Suppose $(W,S)$ is a finitely generated Coxeter system and
$A\subset S$ such that $\langle A\rangle$ is infinite and there
is no non-trivial $F\subset A$ such that $\langle F\rangle$ is finite and
$A-F$ commutes with $F$. Then there is an infinite $A$-geodesic
$\alpha$, such that each letter of $A$ appears infinitely many
times in $\alpha$.
\end{lemma}
\begin{proof} The case when $\langle
A\rangle$ does not (visually) decompose as $\langle A-U\rangle\times \langle
U \rangle$ for any non-trivial $U\subset A$ follows from lemma 1.15 of \cite{MRT}. Once this
irreducible case is established, the general case follows: one can interleave geodesics from
each (infinite) factor of a maximal visual direct product
decomposition of $\langle A\rangle$. I.e. if $\langle A\rangle
=\langle A-U\rangle\times \langle U\rangle$, $(x_1,x_2,\ldots )$
and $(y_1,y_2,\ldots)$ are $U$ and $A-U$-geodesics respectively,
then the deletion condition implies $(x_1,y_1,x_2,y_2,\ldots)$ is
an $A$-geodesic.
\end{proof}
\begin{remark}\label{Split}
Observe that if $(W,S)$ is a Coxeter system and $W=\langle
F\rangle \times \langle G\rangle=\langle H\rangle \times \langle
I\rangle$ with $F\cup G=S=H\cup I$, then $W=\langle F\cup H\rangle
\times \langle G\cap I\rangle$ and $\langle F\cup H\rangle
=\langle F\rangle \times \langle H-F\rangle$. In particular, for
$A\subset S$, there is a unique largest subset $C\subset A$ such
that $\langle A\rangle =\langle A-C\rangle \times \langle
C\rangle$ and $\langle C\rangle$ is finite. Define $T_{(W,S)}(A)\equiv
C$ and $E_{(W,S)}(A)\equiv A-C$. When the system is evident we simply write $T_W(A)$ and $E_W(A)$.
\end{remark}
For a Coxeter system $(W,S)$ and $A\subset S$, let $lk_2(A, (W,S))$
({\it the 2-link of $A$} in the system $(W,S)$) be the set of all $s\in S-A$ that commute
with $A$. For consistency we define $lk_2(\emptyset ,(W,S))=S$. When the system is evident we simply write $lk_2(A)$.
In the presentation diagram $\Gamma (W,S)$, $lk_2(A)$
is the set of all vertices $s\in S$ such that $s$ is connected to
each element of $A$ by an edge labeled 2.
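As a quick illustration of $T_W$, $E_W$ and $lk_2$, suppose $S=\{a,b,c\}$ with $m(a,b)=m(a,c)=2$ and $m(b,c)=\infty$. Then $W=\langle a\rangle \times \langle b,c\rangle$ with $\langle b,c\rangle$ infinite dihedral, and
$$T_W(S)=\{a\},\qquad E_W(S)=\{b,c\},\qquad lk_2(\{b,c\})=\{a\},\qquad lk_2(\{a\})=\{b,c\}.$$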
If $G$ is a group with generating set $S$ and $u$ is an $S$-word, denote by $\bar u$ the element of $G$ represented by $u$.
\begin{lemma} \label{Fin2}
Suppose $(W,S)$ is a Coxeter system, $A\subset S$, and $r$ is an
$A$-geodesic such that each letter of $A$ appears infinitely
often in $r$. If $r$ can be partitioned as $(r_1,r_2,\ldots )$
and $w\in W$ is such that $w\bar r_iw^{-1}=s_i$ for some $s_i\in W$ with $\vert s_i\vert = \vert \bar r_i\vert$, and
$(\beta,r_i,r_{i+1},\ldots )$ and $(r_1,\ldots ,r_i,\beta ^{-1})$
are geodesic for all $i$, where $\beta$ is a geodesic for $w$,
then $w\in \langle A\cup lk_2(A)\rangle$.
\end{lemma}
\begin{proof} If $w$ is a minimum length counter-example,
then by lemma \ref{L56}, $\vert w\vert>1$. Say $(w_1,\ldots ,w_n)$
is a geodesic for $w$. For all $m$, $(w_1,\ldots ,w_n,r_1,\ldots ,
r_m,w_n)$ is not geodesic and the last $w_n$ deletes with one of
the initial $w_i$. For some $i\in \{1,\ldots ,n\}$, there are
infinitely many $m$ such that the last $w_n$ deletes with $w_i$.
Say this set of such $m$ is $\{m_1,m_2,\ldots \}$ (in ascending order). Then $w_n$
commutes with $\bar r_{m_j+1} \bar r_{m_j+2} \cdots \bar
r_{m_{j+1}}$ for all $j$. By lemma \ref{L56}, $w_n\in A\cup
lk_2(A)$. Then $w'=w_1\cdots w_{n-1}$ is shorter than $w$ and
satisfies the hypothesis of the lemma with $r$ replaced by
$r'=(r_1',r_2',\ldots )$ where $r_i'=(r_{m_i+1}, r_{m_i+2},\ldots
,r_{m_{i+1}})$. By the minimality of $w$, $w'\in \langle A\cup
lk_2(A)\rangle$, and so $w\in \langle A\cup
lk_2(A)\rangle$.
\end{proof}
The next result is analogous to classical results (see V. Deodhar \cite{Deodhar}).
\begin{lemma}\label{LFin}
Suppose $(W,S)$ is a finitely generated Coxeter system, $A$ and $B$ are subsets of
$S$, $u$ is a shortest element of
the double coset $\langle B\rangle g\langle A\rangle$, and
$g\langle A\rangle g^{-1}\subset \langle B\rangle$. Then $uAu^{-1}\subset B$ and $lett(u)\subset lk_2(E_W(A))$. In particular, $uxu^{-1}=x$ for all $x\in E_W(A)$ and $E_W(A)\subset E_W(B)$. If additionally, $g\langle A\rangle g^{-1}=\langle B\rangle$, then $uAu^{-1}=B$ and $E_W(A)=E_W(B)$.
\end{lemma}
\begin{proof} Note that $g\langle A\rangle g^{-1}=bua\langle A\rangle a^{-1}u^{-1}b^{-1}\subset \langle B\rangle$ for some $a\in \langle A\rangle$ and $b\in \langle B\rangle$. Then $u\langle A\rangle u^{-1}\subset \langle B\rangle$.
By lemma \ref{Kil}, $u\langle A\rangle u^{-1}=u\langle A\rangle u^{-1}\cap \langle B\rangle=\langle (uAu^{-1})\cap B\rangle$ and so $\langle A\rangle =\langle A\cap u^{-1}Bu\rangle$ and $A\subset u^{-1}Bu$ so that $uAu^{-1}\subset B$.
If $E_W(A)=\emptyset$ there is nothing more to prove. Otherwise, lemma \ref{L54} implies there is a geodesic $\alpha$ in the letters of $E_W(A)$, such that each letter of $E_W(A)$ appears infinitely often in $\alpha$. By lemma \ref{Fin2} (with partitioning $r_i$ of length 1), $lett(u)\subset E_W(A)\cup lk_2(E_W(A))$. By the definition of $u$, no geodesic for $u$ can end in a letter of $A$ and so $lett(u)\subset lk_2(E_W(A))$. Then $E_W(A)\subset B$ so $E_W(A)\subset E_W(B)$.
Now assume $g\langle A\rangle g^{-1}=\langle B\rangle$. Then as $u^{-1}$ is the shortest element of the double coset $\langle A\rangle g^{-1}\langle B\rangle$, we have $u^{-1}Bu\subset A$ so $uAu^{-1}=B$, and we have $E_W(B)\subset E_W(A)$ so $E_W(A)=E_W(B)$.
\end{proof}
\begin{proposition} \label{Index}
Suppose $(W,S)$ is a Coxeter system, $B$ is an infinite subgroup
of $W$ and $A\subset S$ such that $\langle A\rangle$ has finite
index in $B$. Then $B=\langle A_0\rangle \times C$ for $A_0\subset
A$ and $C$ a finite subgroup of $\langle lk_2(A_0)\rangle$. (By
lemma \ref{Tits}, $C$ is a subgroup of a finite group $\langle
D\rangle$ such that $D\subset S-A_0$ and $D$ commutes with $A_0$.)
\end{proposition}
\begin{proof} Let $A_0\equiv E_W(A)$. By lemma \ref{L54}
there is an infinite-length $A_0$-geodesic $r$, such that each
letter in $A_0$ appears infinitely often in $r$. The group
$\langle A_0\rangle$ contains a subgroup $A'$ which is a normal
finite-index subgroup of $B$. Let $\alpha _i$ be the initial
segment of $r$ of length $i$, and $C_i$ the $B/A'$ coset
containing $\bar \alpha_i$, the element of $W$ represented by
$\alpha _i$. Let $i$ be the first integer such that $C_i=C_j$ for
infinitely many $j$. Replace $r$ by the terminal segment of $r$
that follows $\alpha _i$. Then $r$ can be partitioned into
geodesics $(r_1,r_2,\ldots )$ such that $\bar r_i\in A'$. Hence
for any $i$ and any $b\in B$, $b\bar r_i b^{-1}\in A'\subset \langle
A_0\rangle$.
It suffices to show that $B\subset \langle A_0\rangle \times \langle
lk_2(A_0)\rangle$, since then each $b\in B$ is such that $b=xy$
with $x\in \langle A_0\rangle$ and $y\in \langle
lk_2(A_0)\rangle$. As $A_0\subset B$, $y\in B$ and so $B=\langle
A_0\rangle \times (B\cap\langle lk_2(A_0)\rangle)$. (Recall
$\langle A_0\rangle$ has finite index in $B$.)
Suppose $b$ is a shortest element of $B$ such that $b\not \in
\langle A_0\rangle \times \langle lk_2(A_0)\rangle$. Let $\beta
$ be a geodesic for $b$.
\noindent {\bf Claim} The path $(\beta, r_1,r_2,\ldots )$ is
geodesic.
\noindent {\bf Proof:} Otherwise let $i$ be the first integer such
that $(\beta , \alpha _i)$ (recall $\alpha_i$ is the initial
segment of $r$ of length $i$) is not geodesic. Then $\bar \beta
\bar \alpha _i =\bar \gamma \bar \alpha _{i-1}$ where $\gamma$ is
obtained from $\beta$ by deleting some letter and $ (\gamma,
\alpha _{i-1})$ is geodesic. We have $\bar \gamma \bar \alpha
_{i-1}=b\bar \alpha _i$, and $\{b, \bar \alpha _{i-1},\bar \alpha
_i\}\subset B$, so $\bar \gamma \in B$.
We conclude the proof of this claim by showing: If $b$ is a shortest element of $B$ such that $b\not \in \langle A_0\cup lk_2(A_0)\rangle$ and $\beta$ is a geodesic for $b$, then a letter cannot be deleted from $\beta$ to give
a geodesic for an element of $B$.
\noindent Otherwise, suppose $\beta =(b_1,\ldots
,b_m)$, $\gamma =(b_1,\ldots ,b_{i-1}, b_{i+1},\ldots ,b_m)$ is
geodesic, and $\bar \gamma \in B$. By the minimality hypothesis,
$\{b_1,\ldots ,b_{i-1}, b_{i+1},\ldots b_m\}\subset A_0\cup
lk_2(A_0)$. ``Sliding'' $lk_2(A_0)$-letters of $\beta$ before
$b_i$ ``back'' and those after $b_i$ ``forward'' gives a geodesic
$(\beta _1,\beta_2,b_i,\beta _3,\beta _4)$ for $b$, with $lett
(\beta _1)\cup lett (\beta _4)\subset lk_2(A_0)$ and $lett (\beta
_2)\cup lett (\beta _3)\subset A_0$. Now, $\bar \beta _1\bar \beta
_2b_i\bar\beta _3 \bar \beta _4 \bar r_1\cdots \bar r_j \bar
\beta_ 4^{-1} \bar \beta _3^{-1}b_i\bar \beta_2 ^{-1}\bar \beta
_1^{-1}\in A' \subset \langle A_0\rangle$, for each $j$. This implies
$b_i\bar\beta _3 \bar r_1\cdots \bar r_j \bar \beta _3^{-1}b_i\in
\langle A_0\rangle$. For large $j$, $lett (\bar\beta _3 \bar
r_1\cdots \bar r_j \bar \beta _3^{-1})=A_0$. By lemma \ref{L56},
$b_i\in A_0\cup lk_2(A_0)$, and so $b\in \langle A_0\cup lk_2(A_0)\rangle$. This is contrary to our assumption and the claim is proved. $\square$
The same proof shows $(\beta, r_k,r_{k+1},\ldots )$ is geodesic
for all $k$.
Let $\delta_i$ be a geodesic for $b\bar r_ib^{-1}\in \langle
A_0\rangle$. Next we show $\vert \delta _i\vert =\vert r_i\vert$.
As $(\beta, r_i)$ is geodesic and $b\bar r_i=\bar
\delta_ib$, $\vert \delta _i\vert \geq \vert r_i\vert$. If
$\vert \delta _i\vert > \vert r_i\vert$ then $(\delta _i,\beta
)$ is not geodesic. Say $\delta _i=(x_1,\ldots ,x_k)$ with each $x_l\in A_0$. Let $j$ be the largest integer such that $(x_j,\ldots , x_k, b_1,\ldots ,b_m)$ is not geodesic. Then $x_j$ deletes with some $b_l$ and $(x_{j+1},\ldots ,x_k,b_1,\ldots ,b_{l-1},b_{l+1},\ldots ,b_m)$ is geodesic. As
$$x_{j+1} \cdots x_kb_1\cdots b_{l-1}b_{l+1}\cdots b_m=x_j\cdots x_kb\in B,$$
the word $(b_1,\ldots ,b_{l-1},b_{l+1},\ldots ,b_m)$ is a geodesic for an element of $B$. This is impossible by the closing argument of our
claim.
Since $(\beta , r_1,\ldots ,r_i)$ is geodesic for all $i$, so is
$(\delta _1,\ldots ,\delta _i,\beta)$. Since
$$(r_1,\ldots ,r_i,\beta ^{-1})^{-1}=(\beta, r_i^{-1},\ldots ,r_1^{-1})$$
the claim shows $(r_1,\ldots ,r_i,\beta
^{-1})$ is geodesic for all $i$. The proposition now follows directly from lemma \ref{Fin2}.
\end{proof}
\section{Minimal Splittings}
Recall that a subgroup $A$ of $W$ is a {\it minimal splitting
subgroup} of $W$ if $W$ splits non-trivially over $A$, and there
is no subgroup $B$ of $W$ such that $W$ splits non-trivially over
$B$, and $B\cap A$ has infinite index in $A$ and finite index in
$B$.
For a Coxeter system $(W,S)$ we defined $M(W)$ to be the
collection of minimal splitting subgroups of $W$.
Observe that if $W$ has more than one end, then each member of
$M(W)$ is a finite group.
Define $K(W,S)$ to be the set of all subgroups of $W$ of the form $\langle A\rangle\times M$ for
$A\subset S$, and $M$ a subgroup of a finite special subgroup of
$\langle lk_2(A)\rangle$ (including when $\langle A
\rangle$ and/or $M$ is trivial). If $W$ is finitely generated, then
$K(W,S)$ is finite.
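For instance, in the system with $S=\{a,b,c\}$, $m(a,b)=m(a,c)=2$ and $m(b,c)=\infty$, the group $\langle b,c\rangle \times \langle a\rangle$ belongs to $K(W,S)$ since $a\in lk_2(\{b,c\})$ and $\langle a\rangle$ is finite, as do $\langle b\rangle =\langle b\rangle \times \{1\}$ and the trivial group.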
\begin{lemma} \label{L16N}
Suppose $(W,S)$ is a finitely
generated Coxeter system and $\Lambda$ is a non-trivial reduced
graph of groups decomposition of $W$ such that each edge group of
$\Lambda$ is in $M(W)$. If $\Psi$ is a reduced visual graph of groups
decomposition for $W$ such that each edge group of $\Psi$ is conjugate to a subgroup of an edge group of $\Lambda$, then each edge group of $\Psi$ is in
$M(W)$. $\square$
\end{lemma}
\begin{lemma}\label{Minform}
Suppose $(W,S)$ is a finitely generated Coxeter system and $G$ is a group in $M(W)$. Then $G$ is conjugate to a group in $K(W,S)$.
\end{lemma}
\begin{proof}
By theorem \ref{MT1}, there is $E\subset S$ and $w\in W$ such that $W$ splits non-trivially over $\langle E\rangle$ and $w\langle E\rangle w^{-1}$ is a subgroup of $G$. By the minimality of $G$, $\langle E\rangle$ has finite index in $w^{-1}Gw$ and the lemma follows from proposition \ref{Index}.
\end{proof}
\begin{example}
Consider the Coxeter system $(W,S)$ with $S=\{a,b,c,d,x,y\}$,
$m(u,v)=2$ if $u\in \{a,c,d\}$ and $v\in \{x,y\}$,
$m(a,b)=m(b,c)=2$, $m(c,d)=3$, $m(x,b)=m(y,b)=3$ and $m(x,y)=m(a,c)=m(a,d)=m(b,d)=\infty$. The group $W$ is 1-ended since no subset of $S$ that generates a finite group separates the presentation
diagram $\Gamma (W,S)$. The group $\langle x,c,y\rangle$ is a member of $M(W)$, since it is 2-ended and $\{x,c,y\}$ separates $\Gamma$.
The set $\{x,y,b\}$ separates $\Gamma$, but $\langle x,b,y\rangle\not\in M(W)$
since $\langle x,y\rangle$ has finite index in
$\langle x,c,y\rangle$ and infinite index in
$\langle x,b,y\rangle$.
Note that no subset of $\{x,b,y\}$ generates a group in $M(W)$.
The element $cd$ conjugates $\{x,c,y\}$
to $\{x,d,y\}$. So, $\langle x,d,y\rangle \in M(W)$. Hence a
visual subgroup in $M(W)$ need not separate $\Gamma (W,S)$.
\end{example}
\begin{proposition} \label{L24N}
Suppose $(W,S)$ is a finitely generated Coxeter system and
$W=A\ast _CB$ is a non-trivial splitting of $W$. Then there
exists $D\subset S$ and $w\in W$ such that $\langle D\rangle \in
M(W)$, $D$ separates $\Gamma( W,S)$ and $w\langle E_W(D)\rangle w^{-1}\subset C$ (so $w\langle D\rangle
w^{-1}\cap C$ has finite index in $w\langle D\rangle w^{-1}$).
Furthermore, if $C\in M(W)$ then $w\langle
E_W(D)\rangle w^{-1}$ has finite index in $C$.
\end{proposition}
\begin{proof}
The second part of the proposition follows trivially from the definition of
$M(W)$ and theorem \ref{MT1}. Let $\Psi_1$ be a reduced visual
graph of groups decomposition for $A\ast _CB$. Each edge group of
$\Psi_1$ is a subgroup of a conjugate of $C$. Say $D_1\subset S$
and $\langle D_1\rangle$ is an edge group of $\Psi_1$. Then $W$
splits non-trivially as $\langle E_1\rangle \ast _{\langle
D_1\rangle}\langle F_1\rangle$, where $E_1\cup F_1=S$ and
$E_1\cap F_1=D_1$. If $\langle D_1\rangle$ is not in $M(W)$, there
exists $C_1$ a subgroup of $W$, such that $W$ splits
non-trivially as $A_1\ast _{C_1}B_1$ and such that $C_1\cap
\langle D_1\rangle$ has infinite index in $\langle D_1\rangle$
and finite index in $C_1$. Let $\Psi_2$ be a reduced visual
decomposition for $A_1\ast _{C_1}B_1$, and $D_2\subset S$ such
that $\langle D_2\rangle$ is an edge group of $\Psi _2$. Then a
conjugate of $\langle D_2\rangle$ is a subgroup of $C_1$, and
$W=\langle E_2\rangle \ast _{\langle D_2\rangle}\langle
F_2\rangle$, where $E_2\cup F_2=S$ and $E_2\cap F_2=D_2$. For
$i\in \{1,2\}$, $\langle D_i\rangle =\langle U_i\rangle \times
\langle V_i\rangle$ where $U_i=E_W(D_i)$ and $V_i=T_W(D_i)$ (so by
remark \ref{Split}, $U_i\cup V_i=D_i$ and $V_i$ is the (unique)
largest such subset of $D_i$ such that $\langle V_i\rangle$ is
finite).
It suffices to show that $U_2$ is a proper subset of $U_1$.
Choose $g\in W$ such that $g\langle D_2\rangle g^{-1}\subset C_1$. Then
by lemma \ref{Kil}, $g\langle D_2\rangle g^{-1}\cap \langle
D_1\rangle =d\langle K\rangle d^{-1}$ for $d\in \langle
D_1\rangle$ and $K=D_1\cap mD_2m^{-1}$ where $m$ is the minimal
length double coset representative of $\langle D_1\rangle
g\langle D_2\rangle$. Write $\langle K\rangle= \langle U_3\rangle
\times \langle V_3\rangle $ with $U_3=E_W(K)$ and $V_3=T_W(K)$. As $K\subset D_1$, $E_W(K)\subset E_W(D_1)$, so $U_3\subset U_1$. As $m^{-1}Km\subset D_2$, lemma \ref{LFin} implies $E_W(K)\subset E_W(D_2)$ so $U_3\subset U_2$. Hence $U_3\subset U_1\cap U_2$.
Since $C_1\cap\langle D_1\rangle$ has infinite index in $\langle D_1\rangle $, $d\langle
K\rangle d^{-1}$ has infinite index in $\langle D_1\rangle$. As $d\in \langle D_1\rangle$, $\langle K\rangle$ has infinite index in $\langle D_1\rangle$.
Hence $U_3$ is a proper subset of $U_1$.
Recall that $g\langle D_2\rangle g^{-1}\subset C_1$ and $C_1\cap \langle
D_1\rangle$ has finite index in $C_1$ so $d\langle K\rangle
d^{-1}=g\langle D_2\rangle g^{-1}\cap \langle D_1\rangle$ has
finite index in $g\langle D_2\rangle g^{-1}$ and $g^{-1}d\langle U_3\rangle
d^{-1}g$ has finite index in $\langle D_2\rangle$. Thus, for $u$ the minimal
length double coset representative of $\langle D_2\rangle
g^{-1}d\langle U_3\rangle$, $u\langle U_3\rangle u^{-1}$ has finite index in $\langle
D_2\rangle$.
Since $E_W(U_3)=U_3$, lemma \ref{LFin} implies $U_3=uU_3u^{-1}\subset D_2$.
Hence $\langle U_3\rangle$ has finite index in $\langle
U_2\rangle$. By proposition \ref{Index}, $\langle
U_2\rangle=\langle U_3\rangle \times C$ for $C$ a finite subgroup
of $\langle lk_2(U_3)\rangle$. If $s\in U_2-U_3$ then as
$U_2\subset U_3\cup lk_2(U_3)$, $s\in lk_2(U_3)$. Hence $\langle
U_2\rangle =\langle U_3\rangle \times \langle U_2-U_3\rangle$. As
$\langle U_3\rangle$ has finite index in $\langle U_2\rangle$,
$\langle U_2-U_3\rangle$ is finite. By the definition of $U_2$,
$U_2=U_3$ and so $U_2$ is a proper subset of $U_1$.
\end{proof}
We can now easily recognize separating special subgroups in
$M(W)$.
\begin{corollary}\label{C7}
Suppose $(W,S)$ is a Coxeter system and $C\subset S$ separates
$\Gamma (W,S)$. Then $\langle C\rangle \in M(W)$ iff there is no
$D\subset S$ such that $D$ separates $\Gamma (W,S)$ and $E_W(D)$ is a
proper subset of $E_W(C)$.
\end{corollary}
\begin{proof}
If $\langle C\rangle \in M(W)$ and $D\subset S$ is such that $D$ separates $\Gamma$ and $E_W(D)$ is a proper subset of $E_W(C)$, then by proposition
\ref{Index}, $\langle E_W(D)\rangle$ has infinite index in
$\langle E_W(C)\rangle$. But then $\langle D\rangle \cap \langle C\rangle$ has finite index in $\langle D\rangle$ and infinite index in $\langle C\rangle$, contrary to the assumption $\langle C\rangle \in M(W)$.
If $\langle C\rangle \not\in M(W)$, then by proposition \ref{L24N}, there is $D\subset S$ and $w\in W$ such that $\langle D\rangle \in M(W)$, $D$ separates $\Gamma$, and $w\langle E_W(D)\rangle w^{-1}\subset \langle C\rangle$. By lemma \ref {LFin}, $E_W(D)\subset E_W(C)$. Since $\langle C\rangle\not \in M(W)$, $ E_W(D)$ is a proper subset of $E_W(C)$.
\end{proof}
\begin{theorem} \label{T1N}
Suppose $(W,S)$ is a finitely generated Coxeter system,
$\Lambda$ is a reduced graph of groups decomposition for $W$ with
each edge group a minimal splitting subgroup of $W$, and $\Psi$ is
a reduced graph of groups decomposition of $W$ such that each vertex group of $\Psi$ is conjugate to a subgroup of a vertex group of $\Lambda$ and for each edge $E$ of $\Lambda$, there is an edge $D$ of $\Psi$ such that $\Psi(D)$ is conjugate to a subgroup of $\Lambda(E)$. (E.g. if $\Psi$ is a visual decomposition from $\Lambda$.)
If $A$ is a vertex of
$\Lambda$, and $\Phi _A$ is the reduced decomposition of $\Lambda
(A)$ given by the action of $\Lambda (A)$ on the Bass-Serre tree
for $\Psi$, then
1) For each edge $E$ of $\Lambda$ adjacent to $A$, $\Lambda
(E)\subset a\Phi_A(K)a^{-1}$, for some $a\in \Lambda (A)$ and
some vertex $K$ of $\Phi_A$. In particular, the decomposition
$\Phi_A$ is compatible with $\Lambda$.
2) Each vertex group of $\Phi_A$ is conjugate to a vertex group of
$\Psi$ (and so is Coxeter), or is $\Lambda (A)$-conjugate to
$\Lambda (E)$ for some edge $E$ adjacent to $A$.
3) If each edge group of $\Psi$ is in $M(W)$, then each edge group of $\Phi_A$ is a minimal splitting subgroup of $W$.
\end{theorem}
\begin{proof}
Suppose $E$ is an edge of $\Lambda$ adjacent to $A$.
By hypothesis, there is an edge $D$ of $\Psi$ and $w\in W$
such that $w\Psi(D)w^{-1}\subset \Lambda (E)$. Since $\Lambda (E)$
is minimal, $\Psi(D)$ has finite index in $w^{-1}\Lambda(E)w$ and
so corollary 4.8 of \cite{DicksDunwoody} implies $\Lambda (E)$
stabilizes a vertex of $T_{\Psi}$, the Bass-Serre tree for
$\Psi$. Thus $\Lambda (E)$ is a subgroup of $a\Phi_A(K)a^{-1}$,
for some vertex $K$ of $\Phi _A$, and some $a\in \Lambda(A)$. Part
1) is proved.
By theorem \ref{artificial}, each vertex group of $\Phi_A$ is either
conjugate to a vertex group of $\Psi$ or $\Lambda(A)$-conjugate to
a subgroup of an edge group $\Lambda (E)$, for some edge $E$ of
$\Lambda$ adjacent to $A$. Suppose $Q$ is a vertex of $\Phi_A$ and
$a_1\Phi_A(Q)a_1^{-1}\subset \Lambda (E)$ for some $a_1\in
\Lambda (A)$. By part 1), $\Lambda (E)\subset
a_2\Phi_A(K)a_2^{-1}$, for some $a_2\in \Lambda(A)$ and $K$ a
vertex of $\Phi_A$. Thus, $a_1\Phi_A(Q)a_1^{-1}\subset \Lambda
(E)\subset a_2\Phi_A(K)a_2^{-1}$. Lemma \ref{Sub} implies $Q=K$ and $a_2^{-1}a_1\in \Phi_A(Q)$, so
$\Phi_A(Q)=a_2^{-1}\Lambda (E)a_2$ and part 2) is proved.
By part 1) $W$ splits non-trivially over each edge group of $\Phi_A$ and part 3) follows.
\end{proof}
\begin{proposition} \label{P2N}
Suppose $(W,S)$ is a finitely generated Coxeter system, $\Lambda$ is a reduced graph of groups decomposition of $W$ and $E$ is an edge of $\Lambda$ such that $\Lambda (E)$ is conjugate to a group in $K(W,S)$. Then there is $Q\subset S$ such that a conjugate of $\langle Q\rangle$ is a subgroup of a vertex group of $\Lambda$ and a conjugate of $\Lambda (E)$ has finite index in $\langle Q\rangle$. \end{proposition}
\begin{proof} The group $\Lambda (E)$ is conjugate to $\langle B\rangle \times F$ for $B\subset S$ and $F\subset \langle D\rangle$ where $D\subset lk_2(B)$ and $\langle D\rangle$ is finite. Let $T_{\Lambda}$ be the Bass-Serre tree for $\Lambda$ and set $B=\{b_1,\ldots ,b_n\}$. It suffices to show that $\langle B\cup D\rangle$ stabilizes a vertex of $T_{\Lambda}$. Otherwise, let $i\in \{0,1,\ldots , n-1\}$ be as large as possible so that $\langle D\cup \{b_1,\ldots ,b_i\}\rangle$ stabilizes a vertex of $T_{\Lambda}$. As $\langle D\cup \{b_{i+1}\}\rangle$ is finite, it stabilizes some vertex $V_1$ of $T_{\Lambda}$. The group $\langle B\rangle$ stabilizes a vertex $V_2$ of $T_{\Lambda}$ and $\langle D\cup \{b_1,\ldots ,b_{i}\}\rangle$ stabilizes a vertex $V_3$ of $T_{\Lambda}$. Since $T_{\Lambda}$ is a tree, there is a vertex $V_4$ common to the three $T_{\Lambda}$-geodesics connecting pairs of vertices in $\{V_1,V_2,V_3\}$. Then $\langle D\cup \{b_1,\ldots ,b_{i+1}\}\rangle $ stabilizes $V_4$, contrary to the maximality of $i$. Hence $\langle D\cup B\rangle$ stabilizes a vertex of $T_{\Lambda}$.
\end{proof}
The next result combines theorem \ref{T1N} and proposition \ref{P2N} to show that any graph of groups decomposition of a Coxeter group with edge groups equal to minimal splitting subgroups of the Coxeter group is, up to ``artificial considerations'', visual.
\begin{proposition} \label{P3N}
Suppose $(W,S)$ is a finitely generated Coxeter system, $\Lambda$ is a reduced graph of groups decomposition for $W$ with each edge group a minimal splitting subgroup of $W$, and $\Psi$ is a reduced visual decomposition from $\Lambda$. If $\Phi'$ is the graph of groups obtained from $\Lambda$ by replacing each vertex $A$ of $\Lambda$ by $\Phi_A$, the graph of groups decomposition of $\Lambda(A)$ given by the action of $\Lambda(A)$ on the Bass-Serre tree for $\Psi$, and $\Phi$ is obtained by reducing $\Phi'$, then there is a bijection $\tau$, from the vertices of $\Phi$ to those of $\Psi$ so that for each vertex $V$ of $\Phi$, $\Psi(\tau (V))$ is conjugate to $\Phi(V)$.
\end{proposition}
\begin{proof} Part 1) of theorem \ref {T1N} implies the decomposition $\Phi$ is well-defined. If $Q$ is a vertex of $\Psi$ then a conjugate of $\Psi(Q)$ is a subgroup of $\Lambda(B)$ for some vertex $B$ of $\Lambda$, and corollary 7 of \cite{MTVisual} (an elementary corollary of theorem \ref{artificial}) implies this conjugate of $\Psi(Q)$ is a vertex group of $\Phi_B$. Hence each vertex group of $\Psi$ is conjugate to a vertex group of $\Phi'$. Suppose $A$ is a vertex of $\Lambda$ and $U$ is a vertex of $\Phi_A$ such that $\Phi_A(U)$ is $\Lambda (A)$-conjugate to $\Lambda (E)$ for some edge $E$ adjacent to $A$. If $\Lambda (E)$ is not conjugate to a special subgroup of $(W,S)$, then as $\Lambda (E)$ is conjugate to a group in $K(W,S)$, proposition \ref{P2N} implies there is a vertex $V$ of $\Lambda$ and a vertex group of $\Phi_V$ properly containing a conjugate of $\Lambda (E)$. Hence $\Phi_A(U)$ is eliminated by reduction when $\Phi$ is formed. If $\Lambda (E)$ is conjugate to a special subgroup of $(W,S)$, then as $\Lambda (E)$ is also conjugate to a subgroup of a vertex group of $\Psi$, either $\Lambda(E)$ is conjugate to a vertex group of $\Psi$ or $\Lambda(E)$ is eliminated by reduction when $\Phi$ is formed. Hence by part 2) of theorem \ref{T1N}, every vertex group of $\Phi$ is conjugate to a vertex group of $\Psi$. No two vertex groups of $\Psi$ are conjugate, so if $V$ is a vertex of $\Phi$, let $\tau (V)$ be the unique vertex of $\Psi$ such that $\Phi(V)$ is conjugate to $\Psi(\tau(V))$. As no two vertex groups of $\Phi$ are conjugate, $\tau$ is injective. If $Q$ is a vertex of $\Psi$, then as noted above $\Psi(Q)$ is conjugate to a vertex group of $\Phi'$ and so $\Psi(Q) \subset w\Phi(V)w^{-1}$ for some $w\in W$ and $V$ a vertex of $\Phi$. Choose $x\in W$ such that $\Phi(V)=x\Psi(\tau(V))x^{-1}$. Then $\Psi(Q)\subset wx\Psi(\tau(V))x^{-1}w^{-1}$. Lemma \ref{Sub} implies $Q=\tau(V)$ and so $\tau$ is onto.
\end{proof}
In the previous argument it is natural to wonder if a vertex group of $\Psi$ might be conjugate to a vertex group of $\Phi_A$ and to a vertex group of $\Phi_B$ for $A$ and $B$ distinct vertices of $\Lambda$. Certainly such a group would be conjugate to an edge group of $\Lambda$. The next example shows this can indeed occur.
\begin{example}
Consider the Coxeter presentation $\langle a,b,c,d : \ a^2=b^2=c^2=d^2=1\rangle$. Define $\Lambda$ to be the graph of groups decomposition $\langle a,cdc\rangle\ast_{\langle cdc\rangle}\langle b,cdc\rangle\ast \langle d\rangle$. Then $\Lambda$ has graph with a vertex $A$ and $\Lambda (A)=\langle a,cdc\rangle$, an edge $C$ with $\Lambda(C)=\langle cdc\rangle$, a vertex $B$ with $\Lambda(B)=\langle b,cdc\rangle$, an edge $E$ with $\Lambda(E)$ trivial, and a vertex $D$ with $\Lambda(D)=\langle d\rangle$.
The visual decomposition for $\Lambda$ is $\Psi=\langle a\rangle\ast \langle b\rangle\ast \langle c\rangle\ast \langle d\rangle$, a graph of groups decomposition with each vertex group isomorphic to $\mathbb Z_2$ and each edge group trivial. Now $\Phi_A$ has decomposition $\langle a\rangle\ast \langle cdc\rangle$, $\Phi_B$ has decomposition $\langle b\rangle \ast \langle cdc\rangle$ and $\Phi_D$ has decomposition $\langle d\rangle$. Observe that the $\Psi$ vertex group $\langle d\rangle$ is conjugate to a vertex group of both $\Phi_A$ and $\Phi_B$. The decomposition $\Phi$ of the previous theorem would be $\langle a\rangle\ast \langle b\rangle\ast \langle c\rangle \ast \langle cdc\rangle$.
\end{example}
\begin{lemma} \label{Kequal}
Suppose $(W,S)$ is a finitely generated Coxeter system and $C$ is a subgroup of $W$ conjugate to a group in $K(W,S)$. If $D$ is a subgroup of $W$ and $wDw^{-1}\subset C\subset D$ for some $w\in W$, then $wDw^{-1}=C=D$.
\end{lemma}
\begin{proof} Conjugating we may assume $C=\langle U\rangle\times F$, for $U\subset S$, $E_W(U)=U$ and $F$ a finite group. Let $K\subset lk_2(U)$ such that $\langle K\rangle$ is finite and $F\subset \langle K\rangle$. Now, $w\langle U\rangle w^{-1}\subset wCw^{-1}\subset wDw^{-1}\subset C\subset \langle U\cup K\rangle$. Write $w=xdy$ for $x\in \langle U\cup K\rangle$, $y\in \langle U\rangle$, and $d$ the minimal length double coset representative of $\langle U\cup K\rangle w\langle U\rangle$. Then $dCd^{-1}\subset dDd^{-1}\subset x^{-1}Cx$. By lemma \ref{LFin}, $dUd^{-1}=U$ and by the definition of $x$, $x^{-1}\langle U\rangle x=\langle U\rangle$. The index of $\langle U\rangle$ in $dCd^{-1}$ is $\vert F\vert $ and the index of $\langle U\rangle$ in $x^{-1}Cx$ is $\vert F\vert$. Hence $dCd^{-1}=dDd^{-1}=x^{-1}Cx$ and $wCw^{-1}=wDw^{-1}=C$.
\end{proof}
\begin{remark}
The argument in the first paragraph below shows that if $\Lambda$ is a reduced graph of groups decomposition of a Coxeter group $W$, $V$ is a vertex of $\Lambda$ and $\Phi$ is a reduced graph of groups decomposition of $\Lambda (V)$ compatible with $\Lambda$, then when replacing $V$ by $\Phi$ to form $\Lambda_1$, no vertex group of $\Phi$ is $W$-conjugate to a subgroup of another vertex group of $\Phi$. In particular, each edge of $\Phi$ survives reduction in $\Lambda_1$.
\end{remark}
\begin{proposition} \label{Kreduce}
Suppose $(W,S)$ is a finitely generated Coxeter system and $\Lambda$ is a reduced graph of groups decomposition of $W$ with $M(W)$ edge groups. Suppose a vertex group of $\Lambda$ splits nontrivially and compatibly as $A\ast _CB$ over an $M(W)$ group $C$. Then there is a group in $K(W,S)$ contained in a conjugate of $B$ which is not also contained in a conjugate of $A$ (and then also with $A$ and $B$ reversed).
\end{proposition}
\begin{proof} Let $V$ be the vertex group such that $\Lambda(V)$ splits as $A\ast _CB$ and let $\Lambda_1$ be the graph of groups resulting from replacing $\Lambda(V)$ by this splitting. If there is $w\in W$ such that $wBw^{-1}\subset A$, then (by considering the Bass-Serre tree for $\Lambda_1$) a $W$-conjugate of $B$ is a subgroup of $C$. Lemma \ref {Kequal} then implies $B=C$, which is nonsense. Hence no $W$-conjugate of $B$ (respectively $A$) is a subgroup of $A$ (respectively $B$). This implies that if $\Lambda_2$ is obtained by reducing $\Lambda_1$, then there is an edge $\bar C$ of $\Lambda_2$ with vertices $\bar A$ and $\bar B$, such that $\Lambda_2(\bar C)=C$, and $\Lambda_2(\bar A)$ is $\hat A$ where $\hat A$ is either $A$ or a vertex group (other than $\Lambda_1(V)$) of $\Lambda_1$ containing $A$ as a subgroup. Similarly for $\Lambda _2(\bar B)$.
If $B$ collapses across an edge of $\Lambda_1$ then $B$ is conjugate to a group in $K(W,S)$ and $B$ satisfies the conclusion of the proposition. If $B$ does not collapse across an edge of $\Lambda_1$ (so that $\hat B=B$), then let $\Phi_B$ be the reduced graph of groups decomposition of $B$ induced from the action of $B$ on $\Psi$, the visual decomposition of $W$ from $\Lambda_2$. By theorem \ref{T1N}, each vertex group of $\Phi_B $ is conjugate to a group in $K(W,S)$ and the decomposition $\Phi_B$ is compatible with $\Lambda_2$. Let $\Lambda_3$ be the graph of groups decomposition of $W$ obtained from $\Lambda_2$ by replacing the vertex for $B$ by $\Phi_B$. In $\Lambda_3$, the edge $\bar C$ connects the vertex $\bar A$ to say the $\Phi_B$-vertex $\tilde B$. If $\Lambda _3(\tilde B)$ is not conjugate to a subgroup of $A$, then $\Lambda _3(\tilde B)$ satisfies the conclusion of our proposition. Otherwise, (as before) lemma \ref{Kequal} implies $\Lambda_3(\bar C)=\Lambda_3(\tilde B)$ and we collapse $\tilde B$ across $\bar C$ to form $\Lambda _4$. Note that if $\bar C$ does collapse, then $\Phi_B$ has more than one vertex.
There is an edge of $\Lambda_4$ (with edge group some subgroup of $C$ which is also an edge group of $\Phi_B$) separating the vertex $\bar A$ from some vertex $K$ of $\Phi_B$. The group $\Lambda_4(K)$ satisfies the conclusion of the proposition, since otherwise a $W$-conjugate of $\Lambda _4(K)$ is a subgroup of $A$. But then lemma \ref{Kequal} implies $\Lambda _4(K)$ is equal to an edge group of $\Phi_B$ which is impossible.
\end{proof}
Proposition \ref{Kreduce} is the last result of this section needed to prove our main theorem. The remainder of the section is devoted to proving theorem \ref{vismin}, a minimal splitting version of the visual decomposition theorem of \cite{MTVisual}. In order to separate this part of the paper from the rest, some lemmas are listed here that could have been presented in earlier sections.
The next lemma follows directly from theorem \ref{Index}.
\begin{lemma} \label{properE}
Suppose $(W,S)$ is a finitely generated Coxeter system and
$A\subset S$. If $B$ is a proper subset of $E(A)$ then $\langle
B\rangle$ has infinite index in $\langle E(A)\rangle$. $\square$
\end{lemma}
\begin{lemma}\label{minequiv}
Suppose $(W,S)$ is a finitely generated Coxeter system, $A$ and
$B$ are subsets of $S$ such that $\langle A\rangle$ and $\langle
B\rangle$ are elements of $M(W)$. If $E(A)\subset B$ then
$E(A)=E(B)$.
\end{lemma}
\begin{proof} If $E(A)\subset B$, then the definitions of $E(A)$
and $E(B)$, imply $E(A)\subset E(B)$. As $\langle B\rangle \in
M(W)$, lemma \ref{properE} implies $E(A)$ is not a proper subset
of $E(B)$.
\end{proof}
\begin{lemma} \label{connect}
Suppose $(W,S)$ is a finitely generated Coxeter system, $C\subset
S$ is such that $\langle C\rangle \in M(W)$ and $C$ separates
$\Gamma (W,S)$. If $K\subset S$ is a component of $\Gamma -C$,
then for each $c\in E(C)$, there is an edge connecting $c$ to $K$.
\end{lemma}
\begin{proof} Otherwise, $C-\{c\}$ separates $\Gamma$. This is
impossible by lemma \ref{properE} and the fact that $\langle
C\rangle \in M(W)$.
\end{proof}
In the remainder of this section we simplify notation for visual graph of groups decompositions by labeling each vertex of such a graph by $A$, where $A\subset S$ and $\langle A\rangle$ is the vertex group. It is possible for two distinct edges of such a decomposition to have the same edge group, so we do not extend this labeling to edges.
\begin{lemma} \label{mincompat2}
Suppose $(W,S)$ is a finitely generated Coxeter system and $\Psi$
is a reduced $(W,S)$-visual graph of groups decomposition with
$M(W)$-edge groups. If $A\subset S$ is a vertex of $\Psi$, and
$M\subset S$ is such that $\langle M\rangle \in M(W)$, $M$
separates $\Gamma(W,S)$ and $E(M)\subset A$, then
1) either $E(M)=E(C)$ for some $C\subset S$ and $\langle C\rangle$ the edge group of an edge of $\Psi$ adjacent to $A$, or $M\subset A$ and $M$ separates $A$ in
$\Gamma$, and
2) for each $C\subset S$ such that $\langle C\rangle$ is the edge group of an edge of $\Psi$ adjacent to $A$, $C-M$ is
a subset of a component of $\Gamma -M$.
In particular, if $E(M)\ne E(C)$ for each $C\subset S$ such that $\langle C\rangle$ is the edge group of an edge adjacent to $A$ in $\Psi$, then $\langle A\rangle$ visually splits over
$\langle M\rangle$, compatibly with $\Psi$, such that each vertex
group of the splitting is generated by $M$ union the intersection of $A$ with a
component of $\Gamma -M$.
\end{lemma}
\begin{proof} First we show that if $M\not \subset A$, then
$E(M)=E(C)$ for some $C$ such that $\langle C\rangle$ is the edge group of an edge adjacent to $A$ in $\Psi$. If
$E(M)=\emptyset$ then $\langle M\rangle$ is finite and
$E(C)=\emptyset$ for every $C\subset S$ such that $\langle
C\rangle \in M(W)$. Hence we may assume $E(M)\ne \emptyset$. As
$E(M)\subset A$, there is $m\in M-E(M)$ such that $m\not \in A$.
Say $m\in B$ for $B\subset S$ a vertex of $\Psi$. If $E$
is the first edge of the $\Psi$-geodesic from $A$ to $B$ and $\Psi (E)=C$, then
$m\not \in C$. But in $\Gamma$, there is an edge between $m$ and
each vertex of $E(M)$. Hence $E(M)\subset C$ and lemma
\ref{minequiv} implies $E(M)=E(C)$.
To complete part 1), it suffices to show that if $E(M)\ne E(C)$
for all $C\subset S$ such that $\langle C\rangle$ is the edge group of an edge of $\Psi$ adjacent to the vertex $A$
of $\Psi$, then $M$ separates $A$ in $\Gamma$. We have shown that
$M\subset A$. Write $W=\langle D_C\rangle \ast _{\langle C\rangle}
\langle B_C\rangle$ where $C\subset S$ is such that $\langle C\rangle=\Psi(E)$ for $E$ an edge of $\Psi$ adjacent to $A$, and $B_C$ (respectively $D_C$) the union of
the $S$-generators of vertex groups for all vertices of $\Psi$ on
the side of $E$ opposite $A$ (respectively, on the same side of
$C$ as $A$). In particular, $M\subset D_C$ and $M\cap
(B_C-C)=\emptyset$. Then $B_C$ is the union of $C$ and some of the
components of $\Gamma -C$ (and $D_C$ is the union of $C$ and the
rest of the components of $\Gamma -C$). By lemma \ref{minequiv},
$E(C)\not \subset M$. Choose $c\in E(C)-M$. If $B'$ is a
component of $\Gamma -C$ and $B'\subset B_C$, then by lemma
\ref{connect}, there is an edge of $\Gamma$ connecting $c$ and
$B'$. Hence $(B_C-C)\cup (E(C)-M)\subset K_C$ for some component
$K_C$ of $\Gamma -M$. In particular, $c\in A\cap K_C$. Also note
that if $c'\in C-M$ then either $c'\in E(C)-M$ or there is an edge
of $\Gamma$ connecting $c'$ to $c$. In either case $c'\in K_C$
and $(B_C-C)\cup (C-M)\subset K_C$.
For $i\in \{1,\ldots ,n\}$, let $E_i$ be the edges of
$\Psi$ adjacent to $A$ and let $\langle C_i\rangle=\Psi(E_i)$ for $C_i\subset S$. Since $\langle A\rangle$ is a vertex group of $\Psi$, $\Gamma -A=\cup _{i=1}^n
(B_{C_i}-C_i)\subset \cup_{i=1}^n K_{C_i}$. We have argued that
there is $c_i\in A\cap K_{C_i}$ for the component $K_{C_i}$ of
$\Gamma -M$. If $K_{C_i}\ne K_{C_j}$, then $M$ separates the
points $c_i$ and $c_j$ of $A$, in $\Gamma$. If all $K_{C_i}$ are
equal (e.g. when $n=1$), then $\Gamma -K_{C_i}\subset A$. Since
$M$ separates $\Gamma$, $\Gamma \ne K_{C_i}\cup M$, so $M$
separates $c_i$ from a point of $A-(K_{C_i}\cup M)$. In any case
part 1) is proved.
Part 2): As noted above, if $E(M)\ne E(C)$, then for any
$C\subset S$ such that $\langle C\rangle$ is the edge group of an edge of $\Psi$ adjacent to $A$ we have $(B_C-C)\cup
(C-M)\subset K_C$ for $K_C$ a component of $\Gamma -M$ and $B_C$
some subset of $S$. If $E(M)=E(C)$ then $\langle C-M\rangle$ is finite, so $C-M$ is a complete
subset of $\Gamma$ and hence a subset of a component of $\Gamma
-M$.
\end{proof}
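The conclusion of the lemma is mechanical to compute: given a separating set $M$, each factor of the visual splitting is generated by $M$ together with the intersection of the vertex's generating set with a component of $\Gamma-M$. The following Python sketch illustrates this; the adjacency-dictionary encoding of the presentation diagram and the tiny example diagram are our own illustrative assumptions, not part of the paper.

```python
from collections import deque

def components_minus(gamma, M):
    """Connected components of the presentation diagram gamma
    (adjacency dict: generator -> set of neighbours) after the
    separating set M is removed."""
    remaining = set(gamma) - set(M)
    comps = []
    while remaining:
        start = remaining.pop()
        comp, queue = {start}, deque([start])
        while queue:
            v = queue.popleft()
            for w in gamma[v]:
                if w in remaining:
                    remaining.discard(w)
                    comp.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

def visual_factors(gamma, M):
    """Generating sets M union K, one per component K of gamma - M,
    of the factors of the visual splitting over <M>."""
    return [set(M) | K for K in components_minus(gamma, M)]

# Path diagram a-b-c-d; the set {b} separates it.
gamma = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(sorted(map(sorted, visual_factors(gamma, {'b'}))))
# → [['a', 'b'], ['b', 'c', 'd']]
```

Here the visual splitting over $\langle b\rangle$ is $\langle a,b\rangle \ast_{\langle b\rangle} \langle b,c,d\rangle$, matching the factors the sketch reports.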
The next result is a minimal splitting version of the visual decomposition theorem. While part 2) of the conclusion is slightly weaker than the corresponding conclusion of the visual decomposition theorem, part 3) ensures that all edge groups of a given graph of groups decomposition of a finitely generated Coxeter group are ``refined'' by minimal visual edge groups of a visual decomposition. The example following the proof of this theorem shows that part 2) cannot be strengthened.
\begin{theorem} \label{vismin}
Suppose $(W,S)$ is a finitely generated Coxeter system and
$\Lambda$ is a reduced graph of groups decomposition for $W$.
There is a reduced visual decomposition $\Psi$ of $W$ such
that
1) each vertex group of $\Psi$ is a subgroup of a conjugate of a
vertex group of $\Lambda$,
2) if $D$ is an edge of $\Psi$ then either $\Psi (D)$ is
conjugate to a subgroup of an edge group of $\Lambda$, or
$\Psi(D)$ is a minimal splitting subgroup for $W$ and a visual subgroup of finite index in
$\Psi (D)$ is conjugate to a subgroup of an edge group of
$\Lambda$.
3) for each edge $E$ of $\Lambda$ there is an edge $D$ of $\Psi$
such that $\Psi(D)$ is a minimal splitting subgroup for $W$, and a visual subgroup of finite
index of $\Psi (D)$ is conjugate to a subgroup of $\Lambda (E)$.
\end{theorem}
\begin{proof} Let $C_1$ be an edge group of $\Lambda$. By
proposition \ref{L24N} there exists $M_1\subset S$ and $w\in W$ such
that $\langle M_1\rangle \in M(W)$, $M_1$ separates $\Gamma
(W,S)$ and $w\langle M_1\rangle w^{-1}\cap C_1$ has finite index
in $w\langle M_1\rangle w^{-1}$. Then $W$ visually splits as
$\Psi_1\equiv \langle A_1\rangle \ast _{\langle M_1\rangle
}\langle B_1\rangle$ (so $A_1\cup B_1=S$, $M_1=A_1\cap B_1$, and
$A_1$ is the union of $M_1$ and some of the components of
$\Gamma-M_1$ and $B_1$ is $M_1$ union the other components of
$\Gamma -M_1$). Suppose $C_2$ is an edge group of $\Lambda$ other
than $C_1$. Then $W=K_2\ast _{C_2}L_2$ where $K_2$ and $L_2$ are
the subgroups of $W$ generated by the vertex groups of $\Lambda$
on opposite sides of $C_2$. Let $T_2$ be the Bass-Serre tree for
this splitting.
Suppose $\langle A_1\rangle$ and $\langle B_1\rangle$ stabilize
the vertices $X_1$ and $Y_1$ respectively of $T_2$. Then $X_1\ne
Y_1$, since $W$ is not a subgroup of a conjugate of $K_2$ or
$L_2$. Now, $\langle M_1\rangle$ stabilizes the $T_2$-geodesic
connecting $X_1$ and $Y_1$ and so $\langle M_1\rangle$ is a
subgroup of a conjugate of $C_2$. In this case we define
$\Psi_2\equiv \Psi_1$.
If $\langle A_1\rangle$ does not stabilize a vertex of $T_2$ then
there is a non-trivial visual decomposition $\Phi_1$ of $\langle
A_1\rangle$ from its action on $T_2$ as given by the visual
decomposition theorem. Since a conjugate of $\langle M_1\rangle
\cap w^{-1}C_1w$ has finite index in $\langle M_1\rangle$ and at
the same time stabilizes a conjugate of a vertex group of
$\Lambda$ (and hence a vertex of $T_2$), corollary 4.8 of
\cite{DicksDunwoody} implies $\langle M_1\rangle$ stabilizes a vertex
of $T_2$, and so $\Phi_1$ is visually compatible with the visual
splitting $\Psi_1=\langle A_1\rangle \ast _{\langle M_1\rangle }\langle
B_1\rangle$. If $\langle E_2\rangle$ is an edge group of $\Phi_1$,
then a conjugate of $\langle E_2\rangle $ is a subgroup of $C_2$.
By corollary \ref{C7}, there is $M_2\subset S$ such that $M_2$
separates $\Gamma(W,S)$, $\langle M_2\rangle\in M(W)$ and
$E(M_2)\subset E_2$ and so $\langle E(M_2)\rangle$ is a subgroup
of a conjugate of $C_2$. If $E(M_2)\ne E(M_1)$, then lemma
\ref{mincompat2} implies $M_2\subset A_1$ and $\langle
A_1\rangle$ visually splits over $\langle M_2\rangle$ compatibly
with the splitting $\Psi_1$. Reducing produces a visual
decomposition $\Psi_2$. Similarly if $\langle A_1\rangle$
stabilizes a vertex of $T_2$ and $\langle B_1\rangle$ does not.
Inductively, assume $C_1,\ldots ,C_n$ are distinct edge groups of
$\Lambda$, $\Psi_{n-1}$ is a reduced visual graph of
groups decomposition, each edge group of $\Psi_{n-1}$ is in
$M(W)$ and contains a visual subgroup of finite index conjugate to a
subgroup of $C_i$ for some $1\leq i\leq n-1$, and for each $i\in
\{1,2,\ldots ,n-1\}$ there is an edge group $\langle M_i\rangle$ $(M_i \subset S)$ of
$\Psi_{n-1}$ such that a visual subgroup of finite index of $\langle M_i\rangle$ is
conjugate to a subgroup of $C_i$. Write $W=K_n\ast _{C_n}L_n$ as
above, and let $T_n$ be the Bass-Serre tree for this splitting.
Either two adjacent vertex groups of $\Psi_{n-1}$ stabilize
distinct vertices of $T_n$ (in which case we define $\Psi
_n\equiv \Psi_{n-1}$) or some vertex $V_i\subset S$ of $\Psi
_{n-1}$ does not stabilize a vertex of $T_n$. In the latter case
$\langle V_i\rangle$ visually splits (as above) to give $\Psi_n$.
Hence, we obtain a reduced visual decomposition $\Psi'$
such that for each edge group $\langle M\rangle$ $(M\subset S)$ of $\Psi'$, $\langle
M\rangle$ is a group in $M(W)$, a subgroup of finite index in $\langle
M\rangle$ is conjugate to a subgroup of an edge group of
$\Lambda$, and for each edge $D$ of $\Lambda$ there is
an edge group $\langle M\rangle$ of $\Psi'$ such that $\langle
E(M)\rangle$ (a subgroup of finite index in $\langle M\rangle$)
is conjugate to a subgroup of $\Lambda (D)$.
Suppose $V\subset S$ is a vertex of $\Psi'$. Consider $\Phi_V$,
the visual decomposition of $\langle V\rangle$ from its action on
$T_{\Lambda}$, the Bass-Serre tree for $\Lambda$. If $\langle D\rangle $ $(D\subset S)$
is an edge group for an edge of $\Psi '$ adjacent to $V$, then a subgroup of finite
index in $\langle D\rangle$ stabilizes a vertex of $T_{\Lambda}$.
By corollary 4.8 of \cite{DicksDunwoody}, $\langle D\rangle$
stabilizes a vertex of $T_{\Lambda}$ and $\Phi _V$ is compatible
with $\Psi'$. Replacing each vertex $V$ of $\Psi'$ by $\Phi_V$
and reducing gives the desired decomposition of $W$.
\end{proof}
The following example exhibits why one cannot expect a stronger
version of theorem \ref{vismin} with visual decomposition $\Psi$
having only minimal edge groups, or so that all minimal edge
groups of $\Psi$ are conjugate to subgroups of edge groups of
$\Lambda$.
\begin{example}
Consider the Coxeter presentation $\langle a_1, a_2, a_3, a_4,
a_5: a_i^2=1,
(a_1a_2)^2=(a_2a_3)^2=(a_3a_4)^2=(a_4a_5)^2=(a_5a_1)^2=(a_2a_5)^2=1\rangle$
and the splitting $\Lambda=\langle a_2,a_3,a_4\rangle \ast
_{\langle a_2,a_4\rangle }\langle a_1,a_2, a_4, a_5\rangle$. The
subgroup $\langle a_2,a_5\rangle$ is the only minimal visual
splitting subgroup for this system, and it is smaller than
$\langle a_2,a_4\rangle$. Hence no subgroup of $\langle
a_2,a_4\rangle$ is a minimal splitting subgroup for our group.
The only visual decomposition for this splitting satisfying the
conclusion of theorem \ref{vismin} is: $\langle
a_1,a_2,a_5\rangle \ast _{\langle a_2,a_5\rangle}\langle
a_2,a_4,a_5\rangle \ast _{\langle a_2,a_4\rangle} \langle
a_2,a_3,a_4\rangle$.
\end{example}
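The separating subsets of the diagram in this example can be enumerated by brute force. In the sketch below (our own encoding: the presentation diagram $\Gamma$ gets an edge for each order-2 relator above), the inclusion-minimal separating subsets turn out to be $\{a_2,a_4\}$, $\{a_2,a_5\}$ and $\{a_3,a_5\}$. Of the subgroups these generate, only $\langle a_2,a_5\rangle$ is finite (the relator $(a_2a_5)^2$ holds), while the other two are 2-ended, which is why $\langle a_2,a_5\rangle$ is smaller than them in the ordering on splitting subgroups.

```python
from itertools import combinations

S = ['a1', 'a2', 'a3', 'a4', 'a5']
# One edge per order-2 relator in the presentation above.
edges = {frozenset(e) for e in
         [('a1', 'a2'), ('a2', 'a3'), ('a3', 'a4'),
          ('a4', 'a5'), ('a5', 'a1'), ('a2', 'a5')]}

def connected(vertices):
    """Is the full subdiagram on `vertices` connected?"""
    vertices = set(vertices)
    if not vertices:
        return True
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(w for w in vertices
                         if frozenset((v, w)) in edges)
    return seen == vertices

separators = [set(C) for r in range(len(S))
              for C in combinations(S, r)
              if not connected(set(S) - set(C))]
minimal = [C for C in separators
           if not any(D < C for D in separators)]
print(sorted(sorted(C) for C in minimal))
# → [['a2', 'a4'], ['a2', 'a5'], ['a3', 'a5']]
```

Being an inclusion-minimal separator of $\Gamma$ is necessary but not sufficient for generating a group in $M(W)$: the ordering of the paper compares the subgroups themselves, not the separating sets.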
\section{Accessibility}
We prove our main theorem in this section, a strong
accessibility result for splittings of Coxeter groups over groups
in $M(W)$. For a class of groups $\cal V$, we call a graph of groups decomposition of a
group {\it irreducible with respect to $\cal V$-splittings} if
for any vertex group $V$ of the decomposition, every non-trivial
splitting of $V$ over a group in $\cal V$ is not compatible with
the original graph of groups decomposition.
The following simple example describes a non-trivial compatible
splitting of a vertex group of a graph of groups decomposition
$\Lambda$, of a Coxeter group followed by a reduction to produce
a graph of groups with fewer edges than those of $\Lambda$. This
illustrates potential differences between accessibility and strong
accessibility.
\begin{example}
$$W\equiv \langle s_1,s_2:s_i^2\rangle\times \langle
s_3,s_4,s_5,s_6:s_i^2\rangle$$
First consider the splitting of $W$ as:
$$\langle s_1,s_2,s_3,s_4\rangle \ast _{\langle
s_1,s_2,s_4\rangle}\langle s_1,s_2,s_4,s_5\rangle \ast _{\langle
s_1,s_2,s_5\rangle}\langle s_1,s_2, s_5, s_6\rangle$$
The group $\langle s_1,s_2, s_4,s_5\rangle$ splits as $\langle
s_1,s_2,s_4\rangle \ast _{\langle s_1,s_2\rangle} \langle
s_1,s_2,s_5\rangle$. Replacing this group in the above splitting
with this amalgamated product and collapsing gives the following
decomposition of $W$:
$$\langle s_1,s_2,s_3,s_4\rangle \ast _{\langle
s_1,s_2\rangle}\langle s_1,s_2,s_5,s_6\rangle$$
\end{example}
\begin{proposition}\label{twoendvisualsplit}
Suppose $(W,S)$ is a finitely generated Coxeter system, $\Psi$ is
a reduced visual graph of groups decomposition of $(W,S)$, with
$M(W)$ edge groups and $V$ is a vertex of $\Psi$ such that
$\Psi (V)$ decomposes compatibly as a nontrivial amalgamated product
$A*_{C}B$ where $C$ is in $M(W)$. Then $\Psi (V)$ is a nontrivial
amalgamated product of special subgroups over an $M(W)$ special
subgroup $U$, with $U$ a subgroup of a conjugate of $C$, and such
that any special subgroup contained in a conjugate of $A$ or $B$
is a subgroup of one of the factors of this visual splitting. In
particular, the vertex group $\Psi (V)$ visually splits, compatibly with
$\Psi$, to give a finer visual decomposition of $(W,S)$.
\end{proposition}
\begin{proof}
Applying theorem \ref{MT1} to the
amalgamated product $A \ast_CB$, we get that there is a reduced
visual graph of groups decomposition $\Psi '$ of $\Psi (V)$ such that each vertex
group of $\Psi '$ is a subgroup of a conjugate of $A$ or $B$ and each edge group
a subgroup of a conjugate of $C$. Then $\Psi '$ has more than one
vertex since $A*_{C}B$ being nontrivial means $\Psi (V)$ is not a
subgroup of a conjugate of $A$ or $B$. Fix an edge of $\Psi '$,
say with edge group $U$, and collapse the other edges in $\Psi '$
to get a nontrivial visual splitting of $\Psi (V)$ over $U$ a subgroup of a
conjugate of $C$. By theorem \ref{MT1},
a special subgroup of $\Psi (V)$ contained in a conjugate of $A$ or $B$
is contained in a vertex group of $\Psi '$ and so is contained in
one of the factors of the resulting visual splitting of $\Psi (V)$
derived from partially collapsing $\Psi '$. Hence this visual
decomposition of $\Psi (V)$ is compatible with $\Psi$, giving a finer
visual decomposition of $(W,S)$. Since $C$ is in $M(W)$ and a
conjugate of $U$ is a subgroup of $C$, $U$ is in $M(W)$.
\end{proof}
A visual decomposition $\Psi$ of a Coxeter system $(W,S)$
{\it looks irreducible with respect to $M(W)$ splittings} if each
edge group of $\Psi$ is in $M(W)$ and for any subset $V$ of $S$ such that $\langle V\rangle$ is a vertex group of $\Psi$, $\langle V\rangle$ cannot be split visually, non-trivially and $\Psi$-compatibly over $\langle E\rangle\in M(W)$ for $E\subset S$, to give a finer visual decomposition of $W$. By lemma \ref{MT3}, it is
elementary to see that every finitely generated Coxeter group has
a visual decomposition that looks irreducible with respect to
$M(W)$ splittings. The following result is a direct consequence of
Proposition \ref{twoendvisualsplit}.
\begin{corollary}
A visual decomposition of a Coxeter group looks irreducible with
respect to $M(W)$ splittings if and only if it is irreducible with
respect to $M(W)$ splittings. $\square$
\end{corollary}
Hence any visual graph of groups decomposition of a Coxeter group
with $M(W)$ edge groups can be refined to a visual decomposition
that is irreducible with respect to $M(W)$ splittings.
\begin{corollary}\label{JSJproposition}
Suppose $(W,S)$ is a finitely generated Coxeter system and $W$ is
the fundamental group of a graph of groups $\Lambda$ where each
edge group is in $M(W)$. Then $W$ has an irreducible with respect
to $M(W)$ splittings visual decomposition $\Psi$ where each vertex
group of $\Psi$ is a subgroup of a conjugate of a vertex group of
$\Lambda$.
\end{corollary}
\begin{proof}
Applying theorem \ref{MT1} to $\Lambda$, we get a reduced
visual graph of groups $\Psi$ from $\Lambda$. If $\Psi$ looks
irreducible with respect to $M(W)$ splittings, then we are done.
Otherwise, some vertex group of $\Psi$ visually splits nontrivially and
compatibly over an $M(W)$ special subgroup and we replace the
vertex with this visual splitting in $\Psi$. We can repeat,
replacing some special vertex group by special vertex groups with
fewer generators, until we must reach a visual graph of groups
which looks irreducible with respect to $M(W)$ splittings.
\end{proof}
Theorem \ref{Close} describes how ``close'' a
decomposition with $M(W)$ edge groups, which is irreducible with
respect to $M(W)$ splittings, is to a visual one.
\medskip
\noindent {\bf Theorem 2} {\it Suppose $(W,S)$ is a finitely generated Coxeter system and $\Lambda$ is a reduced
graph of groups decomposition of $W$ with $M(W)$ edge groups. If
$\Lambda$ is irreducible with respect to $M(W)$ splittings, and
$\Psi$ is a reduced graph of groups decomposition such that each edge group of $\Psi$ is in $M(W)$, each vertex group of $\Psi$ is a subgroup of a conjugate of a vertex group of
$\Lambda$, and each edge group of $\Lambda$ contains a conjugate of an edge group of $\Psi$ (in particular if $\Psi$ is a reduced visual graph of groups decomposition for $(W,S)$ derived from $\Lambda$ as in the main theorem of \cite{MTVisual}), then
\begin{enumerate}
\item $\Psi$ is irreducible with respect to $M(W)$ splittings
\item There is a (unique) bijection $\alpha$ of the vertices
of $\Lambda$ to the vertices of $\Psi$ such that for each vertex
$V$ of $\Lambda$, $\Lambda(V)$ is conjugate to $\Psi(\alpha (V))$
\item When $\Psi$ is visual, each edge group of $\Lambda$ is conjugate to a visual
subgroup for $(W,S)$.
\end{enumerate}}
\begin{proof}
Consider a vertex $V$ of $\Lambda$ with vertex group
$A=\Lambda(V)$. By theorem \ref{T1N}, $\Lambda (V)$ has a graph of groups decomposition $\Phi_V$ such that $\Phi _V$ is compatible with $\Lambda$, each edge group of $\Phi_V $ is in $M(W)$ and each vertex group of $\Phi_V$ is conjugate to a vertex group of $\Psi$ or conjugate to $\Lambda(E)$ for some edge $E$ of $\Lambda$ adjacent to $V$. Since $\Lambda$ is reduced and irreducible with respect to $M(W)$ splittings, $\Phi_V$ has a single vertex and $\Lambda(V)$ is conjugate to $\Psi(V')$ for some vertex $V'$ of $\Psi$.
Since no vertex group of $\Psi$ is
contained in a conjugate of another, $V'$ is uniquely determined,
and we set $\alpha(V)=V'$. No vertex group of $\Lambda$ is conjugate to another so $\alpha$ is injective. Since each vertex group $\Psi(V')$ is
contained in a conjugate of some $\Lambda(V)$ which is in turn
conjugate to $\Psi(\alpha(V))$ we must have $V'=\alpha(V)$ and
each $V'$ is in the image of $\alpha$.
If $\Psi$ is not irreducible with respect to $M(W)$ splittings,
then it does not look irreducible with respect to $M(W)$
splittings and some vertex group $W_1$ of $\Psi$ visually splits
nontrivially and compatibly over an $M(W)$ special subgroup $U_1$.
Reducing gives a visual graph of groups decomposition $\Psi_1$ of $W$ satisfying the hypotheses on $\Psi$ in the statement of the theorem. Now $W_1$ is conjugate to a vertex group $A$
of $\Lambda$, and the above argument shows $A$ is conjugate to a vertex group of $\Psi_1$. But then $W_1$ is conjugate to a vertex group of $\Psi_1$, which is nonsense. Hence $\Psi$ is irreducible with respect to $M(W)$
splittings.
Since $\Lambda$ is a tree, we can take each edge group of
$\Lambda$ as contained in its endpoint vertex groups taken as
subgroups of $W$. Hence each edge group is simply the
intersection of its adjacent vertex groups (up to conjugation).
Since vertex groups of $\Lambda$ are conjugates of
vertex groups in $\Psi$, their intersection is conjugate to a
special subgroup (by lemma \ref{Kil}) when $\Psi$ is visual.
\end{proof}
\begin{example}\label{simex2}
Let $W$ have the Coxeter presentation:
$$\langle s_1,s_2,s_3,s_4,s_5: s_k^2, (s_1s_2)^2, (s_2s_3)^2,
(s_3s_4)^2, (s_4s_5)^2\rangle \times \langle s_6,s_7:s_k^2\rangle
$$
\noindent Then $W$ is 1-ended and has the following visual
$M(W)$-irreducible decomposition (each edge group is 2-ended):
$$\!\langle s_1,s_2,s_6,s_7\rangle \ast
_{\langle s_2,s_6,s_7\rangle}\langle s_2,s_3,s_6,s_7\rangle\ast
_{\langle s_3,s_6, s_7\rangle }\langle s_3,s_4,s_6,s_7\rangle\ast
_{\langle s_4,s_6,s_7\rangle}\langle s_4, s_5, s_6,s_7\rangle\!$$
There is an automorphism of $W$ sending $s_5$ to $s_3s_5s_3$ and
all other $s_i $ to themselves. This gives another
$M(W)$-irreducible decomposition of $W$ where the last vertex
group $\langle s_4, s_5, s_6,s_7\rangle$ of the above graph of
groups decomposition is replaced by $\langle s_4, s_3s_5s_3,
s_6,s_7\rangle$. As $s_3$ does not commute with $s_1$ we see that
in regard to part 2 of theorem \ref{Close}, a single element of
$W$ cannot be expected to conjugate each vertex group of an
arbitrary $M(W)$-irreducible decomposition to a corresponding
vertex group of a corresponding visual $M(W)$-irreducible
decomposition.
\end{example}
\noindent {\bf Theorem 1} {\it Finitely generated Coxeter groups
are strongly accessible over minimal splittings.}
\begin{proof}
Suppose $(W,S)$ is a finitely generated Coxeter system. There are
only finitely many elements of $K(W,S)$ (which includes the
trivial group). For $G$ a subgroup of $W$ let $n(G)$ be the
number of elements of $K(W,S)$ that are contained in some
conjugate of $G$ (so $1\leq n(G)\leq n(W)$). For $\Lambda$ a
finite graph of groups decomposition of $W$, let
$c(\Lambda)=\sum_{i=1}^{n(W)} 3^ic_i(\Lambda)$ where $c_i(\Lambda)$ is the count of
vertex groups $G$ of $\Lambda$ with $n(G)=i$.
If $\Lambda$ reduces to $\Lambda'$ then
clearly $c_i(\Lambda ')\leq c_i(\Lambda)$ for all $i$, and for some $i$, $c_i(\Lambda ')$ is strictly less than $c_i(\Lambda)$. Hence, $c(\Lambda ')<c(\Lambda)$.
If $\Lambda$ is reduced with $M(W)$ edge groups, and a
vertex group $G$ of $\Lambda$ splits non-trivially and compatibly as $A*_CB$ to
produce the decomposition $\Lambda'$ of $W$, then every subgroup of a conjugate of
$A$ or $B$ is a subgroup of a conjugate of $G$, but, by proposition \ref{Kreduce}, some element of $K(W,S)$ is contained in a conjugate of
$B$, and so of $G$, but not in a conjugate of $A$. Hence
$n(A)<n(G)$, and similarly $n(B)<n(G)$. This implies that
$c(\Lambda')<c(\Lambda)$ since $c_{n(G)}$ decreases by 1 in going from $\Lambda$ to $\Lambda'$ and the only other $c_i$ that change are $c_{n(A)}$ and $c_{n(B)}$, which are both increased by 1 if $n(A)\ne n(B)$ and $c_{n(A)}$ increases by 2 if $n(A)=n(B)$, but $c_{n(A)}$ and $c_{n(B)}$ have smaller coefficients than $c_{n(G)}$ in the summation $c$. More specifically, $c(\Lambda)-c(\Lambda')=3^{n(G)}-(3^{n(A)}+3^{n(B)})>0$.
If $\Lambda$ is the trivial decomposition of $W$, then $c(\Lambda) =3^{\vert K(W,S)\vert}$ and we define this number to be $C(W,S)$.
Suppose $\Lambda_1,\ldots ,\Lambda_k$ is a sequence of reduced graph of groups decompositions of $W$ with $M(W)$ edge groups, such that $\Lambda_1$ is the trivial decomposition and $\Lambda_i$ is obtained from $\Lambda_{i-1}$ by splitting a vertex group $G$ of $\Lambda_{i-1}$ non-trivially and compatibly as $A\ast _CB$, for $C\in M(W)$ and then reducing. We have shown that $c(\Lambda_i)<c(\Lambda_{i-1})$ for all $i$, and so $k\leq C(W,S)$. In particular,
$W$ is strongly accessible over $M(W)$ splittings.
\end{proof}
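The role of the base-3 weights in the counting function can be checked by brute force: proposition \ref{Kreduce} forces both new invariants below $n(G)$, and then the drop $3^{n(G)}-(3^{n(A)}+3^{n(B)})$ is always positive. A quick sketch (the function name is ours):

```python
def c_drop(nG, nA, nB):
    """Change c(Lambda) - c(Lambda') when a vertex with invariant
    nG is replaced by vertices with invariants nA and nB."""
    return 3**nG - (3**nA + 3**nB)

# Proposition Kreduce gives nA < nG and nB < nG; in that range the
# drop is always positive, so c strictly decreases at every step.
assert all(c_drop(nG, nA, nB) > 0
           for nG in range(1, 12)
           for nA in range(nG)
           for nB in range(nG))

# A base-2 weight would be too small: 2^n = 2^(n-1) + 2^(n-1),
# so the drop could be zero when nA = nB = nG - 1.
assert 2**5 - (2**4 + 2**4) == 0
print("c strictly decreases under compatible splitting")
```

The second assertion explains the choice of 3 over 2: with weight $2^i$ the count could stall when both new invariants equal $n(G)-1$, and the induction bounding the length of the sequence would fail.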
\section{Generalizations, ascending HNN extensions (and a group of Thompson), and closing questions}
Recall that if $G$ is a group and $H$ and $K$ are subgroups of $G$
then $H$ is smaller than $K$ if $H\cap K$ has finite index in $H$
and infinite index in $K$. Suppose $W$ is a finitely generated
Coxeter group and $\cal C$ is a class of subgroups of $W$ such
that for each $G\in \cal C$, any subgroup of $G$ is in $\cal C$,
e.g. the virtually abelian subgroups of $W$. Define $M(W,{\cal
C})$, {\it the minimal $\cal C$ splitting subgroups of $W$}, to
be the set of all subgroups $H$ of $W$ such that $H\in \cal C$,
$W$ splits non-trivially over $H$ and for any $K\in \cal C$ such
that $W$ splits non-trivially over $K$, $K$ is not smaller than
$H$. Then the same line of argument as used in this paper shows
that $W$ is strongly accessible over $M(W,{\cal C})$ splittings.
If $(W,S)$ is a finitely generated Coxeter system and $\Psi$ is
an $M(W)$-irreducible graph of groups decomposition of $W$ with
$M(W)$-edge groups, then by theorem \ref{Close}, each vertex group
$V$ of $\Psi$ is a Coxeter group with Coxeter system $(V,A)$
where $A$ is conjugate to a proper subset of $S$. The collection
$M(V)$ is not, in general, a subset of $M(W)$, and so $V$ has an
$M(V)$-irreducible graph of groups decomposition with $M(V)$-edge
groups. As $\vert A\vert <\vert S\vert$, there cannot be a
sequence $\Psi=\Psi_0,\Psi_1,\ldots ,\Psi_n$, with $n>\vert
S\vert$, of distinct graph of groups decompositions where $\Psi$
is $M(W)$-indecomposable with $M(W)$-edge groups, for $i>0$,
$V_i$ a vertex group of $\Psi_{i-1}$ and $\Psi_i$ is
$M(V_i)$-indecomposable with edge groups in $M(V_i)$. Such a
sequence must terminate with a special subgroup of $W$ that has
no non-trivial decomposition. By the FA results of
\cite{MTVisual}, that group must have a complete presentation
diagram.
Suppose $B$ is a group, and $\phi:A_1\to A_2$ is an isomorphism of subgroups of $B$. The group $G$ with presentation $\langle t,B:t^{-1}at=\phi(a) \hbox{ for } a\in A_1\rangle$ is called an {\it HNN extension} with {\it base group} $B$, {\it associated subgroups} $A_i$ and {\it stable letter} $t$. If $A_1=B$ then the HNN extension is {\it ascending} and if additionally, $A_2$ is a proper subgroup of $B$ (i.e. $A_2\ne B$), then the HNN extension is {\it strictly ascending}.
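A standard example (not drawn from the discussion above) may help fix these notions:

```latex
% Standard illustration of the definitions just given.
For instance, the Baumslag--Solitar group
$$BS(1,2)=\langle t,a : t^{-1}at=a^{2}\rangle$$
is an HNN extension with base group $B=\langle a\rangle\cong\mathbb Z$,
monomorphism $\phi(a)=a^{2}$, and stable letter $t$. Here $A_1=B$, so the
extension is ascending, and $A_2=\langle a^{2}\rangle$ is a proper subgroup
of $B$, so the extension is strictly ascending.
```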
The bulk of this section is motivated by an example of Richard Thompson. Thompson's group $F$ is finitely presented and is an ascending HNN extension of a group isomorphic to $F$. Hence $F$ is not ``hierarchically accessible'' over such splittings (see question 1 below). If a group $G$ splits as an ascending HNN extension, then (by definition) there is no splitting of the base group which is compatible with the first splitting, so standard accessibility is not an issue. The only question is that of minimality of such splittings.
\begin{theorem} \label{ind}
Suppose $A$ is a finitely generated group and $\phi:A\to A$ is a monomorphism. Let $G\equiv \langle t, A:t^{-1}at=\phi (a) \hbox{ for }a\in A\rangle$ be the resulting ascending HNN extension. Then:
1) If $\phi (A)$ has infinite index in $A$, this splitting of $G$ is not minimal and there is no finitely generated subgroup $B$ of $G$ such that $B$ is smaller than $A$, $G$ splits as an ascending HNN extension over $B$ and this splitting over $B$ is minimal.
2) If $\phi(A)$ has finite index in $A$, then there is no finitely generated subgroup $B$ of $G$ such that $B$ is smaller than $A$ and $G$ splits as an ascending HNN extension over $B$.\end{theorem}
\begin{proof}
First note that $G$ is also an ascending HNN extension over $\phi (A)$ (with presentation $\langle t, \phi (A): t^{-1}at=\phi (a) \hbox{ for } a\in \phi (A)\rangle$). Hence if $\phi(A)$ has infinite index in $A$, the splitting over $A$ is not minimal. Part 5) of lemma \ref{asc} implies the second assertion of part 1) of the theorem. Part 4) of lemma \ref{asc} implies part 2) of the theorem.
\end{proof}
\begin{lemma} \label{asc}
Suppose $\phi:A\to A$ and $\tau:B\to B$ are monomorphisms of finitely generated subgroups of $G$, and the corresponding ascending HNN extensions are isomorphic to $G$.
$$G\equiv \langle A,t:t^{-1}at=\phi(a) \hbox{ for all }a\in A\rangle\equiv \langle s,B:s^{-1}bs=\tau(b) \hbox { for all } b\in B\rangle$$
Suppose further that $A\cap B$ has finite index in $B$ (so $B$ is potentially smaller than $A$). Then:
1) The normal closures $N(A)$ and $N(B)$ in $G$ are equal.
2) If $\phi(A)\ne A$, then $s=at$ for some $a\in N(A)$.
3) If $\phi(A)=A$, then $\tau(B)=B$ (so $N(A)= A=B=N(B)$) and
$s=at^{\pm 1}$ for some $a\in A$.
4) If $\phi(A)$ has finite index in $A$, then $A\cap B$ has finite index in $A$ (so $B$
is not smaller than $A$) and $\tau(B)$ has finite index in $B$.
5) If $\phi(A)$ has infinite index in $A$, then $\tau(B)$ has infinite index in $B$.
\end{lemma}
\begin{proof}
Let $A_0=A$ and let $A_i=t^iA_0t^{-i}$. Then $t^{-1}A_it=A_{i-1}<A_i$. Note that $N(A)=\cup _{i=0}^\infty A_i$. Let $\pi:G\to G/N(A)\equiv \mathbb Z$ be the quotient map. Since $A\cap B$ has finite index in $B$, $\pi(B)$ is finite (and hence trivial). This implies $B<N(A)$. As $B$ is finitely generated, $B<A_m$ for some $m$. This also implies that $\langle \pi(s)\rangle=\langle \pi(t)\rangle=\mathbb Z$ and so $N(B)=N(A)$, completing 1).
Normal forms in ascending HNN extensions imply $s=t^pa_1t^{-q}$ for some $p,q\geq 0$ and $a_1\in A$. This implies $|p-q|=1$. Hence $s=at^{\pm 1}$ for $a\in A_p$.
Suppose $s=at^{-1}$. Let $r$ be the maximum of $m$ and $p$. Note that
$$N(A)=N(B)=\cup_{i=0}^{\infty}s^iBs^{-i}=\cup _{i=0}^\infty (at^{-1})^iB(ta^{-1})^i<A_r$$
(since $t^{-1}Bt<t^{-1}A_mt=A_{m-1}<A_r$ and (as $a\in A_r$) $aA_ra^{-1}=A_r$). But if $\phi(A)\ne A$, then $A_{r+1}\not < A_r$, a contradiction. Hence $s=at$, completing 2).
If $\phi(A)=A$, then $N(A)=A$. As $N(B)=A$ is finitely generated, $N(B)=\cup _{i=0}^n s^iBs^{-i}=s^nBs^{-n}$ for some $n>0$. So $N(B)=B$ completing 3).
If $\phi(A)$ has finite index in $A$ then $A$ has finite index in $A_i$ for all $i\geq 0$. Since $N(A)=N(B)$, and $A$ and $B$ are finitely generated, there are positive integers $p<p'$ and $q<q'$ such that
$$A<s^pBs^{-p}<t^qAt^{-q}<s^{p'}Bs^{-p'}<t^{q'}At^{-q'}$$
Hence $B$ has finite index in $s^{p'-p}Bs^{-(p'-p)}$. This implies $B$ has finite index in $s^iBs^{-i}$ for all $i\geq 0$ and also $\tau (B)$ has finite index in $B$. Similarly, there are positive integers $j$ and $k$ such that
$$B<t^kAt^{-k}<s^jBs^{-j}$$
Hence $B$ and $A$ (and so $A\cap B$) have finite index in $t^kAt^{-k}$. This implies $A\cap B$ has finite index in $A$ and 4) is complete.
Assume $\phi(A)$ has infinite index in $A$. As $A$ and $B$ are finitely generated subgroups of $N(A)=N(B)$, there are positive integers $k$ and $j$ such that
$$B<t^kAt^{-k}<s^jBs^{-j}$$
The group $B$ does not have finite index in $t^kAt^{-k}$ since otherwise $A\cap B$ (and then $A$) would have finite index in $t^kAt^{-k}$. This implies $B$ has infinite index in $s^jBs^{-j}$. This in turn implies $B$ has infinite index in $s^iBs^{-i}$ for all $i\geq 0$. This also implies $\tau(B)$ has infinite index in $B$.
\end{proof}
\begin{example} {\bf (Thompson's Group)}
In unpublished work, R. J. Thompson introduced a group, traditionally denoted $F$, in the context of finding infinite finitely presented simple groups. This group is now well studied in a variety of other contexts. The group $F$ has presentation
$$\langle x_1,x_2,\ldots : x_i^{-1}x_jx_i=x_{j+1} \hbox { for } i<j\rangle$$
Well-known facts about this group include: $F$ is $FP_\infty$ (\cite {BG}), in particular, $F$ is finitely presented (with generators $x_1$ and $x_2$), the commutator subgroup of $F$ is simple (\cite{Br}), and $F$ contains no free group of rank 2 (\cite{BS}). Clearly, $F$ is an ascending HNN extension of itself (with base group $\langle x_2,x_3,\ldots \rangle$ and stable letter $x_1$), called the ``standard'' splitting of $F$.
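These relations can be checked numerically in the standard piecewise-linear action of $F$ on $[0,1]$ (breakpoint data as in Cannon--Floyd--Parry; the generators are there indexed from $x_0$). The sketch below treats the product $gh$ as the composition $g\circ h$ and verifies the first relation that is not built into the recursive definition of the $x_i$:

```python
# Sketch: verify x_2^{-1} x_3 x_2 = x_4 in the standard piecewise-linear
# model of Thompson's group F acting on [0,1]; products are compositions.
import numpy as np

def x1(t):       # first generator of F (often written x_0 or A)
    return np.where(t < 0.5, t/2, np.where(t < 0.75, t - 0.25, 2*t - 1))

def x1_inv(t):
    return np.where(t < 0.25, 2*t, np.where(t < 0.5, t + 0.25, (t + 1)/2))

def x2(t):       # second generator (often written x_1 or B)
    return np.where(t < 0.5, t,
           np.where(t < 0.75, t/2 + 0.25,
           np.where(t < 0.875, t - 0.125, 2*t - 1)))

def x2_inv(t):
    return np.where(t < 0.5, t,
           np.where(t < 0.625, 2*t - 0.5,
           np.where(t < 0.75, t + 0.125, (t + 1)/2)))

def conj(g, h, h_inv):           # the element h^{-1} g h as a function
    return lambda t: h_inv(g(h(t)))

x3 = conj(x2, x1, x1_inv)        # x_3 = x_1^{-1} x_2 x_1
x4 = conj(x3, x1, x1_inv)        # x_4 = x_1^{-1} x_3 x_1
x4_alt = conj(x3, x2, x2_inv)    # the relation predicts x_2^{-1} x_3 x_2 = x_4

t = np.linspace(0, 1, 10001)
assert np.max(np.abs(x4(t) - x4_alt(t))) < 1e-12
```

The same check passes for the other relations $x_i^{-1}x_jx_i=x_{j+1}$ ($2\le i<j$) built from these two maps.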
We are interested in understanding ``minimal" splittings of $F$ and more generally minimal splittings of finitely generated groups containing no non-abelian free group. We list some elementary facts.
\medskip
\noindent {\bf Fact 1.} {\it If $G$ contains no non-abelian free group and $G$ splits as an amalgamated free product $A\ast_CB$ then $C$ is of index 2 in both $A$ and $B$ and hence is normal in $G$. If $G$ splits as an HNN-extension, then this splitting is ascending. }
\medskip
\medskip
\noindent{\bf Fact 2.}
{\it The group $F$ does not split non-trivially as $A\ast_CB$}
\begin{proof}
Otherwise $C$ is normal in $F$ and $F/C$ is isomorphic to $\mathbb Z_2\ast \mathbb Z_2$. Since the commutator subgroup $K$ of $F$ is simple, $K\cap C$ is either trivial or $K$. The intersection is not $K$ since $F/C$ is not abelian. The intersection is non-trivial, since otherwise $K$ would inject under the quotient map $F\to F/C\equiv \mathbb Z_2\ast \mathbb Z_2$.
\end{proof}
By theorem \ref{ind} and the previous facts we have:
\medskip
\noindent{\bf Fact 3.}
{\it The only non-trivial splittings of $F$ are as ascending HNN extensions $\langle t,A:t^{-1}at=\phi(a) \hbox{ for all } a\in A\rangle$. For $A$ finitely generated, this splitting is minimal iff the image of the monomorphism $\phi:A\to A$ has finite index in $A$.}
\medskip
R. Bieri, W. D. Neumann and R. Strebel have shown that if $G$ is a finitely presented group containing no free group of rank 2 and $G$ maps onto $\mathbb Z\oplus \mathbb Z$, then $G$ contains a finitely generated normal subgroup $H$ such that $G/H\cong \mathbb Z$ (see theorem D of \cite{BNS} or theorem 18.3.8 of \cite{Ge}). Hence, there is a short exact sequence $1\to H\to F\to\mathbb Z\to1$ with $H$ finitely generated.
\medskip
\noindent {\bf Fact 4.} {\it The ascending HNN extensions given by the short exact sequence $1\to H\to F\to\mathbb Z\to1$ (with $H$ finitely generated) are minimal splittings.}
\begin{theorem} Suppose $G$ is a finitely generated group containing no non-abelian free subgroup, and suppose $G$ can be written as an ascending HNN extension $\langle t, A:t^{-1}at=\phi(a) \hbox{ for all } a\in A\rangle$ and as non-trivial amalgamated products $C\ast_DE$ and $H\ast_KL$, where all component groups are finitely generated. Then:
1) $D\cap A$ does not have finite index in $A$ or $D$ (so neither $A$ nor $D$ is smaller than the other),
2) if $D\cap K$ has finite index in $K$ then $K=D$ (so neither $D$ nor $K$ is smaller than the other),
3) $C\ast_DE$ is a minimal splitting and $\langle t, A:t^{-1}at=\phi(a) \hbox{ for all } a\in A\rangle$ is minimal iff $\phi(A)$ has finite index in $A$.
\end{theorem}
\begin{proof}
Let $q:G\to G/N(A)\equiv \mathbb Z$ and $p:G\to G/D\equiv \mathbb Z_2\ast \mathbb Z_2$ be the quotient maps.
If $D\cap A$ has finite index in $D$ then $q(D)$ is finite, so $q(D)$ is trivial and $D<N(A)$. But this implies there is a homomorphism from $\mathbb Z_2\ast \mathbb Z_2$ onto $\mathbb Z$, which is nonsense.
If $D\cap A$ has finite index in $A$, then $p(A)$ is a finite subgroup of $\mathbb Z_2\ast \mathbb Z_2\equiv \langle x:x^2=1\rangle \ast \langle y:y^2=1\rangle$. Then $p(A)$ is a subgroup of a conjugate of $\langle x\rangle$ or $ \langle y\rangle$. Without loss, assume $p(A)<\langle x\rangle$. If $p(A)=1$, then $A<D$ and so $N(A)<D$. But this implies there is a homomorphism of $\mathbb Z$ onto $\mathbb Z_2\ast \mathbb Z_2$, which is nonsense. Hence $p(A)=\langle x\rangle$. But then $p(t)$ commutes with $x$, and so $p(t)\in\langle x\rangle$. This is impossible as $p(t)$ and $p(A)$ generate $\mathbb Z_2\ast \mathbb Z_2$. Part 1) is finished.
Suppose $D\cap K$ has finite index in $K$. Then as above we can assume that $p(K) <\langle x\rangle$. If $p(K)=1$, then $K<D$. If additionally $K\ne D$ then there is a homomorphism from $\mathbb Z_2\ast \mathbb Z_2$ onto $\mathbb Z_2\ast \mathbb Z_2$ with non-trivial kernel. This is impossible. Hence, either, $K=D$ or $p(K)=\langle x\rangle$. We conclude that $K=D$ since $p(K)$ is normal in $\mathbb Z_2\ast \mathbb Z_2$, and 2) is finished.
Fact 1, and part 2) implies $C\ast_DE$ is a minimal splitting. Fact 1, theorem \ref{ind} and part 1) imply $\langle t, A:t^{-1}at=\phi(a) \hbox{ for all } a\in A\rangle$ is minimal iff $\phi(A)$ has finite index in $A$.
\end{proof}
\end{example}
We conclude this paper with some questions of interest.
\begin{enumerate}
\item
For an arbitrary finitely generated Coxeter group $W$, is there a
sequence $\Lambda _1, \Lambda_2,\dots $ of graphs of groups such
that $\Lambda _1$ is a non-trivial decomposition of $W$ with edge
groups in $M(W)$, and for $i>1$, $\Lambda _i$ is a non-trivial
decomposition of a vertex group $V_i$ of $\Lambda _{i-1}$ with
$M(V_i)$-edge groups (but $\Lambda_i$ is not necessarily
compatible with $\Lambda _{i-1}$)? This sort of accessibility is called {\it hierarchical} accessibility (in analogy with 3-manifold decompositions). If no such sequence exists,
then does a last term of such a splitting sequence have no
splittings of any sort (is it FA)? Would such a last term always
be visual?
\item
Is there a JSJ theorem for Coxeter groups over minimal
splittings? In \cite{MTJSJ}, we produce a JSJ result for Coxeter
groups over virtually abelian splitting subgroups that relies on splittings over minimal virtually abelian subgroups.
\medskip
For the standard strictly ascending HNN splitting of Thompson's group $F$ (given by $\langle x_1, x_2,\ldots :x_i^{-1}x_jx_i=x_{j+1} \hbox{ for } i<j\rangle$, with base group $B\equiv\langle x_2,x_3,\ldots \rangle$ and stable letter $x_1$) there is no minimal splitting subgroup $C$ of $F$
with $C$ smaller than $B$. Hence, for finitely presented groups, there is no analogue of proposition \ref{L24N}. Still, $F$, and in fact all finitely generated groups containing no non-abelian free group, are strongly accessible over finitely generated minimal splitting subgroups.
\item
Are finitely presented groups (strongly) accessible over finitely generated
minimal splittings?
\medskip
Finitely generated groups are not accessible over finite splitting subgroups (see D2), and hence finitely generated groups are not accessible over minimal splittings.
\item Does Thompson's group split as a {\it strict} ascending HNN extension with finitely generated base $A$ and monomorphism $\phi:A\to A$ such that $\phi(A)$ has finite index in $A$?
\end{enumerate}
\section{Introduction}\label{sec:intro}
The Sunyaev-Zel'dovich Effect (SZE) is a useful tool for the study of galaxy clusters.
This distortion of the Cosmic Microwave Background (CMB) is caused by
inverse Compton scattering of CMB photons off high-energy electrons as the CMB propagates
through the hot plasma of galaxy clusters \citep{SZ1972}. The SZE signal
is essentially redshift independent, making it particularly useful for
determining the evolution of large-scale structure.
For upcoming SZE cluster surveys \citep{Ruhl2004,Fowler2004,Kaneko2006,Ho2008}, it is important to investigate
the relations between SZE flux density and other cluster properties, such as mass, temperature, and gas fraction.
By assuming that the evolution of clusters is dominated by self-similar gravitational processes,
we can predict simple power law relations between integrated Compton $Y$ and other cluster properties \citep{Kasier1986}.
Strong correlations between integrated SZE flux and the mass of clusters are also suggested by
numerical simulations \citep{dasilva2004,Motl2005,Nagai2006}.
These relations imply
the possibility of determining the masses and temperatures of clusters, and investigating cluster evolution at high redshift,
with SZE observation data alone.
\citet{Joy2001} and \citet{Bonamente2007} demonstrated an iterative approach based on the isothermal $\beta$ model to estimate the values of electron temperature $T_e$, total mass $M_t$, gas mass $M_g$, and Compton-$Y$ from SZE data alone. In this paper, we seek to derive the same cluster properties from the AMiBA SZE measurements of six clusters. Due to the limited $u$-$v$ space sampling, the AMiBA data do not provide useful constraints on the structural parameters, $\beta$ and $r_c$, in a full iterative model fitting. Instead, we adopt $\beta$ and $r_c$ from published X-ray fits and use a Markov Chain Monte-Carlo (MCMC) method to determine the cluster properties ($T_e, M_t, M_g$ and $Y$).
We also estimate these cluster properties from AMiBA data with structural constraints from X-ray data
using the non-isothermal universal temperature profile model \citep{Hallman2007}.
All quantities are integrated to spherical radius $r_{2500}$
within which the mean over-density of the cluster is $2500$~times the critical density
at the cluster's redshift.
We then investigate the scaling relations between these cluster properties derived from the SZE data,
and identify correlations between those properties
that are induced by the iterative method.
We note that
\citet{Locutus2008} investigate the scaling relations between the values of Compton $Y$ from AMiBA SZE data
and other cluster properties from X-ray and other data.
All results are in good agreement.
However, we are concerned that there may be embedded relations among the properties
derived with this method. Therefore, we also investigate the embedded scaling relations
between the SZE-derived properties themselves.
We assume the large-scale structure of the Universe to be described by
a flat $\Lambda$CDM model with $\Omega_{\rm m} = 0.26$, $\Omega_{\rm \Lambda} =
0.74$, and Hubble constant $H_{\rm 0} = 72 \ \rm km \, s^{-1} \, Mpc^{-1}$,
corresponding to the values obtained using the WMAP 5-year data
\citep{WMAP5}. All uncertainties quoted are at the 68\%
confidence level.
\section{Determination of cluster properties}\label{sec:property}
\subsection{AMiBA Observation of SZE}\label{subsec:observation}
AMiBA is a coplanar interferometer \citep{Ho2008,Mingtang2009}.
During 2007, it was operated with 7 close-packed antennas of 60 cm in diameter,
giving 21 vector baselines in $u$-$v$ space and
a synthesized resolution of $6^\prime$ \citep{Ho2008}.
The antennas are mounted on a six-meter platform \citep{Koch2008M},
which we rotate during the observations to provide better $u$-$v$ coverage.
The observations of SZE clusters, the details of how the data are transformed into calibrated visibilities,
and the estimated cluster profiles are presented in \citet{Wu2009}.
Further system checks are discussed in \citet{Lin2008} and \citet{amiba07-nishioka}.
For other scientific results deduced from AMiBA 2007 observations, please refer to
\citet{Locutus2008,Liu2008,Koch2008,Sandor2008,Keiichi2009}.
\subsection{Isothermal $\beta$ modeling}\label{subsec:betamodel}
Because the $u$-$v$ coverage of a single SZE experiment is incomplete,
we can measure neither the detailed profile of a cluster nor its central surface brightness.
Therefore we have chosen to assume an SZE cluster model and thus a surface brightness profile,
so that a corresponding template in the $u$-$v$ space can be fitted to the observed visibilities
in order to estimate the underlying model parameters.
We consider a spherical isothermal $\beta$-model \citep{betamodel},
which expresses the electron number density profile as
\begin{equation}
n_{e}(r)=n_{{\rm e}0}\left(1+\frac{r^{2}}{r^{2}_{\rm c}}\right)^{-3\beta/2},
\label{eq:bdensity}
\end{equation}
where $n_{{\rm e}0}$ is the central electron number density,
$r$ is the radius from the cluster center,
$r_{\rm c}$ is the core radius, and $\beta$ is the power-law index.
Traditionally the SZE is characterized by the Compton $y$ parameter,
which is defined as the integration along the line of sight with given direction,
\begin{equation}
y(\hat{n})\equiv\int^{\infty}_{0}\sigma_{T}n_{e}\frac{k_{\rm B}T_{\rm e}}{m_{\rm e}c^{2}}dl.
\label{eq:def_y}
\end{equation}
Compton $y$ is related to ${\Delta}I_{\rm SZE}$ as
\begin{equation}
{\Delta}I_{\rm SZE}=I_{\rm CMB}yf(x,T_{\rm e})\frac{xe^x}{e^x-1},
\label{eq:intensity}
\end{equation}
where $x{\equiv}h\nu/k_{\rm B}T_{\rm CMB}$, $I_{\rm CMB}$ is the present CMB specific intensity, and
$f(x,T_{\rm e})=\left[x\coth(x/2)-4\right]\left[1+\delta_{\rm rel}(x,T_{\rm e})\right]$ \citep[e.g., ][]{L2006}.
$\delta_{\rm rel}(x,T_{\rm e})$ is a relativistic correction \citep{rel},
which we take into account to first order in $k_{\rm B}T_{\rm e}/m_{\rm e}c^{2}$.
The relativistic correction becomes significant when the electron temperature exceeds $10~\rm keV$,
which is the regime of our cluster sample.
One can combine Equations~(\ref{eq:bdensity}-\ref{eq:intensity})
and integrate along the line of sight to obtain the SZE in the apparent radiation intensity as
\begin{equation}
{\Delta}I_{\rm SZE}=I_{\rm 0}\left(1+\theta^{2}/\theta^{2}_{\rm c}\right)^{(1-3\beta)/2}
\label{eq:betaintensity}
\end{equation}
where $\theta$ and $\theta_{\rm c}$ are the angular equivalents of $r$ and $r_{\rm c}$ respectively.
Because the clusters in our sample are not well resolved by AMiBA,
we cannot get a good estimate of $I_0$, $\beta$, and $\theta_{\rm c}$ simultaneously from our data alone.
Instead, we use the X-ray derived values for $\beta$ and $r_{\rm c}$, as summarized in Table~\ref{tab:parameters1},
and then estimate the central specific intensity $I_{\rm 0}$ \citep{Liu2008}
by fitting Equation (\ref{eq:betaintensity}) to the calibrated visibilities obtained by \citet{Wu2009}.
In the analysis we take into account
the contamination from point sources and structures in the primary CMB.
Given the $\beta$-model described above, we can derive relations between cluster parameters and estimate them using the MCMC method.
The parameters to be estimated are
the electron temperature $T_{\rm e}$, $r_{2500}$, total mass $M_{\rm t}\equiv M_{\rm t}(r_{2500})$,
gas mass $M_{\rm g}\equiv M_{\rm g}(r_{2500})$,
and the integrated Compton $Y\equiv Y(r_{2500})$.
Theoretically, $M_t(r_{2500})$ can be formulated through the hydrostatic equilibrium equation
\citep[e.g., ][]{Grego2001,Bonamente2007}:
\begin{equation}
M_{\rm t}(r_{2500})=\frac{3{\beta}k_{\rm B}T_{\rm e}}{G{\mu}m_{\rm p}}\frac{r^{3}_{2500}}{r^{2}_{\rm c}+r^{2}_{2500}},
\label{eq:mtot}
\end{equation}
where $G$ is the gravitational constant and $\mu$ is the mean mass per particle
of gas in units of the mass of proton, $m_p$.
To calculate $\mu$, we assume that
$\mu$ takes the value appropriate for clusters
with solar metallicity as given by \citet{Anders1989}.
Here we use the value $\mu=0.61$.
By combining Equation~(\ref{eq:mtot}) and the definition of $r_{2500}$,
we can obtain $r_{2500}$ as a function of $\beta$, $T_{\rm e}$, $r_{\rm c}$, and redshift $z$ \citep[e.g., ][]{Bonamente2007}
\begin{equation}
r_{2500}=\sqrt{\frac{3{\beta}k_{B}T_{e}}{G{\mu}m_{p}}\frac{1}{\frac{4}{3}\pi\rho_{c}(z)\cdot2500}-r^{2}_{c}}.
\label{eq:r2500}
\end{equation}
Then $M_{\rm g}(r_{2500})$ can be expressed,
by integrating the $n_{\rm e}(r)$ in Equation~(\ref{eq:bdensity}) as
\begin{equation}
M_{g}(r)=4{\pi}\mu_{e}n_{e0}m_{p}D^{3}_{A}\int^{r/D_{A}}_{0}\left(1+\frac{\theta^{2}}{\theta^{2}_{c}}\right)^{-3\beta/2}{\theta}^{2}d\theta,
\label{eq:mgas}
\end{equation}
where $\mu_{e}=1.17$ is the mean particle mass per electron in units of $m_{p}$, $D_{\rm A}$ is the angular diameter
distance determined by $z$, and $n_{{\rm e}0}$ is the central electron density, derived through the equation in \citet{L2006}:
\begin{equation}
n_{e0}=\frac{{\Delta}T_{0}m_{e}c^{2}\Gamma(\frac{3}{2}\beta)}{f(x,T_{e})T_{CMB}\sigma_{T}k_{B}T_{e}D_{A}\pi^{1/2}\Gamma(\frac{3}{2}\beta-\frac{1}{2})\theta_{c}},
\label{eq:cedensity}
\end{equation}
where $\Gamma$ is the gamma function, ${\Delta}T_0$ is the central SZE temperature change, and $T_{\rm CMB}$ is the present CMB temperature. ${\Delta}T_0$
is derived as ${\Delta}T_{0}/T_{\rm CMB}=(e^x-1)I_{0}/(xe^x I_{\rm CMB})$.
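For concreteness, Equations~(\ref{eq:mgas}) and (\ref{eq:cedensity}) can be evaluated numerically as below. This is a sketch only: the relativistic correction $\delta_{\rm rel}$ is neglected, a 94 GHz observing band is assumed, and the decrement ${\Delta}T_0=-1$ mK is a hypothetical value of roughly the right order for this sample, not a fitted one:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

# constants (SI)
h, k_B = 6.6261e-34, 1.3807e-23
m_e_c2 = 8.1871e-14        # electron rest energy, J
sigma_T = 6.6524e-29       # Thomson cross section, m^2
m_p = 1.6726e-27
T_cmb = 2.725
Mpc, keV = 3.0857e22, 1.6022e-16
mu_e = 1.17

def f_sz(x):
    # non-relativistic f(x,T_e) = x coth(x/2) - 4; delta_rel neglected here
    return x/np.tanh(x/2) - 4

def n_e0(dT0, kT_keV, beta, theta_c, D_A):
    """Eq. (8): central electron density in m^-3 (theta_c in rad, D_A in m)."""
    x = h*94e9/(k_B*T_cmb)                    # assumed 94 GHz band
    num = dT0*m_e_c2*Gamma(1.5*beta)
    den = (f_sz(x)*T_cmb*sigma_T*kT_keV*keV*D_A*np.sqrt(np.pi)
           * Gamma(1.5*beta - 0.5)*theta_c)
    return num/den

def M_gas(r_Mpc, ne0_val, beta, theta_c, D_A):
    """Eq. (7): gas mass inside radius r, in solar masses."""
    integrand = lambda th: (1 + th**2/theta_c**2)**(-1.5*beta) * th**2
    I, _ = quad(integrand, 0, r_Mpc*Mpc/D_A, epsabs=0, epsrel=1e-9)
    return 4*np.pi*mu_e*ne0_val*m_p*D_A**3*I / 1.989e30

# A1689-like inputs: beta = 0.686, theta_c = 48", D_A = 621 Mpc, kT_e = 10 keV
th_c, D_A = 48/206265, 621*Mpc
ne0 = n_e0(-1.0e-3, 10.0, 0.686, th_c, D_A)   # hypothetical -1 mK decrement
Mg = M_gas(0.64, ne0, 0.686, th_c, D_A)       # integrated out to ~r_2500
```

With these inputs $n_{{\rm e}0}$ comes out at the $10^{-2}\,{\rm cm^{-3}}$ level and $M_{\rm g}$ at a few $\times10^{13}\,M_\odot$, the order of magnitude of Table~\ref{tab:iso}.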
\begin{deluxetable*}{c|cc|ccc|ccc}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablecaption{Parameters for isothermal spherical $\beta$-model \label{tab:parameters1}}
\tablehead{& & &\multicolumn{3}{c}{Without 100 kpc cut\tablenotemark{b}} & \multicolumn{3}{c}{With 100 kpc cut\tablenotemark{e}} \\
Cluster & z &$D_{\rm A}$& $\beta$ & $r_{c}$ & $\Delta I_{0}$\tablenotemark{c} & $\beta$ & $r_{c}$ & $\Delta I_{0}$\tablenotemark{c} \\
& & (Mpc) & & $(")$ &($\times 10^{5}$ Jy/sr)& & $(")$ &($\times 10^{5}$ Jy/sr)
}
\startdata
A1689 & $0.183$ & $621$ &$0.609^{+0.005}_{-0.005}$ & $26.6^{+0.7}_{-0.7}$ & $-3.13\pm0.95$ & $0.686^{+0.010}_{-0.010}$ &
$48.0^{+1.5}_{-1.7}$ & $-2.36\pm0.71$ \\
A1995 & $0.322$ & $948$ & $0.770^{+0.117}_{-0.063}$ & $38.9^{+6.9}_{-4.3}$ & $-3.30\pm1.17$ & $0.923^{+0.021}_{-0.023}$ &
$50.4^{+1.4}_{-1.5}$ & $-3.19\pm1.23$ \\
A2142 & $0.091$ & $340$ & $0.740^{+0.010}_{-0.010}$ & $188.4^{+13.2}_{-13.2}$ & $-2.09\pm0.36$ & - &
- & - \\
A2163 & $0.202$ & $672$ & $0.674^{+0.011}_{-0.008}$ & $87.5^{+2.5}_{-2.0}$ & $-3.24\pm0.56$ & $0.700^{+0.07}_{-0.07}$\tablenotemark{d} &
$78.8^{+0.6}_{-0.6}$\tablenotemark{d} & $-3.64\pm0.61$ \\
A2261 & $0.224$ & $728$ & $0.516^{+0.014}_{-0.013}$ & $15.7^{+1.2}_{-1.1}$ & $-1.90\pm0.98$ & $0.628^{+0.030}_{-0.020}$ &
$29.2^{+4.8}_{-2.9}$ & $-2.59\pm0.90$ \\
A2390 & $0.232$ & $748$ & $0.600^{+0.060}_{-0.060}$\tablenotemark{a} & $28.0^{+2.8}_{-2.8}$\tablenotemark{a} & $-2.04\pm0.65$ & $0.58^{+0.058}_{-0.058}$\tablenotemark{a} &
$34.4^{+3.4}_{-3.4}$\tablenotemark{a} & $-2.85\pm0.77$\\
\enddata
\tablenotetext{a} {a $10\%$ error is assumed for $\beta$ and $r_c$ for which the original reference does not give an error estimation.}
\tablenotetext{b} {Reference - \cite{2002ApJ...581...53R} for A1689, A1995, A2163, and A2261. \cite{2003MNRAS.345.1241S,2005MNRAS.359...16L} for A2142. \cite{2000MNRAS.315..269A} for A2390.}
\tablenotetext{c} {Best-fit values for $\Delta I_{0}$ with foreground estimation from point sources and CMB \citep{Liu2008}.}
\tablenotetext{d} {$\beta$ fixed to a fiducial value $0.7$ in \cite{2006ApJ...647...25B}, a $10\%$ error is assumed.}
\tablenotetext{e} {Reference - \cite{2006ApJ...647...25B} for A1689, A1995, A2163, and A2261. \cite{2001MNRAS.324..877A} for A2390.}
\end{deluxetable*}
Finally, with the $I_0$ computed earlier and the $r_{2500}$ estimated here
we can integrate the Compton $y$ out to $r_{2500}$ to yield $Y$
\begin{equation}
Y=\frac{2{\pi}{\Delta}T_{0}}{f(x,T_{e})T_{CMB}}\int^{\theta_{2500}}_{0}\left(1+\frac{\theta^{2}}{\theta^{2}_{c}}\right)^{(1-3\beta)/2}\theta d\theta,
\label{eq:intY}
\end{equation}
where $\theta_{2500}=r_{2500}/D_{\rm A}$ indicates the projected angular size of $r_{2500}$.
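The integral in Equation~(\ref{eq:intY}) is elementary and easy to cross-check numerically. The sketch below again assumes a 94 GHz band, the non-relativistic $f(x)$, and a hypothetical ${\Delta}T_0=-1$ mK:

```python
import numpy as np
from scipy.integrate import quad

h, k_B, T_cmb = 6.6261e-34, 1.3807e-23, 2.725

def f_sz(x):
    return x/np.tanh(x/2) - 4              # delta_rel neglected

def Y_quad(dT0, beta, theta_c, theta_2500, nu=94e9):
    """Eq. (9) by direct numerical integration (angles in rad)."""
    x = h*nu/(k_B*T_cmb)
    I, _ = quad(lambda th: (1 + th**2/theta_c**2)**((1-3*beta)/2)*th,
                0, theta_2500, epsabs=0, epsrel=1e-10)
    return 2*np.pi*dT0/(f_sz(x)*T_cmb)*I

def Y_analytic(dT0, beta, theta_c, theta_2500, nu=94e9):
    """Closed form of the same integral (substitute s = theta^2/theta_c^2)."""
    x = h*nu/(k_B*T_cmb)
    p = (3 - 3*beta)/2
    I = theta_c**2/2 * (((1 + theta_2500**2/theta_c**2)**p - 1)/p)
    return 2*np.pi*dT0/(f_sz(x)*T_cmb)*I

# A1689-like geometry: theta_c = 48", theta_2500 = 214"; hypothetical dT0
args = (-1.0e-3, 0.686, 48/206265, 214/206265)
assert abs(Y_quad(*args) - Y_analytic(*args)) < 1e-6*abs(Y_analytic(*args))
```

The resulting $Y$ is a few $\times10^{-10}$, the order of magnitude listed in Table~\ref{tab:iso}.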
With the formulae as described above, for a set of $\beta$, $r_{\rm c}$, and $z$ as measured from X-ray observations and $I_{0}$ from AMiBA SZE observation, we can arbitrarily assign a `pseudo' electron temperature $T_{{\rm e}(i)}$,
and then determine the pseudo $r_{2500}(T_{{\rm e}(i)})$, $M_{\rm t}(T_{{\rm e}(i)})$, $M_{\rm g}(T_{{\rm e}(i)})$, and $Y(T_{{\rm e}(i)})$. Given $M_{\rm t}(T_{{\rm e}(i)})$ and $M_{\rm g}(T_{{\rm e}(i)})$,
we obtained the pseudo gas fraction $f_{gas}(T_{{\rm e}(i)})=M_{\rm g}(T_{{\rm e}(i)})/M_{\rm t}(T_{{\rm e}(i)})$.
Using $f_{gas}(T_{{\rm e}(i)})$ as a function of $T_{{\rm e}(i)}$ we applied the MCMC method by varying $T_{\rm e}$ and $\Delta I_{0}$ to estimate the likelihood distribution
of each cluster property. While estimating the MCMC likelihood we assume that the likelihoods
of $\Delta I_{0}$ and $f_{gas}$ are independent. The likelihood distributions of $\Delta I_{0}$ for each cluster are
taken from the fitting results of \citet{Liu2008}, while the likelihood distribution of $f_{gas}$ is
assumed to be Gaussian with mean $0.116$ and standard deviation $0.005$, which is the ensemble average over 38 clusters observed by Chandra and OVRO/BIMA \citep{L2006}.
In the process, the values of $\beta$, $r_{\rm c}$, and $z$
are taken from other observational results which are summarized in \citet{Koch2008} and Table~\ref{tab:parameters1}. We took the $\beta$ model parameters from both ROSAT and Chandra X-ray results.
The Chandra results were derived by fitting an isothermal $\beta$ model to
the X-ray data with a central 100-kpc cut. The aim of the cut-off is to exclude
the complicated non-gravitational physics (e.g, radiative
cooling and feedback mechanisms) in cluster cores.
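The iteration just described can be sketched as a minimal Metropolis-Hastings loop. This is an illustration only: the real analysis propagates $T_{\rm e}$ and $\Delta I_0$ through Equations~(\ref{eq:mtot})--(\ref{eq:cedensity}) to obtain $f_{\rm gas}$, whereas `fgas_model` below is a hypothetical monotone stand-in; the $\Delta I_0$ numbers are the A1689 values of Table~\ref{tab:parameters1}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Likelihood ingredients described in the text, A1689-like:
I0_mean, I0_sig = -2.36e5, 0.71e5          # Delta I_0 fit, Jy/sr (Table 1)
fgas_mean, fgas_sig = 0.116, 0.005         # Gaussian f_gas prior (L2006)

def fgas_model(kT, I0):
    # Stand-in for the f_gas(T_e, Delta I_0) = M_g/M_t chain of Eqs. (5)-(8);
    # this toy dependence is an assumption for illustration only.
    return 0.116*(np.abs(I0)/2.36e5)*np.sqrt(10.0/kT)

def log_like(kT, I0):
    if kT <= 0.0:
        return -np.inf
    return (-(I0 - I0_mean)**2/(2*I0_sig**2)
            - (fgas_model(kT, I0) - fgas_mean)**2/(2*fgas_sig**2))

# Metropolis-Hastings random walk over (T_e, Delta I_0)
kT, I0 = 10.0, I0_mean
lp = log_like(kT, I0)
samples = []
for _ in range(20000):
    kT_p = kT + rng.normal(0.0, 0.5)
    I0_p = I0 + rng.normal(0.0, 0.1e5)
    lp_p = log_like(kT_p, I0_p)
    if np.log(rng.random()) < lp_p - lp:   # accept/reject step
        kT, I0, lp = kT_p, I0_p, lp_p
    samples.append(kT)
kT_est = float(np.mean(samples[5000:]))    # posterior mean after burn-in
```

With the toy $f_{\rm gas}$ the chain recovers a temperature near the 10 keV scale of Table~\ref{tab:iso}; the marginal distributions of the remaining properties follow by evaluating them along the chain.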
Table ~\ref{tab:iso} summarizes our results derived assuming an isothermal $\beta$ model.
We present results obtained with isothermal $\beta$-model parameters derived both with and without the 100-kpc cut.
Figure~\ref{fig:com} compares our results with the SZE-X-ray joint results obtained from OVRO/BIMA and Chandra data
\citep{Bonamente2007,Morandi2007}.
These are in good agreement.
\begin{deluxetable*}{c|ccccc|ccccc}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablecaption{SZE derived cluster properties in isothermal $\beta$ model\label{tab:iso}}
\tablehead{
&\multicolumn{5}{c}{Without 100-kpc cut}&\multicolumn{5}{c}{With 100-kpc cut}\\
Cluster & $r_{2500}$ & $k_{\rm B}T_{\rm e}$ & $M_{\rm g}$ & $M_{\rm t}$ & $Y$ & $r_{2500}$ & $k_{\rm B}T_{\rm e}$ & $M_{\rm g}$ & $M_{\rm t}$ & $Y$ \\
& $(")$ & (keV) & $(10^{13}M_{\odot})$ & $(10^{14}M_{\odot})$ & $(10^{-10})$ &
$(")$ & (keV) & $(10^{13}M_{\odot})$ & $(10^{14}M_{\odot})$ & $(10^{-10})$
}
\startdata
A1689 &$209^{+16}_{-19}$&$10.4^{+1.6}_{-1.7}$&$4.9^{+1.2}_{-1.2}$&$4.2^{+1.1}_{-1.0}$&$3.2^{+1.5}_{-1.2}$
&$214^{+16}_{-19}$&$10.0^{+1.5}_{-1.6}$&$5.2^{+1.3}_{-1.3}$&$4.5^{+1.2}_{-1.1}$&$3.1^{+1.3}_{-1.2}$\\
A1995 &$150^{+13}_{-15}$&$12.0^{+1.9}_{-2.2}$&$7.4^{+1.9}_{-2.1}$&$6.4^{+1.7}_{-1.8}$&$1.9^{+1.0}_{-0.8}$
&$159^{+13}_{-18}$&$11.6^{+1.7}_{-2.3}$&$8.5^{+2.4}_{-2.5}$&$7.5^{+2.0}_{-2.3}$&$1.9^{+1.0}_{-0.8}$\\
A2142 &$430^{+23}_{-28}$&$11.9^{+1.1}_{-1.3}$&$6.6^{+1.1}_{-1.2}$&$5.7^{+1.0}_{-1.0}$&$16.9^{+4.4}_{-4.2}$
&$ - $&$ - $&$ - $&$ - $&$ - $\\
A2163 &$228^{+14}_{-13}$&$15.3^{+1.7}_{-1.5}$&$8.5^{+1.5}_{-1.5}$&$7.2^{+1.4}_{-1.2}$&$7.7^{+2.2}_{-1.9}$
&$237^{+13}_{-13}$&$15.4^{+1.6}_{-1.5}$&$9.5^{+1.6}_{-1.5}$&$8.1^{+1.5}_{-1.3}$&$8.0^{+2.1}_{-1.9}$\\
A2261 &$147^{+15}_{-20}$&$8.7^{+1.8}_{-2.3}$&$2.7^{+1.0}_{-1.0}$&$2.3^{+0.9}_{-0.9}$&$1.3^{+1.0}_{-0.8}$
&$172^{+16}_{-15}$&$10.0^{+1.8}_{-1.7}$&$4.6^{+1.3}_{-1.2}$&$4.0^{+1.2}_{-1.0}$&$2.2^{+1.1}_{-0.9}$\\
A2390 &$156^{+12}_{-15}$&$9.2^{+1.3}_{-1.7}$&$3.7^{+0.9}_{-1.0}$&$3.2^{+0.8}_{-0.8}$&$1.6^{+0.7}_{-0.6}$
&$174^{+13}_{-15}$&$11.9^{+1.8}_{-1.9}$&$5.2^{+1.3}_{-1.2}$&$4.4^{+1.1}_{-1.1}$&$3.1^{+1.3}_{-1.2}$\\
\enddata
\end{deluxetable*}
\begin{figure*}
\plotone{f1.eps}
\caption{\small
Comparison of $T_{\rm e}$ (upper-left), $M_{\rm g}$ (upper-right), $M_{\rm t}$ (lower-left),
and $Y$ (lower-right) of clusters derived from AMiBA SZE data based on isothermal $\beta$ model with
100-kpc cut (x-axis) and those given in literature (y-axis).
All y-axis values are from \citet{Bonamente2007},
except for
the Y values, which are from \citet{Morandi2007},
and those for A2390, which is indicated by a circle
with $T_e$ from \citet{Benson2004} and $M_{\rm t}$ calculated from the data in \citet{Benson2004}.
The dashed lines indicate $y~=~x$.
\label{fig:com}}
\end{figure*}
\subsection{UTP $\beta$ model}\label{subsec:UTP}
The simulations of \citet{Hallman2007} suggested an incompatibility between isothermal $\beta$-model
parameters fitted to X-ray surface brightness profiles and those fitted to SZE profiles. This incompatibility
also biases the estimates of $Y$ and $M_{g}$. They proposed instead a non-isothermal $\beta$ model
with a universal temperature profile (UTP). In this section we consider how the UTP $\beta$ model changes our
estimates of the cluster properties.
In the UTP $\beta$-model, the baryon density profile is the same as Equation~(\ref{eq:bdensity}), and the temperature profile can be written as \citep{Hallman2007}:
\begin{equation}
T_{e}\left(r\right)=\left\langle T\right\rangle_{500}T_{0}\left(1+\left(\frac{r}{\alpha r_{500}}\right)^{2}\right)^{-\delta},
\label{eq:tprofile}
\end{equation}
where $\left\langle T\right\rangle_{500}$ indicates the average spectral temperature inside $r_{500}$. $T_{0}$, $\alpha$, and $\delta$ are dimensionless parameters in the universal temperature profile model.
$\delta$ is the outer slope of the temperature profile, outside of a
core with electron temperature $T_{\rm e0}=\left\langle T\right\rangle_{500}T_{0}$.
This core is of size $\alpha r_{500}$.
The total mass can be obtained by solving the hydrostatic equilibrium equation \citep{Fabricant1980}:
\begin{equation}
M_{t}\left(r\right)=-\frac{k_{\rm B}r^{2}}{G\mu m_{\rm p}}\left(T_{\rm e}(r)\frac{dn_{\rm e}(r)}{dr}+n_{\rm e}(r)\frac{dT_{\rm e}(r)}{dr}\right).
\label{eq:hydrostatic}
\end{equation}
In the isothermal $\beta$-model, Equation~(\ref{eq:hydrostatic}) reduces to the form of Equation~(\ref{eq:mtot}). However,
in the UTP $\beta$-model, the derivative of $T_{\rm e}(r)$ with respect to $r$ in Equation~(\ref{eq:hydrostatic}) is no longer zero. Substituting
Equations~(\ref{eq:bdensity}) and (\ref{eq:tprofile}) into Equation~(\ref{eq:hydrostatic}), one obtains:
\begin{equation}
M_{t}\left(r\right)=\frac{k_{\rm B}T_{\rm e0}}{G\mu m_{\rm p}}\left(\frac{3\beta r^{3}}{r^{2}+r^{2}_{\rm c}}+\frac{2\delta r^{3}}{r^{2}+\alpha^{2}r^{2}_{500}}\right)\left(1+\frac{r^{2}}{\alpha^{2}r^{2}_{500}}\right)^{-\delta}.
\label{eq:mtotutp}
\end{equation}
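Equation~(\ref{eq:mtotutp}) follows from the profiles above via the hydrostatic-equilibrium mass, which carries an overall $1/n_{\rm e}(r)$ normalization. A quick symbolic spot check (a sketch; $K$ stands for $k_{\rm B}T_{\rm e0}/(G\mu m_{\rm p})$ and all symbols are dimensionless):

```python
import sympy as sp

r, rc, r500, beta, delta, alpha, K, ne0 = sp.symbols(
    'r r_c r_500 beta delta alpha K n_e0', positive=True)

# beta-model density and UTP temperature (T in units of T_e0)
n = ne0 * (1 + r**2 / rc**2) ** (-sp.Rational(3, 2) * beta)
T = (1 + r**2 / (alpha**2 * r500**2)) ** (-delta)

# Hydrostatic mass with the standard 1/n_e normalization;
# K stands for k_B*T_e0/(G*mu*m_p)
M = -K * r**2 / n * (T * sp.diff(n, r) + n * sp.diff(T, r))

# Right-hand side of Eq. (mtotutp)
claimed = K * (3*beta*r**3/(r**2 + rc**2)
               + 2*delta*r**3/(r**2 + alpha**2*r500**2)) * T

# Numerical spot check at arbitrary positive values
vals = {r: 0.7, rc: 0.2, r500: 0.9, beta: 0.7,
        delta: 0.5, alpha: 1.1, K: 1, ne0: 1}
print(abs((M - claimed).subs(vals).evalf()) < 1e-12)  # True
```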
By combining Equation~(\ref{eq:mtotutp}) and the definition of $r_{500}$, an analytical solution for $r_{500}$ can be obtained as:
\begin{equation}
r_{500}=\sqrt{\frac{(1+\alpha^{2})(3\beta A-r^{2}_{c})+2\delta A+\sqrt{D}}{2(1+\alpha^{2})}},
\label{eq:r500utp}
\end{equation}
where $A=3k_{\rm B}T_{\rm e0}(1+\alpha^{-2})^{-\delta}/(4G\mu m_{\rm p}\pi\rho_{\rm c}(z)\cdot 500)$, and
$D=[(1+\alpha^{2})(3\beta A-r^{2}_{c})+2\delta A]^{2}+8(1+\alpha^{2})\delta Ar^{2}_{\rm c}$. If $\delta \rightarrow 0$ or
$\alpha \rightarrow \infty$, either of which corresponds to the nearly isothermal case, Equation~(\ref{eq:r500utp}) reduces to a form similar to Equation~(\ref{eq:r2500}).
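As a numerical sanity check on Equation~(\ref{eq:r500utp}), one can verify that the closed-form $r_{500}$ satisfies its defining condition. A sketch with illustrative values in arbitrary units (the numbers below are assumptions, not fitted cluster data):

```python
import math

# Illustrative values in arbitrary units; K stands for k_B*T_e0/(G*mu*m_p)
K, rc, beta, delta, alpha, rho_c = 1.0, 0.2, 0.7, 0.5, 1.0, 1e-3

# Closed-form r_500, Eq. (r500utp)
A = 3 * K * (1 + alpha**-2)**(-delta) / (2000 * math.pi * rho_c)
term = (1 + alpha**2) * (3*beta*A - rc**2) + 2*delta*A
D = term**2 + 8 * (1 + alpha**2) * delta * A * rc**2
r500 = math.sqrt((term + math.sqrt(D)) / (2 * (1 + alpha**2)))

def M_t(r):
    """UTP beta-model hydrostatic mass, Eq. (mtotutp)."""
    return K * (3*beta*r**3/(r**2 + rc**2)
                + 2*delta*r**3/(r**2 + (alpha*r500)**2)) \
             * (1 + r**2/(alpha*r500)**2)**(-delta)

lhs = M_t(r500)
rhs = 500 * (4/3) * math.pi * r500**3 * rho_c
print(abs(lhs/rhs - 1) < 1e-9)  # True: the closed form satisfies the definition
```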
Using the definition of $r_{500}$, $M_{t}(r_{500})$ can be written as:
\begin{equation}
M_{t}(r_{500})=500\cdot\frac{4}{3}\pi r^{3}_{500}\rho_{\rm c}(z).
\label{eq:mt500}
\end{equation}
For an arbitrary overdensity $\Delta$, we cannot find an analytical solution for $r_{\Delta}$ (e.g., $r_{2500}$, $r_{200}$). However, with $r_{500}$ known, the numerical solution for $r_{\Delta}$ is easily found, and $M_{t}(r_{\Delta})$ then follows from Equation~(\ref{eq:mtotutp}).
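The numerical solution for $r_{\Delta}$ can be found, for example, by simple bisection on the defining condition $M_{t}(r_{\Delta})=\Delta\cdot\frac{4}{3}\pi r^{3}_{\Delta}\rho_{\rm c}(z)$. A sketch with illustrative values in arbitrary units, with $r_{500}$ assumed known:

```python
import math

# Illustrative values in arbitrary units; K stands for k_B*T_e0/(G*mu*m_p)
K, rc, beta, delta, alpha, rho_c = 1.0, 0.2, 0.7, 0.5, 1.0, 1e-3
r500 = 0.92              # assumed known, e.g. from Eq. (r500utp)

def M_t(r):
    """UTP beta-model mass profile, Eq. (mtotutp)."""
    return K * (3*beta*r**3/(r**2 + rc**2)
                + 2*delta*r**3/(r**2 + (alpha*r500)**2)) \
             * (1 + r**2/(alpha*r500)**2)**(-delta)

def r_delta(Delta, lo=1e-6, hi=10.0, tol=1e-12):
    """Bisect M_t(r) = Delta*(4/3)*pi*r^3*rho_c for r."""
    f = lambda r: M_t(r) - Delta * (4/3) * math.pi * r**3 * rho_c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(r_delta(2500) < r_delta(200))  # True: higher overdensity, smaller radius
```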
To obtain the central electron number density, we consider the formula for the Compton $y$ resulting from the UTP $\beta$-model (see the Appendix of \citealt{Hallman2007}). By setting the projected radius $b=0$ in Equation~(A10) of \citet{Hallman2007}, one obtains:
\begin{equation}
n_{e0}=\frac{{\Delta}T_{0}m_{e}c^{2}}{f(x,T_{e})T_{CMB}\sigma_{T}k_{B}\left\langle T\right\rangle_{500}T_{0}I_{\rm SZ}(0)},
\label{eq:ne0utp}
\end{equation}
where
\begin{equation}
I_{\rm SZ}(0)=\frac{\pi^{1/2}\Gamma(\frac{3}{2}\beta+\delta-\frac{1}{2})F_{2,1}\left(\delta ,\frac{1}{2};\frac{3\beta}{2}+\delta,1-\frac{r^{2}_{\rm c}}{\alpha^{2}r^{2}_{500}}\right)r_{c}}{\Gamma(\frac{3\beta}{2}+\delta)},
\label{eq;isz0}
\end{equation}
and $F_{2,1}$ is Gauss' hypergeometric function. Here we assume $f(x,T_{e})=f(x,\left\langle T\right\rangle_{500}T_{0})$, i.e., that the change of $f(x,T_{e})$ due to the variation of $T_{e}$ along the line of sight
is negligible. Indeed, by numerical calculation we found that the error in Equation~(\ref{eq:ne0utp}) caused by this assumption is less than $\sim1\%$. Because the UTP $\beta$ model assumes the same electron density profile as the isothermal
$\beta$ model, we can rewrite $M_{g}$ in the UTP model by simply applying Equation~(\ref{eq:ne0utp}) in Equation~(\ref{eq:mgas}).
Thus, the integration of the Compton $y$ profile, instead of Equation~(\ref{eq:intY}), becomes:
\begin{equation}
Y=Y_{0}\int^{\theta_{2500}}_{0}\left(1+\frac{\theta^{2}}{\theta^{2}_{c}}\right)^{(1-3\beta)/2}\left(1+\frac{\theta^{2}}{\alpha^{2}\theta^{2}_{500}}\right)^{-\delta}F(\theta)\theta d\theta,
\label{eq:intYutp}
\end{equation}
where $Y_{0}=(2{\pi}{\Delta}T_{0})/(fT_{\rm CMB}F(0))$ and
$F(\theta)=F_{2,1}(\delta ,1/2;3\beta/2+\delta,1-(r^{2}_{\rm c}+\theta^{2})/(\alpha^{2}r^{2}_{500}+\theta^{2}))$.
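The closed form in Equation~(\ref{eq;isz0}) can be checked against a direct numerical integration of the dimensionless pressure profile along the line of sight at $b=0$. A sketch with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

beta, delta, rc, alpha, r500 = 0.7, 0.5, 0.1, 1.0, 1.0

# Direct line-of-sight integral of the dimensionless pressure profile at b = 0
profile = lambda ell: (1 + ell**2/rc**2)**(-1.5*beta) * \
                      (1 + ell**2/(alpha*r500)**2)**(-delta)
direct, _ = quad(profile, -np.inf, np.inf)

# Closed form from Eq. (isz0)
c = 1.5*beta + delta
closed = (np.sqrt(np.pi) * gamma(c - 0.5) / gamma(c)
          * hyp2f1(delta, 0.5, c, 1 - rc**2/(alpha*r500)**2) * rc)

print(abs(direct/closed - 1) < 1e-5)  # True
```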
We were not able to constrain the UTP parameters $\beta$, $r_{c}$, $\delta$, and $\alpha$ significantly with our SZE data alone. However, the simulations of \citet{Hallman2007} suggested that there is no significant systematic difference
between the values of $\beta$ and $r_{c}$ resulting from fitting an isothermal $\beta$ model to mock X-ray observations
and those fitted using the UTP $\beta$ model. Therefore, we simply assume that the ratio between the isothermal value $\beta_{\rm iso}$ and the UTP value $\beta_{\rm UTP}$ is $1\pm0.1$, and that $r_{c,\rm iso}/r_{c,\rm UTP}=1\pm0.2$, for each cluster.
We also assume $\delta=0.5$, $\alpha=1$, and $T_{0}=1.3$, values taken from the averages of the results of \citet{Hallman2007}. We then fit $\Delta I_{0}$ to the AMiBA SZE data with the UTP $\beta$-model parameters above, fixing $\delta$, $\alpha$, and $T_{0}$, and treating the likelihood distributions of $\beta_{\rm UTP}$ and $r_{c,\rm UTP}$ as two independent Gaussian distributions.
Finally, we applied the MCMC method, which varies $\Delta I_{0}$, $\beta$, $r_{c}$, and $\left\langle T\right\rangle_{500}$, to estimate cluster properties with the equations derived from the UTP $\beta$ model and the data-fitting results.
Table ~\ref{tab:utp} summarizes our results derived with the UTP $\beta$ model.
Figure~\ref{fig:comutp} compares our results with the SZE-X-ray joint results obtained from OVRO/BIMA and Chandra data
\citep{Bonamente2007,Morandi2007}.
These are also in good agreement. We find that the electron temperatures derived with
the UTP $\beta$ model are in significantly better agreement with the temperatures from Chandra X-ray measurements.
\begin{deluxetable*}{c|ccccc|ccccc}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablecaption{SZE derived cluster properties in the UTP $\beta$ model\label{tab:utp}}
\tablehead{
&\multicolumn{5}{c}{Without 100-kpc cut}&\multicolumn{5}{c}{With 100-kpc cut}\\
Cluster & $r_{2500}$ & $k_{\rm B}T_{\rm e}$\tablenotemark{a} & $M_{\rm g}$ & $M_{\rm t}$ & $Y$ & $r_{2500}$ & $k_{\rm B}T_{\rm e}$\tablenotemark{a} & $M_{\rm g}$ & $M_{\rm t}$ & $Y$ \\
& $(")$ & (keV) & $(10^{13}M_{\odot})$ & $(10^{14}M_{\odot})$ & $(10^{-10})$ &
$(")$ & (keV) & $(10^{13}M_{\odot})$ & $(10^{14}M_{\odot})$ & $(10^{-10})$
}
\startdata
A1689 &$219^{+23}_{-23}$&$8.9^{+1.5}_{-1.6}$&$5.6^{+1.9}_{-1.8}$&$4.8^{+1.7}_{-1.5}$&$3.4^{+1.7}_{-1.5}$
&$220^{+23}_{-22}$&$8.3^{+1.4}_{-1.3}$&$5.7^{+1.9}_{-1.7}$&$4.9^{+1.7}_{-1.5}$&$3.0^{+1.5}_{-1.2}$\\
A1995 &$154^{+16}_{-18}$&$9.7^{+1.7}_{-1.7}$&$7.6^{+2.8}_{-2.5}$&$6.7^{+2.3}_{-2.3}$&$1.8^{+1.1}_{-0.8}$
&$161^{+16}_{-20}$&$9.1^{+1.6}_{-1.5}$&$8.7^{+3.1}_{-2.9}$&$7.5^{+2.7}_{-2.5}$&$1.9^{+1.0}_{-0.8}$\\
A2142 &$458^{+43}_{-49}$&$9.9^{+1.1}_{-1.3}$&$7.6^{+2.4}_{-2.3}$&$6.4^{+2.2}_{-1.8}$&$17.0^{+6.5}_{-5.5}$
&$ - $&$ - $&$ - $&$ - $&$ - $\\
A2163 &$245^{+23}_{-23}$&$13.0^{+1.7}_{-1.5}$&$10.1^{+3.0}_{-2.7}$&$8.8^{+2.6}_{-2.4}$&$8.0^{+3.1}_{-2.5}$
&$251^{+21}_{-24}$&$13.1^{+1.5}_{-1.7}$&$10.8^{+3.1}_{-2.8}$&$9.3^{+2.7}_{-2.4}$&$8.3^{+2.9}_{-2.6}$\\
A2261 &$160^{+17}_{-22}$&$7.9^{+1.5}_{-1.8}$&$3.6^{+1.3}_{-1.4}$&$3.1^{+1.1}_{-1.2}$&$1.5^{+0.9}_{-0.8}$
&$183^{+20}_{-21}$&$8.7^{+1.5}_{-1.6}$&$5.4^{+1.9}_{-1.8}$&$4.7^{+1.6}_{-1.6}$&$2.2^{+1.2}_{-1.0}$\\
A2390 &$166^{+17}_{-17}$&$8.1^{+1.2}_{-1.4}$&$4.5^{+1.5}_{-1.4}$&$3.9^{+1.3}_{-1.2}$&$1.8^{+0.8}_{-0.7}$
&$188^{+18}_{-20}$&$10.7^{+1.5}_{-1.9}$&$6.3^{+2.1}_{-1.9}$&$5.5^{+1.8}_{-1.7}$&$3.3^{+1.6}_{-1.4}$\\
\enddata
\tablenotetext{a}{The average electron temperature up to $r_{500}$ (i.e.:$\left\langle T\right\rangle_{500}$ in Equation (\ref{eq:tprofile}))}
\end{deluxetable*}
\begin{figure*}
\plotone{f2.eps}
\caption{\small
Comparison of $T_{\rm e}$ (upper-left), $M_{\rm g}$ (upper-right), $M_{\rm t}$ (lower-left),
and $Y$ (lower-right) of clusters derived from AMiBA SZE data based on the UTP $\beta$ model with
100-kpc cut (x-axis) and those given in the literature (y-axis).
All y-axis values are from \citet{Bonamente2007},
except for
the Y values, which are from \citet{Morandi2007},
and those for A2390, which is indicated by a circle
with $T_e$ from \citet{Benson2004} and $M_{\rm t}$ calculated from the data in \citet{Benson2004}.
The dashed lines indicate $y~=~x$.
\label{fig:comutp}}
\end{figure*}
\section{Embedded scaling relations}\label{sec:scaling}
The self-similar model \citep{Kasier1986} predicts simple power-law scaling relations between
cluster properties \citep[e.g., ][]{Bonamente2007,Morandi2007}.
Motivated by this, it is common to investigate the scaling relations between cluster
properties derived from observational data to test their consistency with the self-similar model.
However, because the method described above is based on the isothermal and UTP $\beta$-models,
relations agreeing with the self-similar predictions could be embedded in the derived properties themselves. We investigated these embedded relations through both
analytical and numerical methods.
\subsection{Analytical formalism and numerical analysis}\label{sec:ana}
In the isothermal $\beta$ model, by applying Equation~(\ref{eq:r2500}) in Equation~(\ref{eq:mtot}), $M_{\rm t}$ can be rewritten
as
\begin{equation}
M_{\rm t}=2500\cdot\frac{4}{3}\pi\rho_{\rm c}\left(z\right)\left(\frac{3{\beta}k_{\rm B}T_{\rm e}}{G{\mu}m_{\rm p}}\frac{1}{2500\cdot\frac{4}{3}\pi\rho_{\rm c}\left(z\right)}-r^{2}_{\rm c}\right)^{\frac{3}{2}}.
\label{eq:mtrelation}
\end{equation}
As we can see, when $\beta$ is held constant and $r^{2}_{2500}\gg r^{2}_{\rm c}$, which implies $3{\beta}k_{\rm B}T_{\rm e}/(G{\mu}m_{\rm p}\cdot2500\cdot\frac{4}{3}\pi\rho_{\rm c}\left(z\right))\gg r^{2}_{\rm c}$, the relation
$M_{t}{\propto}T^{3/2}_{\rm e}$ is obtained.
However, for some of the clusters we considered in this paper, the values of $r_{2500}/r_{\rm c}$ are only slightly above $2$. Therefore,
we have to investigate the scaling relation between $M_{\rm t}$ and $T_{\rm e}$ by considering $\partial \ln M_{\rm t}/\partial \ln T_{\rm e}$.
By partially differentiating Equation~(\ref{eq:mtrelation}) with respect to $T_{\rm e}$, and multiplying by $T_{\rm e}/M_{\rm t}$, we obtain
\begin{equation}
\frac{\partial \ln M_{\rm t}}{\partial \ln T_{\rm e}}=\frac{3}{2}\frac{(r^{2}_{2500}+r^{2}_{\rm c})}{r^{2}_{2500}},
\label{eq:slopeMT}
\end{equation}
which decreases from $1.875$ at $r_{2500}/r_{\rm c}=2$ to $1.5$ as $r_{2500}/r_{\rm c}\rightarrow \infty$. This implies that $M_{\rm t}$ behaves as $M_{\rm t}\propto T^{1.875}_{\rm e}$ when $r_{2500}/r_{\rm c}\approx 2$ and as $M_{\rm t}\propto T^{1.5}_{\rm e}$ as $r_{2500}/r_{\rm c}$ approaches infinity.
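The limiting values quoted above can be verified directly, together with a numerical log-derivative of Equation~(\ref{eq:mtrelation}); a sketch with illustrative values, where $a$ absorbs the constants multiplying $T_{\rm e}$:

```python
import math

def slope(ratio):
    """d ln M_t / d ln T_e from Eq. (slopeMT); ratio = r_2500 / r_c."""
    return 1.5 * (ratio**2 + 1) / ratio**2

# Cross-check against a numerical log-derivative of Eq. (mtrelation),
# M_t proportional to (a*T - rc**2)**1.5, with illustrative a = 1, rc = 0.5
a, rc = 1.0, 0.5
M = lambda T: (a*T - rc**2) ** 1.5
T0 = 1.25          # gives r_2500 = sqrt(a*T0 - rc**2) = 1, i.e. ratio = 2
h = 1e-6
numeric = (math.log(M(T0*(1 + h))) - math.log(M(T0*(1 - h)))) / (2*h)

print(slope(2.0))                        # 1.875
print(abs(numeric - slope(2.0)) < 1e-4)  # True
```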
This result shows that there is an embedded $M_{\rm t}$-$T_{\rm e}$ relation consistent
with the self-similar model in the method described above.
If we assume that the gas fraction $f_{\rm gas}$ is constant, the
scaling relation between $M_{\rm g}$ and $T_{\rm e}$ is the same as that between $M_{\rm t}$ and $T_{\rm e}$.
In order to investigate the relations between the integrated $Y$ and the other cluster properties, we consider Equation~(\ref{eq:intY}). By combining Equations~(\ref{eq:r2500})--(\ref{eq:cedensity}), one can obtain:
\begin{equation}
{\Delta}T_{0}=\frac{M_{\rm g}(r_{2500})f(x,T_{e})T_{\rm CMB}\sigma_{\rm T}k_{\rm B}T_{\rm e}\Gamma\left(\frac{3}{2}\beta-\frac{1}{2}\right)\theta_{\rm c}}{4\pi^{1/2}\mu_{\rm e}m_{\rm p}D^{2}_{\rm A}m_{\rm e}c^{2}\Gamma\left(\frac{3}{2}\beta\right)\int^{\theta_{2500}}_{0}\left(1+\frac{\theta^{2}}{\theta^{2}_{\rm c}}\right)^{-3\beta/2}\theta^{2}d\theta}.
\label{eq:deltaT0}
\end{equation}
Then we combine Equation (\ref{eq:deltaT0}) and Equation (\ref{eq:intY}) and obtain:
\begin{equation}
Y=\frac{\pi^{1/2}M_{\rm g}(r_{2500})\sigma_{\rm T}k_{\rm B}T_{\rm e}}{2\mu_{\rm e}m_{\rm p}m_{\rm e}c^{2}D^{2}_{\rm A}}g(\theta_{2500},\theta_{\rm c},\beta),
\label{eq:ytrelation}
\end{equation}
where
\begin{equation}
g(\theta_{2500},\theta_{\rm c},\beta)=\frac{\Gamma\left(\frac{3}{2}\beta-\frac{1}{2}\right)\theta_{\rm c}\int^{\theta_{2500}}_{0}\left(1+\frac{\theta^{2}}{\theta^{2}_{\rm c}}\right)^{\left(1-3\beta\right)/2}\theta d\theta}{\Gamma\left(\frac{3}{2}\beta\right)\int^{\theta_{2500}}_{0}\left(1+\frac{\theta^{2}}{\theta^{2}_{\rm c}}\right)^{-3\beta/2}\theta^{2}d\theta}
\label{eq:funG}
\end{equation}
is a dimensionless function of $\theta_{2500}$, $\theta_{\rm c}$, and $\beta$.
We also calculated $\partial \ln Y/\partial \ln T_{\rm e}$ to investigate the behavior of $Y$ when $T_{\rm e}$ varies (see Figure~\ref{fig:YTembed}). As can be seen in Figure~\ref{fig:YTembed}, $\partial \ln Y/\partial \ln T_{\rm e}$ varies between $2.45$ and $2.75$ for $r_{2500}/r_{\rm c}>2.0$ and $0.5\leq\beta\leq1.2$. We also note that
$\partial \ln Y/\partial \ln T_{\rm e}$ approaches $2.5$ as $r_{2500}/r_{c}$ approaches infinity.
This result indicates that behaviour similar to the self-similar model is built into
scaling relation studies based solely on SZE data.
We also investigated the effect of varying $\beta$. If we consider a power-law scaling
relation
\begin{equation}
Q=10^{A}X^{B}
\label{eqn:line}
\end{equation}
between $M_{\rm t}$ and $T_{\rm e}$ with $M_{\rm t}$ written as Equation (\ref{eq:mtrelation}), one can find that changing the value of $\beta$ will only affect the normalization factor $A$. In other words, if we change $\beta$ to $\beta'$, $A$ will be changed to $A'=A+B\log_{10}(\beta'/\beta)$.
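In the $r_{2500}\gg r_{\rm c}$ limit, where Equation~(\ref{eq:mtrelation}) reduces to $M_{\rm t}\propto(\beta T_{\rm e})^{3/2}$, the stated shift of the normalization can be checked directly (a trivial sketch with illustrative $\beta$ values):

```python
import math

B = 1.5                      # slope in the power-law limit
beta, beta_p = 0.7, 0.9      # two illustrative beta values
T = 3.0
logM = lambda b: B * math.log10(b * T)   # log10 M_t up to an additive constant

A  = logM(beta)   - B * math.log10(T)
Ap = logM(beta_p) - B * math.log10(T)
print(abs((Ap - A) - B * math.log10(beta_p / beta)) < 1e-12)  # True
```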
In the $Y$-$T_{e}$ relation, $\beta$ affects the scaling power $B$, as shown in Figure~\ref{fig:YTembed};
$B$ varies within a range of only $0.04$ for $0.5\leq\beta\leq1.2$.
Considering the UTP $\beta$ model, we undertook a similar analysis of the embedded scaling relations. The results, which are similar to those obtained with the isothermal $\beta$ model, are shown in Figure~\ref{fig:utpembed}.
\begin{figure}
\plotone{f3.eps}
\caption{\small
Embedded scaling relation between $Y$ and $T_{\rm e}$. The shaded scale indicates different $\beta$
from $0.5$ (the darkest line) to $1.2$ (the lightest line). The dashed line indicates the value predicted by the
self-similar model.
\label{fig:YTembed}}
\end{figure}
\begin{figure}
\plotone{f4.eps}
\caption{\small
Embedded $M_{t}$-$T_{\rm e}$ (upper panel) and $Y$-$T_{\rm e}$ (lower panel) scaling relations in UTP $\beta$
model. The grey scales indicate different $\beta$
from $0.5$ (the darkest line) to $1.2$ (the lightest line). The dashed lines indicate the values predicted by the
self-similar model.
\label{fig:utpembed}}
\end{figure}
\subsection{Calculation of Scaling Relations}
Here we investigate the $Y$-$T_{\rm e}$, $Y$-$M_{\rm t}$, and $Y$-$M_{\rm g}$ scaling relations
for the quantities derived above.
We also study the $M_{\rm t}$-$T_{\rm e}$ scaling relation
with the $M_{\rm t}$ from AMiBA SZE data and the $T_{\rm e}$ from X-ray data \citep{Bonamente2007,Morandi2007}.
For a pair of cluster properties $Q$-$X$, we consider the power-law scaling relation (Equation (\ref{eqn:line})).
To estimate $A$ and $B$,
we perform a maximum-likelihood analysis in the log-log plane.
For the $M_{\rm t}$-$T_{\rm e}$ relation,
because $M_{\rm t}$ and $T_{\rm e}$ are independent measurements from different observational data,
we can simply perform linear minimum-$\chi^{2}$ analysis to estimate $A$ and $B$ \citep{Press1992,Benson2004}.
On the other hand, for the SZE-derived properties,
because they are correlated and so are their likelihoods
(i.e., $L(Q,X)\neq L(Q)L(X)$, as manifested by the colored areas in Figure~\ref{fig:Ysr}),
we cannot apply $\chi^{2}$ analysis.
Instead we use a Monte Carlo method, randomly choosing one MCMC iteration from each cluster many times.
From each set of iterations we derived a pair $A_{i}$ and $B_{i}$ using a linear regression method.
Finally we estimate the likelihood distribution of $A$ and $B$ using the distribution of $\left\{A_{i}\right\}$ and
$\left\{B_{i}\right\}$.
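A minimal sketch of this Monte Carlo procedure (the chains below are mock stand-ins for the per-cluster MCMC output, not our actual data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock stand-ins for the per-cluster MCMC chains: columns are
# (log10 X, log10 Q), generated here with true A = 0.5 and B = 2.5
chains = [rng.normal(loc=[lx, 0.5 + 2.5*lx], scale=[0.01, 0.05],
                     size=(5000, 2)) for lx in (0.1, 0.3, 0.5, 0.7)]

A_i, B_i = [], []
for _ in range(2000):
    # Draw one iteration per cluster, then fit log10 Q = A + B log10 X
    pts = np.array([c[rng.integers(len(c))] for c in chains])
    B, A = np.polyfit(pts[:, 0], pts[:, 1], 1)
    A_i.append(A)
    B_i.append(B)

# The distributions of {A_i} and {B_i} estimate the likelihoods of A and B
print(np.mean(A_i), np.mean(B_i))  # roughly 0.5 and 2.5 for this mock input
```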
The results are presented in Table~\ref{tab:sr} and Figures~\ref{fig:Ysr} and~\ref{fig:MTsr}.
However, as we discussed in Section \ref{sec:ana}, the scaling relations between
SZE-derived properties should be interpreted as a test of the embedded scaling relations rather than as estimates of the true scaling relations. On the other hand, the $M_{\rm t}$-$T_{\rm e}$ relation
compares $M_{\rm t}$ and $T_{\rm e}$ from different experiments. Therefore, we can regard it
as a test of the scaling-relation prediction.
\begin{deluxetable}{c|ccc}
\tabletypesize{\scriptsize}
\tablecaption{Scaling relations of SZE-derived cluster properties\label{tab:sr}}
\tablehead{
Scaling& & & \\
Relations& $A$ & $B$ & $B_{\rm thy}$}
\startdata
$D^{2}_{A}E(z)Y,T$ & $-4.32^{+0.07}_{-0.06}$ & $2.48^{+0.20}_{-0.22}$ & $2.50$ \\
$D^{2}_{A}E(z)^{-2/3}Y,M_{\rm t}$ & $-4.80^{+0.21}_{-0.21}$ & $1.28^{+0.27}_{-0.23}$ & $1.67$ \\
$D^{2}_{A}E(z)^{-2/3}Y,M_{\rm g}$ & $-4.89^{+0.22}_{-0.22}$ & $1.29^{+0.28}_{-0.25}$ & $1.67$ \\
$E(z)M_{\rm t},T $ & $0.66^{+0.11}_{-0.12}$ & $0.95^{+0.66}_{-0.60}$ & $1.50$ \\
\enddata
\tablecomments{All cluster properties used in the analysis are based on the AMiBA SZE data (see Sec.~\ref{sec:property}),
except for the $T$ in the $M$-$T$ relation, where the $T$ is from \citet{Bonamente2007} for A1689, A1995, A2163, A2261,
and from \citet{Morandi2007} for A2390.
The units of $T$, $D^{2}_{A}Y$, $M_{\rm t}$, and $M_{\rm g}$ are
$7$~keV, Mpc$^{2}$, $10^{14}M_{\odot}$, and $10^{13}M_{\odot}$, respectively.
The last column, $B_{\rm thy}$, gives the theoretical values predicted by the self-similar model.
In the first column,
$E^{2}(z)\equiv\Omega_{\rm M}\left(1+z\right)^{3}+\left(1-\Omega_{\rm M}-\Omega_{\rm \Lambda}\right)\left(1+z\right)^{2}+\Omega_{\rm \Lambda}$.}
\end{deluxetable}
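For reference, the table-note definition of $E(z)$ translates directly into code (the density parameters below are illustrative assumptions):

```python
def E(z, Om=0.26, OL=0.74):
    """Dimensionless Hubble parameter E(z) as defined in the table note."""
    Ok = 1.0 - Om - OL
    return (Om*(1 + z)**3 + Ok*(1 + z)**2 + OL) ** 0.5

print(E(0.0))  # 1.0 by construction, for any choice of Om and OL
```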
\begin{figure}
\epsscale{0.8}
\plotone{f5.eps}
\caption{\small
Scaling Relations of
$Y$-$T_{\rm e}$ (upper), $Y$-$M_{\rm g}$ (middle), and $Y$-$M_{\rm t}$ (lower)
based on the AMiBA SZE derived results.
Gray areas indicate the 68\% confidence regions for the parameter pairs of each cluster.
Solid lines are the best fits as in Tab.~\ref{tab:sr}.\label{fig:Ysr}}
\end{figure}
\begin{figure}
\plotone{f6.eps}
\caption{\small
$M_{\rm t}-T_{\rm e}$ scaling relation
between the X-ray measured $T_{\rm e}$ \citep{Bonamente2007,Morandi2007}
and the AMiBA derived $M_{\rm t}$.
The boxes indicate the $1\sigma$ errors for each cluster.
The solid line is the best fit as in Tab.~\ref{tab:sr}.
\label{fig:MTsr}}
\end{figure}
\section{Discussion and Conclusions}\label{sec:discuss}
We derived the cluster properties, including $T_{\rm e}$, $r_{2500}$, $M_{\rm t}$, $M_{\rm g}$, and $Y$,
for six massive galaxy clusters ($M_{\rm t}(r_{2500})>2\times 10^{14}M_{\odot}$), based mainly on AMiBA SZE data.
These results are in good agreement with those obtained solely from the OVRO/BIMA SZE data,
and those from the joint SZE-X-ray analysis of Chandra-OVRO/BIMA data.
In the comparison, the SZE-X-ray joint analysis gives smaller error bars than the pure SZE results,
because the uncertainty in the measurement of the SZE flux is currently still large.
On the other hand, in our current SZE-based analysis,
due to the insufficient $u$-$v$ coverage of the 7-element AMiBA,
we still need to use X-ray parameters for the cluster model,
i.e., the $\beta$ and $\theta_{\rm c}$ of the $\beta$-model.
However, \citet{apex2009} have recently deduced $\beta$ and $\theta_{\rm c}$ from an
APEX SZE observation alone.
For AMiBA, the situation will improve when
it expands to its 13-element configuration with 1.2~m antennas \citep[AMiBA13;][]{Ho2008},
and thus much stronger constraints on the cluster properties than the current AMiBA results are expected.
Furthermore, with about three times higher angular resolution, we should be able to estimate $\beta$ and $\theta_{\rm c}$ from our SZE data with AMiBA13
and make our analysis purely SZE based \citep{Ho2008,Sandor2008}.
Nevertheless,
techniques for estimating cluster properties solely from SZE data remain important,
because
many upcoming SZE surveys will observe clusters for which no X-ray data are available
\citep{Ruhl2004,Fowler2004,Kaneko2006,Ho2008},
especially at high redshifts.
\citet{Hallman2007} suggested that adopting the UTP $\beta$ model for SZE data on galaxy clusters
reduces the overestimation of the integrated Compton $Y_{500}$ and gas mass. However, the $Y_{2500}$
values we obtained with the UTP model are not smaller than those obtained with the isothermal model, and the $M_{g}(r_{2500})$
values deduced using the UTP model are even larger than those deduced using the isothermal model.
For the integrated Compton $Y$, when we compare the $Y_{500}$ deduced using the UTP model, $Y_{500,\rm UTP}$,
with that deduced using the isothermal model, $Y_{500,\rm iso}$,
we find that $Y_{500,\rm UTP}$ is smaller than $Y_{500,\rm iso}$, as predicted by \citet{Hallman2007}.
The reason is that the Compton $y$ profile predicted by the UTP $\beta$ model decreases more quickly with increasing radius than
the profile predicted by the isothermal $\beta$ model.
Therefore, the ratio $Y_{\Delta,\rm UTP}/Y_{\Delta,\rm iso}$ decreases as $\Delta$ decreases.
We also noticed that the electron temperature values obtained with the isothermal model are significantly higher
than the temperatures deduced from X-ray data for most clusters we considered.
The temperatures of clusters obtained using the UTP model are lower than those obtained with the isothermal model
and thus are in better agreement with those deduced from X-ray data.
Therefore, in the UTP model, with similar $Y_{2500}$ and lower temperature, we should get larger $M_{g}$.
The electron temperatures derived using the UTP $\beta$ model are in better agreement
with X-ray observation results than those derived using the isothermal $\beta$ model.
This result implies that the UTP $\beta$ model may provide better estimates of the electron temperature when we can
use only the $\beta$ model parameters from X-ray observation. However, we noticed that
the UTP $\beta$ model produced larger error bars than the isothermal $\beta$ model did.
These increased errors arise from the uncertainties of $\beta$ and $r_{c}$, which we insert by hand.
On the other hand, because we treat $\beta$ and $r_{c}$ as independent parameters in this work,
the uncertainty could be overestimated due to the degeneracy between these two parameters.
If we could access the likelihood distributions of $\beta$ and $r_{c}$ of the UTP $\beta$ model derived from observations,
the error bars might be reduced significantly.
There is a concern that
the scaling relations among the purely SZE-derived cluster properties may be implicitly embedded
in the formalism we used here. In this paper,
we also investigate for the first time the embedded scaling relations
between the SZE-derived cluster properties.
Our analytical and numerical analyses both suggest that there are embedded scaling relations
between SZE-derived cluster properties, for both the isothermal model and the UTP model, when $\beta$ is fixed.
The embedded $Y$-$T$ and $M$-$T$ scaling relations are close to the predictions of self-similar model.
These results imply that the assumptions built into the pure-SZE method
significantly affect the scaling relations between the SZE-derived properties.
Therefore, those scaling relations should be treated carefully.
Our results suggest the possibility of measuring cluster parameters with SZE observation alone.
The agreement between our results and those from the literature provides
not only confidence in our project
but also support for our understanding of galaxy clusters.
The upcoming expanded AMiBA with higher sensitivity and better resolution will
significantly improve the constraints on these cluster properties.
In addition, an improved determination of the $u$-$v$ space structure of the clusters directly from AMiBA
will make it possible to measure the properties of clusters which currently do not have good X-ray data.
The ability to estimate
cluster properties based on SZE data will improve the study of mass distribution at high redshifts.
On the other hand, the fact that the assumed cluster mass and temperature profiles significantly bias the estimates of scaling relations should also be noted and treated carefully.
\acknowledgments
We thank the Ministry of Education, the National Science
Council (NSC), and the Academia Sinica, Taiwan, for funding and supporting the AMiBA project. YWL thanks the AMiBA team
for their guidance, support, hard work, and helpful discussions.
We are grateful for computing support from the National Center for High-Performance Computing, Taiwan.
This work is also supported by the National Center for Theoretical Science and the
Center for Theoretical Sciences, National Taiwan University, for J.H.P.~Wu.
Support from the STFC for M. Birkinshaw is also acknowledged.
\section{Introduction}
The idea of intermittency is not new. Some observational evidence was
given as early as the 1960s.
For example, \cite{burbidge1965} suggested intermittent outbursts of
NGC1275, the cD galaxy in the center of the Perseus~A cluster.
\cite{kellerman1966} derived the intermittency timescales of
$\sim 10^4-10^6$~years required for the production of the relativistic
particles responsible for the observed synchrotron spectra in radio
sources and quasars. Signatures of past recurrent
activity in the nuclei of normal galaxies were presented by
\cite{bailey1978}, but their paper went largely unnoticed and
has only 24 citations to date.
\cite{shields1978} examined quasar models and suggested that they
accumulate mass during quiescent periods and then, through an
instability, transfer it onto the central black hole during
a short outburst of activity. These are just a few
examples of the AGN intermittency that has been considered since the
early days of studies of the nuclear emission in galaxies. Only now
are we looking closely at this behaviour, as it becomes evident
that it is an important ingredient in our understanding of the
evolution of structures in the universe. However, there are still
many open questions about the origin of the intermittent behaviour. Is
it related to an unsteady fuel supply, or to the accretion flow? Are there
many quiescent and outburst phases? What is the mechanism regulating
the intermittent behaviour?
Observations show a range of timescales for AGN outbursts. Quasar
lifetimes estimated from large samples of SDSS quasars are of
order $10^7$~years \citep{martini2001}. Signatures of outbursts in
recent observations of X-ray clusters indicate similar timescales
\citep[see][for a review]{mcnamara2007}. However, episodes of activity on timescales
shorter than $10^5$~years have also been observed, for example in
compact radio sources
\citep{owsianik1998,reynolds1997} or as light echoes in nearby galaxies
\citep{lintott2009}.
In this review we focus on the short timescales of intermittent
jet activity. We discuss radio source evolution and the
observational evidence for intermittent activity, and present a
model for the origin and nature of the short-term activity based on
accretion disk physics.
\section{Jets and Compact Radio Sources}
Jets provide evidence for energetic AGN outflows. They highlight
the fact that the energy released in the nuclear region in the close
vicinity of a black hole can influence the environment at large
distances. They also trace the source age and activity timescales. In
many recently discovered X-ray jets associated with powerful quasars
the continuous X-ray jet emission extends to hundreds of kpc distances
from the core
\citep{siemi2002,sambruna2002,harris2006}. These jets are
straight, sometimes curved or bent, and have many knots. For example,
in the jet of PKS~1127-145, associated with a $z=1.18$ quasar, the separation and
size of the knots may indicate separate outbursts of jet activity on
timescales of $\sim10^5$~years \citep{siemi2007}.
Host-galaxy-scale jets, smaller than $10$~kpc, provide direct
information about the jet interactions with the ISM and about
feedback. Compact radio sources that are entirely contained within the
host galaxy represent the initial phase of radio source growth and
they are young. The most compact Gigahertz Peaked Spectrum (GPS)
sources have linear radio sizes below $\sim 1$~kpc. Their age can be
probed directly by studying the expansion velocity of symmetric radio
structures and it is typically less than $10^3$~years
\citep{polatidis2003,gugliucci2005}. Compact Steep Spectrum (CSS) radio sources
are slightly larger, but still contained within the host galaxy. Their
age is given by synchrotron ageing measurements and is smaller than
$10^5$~years \citep{murgia1999}.
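The kinematic age estimate is simply the hot-spot separation divided by the expansion speed; a sketch with hypothetical but representative numbers for a compact GPS source:

```python
pc = 3.086e16            # meters per parsec
c = 2.998e8              # speed of light, m/s
year = 3.156e7           # seconds per year

separation = 50 * pc     # hypothetical hot-spot separation
v_exp = 0.2 * c          # hypothetical expansion speed
age = separation / v_exp / year
print(age)               # ~8e2 yr, consistent with kinematic ages < 1e3 yr
```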
\section{Evolution of Radio Sources}
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.6]{siemiginowska_f1.eps}
\end{center}
\caption{Radio sources in the radio luminosity vs. X-ray luminosity
plane. FRI are marked with empty circles and FRII with squares.
Compact radio sources from \cite{tengstrand2009} are marked by large
black circles in the upper right-hand part of the diagram. The
regression line for the FRI sources is marked with a dashed line and
that for FRII with a dash-dot-dash line. The solid line is the fit to the
compact sources with the lower X-ray luminosity. The evolutionary tracks
towards FRI and FRII sources are marked.}
\label{fig1}
\end{figure}
\cite{tengstrand2009} discuss the XMM-{\it Newton} and {\it Chandra} X-ray Observatory
observations of a complete sample of compact GPS galaxies. All these
sources are unresolved in X-rays, but they are very powerful in both
the radio and X-ray bands. In Figure~\ref{fig1} we mark the location
of the GPS galaxies in the radio vs. X-ray luminosity plane. We also mark the
locations of the large-scale FRI and FRII sources. The GPS sources
occupy the upper right corner of that diagram, and the plotted regression
lines indicate possible evolutionary paths for the GPS sources to grow
into large-scale radio sources. However, it is unclear how
the evolution of the GPS radio sources proceeds. Do they fade in
the radio and X-rays simultaneously, or does only the radio emission fade while
the X-ray emission remains unchanged, indicating that the X-ray
emission process is independent of the radio one? Alternatively,
the GPS sources may remain in this part of the diagram due to
repetitive outbursts. \cite{odea1997} show that there is an
overabundance of compact radio sources, so they need to be short-lived
or intermittent, rather than evolving in a self-similar manner, to
explain their numbers.
\cite{reynolds1997}
proposed a phenomenological model with recurrent outbursts on
timescales between $\sim10^4-10^5$~years, lasting for $\sim
3\times10^4$~years, that fits the observed numbers and sizes of radio sources.
There is additional growing evidence for intermittent AGN
activity. The morphology of large radio galaxies with double-double or
triple-triple aligned structures shows repetition on scales of
$\sim10^5-10^6$~years \citep[see for example][]{brocksopp2007}. Some
nearby radio galaxies show younger radio structures embedded in a
relic radio halo \citep[e.g., 4C+29.30;][]{jamrozy2007}.
Recent {\it Chandra} observations of X-ray clusters show signs of AGN
outbursts on timescales of $10^7$~years, e.g., Perseus~A
\citep{fabian2003} and M87 \citep{forman2005}. However, the origin of intermittent activity
has not been determined so far. Mergers operate on long timescales and
can be important in X-ray clusters, especially at high
redshifts. An unstable fuel supply due to feedback may play a significant
role on long timescales. Instabilities in the accretion flow
related to the accretion physics have been shown to operate on long
\citep[][]{shields1978,siemi1996,janiuk2004} and short timescales
\citep{janiuk2002,czerny2009}, depending on the nature of the
instabilities.
\section{Outbursts of Activity due to Accretion Disk Instabilities}
Accretion disk instabilities have been studied and shown to operate in
binary systems. The ionization instability that causes large-amplitude
($\Delta L \sim 10^4$) outbursts in galactic binaries on timescales
between 1 and 1000 years may also operate in accretion disks around
supermassive black holes. There it can cause huge outbursts on timescales of
$10^6-10^8$~years and influence the evolution and growth of a central
black hole \citep{siemi1996}. The radiation pressure instability
studied in microquasars causes moderate amplitude variability on short
timescales. The time-dependent disk models that include this
instability support the observed correlation between luminosity
variations and jet activity in microquasar \citep{fender2004}. In fact
the radiation pressure instability is the only quantitative mechanism
explaining the observed variability in GRS1915+105
\citep{janiuk2000,nayakshin2000,janiuk2002,merloni2006,janiuk2007}.
Scaling the observed outbursts with timescales of 100-2000~sec in this
$\sim 10 M_{\odot}$ system to 10$^9M_{\odot}$ galactic size black hole
gives the variability timescales between 300-6000 years.
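This mass scaling can be checked with a few lines of arithmetic. The sketch below is our own illustration, not part of the original analysis; it assumes, as the rescaling above does implicitly, that the characteristic variability timescale grows linearly with black hole mass.

```python
# Rescale microquasar limit-cycle timescales to a supermassive black hole,
# assuming the variability timescale is proportional to black hole mass.

SECONDS_PER_YEAR = 3.156e7

def rescale_timescale(t_seconds, m_small=10.0, m_large=1e9):
    """Timescale (in years) for a black hole of mass m_large (solar masses),
    scaled linearly from t_seconds observed for a black hole of mass m_small."""
    return t_seconds * (m_large / m_small) / SECONDS_PER_YEAR

# GRS 1915+105 outbursts recur on ~100-2000 s; scaled to 10^9 solar masses:
print(rescale_timescale(100.0), rescale_timescale(2000.0))
# roughly 300 and 6000 years, as quoted above
```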
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.48]{siemiginowska_f2.eps}
\end{center}
\caption{ Surface density vs. effective temperature relation, i.e.
the local stability curve at 10$R_g$ for an accretion disk with the
viscosity scaled with the gas and total pressure as
$\sqrt{P_{gas}P_{tot}}$. The viscous heating is balanced by cooling
through the processes listed on the right-hand side. The regions at
lower temperatures are dominated by the gas pressure, while the upper
branch becomes unstable when the radiation pressure dominates. The upper
curves show different solutions that include a parametrized outflow
stabilizing the disk. The top curve shows the solution for the
advective slim disk. Green solid dots indicate solutions obtained during
the time-dependent model calculation \citep[see][for
details]{janiuk2002}.}
\label{fig2}
\end{figure}
Figure~\ref{fig2} shows the standard stability curve in the surface
density vs. effective temperature plane with the stable solutions
for an $\alpha$-viscosity accretion disk. In the region dominated by
radiation pressure the disk is unstable. The viscous heating cannot be
compensated by the radiative cooling, and some additional cooling of
the disk in the form of an outflow, advection towards the black hole,
or energy dissipation in the corona can stabilize the disk. The
solid dots show the evolutionary ``track'' of the local disk in the
unstable region. This model includes the outflow that transports away
the excess heating energy, allowing the disk to transition to a
lower-temperature stable state
\citep{janiuk2002,czerny2009}.
Figure~\ref{fig3} shows the luminosity variations due to the radiation
pressure instability occurring in a disk around a $M_{BH}=3\times10^8
M_{\odot}$ black hole with a steady flow of matter at the rate $\dot
M=0.1\dot M_{Edd}$, where $\dot M_{Edd}$ is the critical accretion
rate corresponding to the Eddington luminosity. The outbursts are
separated by 3$\times10^4$~years, as required by the
\citeauthor{reynolds1997} model. For different black hole masses and
accretion rates the outburst timescales and durations
differ. In general the effects of the instability depend on the
size of the unstable disk region, which scales with the accretion
rate. Also, there is a lower limit to the accretion rate for this
instability to operate, because the radiation pressure instability
occurs only when $P_{rad}>P_{gas}$. In systems with $M_{BH}>10^8
M_{\odot}$ and accretion rates below a few percent of the critical
Eddington accretion rate the instability does not occur and the disk
is stable.
We associate the outbursts caused by the radiation pressure
instability with the ejections of radio jets. The predicted timescales
for the outburst durations and repetitions are in agreement with the
observations \citep{wu2009} of compact radio sources.
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{siemiginowska_f3a.eps}
\includegraphics[scale=0.48]{siemiginowska_f3b.eps}
\end{center}
\caption{{\bf Left:} Luminosity variations due to the radiation
pressure instability in an accretion
disk around $M_{BH}=3\times10^8M_{\odot}$ and $\dot M =0.1 \dot
M_{Edd}$. {\bf Right:} Outburst durations represented as lines of
constant duration in the black hole mass vs. accretion rate parameter
space. Outbursts lasting for $10^3$~years are too short for the
expansion of a radio source beyond its host galaxy.}
\label{fig3}
\end{figure}
\section{Consequences}
There are several implications of the intermittent jet activity.
The repetitive outbursts potentially have a very strong impact on the
ISM. Repetitive shocks on timescales shorter than the relaxation
timescales may keep the ISM warm, and also more efficiently drive the
gas out of the host galaxy.
There exists an intrinsic limit to the size of a radio source set by
these timescales.
The evolution of a radio source proceeds in a non-self-similar way. The
outbursts repeat regularly every $10^3$--$10^6$ years, and the jet turns off
between outbursts. Each radio structure may represent one
outburst, and a young source will indicate a new outburst. However,
the timescales for fading of a radio source are longer than the
separation timescales between the outbursts, so we should observe a
fossil radio structure in addition to the compact one. 10\% of GPS
sources have faint radio emission on large scales, which is typically
explained as a relic of a previous active phase \citep{stanghellini2005}.
What is the radio source evolution between the outbursts? The jet
turns off after each outburst and then the pressure-driven expansion
continues until the radio source energy is comparable to the thermal
energy of the heated medium. If the outburst lasts only about $\sim
1000$~years then the radio source does not expand beyond the host
galaxy. The hot spots can only travel to a distance of $\sim 300$~pc
before the jet turns off, and the subsequent pressure-driven expansion drives
the radio structure only to a distance of 3~kpc for the typical ISM
density and jet power. At this point the re-collapse of the radio
source starts, and it continues for about a Myr if there is no further
outburst. In order for the source to escape its host galaxy, the
outburst needs to last longer than $\sim 10^4$~years.
The re-collapse phase is typically longer than the repetition timescale, so
a ``tunnel'' made by the jet will not close between outbursts. This
means that repetitive jet outbursts will propagate into a
rarefied medium. As the timescales between outbursts are long, we
expect to observe fading compact radio sources with no active nucleus.
For high accretion rates ($>0.02 \dot M_{Edd}$) the jet outbursts can
be governed by the radiation pressure instability. The observed
radio power suggests high accretion rates that are consistent with the
requirement for the instability to operate. Thus the number of
compact radio sources can be explained by outbursts with
timescales consistent with the radiation pressure instability.
There are some open issues related to the theoretical understanding of
the disk instability. The effects of the instability depend on the
disk viscosity. The standard \citeauthor*{shakura1973} disk is
unstable when the viscosity scales with the total pressure, while it
is stable when the viscosity scales with the gas pressure. In the case of
MRI viscosity, local disk simulations support some scaling of the
effective torque with pressure. However, the effect of the radiation
pressure instability is not clear in global MHD simulations,
although very recent simulations by \cite{hirose2009} confirm the
existence of the instability. Observationally, the behaviour of
microquasars strongly supports the radiation pressure explanation for
the outbursts.
\section{Conclusions and Future Perspectives}
Observations indicate a complex behaviour of radio sources: continuous
jets, signatures of repetitive outbursts in separated radio
components, or the statistics of radio sources. This complex
behaviour may reflect different regimes of the accretion flow that
depend on the black hole mass and accretion rate. We also note that
for some parameters the radio source may never leave the host galaxy.
Large samples of radio sources are needed for statistical studies. We
should be able to determine the number of sources, their lifetimes and
sizes. We also need more sources with measurements of their ages, as
well as indications of intermittency in the sources' radio
morphology. Such data should become available with the new
radio surveys that probe fainter, low-power compact sources that might
be in the fading phase.
\acknowledgements
AS thanks the organizers for the invitation to the meeting.
This research is funded in part by NASA contract NAS8-39073. Partial
support for this work was provided by the NASA grants
GO5-6113X, GO8-9125A and NNX07AQ55G.
\chapter[Empirical deconvolution]{Kernel methods and minimum contrast
estimators for empirical deconvolution}
\footnotetext[1]{Department of Mathematics and Statistics, The University of
Melbourne, Parkville, VIC 3010, Australia; [email protected]}
\footnotetext[2]{Department of Mathematics and Statistics, The University of
Melbourne, Parkville, VIC 3010, Australia; [email protected]}
\arabicfootnotes
\contributor{Aurore Delaigle
\affiliation{University of Melbourne and University of Bristol}}
\contributor{Peter G. Hall
\affiliation{University of Melbourne and University of California at Davis}}
\renewcommand\thesection{\arabic{section}}
\numberwithin{equation}{section}
\renewcommand\theequation{\thesection.\arabic{equation}}
\numberwithin{figure}{section}
\renewcommand\thefigure{\thesection.\arabic{figure}}
\begin{abstract}We survey classical kernel methods for providing nonparametric
solutions to problems involving measurement error. In particular we outline
kernel-based methodology in this setting, and discuss its basic properties.
Then we point to close connections that exist between kernel methods and
much newer approaches based on minimum contrast techniques. The connections
are through use of the sinc kernel for kernel-based inference. This
`infinite order' kernel is not often used explicitly for kernel-based
deconvolution, although it has received attention in more conventional
problems where measurement error is not an issue. We show that in a
comparison between kernel methods for density deconvolution, and their
counterparts based on minimum contrast, the two approaches give identical
results on a grid which becomes increasingly fine as the bandwidth
decreases. In consequence, the main numerical differences between these two
techniques are arguably the result of different approaches to choosing
smoothing parameters.
\end{abstract}
\subparagraph{Keywords}bandwidth, inverse problems, kernel estimators, local
linear methods, local polynomial methods, minimum contrast methods,
nonparametric curve estimation, nonparametric density estimation,
nonparametric regression, penalised contrast methods, rate of convergence,
sinc kernel, statistical smoothing
\subparagraph{AMS subject classification (MSC2010)}62G08, 62G05
\section{Introduction}
\subsection{Summary}
Our aim in this paper is to give a brief survey of kernel methods\index{kernel methods|(} for solving problems involving \Index{measurement error}, for example problems involving density \Index{deconvolution} or regression with errors in variables\index{errors in variables|(}, and to relate these `classical' methods (they are now about twenty years old) to new approaches based on \Index{minimum contrast methods}. Section~1.2 motivates the treatment of problems involving errors in variables, and section~1.3 describes conventional kernel methods for problems where the extent of measurement error is so small as to be ignorable. Section~2.1 shows how those standard techniques can be modified to take account of measurement errors, and section~2.2 outlines theoretical properties of the resulting estimators.
In section~3 we show how kernel methods for dealing with measurement error are
related to new techniques based on minimum contrast ideas. For this purpose,
in section~3.1 we specialise the work in section~2 to the case of the sinc
kernel\index{kernel methods!sinc kernel}. That kernel choice is not widely used for density deconvolution,
although it has previously been studied in that context by Stefanski and
Carroll (1990)\index{Stefanski, L.|(}\index{Carroll, R. J.|(}, Diggle and Hall
(1993)\index{Diggle, P. J.}, Barry and Diggle (1995)\index{Barry, J.}, Butucea
(2004)\index{Butucea, C.}, Meister (2004)\index{Meister, A.} and Butucea and
Tsybakov (2007a,b)\index{Tsybakov, A. B.}. Section~3.2 outlines some of the
properties that are known of sinc kernel estimators, and section~3.3 points to
the very close connection between that approach and minimum contrast, or
penalised contrast, methods\index{penalised contrast methods}.
\subsection{Errors in variables}
Measurement errors\index{measurement error|(} arise commonly in practice, although only in a minority of statistical analyses is a special effort made to accommodate them. Often they are minor, and ignoring them makes little difference, but in some problems they are important and significant, and we neglect them at our peril.
Areas of application of deconvolution\index{deconvolution|(}, and regression with measurement
error, include the analysis of seismological\index{seismology} data
(e.g.~Kragh and Laws, 2006)\index{Kragh, E.}\index{Laws, R.}, \Index{financial
analysis} (e.g.~Bonhomme and Robin, 2008)\index{Bonhomme, S.}\index{Robin, J.-M.},
disease epidemiology\index{epidemiology|(} (e.g.~Brookmeyer and Gail, 1994,
Chapter~8)\index{Brookmeyer, R.}\index{Gail, M. H.}, and nutrition\undex{nutrition|(}.
The latter topic is of particular interest today, for example in connection
with errors-in-variables problems for
\index{data!nutrition data|(}data gathered in
\index{food|(}food frequency
questionnaires\index{questionnaire} (FFQs), or dietary questionnaires for
epidemiological studies (DQESs). Formally, an FFQ is `A method of dietary
assessment in which subjects are asked to recall how frequently certain foods
were consumed during a specified period of time,' according to the Nutrition
Glossary of the European Food Information Council. An FFQ seeks detailed
information about the nature and quantity of food eaten by the person filling
in the form, and often includes a query such as, ``How many of the above
servings are from fast food outlets (McDonalds, Taco Bell, etc.)?'' (Stanford
University, 1994)\index{Stanford University}. This may seem a simple question to answer, but nutritionists interested in our consumption of fat generally find that the quantity of fast food that people admit to eating is biased downwards from its true value. The significant concerns in Western society about fat intake, and about where we purchase our oleaginous food, apparently influence our truthfulness when we are asked probing questions about our eating habits.
Examples of the use of statistical deconvolution in this area include the work
of Stefanski and Carroll (1990)\index{Stefanski, L.|)}\index{Carroll, R. J.|)} and
Delaigle and Gijbels (2004b)\index{Gijbels, I.}, who address nonparametric
density deconvolution from measurement-error data, obtained from FFQs during
the second National Health and Nutrition Examination Survey (1976--1980);
Carroll {\em et al.}~(1997)\index{Freedman, L. S.}\index{Pee, D.}, who discuss design
and analysis aspects of linear measurement-error models when data come from
FFQs; Carroll {\em et al.}~(2006)\index{Midthune, D.}\index{Kipnis, V.}, who use
measurement-error models, and deconvolution methods, to develop marginal mixed
measurement-error models for each nutrient in a nutrition study, again when
FFQs are used to supply the data; and Staudenmayer {\em et al.}~(2008)\index{Ruppert,
D.}\index{Buonaccorsi, J. P.}, who employ a dataset from nutritional epidemiology\index{epidemiology|)} to illustrate the use of techniques for nonparametric density deconvolution. See Carroll {\em et al.}~(2006, p.~7) for further discussion of applications to data on nutrition.\index{food|)}\index{data!nutrition data|)}
How might we correct for errors in variables? One approach is to use methods based on deconvolution, as follows. Let us write $Q$ for the quantity of fast food that a person admits to eating, in a food frequency questionnaire; let $Q_0$ denote the actual amount of fast food; and put $R=Q/Q_0$. We expect that the distribution of $R$ will be skewed towards values greater than~1, and we might even have an idea of the shape of the distribution responsible for this effect, i.e.~the distribution of $\log R$. Indeed, we typically work with the logarithm of the formula $Q=Q_0\,R$, and in that context, writing $W=\log Q$, $X=\log Q_0$ and $U=\log R$, the equation defining the variables of interest is:
\begin{equation}
W=X+U\,.\label{WXU}
\end{equation}
We have data on $W$, and from that we wish to estimate the distribution of $X$, i.e.~the distribution of the logarithm of fast-food consumption.
It can readily be seen that this problem is generally not solvable unless the distribution of $U$, and the joint distribution of $X$ and $U$, are known. In practice we usually take $X$ and $U$ to be independent, and undertake empirical deconvolution\index{deconvolution|)} (i.e.~estimation of the distribution, or density, of $X$ from data on $W$) for several candidates for the distribution of~$U$. If we are able to make repeated measurements of $X$, in particular to gather data on $W^{(j)}=X+U^{(j)}$ for $1\leq j\leq m$, say, then we have an opportunity to estimate the distribution of $U$ as well.
It is generally reasonable to assume that $X$, $U^{(1)}$, \ldots, $U^{(m)}$ are independent random variables. The distribution of $U$ can be estimated whenever $m\geq2$ and the distribution is uniquely determined by $|\phi_U|^2$, where $\phi_U$ denotes the \Index{characteristic function} of~$U$. The simplest example of this type is arguably that where $U$ has a \Index{symmetric distribution} for which the characteristic function does not vanish on the real line. One example of repeated measurements in the case $m=2$ is that where a food frequency questionnaire asks at one point how many times we visited a fast food outlet, and on a distant page, how many hamburgers or servings of fried chicken we have purchased.
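The reason $m\geq2$ suffices is that the difference of two replicates, $W^{(1)}-W^{(2)}=U^{(1)}-U^{(2)}$, has characteristic function $|\phi_U|^2$, which can therefore be estimated directly from the differences. A minimal simulation sketch follows; the Gaussian choices for $X$ and $U$ are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated replicated measurements W^(j) = X + U^(j), j = 1, 2.
# The N(0, 1) signal and N(0, sigma_u^2) errors are illustrative choices.
n, sigma_u = 5000, 0.5
X = rng.normal(0.0, 1.0, n)
W1 = X + rng.normal(0.0, sigma_u, n)
W2 = X + rng.normal(0.0, sigma_u, n)

def phi_U_sq_hat(t, diffs):
    """Estimate |phi_U(t)|^2 from replicate differences W1 - W2, whose
    characteristic function equals |phi_U|^2 for independent errors."""
    return np.mean(np.cos(t * diffs))  # the imaginary part vanishes

d = W1 - W2
est = phi_U_sq_hat(1.0, d)
truth = np.exp(-sigma_u**2)  # |phi_U(1)|^2 for N(0, sigma_u^2) errors
print(est, truth)
```

The empirical cosine average recovers $|\phi_U(t)|^2$ without ever observing the errors themselves, which is the key to the replicated-data approach.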
The model at (\ref{WXU}) is simple and interesting, but in examples from nutrition science, and in many other problems, we generally wish to estimate the response to an \Undex{explanatory variable}, rather than the distribution of the explanatory variable. Therefore the proper context for our food frequency questionnaire example\undex{nutrition|)} is really \Index{regression}, not distribution or density estimation. In regression with errors in variables\index{errors in variables|)} we observe data pairs $(W,Y)$, where
\begin{equation}
W=X+U\,,\quad Y=g(X)+V\,,\label{WYfromXU}
\end{equation}
$g(x)=E(Y\,|\, X=x)$, and the random variable $V$, denoting an experimental error, has zero mean. In this case the standard regression problem is altered on account of errors that are incurred when measuring the value of the explanatory variable. In (\ref{WYfromXU}) the variables $U$, $V$ and $X$ are assumed to be independent.
The measurement error $U$, appearing in (\ref{WXU}) and (\ref{WYfromXU}), can be interpreted as the result of a `laboratory error' in determining the `dose' $X$ which is applied to the subject. For example, a laboratory technician might use the \Index{dose} $X$ in an experiment, but in attempting to determine the dose after the experiment they might commit an error $U$, with the result that the actual dose is recorded as $X+U$ instead of~$X$. Another way of modelling the effect of measurement error is to reverse the roles of $X$ and $W$, so that we observe $(W,Y)$ generated as
\begin{equation}
X=W+U\,,\quad Y=g(X)+V\,.\label{XYfromWU}
\end{equation}
Here a precise dose $W$ is specified, but when measuring it prior to the experiment our technician commits an error $U$, with the result that the actual dose is $W+U$. In (\ref{XYfromWU}) it is assumed that $U$, $V$ and $W$ are independent.
The measurement error model (\ref{WYfromXU}) is standard. The alternative
model (\ref{XYfromWU}) is believed to be much less common, although in some
circumstances it is difficult to determine which of (\ref{WYfromXU}) and
(\ref{XYfromWU}) is the more appropriate. The model at (\ref{XYfromWU}) was
first suggested by Berkson (1950)\index{Berkson, J.}, for whom it is named.
\subsection{Kernel methods}
If the measurement error $U$ were very small then we could estimate the
density $f$ of $X$, and the function $g$ in the model (\ref{WYfromXU}), using standard
kernel methods. For example, given data $X_1$, \ldots, $X_n$ on $X$ we could take
\begin{equation}
{\hat f}(x)=\frac1{nh}\,\sum_{i=1}^n\, K\Big(\frac{x-X_i}h\Big)\label{fhat}
\end{equation}
to be our estimator of $f(x)$. Here $K$ is a kernel function and $h$, a
positive quantity, is a bandwidth\index{bandwidth|(}. Likewise, given data $(X_1,Y_1)$, \ldots, $(X_n,Y_n)$ on $(X,Y)$ we could take
\begin{equation}
{\hat g}(x)=\frac{\sum_i\, Y_i\,K\{(x-X_i)/h\}}{\sum_i\, K\{(x-X_i)/h\}}\label{ghat}
\end{equation}
to be our estimator of $g(x)$, where $g$ is as in the model at~(\ref{WYfromXU}).
The estimator at (\ref{fhat}) is a standard kernel density
estimator\index{kernel methods!kernel density estimator}, and is itself a
probability density if we take $K$ to be a density. It is consistent under
particularly weak conditions, for example if $f$ is continuous and $h\ra0$ and
$nh\to\infty$ as $n$ increases. Density estimation\index{density estimation} is discussed at length by
Silverman (1986)\index{Silverman, B. W.} and Scott~(1992)\index{Scott, D. W.}.
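As a concrete illustration, the estimator at (\ref{fhat}) can be coded in a few lines; the Gaussian kernel and simulated sample below are illustrative choices, not prescribed by the text.

```python
import numpy as np

def kde(x, data, h):
    """Kernel density estimator hat f(x) = (nh)^{-1} sum_i K((x - X_i)/h),
    here with the standard Gaussian kernel K."""
    u = (np.asarray(x, dtype=float)[..., None] - data) / h
    return np.sum(np.exp(-0.5 * u**2), axis=-1) / (
        data.size * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 2000)            # illustrative N(0, 1) sample
grid = np.linspace(-4.0, 4.0, 81)
fhat = kde(grid, data, h=0.3)
# because K is itself a density, hat f integrates to 1
area = np.sum((fhat[1:] + fhat[:-1]) / 2.0) * (grid[1] - grid[0])
print(area)
```

Since $K\geq0$ and $\int K=1$, the estimate is itself a probability density, in line with the remark above.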
The estimator ${\hat g}$, which we generally also compute by taking $K$ to be a
probability density, is often referred to as the `local
constant'\index{density estimation!local
constant estimator} or Nadaraya--Watson\index{Nadaraya, E. A.!Nadaraya--Watson estimator} estimator of~$g$. The first of these names follows from the fact that ${\hat g}(x)$ is the result of fitting a constant to the data by \Index{local least squares}:
\begin{equation}
{\hat g}(x)=\mathop{\rm argmin}_c\,\sum_{i=1}^n\,(Y_i-c)^2\,K\Big(\frac{x-X_i}h\Big)\,.\label{ghatArgmin}
\end{equation}
The estimator ${\hat g}$ is also consistent under mild conditions, for example if
the variance of the error, $V$, in (\ref{WYfromXU}) is finite, if $f$ and $g$
are continuous, if $f>0$ at the point $x$ where we wish to estimate~$g$, and
if $h\to 0$ and $nh\to\infty$ as $n$ increases. General kernel methods are
discussed by Wand and Jones (1995)\index{Wand, M. P.}\index{Jones, M. C.},
and \Index{statistical smoothing} is addressed by
Simonoff~(1996)\index{Simonoff, J. S.}.
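A matching sketch of the local constant estimator at (\ref{ghat}), again with an illustrative Gaussian kernel and simulated regression data:

```python
import numpy as np

def nadaraya_watson(x, X, Y, h):
    """Local constant (Nadaraya-Watson) estimator hat g(x): a kernel-
    weighted average of the Y_i, with weights K((x - X_i)/h)."""
    w = np.exp(-0.5 * ((x - X) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * Y) / np.sum(w)

rng = np.random.default_rng(2)
X = rng.uniform(-2.0, 2.0, 2000)
Y = np.sin(X) + rng.normal(0.0, 0.2, X.size)   # g(x) = sin(x), say
print(nadaraya_watson(1.0, X, Y, h=0.2))       # close to sin(1) ~ 0.84
```

Note that any constant factor multiplying $K$ cancels from the ratio, a point that recurs for the deconvolution versions in section~2.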
Local constant estimators have the advantage of being relatively robust
against uneven spacings in the sequence $X_1$, \ldots, $X_n$. For example,
the ratio at (\ref{ghat}) never equals a nonzero number divided by zero.
However, local constant estimators are particularly susceptible
to \Index{boundary bias}. In particular, if the density of $X$ is supported
and bounded away from zero on a compact interval, then ${\hat g}$, defined by
(\ref{ghat}) or (\ref{ghatArgmin}), is generally inconsistent at the endpoints
of that interval. Issues of this type have motivated the use of local
polynomial estimators\index{density estimation!local polynomial estimator}, which are defined by ${\hat g}(x)={\hat c}_0(x)$ where, in a generalisation of~(\ref{ghatArgmin}),
\begin{equation}
({\hat c}_0(x),\ldots,{\hat c}_p(x))
=\mathop{\rm argmin}_{(c_0,\ldots,c_p)}\,\sum_{i=1}^n\,\bigg\{Y_i-\sum_{j=0}^p\,c_j\,(x-X_i)^j\bigg\}^2\,
K\Big(\frac{x-X_i}h\Big)\,.\label{chat}
\end{equation}
See, for example, Fan and Gijbels~(1996)\index{Fan, J.}\index{Gijbels, I.}. In (\ref{chat}), $p$ denotes the degree of the locally fitted polynomial. The estimator ${\hat g}(x)={\hat c}_0(x)$, defined by (\ref{chat}), is also consistent under the conditions given earlier for the estimator defined by (\ref{ghat}) and~(\ref{ghatArgmin}).
In the particular case $p=1$ we obtain a local-linear
estimator\index{density estimation!local linear estimator|(} of~$g(x)$:
\begin{equation}
{\hat g}(x)=\frac{S_2(x)\,T_0(x)-S_1(x)\,T_1(x)}{S_0(x)\,S_2(x)-S_1(x)^2}\,,\label{loclin}
\end{equation}
where
\begin{equation}
\begin{split}
S_r(x)&=\frac1{nh}\,\sum_{i=1}^n\,
\bigg(\frac{x-X_i}h\bigg)^{\!r}\>
K\bigg(\frac{x-X_i}h\bigg)\,,\\
T_r(x)&=\frac1{nh}\,\sum_{i=1}^n\, Y_i\,
\bigg(\frac{x-X_i}h\bigg)^{\!r}\>
K\bigg(\frac{x-X_i}h\bigg)\,,
\end{split}\label{SxTx}
\end{equation}
$h$ denotes a bandwidth\index{bandwidth|)} and $K$ is a kernel function.
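The local-linear formulae (\ref{loclin}) and (\ref{SxTx}) translate directly into code. In the sketch below (Gaussian kernel and simulated data, both illustrative) the common factor $1/(nh)$ in $S_r$ and $T_r$ is replaced by $1/n$, since any common factor cancels from the ratio.

```python
import numpy as np

def local_linear(x, X, Y, h):
    """Local linear estimator hat g(x) built from the S_r, T_r formulae;
    the common normalisation cancels between numerator and denominator."""
    u = (x - X) / h
    K = np.exp(-0.5 * u**2)                        # Gaussian kernel
    S = [np.mean(u**r * K) for r in range(3)]      # S_0, S_1, S_2
    T = [np.mean(Y * u**r * K) for r in range(2)]  # T_0, T_1
    return (S[2] * T[0] - S[1] * T[1]) / (S[0] * S[2] - S[1] ** 2)

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, 2000)
Y = X**2 + rng.normal(0.0, 0.1, X.size)        # g(x) = x^2, say
# near the boundary x = 0 the local linear fit avoids the design bias
# that afflicts the local constant estimator
print(local_linear(0.0, X, Y, h=0.1), local_linear(0.5, X, Y, h=0.1))
```

Evaluating at the endpoint $x=0$ illustrates the boundary behaviour discussed above: the local linear fit remains consistent where the local constant fit would not.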
Estimators of all these types can be quickly extended to cases where errors in variables are present, for example as in the models at (\ref{WXU}) and (\ref{WYfromXU}), simply by altering the kernel function $K$ so that it acts to cancel out the influence of the errors. We shall give details in section~2. Section~3 will discuss recently introduced methodology which, from some viewpoints looks quite different from, but is actually almost identical to, kernel methods.
\section{Methodology and theory}
\subsection{Definitions of estimators}
We first discuss a generalisation of the estimator at (\ref{fhat}) to the case where
there are errors in the observations of $X_i$, as per the model at~(\ref{WXU}). In
particular, we assume that we observe data $W_1$, \ldots, $W_n$ which are
independent and identically distributed as $W=X+U$, where $X$ and $U$ are
independent and the distribution of $U$ has known \Index{characteristic
function} $\phi_U$ which does not vanish anywhere on the real line. Let $K$
be a kernel function, write $\phi_K=\int e^{itx}\,K(x)\,dx$ for the associated
Fourier transform\index{Fourier, J. B. J.!Fourier transform|(}, and define
\begin{equation}
K_U(x)=\frac1{2\pi}\int e^{-itx}\;\frac{\phi_K(t)}{\phi_U(t/h)}\;dt\,.\label{KU}
\end{equation}
Then, to construct an estimator ${\hat f}$ of the density $f=f_X$ of $X$, when all
we observe are the contaminated data $W_1$, \ldots, $W_n$, we simply replace $K$ by $K_U$, and $X_i$ by $W_i$, in the definition of ${\hat f}$ at (\ref{fhat}), obtaining the estimator
\begin{equation}
{\hat f}_{{\rm decon}}(x)=\frac1{nh}\,\sum_{i=1}^n\, K_U\Big(\frac{x-W_i}h\Big)\,.\label{fhatdecon}
\end{equation}
Here the subscript `decon' signifies that ${\hat f}_{{\rm decon}}$ involves empirical
deconvolution\index{deconvolution|(}. The adjustment to the kernel takes care of the measurement
error\index{measurement error|)}, and results in consistency in a wide variety of settings. Likewise, if
data pairs $(W_1,Y_1)$, \ldots, $(W_n,Y_n)$ are generated under the model at
(\ref{WYfromXU}) then, to construct the local constant estimator\index{density
estimation!local constant estimator} at (\ref{ghat}), or the local linear
estimator\index{density estimation!local linear estimator|)} defined by (\ref{loclin}) and (\ref{SxTx}), all we do is replace each $X_i$ by $W_i$, and $K$ by~$K_U$. Other local polynomial estimators\index{density estimation!local polynomial estimator} can be calculated using a similar rule, replacing $h^{-r}(x-X_i)^rK\{(x-X_i)/h\}$ in $S_r$ and $T_r$ by $K_{U,r}\{(x-W_i)/h\}$, where
$$
K_{U,r}(x)=\frac1{2\pi i^r}\int e^{-itx}\;\frac{\phi_K^{(r)}(t)}{\phi_U(t/h)}\;dt\,.
$$
The estimator at (\ref{fhatdecon}) dates from work of Carroll and Hall
(1988)\index{Carroll, R. J.} and Stefanski and Carroll
(1990)\index{Stefanski, L.}. Deconvolution-kernel regression
estimators\index{kernel methods!deconvolution-kernel estimator} in the
local-constant\index{density estimation!local constant estimator} case were developed by Fan and
Truong (1993)\index{Fan, J.}\index{Truong, Y. K.}, and extended to the general
local polynomial\index{density estimation!local polynomial estimator} setting by Delaigle {\em et al.}~(2009).
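To make the construction concrete, the sketch below evaluates $K_U$ at (\ref{KU}) by numerical quadrature and forms $\hat f_{\rm decon}$ at (\ref{fhatdecon}). The particular choices are illustrative rather than prescribed by the text: $\phi_K(t)=(1-t^2)^3$ on $|t|\leq1$, and Laplace measurement error, for which $\phi_U(t)=1/(1+\sigma^2t^2)$.

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule along the last axis (spelled out rather than
    relying on np.trapz, which was removed in NumPy 2.0)."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

def K_U(u, h, sigma, n_t=501):
    """Deconvolution kernel K_U(u) = (2 pi)^{-1} int e^{-itu}
    phi_K(t)/phi_U(t/h) dt; with the even, real choices here it reduces
    to (1/pi) int_0^1 cos(tu) (1 - t^2)^3 (1 + sigma^2 t^2 / h^2) dt."""
    t = np.linspace(0.0, 1.0, n_t)
    ratio = (1.0 - t**2) ** 3 * (1.0 + (sigma * t / h) ** 2)
    u = np.atleast_1d(np.asarray(u, dtype=float))
    return trap(np.cos(np.outer(u, t)) * ratio, t) / np.pi

def f_decon(x, W, h, sigma):
    """hat f_decon(x) = (nh)^{-1} sum_i K_U((x - W_i)/h)."""
    return np.array([K_U((xi - W) / h, h, sigma).sum() / (W.size * h)
                     for xi in np.atleast_1d(x)])

rng = np.random.default_rng(4)
n, sigma = 1000, 0.4
X = rng.normal(0.0, 1.0, n)                 # unobserved true values
W = X + rng.laplace(0.0, sigma, n)          # contaminated observations
grid = np.linspace(-6.0, 6.0, 61)
fhat = f_decon(grid, W, h=0.4, sigma=sigma)
area = trap(fhat, grid)
# phi_K(0)/phi_U(0) = 1, so K_U integrates to 1 and hat f_decon has
# total mass (approximately, on this grid) equal to 1
print(area)
```

As noted below, the estimator integrates to~1 but may take negative values at some points, since $K_U$ is not itself a density.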
The kernel $K_U$ is deliberately constructed to be the function whose Fourier
transform\index{Fourier, J. B. J.!Fourier transform} is $\phi_K/\phi_U$. This
adjustment permits cancellation of the influence of errors in variables, as
discussed at the end of section~1.3. To simplify calculations, for example
computation of the integral in (\ref{KU}), we generally choose $K$ not
to be a density function but to be a smooth, symmetric function for which
$\phi_K$ vanishes outside a compact interval. The commonly-used candidates
for $\phi_K$ are proportional to functions that are used for $K$, rather than
$\phi_K$, in the case of regular kernel estimation discussed in section~1.3.
For example, kernels $K$ for which $\phi_K(t)=(1-|t|^r)^s$ for $|t|\leq1$, and
$\phi_K(t)=0$ otherwise, are common; here $r$ and $s$ are integers. Taking
$r=2s=2$, $r=s=2$ and $r=\frac23\,s=2$ corresponds to the Fourier
inverses\index{Fourier, J. B. J.!Fourier inverse} of the Epanechnikov, biweight and
triweight kernels, respectively. Taking $s=0$ gives the inverse of the
uniform kernel, i.e.~the \index{kernel methods!sinc kernel}sinc kernel, which we shall meet again in section~3. Further information about kernel choice is given by Delaigle and Hall (2006).
These kernels, and others, have the property that $\phi_K(t)=1$ when $t=0$,
thereby guaranteeing that $\int K=1$. The latter condition ensures that the
density estimator\index{density estimation}, defined at (\ref{fhatdecon}) and
constructed using this kernel, integrates to~1. (However, the estimator
defined by (\ref{fhatdecon}) will generally take negative values at some
points~$x$.) The normalisation property is not so important when the kernel
is used to construct regression estimators\index{regression!regression estimator}, where the effects of multiplying $K$ by a constant factor cancel from the `deconvolution' versions of formulae (\ref{ghat}) and~(\ref{loclin}), and likewise vanish for all deconvolution-kernel estimators\index{kernel methods!deconvolution-kernel estimator} based on local polynomial methods\index{density estimation!local polynomial estimator}.
Note that, as long as $\phi_K$ and $\phi_U$ are supported either on the whole real line or on a symmetric compact domain, the kernel $K_U$, defined by (\ref{KU}), and its generalised form $K_{U,r}$, are real-valued. Indeed, using properties of the complex conjugate of Fourier transforms\index{Fourier, J. B. J.!Fourier transform|)} of real-valued functions, and the change of variable $u=-t$, we have, using the notation $\overline a(t)$ for the complex conjugate of a complex-valued function $a$ of a real variable $t$,
\begin{align*}
\overline K_{U,r}(x)
&=(-1)^{-r} \frac1{2\pi i^r}\int e^{itx}\;\frac{\overline{\phi_K^{(r)}}(t)}{\overline\phi_U(t/h)}\;dt\\
&=(-1)^{-r} \frac1{2\pi i^r}\int e^{itx}\;\frac{(-1)^{-r}\phi_K^{(r)}(-t)}{\phi_U(-t/h)}\;dt\\
&= \frac1{2\pi i^r}\int e^{-iux}\;\frac{\phi_K^{(r)}(u)}{\phi_U(u/h)}\;du
=K_{U,r}(x).
\end{align*}
In practice it is almost always the case that the distribution of $U$ is
symmetric\index{symmetric distribution}, and in the discussion of variance in section~2.2, below, we shall make this assumption. We shall also suppose that $K$ is symmetric, again a condition which holds almost invariably in practice.
The estimators discussed above were based on the assumption that
the characteristic function\index{characteristic function|(} $\phi_U$ of the errors in variables is
known. This enabled us to compute the deconvolution kernel $K_U$
at~(\ref{KU}). In cases where the distribution of $U$ is not known, but can
be estimated from replicated data (see section~1.2), we can replace $\phi_U$
by an estimator of it and, perhaps after a little regularisation, compute an
empirical version of~$K_U$. This can give good results, in both theory and
practice. In particular, in many cases the resulting estimator of the density
of $X$, or the regression mean\index{regression!regression mean} $g$, can be shown to have the same first-order properties as estimators computed under the assumption that the distribution of $U$ is known. Details are given by Delaigle {\em et al.}~(2008).
Methods for choosing the \Index{smoothing parameter}, $h$, in the estimators
discussed above have been proposed by Hesse (1999)\index{Hesse, C.}, Delaigle
and Gijbels (2004a,b) and Delaigle and Hall~(2008).
\subsection{Bias and variance}\index{bias}
The expected value of the estimator at (\ref{fhatdecon}) equals
\begin{align}
E\{{\hat f}_{{\rm decon}}(x)\}&=\frac1{2\pi h}\int
E\big[e^{-it\{x-W\}/h}\big]\;\frac{\phi_K(t)}{\phi_U(t/h)}\;dt\notag \\
&=\frac1{2\pi}\int
e^{-itx}\frac{\phi_K(ht)}{\phi_U(t)}\;\phi_X(t)\,\phi_U(t)\,dt\notag \\
&=\frac1{2\pi}\int e^{-itx}\phi_K(ht)\,\phi_X(t)\,dt
=\frac1h\int K(u/h)\,f(x-u)\,du\notag \\
&=E\{{\hat f}(x)\}\,,\label{Efhatdecon}
\end{align}
where the first equality uses the definition of $K_U$, and the fourth equality
uses Plancherel's identity\index{Plancherel, M.!Plancherel's identity}. Therefore the deconvolution estimator ${\hat f}_{{\rm decon}}(x)$, calculated from data contaminated by measurement errors, has exactly the same mean, and therefore the same bias, as ${\hat f}(x)$, which would be computed using values of $X_i$ observed without measurement error. This confirms that using the deconvolution kernel estimator does indeed allow for cancellation of measurement errors, at least in terms of their presence in the mean.
Of course, variance is a different matter. Since ${\hat f}_{{\rm decon}}(x)$ equals a sum of independent random variables,
\begin{align}
&{\rm var}\{{\hat f}_{{\rm decon}}(x)\}\notag \\
&{}\qquad=\big(nh^2\big)^{-1}\,{\rm var}\Big\{K_U\Big(\frac{x-W}h\Big)\Big\}\notag \\
&{}\qquad\sim(nh)^{-1}\,f_W(x)\,\int K_U^2
=\frac{f_W(x)}{2\pi nh}\;\int\phi_K(t)^2\,|\phi_U(t/h)|^{-2}\,dt\,.\label{varfhat}
\end{align}
(Here the relation $\sim$ means that the ratio of the left- and right-hand
sides converges to~1 as $h\ra0$.) Thus it can be seen that the variance of
${\hat f}_{{\rm decon}}(x)$ depends intimately on tail behaviour\index{tail behaviour|(} of the characteristic function\index{characteristic function|)} $\phi_U$ of the measurement-error distribution.
If $\phi_K$ vanishes outside a compact set, which, as we noted in section~2.1,
is generally the case, and if $|\phi_U|$ is asymptotic to a positive regularly
varying function\index{regular variation} $\psi$ (see Bingham {\em et al.},~
1989)\index{Bingham, N. H.}\index{Goldie, C. M.}\index{Teugels, J. L.}, in the sense that $|\phi_U(t)|\asymp\psi(t)$ (meaning that the ratio of both sides is bounded away from zero and infinity as $t\to\infty$), then the integral on the right-hand side of (\ref{varfhat}) is bounded between two constant multiples of $\psi(1/h)^{-2}$ as $h\ra0$. Therefore, provided that $f_W(x)>0$,
\begin{equation}
{\rm var}\{{\hat f}_{{\rm decon}}(x)\}\asymp(nh)^{-1}\,\psi(1/h)^{-2}\label{varasymp}
\end{equation}
as $n$ increases and $h$ decreases.
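To make the dependence on tail behaviour concrete, the following sketch evaluates the variance integral in (\ref{varfhat}) numerically. The choices are assumptions for illustration: Laplace-type errors with $\phi_U(t)=1/(1+\sigma^2t^2)$, so that $\psi(t)\propto t^{-2}$, and a polynomial $\phi_K$ supported on $[-1,1]$. The integral should then grow like $\psi(1/h)^{-2}\propto h^{-4}$.

```python
import numpy as np

sigma = 0.5
t = np.linspace(-1.0, 1.0, 4001)      # support of the assumed phi_K
phi_K = (1.0 - t**2) ** 3

def var_integral(h):
    # int phi_K(t)^2 |phi_U(t/h)|^{-2} dt for the assumed Laplace-type phi_U
    phi_U = 1.0 / (1.0 + (sigma * t / h) ** 2)
    return np.trapz(phi_K**2 / phi_U**2, t)

# Since psi(1/h)^{-2} is proportional to h^{-4} here, the normalised values
# h^4 * var_integral(h) should approach a constant as h decreases.
ratios = [h**4 * var_integral(h) for h in (0.2, 0.1, 0.05, 0.025)]
print(ratios)
```

The stabilising ratios illustrate the order-of-magnitude statement (\ref{varasymp}); the limiting constant is $\sigma^4\int t^4\phi_K(t)^2\,dt$ for these assumed choices.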
Recall that we are assuming that $f_U$ and $K$ are both symmetric functions.
If the density $f$ of $X$ has two bounded and continuous derivatives, and if $K$ is bounded and symmetric and satisfies $\int x^2\,|K(x)|\,dx<\infty$, then the bias of ${\hat f}_{{\rm decon}}$ can be found from (\ref{Efhatdecon}), using elementary calculus and arguments familiar in the case of standard kernel estimators:
\begin{align}
{\rm bias}(x)&=E\{{\hat f}_{{\rm decon}}(x)\}-f(x)=E\{{\hat f}(x)\}-f(x)\notag \\
&=\int K(u)\,\{f(x-hu)-f(x)\}\,du
={\textstyle \frac12}\,h^2\,\kappa\,f''(x)+o\big(h^2\big)\label{bias}
\end{align}
as $h\ra0$, where $\kappa=\int x^2\,K(x)\,dx$. Therefore, provided that $f''(x)\neq0$, the bias of the conventional kernel estimator ${\hat f}(x)$ is exactly of size $h^2$ as $h\ra0$. Combining this property, (\ref{Efhatdecon}) and (\ref{varasymp}) we deduce a relatively concise asymptotic formula for the \Index{mean squared error} of~${\hat f}_{{\rm decon}}(x)$:
\begin{equation}
E\{{\hat f}_{{\rm decon}}(x)-f(x)\}^2\asymp h^4+(nh)^{-1}\,\psi(1/h)^{-2}\,.\label{MSE}
\end{equation}
For a given error distribution we can work out the behaviour of $\psi(1/h)$ as
$h\ra0$, and then from (\ref{MSE}) we can calculate the
optimal \Index{bandwidth} and determine the exact rate of convergence of
${\hat f}_{{\rm decon}}(x)$ to $f(x)$, in mean square. In many instances this rate is
optimal, in a \Index{minimax} sense; see, for example, Fan~(1991)\index{Fan,
J.}. It is also generally optimal in the case of the
errors-in-variables\index{errors in variables} regression estimators discussed in section~2.1, based on deconvolution-kernel versions of local polynomial estimators\index{density estimation!local polynomial estimator}. See Fan and Truong~(1993)\index{Fan, J.}\index{Truong, Y. K.}.
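As a worked illustration of this programme (the Laplace-type choice $\alpha=2$, and hence the exponent $-1/9$, are assumptions rather than part of the text), suppose $\psi(1/h)^{-2}$ is proportional to $h^{-4}$; then (\ref{MSE}) behaves like $h^4+(nh^5)^{-1}$, whose minimiser is $h=(5/4)^{1/9}\,n^{-1/9}$, and a crude grid search recovers it.

```python
import numpy as np

def mse_proxy(h, n):
    # h^4 + 1/(n h^5): the shape of (MSE) when psi(1/h)^{-2} ~ h^{-4} (alpha = 2)
    return h**4 + 1.0 / (n * h**5)

def optimal_h(n, grid=np.logspace(-3, 0, 20000)):
    return grid[np.argmin(mse_proxy(grid, n))]

# Calculus gives the minimiser h = (5/(4n))^{1/9}; compare with the grid search.
for n in (10**3, 10**5, 10**7):
    print(n, optimal_h(n), (5 / 4) ** (1 / 9) * n ** (-1 / 9))
```

Substituting the optimal bandwidth back into the proxy gives mean squared error of order $n^{-4/9}$, the rate discussed for second-order kernels below with $\alpha=2$.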
Therefore, despite their almost naive simplicity, deconvolution-kernel
estimators of densities and regression functions have features that can hardly
be bettered by more complex, alternative approaches. The results derived in
the previous paragraph, and their counterparts in the regression case, imply
that the estimators are limited by the extent to which they can
recover \Index{high-frequency information} from the data. (This is reflected
in the fact that the rate of decay of the tails\index{tail behaviour|)} of $\phi_U$ drives the results on convergence rates.) However, the fact that the estimators are nevertheless optimal, in terms of their rates of convergence, implies that this restriction is inherent to the problem, not just to the estimators; no other estimators would have a better convergence rate, at least not uniformly in a class of problems.
\section{Relationship to minimum contrast methods}\index{minimum contrast methods|(}
\subsection{Deconvolution kernel estimators based on the sinc kernel}
The sinc, or Fourier integral, kernel\index{kernel methods!sinc kernel|(} is given by
\begin{equation}
L(x)=\begin{cases}(\pi x)^{-1}\,\sin(\pi x)&\text{if $x\neq0$}\\
1&\text{if $x=0\,.$}\end{cases}\label{Lx}
\end{equation}
Its Fourier transform\index{Fourier, J. B. J.!Fourier transform}, defined as a
Riemann integral\index{Riemann, G. F. B.!Riemann integral}, is the `boxcar
function'\index{boxcar function}, $\phi_L(t)=1$ if $|t|\leq\pi$ and $\phi_L(t)=0$ otherwise. In particular, $\phi_L$ vanishes outside a compact set, which property, as we noted in section~2.1, aids computation. The version of $K_U$, at (\ref{KU}), for the sinc kernel is
$$
L_U(x)=\frac1{2\pi}\int_{-\pi}^{\pi} e^{-itx}\,\phi_U(t/h)^{-1}\,dt
=\frac1\pi\int_0^{\pi}\cos(tx)\,\phi_U(t/h)^{-1}\,dt\,,
$$
where the second identity holds if the distribution of $U$ is
symmetric\index{symmetric distribution}, and where we assume that $\phi_U$ has no zeros on the real line.
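A quick numerical sanity check, using the fact that the Fourier transform of $L(x)=\sin(\pi x)/(\pi x)$ is the indicator function of $[-\pi,\pi]$: computed by quadrature, $L_U$ must collapse back to the sinc kernel itself when there is no measurement error ($\phi_U\equiv1$). The Laplace error below is an assumed illustration.

```python
import numpy as np

t = np.linspace(0.0, np.pi, 2001)   # phi_L(t) = 1 on |t| <= pi for L(x) = sin(pi x)/(pi x)

def L_U(x, phi_U=lambda s: np.ones_like(s), h=0.5):
    # (1/pi) * int_0^pi cos(t x) / phi_U(t/h) dt  (trapezoid rule)
    return np.trapz(np.cos(t * x) / phi_U(t / h), t) / np.pi

xs = np.linspace(-3.0, 3.0, 13)
no_error = np.array([L_U(x) for x in xs])
print(np.max(np.abs(no_error - np.sinc(xs))))        # ~0: reduces to the sinc kernel

phi_U_laplace = lambda s: 1.0 / (1.0 + 0.04 * s**2)  # assumed Laplace errors
print(L_U(0.0, phi_U_laplace))                       # exceeds 1: deconvolution inflates L
```

Dividing by $\phi_U(t/h)$ amplifies high frequencies, which is why the deconvolution version of the kernel is larger at the origin than $L$ itself.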
The kernel $L$ is sometimes said to be of `infinite order'\index{kernel
methods!infinite order kernel}, in the sense that if $a$ is any function with an infinite number of bounded, integrable derivatives then
\begin{equation}
\int\bigg[\int\{a(x+hu)-a(x)\}\,L(u)\,du\bigg]^2\,dx=O\big(h^r\big)\label{intintaa}
\end{equation}
as $h\da0$, for all $r>0$. If $K$ were of finite order then (\ref{intintaa}) would hold only for a finite range of values of $r$, no matter how many derivatives the function $a$ enjoyed. For example, if $K$ were a symmetric function for which $\int u^2\,K(u)\,du\neq0$, and if we were to replace $L$ in (\ref{intintaa}) by $K$, then (\ref{intintaa}) would hold only for $r\leq4$, not for all~$r$. In this case we would say that $K$ was of second order\index{kernel
methods!second-order kernel|(}, because
$$
\int\{a(x+hu)-a(x)\}\,K(u)\,du=O\big(h^2\big)\,.
$$
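The second-order behaviour is easy to confirm numerically, with assumed choices for illustration: the Epanechnikov kernel $K(u)=\frac34\,(1-u^2)$ on $[-1,1]$ and $a$ equal to the standard normal density. The ratio of $\int\{a(x+hu)-a(x)\}\,K(u)\,du$ to $h^2$ should stabilise near $\frac12\,\kappa\,a''(x)$, where $\kappa=\int u^2K(u)\,du=1/5$.

```python
import numpy as np

u = np.linspace(-1.0, 1.0, 2001)
K = 0.75 * (1.0 - u**2)                                   # assumed second-order kernel
a = lambda x: np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)  # standard normal density
x0 = 0.5

vals = [np.trapz((a(x0 + h_ * u) - a(x0)) * K, u) / h_**2
        for h_ in (0.4, 0.2, 0.1, 0.05)]
print(vals)   # stabilises near kappa * a''(x0) / 2, with a''(x) = (x^2 - 1) a(x)
```

For the sinc kernel the corresponding ratio would tend to zero faster than any power of $h$ for smooth $a$, which is the `infinite order' property.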
If we take $a$ to be the density, $f$, of the random variable $X$, and take $K$ in the definition of ${\hat f}$ at (\ref{fhat}) to be the sinc kernel, $L$, then (\ref{intintaa}) equals the integral of the squared \Index{bias} of~${\hat f}$. Therefore, in the case of a very smooth density, the `infinite order' property of the sinc kernel ensures particularly small bias, in an average sense.
Properties of conventional kernel density estimators, but founded on the sinc
kernel, for data without measurement errors, have been studied by, for
example, Davis (1975, 1977)\index{Davis, K. B.}. Glad {\em et al.}~(1999)\index{Glad,
I. K.}\index{Hjort, N. L.}\index{Ushakov, N.} have provided a good survey of
properties of sinc kernel methods for density estimation, and have argued that
those estimators have received an unfairly bad press. Despite criticism of
sinc kernel estimators (see e.g.~Politis and Romano,~1999)\index{Politis,
D. N.}\index{Romano, J. P.}, the approach is ``more accurate for quite moderate values of the sample size, has better asymptotics in non-smooth cases (the density to be estimated has only first derivative), [and] is more convenient for bandwidth selection etc''\index{bandwidth} than its conventional competitors, suggest Glad {\em et al.}~(1999).
The property of greater accuracy is borne out in both theoretical and numerical studies, and derives from the infinite-order property noted above. Indeed, if $f$ is very smooth then the low level of average squared bias can be exploited to produce an estimator ${\hat f}$ with particularly low mean squared error, in fact of order $n^{-1}$ in some cases. The most easily seen disadvantage of sinc-kernel density estimators is their tendency to suffer from spurious oscillations, inherited from the infinite number of oscillations of the kernel itself.
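The oscillations are easy to exhibit numerically, even without measurement error. In the sketch below (simulated standard normal data and an assumed bandwidth, chosen purely for illustration) the sinc-kernel density estimate dips below zero in the tails, something a nonnegative second-order kernel cannot do.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=200)
h = 0.3

def f_hat_sinc(x):
    # Conventional kernel density estimator with the sinc kernel L = np.sinc
    return np.mean(np.sinc((x - X) / h)) / h

xs = np.linspace(-10.0, 10.0, 2001)
fh = np.array([f_hat_sinc(x) for x in xs])
print(fh.min())   # negative values in the tails: the spurious oscillations
```

The estimate inherits the sign changes of $L$ itself, oscillating with period of order $h$ far from the data.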
These properties can be expected to carry over to density and regression
estimators based on
\index{data!contaminated data}contaminated data, when we use the sinc kernel. To give a
little detail in the case of density estimation from data contaminated by
measurement errors, we note that if the density $f$ of $X$ is infinitely
differentiable, but we observe only the contaminated data $W_1$, \ldots, $W_n$
distributed as $W$, generated as at (\ref{WXU}); if we use the density estimator at
(\ref{fhat}), but computed using $K=L$, the sinc kernel; and if $|\phi_U(t)|\geq
C\,(1+|t|)^{-\alpha}$ for constants $C$, $\alpha>0$; then, in view of (\ref{Efhatdecon}), (\ref{varfhat}) and (\ref{intintaa}), we have for all $r>0$,
\begin{align}
&\int\{{\hat f}_{{\rm decon}}(x)-f(x)\}^2\,dx\notag \\
&{}\qquad=\int\{E{\hat f}(x)-f(x)\}^2
+\big(nh^2\big)^{-1}\,\int{\rm var}\Big\{L_U\Big(\frac{x-W}h\Big)\Big\}\,dx\notag \\
&{}\qquad\leq\int\bigg[\int\{f(x+hu)-f(x)\}\,L(u)\,du\bigg]^2\,dx
+(nh)^{-1}\,\int L_U^2\notag \\
&{}\qquad=O\bigg\{h^r+(nh)^{-1}\,\int_{-\pi}^{\pi}|\phi_U(t/h)|^{-2}\,dt\bigg\}\notag \\
&{}\qquad=O\Big\{h^r+\big(nh^{2\alpha+1}\big)^{-1}\Big\}\,.\label{intfhat}
\end{align}
It follows that, if $f$ has infinitely many integrable derivatives and if the tails of $\phi_U(t)$ decrease at no faster than a polynomial rate as $|t|\to\infty$, then the \Index{bandwidth} $h$ can be chosen so that the mean integrated squared error of a deconvolution kernel estimator of $f$, using the sinc kernel, converges at rate $O(n^{\epsilon-1})$ for any given $\epsilon>0$.
This very fast rate of convergence contrasts with that which occurs if the
kernel $K$ is of only finite order. For example, if $K$ is a second-order
kernel, in which case (\ref{intintaa}) holds only for $r\leq 4$ when $L$ is replaced by $K$, the argument at (\ref{intfhat}) gives:
$$
\int\{{\hat f}_{{\rm decon}}(x)-f(x)\}^2\,dx
=O\Big\{h^4+\big(nh^{2\alpha+1}\big)^{-1}\Big\}\,.
$$
The fastest rate of convergence of the right-hand side to zero is attained with $h=n^{-1/(2\alpha+5)}$, giving
$$
\int\{{\hat f}_{{\rm decon}}(x)-f(x)\}^2\,dx=O\big(n^{-4/(2\alpha+5)}\big)\,.
$$
In fact, this is generally the best rate of convergence of mean integrated
squared error that can be obtained using a second-order kernel when
the \Index{characteristic function} $\phi_U$ decreases like $|t|^{-\alpha}$ in the
tails\index{tail behaviour}, even if the density $f$ is exceptionally smooth. Nevertheless, second-order kernels\index{kernel methods!second-order kernel|)} are often preferred to the sinc kernel in practice, since they do not suffer from the unwanted oscillations that afflict estimators based on the sinc kernel.\index{kernel methods!sinc kernel|)}
\subsection{Minimum contrast estimators, and their relationship to
deconvolution kernel estimators}
In the context of the measurement error model at (\ref{WXU}), Comte {\em et
al.} (2007)\index{Comte, F.|(}\index{Rozenholc, Y.|(}\index{Taupin, M.-L.|(} suggested an interesting minimum contrast estimator of the density $f$ of $X$. Their approach has applications in a variety of other settings (see Comte \etalc2006, 2008; Comte and Taupin, 2007), including to the regression model at (\ref{WYfromXU}), and the conclusions we shall draw below apply in these cases too. Therefore, for the sake of brevity we shall treat only the density deconvolution problem.
To describe the minimum contrast estimator in that setting, define
$$
{\hat a}_{k\ell}=\frac1{2\pi n}\,
\sum_{j=1}^n\,\int\exp(-it\,W_j)\,\phi_{L_{k\ell}}(t)\,\phi_U(t)^{-1}\,dt\,,
$$
where $\phi_{L_{k\ell}}$ denotes the Fourier transform\index{Fourier, J. B. J.!Fourier transform} of the function $L_{k\ell}$ defined by $L_{k\ell}(x)=\ell^{1/2}\,L(\ell\,x-k)$, $k$ is an integer and $\ell>0$. In this notation the minimum contrast nonparametric density estimator~is
$$
{\tilde f}(x)=\sum_{k=-k_0}^{k_0}\,{\hat a}_{k\ell}\,L_{k\ell}(x)\,.
$$
There are two tuning parameters, $k_0$ and~$\ell$. Comte {\em et al.}~(2007) suggest choosing $\ell$ to minimise a penalisation criterion.
The resulting minimum contrast estimator is called a penalised contrast
density estimator\index{penalised contrast methods}. The penalisation
criterion suggested by Comte {\em et al.}~(2007) for choosing $\ell$ is related to
cross-validation, although its exact form, which involves the choice of
additional terms and multiplicative constants, is based on \Index{simulation}
experiments. It is clear on inspecting the definition of ${\tilde f}$ that $\ell$
plays a role similar to that of the inverse of \Index{bandwidth} in a
conventional deconvolution kernel estimator\index{kernel methods!deconvolution-kernel estimator}. In particular, $\ell$ should diverge to infinity with~$n$. Comte {\em et al.}~(2007) suggest taking $k_0=2^m-1$, where $m\geq\log_2(n+1)$ is an integer. In numerical experiments they use $m=8$, which gives good performance in the cases they consider. More generally, $k_0/\ell$ should diverge to infinity as sample size increases.
The minimum contrast density estimator of Comte {\em et al.}~(2007) is actually very
close to the standard deconvolution kernel density estimator at (\ref{fhat}),
where in the latter we use the sinc kernel\index{kernel methods!sinc kernel} at~(\ref{Lx}). Indeed, as the theorem below shows, the two estimators are exactly equal on a grid, which becomes finer as the bandwidth, $h$, for the sinc kernel density estimator decreases. However, this relationship holds only for values of $x$ for which $|x|\leq k_0/\ell$; for larger values of $|x|$ on the grid, ${\tilde f}(x)$ vanishes. (This property is one of the manifestations of the fact that, as noted earlier, $k_0$ and $\ell$ generally should be chosen to depend on sample size in such a manner that $k_0/\ell\to\infty$ as $n\to\infty$.)
\begin{nthm}Let ${\hat f}_{{\rm decon}}$ denote the deconvolution kernel density estimator at $(\ref{fhat})$, constructed using the sinc kernel and employing the bandwidth $h=\ell^{-1}$. Then, for any point $x=hk$ with $k$ an integer, we have
$$
{\tilde f}(x)=\begin{cases}{\hat f}_{{\rm decon}}(x)&\text{if $|x|\leq k_0/\ell$}\\
0&\text{if $|x|>k_0/\ell\,.$}\end{cases}
$$
\end{nthm}\index{bandwidth}
A proof of the theorem will be given in section~3.3. Between grid points the estimator ${\tilde f}$ is a nonstandard interpolation of values of the kernel estimator ${\hat f}_{{\rm decon}}$. Note that, if we take $h=\ell^{-1}$, the weights $L(\ell x-k)=L\{(x-hk)/h\}$ used in the interpolation decrease quickly as $k$ moves further from $x/h$, and, except for small $k$, neighbour weights are close in magnitude but differ in sign. (Here $L$ is the sinc kernel\index{kernel methods!sinc kernel} defined at~(\ref{Lx}).) In effect, the interpolation is based on rather few values ${\hat f}_{{\rm decon}}(k/\ell )$ corresponding to those $k$ for which $k$ is close to $x/h$.
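The theorem can also be checked numerically. The sketch below uses assumed choices for illustration (normal $X$, Laplace $U$, $\ell=2$, $k_0=7$), computes the coefficients $\hat a_{k\ell}$ and the sinc-kernel deconvolution estimator directly in Fourier form, and confirms agreement at grid points $x=k/\ell$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 100, 0.3
W = rng.normal(size=n) + rng.laplace(scale=sigma, size=n)   # W = X + U
ell, k0 = 2.0, 7                   # tuning parameters; bandwidth h = 1/ell
phi_U = lambda s: 1.0 / (1.0 + sigma**2 * s**2)             # Laplace cf
t = np.linspace(-ell * np.pi, ell * np.pi, 4001)            # support of phi_L(t/ell)

def f_decon(x):
    # Sinc-kernel deconvolution estimator, written directly in Fourier form
    integ = np.exp(-1j * t * (x - W[:, None])) / phi_U(t)
    return np.trapz(integ.mean(axis=0), t).real / (2.0 * np.pi)

def a_hat(k):
    # Coefficient of the minimum contrast estimator
    integ = np.exp(-1j * t * (W[:, None] - k / ell)) / phi_U(t)
    return np.trapz(integ.mean(axis=0), t).real / (2.0 * np.pi * np.sqrt(ell))

def f_tilde(x):
    ks = np.arange(-k0, k0 + 1)
    return sum(a_hat(k) * np.sqrt(ell) * np.sinc(ell * x - k) for k in ks)

for k in (-3, 0, 2):               # grid points x = k/ell with |x| <= k0/ell
    print(abs(f_tilde(k / ell) - f_decon(k / ell)))         # ~0 at grid points
```

Between grid points $\tilde f$ interpolates the values of $\hat f_{\rm decon}$ with sinc weights, as described above.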
In practice the two estimators are almost indistinguishable. For example,
Figure~3.1 compares them using the \Index{bandwidth} that minimises the
integrated squared difference between the true density and the estimator, for
one generated sample in the case where $X$ is normal N$(0,1)$, $U$ is
Laplace\index{Laplace, P.-S.!Laplace distribution} with ${\rm var}(U)/{\rm var}(X)=0.1$, and $n=100$ or $n=1000$. In the left graphs the two estimators can hardly be distinguished. The right graphs show magnifications of these estimators for $x\in[-{\textstyle \frac12},0]$. Here it can be seen more clearly that the minimum contrast estimator is an approximation of the deconvolution kernel estimator, and is exactly equal to the latter at $x=0$.
\begin{figure}[h!t]
\mbox{\includegraphics[scale=0.4]{NormLap100.ps}\quad
\includegraphics[scale=0.4]{NormLap100ZOOM.ps}}
\mbox{\includegraphics[scale=0.4]{NormLap1000.ps}\quad
\includegraphics[scale=0.4]{NormLap1000ZOOM.ps}}
\caption{Deconvolution kernel density estimator (DKDE) and minimum contrast
estimator (PCE) for a particular sample of size $n=100$ (upper panels) or
$n=1000$ (lower panels) in the case ${\rm var}(U)/{\rm var}(X)=0.1$. Right panels show
magnifications of the estimates for $x\in[-0.5,0]$ in the respective upper
panels.}
\end{figure}
These results highlight the fact that the differences in performance between
the two estimators derive more from different tuning parameter choices than
from anything else. In their comparison, Comte {\em et al.}~(2007)\index{Comte,
F.|)}\index{Rozenholc, Y.|)}\index{Taupin, M.-L.|)} used a minimum contrast
estimator with the sinc kernel $L$ and a \Index{bandwidth} chosen by
penalisation, whereas for the deconvolution kernel estimator they employed a
conventional second-order kernel $K$ and a different bandwidth-choice
procedure. Against the background of the theoretical analysis in section~3.1,
the different kernel choices (and different ways of choosing smoothing
parameters)\index{smoothing
parameter} explain the differences observed between the penalised contrast
density estimator\index{penalised contrast methods} and the deconvolution
kernel density estimator based on a second-order kernel\index{kernel
methods!second-order kernel}.
\subsection{Proof of Theorem}
Note that $\phi_{L_{k\ell}}(t)=\ell^{-1/2}\,\exp(itk/\ell)\,\phi_L(t/\ell)$ and
$$
{\hat a}_{k\ell}=\frac1{2n\pi\ell^{1/2}}\;
\sum_{j=1}^n\,\int_{-\ell\pi}^{\ell\pi}\exp\big\{-it\,\big(k\,\ell^{-1}-W_j\big)\big\}\,
\frac{\phi_L(t/\ell)}{\phi_U(t)}\;dt\,.
$$
Therefore,
\begin{align}
&{\tilde f}(x)\notag \\
&{}\quad=\frac1{2n\pi}\sum_{k=-k_0}^{k_0}L(\ell x-k)
\sum_{j=1}^n\,\int_{-\ell\pi}^{\ell\pi}\exp\big\{-it\big(k\ell^{-1}-W_j\big)\big\}
\frac{\phi_L(t/\ell)}{\phi_U(t)}\,dt\notag \\
&{}\quad=\sum_{k=-k_0}^{k_0}\,L(\ell x-k)\,{\hat f}_{{\rm decon}}(k/\ell)\,.\label{tildef}
\end{align}
If $r$ is a nonzero integer then $L(r)=0$. Therefore, if $x=sh=s/\ell$ for an integer $s$ then $L(\ell x-k)=0$ whenever $k\neq s$, and $L(\ell x-k)=1$ if $k=s$. Hence, (\ref{tildef}) implies that ${\tilde f}(x)={\hat f}_{{\rm decon}}(x)$ if $|s|\leq k_0$, and ${\tilde f}(x)=0$ otherwise.\index{kernel methods|)}\index{deconvolution|)}\index{minimum contrast methods|)}
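The key fact used here, that $L$ vanishes at nonzero integers, is immediate numerically: numpy's normalised sinc is exactly the kernel at~(\ref{Lx}).

```python
import numpy as np

# np.sinc(x) = sin(pi x)/(pi x), with np.sinc(0) = 1: exactly the kernel L at (Lx)
L = np.sinc
print(L(0.0), L(np.arange(1, 6)))   # L(0) = 1; values at the integers are zero
                                    # to machine precision
```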
\section{Introduction}
More than 30 years ago, in 1975 T.~Regge and C.~Teitelboim
suggested a formulation of gravity \cite{regge} similar to
the formulation of string theory. They assumed that our space-time
is a four-dimensional surface in ten-dimensional Minkowski space
$R^{1,9}$ with one timelike and nine spacelike dimensions. In
this case the variables describing the gravity are the embedding
function of this surface into the ambient space. The authors take the standard
Einstein-Hilbert expression for the action. In this
expression they replace the metric by the induced metric
expressed in terms of the embedding function. We will call this
formulation of gravity the embedding theory. In this approach the
equations of motion (the ``Regge-Teitelboim equations'') appear to be
more general than the Einstein equations and they contain extra solutions.
To overcome the problem of extra solutions T.~Regge and
C.~Teitelboim suggested in \cite{regge} to impose additional
constraints $G_{\mu\bot}=0$ (``Einstein's constraints''), where
$G_{\mu\nu}$ is the Einstein tensor, $\mu,\nu,\ldots=0,1,2,3$, and
the symbol $\bot$ denotes the direction orthogonal to the
constant time surface. While constructing the canonical formalism
these constraints are treated similarly to the primary
constraints, resulting in a system of eight constraints. We
will call the theory arising from such approach the
Regge-Teitelboim formulation of gravity, in contrast to the embedding theory.
The approach to the gravity based on the consideration of a surface
in a flat Minkow\-ski space could be more convenient than a
standard approach when we try to develop a quantum theory of
gravity, since in this case it is possible to formulate the causality
principle more clearly. In quantum field theory the causality principle usually
means the commutativity of operators related
to areas separated by a spacelike interval.
This principle is difficult to formulate in the framework of the standard gravity
formulation in terms of the metric $g_{\mu\nu}$, because the interval
between points is determined by the metric, which is itself an
operator. Therefore, for two given points of space-time, it is impossible to
determine, independently of a specific state, what kind of interval
separates them. In the case of the description of
gravity as a dynamics of a three-dimensional surface in a flat
ambient space, we can try to work out a quantum field theory
giving this gravity in the classical limit. In this case the problem of formulation of the
causality principle would be
solved, since the causality in the flat ambient space can be
determined by standard means of quantum field theory.
The Regge-Teitelboim formulation of gravity
has been discussed in the work \cite{deser}
published immediately after \cite{regge}.
The authors remarked that the Regge-Teitelboim equations
are trilinear in the second derivatives of the embedding function.
This fact is very significant for this approach,
as it obstructs linearization of the theory near a flat surface.
Also in \cite{deser} the problem of absence of uniqueness of the embedding is
discussed.
The lack of uniqueness raises the question of whether the transition from one surface to
another with the same
metric corresponds to a change of some physical degrees of freedom, or whether such a
transition should be
regarded as ``a change of the embedding gauge''. The discussion of these problems
is beyond the scope of our article.
In the paper \cite{deser} it is also noted that the artificial, {\it ad hoc}
imposition of additional constraints on the theory seems not
quite satisfactory, and that a better alternative would
be to find another action whose Euler-Lagrange equation would be
equivalent to Einstein equations. Such an action was suggested in the paper
\cite{tmf07}. In Section~4 of our paper we clarify the way of building the action and
discuss the meaning of the existence of additional first-class constraints in the
canonical formalism.
After the articles \cite{regge,deser} the idea of embedding was often used
for the description of gravity. In particular, the
canonical formalism for the embedding theory without imposing
Einstein's constraints was investigated in \cite{tapia,frtap,rojas06}.
Such a canonical formalism turns out to be very complicated. Among
recent works using the idea of embedding we mark
\cite{faddeev1,faddeev2,rojas09}. An extended bibliography related to the
embedding theory and similar problems can be found in \cite{tapiaob}.
In the work \cite{regge} the form of the constraints system for
Regge-Teitelboim formulation of gravity was found. Also the problem
has been formulated to investigate the algebra of these constraints
and to verify whether these constraints are the first-class
constraints. However this problem is not completely solved by now.
It is probably due to the fact that one of constraints in
\cite{regge} was written incorrectly, as it was shown in \cite{tmf07},
see details in Sec.~2.
We started to work on this problem in the article \cite{sbshk05}, where it
was analyzed in detail under what conditions the imposition of
Einstein's constraints turns the Regge-Teitelboim equations into the
Einstein equations. This is true in the generic case, i.~e.,
except for some special values of the variables at a fixed instant. The
canonical formalism for Regge-Teitelboim formulation of gravity
was built anew in \cite{tmf07} with a supplementary imposition of Einstein's
constraints. A correct form of all constraints was obtained, however the constraint
algebra has not been
found completely. In this paper we are completing the
solution of the problem. We perform an accurate calculation
of the Poisson brackets between constraints and, as a result,
we obtain a first-class constraint algebra for Regge-Teitelboim
formulation of gravity.
For convenience we describe in Section~2 the
construction of the canonical formalism for Regge-Teitelboim
formulation of gravity following \cite{tmf07}. We do it in
particular in order to explain why we regard that in \cite{regge}
one of the constraints was written incorrectly and to show how we
obtain a correct form of all constraints.
Section~3 contains the main result of this paper. In this section
we find all Poisson brackets between constraints and we obtain
the first-class constraint algebra. We also discuss the relation
between this algebra and the constraint algebra of
Arnowitt-Deser-Misner formalism for Einstein's gravity. The
formalism used in calculations is described in
\cite{tmf07,sbshk05}.
\section{Canonical formalism with additionally imposed Einstein's constraints}
In this section we build a canonical formalism for Regge-Teitelboim
formulation of gravity following \cite{tmf07}. We additionally
impose Einstein's constraints as it was suggested in \cite{regge}.
The embedding function determining the four-dimensional surface
$W^4$ in the flat ten-dimensional space $R^{1,9}$ is the map
\disn{v3.2}{
y^a(x^\mu):R^4\longrightarrow R^{1,9}.
\hfil\hskip 1em minus 1em (\theequation)}
Here and below, the indices $a,b,\dots$ run over the values
$0,1,2,\dots,9$; and $y^a$ are the Lorentzian coordinates in $R^{1,9}$.
The constant metric $\eta_{ab}={\rm diag}(1,-1,-1,\dots,-1)$
of the ambient space $R^{1,9}$ is used to raise and lower indices.
It induces on the surface $W^4$ the metric
\disn{2}{
g_{\mu\nu}=\eta_{ab}\,\partial_\mu y^a\, \partial_\nu y^b=\eta_{ab}\,e^a_\mu e^b_\nu,
\hfil\hskip 1em minus 1em (\theequation)}
where $e^a_\mu\equiv\partial_\mu y^a$.
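As a toy illustration of the induced metric (\ref{2}) (a Euclidean lower-dimensional analogue chosen purely for illustration, not the physical ten-dimensional setting), one can embed a round two-sphere of radius $R$ in $R^3$ and check numerically that $g_{\mu\nu}=\eta_{ab}\,e^a_\mu e^b_\nu$ reproduces the familiar metric ${\rm diag}(R^2,\,R^2\sin^2\theta)$.

```python
import numpy as np

R = 2.0

def y(theta, phi):
    # Embedding function of the round two-sphere into Euclidean R^3
    return R * np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

def induced_metric(theta, phi, eps=1e-6):
    # Tangent vectors e^a_mu from central finite differences of the embedding
    e_theta = (y(theta + eps, phi) - y(theta - eps, phi)) / (2.0 * eps)
    e_phi = (y(theta, phi + eps) - y(theta, phi - eps)) / (2.0 * eps)
    e = np.stack([e_theta, e_phi])      # rows are e^a_mu
    return e @ e.T                      # eta_ab is the identity in this analogue

g = induced_metric(0.7, 1.1)
print(g)   # approx diag(R^2, R^2 sin(theta)^2)
```

In the Lorentzian case of the text, the contraction would carry the signature of $\eta_{ab}$, but the construction is the same.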
We take the theory action in the form of the standard Einstein-Hilbert expression
\disn{39d}{
S=\int d^4x\, \sqrt{-g}\;R,
\hfil\hskip 1em minus 1em (\theequation)}
where we substitute the induced metric expressed in terms of the
embedding function $y^a(x)$ by formula (\ref{2}). We consider
gravity in the absence of matter, because the addition of matter does
not play a fundamental role in the analysis of the theory.
Note that the issue of changing the physical content of the theory
under a non-point change of variables including
time derivatives was studied in paper \cite{gitman}.
It has been shown, under some assumptions, that if after the change of
variables the higher derivatives are contained in the Lagrangian only in the form of a
combination being a total time derivative, then the physical content of the theory
remains unchanged.
Substituting (\ref{2}) in the action (\ref{39d}), the above condition is fulfilled (see
below).
Nevertheless in this case the result \cite{gitman} is inapplicable, as the assumptions
made there are violated.
In particular, the change of variables (\ref{2}) is quadratic
whereas in the paper \cite{gitman} only infinitesimal transformations are considered.
Varying action (\ref{39d}) with respect to $y^a(x)$ gives the
Regge-Teitelboim equations which can be written as
\disn{50}{
G^{\mu\nu}\,b^a_{\mu\nu}=0,
\hfil\hskip 1em minus 1em (\theequation)}
where $G^{\mu\nu}$ is Einstein's tensor, and
\disn{22}{
b^a_{\mu\nu}={\Pi_{\!\!\bot}}^a_b\,\partial_\mu\partial_\nu y^b=\nabla\!_\mu e^a_\nu
\hfil\hskip 1em minus 1em (\theequation)}
is the second fundamental form of the surface. Here $\nabla\!_\mu$ is
the covariant derivative, and the quantity ${\Pi_{\!\!\bot}}^a_b$ is the
projector on the space orthogonal to the surface $W^4$ at a given
point. We note that although the free index $a$ ranges over 10 values,
there are only 6 independent equations; the remaining 4 equations
are satisfied identically because of the properties of the second
fundamental form of the surface.
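A small numerical sketch of this structure, in a Euclidean-signature analogue (an assumption made purely for illustration): with four linearly independent tangent vectors $e^a_\mu$ in a ten-dimensional space, the projector onto the orthogonal complement annihilates the tangent directions, which is exactly why the projection $b^a_{\mu\nu}$ of the second derivatives is orthogonal to the surface.

```python
import numpy as np

rng = np.random.default_rng(5)
e = rng.normal(size=(10, 4))            # four tangent vectors e^a_mu (random stand-ins)
# Projector onto the orthogonal complement of the tangent space (Euclidean analogue)
P_perp = np.eye(10) - e @ np.linalg.inv(e.T @ e) @ e.T

second_derivs = rng.normal(size=10)     # stand-in for d_mu d_nu y^a (one component)
b = P_perp @ second_derivs              # the corresponding component of b^a_{mu nu}
print(np.max(np.abs(e.T @ b)))          # ~0: b is orthogonal to the surface
```

Contracting the equations of motion with tangent vectors therefore gives zero identically, leaving only the normal components as independent equations.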
Besides the solutions of Einstein equations $G^{\mu\nu}=0$,
the equations (\ref{50}) contain extra solutions which can be
excluded (in general case) by imposing at the initial instant
the Einstein's constraints
\disn{50.1}{
n_\mu G^{\mu\nu}=0,
\hfil\hskip 1em minus 1em (\theequation)}
where $n_\mu$ is a unit vector normal to surfaces
$x^0=const$ at each point (see \cite{tmf07,sbshk05}).
For developing a canonical formalism it is convenient to drop the
total derivative term in the integrand in (\ref{39d}) and to rewrite
the action under the Arnowitt-Deser-Misner (ADM) form \cite{adm}:
\disn{77.1}{
S=\int d^4x\, \sqrt{-g}\left((K^i_i)^2-K_{ik} K^{ik}+\tri{R}\right),
\hfil\hskip 1em minus 1em (\theequation)}
where $K_{ik}$ is the second fundamental form of the surface
$t=const$ considered as a submanifold in $W^4$. Here and below,
the indices $i,k,\dots$ range over the values $1,2,3$, and we label the
quantities related to the surface $t=const$ with the digit ``3''
over the letter.
If we rewrite this expression in terms of the embedding function $y^a(x)$
it becomes
\disn{77.2}{
S=\int d^4x \sqrt{-g}\left(
n_a\, n_b\;
\tri{b}^a_{ik}\,\tri{b}^b_{lm}
L^{ik,lm}+\tri{R}\right),
\hfil\hskip 1em minus 1em (\theequation)}
where we introduced
\disn{t7}{
L^{ik,lm}=\tri{g}^{ik}\tri{g}^{lm}-
\frac{1}{2}\left(\tri{g}^{il}\tri{g}^{km}+\tri{g}^{im}\tri{g}^{kl}\right),\quad
L^{ik,lm}=L^{ki,lm}=L^{lm,ik}
\hfil\hskip 1em minus 1em (\theequation)}
(this quantity coincides, up to a factor, with the well-known Wheeler-DeWitt metric).
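The symmetries of $L^{ik,lm}$ stated in (\ref{t7}) can be verified numerically for an arbitrary inverse 3-metric; the matrix below is a randomly generated, positive-definite stand-in used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))
g_inv = A @ A.T + 3.0 * np.eye(3)   # random positive-definite stand-in for g^{ik}

L = (np.einsum('ik,lm->iklm', g_inv, g_inv)
     - 0.5 * (np.einsum('il,km->iklm', g_inv, g_inv)
              + np.einsum('im,kl->iklm', g_inv, g_inv)))

print(np.abs(L - L.transpose(1, 0, 2, 3)).max(),   # L^{ik,lm} = L^{ki,lm}
      np.abs(L - L.transpose(2, 3, 0, 1)).max())   # L^{ik,lm} = L^{lm,ik}
```

Both residuals vanish, reflecting the pair-exchange and index-exchange symmetries that make $L^{ik,lm}$ act naturally on symmetric tensors such as $\tri{b}^a_{ik}$.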
The action (\ref{77.2}) can be rewritten in the form where
the derivatives $\dot y^a\equiv \partial_0 y^a$ of the variables $y^a(x)$
with respect to the time $x^0$ are written explicitly:
\disn{t1}{
S=\int dx^0\, L(y^a,\dot y^a),\no
L=\int d^3x\;\frac{1}{2}\left(
\frac{\dot y^a\;B_{ab}\;\dot y^b}{\sqrt{\dot y^a\;\tri{{\Pi_{\!\!\bot}}}_{ab}\;\dot y^b}}+
\sqrt{\dot y^a\;\tri{{\Pi_{\!\!\bot}}}_{ab}\;\dot y^b}\;B^c_c\right),
\hfil\hskip 1em minus 1em (\theequation)}
where the quantity
\disn{t2}{
B^{ab}=2\sqrt{-\tri{g}}\;\tri{b}^a_{ik}\tri{b}^b_{lm}L^{ik,lm},
\hfil\hskip 1em minus 1em (\theequation)}
as well as the projection operator $\tri{{\Pi_{\!\!\bot}}}_{ab}$ do not contain time derivatives.
We find the generalized momentum $\pi_a$ for the variable
$y^a$ from action (\ref{t1}):
\disn{t10}{
\pi_a=\frac{\delta L}{\delta \dot y^a}=
B_{ab}n^b-\frac{1}{2}n_a\left( n_c B^{cd} n_d-B^c_c\right),
\hfil\hskip 1em minus 1em (\theequation)}
where we use the formula
\disn{n3.1b}{
n^a=\frac{\tri{{\Pi_{\!\!\bot}}}^a_b\,\dot y^b}{\sqrt{\dot y^c\,\tri{{\Pi_{\!\!\bot}}}_{cd}\,\dot y^d}}.
\hfil\hskip 1em minus 1em (\theequation)}
We suppose that besides the primary constraints appearing from
this equality, four Einstein's constraints (\ref{50.1}) are also satisfied.
As shown in \cite{tmf07,sbshk05}, they can be written as
\disn{t13}{
{\cal H}^0=\frac{1}{2}\left( n_c B^{cd} n_d-B^c_c\right)\approx 0,
\hfil\hskip 1em minus 1em (\theequation)}
\vskip -1em
\disn{t16}{
{\cal H}^i=-2\sqrt{-\tri{g}}\;\,\tri{\nabla\!}_k\left( L^{ik,lm}
\,\tri{b}^a_{lm}\, n_a\right)\approx 0.
\hfil\hskip 1em minus 1em (\theequation)}
We note that the definition of the constraint (\ref{t13}) differs from
that used in \cite{tmf07} by a factor of $1/2$.
If we use the constraint (\ref{t13}) in equality (\ref{t10}), the latter takes the simple form
\disn{t17}{
\pi_a=B_{ab}n^b.
\hfil\hskip 1em minus 1em (\theequation)}
Taking into account (\ref{t2}) and the properties of the quantity $\tri{b}^a_{ik}$,
we immediately obtain three primary constraints
\disn{t12}{
\Phi_i=\pi_a\tri{e}^a_i\approx 0.
\hfil\hskip 1em minus 1em (\theequation)}
One more constraint, the fourth one, must appear. In \cite{regge}
it was obtained from the normalization of the unit vector $n^b$ in the form
\disn{g2}{
\left( B^{-1}\pi\right)^2-1\approx 0,
\hfil\hskip 1em minus 1em (\theequation)}
where $B^{-1}$ denoted the inverse of the matrix $B$ in the seven-dimensional
subspace normal to the surface $W^3$. However, this form is incorrect,
because the matrix $B$ generically has rank 6 and therefore
cannot be inverted in that seven-dimensional subspace.
Indeed, the quantity $\tri{b}^a_{ik}$ can be considered as a set
of six vectors (at fixed values of the indices $i,k$, in which it is symmetric).
On the other hand, this quantity satisfies the three identities $\tri{b}^a_{ik}\tri{e}_{a,l}=0$.
Therefore, in the general case there exists a unique vector $w_a$ determined by
the conditions
\disn{t3}{
w_a\tri{e}^a_l=0,\qquad w_a\tri{b}^a_{ik}=0,\qquad |w_a w^a|=1.
\hfil\hskip 1em minus 1em (\theequation)}
The matrix $B^{ab}$ annihilates this vector, which lies in
the seven-dimensional subspace mentioned above; hence $B$ cannot be
inverted in this subspace. Instead of (\ref{g2}), the fourth primary
constraint must be written as
\disn{t18}{
\Psi^4=\pi_a w^a\approx 0
\hfil\hskip 1em minus 1em (\theequation)}
(the reason for this notation will become clear below),
while the normalization condition on the vector $n^b$ leads to no
new restrictions.
Using formulas (\ref{t1}),(\ref{n3.1b}),(\ref{t17}),
one easily finds that the Hamiltonian of the theory
\disn{t18.1}{
H=\int\! d^3x\, \pi_a \dot y^a - L
\hfil\hskip 1em minus 1em (\theequation)}
vanishes. Therefore, the generalized Hamiltonian reduces to a
linear combination of constraints
(\ref{t13}),(\ref{t16}),(\ref{t12}),(\ref{t18}).
In the canonical formalism, constraints must be expressed via
generalized coordinates and momenta, i.~e., in our case via $y^a$ and $\pi_a$
but not $\dot y^a$. Constraints (\ref{t12}) and
(\ref{t18}) satisfy this requirement (we note that the vector $w_a$
determined by conditions (\ref{t3}) depends on $y^a$ but not on
$\dot y^a$), while constraints (\ref{t13}) and (\ref{t16}) do not.
They must therefore be transformed to the necessary
form. For this, we introduce the quantity $\alpha^{ik}_a$
unambiguously determined by the conditions
\disn{t20}{
\alpha^{ik}_a=\alpha^{ki}_a,\quad
\alpha^{ik}_a\, \tri{e}^a_l=0,\quad
\alpha^{ik}_a w^a=0,\quad
\alpha^{ik}_a\, \tri{b}^a_{lm}=\frac{1}{2}\left(\delta^i_l\delta^k_m+\delta^i_m\delta^k_l\right).
\hfil\hskip 1em minus 1em (\theequation)}
The quantity $\alpha^{ik}_a$ can be regarded as the inverse of $\tri{b}^a_{lm}$, and
\disn{t20.1}{
\alpha^{ik}_b\, \tri{b}^a_{ik}=\tri{{\Pi_{\!\!\bot}}}^a_b-\frac{w^aw_b}{w^cw_c},
\hfil\hskip 1em minus 1em (\theequation)}
where the right-hand side is the projector onto the
six-dimensional subspace normal to the surface $W^3$ and to the vector
$w^a$.
It is clear that $\alpha^{ik}_a$, like $w_a$, depends on $y^a$
but not on $\dot y^a$. Relation (\ref{t17}) implies that
\disn{t22}{
\tri{b}^b_{ik}n_b=\frac{1}{2\sqrt{-\tri{g}}}\;
\hat L_{ik,lm}\,\alpha^{lm}_a\,\pi^a,
\hfil\hskip 1em minus 1em (\theequation)}
where
\disn{t21}{
\hat L_{pr,lm}=\frac{1}{2}\left( g_{pr}g_{lm}-g_{pl}g_{rm}-g_{pm}g_{rl}\right),\no
\hat L_{pr,lm}L^{ik,lm}=\frac{1}{2}\left(\delta^i_p\,\delta^k_r+\delta^i_r\,\delta^k_p\right).
\hfil\hskip 1em minus 1em (\theequation)}
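As a side check, the second relation in (\ref{t21}), stating that $\hat L_{pr,lm}$ inverts $L^{ik,lm}$ on symmetric index pairs, is purely algebraic and can be verified numerically for an arbitrary nondegenerate 3-metric. A minimal sketch of such a check (our illustration, not part of the original derivation; the metric is taken positive-definite for simplicity, although the identity is signature-independent):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
g = A @ A.T + 3.0 * np.eye(3)   # a random symmetric positive-definite 3-metric g_{ik}
gi = np.linalg.inv(g)           # the inverse metric g^{ik}

# L^{ik,lm} = g^{ik} g^{lm} - (g^{il} g^{km} + g^{im} g^{kl}) / 2
L = (np.einsum('ik,lm->iklm', gi, gi)
     - 0.5 * (np.einsum('il,km->iklm', gi, gi)
              + np.einsum('im,kl->iklm', gi, gi)))

# \hat L_{pr,lm} = (g_{pr} g_{lm} - g_{pl} g_{rm} - g_{pm} g_{rl}) / 2
Lhat = 0.5 * (np.einsum('pr,lm->prlm', g, g)
              - np.einsum('pl,rm->prlm', g, g)
              - np.einsum('pm,rl->prlm', g, g))

# Contracting over l, m must give the symmetrized Kronecker identity
contr = np.einsum('prlm,iklm->prik', Lhat, L)
d = np.eye(3)
ident = 0.5 * (np.einsum('pi,rk->prik', d, d)
               + np.einsum('pk,ri->prik', d, d))
assert np.allclose(contr, ident)
```

The check passes for any nondegenerate symmetric $g_{ik}$, since no positivity property of the metric is used in the contraction.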
Using formula (\ref{t22}), the constraints (\ref{t13}),(\ref{t16})
can be expressed in terms of $y^a$ and $\pi_a$. It is convenient
to use a linear combination $\Psi^i={\cal
H}^i+\tri{g}^{ik}\Phi_k$ instead of the constraint ${\cal H}^i$.
As a result, we have a set of eight constraints:
\disn{t27}{
\Phi_i=\pi_a\tri{e}^a_i,\qquad
\Psi^i=-\sqrt{-\tri{g}}\;\tri{\nabla\!}_k\!\left(\frac{1}{\sqrt{-
\tri{g}}}\;\pi^a\alpha_a^{ik}\right)+\pi^a\tri{e}_a^i,\qquad
\Psi^4=\pi_a w^a,\no
{\cal H}^0=\frac{1}{4\sqrt{-\tri{g}}}\;\pi^a\alpha_a^{ik}\hat L_{ik,lm}\alpha^{lm}_b\pi^b-\sqrt{-
\tri{g}}\;\tri{R}.
\hfil\hskip 1em minus 1em (\theequation)}
We can see that all constraints except ${\cal H}^0$ are linear
in momentum $\pi^a$, and the constraint ${\cal H}^0$ is quadratic.
\section{Constraint algebra}
In this section we find the exact form of all Poisson brackets
between the constraints. It will be seen that these Poisson
brackets are linear combinations of the constraints; therefore, this set of eight
constraints forms a first-class constraint algebra
for the Regge-Teitelboim formulation of gravity. We omit some
tedious algebraic transformations, which rely on formulas described in \cite{tmf07,sbshk05}.
It is convenient to work with constraints smeared with arbitrary functions.
It is also convenient to combine the constraints $\Psi^i$ and $\Psi^4$ under the index $A$
running over the values $1,2,3,4$, since, as will be shown, their
action on the variables has a similar geometrical meaning despite their
different nature ($\Psi^4$ is a primary
constraint, whereas $\Psi^i$ contains the additionally imposed constraint
${\cal H}^i$). We use the notation
\disn{g3}{
\Phi_\xi\equiv\int d^3x\; \Phi_i(x)\,\xi^i(x)=\int d^3x\; \pi_a\tri{e}^a_i\xi^i,\qquad
{\cal H}^0_\xi\equiv\int d^3x\; {\cal H}^0(x)\,\xi(x),\no
\Psi_\xi\equiv\int d^3x\; \Psi^A(x)\,\xi_A(x)=\int d^3x\;\pi^a\left(
\alpha_a^{ik}\tri{\nabla\!}_i\xi_k+\tri{e}^k_a\xi_k+w_a\xi_4\right)=\hfill\cr\hfill=
\int d^3x\;\pi^a V_a^A\xi_A,
\hfil\hskip 1em minus 1em (\theequation)}
where the following notation for the differential operator is used:
\disn{g4}{
V_a^k=\alpha_a^{ik}\tri{\nabla\!}_i+\tri{e}^k_a,\qquad
V_a^4=w_a.
\hfil\hskip 1em minus 1em (\theequation)}
First of all, we find the geometrical meaning of the three constraints $\Phi_i$.
For this purpose we calculate their action on the variables. It is easy to find that
\disn{t51}{
\pua{\Phi_\xi}{y^a(x)}=
\xi^i(x)\partial_i y^a(x),\qquad
\pua{\Phi_\xi}{\frac{\pi_a(x)}{\sqrt{-\tri{g}(x)}}}=
\xi^i(x)\partial_i \frac{\pi_a(x)}{\sqrt{-\tri{g}(x)}},
\hfil\hskip 1em minus 1em (\theequation)}
where $\{\dots\}$ denotes the Poisson bracket. This means that $\Phi_\xi$
generates the transformation $x^i\to x^i+\xi^i(x)$ of the
three-dimensional coordinates on the constant-time surface $W^3$
(it should be noted that the generalized momentum $\pi^a$ is a
three-dimensional scalar density). Because all constraints
(\ref{g3}) are covariant equalities (in the three-dimensional sense),
we can write down the action of the constraints $\Phi_i$ on them:
\disn{g5}{
\pua{\Phi_\xi}{\Phi_\zeta}=-\int d^3x\;\Phi_k\left(\xi^i\tri{\nabla\!}_i\zeta^k-\zeta^i\tri{\nabla\!}_i\xi^k\right),
\hfil\hskip 1em minus 1em (\theequation)}\vskip -1em
\disn{g5.1}{
\pua{\Phi_\xi}{\Psi_\zeta}=-\int
d^3x\left(\Psi^k\left(\xi^i\tri{\nabla\!}_i\zeta_k+\zeta_i\tri{\nabla\!}_k\xi^i\right)+\Psi^4\,\xi^i\partial_i\zeta_4\right),
\hfil\hskip 1em minus 1em (\theequation)}\vskip -1em
\disn{g5.2}{
\pua{\Phi_\xi}{{\cal H}^0_\zeta}=-\int d^3x\;{\cal H}^0\,\xi^i\partial_i\zeta.
\hfil\hskip 1em minus 1em (\theequation)}
Now we find the geometrical meaning of the four constraints $\Psi^A$.
It is easy to verify that
\disn{g6}{
\pua{\Psi_\xi}{\tri{g}_{ik}(x)}=0,
\hfil\hskip 1em minus 1em (\theequation)}
so the constraints $\Psi^A$ generate transformations which are
isometric bendings of the surface $W^3$ (we stress that this is true
both for $\Psi^i$ and for $\Psi^4$). We note that the number
(four) of generators of three-dimensional isometric
bendings found here equals the difference between the
dimension (ten) of the space into which the three-dimensional
surface is embedded and the number of independent
components (six) of the three-dimensional metric.
It is useful to calculate the action of the constraints $\Psi^A$ on the quantity
\disn{g6.1}{
\pi^{lm}\equiv-\pi^a\alpha_a^{lm}/2.
\hfil\hskip 1em minus 1em (\theequation)}
The calculation gives a rather long equality in which each term is
proportional to one of the constraints $\Psi^A$. Thus, the quantity
$\pi^{lm}$ does not change under the action of $\Psi^A$ when $\Psi^A=0$.
Since ${\cal H}^0$ and ${\cal H}^i$ can be expressed in terms of the
quantities $\tri{g}_{lm}$ and $\pi^{lm}$ (see~(\ref{t27})), we
can at once conclude (taking into account (\ref{g5.1})) that the
Poisson bracket of the constraint $\Psi_\xi$ with the constraints
$\Psi^i$ and ${\cal H}^0$ reduces to a linear combination of constraints.
After tedious calculations we
obtain the exact result for the action of the constraints $\Psi^A$ on the
other constraints:
\disn{g8}{
\pua{\Psi_\xi}{\Psi_\zeta}=\int d^3x\;\left(\delta y^a_{\Psi_\xi} \,\overline\Psi_{ab}\;\delta
y^b_{\Psi_\zeta}-
\delta y^a_{\Psi_\zeta} \,\overline\Psi_{ab}\;\delta y^b_{\Psi_\xi}\right),
\hfil\hskip 1em minus 1em (\theequation)}\vskip -1em
\disn{g9}{
\pua{\Psi_\xi}{{\cal H}^0_\zeta}=\int d^3x\;\left(\delta y^a_{\Psi_\xi} \,\overline\Psi_{ab}\;\delta
y^b_{{\cal H}^0_\zeta}-
\delta y^a_{{\cal H}^0_\zeta} \,\overline\Psi_{ab}\;\delta y^b_{\Psi_\xi}\right),
\hfil\hskip 1em minus 1em (\theequation)}
where the quantity
\disn{g10.1}{
\overline\Psi_{ab}=\left(\Psi^i\eta_{ab}-\Psi^4w_b V^i_a\right) \tri{\nabla\!}_i
\hfil\hskip 1em minus 1em (\theequation)}
is a linear combination of the constraints $\Psi^A$, being also (like
$V_a^A$, see~(\ref{g4})) a differential operator. We have denoted
\disn{g10.2}{
\delta y^a_{\Psi_\xi}(x)=\pua{\Psi_\xi}{y^a(x)}=V^{aA}\xi_A(x),\quad
\delta y^a_{{\cal H}^0_\zeta}(x)=\pua{{\cal H}^0_\zeta}{y^a(x)}=\hat B^{ac}\pi_c\zeta
\hfil\hskip 1em minus 1em (\theequation)}
for the results of the action of the constraints on the independent variable $y^a(x)$, where
\disn{g11}{
\hat B^{ac}=\frac{1}{2\sqrt{-\tri{g}}}\;\alpha^a_{ik}\alpha^c_{lm}\hat L^{ik,lm}
\hfil\hskip 1em minus 1em (\theequation)}
is the inverse of $B_{cb}$ in the six-dimensional subspace
normal to the surface $W^3$ and to the vector $w^a$:
\disn{g12}{
\hat B^{ac}B_{cb}=\tri{{\Pi_{\!\!\bot}}}^a_b-\frac{w^aw_b}{w^cw_c}
\hfil\hskip 1em minus 1em (\theequation)}
(formulas (\ref{t20}),(\ref{t20.1}),(\ref{t21}) are used).
To complete the derivation of the full constraint algebra,
we need to calculate the Poisson bracket of the constraint
${\cal H}^0$ with itself. This calculation is the most
tedious one and gives:
\disn{g13}{
\pua{{\cal H}^0_\xi}{{\cal H}^0_\zeta}=
\int d^3x\Biggl(
\delta y^a_{{\cal H}^0_\xi}\,\overline\Psi_{ab}\;\delta y^b_{{\cal H}^0_\zeta}\,-\,
\delta y^a_{{\cal H}^0_\zeta}\,\overline\Psi_{ab}\;\delta y^b_{{\cal H}^0_\xi}\,+\hfill\cr\hfill+\,
\left(\Psi^k-\tri{g}^{kl}\Phi_l\right)\!\left(\xi\tri{\nabla\!}_k\zeta-\zeta\tri{\nabla\!}_k\xi\right)\!\Biggr).
\hfil\hskip 1em minus 1em (\theequation)}
Formulas
(\ref{g5})-(\ref{g5.1}),(\ref{g8}),(\ref{g9}),(\ref{g13}) give
the exact form of the first-class constraint algebra for the
Regge-Teitelboim formulation of gravity. It should be noted that
the Poisson brackets (\ref{g8}),(\ref{g9}),
and partially (\ref{g13}), have a similar structure; the reason
for this is unclear.
As noted after formula (\ref{t18.1}), the
generalized Hamiltonian of the theory can be written in the form
\disn{g14}{
H^{\mbox{\scriptsize gen}}=\int\! d^3x\left( \tilde\lambda^i\Phi_i+N_A\Psi^A+N_0{\cal H}^0\right).
\hfil\hskip 1em minus 1em (\theequation)}
As can be seen from (\ref{g8}) (taking into account
(\ref{g10.1})), the four constraints $\Psi^A$ generating
the isometric bendings of the surface $W^3$ form a subalgebra of the full
constraint algebra, i.~e., the Poisson brackets between
them reduce to a linear combination of these constraints. Moreover,
the Poisson brackets of the constraints $\Psi^A$ with all
other constraints (and consequently with the Hamiltonian (\ref{g14}))
also reduce to such a linear combination. Thus the constraints
$\Psi^A$ form an ideal. This means that, once imposed, the constraints
$\Psi^A$ remain satisfied in time regardless of whether the
other constraints are satisfied.
If we restrict our consideration to the system with the constraints
$\Psi^A=0$ satisfied, then its dynamics is determined by the Hamiltonian
\disn{g15}{
\tilde H=\int\! d^3x\left( \tilde\lambda^i\Phi_i+N_0{\cal H}^0\right)=
\int\! d^3x\left( -\tilde\lambda_i{\cal H}^i+N_0{\cal H}^0\right)=\hfill\cr\hfill=
\int\! d^3x\left(\! -2\tilde\lambda_i
\sqrt{-\tri{g}}\;\tri{\nabla\!}_k\!\left(
\frac{\pi^{ik}}{\sqrt{-\tri{g}}}\right)\!+N_0\!\left(
\frac{\pi^{ik}\hat L_{ik,lm}\pi^{lm}}{\sqrt{-\tri{g}}}-
\sqrt{-\tri{g}}\;\tri{R}\right)\!\!\right),
\hfil\hskip 1em minus 1em (\theequation)}
where $\Phi_i$ was expressed in terms of ${\cal H}^i$, formulas
(\ref{t27}) were applied, and the quantity
$\pi^{ik}$ defined by (\ref{g6.1}) was used.
As a functional of the quantities $\tri{g}_{ik}$ and $\pi^{ik}$, this Hamiltonian
coincides exactly with the well-known Hamiltonian of the ADM formalism.
Moreover, it is easy to verify that the quantities $\tri{g}_{ik}$ and $\pi^{ik}$
are canonically conjugate when $\Psi^A=0$ (we note
that this condition is needed only for the vanishing of the Poisson
bracket $\pua{\pi^{ik}(x)}{\pi^{lm}(\tilde x)}$). Therefore,
on the surface of the constraints $\Psi^A=0$, the dynamics of the Regge-Teitelboim
formulation of gravity coincides with the dynamics of gravity in the ADM formalism.
\section{Discussion of the existence of additional first-class constraints}
In this section we discuss the possible meaning of the existence,
in the canonical formalism, of additional constraints
that are in involution with the Hamiltonian of the theory
and that form a first-class constraint algebra, possibly together with
other constraints inherent in the theory. The
Einstein constraints (\ref{t13}),(\ref{t16}) of the Regge-Teitelboim formulation of
gravity are precisely such additional constraints.
For comparison we consider a simple model in the Minkowski space with the action
\disn{g16}{
S=\int\!dt\!\int\! d^3x\left(
\frac{1}{2}(\partial_0A_i)(\partial_0A_i)-\frac{1}{4}\left(\partial_iA_k-\partial_kA_i\right)\left(\partial_iA_k-
\partial_kA_i\right)\rs,
\hfil\hskip 1em minus 1em (\theequation)}
where the independent variable is a three-component field $A_i(x)$.
The generalized momentum is the quantity $\pi_i=\partial_0A_i$,
and there are no primary constraints. The Hamiltonian has the form
\disn{g17}{
H=\int\! d^3x\left(
\frac{1}{2}\pi_i\pi_i+\frac{1}{4}\left(\partial_iA_k-\partial_kA_i\right)\left(\partial_iA_k-\partial_kA_i\right)\rs.
\hfil\hskip 1em minus 1em (\theequation)}
We consider an additional constraint $\Phi(x)=\partial_i\pi_i(x)$. It
is easy to verify that it is in involution with the Hamiltonian,
i.~e., their Poisson bracket vanishes: $\pua{H}{\Phi(x)}=0$.
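This involution can be sketched in one line; assuming the canonical bracket $\pua{A_i(x)}{\pi_k(y)}=\delta_{ik}\delta(x-y)$, we have

```latex
\pua{H}{\Phi(x)}=\partial_i\pua{H}{\pi_i(x)}
=\partial_i\,\frac{\delta H}{\delta A_i(x)}
=-\partial_i\partial_k\left(\partial_kA_i-\partial_iA_k\right)=0,
```

which vanishes because the symmetric operator $\partial_i\partial_k$ is contracted with the antisymmetric combination $\partial_kA_i-\partial_iA_k$.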
Since $\pua{\Phi(x)}{\Phi(y)}=0$, the quantity $\Phi(x)$ is a first-class
constraint and can be added to the Hamiltonian with a Lagrange multiplier:
\disn{g18}{
H^{\mbox{\scriptsize gen}}=\int\! d^3x\left(
\frac{1}{2}\pi_i\pi_i+\frac{1}{4}\left(\partial_iA_k-\partial_kA_i\right)\left(\partial_iA_k-\partial_kA_i\right)
+\lambda\,\partial_i\pi_i\right).
\hfil\hskip 1em minus 1em (\theequation)}
Therefore, the case of the additional imposed constraint
$\Phi(x)$ in this model is completely analogous to the case of
Einstein's constraints in Regge-Teitelboim formulation of gravity.
We construct the action $S'$ corresponding to the Hamiltonian
(\ref{g18}). The new expression for the generalized velocity reads
\disn{g19}{
\partial_0 A_i=\frac{\delta H^{\mbox{\scriptsize gen}}}{\delta\pi_i}=\pi_i-\partial_i\lambda.
\hfil\hskip 1em minus 1em (\theequation)}
Expressing the generalized momentum $\pi_i$ from this equality
and performing the Legendre transformation, we find the required action
\disn{g20}{
S'=\int\!dt\!\int\! d^3x\biggl(
\frac{1}{2}(\partial_0A_i+\partial_i\lambda)(\partial_0A_i+\partial_i\lambda)-\hfill\cr\hfill-\frac{1}{4}\left(\partial_iA_k-
\partial_kA_i\right)\left(\partial_iA_k-\partial_kA_i\right)\biggl).
\hfil\hskip 1em minus 1em (\theequation)}
The Lagrange multiplier $\lambda(x)$ related to the additionally imposed
constraint appears in this action as a new independent variable.
Setting $A_0=-\lambda$, one easily recognizes in
expression (\ref{g20}) the action of free electrodynamics. The
initial action (\ref{g16}) can be obtained from it by fixing
the gauge $A_0=0$.
This example shows that the existence of additional first-class
constraints in the canonical formalism can indicate
that the initial theory without additional constraints is the
result of (possibly partial) gauge fixing in some extended
theory with an additional gauge symmetry. In particular, the
initial embedding theory with action (\ref{39d}), which has a
four-parameter gauge group and whose independent variable is the
embedding function, turns out to be the result of gauge fixing
in the Regge-Teitelboim formulation of gravity, which has an eight-parameter gauge group
and is described by the Hamiltonian (\ref{g14}).
It should be noted that, as is well known, gauge fixing in the action usually leads to
the loss of some equations of motion. This is why the Regge-Teitelboim equations
(\ref{50}) have extra solutions.
The action of the extended theory corresponding to generalized
Hamiltonian (\ref{g14}) of Regge-Teitelboim formulation of gravity was found in
\cite{tmf07} in a way completely analogous to the one described in this section.
It can be written in the form of the initial Einstein-Hilbert action
\disn{s2}{
S=\int d^4x\, \sqrt{-g'}\;R(g'),
\hfil\hskip 1em minus 1em (\theequation)}
if we substitute for the metric $g'_{\mu\nu}$ the modification of expression (\ref{2}):
\disn{s3}{
g'_{ik}=\tri{g}_{ik}=\partial_iy^a\partial_ky_a,\quad
g'_{0k}=\partial_0y^a\partial_ky_a-N_k,\quad
g'_{00}=N_0^2+g'_{0i}\,\tri{g}^{ik}\,g'_{0k}
\hfil\hskip 1em minus 1em (\theequation)}
(whence we obtain $g'^{00}=\frac{1}{N_0^2}$; we note that these
formulas differ from those in \cite{tmf07} by numerical factors
because of the changed definition of the constraint (\ref{t13})).
Here $N_k$ and $N_0$ are new independent variables (in addition
to $y^a$), which become Lagrange multipliers in the
canonical formalism. The action (\ref{s2}) has an eight-parameter
gauge symmetry, and the quantity $g'_{\mu\nu}$ turns out to be invariant
under four of these eight transformations, whose generators in the
canonical formalism are the constraints $\Psi^A$.
If we introduce a partial fixing of gauge by conditions
\disn{s4}{
N_0=\sqrt{\partial_0 y^a \tri{{\Pi_{\!\!\bot}}}_{ab}\partial_0 y^b},\qquad N_k=0,
\hfil\hskip 1em minus 1em (\theequation)}
then the quantity $g'_{\mu\nu}$ coincides with the induced metric,
and the action of the extended theory (\ref{s2}) transforms into
the action (\ref{39d}) of the initial embedding theory. If we do
not fix the gauge, then the quantity $g'_{\mu\nu}$ still satisfies
(in the general case) the Einstein equations. Therefore, we can regard the
quantity $g'_{\mu\nu}$ as the metric; it is invariant under the
additional symmetry transformations and coincides with the induced
metric only in the gauge mentioned above.
A disadvantage of the action (\ref{s2}) of the extended theory is the
presence of a preferred time direction, related to the fact that the time and space
components enter formulas (\ref{s3}) differently.
It would be interesting to find a modification of formulas (\ref{s3})
without a preferred time direction in which the equations of motion
would still be equivalent to the Einstein equations.
\vskip 0.2em
{\bf Acknowledgments.}
The work was supported by the Russian Ministry of Education, Grant
No.~RNP.2.1.1/1575.
\section{Introduction}
Rest-frame UV spectra of roughly 20\% of all quasars exhibit
blueshifted Broad Absorption Lines (BALs) that are indicative of an
outflow. BALs are mainly associated with resonance lines of high
ionization species, such as \ion{C}{4}, \ion{N}{5}, and \ion{O}{6} (HiBALs),
and can reach velocities as high as 50,000 km~s$^{-1}$ \citep{weymann, turnshek}.
Despite various recent statistical studies of BAL quasars
\citep{hall, trump, ganguly}, the relationship between HiBALs and the host
galaxy remains elusive, as these outflows do not contain distance
diagnostics in their spectra. Thus, it is difficult to establish
whether the outflows affect only the near-AGN environment (0.1-10~pc)
or whether they extend to the scale of the entire galaxy (1-10~kpc).
A subset of BALQSOs also shows absorption features from low ionization
species such as \ion{Mg}{2}, \ion{Al}{2}, and, most importantly for diagnostics, \ion{Fe}{2} and \ion{Si}{2}. These
absorption features are generally
complex, as they are made up of multiple components of narrower
(of the order of a few hundred km~s$^{-1}$)
absorption troughs. AGN with these kinds of features are often referred to as
FeLoBALs. In these systems the spectra of \ion{Fe}{2} and \ion{Si}{2}
are valuable because they
often include absorption troughs from
metastable levels, which serve to determine
the distance of the outflow from the central source and thereby begin
to relate the outflows to their host galaxies. Such outflows were
hamann, arav01}. However, the presently available
Sloan Digital Sky Survey (SDSS)
and the development of advanced BAL analysis tools (Arav et al. 2002;
Gabel 2005; Arav et al. 2005) allow for a more accurate and systematic
study of these systems. With this in mind, we began a
comprehensive study of FeLoBAL outflows that contain distance diagnostics
\citep{arav08, korista08,
dunn09, moe09}.
To date, however, all studies have been based on either the global
properties of the multi-component absorption troughs or the strongest
kinematic component.
Little is known about the individual
physical properties of the components and their relationship to the
whole outflow.
In the best studied cases, \cite{wampler, dekool01, dekool02}
were able to obtain some column densities for individual components, but
could neither diagnose the physical conditions of the components nor derive
their relative distances.
Simple assumptions that all components have similar physical conditions
and either constant speed or equal monotonic acceleration
would lead to the conclusion that either the absorbers are scattered over a wide
range of distances from the central source or they vary in age as a result of
episodic ejections. But BAL systems in general are far from simple. Apart from weak
correlations between the absorber bulk velocities and the bolometric luminosity of the
AGN
\citep{laor, dunn08, ganguly}, no correlations have been found among velocity, width, ionization, or
any other observable properties.
Recent high spectral resolution echelle observations of two quasars, QSO 2359--1241
and SDSS J0318--0600, with the Very Large Telescope (VLT) have allowed us to study
in detail the various independent components that compose their FeLoBALs.
QSO 2359--1241 (NVSS J235953--124148) is a luminous ($M_B= -28.7$), reddened
($A_V \approx 0.5$) quasar at redshift $z=0.868$ \citep{brotherton01}. SDSS
J0318--0600 is also a highly reddened bright quasar ($A_V\approx 1$ and $M_B=-28.15$) at redshift
$z=1.967$. Both quasars exhibit rich multi-component FeLoBALs comprising five and
eleven components, respectively. For each object, the strongest component of the
FeLoBAL, which is also the absorber closest to the central engine (see Section 3),
was measured and analyzed in \cite{arav08} and \cite{korista08} for QSO 2359--1241
and in \cite{dunn09} for
SDSS J0318--0600. It was found that the absorbers are located
at $\sim$3~kpc and between 6 and 20~kpc
from the central engine, respectively.
QSO 2359--1241 and SDSS J0318--0600 are the first objects to be studied in detail from
a sample of $\sim$80
quasars with resonance and metastable \ion{Fe}{2} absorption lines (FeLoBAL
quasars) in their spectra. These lines can be used for a direct determination of the physical
conditions and energy transport of the outflows. The sample was extracted from
50,000 objects in the SDSS database as part of a major ongoing effort to study the
nature of quasar outflows and their effects on the host galaxy \citep{moe09}.
\section{The measured column densities}
Observations of QSO~2359--1241 and SDSS J0318--0600 consist of
echelle VLT/UVES high-resolution
($R \approx 40,000$) spectra with 6.3-hour exposures for each object.
Fig.~\ref{spectra} illustrates the structure of the absorption troughs. The
observations for QSO~2359--1241 were presented in \cite{arav08}, together with the
identification of
all the absorption features associated with the outflow.
Column densities of only the strongest component ($e$) were measured
at that time.
The observations of SDSS J0318-0600 are presented in \cite{dunn09}.
The high signal-to-noise data allowed us to measure the column densities
from 5 unblended absorbers in QSO~2359--1241
and 11 components in SDSS J0318--0600. For the latter, though, only the strongest
components {\bf a, i}, and {\bf k} could be independently measured, while all other
components had to be measured as a single blended structure.
Whenever possible, the column densities were determined under three different
assumptions, i.e., full covering (apparent optical depth), partial line-of-sight
covering, and velocity-dependent covering according to the power-law method of
de Kool et al. (2001) and Arav et al. (2005). For each component in
QSO~2359$-$1241, we find that the troughs require the power-law method to
determine the full column density. We quote the column densities determined by
the power-law method in Table 1. For the three components in SDSS~J0318$-$0600,
we find that the results of these three methods are generally in good agreement,
indicating that there is full covering of each source and the column
densities could be reliably measured (see Dunn et al. 2009).
The measured column densities for the observed components in QSO~2359--1241
and SDSS J0318-0600 are given in Tables 1 and 2 respectively.
\clearpage
\begin{figure}
\rotatebox{0}{\resizebox{\hsize}{\hsize}
{\plotone{bautistaf1.eps}}}
\caption{Absorption troughs showing 5 and 11 clearly separated absorption
components in the spectra of QSO~2359--1241 and SDSS J0318-0600, respectively.
The same components in velocity space seem to be present in all troughs of
the same object. }
\label{spectra}
\end{figure}
\clearpage
\begin{deluxetable}{lcccccc}
\tabletypesize{\scriptsize}
\tablecaption{Measured Column Densities in QSO~2359--1241}
\tablewidth{0pt}
\tablehead{
\colhead{Species} &
\colhead{E$_{low}$ (cm$^{-1}$)} &
\multicolumn{5}{c}{Column density ($\times 10^{12}$ cm$^{-2}$)}\\
& &
\colhead{Comp. {\bf a}} &
\colhead{Comp. {\bf b}} &
\colhead{Comp. {\bf c}} &
\colhead{Comp. {\bf d}} &
\colhead{Comp. {\bf e}}
}
\startdata
\ion{He}{1}& 159~856 & $14.3\pm0.7$ & $22.9\pm3.4$ & $6.1\pm0.3$ & $4.5\pm2.2$ & $138.0\pm9.9$ \\
\ion{Fe}{2} & 0 & $0.49\pm0.08$& $5.7\pm1.5$ & $2.7\pm0.3$ & $2.7\pm0.1$ & $72.4\pm3.5$ \\
\ion{Fe}{2} & 385 & & $0.93\pm0.28$ & $0.40\pm0.09$& $0.60\pm0.22$& $32.4\pm1.2$ \\
\ion{Fe}{2} & 668 & & $1.1\pm0.5$ & & & $18.2\pm1.3$ \\
\ion{Fe}{2} & 863 & & $0.45\pm0.20$ & & & $11.5\pm0.9$ \\
\ion{Fe}{2} & 977 & & & & & $7.1\pm0.6$ \\
\ion{Fe}{2} & 1873 & & & & & $77.6\pm9.5$ \\
\ion{Fe}{2} & 7955 & & & & & $5.0\pm0.5$ \\
\ion{Mg}{1} & 0 & $0.04\pm0.02$& $2.1\pm1.0$ & $0.28\pm0.02$& $0.04\pm0.03$& $0.83\pm0.06$ \\
\ion{Mg}{2} & 0 & & & & & $>65$ \\
\ion{Si}{2} & 0 & & $198\pm48$ & & & \\
\ion{Si}{2} & 287 & & & & & $794\pm206$ \\
\ion{Al}{3} & 0 & & & $12.7\pm0.4$ & & $>79$ \\
\ion{Ca}{2} & 0 & $0.07\pm0.03$& $0.83\pm0.04$ & $0.47\pm0.02$& & $3.3\pm0.3$ \\
\ion{Ni}{2} & 8394 & & & & & $6.2\pm0.6$ \\
\enddata
\end{deluxetable}
\begin{deluxetable}{lccccc}
\tabletypesize{\scriptsize}
\tablecaption{Measured Column Densities in SDSS J0318--0600}
\tablehead{
\colhead{Species} &
\colhead{E$_{low}$(cm$^{-1}$)} &
\multicolumn{4}{c}{Column density ($\times 10^{12}$ cm$^{-2}$)}\\
& &
\colhead{Comp. {\bf a}} &
\colhead{Comp. {\bf i}} &
\colhead{Comp. {\bf k}} &
\colhead{Comp. {\bf b-h}}
}
\startdata
\ion{Al}{2} & 0 &$116.1\pm0.1$ & $400\pm40$ & $35.0\pm0.3$ & $>390$ \\
\ion{Al}{3} & 0 &$46.0\pm0.4$ & $1560\pm220$ & $73.0\pm0.8$ & $>810$ \\
\ion{C}{2} & 0 &$333\pm11$ & & $1100\pm200$ & $>14000$\\
\ion{C}{2} & 63 &$577\pm21$ & $>19000$ & $1800\pm300$ & \\
\ion{C}{4} & 0 &$734\pm10$ & $29000\pm3000$& $1297\pm13$ & $>10000$ \\
\ion{Fe}{2} & 0 &$<40$ & $1275\pm35$ & $154\pm6$ & $>490$\\
\ion{Fe}{2} & 385 & & $294\pm77$ & & \\
\ion{Fe}{2} & 668 & & & $9.0\pm0.3$&\\
\ion{Fe}{2} & 863 & & $147\pm36$ & & \\
\ion{Fe}{2} & 977 & & & & \\
\ion{Fe}{2} & 1873& & $163\pm49$ & & \\
\ion{Fe}{2} & 2430& & $25.0\pm5.4$ & & \\
\ion{Fe}{2} & 7955& & $8.1\pm0.2$ & & \\
\ion{Mg}{2} & 0 & $28.7\pm0.1$& $3200\pm400$ & $192\pm1$ & $>880$\\
\ion{Mn}{2} & 0 & & $17.5\pm0.1$ & & \\
\ion{Ni}{2} & 0 & & $180 \pm4 $ & $<120$ & \\
\ion{Ni}{2} & 8394& & $64.0\pm0.4$ & $10.0\pm0.3$ &\\
\ion{Si}{2} & 0 & $101\pm3$ & $7220\pm100$ & $640\pm150$ & $>3500$\\
\ion{Si}{2} & 287 & $<50$ & $7380\pm130$ & $352\pm12$ & \\
\ion{Si}{4} & 0 & $145\pm6$ & $5600\pm1300$ & $140\pm4$ & $>1800$\\
\enddata
\end{deluxetable}
\clearpage
\clearpage
\begin{figure}
\rotatebox{0}{\resizebox{\hsize}{\hsize}
{\plotone{bautistaf2.eps}}}
\caption{Electron density diagnostics from \ion{Fe}{2} in QSO~2359--1241.
The ratio of column densities of
the excited level at 385~cm$^{-1}$ (a $^6$D$_{7/2}$) to the ground level
(a $^6$D$_{9/2}$) is plotted against
the logarithm of the electron density. The measured ratios for each of the
FeLoBAL components are drawn on top of the theoretical prediction. The
uncertainties in the measured ratios are depicted by vertical bars,
which lead to the uncertainties in the derived electron densities
indicated by horizontal bars.}
\label{fe2qso2359}
\end{figure}
\clearpage
\clearpage
\begin{figure}
\rotatebox{0}{\resizebox{\hsize}{\hsize}
{\plotone{bautistaf3.eps}}}
\caption{Electron density diagnostics from the observed ratios of column densities of
excited to ground levels of \ion{C}{2} and \ion{Si}{2} in SDSS J0318-0600.
For the component {\bf a} only an upper limit to the \ion{Si}{2} ratio
could be obtained from observations.}
\label{j0318}
\end{figure}
\clearpage
\section{Analysis of spectra and modeling}
\subsection{The density of the outflows}
We use the observed ratios of column densities of excited to ground levels,
which are directly proportional to the ratios of level populations, as electron
density indicators.
In QSO~2359--1241 we find excellent diagnostics from the \ion{Fe}{2} column
densities of the $a~^6D_{7/2}$ level at 385 cm$^{-1}$ and the ground level
$a~^6D_{9/2}$. The diagnostics are shown in Fig. \ref{fe2qso2359}. From these we
get $\log(n_e/$cm$^{-3})=4.4\pm0.1$ for component {\bf e} (see Korista et al. 2008),
$\log(n_e/$cm$^{-3})=3.8\pm0.2$ for component {\bf d},
and $\log(n_e/$cm$^{-3})=3.6\pm0.1$ and $3.6\pm0.2$ for components {\bf c} and {\bf b}.
Unfortunately, we have no excited lines in component {\bf a} suitable for diagnostics.
The theoretical level populations for the \ion{Fe}{2} ion were computed from
the atomic model of \cite{baupra98}.
In SDSS~J0318--0600 we find a density diagnostic for component {\bf a} in the ratio of
\ion{C}{2} column densities of the excited level at 63 cm$^{-1}$ ($^2$P$^o_{3/2}$) to
the ground level ($^2$P$^o_{1/2}$), which yields $\log(n_e/$cm$^{-3})=2.6\pm0.2$.
This is consistent with the limit $\log(n_e/$cm$^{-3})<2.8$ from the ratio of
the \ion{Si}{2} excited (287 cm$^{-1}$; $^2$P$^o_{3/2}$) to the ground
($^2$P$^o_{1/2}$) levels.
For components {\bf i} and {\bf k}
the \ion{Si}{2} ratios
yield $\log(n_e/$cm$^{-3})=3.3\pm0.2$
and $\log(n_e/$cm$^{-3})=2.85\pm0.10$, respectively. Additional diagnostics from
\ion{Fe}{2} are available for component {\bf i} and they are all consistent with the present
determination (see Dunn et al. 2009). Fig. \ref{j0318} illustrates the present
diagnostics.
The theoretical level populations for \ion{C}{2} were computed using the
effective collision strengths of \cite{blum} and A-values from
\cite{wiese}. For the \ion{Si}{2} spectral model we use the
effective collision strengths of
Dufton \& Kingston (1991) and
A-values for forbidden transitions of
Nussbaumer (1977).
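The diagnostic underlying these measurements amounts to inverting a two-level collisional-radiative balance. As a minimal sketch (the atomic rates below are hypothetical placeholders, not the actual \ion{Fe}{2}, \ion{C}{2}, or \ion{Si}{2} values):

```python
# Two-level balance: collisional excitation (rate n_e*q_lu) against
# radiative decay plus collisional de-excitation (A_ul + n_e*q_ul).
# An observed ratio r = N_u/N_l then gives n_e = r*A_ul / (q_lu - r*q_ul).

def electron_density(r, A_ul, q_lu, q_ul):
    """Electron density (cm^-3) implied by an observed excited-to-ground
    column density ratio r; A_ul in s^-1, rate coefficients in cm^3 s^-1."""
    denom = q_lu - r * q_ul
    if denom <= 0.0:
        raise ValueError("ratio is at or above the high-density limit")
    return r * A_ul / denom

# Illustrative (hypothetical) rates: r = 0.01 implies n_e ~ 1e2 cm^-3.
n_e = electron_density(0.01, A_ul=1e-3, q_lu=1e-7, q_ul=5e-8)
```

In practice the published rates cited in the text replace these placeholders, and the full multi-level models provide the curves shown in the figures.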
In \cite{arav08}, \cite{korista08}, and \cite{dunn09} we showed that
the troughs in QSO~2359--1241 and SDSS~J0318--0600
arise from a region where
hydrogen is mostly ionized, thus the electron density derived above
should be nearly equal (within $\sim$10\%) to the total hydrogen
density of the clouds.
\subsection{Ionization structure of the outflows}
Under the premise that the absorbers are in photoionization equilibrium, we
assume constant-density gas clouds in plane-parallel geometry. The
ionization structure of a warm photoionized plasma is typically characterized by
the so-called ionization parameter, defined as
\begin{equation}
U_H \equiv {\Phi_H\over{n_H c}} = {Q_H\over {4\pi R^2 n_H c}},
\end{equation}
where $c$ is the
speed of light, $n_H$ is the gas density,
$\Phi_H$ is the ionizing photon flux, $Q_H$ is the rate of hydrogen ionizing photons
emitted by the ionizing source, and $R$ is the distance from the ionizing source to the
cloud. From this definition, $Q_H$ can be estimated directly from the
luminosity of the object and some knowledge of its Spectral Energy
Distribution (SED).
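Given such an estimate of $Q_H$ and a density from the diagnostics above, the definition can be evaluated directly. A minimal sketch in cgs units (the numerical example is illustrative, not a fitted value from this work):

```python
import math

C_CM_S = 2.99792458e10   # speed of light (cm/s)
KPC_CM = 3.0857e21       # 1 kpc in cm

def ionization_parameter(Q_H, R_cm, n_H):
    """U_H = Q_H / (4 pi R^2 n_H c); Q_H in photons/s, R in cm, n_H in cm^-3."""
    return Q_H / (4.0 * math.pi * R_cm**2 * n_H * C_CM_S)

# e.g. Q_H = 1e56 s^-1, R = 1 kpc, n_H = 1e4 cm^-3  ->  log10(U_H) ~ -2.6
u = ionization_parameter(1e56, 1.0 * KPC_CM, 1e4)
```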
\cite{korista08} and \cite{dunn09} constructed detailed photoionization models of the main
components of QSO~2359--1241 and SDSS~J0318--0600 and determined
their physical conditions.
However, modeling all the absorbing components of the outflow simultaneously opens
a variety of new scenarios. Fortunately, the combined measurements of column
densities for \ion{Fe}{2} and \ion{He}{1} in all of the components of QSO~2359--1241
and \ion{Si}{2} and \ion{Si}{4} in the components of SDSS~J0318--0600 allow us to constrain the
ionization parameter of all of these components.
We use the photoionization modeling code {\sc cloudy}~c07.02.01 \citep{cloudy} to compute grids
of models in $U_H$ for each value of $\vy{n}{H}$ of interest. All
models are calculated for $N_H$ (total hydrogen column density) running from the illuminated face of the cloud to deep
into the ionization front (IF), where the temperature drops to only a few
thousand K. For QSO~2359--1241 we assumed solar abundances as in
\cite{korista08}. Then, the accumulated column densities of
the metastable
2~$^3S$ excited state of \ion{He}{1} (hereafter, \ion{He}{1}$^*$)
and total
\ion{Fe}{2} were extracted out of the {\sc cloudy} output files and
plotted in Fig.~\ref{columnqso2359} as
N(\ion{He}{1}*)/N(\ion{Fe}{2}) vs. $\log$N(\ion{He}{1}*). These plots serve as
direct diagnostics of $U_H$ by marking the measured column densities on the plots.
Further, the value of $N_H$ is also readily available as there is a direct correspondence
in each model between the accumulated total hydrogen column and
the accumulated column density of all species.
The diagnostic power of the combined \ion{He}{1}$^*$ and \ion{Fe}{2} lines
was explained in \cite{korista08}.
It was shown that the \ion{He}{1}$^*$ column density is set by the \ion{He}{2} column, which
is tied to the \ion{H}{2} fraction and
the ionization parameter. This is because whenever helium is ionized, hydrogen, with a much lower ionization potential, must be ionized too. For any conceivable quasar ionizing continuum, recombination of \ion{He}{2} to neutral He occurs simultaneously with recombination of \ion{H}{2} to neutral H.
This is shown in Fig. 1 of Korista et al. (2008). On the other hand, the ionization fraction of
\ion{Fe}{2} across the ionization threshold is determined by charge exchange
with hydrogen
$${\rm Fe}^{++} + {\rm H}^0 \rightleftharpoons {\rm Fe}^{+} + {\rm H}^+.$$
Thus, the column density of \ion{Fe}{2} traces the column of neutral hydrogen
and
\begin{equation}
{N(He~I^*)\over N(Fe~II)} \propto
{N(H~II)\over N(H~I)} \propto U_H;
\end{equation}
in other words, the measured ratio of column densities of \ion{He}{1}$^*$
to \ion{Fe}{2} serves as direct indicator of the average ionization parameter
of the gas cloud.
Within the cloud, whose depth, $r$, is expected to be much smaller than $R$,
the $U_H$ varies as
\begin{equation}
U_H(r) \propto {e^{-\tau(r)}\over n_e},
\end{equation}
where $\tau$ is the optical depth to hydrogen ionizing photons, which
is proportional to the column density of neutral hydrogen.
Hence, at the illuminated face of the cloud, and through a large fraction of
it, hydrogen is nearly fully ionized and $\tau$ remains small. Within
this depth in the cloud the ionization of the gas, and hence the
$N$(\ion{He}{1}$^*$)/$N$(\ion{Fe}{2}) ratio, is expected to remain
roughly constant (see figure 8 in Arav et al. 2001). Deeper in the cloud, a small decrease in the ionization
of hydrogen leads to an increase in $\tau$, which in turn reduces
the number of ionizing photons reaching greater depths
and results in even lower hydrogen
ionization. At this point, an ionization front
develops and is accompanied by a quick drop of the
$N$(\ion{He}{1}$^*$)/$N$(\ion{Fe}{2}) ratio into the cloud or,
equivalently, $N$(\ion{He}{1}$^*$).
Furthermore, the $N$(\ion{He}{1}$^*$)/$N$(\ion{Fe}{2}) vs. $N$(\ion{He}{1}$^*$)
plots shown in Fig.~3 offer a complete diagnostic for $U_H$ and
the total hydrogen column density ($N_H$) for clouds
of a given chemical composition.
Here, it is important to realize that these plots are sensitive to
the integrated flux of hydrogen ionizing photons, most of which
($\sim90\%$) arise
from the spectral region between 1 and 4~Ry in a quasar SED. Thus,
the plots depend only weakly on the actual shape of the SED.
This is illustrated in Fig.~3 by showing the theoretical curves as
calculated with the
Mathews and
Ferland (1987; MF87 hereafter) quasar SED
and with
the transmitted spectrum from the innermost component ({\bf e}) of the outflow.
That component {\bf e} is the first absorber from the source
in QSO~2359--1241
becomes apparent because it has the
largest $U_H$ and is also the densest; thus it follows from Eqn.~1 that it must have the smallest distance $R$ among all components (see also the discussion below).
From the diagnostics in Fig.~4 we find that component {\bf e}
has $\log_{10}(U_H)=-2.4$ for $\log_{10}(n_H/$cm$^{-3})=4.4$.
Component {\bf d}, with a density $\log_{10}(n_H/$cm$^{-3})=3.8$, has
$\log_{10}(U_H)\approx -2.8$, and the two lowest density components {\bf b} and
{\bf c} have $\log_{10}(U_H)\approx -2.7$ and $-2.9$ respectively.
{\bf Notice that the electron densities of components {\bf b} and
{\bf d} are sufficiently similar to each other, and in fact overlap
within the uncertainty bars, for them to be plotted on the same diagnostic
diagram without any significant loss of accuracy.}
We build similar plots for SDSS J0318--0600 but based on the column densities of
\ion{Si}{2} and \ion{Si}{4} (Fig~\ref{columnj0318}). For these models we start with a
chemical composition expected for a galaxy with metallicity $Z=4.2$, which
we found to be a reasonable choice that fits the measured column densities well (Dunn et al. 2009). Models with either solar composition or very high metallicities
(e.g. $Z=7.2$) can be discarded on the basis of the observed absorption
troughs for various species. Basing the diagnostics on two
ions of the same element has two important advantages over the previous plots: (1)
the column density ratio is mostly independent of the assumed chemical abundances, and
(2) the plots are mostly independent of $\vy{n}{H}$. A disadvantage,
though, is that \ion{Si}{2} is mostly created through photoionization
by radiation below the hydrogen ionization threshold (0.6~Ry), while
\ion{Si}{4} has a formation energy higher than that of \ion{He}{2}.
Consequently, the $N$(\ion{Si}{4})/$N$(\ion{Si}{2}) ratio
is more sensitive to the SED than in the previous case.
Because SDSS J0318--0600 is an extremely reddened object and
the location of the extinguishing dust with respect to the absorber
is unknown, we need to consider two different SEDs for the models. These
are the UV-soft SED developed in Dunn et al. (2009), and this SED after
reddening.
In Fig.~4 we plot $\log(N$(\ion{Si}{4})/$N$(\ion{Si}{2}))
vs. $\log_{10}(N$(\ion{Si}{2}) as obtained from the two SEDs
considered (upper panels)
and in the cases in which these SEDs are attenuated by
component {\bf i},
which is identified as the innermost absorber (lower panels).
From these diagnostics
the densest component {\bf i} has the highest $U_H$ ($-2.75\pm0.10$ dex
for the unreddened SED and $-3.02\pm0.10$~dex for the reddened SED)
and largest
total column ($20.9\pm0.1$~dex for the unreddened SED and $20.1\pm0.1$~dex for the reddened SED). The lower density components {\bf a} and {\bf k} have
considerably less column density and are less ionized.
From the physical conditions derived above it is now possible to estimate the distance
from each absorption component to the ionizing source. It is convenient, however,
to determine first the relative distances of all components to the source.
Relative distances can be obtained more accurately from
observations than absolute distances, because the absolute determination of distance depends strongly on the number of ionizing photons in the SED and on whether the SED is reddened before ionizing the cloud. Indeed, in
QSO 2359--1241 and SDSS J0318--0600 the absolute distances to the absorbers vary by several factors depending on whether reddening of the SED occurs before or after ionizing the absorbers. This sort of uncertainty, however, does not affect the relative distances among absorbers. From Eqn.~(1) one gets
\begin{equation}
\log(R/R_0) = {1\over 2} [\log(Q_H/Q_{H0}) - \log(\vy{n}{H}/\vy{n}{H0}) - \log(\vy{U}{H}/\vy{U}{H0}) ],
\end{equation}
where $R_0$, $\vy{Q}{H0}$, $\vy{n}{H0}$, and $\vy{U}{H0}$ are
the distance, rate of ionizing photons, particle density, and ionization
parameter for a given reference component, which we choose as the
strongest component in each of the absorbers studied (i.e. {\bf e} in
QSO 2359--1241 and {\bf i} in SDSS J0318--0600). In identifying the innermost
absorber it is important to realize that the first approximation, which uses
the same SED in determining the distance to all components, is
actually correct for the innermost component but overestimates the distances of the rest
(see Eqn. 4). This means that the absorber with the shortest
distance to the source in this first approximation is indeed the innermost
absorber. For any other absorber to be located in the innermost position, the
absorbers with shorter distances in the first-order approximation would
have to be at larger distances than initially estimated, which is impossible
if attenuation can only reduce the flux of ionizing photons.
Notice that the relative locations of all the components could, in principle,
be determined following the same logical argument, except that they are so
close together that their estimated distance differences soon become smaller
than the uncertainties.
The strongest
components are also the innermost, as we will see below.
For any absorption
component that sees the same unattenuated radiation from the source
as the reference component
$\vy{Q}{H}=\vy{Q}{H0}$. On the other hand, if a component is
shadowed by intervening gas, particularly by the reference component,
$\vy{Q}{H}<\vy{Q}{H0}$.
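The relative-distance relation above is simple enough to sketch in code; using the tabulated values for component {\bf b} of QSO 2359--1241 relative to reference component {\bf e} reproduces the corresponding entries of Table~\ref{distances} (a sketch; the helper name is ours):

```python
def log_relative_distance(dlog_Q, log_nH, log_nH0, log_UH, log_UH0):
    """log10(R/R0) for a component relative to the reference component:
    0.5 * [ dlog(Q_H) - dlog(n_H) - dlog(U_H) ]."""
    return 0.5 * (dlog_Q - (log_nH - log_nH0) - (log_UH - log_UH0))

# Component b of QSO 2359-1241 vs. reference component e (Table 3 values):
same_sed = log_relative_distance(0.0, 3.65, 4.4, -2.68, -2.42)    # ~ +0.5
attenuated = log_relative_distance(-1.0, 3.65, 4.4, -2.68, -2.42)  # ~  0.0
```

The first call corresponds to the unattenuated-SED assumption ($Q_H=Q_{H0}$), the second to the attenuated case with $\log(Q_H/Q_{H0})=-1.0$.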
Table~\ref{distances} presents the relative distance for
components in the two absorbers considered. First, we assume that
all components see the same unattenuated source ($\vy{Q}{H}=\vy{Q}{H0}$)
and the results are
given in columns 4 and 5 of the table. Under this
assumption, components {\bf e} and {\bf i} are the innermost absorbers in QSO 2359--1241
and SDSS J0318--0600 respectively. Beyond these, the other components are
dispersed over 2 to 4 times that distance. We note that there are no
correlations between velocity, $\vy{n}{H}$, and $R$.
\clearpage
\begin{figure}
\rotatebox{0}{\resizebox{\hsize}{\hsize}
{\plotone{bautistaf4.eps}}}
\caption{Results of grid models for QSO~2359--1241 and diagnostics
of $U_H$ and total column density for all the kinematic
absorption components of the outflow. The various curves presented are
for $\log_{10}(U_H)=-2.2$ (red), -2.4 (blue), -2.6 (green),
-2.8 (cyan), -3.0 (magenta), -3.2 (yellow). The solid lines depict the
results from the MF87 SED and the dotted lines show the results
from the spectrum transmitted through (attenuated by) component {\bf e}.
The calculated column densities are plotted for three different
values of $\vy{n}{H}$, in agreement with the previous density diagnostics.}
\label{columnqso2359}
\end{figure}
\clearpage
\begin{figure}
\rotatebox{-90}{\resizebox{\hsize}{\hsize}
{\plotone{bautistaf5.eps}}}
\caption{Results of grid models for SDSS J0318--0600 and diagnostics
of ionization parameters and total column density for all kinematic
absorption components of the outflow. The upper panels show the results
for the unattenuated unreddened SED (left) and reddened SED (right), while the lower panels present the results for
the unreddened SED (left) and reddened SED (right)
after attenuation by component {\bf i}. All models use
$\log(\vy{n}{H})=3.0$, but are practically independent of
$\vy{n}{H}$.
The different colors of the curves correspond to
$\log_{10}(U_H)=-2.2$ (red), -2.4 (blue), -2.6 (green),
-2.8 (cyan), -3.0 (magenta), -3.2 (yellow), -3.4 (black).}
\label{columnj0318}
\end{figure}
\clearpage
\clearpage
\begin{deluxetable}{ccrcccccrrr}
\rotate
\tabletypesize{\scriptsize}
\tablecaption{Calculated distances to the outflows}
\tablewidth{0pt}
\tablehead{
\colhead{Comp.} &
\colhead{$v$ (km/s)}&
\colhead{$\delta v$ (km/s)\tablenotemark{a}} &
\colhead{$\log(n_H)$}&
\multicolumn{2}{c}{Unattenuated SED}&
\multicolumn{2}{c}{Attenuated SED\tablenotemark{b}}
& & &
\cr
\colhead{}&
\colhead{}&
\colhead{}&
&
\colhead{$\log(U_H)$}&
\colhead{$\log(R/R_0)$}&
\colhead{$\log(U_H)$} &
\colhead{$\log(R/R_0)$}&
\colhead{$\log(N_H)$} &
\colhead{$\dot E/\dot E_0$} &
\colhead{$\dot M/\dot M_0$}
}
\startdata
\multicolumn{10}{c}{QSO 2359--1241}\cr
b& -945 & $14\pm5\ $ &$3.65\pm0.21$&$-2.68\pm0.05$&
$+0.5\pm0.2$&$-2.68\pm0.05$&
$+0.0\pm0.2$ &$20.04\pm0.07$& 0.10 & 0.21\cr
c&-1080 & $30\pm21$ &$3.6\pm0.1$&$-2.89\pm0.05$&
$+0.7\pm0.1$&$-2.87\pm0.05$&
$+0.2\pm0.1$ & $19.46\pm0.05$& 0.06 & 0.06\cr
d&-1200 & $28\pm12$ &$3.84\pm0.16$&$-2.83\pm0.05$&
$+0.5\pm0.2$&$-2.81\pm0.05$&
$+0.0\pm0.2$ & $19.66\pm0.05$& 0.08 & 0.11\cr
e&-1380 & $59\pm22$ &$4.4\pm0.1$&$-2.42\pm0.03$&
& &
& $20.56\pm0.05$ & 1 & 1\cr
\\
\hline
\multicolumn{10}{c}{SDSS J0318--0600, unreddened SED} \cr
a &-7450 & $150\pm 10$ &$2.65\pm0.20$ &$-3.04\pm0.10$ &$+0.5\pm0.2$&
$-3.02\pm0.05$ &
$-0.3\pm0.2$ & $18.2\pm0.2$ & 0.01 & 0.004\cr
i &-4200 & $670\pm 10$ &$3.3\pm0.2$ &$-2.75\pm0.10$ &
& &
& $20.9\pm0.1$ & 1 & 1\cr
k &-2800 & $290\pm10$ &$2.85\pm0.2$ &$-3.30\pm0.10$&
$+0.5\pm0.2$&$-3.40\pm0.10$&
$-0.2\pm0.2$ & $18.8\pm0.3$& 0.002 & 0.005\cr
\hline
\multicolumn{10}{c}{SDSS J0318--0600, reddened SED} \cr
a &-7450 & $150\pm 10$ &$2.65\pm0.20$ &$-3.13\pm0.10$ &$+0.4\pm0.2$&
$-2.80\pm0.05$ &
$-0.2\pm0.2$ & $18.2\pm0.2$ & 0.07 & 0.02\cr
i &-4200 & $670\pm 10$ &$3.3\pm0.2$ &$-3.02\pm0.10$ &
& &
& $20.1\pm0.1$ & 1 & 1\cr
k &-2800 & $290\pm10$ &$2.85\pm0.2$ &$-3.40\pm0.10$&
$+0.4\pm0.2$&$-3.19\pm0.10$&
$-0.1\pm0.2$ & $18.8\pm0.3$& 0.01 & 0.03\cr
\hline
\enddata
\tablenotetext{a}{Full width at half maximum measured from \ion{Fe}{2}
in QSO~2359--1241 and \ion{Al}{2} for SDSS J0318--0600}
\tablenotetext{b}{The scales for QSO 2359--1241
and SDSS J0318--0600 are
($R_0$, $\dot E_0$,
$\dot M_0$) =
($1.3\pm0.4$ kpc, $2\times 10^{43}$ ergs~s$^{-1}$, 40 M$_\odot$ yr$^{-1}$)
and
($6\pm 3$ kpc, $1\times 10^{45}$ ergs s$^{-1}$, 180 M$_\odot$ yr$^{-1}$)
if reddening of the SED occurs before ionizing the outflow, or
($3\pm1$ kpc, $4\times 10^{43}$ ergs s$^{-1}$, 100 M$_\odot$ yr$^{-1}$)
and ($18\pm8$ kpc, $6\times 10^{45}$ ergs s$^{-1}$, 1100 M$_\odot$ yr$^{-1}$)
if reddening occurs after.}
\label{distances}
\end{deluxetable}
\clearpage
However, given that the distances from the absorbers to the central source
(kpc scales) are much greater than the size of the central source,
it seems much more physically plausible
that the innermost component will shadow all farther absorbers.
Thus, as the first absorber is ionized by the source, the next
component in line will
only receive the SED transmitted through the first absorber,
i.e. an attenuated SED. More generally, every component would only
see the SED attenuated by all absorbers closer to the source. Thus, in
Table 3 we recalculate the distance for all components, other than the innermost,
using SEDs that account for the attenuation by the innermost components.
In this case $\log(\vy{Q}{H}/\vy{Q}{H0})=-1.0$ for QSO2359--1241 and
-1.64 or -0.80 for SDSS J0318--0600 when using the unreddened and reddened
SED respectively.
Surprisingly, all components of both objects converge, within the uncertainties,
to the same distance from the central source. The uncertainties
quoted in the table combine the errors in the values of $\vy{n}{H}$ and $\vy{N}{H}$ as
diagnosed from the measured column densities from spectra.
In both quasars studied here the innermost absorbers are clearly identified
as the densest and largest systems. The relative ordering
of subsequent absorbers with respect to the central source could be
tentatively
estimated too, but we make no attempt to do so because
the distances between them are always smaller than the uncertainties.
Also in Table~\ref{distances}
we present for every component
the estimated kinetic luminosity and mass flux rate, defined as
\begin{equation}
\dot E = 4\pi \mu m_p \Omega R N_H v^3,
\end{equation}
\begin{equation}
\dot M = 8\pi \mu m_p \Omega R N_H v,
\end{equation}
where $\mu\approx 1.4$ is the mean particle mass for solar composition, $m_p$ is the proton mass, and
$\Omega$ is the global covering
factor.
For the present calculations we adopt $\Omega=0.2$ (see section 4.2 in Dunn et al. 2009),
which impacts the absolute values $\dot E_0$ and $\dot M_0$ given in the
table but not the relative contributions of the components.
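The two rate expressions above can be checked against the scales quoted in Table~3. A sketch in cgs units, noting that $\dot E = \frac{1}{2}\dot M v^2$ (values for the main component {\bf e} of QSO 2359--1241 are taken from Table 3):

```python
import math

M_P = 1.67262192e-24   # proton mass (g)
KPC_CM = 3.0857e21     # 1 kpc in cm
MSUN_YR = 6.30e25      # 1 M_sun/yr in g/s

def outflow_rates(R_cm, N_H, v_cm_s, mu=1.4, omega=0.2):
    """(Edot [erg/s], Mdot [g/s]) for the outflow rate expressions above."""
    mdot = 8.0 * math.pi * mu * M_P * omega * R_cm * N_H * v_cm_s
    edot = 0.5 * mdot * v_cm_s**2   # equals 4 pi mu m_p Omega R N_H v^3
    return edot, mdot

# Main component e of QSO 2359-1241: R ~ 1.3 kpc, log N_H = 20.56, v = 1380 km/s
edot, mdot = outflow_rates(1.3 * KPC_CM, 10**20.56, 1380e5)
# -> edot ~ 2e43 erg/s and mdot ~ 40 Msun/yr, consistent with Table 3
```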
These quantities are clearly dominated by the contribution of the largest,
innermost component of each outflow, while the minor components together contribute
little to the total energy and mass carried out by the outflow.
This indicates that the minor
components should not be considered ejection events on their own merit,
but are instead physically related to the main component.
Clearly, if attenuation of the SED were ignored, the distance
to each of the minor components would be overestimated, as would
their $\dot E$ and $\dot M$ contributions. Yet, even in this case
all the minor components together could only account for a small fraction of
$\dot E$ and $\dot M$ sustained by the main component.
{\bf
Finally, in Table~3 we quote the absolute values of $R_0$, $\dot E_0$, and $\dot M_0$
for the main components of QSO 2359--1241
and SDSS J0318--0600
as obtained in Korista et al. (2008) and Dunn et al. (2010). The absolute
value of $R_0$ is far more uncertain than the relative quantities tabulated here,
for the reasons explained at the beginning of this section. The determination of absolute kinetic
energies and mass outflow rates is even more uncertain because it depends
on $R$ and the assumed value of the global covering factor, which is the
least known parameter of this investigation.}
\section{Discussion and conclusions}
The high spectral resolution and signal-to-noise ratio of the VLT spectra
allowed us to study the properties of each of the kinematic
components in the FeLoBALs of quasars QSO 2359--1241
and SDSS J0318--0600.
From the measurements of column densities for different kinematic components
we determined the electron number density for these components.
For QSO 2359--1241 we used the ratio of column densities of
\ion{Fe}{2} from the excited level at 385 cm$^{-1}$ and the ground
level. In the case of SDSS J0318--0600 we used the ratios of
column densities among levels of the ground multiplets of \ion{Si}{2}
and \ion{C}{2}. Interestingly, there is a clear density contrast between
the main kinematic component in each object and the smaller components,
while all of the smaller components in each object exhibit roughly
the same density. The density contrast between the densest component and
the smaller ones in each object is $\sim 0.8$ dex for QSO 2359--1241
and $\sim 0.5$ dex for SDSS J0318--0600.
Next, we determine the ionization parameters characteristic of each
of the absorption components in both quasars. To this end, we designed
diagnostic plots by which the ionization parameter as well as the total
hydrogen column can be uniquely determined. These plots
demonstrate that: (1) any given ratio of column densities among
medium- and low-ionization species is a smooth function of the column density
for a fixed value of
the ionization parameter, and (2) these curves of
column density ratios vs. column density are monotonic with $U_H$.
There are various consequences of this: (a) for a given pair of
measured column densities (and fixed density, chemical composition, and
SED) no more than one solution in $U_H$ and $N_H$
can be found; (b) $U_H$ and $N_H$ and their errors
are necessarily correlated; (c) these solutions can be found either
graphically or numerically in a way much more efficient than
through generic numerical optimization techniques; (d) the graphical
nature of the diagnostic allows one to place the observations of
different observers on the same footing and gain valuable insights.
In the determination of $\vy{U}{H}$ and $\vy{N}{H}$, traditional plots of predicted column densities vs.
$\vy{N}{H}$ as abscissa have various disadvantages.
For every column density measured there is a whole family of solutions ($\vy{U}{H}$, $\vy{N}{H}$), thus both parameters must be determined simultaneously. A typical approach is to normalize the predicted column densities to the measured columns and then look for the intersection between the various curves. The disadvantage is that dividing theoretical values by measured values mixes up theoretical and observational uncertainties. Further, a solution based on intersections of curves, or of broad regions if uncertainties are accounted for in some fashion, offers no simple intuitive understanding of how the results may change in the presence of systematic effects in either the theoretical modeling or the observations. By contrast, the plots that we propose here clearly allow one to visualize the error bars of the measurements and their significance relative to the predictions of different models. One can also visualize how systematic effects in the calculations, such as the chemical composition or the shape of the SED, would shift the results of the diagnostics. Another important advantage of the proposed plots is the ability to put various kinematic components of the same trough on the same plot and compare them under equal conditions.
It is true that the proposed plots do not show the resulting $\vy{N}{H}$ explicitly, but once
$\vy{U}{H}$ is fixed the whole problem is solved and there is a direct correspondence
between the observed column density of a given species and $\vy{N}{H}$. Thus,
$\vy{N}{H}$ can be read directly from the tabulated solutions of the models.
We determined relative distances of the various kinematic components,
firstly under the assumption that all components see the same unattenuated
SED. It becomes immediately clear that the component with the largest column density
is always the first in line from the source.
Once the first kinematic component in line was identified for each
object we include the effect
of attenuation of the SED on the distance determination for the
remaining components. It was found that distance determinations that ignore
attenuation effects are significantly overestimated. By contrast, when
attenuation by the innermost component is considered in the distance
estimation, all the kinematic components in the
absorption troughs are found in close proximity to each other, and are possibly related. This result, if found to be generally true in most FeLoBALs, ought to have
important consequences for our understanding of the dynamics and energetics of
quasar outflows.
\acknowledgments
We acknowledge support from NSF grant AST 0507772 and from NASA LTSA grant
NAG5-12867.
\section*{Acknowledgments}
We thank Phil Armitage, the referee, for
comments that greatly improved the paper's presentation,
and we thank Scott Noble for detailed discussions
about his work.
We thank Charles Gammie, Chris Reynolds, Jim Stone,
Kris Beckwith, John Hawley, Julian Krolik,
Chris Done, Chris Fragile, Martin Pessah, and Niayesh Afshordi
for useful discussions.
This work was supported in part by NASA grant
NNX08AH32G (AT \& RN), NSF grant AST-0805832 (AT \& RN),
NASA Chandra Fellowship PF7-80048 (JCM),
an NSF Graduate Research Fellowship (RFP),
and by the NSF through TeraGrid resources provided by
NCSA (Abe), LONI (QueenBee), NICS (Kraken)
under grant numbers TG-AST080025N and TG-AST080026N.
\input{msappendix.tex}
\bibliographystyle{mnras}
\section{Example Solutions and Scalings for the Gammie (1999) Model}
\label{sec_gammie}
Table~\ref{tbl_gammie} gives representative solutions for the \citet{gammie99} model
of a magnetized thin accretion flow.
The columns correspond to the black hole spin, $a$;
the specific magnetic flux, $\Upsilon$;
the nominal efficiency, ${\tilde{e}}$;
percent deviation of ${\tilde{e}}$ from the NT value;
the specific angular momentum, $\jmath$;
percent deviation of $\jmath$ from NT;
and the normalized rate of change of the dimensionless black hole spin, $s$ (see Eq.~\ref{spinevolve}).
For $\Upsilon\lesssim 0.5$ and across all black hole spins,
the relative change in the specific angular momentum is less than $5\%$
and the relative change in the efficiency is less than $9\%$.
For small values of $\Upsilon\lesssim 1$, the
deviations of $\jmath$ and ${\tilde{e}}$
from NT behave systematically and one can derive simple fitting functions.
For $\jmath$ we find
\begin{eqnarray}
&{}& \log_{10}\left[-D[\jmath]\right] \nonumber \\
&\approx& 0.79 + 0.37 (a/M) + 1.60 \log_{10}\Upsilon \\
&\sim& (4/5) + (1/3)(a/M) + (8/5)\log_{10}\Upsilon ,
\end{eqnarray}
with an L2 error norm of $0.7\%,0.7\%$, respectively,
for the first and second relations, while for
${\tilde{e}}$ we find
\begin{eqnarray}
&{}&\log_{10}\left[D[{\tilde{e}}]\right]\nonumber \\
&\approx& 1.44 + 0.12 (a/M) + 1.60 \log_{10}\Upsilon \\
&\sim& (3/2) + (1/10)(a/M) + (8/5)\log_{10}\Upsilon ,
\end{eqnarray}
with an L2 error norm of $0.9\%,1\%$, respectively,
for the first and second relations.
These results indicate that the deviations from NT scale as $\Upsilon^{8/5}$
for $\Upsilon\lesssim 1$. For $\Upsilon\gtrsim 1$,
the index on $\Upsilon$ depends on the spin parameter.
In the span from $\Upsilon\sim 0.2$ to $\Upsilon\sim 1$,
a linear fit across all black hole spins
gives $-D[\jmath]\sim -1+11\Upsilon$ and $D[{\tilde{e}}]\sim -4+33\Upsilon$,
which are rough, though reasonable looking, fits.
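The two fitting functions can be sketched directly in code (valid for $\Upsilon\lesssim 1$; the function names are ours, and the fits agree with the representative entries of Table~\ref{tbl_gammie} at roughly the few-percent level):

```python
import math

def fit_dev_jmath(a, upsilon):
    """Fitted percent deviation of specific angular momentum from NT:
    -D[j] = 10**(0.79 + 0.37*(a/M) + 1.60*log10(Upsilon))."""
    return -10.0 ** (0.79 + 0.37 * a + 1.60 * math.log10(upsilon))

def fit_dev_eff(a, upsilon):
    """Fitted percent deviation of the nominal efficiency from NT:
    D[e~] = 10**(1.44 + 0.12*(a/M) + 1.60*log10(Upsilon))."""
    return 10.0 ** (1.44 + 0.12 * a + 1.60 * math.log10(upsilon))

# e.g. a/M = 0.9, Upsilon = 0.5: fit gives D[j] ~ -4.4% (table entry: -4.5%)
```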
\begin{table}
\caption{Thin Magnetized Inflow Solutions}
{\small
\begin{center}
\begin{tabular}{lllllll}
\hline
$\frac{a}{M}$ & $\Upsilon$ & ${\tilde{e}}$ & $D[{\tilde{e}}]$ & $\jmath$ & $D[\jmath]$ & $s$ \\
\hline
0 & 0.1 & 0.0576 & 0.709 & 3.46 & -0.172 & 3.46 \\
0 & 0.2 & 0.0584 & 2.14 & 3.45 & -0.52 & 3.45 \\
0 & 0.3 & 0.0595 & 4.08 & 3.43 & -0.991 & 3.43 \\
0 & 0.5 & 0.0624 & 9.17 & 3.39 & -2.23 & 3.39 \\
0 & 1 & 0.0727 & 27.1 & 3.24 & -6.57 & 3.24 \\
0 & 1.5 & 0.0859 & 50.2 & 3.04 & -12.2 & 3.04 \\
0 & 6 & 0.19 & 232 & 1.51 & -56.4 & 1.51 \\
0.7 & 0.1 & 0.105 & 1.03 & 2.58 & -0.286 & 1.33 \\
0.7 & 0.2 & 0.107 & 3.07 & 2.56 & -0.853 & 1.31 \\
0.7 & 0.3 & 0.11 & 5.8 & 2.54 & -1.61 & 1.3 \\
0.7 & 0.5 & 0.117 & 12.8 & 2.49 & -3.57 & 1.26 \\
0.7 & 1 & 0.142 & 36.7 & 2.32 & -10.2 & 1.12 \\
0.7 & 1.5 & 0.172 & 66.3 & 2.11 & -18.5 & 0.95 \\
0.7 & 6 & 0.477 & 360 & -0.00714 & -100 & -0.74 \\
0.9 & 0.1 & 0.157 & 1.17 & 2.09 & -0.386 & 0.576 \\
0.9 & 0.2 & 0.161 & 3.37 & 2.08 & -1.11 & 0.567 \\
0.9 & 0.3 & 0.165 & 6.29 & 2.06 & -2.07 & 0.555 \\
0.9 & 0.5 & 0.177 & 13.7 & 2.01 & -4.5 & 0.524 \\
0.9 & 1 & 0.215 & 38.3 & 1.84 & -12.6 & 0.423 \\
0.9 & 1.5 & 0.262 & 68.3 & 1.63 & -22.5 & 0.3 \\
0.9 & 6 & 0.845 & 443 & -0.958 & -146 & -1.24 \\
0.98 & 0.1 & 0.236 & 0.949 & 1.68 & -0.397 & 0.179 \\
0.98 & 0.2 & 0.241 & 2.86 & 1.66 & -1.2 & 0.174 \\
0.98 & 0.3 & 0.246 & 5.36 & 1.64 & -2.25 & 0.168 \\
0.98 & 0.5 & 0.261 & 11.6 & 1.6 & -4.9 & 0.152 \\
0.98 & 1 & 0.309 & 32.2 & 1.45 & -13.6 & 0.1 \\
0.98 & 1.5 & 0.368 & 57.1 & 1.28 & -24.1 & 0.0379 \\
0.98 & 6 & 1.21 & 416 & -1.27 & -175 & -0.862 \\
0.998 & 0.1 & 0.319 & -0.63 & 1.4 & 0.344 & 0.0374 \\
0.998 & 0.2 & 0.327 & 2.02 & 1.38 & -1.11 & 0.0342 \\
0.998 & 0.3 & 0.332 & 3.66 & 1.36 & -2 & 0.0322 \\
0.998 & 0.5 & 0.345 & 7.73 & 1.33 & -4.22 & 0.0273 \\
0.998 & 1 & 0.388 & 20.9 & 1.23 & -11.4 & 0.0113 \\
0.998 & 1.5 & 0.439 & 37 & 1.11 & -20.2 & -0.00819 \\
0.998 & 6 & 1.19 & 272 & -0.675 & -148 & -0.292 \\
\hline
\end{tabular}
\end{center}
}
\label{tbl_gammie}
\end{table}
\section{Inflow Equilibrium Timescale in the Novikov-Thorne Model}
\label{sec_inflow}
The radius out to which inflow equilibrium is achieved in a given
time may be estimated by calculating the mean radial velocity $v_r$
and then deriving from it a viscous timescale $-r/v_r$.
When the flow has achieved steady state, the accretion rate,
\begin{equation}\label{eq:mdot}
\dot{M}=-2\pi r\Sigma v_r \mathcal{D}^{1/2},
\end{equation}
is a constant independent of time and position. Here we derive an
expression for $v_r$ corresponding to the general relativistic NT thin
disk model. In what follows, capital script letters denote standard
functions of $r$ and $a$ (cf. eqns. (14) and (35) in \citet{pt74})
which appear as relativistic corrections in otherwise Newtonian
expressions. They reduce to unity in the limit
$r/M\rightarrow\infty$.
The vertically-integrated surface density may be defined as $\Sigma=2
h\rho$, where $h$ is the disk scale-height and $\rho$ is the rest-mass
density at the midplane. In equilibrium, density is related to
pressure by
\begin{align}
\frac{dp}{dz}&=\rho \times (\mbox{``acceleration of gravity''})\\
&=\rho\frac{Mz}{r^3}\frac{\mathcal{B}^2\mathcal{D}\mathcal{E}}{\mathcal{A}^2\mathcal{C}},
\end{align}
the vertically-integrated solution of which is
\begin{equation}
h=(p/\rho)^{1/2}/|\Omega| \mathcal{A}\mathcal{B}^{-1}\mathcal{C}^{1/2}\mathcal{D}^{-1/2}\mathcal{E}^{-1/2}.
\end{equation}
The pressure may be parameterized in terms of the viscous stress,
$|t_{\hat{r}\hat{\phi}}|=\alpha p$, which is a known function of $r$ and
$a$:
\begin{equation}
W=2ht_{\hat{r}\hat{\phi}}=\frac{\dot{M}}{2\pi}\Omega\frac{\mathcal{C}^{1/2}\mathcal{Q}}{\mathcal{B}\mathcal{D}}.
\end{equation}
The surface density is then
\begin{equation}
\Sigma=\frac{1}{2\pi}\frac{\dot{M}}{\alpha h^2|\Omega|}\mathcal{A}^{2}\mathcal{B}^{-3}\mathcal{C}^{3/2}\mathcal{D}^{-2}\mathcal{E}^{-1}\mathcal{Q}.
\end{equation}
Substituting this in Eq. (\ref{eq:mdot}), the radial velocity is
\begin{equation}\label{eq:inflow}
v_r=-\alpha|h/r|^2|\Omega| r \mathcal{A}^{-2}\mathcal{B}^{3}\mathcal{C}^{-3/2}\mathcal{D}^{3/2}\mathcal{E}\mathcal{Q}^{-1}.
\end{equation}
This result is independent of the exact form of the pressure and
opacity and so is valid in all regions of the disk. The inflow
equilibrium time may be estimated as $t_{\rm ie}\sim
-2r/v_r$ (the minus sign accounts for $v_r<0$).
\section{Comparisons with Other Results}
\label{sec:comparison}
The results we have obtained in the present work are consistent with
those of \citet{arc01} and \citet{rf08}, who carried out pseudo-Newtonian studies,
and with the results of S08, who performed a full GRMHD simulation.
These studies found only minor deviations from NT for thin accretion disks
with a multi-loop initial field geometry. However, more recently,
N09 and N10 report {\it apparently} inconsistent results,
including deviations from NT up to five times larger
in the specific angular momentum ($2\%$ in S08 versus $10\%$ in N10)
for the same disk thickness of $|h/r|\sim 0.07$.
They also find a $50\%$ larger deviation
from NT in the luminosity ($4\%$ in S08 versus $6\%$ in N09).
Furthermore, in N10 they conclude that the electromagnetic stresses
have no dependence on disk thickness or initial magnetic field geometry,
whereas we find that the electromagnetic stresses have a statistically significant dependence
on both disk thickness and magnetic field geometry.
We have considered several possible explanations for these differences,
as we now describe.
We attempt to be exhaustive in our comparison
with the setup and results of N09 and N10,
because both our work and theirs seek accuracies much better than a factor of two
in measuring deviations from NT.
Thus, any disagreement between our respective results at the factor-of-two level or larger
must be investigated further in order to ensure a properly understood and accurate result.
First, we briefly mention some explanations that N10 propose
as possible reasons for the
discrepant results, viz., differences in
1) numerical algorithm or resolution;
2) box size in $\phi$-direction: $\Delta\phi$;
3) amplitude of initial perturbations;
4) accuracy of inflow equilibrium;
and 5) duration of the simulations.
Our algorithms are similar except that their PPM
interpolation scheme assumes primitive quantities are cell averages (standard PPM),
while ours assumes they are point values (as required for use in a higher-order scheme).
They used LAXF dissipative fluxes,
while we used HLL fluxes, which are about twice as accurate for shocks
and may be more accurate in general.
On the other hand, they used parabolic interpolation for the T\'oth electric field,
while we use the standard T\'oth scheme.
Given these facts, we expect that the accuracy of our algorithms is similar.
Overall, our convergence testing and other diagnostics (see \S\ref{sec:convergence})
confirm that none of their proposed issues can be the cause of differences between S08 and N10.
We have shown that inflow equilibrium must
include saturation of the specific magnetic flux, $\Upsilon$,
which generally saturates later in time than other quantities.
By running our fiducial model A0HR07 to a time of nearly $30000M$,
we ensure that we have a long period of steady state conditions
to compute our diagnostic quantities. The fact that we need to run our fiducial thin disk simulation for such a long time
to reach inflow equilibrium up to a radius $r\sim 9M$ is completely consistent with our
analytical estimate of the time scale calculated using Eq. \ref{eq:inflow} of Appendix~\ref{sec_inflow} (see
the earlier discussion in \S\ref{sec_infloweq}
and Fig. \ref{velvsr}). In the comparison between the numerical and analytical
results shown in Figure~\ref{velvsr}, we found agreement
by setting $\alpha |h/r|^2\approx 0.00033$ which,
for our disk with $|h/r|\approx 0.064$,
corresponds to $\alpha\approx 0.08$.
With this value of $\alpha|h/r|^2$, we would have to run
the simulation until $t\sim 83000M$ or $160000M$ to reach inflow
equilibrium out to $15M$ or $20M$, respectively,
corresponding to a couple of viscous timescales at those radii.
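The value of $\alpha$ quoted above follows from a one-line calculation (a minimal sketch; the variable names are ours):

```python
alpha_hr2 = 0.00033   # alpha * |h/r|^2 fitted to the simulated inflow velocity
hr = 0.064            # measured disk thickness |h/r| of the fiducial run

alpha = alpha_hr2 / hr ** 2
print(f"alpha ~ {alpha:.2f}")   # ~ 0.08, as quoted in the text
```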
N10 state that they reach inflow equilibrium within $r\sim 15M$--$20M$
in a time of only $t\sim 10000M$.
Since their disk thickness is $|h/r|\approx 0.06$,
even a single viscous timescale would require
their simulations to have $\alpha\sim0.38$ to reach inflow equilibrium up to $r\sim 15M$,
and an even larger value of $\alpha$ for $r\sim20M$. This seems unlikely.
We can partially account for their result by considering our 1-loop model,
which up to $t\sim 17000M$
has $\alpha |h/r|^2$ twice as large and $\alpha$
about $70\%$ larger than in the fiducial 4-loop run.
However, this still falls short, by a factor of roughly $3$,
of what N10 would require for inflow equilibrium up to $15M$--$20M$.
Further, our A0HR07LOOP1 model, which is similar to their model,
only reaches a saturated state
by $17000M$, and only the $\Upsilon$ quantity indicates
that saturation has been reached.
If we were to measure quantities from $10000M$ to $15000M$,
as in N10, we would underestimate the importance of magnetic field geometry
for the electromagnetic stresses.
Since all these simulations are attempting to obtain accuracies
better than factors of two in the results,
this inflow equilibrium issue should be explored further.
A few possible resolutions include:
1) N10's higher resolution leads to a much larger $\alpha$;
2) their disk has a larger ``effective'' thickness, e.g. $|h/r|\sim 0.13$,
according to Eq. 5.9.8 in NT (see Eq. \ref{eq:inflow} of Appendix~\ref{sec_inflow});
3) some aspects of their solution have not yet reached inflow equilibrium
within a radius much less than $r\sim 15M$,
such as the value of $\Upsilon$ vs. time that saturates much later than other quantities;
or 4) they achieve constant fluxes vs. radius due to transient non-viscous effects
-- although one should be concerned that the actual value of such fluxes
continues to secularly evolve in time and one still requires evolution
on the longer viscous (turbulent diffusion) timescale to reach true inflow equilibrium.
Second, we considered various physical setup issues, including differences in:
1) range of black hole spins considered;
2) range of disk thicknesses studied;
3) ad hoc cooling function;
and 4) equation of state.
We span the range of black hole spins and disk thicknesses studied by N10,
so this is unlikely to explain any of the differences.
Some differences could be due to the disk thickness vs. radius
profiles established by the ad hoc cooling functions in the two studies.
N10's cooling function is temperature-based and
allows cooling even in the absence of any dissipation,
while ours is based upon the specific entropy and
cools the gas only when there is dissipation.
Both models avoid cooling unbound gas.
In S08 and in the present paper, we use an ideal gas equation of
state with $\Gamma=4/3$,
while N09 and N10 used $\Gamma=5/3$.
The properties of the turbulence do appear to depend on the equation
of state \citep{mm07}, so it is important to investigate further
the role of $\Gamma$ in thin disks.
Third, the assumed initial field geometry may introduce
critical differences in the results.
Issues with the initial field geometry include
how many field reversals are present,
how isotropic the field loops are in the initial disk,
how the electromagnetic field energy is distributed in the initial disk,
and how the magnetic field is normalized.
In S08 and here, we have used a multi-loop geometry in the initial torus
consisting of alternating polarity poloidal field bundles stacked radially.
We ensure that the field loops are roughly isotropic within the initial torus.
We set the ratio of maximum gas pressure to maximum magnetic pressure
to $\beta_{\rm maxes}=100$, which gives us a volume-averaged mean $\beta$ within
the dense part of the torus ($\rho_0/\rho_{0,\rm max}\ge 0.2$) of $\bar{\beta}\sim 800$.
Our procedure ensures that all initial local values of $\beta$ within the disk
are much larger than the values in the evolved disk, i.e., there is
plenty of room for the magnetic field to be amplified by the MRI.
We have also studied a 1-loop geometry that is
similar to the 1-loop geometry used in N09 and N10.
Their initial $\phi$-component of the vector potential is
$A_\phi\propto {\rm MAX}(\rho_0/\rho_{0,\rm max} - 0.25,0)$
(Noble, private communication).
They initialize the magnetic field geometry
by ensuring that the volume-averaged gas pressure divided by
the volume-averaged magnetic pressure is $\beta_{\rm averages}=100$
(Noble, private communication).
(They stated that the mean initial plasma $\beta$ is $\bar{\beta}=100$.)
For their thin disk torus model parameters,
this normalization procedure causes a portion of the
inner radial region of the torus to have a local value of $\beta\sim 3$--$8$,
which may be a source of differences in our results.
Such a small $\beta$ is lower than is present in the saturated disk.
N10 make use of an older set of simulations from a different non-energy-conserving
code \citep{hk06,bhk08} to investigate the effect of other field geometries.
The results from this other code include strong outliers, e.g., the KD0c model,
and so we are unsure whether these other simulations can be used for such a study.
N10 state that they find no clear differences in the electromagnetic stresses
for different initial field geometries.
As shown in their figures 12 and 13, the \citet{ak00} model
captures the smoothing of the stress outside the ISCO,
but it is not a model for the behavior of the stress inside the ISCO.
We find that electromagnetic stresses have a clear dependence
on both disk thickness and the initial
magnetic field geometry, with a trend that agrees
with the \citet{gammie99} model of a magnetized thin disk.
Our Figure~\ref{loop1manystress} shows that the stress within the ISCO
is reasonably well modelled by the \citet{gammie99} model.
Our 1-loop thin disk model gives a peak normalized stress (integrated over all angles)
of about $3.2\times 10^{-3}$ for times $12900M$ to $17300M$,
which is comparable to the 1-loop thin disk model
in N10 with peak normalized stress (integrated over all angles) of about $2.5\times 10^{-3}$
(after correcting for their $\phi$-box size).
Hence, we are able to account for the results of their 1-loop model.
In addition, we used the specific magnetic flux, $\Upsilon$,
an ideal MHD invariant that is conserved within the ISCO,
to identify how electromagnetic stresses scale with disk thickness and magnetic field geometry.
In the saturated state, the value of $\Upsilon$,
which controls the electromagnetic stresses,
is different for different initial magnetic field geometries.
We find that $\jmath$ within the disk ($\pm 2|h/r|$ from the midplane)
deviates from NT by
$-3\%$ in our 4-loop model and $-6\%$ in our 1-loop model
for times $12900M$ to $17300M$.
Integrating over all angles, $\jmath$ deviates by $-6\%$ for the 4-loop
model and $-11\%$ for the 1-loop model for times $12900M$ to $17300M$.
Thus, we find a clear factor of two change, depending on
the assumed initial field geometry and the range of integration.
The excess luminosity is $3.5\%$ for the 4-loop model
and $5.4\%$ for the 1-loop model for times $12900M$ to $17300M$.
Recalling that N10 find a deviation from NT of about $-10\%$ in $\jmath$
(integrated over all angles) and a luminosity excess beyond NT of about $6\%$,
this shows we can completely account for the {\it apparent}
inconsistencies mentioned by N10 by invoking
dependence of the results on the initial field geometry
and the presence of extra stress beyond the disk component of the accretion flow.
Fourth, let us consider measurement and interpretation differences.
Our ultimate goal is to test how well the NT model
describes a magnetized thin accretion disk.
The primary quantity that is used to measure this effect in S08 and N10
is the specific angular momentum $\jmath$. However, the
measurements are done differently in the two studies.
In S08 as well as in this paper, we focus on the disk gas
by limiting the range of $\theta$ over which we compute the
averaging integrals ($\pm2|h/r|$ from the midplane).
In contrast, N10 compute their integrals
over a much wider range of $\theta$ which includes
both the disk and the corona-wind-jet
(Noble, 2010, private communications).
We have shown in \S~\ref{sec_fluxdiskcorona}
that the disk and corona-wind-jet contribute
roughly equally to deviations of $\jmath$ from the NT value.
In principle, the luminosity from the corona-wind-jet could be important,
but we have shown that the excess luminosity of bound gas
within the ISCO is dominated by the disk.
This means that the measure used by N10,
consisting of integrating over all gas to obtain $\jmath$,
cannot be used to infer the excess luminosity of bound gas within the ISCO.
Further, the corona would largely emit non-thermal radiation,
so for applications in which one is primarily interested in the thermal component
of the emitted radiation, one should evaluate the accuracy of the NT model by
restricting the angular integration range to the disk component within $\pm 2|h/r|$.
Fifth, let us consider how the results from N10 scale with disk
thickness for the specific case of a non-spinning ($a/M=0$) black
hole. We have performed a linear least squares fit of
their simulation results, omitting
model KD0c which is a strong outlier. For $\jmath$ integrated
over all $\theta$, their relative difference follows
$D[\jmath]\approx -7 - 45|h/r|$,
with $95\%$ confidence that these two coefficients
deviate by no more than $\pm 67\%$ and $\pm 89\%$, respectively.
These fits imply that, as $|h/r|\to 0$, the
relative deviation of $\jmath$ from the NT value is about $-7\%$,
but it could easily be as small in magnitude as $-2\%$.
Their results do not indicate a statistically significant
large deviation from NT as $|h/r|\to 0$.
Since the total deviation in $\jmath$ from NT includes the effects of
electromagnetic (and all other) stresses, this implies that
their models are consistent with weak electromagnetic stresses as $|h/r|\to 0$.
Further, we have already established that the 1-loop geometry gives
(at least) twice the deviation from NT compared to the 4-loop geometry,
plus there is another factor of two arising from including
the corona-wind-jet versus not including it.
This net factor of 4 applied to N10's results implies
that $\jmath$ would deviate by about $-2\%$ or even as low as $-0.5\%$
from NT in the limit $|h/r|\to 0$ if they were to consider a 4-loop field geometry
and focus only on the disk gas.
Thus, their models show no statistically significant
large deviations from NT.
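The arithmetic behind these estimates can be sketched as follows, assuming the linear fit above and the two factor-of-two corrections (the helper name is ours):

```python
def D_jmath(hr):
    """N10's relative deviation of jmath from NT (percent), from our
    least-squares fit to their all-angle, 1-loop results."""
    return -7.0 - 45.0 * hr

# At N10's disk thickness the fit reproduces their quoted ~ -10%:
print(D_jmath(0.07))          # ~ -10.15

# Extrapolating to zero thickness leaves the intercept, about -7%:
print(D_jmath(0.0))           # -7.0

# Correcting by a factor of 2 for field geometry (4-loop vs 1-loop)
# and another factor of 2 for excluding the corona-wind-jet:
print(D_jmath(0.0) / 4.0)     # ~ -1.75, i.e. about -2%
```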
In addition, our results in section~\ref{scalinglaws}
show that, whether we consider
an integral over all angles or only over the disk, there is
no statistically significant large deviation from NT as $|h/r|\to 0$.
In summary, we conclude that the apparent differences between
the results obtained in S08 and the present paper on the one hand,
and those reported
in N09 and N10 on the other, are due to
1) dependence on initial magnetic field geometry (multi-loop vs 1-loop);
2) dependence upon the initial magnetic field distribution and normalization;
and 3) measurement and interpretation differences
(disk vs. corona-wind-jet).
Note in particular that the 1-loop initial field geometry is
severely squashed in the vertical direction and elongated
in the radial direction for thin disks,
and it is not clear that such a geometry would ever arise naturally.
There are also indications from our simulation that the 1-loop geometry
may actually never reach a converged state, because the single polarity
of the initial magnetic field allows an arbitrary
amount of magnetic flux to be accreted onto the black hole.
Finally, if one is trying to test how well
simulated thin accretion disks compare with NT,
then it is important to restrict the comparison to disk
gas near the midplane.
One should not expect the gas in the corona-wind-jet
to agree with the NT model.
\section{Conclusions}
\label{sec:conclusions}
We set out in this study to test the standard model of thin accretion
disks around rotating black holes as developed by \citet{nt73}. We
studied a range of disk thicknesses and black hole spins and found
that magnetized disks are consistent with NT to within a few percent
when the disk thickness $|h/r|\lesssim 0.07$. In addition, we noted
that deviations from the NT model decrease with decreasing $|h/r|$.
These results suggest that black
hole spin measurements via the x-ray continuum-fitting method
\citep{zcc97,shafee06,mcc06,ddb06,liu08,gmlnsrodes09,gmsncbo10}, which
are based on the NT model, are robust to model uncertainties so long
as $|h/r|\lesssim 0.07$. At luminosities below $30\%$ of Eddington,
we estimate disk thicknesses to be $|h/r|\lesssim0.05$, so the NT
model is perfectly adequate.
These results were obtained by performing global 3D GRMHD simulations
of accreting black holes with a variety of disk thicknesses, black
hole spins, and initial magnetic field geometries in order to test how these
affect the accretion disk structure, angular momentum transport,
luminosity, and the saturated magnetic field. We explicitly tested
the convergence of our numerical models by considering a range of
resolutions, box sizes, and amplitude of initial perturbations.
As with all numerical studies, future calculations should continue to clarify what
aspects of such simulations are converged by performing more parameter
space studies and running the simulations at much higher resolutions.
For example, it is possible that models with different black hole
spins require more or less resolution than the $a=0$ models,
while we fixed the resolution for all models and only tested convergence
for the $a=0$ models.
We confirmed previous results by S08 for a non-spinning ($a/M=0$) black
hole, which showed that thin ($|h/r|\lesssim 0.07$) disks initialized
with multiple poloidal field loops agree well with the NT
solution once they reach steady state. For the fiducial model
described in the present paper, which has parameters similar to those of the
S08 model, we find $2.9\%$ relative deviation in the specific angular
momentum accreted through the disk, and $3.5\%$ excess luminosity from
inside the ISCO. Across all black hole spins that we have considered,
viz., $a/M=0, ~0.7, ~0.9, ~0.98$, the relative deviation from NT in
the specific angular momentum is less than $4.5\%$, and the luminosity
from inside the ISCO is less than $7\%$ (typically smaller, and
much of it is likely lost to the hole). In addition, all
deviations from NT appear to be roughly proportional to $|h/r|$.
We found that the assumed initial field geometry modifies the
accretion flow. We investigated this effect by considering two
different field geometries and quantified it by measuring the specific
magnetic flux, $\Upsilon$, which is an ideal MHD invariant (like the specific
angular momentum or specific energy). The specific magnetic flux can
be written as a dimensionless free parameter that enters the
magnetized thin disk model of \citet{gammie99}. This
parameter determines how much the flow deviates from NT as a result of
electromagnetic stresses.
We found that $\Upsilon$ allows a quantitative understanding
of the flow within the ISCO, while the electromagnetic stress ($W$)
has no well-defined normalization and varies widely within the ISCO.
While a plot of the stress may appear to show large stresses
within the ISCO, the actual deviations from NT can be small.
This demonstrates that simply plotting $W$ is not a useful diagnostic
for measuring deviations from NT.
We found that the specific magnetic flux of the
gas inside the ISCO was substantially larger when we used a single
poloidal magnetic loop to initialize the simulation compared to our
fiducial 4-loop run. For $a/M=0$ and $|h/r|\lesssim 0.07$, the
early saturated phase (times $12900M$ to $17300M$)
of the evolution for the 1-loop
geometry gave $5.6\%$ relative deviation in the specific angular
momentum and $5.8\%$ excess luminosity inside the ISCO. These
deviations are approximately twice as large as the ones we found for
the 4-loop simulation.
At late times, the 1-loop model generates significant
deviations from NT, which is a result similar to that found
in a vertical field model in \citet{mg04}.
However, we argued that the multiple loop geometry we used
is more natural than the single loop geometry, since
for a geometrically thin disk the magnetic field in the 1-loop model
is severely squashed vertically and highly elongated radially.
The 1-loop model is also likely to produce a strong radio jet.
More significant deviations from NT probably
occur for disks with strong ordered magnetic field, as found in 2D
GRMHD simulations by \citet{mg04}. Of course, in the limit that the
magnetic field energy density near the black hole exceeds the local
rest-mass density, a force-free magnetosphere will develop and
deviations from the NT model will become extreme.
We argued that this corresponds to when the specific magnetic flux
$\Upsilon\gtrsim 1$ near the disk midplane.
Our 1-loop model appears to be entering such a phase
at late time after accumulation of a significant amount of magnetic flux.
Such situations likely produce powerful jets that are
not observed in black hole x-ray binaries in the thermal state.
However, transition between the thermal state
and other states with a strong power-law component \citep{fend04a,remm06}
may be partially controlled by the accumulation of magnetic flux
causing the disk midplane (or perhaps just the corona)
to breach the $\Upsilon\sim 1$ barrier.
Such behavior has been studied in the non-relativistic regime \citep{nia03,ina03,igu09},
but more work using GRMHD simulations is required to validate this picture.
We also found that the apparently different results obtained
by N10 were mostly due to measurement and interpretation differences.
We found that both the disk and the corona-wind-jet contribute nearly
equally to deviations in the total specific angular momentum relative
to the NT model. However, the corona-wind-jet
contributes much less to the luminosity than the disk component.
Therefore, if one is interested in comparing the luminous portion
of the disk in the simulations against the NT model,
the only fair procedure is to consider only the disk gas,
i.e., gas within a couple of scale heights of the midplane. This is
the approach we took in this study (also in S08). N10 on the other
hand included the corona-wind-jet gas in their calculation of the specific angular
momentum. The dynamics of
the coronal gas differs considerably from the NT model. Therefore,
while it does not contribute to the luminosity of bound gas,
it doubles the deviation of the specific angular momentum from the NT model.
In addition, N10 used a 1-loop initial field geometry for their work which,
as discussed above, further enhanced deviations.
\section{Convergence with Resolution and Box Size}
\label{sec:convergence}
\begin{figure}
\begin{center}
\includegraphics[width=3.3in]{f14.eps}
\end{center}
\caption{This plot shows $\jmath$, $\jmath_{\rm in}$, and ${\tilde{e}}$
for a sequence of simulations that are similar to the fiducial run
(A0HR07), viz., $|h/r|\approx 0.07$, $a/M=0$, but use different
radial resolutions, or $\theta$ resolutions, or box sizes.
The integration range in $\theta$ is over $\pm 2|h/r|$ around the midplane.
Only the region of the flow in inflow equilibrium, $2M<r<9M$, is
shown in the case of $\jmath$.
The different lines are as follows:
black dashed line: NT model; black solid line: fiducial model A0HR07;
blue solid line: model C0 (S08);
magenta dotted line: model C1;
magenta solid line: model C2;
red dotted line: model C3;
red solid line: model C4;
green dotted line: model C5;
green solid line: model C6.
Note that changes in the numerical resolution
or other computational parameters lead to
negligible changes in the values of $\jmath$, $\jmath_{\rm in}$,
and ${\tilde{e}}$ in the region of the flow that is in inflow equilibrium, $r<9M$.
For $r\gtrsim 9M$, the flow has not achieved steady state,
which explains the large deviations in ${\tilde{e}}$.
Only the lowest resolution models are outliers.
}
\label{fluxconvramesh}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.3in]{f15.eps}
\end{center}
\caption{Similar to Figure~\ref{fluxconvramesh},
but for the normalized luminosity, $L(<r)/\dot{M}$,
and its logarithmic derivative, $d(L/\dot{M})/d\ln{r}$, both shown vs. radius.
We see that all the models used to test convergence show consistent
luminosity profiles over the region that is in inflow equilibrium, $r<9M$.
The well-converged models have
$\tilde{L}_{\rm in}\lesssim 4\%$,
which indicates only a low level of luminosity
inside the ISCO.
}
\label{lumconvramesh}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.3in,clip]{f16.eps}
\end{center}
\caption{This is a more detailed version of Figure~\ref{fluxconvramesh},
showing $\jmath$ vs $r$ for individually labeled models.
The models correspond to the fiducial resolution (solid lines),
a higher resolution run (dot-dashed lines),
and a lower resolution run (dotted lines).
Generally, there are only minor differences between the fiducial and higher resolution models.
}
\label{jin4panel}
\end{figure}
\input{table_conv.tex}
The fiducial model described earlier was computed with a numerical
resolution of $256\times64\times32$, using an azimuthal wedge of
$\pi/2$. This is to be compared with the simulation described in S08,
which made use of a $512\times128\times32$ grid and used a $\pi/4$ wedge.
These two runs give very similar results, suggesting that the details
of the resolution and wedge size are not very important.
To confirm this, we have run a number of simulations with
different resolutions and wedge angles. The complete list
of runs is:
$256\times64\times32$ with $\Delta\phi=\pi/2$ (fiducial run, model A0HR07),
$512\times128\times32$ with $\Delta\phi=\pi/4$ (S08, model C0),
$256\times128\times32$ with $\Delta\phi=\pi/2$ (model C6),
$256\times32\times32$ with $\Delta\phi=\pi/2$ (model C5),
$256\times64\times64$ with $\Delta\phi=\pi/2$ (model C4),
$256\times64\times64$ with $\Delta\phi=\pi$ (model C2),
$256\times64\times16$ with $\Delta\phi=\pi/2$ (model C3), and
$256\times64\times16$ with $\Delta\phi=\pi/4$ (model C1).
Figure~\ref{fluxconvramesh} shows the
accreted specific angular momentum, $\jmath$,
ingoing component of the specific angular momentum, $\jmath_{\rm in}$,
and the nominal efficiency ${\tilde{e}}$ as functions of radius
for all the models used for convergence testing.
Figure~\ref{lumconvramesh} similarly shows the cumulative
luminosity $L(<r)/\dot{M}$ and differential luminosity $d(L/\dot{M})/d\ln{r}$
as functions of radius.
The overwhelming impression from these plots is that the
convergence simulations agree with one another quite
well. Also, the average of all the runs matches the NT model very
well; this is especially true for the steady-state region of the flow,
$r<9M$. Thus, qualitatively, we conclude that our results
are well-converged.
For more quantitative comparison, Figure~\ref{jin4panel} shows the
profile of $\jmath$ vs $r$
for the various models, this time with each model separately identified.
It is clear that $\jmath$ has converged,
since there are only very minor differences between our
highest resolution/largest box size run and our next
highest resolution/next largest box size run.
All other quantities, including ${\tilde{e}}$, $\jmath_{\rm in}$, and $\Upsilon$
are similarly converged.
The model with $N_\phi=64$ shows slightly {\it smaller} deviations from NT
in $\jmath$ than our other models.
However, it also shows slightly higher luminosity than our other models.
This behavior is likely due to the stochastic temporal
variations of all quantities,
but it could also be due to the higher $\phi$-resolution producing
a weaker ordered magnetic field,
leading to weaker ideal electromagnetic stresses and
smaller deviations from NT in $\jmath$ within the ISCO,
while dissipation of the remaining turbulent field gives a higher luminosity.
The $N_\phi$ resolution appears to be the limit on our accuracy.
Further quantitative details are given in Table~\ref{tbl_resolution},
where we list numerical results for all the convergence test
models, with the $\theta$ integration
performed over both $\pm 2|h/r|$ around the midplane and over all angles.
We see that there are some trends as a
function of resolution and/or $\Delta\phi$. Having only $32$ cells in
$\theta$ or $16$ cells in $\phi$ gives somewhat poor results, so these
runs are under-resolved. However, even for these
runs, the differences are not large.
Note that $\Upsilon$ reaches a steady state much later than all other
quantities, and our convergence test models C0--C6
did not run as long as the fiducial model.
This explains why $\Upsilon$ is a bit lower for these models.
Overall, we conclude that our choice of resolution $256\times 64\times 32$
for the fiducial run (A0HR07) is adequate to reach convergence.
\section{Diagnostics}
\label{sec:diagnostics}
In this section, we describe several important diagnostics
that we have found useful in this study.
First, since we regulate the disk height via an ad hoc cooling function,
we check the scale height of the simulated disk
as a function of time and radius
to ensure that our cooling function operates properly.
Second, the equations we solve consist of
seven time-dependent ideal MHD equations,
corresponding to four relevant conserved
quantities\footnote{The energy-momentum of the fluid is not strictly conserved
because of radiative cooling; however, the fluid component of the
energy-momentum equations still proves to be useful.
Only energy conservation of the fluid is strongly affected for our types of models.}.
Using these quantities we construct three dimensionless flux ratios
corresponding to
the accreted specific energy,
specific angular momentum, and specific magnetic flux.
Third, we check what the duration of the simulations should be
in order to reach a quasi-steady state (``inflow equilibrium'') at any given radius.
Finally, we describe how we compute the luminosity.
When the specific fluxes are computed as a spatial or temporal average/integral,
we always take the ratio of averages/integrals of fluxes (i.e. $\int dx F_1/\int dx F_2$)
rather than the average/integral of the ratio of fluxes (i.e. $\int dx (F_1/F_2)$).
The former is more appropriate for capturing the mean behavior,
while the latter can be more appropriate when investigating
fluxes with significant phase-shifted correlations with each other.
As relevant for this study, the accretion disk has significant
vertical stratification, and the local value of the ratio of fluxes
can vary considerably without any effect on the bulk accretion flow.
Similarly, one flux can nearly vanish over short periods
while the other does not, which leads to unbounded values of the ratio of fluxes.
However, the time-averaged behavior of the flow is not greatly affected by such short
periods of time.
These cases demonstrate why the ratio of averages/integrals is always
performed for both spatial and temporal averages/integrals.
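A toy example illustrates why the ratio of averages is the robust choice (a sketch; the fluxes below are synthetic, not simulation data):

```python
# Synthetic flux samples in time: one F2 sample nearly vanishes,
# as happens transiently in a stratified turbulent flow.
F1 = [1.0, 1.1, 0.9, 1.0]
F2 = [0.5, 0.6, 1e-6, 0.5]

ratio_of_means = sum(F1) / sum(F2)                          # stable, ~2.5
mean_of_ratios = sum(a / b for a, b in zip(F1, F2)) / len(F1)

print(ratio_of_means)   # ~ 2.5: captures the mean behavior
print(mean_of_ratios)   # huge: dominated by the sample where F2 ~ 0
```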
When comparing the flux ratios or luminosities from
the simulations against the NT model,
we evaluate the percent relative difference $D[f]$ between
a quantity $f$ and its NT value as follows:
\begin{equation}
D[f] \equiv 100\frac{f-f[{\rm NT}]}{f[{\rm NT}]} .
\end{equation}
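For example (a trivial numeric sketch; the function name is ours):

```python
def D(f, f_nt):
    """Percent relative difference of a quantity f from its NT value."""
    return 100.0 * (f - f_nt) / f_nt

print(D(97.0, 100.0))   # -3.0: a 3% deficit relative to NT
```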
For a density-weighted time-averaged value of $f$, we compute
\begin{equation}\label{meantheta}
\langle f \rangle_{\rho_0} \equiv
\frac{\int\int\int f \,\rho_0(r,\theta,\phi) dA_{\theta\phi}dt}
{\int\int\int \rho_0(r,\theta,\phi) dA_{\theta\phi}dt} ,
\end{equation}
where $dA_{\theta\phi}\equiv \sqrt{-g} d\theta d\phi$ is an area
element in the $\theta-\phi$ plane, and the integral over $dt$
is a time average over the duration of interest,
which corresponds to the period when the disk is in steady-state.
For a surface-averaged value of $f$, we compute
\begin{equation}
\langle f \rangle \equiv \frac{\int\int f\; dA_{\theta\phi}}{\int\int dA_{\theta\phi}} .
\end{equation}
\subsection{Disk Thickness Measurement}
\label{sec:diskthick1}
We define the dimensionless
disk thickness per unit radius, $|h/r|$,
as the density-weighted mean angular deviation
of the gas from the midplane,
\begin{equation}\label{thicknesseq}
\left|\frac{h}{r}\right| \equiv \left\langle \left|\theta-\frac{\pi}{2}\right| \right\rangle_{\rho_0} .
\end{equation}
(This quantity was called ${\Delta\theta}_{\rm abs}$ in S08.)
Notice that we assume the accretion disk plane is on the equator
(i.e. we assume $\langle\theta\rangle_{\rho_0}=\pi/2$).
As defined above, $|h/r|$ is a function of $r$. When we wish
to characterize the disk by means of a single estimate of its
thickness, we use the value of $|h/r|$ at $r=2r_{\rm ISCO}$, where
$r_{\rm ISCO}$ is the ISCO radius ($r_{\rm ISCO}=6M$ for a
non-spinning BH
and $r_{\rm ISCO}=M$ for a maximally-spinning BH; \citealt{shapirobook83}).
As we show in \S\ref{sec:diskthick2}, this choice is quite reasonable.
An alternative thickness measure,
the root-mean-square thickness $(h/r)_{\rm rms}$,
allows us to gauge how sensitive our results are to the precise definition of thickness.
This quantity is defined by
\begin{equation}\label{thicknessrms}
\left(\frac{h}{r}\right)_{\rm rms} \equiv \left\langle \left(\theta-\frac{\pi}{2}\right)^2\right\rangle_{\rho_0}^{1/2} .
\end{equation}
The range of $\theta$ for the disk thickness integrals in the above
equations is from $0$ to $\pi$.
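The two thickness measures can be compared in a minimal sketch, assuming an illustrative Gaussian density profile in polar angle (metric factors omitted); for a Gaussian of width $\sigma$, $|h/r|=\sigma\sqrt{2/\pi}$ while $(h/r)_{\rm rms}=\sigma$.

```python
import numpy as np

# Toy density profile in polar angle, centered on the midplane theta = pi/2.
theta = np.linspace(0.0, np.pi, 20001)
dth = theta[1] - theta[0]
sigma = 0.08                                     # hypothetical angular scale
rho = np.exp(-0.5 * ((theta - np.pi/2) / sigma)**2)

w = rho / (rho.sum() * dth)                      # normalized density weight
h_over_r = (np.abs(theta - np.pi/2) * w).sum() * dth            # Eq. (thicknesseq)
h_over_r_rms = np.sqrt(((theta - np.pi/2)**2 * w).sum() * dth)  # Eq. (thicknessrms)
# Gaussian expectation: |h/r| = sigma*sqrt(2/pi) ~ 0.064, rms = sigma = 0.08.
```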
\subsection{Fluxes of Mass, Energy, and Angular Momentum}
The mass, energy and angular momentum conservation equations give
the following fluxes,
\begin{eqnarray}\label{Dotsmej}
\dot{M}(r,t) &=& -\int_\theta \int_\phi \rho_0 u^r dA_{\theta\phi}, \\
{\rm e} \equiv \frac{\dot{E}(r,t)}{\dot{M}(r,t)} &=& \frac{\int_\theta\int_\phi T^r_t dA_{\theta\phi}}{\dot{M}(r,t)} , \\
\jmath \equiv \frac{\dot{J}(r,t)}{\dot{M}(r,t)} &=& -\frac{\int_\theta\int_\phi T^r_\phi dA_{\theta\phi}}{\dot{M}(r,t)} .
\end{eqnarray}
The above relations define
the rest-mass accretion rate (sometimes just referred to as the mass accretion rate), $\dot{M}$;
the accreted energy flux per unit rest-mass flux, or {\it specific energy}, ${\rm e}$;
and the accreted angular momentum flux per unit rest-mass flux,
or {\it specific angular momentum}, $\jmath$.
Positive values of these quantities
correspond to an inward flux through the black hole horizon.
The black hole spin evolves due to the accretion of mass, energy, and angular momentum,
as described by the dimensionless spin-up parameter $s$,
\begin{equation}\label{spinevolve}
s \equiv \frac{d(a/M)}{dt}\frac{M}{\dot{M}} = \jmath - 2\frac{a}{M}{\rm e} ,
\end{equation}
where the angular integrals used to compute $\jmath$ and ${\rm e}$
include all $\theta$ and $\phi$ angles \citep{gammie_bh_spin_evolution_2004}.
For $s=0$ the black hole is in so-called ``spin equilibrium,''
corresponding to when the dimensionless black hole spin, $a/M$,
does not change in time.
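As a sanity check of Equation~(\ref{spinevolve}), one can evaluate $s$ using the standard NT (ISCO) values for a Schwarzschild hole, ${\rm e}=\sqrt{8/9}$ and $\jmath=2\sqrt{3}$ (in units of $M$); this is a sketch, not simulation data.

```python
import math

def spinup(jmath, e, a_over_M):
    """Dimensionless spin-up parameter, s = jmath - 2*(a/M)*e  (Eq. spinevolve)."""
    return jmath - 2.0 * a_over_M * e

# Standard NT ISCO values for a non-spinning hole:
s0 = spinup(2.0 * math.sqrt(3.0), math.sqrt(8.0/9.0), 0.0)
# s0 > 0: accretion of NT material spins up a Schwarzschild black hole.
```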
The ``nominal'' efficiency, corresponding
to the total loss of specific energy from the fluid,
is obtained by removing the rest-mass term from the accreted specific energy:
\begin{equation}
{\tilde{e}} \equiv 1- {\rm e} .
\end{equation}
The time-averaged value of ${\tilde{e}}$ at the horizon ($r=r_{\rm H}$)
gives the total nominal efficiency: $\langle{\tilde{e}}(r_{\rm H})\rangle$,
which is an upper bound on the total photon radiative efficiency.
The range of $\theta$ over which the flux density integrals in the above equations
are computed depends on the situation. In S08, we limited the
$\theta$ range to $\delta\theta=\pm 0.2$ corresponding
to 2--3 density scale heights in order to focus on the disk
and to avoid including the disk wind or black hole jet.
In this paper, we are interested in studying how the contributions to the
fluxes vary as a function of height above the equatorial plane. Our
expectation is that the disk and corona-wind-jet contribute differently to
these fluxes. Thus, we consider different ranges of $\theta$ in the
integrals, e.g., from $(\pi/2)-2|h/r|$ to
$(\pi/2)+2|h/r|$, $(\pi/2)-4|h/r|$ to
$(\pi/2)+4|h/r|$, or $0$ to $\pi$. The first and third
are most often used in later sections.
\subsection{Splitting Angular Momentum Flux into Ingoing and Outgoing Components}
For a more detailed comparison of the simulation results with the
NT model, we decompose the flux of angular momentum into
an ingoing (``in'') term which is related to the advection of
mass-energy into the black hole
and an outgoing (``out'') term which is related to the forces and
stresses that transport angular momentum radially outward.
These ingoing and outgoing components of the specific angular momentum
are defined by
\begin{eqnarray}\label{Dotssplit}
{\jmath}_{\rm in}(r,t) &\equiv& \frac{\langle(\rho_0 + u_g + b^2/2) u^r \rangle \langle u_\phi \rangle}{\langle -\rho_0 u^r\rangle}, \\
{\jmath}_{\rm out}(r,t) &\equiv& \jmath - {\jmath}_{\rm in}(r,t) .
\end{eqnarray}
By this definition, the ``in'' quantities correspond to inward transport of the
comoving mass-energy density of the disk, $u^\mu u^\nu
T_{\mu\nu}=\rho_0 + u_g + b^2/2$. Note that ``in'' quantities are products
of the mean velocity fields $\langle u^r \rangle$ and $\langle u_\mu \rangle$
and not the combination $\langle u^r u_\mu \rangle$; the latter
includes a contribution from
correlated fluctuations in $u^r$ and $u_\mu$, which corresponds to the
Reynolds stress.
The residual of the total flux minus the ``in''
flux gives the outward, mechanical transport by Reynolds stresses
and electromagnetic stresses.
One could also consider a similar splitting for the specific energy.
The above decomposition most closely
matches our expectation that the inward flux should agree with the NT result
as $|h/r|\to 0$. Note, however, that our conclusions in
this paper do not require any
particular decomposition.
This decomposition is different
from S08 and N10 where the entire magnetic term ($b^2 u^r u_\phi - b^r b_\phi$)
is designated as the ``out'' term.
Their choice overestimates the effect of electromagnetic stresses,
since some magnetic energy is simply advected into the black hole.
Also, the splitting used in S08 gives non-monotonic ${\jmath}_{\rm in}$
vs. radius for some black hole spins,
while the splitting we use gives monotonic values for all black hole
spins.
\subsection{The Magnetic Flux}
\label{magneticfluxdiag}
The no-monopoles constraint implies that the total magnetic flux
($\Phi = \int_S \vec{B}\cdot \vec{dA}$)
vanishes through any closed surface
or any open surface penetrating a bounded flux bundle.
The magnetic flux conservation equations require that
magnetic flux can only be transported to the black hole
or through the equatorial plane by advection.
The absolute magnetic flux ($\int_S |\vec{B}\cdot \vec{dA}|$)
has no such restrictions and can grow arbitrarily due to the MRI.
However, the absolute magnetic flux can saturate when the
electromagnetic field comes into force balance with the matter.
We are interested in such a saturated state of the magnetic field
within the accretion flow and threading the black hole.
We consider the absolute magnetic flux
penetrating a spherical surface and an equatorial surface given, respectively, by
\begin{eqnarray}
\Phi_r(r,\theta,t) &=& \int_\theta \int_\phi |B^r| dA_{\theta'\phi} , \\
\Phi_\theta(r,\theta,t) &=& \int_{r'=r_{\rm H}}^{r'=r} \int_\phi |B^\theta| dA_{r'\phi} .
\end{eqnarray}
Nominally, $\Phi_r$ has an integration range of $\theta'=0$ to $\theta'=\theta$
when measured on the black hole horizon,
while for quantities computed around the equatorial plane $\theta'$
ranges over $\langle\theta\rangle\pm\theta$.
One useful normalization of the magnetic fluxes is
by the total flux through one hemisphere of the black hole plus through the equator
\begin{equation}
\Phi_{\rm tot}(r,t) \equiv \Phi_r(r'=r_{\rm H},\theta'=0\ldots \pi/2,t) + \Phi_\theta(r,\theta'=\pi/2,t) ,
\end{equation}
which gives the normalized absolute radial magnetic flux
\begin{equation}
\tilde{\Phi}_r(r,\theta,t) \equiv \frac{\Phi_r(r,\theta,t)}{\Phi_{\rm tot}(r=R_{\rm out},t=0)} ,
\end{equation}
where $R_{\rm out}$ is the outer radius of the computational box.
The normalized absolute magnetic flux measures
the absolute magnetic flux on the black hole horizon
or radially through the equatorial disk per unit absolute flux
that is initially available.
The \citet{gammie99} model of a magnetized thin accretion flow
suggests another useful normalization of the magnetic flux is
by the absolute mass accretion rate
\begin{equation}\label{massg}
\dot{M}_G(r,t) \equiv \int_\theta \int_\phi \rho_0 |u^r| dA_{\theta\phi} ,
\end{equation}
which gives the normalized specific absolute magnetic fluxes
\begin{eqnarray}\label{Dotsgammie}
\Xi(r,t) &=& \frac{\Phi_r(r,t)}{\dot{M}_G(r,t)} , \\
\Upsilon(r,t) &\equiv& \sqrt{2} \left|\frac{\Xi(r,t)}{M}\right| \sqrt{\left|\frac{\dot{M}_G(r=r_{\rm H},t)}{{\rm SA}_{\rm H}}\right|} \label{equpsilon} ,
\end{eqnarray}
where ${\rm SA} = (1/r^2)\int_\theta \int_\phi dA_{\theta\phi}$ is the local solid angle,
${\rm SA}_{\rm H}={\rm SA}(r=r_{\rm H})$ is the local solid angle on the horizon,
$\Xi(r,t)$ is the radial magnetic flux per unit rest-mass flux
(usually specific magnetic flux),
and $\Upsilon(r,t) c^{3/2}/G$ is a particular dimensionless
normalization of the specific magnetic flux
that appears in the MHD accretion model developed by \citet{gammie99}.
Since the units used for the magnetic field are arbitrary,
any constant factor can be applied to $\Xi$
and one would still identify the quantity as the specific magnetic flux.
So to simplify the discussion we henceforth call
$\Upsilon$ the specific magnetic flux.
To obtain Equation~(\ref{equpsilon}),
all involved integrals should have a common $\theta$ range around the equator.
These quantities all have absolute magnitudes
because a sign change does not change the physical effect.
The quantities $\jmath$, ${\rm e}$, ${\tilde{e}}$, $\Xi$, and $\Upsilon$
are each conserved along poloidal field-flow lines
for stationary ideal MHD solutions \citep{bekenstein_new_conservation_1978,tntt90}.
Gammie's (1999) model of a magnetized accretion flow within the ISCO assumes:
1) a thin equatorial flow;
2) a radial poloidal field geometry (i.e., $|B_\theta|\ll |B_r|$);
3) a boundary condition at the ISCO corresponding to zero radial velocity;
and 4) no thermal contribution.
The model reduces to the NT solution within the ISCO for $\Upsilon\to 0$,
and deviations from NT's solution are typically small
(less than $12\%$ for $\jmath$ across all black hole spins;
see Appendix~\ref{sec_gammie}) for $\Upsilon\lesssim 1$.
We have defined the quantity $\Upsilon$ in equation~(\ref{Dotsgammie})
with the $\sqrt{2}$ factor,
the square root of the total mass accretion rate through the horizon per unit solid angle,
and Heaviside-Lorentz units for $B^r$
so that the numerical value of $\Upsilon$ at the horizon is identically
equal to the numerical value of the free parameter in \citet{gammie99},
i.e., their $F_{\theta\phi}$ normalized by $F_{\rm M}=-1$.
As shown in that paper, $\Upsilon$ directly controls deviations
of the specific angular momentum and specific energy
away from the non-magnetized thin disk theory values of the NT model.
Even for disks of finite thickness, the parameter shows how electromagnetic stresses
control deviations between the horizon and the ISCO.
Note that the flow can deviate from NT at the ISCO
simply due to finite thermal pressure \citep{mg04}.
In Table~\ref{tbl_gammie} of Appendix~\ref{sec_gammie},
we list numerical values of $\jmath$ and ${\tilde{e}}$ for Gammie's (1999) model,
and show how these quantities deviate from NT
for a given black hole spin and $\Upsilon$.
We find $\Upsilon$ to be more useful as a measure of the importance of the magnetic field
within the ISCO than our previous measurement in S08 of
the $\alpha$-viscosity parameter,
\begin{equation}\label{alphaeq}
\alpha=\frac{T^{\hat{\phi}\hat{r}}}{p_g+p_b} ,
\end{equation}
where $T^{\hat{\phi}\hat{r}} = {e^{\hat{\phi}}}_{\mu} {e^{\hat{r}}}_{\nu} T^{\mu\nu}$
is the orthonormal stress-energy tensor component in the comoving frame,
and ${e^{\hat{\nu}}}_{\mu}$ is the contravariant tetrad system in the local fluid-frame.
This is related to the normalized stress by
\begin{equation}\label{stress}
\frac{W}{\dot{M}} = \frac{\int\int T^{\hat{\phi}\hat{r}} dA'_{\theta\phi}}{\dot{M}\int_\phi dL'_{\phi}} ,
\end{equation}
where
$dA'_{\theta\phi} = {e^{\hat{\theta}}}_{\mu} {e^{\hat{\phi}}}_{\nu} d\theta^\mu d\phi^\nu$
is the comoving area element,
$dL'_{\phi} = {e^{\hat{\phi}}}_{\nu} d\phi^\nu$ evaluated at $\theta=\pi/2$
is the comoving $\phi$ length element,
$\theta^\mu=\{0,0,1,0\}$, and $\phi^\nu=\{0,0,0,1\}$.
This form for $W$ is a simple generalization of Eq.~(5.6.1b) in NT73;
the corresponding NT solution for $W/\dot{M}$ is given by their Eq.~(5.6.14a).
In S08, $W$ was integrated over fluid satisfying
$-u_t (\rho_0 + u_g + p_g + b^2)/\rho_0 < 1$
(i.e., only approximately gravitationally bound fluid and no wind-jet).
We use the same definition of bound in this paper.
As shown in S08, a plot of the radial profile
of $W/\dot{M}$ or $\alpha$ within the ISCO does not necessarily quantify
how much the magnetic field affects the accretion flow properties,
since even apparently large values of this quantity within
the ISCO do not cause a significant deviation from NT
in the specific angular momentum accreted.
On the other hand, the \citet{gammie99} parameter $\Upsilon$
does directly relate to the electromagnetic stresses within the ISCO
and is an ideal MHD invariant (so constant vs. radius) for a stationary flow.
One expects that appropriately time-averaged simulation data
could be framed in the context of this stationary model
in order to measure the effects of electromagnetic stresses.
\subsection{Inflow Equilibrium}
\label{sec_infloweq}
When the accretion flow has achieved steady-state inside a given radius, the
quantity $\dot M(r,t)$ will (apart from normal fluctuations due to turbulence) be
independent of time,
and if it is integrated over all $\theta$ angles will be constant
within the given radius\footnote{If we
integrate over a restricted range of $\theta$, then
there could be additional mass flow through the boundaries in the
$\theta$ direction and $\dot{M}(r,t)$ will no longer be independent of
$r$, though it would still be independent of $t$.}. The energy and
angular momentum fluxes have a non-conservative contribution due to
the cooling function and therefore are not strictly constant.
However, the cooling is generally a minor contribution (especially in
the case of the angular momentum flux), and so we may still measure the
non-radiative terms to verify inflow equilibrium.
The radius out to which inflow equilibrium can be achieved in a given
time can be estimated by calculating the mean radial velocity $v_r$
and then deriving from it a viscous timescale $-r/v_r$. From standard
accretion disk theory and using the definition of $\alpha$ given in
Eq.~(\ref{alphaeq}), the mean radial velocity is given by
\begin{equation}\label{eqvr}
v_r \sim -\alpha \left|\frac{h}{r}\right|^2 v_{\rm K} ,
\end{equation}
where $v_{\rm K}\approx(r/M)^{-1/2}$ is the Keplerian speed at radius $r$
and $\alpha$ is the standard viscosity parameter
given by equation~(\ref{alphaeq}) \citep{fkr92}.
Although the viscous timescale is the nominal
time needed to achieve steady-state, in practice it takes several viscous times
before the flow really settles down, e.g., see the calculations reported in
\citet{shapiro2010}. In the present paper, we assume that inflow equilibrium
is reached after two viscous times, and hence we
estimate the inflow equilibrium time, $t_{\rm ie}$, to be
\begin{equation}\label{tie}
t_{\rm ie} \sim -2\frac{r}{v_r} \sim 2 \left(\frac{r}{M}\right)^{3/2} \left(\frac{1}{\alpha |h/r|^2}\right) \sim 5000 \left(\frac{r}{M}\right)^{3/2} ,
\end{equation}
where, in the right-most relation,
we have taken a typical value of $\alpha\sim 0.1$ for the gas in the disk proper (i.e., outside the ISCO)
and we have set $|h/r|\approx 0.064$,
as appropriate for our thinnest disk models.
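These Newtonian scalings are easy to check and invert numerically (a sketch of the estimate above, not the GR formula of Appendix~\ref{sec_inflow} that we actually use):

```python
# Newtonian inflow-equilibrium estimates from Eqs. (eqvr) and (tie),
# with the typical parameter values quoted in the text.
alpha, h_over_r = 0.1, 0.064

def t_ie(r_over_M):
    """Inflow equilibrium time in units of M, ~ two viscous times."""
    return 2.0 * r_over_M**1.5 / (alpha * h_over_r**2)

def r_eq(t_over_M):
    """Invert t_ie: radius out to which steady state is reached by time t."""
    return (t_over_M * alpha * h_over_r**2 / 2.0)**(2.0/3.0)

coeff = 2.0 / (alpha * h_over_r**2)   # ~4900, i.e. the ~5000 coefficient above
r30k = r_eq(30000.0)                  # ~3.3: a 30000M run equilibrates to r ~ 3M
```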
A simulation must run until $t\sim t_{\rm ie}$ before we can expect
inflow equilibrium at radius $r$. According to the above Newtonian estimate, a thin
disk simulation with $|h/r| \sim 0.064$ that has been run for a time of $30000M$ will achieve
steady-state out to a radius of only $\sim3M$. However, this estimate is inaccurate since
it does not allow for the boundary condition on the flow at the ISCO. Both
the boundary condition as well
as the effects of GR are included in the formula for the radial velocity
given in Eq.~5.9.8 of NT, which we present for completeness in Appendix~\ref{sec_inflow}.
That more accurate result, which is what we use for all our plots and numerical estimates,
shows that our thin disk simulations should
reach inflow equilibrium
within $r/M=9,~7,~5.5,~5$, respectively, for $a/M=0,~0.7,~0.9,~0.98$.
These estimates are roughly consistent with the radii out to which
we have a constant $\jmath$ vs. radius in the simulations
discussed in \S\ref{sec:thicknessandspin}.
\subsection{Luminosity Measurement}
We measure the radiative luminosity of the accreting gas directly from
the cooling function $dU/d\tau$. At a given radius, $r$, in
the steady region of the flow,
the luminosity per unit rest-mass accretion rate
interior to that radius is given by
\begin{equation}\label{lum}
\frac{{L}(<r)}{\dot{M}(r,t)} = \frac{1}{\dot{M}(r,t)(t_f-t_i)} \int_{t=t_i}^{t_f}
\int_{r'=r_{\rm H}}^{r}\int_{\theta=0}^\pi\int_\phi \left(\frac{dU}{d\tau}\right)u_t\,
dV_{t r'\theta\phi} ,
\end{equation}
where $dV_{t r'\theta\phi} = {\sqrt{-g}} dt dr' d\theta d\phi$
and the 4D integral goes from the initial time $t_i$ to the final
time $t_f$ over which the simulation results are time-averaged, from the
radius $r_{\rm H}$ of the horizon to the radius $r$ of interest,
and usually over all $\theta$ and $\phi$.
We find it useful to compare the simulations
with thin disk theory by computing the ratio of the luminosity emitted inside the ISCO
(per unit rest-mass accretion rate) to the total radiative efficiency of the NT model:
\begin{equation}\label{Lin}
\tilde{L}_{\rm in} \equiv \frac{L(<r_{\rm ISCO})}{\dot{M}{\tilde{e}}[{\rm NT}]} .
\end{equation}
This ratio measures the excess radiative luminosity from inside the ISCO in the
simulation relative to the total luminosity in the NT model (which predicts zero luminosity here).
We also consider the excess luminosity over the entire inflow equilibrium region
\begin{equation}\label{Leq}
\tilde{L}_{\rm eq} \equiv \frac{L(r<r_{\rm eq})-L(r<r_{\rm eq})[{\rm NT}]}{\dot{M}{\tilde{e}}[{\rm NT}]} ,
\end{equation}
which corresponds to the luminosity (per unit mass accretion rate)
inside the inflow equilibrium region (i.e. $r<r_{\rm eq}$, where
$r_{\rm eq}$ is the radius out to which inflow equilibrium has been established)
subtracted by the NT luminosity all divided by the total NT efficiency.
Large percent values of $\tilde{L}_{\rm in}$ or $\tilde{L}_{\rm eq}$
would indicate large percent deviations from NT.
\section{Discussion}
\label{sec:discussion}
We now discuss some important consequences of our results
and also consider issues to be addressed by future calculations.
First, we discuss the relevance to black hole spin measurements.
In recent years, black hole spin parameters have been measured in
several black hole x-ray binaries by fitting their x-ray continuum
spectra in the thermal (or high-soft) spectral
state \citep{zcc97,shafee06,mcc06,ddb06,liu08,gmlnsrodes09,gmsncbo10}.
This method is based on several
assumptions that require testing \citep{nms2008proc},
the most critical being the assumption
that an accretion disk in the radiatively-efficient thermal state is
well-described by the Novikov-Thorne model of a thin disk. More
specifically, in analyzing and fitting the spectral data, it is
assumed that the radial profile of the radiative flux, or equivalently the
effective temperature, in the accretion disk
closely follows the prediction of the NT model.
Practitioners of the continuum-fitting method generally restrict their
attention to relatively low-luminosity systems below $30\%$ of the
Eddington luminosity. At these luminosities, the maximum height of
the disk photosphere above the midplane is less than $10\%$ of the
radius, i.e., $(h/r)_{\rm photosphere} \leq 0.1$ \citep{mcc06}.
For a typical disk, the photospheric disk thickness is approximately
twice the mean absolute thickness $|h/r|$ that we consider in
this paper. Therefore, the disks that observers consider for
spin measurement have $|h/r| \lesssim 0.05$, i.e., they are thinner
than the thinnest disk ($|h/r|_{\rm min} \sim 0.06$) that we (S08,
this paper) and others (N09, N10) have simulated.
The critical question then is the following: Do the flux profiles of
very thin disks match the NT prediction? At large radii the two will
obviously match very well since the flux profile is determined simply
by energy conservation\footnote{This is why the formula for the flux as a function
of radius in the standard thin disk model does not depend on details like the
viscosity parameter $\alpha$ \citep{fkr92}.}. However,
in the region near and inside the ISCO, analytic models have to apply a boundary
condition, and the calculated flux profile in the inner region of the disk
depends on this choice. The conventional choice is
a ``zero-torque'' boundary condition at the ISCO. Unfortunately, there is disagreement on
the validity of this assumption. Some authors have argued that
the magnetic field strongly modifies the zero-torque condition and that,
therefore, real disks might behave very differently from the NT model near the ISCO
\citep{krolik99,gammie99}. Other authors, based either on heuristic arguments or on
hydrodynamic calculations, find that the NT model is accurate even near
the ISCO so long as the disk is geometrically thin \citep{pac00,ap03,shafee08,abr10}.
Investigating this question was the primary
motivation behind the present study.
We described in this paper GRMHD simulations of geometrically thin
($|h/r|\sim0.07$) accretion disks around black holes with a range of
spins: $a/M=0, ~0.7, ~0.9, ~0.98$. In all cases, we find that the
specific angular momentum $\jmath$ of the accreted gas as measured at
the horizon (this quantity provides information on the dynamical
importance of torques at the ISCO) shows only minor deviations at the
level of $\sim 2\%$--$4\%$ from the NT model. Similarly, the
luminosity emitted inside the ISCO is only $\sim 3\%$--$7\%$ of the
total disk luminosity. When we allow for the fact that a large fraction of this
radiation will probably be lost into the black hole because of
relativistic beaming as the gas plunges inward (an effect ignored in
our luminosity estimates), we conclude that the region inside
the ISCO is likely to be quite unimportant. Furthermore, our
investigations indicate that
deviations from the NT model decrease with decreasing $|h/r|$.
Therefore, since the disks of interest to observers are
generally thinner than the thinnest disks we have simulated, the NT model
appears to be an excellent approximation for modeling the spectra of
black hole disks in the thermal state.
One caveat needs to be mentioned. Whether or not the total luminosity
of the disk agrees with the NT model is not important since, in
spectral modeling of data, one invariably fits a normalization (e.g.,
the accretion rate $\dot{M}$ in the model KERRBB; \citealt{lznm05})
which absorbs any deviations in this quantity. What is important is the
{\it shape} of the flux profile versus radius. In particular, one is
interested in the radius at which the flux or effective temperature is
maximum \citep{nms2008proc,mnglps09}. Qualitatively, one imagines that the
fractional shift in this radius
will be of order the
fractional torque at the ISCO, which is likely to be of order the
fractional error in $\jmath$. We thus estimate that, in the systems
of interest, the shift is nearly always below $10\%$. We plan to
explore this question quantitatively in a future study.
Another issue is the role of the initial magnetic field topology. We
find that, for $a/M=0$, starting with a 1-loop field geometry gives an
absolute relative deviation in $\jmath$ of $7.1\%$, and an excess luminosity
inside the ISCO of $4.9\%$, compared to $2.9\%$ and $3.5\%$ for our
standard 4-loop geometry. Thus, having a magnetic field distribution
with long-range correlation in the radial direction seems to increase
deviations from the NT model, though even the larger effects we find in this case are
probably not a serious concern for black hole spin measurement. Two
comments are in order on this issue. First, the 4-loop geometry is
more consistent with nearly isotropic turbulence in the poloidal plane
and, therefore, in our view a more natural initial condition.
Second, the 1-loop model develops a stronger field inside the ISCO and
around the black hole and might therefore be expected to produce a
relativistic jet with measurable radio emission. However, it is
well-known that black hole x-ray binaries in the thermal state have no
detectable radio emission. This suggests that the magnetic field is
probably weak, i.e., more consistent with our 4-loop
geometry.
Next, we discuss the role of electromagnetic stresses on the
dynamics of the gas in the plunging region inside the ISCO.
In order to better understand this issue, we have
extracted for each of our simulations the radial profile of the
specific magnetic flux, $\Upsilon$. This quantity appears
as a dimensionless free parameter (called $F_{\theta\phi}$) in the
simple MHD model of the plunging region developed by \citet{gammie99}.
The virtues of the specific magnetic flux are its well-defined
normalization and its constancy with radius for stationary flows
\citep{tntt90}. In
contrast, quantities like the stress $W$ or the viscosity
parameter $\alpha$ have no well-defined normalization; $W$ can be
normalized by any quantity that has an energy scale, such as $\rho_0$, $\dot{M}$,
or $b^2$, while $\alpha$ could be defined with respect to the total pressure, the gas
pressure, or the magnetic pressure. The numerical values of $W$ or
$\alpha$ inside the ISCO can thus vary widely, depending on which
definition one chooses. For instance, although S08 found $\alpha\sim
1$ inside the ISCO, the specific angular momentum flux, $\jmath$,
deviated from NT by no more than a few percent.
Further, Figure~\ref{loop1manystress} shows that (even for the multi-loop model)
the stress appears quite large within the ISCO,
but this is misleading because the effects of the stress
are manifested in the specific angular momentum, specific energy,
and luminosity -- all of which agree with NT to within less than $10\%$
for the multi-loop model.
Since $W$ and $\alpha$ do not have a single value within the ISCO
or a unique normalization,
we conclude that they are not useful
for readily quantifying the effects of the electromagnetic stresses within the ISCO.
Gammie's (1999) model shows how the value of $\Upsilon$
is directly related to the electromagnetic stresses within the ISCO.
Unfortunately, the actual value of $\Upsilon$ is a free parameter
which cannot be easily determined from first principles.
It is possible that accretion disks might have $\Upsilon\gg 1$,
in which case, the model predicts large deviations from NT.
For example, if $\Upsilon=6$, then for an $a/M=0$ black hole
$\jmath$ is lowered by $56\%$ relative to the NT model.
We have used our 3D GRMHD simulations which include self-consistent MRI-driven turbulence
to determine the value of $\Upsilon$ for
various black hole spins, disk thicknesses, and field geometries.
For the multiple-loop field geometry, we find that
the specific magnetic flux varies with disk thickness and spin as
\begin{equation}
\Upsilon\approx 0.7 + \left|\frac{h}{r}\right| - 0.6\frac{a}{M} ,
\end{equation}
within the disk component,
which indicates that electromagnetic stresses are weak
and cause less than $8\%$ deviations in $\jmath$
in the limit $|h/r|\to 0$ for all black hole spins.
Our rough analytical arguments for how $\Upsilon$ should scale
with $|h/r|$ and $a/M$ are consistent with the above formula.
Even for the 1-loop field geometry, $\Upsilon\lesssim 1$ for thin disks,
so electromagnetic stresses cause only minor deviations from NT
for all black hole spins (for $\Upsilon\lesssim 1$, less than $12\%$ in $\jmath$).
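Evaluating the fitted scaling above at representative parameters (a sketch; only the fit itself is from this work):

```python
def upsilon_fit(h_over_r, a_over_M):
    """Fitted specific magnetic flux for the multi-loop field geometry."""
    return 0.7 + h_over_r - 0.6 * a_over_M

u_thin = upsilon_fit(0.064, 0.0)   # ~0.76 for the thinnest non-spinning model
u_fast = upsilon_fit(0.064, 0.9)   # ~0.22 for a rapidly spinning hole
# Both lie below Upsilon ~ 1, the level beneath which the Gammie (1999)
# model predicts less than ~12% deviations from NT in jmath.
```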
Not all aspects of the \citet{gammie99} model agree with our simulations.
As found in \citet{mg04},
the nominal efficiency, ${\tilde{e}}$, does not match the Gammie model well;
for thin disks it remains quite close to the NT value.
Since the true radiative efficiency is limited to no more than ${\tilde{e}}$,
this predicts only weak deviations from NT in the total luminosity
even if $\jmath$ has non-negligible deviations from NT.
Also, this highlights that the deviations from NT in $\jmath$ are due to non-dissipated
electromagnetic stresses and cannot be used to directly predict the excess luminosity within the ISCO.
The assumption of a radial flow in a split-monopole field
is approximately valid, but the simulations do show some non-radial
flow and vertical stratification, a non-zero radial velocity at the
ISCO, and thermal energy densities comparable to magnetic energy
densities.
Inclusion of these effects is required
for better consistency with simulation results inside the ISCO.
Next, we consider how our results lend some insight into the spin evolution of black holes.
Standard thin disk theory with photon capture predicts that
an accreting black hole spins up until it reaches
spin equilibrium at $a_{\rm eq}/M\approx 0.998$ \citep{thorne74}.
On the other hand, thick non-radiative accretion flows
deviate significantly from NT and reach equilibrium at
$a_{\rm eq}/M\sim 0.8$ for a model with $\alpha\sim 0.3$
and $|h/r|\sim 0.4$ near the horizon \citep{pg98}.
GRMHD simulations of moderately thick
($|h/r|\sim 0.2$--$0.25$) magnetized accretion flows
give $a_{\rm eq}/M\approx 0.9$ \citep{gammie_bh_spin_evolution_2004}.
In this paper, we find from our multi-loop field geometry models
that spin equilibrium scales as
\begin{equation}
\frac{a_{\rm eq}}{M} \approx 1.1 - 0.8\left|\frac{h}{r}\right| ,
\end{equation}
where one should set $a_{\rm eq}/M=1$ if the above formula gives $a_{\rm eq}/M>1$.
This gives a result consistent with the above-mentioned studies of thick disks,
and it is also consistent with our rough analytical prediction
based upon our scaling of $\Upsilon$ and using the Gammie model
prediction for the spin equilibrium.
This result also agrees with the NT result in the limit $|h/r|\to 0$
within our statistical errors,
and shows that magnetized
thin disks can approach the theoretical limit of $a_{\rm eq}/M\approx 1$,
at least in the multi-loop case.
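The spin-equilibrium fit, with the cap at $a_{\rm eq}/M=1$, can be evaluated against the thicknesses quoted above (a sketch; numerical values are illustrative):

```python
def a_eq(h_over_r):
    """Fitted equilibrium spin for multi-loop models, capped at a/M = 1."""
    return min(1.0, 1.1 - 0.8 * h_over_r)

thin  = a_eq(0.064)   # capped at 1.0: thin disks approach the theoretical limit
thick = a_eq(0.4)     # ~0.78, close to the thick-disk value ~0.8 quoted above
```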
In the single-loop field geometry, because of the presence of
a more radially-elongated initial poloidal field,
we find a slightly stronger torque on the black hole.
However, before a time of order $17000M$,
the deviations in the equilibrium spin parameter, $a_{\rm eq}/M$,
between the 4-loop and 1-loop field geometries appear to be
minor, so during that time the scaling given above roughly holds.
Of course, it is possible (even likely) that
radically different field geometries or anomalously
large initial field strengths will lead
to a different scaling.
Lastly, we mention a number of issues which we have neglected but are
potentially important. A tilt between the angular momentum vector of
the disk and the black hole rotation axis might significantly affect
the accretion flow \citep{fragile07}. We have not accounted for any
radiative transfer physics, nor have we attempted to trace photon
trajectories (see, e.g. N09 and \citealt{noblekrolik09}).
In principle one may require the simulation to be evolved for
hundreds of orbital times at a given radius in order to completely
erase the initial conditions \citep{sorathia10},
whereas we only run the model for a couple of viscous time scales.
New pseudo-Newtonian simulations show that convergence may require
resolving several disk scale heights with high resolution
\citep{sorathia10}, and
a similar result has also been found
for shearing box calculations with no net flux and no stratification
(Stone 2010, private communication). In contrast,
we resolve only a couple of scale heights.
Also, we have only studied two different types of initial field geometries.
Future studies should consider whether alternative field geometries
change our results.
\section{Governing Equations}
\label{sec:goveqns}
The system of interest to us is a magnetized accretion disk
around a rotating black hole.
We write the black hole Kerr metric in Kerr-Schild (KS,
horizon-penetrating) coordinates \citep{fip98,pf98}, which can be mapped to
Boyer-Lindquist (BL) coordinates or an orthonormal basis in any frame
\citep{mg04}. We work with Heaviside-Lorentz units, set the speed of
light and gravitational constant to unity ($c=G=1$), and let $M$ be
the black hole mass. We solve the general relativistic
magnetohydrodynamical (GRMHD) equations of motion for rotating black
holes \citep{gam03} with an additional cooling term designed to keep the
simulated accretion disk at a desired thickness.
Mass conservation gives
\begin{equation}
\nabla_\mu (\rho_0 u^\mu) = 0 ,
\end{equation}
where $\rho_0$ is the rest-mass density,
corresponding to the mass density in the fluid frame,
and $u^\mu$ is the contravariant 4-velocity.
Note that we write the orthonormal 3-velocity as $v_i$
(the covariant 3-velocity is never used below).
Energy-momentum conservation gives
\begin{equation}\label{emomeq}
\nabla_\mu T^\mu_\nu = S_\nu ,
\end{equation}
where the stress energy tensor $T^\mu_\nu$ includes both matter and
electromagnetic terms,
\begin{equation}
T^\mu_\nu = (\rho_0 + u_g + p_g + b^2) u^\mu u_\nu + (p_g + b^2/2)\delta^\mu_\nu - b^\mu b_\nu ,
\end{equation}
where $u_g$ is the internal energy density and $p_g=(\Gamma-1)u_g$ is
the ideal gas pressure with $\Gamma=4/3$\footnote{Models with
$\Gamma=5/3$ show some minor differences compared to models with
$\Gamma=4/3$ \citep{mg04,mm07}.}. The contravariant fluid-frame magnetic 4-field
is given by $b^\mu$, and is related to the lab-frame 3-field via $b^\mu
= B^\nu h^\mu_\nu/u^t$ where $h^\mu_\nu = u^\mu u_\nu +
\delta^\mu_\nu$ is a projection tensor,
and $\delta^\mu_\nu$ is the Kronecker delta.
We write the orthonormal 3-field as $B_i$ (the covariant 3-field is never used below).
The magnetic energy density ($u_b$)
and magnetic pressure ($p_b$) are then given by $u_b=p_b=b^\mu b_\mu/2 = b^2/2$.
Note that the angular velocity of the gas is $\Omega=u^\phi/u^t$.
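The projection relating the lab-frame 3-field to the fluid-frame 4-field can be illustrated with a short flat-space sketch (a hedged example: we substitute the Minkowski metric for the Kerr-Schild metric, and all variable names are ours); it verifies the standard identity $b^\mu u_\mu = 0$:

```python
import math

# Minkowski metric, signature (-,+,+,+); the simulation would use the
# full Kerr-Schild g_{mu nu} here instead.
eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def lower(g, vec):
    """Lower an index: v_mu = g_{mu nu} v^nu."""
    return [sum(g[mu][nu] * vec[nu] for nu in range(4)) for mu in range(4)]

# A 4-velocity u^mu = gamma*(1, v) with |v|^2 = 0.1, so that u^mu u_mu = -1.
v = (0.3, 0.1, 0.0)
gamma = 1.0 / math.sqrt(1.0 - sum(vi * vi for vi in v))
u_up = [gamma, gamma * v[0], gamma * v[1], gamma * v[2]]
u_dn = lower(eta, u_up)

# Lab-frame field as a 4-vector with B^t = 0.
B_up = [0.0, 1.0, -2.0, 0.5]

# b^mu = B^nu h^mu_nu / u^t with h^mu_nu = u^mu u_nu + delta^mu_nu,
# i.e. b^mu = (u^mu (B.u) + B^mu) / u^t; note b^t = B^nu u_nu.
Bu = sum(B_up[nu] * u_dn[nu] for nu in range(4))
b_up = [(u_up[mu] * Bu + B_up[mu]) / u_up[0] for mu in range(4)]
b_dn = lower(eta, b_up)

# Fluid-frame field is orthogonal to the 4-velocity, and u_b = p_b = b^2/2.
b_dot_u = sum(b_up[mu] * u_dn[mu] for mu in range(4))
b_sq = sum(b_up[mu] * b_dn[mu] for mu in range(4))
```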
Equation (\ref{emomeq}) has a source term
\begin{equation}\label{eq:cooling_term}
S_\nu = \left(\frac{dU}{d\tau}\right) u_\nu ,
\end{equation}
which is a radiation 4-force corresponding to a simple isotropic
comoving cooling term given by $dU/d\tau$. We ignore radiative
transport effects such as heat currents, viscous stresses, or other
effects that would enter as additional momentum sources in the
comoving frame. In order to keep the accretion disk thin, we employ
the same ad hoc cooling function as in S08:
\begin{equation}\label{cooling}
\frac{dU}{d\tau} = - u_g \frac{\log\left(K/K_c\right)}{\tau_{\rm cool}} S[\theta] ,
\end{equation}
where $\tau$ is the fluid proper time,
$K=p_g/\rho_0^\Gamma$ is the entropy constant,
$K_c=0.00069$ is the entropy constant of the torus atmosphere,
which is the value towards which we cool the disk,
and $K_0\gtrsim K_c$ is the entropy constant of the initial torus\footnote{We intended
to have a constant $K$ everywhere at $t=0$, but
a normalization issue led to $K_c\lesssim K_0$.
Because of this condition,
the disk cools toward a slightly thinner equilibrium at the start of the simulation,
after which the cooling proceeds as originally desired by cooling towards
the fiducial value $K=K_c$.
Our models with $|h/r|\approx 0.07$ are least affected by this issue.
Also, since we do not make use of the cooling-based luminosity near $t=0$,
this issue does not affect any of our results.
We confirmed that this change leads to no significant issues
for either the magnitude or scaling of quantities with thickness
by repeating some simulations with the intended $K_c=K_0$.
The otherwise similar simulations have thicker disks as expected
(very minor change for our thin disk model as expected),
and we find consistent results
for a given measured thickness in the saturated state.}.
The gas cooling time is set to $\tau_{\rm cool}=2\pi/\Omega_{\rm K}$,
where $\Omega_{\rm K} = (1/M)/[ (a/M) +(R/M)^{3/2}]$ is the Keplerian angular frequency
and $R=r\sin\theta$ is the cylindrical radius
(we consider variations in the cooling timescale in section~\ref{sec_fluxdiskcorona}).
We use a shaping function given by the
quantity $S[\theta] = \exp[-(\theta-\pi/2)^2 / (2(\theta_{\rm nocool})^2)]$,
where we set $\theta_{\rm nocool}=\{0.1,0.3,0.45,0.45\}$
for our sequence of models with target thickness of $|h/r|=\{0.07, ~0.1, ~0.2, ~0.3\}$,
although we note that the thickest model with target $|h/r|=0.3$ has no cooling turned on.
The shaping function $S[\theta]$ is used to avoid cooling in the low density
funnel-jet region where the thermodynamics is not accurately evolved
and where the gas is mostly unbound
(see Figure~\ref{taperoff} in section~\ref{sec_fluxdiskcorona}).
In addition, we set the cooling function $dU/d\tau=0$
if 1) the timestep, $dt$, obeys $dt>\tau_{\rm cool}$,
which ensures that the cooling does not create negative entropy gas;
or 2) $\log(K/K_c)<0$, which ensures the gas is only cooled, never heated.
Photon capture by the black hole is not included,
so the luminosity based upon this cooling function is an upper limit
for radiation from the disk.
The above cooling function drives the specific entropy of the gas
toward the reference specific entropy $K_c$.
Since specific entropy always increases due to dissipation,
this cooling function explicitly tracks dissipation.
Hence, the luminosity generated from the cooling function
should not be considered as the true luminosity,
but instead should be considered as representing the emission
rate in the limit that all dissipated energy is lost as radiation.
Any other arbitrary cooling function that does not track dissipation
would require full radiative transfer to obtain the true luminosity.
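As a rough illustration of how the pieces of the cooling prescription fit together (a sketch with our own function names and arguments; this is not the simulation source code):

```python
import math

GAMMA = 4.0 / 3.0
K_C = 0.00069          # target entropy constant (torus atmosphere value)

def keplerian_omega(r, theta, a, M=1.0):
    """Omega_K = (1/M) / [(a/M) + (R/M)^(3/2)], with R = r sin(theta)."""
    R = r * math.sin(theta)
    return (1.0 / M) / ((a / M) + (R / M) ** 1.5)

def cooling_rate(u_g, rho0, theta, r, dt, a=0.0, theta_nocool=0.1):
    """Comoving cooling rate dU/dtau.  Returns 0 when dt > tau_cool
    (avoids creating negative-entropy gas) or when log(K/K_c) < 0
    (gas is only cooled, never heated).  theta_nocool = 0.1 matches
    the |h/r| ~ 0.07 target model."""
    K = (GAMMA - 1.0) * u_g / rho0 ** GAMMA      # K = p_g / rho0^Gamma
    tau_cool = 2.0 * math.pi / keplerian_omega(r, theta, a)
    if dt > tau_cool or math.log(K / K_C) < 0.0:
        return 0.0
    # Shaping function suppresses cooling in the funnel-jet region.
    S = math.exp(-(theta - math.pi / 2) ** 2 / (2.0 * theta_nocool ** 2))
    return -u_g * math.log(K / K_C) / tau_cool * S
```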
Magnetic flux conservation is given by the induction equation
\begin{equation}
\partial_t({\sqrt{-g}} B^i) = -\partial_j[{\sqrt{-g}}(B^i v^j - B^j v^i)] ,
\end{equation}
where $v^i=u^i/u^t$, and $g={\rm Det}(g_{\mu\nu})$ is the determinant of the
metric. In steady-state, the cooling is balanced by heating from
shocks, grid-scale reconnection, and grid-scale viscosity. No
explicit resistivity or viscosity is included.
\section{Fiducial Model of a Thin Disk Around a Non-Rotating Black Hole}
\label{sec:fiducialnonrot}
Our fiducial model, A0HR07, consists of a magnetized thin accretion disk around a
non-rotating ($a/M=0$) black hole. This is similar to the model
described in S08;
however, here we consider a larger suite of diagnostics,
a resolution of $256\times64\times32$,
and a computational box with $\Delta\phi=\pi/2$.
As mentioned in section~\ref{sec:modelsetup},
the initial torus parameters are set so that the
inner edge is at $r=20M$,
the pressure maximum is at $r=35M$,
and $|h/r|\lesssim 0.1$ at the pressure maximum (see Figure~\ref{initialtorus}).
The initial torus is threaded with magnetic field in the multi-loop
geometry as described in section~\ref{sec:modelsetup}.
For this model, we use four loops in order to ensure that
the loops are roughly circular in the poloidal plane. Once
the simulation begins, the MRI leads to MHD
turbulence which causes angular momentum transport and
drives the accretion flow to a quasi-steady state.
The fiducial model is evolved for a total time of $27350M$. We
consider the period of steady-state to be from $T_i=12500M$ to
$T_f=27350M$ and of duration $\Delta T=14850M$.
All the steady-state results described below are obtained
by time-averaging quantities over this steady-state period,
which corresponds to about $160$ orbital periods at the ISCO,
$26$ orbits at the inner edge of the initial torus ($r=20M$),
and $11$ orbits at the pressure maximum of the initial torus ($r=35M$).
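As a consistency check on the quoted orbit counts (a sketch; the helper name is ours), the Keplerian period $2\pi/\Omega_{\rm K}$ with $a/M=0$ over $\Delta T = 14850M$ reproduces roughly $160$, $26$, and $11$ orbits at the three radii:

```python
import math

def orbital_period(r, a=0.0, M=1.0):
    """Keplerian orbital period 2*pi/Omega_K in units of M, using
    Omega_K = (1/M)/[(a/M) + (r/M)^(3/2)] in the equatorial plane."""
    return 2.0 * math.pi * ((a / M) + (r / M) ** 1.5) * M

dT = 27350.0 - 12500.0   # steady-state duration, 14850M
for r, label in [(6.0, "ISCO"), (20.0, "inner torus edge"),
                 (35.0, "pressure maximum")]:
    print(f"{label}: {dT / orbital_period(r):.1f} orbits")
```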
\subsection{Initial and Evolved Disk Structure}
Figure~\ref{initialtorus} shows contour plots of various quantities in
the initial solution projected on the
($R$,\;$z$) $=(r\sin\theta$,\;$r\cos\theta$)-plane. Notice the relatively small
vertical extent of the torus.
The disk has a thickness of $|h/r|\sim0.06-0.09$ over the radius range
containing the bulk of the mass. The four magnetic loops are clearly
delineated. The plot of $Q_{\rm MRI}$ indicates
that the MRI is well-resolved within the two primary loops.
The left-most and right-most loops are marginally under-resolved, so a slightly slower-growing MRI mode
is expected to control the dynamics in this region.
However, the two primary loops tend to dominate the overall evolution of the gas.
Figure~\ref{finaltorus} shows the time-averaged solution during the
quasi-steady state period from $T_i=12500M$ to $T_f=27350M$. We refer to the disk
during this period as being ``evolved'' or ``saturated.''
The evolved disk is in steady-state up to $r\sim 9M$,
as expected for the duration of our simulation.
The rest-mass density is concentrated in the disk midplane within $\pm 2|h/r|$,
while the magnetic energy density is concentrated above the disk in a corona.
The MRI is properly resolved with $Q_{\rm MRI}\approx 6$ in the
disk midplane\footnote{\citet{sano04} find that having about
$6$ grid cells per wavelength of the fastest growing MRI mode
during saturation leads to convergent behavior for the electromagnetic stresses,
although their determination of $6$ cells was based upon a 2nd order van Leer scheme
that is significantly more diffusive than our PPM scheme.
Also, the time-averaged (or instantaneous) vertical field
at any given spatial position
has already been partially sheared by the axisymmetric MRI, and so may be less relevant
than, e.g., the maximum vertical field per unit orbital time
at any given point that is not yet sheared, which represents
the vertical component one must resolve. These issues imply
we may only need about $4$ cells per wavelength of the fastest growing mode
(as defined by using the time-averaged absolute vertical field strength).}.
The gas in the midplane has plasma $\beta\sim 10$
outside the ISCO and $\beta\sim 1$ near the black hole,
indicating that the magnetic field has been amplified
beyond the initial minimum of $\beta\sim 100$.
Figure~\ref{magnetictorus} shows the time-averaged structure of the
magnetic field during the quasi-steady state period. The field has a
smooth split-monopole structure near and inside the ISCO.
Beyond $r\sim 9M$, however, the field becomes irregular, reversing direction more
than once. At these radii, the simulation has not reached inflow equilibrium.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f1.eps}
\caption{The initial state of the fiducial model (A0HR07) consists of weakly magnetized
gas in a geometrically thin torus around a non-spinning ($a/M=0$) black hole.
Color maps have red as highest values and blue as lowest values.
Panel (a): Linear color map of rest-mass density,
with solid lines showing the thickness $|h/r|$ of the initial torus. Note
that the black hole horizon is at $r=2M$, far to the left of the plot, so
the torus is clearly geometrically thin.
Near the pressure maximum $|h/r| \lesssim 0.1$, and elsewhere $|h/r|$ is even smaller.
Panel (b): Contour plot of $b^2$ overlaid on linear color map of rest-mass density
shows that the initial field consists of four poloidal loops centered at $r/M=
29,$ $34$, $39$, $45$.
The wiggles in $b^2$ are due to the initial perturbations.
Panel (c): Linear color map of the plasma $\beta$ shows that the disk is
weakly magnetized throughout the initial torus.
Panel (d): Linear color map of the number of grid cells per fastest growing MRI wavelength,
$Q_{\rm MRI}$,
shows that the MRI is properly resolved for the primary two loops at the center of the disk.
}
\label{initialtorus}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f2.eps}
\caption{The evolved state of the fiducial model (A0HR07)
consists of a weakly magnetized thin disk surrounded by a strongly magnetized corona.
All plots show quantities that have been time-averaged over the period $12500M$ to $27350M$.
Color maps have red as highest values and blue as lowest values.
Panel (a): Linear color map of rest-mass density, with solid lines showing
the disk thickness $|h/r|$. Note that the
rest-mass density drops off rapidly inside the ISCO.
Panel (b): Linear color map of $b^2$ shows that a strong magnetic field is present
in the corona above the equatorial disk.
Panel (c): Linear color map of plasma $\beta$
shows that the $\beta$ values are much lower than in
the initial torus. This indicates that considerable
field amplification has occurred via the MRI.
The gas near the equatorial plane has $\beta\sim 10$ far outside the ISCO and approaches
$\beta\sim 1$ near the black hole.
Panel (d): Linear color map of the number of grid cells per fastest
growing MRI wavelength, $Q_{\rm MRI}$,
shows that the MRI is properly resolved within most of the accretion flow.
Note that $Q_{\rm MRI}$ (determined by the vertical magnetic field strength)
is not expected to be large inside the plunging region
where the field is forced to become mostly radial
or above the disk within the corona where the field is mostly toroidal.
}
\label{finaltorus}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f3.eps}
\caption{Magnetic field lines (red vectors)
and magnetic energy density (greyscale map) are shown for the fiducial
model (A0HR07).
Panel (a): Snapshot of the magnetic field structure at time $27200M$
shows that the disk is highly turbulent for $r > r_{\rm ISCO}=6M$
and laminar for $r<r_{\rm ISCO}$.
Panel (b): Time-averaged magnetic field in the saturated state
shows that for $r\lesssim 9M$, viz., the region of the flow that we expect to have
achieved inflow equilibrium, the geometry of the time-averaged magnetic field
closely resembles that of a split-monopole.
The dashed, vertical line marks the position of the ISCO.
}
\label{magnetictorus}
\end{figure}
\subsection{Velocities and the Viscous Time-Scale}\label{sec:velvisc}
Figure~\ref{velocitytorus} shows the velocity structure in the evolved
model. The snapshot indicates well-developed
turbulence in the interior of the disk at radii beyond the ISCO
($r>6M$), but laminar flow inside the ISCO and over most of the
corona. The sudden transition from turbulent to laminar behavior at
the ISCO, which is seen also in the magnetic field
(Figure~\ref{magnetictorus}a), is a clear sign that the flow dynamics
are quite different in the two regions. Thus the ISCO
clearly has an effect on the accreting gas. The time-averaged flow shows
that turbulent fluctuations are smoothed out within $r\sim 9M$.
Figure~\ref{streamtorus} shows the velocity stream lines
using the line integral convolution method to illustrate vector fields. This figure again
confirms that the accretion flow is turbulent at radii larger than $r_{\rm ISCO}$
but it becomes laminar inside the ISCO,
and it again shows that time-averaging
smooths away the turbulent fluctuations out to $r\sim 9M$.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f4.eps}
\caption{Flow stream lines (red vectors) and rest-mass density (greyscale map)
are shown for the fiducial model (A0HR07).
Panel (a): Snapshot of the velocity structure and rest-mass density at time $27200M$
clearly shows MRI-driven turbulence in the interior of the disk.
The rest-mass density appears more diffusively distributed
than the magnetic energy density shown in Figure~\ref{magnetictorus}a.
Panel (b): Time-averaged streamlines and rest-mass density
show that for $r\lesssim 9M$ the velocity field is mostly radial
with no indication of a steady outflow.
Time-averaging smooths out the turbulent fluctuations
in the velocity.
The dashed, vertical line marks the position of the ISCO.
}
\label{velocitytorus}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f5.eps}
\caption{Flow stream lines are shown for the fiducial model (A0HR07).
Panel (a): Snapshot of the velocity structure
at time $27200M$ clearly shows MRI-driven turbulence in the interior
of the disk.
Panel (b): Time-averaged streamlines
show that for $r\lesssim 9M$ the velocity field is mostly radial.
The dashed, vertical line marks the position of the ISCO.
}
\label{streamtorus}
\end{figure}
Figure~\ref{velvsr} shows components of the time-averaged velocity
that are angle-averaged over $\pm 2|h/r|$ around the midplane (thick dashed lines
in Figure~\ref{horvsradius}). By limiting the range of the
$\theta$ integral, we
focus on the gas in the disk, leaving out the corona-wind-jet.
Outside the ISCO, the radial velocity from the simulation agrees well with the analytical GR
estimate (Eq. \ref{eq:inflow} in Appendix~\ref{sec_inflow}).
By making this comparison, we found $\alpha |h/r|^2 \approx 0.00033$.
For our disk thickness
$|h/r| = 0.064$, this corresponds to
$\alpha\approx 0.08$, which is slightly smaller than the nominal
estimate $\alpha\sim0.1$ we assumed in \S\ref{sec_infloweq}.
As the gas approaches the ISCO, it accelerates rapidly in
the radial direction and finally free-falls into the black hole. This
region of the flow is not driven by viscosity and hence the dynamics
here are not captured
by the analytical formula.
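The quoted value of $\alpha$ follows directly from the fitted combination (a one-line check, not simulation code):

```python
# The simulated radial velocity matches the analytical GR estimate when
# alpha*|h/r|^2 ~ 0.00033; for the measured thickness |h/r| = 0.064:
alpha = 0.00033 / 0.064**2
print(f"alpha ~ {alpha:.2f}")   # prints "alpha ~ 0.08", below the nominal 0.1
```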
Figure~\ref{velvsr} also shows the inflow equilibrium time $t_{\rm ie}$,
which we take to be twice the GR version of the viscous time: $t_{\rm ie} = -2r/v_r$.
This is our estimate of the time it will take for the gas at a given radius to
reach steady-state. We see that, in a time of $\sim 27350M$,
the total duration of our simulation, the solution can be in steady-state
only inside a radius of $\sim9M$. Therefore, in the time-averaged
results described below, we consider the results to be reliable only over
this range of radius.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f6.eps}
\caption{The time-averaged, angle-averaged, rest-mass density-weighted
3-velocities and viscous timescale in the fiducial model (A0HR07)
are compared with the NT model.
Angle-averaging is performed over the disk gas lying within $\pm 2|h/r|$ of the midplane.
Top Panel: The orthonormal radial 3-velocity (solid line),
and the analytical GR estimate given in Eq. \ref{eq:inflow} of Appendix~\ref{sec_inflow} (dashed line).
Agreement for $r>r_{\rm ISCO}$ between the simulation
and NT model is found when we set $\alpha|h/r|^2\approx 0.00033$.
At smaller radii, the gas dynamics is no longer determined by viscosity
and hence the two curves deviate.
Middle Panel: Shows the orthonormal azimuthal 3-velocity $v_\phi$ (solid line)
and the corresponding Keplerian 3-velocity (dashed line).
Bottom Panel: The inflow equilibrium time scale $t_{\rm ie}\sim -2r/v_r$ (solid line)
of the disk gas is compared to the analytical GR thin disk estimate (dashed line).
At $r\sim 9M$, we see that $t_{\rm ie}\sim 2\times10^4M$. Therefore,
the simulation needs to be run for this time period (which we do)
before we can reach inflow equilibrium at this radius.
}
\label{velvsr}
\end{figure}
\subsection{Fluxes vs. Time}
Figure~\ref{5dotpanel} shows various fluxes vs. time that should
be roughly constant once inflow equilibrium has been reached.
The figure shows
the mass flux, $\dot{M}(r_{\rm H},t)$,
nominal efficiency, ${\tilde{e}}(r_{\rm H},t)$,
specific angular momentum, $\jmath(r_{\rm H},t)$,
normalized absolute magnetic flux, $\tilde{\Phi}_r(r_{\rm H},t)$,
(normalized using the unperturbed initial total flux),
and specific magnetic flux, $\Upsilon(r_{\rm H},t)$,
all measured at the event horizon ($r=r_H$).
These fluxes have been integrated over the
entire range of $\theta$ from 0 to $\pi$. The quantities $\dot{M}$,
${\tilde{e}}$ and $\jmath$ appear to saturate already at $t\sim7000M$.
However, the magnetic field parameters saturate only at $\sim12500M$.
We consider the steady-state period of the disk to begin
only after all these quantities reach their saturated values.
The mass accretion rate is quite variable, with root-mean-square (rms)
fluctuations of order two.
The nominal efficiency ${\tilde{e}}$ is fairly close to the NT efficiency,
while the specific angular momentum $\jmath$ is clearly below the NT value.
The results indicate that torques are present within the ISCO,
but do not dissipate much energy or cause significant energy to be transported
out of the ISCO.
The absolute magnetic flux per unit initial absolute flux, $\tilde{\Phi}_r$,
threading the black hole grows to about $1\%$,
which indicates that the magnetic field strength near the black hole is not
just set by the amount of magnetic flux in the initial torus.
This suggests our results are insensitive to the total
absolute magnetic flux in the initial torus.
The specific magnetic flux is $\Upsilon\approx 0.86$ on average.
Magnetic stresses are relatively weak since $\Upsilon\lesssim 1$,
which implies the magnetic field contributes
no more than $7\%$ to deviations from NT in $\jmath$ \citep{gammie99};
see Appendix~\ref{sec_gammie}.
During the quasi-steady state period,
the small deviations from NT in $\jmath$
are correlated in time with the magnitude of $\Upsilon$.
This is consistent with the fact that the specific magnetic flux controls these deviations.
Also, notice that $\tilde{\Phi}_r$ is roughly constant in time
while $\Upsilon$ varies. This is because $\dot{M}$
varies in time, and it is consistent with the fact that
$\Upsilon$ and $\dot{M}$ are anti-correlated in time.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f7.eps}
\caption{Shows for the fiducial model (A0HR07)
the time-dependence at the horizon of the mass accretion rate, $\dot{M}$ (top panel);
nominal efficiency, ${\tilde{e}}$,
with dashed line showing the NT value (next panel);
accreted specific angular momentum, $\jmath$,
with dashed line showing the NT value (next panel);
absolute magnetic flux relative to
the initial absolute magnetic flux, $\tilde{\Phi}_r$ (next panel);
and dimensionless specific magnetic flux, $\Upsilon$ (bottom panel).
All quantities have been integrated over all angles.
The mass accretion rate varies by factors of up to four
during the quasi-steady state phase.
The nominal efficiency is close to, but on average slightly lower than,
the NT value.
This means that the net energy loss through photons, winds, and jets
is below the radiative efficiency of the NT model.
The specific angular momentum is clearly lower than the NT value,
which implies that some stresses are present inside the ISCO.
The absolute magnetic flux at the black hole horizon grows
until it saturates due to local force-balance.
The specific magnetic flux $\Upsilon\lesssim 1$,
indicating that electromagnetic stresses inside the ISCO are weak
and cause less than $7\%$ deviations from NT in $\jmath$.
}
\label{5dotpanel}
\end{figure}
\subsection{Disk Thickness and Fluxes vs. Radius}
\label{sec:diskthick2}
Figure~\ref{horvsradius} shows the time-averaged disk thickness of the
fiducial model as a function of radius. Both
measures of thickness defined in \S\ref{sec:diskthick1} are shown; they
track each other. As expected, our primary thickness
measure, $|h/r|$, is the
smaller of the two. This thickness measure varies by a small amount
across the disk, but it is generally consistent with the fiducial
value $|h/r|=0.064$ measured at $r=2r_{\rm ISCO}=12M$.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f8.eps}
\caption{The time-averaged scale-height, $|h/r|$, vs. radius in the fiducial model
(A0HR07) is shown by the solid lines.
The above-equator and below-equator values
of the disk thickness are $|h/r|\sim 0.04$--$0.06$
in the inflow equilibrium region $r<9M$.
We use the specific value of $|h/r|=0.064$ as measured at $r=2r_{\rm ISCO}$
(light dashed lines) as a representative thickness for the entire flow.
Twice this representative thickness (thick dashed lines)
is used to fix the $\theta$ range of integration for averaging
when we wish to focus only on the gas in the disk
instead of the gas in the corona-wind-jet.
The root mean square thickness $(h/r)_{\rm rms}\sim 0.07$--$0.13$
is shown by the dotted lines.
}
\label{horvsradius}
\end{figure}
Figure~\ref{fluxesvsradius1} shows the behavior of various fluxes
versus radius for the full $\theta$ integration range ($0$ to $\pi$).
We see that the mass accretion rate, $\dot{M}$,
and the specific angular momentum flux, $\jmath$,
are constant up to a radius $r \sim 9M$. This is exactly the
distance out to which we expect inflow equilibrium to have been
established, given the inflow velocity and viscous time scale results
discussed in \S\ref{sec:velvisc}. The consistency of these two
measurements gives us confidence that the simulation has truly
achieved steady-state conditions inside $r=9M$. Equally clearly,
and as also expected, the simulation is not in steady-state at larger radii.
The second panel in Figure~\ref{fluxesvsradius1} shows that
the inward angular momentum flux, $\jmath_{\rm in}$,
agrees reasonably well with the NT prediction. It falls
below the NT curve at large radii, i.e., the gas there is
sub-Keplerian. This is not surprising since we have included
the contribution of the corona-wind-jet gas which, being at high latitude,
does not rotate at the Keplerian rate. Other quantities, described below,
show a similar effect due to the corona. At the horizon, $\jmath_{\rm
in}=3.286$, which is $5\%$ lower than the NT value. This deviation is
larger than that found by S08. Once again, it is because
we have included the gas in the corona-wind-jet,
whereas S08 did not.
The third panel in Figure~\ref{fluxesvsradius1} shows that
the nominal efficiency ${\tilde{e}}$ at the horizon lies below the NT prediction.
This implies that the full accretion flow (disk+corona+wind+jet) is radiatively
less efficient than the NT model.
However, the overall shape of the
curve as a function of $r$ is similar to the NT curve.
The final panel in Figure~\ref{fluxesvsradius1} shows the value of $\Upsilon$
vs. radius. We see that $\Upsilon\approx 0.86$ is constant out to $r\sim 6M$.
A value of $\Upsilon\sim 1$ would have led to $7\%$ deviations from NT in $\jmath$,
and only for $\Upsilon\sim 6.0$ would deviations become $50\%$ (see Appendix~\ref{sec_gammie}).
The fact that $\Upsilon\sim 0.86\lesssim 1$ indicates that electromagnetic stresses are weak
and cause less than $7\%$ deviations from NT in $\jmath$.
Note that one does not expect $\Upsilon$ to be constant\footnote{
We also find that the ideal MHD invariant related to the
``isorotation law'' of field lines,
$\Omega_F(r)\equiv \left(\int\int d\theta d\phi {\sqrt{-g}} |v^r B^\phi - v^\phi B^r|\right)/\left(\int\int d\theta d\phi{\sqrt{-g}} |B^r|\right)$,
is Keplerian outside the ISCO and is (as predicted by the \citealt{gammie99} model)
roughly constant from the ISCO to the horizon (see also \citealt{mg04,mck07a}).}
outside the ISCO where the magnetic field is dissipating due to MHD turbulence
and the gas is forced to be nearly Keplerian despite a sheared magnetic field.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f9.eps}
\caption{Mass accretion rate and specific fluxes are shown
as a function of radius for the fiducial model (A0HR07).
From top to bottom the panels show:
Top Panel: mass accretion rate;
Second Panel: the accreted specific angular momentum, $\jmath$ (dotted line),
$\jmath_{\rm in}$ (solid line), and the NT profile (dashed line);
Third Panel: the nominal efficiency ${\tilde{e}}$ (solid line)
and the NT profile (dashed line);
Bottom Panel: the specific magnetic flux $\Upsilon$.
For all quantities the integration range includes all $\theta$.
The mass accretion rate and $\jmath$ are roughly constant
out to $r\sim 9M$, as we would expect for inflow equilibrium.
The profile of $\jmath_{\rm in}$ lies below the
NT value at large radii because we include gas in the slowly rotating corona.
At the horizon, $\jmath$ and ${\tilde{e}}$ are modestly below the corresponding NT values.
The quantity $\Upsilon\sim 0.86$ and is roughly constant out to $r\sim 6M$,
indicating that electromagnetic stresses are weak inside the ISCO.
}
\label{fluxesvsradius1}
\end{figure}
As we have hinted above, we expect
large differences between the properties of the gas that
accretes in the disk proper, close to the midplane, and that which
flows in the corona-wind-jet region. To focus just on the disk gas, we show
in Figure~\ref{fluxesvsradius2} the same fluxes as in Figure~\ref{fluxesvsradius1},
except that we have restricted the $\theta$ range to
$\pi/2 \pm 2|h/r|$. The mass accretion rate is
no longer perfectly constant for $r<9M$. This is simply a consequence
of the fact that the flow streamlines do not perfectly follow the
particular constant $2|h/r|$ disk boundary we have chosen. The
non-constancy of $\dot{M}$ does not significantly affect the other quantities
plotted in this figure since they are all normalized by the local $\dot{M}$.
The specific angular momentum, specific energy, and specific magnetic flux
are clearly shifted closer to the NT values when we restrict the angular integration range.
Compared to the NT value, viz., $\jmath_{\rm NT}(r_{\rm H})=3.464$, the fiducial model
gives $\jmath(r_{\rm H})=3.363$ ($2.9\%$ less than NT) when integrating
over $\pm 2|h/r|$ around the midplane (i.e., only over the disk gas)
and gives $\jmath(r_{\rm H})=3.266$ ($5.7\%$ less than NT) when integrating over all $\theta$
(i.e., including the corona-wind-jet).
Even though the mass accretion rate through the corona-wind-jet
is much lower than in the disk, this gas still
contributes essentially as much to the deviation of the specific angular momentum
as the disk gas does.
In the case of the specific magnetic flux, integrating over
$\pm 2|h/r|$ around the midplane we find $\Upsilon\approx 0.45$,
while when we integrate over all angles $\Upsilon\approx 0.86$.
The \citet{gammie99} model of an equatorial (thin) magnetized flow
within the ISCO shows that deviations in the specific angular
momentum are determined by the value of $\Upsilon$.
We find that the measured values of $\Upsilon$ are able to roughly predict
the measured deviations from NT in $\jmath$.
In summary, a comparison of Figure~\ref{fluxesvsradius1} and
Figure~\ref{fluxesvsradius2} shows
that all aspects of the accretion flow in the fiducial simulation
agree much better with the NT prediction
when we restrict our attention to regions close to the midplane.
In other words, the gas in the disk proper,
defined here as the region lying within $\pm 2|h/r|$ of the midplane, is well
described by the NT model. The deviation of the angular momentum flux
$\jmath_{\rm in}$ or $\jmath$ at the horizon relative to NT is $\lesssim 3\%$,
similar to the deviation found by S08\footnote{The quantities $\jmath_{\rm in}$ and
$\jmath$ are nearly equal at the horizon in the calculations reported
here whereas they were different in S08. This is because S08 used an
alternate definition of $\jmath_{\rm in}$. If we had used that definition
here, we would have found a deviation of $\sim2\%$ in $\jmath_{\rm in}$,
just as in S08.}, while the nominal efficiency ${\tilde{e}}$ agrees to within $\sim1\%$.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f10.eps}
\caption{Similar to Figure~\ref{fluxesvsradius1},
but here the integration range only includes angles within
$\pm 2|h/r|=\pm0.128$ radians of the midplane. This allows us to focus on the disk gas.
The mass accretion rate is no longer constant
because streamlines are not precisely radial.
The quantities shown in the second and third panels are not
affected by the non-constancy of $\dot{M}$
because they are ratios of time-averaged fluxes
within the equatorial region and are related to ideal MHD invariants.
As compared to Figure~\ref{fluxesvsradius1},
here we find that $\jmath$, $\jmath_{\rm in}$, and ${\tilde{e}}$ closely follow the NT model.
For example, $\jmath(r_{\rm H})=3.363$ is only $2.9\%$ less than NT.
This indicates that the disk and coronal regions behave quite differently.
As one might expect, the disk region behaves like the NT model,
while the corona-wind-jet does not.
The specific magnetic flux is even smaller than in
Figure~\ref{fluxesvsradius1} and is $\Upsilon\sim 0.45$,
which indicates that electromagnetic stresses
are quite weak inside the disk near the midplane.
}
\label{fluxesvsradius2}
\end{figure}
\subsection{Comparison with Gammie (1999) Model}
\label{sec_gammiecompare}
Figure~\ref{gammie4panel} shows a comparison between the fiducial model
and the \citet{gammie99} model of a magnetized thin accretion flow within the ISCO
(see also Appendix~\ref{sec_gammie}).
Quantities have been integrated within $\pm 2|h/r|$ of the midplane
and time-averaged over a short period from $t=17400M$ to $t=18400M$.
Note that time-averaging $b^2$, $\rho_0$, etc.
over long periods can yield no consistent comparable solution if the value of $\Upsilon$
varies considerably during the averaging period.
Also, note that $\Upsilon$ depends upon height
(the vertical stratification seen in
Figures~\ref{fluxesvsradius1} and~\ref{fluxesvsradius2}),
so the vertical averaging used to obtain $\Upsilon$
can make it difficult to compare the simulations
with the \citet{gammie99} model, which has no vertical stratification.
In particular, using equation~(\ref{Dotsgammie}) over this time period,
we find that $\Upsilon\approx 0.2,~0.3,~0.44,~0.7,~0.8$
for integrations around the midplane of, respectively,
$\pm0.01,~\pm0.05,~\pm2|h/r|,~\pm\pi/4,~\pm\pi/2$,
with best matches to the Gammie model (i.e. $b^2/2$ and other quantities match)
using an actual value of $\Upsilon=0.2,~0.33,~0.47,~0.8,~0.92$.
This indicates that stratification likely causes
our diagnostic to underestimate the value of $\Upsilon$ that best matches the Gammie model
when the integration is performed over highly-stratified regions.
However, the consistency is fairly good considering how much $\Upsilon$
varies with height.
Overall, Figure~\ref{gammie4panel} shows
how electromagnetic stresses control the deviations from NT within the ISCO.
The panels with $D[\jmath]$ and $D[{\rm e}]$ show how the electromagnetic flux starts
out large at the ISCO and drops to nearly zero on the horizon.
This indicates the electromagnetic flux has been converted
into particle flux within the ISCO by ideal (non-dissipative) electromagnetic stresses\footnote{This
behavior is just like that seen in ideal MHD jet solutions, but inverted with radius.}.
The simulated magnetized thin disk agrees quite well with the Gammie solution,
in contrast to the relatively poor agreement found for thick disks \citep{mg04}.
Only the single parameter $\Upsilon$ determines the Gammie solution,
so the agreement with the value and radial dependence among multiple independent terms
is a strong validation that the Gammie model is working well.
Nevertheless, there are some residual deviations
near the ISCO where the thermal pressure dominates the magnetic pressure.
Even if deviations from NT are present right at the ISCO,
the total deviation of the particle flux between the ISCO and horizon
equals the deviation predicted by the \citet{gammie99} model,
as also found in \citet{mg04} for thick disks.
This indicates that the \citet{gammie99} model accurately predicts
the effects of electromagnetic stresses inward of the ISCO.
Finally, note that the electromagnetic stresses within the ISCO
are ideal and non-dissipative in the Gammie model.
Since the flow within the ISCO in the simulation
is mostly laminar leading to weak non-ideal (resistive or viscous) effects,
the dissipative contribution (which could lead to radiation) can be quite small.
An exception to this is the presence of extended current sheets,
present near the equator within the ISCO in the simulations,
whose dissipation requires a model of the (as of yet, poorly understood)
rate of relativistic reconnection.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f11.eps}
\caption{Comparison between the accretion flow
(within $\pm 2|h/r|$ around the midplane) in the fiducial model (A0HR07), shown by solid lines,
and the model of a magnetized thin accretion disk
(inflow solution) within the ISCO by \citet{gammie99}, shown by dashed lines.
In all cases the red vertical line shows the location of the ISCO.
Top-left panel: Shows
the radial 4-velocity, where the Gammie solution assumes $u^r=0$ at the ISCO.
Finite thermal effects lead to non-zero $u^r$ at the ISCO for the simulated disk.
Bottom-left panel: Shows the rest-mass density ($\rho_0$, black line),
the internal energy density ($u_g$, magenta line),
and magnetic energy density ($b^2/2$, green line).
Top-right and bottom-right panels: Show the percent deviations
from NT for the simulations and Gammie solution
for the specific particle kinetic flux ($u_\mu$, black line),
specific enthalpy flux ($(u_g + p_g) u_\mu/\rho_0$, magenta line),
and specific electromagnetic flux ($(b^2 u^r u_\mu - b^r b_\mu)/(\rho_0 u^r)$, green line),
where for $\jmath$ we use $\mu=\phi$ and for ${\rm e}$ we use $\mu=t$.
As usual, the simulation result for the specific fluxes
is obtained by a ratio of flux integrals instead of the direct ratio of flux densities.
The total specific flux is constant vs. radius and is a sum of
the particle, enthalpy, and electromagnetic terms.
This figure is comparable to Fig.~10 for a thick ($|h/r|\sim 0.2$--$0.25$)
disk in \citet{mg04}.
Finite thermal pressure effects cause the fiducial model to deviate
from the inflow solution near the ISCO, but the solutions rapidly
converge inside the ISCO and the differences between the simulation
result and the Gammie model (relative to the total specific angular momentum or energy)
are less than $0.5\%$.
}
\label{gammie4panel}
\end{figure}
\subsection{Luminosity vs. Radius}
Figure~\ref{luminosityvsradius} shows radial profiles of two measures
of the disk luminosity: $L(<r)/\dot{M}$, which is the cumulative
luminosity inside radius $r$, and $d(L/\dot{M})/d\ln{r}$, which gives
the local luminosity at $r$.
We see that the profiles from the simulation are quite close to
the NT prediction, especially in the steady-state
region. As a way of measuring the deviation of the simulation results from
the NT model, we estimate what fraction of the disk luminosity is
emitted inside the ISCO; recall that the NT model predicts zero luminosity here.
The fiducial simulation gives $L(<r_{\rm ISCO})/\dot{M} = 0.0021$,
which is $3.5\%$ of the nominal efficiency ${\tilde{e}}[{\rm NT}]=0.058$ of a thin NT disk
around a non-spinning black hole.
This shows that the excess luminosity radiated within the ISCO is quite small.
The relative luminosity within the ISCO is $\tilde{L}_{\rm in}=3.5\%$
and the relative luminosity within the inflow equilibrium region is
$\tilde{L}_{\rm out}=8.0\%$.
Hence, we conclude that, for accretion disks which are as thin as our
fiducial model, viz., $|h/r|\sim0.07$,
the NT model provides a good description of the luminosity profile.
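Written out, the luminosity diagnostics used above are, for $a/M=0$,
\begin{equation}
{\tilde{e}}[{\rm NT}] = 1 - E_{\rm ISCO} = 1 - \sqrt{8/9} ,
\qquad
\tilde{L}_{\rm in} = \frac{L(<r_{\rm ISCO})/\dot{M}}{{\tilde{e}}[{\rm NT}]} ,
\end{equation}
where $E_{\rm ISCO}$ is the specific energy at the ISCO of a Schwarzschild black hole,
i.e., the excess luminosity within the ISCO
is normalized by the total radiative efficiency of the NT disk.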
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f12.eps}
\caption{Luminosity per unit rest-mass accretion rate vs. radius (top panel)
and the logarithmic derivative of this quantity (bottom panel)
are shown for the fiducial model (A0HR07).
The integration includes all $\theta$ angles.
The simulation result (solid lines, truncated into dotted lines
outside the radius of inflow equilibrium)
shows that the accretion flow emits more radiation
than the NT prediction (dashed lines) at small radii.
However, the excess luminosity within the ISCO
is only $\tilde{L}_{\rm in}\approx 3.5\%$,
where $\tilde{L}_{\rm in}$ is measured relative to ${\tilde{e}}[{\rm NT}]$,
the NT efficiency at the horizon (or equivalently at the ISCO).}
\label{luminosityvsradius}
\end{figure}
\subsection{Luminosity from Disk vs. Corona-Wind-Jet}
\label{sec_fluxdiskcorona}
The fiducial model described so far includes a tapering of the cooling
rate as a function of height above the midplane, given by the
function $S[\theta]$ (see equation~\ref{cooling}).
We introduced this taper in order to only cool bound ($-u_t(\rho_0+u_g+ p_g+b^2)/\rho_0 < 1$)
gas and to avoid including the emission from the part of the corona-wind-jet
that is prone to excessive numerical dissipation due to the low resolution
used high above the accretion disk.
This is a common approach that others have also taken
when performing GRMHD simulations of thin disks (N09, N10).
However, since our tapering function does not explicitly
refer to how bound the gas is, we need to check that it
is consistent with cooling only bound gas.
We have explored this question by re-running the fiducial model with all
parameters the same except that we turned off the tapering function altogether,
i.e., we set $S[\theta]=1$.
This is the only model for which the tapering function is turned off.
Figure~\ref{taperoff} shows a number of luminosity profiles
for the fiducial model and the no-tapering model.
This comparison shows that, whether or not we include a taper,
the results for the luminosity from all the bound gas is nearly the same.
Without tapering, there is some luminosity at high latitudes above $\pm 8|h/r|$
corresponding to emission from the low-density jet region (black solid line).
This region is unbound and numerically inaccurate,
and it is properly excluded when we use the tapering function.
Another conclusion from the above test is that, as far as the
luminosity is concerned, it does not matter much whether we focus
on the midplane gas ($(\pi/2)\pm2|h/r|$) or include all the bound gas.
The deviations of the luminosity from NT in the two cases are similar --
{\it changes} in the deviation are less than $1\%$.
An important question to ask is whether the excess luminosity from
within the ISCO is correlated with, e.g., deviations from NT in
$\jmath$, since $D[\jmath]$ could then be used as a proxy for the
excess luminosity. We investigate this in the context of the
simulation with no tapering. For an integration over $\pm 2|h/r|$
around the midplane (which we identify with the disk component), or
over all bound gas, or over all the gas (bound and unbound), the
excess luminosity inside the ISCO is $\tilde{L}_{\rm
in}=3.3\%,~4.4\%,~5.4\%$, and the deviation from NT in $\jmath$ is
$D[\jmath]=-3.6\%, ~-6.7\%, ~-6.7\%$, respectively. We ignore the
luminosity from unbound gas since this is mostly due to material in a
very low density region of the simulation where thermodynamics is not
evolved accurately. Considering the rest of the results, we see that
$D[\jmath]$ is $100\%$ larger when we include bound gas outside the
disk compared to when we consider only the disk gas, whereas the
excess luminosity increases by only $32\%$. Therefore, when we
compute $\jmath$ by integrating over all bound gas and then assess
the deviation of the simulated accretion flow from the NT model, we
strongly overestimate the excess luminosity of the bound gas relative
to NT. A better proxy for the latter is the deviations from NT in
$\jmath$ integrated only over the disk component (i.e. over $\pm
2|h/r|$ around the midplane).
Furthermore, we note that the gas that lies beyond $\pm 2|h/r|$ from
the disk midplane consists of coronal gas, which is expected to be
optically thin and to emit a power-law spectrum of photons. For many
applications, we are not interested in this component but rather care
only about the thermal blackbody-like emission from the
optically-thick region of the disk. For such studies, the most
appropriate diagnostic from the simulations is the radiation emitted
within $\pm2|h/r|$ of the midplane. According to this diagnostic, the
excess emission inside the ISCO is only $\tilde{L}_{\rm in}=3.4\%$ in
the model without tapering, and $3.5\%$ in the fiducial model that
includes tapering.
Lastly, we consider variations in the cooling timescale, $\tau_{\rm cool}$,
which is another free parameter of our cooling model that we generally set to $2\pi/\Omega_{\rm K}$.
However, we consider one model that is otherwise identical to the fiducial model
except we set $\tau_{\rm cool}$ to be five times shorter so that the
cooling rate is five times faster.
We find that $\tilde{L}_{\rm in}=4.2\%$,
which is slightly larger than the fiducial model with
$\tilde{L}_{\rm in}=3.5\%$.
Even though the cooling rate is five times faster than the orbital rate,
there is only $20\%$ more luminosity from within the ISCO.
This is likely due to the flow within the ISCO being mostly laminar
with little remaining turbulence to drive dissipation and radiation.
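For concreteness (in geometrized units, and for the $a/M=0$ models considered here),
the fiducial choice cools the gas on one local orbital period,
\begin{equation}
\tau_{\rm cool} = \frac{2\pi}{\Omega_{\rm K}} , \qquad
\Omega_{\rm K} = \frac{M^{1/2}}{r^{3/2}} \quad (a/M=0) ,
\end{equation}
while the faster-cooling variant sets $\tau_{\rm cool}=(2\pi/\Omega_{\rm K})/5$.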
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f13.eps}
\caption{Shows enclosed luminosity vs. radius for models
with different cooling prescriptions and $\theta$
integration ranges. The black dashed line corresponds to the
NT model. The luminosity for the fiducial model A0HR07,
which includes a tapering of the cooling with disk height as
described in \S\ref{sec:goveqns},
is shown integrated over $\pm2|h/r|$ from the midplane (black dotted line),
integrated over all bound gas (black long dashed line), and integrated over all fluid (black solid line). Essentially all the gas is bound and so the black solid and long dashed lines are indistinguishable.
The red lines are for a model that is identical to the fiducial run,
except that no tapering is applied to the cooling.
For this model the lines are:
red solid line: all angles, all fluid;
red dotted line: $\pm 2|h/r|$ around the midplane;
red long dashed line: all bound gas.
The main result is that the luminosity from bound gas is
nearly the same (especially at the ISCO)
whether or not we include tapering
(compare the red long dashed line and the black long dashed line).
}
\label{taperoff}
\end{figure}
\section{Thin Disks with Varying Magnetic Field Geometry}
\label{sec:magneticfield}
We now consider the effects of varying the initial field geometry.
Since the magnetic field can develop large-scale structures
that do not act like a local scalar viscosity, there could in principle be
long-lasting effects on the
accretion flow properties as a result of the initial field geometry.
This is especially a concern for geometrically thin disks,
where the 1-loop field geometry corresponds to a
severely squashed and highly organized field loop bundle with
long-range coherence in the radial direction, whereas our
fiducial 4-loop model corresponds to nearly circular loops which
impose much less radial order on the MRI-driven turbulence.
To investigate this question we have simulated a model similar
to our fiducial run except that we initialized the gas torus
with a 1-loop type field geometry instead of our usual multi-loop geometry.
Figure~\ref{jfluxvsgeometry} shows the radial dependence
of $\jmath$, $\jmath_{\rm in}$, ${\tilde{e}}$, and $\Upsilon$
for the two field geometries under consideration, and
Table~\ref{tbl_magneticfield} reports numerical estimates
of various quantities at the horizon.
Consider first the solid lines (4-loop fiducial run) and
dotted lines (1-loop run) in Figure~\ref{jfluxvsgeometry},
both of which correspond to integrations in $\theta$ over
$\pm 2|h/r|$ around the midplane.
The simulation with 4-loops is clearly more consistent
with NT than the 1-loop simulation.
The value of $\jmath$ at the horizon in the 4-loop model deviates
from NT by $-2.9\%$.
Between times $12900M$ and $17300M$,
the 1-loop model deviates by $-5.6\%$,
while at late times, during the saturated period, it deviates by
$-7.2\%$.
The long-dashed lines show the effect of integrating over all $\theta$
for the 1-loop model.
This introduces yet another systematic deviation from NT
(as already noted in \S\ref{sec_fluxdiskcorona});
now the net deviation of $\jmath$
becomes $-10.7\%$ for times $12900M$ to $17300M$
and becomes $-15.8\%$ for the saturated state.
Overall, this implies that the assumed initial field geometry
has a considerable impact on the specific angular momentum
profile and the stress inside the ISCO.
This also indicates that the saturated state
is only reached after approximately $17000M$.
It is also possible that the 1-loop model may never properly converge,
because magnetic flux of the same sign
(the amount of flux initially available is arbitrary,
since it depends on the arbitrary position of the initial gas pressure maximum)
may continue to accrete onto the black hole
and lead to a qualitatively different accretion state
(as seen in \citet{igumenshchev03} and \citet{mg04} for their vertical field models).
At early times, the nominal efficiency, ${\tilde{e}}$,
shows no significant dependence on
the field geometry,
and sits near the NT value for both models.
At late time in the 1-loop model, ${\tilde{e}}$ rises somewhat,
which may indicate the start of the formation
of a qualitatively different accretion regime.
\input{table_field.tex}
Figure~\ref{lumvsgeometry} shows the normalized luminosity.
We see that the 1-loop model produces more luminosity
inside the ISCO.
For times $12900M$ to $17300M$,
$\tilde{L}_{\rm in}=5.4\%$ (integrated over all $\theta$)
compared to $3.5\%$ for the 4-loop field geometry.
Thus there is $50\%$ more radiation coming from
inside the ISCO in this model.
At late time during the saturated state,
$\tilde{L}_{\rm in}=4.6\%$ (integrated over all $\theta$).
Thus there is approximately $31\%$ more radiation coming from
inside the ISCO in this model
during the late phase of accretion.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f21.eps}
\caption{Radial profiles of
$\jmath$ and $\jmath_{\rm in}$ (top panel),
${\tilde{e}}$ (middle panel), and $\Upsilon$ (bottom panel)
are shown for two different initial field geometries.
Results for the fiducial 4-loop field geometry (model A0HR07)
integrated over $\pm 2|h/r|$ around the midplane are shown by solid lines,
for the 1-loop field geometry (model A0HR07LOOP1) integrated over $\pm 2|h/r|$
around the midplane by dotted lines,
and the 1-loop model integrated over all angles by long-dashed lines.
The short-dashed lines in the top two panels show the NT result.
We see that the 1-loop field geometry shows larger deviations from NT
in $\jmath$ and $\Upsilon$ compared to the 4-loop geometry.
The panels also reemphasize the point that including all $\theta$ angles
in the angular integration leads to considerable changes in $\jmath$ and $\Upsilon$
due to the presence of magnetic field in the corona-wind-jet.
}
\label{jfluxvsgeometry}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f22.eps}
\caption{Similar to Figure~\ref{jfluxvsgeometry}
for the initial 4-loop and 1-loop field geometries,
but here we show the luminosity (top panel)
and log-derivative of the luminosity (bottom panel).
The luminosity is slightly higher for the 1-loop model
compared to the 4-loop model.
}
\label{lumvsgeometry}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f23.eps}
\caption{Similar to Figure~\ref{5dotpanel},
but here we compare the initial 4-loop fiducial model (black solid lines)
and the 1-loop model (dashed magenta lines). The horizontal black dashed lines
in the second and third panels show the predictions of the
NT model.
The mass accretion rate, $\dot{M}$, has larger root-mean-squared
fluctuations in the 1-loop model,
which is indicative of more vigorous turbulence.
The nominal efficiency, ${\tilde{e}}$, shows no clear difference.
The specific angular momentum, $\jmath$, is lower in the 1-loop
model compared to the 4-loop model.
This indicates that the 1-loop field leads to larger stress within the ISCO.
The absolute magnetic flux (per unit initial total absolute flux)
on the black hole is larger in the 1-loop model than the 4-loop model.
Since $\tilde{\Phi}_r\sim 1/2$ for the 1-loop model,
essentially half of the initial loop was advected onto the black hole,
while the other half gained angular momentum and was advected away.
This may indicate that the 1-loop geometry
is a poor choice for the initial field geometry,
since the magnetic flux that ends up on the black hole is determined
by the initial conditions.
For times $12900M$ to $17300M$,
the value of $\Upsilon$ is about twice as large in the 1-loop model,
which implies roughly twice the electromagnetic stress within the ISCO.
}
\label{5dotpanel1loop}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f24.eps}
\caption{Normalized electromagnetic stress, $W/\dot{M}$,
as a function of radius for the fiducial model (black lines)
and the otherwise identical 1-loop model (magenta lines).
The solid lines correspond to a $\theta$ integration over all angles,
while the dotted lines correspond to a $\theta$ integration over $\pm 2|h/r|$.
The dashed line shows the NT result,
while the dashed green lines show the Gammie (1999) result
for $\Upsilon=0.60, 0.89, 0.90, 1.21$, ordered from bottom to top.
The Gammie (1999) model gives a reasonable fit to the simulation's
stress profile within the ISCO.
The 1-loop model shows about $50\%$ larger peak normalized stress (integrated over all angles)
compared to the multi-loop model (integrated over all angles),
which is consistent with the 1-loop model leading to larger
deviations from NT (about $50\%$ larger luminosity over the period used
for time averaging).
The large differences between the solid
and dotted lines again highlight the fact that the stress within the disk is much smaller
than the stress over all $\theta$, which includes the corona+wind+jet.
As pointed out in S08, even though such a plot of the electromagnetic stress
appears to indicate large deviations from NT within the ISCO,
this is misleading because one has not specified the quantitative
effect of the non-zero value of $W/\dot{M}$ on physical quantities within the ISCO.
Apparently large values of $W/\dot{M}$ do not
necessarily correspond to large deviations from NT.
For example, quantities such as $\jmath$, ${\rm e}$, and the luminosity
only deviate by a few percent from NT for the multi-loop model.
}
\label{loop1manystress}
\end{figure}
Table~\ref{tbl_magneticfield} also reports the results
for thick ($|h/r|\approx 0.3$) disk models
initialized with the multi-loop and 1-loop field geometries.
This again shows that the deviations from NT are influenced by the
initial magnetic field geometry and scale with $|h/r|$ in a way
expected by our scaling laws.
The 1-loop models show larger deviations from NT in $\jmath$,
which is related to their larger values of $\Upsilon$.
The deviations from NT are less affected
by the initial magnetic field geometry for thicker disks,
because the deviations from NT are also driven
by thermal effects and Reynolds stresses rather
than primarily electromagnetic stresses as for thin disks.
These effects can be partially understood by looking
at the specific electromagnetic stress, $\Upsilon$, shown in Figure~\ref{jfluxvsgeometry}.
We find $\Upsilon\approx 0.45$ for the 4-loop field geometry.
For times $12900M$ to $17300M$,
$\Upsilon\approx 0.71$ in the 1-loop field geometry,
and during the saturated state $\Upsilon\approx 1.28$.
For times $12900M$ to $17300M$,
the $50\%$ larger $\Upsilon$ appears to be the reason for
the $50\%$ extra luminosity inside the ISCO in the 1-loop model.
The magnetized thin disk model of \citet{gammie99}
predicts that, for $a/M=0$, specific magnetic fluxes of $\Upsilon=0.45,~0.71$
should give deviations from NT of $-D[\jmath]=1.9\%,~3.9\%$, respectively.
These are close to the deviations seen in the simulations,
but they are not a perfect match for reasons we can explain.
First, the details of how one spatially-averages quantities
(e.g., average of ratio vs. ratio of averages)
when computing $\Upsilon$ lead to moderate changes in its value,
and, for integrations outside the midplane,
comparisons to the Gammie model can require
slightly higher $\Upsilon$ than our diagnostic reports.
Second, the finite thermal pressure at the ISCO
leads to (on average over time) a deviation already at the ISCO
that is non-negligible compared to the deviation introduced
by electromagnetic stresses between the ISCO and horizon.
This thermal component is not always important,
e.g., see the comparison in Figure~\ref{gammie4panel}.
Still, as found in \citet{mg04} for thick disks at least,
the deviations from NT contributed by the thermal pressure
are of the same order as the deviations predicted by the Gammie model.
These results motivate extending the \citet{gammie99} model
to include a finite (but still small) thermal pressure
such that the boundary conditions at the ISCO lead to a non-zero radial velocity.
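The sensitivity to the averaging convention noted above (average of ratio
vs. ratio of averages) is easy to illustrate. The sketch below uses
synthetic $\theta$-profiles as hypothetical stand-ins for the simulation's
flux densities; none of the arrays are actual simulation data.

```python
import numpy as np

# Hypothetical theta-profiles of two flux densities, standing in for the
# simulation's electromagnetic and mass flux densities (not actual data).
theta  = np.linspace(-0.128, 0.128, 65)           # +-2|h/r| around the midplane
f_em   = np.exp(-(theta / 0.05)**2)               # narrowly peaked "EM" flux density
f_mass = 1.0 + 10.0 * np.exp(-(theta / 0.1)**2)   # broader "mass" flux density

# Prescription 1: ratio of averages (theta-average each flux, then divide).
ratio_of_averages = f_em.mean() / f_mass.mean()

# Prescription 2: average of the pointwise ratio.
average_of_ratios = (f_em / f_mass).mean()

# The two prescriptions disagree whenever the profiles are stratified,
# which is why the measured value of Upsilon depends on the convention.
```

Because the profiles are vertically stratified, the two prescriptions
return moderately different values, mirroring the convention dependence
of $\Upsilon$ discussed above.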
Within the ISCO, we find that the time-averaged
and volume-averaged comoving field strength for the 4-loop geometry
roughly follows $|b|\propto r^{-0.7}$ within $\pm 2|h/r|$ of the disk midplane,
while at higher latitudes we have a slightly steeper scaling.
For times $12900M$ to $17300M$,
the 1-loop geometry has $|b|\propto r^{-1.1}$ within $\pm 2|h/r|$ of the disk midplane,
and again a slightly steeper scaling in the corona.
Other than this scaling, there are no qualitative differences in the distribution
of any comoving field component with height above the disk.
While the \citet{gammie99} solution does not predict a power-law
dependence for $|b|$, for a range between $\Upsilon=0.4$--$0.8$,
the variation near the horizon is approximately $|b|\propto r^{-0.7}- r^{-0.9}$,
which is roughly consistent with the simulation results.
The slightly steeper slope we obtain for the 1-loop geometry is consistent
with a higher specific magnetic flux,
although the variations in $\Upsilon$ for integration over different ranges of angle
imply stratification and a non-radial flow
which the \citet{gammie99} model does not account for.
This fact and the rise in $\Upsilon$ with decreasing radius
seen in Figure~\ref{jfluxvsgeometry}
indicate a non-trivial degree of angular compression
as the flow moves towards the horizon.
Overall, our results suggest that deviations from NT
depend on the assumed field geometry,
and that the \citet{gammie99} model roughly fits the simulations.
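Power-law slopes such as the $|b|\propto r^{-0.7}$ quoted above can be
measured by a least-squares fit in log--log space. A minimal sketch follows;
the arrays are synthetic stand-ins for simulation output, with the slope
built in by assumption.

```python
import numpy as np

# Synthetic stand-in for a radial profile of the comoving field strength
# between the horizon and the ISCO (a/M = 0); not actual simulation data.
r    = np.linspace(2.0, 6.0, 50)   # radius in units of M
babs = 0.3 * r**(-0.7)             # |b| constructed with an assumed -0.7 slope

# Least-squares fit of log|b| against log(r); the first returned
# coefficient is the power-law slope.
slope, lognorm = np.polyfit(np.log(r), np.log(babs), 1)
```

Applied to the actual time- and volume-averaged $|b|(r)$ within
$\pm 2|h/r|$ of the midplane, a fit of this kind yields the slopes
quoted in the text.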
Figure~\ref{5dotpanel1loop}
shows the same type of plot as in Figure~\ref{5dotpanel},
but here we compare the fiducial 4-loop model with
the 1-loop model.
As mentioned above, the 1-loop geometry has a
larger deviation in $\jmath$ from the NT value,
corresponding to larger stresses inside the ISCO.
The absolute magnetic flux (per unit initial total absolute magnetic flux)
on the black hole $\tilde{\Phi}_r$ is of order $1/2$,
suggesting that the inner half of the initial field bundle
accreted onto the black hole,
while the other half was advected to larger radii.
This is consistent with what is seen in simulations of thick tori \citep{mg04,bhk08}.
This suggests that using the 1-loop geometry leads to results
that are sensitive to the initial absolute magnetic flux,
while the multiple loop geometry leads to results that are insensitive
to the initial absolute magnetic flux.
Such dependence of the electromagnetic stress on initial magnetic field geometry
has also been reported in 3D pseudo-Newtonian simulations by \citet{ra01}
and in 3D GRMHD simulations by \citet{bhk08}.
Figure~\ref{loop1manystress}
shows the electromagnetic stress as computed by equation~(\ref{stress})
for the multiple loop fiducial model (A0HR07)
and the otherwise identical 1-loop model (A0HR07LOOP1).
We only show the electromagnetic part of the stress,
and within the ISCO this is to within a few percent the same
as the total stress obtained by including all terms in the stress-energy tensor.
Outside the ISCO, the total stress agrees more with the NT model.
The figure shows the full-angle integrated electromagnetic stress,
the electromagnetic stress integrated over only $\pm 2|h/r|$,
the NT stress,
and the \citet{gammie99} electromagnetic stress for $\Upsilon=0.60, 0.89, 0.90, 1.21$
(we choose $\Upsilon$, the only free parameter of the model,
such that the peak magnitude of the stress agrees with the simulation).
The chosen $\Upsilon$ values are close to our diagnostic's value
of $\Upsilon$ for these models, which demonstrates that the \citet{gammie99} model
consistently predicts the simulation's results with a single free parameter.
The stress is normalized by the radially dependent $\dot{M}(r)$
that is computed over the same $\theta$ integration range.
We do not restrict the integration to bound material as done in S08
(in S08, the stress is integrated over $\pm 2|h/r|$ and only for bound material,
while in N10 the stress\footnote{
N10's figures 12 and 13 show stress vs. radius,
but some of the integrals they computed were not re-normalized
to the full $2\pi$ when using their simulation $\phi$-box size of $\pi/2$,
so their stress curves are all a constant factor of $4$ times larger
than the actual stress (Noble, private communication).}
is only over bound material).
The stress for the fiducial model was time-averaged over
the saturated state, while the 1-loop model was time-averaged
from time $12900M$ to $17300M$ in order to best compare
with the early phase of accretion for the 1-loop model studied in N10.
Figure~\ref{loop1manystress} shows that
the simulation and NT stress profiles do not agree well,
suggesting an {\it apparently} large stress within the ISCO.
However, as first pointed out by S08 and discussed in section~\ref{magneticfluxdiag},
this stress does not actually correspond to a large deviation from NT in
physically relevant quantities such as the
specific angular momentum, specific energy, and luminosity.
This point is clarified by making a comparison to the \citet{gammie99} model's stress,
which agrees reasonably well with the simulation stress inside the ISCO.
Even though the stress may appear large inside the ISCO,
the stress corresponding to the Gammie model with (e.g.) $\Upsilon=0.60$
only translates into a few percent deviations from NT.
This figure also demonstrates that the initial magnetic field
geometry affects the amplitude of the stress in the same direction as it
affects other quantities and is reasonably well predicted by the \citet{gammie99} model.
The initial magnetic field sets the saturated value of $\Upsilon$,
which is directly related to the electromagnetic stresses within the ISCO.
The 1-loop model leads to a peak stress (integrated over all angles)
within the ISCO that is about $50\%$ larger than the multi-loop model (integrated over all angles),
which is likely related to the extra $50\%$ luminosity in the 1-loop
model compared to the multi-loop model.
The fact that the stress normalization changes with initial field geometry
is consistent with other 3D GRMHD simulations of thick disks by \citet{bhk08}.
This figure again shows how the stress within the disk ($\pm 2|h/r|$)
is much smaller than the total disk+corona+wind+jet (all $\theta$).
Finally, we discuss previous results obtained for
other field geometries using an energy-conserving
two-dimensional GRMHD code \citep{mg04}.
While such two-dimensional simulations are unable to sustain turbulence,
the period over which the simulations do show turbulence agrees quite well
with the corresponding period in three-dimensional simulations.
This implies that the turbulent period in the
two-dimensional simulations may be qualitatively correct.
The fiducial model of \citet{mg04} was of a thick ($|h/r|\sim 0.2$--$0.25$)
disk with a 1-loop initial field geometry around an $a/M=0.9375$
black hole.
This model had $\Upsilon\sim 1$ near the midplane within the ISCO
and $\Upsilon\sim 2$ when integrated over all $\theta$ angles.
Their measured value of $\jmath\approx 1.46$ (integrated over all angles)
and scaling $|b|\propto r^{-1.3}$ within the ISCO near the disk midplane \citep{mck07b},
along with $\Upsilon\sim 1$--$2$,
are roughly consistent with the \citet{gammie99} model prediction
of $\jmath\approx 1.5$.
Similarly, the strong vertical field geometry model they studied had $\Upsilon\sim 2$
near the midplane within the ISCO and $\Upsilon\sim 6$ integrated over all $\theta$ angles.
Their measurement of $\jmath\approx -1$ integrated over all angles
is again roughly consistent with
the model prediction of $\jmath\approx -1.2$ for $\Upsilon\sim 6$.
Note that in this model, $\Upsilon$ rises with time (as usual, toward saturation),
but soon after $\Upsilon\gtrsim 2$ is reached in the midplane,
the disk is pushed away from the black hole and $\Upsilon$ is then forced even higher.
Evidently, the accumulated magnetic flux near the black hole pushes
the system into a force-free magnetosphere state -- not an accretion state.
This shows the potential danger of using strong-field initial conditions
(like the 1-loop field geometry),
since the results are sensitive to the assumed initial flux
that is placed on (or rapidly drops onto) the black hole.
Even while the disk is present, this particular model
exhibits net angular momentum extraction from the black hole.
This interesting result needs to be confirmed using
three-dimensional simulations of both thick and thin disks.
\section{Introduction}
\label{sec_intro}
Accreting black holes (BHs) are among the most powerful astrophysical
objects in the Universe. Although they have been the target of
intense research for a long time, many aspects of black hole accretion
theory remain uncertain to this day. Pioneering work by
\citet{Bardeen:1970:SCO,sha73,nt73,pt74} and others indicated that
black hole accretion through a razor-thin disk can be highly
efficient, with up to $42\%$ of the accreted rest-mass-energy being
converted into radiation. These authors postulated the existence of a
turbulent viscosity in the disk, parameterized via the famous
$\alpha$-prescription \citep{sha73}. This viscosity causes outward
transport of angular momentum; in the process, it dissipates energy and
produces the radiation. The authors also assumed that, within the inner-most
stable circular orbit (ISCO) of the black hole, the viscous torque
vanishes and material plunges into the black hole with constant energy
and angular momentum flux per unit rest-mass flux.
This is the so-called ``zero-torque'' boundary condition.
Modern viscous
hydrodynamical calculations of disks with arbitrary thicknesses
suggest that the zero-torque condition is a good
approximation when the height ($h$) to radius ($r$) ratio of the accreting gas is small:
$|h/r|\lesssim 0.1$ \citep{pac00,ap03,snm08,sadowski_slim_disks_2009,abr10}.
Radiatively efficient disks in
active galactic nuclei (AGN) and X-ray binaries are expected to have
disk thickness $|h/r| < 0.1$ whenever the luminosity is limited
to less than about $30\%$ of the Eddington luminosity \citep{mcc06}.
The above hydrodynamical studies thus suggest that systems in this
limit should be described well by the standard relativistic thin disk
theory as originally developed by \citet{nt73}, hereafter NT.
In parallel with the above work, it has for long been recognized
that the magnetic field could be a complicating factor
that may significantly modify accretion dynamics near and inside the ISCO \citep{thorne74}.
This issue has become increasingly important with the realization
that angular momentum transport in disks is
entirely due to turbulence generated
via the magnetorotational instability (MRI) \citep{bal91,bh98}.
However, the magnetic field does not necessarily behave
like a local viscous hydrodynamical stress. Near the black hole,
the magnetic field may have large-scale structures
\citep{macdonald84}, which can induce stresses across the
ISCO \citep{krolik99,gammie99,ak00} leading to changes in, e.g.,
the density, velocity, and amount of dissipation and emission.
Unlike turbulence, the magnetic field can transport angular momentum without
dissipation (e.g. \citealt{li02}), or it can dissipate in current
sheets without transporting angular momentum. In \citet{ak00}, the
additional electromagnetic stresses are treated simply as a freely tunable
model parameter on top of an otherwise hydrodynamical model. A more
complete magnetohydrodynamical (MHD) model of a magnetized thin disk
has been developed by \citet{gammie99}.
In this model, the controlling free parameter is the
specific magnetic flux, i.e., magnetic flux per unit rest-mass flux.
Larger values of this parameter lead to larger deviations from NT
due to electromagnetic stresses, but the exact
value of the parameter for a given accretion disk is unknown.
For instance, it is entirely possible that electromagnetic stresses
become negligible in the limit when the disk thickness $|h/r|\to 0$.
The value of the specific magnetic flux
is determined by the nonlinear turbulent saturation of the magnetic field,
so accretion disk simulations are the best way to establish its magnitude.
The coupling via the magnetic field between a spinning black hole and
an external disk, or between the hole and the corona, wind and jet
(hereafter, corona-wind-jet),
might also play an important role
in modifying the accretion flow near the black hole.
The wind or jet (hereafter, wind-jet)
can transport angular momentum and energy away from the
accretion disk and black hole
\citep{blandford_accretion_disk_electrodynamics_1976,bz77,mg04,mck06jf,mck07b,km07}.
The wind-jet power depends upon factors such as
the black hole spin \citep{mck05,hk06,kom07},
disk thickness \citep{meier01,tmn08,tmn09,tnm09,tnm09b},
and the strength and large-scale behavior of the magnetic field
\citep{mg04,bhk08,mb09}, and these can affect the angular
momentum transport through an accretion disk. In this context, we
note that understanding how such factors affect disk structure may be
key in interpreting the distinct states and complex behaviors observed
for black hole X-ray binaries \citep{fend04a,remm06}.
These factors also affect the black hole spin history
\citep{gammie_bh_spin_evolution_2004},
and so must be taken into account when considering
the effect of accretion on the cosmological evolution of black hole spin
\citep{Hughes:2003:BHM,gammie_bh_spin_evolution_2004,bv08}.
Global simulations of accretion disks using general relativistic
magnetohydrodynamical (GRMHD) codes (e.g. \citealt{gam03,dev03})
currently provide
the most complete understanding of how turbulent magnetized accretion
flows around black holes work. Most simulations have studied
thick ($|h/r|\gtrsim 0.15$) disks without radiative cooling.
Such global simulations of the inner accretion flow
have shown that fluid crosses the ISCO without any clear evidence that
the torque vanishes at the ISCO, i.e., there is no
apparent ``stress edge''
\citep{mg04,kro05,beckwith08b}. Similar results were previously found with
a pseudo-Newtonian potential for the black hole \citep{kh02}. In
these studies, a plot of the radial profile of the normalized
stress within the ISCO appears to indicate a significant deviation
from the NT thin disk theory \citep{kro05,beckwith08b}, and it was thus
expected that much thinner disks might also deviate significantly from NT.
A complicating factor in drawing firm conclusions from
such studies is that the assumed initial
global magnetic field geometry and strength can significantly
change the magnitude of electromagnetic stresses
and associated angular momentum transport inside the ISCO \citep{mg04,bhk08}.
The implications of the above studies for truly thin
disks ($|h/r|\lesssim 0.1$) remain uncertain. Thin disks are difficult
to resolve numerically, and simulations have been attempted only
recently. Simulations of thin disks using a
pseudo-Newtonian potential for the black hole reveal good
agreement with standard thin disk theory \citep{rf08}. The first
simulation of a thin ($|h/r|\approx 0.05$) disk using a full GRMHD model
was by \citet{shafee08}, hereafter S08. They
considered a non-spinning ($a/M=0$) black hole
and an initial field geometry consisting of multiple opposite-polarity
bundles of poloidal loops
within the disk.
They found that, although the stress profile appears to indicate
significant torques inside the ISCO, the actual angular momentum flux
per unit rest-mass flux through the disk component deviates from the
NT prediction by only $2\%$, corresponding to an estimated deviation
in the luminosity of only about $4\%$.
The study by S08 was complemented by
\citet{noble09}, hereafter N09, who considered a thin ($|h/r|\approx 0.1$)
disk around an $a/M=0.9$ black hole and an initial field geometry
consisting of a single highly-elongated poloidal loop bundle
whose field lines follow the density contours of the thin disk.
They found $6\%$ more luminosity than predicted by NT. More
recently, \citet{nkh10}, hereafter N10, considered
a thin ($|h/r|\approx 0.07$) disk around a non-spinning ($a/M=0$)
black hole and reported up to $10\%$
deviations from NT in the specific angular momentum accreted
through the accretion flow.
In this paper, we extend the work of S08 by considering a range of
black hole spins, disk thicknesses, field geometries,
box sizes, numerical resolutions, etc.
Our primary result is that we confirm S08, viz., geometrically thin disks
are well-described by the NT model. We show that there are important
differences between the dynamics of the gas in the disk and in
the corona-wind-jet.
In addition, we find that the torque and luminosity within the ISCO
can be significantly affected by the geometry and strength of the
initial magnetic field, a result that should be considered when
comparing simulation results to thin disk theory. In this context, we
discuss likely reasons for the apparently different conclusions
reached by N09 and N10.
The equations we solve are given in \S\ref{sec:goveqns}, diagnostics
are described in \S\ref{sec:diagnostics}, and our numerical setup is
described in \S\ref{sec:nummodels}. Results for our fiducial thin
disk model for a non-rotating black hole are given in
\S\ref{sec:fiducialnonrot}, and we summarize convergence studies in
\S\ref{sec:convergence}. Results for a variety of
black hole spins and disk thicknesses are presented in
\S\ref{sec:thicknessandspin} and for thin disks with different
magnetic field geometries and strengths in \S\ref{sec:magneticfield}.
We compare our results with previous studies in
\S\ref{sec:comparison}, discuss the implications of our results in
\S\ref{sec:discussion}, and conclude with a summary of the salient
points in \S\ref{sec:conclusions}.
\section{Physical Models and Numerical Methods}
\label{sec:nummodels}
This section describes our physical models and numerical methods.
Table~\ref{tbl_models} provides a list of all our simulations
and shows the physical and numerical parameters that we vary.
Our primary models are labeled by names of the form AxHRy, where x is
the value of the black hole spin parameter and y is approximately
equal to the disk thickness $|h/r|$. For instance, our fiducial model
A0HR07 has a non-spinning black hole ($a/M=0$) and a geometrically
thin disk with $|h/r| \sim 0.07$. We discuss this particular model in
detail in \S\ref{sec:fiducialnonrot}. Table~\ref{tbl_models} also
shows the time span (from $T_i/M$ to $T_f/M$) used to perform the
time-averaging, and the last column shows the actual value of $|h/r|$
in the simulated model as measured during inflow equilibrium, e.g.,
$|h/r|= 0.064$ for model A0HR07.
\subsection{Physical Models}
\label{sec:modelsetup}
This study considers black hole accretion disk systems
with a range of black hole spins: $a/M=0, ~0.7, ~0.9, ~0.98$,
and a range of disk thicknesses: $|h/r|=0.07, ~0.13, ~0.2, ~0.3$.
The initial mass distribution is an isentropic equilibrium torus \citep{chak85a,chak85b,dev03}.
All models have an initial inner torus edge at $r_{\rm in}=20M$,
while the torus pressure maximum for each model
is located between $R_{\rm max}=35M$ and $R_{\rm max}=65M$.
We chose this relatively large radius for the initial torus
because S08 found that placing the torus at smaller radii caused the
results to be sensitive to the initial mass distribution.
We initialize the solution so that $\rho_0=1$ is the maximum rest-mass density.
In S08, we set $q=1.65$ ($\Omega\propto r^{-q}$ in the non-relativistic limit)
and $K=0.00034$ with $\Gamma=4/3$,
while in this paper we adjust the initial angular momentum profile
such that the initial torus has the target value of $|h/r|$ at the
pressure maximum.
For models with $|h/r|=0.07, ~0.13, ~0.2, ~0.3$,
we fix the specific entropy of the torus by setting, respectively,
$K=K_0\equiv\{0.00034, ~0.0035, ~0.009, ~0.009\}$
in the initial polytropic equation of state given by $p=K_0\rho_0^\Gamma$.
The initial atmosphere surrounding the torus has
the same polytropic equation of state
with nearly the same entropy constant of $K=0.00069$,
but with an initial rest-mass density of $\rho_0=10^{-6} (r/M)^{-3/2}$,
corresponding to a Bondi-like atmosphere.
Recent GRMHD simulations of thick disks
indicate that the results for the disk (but not the wind-jet)
are roughly independent of the initial field geometry
\citep{mck07a,mck07b,bhk08}.
However, a detailed study for thin disks has yet to be performed.
We consider a range of magnetic field geometries
described by the vector potential $A_\mu$ which is related
to the Faraday tensor by $F_{\mu\nu} = A_{\nu,\mu} - A_{\mu,\nu}$.
As in S08, we consider a general multiple-loop field geometry corresponding
to $N$ separate loop bundles stacked radially within the initial disk.
The vector potential we use is given by
\begin{equation}\label{vectorpot}
A_{\phi,\rm N} \propto Q^2\sin\left(\frac{\log(r/S)}{\lambda_{\rm field}/(2\pi r)}\right) \left[1 + w({\rm ranc}-0.5)\right] ,
\end{equation}
where ${\rm ranc}$ is a random number generator for the domain $0$ to $1$
(see below for a discussion of perturbations).
All other $A_\mu$ are initially zero.
All our multi-loop and 1-loop simulations have $S=22M$, and the values of
$\lambda_{\rm field}/(2\pi r)$ are listed in Table~\ref{tbl_models}.
For multi-loop models, each additional field loop bundle has opposite polarity.
We use $Q = (u_g/u_{g,\rm max} - 0.2) (r/M)^{3/4}$,
where $u_{g,\rm max}$ is the maximum internal energy density in the torus,
and set $Q=0$ if either $r<S$ or $Q<0$.
By comparison, in S08, we set $S = 1.1 r_{\rm in}$, $r_{\rm in}=20M$,
$\lambda_{\rm field}/(2\pi r)=0.16$,
such that there were two loops centered at $r=28M$ and $38M$.
The intention of introducing multiple loop bundles
is to keep the aspect ratio of the bundles roughly 1:1 in the poloidal plane,
rather than loop(s) that are highly elongated in the radial direction.
For each disk thickness, we tune $\lambda_{\rm field}/(2\pi r)$
in order to obtain initial poloidal loops that are roughly isotropic.
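As a concrete illustration, the multi-loop potential of equation~(\ref{vectorpot}) can be sketched in Python. The function name and array handling are illustrative assumptions, radii are in units of $M$, and the default parameter values ($S=22M$, $\lambda_{\rm field}/(2\pi r)=0.16$, $w=0.02$) follow those quoted in the text:

```python
import numpy as np

def A_phi_multiloop(r, u_g, u_g_max, S=22.0, lam_over_2pir=0.16, w=0.02, rng=None):
    """Sketch of the multi-loop initial vector potential A_phi.

    r is radius in units of M; u_g is the gas internal energy density.
    The wavelength parameter lam_over_2pir and the 2% random perturbation
    amplitude w follow the text; the function name is an assumption.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Envelope Q confines the loops to the dense part of the torus; Q=0 outside.
    Q = (u_g / u_g_max - 0.2) * r**0.75
    Q = np.where((r < S) | (Q < 0.0), 0.0, Q)
    # Oscillation in log(r) alternates the loop polarity every half period.
    phase = np.log(r / S) / lam_over_2pir
    ranc = rng.random(np.shape(r))  # uniform random numbers on [0, 1)
    return Q**2 * np.sin(phase) * (1.0 + w * (ranc - 0.5))
```

Because the sign of $\sin$ flips each half period in $\log r$, each successive loop bundle automatically has opposite polarity, as described above.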
As in S08, the magnetic field strength is set such that
the plasma $\beta$ parameter satisfies
$\beta_{\rm max}\equiv p_{g,\rm max}/p_{b,\rm max}=100$,
where $p_{g,\rm max}$ is the maximum thermal pressure
and $p_{b,\rm max}$
is the maximum magnetic pressure in the entire torus.
Since the two maxima never occur at the same location,
$\beta=p_g/p_b$ varies over a wide range of values within the disk.
This approach is similar to how the magnetic field was
normalized in other studies \citep{gam03,mg04,mck06jf,mck07b,km07}.
It ensures that the magnetic field is weak throughout the disk.
Care must be taken with how one normalizes any given initial magnetic field geometry.
For example, for the 1-loop field geometry used by \citet{mg04},
if one initializes the field with a {\it mean}
(volume-averaged) $\bar{\beta}=100$,
then the inner edge of the initial torus has $\beta\sim 1$
and the initial disk is not weakly magnetized.
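The maxima-based normalization can be sketched as follows. The function name and array shapes are assumptions, and the magnetic pressure is taken as $p_b=b^2/2$:

```python
import numpy as np

def normalize_field(B, p_g, beta_max=100.0):
    """Rescale a seed field so that p_{g,max}/p_{b,max} = beta_max.

    B has shape (3, ...) over the grid and p_g matches its trailing shape
    (shapes are assumptions).  The two maxima need not coincide spatially,
    so the local beta = p_g/p_b still spans a wide range afterwards.
    """
    p_b = 0.5 * np.sum(B * B, axis=0)      # magnetic pressure p_b = b^2/2
    ratio = p_g.max() / p_b.max()          # current p_{g,max}/p_{b,max}
    # Scaling B by sqrt(ratio/beta_max) scales p_b by ratio/beta_max,
    # so the new p_{b,max} equals p_{g,max}/beta_max.
    return B * np.sqrt(ratio / beta_max)
```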
For most models, the vector potential at all grid points
was randomly perturbed by $2\%$ ($w$ in equation~\ref{vectorpot})
and the internal energy density at all grid points
was randomly perturbed by $10\%$\footnote{In S08,
we reported perturbing the field by $50\%$; that was a typo,
and the field was actually perturbed the same as in these models, i.e.,
$2\%$ vector potential perturbations and $10\%$ internal energy perturbations.}.
If the simulation starts with perturbations of the vector potential,
then we compute $\Phi_{\rm tot}$ (used to obtain $\tilde{\Phi}_r$)
using the pre-perturbed magnetic flux
in order to gauge how much flux is dissipated due to the perturbations.
Perturbations should be large enough to excite the non-axisymmetric MRI
in order to avoid the axisymmetric channel solution,
while they should not be so large as to induce significant dissipation
of the magnetic energy due to grid-scale magnetic dissipation just after the evolution begins.
For some models, we studied different amplitudes for the initial perturbation in order to ensure
that the amplitude does not significantly affect our results.
For a model with $|h/r|\sim 0.07$, $a/M=0$, and a single polarity field loop,
one simulation was initialized with $2\%$ vector potential perturbations
and $10\%$ internal energy perturbations,
while another otherwise similar simulation was given no seed perturbations.
Both become turbulent at about the same time $t\sim 1500M$.
The magnetic field energy at that time is negligibly different,
and there is no evidence for significant differences in any quantities during inflow equilibrium.
\input{table_models.tex}
\subsection{Numerical Methods}
\label{sec:nummethods}
We perform simulations using the GRMHD code HARM
that is based upon a conservative shock-capturing Godunov scheme.
One key feature of our code is that
we use horizon-penetrating Kerr-Schild coordinates
for the Kerr metric \citep{gam03,mg04,mck06ffcode,nob06,mm07,tch_wham07},
which avoids any issues with the coordinate singularity in Boyer-Lindquist coordinates.
Even with Kerr-Schild coordinates, one must ensure
that the inner-radial boundary of the computational domain
is outside the so-called inner horizon (at $r/M\equiv 1-\sqrt{1-(a/M)^2}$)
so that the equations remain hyperbolic,
and one must ensure that there are plenty of grid cells
spanning the region near the horizon in order to avoid
numerical diffusion out of the horizon.
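For reference, both horizon radii follow directly from the Kerr metric, $r_\pm/M = 1 \pm \sqrt{1-(a/M)^2}$; a minimal sketch in geometrized units ($G=c=1$; the function name is an assumption):

```python
import math

def kerr_horizons(a, M=1.0):
    """Outer and inner horizon radii of a Kerr black hole.

    The inner-radial grid boundary must lie between r_minus and r_plus so
    that it sits inside the event horizon but outside the inner horizon,
    keeping the evolved equations hyperbolic on the grid.
    """
    root = math.sqrt(M**2 - a**2)
    return M + root, M - root  # (r_plus, r_minus)
```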
Another key feature of our code is the use of a 3rd order accurate (4th order error)
PPM scheme for the interpolation of primitive quantities
(i.e. rest-mass density, 4-velocity relative to a ZAMO observer, and
lab-frame 3-magnetic field) \citep{mck06jf}.
The interpolation is similar to that described in \citet{cw84},
but we modified it to be consistent with interpolating through point values of primitives
rather than average values.
We do not use the PPM steepener, but we do use the PPM flattener
that only activates in strong shocks
(e.g. in the initial bow shock off the torus surface, but rarely elsewhere).
The PPM scheme attempts to fit a monotonic 3rd order polynomial directly through the grid
face where the dissipative flux enters in the Godunov scheme.
Only if the polynomial is non-monotonic does the interpolation reduce order
and create discontinuities at the cell face, and so only then
does it introduce dissipative fluxes.
It therefore leads to extremely small dissipation
compared to the original schemes used in HARM,
such as the 1st order accurate (2nd order error)
minmod or monotonized central (MC) limiter type schemes that always
create discontinuities (and so dissipative fluxes) at the cell face regardless
of the monotonicity for any primitive quantity that is not linear in space.
Simulations of fully three-dimensional models of accreting black holes producing jets
using our 3D GRMHD code show that this PPM scheme
leads to an improvement in effective resolution by at least factors of roughly two
per dimension as compared to the original
HARM MC limiter scheme for models with resolution $256\times 128\times 32$ \citep{mb09}.
The PPM method is particularly well-suited for resolving turbulent flows since they
rarely have strong discontinuities
and have most of the turbulent power in long wavelength modes.
Even moving discontinuities are much more accurately resolved by PPM than minmod or MC.
For example, even without a steepener, a simple moving contact
or moving magnetic rotational discontinuity
is sharply resolved within about 4 cells using the PPM scheme as
compared to being diffusively resolved within about 8-15
cells by the MC limiter scheme.
A 2nd order Runge-Kutta method-of-lines scheme is used to step forward in time,
and the timestep is set by using the fast magnetosonic wavespeed
with a Courant factor of $0.8$.
We found that a 4th order Runge-Kutta scheme does not significantly improve accuracy,
since most of the flow is far beyond the grid cells
inside the horizon that determine the timestep.
The standard HARM HLL scheme is used for the dissipative fluxes,
and the standard HARM T\'oth scheme is used for the magnetic field evolution.
\subsection{Numerical Model Setup}
\label{sec:numsetup}
The code uses uniform internal coordinates $(t,x^{(1)},x^{(2)},x^{(3)})$
mapped to the physical coordinates $(t,r,\theta,\phi)$.
The radial grid mapping is
\begin{equation}
r(x^{(1)}) = R_0 + \exp{(x^{(1)})} ,
\end{equation}
which spans from $R_{\rm in}$ to $R_{\rm out}$.
The parameter $R_0=0.3M$ controls the resolution near the horizon.
Absorbing (outflow, no inflow allowed) boundary conditions are used.
The $\theta$-grid mapping is
\begin{equation}
\theta(x^{(2)}) = [Y(2{x^{(2)}}-1) + (1-Y)(2{x^{(2)}}-1)^7 +1](\pi/2) ,
\end{equation}
where $x^{(2)}$ ranges from $0$ to $1$ (i.e. no cut-out at the poles)
and $Y$ is an adjustable parameter that can be used to concentrate grid
zones toward the equator as $Y$ is decreased from $1$ to $0$.
Roughly half of the $\theta$ resolution is concentrated
in the disk region within $\pm 2|h/r|$ of the midplane.
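The two coordinate maps can be written down directly. The function names are assumptions; $R_0=0.3M$ and $Y=0.15$ follow the values quoted in the text:

```python
import numpy as np

# Sketch of the internal-to-physical coordinate maps used by the grid.
R0 = 0.3   # controls radial-zone concentration near the horizon (units of M)
Y = 0.15   # smaller Y concentrates theta zones toward the equator

def r_of_x1(x1):
    """Exponential radial map: uniform x1 -> logarithmically spaced r."""
    return R0 + np.exp(x1)

def theta_of_x2(x2):
    """Equator-concentrating theta map; x2 runs from 0 to 1 (no polar cut-out)."""
    return (Y * (2 * x2 - 1) + (1 - Y) * (2 * x2 - 1)**7 + 1) * (np.pi / 2)
```

The odd seventh-power term leaves the endpoints fixed at $\theta=0,\pi$ and the midpoint at $\theta=\pi/2$, while flattening the map (i.e. packing cells) near the equator as $Y\to 0$.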
The HR07 and HR2 models listed in Table~\ref{tbl_models} have $11$ cells per $|h/r|$,
while the HR1 and HR3 models have $7$ cells per $|h/r|$.
The high resolution run, C6, has $22$ cells per $|h/r|$,
while the low resolution model, C5, has $5$ cells per $|h/r|$.
For $Y=0.15$ this grid gives roughly $6$ times more angular resolution
than the grid used in \citet{mg04} (their equation~8 with $h=0.3$).
Reflecting boundary conditions are used at the polar axes.
The $\phi$-grid mapping is given by $\phi(x^{(3)}) = 2\pi x^{(3)}$,
such that $x^{(3)}$ varies from $0$ to $1/8,1/4,3/8,1/2$
for boxes with $\Delta\phi = \pi/4,\pi/2,3\pi/4,\pi$, respectively.
Periodic boundary conditions are used in the $\phi$-direction.
In all cases, the spatial integrals are renormalized to refer to the full $2\pi$ range in $\phi$,
even if our computational box size is limited in the $\phi$-direction.
We consider various $\Delta\phi$ in order to check whether this changes our results.
Previous GRMHD simulations with the full $\Delta\phi=2\pi$ extent
suggest that $\Delta\phi=\pi/2$ is sufficient since coherent
structures only extend for about one radian (see Fig.~12 in \citealt{skh06}).
However, in other GRMHD studies with $\Delta\phi=2\pi$,
the $m=1$ mode was found to be dominant,
so this requires further consideration \citep{mb09}.
Note that S08 used $\Delta\phi=\pi/4$, while both N09 and N10 used $\Delta\phi=\pi/2$.
The duration of our simulations with the thinnest disks
varies from approximately $20000M$ to $30000M$
in order to reach inflow equilibrium
and to minimize fluctuations in time-averaged quantities.
We ensure that each simulation runs for a couple of viscous times
in order to reach inflow equilibrium over a reasonable range of radius.
Note that the simulations cannot be run for a duration
longer than $t_{\rm acc}\sim M_{\rm disk}(t=0)/\dot{M}\sim 10^5M$,
corresponding to the time-scale for accreting
a significant fraction of the initial torus. We are always well
below this limit.
Given finite computational resources,
there is a competition between duration and resolution of a simulation.
Our simulations run for relatively long durations,
and we use a numerical resolution of $N_r\times N_\theta \times N_\phi
=256\times 64\times 32$ for all models (except those used for convergence testing).
In S08 we found this resolution to be sufficient to obtain convergence compared
to a similar $512\times 128\times 32$ model with $\Delta\phi=\pi/4$.
In this paper, we explicitly confirm that our resolution is sufficient
by convergence testing our results (see section~\ref{sec:convergence}).
Near the equatorial plane at the ISCO,
the grid aspect ratio in $dr:r d\theta:r\sin\theta d\phi$
is 2:1:7, 1:1:4, 1:1:3, and 1:1:3, respectively, for
our HR07, HR1, HR2, and HR3 models.
The 2:1:7 grid aspect ratio for the HR07
model was found to be sufficient in S08.
A grid aspect ratio of 1:1:1 would be preferable
in order to ensure the dissipation is isotropic in Cartesian coordinates,
since in Nature one would not expect highly anisotropic dissipation
on the scale resolved by our grid cells.
However, finite computational resources require a balance
between a minimum required resolution, grid aspect ratio,
and duration of the simulation.
As described below, we ensure that the MRI is resolved in each simulation
both as a function of space and as a function of time by measuring
the number of grid cells per fastest growing MRI mode:
\begin{equation}
Q_{\rm MRI} \equiv \frac{\lambda_{\rm MRI}}{\Delta_{\hat{\theta}}} \approx 2\pi \frac{|v^{\hat{\theta}}_{\rm A}|/|\Omega(r,\theta)|}{\Delta_{\hat{\theta}}},
\end{equation}
where $\Delta_{\hat{\theta}} \equiv |{e^{\hat{\theta}}}_{\mu} dx^\mu|$
is the comoving orthonormal $\theta$-directed grid cell length,
${e^{\hat{\nu}}}_{\mu}$ is the contravariant tetrad system in the local fluid-frame,
$|v^{\hat{\theta}}_{\rm A}|=\sqrt{b_{\hat{\theta}} b^{\hat{\theta}}/(b^2 + \rho_0 + u_g + p_g)}$
is the~Alfv\'en speed,
$b^{\hat{\theta}} \equiv {e^{\hat{\theta}}}_{\mu} b^\mu$
is the comoving orthonormal $\theta$-directed 4-field,
and $|\Omega(r,\theta)|$ is the temporally and azimuthally averaged absolute value
of the orbital frequency.
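A pointwise sketch of this quality factor follows; the function name and argument conventions are assumptions, and all inputs are the comoving orthonormal-frame quantities defined above:

```python
import numpy as np

def q_mri(b_th, rho0, u_g, p_g, b_sq, omega, dx_th):
    """Grid cells per fastest-growing MRI wavelength in the theta direction.

    b_th is the comoving orthonormal theta component of the 4-field, b_sq
    the field strength squared, omega the (time- and phi-averaged) orbital
    frequency, and dx_th the comoving theta grid-cell length.
    """
    # Alfven speed from the theta field component and the total inertia.
    v_alfven_th = np.sqrt(b_th**2 / (b_sq + rho0 + u_g + p_g))
    lam_mri = 2.0 * np.pi * v_alfven_th / np.abs(omega)  # fastest-growing mode
    return lam_mri / dx_th
```

Note the $p_g$ term here comes from using the total enthalpy-like inertia $b^2+\rho_0+u_g+p_g$ in the Alfv\'en speed, matching the expression in the text.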
During the simulation,
the rest-mass density and internal energy densities
can become quite low beyond the corona,
but the code only remains accurate and stable
for a finite value of $b^2/\rho_0$, $b^2/u_g$, and $u_g/\rho_0$ for any given resolution.
We enforce $b^2/\rho_0\lesssim 10^4$, $b^2/u_g\lesssim 10^4$, and $u_g/\rho_0\lesssim 10^4$
by injecting a sufficient amount of mass or internal energy
into a fixed zero angular momentum observer (ZAMO)
frame with 4-velocity $u_\mu=\{-\alpha,0,0,0\}$,
where $\alpha=1/\sqrt{-g^{tt}}$ is the lapse.
In some simulations, we have to use stronger limits
given by $b^2/\rho_0\lesssim 10$, $b^2/u_g\lesssim 10^2$, and $u_g/\rho_0\lesssim 10$,
in order to maintain stability and accuracy.
Compared to our older method of injecting mass-energy into the comoving frame,
the new method avoids run-away injection of energy-momentum in the low-density regions.
We have confirmed that this procedure of injecting mass-energy
does not contaminate our results for the accretion rates and other diagnostics.
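The floor logic itself can be sketched as below. The function name is an assumption, and for simplicity the sketch raises $\rho_0$ and $u_g$ in place rather than modeling the ZAMO-frame mass-energy injection:

```python
import numpy as np

def apply_floors(rho0, u_g, b_sq, cap=1e4):
    """Sketch of the density/internal-energy floors described in the text.

    Wherever b^2/rho0, b^2/u_g, or u_g/rho0 exceeds `cap`, enough mass or
    internal energy is added to bring the ratio back down to the cap.
    """
    rho0 = np.maximum(rho0, b_sq / cap)   # enforce b^2/rho0 <= cap
    u_g = np.maximum(u_g, b_sq / cap)     # enforce b^2/u_g  <= cap
    rho0 = np.maximum(rho0, u_g / cap)    # enforce u_g/rho0 <= cap
    return rho0, u_g
```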
\section{Dependence on Black Hole Spin and Disk Thickness}
\label{sec:thicknessandspin}
In addition to the fiducial model and the convergence runs described
in previous sections, we have run a number of other simulations to
explore the effect of the black hole spin parameter $a/M$ and the disk
thickness $|h/r|$ on our various diagnostics: $\jmath$, $\jmath_{\rm
in}$, ${\tilde{e}}$, the luminosity, and $\Upsilon$.
We consider four values of the black hole
spin parameter, viz., $a/M=0, ~0.7, ~0.9, ~0.98$, and four disk
thicknesses, viz., $|h/r|=0.07, ~0.1, ~0.2, ~0.3$. We summarize
here our results for this $4\times4$ grid of models.
Geometrically thick disks are expected on quite general grounds to
deviate from the standard thin disk model. The inner edge of the
disk, as measured for instance by the location of the sonic point, is
expected to deviate from the ISCO, the shift scaling roughly as
$|r_{\rm in}-r_{\rm ISCO}| \propto (c_{\rm s}/v_{\rm K})^2$, where $c_{\rm s}$ is the
sound speed, defined by $c_{\rm s}^2=\Gamma p_g/(\rho_0 + u_g + p_g)$. This
effect is seen in hydrodynamic models of thick disks, e.g.
\citet{nkh97} and \citet{abr10}, where it is shown that $r_{\rm in}$
can move either inside or outside the ISCO; it moves inside when $\alpha$
is small and outside when $\alpha$ is large. In either case, these
hydrodynamic models clearly show that, as $|h/r|\to 0$, i.e., as
$c_{\rm s}/v_{\rm K}\to 0$, the solution always tends toward the NT model \citep{snm08}.
While the hydrodynamic studies mentioned above have driven much of our
intuition on the behavior of thick and thin disks, it is an open
question whether or not the magnetic field plays a significant role.
In principle, magnetic effects may cause
the solution to deviate significantly from the NT model even in
the limit $|h/r|\to 0$ \citep{krolik99,gammie99}. One of the major
goals of the present paper is to investigate this question. We show
in this section that, as $|h/r|\to 0$, magnetized disks do tend toward
the NT model. This statement appears to be true for a range of black
hole spins. We also show that the specific magnetic flux $\Upsilon$
inside the ISCO decreases with decreasing $|h/r|$ and remains quite
small. This explains why the magnetic field does not cause significant
deviations from NT in thin disks.
Figure~\ref{jspecvsspin} shows the specific angular momentum, $\jmath$,
and the ingoing component of this quantity,
$\jmath_{\rm in}$, vs. radius for the $4\times 4$ grid of models.
The $\theta$ integral has been taken over $\pm 2|h/r|$ around the midplane
in order to focus on the equatorial disk properties.
The value of $\jmath$ is roughly constant out to a radius
well outside the ISCO,
indicating that we have converged solutions in inflow equilibrium
extending over a useful range of radius.
As discussed in section~\ref{sec_infloweq},
inflow equilibrium is expected within $r/M=9,~7,~5.5,~5$,
respectively, for $a/M=0,~0.7,~0.9,~0.98$.
This is roughly consistent with the radius out to which
the quantity $\jmath$ (integrated over all angles) is constant,
which is why in all such plots we only show $\jmath$ over the
region where the flow is in inflow equilibrium.
The four panels in Figure~\ref{jspecvsspin}
show a clear trend, viz., deviations from NT are larger
for thicker disks, as expected.
Interestingly, for higher black hole spins,
the relative deviations from NT actually decrease.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f17.eps}
\caption{The net accreted specific angular momentum, $\jmath$ (the nearly horizontal
dotted lines),
and the ingoing component of this quantity, $\jmath_{\rm in}$ (the sloping curved lines),
as a function of radius for the $4\times4$ grid of models.
Each panel corresponds to a single black hole spin, $a/M=0, ~0.7, ~0.9$, or
$0.98$, and shows models
with four disk thicknesses, $|h/r|\approx 0.07, ~0.1, ~0.2, ~0.3$ (see legend).
The $\theta$ integral has been taken over $\pm 2|h/r|$ around the midplane.
In each panel, the thin dashed black line, marked by two circles
which indicate the location of the horizon and the ISCO,
shows the NT solution for $\jmath_{\rm in}$.
As expected, we see that thicker disks exhibit larger deviations from NT.
However, as a function of spin, there is no indication that deviations from NT
become any larger for larger spins. In the case of the thinnest models with $|h/r| \approx 0.07$,
the NT model works well for gas close to
the midplane for all spins.
}
\label{jspecvsspin}
\end{figure}
Figure~\ref{effvsspin} shows the nominal efficiency, ${\tilde{e}}$, as a function
of radius for the $4\times 4$ grid of models.
Our thickest disk models ($|h/r|\approx 0.3$) do not include
cooling, so the efficiency shown is only due to losses by a wind-jet.
We see that the efficiency is fairly close to the NT
value for all four thin disk simulations with $|h/r|\sim0.07$; even in the worst case,
viz., $a/M=0.98$, the
deviation from NT is only $\sim5\%$.
In the case of thicker disks, the efficiency shows larger deviations
from NT and the profile as a function of radius also looks different.
For models with $|h/r|\approx0.3$, there is no cooling, so large deviations are
expected.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f18.eps}
\caption{Similar to Figure~\ref{jspecvsspin}, but for the nominal efficiency, ${\tilde{e}}$.
For thin ($|h/r|\lesssim 0.1$) disks,
the results are close to NT for all black hole spins.
As expected, the thicker models deviate significantly from NT.
In part this is because the ad hoc cooling function
we use in the simulations
is less accurate for thick disks,
and in part because the models with $|h/r|\approx 0.3$ have no cooling
and start with marginally bound/unbound gas that implies ${\tilde{e}}\sim 0$.
The $a/M=0.98$ models show erratic behavior at large radii
where the flow has not achieved inflow equilibrium.
}
\label{effvsspin}
\end{figure}
Figure~\ref{lumvsspin} shows the luminosity, $L(<r)/\dot{M}$,
vs. radius for our $4\times 4$ grid of models, focusing just on the
region that has reached inflow equilibrium.
The luminosity is estimated by integrating over all $\theta$ angles.
Our thickest disk models ($|h/r|\approx 0.3$) do not include
cooling and so are not plotted.
The various panels show that,
as $|h/r|\to 0$, the luminosity becomes progressively closer to the NT
result in the steady state region of the flow near and inside the ISCO.
Thus, once again, we conclude that the NT luminosity profile is valid for
geometrically thin disks even when the accreting gas is magnetized.
\begin{figure}
\centering
\includegraphics[width=3.3in,clip]{f19.eps}
\caption{Similar to Figure~\ref{jspecvsspin}, but for the normalized
luminosity, $L(<r)/\dot{M}$.
For thin disks, the luminosity deviates only slightly from
NT near and inside the ISCO. There is no strong evidence for any
dependence on the black hole spin.
The region at large radii has not reached
inflow equilibrium and is not shown.
}
\label{lumvsspin}
\end{figure}
A figure (not shown) that is similar to Figure~\ref{jspecvsspin} but
for the specific magnetic flux
indicates that $\Upsilon\le 1$ within $\pm 2|h/r|$
near the ISCO for all black hole spins and disk thicknesses.
For our thinnest models, $\Upsilon\le 0.45$,
for which the model of \citet{gammie99}
predicts that the specific angular momentum will deviate from NT
by less than $1.9\%, ~3.0\%, ~3.8\%, ~4.2\%$ for black hole spins
$a/M=0, ~0.7, ~0.9, ~0.98$, respectively (see Appendix~\ref{sec_gammie}).
The numerical results from the simulations
show deviations from NT that are similar to these values.
Thus, overall, our results indicate that
electromagnetic stresses are weak inside the ISCO for geometrically thin disks.
Finally, for all models
we look at plots (not shown) of $M(<r)$ (mass enclosed within radius),
$\dot{M}(r)$ (total mass accretion rate vs. radius),
and $[h/r](r)$ (disk scale-height vs. radius).
We find that these are consistently flat to the same degree
and to the same radius as the quantity $\jmath(r)$ is constant as shown
in Figure~\ref{jspecvsspin}.
This further indicates that our models are in inflow equilibrium
out to the expected radius.
\subsection{Scaling Laws vs. $a/M$ and $|h/r|$}\label{scalinglaws}
\input{table_thickspin.tex}
We now consider how the magnitude of $\jmath$, ${\tilde{e}}$, $L(<r_{\rm ISCO})$,
and $\Upsilon$ scale with disk thickness and black hole spin.
Table~\ref{tbl_thickspin} lists numerical results corresponding to
$\theta$ integrations
over $\pm 2|h/r|$ around the midplane and over all angles\footnote{Some
thicker disk models without cooling show small or slightly negative efficiencies, ${\tilde{e}}$,
which signifies the accretion of weakly unbound gas.
This can occur when a magnetic field is inserted
into a weakly bound gas in hydrostatic equilibrium.}.
Figure~\ref{jandeffandlumscale} shows selected results corresponding to
models with a non-rotating black hole for quantities integrated
over $\pm 2|h/r|$.
We see that the deviations of various diagnostics
from the NT values scale roughly as $|h/r|$. In general, the deviations are quite small for
the thinnest model with $|h/r|\approx 0.07$.
\begin{figure}
\begin{center}
\includegraphics[width=3.3in,clip]{f20.eps}
\end{center}
\caption{The relative difference between $\jmath$ in the simulation
and in the NT model (top panel), the relative difference between the
nominal efficiency, ${\tilde{e}}$, and the NT value (middle panel),
and the luminosity inside the ISCO normalized by the
net radiative efficiency of the NT model,
$\tilde{L}_{\rm in}$ (bottom panel),
where ${\tilde{e}}[{\rm NT}]$ has been evaluated at the horizon (equivalently at the ISCO).
There is a rough linear dependence on $|h/r|$ for all quantities,
where a linear fit is shown as a dotted line in each panel.
Note that the thicker disk models are not expected to behave like NT,
and actually have $\jmath$ roughly similar across all spins.
For $|h/r|\approx 0.07$, the excess luminosity from within the ISCO
is less than $4\%$ of the total NT efficiency.
}
\label{jandeffandlumscale}
\end{figure}
Next, we consider fits of our simulation data as a function
of black hole spin and disk thickness to reveal whether, and to what extent,
these two parameters control how much the flow deviates from NT.
In some cases we directly fit the simulation results instead of their deviations from NT,
since for thick disks the actual measurement values can saturate independently of thickness,
leading to large non-linear deviations from NT.
Before making the fits, we ask how quantities might scale with $a/M$ and $|h/r|$.
With no disk present, the rotational symmetry forces any scaling
to be an even power of black hole spin \citep{mg04}.
However, the presence of a rotating disk breaks this symmetry, and any accretion flow
properties, such as deviations from NT's model,
could depend linearly upon $a/M$ (at least for small spins).
This motivates performing a linear fit in $a/M$.
Similarly, the thickness relates to a dimensionless speed: $c_{\rm s}/v_{\rm K}\sim |h/r|$,
while there are several different speeds in the accretion problem
that could force quantities to have an arbitrary dependence on $|h/r|$.
Although, in principle, deviations might scale as some power of $|h/r|$, we
assume here a linear scaling $\propto |h/r|$. This choice is driven partly by
simplicity and
partly by Figure~\ref{jandeffandlumscale} which shows that the simulation results
agree well with this scaling.
These rough arguments motivate obtaining explicit scaling laws for a quantity's
deviations from NT as a function of $a/M$ and $|h/r|$.
For all quantities we use the full $4\times 4$ set of models,
except that for the luminosity and efficiency we exclude the two thickest models
in order to focus on the luminosity for thin disks with cooling.
We perform a linear least squares fit in both $a/M$ and $|h/r|$,
and we report the absolute percent difference between the upper $95\%$ confidence limit ($C_+$)
and the best-fit parameter value ($f$) given by $E = 100|C_+ - f|/|f|$.
Note that if $E>100\%$, then the best-fit value
is no different from zero to $95\%$ confidence (such parameter values are not reported).
After the linear fit is provided,
the value of $E$ is given for each parameter in order of appearance in the fit.
Only the statistically significant digits are shown.
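For illustration, the bookkeeping behind this uncertainty statistic can be sketched in a few lines of Python; the numerical values below are hypothetical, chosen only to demonstrate the definition.

```python
# Sketch of the fit-uncertainty statistic E = 100 |C_+ - f| / |f|,
# where C_+ is the upper 95% confidence limit on a fitted parameter
# and f is its best-fit value.  Example numbers are illustrative only.

def percent_uncertainty(c_plus, f):
    """Absolute percent difference between the upper 95% confidence
    limit and the best-fit parameter value."""
    return 100.0 * abs(c_plus - f) / abs(f)

# a hypothetical best-fit coefficient f = 0.70 with C_+ = 0.93
E = percent_uncertainty(0.93, 0.70)  # ~33%

# E > 100% would mean the parameter is consistent with zero at 95% confidence
significant = E <= 100.0
```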
First, we consider how electromagnetic stresses depend upon $a/M$ and $|h/r|$.
\citet{gammie99} has shown that the effects of electromagnetic stresses
are tied to the specific magnetic flux, $\Upsilon$,
and that for $\Upsilon\lesssim 1$
there are weak electromagnetic stresses causing only minor deviations
(less than $12\%$ for $\jmath$ across all black hole spins) from NT.
Let us consider how $\Upsilon$ should scale with $|h/r|$,
where $\Upsilon= \sqrt{2} (r/M)^2 B^r/\left(\sqrt{-(r/M)^2\rho_0 u^r}\right)$
in the equatorial plane and is assumed to be constant from the ISCO to the horizon.
For simplicity, let us study the case of a rapidly rotating black hole.
First, consider the boundary conditions near the ISCO provided by the disk,
where $c_{\rm s}/v_{\rm K}\sim |h/r|$
and the Keplerian rotation speed reaches $v_{\rm K}\sim 0.5$.
This implies $c_{\rm s}\sim 0.5|h/r|$.
Second, consider the flow that connects the ISCO and the horizon.
The gas in the disk beyond the ISCO has $\beta\sim (c_{\rm s}/v_{\rm A})^2\sim 10$,
but reaches $\beta\sim 1$ inside the ISCO
in GRMHD simulations of turbulent magnetized disks,
which gives $c_{\rm s}\sim v_{\rm A}$.
Thus, $v_{\rm A}\sim 0.5|h/r|$.
Finally, notice that $\Upsilon\sim 1.4 B^r/\sqrt{\rho_0}$
at the horizon where $u^r\sim -1$ and $r=M$.
The Keplerian rotation at the ISCO leads to a magnetic field
with orthonormal radial ($|B_r|\sim |B^r|$) and toroidal ($|B_\phi|$) components
with similar values near the ISCO and horizon,
giving $|B^r|\sim |B_r|\sim |B_\phi|\sim |b|$ and so $\Upsilon\sim 1.4 |b|/\sqrt{\rho_0}$.
Further, the~Alfv\'en 3-speed is $v_{\rm A}=|b|/\sqrt{b^2+\rho_0+u_g+p_g}\sim |b|/\sqrt{\rho_0}$
in a massive disk,
so that $\Upsilon\sim 1.4 v_{\rm A}\sim 0.7|h/r|$ for a rapidly rotating black hole.
Extending these rough arguments to all black hole spins at fixed disk thickness
also gives $\Upsilon\propto -0.8(a/M)$ for $a/M\lesssim 0.7$.
These arguments demonstrate three points:
1) $\Upsilon\gg 1$ gives $b^2/\rho_0\gg 1$, implying a force-free magnetosphere
instead of a massive accretion disk;
2) $\Upsilon\propto |h/r|$;
and 3) $\Upsilon\propto -(a/M)$.
Since the local condition for the magnetic field ejecting mass is $b^2/\rho_0\gg 1$
(see, e.g., \citealt{kombar09}),
this shows that $\Upsilon\sim 1$ defines a boundary that the disk component of the flow
cannot significantly pass beyond without eventually
incurring disruption by the strong magnetic field within the disk.
We now obtain the actual fit, which for an integration over $\pm 2|h/r|$
gives
\begin{equation}
\Upsilon\approx 0.7 + \left|\frac{h}{r}\right| - 0.6\frac{a}{M} ,
\end{equation}
with $E=33\%,~70\%,~40\%$, indicating a reasonable fit.
There is essentially $100\%$ confidence in the sign of the 1st and 3rd parameters
and $98\%$ confidence in the sign of the 2nd parameter.
This fit is consistent with our basic analytical estimate for the scaling.
Since most likely $\Upsilon\le 0.9$ in the limit that $|h/r|\to 0$ across all black hole spins,
the electromagnetic stresses are weak and cause less than $12\%$ deviation from NT in $\jmath$.
This means that the NT solution is essentially recovered for magnetized thin disks.
For an integration over all angles, $\Upsilon\approx 1$ with $E=35\%$,
and there is no statistically significant trend
with disk thickness or black hole spin.
The value of $\Upsilon\sim 1$ is consistent with
the presence of the highly-magnetized corona-wind-jet
above the disk component \citep{mg04}.
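A minimal Python sketch of evaluating this fit over the $4\times 4$ model grid (using the rounded coefficients quoted in the fit above; illustrative only):

```python
# Evaluate the fitted scaling Upsilon ~ 0.7 + |h/r| - 0.6 (a/M)
# over the 4x4 grid of disk thicknesses and black hole spins.

def upsilon_fit(hor, spin):
    return 0.7 + hor - 0.6 * spin

thicknesses = [0.07, 0.1, 0.2, 0.3]
spins = [0.0, 0.7, 0.9, 0.98]
grid = {(h, a): upsilon_fit(h, a) for h in thicknesses for a in spins}

# In the thin-disk limit |h/r| -> 0 the fit stays at or below ~0.7 for all
# spins, within the regime (Upsilon <~ 1) of weak electromagnetic stresses.
thin_limit = max(upsilon_fit(0.0, a) for a in spins)
```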
Next, we consider whether our simulations can determine the
equilibrium value of the black hole spin as a function of $|h/r|$.
The spin evolves as the black hole accretes mass, energy, and angular momentum,
and it can stop evolving when these come into a certain balance leading to $d(a/M)/dt=0$
(see equation~\ref{spinevolve}).
In spin-equilibrium, the spin-up parameter $s = \jmath - 2(a/M){\rm e}$
has $s=0$ and solving for $a$ gives the equilibrium spin $a_{\rm eq}/M=\jmath/(2{\rm e})$.
For the NT solution, $s$ is fairly linear for $a/M>0$ and $a_{\rm eq}/M=1$.
In Appendix~\ref{sec_gammie}, we note that for $\Upsilon\sim 0.2$--$1$
the deviations from NT roughly scale as $\Upsilon$.
Since $\Upsilon\propto |h/r|$, one expects $s$ to also roughly
scale with $|h/r|$. This implies that deviations from NT in the spin equilibrium
should scale as $|h/r|$. Hence, one should have $1-a_{\rm eq}/M\propto |h/r|$.
Now we obtain the actual fit.
We consider two types of fits. In one case, we fit $s$
(with fluxes integrated over all angles)
and solve $s=0$ for $a_{\rm eq}/M$. This gives
\begin{equation}
s \approx 3.2 - 2.5\left|\frac{h}{r}\right| - 2.9\frac{a}{M} ,
\end{equation}
with $E=8\%,~36\%,~8\%$, indicating quite a good fit.
There is an essentially $100\%$ confidence for the sign of all parameters,
indicating the presence of well-defined trends.
Solving the equation $s=0$ for $a/M$ shows that the spin equilibrium value, $a_{\rm eq}/M$,
is given by
\begin{equation}
1-\frac{a_{\rm eq}}{M} \approx -0.08 + 0.8\left|\frac{h}{r}\right| .
\end{equation}
In the other case, we fit $\jmath/(2{\rm e})$ and re-solve for $a_{\rm eq}/M$,
which gives directly
\begin{equation}
1-\frac{a_{\rm eq}}{M} \approx -0.10 + 0.9\left|\frac{h}{r}\right| ,
\end{equation}
with $E=9\%,~38\%$, and a $99.99\%$ confidence in the sign of the $|h/r|$ term.
Both of these procedures give a similar fit (the first fit is statistically better)
and agree within statistical errors, which indicates a linear fit is reasonable.
For either fit, one should set $a_{\rm eq}/M=1$ when the above formula gives $a_{\rm eq}/M>1$
to be consistent with our statistical errors and the correct physics.
Note that the overshoot $a_{\rm eq}/M>1$ in the fit
is consistent with a linear extrapolation of the NT dependence
of $s$ for $a/M>0$, which also overshoots in the same way
due to the progressively non-linear behavior of $s$ above $a/M\approx 0.95$.
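The consistency of the two fitting procedures can be checked numerically; a short Python sketch using the rounded coefficients quoted above (so small discrepancies reflect rounding, not the underlying fits):

```python
# Cross-check of the two spin-equilibrium estimates: solving the fitted
# spin-up parameter s ~ 3.2 - 2.5|h/r| - 2.9(a/M) for s = 0, versus the
# direct fit 1 - a_eq/M ~ -0.08 + 0.8|h/r|.

def a_eq_from_s(hor):
    a = (3.2 - 2.5 * hor) / 2.9  # root of s(a, |h/r|) = 0
    return min(a, 1.0)           # cap at maximal spin

def a_eq_direct(hor):
    a = 1.0 - (-0.08 + 0.8 * hor)
    return min(a, 1.0)

# thin disks formally overshoot a/M = 1 and are capped
a_thin = a_eq_from_s(0.07)   # -> 1.0

# for thicker disks the two procedures agree to ~0.02 in a/M
pairs = [(a_eq_from_s(h), a_eq_direct(h)) for h in (0.2, 0.25, 0.3)]
```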
These spin equilibrium fits imply that, within our statistical errors,
the spin can reach $a_{\rm eq}/M\to 1$ as $|h/r|\to 0$.
Thus, our results are consistent with NT by allowing maximal black hole
spin for thin disks\footnote{Here, we do not include black hole spin changes
by photon capture, which gives a limit of $a_{\rm eq}/M=0.998$ \citep{thorne74}.}.
Our results are also roughly consistent with the thick
disk 1-loop field geometry study by \citet{gammie_bh_spin_evolution_2004}.
Using our definition of disk thickness,
their model had $|h/r|\sim 0.2$--$0.25$
and they found $a_{\rm eq}/M\sim 0.9$, which is roughly consistent with our scaling law.
The fit is also consistent with results for even thicker disks ($|h/r|\sim 0.4$ near the horizon)
with $a_{\rm eq}/M\sim 0.8$ \citep{ajs78,pg98}.
Overall, the precise scaling relations given
for $\Upsilon$ and $a_{\rm eq}$ should be
considered as suggestive and preliminary.
More work is required to test the convergence
and generality of the actual coefficients.
While we explicitly tested convergence for the $a/M=0$
fiducial model, the other $a/M$ were not tested as rigorously.
A potential issue is that we find
the saturated state has fewer cells per
(vertical magnetic field) fastest growing mode for the axisymmetric MRI
in models with $a/M=0.9,0.98$ than in models with $a/M=0,0.7$
due to the relative weakness of the vertical field in the saturated state
for the high spin models.
However, both the rough analytical arguments
and the numerical solutions imply that
electromagnetic stresses scale somewhat linearly with black hole spin.
This consistency suggests that many measurements for the simulations,
such as $\Upsilon$ and $a_{\rm eq}$,
may be independent of the smallness of the vertical field.
This fact could be due to these quantities being only directly related
to the radial and toroidal magnetic field strengths rather than
the vertical magnetic field strength.
Further, our thick disk models resolve
the axisymmetric MRI better than the thinnest disk model.
This suggests that the scaling of $\Upsilon$ and $a_{\rm eq}$
with disk thickness may be a robust result.
Lastly, we consider how the specific angular momentum,
nominal efficiency, and luminosity from within the ISCO
deviate from NT as functions of spin and thickness.
Overall, fitting these quantities does not give
very strong constraints on the actual parameter values,
but we can state the confidence level of any trends.
For each of ${\tilde{e}}$, $\jmath$, $\jmath_{\rm in}$,
and $\tilde{L}_{\rm in}$,
the deviation from NT as $|h/r|\to 0$ is
less than $5\%$ with a confidence of $95\%$.
For $\jmath$ integrated over $\pm 2|h/r|$,
$D[\jmath]$ decreases with $|h/r|$ and increases with $a/M$
both with $99\%$ confidence.
When integrating $\jmath$ over all angles,
$D[\jmath]$ only decreases with $|h/r|$ to $99\%$ confidence.
For $\jmath_{\rm in}$ integrated over $\pm 2|h/r|$,
$D[\jmath_{\rm in}]$ only increases with $a/M$ with $99.8\%$ confidence
and only decreases with $|h/r|$ with $97\%$ confidence.
When integrating $\jmath_{\rm in}$ over all angles,
$D[\jmath_{\rm in}]$ only increases with $a/M$ to essentially $100\%$ confidence
and only decreases with $|h/r|$ to $99.8\%$ confidence.
For ${\tilde{e}}$ integrated over $\pm 2|h/r|$,
$D[{\tilde{e}}]$ only increases with $|h/r|$ with $98\%$ confidence
with no significant trend with $a/M$.
When integrating ${\tilde{e}}$ over all angles,
$D[{\tilde{e}}]$ only increases with $a/M$ with $95\%$ confidence
with no significant trend with $|h/r|$.
For $\tilde{L}_{\rm in}$,
there is a $98\%$ confidence for this to increase with $|h/r|$
with no significant trend with $a/M$.
Overall, the most certain statement that can be made
is that our results are strongly consistent
with all deviations from NT becoming less than a few percent
in the limit that $|h/r|\to 0$ across the full range of black hole spins.
\section{Introduction} \label{sec1}
Most Seyfert AGN are associated with weak
nuclear radio sources which show radio spectral indices and morphologies
(if resolved) consistent with synchrotron-emitting jets and lobes
\citep{ulv84, nagar99}, though their physical properties are poorly constrained.
However, they are effective avenues of kinetic and thermal feedback from the active nucleus
and may play an important role in determining the evolution of the central spheroid.
In the small fraction of Seyferts with kpc-scale radio jets, several studies have uncovered
clear signatures of interactions between the jet and surrounding gas, in the form of disturbed
emission line profiles in the inner Narrow-Line Region (NLR), as well as close
associations between resolved jet structures and NLR gas \citep[e.g.][]{whittle88,capetti96,fws98,cooke00,cecil02,whittle04}.
Depending on the physical make-up of the jet, relativistic or ram pressure can drive
fast shocks, compressing, sweeping up and altering the appearance
of the NLR. Postshock gas, with a temperature of several $10^7$ K, is
a source of ionizing photons which couple the shock properties to the ionization
state of the surrounding gas \citep[e.g.][]{ds96}. Shocks and winds can also
change the distribution of ISM phases in the NLR by destroying and ablating
clouds \citep{fragile05}. While the active nucleus is usually the dominant source of
ionization even in strongly jetted Seyferts \citep[e.g.][]{whittle05}, widespread shocks
driven by a jet can alter the ionization of the NLR by affecting the properties of the ISM.
Studies of shock structure and energetics \citep{ds96, allen08} predict strong differences
between the emission line spectrum of dense post-shock gas and gas
that is ionized by, but not in direct dynamical contact with, the shock (the precursor).
Therefore, a clear signature of shock ionized gas is a difference between the line profiles of
low and high ionization lines, which are preferentially produced by post-shock and
precursor gas, respectively \citep{whittle05}.
In this work, we present an HST/STIS spectroscopic study of NGC 5929, a local Seyfert
galaxy with a well-studied bi-polar radio jet. Previous ground-based spectroscopic
studies find evidence of a localized interaction between the jet and the
near-nuclear emission line gas \citep{whittle86,wakamatsu88,taylor89,wilson89,ferruit97}.
The datasets and analysis methods used are briefly reviewed
in $\S2$. Direct shock features and a picture of the interaction are developed in
$\S3$ and $\S4$. We discuss the role of shocks in Seyferts and AGN feedback in $\S5$.
NGC 5929 has a systemic heliocentric velocity of $cz\,=\,2492$ km s$^{-1}$
based on the stellar absorption line redshift of \citet{nelsonnwhittle95}, which corresponds to
$161$ pc arcsec$^{-1}$ (H$_0 = 75$ km s$^{-1}$ Mpc$^{-1}$).
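The quoted angular scale follows from a pure Hubble-flow distance; as a quick arithmetic check (illustrative Python, not part of the analysis):

```python
# Physical scale per arcsecond for cz = 2492 km/s and H0 = 75 km/s/Mpc,
# assuming a pure Hubble-flow distance D = cz / H0.
import math

cz = 2492.0                      # km/s, heliocentric
H0 = 75.0                        # km/s/Mpc
D_pc = (cz / H0) * 1.0e6         # distance in pc (~33.2 Mpc)
arcsec_in_rad = math.pi / (180.0 * 3600.0)
scale = D_pc * arcsec_in_rad     # ~161 pc per arcsec
```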
\begin{figure*}[ht]
\label{corrplot}
\centering
\includegraphics[width=0.7\columnwidth,angle=270]{ngc5929_corrplot.eps}
\caption[NGC 5929: Combined datasets]
{ Panels of images and spectra of the nuclear region of NGC 5929. The 2D spectra in Panels 2 and 3
correspond to STIS long-slit apertures A and B respectively, as indicated on the images plotted in
Panels 1 and 4. The radio map (gaussian smoothed by $0\farcs3$ to bring out its structure)
is plotted in Panel 1 in contours.
}
\end{figure*}
\section{Observations and Reductions} \label{sec2}
\subsection{STIS Dataset} \label{sec2p1}
Our principal dataset is two medium-dispersion STIS G430M long-slit
spectra covering the lines of the [\ion{O}{3}]$\lambda \lambda 5007, 4959$
doublet and H$\beta$ (HST Program: GO 8253; PI : Whittle).
The data was spectrophotometrically calibrated using the CALSTIS pipeline.
For each spatial increment, the continuum was modeled using a low-order
polynomial and subtracted.
These spectra were combined to generate a continuum-free 2-D spectrum
from both STIS slits.
The equivalent width of H$\beta$ is always greater than 10 \AA\ in the NLR of NGC 5929,
and $\approx 140$ \AA\ at the location of the interaction. Therefore,
corrections to the emission line strengths and profiles
due to Balmer absorption are negligible.
\begin{deluxetable}{cccc}
\tabletypesize{\scriptsize}
\tablewidth{\columnwidth}
\tablecaption{STIS observations \tablenotemark{\dag}} \label{tab1}
\tablenotetext{\dag}{Date: 07/02/00; Grating: G430M; $\Delta\lambda$ (\AA\ pix$^{-1}$): 0.277; PA: $-134\fdg6$}
\tablenum{1}
\tablehead{\colhead{Aperture}&\colhead{Dataset}&\colhead{Exp. (s)}&\colhead{Nuc. Offset (")} }
\startdata
NGC 5929 A & O5G403010 & 1524 & 0.198 \\
NGC 5929 B & O5G403020 & 600 & 0.390 \\
\enddata
\end{deluxetable}
\subsection{HST and Radio Images} \label{sec2p2}
Emission line maps of NGC 5929 in
[OIII]$\lambda\lambda4959,5007$+H$\beta$ were prepared from
archival HST narrow/medium-band WF/PC-1 images
(Program: GO 3724; PI: Wilson). Details of these maps can be found in
\citet{bow94}.
In addition, a high S/N F606W WFPC2 archival image of the galaxy
was used to trace the stellar and dust geometry in and around the NLR.
A reduced and calibrated 5 GHz radio map of NGC 5929 was obtained
from the MERLIN archive (http://www.merlin.ac.uk/archive/). This data was first presented in \citet{su96}.
The various HST datasets were registered to a common astrometric frame using image
cross-correlation techniques. The dust lane in
NGC 5929 crosses the NLR, effectively obscuring the true position of the nucleus.
\citet{latt97} use the astrometric information of stars in the WF/PC-1 images of the galaxy
to compare the locations of the continuum peak and the unresolved core
in a MERLIN radio map. Assuming the radio core corresponds to the true nucleus,
they find that the WF/PC-1 continuum peak is offset by $0\farcs1 \pm 0\farcs05$ NW
from the radio core. We have included this small correction in our final
registered images (see Panel 4 in Figure \ref{corrplot}
for a sense of the amount of correction involved).
\section{The NLR of NGC 5929} \label{sec3}
\subsection{Descriptive Framework} \label{sec3p2}
NGC 5929 is part of a major galaxy merger with NGC 5930 (Arp 90). Clear signs of tidal
tails and disturbed gaseous kinematics are seen in ground-based
two-dimensional spectra of the merger system \citep{lewis93}, and
twisting, filamentary dust structure is visible in the nuclear region
(Panel 4 of Fig. \ref{corrplot}).
The radio source has been imaged at high resolution
with the VLA \citep{ulv84,wilson89} and MERLIN \citep{su96}.
It exhibits a triple structure, with two bright compact steep-spectrum
hotspots on either side of a faint flat-spectrum core that is unresolved
even in the highest resolution maps. Low surface brightness
emission joins the hotspots to the radio core along PA $61^{\circ}$,
which is also the rough orientation of the elongated emission line distribution.
The diverse imaging and spectroscopic datasets are showcased
together in Figure \ref{corrplot}. In Panel 1, the [\ion{O}{3}]$+$H$\beta$
emission line image is displayed in greyscale, with the
STIS slit positions and radio map contours overplotted. Three emission line regions,
identified by \citet{bow94}, are labelled A, B and C. The SW radio hotspot
coincides with the edge of Region A, while the NE radio hotspot lies near
Region B. From the broadband image in Panel 4, the emission
line features are revealed as parts of a possibly contiguous region,
crossed by a dust lane that obscures the line and continuum emission
and modulates the appearance of the NLR. This dust structure is part of
a filamentary network extending to several kpc \citep{malkan98}.
The [OIII]$\lambda 5007$ line from the two medium-resolution STIS
spectra is plotted in Panels 2 \& 3 of Figure \ref{corrplot}. The slits
are oriented NE-SW from the bottom to the top of the panel, and a
dashed horizontal line marks the reference position on the slit, i.e.,
the point along the slit closest to the nucleus of the galaxy. Solid vertical
lines indicate the systemic velocity.
The [O III] line from both STIS spectra shows clear velocity
structure. A detailed treatment of the kinematics of the NLR
is not the aim of this Letter and will be addressed in a later paper.
A brief description will suffice here.
Slit A traverses the brighter emission line regions and therefore
gives the best picture of the ionized gas kinematics. The velocity
of the line peak increases along this slit from almost systemic
at the reference position to a maximum value of $+185$ km s$^{-1}$
at a nuclear radius of $\sim 0\farcs65$ SW (104 pc). This broad gradient
is mirrored, though less clearly, to the NE in both Slits
A and B. The gradient between the nucleus and Regions
B and C is similar in both slits, implying that both Regions
are part of the same gaseous complex, bisected
in projection by the dust lane. The FWHM of the [O III] line goes through
a rapid transition from $\sim 125$ km s$^{-1}$\ to greater
than $200$ km s$^{-1}$\ at a nuclear distance of $0\farcs5$ in the
brightest portion of Region A. Regions B and C, on the other hand,
exhibit uniformly narrow line profiles.
\begin{figure}[t]
\label{ratiomap}
\centering
\includegraphics[width=0.6\columnwidth,angle=270]{o3_hb_ratiomap.eps}
\caption[NGC 5929: Overlay and Excitation ratio map]
{The [OIII] image with MERLIN 6 cm contours overlayed (left)
and the [OIII]/H$\beta$ log ratio map (right). The location of the slit
aperture is indicated on the image as a rectangle, corresponding to
the spatial range of the ratio map. A grayscale lookup table is
plotted in the bar above the map. A fiducial value of 0.43 is used to
mask regions with no significant line emission, which helps to improve
the visibility of the ratio map.
}
\end{figure}
\subsection{Ionization Conditions} \label{sec3p3}
In Figure 2, a map of the [OIII]$\lambda5007$/H$\beta$ line
ratio from the Slit A spectrum is plotted, alongside the WF/PC-1 [O III] image,
overlayed with the contours of the 5 GHz radio map
and the boundaries of the STIS aperture. The ratio image was created by
first rebinning the two-dimensional spectrum
of each line onto a common velocity range and then dividing them to
generate a map of the [O III]/H$\beta$ ratio
as a function of velocity and slit position. This ratio is quite sensitive to
ionization state, yet insensitive to dust reddening.
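The construction just described can be sketched in a few lines of numpy; the array shapes, rest wavelengths, and interpolation scheme below are illustrative assumptions, not the pipeline actually used.

```python
# Rebin continuum-subtracted (position x wavelength) spectra of two lines
# onto a common velocity grid, then divide to form the ratio map.
import numpy as np

def to_velocity_grid(spec2d, wave, line_rest, v_grid, cz=2492.0):
    """Interpolate each spatial row of a (position x wavelength) spectrum
    onto a common velocity grid (km/s) about the systemic velocity."""
    c = 2.998e5  # km/s
    v = c * (wave / line_rest - 1.0) - cz
    return np.array([np.interp(v_grid, v, row) for row in spec2d])

def ratio_map(oiii2d, hbeta2d, wave, v_grid, floor=1e-3):
    """[O III]/H-beta as a function of velocity and slit position,
    masking pixels with no significant H-beta emission."""
    o3 = to_velocity_grid(oiii2d, wave, 5006.84, v_grid)
    hb = to_velocity_grid(hbeta2d, wave, 4861.33, v_grid)
    return np.where(hb > floor, o3 / np.maximum(hb, floor), np.nan)
```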
\begin{figure}[t]
\label{profilecomps}
\centering
\includegraphics[width=1.1\columnwidth]{profile_comps.eps}
\caption[NGC 5929: Profile Comparisons]
{A comparison of the line profiles of the [O III] line (black) and
H$\beta$ line (red) in four parts of the NGC 5929 NLR. The lines have been scaled
to a common peak value. The jet interaction, in the outer part of Region A,
is associated with broad H$\beta$, unlike any other part of the emission line region.
}
\end{figure}
The [O III]$\lambda5007$/H$\beta$ ratio is in the range of $2-6$, intermediate between
Seyferts and LINERs \citep{kewley06}. Variations in ionization are seen
across the NLR. The average excitation of Region B is towards the low
end of the observed range, with average [O III]$\lambda5007$/H$\beta \sim 3$,
while the inner part of Region A is more highly ionized ([O III]$\lambda5007$/H$\beta
\sim 5$). However, surrounding the position of the bright knot in Region A, a sharp
drop in the ratio is seen. Interestingly, this change in ionization is velocity
dependent. In the high velocity wings of the lines, the [O III]/H$\beta$
drops to almost unity implying a very low ionization state, while the central core of
the lines remain at a more modest ionization. This trend is different from most
Seyferts where the high velocity gas tends to be of equal or \emph{higher} ionization
than low velocity material \citep[e.g.,][]{pelat81}.
This difference between the average ionization of the NLR and Region
A is brought out well by comparing the velocity profiles of [O III]$\lambda 5007$
and H$\beta$ in different parts of the NLR, as is done in Figure 3.
The width of the H$\beta$ line relative to [\ion{O}{3}] is substantially
larger in the outer part of Region A than in either the inner part of the same
Region or the profiles in Regions B and C.
What is the reason for this peculiar ionization behaviour? We believe this
results from strong shocks driven by the radio plasma into the ISM.
The compressed gas in the post-shock cooling zone is expected to share the
high velocities of the shock, yet is predicted to have [O III]/H$\beta$ in
the range of $0.7-4.0$, depending on the shock speed and local magnetic field \citep{ds96}.
The enhancement of line emission from shocks and possibly a precursor
region, over and above the emission from gas photoionized by the AGN,
could then account for the appearance of the bright emission-line knot in Region A associated
with the radio hotspot. A moderate contribution to the line emission from
shocks is also consistent with the HST spectroscopic study of \citet{ferruit99}. From the
relative strengths of the [SII] $\lambda\lambda 4069,4077$ and
$\lambda\lambda 6717,6731$ lines, which are produced profusely in
the post-shock cooling zone, they estimate high temperatures
(greater than $20,000-50,000$ K) and somewhat low electron densities (around
$300$ cm$^{-3}$) for the [SII]-emitting gas. These values are quite reasonable
for normal diffuse ISM that has been compressed by a shock front.
Based on a simple double gaussian decomposition of the H$\beta$
line integrated across the knot, we estimate an approximate linewidth
of $420$ km s$^{-1}$ for the broad component, which includes the high velocity
wings. This nicely matches the predicted shock velocities from \citet{ferruit99}
which were based on comparisons to shock ionization models of \citet{ds96}.
While it is difficult to directly associate shock speeds with integrated line kinematics,
this broad consistency adds credence to a shock-driven origin for the emission of the
knot in Region A.
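A double-Gaussian decomposition of this kind can be sketched as follows (Python with scipy, fit to synthetic noiseless data; the amplitudes and component centering are illustrative assumptions):

```python
# Fit a narrow-plus-broad Gaussian model to an emission-line profile and
# read off the broad-component FWHM, as done for H-beta in the text.
import numpy as np
from scipy.optimize import curve_fit

FWHM = 2.0 * np.sqrt(2.0 * np.log(2.0))  # sigma -> FWHM conversion

def two_gauss(v, a1, s1, a2, s2):
    """Narrow core plus broad base, both centred at v = 0 for simplicity."""
    return (a1 * np.exp(-v**2 / (2.0 * s1**2))
            + a2 * np.exp(-v**2 / (2.0 * s2**2)))

# synthetic profile: ~125 km/s core plus ~420 km/s broad component
v = np.linspace(-1000.0, 1000.0, 201)
profile = two_gauss(v, 1.0, 125.0 / FWHM, 0.4, 420.0 / FWHM)

popt, _ = curve_fit(two_gauss, v, profile, p0=[1.0, 60.0, 0.3, 250.0])
broad_fwhm = FWHM * max(abs(popt[1]), abs(popt[3]))   # ~420 km/s
```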
\section{The Nature of the Jet Interaction}
Our combination of high spatial resolution optical spectroscopy and imaging provides the best
existing view of the compact jet-ISM interaction in NGC 5929. High velocity gas is
only seen at the location of the south-western radio hotspot, while in other
parts of the NLR, linewidths are narrow and a velocity gradient expected from virial motion
is seen, consistent with ground-based slit and IFU spectroscopy \citep{keel85, stoklasova09}.
We adopt a simple model for the jet interaction, following that of \citet{whittle86}.
The radio jet plows through the ISM as it propagates outward, driving strong shocks
at its head (the radio hotspot). As this shocked gas cools and gets denser,
it becomes the [\ion{S}{2}]-emitting gas described by \citet{ferruit99} and creates the
broad H$\beta$ emission. \citet{allen08} model the properties of post-shock gas
as a function of shock velocity $V_{sh}$, pre-shock density and magnetic field strength.
Taking a magnetized shock as fiducial, the [\ion{S}{2}]-derived densities and
$V_{sh} \sim 400$ km s$^{-1}$ (from the linewidth of the broad H$\beta$ component)
imply a pre-shock density of $n \sim 30$ cm$^{-3}$ and a shock cooling length
around tens of parsecs. The pre-shock density estimate has a large uncertainty
and could be as low as 1 cm$^{-3}$ if the shocks are weakly magnetized,
which would also decrease the cooling length significantly. However, since
Seyfert jets are generally associated with mG-level magnetic fields, we adopt our estimates for
magnetic shocks as most likely. The cooling length we derive is resolved in our HST images
and may explain the small offset of $0\farcs15$ between the location of the radio hotspot and the
peak of the line emission in Region A. This interpretation is very tentative, since
patchy dust obscuration in the region may also cause such an offset.
\citet{su96} estimate flux densities, radio spectral indices and source sizes
for the various components of the radio source, from which we calculate radio
source equipartition pressures, using the relations of \citet{miley80} and
assuming that the radio emitting plasma has a filling factor of unity
and an ion fraction $a=10$, and that the radio spectrum extends from
0.01 GHz to 100 GHz with a constant spectral index of $-0.82$. The equipartition synchrotron
pressure of the radio hotspot is $10^{-7}$ dyne cm$^{-2}$,
which may be compared to the ram pressure of the shock front,
$m_{p}\:n\:V_{sh}^2 \sim 8\times10^{-8}$ dyne cm$^{-2}$. The two pressures are
comparable, consistent with the view that the shocked gas surrounds
and confines the radio hotspot. It is worthwhile to note that such pressures are
considerably higher than those typically found in Seyfert radio sources, which may indicate that
this is a relatively young jet ejection event. Eventually, interactions with the
surrounding gas will confine the jet flow into a static lobe, like those found in late-stage
interactions \citep{capetti96, whittle04}.
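The ram-pressure figure above is a one-line estimate; as a quick sanity check, the arithmetic can be reproduced as follows (a sketch in CGS units; the pre-shock density and shock velocity are the values adopted in the text, and the equipartition pressure is the quoted hotspot value):

```python
# Sanity check of the shock ram pressure, p_ram = m_p * n * V_sh^2 (CGS units).
M_P = 1.67e-24       # proton mass [g]
n_pre = 30.0         # adopted pre-shock density [cm^-3]
v_sh = 400e5         # shock velocity, 400 km/s in [cm/s]

p_ram = M_P * n_pre * v_sh**2   # [dyne cm^-2]
p_eq = 1e-7                     # quoted equipartition pressure of the hotspot

print(f"ram pressure = {p_ram:.1e} dyne cm^-2")   # ~8e-8, as quoted
print(f"p_eq / p_ram = {p_eq / p_ram:.2f}")       # order unity
```

The two pressures agree to within a factor of $\sim 1.2$, consistent with the confinement picture described above.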
\section{Discussion: Shock Signatures in Seyferts} \label{sec4}
Why do we see such obvious signatures of shocks in this object and not in others?
This may be due to the weakness of the AGN: NGC 5929 has an absorption-corrected
hard X-ray luminosity of $1.8\times 10^{41}$ erg s$^{-1}$ -- low compared to average
local Seyferts \citep{cardamone07}. Or perhaps the nuclear environment of the
galaxy is dense and gas-rich due to its ongoing merger. This
will lower the ionization parameter of nuclear radiation
and produce a more compact emission line region. In both scenarios, the influence of the
nucleus at larger radii will be relatively unimportant, making shock
ionized line emission visible against the general background of centrally
ionized gas. If this is indeed the case, it implies that radiative shock
ionization from nuclear outflows is widespread in Seyfert NLRs, but is usually
secondary to nuclear photoionization processes and only visible
in low-luminosity Seyferts or those with dense nuclear environments.
A rich nuclear environment raises another possibility: that the jet is impacting
a dense molecular cloud, enhancing the shock luminosity. This may explain
why no clear shock signatures are seen around the NE radio hotspot, though
the obscuration of the main dust lane prevents a direct view of this
region.
The luminosity of the broad H$\beta$ component is $8\times 10^{38}$ erg s$^{-1}$.
Following \citet{ds96}, the H$\beta$ flux scales with the total radiative flux from the
shock, with a weak dependence on $V_{sh}$, giving shock
luminosities of $5 \times 10^{41}$ erg s$^{-1}$ -- comparable to or slightly less than
the total luminous output of the AGN (taking an X-ray bolometric correction of $\sim 10$).
Using standard relationships from \citet{osterbrock89}, the H$\beta$ luminosity
can be used to derive the mass of ionized gas in the broad component: $3 \times 10^{4}$ M$_{\odot}$.
If this mass of gas was accelerated to $V_{sh}$, it would have a total kinetic energy
of $5 \times 10^{52}$ ergs. Taking the approximate acceleration timescale to be
the crossing time of the region of size 0\farcs3 (48 pc) at $V_{sh}$ (around $10^{5}$ yr),
a lower limit on the `kinetic luminosity' of the jet is estimated to be $1.5 \times 10^{40}$ erg s$^{-1}$.
This is a few to several percent of the total AGN energy output.
Given that jet outflows are a relatively common feature of Seyfert activity
and couple strongly to the ISM through shocks, jet-driven feedback can
effectively carry as much energy as the AGN photon luminosity to kpc scales, and
transfer a tenth or less of this energy in the form of kinetic energy to the NLR. This
can have important consequences for the suppression of bulge star-formation
and the energy budget and dynamics of circum-nuclear gas.
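The energy bookkeeping in this section can be reproduced in a few lines; this is a sketch with round CGS constants, taking the ionized gas mass, shock velocity, and crossing time from the estimates above.

```python
# Reproduce the jet kinetic-energy estimates quoted above (CGS units).
M_SUN = 1.989e33     # solar mass [g]
YR = 3.156e7         # year [s]

m_gas = 3e4 * M_SUN      # ionized gas mass in the broad H-beta component [g]
v_sh = 400e5             # shock velocity [cm/s]
t_cross = 1e5 * YR       # crossing time of the 48 pc region at v_sh [s]

e_kin = 0.5 * m_gas * v_sh**2   # total kinetic energy [erg]
l_kin = e_kin / t_cross         # lower limit on the jet 'kinetic luminosity' [erg/s]

print(f"E_kin ~ {e_kin:.1e} erg")     # ~5e52 erg, as quoted
print(f"L_kin ~ {l_kin:.1e} erg/s")   # ~1.5e40 erg/s, as quoted
```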
\acknowledgments
We thank the referee for their helpful review. DR acknowledges the support of
NSF grants AST-0507483 and AST-0808133. Based on observations made
with the NASA/ESA Hubble Space Telescope. MERLIN is a National Facility
operated by the University of Manchester at Jodrell Bank on behalf of STFC.
\section{Introduction}
An important goal of Ricci flow is to find Einstein metrics on a
given manifold. In the seminal paper~\cite{Ha82}, Hamilton showed that, starting from any
metric with positive Ricci curvature on $M^3$, the normalized Ricci flow always
converges to an Einstein metric. In the K\"ahler setting,
\KRf was developed as an important tool in the search for KE (K\"ahler Einstein)
metrics. In~\cite{Cao85}, based on the fundamental estimate of
Yau (\cite{Yau78}), Cao showed the long time existence of \KRf and the convergence of
\KRf when $c_1(M)\leq 0$. If $c_1(M)>0$, $M$ is called a Fano manifold. In this case,
the situation is much more delicate: $M$ may not admit a KE metric, so we cannot expect convergence of the \KRf to a KE metric in general.
If the existence of KE metric is assumed, Chen and Tian showed that \KRf converges
exponentially fast toward the KE metric if the initial metric has positive bisectional
curvature (cf. \cite{CT1}, \cite{CT2}). Using his famous $\mu$-functional, Perelman developed fundamental estimates along \KRf on Fano manifolds. He also claimed that the \KRf will always converge
to the KE metric on any KE manifold. This result was generalized to
manifolds with K\"ahler Ricci solitons by Tian and Zhu (\cite{TZ}).
If the existence of a KE metric is not assumed, there is a large body of work on the convergence of the \KRf
building on G. Perelman's fundamental estimates. For example, important progress can be found in
(listed in alphabetical order)
~\cite{CLH},~\cite{CST},~\cite{CW1},~\cite{CW2},~\cite{Hei},~\cite{PSS},~\cite{PSSW1},
~\cite{PSSW2},~\cite{Ru},~\cite{RZZ},~\cite{Se1},~\cite{To},~\cite{TZs} and references therein.\\
Following Tian's original idea of $\alpha_{\nu, k}$-invariant in~\cite{Tian90}
and~\cite{Tian91}, Chen and the author (c.f.~\cite{CW3} and~\cite{CW4}) proved
that the \KRf converges to a KE metric if $\alpha_{\nu, 1}(M)$ or $\alpha_{\nu, 2}(M)$ is big enough and the flow is tamed.
They also showed that every 2-dimensional \KRf is tamed.
Using the calculation of $\alpha_{\nu, 1}$ and $\alpha_{\nu, 2}$ of every Fano surface (c.f.~\cite{ChS},~\cite{SYl}), they showed the convergence of \KRf to a KE metric on every Fano surface $M$
satisfying $1 \leq c_1^2(M) \leq 6$. This gives a
flow proof of Calabi's conjecture on Fano surfaces. The existence of KE metrics on such manifolds was originally proved by
Tian in~\cite{Tian90}.\\
A natural question is: can we generalize these results to Fano orbifolds and use \KRf to search for KE metrics on Fano orbifolds? In this paper, we answer this question affirmatively. We use \KRf as a tool to find new KE metrics on some orbifold Fano surfaces. However, before we can use the orbifold \KRfc we first need some general results on orbifold Ricci flows.
So we generalize Perelman's Ricci flow theory to the orbifold case. The study of orbifold Ricci flow was pioneered by the works~\cite{CWL},~\cite{Wu},~\cite{Lu1}.\\
We have the following theorems.
\begin{theoremin}
Suppose $Y$ is a Fano orbifold, $\{(Y, g(t)), 0 \leq t < \infty \}$ is a \KRf solution
tamed by $\nu$. Then this flow converges to a KE metric if one of the
following conditions is satisfied.
\begin{itemize}
\item $\alpha_{\nu, 1}>\frac{n}{n+1}$.
\item $\alpha_{\nu, 2}>\frac{n}{n+1}, \; \alpha_{\nu, 1} >
\frac{1}{2-\frac{(n-1)}{(n+1)\alpha_{\nu, 2}}}$.
\end{itemize}
\label{theoremin: tamedconvergence}
\end{theoremin}
The tamedness condition originates from Tian's work in~\cite{Tian90}
(c.f. equation (0.3) of~\cite{Tian90}). Under \KRfc it was first defined
in~\cite{CW4}. A flow is called tamed by constant $\nu$ if the function
\begin{align}
F(\nu, x, t) \triangleq \frac{1}{\nu} \log
\sum_{\beta=0}^{N_{\nu}} \snorm{S_{\nu, \beta}^t}{h^{\nu}}^2(x)
\label{eqn: tamed}
\end{align}
is uniformly bounded on $Y \times [0, \infty)$.
Here $\{S_{\nu, \beta}^t\}_{\beta=0}^{N_{\nu}}$
is an orthonormal basis of $H^0(K_Y^{-\nu})$,
i.e.,
\begin{align*}
\int_Y \langle S_{\nu, \alpha}^{t}, S_{\nu, \beta}^{t}\rangle_{h^{\nu}}
\omega_t^n=\delta_{\alpha \beta}, \quad 0 \leq \alpha, \beta \leq N_{\nu}=\dim H^0(K_Y^{-\nu})-1;
\quad
h=\det g_{i\bar{j}}(t).
\end{align*}
Therefore, this theorem gives us a way to search for KE metrics by
\KRfd
$\alpha_{\nu, k}$ are defined as (c.f. Definition~\ref{definition: nualpha}
for more details)
\begin{align*}
\alpha_{\nu, k} \triangleq
\sup\{ \alpha | \sup_{\varphi \in \mathscr{P}_{\nu, k}} \int_Y e^{-2\alpha \varphi} \omega_0^n <
\infty\}
\end{align*}
where $\mathscr{P}_{\nu, k}$ is the collection of all functions of
the form $\displaystyle \frac{1}{2\nu}\log (\sum_{\beta=0}^{k-1} \norm{\tilde{S}_{\nu,
\beta}}{h_0^{\nu}}^2)$ for some orthonormal basis $\{\tilde{S}_{\nu, \beta}\}_{\beta=0}^{k-1}$
of a $k$-dimensional subspace of $H^0(K_Y^{-\nu})$.
Note that the $\alpha_{\nu, k}$ are algebraic invariants which can be calculated explicitly in many
cases. The most
important remaining task is therefore to show when the tamedness condition is satisfied.
\begin{theoremin}
Suppose $Y$ is a Fano surface orbifold, $\{(Y, g(t)), 0 \leq t< \infty \}$
is a \KRf solution. Then there is a constant $\nu$ such that this
flow is tamed by $\nu$.
\label{theoremin: surfacetamed}
\end{theoremin}
According to these two theorems and the calculations done
in~\cite{Kosta} and in~\cite{SYl}, we obtain the existence of K\"ahler Einstein
metrics on some orbifold Fano surfaces.
\begin{corollaryin}
Suppose $Y$ is a cubic surface with only one ordinary double point, or $Y$ is
a degree $1$ del Pezzo surface having only Du Val singularities of type $\A_k$ for $k \leq 6$. Starting from any
metric $\omega$ satisfying $[\omega]=2\pi c_1(Y)$,
the \KRf will converge to a KE metric on $Y$. In particular, $Y$ admits a KE metric.
\label{corollaryin: KEexample}
\end{corollaryin}
Actually, both Theorem~\ref{theoremin: tamedconvergence}
and Theorem~\ref{theoremin: surfacetamed} have corresponding versions
in~\cite{CW3} and~\cite{CW4}. Their proofs are also similar to the
ones in~\cite{CW3} and~\cite{CW4}.\\
Theorem~\ref{theoremin: tamedconvergence} follows from the
partial $C^0$-estimated given by the tamedness condition:
\begin{align}
\left|\varphi(t) - \sup_M \varphi(t)
-\frac{1}{\nu}\log \sum_{\beta=0}^{N_{\nu}}
\snorm{\lambda_{\beta}(t) \tilde{S}_{\nu, \beta}^t}{h_0^{\nu}}^2 \right| <C,
\label{eqn: spe}
\end{align}
where $\varphi(t)$ is the evolving K\"ahler potential,
$0 < \lambda_0(t) \leq \lambda_1(t) \leq \cdots \leq \lambda_{N_{\nu}}(t)=1 $
are $N_{\nu}+1$ positive functions of time $t$,
$\{\tilde{S}_{\nu, \beta}^t\}_{\beta=0}^{N_{\nu}}$ is an orthonormal
basis of $H^0(K_M^{-\nu})$ under the fixed metric $g_0$.
Intuitively, inequality (\ref{eqn: spe}) means that we can control $Osc_{M} \varphi(t)$ by
$\displaystyle \frac{1}{\nu}\log \sum_{\beta=0}^{N_{\nu}} \snorm{\lambda_{\beta}(t) \tilde{S}_{\nu, \beta}^t}{h_0^{\nu}}^2$
which only blows up along intersections of pluri-anticanonical divisors.
Therefore, the estimate of $\varphi(t)$ is more or less translated to the study of the property of
pluri-anticanonical holomorphic sections, which are described by $\alpha_{\nu, k}$.\\
Theorem~\ref{theoremin: surfacetamed} can be viewed as the
combination of the following two lemmas.
\begin{lemmain}
Suppose $Y$ is a Fano orbifold, $\{(Y^n, g(t)), 0 \leq t < \infty\}$ is a \KRf
solution satisfying the following two conditions
\begin{itemize}
\item No concentration: There is a constant $K$ such that
\begin{align*}
\Vol_{g(t)}(B_{g(t)}(x, r)) \leq Kr^{2n}
\end{align*}
for every $(x, t) \in Y \times [0, \infty), r \in (0,K^{-1}]$.
\item Weak compactness: For every sequence $t_i \to \infty$, by
passing to subsequence, we have
\begin{align*}
(Y, g(t_i)) \sconv (\hat{Y}, \hat{g}),
\end{align*}
where $(\hat{Y}, \hat{g})$ is a Q-Fano normal variety.
\end{itemize}
Then this flow is tamed by some big constant $\nu$.
\label{lemmain: justtamed}
\end{lemmain}
Note that a $Q$-Fano normal variety is a normal variety with a
very ample line bundle whose restriction on the smooth part is the
plurianticanonical line bundle. The convergence $\sconv$ is the
convergence in Cheeger-Gromov topology, i.e., it means that the following two
properties are satisfied simultaneously:
\begin{itemize}
\item $d_{GH}(Y_i, \hat{Y}) \to 0$ where $d_{GH}$ is the
Gromov-Hausdorff distance among metric spaces.
\item for every smooth compact set
$K \subset \hat{Y}$, there are diffeomorphisms $\varphi_i: K \to Y_i$
such that $Im(\varphi_i)$ is a smooth subset of $Y_i$ and $\varphi_i^*(g_i)$
converges to $\hat{g}$ smoothly on $K$.
\end{itemize}
\begin{lemmain}
Suppose $Y$ is an orbifold Fano surface, $\{(Y, g(t)), 0 \leq t<\infty\}$
is a \KRf solution, then this flow satisfies the no concentration
and weak compactness property mentioned in Lemma~\ref{lemmain: justtamed}.
Moreover, every limit space $(\hat{Y}, \hat{g})$ is a K\"ahler
Ricci soliton.
\label{lemmain: surfacetamed}
\end{lemmain}
The proof of Lemma~\ref{lemmain: justtamed} follows directly (c.f. Theorem 3.2 of~\cite{CW4})
if we have the continuity of plurianticanonical holomorphic
sections --- orthonormal bases of $H^0(K_Y^{-\nu})$ (under metric $g_i$)
converge to an orthonormal basis of $H^0(K_{\hat{Y}}^{-\nu})$
(under metric $\hat{g}$)
whenever $(Y, g_i)$ converge to $(\hat{Y}, \hat{g})$. Moreover, every
orthonormal basis of $H^0(K_{\hat{Y}}^{-\nu})$ is a limit of
orthonormal bases of $H^0(K_Y^{-\nu})$. This fact is assured by H\"ormander's $L^2$-estimate of
$\bar{\partial}$-operator and an a priori estimate of
$\snorm{S}{}$ and $\snorm{\nabla S}{}$, where $S$ is a unit norm
section of $H^0(K_Y^{-1})$ (c.f. Lemma~\ref{lemma: boundh} for the a priori bounds of sections,
Theorem 3.1 of~\cite{CW4} for the continuity of holomorphic sections).
The proof of Lemma~\ref{lemmain: surfacetamed} is essentially based
on Riemannian geometry. It is a corollary of the following Theorem~\ref{theoremin: centerwcpt}.
In fact, if we define
$\mathscr{O}(m, c, \sigma, \kappa, E)$ as the moduli space of
compact orbifold Ricci flow solutions $\{(X^m, g(t)), -1 \leq t \leq 1\}$
whose normalization constant is bounded by $c$, scalar curvature
bounded by $\sigma$, volume ratio bounded by $\kappa$ from below,
energy bounded by $E$ (c.f. Definition~\ref{definition: moduli}),
then this moduli space has the no concentration and weak compactness properties.
\begin{theoremin}
$\mathscr{O}(m, c, \sigma, \kappa, E)$ satisfies the following two
properties.
\begin{itemize}
\item No concentration. There is a constant $K$ such that
\begin{align*}
\Vol_{g(0)}(B_{g(0)}(x, r)) \leq Kr^m
\end{align*}
whenever $r \in (0, K^{-1}]$, $x \in X$,
$\{(X, g(t)), -1 \leq t \leq 1\} \in \mathscr{O}(m, c, \sigma, \kappa, E)$.
\item Weak compactness. If $\{ (X_i, x_i, g_i(t)) , -1 \leq t \leq 1\} \in \mathscr{O}(m, c, \sigma, \kappa, E)$
for every $i$, by passing to subsequence if necessary, we have
\begin{align*}
(X_i, x_i, g_i(0)) \sconv (\hat{X}, \hat{x}, \hat{g})
\end{align*}
for some $C^0$-orbifold $\hat{X}$ in the Cheeger-Gromov sense.
\end{itemize}
\label{theoremin: centerwcpt}
\end{theoremin}
Actually, according to the fact that scalar
curvature and $\int_Y |Rm|^2 \omega_t^2$ are
uniformly bounded (c.f. Proposition~\ref{proposition: perelman})
along the \KRf on an orbifold Fano surface, it is clear that
$\{(Y, g(t+T)), -1 \leq t \leq 1\} \in \mathscr{O}(4, 1, \sigma, \kappa, E)$
for every $T \geq 1$. Therefore
Theorem~\ref{theoremin: centerwcpt} applies. In order to obtain
Lemma~\ref{lemmain: surfacetamed}, we need to show that the
limit space $\hat{Y}$ is a K\"ahler Ricci soliton and every orbifold singularity is a
$C^{\infty}$-orbifold point (c.f. Definition~\ref{definition: orbifold}). The first property is a direct
application of Perelman functional's monotonicity (c.f.~\cite{Se1}), the second property follows from
Uhlenbeck's removing singularity method (c.f.~\cite{CS}).\\
Theorem~\ref{theoremin: centerwcpt} is a generalization of the
corresponding weak compactness theorem in~\cite{CW3}.
If we assume that Perelman's pseudolocality theorem (Theorem 10.3 of~\cite{Pe1})
holds in the orbifold case, then its proof can be
almost the same as the corresponding theorems in~\cite{CW3}.
Therefore, an important technical difficulty of this paper is
the following pseudolocality theorem.
\begin{theoremin}
There exists $\eta=\eta(m, \kappa)>0$ with the following property.
Suppose $\{(X, g(t)), 0 \leq t \leq r_0^2\}$ is a compact orbifold Ricci flow solution.
Assume that at $t=0$ we have $|\hat{Rm}| \leq r_0^{-2}$
in $B(x, r_0)$, and $\Vol B(x, r_0) \geq \kappa r_0^m$. Then the estimate
$|\hat{Rm}|_{g(t)}(y) \leq (\eta r_0)^{-2}$ holds whenever $0 \leq t \leq (\eta r_0)^2$,
$d_{g(t)}(y, x) < \eta r_0$.
\label{theoremin: improvedbound}
\end{theoremin}
Note that $|\hat{Rm}|$ is defined as
\begin{align*}
|\hat{Rm}|(x)=
\left\{
\begin{array}{ll}
|Rm|(x), & \textrm{if $x$ is a smooth point.}\\
\infty, & \textrm{if $x$ is a singularity.}
\end{array}
\right.
\end{align*}
The proof of Theorem~\ref{theoremin: improvedbound} is a
combination of Perelman's point selecting method and the maximum principle.
Note that the manifold version of
Theorem~\ref{theoremin: improvedbound} (Theorem 10.3 of~\cite{Pe1})
is claimed by Perelman without proof. The first
written proof was given recently by Peng Lu in~\cite{Lu2}.\\
With Theorem~\ref{theoremin: improvedbound} in hand, we can prove
Theorem~\ref{theoremin: centerwcpt} as we did in~\cite{CW3}.
However, we prefer to give a new proof. In~\cite{CW3},
the proof of the weak compactness theorem is complicated. Much
effort is devoted to showing the local connectedness of the limit
space. In other words, we need to show the limit space is an
orbifold, not a multifold. We used a bubble tree on space-time to
argue by contradiction. If we are able to construct a bubble tree on
a fixed time slice, then the argument will be much easier.
In this paper, we achieve this by observing some stability of
$\int |Rm|^{\frac{m}{2}}$ in unit geodesic balls. \\
In short, the new ingredients of this paper are listed as follows.
\begin{itemize}
\item We offer a method to find KE metrics on orbifold Fano surfaces.
\item We give a simplified proof of weak compactness theorem,
i.e., Theorem~\ref{theoremin: centerwcpt}.
\item We prove the pseudolocality theorem in orbifold Ricci flow.
\end{itemize}
It's interesting to compare the two methods used in the
search for KE metrics: the continuity method and the flow method.
Suppose $(M, g, J)$ is a K\"ahler manifold with positive first Chern class
$c_1$, $\omega$ is the $(1, 1)$-form
compatible to $g$ and $J$. The existence of KE metric under the
complex structure $J$ is equivalent to the solvability of the
equation
\begin{align*}
\det \Big(g_{i\bar{j}} + \frac{\partial^2 \varphi}{\partial z_i \partial \bar{z}_j}\Big)=e^{-u-\varphi} \det (g_{i\bar{j}}),
\quad
g_{i\bar{j}} + \frac{\partial^2 \varphi}{\partial z_i \partial \bar{z}_j}>0,
\end{align*}
where $u$ is a smooth function on $M$ satisfying
\begin{align*}
u_{i\bar{j}}=g_{i\bar{j}}-R_{i\bar{j}},
\quad \frac{1}{V} \int_M (e^{-u}-1) \omega^n=0.
\end{align*}
In the continuity method, we try to solve a family of equations
($0 \leq t \leq 1$):
\begin{align*}
\left\{
\begin{array}{l}
\det \Big(g_{i\bar{j}} + \frac{\partial^2 \varphi}{\partial z_i \partial \bar{z}_j}\Big)=e^{-u-t\varphi} \det (g_{i\bar{j}}),\\
g_{i\bar{j}} + \frac{\partial^2 \varphi}{\partial z_i \partial \bar{z}_j}>0.
\end{array}
\right.
\end{align*}
In K\"ahler Ricci flow method, we try to show the convergence of the parabolic equation solution:
\begin{align*}
\D{\varphi}{t} = \log \frac{\det \Big(g_{i\bar{j}} + \frac{\partial^2 \varphi}{\partial z_i \partial \bar{z}_j}\Big)}{\det (g_{i\bar{j}})}
+ \varphi + u.
\end{align*}
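For completeness, differentiating this potential equation recovers the flow at the level of metrics (a standard computation, sketched here): writing $g_{i\bar{j}}(t)=g_{i\bar{j}}+\varphi_{i\bar{j}}$ and using $u_{i\bar{j}}=g_{i\bar{j}}-R_{i\bar{j}}$, one gets

```latex
\begin{align*}
\D{}{t} g_{i\bar{j}}(t)
&= \partial_i \partial_{\bar{j}} \log \det (g_{k\bar{l}}(t))
 - \partial_i \partial_{\bar{j}} \log \det (g_{k\bar{l}})
 + \varphi_{i\bar{j}} + u_{i\bar{j}} \\
&= -R_{i\bar{j}}(t) + R_{i\bar{j}}
 + \big( g_{i\bar{j}}(t) - g_{i\bar{j}} \big)
 + \big( g_{i\bar{j}} - R_{i\bar{j}} \big)
 = g_{i\bar{j}}(t) - R_{i\bar{j}}(t),
\end{align*}
```

which is the normalized K\"ahler Ricci flow equation.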
In both methods, the existence of a KE metric is reduced to setting up a
uniform $C^0$-bound of the K\"ahler potential function $\varphi$.
If $\alpha(M)>\frac{n}{n+1}$, then $\varphi$ is uniformly bounded
in either case (c.f.~\cite{Tian87},~\cite{Ru},~\cite{CW2}).
If $\alpha(M) \leq \frac{n}{n+1}$, we need more
geometric estimates to show the uniform bound of $\varphi$. Under
continuity path, these geometric estimates are
stated by Tian in~\cite{Tian90} and~\cite{Tian91} (c.f. inequality (0.3) of~\cite{Tian90}
and inequality (5.2) of~\cite{Tian91}).
In the \KRf case, we used a similar statement and, for simplicity, called it the
tamedness condition (c.f. equation (\ref{eqn: tamed})).
If the continuity path or the \KRf is tamed, then $\varphi$ is
uniformly bounded provided $\alpha_{\nu, k} \; (k=1, 2)$ is big enough.
However, if the complex structure is fixed,
there is a slight difference in obtaining the tamedness condition
between these two methods. The tamedness condition of a \KRf may be easier
to verify with the help of Perelman's functional.
On a continuity path, the tamedness condition is
conjectured to be true by Tian (c.f. inequality (5.2) of~\cite{Tian91}).
Let's recall how to find the KE metric on a K\"ahler surface
$(M, J)$ whenever $c_1^2(M)=3$ and $(M, J)$ contains an Eckardt point.
It was first found by Tian in~\cite{Tian90} where he used
continuity method twice. Note that on the differential manifold
$M \sim \Blow{6}$, all the complex structures such that $c_1$ is positive
form a connected $4$-dimensional algebraic variety $\mathscr{J}$. Choose
$J_0 \in \mathscr{J}$ such that $\alpha_{G}(M, J_0)>\frac23$ for some compact group
$G \subset Aut(M, J_0)$(e.g. Fermat surface).
By continuity method, there is a KE metric $g_0$ compatible with $J_0$.
Now connect $J_0$ and $J$ by a family of complex structures
$J_t \in \mathscr{J}, 0 \leq t \leq 1$, such that $J_1=J$.
Choose $\tilde{g}_t$ to be a continuous family of metrics compatible with $J_t$.
Let $I$ be the
collection of all $t$ such that there exists a KE metric
$g_t$ compatible with $J_t$. It's easy to show that $I$ is an open
subset of $[0, 1]$. In order to prove $I=[0, 1]$, one only needs
to show the closedness of $I$. Let
\begin{align*}
(g_t)_{i\bar{j}}= (\tilde{g}_t)_{i\bar{j}} + \varphi_{i\bar{j}}.
\end{align*}
Then it suffices to show a uniform bound of $Osc_M \varphi(t)$ on $I$.
Since along this curve of complex structures,
$\alpha_{\nu, 2}(M, J_t)>\frac23, \alpha_{\nu, 1}(M, J_t) \geq \frac23$
for every $t\in I \subset [0, 1]$ (c.f.~\cite{SYl},~\cite{ChS}),
it suffices to show the tamedness condition
(inequality (0.3) of~\cite{Tian90}) on the set $I$. In fact, this tamedness
condition is guaranteed by the
weak compactness theorem of KE metrics on $M$ (c.f. Proposition 4.2 of~\cite{Tian90}).
In the \KRf method, we are unable to change the complex structure.
Inspired by the continuity method, we also reduce the boundedness
of $\varphi$ to the tamedness condition since
$\alpha_{\nu, 2}(M, J)>\frac23, \alpha_{\nu, 1}(M, J) \geq \frac23$.
Now in order to show the tamedness condition, we need a weak
compactness of time slices of a \KRfd This seems to be more
difficult since each time slice is only a K\"ahler metric, not a
KE metric; we therefore lose the regularity properties of KE
metrics. Luckily, with the help of Perelman's estimates and the
pseudolocality theorem, we are able to show the weak compactness
theorem (c.f. Theorem 4.4 of~\cite{CW3}).
Consequently the tamedness condition of the \KRf on
$M$ holds, so $\varphi$ is uniformly bounded and this flow converges
to a KE metric.
Once the weak compactness of time slices is proved, the
disadvantage of \KRf becomes an advantage: we can prove the
tamedness condition without changing the complex structure.
This is not easy to prove under a continuity path when the
complex structure is fixed. Suppose we have a differential
manifold $M$ whose complex structures with positive $c_1$ form a
space $\mathscr{J}$ satisfying
\begin{align*}
\alpha_{\nu, 1}(M, J) \leq \frac{n}{n+1}, \quad \forall \; J \in
\mathscr{J}.
\end{align*}
Without using symmetry of the initial metric,
we cannot apply the continuity method directly to draw
conclusions about the existence of KE metrics on $(M, J)$. However, \KRf can still
possibly be applied. For example, suppose $(Y, J)$ is a Fano orbifold
surface with degree $1$ and with three rational double points of type $\A_5, \A_2$ and $\A_1$.
Then $J$ is the unique complex structure on
$Y$ such that $c_1(Y)>0$ (c.f.~\cite{Zhd},~\cite{YQ}).
According to the calculations in~\cite{Kosta}, we know
$\alpha_{\nu, 1}(Y)=\frac23$,
$\alpha_{\nu, 2}>\frac23$. So we are unable to use the continuity method
directly to conclude the existence of a KE metric on $(Y, J)$ because of the absence of the tamedness condition.
However, we do have this condition under \KRf by Theorem~\ref{theoremin: surfacetamed}.
Therefore, the \KRf on $(Y, J)$ must converge to a KE metric. \\
The organization of this paper is as follows. In section 2, we set
up notations. In section 3, we go over Perelman's theory on Ricci
flow on orbifolds and prove the pseudolocality theorem
(Theorem~\ref{theoremin: improvedbound}). In
section 4, we give a simplified proof of the weak
compactness theorem (Theorem~\ref{theoremin: centerwcpt}).
In section 5, we give some improved estimates
of plurianticanonical line bundles and prove
Theorem~\ref{theoremin: tamedconvergence} and
Theorem~\ref{theoremin: surfacetamed}. At last, in section 6,
we give some examples where our theorems can be applied. In
particular, we show Corollary~\ref{corollaryin: KEexample}.\\
\noindent {\bf Acknowledgment}
The author would like to thank his advisor, Xiuxiong Chen, for
bringing him into this field and for his constant encouragement.
The author is very grateful to Gang Tian for
many insightful and inspiring conversations with him.
Thanks also go to John Lott, Yuanqi Wang, Fang Yuan for many
interesting discussions.
\section{Set up of Notations}
\begin{definition}
A $C^{\infty} (C^0)$-orbifold $(\hat{X}^m, \hat{g})$ is a topological
space which is a smooth manifold with a smooth Riemannian metric
away from finitely many singular points. At every singular point,
$\hat{X}$ is locally diffeomorphic to a cone over $S^{m-1} / \Gamma$
for some finite subgroup $\Gamma \subset SO(m)$. Furthermore, at
such a singular point, the metric is locally the quotient of a
smooth (continuous) $\Gamma$-invariant metric on $B^{m}$ under the orbifold
group $\Gamma$.
A $C^{\infty}(C^0)$-multifold $(\tilde{X}, \tilde{g})$ is a finite union
of $C^{\infty}(C^0)$-orbifolds after identifying finitely many points. In other words,
$\displaystyle \tilde{X}= \coprod_{i=1}^{N} \hat{X}_i / \sim$ where every
$\displaystyle \hat{X}_i$ is an orbifold, and the relation $\sim$ identifies
finitely many points of $\displaystyle \coprod_{i=1}^{N} \hat{X}_i$.
For simplicity, we say a space is a Riemannian orbifold or orbifold (multifold)
if it is a $C^{\infty}$-orbifold ($C^{\infty}$-multifold).
\label{definition: orbifold}
\end{definition}
\begin{definition}
For a compact Riemannian orbifold $X^m$ without boundary, we define its isoperimetric
constant as
\begin{align*}
\mathbf{I}(X) \triangleq
\inf_{\Omega} \frac{|\partial \Omega|}{\min\{|\Omega|, |X \backslash \Omega|\}^{\frac{m-1}{m}}}
\end{align*}
where $\Omega$ runs over all domains with rectifiable boundaries in $X$.
For a complete Riemannian orbifold $X^m$ with boundary, we define its isoperimetric
constant as
\begin{align*}
\mathbf{I}(X) \triangleq
\inf_{\Omega} \frac{|\partial \Omega|}{|\Omega|^{\frac{m-1}{m}}}
\end{align*}
where $\Omega$ runs over all domains with rectifiable boundaries
in the interior of $X$.
\end{definition}
\begin{definition}
A geodesic ball $B(p, \rho)$ is called $\kappa$-noncollapsed if
$\displaystyle \frac{\Vol(B(q, s))}{s^m} > \kappa $
whenever $B(q, s) \subset B(p, \rho)$.
A Riemannian orbifold $X^m$ is called $\kappa$-noncollapsed on
scale $r$ if every geodesic ball $B(p, \rho) \subset X$ is
$\kappa$-noncollapsed whenever $\rho \leq r$.
A Riemannian orbifold $X^m$ is called $\kappa$-noncollapsed if it
is $\kappa$-noncollapsed on every scale $r \leq \diam (X^m)$.
\end{definition}
\begin{definition}
Suppose $(x, t)$ is a point in a Ricci
flow solution. Then parabolic balls are defined as
\begin{align*}
& P^+(x, t, r , \theta)= \{(y, s)| d_{g(t)}(y, x) \leq r, \; t\leq s
\leq t+\theta\}.\\
& P^-(x, t, r , \theta)= \{(y, s)| d_{g(t)}(y, x) \leq r, \; t-\theta \leq s \leq t\}.
\end{align*}
Geometric parabolic balls are defined as
\begin{align*}
& \tilde{P}^+(x, t, r , \theta)= \{(y, s)| d_{g(s)}(y, x) \leq r, \;t\leq s
\leq t+\theta\}.\\
& \tilde{P}^-(x, t, r , \theta)= \{(y, s)| d_{g(s)}(y, x) \leq r, \; t-\theta \leq s \leq t\}.\\
\end{align*}
\label{definition: parabolicballs }
\end{definition}
\begin{definition}
Suppose $x$ is a point in the Riemannian orbifold $X$. Then we define
\begin{align*}
\snorm{\hat{Rm}}{}(x)= \left\{
\begin{array}{ll}
&|Rm|(x), \quad \textrm{if $x$ is a smooth point,}\\
&\infty, \quad \textrm{if $x$ is a singular point.}
\end{array}
\right.
\end{align*}
\label{definition: rmhat}
\end{definition}
\section{Pseudolocality Theorem}
\subsection{Perelman's Functional and Reduced Distance}
Denote $\square= \D{}{t} - \triangle$,
$\square^*=-\D{}{t} - \triangle + R$.
In our setting, every orbifold has only finitely many singularities. All the concepts in~\cite{Pe1}
can be reestablished in our orbifold case. For example,
we can define the $W$-functional, the reduced distance and the reduced volume for orbifold Ricci flow.
\begin{definition}
Let $(X,g)$ be a Riemannian orbifold, $\tau>0$ a constant, $f$ a smooth function on $X$.
Define
\begin{align*}
W(g, \tau, f) &= \int_X \{ \tau(R+ |\nabla f|^2) +f-m \} (4 \pi
\tau)^{-\frac{m}{2}} e^{-f}dv,\\
\mu(g, \tau) &= \inf_{\int_X (4 \pi \tau)^{-\frac{m}{2}}e^{-f}dv=1} W(g, \tau, f).
\end{align*}
\end{definition}
Since the Sobolev constant of $X$ exists,
we know $\mu(g, \tau)>-\infty$ and it is achieved by some smooth function $f$.
Suppose $\{(X, g(t)), 0 \leq t \leq T\}$ is a Ricci flow solution
on a compact orbifold $X$, and $u=(4\pi(T-t))^{-\frac{m}{2}}e^{-f}$
satisfies $\square^*u=0$. Let
$v=\{(T-t)(2\triangle f - |\nabla f|^2 +R) +f -m\}u$, then
\begin{align*}
\square^* v= -2(T-t)|R_{ij}+f_{ij}-\frac{1}{2(T-t)}g_{ij}|^2u
\leq 0.
\end{align*}
This implies that
\begin{align*}
\D{}{t} \int_X\{(T-t)(R+|\nabla f|^2) +f -m\}(4\pi
(T-t))^{-\frac{m}{2}}e^{-f} dv
=\D{}{t} \int_X v = -\int_X \square^* v \geq 0.
\end{align*}
It follows that $\mu(g(t), T-t)$ is nondecreasing along Ricci flow.
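To spell out how this yields the monotonicity of $\mu$ (a standard argument, sketched here): fix $t_1 < t_2 < T$, let $f_2$ be a minimizer realizing $\mu(g(t_2), T-t_2)$, and solve $\square^* u = 0$ backward in time from $u(t_2)=(4\pi(T-t_2))^{-\frac{m}{2}}e^{-f_2}$. The constraint $\int_X u \, dv = 1$ is preserved, so $f(t_1)$ is admissible in the infimum and

```latex
\begin{align*}
\mu(g(t_1), T-t_1)
\leq W(g(t_1), T-t_1, f(t_1))
\leq W(g(t_2), T-t_2, f_2)
= \mu(g(t_2), T-t_2).
\end{align*}
```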
From this monotonicity, we can obtain the no-local-collapsing
theorem.
\begin{proposition}
Suppose $\{(X, g(t)), 0 \leq t <T_0\}$
is a Ricci flow solution on a compact orbifold $X$. Then there is a
constant $\kappa$, depending on this solution, such that the following property holds.
Under metric $g(t)$, if the scalar curvature satisfies $|R| \leq r^{-2}$ in $B(x, r)$ for some
$r<1$, then $\Vol(B(x, r)) \geq \kappa r^m$.
\end{proposition}
The proof of this proposition is the same as that of Theorem 4.1
in~\cite{Pe1}, with the assumption on $|Rm|$ there weakened to one on $R$.
See~\cite{KL},~\cite{SeT} for this improvement to scalar
curvature.
\begin{definition}
Fix a base point $p \in X$. Let $\mathcal{C}(p, q, \bar{\tau})$ be the
collection of all smooth curves $\{\gamma(\tau), 0 \leq \tau \leq \bar{\tau}\}$ satisfying
$\gamma(0)=p, \gamma(\bar{\tau})=q$. As in~\cite{Pe1}, we define
\begin{align*}
\mathcal{L}(\gamma) &= \int_{0}^{\bar{\tau}} \sqrt{\tau} (R + |\dot{\gamma}(\tau)|^2)
d\tau,\\
L(p, q, \bar{\tau})&= \inf_{\gamma \in
\mathcal{C}(p,q,\bar{\tau})} \mathcal{L}(\gamma),\\
l(p, q, \bar{\tau}) &=\frac{L(p, q,
\bar{\tau})}{2\sqrt{\bar{\tau}}}.
\end{align*}
\label{definition: rd}
\end{definition}
As in the manifold case, $L(p, q, \bar{\tau})$ is achieved by some
minimizing $\mathcal{L}$-geodesic $\gamma$.\\
Under Ricci flow, the evolution of distance is controlled by the
Ricci curvature, so Definition~\ref{definition: rd} yields the following estimate (c.f.~\cite{Ye}).
\begin{proposition}
Suppose $|Ric| \leq Cg$ when $0 \leq \tau \leq \bar{\tau}$ for a
nonnegative constant $C$. Then
\begin{align*}
e^{-2C\tau} \frac{d_{g(0)}^2(p, q)}{4\tau} -\frac{nC}{3}\tau
\leq l(p, q, \tau) \leq e^{2C\tau} \frac{d_{g(0)}^2(p, q)}{4\tau} +
\frac{nC}{3}\tau.
\end{align*}
\end{proposition}
Therefore, as $\tau \to 0$, $l(p, q, \tau)$
behaves like $\frac{d_{g(0)}^2(p, q)}{4\tau}$.
\begin{proposition}
Let $u(p, q, \tau)$ be the heat kernel of $\square^*$ on $X \times [0,
\bar{\tau}]$. As $q \to p, \; \tau \to 0$, we have
\begin{align*}
u(p, q, \tau) \sim (4\pi\tau)^{-\frac{n}{2}} e^{-\frac{d_{g(0)}^2(p,
q)}{4\tau} + \log |\Gamma_p|}.
\end{align*}
\end{proposition}
In the case that the underlying space is a manifold, this approximation
can be proved by constructing a parametrix for the operator
$\square^*$ (c.f.~\cite{CLN} for a detailed proof).
The construction carries over to the orbifold case easily; see~\cite{DSGW} for the construction of a parametrix of the heat kernel on
a general orbifold with a fixed metric. This proposition is the
combination of the corresponding theorems in~\cite{CLN}
and~\cite{DSGW}. Since the method of proof is the same,
we omit it for simplicity.
\begin{proposition}
$\square^* \{(4\pi \tau)^{-\frac{n}{2}} e^{-l}\} \leq 0$.
\end{proposition}
\begin{proposition}
Suppose $h$ is a solution of $\square h=0$. Then
\begin{align*}
\lim_{t \to 0} \int_X hv \leq -\log |\Gamma|h(p, 0).
\end{align*}
\label{proposition: deltalimit}
\end{proposition}
\begin{proof}
Direct calculation shows that
\begin{align*}
\D{}{t}\{\int_X hv\}=-\int_X h \square^* v= 2 \tau \int_X \snorm{R_{ij}+f_{,ij}-\frac{g_{ij}}{2\tau}}{}^2
uh \geq 0.
\end{align*}
Therefore, $\displaystyle \lim_{t\to 0^-} \int_X hv$ exists provided $\int_X hv$ is
uniformly bounded as $t \to 0^{-}$. To see this boundedness, we decompose
$\int_X hv$ as
\begin{align*}
\int_X hv &=\int_X [\tau(2\triangle f -|\nabla f|^2 +R) + f
-n]uh\\
&=(4\pi \tau)^{-\frac{n}{2}} \int_X [\tau(2\triangle f -|\nabla f|^2 +R) + f -n]e^{-f}h\\
&=(4\pi \tau)^{-\frac{n}{2}} \{\int_X [\tau(|\nabla f|^2 +R) + f-n]e^{-f}h -2\tau \int_X h \triangle
e^{-f}\}\\
&=\underbrace{\int_X [-2\tau \triangle h + (R\tau -n)h]u}_{I}
+\underbrace{\int_X \tau |\nabla f|^2 uh}_{II}
+\underbrace{\int_X fuh}_{III}.
\end{align*}
Note that $\int_X u \equiv 1$, so term $I$ is uniformly bounded.
By the gradient estimate of heat equation, as in~\cite{Ni1}, we have
\begin{align*}
\tau \frac{|\nabla u|^2}{u^2} \leq (2+C_1\tau) \{ \log (\frac{B}{u\tau^{\frac{n}{2}}} \int_X u) + C_2 \tau\}
\end{align*}
for some constants $C_1, C_2$. Together with $\int_X u \equiv 1$, this implies
\begin{align*}
II =\int_X \tau |\nabla f|^2 uh \leq (2+C_1\tau) \{ \int_X ( \log B + f + C_2 \tau)uh\}
\leq C + 3 \int_X fuh,
\end{align*}
where $C$ is a constant depending on $X$ and $h$. It follows that
\begin{align*}
\int_X hv \leq C' + 4 \int_X fuh.
\end{align*}
In order to show that $\int_X hv$ has a uniform upper bound, it suffices to
show that $III=\int_X fuh$ is uniformly bounded from above.
Around $(p, 0)$, the reduced distance $l$ on $X$ approximates
$\frac{d^2}{4\tau}$ (see~\cite{Pe1},~\cite{Ye} for more details).
As a consequence,
we have
\begin{align*}
\square^* \{(4\pi \tau)^{-\frac{n}{2}} e^{-l(y, \tau) + \log|\Gamma|}\} \leq 0,
\quad
\lim_{\tau \to 0} (4\pi \tau)^{-\frac{n}{2}} e^{-l(y, \tau)+ \log |\Gamma|}=\delta_x(y).
\end{align*}
Then the maximum principle implies that
\begin{align}
f(y, \tau) \leq l(y, \tau) - \log |\Gamma|
\label{eqn: fl}
\end{align}
for every $y \in X$, $0 <\tau \leq 1$.
Inequality (\ref{eqn: fl}) implies
\begin{align*}
\limsup_{\tau \to 0} \int_X fuh &\leq \limsup_{\tau \to 0}\int_X (l-\log|\Gamma|)uh\\
&=-\log|\Gamma| h(p,0) + \limsup_{\tau \to 0} \int_X
\frac{d^2}{4\tau}uh\\
&\leq (\frac{n}{2} - \log|\Gamma|) h(p, 0).
\end{align*}
The last step holds since the expansion of $u$ around point
$(p, 0)$ tells us that
\begin{align*}
\limsup_{\tau \to 0}\int_X \frac{d^2}{4\tau} uh
\leq h(p, 0)
\{\int_{\R^n / \Gamma} \frac{|z|^2}{4} \cdot (4\pi)^{-\frac{n}{2}}
\cdot e^{-\frac{|z|^2}{4} + \log|\Gamma|} dz\}
=\frac{n}{2}h(p,0).
\end{align*}
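For completeness, the Gaussian moment behind the last equality can be checked directly; the factor $e^{\log|\Gamma|}=|\Gamma|$ exactly compensates for passing from $\R^n/\Gamma$ to $\R^n$:

```latex
\int_{\R^n/\Gamma} \frac{|z|^2}{4}\,(4\pi)^{-\frac{n}{2}}
e^{-\frac{|z|^2}{4}}\,|\Gamma|\,dz
=\int_{\R^n} \frac{|z|^2}{4}\,(4\pi)^{-\frac{n}{2}} e^{-\frac{|z|^2}{4}}\,dz
=\sum_{j=1}^n \frac{1}{4}\int_{\R^n} z_j^2\,(4\pi)^{-\frac{n}{2}}
e^{-\frac{|z|^2}{4}}\,dz
=\frac{n}{2},
```

since $(4\pi)^{-\frac{n}{2}}e^{-\frac{|z|^2}{4}}$ is the density of a centered Gaussian with variance $2$ in each coordinate.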
After the uniform upper bound of $\int_X vh$ is established, the
monotonicity of $\int_X vh$ shows that
$\displaystyle \lim_{\tau \to 0} \int_X vh$ exists. Since $\frac{1}{\tau}$
is not integrable on $[0,1]$, for every $k$ there are arbitrarily small $\tau$ such that
$\displaystyle \D{}{t}\int_X vh \leq \frac{1}{k \tau}$.
So we can extract a sequence of $\tau_k \to 0$ such that
\begin{align*}
\lim_{k \to \infty} 2\tau_k^2
\int_X \snorm{R_{ij}+f_{,ij}-\frac{g_{ij}}{2\tau_k}}{}^2 uh
= \lim_{k \to \infty} \tau_k \D{}{t}\int_X vh
\leq \lim_{k \to \infty} \frac{1}{k}=0.
\end{align*}
H\"older inequality and Cauchy-Schwartz inequality implies that
\begin{align*}
\lim_{\tau_k \to 0} \tau_k \int_X (R + \triangle f - \frac{n}{2\tau_k})uh
&\leq
\lim_{\tau_k \to 0} \tau_k \int_X
\snorm{R_{ij}+f_{,ij}-\frac{g_{ij}}{2\tau_k}}{}uh\\
&\leq \lim_{\tau_k \to 0} \{\tau_k^2 \int_X
\snorm{R_{ij}+f_{,ij}-\frac{g_{ij}}{2\tau_k}}{}^2uh\}^{\frac12}
\cdot \{\int_X uh\}^{\frac12}\\
&=0.
\end{align*}
Therefore,
\begin{align*}
\lim_{\tau \to 0} \int_X vh
&=\lim_{\tau_k \to 0} \int_X vh\\
&=\lim_{\tau_k \to 0} \tau_k \int_X [(R + \triangle f) -
\frac{n}{2\tau_k}]uh
-\lim_{\tau_k \to 0} \tau_k \int_X u \triangle h + \lim_{\tau_k \to 0}\int_X
(f-\frac{n}{2})uh\\
&=\lim_{\tau_k \to 0}\int_X
(f-\frac{n}{2})uh\\
&\leq -\log|\Gamma| h(p, 0).
\end{align*}
\end{proof}
\begin{corollary}
$v \leq 0$.
\end{corollary}
\begin{theorem}
Suppose $h$ is a nonnegative function, there is a large constant $K$ such that
$\max \{\square h, -\triangle h \}\leq K$
whenever $t \in [-K^{-1}, 0]$. Then
\begin{align*}
\lim_{t \to 0} \int_X hv \leq -\log |\Gamma|h(p, 0).
\end{align*}
\label{theorem: deltalimit}
\end{theorem}
\begin{proof}
The monotonicity of $\int_X v$ tells us that
\begin{align*}
\int_X v \geq \left.\int_X v\right|_{t=-K^{-1}} \geq \mu(g(-K^{-1}), K^{-1})
\end{align*}
whenever $t \in [-K^{-1}, 0)$.
The conditions $\square h \leq K$, $v \leq 0$ imply
\begin{align*}
\D{}{t}\{\int_X hv\}=\int_X (v \square h - h\square^*v) \geq K \int_X v-\int_X h \square^* v
\geq C +2 \tau \int_X \snorm{R_{ij}+f_{,ij}-\frac{g_{ij}}{2\tau}}{}^2 uh
\end{align*}
where $C=K\mu(g(-K^{-1}), K^{-1})$, $\tau=-t$.
In other words,
\begin{align*}
\D{}{t}\{C\tau +\int_X hv\} \geq 2 \tau \int_X \snorm{R_{ij}+f_{,ij}-\frac{g_{ij}}{2\tau}}{}^2
uh \geq 0.
\end{align*}
By the same argument as in
Proposition~\ref{proposition: deltalimit}, $C\tau + \int_X hv$ is
uniformly bounded from above. So the limit
$\displaystyle \lim_{\tau \to 0} \int_X hv=\lim_{\tau \to 0} \{C\tau + \int_X hv\}$ exists. There
is a sequence $\tau_k \to 0$ such that
\begin{align*}
2 \tau_k^2 \int_X \snorm{R_{ij}+f_{,ij}-\frac{g_{ij}}{2\tau_k}}{}^2 uh
\longrightarrow 0.
\end{align*}
This yields that $\displaystyle \lim_{\tau_k \to 0} \tau_k \int_X (R+ \triangle f - \frac{n}{2\tau_k})uh
=0$.
Since $-\triangle h \leq K$, as in Proposition~\ref{proposition: deltalimit}, we have
\begin{align*}
\lim_{\tau \to 0} \int_X vh
&=\lim_{\tau_k \to 0} \left\{ \tau_k \int_X [(R + \triangle f) -
\frac{n}{2\tau_k}]uh - \tau_k \int_X u \triangle h
+\int_X (f-\frac{n}{2})uh \right\}\\
&\leq -\log|\Gamma| h(p, 0).
\end{align*}
\end{proof}
\subsection{Proof of Pseudolocality Theorem}
\textbf{In this section, we fix $\alpha= \frac{1}{10^6m}$.}
\begin{theorem}[\textbf{Pseudolocality theorem}]
There exist $\delta>0, \epsilon>0$ with the following property.
Suppose $\{(X, g(t)), 0 \leq t \leq \epsilon^2\}$ is an orbifold Ricci flow
solution satisfying
\begin{itemize}
\item Isoperimetric constant close to the Euclidean one: $\mathbf{I}(B(x, 1)) \geq (1-\delta) \mathbf{I}(\R^n)$;
\item Scalar curvature bounded from below: $R \geq -1$ in $B(x, 1)$;
\end{itemize}
under metric $g(0)$. Then in the geometric parabolic ball
$\tilde{P}^+(x, 0, \epsilon, \epsilon^2)$,
every point is smooth and $|Rm| \leq \frac{\alpha}{t} +
\epsilon^{-2}$.
\end{theorem}
\begin{remark}
The condition $\mathbf{I}(B(x, 1))> (1-\delta)\mathbf{I}(\R^n)$
implies that there is no orbifold singularity in $B(x, 1)$.
\end{remark}
\begin{proof}
Define $\displaystyle F(x, r) \triangleq \sup_{(y, t) \in \tilde{P}^+(x, 0, r, r^2)}\{ |\hat{Rm}| - \frac{\alpha}{t} -
r^{-2}\}$ where $|\hat{Rm}|$ is defined in Definition~\ref{definition: rmhat}. Then the conclusion of the theorem is equivalent to
$F(x, \epsilon) \leq 0$.
Suppose the theorem fails. Then
for every $(\delta, \eta) \in \R^+ \times \R^+$, there is an
orbifold Ricci flow solution violating the property. So we can
take a sequence of positive numbers $(\delta_i, \eta_i) \to (0, 0)$
and orbifold Ricci flow solutions $\{(X_i, x_i, g_i(t)), 0 \leq t \leq \eta_i^2 \}$
satisfying the initial conditions but with
$F(x_i, \eta_i)>0$.
Define $\epsilon_i$ to be the infimum of $r$ such that $F(x_i, r) \geq 0$.
Since $x_{i}$ is a smooth point, we have $\eta_i>\epsilon_i >0$. For every point
$(z, t) \in \tilde{P}^+(x_i, 0, \epsilon_i, \epsilon_i^2)$, we have
\begin{align}
|\hat{Rm}|_{g_i(t)}(z) -\frac{\alpha}{t}- \epsilon_i^{-2}
\leq |\hat{Rm}|_{g_i(t_i)}(y_i) -\frac{\alpha}{t_i}- \epsilon_i^{-2}=0
\label{eqn: bschoice}
\end{align}
for some point $(y_i, t_i) \in \tilde{P}^+(x_i, 0, \epsilon_i, \epsilon_i^2)$.
Let $A_i =\alpha \epsilon_i^{-1}=\frac{1}{10^6m \epsilon_i}$.
\begin{clm}
Every point in the geometric parabolic ball
$\tilde{P}^+(x_i, 0, 4A_i \epsilon_i, \epsilon_i^2)$
is a smooth point.
\end{clm}
For convenience, we omit the subscript $i$.
Suppose that $(p, s)$ is a singular point in
$\tilde{P}^+(x,0,4A\epsilon, \epsilon^2)$.
Let
\begin{align*}
\eta(y, t) = \phi (\frac{d_{g(t)}(y, x) +200m \sqrt{t}}{10A\epsilon})
\end{align*}
where $\phi$ is a cutoff function with the following
properties: it takes value one on $(-\infty, 1]$, decreases to
zero on $[1, 2]$, and satisfies $-\phi'' \leq 10\phi$, $(\phi')^2 \leq 10\phi$.
Recall that
\begin{align*}
|\hat{Rm}| \leq \frac{\alpha}{t} + \epsilon^{-2} \leq
\frac{1+\alpha}{t} < \frac{2}{t}
\end{align*}
in the set $\tilde{P}^+(x,0, \epsilon, \epsilon^2)$. In
particular, every point in $B_{g(t)}(x, \sqrt{\frac{t}{2}})$ is
smooth and satisfies $|Rm| < \frac{2}{t}$. This
curvature estimate implies (c.f. Lemma 8.3 (a) of~\cite{Pe1}, which also holds in the orbifold case) that
\begin{align*}
\square d \geq -(m-1)(\frac23 \cdot \frac{2}{t} \cdot \sqrt{\frac{t}{2}} + \sqrt{\frac{2}{t}})
>-4m t^{-\frac12},
\end{align*}
where $d(\cdot)=d_{g(t)}(\cdot, x)$. Therefore, as calculated in~\cite{Pe1},
we have
\begin{align*}
\square \eta = \frac{1}{10A\epsilon}(\square d +100m t^{-\frac12})\phi' - \frac{1}{(10A\epsilon)^2} \phi''
\leq \frac{10\eta}{(10A\epsilon)^2}.
\end{align*}
Let $u$ be the fundamental solution of the backward heat equation
$\square^* u=0$ centered at $(p, s)$, i.e., $u \to \delta_p$ as $t \to s^-$.
We can calculate
\begin{align*}
\D{}{t} \int_X \eta u &= \int_X (u \square \eta - \eta \square^*u)
=\int_X u \square \eta
\leq \frac{1}{10(A\epsilon)^2} \int_X u\eta
\leq \frac{1}{10(A\epsilon)^2} \int_X u
=\frac{1}{10(A\epsilon)^2}.
\end{align*}
It follows that
\begin{align*}
\left.\int_X \eta u \right|_{t=0} \geq \left.\int_X \eta u \right|_{t=s} -\frac{s}{10(A\epsilon)^2}
\geq 1 - \frac{1}{10A^2}.
\end{align*}
Similarly, if we let $\bar{\eta}(y, t) = \phi (\frac{d_{g(t)}(y, x) +200m
\sqrt{t}}{5A\epsilon})$, we can obtain
\begin{align*}
\int_{B(x, 10A\epsilon)} u \geq \left. \int_X \bar{\eta} u \right|_{t=0}
\geq 1-\frac{10}{(5A)^2}.
\end{align*}
It follows that
\begin{align*}
\left.\int_{B(x, 20A\epsilon) \backslash B(x, 10A\epsilon)} \eta u \right|_{t=0}
\leq 1-(1-\frac{10}{(5A)^2}) < A^{-2}.
\end{align*}
On the other hand, we have
\begin{align*}
\D{}{t} \int_X -\eta v &= \int_X (-v \square \eta + \eta \square^*
v)\\
&\leq \int_X -v \square \eta\\
&\leq \frac{1}{10(A\epsilon)^2} \int_X -\eta v,
\end{align*}
where we used the fact $-v \geq 0$ and $\square \eta \leq \frac{\eta}{10(A\epsilon)^2}$.
This inequality together with Theorem~\ref{theorem: deltalimit} implies
\begin{align*}
\left.\int_X -\eta v \right|_{t=0}
\geq e^{-\frac{s}{10(A\epsilon)^2}} \left.\int_X -\eta v\right|_{t=s}
\geq \log |\Gamma| \eta(p, s) e^{-\frac{s}{10(A\epsilon)^2}} > \frac12 \log |\Gamma|
\geq \frac12 \log 2.
\end{align*}
Let $\tilde{u}=u\eta$ and $\tilde{f}=f -\log \eta$. At $t=0$, as in~\cite{Pe1},
we can compute
\begin{align*}
\frac12 \log 2 &\leq -\int_X v \eta
= \int_X \{(-2\triangle f + |\nabla f|^2 -R) s -f +m\} \eta u\\
&=\int_X \{ -s|\nabla \tilde{f}|^2 -\tilde{f} +m\} \tilde{u}
+\int_X \{s(\frac{|\nabla \eta|^2}{\eta} -R\eta) -\eta \log
\eta\} u\\
&\leq 10A^{-1} + 100\epsilon^2 +\int_X\{-s|\nabla \tilde{f}|^2 -\tilde{f} +m\} \tilde{u}.
\end{align*}
After parabolically rescaling the metric so that $s$ becomes $\frac12$, we obtain
\begin{align*}
\left\{
\begin{array}{ll}
&\int_{B(x, \frac{20A\epsilon}{\sqrt{2s}})} \{\frac12 (|\nabla \tilde{f}|^2 + \tilde{f} -m)\}
< -\frac14 \log 2.\\
&1-A^{-2} <\int_{B(x, \frac{20A\epsilon}{\sqrt{2s}})} \tilde{u} \leq
1.
\end{array}
\right.
\end{align*}
This contradicts the fact that
$B(x, \frac{20A\epsilon}{\sqrt{2s}}) \subset B(x, \frac{1}{\sqrt{2s}})$
has almost Euclidean isoperimetric constant
(c.f. Proposition 3.1 of~\cite{Ni2} for more details).
This finishes the proof of the claim.\\
Now we can argue as Perelman did in Claim 1 of the proof of Theorem 10.1 of~\cite{Pe1}.
We can find a point
$(\bar{x}, \bar{t})$ such that
\begin{align*}
|\hat{Rm}|_{g(t)}(z) \leq 4 |\hat{Rm}|_{g(\bar{t})}(\bar{x})
\end{align*}
whenever
\begin{align*}
(z, t) \in X_{\alpha}, \; 0< t \leq \bar{t},\;
d_{g(t)}(z, x) \leq d_{g(\bar{t})}(\bar{x}, x) +
A|\hat{Rm}|_{g(\bar{t})}(\bar{x})^{-\frac12}
\end{align*}
where $X_{\alpha}$ is the set of pairs $(z, t)$ satisfying
$|\hat{Rm}|_{g(t)}(z) \geq \frac{\alpha}{t}$.
Moreover, we also have $d_{g(\bar{t})}(\bar{x}, x)<(2A+1)
\epsilon$. Therefore, the geometric parabolic ball
$\tilde{P}^+(\bar{x}, 0, A\epsilon, \bar{t})$
is strictly contained in the geometric parabolic ball
$\tilde{P}^+(x, 0, 4A\epsilon,\epsilon^2)$, so every point around $(\bar{x}, \bar{t})$
is smooth. We can replace $|\hat{Rm}|$ by $|Rm|$,
and all the arguments of Perelman's proof in~\cite{Pe1}
apply directly. For simplicity, we only sketch the basic steps.
$P^-(\bar{x}, \bar{t}, \frac{1}{10}AQ^{-\frac12}, \frac12 \alpha Q^{-1})$
is a parabolic ball satisfying $|Rm| \leq
4Q=4|Rm|_{g(\bar{t})}(\bar{x})$, and every point in it is smooth. Then,
by a blowup argument, we can show that there is a time
$\tilde{t} \in [\bar{t}-\frac12\alpha Q^{-1}, \bar{t}]$ such that
$\int_{B_{g(\tilde{t})}(\bar{x}, \sqrt{\bar{t}-\tilde{t}})} v < -c_0$
for some positive constant $c_0$, where $v$ is the auxiliary function associated to the fundamental solution
$u=(4\pi(\bar{t}-t))^{-\frac{n}{2}}e^{-f}$ of the conjugate heat
equation, starting from the $\delta$-function at $(\bar{x}, \bar{t})$.
With the help of cutoff functions, we can construct a function
$\tilde{f}$ satisfying
\begin{align*}
\left\{
\begin{array}{ll}
&\int_{B(x, \frac{20A\epsilon}{\sqrt{2\bar{t}}})} \{\frac12 (|\nabla \tilde{f}|^2 + \tilde{f} -m)\}
< -\frac12 c_0.\\
&1-A^{-2} <\int_{B(x, \frac{20A\epsilon}{\sqrt{2\bar{t}}})} \tilde{u} \leq 1.
\end{array}
\right.
\end{align*}
under the metric $\frac{1}{2\bar{t}}g(0)$. Since
$B(x, \frac{20A\epsilon}{\sqrt{2\bar{t}}}) \subset B(x, \frac{1}{\sqrt{2\bar{t}}})$
has almost Euclidean isoperimetric constant as $\epsilon \to 0, A \to
\infty$, these inequalities cannot hold simultaneously, a contradiction!
\end{proof}
\begin{proposition}
Let $\{(X, g(t)), 0 \leq t \leq 1\}$, $x, \delta, \epsilon$ be the
same as in the previous theorem. If in addition,
$|Rm|<1$ in the ball $B(x, 1)$ at time $t=0$, then
\begin{align*}
|Rm|_{g(t)}(y) < (\alpha\epsilon)^{-2}
\end{align*}
whenever $0<t< (\alpha \epsilon)^2, \; dist_{g(t)}(y, x)< \alpha \epsilon$.
\end{proposition}
\begin{proof}
Suppose not. Then there is a point $(y_0, t_0)$ satisfying
\begin{align*}
|Rm|_{g(t_0)}(y_0) \geq (\alpha \epsilon)^{-2}, \quad
0<t_0< (\alpha \epsilon)^2,
\quad d_{g(t_0)}(y_0, x)< \alpha \epsilon.
\end{align*}
Check whether $Q=|Rm|_{g(t_0)}(y_0)$ controls $|Rm|$ at ``previous and
outside" points. In other words, check whether the following property $\clubsuit$ is
satisfied.
\begin{align*}
\clubsuit: \qquad |Rm|_{g(t)}(z) \leq 4Q, \quad
\forall \; 0 \leq t \leq t_0, \quad
d_{g(t)}(z, x) \leq d_{g(t_0)}(y_0, x) + Q^{-\frac12}.
\end{align*}
If not, there is a point $(z, s)$ such that
\begin{align*}
|Rm|_{g(s)}(z) > 4Q, \quad 0 < s \leq t_0, \quad
d_{g(s)}(z, x) \leq d_{g(t_0)}(y_0, x) + Q^{-\frac12}.
\end{align*}
Then we denote $(z, s)$ by $(y_1, t_1)$ and check whether the property $\clubsuit$
is satisfied at this new base point.
No matter how many steps of this process are performed, the base
point $(y_k, t_k)$ satisfies
\begin{align*}
0 &< t_k \leq t_0 < (\alpha \epsilon)^2, \\
d_{g(t_k)}(y_k, x)
&< d_{g(t_0)}(y_0, x) + \sum_{l=0}^{k-1} 2^{-l} Q^{-\frac12}
< d_{g(t_0)}(y_0, x) + 2Q^{-\frac12}
< 3\alpha \epsilon < \epsilon.
\end{align*}
Namely, $(y_k, t_k)$ never escapes the compact set
\begin{align*}
\Omega=\{(z, s)| 0 \leq s \leq (\alpha \epsilon)^2,
\; d_{g(s)}(z, x) < 3\alpha \epsilon \}
\end{align*}
which has bounded $|Rm|$. At each step, $|Rm|$ at least quadruples.
Therefore, this process must terminate in
finitely many steps, and property $\clubsuit$ finally holds.
Without loss of generality, we can assume property $\clubsuit$ holds already at
the point $(y_0, t_0) \in \Omega$. Define
\begin{align*}
P= \{(z,s)| 0 \leq s \leq t_0, \quad d_{g(s)}(z, y_0) < Q^{-\frac12}\}.
\end{align*}
The triangle inequality and property $\clubsuit$ imply that $|Rm| \leq 4Q$
holds in $P$.
Let $\tilde{g}(t)=4Qg(\frac{t}{4Q})$; then
\begin{align*}
P= \{(z,s)| 0 \leq s \leq 4Qt_0, \quad d_{\tilde{g}(s)}(z, y_0) < 2\}.
\end{align*}
From now on, we do all the calculations under the metric
$\tilde{g}(t)$.
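For the record, the parabolic rescaling $\tilde{g}(t)=4Q\,g(\frac{t}{4Q})$ transforms the basic quantities as follows (a routine check):

```latex
d_{\tilde{g}(t)}(z, w)=2\sqrt{Q}\; d_{g(t/(4Q))}(z, w), \qquad
|Rm|_{\tilde{g}(t)}=\frac{1}{4Q}\,|Rm|_{g(t/(4Q))}, \qquad
[0, t_0] \longmapsto [0, 4Q t_0].
```

In particular, the radius $Q^{-\frac12}$ becomes $2$, and the bound $|Rm| \leq 4Q$ from property $\clubsuit$ becomes $|Rm|_{\tilde{g}} \leq 1$ on $P$.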
Define $\eta(z, t)= 5\phi(d(z, y_0) + 100mt)$,
where $\phi$ is the same cutoff function as before: it equals $1$ on $(-\infty, 1]$, decreases to zero on $[1, 2]$, and satisfies $-\phi'' \leq 10 \phi$, $(\phi')^2 \leq 10\phi$.
In $P$, we calculate
\begin{align*}
\snorm{\nabla \eta}{}^2&= 25(\phi')^2 \snorm{\nabla d}{}^2 = 25 (\phi')^2 \leq 250 \phi=50\eta,\\
\square \eta
&=5(\square d + 100m) \phi' -5\phi''
\leq -5\phi'' \leq 10 \eta,\\
\square \eta^{-4} &=-4 \eta^{-5} \square
\eta -20 \eta^{-6}\snorm{\nabla \eta}{}^2 \geq -40 \eta^{-4} -1000 \eta^{-5}
=(-40\eta^2- 1000\eta) (\eta^{-6})\\
&\geq -6000(\eta^{-4})^{\frac32}.
\end{align*}
On the other hand, in $P$, we have
\begin{align*}
\square \{|Rm|^2(1-\frac{t}{32\alpha})\} &\leq 16|Rm|^3(1-\frac{t}{32\alpha})
- \frac{1}{32\alpha} |Rm|^2\\
&\leq (16- \frac{1+16t}{32\alpha})|Rm|^2 \\
&\leq (16- \frac{1+16t}{32\alpha})|Rm|^3 \\
&\leq (16- \frac{1+16t}{32\alpha})\{|Rm|^2(1-\frac{t}{32\alpha})\}^{\frac32}\\
&\leq -6000\{|Rm|^2(1-\frac{t}{32\alpha})\}^{\frac32}.
\end{align*}
In these inequalities, we used the facts that $|Rm| \leq 1$, $16- \frac{1+16t}{32\alpha}<-6000<0$ and $1-\frac{t}{32\alpha}>0$
in $P$.
The bound $|Rm| \leq 1$ is guaranteed by the choice of $P$. Recall $\alpha=\frac{1}{10^6m}$, so $16- \frac{1+16t}{32\alpha}<-6000$ is
obvious. To prove $1-\frac{t}{32\alpha}>0$, we note that
$Q=|Rm|_{g(t_0)}(y_0)< \frac{\alpha}{t_0} + \epsilon^{-2}$, so we have
\begin{align*}
Qt_0< \alpha + t_0 \epsilon^{-2} < \alpha(1+\alpha), \quad
1-\frac{t}{32\alpha} \geq 1- \frac{4Qt_0}{32\alpha}> 1 -\frac{1+\alpha}{8}>0.
\end{align*}
It follows that
\begin{align*}
\square \{|Rm|^2(1-\frac{t}{32\alpha})\}
<-6000 \{|Rm|^2(1-\frac{t}{32\alpha})\}^{\frac32}
\end{align*}
in $P$.
Therefore, $\eta^{-4}$ is a super solution of $\square F=-6000F^{\frac32}$, while $|Rm|^2(1-\frac{t}{32\alpha})$ is a sub solution of this equation. Moreover,
\begin{align*}
&|Rm|^2 (1-\frac{t}{32\alpha}) < \frac{1}{4Q} \leq \frac14 (\alpha \epsilon)^2 <\frac{1}{625} < \eta^{-4}, \quad \textrm{whenever} \;
t=0.\\
&|Rm|^2 (1-\frac{t}{32\alpha})< \infty= \eta^{-4},
\quad \textrm{whenever} \; d(z, y_0)=2.
\end{align*}
Therefore, for every point in $P$, $|Rm|^2(1-\frac{t}{32\alpha})$ is controlled
by $\eta^{-4}$. In particular, under metric $\tilde{g}$, at point $(y_0, 4Qt_0)$,
we have
\begin{align*}
|Rm|^2(y_0)(1-\frac{4Qt_0}{32\alpha}) &\leq \eta(y_0, 4Qt_0)^{-4}\\
&=\{5\phi(400mQt_0)\}^{-4}\\
&\leq 5^{-4} \{\phi(400m\alpha(1+\alpha))\}^{-4}\\
&\leq 5^{-4} \{\phi(1)\}^{-4}=\frac{1}{625}.
\end{align*}
On the other hand, recalling that $\alpha= \frac{1}{10^6m}$, we have
\begin{align*}
|Rm|^2(y_0)(1-\frac{4Qt_0}{32\alpha})=\frac{1}{16}
(1-\frac{Qt_0}{8\alpha})> \frac{1}{16} (1-\frac{1+\alpha}{8})> \frac{1}{32}.
\end{align*}
It follows that $\frac{1}{32} < \frac{1}{625}$. Contradiction!
\end{proof}
As a corollary of this proposition, we can obtain the improved
pseudolocality theorem.
\begin{theorem}[\textbf{Improved Pseudolocality Theorem}]
There exists $\eta=\eta(m, \kappa)>0$ with the following property.
Suppose $\{(X, g(t)), 0 \leq t \leq r_0^2\}$ is a compact orbifold Ricci flow solution.
Assume that at $t=0$ we have $|\hat{Rm}|(y) \leq r_0^{-2}$
for every $y \in B(x, r_0)$, and $\Vol B(x, r_0) \geq \kappa r_0^m$. Then the estimate
$|\hat{Rm}|_{g(t)}(y) \leq (\eta r_0)^{-2}$ holds whenever $0 \leq t \leq (\eta r_0)^2$,
$d_{g(t)}(y, x) < \eta r_0$.
\label{theorem: improvedbound}
\end{theorem}
\begin{remark}
Suppose $c_0 \geq -c$ is a constant; then the ``normalized flow" $\D{g}{t}=-Ric + c_0g$
is just a parabolic rescaling of the flow $\D{g}{t}=-2Ric$. So
Theorem~\ref{theorem: improvedbound} also holds for ``normalized" Ricci flow solutions
$\D{g}{t}=-Ric + c_0g$. However, the constant $\eta$ will then also depend on $c$.
\end{remark}
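The parabolic rescaling behind this remark can be sketched as follows; the computation is written under the convention $\D{g}{t}=-Ric+c_0 g$, and an additional constant time reparametrization converts between the $-Ric$ and $-2Ric$ normalizations.

```latex
\textrm{Let}\quad \lambda(t)=e^{-c_0 t},\quad s(t)=\int_0^t \lambda(u)\,du,
\quad h(s)=\lambda(t)\,g(t).
\quad\textrm{Since $Ric_{\lambda g}=Ric_{g}$ for constant $\lambda>0$,}
\quad
\D{h}{s}
=\frac{dt}{ds}\Big(\lambda' g+\lambda \D{g}{t}\Big)
=\frac{1}{\lambda}\big(-c_0\lambda\, g+\lambda(-Ric + c_0 g)\big)
=-Ric_h.
```

Thus $h$ evolves by the unnormalized flow, and for $|c_0| \leq c$ the distortion of times and distances caused by $\lambda$ is bounded in terms of $c$, which is why $\eta$ acquires a dependence on $c$.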
\section{Weak Compactness Theorem}
\textbf{ In this section, $\kappa, E$ are fixed constants. $\hslash, \xi$
are small constants depending only on $\kappa$ and $m$ by Definition~\ref{definition: hslash}
and Definition~\ref{definition: xi}.}
\subsection{Choice of Constants}
\begin{proposition}[Bando,~\cite{Ban90}]
There exists a constant $\hslash_a=\hslash_a(m, \kappa)$ such that
the following property holds.
If $X$ is a $\kappa$-noncollapsed, Ricci-flat ALE orbifold
with a unique singularity and small energy, i.e.,
$\int_X |Rm|^{\frac{m}{2}}d\mu<\hslash_a$,
then $X$ is a flat cone.
\label{proposition: gap}
\end{proposition}
\begin{proposition}
Suppose $B(p, \rho)$ is a smooth, Ricci-flat, $\kappa$-noncollapsed geodesic
ball and $\partial B(p, \rho) \neq \emptyset$. Then there is a small constant
$\hslash_b=\hslash_b(m, \kappa)<(\frac{1}{2C_0})^{\frac{m}{2}}$ such that
\begin{align}
\sup_{B(p, \frac{\rho}{2})} |\nabla^k Rm| \leq
\frac{C_k}{\rho^{2+k}} \{\int_{B(p, \rho)} |Rm|^{\frac{m}{2}} d\mu\}^{\frac{2}{m}}
\label{eqn: econ}
\end{align}
whenever $\int_{B(p, \rho)} |Rm|^{\frac{m}{2}} d\mu < \hslash_b$. Here $C_k$ are constants depending only on the dimension $m$.
In particular, $B(p, \rho)$ satisfies the energy concentration
property: if $|Rm|(p) \geq \frac{1}{2 \rho^2}$, then we have
\begin{align*}
\int_{B(p, \frac{\rho}{2})} |Rm|^{\frac{m}{2}} d\mu > \hslash_b.
\end{align*}
\label{proposition: econ}
\end{proposition}
\begin{definition}
Let $\hslash \triangleq \min\{\hslash_a, \hslash_b\}$.
\label{definition: hslash}
\end{definition}
\begin{proposition}
There is a small constant $\xi_a(\kappa, m)$ such that
the following properties hold.
Suppose that $\{(X, g(t)), 0 \leq t \leq 1\}$ is a Ricci flow
solution on a compact orbifold $X$ which
is $\kappa$-noncollapsed, $\Omega \subset X$, and
$|Rm|_{g(0)}(x) \leq \xi_a^{-\frac32}$
for every point $x \in \Omega$. Then we have
\begin{align*}
|Rm|_{g(t)}(x) \leq \frac{1}{10000m^2}\xi_a^{-2}, \quad
\forall \; x \in \Omega', \; t \in [0, 9 \xi_a^2].
\end{align*}
where $\Omega'=\{ y \in \Omega | d_{g(0)}(y, \partial \Omega) > \xi_a^{\frac34}\}$.
\label{proposition: xia}
\end{proposition}
\begin{proof}
Suppose not. Then there are a sequence of solutions $\{(X_i, g_i(t)), 0 \leq t \leq
1\}$, sets $\Omega_i$, $\Omega_i'$, and constants $\xi_i \to 0$ violating the statement.
Blow them up by the factor $\xi_i^{-\frac32}$: let $\tilde{g}_i(t)= \xi_i^{-\frac32} g_i(\xi_i^{\frac32}
t)$. We can choose a sequence of points
$y_i \in \Omega_i', t_i \in [0, 9 \xi_i^{\frac12}]$ satisfying
\begin{align}
|Rm|_{\tilde{g}_i(t_i)}(y_i)> \frac{1}{10000m^2} \xi_i^{-\frac12} \longrightarrow
\infty.
\label{eqn: rmbig}
\end{align}
Note that under metric $\tilde{g}_i(0)$, $|Rm|\leq 1$ in $B(y_i,1)$,
so inequality (\ref{eqn: rmbig}) contradicts the improved
pseudolocality theorem!
\end{proof}
\begin{proposition}
Suppose $X$ is an orbifold which is $\kappa$-noncollapsed on
scale $1$ and $|Rm| \leq 1$ in the smooth geodesic ball $B(x, 1) \subset
X$. Then there is a small constant $\xi_b=\xi_b(\kappa, m)$ such that
\begin{align*}
\frac{\Vol(B(y, r))}{r^m} > \frac78 \omega(m)
\end{align*}
whenever $y \in B(x, \frac12)$ and $r < \xi_b^{\frac12}$.
\label{proposition: xib}
\end{proposition}
\begin{definition}
Define $\xi = \min\{\xi_a, \xi_b, (10000m^2 \frac{E}{\hslash})^{-4}\}$.
\label{definition: xi}
\end{definition}
\subsection{Refined Sequences}
The main theorems of this section are almost the same as those of~\cite{CW3}.
\begin{definition}
Define
$\mathscr{O}(m, c, \sigma, \kappa, E)$ as the moduli space of
compact orbifold Ricci flow solutions $\{(X, g(t)), -1 \leq t \leq 1\}$
satisfying:
\begin{enumerate}
\item $\D{}{t}g(t) = -Ric_{g(t)} + c_0 g(t)$ where $c_0 $ is a
constant satisfying $|c_0| \leq c$.
\item $\displaystyle \norm{R}{L^{\infty}(X \times [-1, 1])} \leq \sigma$.
\item $\displaystyle \frac{\Vol_{g(t)}(B_{g(t)}(x, r))}{r^m} \geq \kappa$
for all $x \in X, t\in [-1, 1], r \in (0, 1]$.
\item $ \{\#Sing(X)\} \cdot \hslash + \int_X |Rm|_{g(t)}^{\frac{m}{2}} d\mu_{g(t)} \leq E$
for all $t \in [-1, 1]$.
\end{enumerate}
\label{definition: moduli}
\end{definition}
Clearly, in order for this moduli space to be a genuine generalization of
the $\mathscr{M}(m, c, \sigma, \kappa, E)$ defined in~\cite{CW3},
we need $m$ to be an even number.
We want to show the weak compactness and
uniform isoperimetric constant bound of
$\mathscr{O}(m, c, \sigma, \kappa, E)$.
As in~\cite{CW3}, we use refined sequences as a tool to study
$\mathscr{O}(m, c, \sigma, \kappa, E)$. After we obtain the
weak compactness theorem for refined sequences, the properties of
$\mathscr{O}(m, c, \sigma, \kappa, E)$ follow from routine blowup
and bubble tree arguments. However, we will give a simpler
proof of the weak compactness theorem for refined sequences.
As in~\cite{CW3}, we define refined sequences.
\begin{definition}
Let $\{ (X_i^m, g_i(t)), -1 \leq t \leq 1 \}$ be a sequence of Ricci flows on
closed orbifolds $X_i^m$. It is called a
refined sequence if the following properties are satisfied for every $i$.
\begin{enumerate}
\item $\displaystyle \D{}{t} g_{i} = -Ric_{g_i} + c_i g_i$ and
$\displaystyle \lim_{i \to \infty} c_i =0$.
\item Scalar curvature norm tends to zero:
$\displaystyle \lim_{i \to \infty} \norm{R}{L^{\infty}(X_i \times [-1, 1])}=0.$
\item For every radius $r$, there exists $N(r)$ such that
$(X_i, g_i(t))$ is $\kappa$-noncollapsed on scale $r$ for every $t \in [-1, 1]$
whenever $i>N(r)$.
\item Energy uniformly bounded by $E$:
\begin{align*}
\{\# (Sing(X_i))\} \cdot \hslash +
\int_{X_i} |Rm|_{g_i(t)}^{\frac{m}{2}} d\mu_{g_i(t)} \leq E, \qquad \forall \; t
\in [-1,1].
\end{align*}
\end{enumerate}
\label{definition: refined}
\end{definition}
In order to show the weak compactness theorem for every refined
sequence, we need two auxiliary concepts.
\begin{definition}
A refined sequence $\{(X_i, g_i(t)), -1 \leq t \leq 1\}$
is called an E-refined sequence under constraint $H$ if
under metric $g_i(t)$, we have
\begin{align}
\{\# Sing(B(x_0, Q^{-\frac12}))\} \cdot \hslash + \int_{B(x_0,Q^{-\frac12})} |Rm|^{\frac{m}{2}}
d\mu \geq \hslash,
\quad \forall t \in [t_0 , t_0+ \xi^2 Q^{-1}],
\label{eqn: econ}
\end{align}
whenever $(x_0,t_0) \in X_i \times [-\frac12, \frac12]$
and $Q=|Rm|_{g_i(t_0)}(x_0) \geq H$.
\end{definition}
\begin{definition}
An E-refined sequence $\{(X_i, g_i(t)), -1 \leq t \leq 1\}$
under constraint $H$
is called an EV-refined sequence under constraint $(H, K)$ if under metric $g_i(t)$,
we have
\begin{align}
\frac{\Vol(B(x, r))}{r^m} \leq K
\end{align}
for every $i$ and $(x, t) \in X_i \times [-\frac14, \frac14]$, $r \in (0, 1]$.
\end{definition}
When the meaning is clear, we omit the constraint when we mention
$E$-refined and $EV$-refined sequences. Clearly, an E-refined sequence is a refined sequence whose
center-part solutions satisfy the energy concentration property,
and an EV-refined sequence is an E-refined sequence whose
center-part solutions have bounded volume ratios.
For convenience, we also call a pointed normalized Ricci flow sequence
$\{ (X_i, x_i, g_i(t)), -1 \leq t \leq 1 \}$ an (E-, EV-)refined
sequence if $\{ (X_i, g_i(t)), -1 \leq t \leq 1 \}$ is an (E-, EV-)refined
sequence.
Since volume ratio and energy are scaling invariant,
blowing up an (E-, EV-)refined sequence at appropriate points
generates a new (E-, EV-)refined sequence with smaller constraints.\\
\begin{remark}
The definition of refined sequence is the same as in~\cite{CW3}.
However, the definition of $E$- and $EV$-refined sequences here is slightly
different. This is for the
convenience of a simplified proof of the weak compactness theorem
for refined sequences.
\end{remark}
We first prove the weak compactness of EV-refined sequences.
\begin{proposition}[\textbf{$C^{1, \frac12}$-Weak Compactness of EV-refined Sequence}]
Suppose that $\{(X_i, x_i, g_i(t)), -1 \leq t \leq 1\}$
is an EV-refined sequence. Then
\begin{align*}
(X_i, x_i, g_i(0)) \stackrel{C^{1, \frac12}}{\longrightarrow}
(X_{\infty}, x_{\infty}, g_{\infty})
\end{align*}
where $X_{\infty}$ is a Ricci flat ALE orbifold.
\label{proposition: EVweakc1h}
\end{proposition}
\begin{proof}
Since the volume ratio upper bound and the energy concentration property hold, it
is not hard to see that
\begin{align*}
(X_i, x_i, g_i(0)) \overset{C^{1, \frac12}}{\longrightarrow} (X_{\infty}, x_{\infty}, g_{\infty})
\end{align*}
for some limit metric space $X_{\infty}$, which has finitely many singularities and whose
regular part is a $C^{1, \frac12}$ manifold. Moreover, by the
improved pseudolocality theorem and the almost scalar flat property of the limit sequence,
every smooth open set of $X_{\infty}$ is isometric to an open set of a time slice of a scalar flat, hence
Ricci flat, Ricci flow solution. In other words, every open set of the smooth part of $X_{\infty}$
is Ricci flat. It is not hard to see that
\begin{align*}
\sharp Sing(X_{\infty}) \cdot \hslash + \int_{X_{\infty}}
|Rm|^{\frac{m}{2}} d\mu \leq E.
\end{align*}
This energy bound forces the tangent space at every singular point
to be a flat cone, though possibly with more than one end. Also, the
tangent cone at infinity is a flat cone (c.f.~\cite{BKN},~\cite{An90},~\cite{Tian90}). In other
words, $X_{\infty}$ is a Ricci flat, smooth, ALE multifold with finite energy.
We need to show that this limit is an orbifold, i.e., for every $p \in X_{\infty}$,
the tangent cone at $p$ is a flat cone with a unique end. This can be done through
the following two steps.
\textit{Step 1. \quad Every singular point of $X_{\infty}$ cannot sit on a
smooth component. In other words, suppose $p$ is a singular point of $X_{\infty}$,
then there exists $\delta_0$ depending on
$p$ such that every component of $\partial B(p, \delta) $ has nontrivial $\pi_1$ whenever
$\delta < \delta_0$.}
If this statement fails, we can choose $\delta_i \to 0$ such
that
\begin{align*}
|E_{\delta_i}| > \frac{7}{8} m\omega(m) \delta_i^{m-1}
\end{align*}
where $E_{\delta_i}$ is a component of $\partial B(p, \delta_i)$.
By taking subsequence if necessary, we can choose
$X_i \ni p_i \to p$ satisfying
\begin{align*}
|E_{\delta_i}^i| > \frac78 m\omega(m) \delta_i^{m-1}
\end{align*}
where $E_{\delta_i}^i$ is some component of $\partial B(p_i, \delta_i)$.
Moreover, we can let $p_i$ be the point with
largest Riemannian curvature in $B(p_i, \rho)$
for some fixed small number $\rho$.
Define
\begin{itemize}
\item
$ r_i \triangleq \sup \{r | r< \delta_i, \quad \textrm{the largest component of}\; \partial B(p_i, r) \; \textrm{has area ratio}
\leq \frac78 m \omega(m)\}$
\item $r'_i \triangleq \inf \{r |\textrm{the ball} \;
B(p_i, r) \; \textrm{has volume ratio} \leq \frac34 \omega(m) \}$.
\end{itemize}
We claim that $r_i' \leq C Q_i^{-\frac12}$, where $Q_i = |Rm|(p_i)$ and $C$ is
a uniform constant. Otherwise, after rescaling $Q_i$ to be $1$ and fixing the central time slice
to be time $0$, we can take a limit of a new $EV$-refined sequence
\begin{align*}
\{(X_i, p_i, \tilde{g}_i(t)), 0 \leq t \leq \xi^2 \} \overset{C^{1, \frac12}}{\to}
\{(\tilde{X}_{\infty}, p_{\infty}, \tilde{g}_{\infty}(t)), 0 \leq t \leq
\xi^2\},
\end{align*}
where the limit is a stable (Ricci flat) Ricci flow solution on
a complete manifold $\tilde{X}_{\infty}$. Moreover, the convergence is
smooth when $t>0$. Therefore, $(\tilde{X}_{\infty}, p_{\infty}, \tilde{g}_{\infty}(0))$ is
isometric to $(\tilde{X}_{\infty}, p_{\infty},
\tilde{g}_{\infty}(\xi^2))$. This forces
$(\tilde{X}_{\infty}, p_{\infty}, \tilde{g}_{\infty}(\xi^2))$
to satisfy
\begin{align*}
\hslash \leq \int_{\tilde{X}_{\infty}} |Rm|^{\frac{m}{2}} d\mu \leq E,
\quad
\lim_{r \to \infty} \frac{\Vol(B(p_{\infty}, r))}{r^m} \geq
\frac34 \omega(m)
\end{align*}
simultaneously. This is impossible!
Therefore, $r_i' \leq C Q_i^{-\frac12} \to 0$.
This estimate of $r_i'$ implies that $r_i$ is
well defined. Moreover, a similar blowup argument shows that
$\displaystyle \lim_{i \to \infty} \frac{r_i}{r_i'}=\infty$.
Clearly, $r_i < \delta_i \to 0$. Rescale $r_i$ to be $1$ to obtain
a new EV-refined sequence
\begin{align*}
\{ (X_i^{(1)}, x_i^{(1)}, g_i^{(1)}(t)), -1 \leq t \leq 1\}
\end{align*}
where $x_i^{(1)}=p_i$. We have convergence
\begin{align*}
(X_i^{(1)}, x_i^{(1)}, g_i^{(1)}(0)) \stackrel{C^{1, \frac12}}{\longrightarrow}
(X_{\infty}^{(1)}, x_{\infty}^{(1)}, g_{\infty}^{(1)}).
\end{align*}
By our choice of $r_i$, for every $r>1$, there is a component of
$\partial B(x_{\infty}^{(1)}, r)$ whose area is at least $\frac78 m\omega(m)r^{m-1}$.
Therefore, the ALE space $X_{\infty}^{(1)}$
has an end whose volume growth is greater than $\frac78 \omega(m) r^m > \frac12 \omega(m)
r^m$. Detach $X_{\infty}^{(1)}$ as a union of orbifolds. One of them
must be an ALE space whose volume growth at infinity is exactly $\omega(m) r^m$;
the Ricci flatness forces this ALE component to be isometric to
Euclidean space. Since one component of $\partial B(x_{\infty}^{(1)}, 1)$
has area $\frac78 m\omega(m)$, $X_{\infty}^{(1)}$ itself
cannot be Euclidean. Therefore, $X_{\infty}^{(1)}$ must contain
a singular point which connects to a Euclidean space. In other
words, $X_{\infty}^{(1)}$ contains a singular point $q$ which sits in a smooth
component.
If $X_{\infty}^{(1)}$ has more than one singularity,
we can blow up at the point $q$ as before and obtain a
new bubble $X_{\infty}^{(2)}$. However, a fixed amount of energy
(at least $\hslash$) will be lost during this process. Therefore such a
process must stop after finitely many steps. Without loss of generality,
we can assume that $X_{\infty}^{(1)}$ has a unique singularity $q$.
By the choice of $p_i$, $\displaystyle x_{\infty}^{(1)} = \lim_{i \to \infty} p_i$
must be singular if
$X_{\infty}^{(1)}$ contains a singular point. It follows that
$X_{\infty}^{(1)}$ has a unique singularity $x_{\infty}^{(1)}$.
From the previous argument, we already know that $x_{\infty}^{(1)}=q$
connects to a
Euclidean space. Since $x_{\infty}^{(1)}$ is the unique singularity, every
geodesic $\gamma$ connecting $x_{\infty}^{(1)}$ and some point $x$ in the
Euclidean space must stay in that Euclidean space. Therefore,
$\partial B(x_{\infty}^{(1)}, 1)$ has a component which is a standard sphere
whose area is $m\omega(m)> \frac78 m\omega(m)$. So for large
$i$, the largest component of $\partial B(p_i, r_i)$
has area strictly greater than $\frac78 m\omega(m)r_i^{m-1}$.
This contradicts the choice of $r_i$!
\textit{Step 2. Every singular point of $X_{\infty}$ has only one end. In other words, suppose $p$ is a singular
point of $X_{\infty}$, then there exists $\delta_0$ depending on $p$ such that $\partial B(p, \delta)$
is connected whenever $\delta < \delta_0$.}
Suppose not. Then there is a small $\delta$ such that
$\partial B(p, \delta)$ is not connected. Choose $x, y$
in two different components of $\partial B(p, \delta)$. Let $\gamma$
be the shortest geodesic connecting $x$ and $y$. It must pass
through $p$. Suppose $x_i, y_i, p_i \in X_i$, $\gamma_i \subset X_i$
satisfy
\begin{align*}
x_i \to x, \quad y_i \to y, \quad p_i \to p, \quad \gamma_i
\to \gamma
\end{align*}
where $\gamma_i$ is the shortest geodesic connecting $x_i, y_i$.
For every $z \in X_i$, we can define
\begin{align*}
\mathscr{R} (z) \triangleq
\sup \{r | (\# Sing(B(z, r)))\cdot \hslash + \int_{B(z,r)} |Rm|^{\frac{m}{2}} d\mu=\frac12 \hslash\}
\end{align*}
under the metric $g_i(0)$. Clearly, $\mathscr{R}(z)=0$ iff $z$
is singular. On $\gamma_i$, let $q_i$ be the point with the smallest
$\mathscr{R}$ value and define $r_i=\mathscr{R}(q_i)$.
Note that on the orbifold $X_i$, every shortest geodesic connecting two
smooth points never passes through an orbifold singularity.
This implies that $r_i= \mathscr{R}(q_i)>0$.
Clearly, $r_i \to 0$. Now we
rescale $r_i$ to be $1$ to obtain a new EV-refined sequence
$\{(X_i, q_i, g_i^{(1)}(t)), -1 \leq t \leq 1\}$
and take limit
\begin{align*}
(X_i, q_i, g_i^{(1)}(0)) \stackrel{C^{1, \frac12}}{\longrightarrow}
(X_{\infty}^{(1)}, q_{\infty}, g_{\infty}^{(1)}).
\end{align*}
Clearly, $X_{\infty}^{(1)}$ contains a straight line passing
through $q_{\infty}$ which we denote as $\gamma_{\infty}$.
After rescaling, every unit geodesic ball centered at a point of
$\gamma_{i}$ contains energy not more than $\frac12 \hslash$. The
energy concentration property forces $|Rm|$ to be uniformly
bounded around $\gamma_i$. So $\gamma_{\infty}$ is a straight line
free of singular points. Detach $X_{\infty}^{(1)}$ as a union of
orbifolds. Then $\gamma_{\infty}$ must stay in one orbifold
component. Therefore, there is an orbifold component containing
a straight line. Then the splitting theorem for Ricci flat
orbifolds applies and forces that component to be $\R \times
N^{m-1}$. The ALE condition forces that component to be
Euclidean space. So $X_{\infty}^{(1)}$ contains a Euclidean
component. From Step 1, we know that no singularity can sit
on a smooth component. Therefore, $X_{\infty}^{(1)}$ itself must
be the Euclidean space. So we actually have convergence
\begin{align}
\{(X_i, q_i, g_i^{(1)}(t)), 0 < t \leq \xi^2 \} \stackrel{C^{\infty}}{\longrightarrow}
\{(X_{\infty}^{(1)}, q_{\infty}, g_{\infty}^{(1)}(t)), 0 < t \leq \xi^2 \}.
\label{eqn: sconv}
\end{align}
This forces that $(X_{\infty}^{(1)}, q_{\infty}, g_{\infty}^{(1)}(t))$ is
Euclidean space for every $t \in [0, \xi^2]$.
Now return to the choice of $q_i$: the equality
\begin{align*}
\{\# Sing(B(q_i, r_i))\} \cdot \hslash
+ \int_{B(q_i, r_i)} |Rm|^{\frac{m}{2}} d\mu =\frac12 \hslash
\end{align*}
actually reads $\int_{B(q_i, r_i)} |Rm|^{\frac{m}{2}} d\mu = \frac12 \hslash$,
since $B(q_i, r_i)$ is free of singularity.
There is a point $q'_i \in B(q_i, r_i)$ satisfying
\begin{align*}
Q_i' \triangleq |Rm|(q'_i) > \left(\frac{\hslash}{2\Vol(B(q_i, 2r_i))} \right)^{\frac{2}{m}}
> \left(\frac{\hslash}{4} \cdot
\frac{1}{\omega(m)(2r_i)^m } \right)^{\frac{2}{m}}
> (\frac{\hslash}{4^m \omega(m)})^{\frac{2}{m}} r_i^{-2}
\to \infty.
\end{align*}
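For the reader's convenience, the existence of such a point $q_i'$ follows from the mean value inequality, while the second inequality in the chain above uses the volume ratio upper bound $\Vol(B(q_i, 2r_i)) \leq 2\omega(m)(2r_i)^m$:
\begin{align*}
\sup_{B(q_i, r_i)} |Rm|^{\frac{m}{2}}
\geq \frac{1}{\Vol(B(q_i, r_i))} \int_{B(q_i, r_i)} |Rm|^{\frac{m}{2}} d\mu
= \frac{\hslash}{2 \Vol(B(q_i, r_i))}
\geq \frac{\hslash}{2 \Vol(B(q_i, 2r_i))}.
\end{align*}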
On the other hand, the no-singularity-property of
$X_{\infty}^{(1)}$ implies $Q_i' < Cr_i^{-2}$ for some
uniform constant $C$. Therefore, we have
\begin{align}
\delta^2 \triangleq (\frac{\hslash}{4^m \omega(m)})^{\frac{2}{m}} <Q_i'
r_i^2<C.
\label{eqn: twosidesq'}
\end{align}
In particular, $Q_i' \to \infty$ and therefore the energy concentration property
applies at $q_i'$:
\begin{align*}
\int_{B_{g_i(t)}(q_i', (Q_i')^{-\frac12})} |Rm|^{\frac{m}{2}} d\mu \geq
\hslash, \quad \; \forall \; 0 \leq t \leq \xi^2 (Q_i')^{-1}.
\end{align*}
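For clarity, the radius and time conversions used in the next step follow directly from inequality (\ref{eqn: twosidesq'}):
\begin{align*}
(Q_i')^{-\frac12} \leq \delta^{-1} r_i, \qquad
\xi^2 (Q_i')^{-1} \geq \frac{\xi^2}{C} r_i^2.
\end{align*}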
Combining this with inequality (\ref{eqn: twosidesq'}) implies
\begin{align*}
\int_{B_{g_i(t)}(q_i', \delta^{-1}r_i)} |Rm|^{\frac{m}{2}} d\mu \geq
\hslash, \quad \; \forall \; 0 \leq t \leq \frac{\xi^2}{C}r_i^2.
\end{align*}
After rescaling, we have
\begin{align*}
\int_{B_{g_i^{(1)}(\bar{t})}(q_i', \delta^{-1})} |Rm|^{\frac{m}{2}}
d\mu \geq \hslash, \quad
\bar{t}=\frac{\xi^2}{C}.
\end{align*}
The smooth convergence (\ref{eqn: sconv}) implies
that the energy of $(X_{\infty}^{(1)}, g_{\infty}^{(1)}(\bar{t}))$ is not less than
$\hslash$. This
contradicts the fact that $(X_{\infty}^{(1)}, g_{\infty}^{(1)}(\bar{t}))$
is a Euclidean space! \\
Therefore, every singular point of $X_{\infty}$ has a unique
nontrivial end, i.e., it has tangent cone $\R^{m} / \Gamma$
for some nontrivial $\Gamma$. So $X_{\infty}$ is an orbifold.
\end{proof}
\begin{proposition}
Every $E$-refined sequence is an $EV$-refined sequence.
\label{proposition: eev}
\end{proposition}
The next thing we need to do is to improve the convergence
topology from $C^{1, \frac12}$ to $C^{\infty}$. In light of Shi's estimate, the following
backward pseudolocality property assures this improvement.
\begin{proposition}
Suppose $\{(X_i, x_i, g_i(t)), -1 \leq t \leq 1\}$ is an E-refined
sequence satisfying $|Rm|_{g_i(0)} \leq r^{-2}$ in $B_{g_i(0)}(x_i, r)$ for
some $r \in (0, 1)$.
Then there is a uniform constant $C$ depending on this sequence such that
\begin{align*}
|Rm|_{g_i(t)}(y) \leq Cr^{-2}, \; \textrm{whenever} \;
(y, t) \in P^-(x_i, 0, \frac12 r, 9\xi^2 r^2).
\end{align*}
\end{proposition}
\begin{proof}
Without loss of generality, we can assume $r=1$.
Suppose the statement fails. Then there are points $(y_i, t_i) \in P^-(x_i, 0, \frac12, 9 \xi^2)$
satisfying $|Rm|_{g_i(t_i)}(y_i) \to \infty$. According to Proposition~\ref{proposition: EVweakc1h}
and Proposition~\ref{proposition: eev}, we can take limit
\begin{align*}
(X_i, y_i, g_i(t_i)) \stackrel{C^{1, \frac12}}{\longrightarrow}
(Y_{\infty}, y_{\infty}, h_{\infty}),
\end{align*}
where $Y_{\infty}$ is a Ricci flat ALE orbifold and $y_{\infty}$ is
a singular point.
\begin{clm}
$B_{g_i(0)}(y_i, \xi^{\frac12}) \subset B_{g_i(t_i)}(y_i, \lambda_m \xi^{\frac12})$
for large $i$, where $\lambda_m = 1+ \frac{1}{100m}$.
\end{clm}
Actually, let $\gamma$ be the shortest geodesic connecting
$y_i$ and $p \in B_{g_i(0)}(y_i, \xi^{\frac12})$ under the metric $g_i(0)$.
By the energy concentration property, under the metric $g_i(t_i)$,
after deleting (at most) $N=\lfloor \frac{E}{\hslash} \rfloor$
geodesic balls of radius $\xi^{\frac34}$, the remaining set, which we denote as $\Omega_i$,
has Riemannian curvature uniformly bounded by $\xi^{-\frac32}$.
Therefore, according to the choice of $\xi$ (c.f. Proposition~\ref{proposition: xia}),
we know $|Rm|$ is
uniformly bounded by $\frac{1}{10000m^2} \xi^{-2}$ on $ \Omega_i' \times [t_i, 0]$
where
\begin{align*}
\Omega_i'= \{x \in \Omega_i| d_{g_i(t_i)}(x, \partial \Omega_i) \geq
\xi^{\frac34}\},
\quad [t_i, 0] \subset [t_i, t_i + 9\xi^2].
\end{align*}
As the change of length is controlled by the integral of the Ricci curvature over time, we have
\begin{align*}
dist_{g_i(t_i)}(p, y_i) &\leq e^{\frac{1}{10^6m}} length_{g_i(0)}
\gamma + N \cdot 2\xi^{\frac34}
\leq (e^{\frac{1}{10^6m}} + N \cdot 2\xi^{\frac14})\xi^{\frac12}
< \lambda_m \xi^{\frac12},
\end{align*}
where the last step follows from the choice of $\xi$.
The Claim is proved.\\
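For the reader's convenience, the length distortion used above is based on the first variation of length along the Ricci flow: for a fixed curve $\gamma$,
\begin{align*}
\frac{d}{dt}\, length_{g_i(t)}(\gamma) = -\int_{\gamma} Ric(\gamma', \gamma')\, ds.
\end{align*}
On the part of $\gamma$ staying in $\Omega_i'$, where $|Rm| \leq \frac{1}{10000m^2}\xi^{-2}$ over a time interval of length at most $9\xi^2$, this distorts lengths by a factor controlled by $e^{\frac{1}{10^6 m}}$; the remaining part of $\gamma$ is covered by at most $N$ balls of radius $\xi^{\frac34}$, contributing the term $N \cdot 2\xi^{\frac34}$.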
Since $y_i \in B_{g_i(0)}(x_i, \frac12)$, according to the choice of $\xi$,
we have
$\Vol_{g_i(0)}(B_{g_i(0)}(y_i, \xi^{\frac12})) > \frac78 \omega(m) \xi^{\frac{m}{2}}$.
On the other hand,
$C^{1, \frac12}$-convergence and volume comparison imply
\begin{align*}
\frac{\Vol_{g_i(t_i)}(B_{g_i(t_i)}(y_i, \lambda_m \xi^{\frac12}))}{(\lambda_m \xi^{\frac12})^m}
\to \frac{\Vol(B(y_{\infty}, \lambda_m \xi^{\frac12}))}{(\lambda_m
\xi^{\frac12})^m}
\leq \lim_{r \to 0} \frac{\Vol(B(y_{\infty}, r))}{r^m}
= \frac{\omega(m)}{|\Gamma(y_{\infty})|}.
\end{align*}
As the volume change is controlled by the integral of the scalar curvature,
which tends to zero, we know
\begin{align*}
\lim_{i \to \infty}
\frac{\Vol_{g_i(0)}(B_{g_i(t_i)}(y_i, \lambda_m \xi^{\frac12}))}{(\lambda_m \xi^{\frac12})^m}
= \lim_{i \to \infty}
\frac{\Vol_{g_i(t_i)}(B_{g_i(t_i)}(y_i, \lambda_m \xi^{\frac12}))}{(\lambda_m \xi^{\frac12})^m}
\leq \frac{\omega(m)}{|\Gamma(y_{\infty})|}
\leq \frac12 \omega(m).
\end{align*}
Therefore, for large $i$, we have
\begin{align*}
\frac78 \omega(m) \xi^{\frac{m}{2}}
<\Vol_{g_i(0)}(B_{g_i(0)}(y_i, \xi^{\frac12}))
\leq \Vol_{g_i(0)}(B_{g_i(t_i)}(y_i, \lambda_m \xi^{\frac12}))
<\frac34 \omega(m)(\lambda_m)^m \xi^{\frac{m}{2}}.
\end{align*}
It implies $\lambda_m^m = (1+\frac{1}{100m})^m > \frac76$; on the other hand,
$\lambda_m^m \leq e^{\frac{1}{100}} < \frac76$, which is impossible!
This contradiction establishes the backward pseudolocality.
\end{proof}
Using this backward pseudolocality theorem, we can
improve $C^{1,\frac12}$-convergence to $C^{\infty}$-convergence.
\begin{proposition}[\textbf{$C^{\infty}$-Weak Compactness of EV-refined Sequence}]
Suppose that $\{(X_i, x_i, g_i(t)), -1 \leq t \leq 1\}$
is an EV-refined sequence. Then we have
\begin{align*}
(X_i, x_i, g_i(0)) \stackrel{C^{\infty}}{\longrightarrow}
(X_{\infty}, x_{\infty}, g_{\infty})
\end{align*}
where $X_{\infty}$ is a Ricci flat ALE orbifold.
\label{proposition: EVweak}
\end{proposition}
\begin{proposition}
Every refined sequence is an $E$-refined sequence.
\end{proposition}
\begin{proof}
Suppose not.
Then by a delicate selection of base points and
blowup, we can find an $E$-refined sequence
$\{(X_i, x_i, g_i(t)), -1 \leq t \leq 1\}$ under constraint $2$ satisfying
$|Rm|_{g_i(0)}(x_i)=1$ such that energy concentration fails at
$(x_i, 0)$, i.e.,
\begin{align*}
\sharp(Sing(B_{g_i(t_i)}(x_i, 1))) \cdot \hslash + \int_{B_{g_i(t_i)}(x_i, 1)} |Rm|^{\frac{m}{2}} d\mu <
\hslash
\end{align*}
for some $t_i \in [0, \xi^2]$. This means that, under metric
$g_i(t_i)$,
$B(x_i, 1)$ is free of singularity and
$\int_{B(x_i, 1)} |Rm|^{\frac{m}{2}} d\mu <\hslash$.
The energy concentration
property implies that $|Rm| \leq 4$ in $B(x_i, \frac12)$ under the
metric $g_i(t_i)$. Therefore, by the backward pseudolocality,
we have
\begin{align*}
|Rm|_{g_i(t)}(x) \leq 4C, \quad \forall \; (x, t) \in P^-(x_i, t_i, \frac14, \frac94 \xi^2).
\end{align*}
In particular, $|Rm|_{g_i(0)}(y) \leq 4C$ for every $y \in B_{g_i(t_i)}(x_i, \frac14)$.
Based at $(x_i, t_i)$, we can take the smooth limit of
$P^-(x_i, t_i, \frac18, 2\xi^2) \subset P^-(x_i, t_i, \frac14, \frac94 \xi^2)$,
which will be a Ricci flat Ricci flow solution. Therefore,
\begin{align}
\lim_{i \to \infty} |Rm|_{g_i(t_i)}(x_i) = \lim_{i \to
\infty} |Rm|_{g_i(0)}(x_i)=1.
\label{eqn: backextend}
\end{align}
On the other hand, the sequence $\{(X_i, x_i, g_i(t)), -1 \leq t \leq 1\}$
is an $EV$-refined sequence by Proposition~\ref{proposition: eev}, so
the $C^{\infty}$-weak compactness theorem for $EV$-refined sequences (under constraint $(2, K_0)$) implies
\begin{align*}
(X_i, x_i, g_i(t_i)) \overset{C^{\infty}}{\longrightarrow} (X_{\infty}, x_{\infty}, g_{\infty})
\end{align*}
for some Ricci flat ALE orbifold $X_{\infty}$.
Clearly, $B(x_{\infty}, 1)$ is free of singularity. So Moser
iteration of $|Rm|$ implies that $|Rm|(x_{\infty})<\frac12$.
It follows that $\displaystyle \lim_{i \to \infty} |Rm|_{g_i(t_i)}(x_i) \leq
\frac12$, which contradicts equation (\ref{eqn: backextend})!
\end{proof}
The following theorem now follows directly.
\begin{theorem}[\textbf{Weak compactness of refined sequence}]
Suppose $\{(X_i, x_i, g_i(t)), -1 \leq t \leq 1\}$
is a refined sequence. Then we have
\begin{align*}
(X_i, x_i, g_i(0)) \stackrel{C^{\infty}}{\longrightarrow}
(X_{\infty}, x_{\infty}, g_{\infty})
\end{align*}
for some Ricci flat, ALE orbifold $X_{\infty}$.
\label{theorem: refinedweak}
\end{theorem}
\subsection{Applications of Refined Sequences}
After we obtain this smooth weak convergence, we can use refined
sequences as a tool to study the moduli space
$\mathscr{O}(m, c, \sigma, \kappa, E)$, which is defined at the
beginning of this section. Using the same argument as
in~\cite{CW3}, we can obtain the following theorems.
\begin{theorem}[\textbf{No Volume Concentration and Weak Compactness}]
$\mathscr{O}(m, c, \sigma, \kappa, E)$ satisfies the following two
properties.
\begin{itemize}
\item No volume concentration. There is a constant $K$ such that
\begin{align*}
\Vol_{g(0)}(B_{g(0)}(x, r)) \leq Kr^m
\end{align*}
whenever $r \in (0, K^{-1}]$, $x \in X$,
$\{(X, g(t)), -1 \leq t \leq 1\} \in \mathscr{O}(m, c, \sigma, \kappa, E)$.
\item Weak compactness. If $\{ (X_i, x_i, g_i(t)) , -1 \leq t \leq 1\} \in \mathscr{O}(m, c, \sigma, \kappa, E)$
for every $i$, then by passing to a subsequence if necessary, we have
\begin{align*}
(X_i, x_i, g_i(0)) \sconv (\hat{X}, \hat{x}, \hat{g})
\end{align*}
for some $C^0$-orbifold $\hat{X}$ in the Cheeger-Gromov sense.
\end{itemize}
\label{theorem: centerwcpt}
\end{theorem}
\begin{theorem}[\textbf{Isoperimetric Constants}]
There is a constant $\iota=\iota(m, c, \sigma, \kappa, E, D)$ such
that the following property holds.
If $\{(X, g(t)), -1 \leq t \leq 1\} \in \mathscr{O}(m, c, \sigma, \kappa, E)$
and $\diam_{g(0)}(X)<D$, then
\begin{align*}
\mathbf{I}(X, g(0)) > \iota.
\end{align*}
\label{theorem: centersob}
\end{theorem}
Theorem 4.5 of~\cite{CW3} can be improved as follows:
$\mathcal{OS}(m, \sigma, \kappa, E, V)$---the moduli space of compact
gradient shrinking Ricci soliton
orbifolds---is compact.
\section{\KRF on Fano Orbifolds}
\subsection{Some Estimates}
All the estimates developed for the \KRf on Fano manifolds hold
for Fano orbifolds. We list the important ones and only give sketches
of proofs when the statement is not obvious.
\begin{proposition}[Perelman, c.f.~\cite{SeT}]
Suppose $\{(Y^n, g(t)), 0 \leq t < \infty \}$ is a \KRf solution on a
Fano orbifold $Y^n$.
There are two positive constants $\mathcal{B}, \kappa$ depending only on this flow such that the
following two estimates hold.
\begin{enumerate}
\item Under metric $g(t)$, let $R$ be the scalar curvature,
$-u$ be the normalized Ricci potential, i.e.,
\begin{align*}
Ric-\omega_{\varphi(t)}= - \st \ddb{u}, \quad
\frac{1}{V} \int_Y e^{-u} \omega_{\varphi(t)}^n=1.
\end{align*}
Then we have
\begin{align*}
\norm{R}{C^0} + \diam Y +
\norm{u}{C^0} + \norm{\nabla u}{C^0} < \mathcal{B}.
\end{align*}
\item Under metric $g(t)$, $ \displaystyle
\frac{\Vol(B(x, r))}{r^{2n}} > \kappa$ for every
$r \in (0, 1)$, $(x, t) \in Y \times [0, \infty)$.
\end{enumerate}
\label{proposition: perelman}
\end{proposition}
\begin{proof}
When the scalar curvature norm $|R|$ is uniformly bounded, the second
estimate becomes a direct corollary of the general noncollapsing
theorem. So we only need to show the first estimate. The proof
is almost the same as in the manifold case.
First, note that the Green's function exists on every compact orbifold,
and Perelman's functional behaves the same as in the manifold case.
As in~\cite{SeT}, we can apply the Green's function and Perelman's functional
to obtain a uniform lower bound of $u(t)$, where $-u(t)$ is the normalized Ricci potential.
Second, since $u(t)$ is uniformly bounded from below, we can find
a big constant $B$ such that $u+2B>B$. Then the maximum
principle tells us that there is a constant $C$ such that
\begin{align*}
\frac{\triangle u}{u+2B}<C, \quad
\frac{|\nabla u|^2}{u+2B}<C.
\end{align*}
By the second inequality, we know $u$ is Lipschitz. Therefore, $u$ will be bounded whenever the diameter is
bounded.
Third, the diameter is bounded. Suppose the diameter is unbounded.
Then we can find a sequence of annuli $A_i=B_{g(t_i)}(x_i, 2^{i+2}) \backslash B_{g(t_i)}(x_i, 2^{i-2})$
such that the following properties hold.
\begin{itemize}
\item The closure $\overline{A_i}$ contains no singular point.
\item $\Vol_{g(t_i)}(A_i) \to 0$.
\item Under metric $g(t_i)$, $\frac{\Vol(B(x_i, 2^{i+2}) \backslash B(x_i, 2^{i-2}))}
{\Vol(B(x_i, 2^{i+1}) \backslash B(x_i, 2^{i-1}))} < 2^{10n}$.
\end{itemize}
The reason we can do this is that $Y$ contains only finitely many
singularities. Then by taking a proper cutoff function whose
support is in $A_i$, we can deduce that Perelman's functional
$\mu(g_0, \frac12)$ must tend to $-\infty$. This is impossible!
\end{proof}
The following estimates on orbifolds are exactly the same as the corresponding estimates on
manifolds.
\begin{proposition}[\cite{Zhq}, \cite{Ye}]
Suppose $\{(Y^n, g(t)), 0 \leq t <\infty \}$ is a \KRf on a Fano orbifold
$Y^n$. Then there is a uniform Sobolev constant $C_S$ along this
flow. In other words, for every
$f \in C^{\infty}(Y)$, we have
\begin{align*}
(\int_Y |f|^{\frac{2n}{n-1}}
\omega_{\varphi}^n)^{\frac{n-1}{n}}
\leq
C_S\{\int_Y |\nabla f|^2 \omega_{\varphi}^n
+ \frac{1}{V^{\frac{1}{n}}} \int_Y |f|^2 \omega_{\varphi}^n\}.
\end{align*}
\label{proposition: sobolev}
\end{proposition}
\begin{proposition}[c.f.~\cite{Fu2},~\cite{TZ}]
Suppose $\{(Y^n, g(t)), 0 \leq t <\infty \}$ is a \KRf on a Fano orbifold $Y^n$.
Then there is a uniform weak Poincar\`e constant $C_P$ along this flow.
Namely, for every nonnegative function $f \in C^{\infty}(Y)$, we have
\begin{align*}
\frac{1}{V} \int_Y f^2 \omega_{\varphi}^n \leq
C_P\{\frac{1}{V} \int_Y |\nabla f|^2 \omega_{\varphi}^n
+ (\frac{1}{V} \int_Y f \omega_{\varphi}^n)^2\}.
\end{align*}
\label{proposition: poincare}
\end{proposition}
\begin{proposition}[c.f. \cite{PSS}, \cite{CW2}]
By properly choosing the initial condition, we have
\begin{align*}
\norm{\dot{\varphi}}{C^0} + \norm{\nabla \dot{\varphi}}{C^0}<C
\end{align*}
for some constant $C$ independent of time $t$.
\label{proposition: dotphi}
\end{proposition}
\begin{proposition}[\cite{CW2}]
There is a constant $C$ such that
\begin{align}
\frac{1}{V} \int_Y (-\varphi) \omega_{\varphi}^n \leq
n \sup_Y \varphi - \sum_{i=0}^{n-1}
\frac{i}{V} \int_Y \st \partial \varphi \wedge
\bar{\partial} \varphi \wedge \omega^i \wedge
\omega_{\varphi}^{n-1-i} + C.
\label{eqn: dsupphi}
\end{align}
\end{proposition}
\begin{proposition}[\cite{Ru}, c.f. \cite{CW2}]
Suppose $\{(Y^n, g(t)), 0 \leq t <\infty \}$ is a \KRf on a Fano orbifold $Y^n$.
Then the following conditions are equivalent.
\begin{itemize}
\item $\varphi$ is uniformly bounded.
\item $\displaystyle \sup_Y \varphi$ is uniformly bounded from above.
\item $\displaystyle \inf_Y \varphi$ is uniformly bounded from below.
\item $\int_Y \varphi \omega^n$ is uniformly bounded from above.
\item $\int_Y (-\varphi) \omega_{\varphi}^n$ is
uniformly bounded from above.
\item $I_{\omega}(\varphi)$ is uniformly bounded.
\item $Osc_{Y} \varphi$ is uniformly bounded.
\end{itemize}
\label{proposition: conditions}
\end{proposition}
\subsection{Tamed Condition by Two Functions: $F$ and $\mathcal{F}$}
This subsection is similar to the corresponding part
in~\cite{CW3}. However, we compare different metrics on the line
bundle to study the tamedness condition.
Along the \KRF, we have
\begin{align*}
\omega_{\varphi(t)}= \omega_0 + \st \ddb \varphi(t), \quad
\st \ddb \dot{\varphi}(t) = \omega_{\varphi(t)} - Ric_{\omega_{\varphi(t)}}
\end{align*}
For simplicity, we omit the subindex $t$. Let $h$ be the metric on $K_Y^{-1}$ induced directly by the
metric on $Y$, i.e., $h= \det g_{i\bar{j}}$. Let
$l=e^{-\dot{\varphi}}h$. Clearly, we have
\begin{align*}
-\st \ddb \log \snorm{S}{l}^2+
\st \ddb \log \snorm{S}{h}^2
=\st \ddb \dot{\varphi} =\omega_{\varphi} -
Ric_{\omega_{\varphi}}.
\end{align*}
It follows that $\st \ddb \log \snorm{S}{l}^2= \omega_{\varphi}$.
\begin{definition}
Choose $\{T_{\nu, \beta}^t\}_{\beta=0}^{N_{\nu}}$
as an orthonormal basis of $H^0(K_Y^{-\nu})$ under the metric $h^{\nu}$.
Then
\begin{align*}
&F(\nu, x, t) = \frac{1}{\nu} \log
\sum_{\beta=0}^{N_{\nu}} \snorm{T_{\nu, \beta}^t}{h^{\nu}}^2(x),\\
&G(\nu, x, t)=\sum_{\beta=0}^{N_{\nu}} \snorm{\nabla T_{\nu, \beta}^t}{h^{\nu}}^2(x)
\end{align*}
are well defined functions on $Y \times [0, \infty)$.
We say that the flow is tamed by $\nu$ if $F(\nu, \cdot, \cdot)$
is a bounded function on $Y \times [0, \infty)$.
\end{definition}
\begin{remark}
If $Y$ is an orbifold, $K_Y^{-\nu}$ is a line bundle if and only if $\nu$ is an integer multiple of the Gorenstein index
of $Y$. We call such $\nu$ appropriate. In this note, we always choose $\nu$ to be appropriate.
\end{remark}
Clearly, $G= \triangle e^{\nu F} - \nu R e^{\nu F}$.
Fix $(x, t)$. By rotating the basis, we can always find a section $T$ such that
\begin{align*}
\int_Y \snorm{T}{h^{\nu}(t)}^2 \omega_{\varphi}^n =1, \quad e^{\nu F(\nu, x, t)} =
\snorm{T}{h^{\nu}(t)}^2(x).
\end{align*}
There also exists a section $T'$ such that
\begin{align*}
\int_Y \snorm{T'}{h^{\nu}(t)}^2 \omega_{\varphi}^n =1, \quad G(\nu, x, t) =
\snorm{\nabla T'}{h^{\nu}(t)}^2(x).
\end{align*}
\begin{definition}
Choose $\{S_{\nu, \beta}^t\}_{\beta=0}^{N_{\nu}}$
as an orthonormal basis of $H^0(K_Y^{-\nu})$ under the metric $l^{\nu}$.
Then
\begin{align*}
&\mathcal{F}(\nu, x, t) = \frac{1}{\nu} \log
\sum_{\beta=0}^{N_{\nu}} \snorm{S_{\nu,
\beta}^t}{l^{\nu}}^2(x),\\
&\mathcal{G}(\nu, x, t)=\sum_{\beta=0}^{N_{\nu}} \snorm{\nabla S_{\nu, \beta}^t}{l^{\nu}}^2(x)
\end{align*}
are well defined functions on $Y \times [0, \infty)$.
\end{definition}
Similarly, $\mathcal{G}= \triangle e^{\nu \mathcal{F}} - n\nu e^{\nu \mathcal{F}}$.
Fix $(x, t)$. By rotating the basis, there are unit norm sections $S$
and $S'$ such that
\begin{align*}
&\int_Y \snorm{S}{l^{\nu}(t)}^2 \omega_{\varphi}^n =1, \quad e^{\nu \mathcal{F}(\nu, x, t)} =
\snorm{S}{l^{\nu}(t)}^2(x);\\
&\int_Y \snorm{S'}{l^{\nu}(t)}^2 \omega_{\varphi}^n =1, \quad \mathcal{G}(\nu, x, t) =
\snorm{\nabla S'}{l^{\nu}(t)}^2(x).
\end{align*}
At point $(x, t)$, we have
\begin{align*}
e^{\nu \mathcal{F}}= \snorm{S}{l^{\nu}(t)}^2= e^{-\nu
\dot{\varphi}}\snorm{S}{h^{\nu}(t)}^2
= e^{-\nu \dot{\varphi}} \cdot \frac{\snorm{S}{h^{\nu}(t)}^2(x)}{\int_Y \snorm{S}{h^{\nu}(t)}^2 \omega_{\varphi}^n}
\cdot \int_Y \snorm{S}{h^{\nu}(t)}^2 \omega_{\varphi}^n
\leq e^{\nu (F-\dot{\varphi} + \norm{\dot{\varphi}}{C^0})}
\leq e^{2\nu \mathcal{B}} e^{\nu F}.
\end{align*}
Similarly, we can argue in the other direction, and it follows that
\begin{align*}
F - 2\mathcal{B} \leq \mathcal{F} \leq F + 2\mathcal{B}.
\end{align*}
Therefore, a flow is tamed by $\nu$ if and only if
$\mathcal{F}(\nu, \cdot, \cdot)$ is uniformly bounded on $Y \times [0, \infty)$.
However, the calculation under the metric $l^{\nu}$ is easier in many cases.
\footnote{The calculation under the metric $l^{\nu}$ was first suggested to the author by Tian.}
Some estimates in~\cite{CW4} can be improved.
\begin{lemma}
There is a uniform constant $A=A(\mathcal{B}, C_S, n)$ such that
\begin{align}
&\snorm{S}{l^{\nu}} < A\nu^{\frac{n}{2}}, \label{eqn: slinf}\\
&\snorm{\nabla S}{l^{\nu}} < A \nu^{\frac{n+1}{2}}, \label{eqn: naslinf}
\end{align}
whenever $S \in H^0(Y, K_Y^{-\nu})$ is a unit norm section (under the metric $l^{\nu}$).
\label{lemma: boundl}
\end{lemma}
\begin{proof}
For simplicity, we omit subindex $l^{\nu}$ in the proof.
Note $\triangle_{\omega_{\varphi}} \snorm{S}{}^2= \snorm{\nabla S}{}^2-
n\nu\snorm{S}{}^2$,
the proof of inequality (\ref{eqn: slinf}) follows directly the
proof of Lemma 3.1 in~\cite{CW4}. So we only
prove inequality (\ref{eqn: naslinf}).
Direct calculation shows that
\begin{align}
\triangle_{\omega_{\varphi}} |\nabla S|^2 &=
\snorm{\nabla \nabla S}{}^2 - (n+2)\nu \snorm{\nabla S}{}^2 +
n\nu^2 |S|^2 + R_{i\bar{j}} \bar{S}_{,\bar{i}}S_{,j} \notag\\
&=\snorm{\nabla \nabla S}{}^2 - [(n+2)\nu -1]\snorm{\nabla S}{}^2 +
n\nu^2 |S|^2 -\dot{\varphi}_{, i\bar{j}} \bar{S}_{,\bar{i}}S_{,j}.
\label{eqn: naseqn}
\end{align}
Since $S_{,i\bar{j}}= -\nu S g_{i\bar{j}}$,
integration against the measure $\omega_{\varphi}^n$ implies
\begin{align*}
\int_Y |\nabla \nabla S|^2 &= -n\nu^2
+[(n+2)\nu -1]\int_Y \snorm{\nabla S}{}^2+\int_Y \dot{\varphi}_{,i\bar{j}}
\bar{S}_{,\bar{i}}S_{,j}\\
&=n\nu[(n+1)\nu-1] -\int_Y \dot{\varphi}_i \bar{S}_{,\bar{i}\bar{j}}S_{,j}
+ n\nu \int_Y \dot{\varphi}_i\bar{S}_{,\bar{i}} S
\end{align*}
In view of $|\dot{\varphi}| \leq \mathcal{B}$, the H\"older inequality
implies
\begin{align*}
\int_Y \snorm{\nabla \nabla S}{}^2
&\leq \mathcal{B} \left(\{\int_Y \snorm{\nabla \nabla S}{}^2\}^{\frac12}
+ n\nu \{\int_Y \snorm{S}{}^2 \}^{\frac12}\right)
\{\int_Y \snorm{\nabla S}{}^2\}^{\frac12} + n\nu[(n+1)\nu-1]\\
&=\sqrt{n\nu} \mathcal{B} \left(\{\int_Y \snorm{\nabla \nabla S}{}^2\}^{\frac12}
+ n\nu \right) + n\nu[(n+1)\nu-1]\\
&\leq \frac12 \int_Y \snorm{\nabla \nabla S}{}^2
+ \frac12 n\nu \mathcal{B}^2 + (n\nu)^{\frac32} \mathcal{B} + n\nu[(n+1)\nu-1].
\end{align*}
It follows that
\begin{align*}
\int_Y \snorm{\nabla \nabla S}{}^2 \leq C \nu^2
\end{align*}
for some constant $C=C(n,\mathcal{B})$.
Combining this with the fact $\int_Y |\bar{\nabla}\nabla S|^2= n\nu^2$,
the Sobolev inequality implies
\begin{align}
\left(\int_Y \snorm{\nabla
S}{}^{\frac{2n}{n-1}}\right)^{\frac{n-1}{n}}
\leq C\nu^2.
\label{eqn: nasn}
\end{align}
Fix $\beta>1$. Multiplying both sides of equation (\ref{eqn: naseqn}) by
$-\snorm{\nabla S}{}^{2(\beta-1)}$ and integrating, we have
\begin{align*}
&\quad \frac{4(\beta-1)}{\beta^2} \int_Y \left| \nabla \snorm{\nabla S}{}^{\beta}\right|^2\\
&= -\int_Y (n\nu^2\snorm{S}{}^2 +\snorm{\nabla \nabla S}{}^2)\snorm{\nabla
S}{}^{2(\beta-1)} + [(n+2)\nu-1] \int_Y \snorm{\nabla S}{}^{2\beta}\\
& \qquad +\int_Y \dot{\varphi}_{,i\bar{j}}
\bar{S}_{,\bar{i}}S_{,j}\snorm{\nabla S}{}^{2(\beta-1)}
\end{align*}
Note that
\begin{align*}
&\qquad \int_Y \dot{\varphi}_{,i\bar{j}}
\bar{S}_{,\bar{i}}S_{,j}\snorm{\nabla S}{}^{2(\beta-1)}\\
&=-\int_Y \dot{\varphi}_{,i} (\bar{S}_{,\bar{i}\bar{j}}S_{,j}
+ \bar{S}_{,\bar{i}} S_{,j\bar{j}})\snorm{\nabla S}{}^{2(\beta-1)}
- (\beta-1) \int_Y \dot{\varphi}_{,i} \bar{S}_{,\bar{i}}S_{,j}(S_{k\bar{j}}\bar{S}_{,\bar{k}}
+ S_k\bar{S}_{\bar{k}\bar{j}}) \snorm{\nabla
S}{}^{2(\beta-2)}\\
&\leq \nu[\beta-1+n] \int_Y \dot{\varphi}_{,i} S
\bar{S}_{,\bar{i}} \snorm{\nabla S}{}^{2(\beta-1)}
+ \mathcal{B} \beta \int_Y \snorm{\nabla \nabla S}{} \snorm{\nabla
S}{}^{2\beta-1}\\
\end{align*}
H\"older inequality and Schwartz inequality yield that
\begin{align*}
&\mathcal{B}\nu[\beta-1+n] \{\int_Y
\snorm{S}{}^2 \snorm{\nabla
S}{}^{2(\beta-1)}\}^{\frac12}\{\int_Y \snorm{\nabla
S}{}^{2\beta}\}^{\frac12}\\
&\qquad+ \mathcal{B}\beta \{\int_Y \snorm{\nabla \nabla S}{}^2
\snorm{\nabla S}{}^{2(\beta-1)}\}^{\frac12} \{\int_Y \snorm{\nabla
S}{}^{2\beta}\}^{\frac12}\\
&\leq n\nu^2 \left(\{\int_Y \snorm{S}{}^2 \snorm{\nabla S}{}^{2(\beta-1)}\}
+ \{\int_Y \snorm{\nabla \nabla S}{}^2 \snorm{\nabla S}{}^{2(\beta-1)}\}
\right)\\
&\qquad +\{\frac{\mathcal{B}^2(\beta-1+n)^2}{4n} + \frac{\mathcal{B}^2 \beta^2}{4n\nu^2}\}
\int_Y \snorm{\nabla S}{}^{2\beta}.
\end{align*}
If $\beta \geq \frac{n}{n-1}$, combining the previous three inequalities implies
\begin{align*}
\int_Y \left| \nabla \snorm{\nabla S}{}^{\beta}\right|^2
\leq C \beta(\beta^2 + \nu) \int_Y \snorm{\nabla S}{}^{2\beta}.
\end{align*}
In light of the Sobolev inequality, we have
\begin{align*}
\left(\int_Y \snorm{\nabla S}{}^{\beta \cdot
\frac{2n}{n-1}}\right)^{\frac{n-1}{n}}
\leq C_S\{ \int_Y \left| \nabla \snorm{\nabla S}{}^{\beta}\right|^2 + \int_Y \snorm{\nabla S}{}^{2\beta}\}
\leq C \beta (\beta^2 + \nu) \int_Y \snorm{\nabla S}{}^{2\beta}.
\end{align*}
Let $k_0$ be the integer such that $\lambda^{2k_0} \geq \nu > \lambda^{2(k_0-1)}$, where
$\lambda=\frac{n}{n-1}$. Then we have
\begin{align*}
\left(\int_Y \snorm{\nabla S}{}^{\beta \cdot
\frac{2n}{n-1}}\right)^{\frac{n-1}{n}}
\leq
\left\{
\begin{array}{ll}
C \beta^3 \int_Y \snorm{\nabla S}{}^{2\beta} & \textrm{if} \quad \beta >\lambda^{k_0},\\
(C \nu) \beta \int_Y \snorm{\nabla S}{}^{2\beta} &\textrm{if} \quad \beta \leq \lambda^{k_0}.\\
\end{array}
\right.
\end{align*}
Iteration implies
\begin{align*}
\left\{
\begin{array}{l}
\norm{\snorm{\nabla S}{}^2}{L^{\infty}}
\leq C^{\sum_{k=1}^\infty \lambda^{-k}} \lambda^{\sum_{k=1}^\infty k\lambda^{-k}}
\norm{\snorm{\nabla S}{}^2}{L^{\lambda^{k_0}}},\\
\norm{\snorm{\nabla S}{}^2}{L^{\lambda^{k_0}}}
\leq (C\nu)^{\sum_{k=1}^{k_0} \lambda^{-k}} \norm{\snorm{\nabla
S}{}^2}{L^{\lambda}}.
\end{array}
\right.
\end{align*}
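For the reader's convenience we record the closed forms of the geometric sums appearing in these exponents (an elementary computation added here, not part of the original argument): since $\lambda=\frac{n}{n-1}$, i.e. $\lambda-1=\frac{1}{n-1}$,
\begin{align*}
\sum_{k=1}^{\infty} \lambda^{-k} = \frac{1}{\lambda-1} = n-1,
\qquad
\sum_{k=1}^{\infty} k\lambda^{-k} = \frac{\lambda}{(\lambda-1)^2} = n(n-1),
\end{align*}
so both prefactors in the iteration are finite and depend only on $n$.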
Since $\sum_{k=1}^{k_0} \lambda^{-k} <\sum_{k=1}^{\infty}
\lambda^{-k}=n-1$, combining these inequalities with inequality (\ref{eqn: nasn})
gives us
\begin{align*}
\norm{\snorm{\nabla S}{}^2}{L^{\infty}} \leq C\nu^{n+1}.
\end{align*}
This proves inequality (\ref{eqn: naslinf}).
\end{proof}
Similarly, by sharpening the constants in Lemma 3.2 of~\cite{CW4},
we obtain
\begin{lemma}
There is a uniform constant $A=A(\mathcal{B}, C_S, n)$ such that
\begin{align}
&\snorm{S}{h^{\nu}} < A\nu^{\frac{n}{2}}, \label{eqn: slinfh}\\
&\snorm{\nabla S}{h^{\nu}} < A \nu^{\frac{n+1}{2}}, \label{eqn: naslinfh}
\end{align}
whenever $S \in H^0(Y, K_Y^{-\nu})$ is a unit norm section (under the metric $h^{\nu}$).
\label{lemma: boundh}
\end{lemma}
Lemma~\ref{lemma: boundl} and Lemma~\ref{lemma: boundh} clearly
imply the following estimates.
\begin{corollary}
There is a uniform constant $A=A(\mathcal{B}, C_S, n)$ such that
\begin{align*}
&\max\{\mathcal{F}, F\} \leq \frac{\log A + n \log \nu}{\nu}, \\
&\max\{\mathcal{G}, G\} \leq A\nu^{n+1}.
\end{align*}
\end{corollary}
\begin{proposition}
Along the flow, $\mathcal{F}$ satisfies
\begin{align*}
\left\{
\begin{array}{ll}
\D{}{t} \mathcal{F}&=-\dot{\varphi}
+ \int_Y (\dot{\varphi} - \frac{\triangle
\dot{\varphi}}{\nu})e^{\nu \mathcal{F}} \omega_{\varphi}^n,\\
\triangle \mathcal{F}&= -n+ (\frac{1}{\nu}e^{-\nu \mathcal{F}} \mathcal{G} -\nu \snorm{\nabla
\mathcal{F}}{}^2) \geq -n,\\
\square \mathcal{F}&= n-\dot{\varphi} + \int_Y (\dot{\varphi} - \frac{\triangle
\dot{\varphi}}{\nu}) e^{\nu \mathcal{F}} \omega_{\varphi}^n +
(\nu \snorm{\nabla \mathcal{F}}{}^2 - \frac{1}{\nu}e^{-\nu \mathcal{F}} \mathcal{G}).
\end{array}
\right.
\end{align*}
$F$ satisfies
\begin{align*}
\left\{
\begin{array}{ll}
\D{}{t} F &= (n-R) - (1+\frac{1}{\nu})\int_Y
(n-R) e^{\nu F} \omega_{\varphi}^n,\\
\triangle F &= -R + (\frac{1}{\nu}e^{-\nu F}G - \nu \snorm{\nabla F}{}^2) \geq
-R,\\
\square F &= n-(1+\frac{1}{\nu})\int_Y (n-R) e^{\nu F} \omega_{\varphi}^n
+ (\nu \snorm{\nabla F}{}^2 - \frac{1}{\nu}e^{-\nu F}G).
\end{array}
\right.
\end{align*}
\end{proposition}
\begin{proof}
At $t= t_0$, suppose $\{S_{\beta}\}_{\beta=0}^{N_{\nu}}$ are
orthonormal holomorphic sections of $H^0(Y, K_Y^{-\nu})$ under the
metric $l^{\nu}(t_0)$. Assume $\{a_{\alpha \beta}(t)S_{\beta}\}_{\alpha=0}^{N_{\nu}}$
are orthonormal holomorphic sections at time $t$ under the metric
$l^{\nu}(t)$. By these orthonormality conditions, we have
\begin{align*}
&a_{\alpha \beta}(t_0) = \delta_{\alpha \beta},\\
&\delta_{\alpha \gamma}=a_{\alpha \beta}\bar{a}_{\gamma \xi}
\int_Y \langle S_{\beta}, S_{\xi}\rangle \omega_{\varphi}^n,\\
&0=\dot{a}_{\alpha \beta}\bar{a}_{\gamma \xi}
\int_Y \langle S_{\beta}, S_{\xi}\rangle \omega_{\varphi}^n
+a_{\alpha \beta} \dot{\bar{a}}_{\gamma \xi}
\int_Y \langle S_{\beta}, S_{\xi}\rangle \omega_{\varphi}^n\\
&\qquad +a_{\alpha \beta} \bar{a}_{\gamma \xi}
\int_Y (-\nu \dot{\varphi}
+ \triangle \dot{\varphi}) \langle S_{\beta}, S_{\xi}\rangle
\omega_{\varphi}^n.
\end{align*}
In particular, at $t=t_0$, using the summation convention we have
\begin{align*}
&0=\dot{a}_{\alpha \gamma} + \dot{\bar{a}}_{\gamma \alpha} +
\int_Y (-\nu \dot{\varphi} + \triangle \dot{\varphi})
\langle S_{\alpha}, S_{\gamma} \rangle \omega_{\varphi}^n, \\
&\left. \D{}{t} e^{\nu\mathcal{F}} \right|_{t=t_0}
= \left.\D{}{t}
\left(a_{\alpha \beta} \bar{a}_{\alpha \gamma} \langle S_{\beta},
S_{\gamma}\rangle \right) \right|_{t=t_0}\\
&\qquad =\dot{a}_{\alpha \beta} \langle S_{\beta}, S_{\alpha}\rangle
+ \dot{\bar{a}}_{\alpha \gamma} \langle S_{\alpha}, S_{\gamma}\rangle
+(-\nu \dot{\varphi}) \langle S_{\alpha}, S_{\alpha} \rangle.
\end{align*}
Fix $x \in Y$. At $t=t_0$, there is a unit norm section
$S$ such that $\snorm{S}{l^{\nu}}^2(x)= e^{\nu\mathcal{F}}$. Let
$S_{0}=S$; then we have
\begin{align*}
\left. \D{}{t} e^{\nu\mathcal{F}} \right|_{t=t_0}
= e^{\nu \mathcal{F}} (\dot{a}_{00} + \dot{\bar{a}}_{00} - \nu \dot{\varphi})
=e^{\nu \mathcal{F}}
(\int_Y (\nu \dot{\varphi} - \triangle \dot{\varphi}) e^{\nu \mathcal{F}} \omega_{\varphi}^n - \nu\dot{\varphi})
\end{align*}
On the other hand,
\begin{align*}
\triangle e^{\nu \mathcal{F}}= \langle \nabla S_{\alpha}, \nabla
S_{\alpha}\rangle - n\nu \langle S_{\alpha}, S_{\alpha} \rangle
= \mathcal{G} -n\nu e^{\nu \mathcal{F}}.
\end{align*}
It follows that
\begin{align*}
(\D{}{t} - \triangle) e^{\nu \mathcal{F}}=
e^{\nu \mathcal{F}}
\{\int_Y (\nu \dot{\varphi} - \triangle \dot{\varphi}) e^{\nu \mathcal{F}} \omega_{\varphi}^n + \nu
(n-\dot{\varphi})\} - \mathcal{G}.
\end{align*}
Similarly, we obtain
\begin{align*}
\D{}{t}e^{\nu F}&=e^{\nu F} \{\nu \triangle \dot{\varphi}
- (\nu+1) \int_Y \triangle \dot{\varphi}e^{\nu F}
\omega_{\varphi}^n\}\\
&=e^{\nu F} \{\nu (n-R)
- (\nu+1) \int_Y (n-R) e^{\nu F} \omega_{\varphi}^n\},\\
\triangle e^{\nu F}&= G - \nu R e^{\nu F},\\
\square e^{\nu F} &=e^{\nu F} \{ n\nu
-(\nu+1)\int_Y (n-R)e^{\nu F}
\omega_{\varphi}^n \}-G.
\end{align*}
From the evolution equation of $e^{\nu \mathcal{F}}$
and $e^{\nu F}$, we can easily obtain the evolution equation of $\mathcal{F}$
and $F$.
\end{proof}
\begin{remark}
The advantage of $F$ appears when the evolution equation is
calculated. Every term in $\D{F}{t}$ is a geometric quantity.
Suppose that $\int_0^{\infty} \int_Y (R-n)_{-} \omega_{\varphi}^n dt<
\infty$ and $\int_0^{\infty} (R_{\max}(t)-n) dt< \infty$; then $F$ must be
bounded from below and the flow is tamed.
When we consider the convergence of metric spaces, the smooth
convergence of $g_{i\bar{j}}$ automatically induces the smooth
convergence of $h^{\nu}= \det(g_{i\bar{j}})^{\nu}$. Therefore,
we prefer $h^{\nu}$ as the more natural metric on $K_Y^{-\nu}$
under the \KRf.
\end{remark}
Since H\"ormander's estimate holds in the orbifold case, the
bound in Lemma~\ref{lemma: boundh} implies the convergence of
plurianticanonical sections when the underlying orbifolds
converge.
\begin{proposition}
Suppose $Y$ is a Fano orbifold, $\{(Y, g(t)), 0 \leq t < \infty\}$ is a \KRf
without volume concentration. Let $t_i$ be a sequence of time such that
$\displaystyle (Y, g(t_i)) \sconv (\hat{Y}, \hat{g})$
for some Q-Fano normal variety $(\hat{Y}, \hat{g})$.
Then for any fixed positive integer $\nu$ (appropriate for both $Y$ and $\hat{Y}$),
the following properties hold.
\begin{enumerate}
\item If $S_i \in H^0(Y, K_{Y}^{-\nu})$ and $\int_{Y} \snorm{S_i}{h^{\nu}(t_i)}^2
\omega_{\varphi(t_i)}^n=1$, then by taking a subsequence if necessary, we have
$\hat{S} \in H^0(\hat{Y}, K_{\hat{Y}}^{-\nu})$ such that
\begin{align*}
S_i \sconv \hat{S},
\quad \int_{\hat{Y}} \snorm{\hat{S}}{\hat{h}^{\nu}}^2 \hat{\omega}^n=1.
\end{align*}
\item If $\hat{S} \in H^0(\hat{Y}, K_{\hat{Y}}^{-\nu})$ and
$\int_{\hat{Y}} \snorm{\hat{S}}{\hat{h}^{\nu}}^2 \hat{\omega}^n =1$, then there is a sequence of
sections $S_i \in H^0(Y_i, K_{Y_i}^{-\nu})$ satisfying
\begin{align*}
\int_{Y_i} \snorm{S_i}{h^{\nu}(t_i)}^2 \omega_{\varphi(t_i)}^n=1,
\quad S_i \sconv \hat{S}.
\end{align*}
\end{enumerate}
\label{proposition: bundleconv}
\end{proposition}
Using this property, we can justify the tamedness condition by
weak compactness exactly as in Theorem 3.2 of~\cite{CW4}.
\begin{theorem}
Suppose $Y$ is a Fano orbifold, $\{(Y, g(t)), 0 \leq t < \infty\}$ is a \KRf
without volume concentration. Suppose this flow satisfies weak compactness, i.e., for every sequence $t_i \to \infty$, by
passing to a subsequence, we have
\begin{align*}
(Y, g(t_i)) \sconv (\hat{Y}, \hat{g}),
\end{align*}
where $(\hat{Y}, \hat{g})$ is a Q-Fano normal variety.
Then this flow is tamed by a big constant $\nu$.
\label{theorem: justtamed}
\end{theorem}
As mentioned in the introduction, suppose $Y$ is
an orbifold Fano surface and $\{(Y, g(t)), 0 \leq t <\infty\}$ is a \KRf solution. Then
this flow has no volume concentration and satisfies the weak
compactness theorem. With the help of Perelman's functional,
every weak limit $(\hat{Y}, \hat{g})$ must satisfy the K\"ahler Ricci
soliton equation on its smooth part. On the other hand, the
soliton potential function has a uniform $C^1$-norm bound since it is
the smooth limit of $-\dot{\varphi}(t_i)$. Therefore Uhlenbeck's
removable singularity method applies and we obtain that $(\hat{Y}, \hat{g})$
is a smooth orbifold which can be embedded into $\CP^{N_{\nu}}$ by the
line bundle $K_{\hat{Y}}^{-\nu}$ for some big $\nu$ (c.f.~\cite{Baily}).
The following theorem then follows from Theorem~\ref{theorem: justtamed} directly.
\begin{theorem}
Suppose $Y$ is an orbifold Fano surface, $\{(Y, g(t)), 0 \leq t <\infty\}$
is a \KRf solution. Then there is a big constant $\nu$ such that
this flow is tamed by $\nu$.
\label{theorem: surfacetamed}
\end{theorem}
\subsection{Properties of Tamed Flow}
Following~\cite{Tian91}, we define
\begin{definition}
Let $\mathscr{P}_{G, \nu, k}(Y, \omega)$ be the collection of all
$G$-invariant functions of form
$\displaystyle \frac{1}{2\nu}\log (\sum_{\beta=0}^{k-1} \norm{\tilde{S}_{\nu,
\beta}}{h^{\nu}}^2)$, where $\tilde{S}_{\nu, \beta} \in H^0(K_Y^{-\nu})$ satisfies
\begin{align*}
\int_Y \langle \tilde{S}_{\nu, \alpha}, \tilde{S}_{\nu, \beta} \rangle_{h^{\nu}}
\omega^n=\delta_{\alpha \beta}, \quad 0 \leq \alpha, \beta
\leq k-1 \leq \dim H^0(K_Y^{-\nu}) -1;
\qquad h= \det g_{\omega}.
\end{align*}
Define
\begin{align*}
\alpha_{G, \nu, k} \triangleq
\sup\{ \alpha | \sup_{\varphi \in \mathscr{P}_{G,\nu, k}} \int_Y e^{-2\alpha \varphi} \omega^n <
\infty\}.
\end{align*}
If $G$ is trivial, we write $\alpha_{\nu, k}$
for $\alpha_{G, \nu, k}$ and $\mathscr{P}(\nu, k)$ for $\mathscr{P}(G, \nu, k)$.
\label{definition: nualpha}
\end{definition}
The next definition follows~\cite{DK}.
\begin{definition}
Let $Y$ be a complex orbifold and let $f$ be a plurisubharmonic function
with $f \in L^1(Y)$.
For any compact set $K \subset Y$, define
\begin{align*}
\alpha_K(f)= \sup \{ c \geq 0: \; e^{-2cf} \textrm{is $L^1$ on a neighborhood
of}\; K\}.
\end{align*}
This $\alpha_K(f)$ is called the complex singularity exponent of $f$ on $K$.
\label{definition: singexp}
\end{definition}
If $f \in \mathscr{P}(\nu, k)$ and $\alpha < \alpha_{\nu, k}$,
we have $\int_Y e^{-2\alpha f}< \infty$ by definition. Since the
set $\mathscr{P}(\nu, k)$ is compact in the $L^1(Y)$-topology (actually in the $C^{\infty}$ topology),
the semicontinuity property proved in~\cite{DK} shows that there is
a uniform constant $C_{\alpha, \nu, k}$ such that
\begin{align*}
\int_Y e^{-2\alpha f}< C_{\alpha, \nu, k},
\quad
\forall \; f \in \mathscr{P}(\nu, k).
\end{align*}
Suppose the flow is tamed by $\nu$. By rotating bases, we can
choose $\{S_{\nu, \beta}^t\}_{\beta=0}^{N_{\nu}}$ and
$\{\tilde{S}_{\nu, \beta}^t\}_{\beta=0}^{N_{\nu}}$ as orthonormal
bases of $H^0(K_Y^{-\nu})$ under the metrics $h^{\nu}(t)$ and $h^{\nu}(0)$
respectively, and they satisfy
\begin{align*}
S_{\nu, \beta}^t= a(t)\lambda_{\beta}(t)\tilde{S}_{\nu,
\beta}^t, \quad
0< \lambda_0(t) \leq \lambda_1(t) \leq \cdots \leq
\lambda_{N_{\nu}}(t)=1.
\end{align*}
As in~\cite{CW4}, we have the partial $C^0$-estimate
\begin{align*}
\snorm{\varphi - \sup_Y \varphi -\frac{1}{\nu} \log
\sum_{\beta=0}^{N_{\nu}} \snorm{\lambda_{\beta}(t)\tilde{S}_{\nu, \beta}^t}{h_0^{\nu}}^2}{} <
C.
\end{align*}
This yields
\begin{align*}
\int_Y e^{-\alpha(\varphi -\sup_Y \varphi)} \omega^n
&< e^C \int_Y \left(\sum_{\beta=0}^{N_{\nu}}
\snorm{\lambda_{\beta}(t)\tilde{S}_{\nu, \beta}^t}{h_0^{\nu}}^2
\right)^{-\frac{\alpha}{\nu}} \omega^n\\
&< e^C \int_Y \left(\sum_{\beta=N_{\nu}-k+1}^{N_{\nu}}
\snorm{\lambda_{\beta}(t)\tilde{S}_{\nu, \beta}^t}{h_0^{\nu}}^2
\right)^{-\frac{\alpha}{\nu}} \omega^n\\
&\leq e^C \lambda_{N_{\nu}-k+1}^{-\frac{2 \alpha}{\nu}}
\int_Y \left(\sum_{\beta=N_{\nu}-k+1}^{N_{\nu}}
\snorm{\tilde{S}_{\nu, \beta}^t}{h_0^{\nu}}^2
\right)^{-\frac{\alpha}{\nu}} \omega^n\\
&<e^C C_{\alpha, \nu, k} \lambda_{N_{\nu}-k+1}^{-\frac{2
\alpha}{\nu}}.
\end{align*}
Plugging in the equation $\dot{\varphi}= \log \frac{\omega_{\varphi}^n}{\omega^n} +
\varphi+u_{\omega}$ and noting that $\dot{\varphi}$ and $u_{\omega}$ are bounded, we have
\begin{align*}
\int_Y e^{(1-\alpha)\varphi + \alpha \sup_Y \varphi} \omega_{\varphi}^n <
C'(\alpha, \nu, k) \lambda_{N_{\nu}-k+1}^{-\frac{2\alpha}{\nu}}.
\end{align*}
The convexity of the exponential function implies
\begin{align}
(1-\alpha) \frac{1}{V} \int_Y \varphi \omega_{\varphi}^n + \alpha \sup_Y \varphi < C''(\alpha, \nu, k) -
\frac{2\alpha}{\nu} \log \lambda_{N_{\nu}-k+1},
\label{eqn: intsupsum}
\end{align}
whenever $\alpha< \alpha_{\nu, k}$. Using this estimate, we can
obtain the following two convergence theorems as in~\cite{CW4}.
\begin{theorem}
Suppose $\{(Y^n, g(t)), 0 \leq t < \infty \}$ is a \KRf tamed by $\nu$.
If $\alpha_{\nu, 1}> \frac{n}{(n+1)}$,
then $\varphi$ is uniformly bounded along this flow. In
particular, this flow converges to a KE metric exponentially fast.
\label{theorem: nuconv}
\end{theorem}
\begin{proof}
Choose $\alpha \in (\frac{n}{n+1}, \alpha_{\nu, 1})$. Putting $k=1$ into inequality (\ref{eqn: intsupsum}),
we have
\begin{align*}
(1-\alpha) \frac{1}{V} \int_Y \varphi \omega_{\varphi}^n + \alpha \sup_Y \varphi
< C(\alpha, \nu).
\end{align*}
Together with
$\frac{1}{V} \int_Y (-\varphi) \omega_{\varphi}^n \leq n \sup_Y
\varphi+C$, it implies
\begin{align*}
\{\alpha - n(1-\alpha)\}\sup_{Y} \varphi < C.
\end{align*}
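In detail (an elementary rearrangement which we spell out here): multiplying the lower bound $\frac{1}{V} \int_Y \varphi \omega_{\varphi}^n \geq -n \sup_Y \varphi - C$ by $(1-\alpha)>0$ and inserting it into the previous inequality gives
\begin{align*}
\alpha \sup_Y \varphi < C(\alpha, \nu) + (1-\alpha)\left(n \sup_Y \varphi + C\right),
\end{align*}
which is the displayed estimate after absorbing constants.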
As $\alpha>\frac{n}{n+1}$, we have $\alpha - n(1-\alpha)>0$;
this yields that $\displaystyle \sup_Y \varphi$ is
uniformly bounded from above. Therefore, $\varphi$ is uniformly
bounded.
\end{proof}
\begin{theorem}
Suppose $\{(Y^n, g(t)), 0 \leq t < \infty \}$ is a \KRf tamed by $\nu$.
If $\alpha_{\nu, 2}>\frac{n}{n+1}$ and
$\alpha_{\nu, 1} > \frac{1}{2- \frac{n-1}{(n+1) \alpha_{\nu,2}}}$, then $\varphi$ is
uniformly bounded along this flow. In particular, this flow converges to a KE
metric exponentially fast.
\label{theorem: nuconvr}
\end{theorem}
\begin{proof}
We argue by contradiction. Suppose that $\varphi$ is not
uniformly bounded.
Then there must be a sequence of $t_i$ such that
$\displaystyle \sup_{Y} \varphi(t_i) \to \infty$.
We claim that $\lambda_{N_{\nu}-1}(t_i) \to 0$. Otherwise, $\log \lambda_{N_{\nu}-1}(t_i)$ is uniformly
bounded. Choose $\alpha \in (\frac{n}{n+1}, \alpha_{\nu, 2})$.
Combining $\frac{1}{V} \int_Y (-\varphi) \omega_{\varphi}^n \leq n \sup_Y
\varphi+C$ and the inequality (\ref{eqn: intsupsum})
in the case $k=2$, the same argument as in the proof of Theorem~\ref{theorem: nuconv}
implies that $\displaystyle \sup_Y \varphi(t_i)$ is uniformly
bounded. This contradicts our assumption!
Note that $\R$-coefficient \Poincare duality holds on orbifolds, and the singularities of $Y$
are isolated, so they can be enclosed in small geodesic balls contributing little to the integration.
Since $\lambda_{N_{\nu}-1}(t_i) \to 0$, as in~\cite{Tian91}, for every small $\delta>0$,
we have
\begin{align*}
\frac{1}{V} \int_Y \st \partial X_{t_i} \wedge
\bar{\partial} X_{t_i} \wedge \omega^{n-1}
\geq -\frac{(1-\delta)}{\nu} \log \lambda_{N_{\nu}-1}(t_i) -C
\end{align*}
for large $i$.
Here
$\displaystyle X_{t_i}=\frac{1}{\nu} \log \sum_{\beta=0}^{N_{\nu}} \snorm{\lambda_{\beta}(t_i)\tilde{S}_{\nu,
\beta}^{t_i}}{h_0^{\nu}}^2$.
For notation simplicity, we omit the subindex $t_i$ from now
on. It follows that
\begin{align*}
&\quad \sum_{j=0}^{n-1}
\frac{j}{V} \int_Y \st \partial \varphi \wedge
\bar{\partial} \varphi \wedge \omega^j \wedge
\omega_{\varphi}^{n-1-j}\\
&\geq \frac{n-1}{V} \int_Y \st \partial \varphi \wedge
\bar{\partial} \varphi \wedge \omega^{n-1}\\
&\geq \frac{n-1}{V} \int_Y \st \partial X \wedge
\bar{\partial} X \wedge \omega^{n-1} -C\\
&\geq -(1-\delta) \cdot \frac{(n-1)}{\nu}
\cdot \log \lambda_{N_{\nu}-1} -C.
\end{align*}
Plugging this into inequality (\ref{eqn: intsupsum}) in the case $k=2$,
we arrive at
\begin{align*}
(1-\alpha) \frac{1}{V} \int_Y \varphi \omega_{\varphi}^n + \alpha \sup_Y \varphi < C(\alpha, \nu)
+\frac{1}{1-\delta} \cdot \frac{2\alpha}{(n-1)}\sum_{i=0}^{n-1}
\frac{i}{V} \int_Y \st \partial \varphi \wedge
\bar{\partial} \varphi \wedge \omega^i \wedge
\omega_{\varphi}^{n-1-i}.
\end{align*}
Combining it with
\begin{align}
\frac{1}{V} \int_Y (-\varphi) \omega_{\varphi}^n \leq
n \sup_Y \varphi - \sum_{i=0}^{n-1}
\frac{i}{V} \int_Y \st \partial \varphi \wedge
\bar{\partial} \varphi \wedge \omega^i \wedge
\omega_{\varphi}^{n-1-i} + C
\end{align}
we have
\begin{align*}
\left(2A\alpha -(1-\alpha)\right) \frac{1}{V} \int_Y
(-\varphi) \omega_{\varphi}^n < \alpha (2An -1) \sup_Y \varphi +C,
\end{align*}
where $A=\frac{1}{(n-1)(1-\delta)}$.
Combining this with the estimate (\ref{eqn: intsupsum}) for $k=1$
implies
\begin{align*}
\left(2A\alpha -(1-\alpha)\right) \frac{1}{V} \int_Y
(-\varphi) \omega_{\varphi}^n < (2A n-1) \alpha \sup_Y \varphi +C
< (2An -1) \frac{\alpha}{\beta}(1-\beta) \frac{1}{V} \int_Y(-\varphi) \omega_{\varphi}^n
+C,
\end{align*}
where $\beta$ is any number less than $\alpha_{\nu, 1}$ and $A=\frac{1}{(n-1)(1-\delta)}$. Therefore,
we have
\begin{align}
\{ (2A +1 - \frac{1-\beta}{\beta} (2nA -1))\alpha -1\} \frac{1}{V} \int_Y(-\varphi)
\omega_{\varphi}^n<C.
\label{eqn: intphi}
\end{align}
If $\alpha_{\nu, 1} > \frac{1}{2-\frac{n-1}{(n+1)\alpha_{\nu, 2}}}$, then
we can find $\beta$ a little bit less than $\alpha_{\nu, 1}$,
$\alpha$ a little bit less than $\alpha_{\nu, 2}$, $A$ a little bit
greater than $\frac{1}{n-1}$ such that
\begin{align*}
(2A +1 - \frac{1-\beta}{\beta} (2nA -1))\alpha -1>0.
\end{align*}
Recalling our subindex $t_i$ in inequality (\ref{eqn: intphi}),
we see that $\frac{1}{V} \int_Y(-\varphi_{t_i}) \omega_{\varphi_{t_i}}^n$
is uniformly bounded from above. This implies that $\displaystyle \sup_Y \varphi_{t_i}$
is uniformly bounded. Contradiction!
\end{proof}
\section{Some Applications and Examples}
The following theorem is a direct corollary of
Theorem~\ref{theorem: surfacetamed},
Theorem~\ref{theorem: nuconv} and Theorem~\ref{theorem: nuconvr}.
\begin{theorem}
Suppose that $Y$ is an orbifold Fano surface such that one of the
following two conditions holds for every large integer $\nu$,
\begin{itemize}
\item $\alpha_{\nu, 1}> \frac23$.
\item $\alpha_{\nu, 2} > \frac23, \quad \alpha_{\nu, 1} > \frac{1}{2- \frac{1}{3\alpha_{\nu, 2}}}$.
\end{itemize}
Then $Y$ admits a KE metric.
\label{theorem: KEexist}
\end{theorem}
There are many orbifold Fano surfaces to which Theorem~\ref{theorem: KEexist}
applies. For simplicity, we only consider the good case where
every singularity is a rational double point. Orbifolds of this kind are called
Gorenstein log del Pezzo surfaces.
Let us first recall some definitions.
\begin{definition}
Suppose that $X$ is a normal variety and $D=\sum d_i D_i$ is a
$\Q$-Cartier divisor on $X$ such that $K_X +D$ is $\Q$-Cartier,
and let $f: Y \to X$ be a birational morphism, where $Y$ is
normal. We can write
\begin{align*}
K_{Y} \sim_{\Q} f^* (K_X +D) + \sum a(X, D, E)E.
\end{align*}
The discrepancy of the log pair $(X, D)$ is the number
\begin{align*}
discrep(X, D)= \inf_E \{ a(X, D, E) | E \; \textrm{is an exceptional divisor over} \;
X\}.
\end{align*}
The total discrepancy of the log pair $(X, D)$ is the number
\begin{align*}
totaldiscrep(X, D)= \inf_E \{a(X,D,E) | E \; \textrm{is a divisor over} \;
X\}.
\end{align*}
We say that the log pair $K_X +D$ is
\begin{itemize}
\item Kawamata log terminal (or log terminal) if and only if
$totaldiscrep(X, D)>-1$.
\item log canonical iff $discrep(X, D)\geq -1$.
\end{itemize}
\end{definition}
Assume now that $X$ is a variety with log terminal singularities,
let $Z \subset X$ be a closed subvariety, and let $D$ be an
effective $\Q$-Cartier divisor on $X$. The log canonical threshold of $D$ along $Z$ is then the number
\begin{align*}
lct_Z(X, D) = \sup\{ \lambda \in \Q | \textrm{the log pair} \; (X, \lambda D)\; \textrm{is log canonical along}
\; Z\}.
\end{align*}
Let $x$ be a point in $X$, $f$ be a local defining holomorphic
function of divisor $D$ around $x$, then we have
\begin{align*}
lct_x(X, D)= \alpha_x(\log f)
\end{align*}
where $\alpha_x(\log f)$ is the singularity exponent of plurisubharmonic function $\log f$ around point
$x$. (c.f. definition~\ref{definition: singexp}).
\begin{definition}
The $\nu$-th log canonical threshold of $X$ is the number
\begin{align*}
lct_{\nu}(X)= \inf\{ lct(X, \frac{1}{\nu}D) | D \; \textrm{effective} \; \textrm{$\Q$-divisor on}
\; X \; \textrm{such that} \; D \in |-\nu K_X|\}.
\end{align*}
The global log canonical threshold of $X$ is the number
\begin{align*}
lct(X)= \inf\{lct(X, D)| D \; \textrm{effective divisor of} \; X \;\textrm{such that}
\; D \sim_{\Q} -K_X\}.
\end{align*}
\end{definition}
It is not hard to see that $lct_{\nu}(X)= \alpha_{\nu, 1}$. According to the proof
of Demailly (c.f.~\cite{ChS},~\cite{SYl}), we know
\begin{align*}
\alpha(X) = lct(X) = \lim_{\nu \to \infty} lct_{\nu}(X) = \lim_{\nu \to
\infty} \alpha_{\nu, 1}.
\end{align*}
Therefore, we have
\begin{align*}
\infty=\alpha_{\nu, N_{\nu}+1} \geq \cdots \geq \alpha_{\nu, 3} \geq
\alpha_{\nu, 2} \geq \alpha_{\nu, 1}(X)=lct_{\nu}(X) \geq lct(X) = \alpha(X).
\end{align*}
The calculation of $\alpha_{\nu, k}$ is itself a very interesting
problem (c.f.~\cite{SYl},~\cite{ChS}). Here we will
use some results calculated in~\cite{Kosta}.
\begin{lemma}[\cite{Kosta}]
Let $Y$ be a Gorenstein log del Pezzo surface whose singularities are all of type
$\A_k$. Suppose $Y$ satisfies one of the following conditions.
\begin{itemize}
\item $Y$ has only singularities of type $\A_1$ or $\A_2$
and $K_Y^2=1$. $|-K_Y|$ has a cuspidal curve $C$ such that
$Sing(C)$ contains an $\A_2$ singularity.
\item $Y$ has one singularity of type $\A_5$ and $K_Y^2=1$.
\item $Y$ has one singularity of type $\A_6$ and $K_Y^2=1$.
\end{itemize}
Then $\alpha_{\nu, 1} \geq \frac23$
and $\alpha_{\nu, 2}> \frac23$.
\label{lemma: cacalpha}
\end{lemma}
\begin{proof}
The proof proceeds case by case and the main ingredients are
already contained in~\cite{Kosta}. For simplicity, we only
sketch the proof of the second case.
If $f \in \mathscr{P}(\nu, 1)$ and $\alpha_x(f) \leq \frac23$,
one can show that $f = \frac{1}{2\nu} \log \snorm{S}{h_0^{\nu}}^2$
for some $S \in H^0(K_Y^{-\nu})$. Moreover, $x$ is the unique singularity
of type $\A_5$ and $S= (S')^{\nu}$
for some $S' \in H^0(K_Y^{-1})$. $Z(S')$ is the unique
divisor passing through $x$ such that
$lct_x(Y, Z(S'))= \frac23$.
For every $\varphi \in \mathscr{P}(\nu, 2)$,
we have $e^{2\nu \varphi} = e^{2\nu \varphi_1} + e^{2\nu \varphi_2}$
where
\begin{align*}
\varphi_1= \frac{1}{2\nu} \log \snorm{S_1}{h_0^{\nu}}^2, \quad
\varphi_2= \frac{1}{2\nu} \log \snorm{S_2}{h_0^{\nu}}^2; \quad
\int_Y \langle S_1, S_2\rangle_{h_0^{\nu}} \omega_0^n =0.
\end{align*}
Clearly, for every point $y \in Y$, we have
\begin{align*}
\alpha_y(\varphi) \geq \max \{ \alpha_y(\varphi_1),
\alpha_y(\varphi_2)\}> \frac23.
\end{align*}
Since $\alpha_y(\varphi_1), \alpha_y(\varphi_2)$ can only achieve finite possible
values, we have
\begin{align*}
\inf_{y \in Y, \varphi \in \mathscr{P}(\nu, 2)}
\alpha_y(\varphi) > \frac23.
\end{align*}
By the compactness of $Y$ and the semicontinuity property proved
in~\cite{DK}, we have the inequality $\alpha_{\nu, 2} > \frac23$.
\end{proof}
Therefore, Theorem~\ref{theorem: KEexist} applies and we conclude that KE metrics
exist on the orbifolds $Y$ of Lemma~\ref{lemma: cacalpha}.
Together with Theorem 1.6 of~\cite{Kosta} and Theorem 5.1 of~\cite{SYl}, this proves the following theorem.
\begin{theorem}
Suppose $Y$ is a cubic surface with only one ordinary double point, or $Y$ is
a degree $1$ del Pezzo surface having only Du Val singularities of type $\A_k$ for $k \leq 6$. Starting from any
metric $\omega$ satisfying $[\omega]=2\pi c_1(Y)$,
the \KRf will converge to a KE metric on $Y$. In particular, $Y$ admits a KE metric.
\end{theorem}
\begin{remark}
If we consider $\alpha_{G, \nu, k}$ instead of $\alpha_{\nu, k}$
for some finite group $G \subset Aut(Y)$, it is still possible to study
the existence of KE metrics on degree $1$ Gorenstein log del Pezzo surfaces
with $\A_7$ or $\A_8$ singularities.
\end{remark}
\section{\label{sec:intro}Introduction}
One of the main objectives of {\it nonequilibrium thermodynamics} is
the study of the response of physical systems to applied external
perturbations. Around the mid of the last century major advancements
were obtained in this field with the development of linear response
theory by several authors, among which we mention
\cite{CallenWelton51,Green_JChemPhys52,Kubo57}. This theory,
inspired by the works of \citet{EinsteinBook} on the Brownian
movement and of \citet{Johnson_PR28} and \citet{Nyquist_PR28} on
noise in electrical circuits, established that, under certain
circumstances, the linear response to small perturbations is
determined by the equilibrium fluctuations of the system. In
particular, the linear response coefficients are proportional to
two-point correlation functions for Hamiltonian systems
\citep{Kubo57} as well as for stochastic, generally non-equilibrium
systems \citep{HanggiThomas82}. In principle, an infinite hierarchy
of higher order fluctuation-dissipation relations connects the
$n$-th order response coefficients to $(n+1)$-point correlation
functions.
In contrast, fluctuation theorems are compact relations that provide
information about the fully {\it nonlinear response}. Accordingly,
fluctuation-dissipation relations of all orders can be derived
therefrom. \citet{BochkovKuzovlev77,BochkovKuzovlev81} were the
first to put forward one such fully nonlinear fluctuation theorem.
These authors noticed that, for a {\it classical} system, their
general fluctuation theorem implies the following, extremely simple
nonequilibrium identity:
\begin{equation}
\langle e^{-\beta W_0}\rangle =1
\label{eq:BKE}
\end{equation}
where $W_0$ is the work done on the system by the external
perturbation during one specific realization thereof, $\langle \cdot
\rangle$ denotes the average over many realizations of the same
perturbation, and $\beta=(k_B T)^{-1}$, with $T$ the initial
temperature of the system and $k_B$ Boltzmann's constant. Due to the
properties of convexity of the exponential function, an almost
immediate consequence of (\ref{eq:BKE}) is the second law of
thermodynamics in the form $\langle W_0 \rangle \geq 0$; i.e., when
a system is perturbed from an initial thermal equilibrium, on
average, it can only absorb energy.
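Spelling out this standard one-line argument: Jensen's inequality applied to the convex function $x \mapsto e^{-\beta x}$ gives
\begin{equation*}
1 = \langle e^{-\beta W_0} \rangle \geq e^{-\beta \langle W_0 \rangle},
\end{equation*}
and taking the logarithm of both sides immediately yields $\langle W_0 \rangle \geq 0$.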
The works of \citet{BochkovKuzovlev77,BochkovKuzovlev81} have
recently regained a great deal of attention, after \citet{Jarz97}
derived, within the framework of classical mechanics, a salient
result similar to Eq. (\ref{eq:BKE}):
\begin{equation}
\langle e^{-\beta W}\rangle = e^{-\beta \Delta F}
\label{eq:JE}
\end{equation}
which, in contrast to Eq. (\ref{eq:BKE}), allows one to extract an
equilibrium property of the system, i.e., its free energy
difference $\Delta F$, from nonequilibrium fluctuations of the work $W$.
Evidently, the definitions of work adopted by \citet{Jarz97} and
\citet{BochkovKuzovlev77,BochkovKuzovlev81} (denoted here
respectively as $W$ and $W_0$) do not coincide. The relationships
between these two work definitions and the corresponding
nonequilibrium identities, Eqs. (\ref{eq:BKE},\ref{eq:JE}), were
discussed in a very clear and elucidating manner in
\citep{Jarz_CRPhys07,Jarz_JStatMech07}, which, for the sake of
clarity, we summarize below.
Let us express the time dependent
Hamiltonian of the driven classical system as the sum of the
unperturbed system Hamiltonian $H_0$ and the interaction energy
stemming from the coupling of the external time dependent
perturbation $X(t)$ to a certain system observable $Q$:
\begin{equation}
H(\mathbf{q},\mathbf{p};t)= H_0(\mathbf{q},\mathbf{p}) -
X(t)Q(\mathbf{q},\mathbf{p}).
\label{eq:H-cl}
\end{equation}
We restrict ourselves to the simplest case of a protocol governed by a
single ``field'' $X(t)$ coupling to the conjugated generalized
coordinate $Q$.
Generalization to several fields $X_i(t)$ coupling to different
generalized coordinates $Q_i$ is straightforward.
The definition of work $W$ according to \citet{Jarz97} stems from
an {\it inclusive} viewpoint, where one counts the term $X(t)Q$ as
being part of the system internal energy. In contrast the
definition of work $W_0$ according to
\citet{BochkovKuzovlev77,BochkovKuzovlev81}
belongs to an {\it exclusive} viewpoint where, instead, such a term
is not counted as part of the energy of the system. More explicitly,
if $\mathbf{q}_0,\mathbf{p}_0$ is a certain initial condition that
evolves to $\mathbf{q}_f,\mathbf{p}_f$ in a time $t_f-t_0$, according
to the Hamiltonian evolution generated by $H$, then the two different
definitions of work become:
\begin{align}
W &\doteq H(\mathbf{q}_f,\mathbf{p}_f;t_f)-
H(\mathbf{q}_0,\mathbf{p}_0;t_0)\\
W_0 & \doteq H_0(\mathbf{q}_f,\mathbf{p}_f)-
H_0(\mathbf{q}_0,\mathbf{p}_0)
\end{align}
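From Eq. (\ref{eq:H-cl}), these two definitions are seen to differ only by a boundary term; this elementary identity, which we record here for orientation, reads
\begin{equation*}
W - W_0 = X(t_0) Q(\mathbf{q}_0,\mathbf{p}_0) - X(t_f) Q(\mathbf{q}_f,\mathbf{p}_f).
\end{equation*}
In particular, the two works coincide for cyclic protocols with $X(t_0)=X(t_f)=0$.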
It is important to stress that \citet{BochkovKuzovlev77,BochkovKuzovlev81}
obtained Eq. (\ref{eq:BKE}) only in the {\it classical} case,
notwithstanding the fact that they developed a quantum version of
their theory, as well. This difficulty is related to the fact that
work was identified by \citet{BochkovKuzovlev77,BochkovKuzovlev81} with the
quantum
expectation of a pretended {\it work operator}, given by the
difference of final and initial Hamiltonian in the
Heisenberg representation.
To be more precise, if the quantum Hamiltonian reads:
\begin{equation}
H(t)= H_0-X(t) Q
\label{eq:H}
\end{equation}
where now $ H, H_0$ and $ Q$ are Hermitian operators, the work
operator was defined by \citet{BochkovKuzovlev77,BochkovKuzovlev81} as:
\footnote{\citet{BochkovKuzovlev77} defined the ``operator of energy absorbed by
the system'' $E=\int_{t_0}^{t_f}X(\tau)Q^H(\tau)d\tau$, where $Q^H(\tau)$ is the
operator $Q$ in the Heisenberg representation. It is not difficult to prove that
$E$ coincides with $W_0$ in Eq. (\ref{eq:work-op-ex}).}
\begin{equation}
W_0 \doteq H_0^H(t_f)- H_0
\label{eq:work-op-ex}
\end{equation}
where the superscript $H$ denotes Heisenberg picture.
A similar approach was employed also within the inclusive viewpoint,
with work defined as \citep{Allahverdyan_PRE05}
\begin{equation}
W = H^H(t_f)- H_0
\label{eq:work-op-in}
\end{equation}
As some of us pointed out in
\citep{Lutz_PRE07}, the Jarzynski equality (\ref{eq:JE}) {\it cannot}
be obtained on the basis of the work operator (\ref{eq:work-op-in}).
Likewise the Bochkov-Kuzovlev identity (\ref{eq:BKE}) cannot be
obtained on the basis of Eq. (\ref{eq:work-op-ex}). It is by now
clear that the impossibility of extending the classical results
(\ref{eq:BKE},\ref{eq:JE}) on the basis of quantum work operators
(\ref{eq:work-op-ex},\ref{eq:work-op-in}), respectively, is related
to the fact that work characterizes a process, rather than a state
of the system, and consequently cannot be represented by an
observable that would yield work as the result of a single
projective measurement. In contrast, energy must be measured twice
in order to determine work, once before the protocol starts and a
second time immediately after it has ended. The difference of the
outcomes of these two measurements then yields the work performed on
the system in a particular realization \citep{Lutz_PRE07}.
In this paper we will adopt the exclusive viewpoint of
\citet{BochkovKuzovlev77,BochkovKuzovlev81}, but use the proper
definition of work as the difference between the outcomes of two
projective measurements of $H_0$, to obtain the quantum version of
Eq. (\ref{eq:BKE}). Indeed we will develop the theory of quantum
work fluctuations within the exclusive two-point measurements
viewpoint in great generality. In Sec. \ref{sec:CFW} we study the
characteristic function of work. In Sec. \ref{sec:can} and Sec.
\ref{sec:mu-can} we derive the exclusive versions of the
Tasaki-Crooks quantum fluctuation theorem
\citep{TH_JPA07,TCH_JStatMech09,CTH_PRL09}, and of the
microcanonical quantum fluctuation theorem \citep{THM_PRE08},
respectively. Sec. \ref{sec:discussion} closes the paper with some
remarks concerning the relationships between the inclusive-work, the
exclusive-work, and the dissipated-work.
\section{\label{sec:CFW}Characteristic function of work}
As mentioned in the introduction, work is properly defined in
quantum mechanics as the difference of the energies measured at
the beginning and end of the protocol, i.e., at times $t_0$ and
$t_f>t_0$, respectively.
According to the exclusive
viewpoint that we adopt here, this energy is given by the
unperturbed Hamiltonian $H_0$. Let $e_{n}$ and
$|n,\lambda\rangle$, denote the eigenvalues and eigenvectors of
$H_0$:
\begin{equation}
H_0 |n,\lambda \rangle=e_{n} |n,\lambda\rangle\, .
\end{equation}
Here $n$ is the quantum number labeling the eigenvalues of $H_0$ and
$\lambda$ denotes further quantum numbers needed to specify uniquely
the state of the system, in case of degenerate energies. A
measurement of $H_0$ at time $t_0$ gives a certain eigenvalue $e_n$
while a subsequent measurement of $H_0$ at time $t_f$ gives another
eigenvalue $e_m$, so that
\begin{equation}
w_0 = e_m-e_n
\end{equation}
Evidently $w_0$ is a stochastic variable due to the intrinsic
randomness entailed in the quantum measurement processes and
possibly in the statistical mixture nature of the initial
preparation. In the following we derive the statistical properties
of $w_0$, in terms of its probability density function (pdf), and the
associated characteristic function of work.
Let the system be prepared at some time $t<t_0$ in a certain state
that, at time $t_0$, is described by the density matrix $\rho(t_0)$. We further assume
that the perturbation $X(t)$ is switched on at time $t_0$. At the
same time the first measurement of $H_0$ is performed, with
outcome $e_n$. This occurs with probability:
\begin{equation}
p_n= \sum_\lambda \langle n, \lambda| \rho(t_0)|n, \lambda \rangle=
\mbox{Tr} \, P_n \rho(t_0)
\end{equation}
where $P_n$ is the projector onto the eigenspace spanned by the
eigenvectors belonging to the eigenvalue $e_n$:
\begin{equation}
P_n = \sum_{\lambda} |n,\lambda\rangle \langle n, \lambda |
\end{equation}
and $\mbox{Tr}$ denotes trace over the Hilbert space.
According to the postulates of quantum mechanics, immediately
after the measurement the system is found in the state:
\begin{equation}
\rho_n=P_n \rho(t_0) P_n /p_n \, .
\end{equation}
For times $t>t_0$ the system evolves according to
\begin{equation}
\rho(t)=U_{t,t_0}\rho_n U^{\dagger}_{t,t_0}
\end{equation}
with $U_{t,t_0}$ denoting the unitary time evolution operator obeying the
Schr{\"o}dinger equation governed by the full time dependent
Hamiltonian (Eq. \ref{eq:H}):
\begin{equation}
i\hbar \partial_t \, U_{t,t_0}=H(t)U_{t,t_0}\, , \qquad U_{t_0,t_0}=1 \, .
\end{equation}
At time $t_f$ the second measurement of $H_0$ is performed, and
the eigenvalue $e_m$ is obtained with probability:
\begin{equation}
p(m|n) = \mbox{Tr} \,P_m \rho_n(t_f) \, .
\end{equation}
Therefore the probability density to observe a certain value of
work $w_0$ is given by:
\begin{equation}
p^0_{t_f,t_0}(w_0)= \sum_{m,n} \delta(w_0-[e_m-e_n])p(m|n)p_n \, .
\end{equation}
We use the superscript $0$ throughout this paper to indicate the
exclusive viewpoint. The same symbols, without the superscript $0$
denote the respective quantities within the inclusive viewpoint.
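The construction above lends itself to a direct numerical illustration. The following sketch (ours, not from the original papers; the qubit Hamiltonian $H_0=\sigma_z/2$, the perturbation $Q=\sigma_x$, the linear protocol, and the maximally mixed preparation are all hypothetical choices) builds the discrete pdf $p^0_{t_f,t_0}(w_0)$ from the probabilities $p_n$ and $p(m|n)$:

```python
import numpy as np

# Sketch (hypothetical parameters): exclusive two-point-measurement work
# statistics for a driven qubit, H(t) = H0 - X(t) Q, with H0 = sigma_z / 2
# and Q = sigma_x, in units with hbar = 1.
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
H0, Q = 0.5 * sz, sx

def propagator(X, t0=0.0, tf=1.0, steps=4000):
    """U_{tf,t0} as a time-ordered product of short-time propagators."""
    U, dt = np.eye(2, dtype=complex), (tf - t0) / steps
    for k in range(steps):
        H = H0 - X(t0 + (k + 0.5) * dt) * Q
        w, V = np.linalg.eigh(H)                    # exact exp(-i H dt)
        U = (V * np.exp(-1j * w * dt)) @ V.conj().T @ U
    return U

def work_pdf(rho0, U):
    """Discrete pdf of w0 = e_m - e_n, with weights p(m|n) p_n."""
    e, B = np.linalg.eigh(H0)                       # eigenvalues/eigenbasis of H0
    p_n = np.real(np.diag(B.conj().T @ rho0 @ B))   # p_n = <n|rho(t0)|n>
    p_mn = np.abs(B.conj().T @ U @ B) ** 2          # p(m|n) = |<m|U|n>|^2
    pdf = {}
    for n in range(2):
        for m in range(2):
            w0 = e[m] - e[n]
            pdf[w0] = pdf.get(w0, 0.0) + p_mn[m, n] * p_n[n]
    return pdf

U = propagator(X=lambda t: 0.8 * t)                 # linear switch-on protocol
pdf = work_pdf(np.eye(2) / 2.0, U)                  # maximally mixed preparation
print({round(w, 3): round(p, 4) for w, p in pdf.items()})
```

By construction the weights sum to one for any unitary $U$ and any preparation, reflecting the normalization of $p^0_{t_f,t_0}(w_0)$.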
The Fourier transform of the probability density of work gives the
characteristic function of work
\begin{equation}
G^0_{t_f,t_0}(u)= \int dw_0 p^0_{t_f,t_0}(w_0) e^{iuw_0}
\end{equation}
which allows quick derivations of fluctuation theorems and
nonequilibrium equalities. Performing calculations analogous to
those reported by \citet{THM_PRE08} we find the characteristic
function of work, in the form of a two-point {\it quantum
correlation function}:
\begin{equation}
G^0_{t_f,t_0}(u)= \mbox{Tr} \, e^{i u H_0^H(t_f)} e^{-i u H_0}\bar
\rho(t_0) \equiv \langle e^{i u H_0^H(t_f)} e^{-i u H_0}\rangle
\label{eq:G0}
\end{equation}
where $\bar \rho(t_0)$ is defined as:
\begin{equation}
\bar \rho(t_0) = \sum_n p_n \rho_n =\sum_n P_n \rho(t_0) P_n
\label{eq:bar-rho}
\end{equation}
and the superscript $H$ stands for Heisenberg representation,
i.e.:
\begin{equation}
H^H_0(t_f) = U^{\dagger}_{t_f,t_0} H_0 U_{t_f,t_0} \label{eq:H^H}
\end{equation}
This exclusive-work characteristic function $G^0_{t_f,t_0}$ should
be compared to the in\-clusive-work characteristic function
$G_{t_f,t_0}$ that is obtained when looking at the difference $w$ of
the outcomes $E_n(t_0)$ and $E_m(t_f)$ of measurements of the {\it
total} time dependent Hamiltonian $H(t)$. In this case one obtains
\citep{Lutz_PRE07,THM_PRE08}:
\begin{equation}
G_{t_f,t_0}(u)= \mbox{Tr} \, e^{i u H^H(t_f)} e^{-i u H_0}\bar \rho(t_0)
\equiv \langle e^{i u H^H(t_f)} e^{-i u H_0}\rangle
\end{equation}
The difference is that $H_0^H(t_f)$ appears
in the exclusive approach in place of the full $H^H(t_f)$.
\subsection{\label{sec:rev}Reversed protocol}
Consider next the reversed protocol
\begin{equation}
\widetilde X(t) = X(t_f+t_0-t)
\label{eq:Xtilde}
\end{equation}
which consecutively assumes values as if time was reversed.
Let $\widetilde H(t)$ be the resulting Hamiltonian:
\begin{equation}
\widetilde H(t) = H_0 - \widetilde X(t) Q
\label{eq:Htilde}
\end{equation}
The characteristic function of work now reads:
\begin{equation}
\widetilde G^0_{t_f,t_0}(u)= \mbox{Tr} \, e^{i u \widetilde H_0^H(t_f)}
e^{-i u H_0}\bar \rho(t_0) \equiv \langle e^{i u\widetilde
H_0^H(t_f)} e^{-i u H_0}\rangle
\end{equation}
where
\begin{equation}
\widetilde H^H_0(t_f) = \widetilde U^{\dagger}_{t_f,t_0} H_0
\widetilde U_{t_f,t_0} \label{eq:H^H-tilde}
\end{equation}
and $\widetilde U_{t_f,t_0}$ is the time evolution operator
generated by $\widetilde H(t)$:
\begin{equation}
i\hbar \partial_t \,\widetilde U_{t,t_0}=\widetilde H(t)\widetilde
U_{t,t_0} \qquad \widetilde U_{t_0,t_0}=1
\end{equation}
Assuming that the Hamiltonian $H(t)$ is invariant under time
reversal, i.e.,\footnote{Here we assume that the Hamiltonian does
not depend on any odd parameter, e.g., a magnetic field. Treating
that case is straightforward and amounts to reversing the sign of
the odd parameter in the r.h.s. of Eq. (\ref{eq:Theta-H-Theta}),
see \citep{Andrieux_NJP09}.}
\begin{equation}
\Theta H(t) \Theta^{-1} = H(t)\, ,
\label{eq:Theta-H-Theta}
\end{equation}
where $\Theta$ is the antiunitary time reversal operator
\citep{MessiahBook},
the time evolution operators associated to the forward and
backward protocols are related by the following important relation,
see \ref{app:theta}:
\begin{equation}
U_{t_0,t_f}= U_{t_f,t_0}^{\dagger} = \Theta \widetilde U_{t_f,t_0} \Theta^{-1}
\, .
\label{eq:Theta-U-Theta}
\end{equation}
In the following section we will derive the quantum version of Eq.
(\ref{eq:BKE}) and its associated work fluctuation theorem. This
will be accomplished by choosing the initial density matrix to be
a Gibbs canonical state. In Sec. \ref{sec:mu-can} we will,
instead, assume an initial microcanonical state.
\section{\label{sec:can}Canonical initial state}
For a system staying at time $t_0$ in a canonical Gibbs state:
\begin{equation}
\rho(t_0)=\bar \rho(t_0)= e^{-\beta H_0}/Z_0
\label{eq:rho-can}
\end{equation}
where $Z_0= \mbox{Tr}\, e^{-\beta H_0}$. Here $\bar \rho (t_0)$ coincides
with $\rho(t_0)$ because the latter is diagonal with respect to the
eigenbasis of $H_0$ (see Eq. \ref{eq:bar-rho}). Plugging Eq.
(\ref{eq:rho-can}) into (\ref{eq:G0}), we obtain:
\begin{equation}
G^0_{t_f,t_0}(\beta;u) = \mbox{Tr} \, e^{i u H_0^H(t_f)} e^{-i u H_0}
e^{-\beta H_0}/Z_0
\label{eq:G0-can}
\end{equation}
where for completeness we have listed the dependence upon $\beta$
among the arguments of $ G^0_{t_f,t_0}$. The quantum version of Eq.
(\ref{eq:BKE}) immediately follows by setting $u=i\beta$:
\begin{align}
\langle e^{-\beta w_0}\rangle =G^0_{t_f,t_0}(\beta;i\beta)= \mbox{Tr} \,
e^{-\beta H_0^H(t_f)}/Z_0=\mbox{Tr} \, e^{-\beta H_0}/Z_0=1
\label{eq:BKE-quantum}
\end{align}
where in the third equation we have used Eq. (\ref{eq:H^H}), the
cyclic property of the trace and the unitarity of the time
evolution operator: $U^{\dagger}_{t_f,t_0} U_{t_f,t_0}=1$.
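Since unitarity alone guarantees the result, Eq. (\ref{eq:BKE-quantum}) can be checked numerically with an arbitrary unitary standing in for the protocol-generated evolution. A minimal sketch (ours; the qubit spectrum and the value of $\beta$ are hypothetical):

```python
import numpy as np

# Check of <exp(-beta w0)> = 1 for two-point measurements of H0 on a qubit,
# with a random unitary in place of U_{tf,t0}: the identity holds for ANY U.
rng = np.random.default_rng(0)
H0 = 0.5 * np.diag([1.0, -1.0]).astype(complex)
e, B = np.linalg.eigh(H0)
beta = 2.0
p_n = np.exp(-beta * e) / np.exp(-beta * e).sum()   # Gibbs weights e^{-beta e_n}/Z0

U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
p_mn = np.abs(B.conj().T @ U @ B) ** 2              # p(m|n) = |<m|U|n>|^2

avg = sum(p_mn[m, n] * p_n[n] * np.exp(-beta * (e[m] - e[n]))
          for n in range(2) for m in range(2))
print(round(avg, 12))                                # -> 1.0
```

The average equals one to machine precision regardless of the drawn unitary, mirroring the trace argument above.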
Moreover we find the following important relation between $G^0_{t_f,t_0}$
and $\widetilde G^0_{t_f,t_0}$, see \ref{app:G}:
\begin{equation}
G^0_{t_f,t_0}(\beta;u)= \widetilde G^0_{t_f,t_0}(\beta;-u+i\beta) \, .
\label{eq:G=Gtilde}
\end{equation}
By means of inverse Fourier transform, the following quantum
Bochkov-Kuzovlev fluctuation relation between the forward and
backward work probability density functions is obtained:
\begin{equation}
\frac{p^0_{t_f,t_0}(\beta;w_0)}{\widetilde
p^0_{t_f,t_0}(\beta;-w_0)} = e^{\beta w_0} \, .
\label{eq:BK-FT}
\end{equation}
This must be compared to the quantum Tasaki-Crooks relation that
is obtained within the inclusive viewpoint \citep{TH_JPA07}:
\begin{equation}
\frac{p_{t_f,t_0}(\beta;w)}{\widetilde p_{t_f,t_0}(\beta;-w)} =
e^{\beta (w-\Delta F)} \label{eq:TC-FT}
\end{equation}
where, in contrast to Eq. (\ref{eq:BK-FT}) the term $\Delta
F=-\beta^{-1}[ \ln {\mbox{Tr} \, e^{-\beta H(t)}}- \ln {\mbox{Tr} \, e^{-\beta
H_0}}]$, appears.
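The contrast with the inclusive case can be made concrete: when the second measurement is of the full $H(t_f)$ instead of $H_0$, the same unitarity argument yields $\langle e^{-\beta w}\rangle = Z_f/Z_0 = e^{-\beta \Delta F}$. A sketch (ours; the qubit, the protocol endpoint $X(t_f)$, and $\beta$ are hypothetical):

```python
import numpy as np

# Inclusive check: measuring H(t_f) = H0 - X(t_f) Q at the end gives
# <exp(-beta w)> = Zf/Z0 = exp(-beta DeltaF), for ANY unitary evolution.
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
beta, Xf = 1.5, 0.7
H0, Hf = 0.5 * sz, 0.5 * sz - Xf * sx

e0, B0 = np.linalg.eigh(H0)
ef, Bf = np.linalg.eigh(Hf)
Z0, Zf = np.exp(-beta * e0).sum(), np.exp(-beta * ef).sum()
p_n = np.exp(-beta * e0) / Z0

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
p_mn = np.abs(Bf.conj().T @ U @ B0) ** 2        # final measurement in H(t_f) basis

lhs = sum(p_mn[m, n] * p_n[n] * np.exp(-beta * (ef[m] - e0[n]))
          for n in range(2) for m in range(2))
print(abs(lhs - Zf / Z0) < 1e-12)                # -> True
```

Setting $X_f=0$ reduces $H(t_f)$ to $H_0$ and $\Delta F$ to zero, recovering the exclusive identity.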
\subsection{Remarks}
Eqs. (\ref{eq:BKE-quantum}, \ref{eq:BK-FT}) constitute original
quantum results that do not appear in the works of
\citet{BochkovKuzovlev77,BochkovKuzovlev81}. In the {\it classical}
case they found a fluctuation theorem similar to Eq.
(\ref{eq:BK-FT}), reading:
\begin{equation}
\frac{P[Q(\tau);X(\tau)]}{P[\varepsilon \widetilde Q(\tau) ;\varepsilon
\widetilde X(\tau)]} = \exp\left[\beta \int_{t_0}^{t_f} X(\tau)\dot
Q(\tau)\, d\tau\right]
\label{eq:BK-FT-QX}
\end{equation}
where $P[Q(\tau);X(\tau)]$ is the probability density {\it
functional} to observe a certain trajectory $Q(\tau)$ given a
certain protocol $X(\tau)$. Here $Q(\tau)$ is a shorthand notation
for
$Q(\mathbf{q}(\mathbf{q}_0,\mathbf{p}_0,\tau),\mathbf{p}(\mathbf{q}_0,\mathbf{p}_0,\tau))$,
see Eq. (\ref{eq:H-cl}), where
$(\mathbf{q}(\mathbf{q}_0,\mathbf{p}_0,\tau),\mathbf{p}(\mathbf{q}_0,\mathbf{p}_0,\tau))$
is the evolved initial condition $(\mathbf{q}_0,\mathbf{p}_0)$ at some
time $\tau \in [t_0,t_f]$, for a certain protocol $X(\tau)$. The
symbol $\varepsilon$ denotes the parity of the observable $Q$ under
time reversal (assumed to be equal to $1$ in this paper). The symbol
$\sim$ denotes quantities referring to the reversed protocol. The
{\em classical} probability of work $p^{cl,0}_{t_f,t_0}(W_0)$ is
obtained from the $Q$-trajectory probability density functional
$P[Q(\tau);X(\tau)]$ via the formula:
\begin{equation}
p^{cl,0}_{t_f,t_0}(W_0)=
\int \mathcal{D}Q(\tau) \,P[Q(\tau);X(\tau)] \,
\delta\left[{W_0-\int_{t_0}^{t_f} X(\tau)\dot Q(\tau)\, d\tau} \right]
\end{equation}
where the integration is a functional integration over all possible trajectories
such that $\int_{t_0}^{t_f} X(\tau)\dot Q(\tau)\, d\tau=W_0$. With this formula one
finds from Eq. (\ref{eq:BK-FT-QX}) the exclusive version of the classical Crooks
fluctuation theorem for the work probability densities \citep{Jarz_JStatMech07}
\begin{equation}
p^{cl,0}_{t_f,t_0}(\beta;W_0)=\widetilde p^{cl,0}_{t_f,t_0}(\beta;-W_0)e^{\beta
W_0} \,.
\end{equation}
Notably, a quantum version of Eq. (\ref{eq:BK-FT-QX}) does not
exist because ``in the quantum case it is impossible to
introduce unambiguously a [...] probability functional''
\citep{BochkovKuzovlev81}. It is only by giving up the idea of true
quantum trajectories and embracing instead the two-point measurement
approach that the quantum exclusive fluctuation theorem Eq.
(\ref{eq:BK-FT}) can be obtained, and has been obtained here, for
the first time.
\section{\label{sec:mu-can}Microcanonical initial state}
We consider next an initial microcanonical state of energy
$E$, which can formally be expressed as:
\begin{equation}
\rho(t_0)=\bar \rho(t_0)= \delta(H_0-E)/\Omega_0(E) \, ,
\label{eq:rho-mu-can}
\end{equation}
wherein $\Omega_0(E)= \mbox{Tr} \, \delta(H_0-E)$. Actually one has to
replace the singular Dirac function $\delta(x)$ by a smooth function
sharply peaked around $x=0$, but with infinite support. A normalized
Gaussian with arbitrarily small width serves this purpose well.
With this choice of initial condition, the characteristic function
of work reads:
\begin{align}
G^0_{t_f,t_0}(E;u) &= \mbox{Tr} \, e^{i u H_0^H(t_f)} e^{-i u H_0} \delta
(H_0-E)/\Omega_0(E) \nonumber \\
&= \mbox{Tr} \, e^{i u [H_0^H(t_f)-E]} \delta (H_0-E)/\Omega_0(E)
\label{eq:G0-mu-can}
\end{align}
where for completeness we listed the dependence upon $E$ among the
arguments of $ G^0_{t_f,t_0}$. By applying the inverse Fourier
transform we obtain:
\begin{equation}
p^0_{t_f,t_0}(E;w_0)= \mbox{Tr} \, \delta(H_0^H(t_f)-E-w_0)\delta
(H_0-E)/\Omega_0(E)
\label{eq:p-mu-can}
\end{equation}
Likewise, for the reversed protocol,
\begin{equation}
\widetilde p^0_{t_f,t_0}(E;w_0)= \mbox{Tr} \, \delta(\widetilde
H_0^H(t_f)-E-w_0)\delta (H_0-E)/\Omega_0(E)
\label{eq:p-mu-can-Rev}
\end{equation}
is found.
We then find the following relation between the forward and
backward work probability densities, see \ref{app:OmegaP}:
\begin{equation}
\Omega_0(E) p^0_{t_f,t_0}(E;w_0) = \Omega_0(E+w_0) \widetilde
p^0_{t_f,t_0}(E+w_0;-w_0)
\label{eq:OmegaP=OmegaPtilde}
\end{equation}
Then, the quantum microcanonical fluctuation theorem reads,
within the exclusive viewpoint:
\begin{equation}
\frac{p^0_{t_f,t_0}(E;w_0)}{\widetilde p^0_{t_f,t_0}(E+w_0;-w_0)}
= \frac{\Omega_0(E+w_0)}{\Omega_0(E)}\label{eq:BK-FT-mu}
\end{equation}
This must be compared to the quantum microcanonical fluctuation
theorem, obtained within the inclusive viewpoint \citep{THM_PRE08}
\begin{equation}
\frac{p_{t_f,t_0}(E;w)}{\widetilde p_{t_f,t_0}(E+w;-w)} =
\frac{\Omega_f(E+w)}{\Omega_0(E)}\label{eq:TC-FT-mu}
\end{equation}
The difference lies in the fact that within the exclusive
viewpoint the density of states at the final energy $E+w_0$ is
determined by the unperturbed Hamiltonian, i.e., $\Omega_0(E+w_0)=\mbox{Tr}
\, \delta (H_0-(E+w_0))$, whereas it results from the total
Hamiltonian in the inclusive approach: $\Omega_f(E+w)=\mbox{Tr} \,
\delta (H(t_f)-E-w)$.
Eq. (\ref{eq:TC-FT-mu}) was first obtained
within the classical framework by \citet{Cleuren_PRL06}. It is not
difficult to see that Eq. (\ref{eq:BK-FT-mu}) holds classically as
well.
\subsection{Remarks}
Just as Eq. (\ref{eq:BK-FT}), this Eq. (\ref{eq:BK-FT-mu}) is a new
result that was not reported before by
\citet{BochkovKuzovlev77,BochkovKuzovlev81}. It is very interesting
to notice, however, that those authors already put forward a {\it
classical} fluctuation theorem for the microcanonical ensemble,
which can be recast in the form \citep{BochkovKuzovlev81}:
\begin{equation}
\frac{P[I(\tau);X(\tau);E]}{P[-\varepsilon \widetilde I(\tau);\varepsilon
\widetilde X(\tau);E+W_0]} =\frac{\Omega_0(E+W_0)}{\Omega_0(E)}
\label{eq:BK-FT-mu-IX}
\end{equation}
where $P[I(\tau);X(\tau);E]$ is the probability density {\it functional} to
observe a certain trajectory $I(\tau)$ given a certain protocol and an initial
microcanonical ensemble of energy $E$. Here
\begin{equation}
I(\tau) = \dot
Q(\mathbf{q}(\mathbf{q}_0,\mathbf{p}_0,\tau),\mathbf{p}(\mathbf{q}_0,\mathbf{p}
_0,\tau))
\end{equation}
denotes the {\it current}.
By functional integration the classical microcanonical theorem for the pdf of
work
\begin{equation}
\frac{p^{cl,0}_{t_f,t_0}(E;W_0)}{\widetilde p^{cl,0}_{t_f,t_0}(E+W_0;-W_0)}=
\frac{\Omega_0(E+W_0)}{\Omega_0(E)}
\end{equation}
is obtained from (\ref{eq:BK-FT-mu-IX}) in the same way as
(\ref{eq:BK-FT}) follows from (\ref{eq:BK-FT-QX}). However the
quantum version of (\ref{eq:BK-FT-mu-IX}) does not exist and the
derivation of the quantum microcanonical fluctuation theorem
(\ref{eq:BK-FT-mu}) is indeed only possible if the two-point
measurement approach is adopted.
The fluctuation relations of Eqs. (\ref{eq:BK-FT-mu},
\ref{eq:TC-FT-mu}) can be further expressed in terms of entropy,
according to the rules of statistical mechanics.
Following \cite{Gibbs02} two different prescriptions are found in
textbooks to obtain the entropy associated to the microcanonical
ensemble:
\begin{align}
s(E)&= k_B \ln \Omega(E) \, , \qquad \Omega(E) = \mbox{Tr}\,\delta(H-E) \\
S(E)&= k_B \ln \Phi(E) \, , \qquad \Phi(E) = \mbox{Tr}\, \theta(E-H)
\end{align}
The two definitions coincide for large systems with short range
interactions among their constituents, but may substantially differ
if the size of the system under study is small. It is by now clear
that, of the two, only the second -- customarily called ``Hertz
entropy'' -- is the fundamentally correct one
\citep{Hertz1,Hertz2,Schluter48,Pearson85,Campisi05,Campisi08,Campisi_AJP10,Dunkel06}.\footnote{It is interesting to notice that Einstein was
well aware of
the works of \citet{Hertz1,Hertz2} which he praised as excellent
(``vortrefflich'') \citep{Einstein11}.}
Using the microcanonical expression for the
temperature $k_B T(E)=(\partial S(E)/ \partial E)^{-1}=\Phi(E)/\Omega(E)$, we can re-express the
quantum microcanonical Bochkov-Kuzovlev fluctuation relation in
terms of entropy and temperature as:
\begin{equation}
\frac{p^0_{t_f,t_0}(E;w_0)}{\widetilde p^0_{t_f,t_0}(E+w_0;-w_0)}
= \frac{T_0(E)}{T_0(E+w_0)}e^{[S_0(E+w_0)-S_0(E)]/k_B} \label{eq:BK-FTb}
\end{equation}
where the subscript $0$ in $T$ and $S$ denotes that these
quantities are calculated for the unperturbed Hamiltonian $H_0$.
Likewise, adopting the inclusive viewpoint one obtains:
\begin{equation}
\frac{p_{t_f,t_0}(E;w)}{\widetilde p_{t_f,t_0}(E+w;-w)} =
\frac{T_0(E)}{T_f(E+w)}e^{[S_f(E+w)-S_0(E)]/k_B} \label{eq:TC-FTb}
\end{equation}
where the subscript $f$ in $T$ and $S$ denotes that these
quantities are calculated for the total final Hamiltonian
$H(t_f)$. \footnote{If instead of the microcanonical ensemble
(\ref{eq:rho-mu-can}), the modified microcanonical ensemble $\varrho(t_0)=\theta(E-H_0)/[\mbox{Tr}\, \theta(E-H_0)]$, \citep{Ruelle}
would be used as the initial equilibrium state, then the fluctuation
theorem assumes the same form as in Eq. (\ref{eq:TC-FTb}),
but without the ratio of temperatures \citep{THM_PRE08}.}
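The difference between the two microcanonical prescriptions, and the resulting Hertz temperature $k_B T_0(E)=\Phi_0(E)/\Omega_0(E)$, can be made explicit on a toy spectrum. The sketch below is ours; the truncated harmonic spectrum and the smoothing width are illustrative only -- for a discrete spectrum the Gaussian width must be comparable to the level spacing for the smoothed $\Omega_0$ to be a meaningful density of states:

```python
import numpy as np
from math import erf, sqrt, pi

# Toy example: Omega0(E) = Tr delta(H0 - E) with a Gaussian-smoothed delta,
# Phi0(E) = Tr theta(E - H0) with the matching smoothed step, for the
# (truncated) harmonic spectrum e_n = n + 1/2.
levels = np.arange(200) + 0.5
sigma = 1.0                                   # comparable to the level spacing

def Omega0(E):
    return np.exp(-0.5 * ((E - levels) / sigma) ** 2).sum() / (sigma * sqrt(2 * pi))

def Phi0(E):
    return sum(0.5 * (1.0 + erf((E - e) / (sigma * sqrt(2.0)))) for e in levels)

E = 10.5
kT = Phi0(E) / Omega0(E)                      # Hertz temperature k_B T0(E)
print(round(kT, 2))                           # ~ 10.5, i.e. k_B T0(E) ≈ E here
```

For this spectrum the smoothed level density is close to one state per unit energy, so $\Phi_0(E) \approx E$ and the Hertz temperature grows linearly with $E$, illustrating how $T_0$ and $S_0$ enter the prefactor of Eq. (\ref{eq:BK-FTb}).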
\section{\label{sec:discussion}Discussion}
We derived the quantum Bochkov-Kuzovlev identity as well as the
quantum canonical and microcanonical work fluctuation theorems within
the exclusive approach, and have elucidated their relations to the original
works of \citet{BochkovKuzovlev77,BochkovKuzovlev81}.
The extension of the corresponding classical theorems to the quantum
regime is only possible thanks to the proper definition of work as a
two-time quantum observable.
We close with two comments: \footnote{Similar remarks were made also
within the classical framework by \cite{Jarz_CRPhys07}.}
1. For a cyclic process, $X(t_f)=X(t_0)$, inclusive and exclusive
work fluctuation theorems coincide. However in no way is it true
that the exclusive approach of Bochkov \& Kuzovlev, adopted here,
is restricted to cyclic processes, as some authors have suggested
\citep{Allahverdyan_PRE05,Cohen_2005,Andrieux_PRL08}. As stressed in
the introduction, the difference between the two approaches originates
from the different definitions of work, and is not related to
whether or not the process under study is cyclic.
2. Within the inclusive approach it is natural to define the
\emph{dissipated work} as $w_{dis}=w-\Delta F$
\citep{Kawai_PRL07,Jarzynski_EPL09}. Then, the Jarzynski equality
(\ref{eq:JE}) can be rewritten as $\langle e^{-\beta w_{dis}}\rangle
=1$. This might make one believe that the exclusive work $w_0$
coincides with the dissipated work $w_{dis}$. This, though, would be
generally wrong. The dissipated work $w_{dis}$ is a stochastic
quantity whose statistics, given by
$p^{dis}_{t_f,t_0}(w_{dis})=p_{t_f,t_0}(w_{dis} +\Delta F)$, in
general does not coincide with the statistics of exclusive work
$w_0$, given by $p^{0}_{t_f,t_0}(w_0)$. See \ref{app:WdissPDF} for a
counterexample. Only for a cyclic process, for which $\Delta F=0$,
does the dissipated-work $w_{dis}$ coincide with the inclusive-work
$w$, which in turn coincides with the exclusive-work $w_0$.
\subsection*{Acknowledgments}
The authors gratefully acknowledge financial support from the German
Excellence Initiative via the {\it Nanosystems Initiative Munich}
(NIM), the Volkswagen Foundation (project I/80424), and the DFG via
the collaborative research center SFB-486, Project A-10, via the DFG
project no. HA1517/26--2.
\section{Introduction}
The epoch of reionization (EoR) started with the formation of the first sources of light around $z=15 - 30$. As shown
by the Gunn-Peterson effect \citep{Gunn65} in the spectra of high-redshift quasars (QSOs) (e.g.,
\citealt{Fan06}), the universe was fully reionized by $z \sim 6$. WMAP 5-year results show that the optical depth
for the Thomson scattering of
CMB photons traveling through the reionizing universe is $\tau= 0.084 \pm 0.016$ \citep{Koma08}. Together
with the Gunn-Peterson data, this strongly favors an extended reionization period lasting from $z \gtrsim 11$ to $z=6$.
While other observations,
such as the Lyman-$\alpha$ emitter luminosity function \citep{Ouch09}, may produce other constraints on the history of
reionization in the next few years, the most promising is the observation of the $21$-cm line in the neutral
IGM using large radio-interferometers
(LOFAR\footnote{LOw Frequency ARray, \url{http://www.lofar.org}},
MWA\footnote{Murchison Widefield Array, \url{http://mwatelescope.org/}},
GMRT\footnote{Giant Metrewave Radio Telescope, \url{http://www.gmrt.ncra.tifr.res.in}},
21-CMA\footnote{21 Centimeter Array, \url{http://21cma.bao.ac.cn/}}, SKA\footnote{Square Kilometre Array,
\url{http://www.skatelescope.org/}}).
The signal will be observed in emission or in absorption against the CMB continuum. Both theoretical modeling
\citep{Mada97,Furl06a} and simulations (e.g., \citealt{Ciar03c,Gned04,Mell06b,Lidz08,Ichi09,Thom09})
show that the brightness temperature fluctuations of the 21 cm signal have an
amplitude of a few $10$ mK in emission, on scales from tens of arcmin down to sub-arcmin.
With this amplitude, and ignoring the issue of
foreground cleaning residuals, statistical quantities such as the three-dimensional
power spectrum should be measurable with LOFAR or MWA with a few
hundred hours of integration \citep{Mora04,Furl06a,Lidz08}.
In absorption however, the amplitude of the fluctuations
may exceed $100$ mK \citep{Gned04,Sant08,Baek09}, the exact
level depending on the relative contribution of the X-ray and UV sources to the process
of cosmic reionization.
The signal will be seen in
absorption during the initial phase of reionization, probably at $z> 10$,
when the accumulated amount of emitted X-ray is not yet sufficient to raise the
IGM temperature above the CMB temperature. The duration and intensity of this
absorption phase, regulated by the spectral energy distribution
(SED) of the sources, are crucial. SKA precursors able to probe the relevant frequency
range, 70 - 140 MHz, may benefit from a much higher
signal-to-noise ratio than during later periods in the EoR. However, if the absorption phase is
confined to redshifts above $15$, RFI and the ionosphere
will become a problem. Quite clearly, the different types of sources of reionization and different
formation histories produce very different properties
for the 21-cm signal. It is therefore important for future observations to explore the range of astrophysically plausible
scenarios using numerical simulations.
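For reference, the observed frequency of the redshifted 21-cm line is $\nu_{\rm obs}=\nu_{21}/(1+z)$ with $\nu_{21} \simeq 1420.4$ MHz, so the $70-140$ MHz band quoted above maps onto $z \approx 9-19$. A quick check (ours):

```python
NU_21 = 1420.406  # rest-frame 21-cm frequency in MHz

def z_of_nu(nu_obs_mhz):
    """Redshift at which the 21-cm line is observed at nu_obs_mhz."""
    return NU_21 / nu_obs_mhz - 1.0

print(round(z_of_nu(140.0), 1), round(z_of_nu(70.0), 1))  # -> 9.1 19.3
```

This band therefore brackets the $z>10$ absorption phase discussed above.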
To properly model the signal, it is necessary to use $ > 100 {h}^{-1}\mathrm{Mpc}$
box sizes \citep{Bark04}. Together with
a large box size, it is desirable to resolve halos with masses down to $10^8 M_{\odot}$
as these contain sources (able to cool below their virial temperature by atomic processes),
or even minihalos with masses down
to $10^4 M_{\odot}$, since they act as an efficient photon sink because of their high recombination
rate \citep{Ilie05}. As this work focuses on improving the physical modeling,
we restrict ourselves to resolving halos with a mass $10^{10} M_{\odot}$ or higher.
Indeed, simulating the absorption phase correctly, as we do in this work, requires a more
extensive and more costly implementation of radiative transfer. We are exploring the direct
implication of this improved physical modeling, and will turn to better mass resolution
in the near future.
There are three bands in the source SED that influence the level of the 21 cm signal:
the Lyman band, the ionizing UV band, and the soft X-ray band.
Lyman band photons are necessary to decouple the spin temperature of hydrogen from the CMB
temperature through the Wouthuysen-Field effect \citep{Wout52,Fiel58},
and make the EoR signal visible. UV band photons
are of course responsible for the ionization of the IGM, and
soft X-rays are able to preheat the neutral gas ahead of the ionizing front, deciding whether
the decoupled spin temperature is less (weak preheating) or
greater (strong preheating) than the CMB temperature. While a proper modeling should perform the
full 3D radiative transfer in all 3 bands, a simpler modeling
has often been used in previous works. Indeed, for the usual source SEDs and source formation
histories, once the average ionization fraction of the universe is greater
than $\sim 10 \%$, the background flux of Lyman-$\alpha$ photons is so high that the hydrogen
spin temperature is fully coupled to the kinetic temperature
by the Wouthuysen-Field effect \citep{Baek09}. Thereafter, computing the Lyman band
radiative transfer is unnecessary. In the same spirit,
it has usually been assumed that the preheating of the IGM by soft X-ray was strong enough to raise
the kinetic temperature much higher than the CMB
temperature everywhere early in the EoR. However, both assumptions fail during the early reionization:
the absorption phase.
Even in the later part of reionization the second assumption may fail, depending on the nature of the
sources. We will quantify this possibility in this paper.
Computing the full radiative transfer in all three bands is necessary to study the absorption regime.
Indeed, fluctuations in the local Lyman-$\alpha$ flux
induce fluctuations in the spin temperature (while the Wouthuysen-Field effect is not yet saturated),
which, in turn, modify the power spectrum of the
$21$ cm signal \citep{Bark05, Seme07, Chuz07, Naoz08, Baek09}.
The same is true for the fluctuations in the local flux of X-ray photons \citep{Prit07,Sant08}.
Let us emphasize however that, in modeling Lyman-$\alpha$ and X-ray fluctuations,
\citet{Bark05}, \citet{Naoz08}, \citet{Prit07} and
\citet{Sant08} all use the semi-analytical approximation that the IGM has a uniform density
of absorbers and ignore wing effects in the radiative transfer of Lyman-$\alpha$
photons. \citet{Seme07} and \citet{Chuz07} have shown that
these wing effects do exist. Moreover, once reionization is under way,
ionized bubbles create sharp fluctuations in the number density of absorbers (not to mention simple
matter density fluctuations). In this work, for the first time,
we present results based on simulations with full radiative transfer for both Lyman-$\alpha$ and X-ray photons.
What are the possible candidates as sources of reionization? Usually, two categories are considered:
ionizing UV sources (Pop II and III stars), and X-ray sources (quasars).
When we study 21 cm absorption, however, we must distinguish between Pop II and Pop III stars beyond
the large difference in luminosity per
unit mass of formed star. Indeed Pop II stars have a three times larger Lyman band to ionizing UV band
luminosity ratio than Pop III stars.
This means that the 21 cm signal
will reach its full power (near saturated Wouthuysen-Field effect) at a lower average ionization
fraction for Pop II stars than for Pop III stars. The relevant
question is: how long do Pop III stars dominate the source population before Pop II stars take over?
The answer to this question, related to the whole process
of star formation, feedback and metal enrichment of the IGM, is a difficult one. At this stage,
state of the art numerical simulations of the EoR use simple
prescriptions in the best case (e.g. \citealt{Ilie07a}), or simply ignore this issue.
The other category of sources are X-ray sources. They may be
mini-quasars, X-ray binaries, supernovae \citep{Oh01, Glov03}, or even
more exotic candidates such as dark stars
\citep{schl09}. The exact level of emission from these sources is
a matter of speculation. The generally accepted view is that stars
dominate over X-ray sources and are sufficient to drive reionization
\citep{Shap87, Giro96, Mada99, Ciar03a}. Recently, \citet{Volo09} supported
the opposite view. While, in their models, X-ray sources are marginally able to complete
reionization by $z \sim 6$, they find a very low contribution from stars. Indeed they rely
on \citet{Gned08} who find, using numerical simulations, a negligible escape
fraction for ionizing radiation from galaxies with total mass less than a few $10^{10} M_{\odot}$, which
should actually contribute $90\%$ of the ionizing photon production during the EoR \citep{Chou07}.
While the physical modeling in their innovative simulations is quite detailed, this surprising behavior
of the escape fraction definitely needs to be checked at higher resolution and with different codes.
For the time being the best simulations can only explore a plausible range of X-ray contributions, and quantify
the impact on observables. When the observations become available we would like to be able, using
simulation results, to derive tight constraints on the relative
level of emission from ionizing UV and X-ray sources. This work, exploring the 21 cm signal for a
few different levels of X-ray emission, is a first step toward this
goal.
The paper is organized as follows. We present the numerical methods in \S2 and
describe our source models in \S3.
In \S4, we show the results and analyze the differences between the
models. We discuss our findings and conclude in \S5.
\section{Numerical simulation}
The numerical methods used in this work are similar to those
presented in \citet{Baek09} (hereafter Paper I). The references to previous and some new validation
tests are presented in the Appendix.
The dynamical simulations have been run with GADGET-2 \citep{Spri05}
and post-processed with UV continuum radiative transfer and further processed with Ly-$\alpha$
transfer using LICORICE.
The same cosmological parameters and particle number are used
and we refer the reader to Paper I
for details related to the numerical methods and parameters.
The main improvement on the previous work is using a more realistic source model including
soft X-ray and implementing He chemistry.
We have run seven different simulations, all of which use the same $100\,h^{-1}$Mpc box, density
fields, and star formation rate, but with different initial mass functions (IMF), chemistry (with helium or without),
X-ray fraction of the total luminosity or X-ray spectral index.
S1 is the reference model. S2 has a top-heavy IMF (a Salpeter IMF restricted to the $100-120 M_\odot$ range), while the others use a Salpeter IMF in the $1.6-120 M_\odot$ range.
Only S3 contains helium. In all other models, helium is replaced by the same mass of hydrogen. X-ray radiative transfer is included in S4, S5, S6 and S7.
They differ in either the X-ray fraction of the total luminosity or the X-ray spectral index.
The basic parameters of these simulations are summarized in Tab.~\ref{model}.
The simulations are controlled by a few parameters.
We adopted the same value as in Paper I for the maximum number of particles per radiative transfer cell in the adaptive grid: $N_{max}=30$. The resulting minimum radiative transfer cell size is
200 $h^{-1}$ kpc at $z=6.6$.
Between two snapshots, i.e. $\sim 10$ Myr, we cast $3 \times 10 ^{6}$ photon packets for photoionization (all in the UV for models S1 to S3, half in the UV and half in X-rays for models S4 to S7), and
$3 \times 10^{7}$ photons for Lyman-$\alpha$ transfer. At the end of the simulations ($z \sim 6$), the number of sources
reaches $\sim 15000$, so the number of ionizing photon packets per source is only 200.
However, at this final stage the sources are highly clustered and very large ionized regions
surround the source clusters, so the clustered sources cooperate to reduce the Monte Carlo noise at the
ionization fronts.
In addition, the adaptive grid responds better than a fixed grid to sampling issues: big cells where there
are few photons, small cells where there are many. \citet{Mase05} presents convergence tests for a
Monte-Carlo radiative transfer code very similar to ours. Their convergence tests suggest that the typical
level of noise in our ionization and temperature cubes is $\sim 10\%$. We accept this as a reasonable value,
especially since, having run the \citet{Ilie09} comparison tests, we are confident that our ionization
fronts propagate at the correct speed.
We use 1000 frequency bins in each of the photoionizing-UV and X-ray spectra. For Lyman-$\alpha$ transfer,
we sample the frequency at random between Lyman-$\alpha$ and Lyman-$\beta$.
\begin{table}[hbp]
\centering
\small
\tabcolsep 3pt
\renewcommand\arraystretch{1.2}
\begin{tabular}{c c c c c c}
\hline
\hline
Model &IMF & Helium & $L_{\rm{star}}$ &$L_{\rm{QSO}}$ & spectral index \\
\hline
\hline
S1 &$1.6-120M_{\odot}$ & No & 100 \% &0 \% &- \\
S2 &$100-120M_{\odot}$ & No & 100 \% &0 \% &- \\
S3 &$1.6-120M_{\odot}$ & Yes &100 \% &0 \% &- \\
S4 &$1.6-120M_{\odot}$ & No &99.9 \% &0.1 \% &$\alpha$=1.6 \\
S5 &$1.6-120M_{\odot}$ & No &99.9 \% &0.1 \% &$\alpha$=0.6 \\
S6 &$1.6-120M_{\odot}$ & No &99 \% &1 \% &$\alpha$=1.6 \\
S7 &$1.6-120M_{\odot}$ & No &90 \% &10 \% &$\alpha$=1.6 \\
\hline
\end{tabular}
\caption[]{Simulation parameters. $L_{\rm{star}}$ is the stellar luminosity fraction
and $L_{\rm{QSO}}$ is the X-ray luminosity fraction of the total luminosity.}
\label{model}
\end{table}
\subsection{X-ray radiative transfer}
\label{X-ray}
The main difference between the cosmological radiative transfer of ionizing UV and X-ray photons is the mean free
path, at most a few tens of comoving Mpc in the first case, possibly several hundred Mpc
in the second. A usual
trick in implementing UV transfer is to use an infinite speed of light: we do so in LICORICE (see Paper I). This is correct
if the crossing time of the photons between emission and absorption points is much less than the recombination time,
the photo-ionization time \citep{Abel99} and the typical time for the variation of luminosity of the sources. This is the
case in most of the IGM during the EoR, except very close to the sources where the photo-ionizing rate is very high.
Obviously, this is not the case for X-rays which have a much longer
crossing time. Consequently, we implemented the correct propagation speed for X-ray photons.
We propagate an X-ray photon packet during one radiative transfer time step $\Delta t_{reg}$ ($< 1$ Myr, same notation
as in Fig.~1 of Paper I)
over a distance of $c \Delta t_{reg}$, where $c$ is the velocity of light.
Then, the frequency of the photon packet and photoionizing cross sections are
recomputed with the updated value of the cosmological expansion factor.
The photon packet propagates during the next radiative transfer time step using these new
parameters.
If the photon packet loses 99\% of its initial energy, we drop it.
X-ray photon packets containing photons with an energy of
several keV pass through the periodic simulation box several times
before they lose most of their initial content. For each density snapshot, that is every $\sim 10$ Myr,
1.5 million photon packets are sent from the X-ray sources. About half of them are absorbed during the
computation on the same density snapshot in which they were emitted,
and the other half is stored in memory to be propagated through the next density snapshots.
Some X-ray packets with very high energy photons still survive several snapshots later, so the
number of stored photon packets grows as simulations progress.
About 50 million photon packets are stored in memory toward the end of the simulations.
It may seem that this memory overhead, which sets a limitation to the possible simulations with LICORICE,
would not appear with radiative transfer algorithms which naturally include a finite velocity
of light like moment methods. However, these methods would suffer from an overhead connected to the
number of frequency bins necessary to correctly model X-rays, while it does not exist in Monte-Carlo
methods. Including complete X-ray transfer in EoR simulations comes at a non-negligible
cost, whatever the numerical implementation.
Since the X-ray photons can propagate over several box sizes during several tens of Myr,
the X-ray frequency can redshift considerably between emission and absorption.
The cross-section of photoionization has a strong frequency dependence, so we
have to redshift the frequency of the photons. At each radiative transfer
time step $\Delta t_{reg}$, we update the frequency of all the X-ray photon packets,
\be
\nu (t+\Delta t_{reg})=\frac{a(t)}{a(t+\Delta t_{reg})}\nu(t),
\ee
where $a(t)$ is the expansion factor of the Universe.
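As an illustration of these steps (the helper names, time step and cosmological parameter values below are ours, not LICORICE's internals), the per-step update of an X-ray packet can be sketched as:

```python
import numpy as np

C_MPC_PER_MYR = 0.3066   # speed of light in proper Mpc per Myr

def hubble(a, h0=7.16e-5, om=0.27, ol=0.73):
    """H(a) in Myr^-1 for an assumed flat LCDM cosmology
    (h0 is ~70 km/s/Mpc converted to Myr^-1)."""
    return h0 * np.sqrt(om / a**3 + ol)

def advance_packet(nu, energy, e_init, a, dt=1.0):
    """One radiative transfer step of dt Myr for an X-ray photon packet:
    travel c*dt, redshift the frequency by a(t)/a(t+dt), and flag the
    packet as dropped once 99% of its initial energy has been absorbed.
    Returns (new frequency, distance travelled [Mpc], new a, alive)."""
    a_new = a * (1.0 + hubble(a) * dt)   # first-order expansion update
    nu_new = nu * a / a_new              # cosmological redshift of the packet
    alive = energy > 0.01 * e_init       # 99% energy-loss drop criterion
    return nu_new, C_MPC_PER_MYR * dt, a_new, alive
```

In the real code the cross sections are also recomputed from the updated frequency before the next step.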
The treatment of non-thermal electrons produced by X-rays will be described in \S \ref{sourceX}.
\subsection{Helium reionization}
The intergalactic medium is mainly composed of hydrogen and helium, with contributions of
90\% and 10\% in number. Until now, we have run simulations with hydrogen only, but
including helium is worth studying because the different ionization thresholds and
photoionization rates could affect the reionization history.
We included He, $\text{He}^+$, and $\text{He}^{++}$ in LICORICE, and used
\citet{Cen92} and \citet{Vern96} for the various cooling rates and cross sections.
When helium chemistry is turned on, the ionization fractions ($\text{H}^+$, $\text{He}^+$ and
$\text{He}^{++}$) and the temperature are integrated explicitly using the adaptive scheme described
in Paper I. More details on the numerical methods and validation tests of the treatment of helium are presented in the Appendix.
\section{Source model}
\subsection{Computing the star formation rate}
Our new source model needs the star formation rate for all baryon particles.
We recompute the star formation in the radiative transfer
simulations rather than rerunning the dynamical simulation. Here is why and how.
We adopted the procedure described in \citet{Miho94}, employing a ``local''
Schmidt law and a hybrid-particle algorithm to implement it
in our code.
Indeed, in our model,
the star formation rate depends solely on the density, and we make the assumption
that the star formation feedback (kinetic and thermal) on the dynamics does not vary much
from the fiducial simulation.
LICORICE uses the classical Schmidt law:
\be
\frac{df _*}{dt}=\frac{1}{t_*} \,\,\,(\text{if}\,\,\,\rho_{g} > \rho_{\text{threshold}}) ,
\ee
where $t_*$ is defined by:
\be
t_*=t_{0*}\left( \frac{\rho_g}{\rho _{\text{threshold}}} \right)^{-1/2} .
\ee
$\rho _g$ is the gas density and $f _*$ is the star fraction.
We set the parameters $t_{0*}$ and $\rho _{\text{threshold}}$ so that the evolution of the
global star fraction follows closely that of the S20 simulation ($20 h^{-1}$ Mpc) in Paper I, and
reionization completes at $z \sim 6$. In this way, we reuse the tuning made for the S20 simulation, and
at high $z$, we get a similar star formation history as in the higher resolution (but smaller box size) S20
simulation.
All simulations in the present work have a $100 h^{-1}$ Mpc box size.
Following the above equations, a gas particle whose local density exceeds the threshold
($\rho_{\text{threshold}} =5\,\rho _{\rm{critical}}\times \Omega_{b}$) increases its star fraction, $f_*$, where
$\rho _{\rm{critical}}$ is the critical density of the universe and $\Omega_{b}$ is the cosmological
baryon density parameter.
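The update implied by the two equations above can be sketched as follows; the function name, the explicit Euler step and the cap at $f_*=1$ are our illustrative choices, not the exact LICORICE implementation:

```python
def star_fraction_increment(rho_g, f_star, dt, t0_star, rho_threshold):
    """Increment of the star fraction f* over a time dt, following the
    local Schmidt law: df*/dt = 1/t*, with
    t* = t0* (rho_g / rho_threshold)^(-1/2), applied only above the
    density threshold.  dt and t0_star must share the same time unit."""
    if rho_g <= rho_threshold:
        return 0.0
    t_star = t0_star * (rho_g / rho_threshold) ** -0.5
    # explicit Euler step, capped so that f* never exceeds 1
    return min(dt / t_star, 1.0 - f_star)
```

Denser gas thus converts into stars faster: at $\rho_g = 4\,\rho_{\text{threshold}}$ the conversion proceeds twice as fast as at the threshold.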
\subsection{Limiting the number of sources}
We compute the \emph{increase} in the star fraction for each particle,
$\Delta f_*$ between two consecutive snapshots.
Then the total mass of young stars formed in a particle is
$m \times \Delta f_*$, where $m$ is the mass of the particle.
To avoid a huge number of sources, we had to
set a threshold on the new star fraction for the particle to act as a source.
We used $\Delta f_*>0.001$. We checked that this leaves out a negligible amount of star formation,
about 0.4\%. When several source particles reside in the same radiative transfer cell, we still performed the
ray tracing individually for each source.
\subsection{Choosing an IMF}
With our mass resolution, this amount $m_{\text{gas}} \times \Delta f_*$ corresponds
to a star cluster or a dwarf galaxy, so an IMF should be taken into account.
We choose a Salpeter IMF, with masses in the range $1.6\text{M}_{\odot}-120\text{M}_{\odot}$ or
$100\text{M}_{\odot}-120\text{M}_{\odot}$ (model S2). The first range is used to model
the SED of an intermediate Pop II and Pop III star population, and the second is for pure Pop III stars.
\subsection{Computing the luminosity and SED of the stellar sources}
The next step is to make the link between the amount of created stars and the luminosity of the
sources. When only the ionizing UV luminosity is considered, it is quite justified to use simple
models. For example we can make it proportional to the mass of the host dark matter halo \citep{Ilie06b},
or, as we did in Paper I, to the mass of the baryonic particles newly converted into stars. Things
are more complicated when we consider both the Lyman band and ionizing UV. Indeed, since each particle
is massive enough to contain a representative sample of the chosen IMF, and since
each mass bin has a different life time, we should consider an SED evolving with the age of the star particle.
This would be possible using pre-tabulated SEDs. However, unlike in Paper I, we decided to use hybrid particles
which begin to produce photons as soon as a small fraction of the particle is turned into stars. This is
useful to make the local luminosity less noisy in the early EoR when the source mass resolution is an issue.
Including this star formation history
for each particle and convolving with the time-varying SED would be extremely costly in terms of both memory
and computation time.
We simplified the issue by noting that, in the Lyman and UV bands, most of the luminosity
is produced by massive stars, whose short life times are comparable to the time between two snapshots of
our simulations. So we decided to use a constant SED and luminosity
during a characteristic life time. Both luminosity and SED are computed independently in each of the Lyman and
ionizing band. To compute the luminosity and the SED for a star particle we use the data for massive, low
metallicity ($Z=0.004Z_{\odot}$) stars in the main sequence \citep{Meyn05,Hans94} (see Tab.~\ref{star_info}).
The details of how this is done can be found in Appendix.
The constant luminosity and characteristic life time, computed in the two spectral bands, are given in Tab.~\ref{luminosity_life}
We find characteristic life times of $< 8$ Myr for the UV band. In the implementation of the UV transfer however, for technical reasons,
the source fraction of the particle actually shines for a duration equal to the interval between two snapshots. This
varies between $6$ and $20$ Myr, so we recalibrate the luminosity to produce the correct amount of energy. The whole
point of the procedure is to take the different typical life time in the Lyman band into account, especially at high $z$,
when Lyman-$\alpha$ coupling is not yet saturated. We should not concentrate the emission within a single
snapshot interval, which is 3 times shorter than the source life time, or we would artificially boost the coupling between the spin temperature of hydrogen and the kinetic temperature
of the gas and alter the resulting brightness temperature. Consequently we let each newly formed star fraction of a particle shine
for $3$ consecutive snapshots, which is close to the typical life time in the Lyman band, and we still recalibrate the luminosity
to produce the correct amount of energy. While we do not
use a time-evolving SED, we believe that implementing different life times for the Lyman and UV sources with the correctly
averaged luminosities is a substantial improvement in our source model.
We use an escape fraction $f_{esc}=0.12$ for photoionizing UV photons and $f_{esc}=1$ for Lyman-$\alpha$.
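The averaging behind Tab.~\ref{luminosity_life} can be sketched by weighting the per-mass data of Tab.~\ref{star_info} with the IMF. The snippet below is a hedged illustration using only three of the mass bins and a Salpeter number-weight slope of $-2.35$; the real computation uses all bins and splits the luminosity into the Lyman and ionizing bands:

```python
import numpy as np

# Three sample bins from Tab. 2 (mass [M_sun], log10(L/L_sun), life time [Myr]).
mass   = np.array([120.0, 40.0, 12.0])
lum    = 10.0 ** np.array([6.3, 5.6, 4.2])
t_life = np.array([3.0, 6.0, 20.0])

w = mass ** -2.35                                     # Salpeter number weights
lum_per_msun = (w * lum).sum() / (w * mass).sum()     # L_sun per M_sun formed
t_char = (w * lum * t_life).sum() / (w * lum).sum()   # luminosity-weighted Myr
```

The luminosity weighting is what keeps the characteristic life time short: even though low-mass stars dominate in number, the emission is dominated by the short-lived massive bins.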
\begin{table}
\centering
\tabcolsep 5.8pt
\renewcommand\arraystretch{1.2}
\begin{tabular}{ c c c c}
\hline
\hline
mass $[M_{\odot}]$ & $\log(L/L_{\odot})$ & $\log(T_{\mathrm{eff}})$ & $t_{\mathrm{life}}$ [Myr] \\
\hline
\hline
120 & 6.3 & 4.7 & 3 \\
60 & 5.8 & 4.6 & 4.5 \\
40 & 5.6 & 4.5 & 6 \\
30 & 5.2 & 4.5 & 7 \\
20 & 4.8 & 4.45 & 10 \\
15 & 4.65 & 4.4 & 14 \\
12 & 4.2 & 4.37 & 20 \\
9 & 3.8 & 4.3 & 34 \\
5.9 & 2.92 & 4.18 & 120 \\
2.9 & 1.73 & 3.97 & 700 \\
1.6 & 0.81 & 3.85 & 3000 \\
\hline
\end{tabular}
\caption[]{Physical properties of low metallicity ($Z=0.004Z_{\odot}$) main sequence stars \citep{Meyn05}.
$L$ is the bolometric luminosity, $T_{\mathrm{eff}}$ is the effective temperature and $t_{\mathrm{life}}$ is the life time
of the star.}
\label{star_info}
\end{table}
\begin{figure}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics{Image/spectrum.eps}}
\caption{Normalized spectral intensity of our source model. The black solid line is the SED for the Salpeter IMF
and the red dashed line for the top-heavy IMF.}
\label{SED}
\end{figure}
\begin{table*}
\centering
\renewcommand\arraystretch{1.2}
\begin{tabular}{ c c c c }
\hline
\hline
IMF mass range & Energy band & $10.24\, \rm{eV} \leqslant E < 13.6 \,\rm{eV}$ & $E \geqslant 13.6 \,\rm{eV}$ \\
\hline
\hline
1.6-120 $[M_{\odot}]$ & Luminosity[erg/s]$^A$ & $6.32 \times 10^{44} $ & $2.14 \times 10^{45}$ \\
& Life time[Myr]$^A$ & $20.36$ & $8.03$ \\
\hline
100-120 $[M_{\odot}]$ & Luminosity[erg/s]$^B$ & $9.96 \times 10^{45} $ & $3.12 \times 10^{46}$ \\
& Life time[Myr]$^B$ & $3.32$ & $3.31$ \\
\hline
\end{tabular}
\caption[]{Averaged luminosities and life times of our source model for a baryon particle,
depending on the energy band.
$^A$ values are for the Salpeter IMF and $^B$ values for the top-heavy IMF.}
\label{luminosity_life}
\end{table*}
\subsection{X-ray source model}
\label{sourceX}
X-rays can have a significant effect on the 21 cm brightness temperature.
The X-ray photons, having a smaller ionizing cross-section, can penetrate
neutral hydrogen further than UV photons and heat the gas above the CMB
temperature. This X-ray heating effect on the IGM is often assumed to be homogeneous
because of X-rays' long mean free path. In reality, the X-ray flux is stronger around the
sources and the inhomogeneous X-ray flux can bring on extra fluctuations for the 21 cm brightness
temperature \citep{Prit07,Sant08}. Moreover, patchy reionization induces further fluctuations in the
local X-ray flux, which can only be accounted for by full radiative transfer modeling.
\subsubsection*{X-rays luminosity}
First, we need to determine the luminosity and location of the X-ray sources.
We simply divided the total luminosity, $L_{tot}$, of all source particles
into a stellar contribution, $L_{star}$,
and a QSO contribution, $L_{QSO}$.
$L_{QSO}$ depends on the star formation rate, since $L_{tot}$ itself is proportional to
the increment of the star fraction, $\Delta f_*$, between two snapshots of the dynamic simulation.
One reason for this approach is that X-ray binaries and supernova remnants,
which are strongly related to the star formation rate,
contribute to the X-ray emission as well as quasars.
Following the work of \citet{Glov03}, we took 0.1\% of $L_{tot}$ as the fiducial X-ray source
luminosity, $L_{QSO}$. However, considering that they assumed a simple and empirically motivated model,
we have also run simulations with different values of $L_{QSO}$: 1\% and 10\% of $L_{tot}$.
Quasar luminosity fractions less than 0.1\% are not of interest for us, since
their heating effect will be negligible.
\subsubsection*{X-ray energy range and nature of the sources}
First, we have to choose the photon energy range, since hard X-ray photons have
a huge mean free path, which makes ray-tracing computations expensive.
The comoving mean free path of an X-ray with energy $E$ is \citep{Furl06b}
\be
\lambda _{X}=4.9 \overline{x} ^{-1/3} _{\rm{HI}} \left( \frac{1+z}{15}\right) ^{-2}
\left( \frac{E}{300eV} \right) ^3 \,\,\rm{comoving\ Mpc}.
\ee
Only photons with energy below $E\sim 2[(1+z)/15]^{1/2}\overline{x}^{1/3}_{\rm{HI}}$ keV
are absorbed within a Hubble time, and the $E^{-3}$ dependence of the cross-section means that heating
is dominated by soft X-rays, which do fluctuate on small scales \citep{Furl06b}. Therefore,
we choose an energy range for X-ray photons from $0.1$ keV to $2$ keV. Photons with energy higher
than $2$ keV are not absorbed before the end of the simulation at $z\approx 6$.
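A direct transcription of this mean free path (the helper below is ours, for illustration) makes the adopted energy cut quantitative:

```python
def xray_mfp_comoving(E_eV, z, x_HI=1.0):
    """Comoving mean free path of an X-ray photon of energy E_eV
    through a medium of neutral fraction x_HI (Furlanetto et al. 2006):
    lambda_X = 4.9 x_HI^(-1/3) ((1+z)/15)^(-2) (E/300 eV)^3 [comoving Mpc]."""
    return 4.9 * x_HI ** (-1.0 / 3.0) * ((1.0 + z) / 15.0) ** -2 \
        * (E_eV / 300.0) ** 3
```

For instance, at $z=14$ in a fully neutral IGM, a 300 eV photon travels $\sim 5$ comoving Mpc while a 2 keV photon travels over a thousand, which is why hard photons cross the periodic box many times.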
While the most likely astrophysical sources of X-rays during the EoR are supernovae, X-binaries and (mini-)quasars,
it is interesting to mention that the X-ray SED of supernovae and X-binaries typically peaks above $1$ keV (e.g. \citealt{Oh01}).
This means that most of the X-rays emitted by these sources will interact with the IGM more than $10^8$ years later, which is not true
for QSO-like SEDs. During this time interval the global source mass (and, to first order, luminosity) easily rises by a factor of 10.
Thus the longer delay will lower the effective luminosity of X-binaries and supernovae compared to QSO. For this reason, but also to
avoid detailed modeling of some aspects while others, like the overall luminosity of X-ray sources, remain largely unconstrained, we
use QSO as our typical X-ray source.
\subsubsection*{QSO spectral Index}
We model the specific luminosity of our QSO-like sources as a
power-law with index $\alpha$:
\be
L_{\nu}=k\left( \frac{\nu}{\nu_0} \right)^{-\alpha}.
\label{power_index}
\ee
$k$ is a normalization constant so that
\be
L_{QSO}=\int ^{\nu _2} _{\nu _1} L_{\nu} \,\rm{d}\nu ,
\label{power_index2}
\ee
where $h\nu _1=0.1$ keV and $h\nu _2=2$ keV.
The amount of X-ray heating can be altered by the shape of the spectrum but
there exists a large observational uncertainty in the mean and distribution of $\alpha$.
We extrapolate the extreme-UV spectral index measured by \citet{Telf02} to higher energies:
the index values are derived from fits in the range $1\,\mathrm{Ry} < E < 4\,\mathrm{Ry}$, and we extrapolate to 2 keV.
The value measured by \citet{Telf02} is approximately
1.6, but with a large Gaussian standard deviation of 0.86.
\citet{Scot04} derived an average value of $\alpha=0.6$ from a sample of FUSE and HST quasars.
We choose $\alpha=1.6$ as our fiducial case, and used $\alpha=0.6$ for comparison.
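The normalization $k$ of Eq.~(\ref{power_index}) has a closed form; a minimal sketch (our helper, with $\nu_0$ arbitrarily set to $\nu_1$) is:

```python
import math

def powerlaw_norm(L_qso, alpha, nu1, nu2, nu0=None):
    """Normalization k of L_nu = k (nu/nu0)^(-alpha) such that the
    integral of L_nu from nu1 to nu2 equals L_qso.  nu0 is an arbitrary
    reference frequency (we pick nu1); alpha = 1 needs the log branch."""
    if nu0 is None:
        nu0 = nu1
    if abs(alpha - 1.0) < 1e-12:
        integral = nu0 * math.log(nu2 / nu1)
    else:
        integral = nu0 ** alpha \
            * (nu2 ** (1.0 - alpha) - nu1 ** (1.0 - alpha)) / (1.0 - alpha)
    return L_qso / integral
```

For a fixed $L_{QSO}$, a smaller $\alpha$ shifts energy toward the hard end of the band, which is the relevant difference between models S4 and S5.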
\subsubsection*{Secondary Ionization}
X-rays deposit energy in the IGM by photoionization through three channels.
The primary high-velocity electron torn from hydrogen and helium atoms distributes its energy
by 1) collisional ionization, producing secondary electrons, 2) collisional excitation
of H and He and 3) Coulomb collisions with thermal electrons. The fitting formula in
\citet{Shul85} is used to compute the fraction of secondary ionization and heating.
These are taken into account when computing the evolution of the state of the IGM.
Then, it is legitimate to ask whether the Lyman-$\alpha$ photons resulting from the collisional
excitations are important for the Wouthuysen-Field effect. The simple answer is that, in our choice
of models, the energy emitted as X-ray is at most 10\% of the UV energy, itself $3$ times less than
the Lyman luminosity. Moreover at most 40\% of the X-ray energy is converted into excitations \citep{Shul85}, and
only $\sim 30$\% of the excitations result in a Lyman-alpha photon \citep{Prit06}. So, in the best case,
Lyman-$\alpha$ photons produced by X-ray represent only $0.3$\% of the photons produced directly by the sources.
\section{Results}
\subsection{Ionization fraction}
The evolution of the averaged ionization fraction tells us about the global history of
reionization. We plot the mass and volume weighted average ionization fraction in Fig.~\ref{xh}.
Including quasars (S4, S5, S6 and S7, not plotted) does not change the global evolution of the
ionization fraction much, because of their small fraction of the total luminosity.
The total number of emitted photons is similar to S1.
The S3 simulation has also the same number of emitted photons, but S3 reaches
the end of reionization a little bit earlier ($\Delta z \approx 0.25$) than S1.
Unlike the other simulations, S3 contains helium, which makes up
25\% of the IGM in mass. Including helium, the total number of atoms is reduced by
20\%. At the same time, unless X-rays are the dominant source of reionization, most of the helium
is ionized only once at $z>6$, due to the higher energy threshold of the second ionization (to $\rm {He}^{++}$).
Therefore the number of emitted photons per atom is higher in S3 than in S1, and it
results in an earlier reionization.
On the other hand, S2 has the same number of photon absorbers as S1, but
the total number of emitted photons is much higher than in S1.
Using a top-heavy IMF, it produces 10 times more photons
(see Fig.~\ref{SED} and Tab.~\ref{luminosity_life}), and results in a $\Delta z \approx 1$
earlier reionization.
In all three cases, volume weighted values are less than mass weighted values,
since gas particles in dense regions around the sources are ionized first.
The volume occupied by each particle is estimated using the SPH smoothing length.
We computed the Thomson optical depth for all
simulations, the values are $\tau =$ 0.062, 0.076, 0.064 for S1, S2 and S3. The other simulations
(S4-S7) have the same $\tau$ as S1, since they follow the same evolution of the ionization fraction.
These values are somewhat lower than the Thomson optical depth derived from WMAP5 \citep{Hins09},
$\tau =0.084\pm 0.016$;
only the S2 value is within 1$\sigma$ of the WMAP5 value.
A variable escape fraction, decreasing with time, would allow the IGM to start
ionization earlier and increase $\tau$, without terminating ionization after z=6.
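The quoted optical depths follow from the usual integral $\tau = \int n_e \sigma_T c\,|dt/dz|\,dz$. A sketch of that integral for a hydrogen-only gas, as in S1 (the cosmological parameter values below are illustrative assumptions, not necessarily those of Paper I):

```python
import numpy as np

SIGMA_T = 6.652e-29   # Thomson cross section [m^2]
C_LIGHT = 2.998e8     # speed of light [m/s]
MPC = 3.086e22        # metres per Mpc

def thomson_tau(z, x_e, h=0.7, om=0.27, ob=0.044):
    """tau = integral of sigma_T c n_e(z) / ((1+z) H(z)) dz over the grid z,
    for a hydrogen-only gas with electron fraction history x_e(z)."""
    H0 = 100.0 * h * 1000.0 / MPC                        # [1/s]
    rho_crit = 1.878e-26 * h ** 2                        # [kg/m^3]
    n_H0 = ob * rho_crit / 1.673e-27                     # H atoms today [1/m^3]
    Hz = H0 * np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))  # flat LCDM
    integrand = SIGMA_T * C_LIGHT * n_H0 * (1.0 + z) ** 2 * x_e / Hz
    return np.trapz(integrand, z)
```

With instantaneous full reionization at $z=6$ this gives $\tau \approx 0.04$, consistent with the values above sitting below the WMAP5 measurement.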
\begin{figure}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics{Image/xh.eps}}
\caption{Mass (thicker lines) and volume (thinner lines) weighted ionization fraction of hydrogen. S1 and
S3 use a Salpeter IMF and S2 uses a top-heavy IMF. S3 includes helium.}
\label{xh}
\end{figure}
\subsection{Gas temperature}
The main goal of this study is to investigate the effect of inhomogeneous X-ray heating on the
21-cm signal. If the Ly-$\alpha$ coupling is sufficient, and X-rays can heat the gas
above the CMB temperature $T_\mathrm{CMB}$, the 21-cm signal will be observed in emission. However if
the X-ray heating is not very effective, particularly during the early phase of the EoR,
we will observe the signal in absorption.
We plot in Fig.~\ref{Tk_evol} the averaged gas temperature of the neutral IGM whose
ionization fraction $x_{HII}$ is less than 0.01.
We chose the criterion of $x_{HII}<0.01$ for the following reasons. Once a gas particle is 10\% ionized,
it is heated by photoheating to a temperature of several thousand Kelvin.
At redshift 10, the number of gas particles which have an
ionization fraction between 0.01 and 0.1 is only 0.1\% of all the particles, but if we include these
particles, the average temperature increases from 2.94 K to 5.41 K. Therefore, we used the criterion $x_{HII}<0.01$ to evaluate
properly the average temperature of neutral regions, and verified that $x_{HII}<0.001$ gives
a very similar average temperature. We have checked that, even for model S7, which has the highest level of X-rays,
at $z > 7.5$, 99\% of the
\textit{neutral} IGM has indeed an ionization fraction less than 1\%, so we have not excluded a significant fraction of the $21$ cm emitting
IGM from our average.
This neutral gas is mostly located in the voids of the IGM.
In fact, we have to consider Ly-$\alpha$ heating
as well as X-ray heating since a few K difference can reduce the intensity of $\delta T_b$
by up to 100 mK.
We recompute the gas temperature to include Ly-$\alpha$ heating
as a post-treatment using the formula
from \citet{Furl06d}. This was detailed in Paper I.
The temperature of all simulations in Fig.~\ref{Tk_evol} decreases until $z\approx12$
because of the adiabatic expansion of the universe.
Then S7, which has the highest $L_{QSO}$, starts to increase first and reaches the CMB temperature at redshift $z\approx 8.8$.
Our fiducial model, S4, which contains 0.1\% of total energy as X-rays, shows
very little increase with respect to S1, a simulation without X-rays.
Even for S7, the gas temperature of neutral hydrogen in the voids remains
below $T_\mathrm{CMB}$ until $z\approx 8.8$. This means that
X-ray heating needs time to raise the IGM above $T_\mathrm{CMB}$. Even with
a rather high level of X-rays, the absorption phase survives and produces
greater brightness temperature fluctuations than the subsequent emission phase
(the delay in the absorption of X-ray connected to the long mean free path is
partly responsible for this). It will be important to keep this result in mind
when choosing the design and observation strategies for the future instruments.
\begin{figure}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics{Image/evolution2.eps}}
\caption{The evolution of the gas temperature of the neutral IGM with redshift.
The neutral gas is chosen so that its ionization fraction is less than 0.01.}
\label{Tk_evol}
\end{figure}
There is a large observational uncertainty in the mean and distribution
of the quasar spectral index $\alpha$. Our fiducial model assumes
$\alpha =1.6$ but we run a simulation (S5) with $\alpha =0.6$ for comparison.
The total emitted energy is fixed, but the S5 simulation has more energetic photons which
penetrate the ionization front further than those of S4.
However, the difference in gas temperature between the two simulations is negligible.
The temperature of S5 is slightly higher than S4, but the difference is
less than 0.5K at all redshifts.
We estimate that our 1\% model yields around $5$ times more X-rays than the fiducial model in \citet{Sant08}, although
the source formation modeling is quite different and the comparison is difficult (we do not use the dark matter halo mass
at all in computing the star formation rate).
However, comparing their plots of the average temperature and ionization fraction evolution with ours, we can
deduce that their average gas kinetic
temperature rises above the CMB temperature around ionization fraction $x_{\mathrm{HII}}=10\%$
while in our case the same event occurs at $x_{\mathrm{HII}}=15\%$. We
find several reasons for this apparent discrepancy. First, we defined the neutral IGM as $x_{\mathrm{HII}} < 0.01$ and used this to compute
the average gas temperature. Although this is not absolutely explicit in their paper, we believe they use $x_{\mathrm{HII}} < 0.5$,
thereby including warmer gas in the average.
Then, they have a more extended reionization history, which reduces the effect
of the delay in the X-ray heating (see next section). Finally the initial X-ray heating is shifted to higher redshifts, when the
difference between the average neutral gas temperature and the CMB temperature is less.
\subsection{Brightness temperature maps}
We have run Lyman-$\alpha$ simulations as a further post-treatment to obtain
the differential brightness temperature $\delta T_b$.
The $\delta T_b$ is determined by several factors, and is expressed as \citep{Mada97}:
\be
\delta T_b \approx 28.1 \,\,{\mathrm{mK}}\,\,x_{\mathrm{HI}}\,\, (1+\delta) \left({1+z \over 10}\right)^{1 \over 2}\,\, {T_S -T_{\mathrm{CMB}} \over T_S},
\label{dTb_equation}
\ee
\noindent
where $\delta$ is the baryon overdensity, $T_S$ is the spin temperature, $T_\mathrm{CMB}$ is the CMB temperature, and $x_{\mathrm{HI}}$ is the neutral fraction.
The spin temperature $T_s$ can be computed with:
\be
T^{-1}_{S}= {T^{-1}_{\mathrm{CMB}}+x_{\alpha}T^{-1}_c+x_cT^{-1}_K \over 1 + x_{\alpha} + x_c}
\label{T_spin}
\ee
\noindent
and
\be
x_{\alpha}={ 4 P_{\alpha} T_{\star} \over 27 A_{10} T_{\mathrm{CMB}} } \qquad \mathrm{and} \qquad x_{c}={ C_{10} T_{\star} \over A_{10} T_{\mathrm{CMB}} }
\label{Ts_eq}
\ee
\noindent
where $P_\alpha$ is the number of Lyman-$\alpha$ scatterings per atom per second, $A_{10}$ is the spontaneous emission coefficient of the $21$ cm hyperfine transition,
$T_{\star}$ is the excitation temperature of the $21$ cm transition, and $C_{10}$ is the deexcitation rate via collisions. Details on deriving these relations and
computing $C_{10}$ can be found, e.g., in \citet{Furl06a}. Peculiar velocity gradients \citep{Bark05,Bhar04}
are not considered in this work.
As we can see in eqs.~\ref{T_spin} and \ref{Ts_eq}, $T_S$ is coupled to the CMB temperature $T_\mathrm{CMB}$ through the scattering of CMB
photons, and to the kinetic temperature of the gas $T_K$ by collisions and Ly-$\alpha$ pumping.
Coupling by collisions is efficient only at $z > 20$, or in dense clumps, so Ly-$\alpha$ pumping is the key coupling process in the diffuse IGM.
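A direct transcription of Eqs.~(\ref{dTb_equation})-(\ref{Ts_eq}), taking the colour temperature $T_c \approx T_K$ (a common approximation) and ignoring velocity gradients, reads:

```python
import math

def spin_temperature(T_cmb, T_k, x_alpha, x_c):
    """Spin temperature with T_c ~ T_K:
    1/T_S = (1/T_cmb + x_alpha/T_k + x_c/T_k) / (1 + x_alpha + x_c)."""
    inv = (1.0 / T_cmb + x_alpha / T_k + x_c / T_k) / (1.0 + x_alpha + x_c)
    return 1.0 / inv

def delta_tb_mK(x_HI, delta, z, T_s, T_cmb):
    """Differential brightness temperature [mK], peculiar velocity
    gradients ignored."""
    return 28.1 * x_HI * (1.0 + delta) * math.sqrt((1.0 + z) / 10.0) \
        * (T_s - T_cmb) / T_s
```

With strong Ly-$\alpha$ coupling ($x_\alpha = 10$) and cold gas ($T_K = 3$ K against $T_\mathrm{CMB} = 30$ K), $T_S$ is driven close to $T_K$ and the signal is a deep absorption of a few hundred mK, which is the regime of the maps discussed below.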
The $\delta T_b$ maps are a good way to see how these different
elements affect the signal.
In Fig.~\ref{dTb} we show several $\delta T_b$ maps of the same slice from different radiative transfer simulations, S1, S2, S6 and S7.
S4 and S5, which set the X-ray luminosity to 0.1\% of the total luminosity, show
trends very similar to S1 and are not plotted.
The bandwidth of the slice is 0.1 MHz for all maps, which corresponds to 1.9 Mpc for the maps in the left column,
(a)-(d), 1.8 Mpc for (e)-(h) and 1.6 Mpc for (i)-(l).
The left
four maps of $\delta T_b$ in Fig.~\ref{dTb}, (a)-(d), are plotted when the mass averaged Ly-$\alpha$
coupling coefficient $x_{\alpha}$ is $\langle x_{\alpha} \rangle=1$. This value is interesting because in this moderate coupling regime,
fluctuations in the Ly-$\alpha$ local flux induce fluctuations in the brightness temperature, which is not the case anymore when the coupling saturates.
The corresponding redshifts are $z=$ 10.50 for S2 and 10.13 for
the others. The corresponding averaged ionization fractions are 0.005, 0.018, 0.005 and 0.005 for (a)-(d).
Indeed, the averaged ionization fraction of S2 is higher than the others since it uses a harder spectrum.
The ratio of the integrated energy emitted in the Lyman band (Ly-$\alpha < E < $ Ly-limit)
with respect to the ionizing band, $\beta=E_{Lyman}/E_{ion}$, is three times less for S2 than for
the others. For a given number of emitted Ly-$\alpha$ photons, a harder spectrum produces a
larger number of UV ionizing photons, therefore S2 has a higher ionization fraction when
$\langle x_{\alpha} \rangle=1$.
S1 shows a deeper absorption region around the ionized bubbles than S2, which is also
due to the different ratio of the number of photons in the Lyman band and the ionizing band.
In the case of S1, the ionized bubbles are smaller than the highly Ly-$\alpha$ coupled region.
Since the kinetic temperature outside the ionized bubbles is a few kelvin, lower than the
CMB temperature ($T_\mathrm{CMB}\approx 30$ K), the neutral hydrogen produces a strong 21-cm absorption signal.
On the other hand, the highly Ly-$\alpha$ coupled regions in S2 mostly reside inside the ionized bubbles, which
are bigger than in S1. S7 has almost the same averaged ionization fraction and ionized bubble size as S1, but
the gas around the ionized bubbles as well as in the void is heated by strong X-rays.
The signal is still in absorption
because the X-ray heating has not been able to raise the IGM temperature above
$T_\mathrm{CMB}$, but the intensity is reduced.
Contrary to S1 and S2, the neutral gas around the ionized bubbles produces a weaker signal
than in the void, because the gas around the bubbles is more efficiently heated by X-rays.
S6 shows an intermediate behavior between S1 and S7.
The four maps in the middle of Fig.~\ref{dTb}, (e)-(h), are for $\langle x_{\alpha} \rangle=10$.
The corresponding redshifts are $z=$ 9.03 for S2 and 8.57 for the others.
The averaged ionization fractions are 0.043, 0.141, 0.043 and 0.040 for (e)-(h).
These redshifts when $\langle x_{\alpha} \rangle=10$ are interesting
because the amplitude of the $\delta T_b$ fluctuations reaches a maximum.
If we do not consider the effect of Ly-$\alpha$ coupling and assume $T_K \gg T_\mathrm{CMB}$,
which does not allow any signal in absorption, the largest fluctuations
would appear around $\langle x_{\mathrm{HII}} \rangle =0.5$ as noticed by \citet{Mell06} and \citet{Lidz07}, but
including the inhomogeneous
Ly-$\alpha$ coupling and computing $T_K$ self-consistently, this maximum is shifted to
an earlier phase of reionization.
The ionization fraction and bubble size in S2 are still
greater than in the other models, but the absorption intensity is lower than in S1.
Here is why. The evolution of kinetic temperature in the void regions is dominated by the
adiabatic cooling: the temperature drops as the expansion progresses.
The kinetic temperature of S1 in the voids is lower than in
S2 by 0.5 K due to the difference in redshift, which explains the stronger absorption intensity in S1.
The neutral gas around ionized bubbles in
S7 is heated above $T_\mathrm{CMB}$ by the high X-ray level, and starts to produce a signal in emission.
The neutral gas in the voids is also affected by X-rays. However, it is not yet heated sufficiently
for the signal to turn from absorption to emission everywhere. Nevertheless the intensity of the signal
is reduced by the X-ray heating, and S7 shows the weakest signal among the four maps at $\langle x_{\alpha}\rangle=10$.
The four maps on the right of Fig.~\ref{dTb}, (i)-(l), are for $\langle x_{\mathrm{HII}} \rangle=0.5$.
The corresponding redshifts are $z=$ 7.68 for S2, 7.00 for S1 and S6, 6.93 for S7. The averaged
Ly-$\alpha$ coupling coefficients, $x_{\alpha}$, are 138.3, 34.9, 138.75 and 187.75 for (i)-(l).
Contrary to the above cases, the absorption intensity of S1 in the void region is
weaker than that of S2. This is due to the Ly-$\alpha$ heating.
Ly-$\alpha$ heating is negligible during the
early phase, but the amount of Ly-$\alpha$ heating accumulated between
$z \sim 12$ and $z \sim 7$ can heat the gas
in the voids by several kelvins.
In order to reach 50\% ionization, S1 produces a larger number of Ly-$\alpha$
photons, which propagate beyond the ionization front. The $\langle x_{\alpha} \rangle$
of S1 is almost 4 times greater than that of S2, and the accumulated Ly-$\alpha$ heating
increases the kinetic temperature by 3-4 K more than in S2.
The intensity in absorption is very sensitive to the value of kinetic temperature,
so this small amount of heating reduces the signal by up to 100 mK.
The X-ray heating in S7 is strong enough to heat all the gas above $T_\mathrm{CMB}$ at this
redshift, so we see the signal in emission everywhere.
S6 shows intermediate features between S1 and S7, with the weakest signal. Indeed,
the X-ray heating in S6 has raised the gas temperature in the neutral voids to just around
$T_\mathrm{CMB}$, which marks the transition from absorption to emission.
\begin{figure*}[th]
\centering
\resizebox{0.85\hsize}{!}{\includegraphics{Image/final_cont.eps}}
\caption{Differential brightness temperature maps for different simulations. The thickness
of the slice is $\approx$ 2 Mpc. The maps in the left column
are for $\langle x_{\alpha} \rangle =1$, in the middle column for $\langle x_{\alpha} \rangle =10$
and in the right column for $\langle x_{\mathrm{HII}} \rangle =0.5$. The black contour separates
absorption and emission regions. (l) shows no absorption region.}
\label{dTb}
\end{figure*}
\subsection{Power spectrum}
Fig.~\ref{power_panel} shows three-dimensional power spectra of the brightness temperature
fluctuations for the S1, S2, S6 and S7 simulations.
The power spectrum can be defined as the variance of the amplitude of the Fourier modes of the signal for a given
wavenumber modulus:
\be
P(k)=\langle \hat{\delta T_b}(\mathrm{\bf k}) \hat{\delta T_b}^{\star}(\mathrm{\bf k})\rangle.
\ee
We binned our modes with $\delta k=\frac{2\pi}{100}(\text{Mpc}/h)^{-1}$ and plotted the quantity
$\Delta^2=k^3P(k)/2\pi^2$.
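For concreteness, this binning procedure can be sketched in a few lines of numpy. This is a minimal illustration, not our actual pipeline code; the grid size, box length, bin width and FFT normalization convention are placeholders:

```python
import numpy as np

def delta2(cube, box, dk):
    """Spherically averaged Delta^2(k) = k^3 P(k) / (2 pi^2) of a real 3D field.

    cube : 3D array of delta T_b values (mK)
    box  : comoving side length of the cube (Mpc/h)
    dk   : width of the k bins (h/Mpc)
    """
    n = cube.shape[0]
    dx = box / n
    # approximate continuum Fourier transform of the mean-removed field
    ft = np.fft.fftn(cube - cube.mean()) * dx**3
    power = np.abs(ft) ** 2 / box**3          # P(k) estimator, mK^2 (Mpc/h)^3
    # modulus of the wavevector on the FFT grid
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kmod = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2 + k[None, None, :]**2)
    # average P(k) over spherical shells of width dk
    nbin = int(np.ceil(kmod.max() / dk)) + 1
    idx = (kmod / dk).astype(int).ravel()
    counts = np.bincount(idx, minlength=nbin)
    psum = np.bincount(idx, weights=power.ravel(), minlength=nbin)
    kc = (np.arange(nbin) + 0.5) * dk
    pk = np.where(counts > 0, psum / np.maximum(counts, 1), 0.0)
    return kc, kc**3 * pk / (2 * np.pi**2)
```

The function returns the bin centers and the dimensionless power $\Delta^2(k)$ plotted in Fig.~\ref{power_panel}.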
The power spectra of S4, S5 are not presented in
Fig.~\ref{power_panel} since their patterns are similar to S1.
During the early phase, when $\langle x_{\alpha} \rangle=1$, the amplitudes of the power spectra of
the four simulations
are similar. The spectra follow patterns similar to the power spectrum of S100 in Paper I (see Fig.~15).
Model S6 shows a spectrum similar to the fiducial model of \citet{Sant08}. The main difference
in shape appears at $k > 1\,h\,\mathrm{Mpc}^{-1}$ in our model. This is possibly connected to the
fact that they assume a ${1 \over r^2}$ dependence of the Lyman-$\alpha$ flux, while at short distances from
the sources ($< 10$ comoving Mpc), wing scattering effects produce a ${1 \over r^{7/3}}$ dependence \citep{Seme07}.
Also noticeable is the difference between S6 and S7. S7 is depleted on small scales, the strong X-ray heating damping
the absorption near the sources. On very large scales, however, the already strong heating in S7 creates temperature
fluctuations which boost the S7 power spectrum.
When $\langle x_{\alpha} \rangle=10$, the power of both S6 and S7 decreases since X-ray heating
prevents a strong absorption signal. The strong X-ray heating of S7 raises the gas temperature
to around $T_\mathrm{CMB}$, and it shows the smallest power.
However, the power of S6 falls below that of S7 when the hydrogen is 50\% ionized.
At this redshift, the rising temperature of neutral gas reaches $T_\mathrm{CMB}$ in S6, while
it is already much greater than $T_\mathrm{CMB}$ in S7. This is also visible in Fig.~\ref{dTb}.
Later, when $\langle x_{\mathrm{HII}} \rangle=0.9$, all power spectra drop. Our S6 model agrees
quite well with \citet{Sant08} both in shape and amplitude for these two last stages. Indeed
both the effects of Lyman-$\alpha$ coupling and X-ray heating reach a saturation in the determination
of the brightness temperature, erasing the differences in our treatments.
S2 has the largest power on all scales when $\langle x_{\mathrm{HII}} \rangle=0.5$ and $\langle x_{\mathrm{HII}} \rangle=0.9$.
This is due to the near lack of Ly-$\alpha$ heating. Let us mention however that some sort of transition
to Pop II formation should have occurred by then, providing some level of Ly-$\alpha$ heating. So S2 is probably
not realistic during the late EoR.
In brief, the 21-cm power spectra of our models vary in the 10 to 1000 mK$^2$ range, in broad agreement with \citet{Sant08},
who included the inhomogeneous X-ray and Ly-$\alpha$ effects on the signal in a semi-analytical way, with moderate discrepancies
at high redshift and small scales due to wing effects in the Lyman-$\alpha$ radiative transfer. Quite logically our results differ
at high redshift from \citet{Mell06b,Zahn07,Lidz07b,Mcqu06}, who focused on the emission regime. These authors found a flattening of
the spectrum around $\langle x_{\mathrm{HII}} \rangle=0.5$. It is interesting to notice that in the case of a strong X-ray heating (model S7)
the spectrum is quite flat at all redshifts (temperature fluctuations boost the power on large scales at high redshift).
In future observations,
this would be a first clue of a larger-than-expected contribution from X-ray sources.
\begin{figure}[h]
\centering
\resizebox{0.9\hsize}{!}{\includegraphics{Image/power_label.eps}}
\caption{Power spectrum evolution of the $\delta T_b$ from the S1 (thin black),
S2 (thick black), S6 (thin gray) and S7 (thick gray) simulations. S2 uses a top-heavy IMF
whereas the others use a Salpeter IMF. S6 has 1\% and S7 has
10\% of the total luminosity in X-rays.}
\label{power_panel}
\end{figure}
We now plot in Fig.~\ref{power_evolution} the evolution of the power as a function
of redshift for four different $k$ values.
The evolution of the power spectrum with and without X-rays is very different.
S1 and S2, which do not have X-rays, show a \textbf{single} maximum on small scales ($k=1.00$ h/Mpc and $k=3.15$ h/Mpc)
around redshifts 8.5 and 9, which
correspond to the redshifts of $\langle x_{\alpha} \rangle =10$ for each simulation.
On large scales ($k=0.07$ h/Mpc and $k=0.19$ h/Mpc) the power spectrum shows \textbf{two} local maxima.
The first peak is related to the Ly-$\alpha$ fluctuations. The $\delta T_b$ fluctuations
are dominated by Ly-$\alpha$ fluctuations at high redshift, but their power decreases when
the Ly-$\alpha$ coupling saturates. Then the power rises again; this time,
the fluctuations are dominated by the fluctuations of the ionization fraction. The second peak appears
at the redshift where $\langle x_{\mathrm{HII}} \rangle=0.5$ for each simulation. The
overall amplitudes of S1 and S2 are similar, but the local maxima
of S2 are at higher redshifts due to the faster reionization. The key to the single-double peak
difference is that the contribution of the ionization field fluctuations to the brightness temperature
power spectrum increases during reionization on large scales but not on small scales \citep{Ilie06b}.
With X-rays (models S6 and S7), the evolution follows a different scenario. We find a pattern similar to
\citet{Sant08}.
On small scales (thin and thick gray in Fig.~\ref{power_evolution}),
the intensity of the signal increases up to the maximum as the
spin temperature couples to the kinetic temperature.
Then it decreases
during the absorption-emission transition. As the fluctuations due to the
ionization fraction come to dominate, the power increases again slightly or remains on
a plateau until it drops at the end of reionization.
The evolution of the power on small scales does not show a marked minimum.
The evolution on large scales (thin and thick black in Fig.~\ref{power_evolution})
is the most interesting: it shows \textbf{three} maxima. From high redshift to low redshift,
each peak corresponds to the period where the fluctuations of the Ly-$\alpha$ coupling, the
gas temperature and the ionization fraction, respectively, dominate.
There exists a deep suppression between the second and the third
peaks which does not appear without X-rays. It occurs when the X-ray heating raises the gas temperature
of the neutral IGM to around $T_\mathrm{CMB}$, which dampens the signal.
The second minimum in S7 occurs earlier than in S6, since the stronger X-ray heating of S7
raises the gas temperature to around $T_\mathrm{CMB}$ at a higher redshift.
We find a much narrower third peak in S4 (not plotted), which uses a 10 times weaker X-ray heating than S6.
The position and amplitude of the peaks as well as the width depend on the
intensity of X-ray heating. The width of the third bump ($6.5<z<7.5$) is the largest in S7 (with 10\% X-ray)
and the smallest or negligible in S4 (with 0.1\% X-ray).
The existence and position of this third peak and of
the second dip in the evolution of the large-scale power spectrum will be measurable in
LOFAR and SKA observations, and will help us constrain the nature of the sources during the EoR.
\begin{figure}[h]
\centering
\resizebox{0.9\hsize}{!}{\includegraphics{Image/power_evol_label.eps}}
\caption{Evolution of the brightness temperature power spectrum with redshift.
$k=0.07$ h/Mpc (thin black), $k=0.19$ h/Mpc (thick black ), $k=1.00$ h/Mpc (thin gray) and $k=3.15$ h/Mpc (thick gray)}
\label{power_evolution}
\end{figure}
\subsection{Non-Gaussianity of the 21-cm signal}
The non-Gaussianity of the 21-cm signal has been studied in previous works.
\citet{Ciar03c,Mell06b} show the non-Gaussianity of the 21-cm signal in numerical simulations by computing
the Pixel Distribution Function (PDF). \citet{Ichi09} draw the history of reionization from the
measurement of the 21-cm PDF. \citet{Hark09} compute the skewness of the PDF and show
how it could help in separating the
cosmological signal from the foregrounds. However, all these works
model the signal in emission only.
We present the 21-cm PDF from our simulations in Fig.~\ref{pdf}, for several
representative redshifts. To obtain the 21-cm PDF, we sample $\delta T_b$ at
a 1 $h^{-1}$ Mpc resolution, an acceptable value for SKA.
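Operationally, the 21-cm PDF is the normalized histogram of the pixel values of the resampled $\delta T_b$ cube. A minimal sketch follows; the 5 mK bin width is illustrative, not the binning of our figures:

```python
import numpy as np

def pdf_21cm(cube, bin_width=5.0):
    """Normalized pixel distribution function of a delta T_b cube (mK).

    Returns bin centers and the probability density, which integrates to 1.
    """
    x = np.asarray(cube, dtype=float).ravel()
    # bin edges aligned on multiples of bin_width, covering all pixels
    lo = np.floor(x.min() / bin_width) * bin_width
    hi = np.ceil(x.max() / bin_width) * bin_width + bin_width
    edges = np.arange(lo, hi + bin_width / 2, bin_width)
    density, edges = np.histogram(x, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density
```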
The 21-cm PDF from our simulations is highly
non-Gaussian as expected, but it is also quite different from those of \citet{Ciar03c,Mell06b,Hark09} and \citet{Ichi09}.
Our distributions extend to negative differential brightness temperatures with a variety of shapes depending on the redshift.
Panel (a) of Fig.~\ref{pdf} shows the 21-cm PDF at the beginning of reionization, $z=14.05$.
It is at the beginning of reionization that \citet{Ichi09} find a 21-cm PDF closest to
a Gaussian, but this is also when their model is the least relevant. In our case,
all signals are found in absorption and their distribution is peaked around
0 mK, completely non-Gaussian.
We show the PDFs at $z=10.64$, still during the early EoR, in panel (b) of Fig.~\ref{pdf}.
The positions of the peaks are shifted to between $-100$ mK and $-50$ mK, which means that
the spin temperature of the particles has been decoupled from $T_\mathrm{CMB}$ by Ly-$\alpha$ photons.
The PDF is much closer to a Gaussian distribution, extending to positive values (the smooth curves are the best Gaussian fits to the PDFs).
However, all of them are left-skewed (toward negative temperatures). The reason for this negative skewness is the same
as the reason for the positive skewness in \citet{Mell06b,Ichi09,Hark09}: it is due to the signal from high density regions seen
in absorption for us and in emission for them.
Panel (c) in Fig.~\ref{pdf} shows the PDFs at $z=8.48$. Here we find a bimodal distribution with a plateau in the absorption
region between $-100$ mK and 0 mK, for models S1, S2 and S4.
The left peaks of the PDFs also move toward higher $\delta T_b$ with increasing heating efficiency.
In the case of S7, the left peak merges with the right one, and the form is very similar to a Gaussian.
Panel (d) of Fig.~\ref{pdf} is plotted when the ionization fraction is 50\%.
The width of the PDFs of S1 and S4 is reduced because Ly-$\alpha$ heating is well advanced.
We find signals in emission in S6 and S7, since X-rays heat the gas in neutral regions
above $T_\mathrm{CMB}$. Indeed, these PDF shapes are similar to those of \citet{Ichi09}.
The PDFs of S6 and S7 could be fitted by the Dirac-exponential-Gaussian distribution used by \citet{Ichi09}.
S2 still has a broad PDF, since the Ly-$\alpha$ heating is four times lower than in the others and
the signal remains in strong absorption.
It is interesting to note that the PDF always shows a spike around $\sim 0$ mK.
During the beginning of reionization, it is due to the large amount of neutral hydrogen whose
spin temperature is still well coupled to the CMB temperature. As the reionization proceeds, the spin temperature
is decoupled from the CMB and the number of pixels at $\sim 0$ mK decreases, but the peak grows again with the
increasing contribution from completely ionized regions. This feature is interesting since interferometers such as
LOFAR or SKA only measure fluctuations in the signal and do not directly provide a zero point.
\citet{Ichi09} extract information about the averaged
ionization fraction from the 21-cm PDF. As could be expected,
our PDFs converge with their result when the contribution from
X-ray sources is sufficient and the EoR somewhat advanced. What we find
is that a clear signature of the nature of the ionizing sources remains in
the PDF when the absorption phase is modeled. A strong X-ray contribution
produces unimodal PDFs while a weak X-ray contribution yields bimodal PDFs.
The evolution of the skewness is presented in Fig.~\ref{skewness}.
The skewness $\gamma$ is defined as
\begin{equation}
\gamma=\frac{\frac{1}{N}\sum_i (\delta T_{b}^i -\overline{ \delta T_{b}})^3 }
{[\frac{1}{N}\sum_i (\delta T_{b}^i -\overline{\delta T_{b}})^2]^{3/2}} \,\,,
\end{equation}
where $N$ is the total number of pixels in the $\delta T_b$ data cube, $\delta T_{b}^i$
is the value of $\delta T_b$ in the $i$-th pixel, and $\overline{\delta T_{b}}$ is the
average over the data cube.
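This estimator translates directly into code (a hypothetical numpy helper, not part of our pipeline):

```python
import numpy as np

def skewness(cube):
    """Sample skewness of the pixel values of a delta T_b data cube."""
    x = np.asarray(cube, dtype=float).ravel()
    d = x - x.mean()
    # third central moment over the 3/2 power of the second central moment
    return np.mean(d**3) / np.mean(d**2) ** 1.5
```

A distribution with a long tail toward negative $\delta T_b$, as in panel (b) of Fig.~\ref{pdf}, yields a negative value.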
At the beginning the skewness is highly negative for all simulations,
as we can expect from panels (a) and (b) of Fig.~\ref{pdf}. Then, in
all models, the skewness rises to a local positive maximum when
the average ionization fraction is a few percent.
It is interesting to notice that the skewness of all simulations
comes close to zero again when the neutral fraction is about 0.3. While
this behavior could be used to provide a milestone of reionization, its
robustness should first be checked. We can notice that two of the three
models presented in \citet{Hark09} show the same behavior. Those are,
however, the two least detailed models.
Another interesting feature in Fig.~\ref{skewness} is that
the skewness of S7 has two local maxima while the others do not.
Again, this could be used as a clue to a large contribution from
X-ray sources.
\begin{figure*}[tp]
\centering
\subfigure[]{\label{pdf35}\includegraphics[angle=-90,width=10cm]{Image/pdf35.ps}}
\subfigure[]{\label{pdf55}\includegraphics[angle=-90,width=10cm]{Image/pdf55.ps}}
\subfigure[]{\label{pdf75}\includegraphics[angle=-90,width=10cm]{Image/pdf75.ps}}
\subfigure[]{\label{pdf95}\includegraphics[angle=-90,width=10cm]{Image/pdf95.ps}}
\caption{The evolution of the 21-cm PDF. The redshifts are 14.05, 10.64 and 8.48 for
(a), (b), and (c). The PDFs in panel (d) are taken when the ionization fraction is 0.5.
The red curves in (b) are Gaussian fits with the mean and variance of the PDFs.}
\label{pdf}
\end{figure*}
\begin{figure}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics{./Image/skew_xh_mass.eps}}
\caption{Evolution of the skewness with the ionization fraction.}
\label{skewness}
\end{figure}
\section{Conclusions}
We modeled the 21-cm signal during the EoR using numerical simulations,
putting the emphasis on how various types of sources can affect the signal.
The numerical methods used in this work are similar to \citet{Baek09}.
The N-body and hydrodynamical simulations have been run with GADGET2 and post-processed with
UV continuum radiative transfer, then further processed with Ly-$\alpha$ transfer
using LICORICE, allowing us to model the signal in absorption. The main difference from the
previous work is a more elaborate source model, including X-ray radiative transfer and He chemistry.
We have run 7 simulations to investigate the effects of different
IMFs, helium, different spectral indices and
different luminosities of the X-ray sources.
The reference simulation in this work, S1, using only hydrogen and stellar type
sources, reached the end of reionization at $z\approx 6.5$ and showed a strong absorption
signal until the end of reionization.
Our top-heavy IMF (model S2) produces $\sim 2.6$ times more ionizing photons than
the Salpeter IMF. S2 reached the end of reionization
earlier than the others by $\Delta z \approx 1$. In addition, the different SED
changes the ratio of Ly-$\alpha$ to ionizing UV photon numbers, which slows down the
saturation of the Ly-$\alpha$ coupling and the heating by Lyman-$\alpha$ in the top-heavy IMF case.
This modifies the statistical properties of the 21-cm signal.
The simulation with helium, S3, also has a slightly earlier
reionization than the others since the number of emitted photons per baryon is higher.
Except for the slightly lower kinetic temperature in the bulk of ionized regions, due to the
higher ionization potential of helium compared to hydrogen,
the properties of the 21-cm signal from S3 are similar to S1.
We chose QSO-type sources with a power-law spectrum as X-ray sources in models S4 to S7.
The spectral index $\alpha$ has a large observational uncertainty, so we used two different
spectral indices. S4 and S5 have 0.1\% of the total luminosity in the X-ray band.
S5 uses $\alpha = 0.6$, while the other simulations with X-rays use
$\alpha=1.6$. S5 showed very little difference in the gas temperature with respect to
S4.
S4, S6 and S7 have different luminosities in the X-ray band, keeping
the same values for the other simulation parameters.
Using a stronger X-ray luminosity indeed increased the gas temperature
in the neutral hydrogen. Accordingly the 21-cm signal and its power spectra are
modified.
We found an increase of a few kelvin for the neutral gas temperature in
our fiducial model, S4, in which X-rays account for 0.1\% of the total emitted energy.
The 21-cm signal in S4 was similar to S1, showing the maximum intensity in
absorption, $\sim 200$ mK, at $z \approx 9$.
Stronger X-ray levels increase the gas temperature and reduce the intensity. We found that
in S6 and S7, which use 1\% and 10\% of the total luminosity for X-rays, the absolute maximum intensity in
absorption decreases to $\sim 130$ mK and $\sim 80$ mK respectively.
The 21-cm power spectrum of our work is greater by two or three orders of magnitude than
in works focusing on the emission regime \citep{Mell06b,Zahn07,Lidz07b,Mcqu06}.
However, the results are in broad agreement with the work of
\citet{Sant08}, who modeled absorption using semi-analytical methods for X-ray and Lyman-$\alpha$ transfer.
We noticed that the 21-cm fluctuations are dominated by Ly-$\alpha$ fluctuations during the early phase,
by X-rays (or the gas temperature) later, and by the ionization fraction at the end.
This is visible on the evolution of the 21-cm power spectrum with redshift.
The 21-cm PDF of our work was different from that of other works, since
we do not assume that the spin temperature satisfies $T_s \gg T_\mathrm{CMB}$.
\textit{The first most important conclusion} from our work is that even including a higher than generally expected
level of X-rays, the absorption phase of the 21-cm signal survives. Its intensity and duration are reduced, but
the signal is still stronger than in the emission regime. Heating the IGM with X-rays takes time!
\textit{The second important result} is that we found three diagnostics which could be used in the analysis of future observations to
constrain the nature of the sources of reionization. $(i)$ The first, and maybe the most robust, is the
evolution with redshift of the large-scale modes ($k \sim 0.1$ h/Mpc) of the power spectrum. If reionization
is overwhelmingly powered by stars, this evolution should have one local minimum (two local maxima). However,
if the energy contribution of QSOs is greater than $\sim 1\%$, a second local minimum (third maximum) appears.
The higher the X-ray level, the broader the third peak. $(ii)$ The second simple diagnostic is the bimodal aspect of the
PDF, which disappears when the X-ray level rises above $1\%$ of the total ionizing luminosity. $(iii)$ Last is the redshift evolution
of the skewness of the 21-cm signal PDF. While all other models show a single local maximum at a few percent reionization,
a very high level of X-rays ($> 10\%$ of the total ionizing luminosity) produces a second local maximum around $50\%$ reionization.
Modeling the sources in the simulation is complex. It involves taking the formation history, IMF, SED, lifetime, and more into account. Although
detailed models are desirable for the credibility of the results, we believe that the effects on the 21-cm signal can be bundled into three quantities.
The first is the efficiency: how many photons are produced per atom locked into a star. This parameter must be calibrated to fit observational
constraints: end of reionization between redshift $6$ and $7$, and a Thomson scattering optical depth in agreement with CMB experiments. The two other
quantities which contain most of the information are two box-averaged ratios: the ratio of the energy emitted in the Lyman band to that emitted in the ionizing band,
and the analogous ratio of ionizing UV to X-rays. In this work we explored values of 0.32 (model S2) and 0.75 (all other models) for the former and $0.001$, $0.01$ and
$0.1$ for the latter. Once additional physics is included in the simulation and using a higher resolution to account for all the sources, it will be
interesting to explore the value of these quantities systematically.
We mentioned in the introduction that the minimum boxsize for reliable predictions of the signal is $100\,h^{-1}$Mpc.
It is important to realize that this value
(confirmed by emission regime simulations, e.g., \citealt{Ilie06b}) is estimated based on the clustering properties
of the sources and applies to the topology of the ionization field.
It may be underestimated when we study the early absorption regime, when only the highest
density peaks contain sources. Their distribution is the most
sensitive to possible non-gaussianities in the matter power spectrum. Moreover, they are
distant from each other and, consequently, produce large scale
fluctuations in the local flux of Lyman-$\alpha$ and X-ray photons. We intend to extend
our investigation to larger box sizes in a future work.
A few final words on additional physics not included in our model.
Shock heating from cosmological structure formation is ignored,
but it has the potential to affect the 21-cm signal by increasing the gas temperature
above the CMB temperature. However, it is not clear whether shocks are strong enough in the filaments
of the neutral regions to affect the 21-cm signal. Mini-halos ($\sim 10^4-10^8\,\,M_{\odot}$)
form very early during the EoR and are dense and warm enough from
shock heating during virialization to emit the 21 cm signal, but \citet{Furl06c}
find that the contribution of mini-halos will not dominate, because of the limited
resolution of the instrumentation. However, shock heating is worth investigating
with coupled radiative hydrodynamic simulations with higher mass resolution. Also worth
investigating is the effect of including higher Lyman lines in the radiative transfer.
\begin{acknowledgements}
This work was realized in the context of the SKADS and LIDAU projects.
SKADS, the Design Study of the SKA project, was financed by the FP6 of the
European Commission; the ANR project LIDAU was financed by the French
Ministry of Research.
\end{acknowledgements}
\bibliographystyle{apj}
\section{Introduction} \label{sec:intro}
Maximum likelihood estimation is by far the most popular point estimation
technique in machine learning and statistics. Assuming that the data consists
of $n$ $m$-dimensional vectors
\begin{align} \label{eq:data}
D=(X^{(1)},\ldots,X^{(n)}), \quad X^{(i)}\in\R^m,
\end{align}
and is sampled iid from a parametric distribution $p_{\theta_0}$ with $\theta_0 \in \Theta\subset \R^r$, a maximum likelihood estimator (mle) $\hat\theta_n^{\text{ml}}$ is a maximizer of the loglikelihood function
\begin{align} \label{eq:l}
\ell_n(\theta\,; D) &= \sum_{i=1}^n \log p_{\theta}(X^{(i)})
\\ \hat\theta_n^{\text{ml}} &= \argmax_{\theta\in\Theta} \ell_n(\theta\,;D).
\end{align}
The use of the mle is motivated by its consistency\footnote{The consistency $\hat\theta_n^{\text{ml}}\to \theta_0$ with probability 1 is sometimes called strong consistency in order to differentiate it from the weaker notion of consistency in probability $P(|\hat\theta_n^{\text{ml}}- \theta_0|<\epsilon)\to 0$.}, i.e.
$\hat\theta_n^{\text{ml}}\to \theta_0$ as $n\to\infty$ with probability 1 \citep{Ferguson1996}. The consistency property ensures that as the number $n$ of samples grows, the estimator will converge to the true parameter $\theta_0$ governing the data generation process.
An even stronger motivation for the use of the mle is that it has an
asymptotically normal distribution with mean vector $\theta_0$ and variance matrix $(nI(\theta_0))^{-1}$. More formally, we have the following convergence in distribution as $n\to\infty$ \citep{Ferguson1996}
\begin{align} \label{eq:mleEff}
\sqrt{n}\,(\hat\theta_n^{\text{ml}}-\theta_0)\tood N(0,I^{-1}(\theta_0)),
\end{align}
where $I(\theta)$ is the $r\times r$ Fisher information matrix
\begin{align}
I(\theta)&=\E_{p_{\theta}} \{\nabla \log p_{\theta}(X) (\nabla \log p_{\theta}(X))^{\top}\}
\end{align}
with $\nabla f$ representing the $r\times 1$ gradient vector of $f(\theta)$ with respect to $\theta$. The convergence \eqref{eq:mleEff} is especially striking since, according to the Cram\'er-Rao lower bound, the asymptotic variance $(n I(\theta_0))^{-1}$ of the mle is the smallest possible variance for any estimator. Since it achieves the lowest possible asymptotic variance, the mle (and other estimators which share this property) is said to be asymptotically efficient.
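Both properties can be checked numerically in a model where the mle has a closed form. For the exponential density $p_\lambda(x)=\lambda e^{-\lambda x}$ we have $\hat\lambda_n^{\text{ml}}=1/\bar X_n$ and $I(\lambda)=\lambda^{-2}$, so \eqref{eq:mleEff} predicts that the variance of $\sqrt{n}(\hat\lambda_n^{\text{ml}}-\lambda_0)$ tends to $\lambda_0^2$. The following simulation sketch (sample sizes are arbitrary) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)
lam0, n, reps = 2.0, 2000, 2000

# draw reps independent datasets of size n from Exp(lam0);
# the mle for each dataset is the reciprocal of the sample mean
mle = 1.0 / rng.exponential(scale=1.0 / lam0, size=(reps, n)).mean(axis=1)

scaled = np.sqrt(n) * (mle - lam0)
print(scaled.mean())   # close to 0: consistency
print(scaled.var())    # close to lam0**2 = 1/I(lam0): asymptotic efficiency
```

The empirical variance of the rescaled estimates approaches the Cram\'er-Rao bound $1/I(\lambda_0)=\lambda_0^2$ as $n$ grows.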
The consistency and asymptotic efficiency of the mle motivate its use in many circumstances. Unfortunately, in some situations the maximization or even evaluation of the loglikelihood \eqref{eq:l} and its derivatives is impossible due to computational considerations. For instance, this is the situation in many high dimensional exponential family distributions, including Markov random fields whose graphical structure contains cycles. This has led to the proposal of alternative estimators under the premise that a loss of asymptotic efficiency is acceptable in return for reduced computational complexity.
In contrast to asymptotic efficiency, we view consistency as a less negotiable property and prefer to avoid inconsistent estimators if at all possible. This common viewpoint in statistics is somewhat at odds with recent advances in the machine learning literature promoting non-consistent estimators, for example using variational techniques \citep{Jordan1999}. Nevertheless, we feel that there is a consensus regarding the benefits of having consistent estimators over non-consistent ones.
In this paper, we propose a family of estimators, for use in situations where the computation of the mle is intractable. In contrast to many previously proposed approximate estimators, our estimators are statistically consistent and admit a precise quantification of both computational complexity and statistical accuracy through their asymptotic variance. Due to the continuous parameterization of the estimator family, we obtain an effective framework for achieving a predefined problem-specific balance between computational tractability and statistical accuracy. We also demonstrate that in some cases reduced computational complexity may in fact act as a regularizer, increasing robustness and therefore accomplishing both reduced computation and increased accuracy. This ``win-win'' situation conflicts with the conventional wisdom stating that moving from the mle to pseudo-likelihood and other related estimators result in a computational win but a statistical loss. Nevertheless we show that this occurs in some practical situations.
For the sake of concreteness, we focus on the case of estimating the parameters associated with Markov random fields. In this case, we provide a detailed discussion of the accuracy--complexity tradeoff. We include experiments on both simulated and real world data for several models including the Boltzmann machine, conditional random fields, and the Boltzmann linear chain model.
\section{Related Work} \label{sec:related}
There is a large body of work dedicated to tractable learning techniques. Two popular categories are Markov chain Monte Carlo (MCMC) and variational methods. MCMC is a general purpose technique for approximating expectations and can be used to approximate the normalization term and other intractable portions of the loglikelihood and its gradient \citep{Casella2004}. Variational methods are
techniques for conducting inference and learning based on tractable bounds \citep{Jordan1999}.
Despite the substantial work on MCMC and variational methods, there are few practical results concerning the convergence and approximation rates of the resulting parameter estimators. Variational techniques are sometimes inconsistent and their asymptotic statistical behavior is hard to analyze. In the case of MCMC, a number of asymptotic results exist \citep{Casella2004}, but since MCMC plays a role inside each gradient descent or EM iteration it is hard to analyze the asymptotic behavior of the resulting parameter estimates. An advantage of our framework is that we are able to directly characterize the asymptotic behavior of the estimator and relate it to the amount of computational savings.
Our work draws on the composite likelihood method for parameter estimation proposed by \citet{Lindsay1988} which in turn generalized the pseudo likelihood of \citet{Besag1974}. A selection of more recent studies on pseudo and composite likelihood are \citep{Arnold1991, Liang2003, Varin2005b, Sutton2007, Hjort2008}. Most of the
recent studies in this area examine the behavior of the pseudo or composite likelihood in a particular modeling situation. We believe that the present paper is the first to systematically examine statistical and computational tradeoffs in a general quantitative framework. Possible exceptions are \citep{Zhu2002} which is an experimental study on texture generation, \citep{Xing2003a} which is focused on inference rather than parameter estimation, and \citep{Liang2008} which compares discriminative and generative risks.
\section{Stochastic Composite Likelihood} \label{sec:scl}
In many cases, the absence of a closed form expression for the normalization term prevents the computation of the loglikelihood \eqref{eq:l} and its derivatives thereby severely limiting the use of the mle. A popular example is Markov random fields, wherein the computation of the normalization term is often intractable (see Section~\ref{sec:comp} for more details). In this paper we propose alternative estimators based on the maximization of a stochastic
variation of the composite likelihood.
We denote multiple samples using superscripts and individual dimensions using subscripts. Thus $X^{(r)}_j$ refers to the $j$-th dimension of the $r$-th sample. Following standard convention we refer to random variables (RV) using uppercase letters and their corresponding values using lowercase letters. We also use the standard notation for extracting a subset of the dimensions of a random variable
\begin{align} \label{eq:X_S}
X_S \defeq \{X_i:i\in S\}, \qquad X_{-j} \defeq \{X_i:i\neq j\}.
\end{align}
We start by reviewing the pseudo loglikelihood function \citep{Besag1974} associated with the data $D$ \eqref{eq:data},
\begin{align}
p\ell_n(\theta\,; D) &\defeq \sum_{i=1}^n \sum_{j=1}^m\log p_{\theta}(X^{(i)}_j|X^{(i)}_{-j})\label{eq:pl}.
\end{align}
The maximum pseudo likelihood estimator (mple) $\hat\theta_n^\text{mpl}$ is
consistent, i.e., $\hat\theta_n^\text{mpl}\to\theta_0$ with probability 1, but possesses considerably higher asymptotic variance than the mle's $(nI(\theta_0))^{-1}$. Its main advantage is that it does not require the computation of the normalization term, as the term cancels out in the probability ratio defining conditional distributions
\begin{eqnarray}
p_{\theta}(X_j|X_{-j})=p_{\theta}(X_j|\{X_k:k\neq j\})=\frac{p_{\theta}(X)}{\sum_{x_j} p_{\theta}(X_1,\ldots,X_{j-1},X_j=x_j,X_{j+1},\ldots,X_m)}.
\end{eqnarray}
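To make the cancellation concrete, the following minimal Python sketch (ours, not from the paper; the pairwise model and function names are hypothetical) evaluates $\log p_{\theta}(x_j|x_{-j})$ for a small binary model using only unnormalized potentials, so $\log Z(\theta)$ is never computed:

```python
import numpy as np

def unnorm_logp(x, W):
    """Unnormalized log-potential sum_{i<j} W[i,j] x_i x_j of a small
    pairwise binary model; the normalizer log Z is never computed."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.triu(W, 1) * np.outer(x, x)))

def cond_logp(x, j, W):
    """log p(x_j | x_{-j}): Z cancels in the probability ratio, leaving
    only a sum over the two possible values of x_j."""
    scores = np.array([unnorm_logp(list(x[:j]) + [v] + list(x[j + 1:]), W)
                       for v in (0, 1)])
    m = scores.max()
    return scores[x[j]] - (m + np.log(np.exp(scores - m).sum()))

def pseudo_loglik(X, W):
    """Pseudo-loglikelihood: sum of cond_logp over samples and dimensions."""
    return sum(cond_logp(x, j, W) for x in X for j in range(len(x)))
```

Each conditional requires summing only two unnormalized potentials rather than the $2^m$ terms of the full normalizer.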
The mle and mple represent two different ways of resolving the tradeoff between asymptotic variance and computational complexity. The mle has low asymptotic variance but high computational complexity while the mple has higher asymptotic variance but low computational complexity. It is desirable to obtain additional estimators realizing alternative resolutions of the accuracy-complexity tradeoff. To this end we define the stochastic composite likelihood, whose maximization provides a family of consistent estimators with statistical accuracy and computational complexity spanning the entire accuracy-complexity spectrum.
Stochastic composite likelihood generalizes the likelihood and pseudo
likelihood functions by constructing an objective function that is a stochastic sum of likelihood objects. We start by defining the notion of $m$-pairs and likelihood objects and then proceed to stochastic composite likelihood.
\begin{defn}
An $m$-pair $(A,B)$ is a pair of sets $A,B\subset\{1,\ldots,m\}$ satisfying $A\neq\emptyset$ and $A\cap B=\emptyset$. The likelihood object associated with an $m$-pair $(A,B)$ and $X$ is $S_{\theta}(A,B)\defeq\log p_{\theta}(X_A|X_B)$ where $X_S$ is defined in \eqref{eq:X_S}. The composite loglikelihood function \citep{Lindsay1988}
is a collection of likelihood objects defined by a finite sequence of $m$-pairs $(A_1,B_1),\ldots,(A_k,B_k)$
\begin{align}
c\ell_n(\theta\,;D)
&\defeq \sum_{i=1}^n \sum_{j=1}^k \log p_{\theta}(X^{(i)}_{A_j}|X^{(i)}_{B_j}).\label{eq:cl}
\end{align}
\end{defn}
There is a certain lack of flexibility associated with the composite likelihood framework, as each likelihood object is either selected or not for the entire sample $X^{(1)},\ldots,X^{(n)}$. There is no allowance for some objects to be selected more frequently than others. For example, available computational resources may allow the computation of the loglikelihood for 20\% of the samples and the pseudo-likelihood for the remaining 80\%. In the case of composite likelihood, if we select the full-likelihood component (or the pseudo-likelihood or any other likelihood object), then this component is applied to all samples indiscriminately.
In SCL, different likelihood objects $S_{\theta}(A_j,B_j)$ may be selected for different samples with the possibility of some likelihood objects being selected for only a small fraction of the data samples. The selection may be non-coordinated, in which case each component is selected or not independently of the other components. Or it may be coordinated in which case the selection of one component depends on the selection of the other ones. For example, we may wish to avoid selecting a pseudo likelihood component for a certain sample $X^{(i)}$ if the full likelihood component was already selected for it.
Another important advantage of stochastic selection concerns parameterization. The discrete parameterization of \eqref{eq:cl} defined by the sequence $(A_1,B_1),\ldots,(A_k,B_k)$ is inconvenient for theoretical analysis: each component is either selected or not, turning the problem of optimally selecting components into a hard combinatorial problem. The stochastic composite likelihood, defined below, enjoys a continuous parameterization leading to more convenient optimization techniques and convergence analysis.
\begin{defn} \label{def:scl}
Consider a finite sequence of $m$-pairs $(A_1,B_1),\ldots,(A_k,B_k)$, a dataset $D=(X^{(1)},\ldots,X^{(n)})$, $\beta\in\R^k_+$, and $n$ iid binary random vectors $Z^{(1)},\ldots,Z^{(n)}\iid P(Z)$ with $\lambda_j\defeq \E(Z_j)>0$.
The stochastic composite loglikelihood (scl) is
\begin{align} \label{eq:scl}
sc\ell_n(\theta\,;D) &\defeq
\frac{1}{n} \sum_{i=1}^n m_{\theta}(X^{(i)},Z^{(i)}), \quad \text{where}\quad \\
m_{\theta}(X,Z) &\defeq\sum_{j=1}^k \beta_j Z_j \log p_{\theta}(X_{A_j}|X_{B_j}). \label{eq:mDef}
\end{align}
\end{defn}
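As a small illustration (a hedged sketch of ours, not part of the paper; the `log_cond` callables stand in for the likelihood objects $\log p_{\theta}(X_{A_j}|X_{B_j})$), the scl objective \eqref{eq:scl} can be evaluated as:

```python
def scl_objective(X, Z, beta, log_cond):
    """Stochastic composite loglikelihood:
    (1/n) sum_i sum_j beta_j Z^(i)_j log p(X^(i)_{A_j} | X^(i)_{B_j}),
    where log_cond[j] evaluates the j-th likelihood object on one sample."""
    total = 0.0
    for x_i, z_i in zip(X, Z):
        for j, f in enumerate(log_cond):
            if z_i[j]:                  # object j selected for this sample
                total += beta[j] * f(x_i)
    return total / len(X)
```

Only the selected objects are evaluated, which is where the computational savings originate.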
In other words, the scl is a stochastic extension of \eqref{eq:cl} where for each sample $X^{(i)}, i=1,\ldots,n$, the likelihood objects $S(A_1,B_1),\ldots,S(A_k,B_k)$ are either selected or not, depending on the values of the binary random variables $Z^{(i)}_1,\ldots,Z^{(i)}_k$, and weighted by the constants $\beta_1,\ldots,\beta_k$. Note that $Z^{(i)}_j$ may in general depend on $Z^{(i)}_r$ but not on $Z^{(l)}_r$ or on $X^{(i)}$.
When we focus on examining different models for $P(Z)$ we sometimes parameterize it, for example by $\lambda$ i.e., $P_{\lambda}(Z)$. This reuse of $\lambda$ (it is also used in Definition~\ref{def:scl}) is a notational abuse. We accept it, however, as in most of the cases that we consider $\lambda_1,\ldots,\lambda_k$ from Definition~\ref{def:scl} either form the parameter vector for $P(Z)$ or are part of it.
Some illustrative examples follow.
\begin{description}
\item[Independence.] Factorizing $P_{\lambda}(Z_1,\ldots,Z_k)=\prod_j P_{\lambda_j}(Z_j)$ leads to $Z^{(i)}_j\sim \text{Ber}(\lambda_j)$ with complete independence among the indicator variables. For each sample $X^{(i)}$, each likelihood object $S(A_j,B_j)$ is selected or not independently with probability $\lambda_j$.
\item[Multinomial.] A multinomial model $Z\sim \text{Mult}(1,\lambda)$ implies that for each sample $Z^{(i)}$ a multivariate Bernoulli experiment is conducted with precisely one likelihood object being selected depending on the selection probabilities $\lambda_1,\ldots,\lambda_k$.
\item[Product of Multinomials.] A product of multinomials is formed by a partition of the component indices into $l$ disjoint subsets $\{1,\ldots,k\}=C_1\cup \cdots \cup C_l$ where $Z_{C_i}\sim \text{Mult}(1,(\lambda_j:j\in C_i))$ i.e.,
\[ P(Z)=\prod_{i=1}^l P_i\left(\{Z_j:j\in C_i\}\right), \quad \text{ where }P_i \text{ is }\text{Mult}(1,(\lambda_j:j\in C_i)).\]
\item[Loglinear Models.] The distribution $P(Z)$ follows a hierarchical loglinear model \citep{Bishop1975}. This case subsumes the other cases above.
\end{description}
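The first two policies are straightforward to sample with NumPy; the sketch below is our own (the seed and function names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Z_independent(n, lam):
    """Independence policy: Z^(i)_j ~ Bernoulli(lam[j]), fully independent
    across samples i and components j."""
    lam = np.asarray(lam)
    return (rng.random((n, len(lam))) < lam).astype(int)

def sample_Z_multinomial(n, lam):
    """Multinomial policy: exactly one likelihood object selected per
    sample, object j with probability lam[j] (lam must sum to 1)."""
    return rng.multinomial(1, lam, size=n)
```

Coordinated policies such as the product of multinomials follow by applying the multinomial sampler to each block $C_i$ of components separately.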
In analogy to the mle and the mple, the maximum scl estimator (mscle)
$\hat\theta_n^{\text{msl}}$ estimates $\theta_0$ by maximizing the scl
function. In contrast to the loglikelihood and pseudo loglikelihood functions, the scl function and its maximizer are random variables that depend on the indicator variables $Z^{(1)},\ldots,Z^{(n)}$ in addition to the data $D$. As such, its behavior should be summarized by examining the limit $n\to\infty$. Doing so eliminates the dependency on particular realizations of $Z^{(1)},\ldots,Z^{(n)}$ in favor of the expected frequencies $\lambda_j=\E_{P(Z)}Z_j$ which are non-random constants.
The statistical accuracy and computational complexity of the msl estimator are continuous functions of the parameters $(\beta,\lambda)$ (component weights and selection probabilities, respectively) which vary continuously throughout their domain $(\lambda,\beta)\in \Lambda\times \R_+^k$. Choosing appropriate values of $(\lambda,\beta)$ retrieves the special cases of mle, mple, and maximum composite likelihood, with each selection being associated with a distinct statistical accuracy and computational complexity. The scl framework allows selections of many more values of $(\lambda,\beta)$, realizing a wide continuous spectrum of estimators, each resolving the accuracy-complexity tradeoff differently.
We include below a demonstration of the scl framework in a simple low dimensional case. In the following sections we discuss in detail the statistical behavior of the mscle and its computational complexity. We conclude the paper with several experimental studies.
\subsection{Boltzmann Machine Example}
Before proceeding we illustrate the SCL framework using a simple example involving a Boltzmann machine \citep{Jordan1999}. We consider in detail three SCL policies: full likelihood (FL), pseudo-likelihood (PL), and a stochastic combination of first and second order pseudo-likelihood with the first order components ($p(X_i|X_{-i})$) selected with probability $\lambda$ and the second order components ($p(X_i,X_j|X_{\{i,j\}^c})$) with probability $1-\lambda$.
Denoting the number of (binary) graph nodes by $m$, the number of examples by $n$, the computational complexity of the FL function (FLOP\footnote{FLOP stands for the number of floating point operations.} counts) is $O\left(\binom{m}{2}(2^m+n)\right)$ (loglikelihood) and $O\left(\binom{m}{2}^22^m+n\binom{m}{2}\right)$ (loglikelihood gradient). The exponential growth in $m$ prevents such computations for large graphs.
The $k$-order PL function offers a practical alternative to FL (1-order PL corresponds to the traditional pseudo-likelihood and 2-order is its analog with second order components $p(X_{\{i,j\}}|X_{\{i,j\}^c})$). The complexity of computing the corresponding SCL function is
$O\left(\binom{m}{2} \left(\binom{m}{k}2^k+n\right)\right)$ (for the objective function) and $O\left(\binom{m}{k}\binom{m}{2}^22^k+n\binom{m}{2}\right)$ (for the gradient). The slower complexity growth of the $k$-order PL (polynomial in $m$ instead of exponential) is offset by its reduced statistical accuracy, which we measure using the normalized asymptotic variance
\begin{align} \label{eq:relEff}
\text{eff}(\hat\theta_n) = \frac{\det (\text{Asymp Var}(\hat\theta_n))}{\det (\text{Asymp Var}(\hat\theta_n^{\text{mle}}))}
\end{align}
which is bounded from below by 1 (due to the Cram\'er-Rao lower bound) and whose deviation from 1 reflects the estimator's inefficiency relative to the MLE.
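Given estimates of the two asymptotic covariance matrices, \eqref{eq:relEff} is a one-line computation (a sketch with hypothetical names; the matrices themselves would come from the asymptotic variance formulas of Section~\ref{sec:stat}):

```python
import numpy as np

def normalized_asymp_var(V_est, V_mle):
    """eff = det(V_est) / det(V_mle); bounded below by 1,
    with 1 attained by the MLE (Cramer-Rao)."""
    return np.linalg.det(V_est) / np.linalg.det(V_mle)
```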
The MLE thus achieves the best accuracy but it is computationally intractable. The first order and second order PL have higher asymptotic variance but are easier to compute. The SCL framework enables adding many more estimators filling in the gaps between ML, 1-order PL, 2-order PL, etc.
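The growth rates above can be compared numerically; the helpers below (our own naming) drop constants, mirroring the O-notation of the objective-function counts in the text:

```python
from math import comb

def fl_flops(m, n):
    """O-style count for the full loglikelihood of an m-node binary
    Boltzmann machine: comb(m,2) * (2^m + n)."""
    return comb(m, 2) * (2 ** m + n)

def pl_k_flops(m, n, k):
    """O-style count for the k-order pseudo-likelihood objective:
    comb(m,2) * (comb(m,k) * 2^k + n)."""
    return comb(m, 2) * (comb(m, k) * 2 ** k + n)
```

For example, with $m=20$ and $n=1000$ the full likelihood count exceeds the 1-order PL count by roughly three orders of magnitude, and the gap widens exponentially with $m$.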
We illustrate three SCL functions in the context of a simple Boltzmann machine (five binary nodes, fourteen samples $X^{(1)},\ldots,X^{(14)}$, $\theta^{\text{true}}=(-1,-1,-1,-1,-1,1,1,1,1,1)$) in Figure~\ref{fig:policies}. The top box refers to the full likelihood policy. For each of the fourteen samples, the FL component is computed and their aggregation forms the SCL function which in this case equals the loglikelihood. The selection of the FL component for each sample is illustrated using a diamond box. The numbers under the boxes reflect the FLOP counts needed to compute the components and the total complexity associated with computing the entire SCL or loglikelihood is listed on the right. As mentioned above, the normalized asymptotic variance \eqref{eq:relEff} is 1.
The pseudo-likelihood function \eqref{eq:pl} is illustrated in the second box where each row corresponds to one of the five PL components. As each of the five PL components is selected for every sample, diamond boxes cover the entire $5\times 14$ array. The shade of the diamond boxes reflects the complexity required to compute them, enabling an easy comparison to the FL components in the top of the figure (note how the FL boxes are much darker than the PL boxes). The numbers at the bottom of each column reflect the FLOP marginal count for each of the fourteen samples and the numbers to the right of the rows reflect the FLOP marginal count for each of the PL components. In this case the FLOP count is less than half the FLOP count of the FL in the top box (the reduction in complexity obtained by replacing FL with PL increases dramatically for graphs with more than 5 nodes) but the asymptotic variance is 83\% higher\footnote{The asymptotic variance of SCL functions is computed using formulas derived in the next section.}.
The third SCL policy reflects a stochastic combination of first and second order pseudo likelihood components. Each first order component is selected with probability $\lambda$ and each second order component with probability $1-\lambda$. The result is a collection of 5 1-order PL components and 10 2-order components, with only some of them selected for each of the fourteen samples. Again, diamond boxes correspond to selected components and are shaded according to their FLOP complexity. The per-component and per-example FLOP marginals are listed in the bottom row and right-most column. The total complexity lies between that of FL and PL, and the normalized asymptotic variance is reduced from the PL's 1.83 to 1.48.
\begin{figure}
\centering
\setlength{\tabcolsep}{0pt}
{ \scriptsize \begin{tabular}{|l|*{14}{c}cc|}\hline
& $X^{(1)}$& $X^{(2)}$& $X^{(3)}$& $X^{(4)}$& $X^{(5)}$& $X^{(6)}$& $X^{(7)}$& $X^{(8)}$& $X^{(9)}$& $X^{(10)}$& $X^{(11)}$& $X^{(12)}$& $X^{(13)}$& $X^{(14)}$&& \\\hline \hline
FL &&&&&&&&&&&&&&&&\\
$X_1,\ldots,X_5$ &
\sq{$\diamond$}{1} & \sq{$\diamond$}{1} &
\sq{$\diamond$}{1} & \sq{$\diamond$}{1} &
\sq{$\diamond$}{1} & \sq{$\diamond$}{1} &
\sq{$\diamond$}{1} & \sq{$\diamond$}{1} &
\sq{$\diamond$}{1} & \sq{$\diamond$}{1} &
\sq{$\diamond$}{1} & \sq{$\diamond$}{1} &
\sq{$\diamond$}{1} & \sq{$\diamond$}{1} & & 4620\\
&
330 & 330 &
330 & 330 &
330 & 330 &
330 & 330 &
330 & 330 &
330 & 330 &
330 & 330 & & 4620\\
&&&&&&&&&&&&&&&&\\
Complexity & 4620&&&&&&&&&&&&&&&\\
Norm Asym Var & 1&&&&&&&&&&&&&&&\\
\hline \hline
PL &&&&&&&&&&&&&&&&\\
$X_1|X_{-1}$ &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} & & 308\\
$X_2|X_{-2}$ &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} & & 308\\
$X_3|X_{-3}$ &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} & & 308\\
$X_4|X_{-4}$ &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} & & 308\\
$X_5|X_{-5}$ &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} & & 308\\
&
110 & 110 &
110 & 110 &
110 & 110 &
110 & 110 &
110 & 110 &
110 & 110 &
110 & 110 & &1540\\
&&&&&&&&&&&&&&&&\\
Complexity & 1540&&&&&&&&&&&&&&&\\
Norm Asym Var & 1.83&&&&&&&&&&&&&&&\\
\hline \hline
0.7PL+0.3PL2 &&&&&&&&&&&&&&&&\\
$X_1|X_{-1}$ &
& \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} &\sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & &
& &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
& & & 176 \\
$X_2|X_{-2}$ &
& \sq{$\diamond$}{93} &
& \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} & &220\\
$X_3|X_{-3}$ &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
& &
& \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
& \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} & &220\\
$X_{4}|X_{-4}$ &
& &
\sq{$\diamond$}{93} & &
& \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & &
& &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} & &154\\
$X_5|X_{-5}$ &
\sq{$\diamond$}{93} & &
&&
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
\sq{$\diamond$}{93} & &
\sq{$\diamond$}{93} & \sq{$\diamond$}{93} &
& \sq{$\diamond$}{93} & &198\\
$X_{\{1,2\}}|X_{\{1,2\}^c}$ &
& &
\sq{$\diamond$}{75} & \sq{$\diamond$}{75} &
& \sq{$\diamond$}{75} &
&&
&&
\sq{$\diamond$}{75} & &
&&& 164 \\
$X_{\{1,3\}}|X_{\{1,3\}^c}$ &
\sq{$\diamond$}{75} & &
\sq{$\diamond$}{75} & &
\sq{$\diamond$}{75} & &
&&
\sq{$\diamond$}{75} & &
& \sq{$\diamond$}{75} &
&&&205\\
$X_{\{1,4\}}|X_{\{1,4\}^c}$ &
&&&&
\sq{$\diamond$}{75} & \sq{$\diamond$}{75} & \sq{$\diamond$}{75} &
&& \sq{$\diamond$}{75} &
&&&&&164\\
$X_{\{1,5\}}|X_{\{1,5\}^c}$ &
\sq{$\diamond$}{75} & &
&&&&
\sq{$\diamond$}{75} & & & &
\sq{$\diamond$}{75} & \sq{$\diamond$}{75} & && & 164\\
$X_{\{2,3\}}|X_{\{2,3\}^c}$ &
&&&\sq{$\diamond$}{75} & \sq{$\diamond$}{75} & & \sq{$\diamond$}{75} &
&& \sq{$\diamond$}{75} & \sq{$\diamond$}{75} & &&& & 205\\
$X_{\{2,4\}}|X_{\{2,4\}^c}$ &
& \sq{$\diamond$}{75} & \sq{$\diamond$}{75} & &&& \sq{$\diamond$}{75} & \sq{$\diamond$}{75} & & \sq{$\diamond$}{75} & \sq{$\diamond$}{75} & \sq{$\diamond$}{75} & & && 287\\
$X_{\{2,5\}}|X_{\{2,5\}^c}$ &
\sq{$\diamond$}{75} & & & \sq{$\diamond$}{75} & & \sq{$\diamond$}{75} & & \sq{$\diamond$}{75} & &&&&&&&164\\
$X_{\{3,4\}}|X_{\{3,4\}^c}$ &
\sq{$\diamond$}{75} & &&&&&&\sq{$\diamond$}{75} & &&&&&&&82\\
$X_{\{3,5\}}|X_{\{3,5\}^c}$ &
&&\sq{$\diamond$}{75} &&\sq{$\diamond$}{75} &\sq{$\diamond$}{75} &&\sq{$\diamond$}{75} &&&&&&& &164\\
$X_{\{4,5\}}|X_{\{4,5\}^c}$ &
&&&&&& \sq{$\diamond$}{75} & \sq{$\diamond$}{75} & \sq{$\diamond$}{75} & \sq{$\diamond$}{75} & & \sq{$\diamond$}{75} & &&& 205\\
& 208 & 107 & 208 & 167 & 230 & 230 & 293 & 271 & 148 & 230 & 274 & 252 & 66 & 88 & &2772 \\
&&&&&&&&&&&&&&&&\\
Complexity & 2772&&&&&&&&&&&&&&&\\
Norm Asym Var & 1.48&&&&&&&&&&&&&&&\\ \hline
\end{tabular}}
\caption{Sample runs of three different SCL policies for 14 examples $X^{(1)},\ldots,X^{(14)}$ drawn from a 5 binary node Boltzmann machine ($\theta^{\text{true}}=(-1,-1,-1,-1,-1,1,1,1,1,1)$). The policies are full likelihood (FL, top), pseudo-likelihood (PL, middle), and a stochastic combination of first and second order pseudo-likelihood with the first order components selected with probability 0.7 and the second order components with probability 0.3 (bottom).
\vspace{0.1in} \newline
The sample runs for the policies are illustrated by placing a diamond box in table entries corresponding to selected likelihood objects (rows corresponding to likelihood objects and columns to $X^{(1)},\ldots,X^{(14)}$). The FLOP count of each likelihood object determines the shade of its diamond box, while the total FLOP counts per example and per likelihood object are displayed as table marginals (bottom row and right column for each policy). We also display the total FLOP count and the normalized asymptotic variance \eqref{eq:relEff}.
\vspace{0.1in} \newline
Even in the simple case of 5 nodes, FL is the most complex policy, with PL requiring a third of the FL computation and 0.7PL+0.3PL2 falling in between. The situation is reversed for estimation accuracy: FL achieves the lowest possible normalized asymptotic variance of 1, PL is almost twice that, and 0.7PL+0.3PL2 is somewhere in the middle. The SCL framework spans the accuracy-complexity spectrum; choosing the right $\lambda$ value obtains an estimator that suits the available computational resources and the required accuracy.}
\label{fig:policies}
\end{figure}
Additional insight may be gained at this point by considering Figure~\ref{fig:compAccPlot}, which plots several SCL estimators as points in the plane whose $x$ and $y$ coordinates correspond to normalized asymptotic variance and computational complexity respectively. We turn now to the statistical properties of the SCL estimators.
\section{Consistency and Asymptotic Variance of $\hat\theta_n^{\text{msl}}$} \label{sec:stat}
A useful property of the SCL framework is that it enables a mathematical characterization of the statistical properties of the estimator $\hat\theta_n^{\text{msl}}$. In this section we examine the conditions for consistency of the mscle and its asymptotic distribution; in the next section we consider robustness. The propositions below constitute novel generalizations of some well-known results in classical statistics. Proofs may be found in Appendix~\ref{sec:proofs}. For simplicity, we assume that $X$ is discrete and $p_{\theta}(x)>0$.
\begin{defn} \label{def:identifiability}
A sequence of $m$-pairs $(A_1,B_1),\ldots,(A_k,B_k)$ ensures identifiability of $p_{\theta}$ if the map $\{p_{\theta}(X_{A_j}|X_{B_j}): j=1,\ldots,k\}\mapsto p_{\theta}(X)$ is injective. In other words, there exists only a single collection of conditionals $\{p_{\theta}(X_{A_j}|X_{B_j}): j=1,\ldots,k\}$ that
does not contradict the joint $p_{\theta}(X)$.
\end{defn}
\begin{prop} \label{prop:consistency}
Let $\Theta\subset \R^r$ be an open set, $p_{\theta}(x)>0$ and continuous and smooth in $\theta$, and $(A_1,B_1),\ldots,(A_k,B_k)$ be a sequence of $m$-pairs for which $\{(A_j,B_j):\forall j \text{ such that } \lambda_j>0\}$ ensures identifiability. Then the sequence of SCL maximizers is strongly consistent i.e.,
\begin{align}
P\left(\lim_{n\to\infty} \hat\theta_n=\theta_0\right)=1.
\end{align}
\end{prop}
The above proposition indicates that to guarantee consistency, the sequence of $m$-pairs needs to satisfy Definition~\ref{def:identifiability}. It can be shown that a selection equivalent to the pseudo likelihood function, i.e.,
\begin{align}
\mathcal{S}=\{(A_1,B_1),\ldots,(A_m,B_m)\} \quad \text{where} \quad
A_i=\{i\}, B_i=\{1,\ldots,m\}\setminus A_i \label{eq:plPieces}
\end{align}
ensures identifiability and consequently the consistency of the mscle. Furthermore, every selection of $m$-pairs that subsumes $\mathcal{S}$ in \eqref{eq:plPieces} similarly guarantees identifiability and consistency.
The proposition below establishes the asymptotic normality of the mscle $\hat\theta_n$. The asymptotic variance enables the comparison of scl functions with different parameterizations $(\lambda,\beta)$.
\begin{prop} \label{prop:asympVar}
Making the assumptions of Proposition \ref{prop:consistency} as well as convexity of $\Theta\subset\R^r$ we have the following convergence in distribution
\begin{align}
\sqrt{n}(\hat\theta_n^{\text{msl}}-\theta_0) \tood N\left(0,\Upsilon \Sigma \Upsilon\right)
\end{align}
where
\begin{align}
\Upsilon^{-1}&=\sum_{j=1}^k \beta_j\lambda_j \Var_{\theta_0} (\nabla S_{\theta_0}(A_j,B_j)) \\
\Sigma&=\Var_{\theta_0}\left(\sum_{j=1}^k\beta_j\lambda_j \nabla S_{\theta_0}(A_j,B_j)\right).
\end{align}
\end{prop}
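A plug-in estimate of the sandwich $\Upsilon\Sigma\Upsilon$ can be computed from per-sample gradients of the likelihood objects. The sketch below is our own (the centering of the empirical gradients is our implementation choice); in the mle-like case $k=1$, $\beta_1=\lambda_1=1$, it reduces to the inverse empirical covariance of the score:

```python
import numpy as np

def scl_asymp_var(grad_samples, beta, lam):
    """Plug-in sandwich Upsilon Sigma Upsilon: grad_samples[j] is an
    (n, r) array holding per-sample gradients of S(A_j, B_j) at the
    estimated theta."""
    n, r = grad_samples[0].shape
    Upsilon_inv = np.zeros((r, r))
    weighted = np.zeros((n, r))
    for G, b, l in zip(grad_samples, beta, lam):
        Gc = G - G.mean(axis=0)
        Upsilon_inv += b * l * (Gc.T @ Gc) / n   # beta_j lam_j Var(grad S_j)
        weighted += b * l * G
    Wc = weighted - weighted.mean(axis=0)
    Sigma = (Wc.T @ Wc) / n                      # Var(sum_j beta_j lam_j grad S_j)
    Upsilon = np.linalg.inv(Upsilon_inv)
    return Upsilon @ Sigma @ Upsilon
```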
The notation $\Var_{\theta_0}(Y)$ represents the covariance matrix of the random vector $Y$ under $p_{\theta_0}$, while $\toop,\tood$ denote convergence in probability and in distribution, respectively \citep{Ferguson1996}. $\nabla$ represents the gradient vector with respect to $\theta$.
When $\theta$ is a vector the asymptotic variance is a matrix. To facilitate comparison between different estimators we follow the convention of using the determinant, and in some cases the trace, to measure the statistical accuracy. See \citep{Serfling1980} for some heuristic arguments for doing so. Figures~\ref{fig:policies}, \ref{fig:boltzmann}, and \ref{fig:compAccPlot} provide the asymptotic variance for some SCL estimators and describe how it can be used to gain insight into which estimator to use.
The statistical accuracy of the SCL estimator depends on $\beta$ (weight parameters) and $\lambda$ (selection parameter). It is thus desirable to use the results in this section in determining what values of $\beta,\lambda$ to use. Directly using the asymptotic variance is not possible in practice as it depends on the unknown quantity $\theta_0$. However, it is possible to estimate the asymptotic variance using the training data. We describe this in Section~\ref{sec:beta}.
\section{Robustness of $\hat\theta_n^{\text{msl}}$}\label{sec:robust}
We observed in our experiments (see Section~\ref{sec:experiments}) that the SCL estimator sometimes performs better on a held-out test set than the maximum likelihood estimator. This phenomenon seems to contradict the fact that the asymptotic variance of the MLE is lower than that of the SCL maximizer. It is explained by the fact that in some cases the true model generating the data does not lie within the parametric family $\{p_{\theta}:\theta\in\Theta\}$ under consideration. For example, many graphical models (HMM, CRF, LDA, etc.) make conditional independence assumptions that are often violated in practice. In such cases the SCL estimator acts as a regularizer, achieving better test set performance than the non-regularized MLE. We provide below a theoretical account of this phenomenon using the language of $m$-estimators and statistical robustness. Our notation follows that of \citep{Vaart1998}.
We assume that the model generating the data is outside the model family $P(X)\not\in \{p_{\theta}:\theta\in\Theta\}$ and we augment $m_{\theta}(X,Z)$ in \eqref{eq:mDef} with
\begin{align*}
\psi_{\theta}(X,Z) &\defeq \nabla m_{\theta}(X,Z)\\
\dot\psi_{\theta}(X,Z) &\defeq \nabla^2 m_{\theta}(X,Z) \quad \text{(matrix of second order derivatives)}\\
\Psi_n(\theta) &\defeq\frac{1}{n}\sum_{i=1}^n \psi_{\theta}(X^{(i)},Z^{(i)}).
\end{align*}
Proposition~\ref{prop:robustConsist} below generalizes the consistency result by asserting that $\hat\theta_n\to\theta_0$ where $\theta_0$ is the point on $\{p_{\theta}:\theta\in\Theta\}$ that is closest to the true model $P$, as defined by
\begin{align}\label{eq:KLproj}
\theta_0=\argmax_{\theta\in\Theta}M(\theta) \quad \text{where} \quad M(\theta)\defeq-\sum_{j=1}^k \beta_j\lambda_j D(P(X_{A_j}|X_{B_j})||p_{\theta}(X_{A_j}|X_{B_j})),
\end{align}
or equivalently, $\theta_0$ satisfies
\begin{align} \label{eq:defTheta0}
\E_{P(X)}\E_{P(Z)} \psi_{\theta_0}(X,Z)=0.
\end{align}
When the scl function reverts to the loglikelihood function, $\theta_0$ becomes the KL projection of the true model $P$ onto the parametric family $\{p_{\theta}:\theta\in\Theta\}$.
\begin{prop} \label{prop:robustConsist}
Assuming the conditions in Proposition~\ref{prop:consistency} as well as $\sup_{\theta:\|\theta-\theta_0\|\geq \epsilon} M(\theta)<M(\theta_0)$ for all $\epsilon>0$ we have
$\hat\theta_n^{\text{msl}}\to\theta_0$ as $n\to\infty$ with probability 1.
\end{prop}
The added condition asserts that $\theta_0$ is a well-separated maximum point of $M$: only values close to $\theta_0$ may yield a value of $M$ close to the maximum $M(\theta_0)$. This condition is satisfied by most exponential family models.
\begin{prop} \label{prop:robustVar1}
Assuming the conditions of Proposition~\ref{prop:asympVar} as well as
$\E_{P(X)}\E_{P(Z)} \|\psi_{\theta_0}(X,Z)\|^2 < \infty$, $\E_{P(X)}\E_{P(Z)} \dot\psi_{\theta_0}(X,Z)$ exists and is non-singular, and $|\ddot\Psi_{ij}| = |\partial^2\psi_{\theta}(x)/\partial\theta_i\partial\theta_j|<g(x)$ for all $i,j$ and $\theta$ in a neighborhood of $\theta_0$ for some integrable $g$, we have
\begin{align} \label{eq:asymp}
\sqrt{n}(\hat\theta_n-\theta_0) &= -(\E_{P(X)}\E_{P(Z)} \dot\psi_{\theta_0})^{-1} \frac{1}{\sqrt{n}}\sum_{i=1}^n\psi_{\theta_0}(X^{(i)},Z^{(i)})+o_P(1)\\
&\text{or equivalently} \nonumber \\
\hat\theta_n &= \theta_0 -(\E_{P(X)}\E_{P(Z)} \dot\psi_{\theta_0})^{-1} \frac{1}{n}\sum_{i=1}^n\psi_{\theta_0}(X^{(i)},Z^{(i)}) + o_P\left(\frac{1}{\sqrt{n}}\right). \label{eq:asymp2}
\end{align}
\end{prop}
Above, $f_n=o_P(g_n)$ means that $f_n/g_n$ converges to 0 in probability.
\begin{cor} \label{corr:robustVar} Assuming the conditions specified in Proposition~\ref{prop:robustVar1} we have
\begin{align}
\sqrt{n}(\hat\theta_n-\theta_0) &\tood N(0,(\E_{P(X)}\E_{P(Z)} \dot\psi_{\theta_0})^{-1} (\E_{P(X)}\E_{P(Z)} \psi_{\theta_0}\psi_{\theta_0}^{\top})(\E_{P(X)}\E_{P(Z)} \dot\psi_{\theta_0})^{-1}). \label{eq:normality}
\end{align}
\end{cor}
Equation \eqref{eq:asymp2} means that asymptotically, $\hat\theta_n$ behaves as $\theta_0$ plus the average of iid RVs. As mentioned in \citep{Vaart1998} this fact may be used to obtain a convenient expression for the asymptotic influence function, which measures the effect of adding a new observation to an existing large dataset. Neglecting the remainder in \eqref{eq:asymp} we have
\begin{align}
\mathcal{I}(x,z) &\defeq \hat\theta_n(X^{(1)},\ldots,X^{(n-1)},x,Z^{(1)},\ldots,Z^{(n-1)},z)-\hat\theta_{n-1}(X^{(1)},\ldots,X^{(n-1)},Z^{(1)},\ldots,Z^{(n-1)})
\nonumber \\ &\approx
-(\E_{P(X)}\E_{P(Z)} \dot\psi_{\theta_0})^{-1} \left( \frac{1}{n} \sum_{i=1}^{n-1}\psi_{\theta_0}(X^{(i)},Z^{(i)})+ \frac{1}{n} \psi_{\theta_0}(x,z)-\frac{1}{n-1} \sum_{i=1}^{n-1} \psi_{\theta_0}(X^{(i)},Z^{(i)})\right) \nonumber \\
&= -(\E_{P(X)}\E_{P(Z)} \dot\psi_{\theta_0})^{-1} \frac{1}{n} \psi_{\theta_0}(x,z) +(\E_{P(X)}\E_{P(Z)} \dot\psi_{\theta_0})^{-1} \frac{1}{n(n-1)} \sum_{i=1}^{n-1}\psi_{\theta_0}(X^{(i)},Z^{(i)}) \nonumber \\
&= - \frac{1}{n} (\E_{P(X)}\E_{P(Z)} \dot\psi_{\theta_0})^{-1} \psi_{\theta_0}(x,z) + o_P\left(\frac{1}{n}\right). \label{eq:infFun}
\end{align}
Corollary~\ref{corr:robustVar} and Equation~\eqref{eq:infFun} measure the statistical behavior of the estimator when the true distribution lies outside the model family. In such cases it is possible that a computationally efficient SCL maximizer will also achieve higher statistical accuracy. This ``win-win'' situation of improving in both accuracy and complexity over the MLE is confirmed by our experiments in Section~\ref{sec:experiments}.
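The sandwich covariance in \eqref{eq:normality} can be estimated by replacing the expectations with empirical averages at a plug-in estimate of $\theta_0$. Below is a minimal numpy sketch; for concreteness it uses a hypothetical mean-estimation problem with $\psi_\theta(x)=x-\theta$ (so $\dot\psi_\theta=-I$), not the scl gradient itself:

```python
import numpy as np

def sandwich_cov(psi_vals, psi_dot_vals):
    """Empirical sandwich covariance V^{-1} M V^{-T} from per-sample
    estimating-function values psi (n x r) and their Jacobians (n x r x r)."""
    n = psi_vals.shape[0]
    V = psi_dot_vals.mean(axis=0)       # estimates E[psi_dot_{theta_0}]
    M = psi_vals.T @ psi_vals / n       # estimates E[psi psi^T]
    Vinv = np.linalg.inv(V)
    return Vinv @ M @ Vinv.T            # asymptotic Var of sqrt(n)(theta_hat - theta0)

# toy check: psi(x; theta) = x - theta (mean estimation), psi_dot = -I,
# so the sandwich reduces to the empirical covariance of x
rng = np.random.default_rng(0)
x = rng.normal(size=(10000, 2))
theta_hat = x.mean(axis=0)
psi = x - theta_hat
psi_dot = np.tile(-np.eye(2), (10000, 1, 1))
S = sandwich_cov(psi, psi_dot)          # close to Cov(x) = I here
```

For the scl estimator one would instead plug in per-sample gradients and Hessians of the stochastic composite log-likelihood evaluated at $\hat\theta_n$.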
\section{Stochastic Composite Likelihood for Markov Random Fields}
\label{sec:comp}
Markov random fields (MRFs) are among the more popular statistical models for complex high dimensional data. Approaches based on pseudo likelihood and composite likelihood are naturally well-suited in this case due to the cancellation of the normalization term in the probability ratios defining conditional distributions. More specifically, an MRF with respect to a graph $G=(V,E)$, $V=\{1,\ldots,m\}$
with a clique set $\mathcal{C}$ is given by the following exponential family
model
\begin{align} \label{eq:expModel}
P_{\theta}(x)&= \exp\left(\sum_{C\in\mathcal{C}} \theta_{C}
f_C(x_C)-\log Z(\theta)\right),\nonumber\\ & Z(\theta)=\sum_x \exp\left(\sum_{C\in\mathcal{C}} \theta_C f_C(x_C)\right).
\end{align}
The primary bottlenecks in obtaining the maximum likelihood are the
computations $\log Z(\theta)$ and $\nabla \log Z(\theta)$. Their computational
complexity is exponential in the graph's treewidth and for many cyclic graphs,
such as the Ising model or the Boltzmann machine, it is exponential in $|V|=m$.
In contrast, the conditional distributions that form the composite likelihood of \eqref{eq:expModel} are given by (note the cancellation of $Z(\theta)$)
\begin{align}
P_{\theta}(x_A|x_{B}) &= \label{eq:condProb}
\frac{ \sum\limits_{x_{(A\cup B)^c}'} \exp\left(\sum_{C\in\mathcal{C}} \theta_{C} f_C((x_A,x_B,x_{(A\cup B)^c}')_C) \right)}
{
\sum\limits_{x_{(A\cup B)^c}'} \sum\limits_{x_A''} \exp\left(\sum\limits_{C\in\mathcal{C}} \theta_{C} f_C((x_A'',x_B,x_{(A\cup B)^c}')_C)\right)
}.
\end{align}
Its computation is substantially faster. Specifically, the computation of \eqref{eq:condProb} depends on the size of the sets $A$ and $(A\cup B)^c$ and their intersections with the cliques in $\mathcal{C}$. In general, selecting small $|A_j|$ and $B_j=(A_j)^c$ leads to efficient computation of the composite likelihood and its gradient. For example, in the case of $|A_j|=l, |B_j|=m-l$
with $l\ll m$ we have that $k\leq m!/(l!(m-l)!)$ and the complexity of
computing the $c\ell(\theta)$ function and its gradient may be shown to require time that is at most exponential in $l$ and polynomial in $m$.
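For intuition, here is a brute-force sketch of \eqref{eq:condProb} for a small pairwise Boltzmann machine, taking $B_j=(A_j)^c$ so that the outer sum over $(A\cup B)^c$ is empty and only the sum over $x_A''$ remains. The model size and weights are illustrative, not those used in the experiments:

```python
import itertools
import numpy as np

def cond_prob(theta, x, A, m):
    """P(x_A | x_{A^c}) for a pairwise Boltzmann machine on m binary nodes.
    theta maps pairs (i, j), i < j, to weights. Z(theta) cancels in the
    ratio, so we only enumerate the 2^|A| settings of x_A."""
    def energy(cfg):
        return sum(w * cfg[i] * cfg[j] for (i, j), w in theta.items())
    num = np.exp(energy(x))
    den = 0.0
    for xa in itertools.product([0, 1], repeat=len(A)):  # sum over x_A''
        cfg = list(x)
        for idx, i in enumerate(A):
            cfg[i] = xa[idx]
        den += np.exp(energy(cfg))
    return num / den

theta = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.3}
p = cond_prob(theta, [1, 0, 1], A=[0], m=3)  # P(x_0 = 1 | x_1 = 0, x_2 = 1)
```

The cost is exponential only in $|A|$, in contrast to the $2^m$ enumeration needed for $Z(\theta)$ itself.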
\section{Automatic Selection of $\beta$} \label{sec:beta}
As Proposition \ref{prop:asympVar} indicates, the weight vector $\beta$ and
selection probabilities $\lambda$ play an important role in the statistical
accuracy of the estimator through its asymptotic variance. The computational
complexity, on the other hand, is determined by $\lambda$ independently of
$\beta$. Conceptually, we are interested in resolving the accuracy-complexity
tradeoff jointly for both $\beta,\lambda$ before estimating $\theta$ by
maximizing the scl function. However, since the computational complexity depends only on $\lambda$ we propose the following simplified problem: Select $\lambda$ based on available computational resources, and then given $\lambda$, select the $\beta$ (and $\theta$) that will achieve optimal statistical accuracy.
Selecting $\beta$ that minimizes the asymptotic variance is somewhat ambiguous
as $\Upsilon\Sigma\Upsilon$ in Proposition~\ref{prop:asympVar} is an $r\times
r$ positive semidefinite matrix. A common solution is to consider the
determinant as a one dimensional measure of the size of the variance matrix\footnote{See \citep{Serfling1980} for a heuristic discussion motivating this measure.}, and minimize
\begin{align}
J(\beta) &= \log \det (\Upsilon\Sigma\Upsilon)
=\log\det\Sigma + 2\log \det \Upsilon. \label{eq:betaobjective}
\end{align}
A major complication with selecting $\beta$ based on the optimization of \eqref{eq:betaobjective} is that it depends on the true
parameter value $\theta_0$ which is not known at training time. This may be resolved, however, by noting that \eqref{eq:betaobjective} is composed of covariance matrices under $\theta_0$ which may be estimated using empirical covariances over the training set. To facilitate fast computation of the optimal $\beta$ we also propose to
replace the determinant in \eqref{eq:betaobjective} with the product of the diagonal elements. Such an approximation is motivated by Hadamard's inequality (which states that for symmetric positive semidefinite matrices $\det(M)\le\prod_i M_{ii}$) and by Ger\v{s}gorin's circle theorem (see below). This approximation works well in practice, as we observe in the experiments section. We also note that the procedure described below involves only simple statistics that may be computed on the fly; it neither adds significant computation nor requires significant memory.
More specifically, we denote $K^{(ij)}=\Cov_{\theta_0} (\nabla S_{\theta_0}(A_i,B_i), \nabla
S_{\theta_0}(A_j,B_j))$ with entries $K^{(ij)}_{st}$, and approximate the $\log\det$ terms in \eqref{eq:betaobjective} using
\begin{align}
\log\det \Upsilon&=-\log\det\sum_{j=1}^k \beta_j\lambda_j K^{(jj)}
\approx - \sum_{l=1}^r\log\sum_{j=1}^k \beta_j\lambda_j K^{(jj)}_{ll}\label{eq:approxUpsilon}\\
\log\det\Sigma &= \log\det\Var_{\theta_0}\left(\sum_{j=1}^k\beta_j\lambda_j \nabla S_{\theta_0}(A_j,B_j)\right)
=\log\det\sum_{i=1}^k\sum_{j=1}^k \beta_i\lambda_i \beta_j\lambda_j K^{(ij)}\nonumber\\
&\approx \sum_{l=1}^r\log\sum_{i=1}^k\sum_{j=1}^k \beta_i\lambda_i \beta_j\lambda_j K^{(ij)}_{ll}.\label{eq:approxSigma}
\end{align}
We denote (assuming $A$ is an $n \times n$ matrix) for $i \in \{1, \ldots, n\}$, $R_i(A) = \sum_{j\neq{i}}
\left|A_{ij}\right|$ and let $D(A_{ii}, R_i(A))$ ($D_i$
where unambiguous) be the closed disc centered at $A_{ii}$ with radius
$R_i(A)$. Such a disc is called a Ger\v{s}gorin disc. The result below states that for matrices that are close to diagonal, the eigenvalues are close to the diagonal elements making our approximation accurate.
\begin{theorem}[Ger\v{s}gorin's circle theorem e.g., \citep{Horn1990}]
Every eigenvalue of $A$ lies within at least one of the Ger\v{s}gorin discs
$D(A_{ii}, R_i(A)).$ Furthermore, if the union of $k$ discs is disjoint from the
union of the remaining $n-k$ discs, then the former union contains exactly $k$ and
the latter $n-k$ eigenvalues of $A.$ \label{thm:gersgorin}
\end{theorem}
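A quick numerical illustration of the diagonal approximation in \eqref{eq:approxUpsilon}--\eqref{eq:approxSigma}, on a synthetic near-diagonal positive definite matrix (not one of the covariance matrices $K^{(ij)}$):

```python
import numpy as np

rng = np.random.default_rng(1)
D = np.diag(rng.uniform(1.0, 2.0, size=5))   # dominant diagonal
E = 0.01 * rng.standard_normal((5, 5))
A = D + (E + E.T) / 2                        # symmetric, near-diagonal, PD

exact = np.linalg.slogdet(A)[1]              # log det A
approx = np.log(np.diag(A)).sum()            # sum_l log A_ll

# Gershgorin radii: every eigenvalue lies in some disc D(A_ii, R_i(A))
radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
eig = np.linalg.eigvalsh(A)
```

For this near-diagonal matrix the discs are tiny, so the eigenvalues essentially coincide with the diagonal entries and $\log\det A\approx\sum_l\log A_{ll}$; Hadamard's inequality additionally guarantees $\log\det A\le\sum_l\log A_{ll}$.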
The following algorithm solves for $\theta,\beta$ jointly using alternating optimization. The second optimization, with respect to $\beta$, uses the approximation above and adds little computation. In practice we found that such an approach leads to a selection of $\beta$ that is close to the optimal $\beta$ (see Sec.~\ref{sec:chunking} and Figures~\ref{fig:chunk_boltzchain_heuristic},
\ref{fig:chunk_crf_heuristic} for results).
\begin{algorithm}[H]
\begin{algorithmic}[1]
\REQUIRE $X$, $\beta_0$, and $\gamma$
\STATE $i \leftarrow 1$
\STATE $\beta \leftarrow \beta_0$
\WHILE{$i<\textrm{MAXITS}$}
\STATE $\theta \leftarrow \argmin\, \{-sc\ell(X,\lambda,\beta)\}$\label{lin:argminscl}
\IF{converged}
\RETURN $\theta$
\ELSE
\STATE $\beta\leftarrow\argmin \mathcal{J}(X,\lambda,\theta,\gamma)$\label{lin:argminbeta}
\STATE $i \leftarrow i + 1$
\ENDIF
\ENDWHILE
\RETURN\FALSE
\end{algorithmic}
\caption{Calculate $\hat\theta^{\text{msl}}$}
\end{algorithm}
\section{Experiments} \label{sec:experiments}
We demonstrate the asymptotic properties of $\hat\theta_n^{\text{msl}}$ and explore the complexity-accuracy tradeoff for three different models: Boltzmann machines, linear Boltzmann MRFs, and conditional random fields. In terms of datasets, we consider synthetic data as well as datasets from the sentiment prediction and text chunking domains.
\subsection{Toy Example: Boltzmann Machines}\label{sec:boltzmann}
We illustrate the improvement in asymptotic variance of the maximum scl estimator associated
with adding higher order likelihood components with increasing probabilities in
the context of the Boltzmann machine
\begin{align} \label{eq:BoltzmannMachine}
p_{\theta}(x)=\exp\left(\sum_{i<j}\theta_{ij}x_i
x_j-\log\psi(\theta)\right),\quad x \in\{0,1\}^m.
\end{align}
To be able to accurately compute the asymptotic variance we use $m=5$ with $\theta$ being a ${5 \choose 2}$
dimensional vector with half the components $+1$ and half $-1$.
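Since $m=5$ gives only $2^5$ configurations, the normalizer $\psi(\theta)$ in \eqref{eq:BoltzmannMachine} can be computed exactly by enumeration. A minimal numpy sketch; the lexicographic ordering of the pairs $i<j$ in the parameter vector is an assumption of the sketch:

```python
import itertools
import numpy as np

def log_partition(theta, m):
    """log psi(theta) for p(x) prop. to exp(sum_{i<j} theta_ij x_i x_j),
    by brute force over all 2^m binary configurations (feasible for small m)."""
    pairs = list(itertools.combinations(range(m), 2))  # lexicographic i < j
    logs = []
    for x in itertools.product([0, 1], repeat=m):
        logs.append(sum(theta[k] * x[i] * x[j] for k, (i, j) in enumerate(pairs)))
    return np.logaddexp.reduce(logs)  # numerically stable log-sum-exp

m = 5
theta = np.array([1.0] * 5 + [-1.0] * 5)  # half the components +1, half -1
logZ = log_partition(theta, m)
```

The same enumeration yields exact expectations under $p_\theta$, which is what makes the exact asymptotic variance computable in this toy example.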
Since the asymptotic variance of $\hat\theta_n^{\text{msl}}$ is a matrix we summarize
its size using either its trace or determinant.
Figure~\ref{fig:boltzmann} displays the asymptotic variance, relative to the
minimal variance of the mle, for the cases of full likelihood (FL), pseudo
likelihood ($|A_j|=1$) $\text{PL}_1$, stochastic combination of pseudo
likelihood and 2nd order pseudo likelihood ($|A_j|=2$) components $\alpha
\text{PL}_2 + (1-\alpha)\text{PL}_1$, stochastic combination of 2nd order
pseudo likelihood and 3rd order pseudo likelihood ($|A_j|=3$) components
$\alpha \text{PL}_3 + (1-\alpha)\text{PL}_2$, and stochastic combination of 3rd
order pseudo likelihood and 4th order pseudo likelihood ($|A_j|=4$) components
$\alpha \text{PL}_4 + (1-\alpha)\text{PL}_3$.
The graph demonstrates the computation-accuracy tradeoff as follows: (a) pseudo
likelihood is the fastest but also the least accurate, (b) full likelihood is
the slowest but the most accurate, (c) adding higher order components reduces
the asymptotic variance but also requires more computation, (d) the variance
reduces with the increase in the selection probability $\alpha$ of the higher
order component, and (e) adding 4th order components brings the variance very
close to the lower limit, with each successive improvement becoming smaller
according to a law of diminishing returns.
Figure~\ref{fig:compAccPlot} displays the asymptotic accuracy and complexity for different SCL policies for $m=9$. We see how taking different linear combinations of pseudo likelihood orders spans a continuous spectrum of accuracy-complexity resolutions. The lower part of the diagram is the boundary of the achievable region (the optimal but unachievable place is the bottom left corner). SCL policies that lie to the right and top of that boundary may be improved by selecting a policy below and to the left of it.
\begin{figure}
\begin{center}
{
\psfrag{x1}{$\alpha$}
\psfrag{x2}{$\alpha$}
\psfrag{y1}{\scriptsize $\text{tr}(\Var(\hat\theta^{\text{msl}}))\,\,/\,\,\text{tr}(\Var(\hat\theta^{\text{ml}}))$}
\psfrag{y2}{\scriptsize $\det(\Var(\hat\theta^{\text{msl}}))\,\,/\,\,\det(\Var(\hat\theta^{\text{ml}}))$}
\includegraphics[scale=0.6]{figure0001}
}
\end{center}
\vspace{-.12in}
\caption{Asymptotic variance matrix, as measured by trace (left) and
determinant (right), as a function of the selection probabilities for different stochastic versions of the scl function.}
\label{fig:boltzmann}
\end{figure}
\begin{figure}\centering
\includegraphics[scale=0.4]{figure0002}
\caption{Computation-accuracy diagram for three SCL families: $\lambda_1\beta_1\text{PL1}+\lambda_2(1-\beta_1)\text{PL2}$,
$\lambda_1\beta_1 \text{PL1}+\lambda_2(1-\beta_1)\text{PL3}$,
$\lambda_1\beta_1 \text{PL2}+\lambda_2(1-\beta_1)\text{PL3}$ (for multiple values of $\lambda_1,\lambda_2,\beta_1$) for the Boltzmann machine with 9 binary nodes. The pure policies PL1 and PL2 are indicated by black circles and the computational complexity of the full likelihood by a dashed line (the corresponding normalized asymptotic variance is 1). As the number of nodes increases the computational cost increases dramatically, in particular for the full likelihood and to a lesser extent for the pseudo likelihood policies.
}
\label{fig:compAccPlot}
\end{figure}
\subsection{Local Sentiment Prediction}\label{sec:localsent}
Our first real world dataset experiment involves local sentiment prediction using a conditional MRF model. The dataset consisted of 249 movie review documents having an average of 30.5 sentences each with an average of 12.3 words from a 12633 word vocabulary. Each sentence was manually labeled with one of five sentiment designations: very negative, negative, objective, positive, or very positive. As described in \citep{Mao2007b} (where more information may be found) we considered the task of predicting the local sentiment flow within these documents using regularized conditional random fields (CRFs) (see Figure~\ref{fig:crfgm} for a graphical diagram of the model in the case of four sentences).
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.5,auto,swap]
\node[vertexobs] (y0) at (0,0){\small$y_0$};
\foreach \pre/\cur/\A/\B in {0/1//, 1/2//, 2/3/A_{ij}/B_{jk}, 3/4//} {
\node[vertex] (y\cur) at (\cur,0) {$y_\cur$};
\path[edge] (y\pre) -- node[above] {$\A$} (y\cur);
\node[vertexobs] (x\cur) at (\cur,-1) {$x_\cur$};
\path[edge] (y\cur) -- node[right] {$\B$} (x\cur);
}
\end{tikzpicture}
\caption{ Graphical representation of a four token conditional random
field (CRF). $A$, $B$ are positive weight matrices and represent
state-to-state transitions and state-to-observation outputs. Shading indicates the variable is conditioned upon while no shading indicates the variable is
generated by the model.}
\label{fig:crfgm}
\end{figure}
Figure \ref{fig:localsent_crf_cont} shows the contour plots of train and test loglikelihood as a function of the scl parameters: weight $\beta$ and selection probability
$\lambda$. The likelihood components were mixtures of full and pseudo
($|A_j|=1$) likelihood (rows 1,3) and pseudo and 2nd order pseudo $(|A_j|=2$)
likelihood (rows 2,4). $A_j$ identifies a set of labels corresponding to
adjacent sentences over which the probabilistic query is evaluated. Results
were averaged over 100 cross validation iterations with 50\% train-test split.
We used the BFGS quasi-Newton method for maximizing the regularized scl functions. The figure demonstrates how the train loglikelihood increases with
the weight and selection probability of full likelihood in rows 1,3 and of 2nd order pseudo likelihood in rows 2,4. This increase in train loglikelihood is also correlated with an increase in computational complexity, as higher order likelihood components require more computation. Note, however, that the test set behavior in the third and fourth rows shows an improvement in prediction accuracy associated with decreasing the influence of full likelihood in favor of pseudo likelihood. The fact that this happens for weak regularization ($\sigma^2=10$) indicates that lower order pseudo likelihood has a regularization effect, which improves prediction accuracy when the model is not regularized enough. We have encountered this phenomenon in other experiments as well and discuss it further in the following subsections.
\begin{figure}
\centering
\vspace{-.2in}
\begin{tabular}{cc}
\vspace{-.02in}
\includegraphics[width=.40\textwidth]{figure0003} & \vspace{-.055in}
\includegraphics[width=.409\textwidth]{figure0004}\\ \vspace{-.02in}
\includegraphics[width=.40\textwidth]{figure0005} & \vspace{-.055in}
\includegraphics[width=.409\textwidth]{figure0006}\\ \vspace{-.02in}
\includegraphics[width=.40\textwidth]{figure0007} & \vspace{-.055in}
\includegraphics[width=.409\textwidth]{figure0008}\\ \vspace{-.02in}
\includegraphics[width=.40\textwidth]{figure0009} & \vspace{-.055in}
\includegraphics[width=.409\textwidth]{figure0010} \vspace{-.02in}
\end{tabular}
\caption{Train (left) and test (right) loglikelihood contours for maximum scl
estimators for the CRF model. $L_2$ regularization parameters are $\sigma^2=1$
(rows 1,2) and $\sigma^2=10$ (rows 3,4). Rows 1,3 are stochastic mixtures
of full (FL) and pseudo (PL1) loglikelihood components while rows 2,4
are PL1 and 2nd order pseudo likelihood (PL2).}
\label{fig:localsent_crf_cont}
\end{figure}
Figure~\ref{fig:tradeoff} displays the complexity and negative loglikelihoods
(left:train, right:test) of different scl estimators, sweeping through
$\lambda$ and $\beta$, as points in a two dimensional space. The shaded area
near the origin is unachievable as no scl estimator can achieve high accuracy
and low computation at the same time. The optimal location in this 2D plane is
the curved boundary of the achievable region with the exact position on that
boundary depending on the required solution of the computation-accuracy
tradeoff.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=.5\textwidth]{figure0011} & \hspace{-0.45in}
\includegraphics[width=.5\textwidth]{figure0012}
\end{tabular}
\vspace{-.05in}
\caption{ Scatter plot representing complexity and negative loglikelihood
(left:train, right:test) of scl functions for CRFs with regularization
parameter $\sigma^2=1/2$. The points represent different stochastic
combinations of full and pseudo likelihood components. The shaded region
represents impossible accuracy/complexity demands.}
\label{fig:tradeoff}
\end{figure}
\subsection{Text Chunking}\label{sec:chunking}
This experiment consists of using sequential MRFs to divide sentences into
``text chunks,'' i.e., syntactically correlated sub-sequences, such as noun and
verb phrases. Chunking is a crucial step towards full parsing. For
example\footnote{Taken from the CoNLL-2000 shared task site,
\url{http://www.cnts.ua.ac.be/conll2000/chunking/}.}, the sentence:
\begin{quote}
\small
He reckons the current account deficit will narrow to only \# 1.8 billion in September.
\end{quote}
could be divided as:
\begin{quote}
\small
[NP \textcolor{red}{He} ]
[VP \textcolor{green}{reckons} ]
[NP \textcolor{red}{the current account deficit} ]
[VP \textcolor{green}{will narrow} ]
[PP \textcolor{blue}{to} ]
[NP \textcolor{red}{only \# 1.8 billion} ]
[PP \textcolor{blue}{in} ]
[NP \textcolor{red}{September} ].
\end{quote}
where NP, VP, and PP indicate noun phrase, verb phrase, and prepositional phrase.
\begin{figure}
\centering
\includegraphics[trim=0.0mm 0.0mm 0.0mm 0.0mm,clip,width=.40\textwidth]{figure0013}
\vspace{-1.5em}
\caption{ Label counts in CoNLL-2000 dataset.}
\label{fig:conll2000labels}
\end{figure}
We used the publicly available CoNLL-2000 shared task dataset. It consists of
labeled partitions of a subset of the Wall Street Journal (WSJ) corpus. Our
training sets consisted of sampling 100 sentences without replacement from the
CoNLL-2000 training set (211,727 tokens from WSJ sections 15-18). The test
set was the same as the CoNLL-2000 testing
partition (47,377 tokens from WSJ section 20). Each of the possible 21,589
tokens, i.e., words, numbers, punctuation, etc., are tagged by one of 11 chunk
types and an O label indicating the token is not part of any chunk. Chunk
labels are prepended with flags indicating that the token begins (B-) or is
inside (I-) the phrase. Figure~\ref{fig:conll2000labels} lists all labels and
respective frequencies. In addition to labeled tokens, the dataset contains a
part-of-speech (POS) column. These tags were automatically generated by the Brill tagger and must be incorporated into any model/feature set accordingly.
In the following, we explore this task using various scl selection policies on two related, but fundamentally different sequential MRFs: Boltzmann chain MRFs and CRFs.
\subsubsection{Boltzmann Chain MRF}\label{sec:chunk_boltzchain}
Boltzmann chains are generative MRFs closely related to hidden Markov models (HMMs). See \citep{MacKay1996} for a discussion of the relationship between Boltzmann chain MRFs and HMMs. We consider SCL components of the form $\Pr(X_2,Y_2|Y_1,Y_3)$, $\Pr(X_2,X_3,Y_2,Y_3|Y_1,Y_4)$ which we refer to as first and second order pseudo likelihood (with higher order components generalizing in a straightforward manner).
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.5,auto,swap]
\node[vertexobs] (y0) at (0,0){\small$y_0$};
\foreach \pre/\cur/\A/\B in {0/1//, 1/2//, 2/3/A_{ij}/B_{jk}, 3/4//} {
\node[vertex] (y\cur) at (\cur,0) {$y_\cur$};
\path[edge] (y\pre) -- node[above] {\small $\A$} (y\cur);
\node[vertex] (x\cur) at (\cur,-1) {$x_\cur$};
\path[edge] (y\cur) -- node[right] {\small $\B$} (x\cur);
}
\end{tikzpicture}
\caption{ Graphical representation of a four token Boltzmann chain. $A$, $B$
are positive weight matrices and represent preference in particular
state-to-state transitions and state-to-feature emissions. Only the start state
is conditioned upon while all others are generative.}
\label{fig:boltzchaingm}
\end{figure}
The nature of the Boltzmann chain constrains our feature set to only encode the
particular token present at each position, or time index. In doing so we avoid
having to model additional dependencies across time steps and dramatically
reduce computational complexity. Although scl is precisely motivated by high
treewidth graphs, we wish to include the full likelihood for demonstrative
purposes--in practice, this is often not possible. Although POS tags are
available we do not include them in these features since the dependence they
share on neighboring tokens and other POS tags is unclear. For these reasons
our time-sliced feature vector, $x_i$, has a single nonzero entry (a one) and
cardinality matching the size of the vocabulary (21,589 tokens).
As is common practice, we curtail overfitting through a $L_2$ regularizer,
$\exp\{-\frac{1}{2\sigma^{2}} ||\theta||^2_2\}$, which is strong when
$\sigma^2$ is small and weak when $\sigma^2$ is large. We consider $\sigma^2$
a hyper-parameter and select it through cross-validation, unless noted
otherwise. More often though, we show results for several representative
$\sigma^2$ to demonstrate the roles of $\lambda$ and $\beta$ in
$\hat{\theta}_n^{msl}$.
Figures \ref{fig:chunk_boltzchain_pl1fl_cont} and
\ref{fig:chunk_boltzchain_pl1fl_beta} depict train and test negative
log-likelihood, i.e., perplexity, for the scl estimator
$\hat{\theta}_{100}^{msl}$ with a pseudo/full likelihood selection policy
(PL1/FL). As is our convention, weight $\beta$ and selection probability
$\lambda$ correspond to the higher order component, in this case full
likelihood. The lower order pseudo likelihood component is always selected and
has weight $1-\beta$. As expected the test set perplexity dominates the
train-set perplexity. As was the situation in Sec.~\ref{sec:localsent}, we
note that the lower order component serves to regularize the full likelihood,
as evidenced by the abnormally large $\sigma^2$.
We next demonstrate the effect of using a 1st order/2nd order pseudo likelihood
selection policy (PL1/PL2). Recall, our notion of pseudo likelihood never
entails conditioning on $x$, although in principle it could. Figures
\ref{fig:chunk_boltzchain_pl1pl2_cont} and
\ref{fig:chunk_boltzchain_pl1pl2_beta} show how the policy responds to varying
both $\lambda$ and $\beta$. Figure \ref{fig:chunk_boltzchain_complexity}
depicts the empirical tradeoff between accuracy and complexity.
Figure~\ref{fig:chunk_boltzchain_heuristic} highlights the effectiveness of the
$\beta$ heuristic. See captions for additional comments.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[trim=0mm 0mm 0mm 0mm,clip,totalheight=.25\textheight]{figure0014} & \includegraphics[trim=0mm 0mm 0mm 0mm,clip,totalheight=.25\textheight]{figure0015} \end{tabular}
\caption{
Accuracy and complexity tradeoff for the Boltzmann chain MRF with PL1/FL
(left) and PL1/PL2 (right) selection policies. Each point represents the negative loglikelihood (perplexity) and the
number of flops required to evaluate the composite likelihood and its
gradient under a particular instantiation of the selection policy. The
shaded region represents empirically unobtainable combinations of
computational complexity and accuracy.
}\label{fig:chunk_boltzchain_complexity}
\end{figure}
\begin{figure}[htb!]
\vspace{-3.00em}
\hspace{-0.75cm}
\begin{tabular}{cccc}
\includegraphics[trim= 0.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0016} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0017} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0018} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0019} \\
\includegraphics[trim= 0.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0020} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0021} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0022} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0023}
\end{tabular}
\vspace{-1.5em}
\caption{
Train set (top) and test set (bottom) negative log-likelihood (perplexity)
for the Boltzmann chain MRF with pseudo/full likelihood selection policy
(PL1/FL). The x-axis, $\beta$, corresponds to relative weight placed on FL
and the y-axis, $\lambda$, corresponds to the probability of selecting
FL. PL1 is selected with probability 1 and weight $1-\beta$. Contours and
labels are fixed across columns. Results averaged over several
cross-validation folds, i.e., resampling both the train set and the PL1/FL
policy. Columns from left to right correspond to weaker regularization,
$\sigma^2=\{500, 1000, 2500, 5000\}$. The best achievable test set
perplexity is about 190.
\vspace{1em}
\newline
Unsurprisingly the test set perplexity dominates the train set perplexity
at each $\sigma^2$ (column). For a desired level of accuracy (contour)
there exists a computationally favorable regularizer. Hence
$\hat{\theta}^{msl}_n$ acts as both a regularizer and mechanism for
controlling accuracy and complexity.
}\label{fig:chunk_boltzchain_pl1fl_cont}
\hspace{-0.75cm}
\begin{tabular}{cccc}
\includegraphics[trim= 0.0mm 10.2mm 0mm 8.2mm,clip,totalheight=.160\textheight]{figure0024} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 10.2mm 0mm 8.2mm,clip,totalheight=.160\textheight]{figure0025} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 10.2mm 0mm 8.2mm,clip,totalheight=.160\textheight]{figure0026} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 10.2mm 0mm 8.2mm,clip,totalheight=.160\textheight]{figure0027} \\
\includegraphics[trim= 0.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0028} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0029} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0030} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0031}
\end{tabular}
\vspace{-1.5em}
\caption{
Train set and test set perplexities for the Boltzmann chain MRF with PL1/FL
selection policy (see above layout description). The x-axis is again
$\beta$ and the y-axis perplexity. Lighter shading indicates FL is
selected with increasing frequency. Note that as the regularizer is
weakened, the range in perplexity spanned by $\lambda$ increases and the
lower bound decreases. This indicates that the approximating power of
$\hat{\theta}^{msl}_n$ increases when unencumbered by the regularizer and
highlights its secondary role as a regularizer.
}\label{fig:chunk_boltzchain_pl1fl_beta}
\end{figure}
\begin{figure}[ht!]
\vspace{-3.0em}
\hspace{-0.75cm}
\begin{tabular}{cccc}
\includegraphics[trim= 0.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0032} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0033} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0034} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0035} \\
\includegraphics[trim= 0.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0036} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0037} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0038} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0039}
\end{tabular}
\vspace{-1.5em}
\caption{
Train set (top) and test set (bottom) perplexity for the Boltzmann chain
MRF with 1st/2nd order pseudo likelihood selection policy (PL1/PL2).
The x-axis corresponds to PL2 weight and the y-axis the probability of its
selection. PL1 is selected with probability 1 and weight $1-\beta$.
Columns from left to right correspond to $\sigma^2=\{5000, 10000, 12500,
15000\}$. See Figure~\ref{fig:chunk_boltzchain_pl1fl_cont} for more
details. The best achievable test set perplexity is about 189.5.
\vspace{1em}
\newline
In comparing these results to PL1/FL, we note that the test set contours
exhibit lower perplexity over larger areas. In particular, perplexity is
lower at smaller $\lambda$ values, meaning a computational saving over
PL1/FL at a given level of accuracy.
}\label{fig:chunk_boltzchain_pl1pl2_cont}
\vspace{0.5em}
\hspace{-0.75cm}
\begin{tabular}{cccc}
\includegraphics[trim= 0.0mm 10.2mm 0mm 8.2mm,clip,totalheight=.160\textheight]{figure0040} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 10.2mm 0mm 8.2mm,clip,totalheight=.160\textheight]{figure0041} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 10.2mm 0mm 8.2mm,clip,totalheight=.160\textheight]{figure0042} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 10.2mm 0mm 8.2mm,clip,totalheight=.160\textheight]{figure0043} \\
\includegraphics[trim= 0.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0044} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0045} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0046} & \hspace{-5.5mm}
\includegraphics[trim=12.5mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0047}
\end{tabular}
\vspace{-1.5em}
\caption{
Train (top) and test (bottom) perplexities for the Boltzmann chain MRF with
PL1/PL2 selection policy (x-axis:PL2 weight, y-axis:perplexity; see above
and previous).
\vspace{1em}
\newline
PL1/PL2 outperforms PL1/FL test perplexity at $\sigma^2=5000$ and continues
to show improvement with weaker regularizers. This is perhaps surprising
since the previous policy includes FL as a special case, i.e.,
$(\lambda,\beta)=(1,1)$. We speculate that the regularizer's indirect
connection to the training samples precludes it from preventing certain
types of overfitting. See Sec.~\ref{sec:winwin} for more discussion.
}\label{fig:chunk_boltzchain_pl1pl2_beta}
\vspace{-0.5em}
\end{figure}
\begin{figure}[ht!]
\hspace{-0.5cm}
\begin{tabular}{ccc}
\includegraphics[trim=0mm 5.5mm 0mm 0.2mm,clip,totalheight=.20\textheight]{figure0048} & \hspace{-3.5mm}
\includegraphics[trim=5.5mm 5.5mm 0mm 6.2mm,clip,totalheight=.20\textheight]{figure0049} & \hspace{-5.5mm}
\includegraphics[trim=5.5mm 5.5mm 0mm 6.2mm,clip,totalheight=.20\textheight]{figure0050} \\
\includegraphics[trim=0mm 0.0mm 0mm 0.2mm,clip,totalheight=.21\textheight]{figure0051} & \hspace{-3.5mm}
\includegraphics[trim=7.6mm 0.0mm 0mm 6.2mm,clip,totalheight=.211\textheight]{figure0052} & \hspace{-5.5mm}
\includegraphics[trim=5.8mm 0.0mm 0mm 6.2mm,clip,totalheight=.211\textheight]{figure0053}
\end{tabular}
\vspace{-1.5em}
\caption{
Demonstration of the effectiveness of the $\beta$ heuristic, i.e., using
$\hat{\theta}^{msl}$ as a plug-in estimate for $\theta_0$ to periodically
re-estimate $\beta$ during gradient descent. Results are for the Boltzmann
chain with PL1/FL (top) and PL1/PL2 (bottom) selection policies. The
x-axis is the value at the heuristically found $\beta$ and the y-axis the
value at the optimal $\beta.$ The optimal $\beta$ was found by evaluating
over a $\beta$ grid and choosing the value with the smallest train set
perplexity. The first column depicts the best performing $\beta$ against
the heuristic $\beta$. The second and third columns depict the training
and testing perplexities (resp.) at the best performing $\beta$ and
heuristically found $\beta$. For all three columns, we assess the
effectiveness of the heuristic by its nearness to the diagonal (dashed
line).
\vspace{1em}
\newline
For the PL1/PL2 policy the heuristic closely matched the optimal (all
bottom row points are on diagonal). The heuristic outperformed the
optimal on the test set and had slightly higher perplexity on the training
set. It is a positive result, albeit somewhat surprising, and is
attributable to either coarseness in the grid or improved generalization by
accounting for variability in $\hat{\theta}^{msl}$.
}\label{fig:chunk_boltzchain_heuristic}
\end{figure}
\subsubsection{CRFs}\label{sec:chunk_crf}
Conditional random fields are the discriminative counterpart of Boltzmann
chains (cf.\ Figures~\ref{fig:crfgm} and \ref{fig:boltzchaingm}). Since $x$ is
not jointly modeled with $y$, we are free to include features with
non-independence across time steps without significantly increasing the
computational complexity. Here our notion of pseudo likelihood is more
traditional, e.g., $\Pr(Y_2|Y_1,Y_3,X_2)$ and $\Pr(Y_2,Y_3|Y_1,Y_4,X_2,X_3)$
are valid 1st and 2nd order pseudo likelihood components.
We employ a subset of the features outlined in \cite{Sha2003} which proved
competitive for the CoNLL-2000 shared task. Our feature vector was based on
seven feature categories, resulting in a total of 273,571 binary features
(i.e., $\sum_i f_i(x_t)=7$). The feature categories consisted of word
unigrams, POS unigrams, word bigrams (forward and backward), and POS bigrams
(forward and backward) as well as a stopword indicator (and its complement) as
defined by \cite{lewis04rcv}. The set of possible feature/label pairs is much
larger than our set--we use only those features supported by the CoNLL-2000
dataset, i.e., those which occur at least once. Thus we modeled 297,041
feature/label pairs and 847 transitions for a total of 297,888
parameters. As before, we use the $L_2$ regularizer,
$\exp\{-\frac{1}{2\sigma^{2}} ||\theta||^2_2\}$, which is stronger when
$\sigma^2$ is small and weaker when $\sigma^2$ is large.
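To make the direction of this dependence concrete, the log of the regularizer can be sketched in a couple of lines (an illustration only; the function name is ours):

```python
def l2_log_prior(theta, sigma2):
    """Log of the regularizer exp{-||theta||_2^2 / (2 sigma^2)}:
    a strong penalty for small sigma^2, a weak one for large sigma^2."""
    return -sum(t * t for t in theta) / (2.0 * sigma2)

theta = [1.0] * 10
# the penalty at sigma^2 = 1 is far heavier than at sigma^2 = 5000
assert l2_log_prior(theta, 1.0) < l2_log_prior(theta, 5000.0) <= 0.0
```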
We demonstrate learning on two selection policies: pseudo/full likelihood
(Figures \ref{fig:chunk_crf_pl1fl_cont} and \ref{fig:chunk_crf_pl1fl_beta}) and
1st/2nd order pseudo likelihood (Figures \ref{fig:chunk_crf_pl1pl2_cont} and
\ref{fig:chunk_crf_pl1pl2_beta}). In both selection policies we note a
significant difference from the Boltzmann chain: $\beta$ has less impact on
both train and test perplexity. Intuitively, this seems reasonable as the
component likelihood range and variance are constrained by the conditional
nature of CRFs. Figure \ref{fig:chunk_crf_complexity} demonstrates the
empirical accuracy/complexity tradeoff and Figure \ref{fig:chunk_crf_heuristic}
depicts the effectiveness of the $\beta$ heuristic. See captions for further
comments.
\begin{figure}
\centering
\begin{tabular}{ccc}
\includegraphics[trim=0mm 0mm 0mm 0mm,clip,totalheight=.25\textheight]{figure0054} & \includegraphics[trim=0mm 0mm 0mm 0mm,clip,totalheight=.25\textheight]{figure0055} \end{tabular}
\caption{
Accuracy and complexity tradeoff for the CRF with PL1/FL (left) and PL1/PL2
(right) selection policies. Each point represents the negative loglikelihood (perplexity) and the number of flops
required to evaluate the composite likelihood and its gradient under a particular instance of the selection policy. The shaded region represents empirically unobtainable combinations of computational complexity and accuracy.
}\label{fig:chunk_crf_complexity}
\end{figure}
\begin{figure}[ht!]
\vspace{-3.0em}
\hspace{-0.75cm}
\begin{tabular}{cccc}
\includegraphics[trim= 0.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0056} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0057} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0058} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0059} \\
\includegraphics[trim= 0.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0060} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0061} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0062} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0063}
\end{tabular}
\vspace{-1.5em}
\caption{
Train set (top) and test set (bottom) perplexity for the CRF with
pseudo/full likelihood selection policy (PL1/FL). The x-axis corresponds
to FL weight and the y-axis the probability of its selection. PL1 is
selected with probability 1 and weight $1-\beta$. Columns from left to
right correspond to $\sigma^2=\{5000, 10000, 12500, 15000\}$. See
Figure~\ref{fig:chunk_boltzchain_pl1fl_cont} for more details. The best
achievable test set perplexity is about 5.5.
\vspace{1em}
\newline
Although we cannot directly compare the CRF to its generative counterpart, we
observe some strikingly different trends. It is immediately clear that the
CRF is less sensitive to the relative weighting of components than is the
Boltzmann chain. This is partially attributable to a smaller range of the
objective--the CRF is already conditional hence the per-component
perplexity range is reduced.}\label{fig:chunk_crf_pl1fl_cont}
\vspace{0.5em}
\hspace{-0.75cm}
\begin{tabular}{cccc}
\includegraphics[trim= 0.0mm 9.1mm 0mm 8.2mm,clip,totalheight=.162\textheight]{figure0064} & \hspace{-5.5mm}
\includegraphics[trim=11.0mm 9.1mm 0mm 8.2mm,clip,totalheight=.162\textheight]{figure0065} & \hspace{-5.5mm}
\includegraphics[trim=11.0mm 9.1mm 0mm 8.2mm,clip,totalheight=.162\textheight]{figure0066} & \hspace{-5.5mm}
\includegraphics[trim=11.0mm 9.1mm 0mm 8.2mm,clip,totalheight=.162\textheight]{figure0067} \\
\includegraphics[trim= 0.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0068} & \hspace{-5.5mm}
\includegraphics[trim=11.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0069} & \hspace{-5.5mm}
\includegraphics[trim=11.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0070} & \hspace{-5.5mm}
\includegraphics[trim=11.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0071}
\end{tabular}
\vspace{-1.5em}
\caption{
Train (top) and test (bottom) perplexities for a CRF with PL1/FL selection
policy (x-axis:FL weight, y-axis:perplexity; see above and
Fig.~\ref{fig:chunk_boltzchain_pl1fl_beta}).
\vspace{1em}
\newline
Perhaps more evidently here than above, we note that the significance of a
particular $\beta$ is less than that of the Boltzmann chain. However, for
large enough $\sigma^2$, the optimal $\beta\ne 1$. This indicates the dual
role of PL1 as a regularizer. Moreover, the left panel calls attention to
the interplay between $\beta$, $\lambda$, and $\sigma^2$. See
Sec.~\ref{sec:interplay} for more discussion.
}
\label{fig:chunk_crf_pl1fl_beta}
\vspace{-1.5em}
\end{figure}
\begin{figure}[ht!]
\hspace{-0.75cm}
\begin{tabular}{cccc}
\includegraphics[trim= 0.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0072} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0073} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0074} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 10.2mm 0mm 9.5mm,clip,totalheight=.157\textheight]{figure0075} \\
\includegraphics[trim= 0.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0076} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0077} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0078} & \hspace{-5.5mm}
\includegraphics[trim=12.0mm 0mm 0mm 9.5mm,clip,totalheight=.175\textheight]{figure0079}
\end{tabular}
\vspace{-1.5em}
\caption{
Train set (top) and test set (bottom) perplexity for a CRF with 1st/2nd
order pseudo likelihood selection policy (PL1/PL2). The x-axis, $\beta$,
represents the relative weight placed on PL2 and the y-axis, $\lambda$, the
probability of selecting PL2. PL1 is selected with probability 1. Columns
from left to right correspond to weaker regularization, $\sigma^2=\{10000,
20000, 30000, 40000\}$. See Figure~\ref{fig:chunk_crf_pl1fl_cont} for more
details.
}\label{fig:chunk_crf_pl1pl2_cont}
\vspace{0.5em}
\hspace{-0.75cm}
\begin{tabular}{cccc}
\includegraphics[trim= 0.0mm 9.1mm 0mm 8.2mm,clip,totalheight=.162\textheight]{figure0080} & \hspace{-5.5mm}
\includegraphics[trim= 11.0mm 9.1mm 0mm 8.2mm,clip,totalheight=.162\textheight]{figure0081} & \hspace{-5.5mm}
\includegraphics[trim= 11.0mm 9.1mm 0mm 8.2mm,clip,totalheight=.162\textheight]{figure0082} & \hspace{-5.5mm}
\includegraphics[trim= 11.0mm 9.1mm 0mm 8.2mm,clip,totalheight=.162\textheight]{figure0083} \\
\includegraphics[trim= 0.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0084} & \hspace{-5.5mm}
\includegraphics[trim= 11.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0085} & \hspace{-5.5mm}
\includegraphics[trim= 11.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0086} & \hspace{-5.5mm}
\includegraphics[trim= 11.0mm 0mm 0mm 8.2mm,clip,totalheight=.178\textheight]{figure0087}
\end{tabular}
\vspace{-1.5em}
\caption{
Train (top) and test (bottom) perplexities for a CRF with PL1/PL2 selection
policy (x-axis:PL2 weight, y-axis:perplexity; see above and
Fig.~\ref{fig:chunk_boltzchain_pl1fl_beta}).
\vspace{1em}
\newline
Although increasing $\lambda$ only brings minor improvement to both the
training and testing perplexities, it is worth noting that the test
perplexity meets that of the PL1/FL. Still though, the overall lack of
resolution here suggests that smaller values of $\lambda$ would better span
a range of perplexities and at reduced computational cost.
}\label{fig:chunk_crf_pl1pl2_beta}
\end{figure}
\begin{figure}[ht!]
\hspace{-0.5cm}
\begin{tabular}{ccc}
\includegraphics[trim=0mm 5.5mm 0mm 0.2mm,clip,totalheight=.20\textheight]{figure0088} & \hspace{-5.0mm}
\includegraphics[trim=5mm 5.5mm 0mm 6.2mm,clip,totalheight=.20\textheight]{figure0089} & \hspace{-5.5mm}
\includegraphics[trim=5mm 5.5mm 0mm 6.2mm,clip,totalheight=.20\textheight]{figure0090} \\
\includegraphics[trim=0mm 0.0mm 0mm 0.2mm,clip,totalheight=.21\textheight]{figure0091} & \hspace{-5.0mm}
\includegraphics[trim=5mm 0.0mm 0mm 6.2mm,clip,totalheight=.21\textheight]{figure0092} & \hspace{-5.5mm}
\includegraphics[trim=5mm 0.0mm 0mm 6.2mm,clip,totalheight=.21\textheight]{figure0093}
\end{tabular}
\vspace{-1.5em}
\caption{
Demonstration of the effectiveness of the $\beta$ heuristic. Results are
for the CRF with PL1/FL (top) and PL1/PL2 (bottom) selection policies. The
x-axis is the value at the heuristically found $\beta$ and the y-axis the
value at the optimal $\beta.$ The first column depicts the best performing
$\beta$ against the heuristic $\beta$. The second and third columns depict
the training and testing perplexities (resp.) at the best performing
$\beta$ and heuristically found $\beta$. For all three columns, we assess
the effectiveness of the heuristic by its nearness to the diagonal (dashed
line). See Fig.~\ref{fig:chunk_boltzchain_heuristic} for more details.
\vspace{1em}
\newline
The optimal and heuristic $\beta$ match train and test perplexities for
both policies. The actual $\beta$ values, however, do not match as
well as for the Boltzmann chain. Given the flatness of the
$\beta$ grid (cf.\ Figs.~\ref{fig:chunk_crf_pl1fl_beta} and
\ref{fig:chunk_crf_pl1pl2_beta}), this result is unsurprising and should
not be taken as an indication of poor heuristic performance.
}\label{fig:chunk_crf_heuristic}
\end{figure}
\subsection{Complexity/Regularization Win-Win}\label{sec:winwin}
It is interesting to contrast the test loglikelihood behavior in the case of
mild and stronger $L_2$ regularization. In the case of weaker or no
regularization, the test loglikelihood shows different behavior than the train
loglikelihood. Adding a lower order component such as pseudo likelihood acts as
a regularizer that prevents overfitting. Thus, in cases prone to
overfitting, reducing higher order likelihood components improves both
performance and complexity. This represents a win-win situation, in
contrast to the classical view where the MLE has the lowest variance and adding
lower order components reduces complexity but increases the variance.
In Figure \ref{fig:localsent_crf_cont} we note this phenomenon when comparing
$\sigma^2=1$ to $\sigma^2=10$ across the selection policies PL1/FL and PL1/PL2.
That is, under weaker regularization the more restrictive selection policy, i.e.,
PL1/PL2, is able to achieve comparable test set perplexity.
For the text chunking experiments, we observe a striking win-win when using the
Boltzmann chain MRF, Figures \ref{fig:chunk_boltzchain_pl1fl_cont} and
\ref{fig:chunk_boltzchain_pl1pl2_cont}. Notice that as regularization is
decreased (comparing from left to right), the contours are pulled closer to
the x-axis. This means that we are achieving the same perplexity at reduced
levels of computational complexity. The CRF, however, exhibits the win-win
only to a minor extent. We delve deeper into why this might be the case in the
following section.
\subsection{$\lambda$, $\sigma^2$ Interplay}\label{sec:interplay}
Throughout these experiments we fixed $\sigma^2$ and either swept over
$(\lambda,\beta)$ or used the heuristic to evaluate $(\lambda,\beta(\lambda))$.
Motivated by the sometimes weak win-win (cf.\ Section \ref{sec:winwin}) we now
consider how the optimal $\sigma^2$ changes as a function of $\lambda$. In
Figure \ref{fig:chunk_crf_lamsig1} we used the $\beta$ heuristic to evaluate
train and test perplexity over a $(\lambda,\sigma^2)$ grid. We used CRFs and
the text chunking task as outlined in Section \ref{sec:chunk_crf}.
For the PL1/FL policy, we observe that for small enough $\lambda$ the optimal
$\sigma^2$, i.e., the $\sigma^2$ with smallest test perplexity, has
considerable range. At some point there are enough samples of the higher-order
component to stabilize the choice of regularizer, noting that it is still
weaker than the optimal full likelihood regularizer. Conversely, the PL1/PL2
regularizer has an essentially constant optimal regularizer which is relatively
much weaker.
\begin{figure}[ht!]
\centering
\begin{tabular}{cc}
\includegraphics[trim=0mm 0mm 0mm 0mm,clip,totalheight=.22\textheight]{figure0094} &
\includegraphics[trim=0mm 0mm 0mm 0mm,clip,totalheight=.22\textheight]{figure0095}
\end{tabular}
\caption{
Optimal regularization parameter as a function of
$(\lambda,\hat{\beta}(\lambda))$ for PL1/FL (left) and PL1/PL2 (right) CRF
selection policies. In the left figure, PL1/FL, $\lambda$ represents the
probability of including FL into the objective. A few FL samples add
uncertainty to the objective thus a weaker regularizer is preferable. As
more FL samples are incorporated, this effect diminishes but still acts to
regularize since the full likelihood (only) best regularization is
$\sigma^2=500$ (red triangle). The right figure, PL1/PL2, exhibits only a
minor change as $\lambda$ (the probability of incorporating PL2) is
increased. It is, however, best served by a much weaker regularizer than
PL2 alone (red triangle).
}\label{fig:chunk_crf_lamsig1}
\end{figure}
As a result, we believe that the lack of win-win for the chunking CRF follows
from two effects. In the case of the PL1/FL policy the contour plots are
misleading since there is no single $\sigma^2$ that performs well across all
$\lambda\in [0,1]$. For the PL1/PL2 there is simply little change in
regularization necessary across $\lambda$.
\section{Discussion}
The proposed estimator family facilitates computationally efficient estimation
in complex graphical models. In particular, different $(\beta,\lambda)$ parameterizations of the
stochastic composite likelihood enable resolving the complexity-accuracy
tradeoff in a domain and problem specific manner. The framework is generally
suited to Markov random fields, including conditional graphical models, and is
theoretically motivated. When the model is prone to overfit, stochastically
mixing lower order components with higher order ones acts as a regularizer and
results in a win-win situation of improving test-set accuracy and reducing
computational complexity at the same time.
{
\bibliographystyle{plain}
\section{Introduction}
The starting idea of this work is the following naive question:
\textit{is there a natural way to multiply $n$-tuples of $0$ and $1$}?
Of course, it is easy to find such algebraic structures.
The abelian group $\left(\mathbb{Z}_2\right)^n$ provides such a multiplication,
but the corresponding group algebra $\mathbb{K}\left[(\mathbb{Z}_2)^n\right]$,
over any field of scalars $\mathbb{K}$,
is not a simple algebra.
A much more interesting algebraic structure on
$\mathbb{K}\left[(\mathbb{Z}_2)^n\right]$ is given by the twisted product
\begin{equation}
\label{TwistProd}
u_x\cdot{}u_y=\left(-1\right)^{f(x,y)}
u_{x+y},
\end{equation}
where $x,y\in\left(\mathbb{Z}_2\right)^n$
and $f$ is a two-argument function on $\left(\mathbb{Z}_2\right)^n$
with values in $\mathbb{Z}_2\cong\{0,1\}$.
We use the standard notations $u_x$ for the element of $\mathbb{K}\left[(\mathbb{Z}_2)^n\right]$
corresponding to $x\in\left(\mathbb{Z}_2\right)^n$.
The only difference between the above product and
that of the group algebra $\mathbb{K}\left[(\mathbb{Z}_2)^n\right]$ is
the sign.
Yet, the structure of the algebra changes completely.
Throughout the paper the ground field $\mathbb{K}$ is assumed to be $\mathbb{R}$ or $\mathbb{C}$
(although many results hold for an arbitrary field of characteristic $\not=2$).
Remarkably enough, the classical Clifford algebras can be obtained
as twisted group algebras.
The first example is
the algebra of quaternions, $\mathbb{H}$.
This example was found by many authors but probably first in \cite{Lyc}.
The algebra $\mathbb{H}$ is a twisted $(\mathbb{Z}_2)^2$-algebra.
More precisely, consider the 4-dimensional vector space over $\mathbb{R}$ spanned by
$(0,0),\,(0,1),\,(1,0)$ and $(1,1)$ with the multiplication:
$$
u_{\left(x_1,x_2\right)}\cdot{}u_{\left(y_1,y_2\right)}=
\left(-1\right)^{x_1y_1+x_1y_2+x_2y_2}
u_{\left(x_1+y_1,\,x_2+y_2\right)}.
$$
It is easy to check that
the obtained twisted $(\mathbb{Z}_2)^2$-algebra is, indeed, isomorphic to $\mathbb{H}$,
see also~\cite{OM} for a different grading on the quaternions (over $(\mathbb{Z}_2)^3$).
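This verification amounts to a short computation; the following Python sketch (our notation) checks the quaternion relations on the basis elements $u_x$:

```python
def mul(x, y):
    """Product u_x * u_y in the twisted (Z_2)^2-algebra above;
    returns (sign, x + y mod 2)."""
    f = (x[0]*y[0] + x[0]*y[1] + x[1]*y[1]) % 2
    return (-1)**f, ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

i, j = (1, 0), (0, 1)
s_ij, k = mul(i, j)
s_ji, k2 = mul(j, i)
assert mul(i, i) == (-1, (0, 0))             # i^2 = -1
assert mul(j, j) == (-1, (0, 0))             # j^2 = -1
assert k == k2 == (1, 1) and s_ij == -s_ji   # ij = -ji, both proportional to u_(1,1)
assert mul(k, k) == (-1, (0, 0))             # k^2 = -1
```

Together with the associativity of the product (the twisting function is bilinear, hence a 2-cocycle), these relations identify the algebra with $\mathbb{H}$.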
Along the same lines, a Clifford algebra with $n$ generators,
is a $\left(\mathbb{Z}_2\right)^n$-graded algebra, see \cite{AM1}.
The (complex) Clifford algebra $\mathit{C}\ell_n$ is
isomorphic to the twisted group algebras over $\left(\mathbb{Z}_2\right)^n$
with the product
\begin{equation}
\label{Cliffunet}
u_{(x_1,\ldots,x_n)}\cdot{}u_{(y_1,\ldots,y_n)}=
\left(-1\right)^
{\sum_{1\leq{}i \leq{}j\leq{}n}x_iy_j}
u_{(x_1+y_1,\ldots,x_n+y_n)},
\end{equation}
where $(x_1,\ldots,x_n)$ is an $n$-tuple of $0$ and $1$.
The above twisting function is bilinear and therefore is a
2-cocycle on $\left(\mathbb{Z}_2\right)^n$.
The real Clifford algebras $\mathit{C}\ell_{p,q}$ are also twisted group algebras over $\left(\mathbb{Z}_2\right)^n$,
where $n=p+q$.
The twisting function~$f$ in the real case contains an extra term
$\sum_{1\leq{}i\leq{}p}x_iy_i$ corresponding to the signature (see Section \ref{SigSec}).
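These properties of the real twisting function can be confirmed numerically; the sketch below (our code, for $n=3$, $p=1$) checks that distinct generators anticommute, that the coboundary $\delta f$ vanishes (so $f$ is a 2-cocycle and the algebra is associative), and that with this sign convention the first $p$ generators square to $+1$ (which squares correspond to $p$ versus $q$ is our reading of the formula):

```python
from itertools import product

def f(x, y, p):
    """Real Clifford twisting: sum_{i<=j} x_i y_j + sum_{i<=p} x_i y_i (mod 2)."""
    n = len(x)
    s = sum(x[i]*y[j] for i in range(n) for j in range(i, n))
    s += sum(x[i]*y[i] for i in range(p))
    return s % 2

def delta_f(x, y, z, p):
    """Coboundary of f; it vanishes iff the twisted product is associative."""
    xy = tuple((a + b) % 2 for a, b in zip(x, y))
    yz = tuple((a + b) % 2 for a, b in zip(y, z))
    return (f(y, z, p) + f(xy, z, p) + f(x, yz, p) + f(x, y, p)) % 2

n, p = 3, 1
basis = [tuple(1 if t == i else 0 for t in range(n)) for i in range(n)]
for i, e_i in enumerate(basis):
    # squares follow the signature: +1 for the first p generators, -1 otherwise
    assert f(e_i, e_i, p) == (0 if i < p else 1)
    for e_j in basis[i+1:]:
        # distinct generators anticommute
        assert (f(e_i, e_j, p) + f(e_j, e_i, p)) % 2 == 1
# f is bilinear, hence a 2-cocycle: the algebra is associative
for x, y, z in product(product((0, 1), repeat=n), repeat=3):
    assert delta_f(x, y, z, p) == 0
```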
\begin{figure}[hbtp]
\begin{center}
\psfragscanon
\includegraphics[width=6cm]{FanoOpp.pdf}
\end{center}
\caption{$\left(\mathbb{Z}_2\right)^3$-grading on the octonions.}
\label{OandM}
\end{figure}
The algebra of octonions $\mathbb{O}$ can also be viewed as
a twisted group algebra \cite{AM}.
It is isomorphic to $\mathbb{R}\left[(\mathbb{Z}_2)^3\right]$
equipped with the following product:
$$
u_{(x_1,x_2,x_3)}\cdot{}u_{(y_1,y_2,y_3)}=
\left(-1\right)^{
\left(x_1x_2y_3+x_1y_2x_3+y_1x_2x_3+
\sum_{1\leq{}i\leq{}j\leq3}\,x_iy_j\right)}
u_{(x_1+y_1,\,x_2+y_2,\,x_3+y_3)}.
$$
Note that the twisting function in this case is a polynomial of degree 3,
and does not define a 2-cocycle.
This is equivalent to the fact that the algebra $\mathbb{O}$ is not associative.
The multiplication table on $\mathbb{O}$ is usually represented by the
Fano plane.
The corresponding $\left(\mathbb{Z}_2\right)^3$-grading is given
in Figure \ref{OandM}.
We also mention that different group gradings on $\mathbb{O}$ were studied in \cite{Eld},
we also refer to \cite{Bae} for a survey on the octonions and Clifford algebras.
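As a sanity check of this product (code and function names are ours), one can verify that every nonzero basis element squares to $-1$ and that the associativity defect $\phi=\delta f$ equals the determinant over $\mathbb{Z}_2$ of the matrix with columns $x,y,z$:

```python
from itertools import product

def f_oct(x, y):
    """Twisting function of the octonions on (Z_2)^3."""
    cubic = x[0]*x[1]*y[2] + x[0]*y[1]*x[2] + y[0]*x[1]*x[2]
    quad = sum(x[i]*y[j] for i in range(3) for j in range(i, 3))
    return (cubic + quad) % 2

def add(x, y):
    return tuple((a + b) % 2 for a, b in zip(x, y))

def phi(x, y, z):
    """Associativity defect: u_x(u_y u_z) = (-1)^phi (u_x u_y)u_z."""
    return (f_oct(y, z) + f_oct(add(x, y), z)
            + f_oct(x, add(y, z)) + f_oct(x, y)) % 2

def det2(x, y, z):
    """Determinant over Z_2 of the 3x3 matrix with columns x, y, z."""
    return (x[0]*(y[1]*z[2] + y[2]*z[1])
            + x[1]*(y[0]*z[2] + y[2]*z[0])
            + x[2]*(y[0]*z[1] + y[1]*z[0])) % 2

V = list(product((0, 1), repeat=3))
for x in V:
    if x != (0, 0, 0):
        assert f_oct(x, x) == 1            # every imaginary unit squares to -1
for x, y, z in product(V, repeat=3):
    assert phi(x, y, z) == det2(x, y, z)   # the defect is det|x,y,z| mod 2
```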
In this paper, we introduce two series of complex algebras, $\mathbb{O}_{n}$ and $\mathbb{M}_n$,
and of real algebras, $\mathbb{O}_{p,q}$ and $\mathbb{M}_{p,q}$.
The series $\mathbb{O}_n$ and $\mathbb{O}_{p,q}$ generalize the algebra of octonions
in a similar way as the Clifford
algebras generalize the algebra of quaternions.
The situation can be represented
by the following diagram
$$
\label{SubmodDiagramm}
\begin{CD}
@.@. \vdots@. \vdots \\
@.@. @AAA@AAA \\
@.@. \mathit{C}\ell_{4}@.\mathbb{O}_{5} \\
@.@.@AAA@AAA\\
@.@.\mathit{C}\ell_{3}@.\mathbb{O}_{4}\\
@.@.@AAA @AAA \\
\mathbb{R}@> >>\mathbb{C} @> >> \mathbb{H} @> >> \mathbb{O} @> >>\mathrm{\mathbb{S}}@> >>\cdots
\end{CD}
$$
where the horizontal line represents the Cayley-Dickson procedure
(see, e.g., \cite{Bae,ConSmi}),
in particular, $\mathbb{S}$ is the 16-dimensional algebra of sedenions.
The algebra $\mathbb{M}_n$ ``measures'' the difference between
$\mathbb{O}_n$ and~$\mathit{C}\ell_n$.
The precise definition is as follows.
The (complex) algebras $\mathbb{O}_{n}$ are twisted group algebras $\mathbb{K}\left[(\mathbb{Z}_2)^n\right]$ with the
product (\ref{TwistProd}), given by the function
\begin{equation}
\label{NashProd}
f_\mathbb{O}(x,y)=\sum_{1\leq{}i<j<k\leq{}n}
\left(
x_ix_jy_k+x_iy_jx_k+y_ix_jx_k
\right)+
\sum_{1\leq{}i\leq{}j\leq{}n}\,x_iy_j,
\end{equation}
for arbitrary $n$.
The algebras $\mathbb{M}_n$ are defined by the twisting function
\begin{equation}
\label{ForgProd}
f_\mathbb{M}(x,y)=\sum_{1\leq{}i<j<k\leq{}n}
\left(
x_ix_jy_k+x_iy_jx_k+y_ix_jx_k
\right),
\end{equation}
which is just the homogeneous part of degree 3 of the function $f_\mathbb{O}$
(i.e., with the quadratic part removed).
In the real case, one can again add the signature term
$\sum_{1\leq{}i\leq{}p}x_iy_i$, which
only changes the square of some generators,
and define the algebras $\mathbb{O}_{p,q}$ and $\mathbb{M}_{p,q}$.
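One can check directly from (\ref{NashProd}) and (\ref{ForgProd}) that the generators of $\mathbb{O}_n$ pairwise anticommute, that those of $\mathbb{M}_n$ commute, and that in both algebras any three distinct generators antiassociate; the following Python sketch (ours) verifies this for $n=5$:

```python
from itertools import combinations

def f_O(x, y):
    """Twisting function f_O of O_n: cubic part plus sum_{i<=j} x_i y_j."""
    n = len(x)
    cubic = sum(x[i]*x[j]*y[k] + x[i]*y[j]*x[k] + y[i]*x[j]*x[k]
                for i, j, k in combinations(range(n), 3))
    quad = sum(x[i]*y[j] for i in range(n) for j in range(i, n))
    return (cubic + quad) % 2

def f_M(x, y):
    """Twisting function f_M of M_n: the cubic part of f_O alone."""
    n = len(x)
    return sum(x[i]*x[j]*y[k] + x[i]*y[j]*x[k] + y[i]*x[j]*x[k]
               for i, j, k in combinations(range(n), 3)) % 2

def e(i, n):
    return tuple(1 if t == i else 0 for t in range(n))

def add(x, y):
    return tuple((a + b) % 2 for a, b in zip(x, y))

def phi(f, x, y, z):
    return (f(y, z) + f(add(x, y), z) + f(x, add(y, z)) + f(x, y)) % 2

n = 5
for i, j in combinations(range(n), 2):
    # generators of O_n anticommute; generators of M_n commute
    assert (f_O(e(i, n), e(j, n)) + f_O(e(j, n), e(i, n))) % 2 == 1
    assert (f_M(e(i, n), e(j, n)) + f_M(e(j, n), e(i, n))) % 2 == 0
for i, j, k in combinations(range(n), 3):
    # in both algebras, distinct generators antiassociate
    assert phi(f_O, e(i, n), e(j, n), e(k, n)) == 1
    assert phi(f_M, e(i, n), e(j, n), e(k, n)) == 1
```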
The function $f_\mathbb{O}$ is a straightforward generalization of the
twisting function corresponding to the octonions.
In particular, the algebra $\mathbb{O}_3$ is just the complexified octonion algebra
$\mathbb{O}\otimes{}\mathbb{C}$.
In the real case, $\mathbb{O}_{0,3}\cong\mathbb{O}$, while
the algebras $\mathbb{O}_{3,0}\cong\mathbb{O}_{2,1}\cong\mathbb{O}_{1,2}$
are isomorphic to another famous algebra, the algebra of split-octonions.
The first really interesting new example is the algebra $\mathbb{O}_5$
and its real forms $\mathbb{O}_{p,q}$ with $p+q=5$.
The algebras $\mathbb{O}_n$ and $\mathbb{M}_n$ are not associative,
moreover, they are not alternative.
It turns out however, that these algebras
have nice properties similar to those of the octonion algebra
and of the Clifford algebras at the same time.
As an ``abstract algebra'', $\mathbb{O}_n$ can be defined
in a quite similar way as the Clifford algebras.
The algebra~$\mathbb{O}_n$ has
$n$ generators $u_1,\ldots,u_n$ such that $u_i^2=-1$ and
\begin{equation}
\label{RugaetMenia}
u_i\cdot{}u_j=-u_j\cdot{}u_i,
\end{equation}
together with the
antiassociativity relations
\begin{equation}
\label{RugaetMeniaBis}
u_i\cdot(u_j\cdot{}u_k)=-(u_i\cdot{}u_j)\cdot{}u_k,
\end{equation}
for $i\not=j\not=k$.
We will show that {\it the algebras $\mathbb{O}_n$
are the only algebras with $n$ generators $u_1,\ldots,u_n$
satisfying (\ref{RugaetMenia}) and (\ref{RugaetMeniaBis}) and such that
any three monomials $u,v,w$ either associate or antiassociate
independently of the order of $u,v,w$}.
The relations of higher degree are then calculated inductively using the following
simple ``linearity law''.
Given three monomials $u,v,w$, then
$$
u\cdot(v\cdot{}w)=(-1)^{\phi(\deg{u},\deg{v},\deg{w})}\,(u\cdot{}v)\cdot{}w,
$$
where $\phi$ is the trilinear function uniquely defined by the
above relations of degree~3, see Section \ref{GenRelSect} for the details.
For instance, one has
$
u_i\cdot((u_j\cdot{}u_k)\cdot{}u_\ell)=(u_i\cdot(u_j\cdot{}u_k))\cdot{}u_\ell,
$
for $i\not=j\not=k\not=\ell,$ etc.
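The trilinearity of $\phi$, and the degree-4 relation just stated, can be verified by brute force for small $n$; the following sketch (ours, with $\phi=\delta f_\mathbb{O}$ and $n=4$) does so:

```python
from itertools import product, combinations

def f_O(x, y):
    n = len(x)
    cubic = sum(x[i]*x[j]*y[k] + x[i]*y[j]*x[k] + y[i]*x[j]*x[k]
                for i, j, k in combinations(range(n), 3))
    quad = sum(x[i]*y[j] for i in range(n) for j in range(i, n))
    return (cubic + quad) % 2

def add(x, y):
    return tuple((a + b) % 2 for a, b in zip(x, y))

n = 4
V = list(product((0, 1), repeat=n))
F = {(x, y): f_O(x, y) for x in V for y in V}   # memoized twisting function

def phi(x, y, z):
    return (F[y, z] + F[add(x, y), z] + F[x, add(y, z)] + F[x, y]) % 2

# phi is additive in its first argument (similar checks work for the others)
for x1, x2, y, z in product(V, repeat=4):
    assert phi(add(x1, x2), y, z) == (phi(x1, y, z) + phi(x2, y, z)) % 2

# hence u_i((u_j u_k) u_l) = (u_i(u_j u_k)) u_l for distinct i, j, k, l
e = lambda i: tuple(1 if t == i else 0 for t in range(n))
assert phi(e(0), add(e(1), e(2)), e(3)) == 0
```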
The presentation of $\mathbb{M}_n$ is exactly the same as above, except that
the generators of $\mathbb{M}_n$ commute.
We will prove two classification results characterizing the algebras $\mathbb{O}_n$ and $\mathbb{M}_n$
in an axiomatic way.
Our main tool is the notion of \textit{generating function}.
This is a function in one argument $\alpha: (\mathbb{Z}_2)^n\rightarrow \mathbb{Z}_2$ that encodes
the structure of the algebra.
Existence of a generating function is a strong condition.
This is a way to distinguish the series $\mathbb{O}_n$ and $\mathbb{M}_n$
from the classical Cayley-Dickson algebras.
The main results of the paper contain four theorems and
their corollaries.
\begin{enumerate}
\item
Theorem \ref{Thmalpha} states that the generating function
determines a (complex) twisted group algebra completely.
\item
Theorem~\ref{AlphMainTh} is a general characterization of non-associative
twisted group algebras over $(\mathbb{Z}_2)^n$ with symmetric non-associativity
factor, in terms of generating functions.
\item
Theorem \ref{SimProp} answers the question for which $n$ (and $p,q$)
the constructed algebras are simple.
The result is quite similar to that for the Clifford algebras, except that
the algebras $\mathbb{O}_n$ and $\mathbb{M}_n$ degenerate for one value of $n$
out of four, and not for one out of two as~$\mathit{C}\ell_n$ does.
\item
Theorem \ref{Solcx} provides explicit formul{\ae} of the
Hurwitz-Radon square identities.
The algebras $\mathbb{O}_n$ (as well as $\mathbb{M}_n$) are not composition algebras.
However, they have natural Euclidean norm $\mathcal{N}$.
We obtain a necessary and sufficient condition
for elements $u$ and~$v$ to satisfy
$
\mathcal{N}(u\cdot{}v)=\mathcal{N}(u)\,\mathcal{N}(v).
$
Whenever we find two subspaces $V,W\subset\mathbb{O}_n$
consisting of elements satisfying this condition,
we obtain a square identity generalizing the famous ``octonionic''
8-square identity.
\end{enumerate}
The algebras $\mathbb{O}_n$ and $\mathbb{M}_n$ are
closely related to the theory of Moufang loops and, in particular,
to code loops, see \cite{Gri,DV,NV} and references therein.
Indeed, the homogeneous elements $\pm{}u_x$, where $x\in(\mathbb{Z}_2)^n$,
form a Moufang loop of rank~$2^{n+1}$.
As an application, we show in Section \ref{LaSec} how the famous Parker loop
fits into our framework.
Our main tools include variations on the cohomology of $\left(\mathbb{Z}_2\right)^n$
and the linear algebra over~$\left(\mathbb{Z}_2\right)^n$.
A brief account of this subject is presented in Section \ref{CSec} and in the Appendix.
\medskip
\noindent \textbf{Acknowledgments}.
This work was completed at the
Mathematisches Forschungsinstitut Oberwolfach (MFO).
The first author has benefited from the award of a
\textit{Leibniz Fellowship},
the second author is also grateful to MFO for hospitality.
We are pleased to thank
Christian~Duval, Alexey Lebedev, Dimitry Leites,
John McKay and Sergei Tabachnikov
for their interest and helpful comments.
We are also grateful to anonymous referees for a number of
useful comments.
\section{Twisted group algebras over $\left(\mathbb{Z}_2\right)^n$}
In this section, we give the standard definition of twisted group algebra
over the abelian group $\left(\mathbb{Z}_2\right)^n$.
The twisting function we consider is not necessarily a 2-cocycle.
We recall the related notion of graded quasialgebra introduced in~\cite{AM}.
At the end of the section, we give a short account of the cohomology
of $\left(\mathbb{Z}_2\right)^n$ with coefficients in $\mathbb{Z}_2$.
\subsection{Basic definitions}\label{TheVeryFirS}
The most general definition is the following.
Let $(G,+)$ be an abelian group.
A \textit{twisted group algebra}
$\left(\mathbb{K}\left[G\right],\,F\right)$
is the algebra spanned by the elements
$u_x$ for $x\in{}G$ and equipped with the product
$$
u_x\cdot{}u_y=F(x,y)\,
u_{x+y},
$$
where
$F:G\times{}G\to\mathbb{K}^*$
is an \textit{arbitrary} two-argument function
such that
$$
F(0,.)=F(.,0)=1.
$$
The algebra $\left(\mathbb{K}\left[G\right],\,F\right)$ is always unital and it
is associative if and only if $F$ is a 2-cocycle on $G$.
Twisted group algebras are a classical subject
(see, e.g., \cite{Con,BZ} and references therein).
We will be interested in the particular case of twisted algebras over
$G=\left(\mathbb{Z}_2\right)^n$ and the twisting function $F$ of the form
$$
F(x,y)=\left(-1\right)^{f(x,y)},
$$
with $f$ taking values in $\mathbb{Z}_2\cong\{0,1\}$.
We will denote by $\left(\mathbb{K}\left[(\Z_2)^n\right],\,f\right)$
the corresponding twisted group algebra.
Let us stress that the function $f$ is not necessarily a 2-cocycle.
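For concreteness, the product of two arbitrary elements of $\left(\mathbb{K}\left[(\mathbb{Z}_2)^n\right],\,f\right)$, written as linear combinations of the $u_x$, can be sketched as follows (our code; elements are dictionaries mapping $x\in(\mathbb{Z}_2)^n$ to coefficients):

```python
from collections import defaultdict

def twisted_product(a, b, f):
    """Product in (K[(Z_2)^n], f): a, b are dicts {x: coefficient} with
    x a tuple over {0, 1}; f(x, y) takes values in {0, 1}."""
    c = defaultdict(int)
    for x, ax in a.items():
        for y, by in b.items():
            s = tuple((xi + yi) % 2 for xi, yi in zip(x, y))
            c[s] += (-1) ** f(x, y) * ax * by
    return {x: v for x, v in c.items() if v != 0}

# with f = 0 we recover the group algebra K[(Z_2)^n]; u_0 is always the unit
f0 = lambda x, y: 0
u0 = {(0, 0): 1}
a = {(1, 0): 2, (0, 1): 3}
assert twisted_product(u0, a, f0) == a
assert twisted_product(a, u0, f0) == a
```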
\subsection{Quasialgebra structure}\label{QuaSec}
An arbitrary twisted group algebra
$\mathcal{A}=\left(\mathbb{K}\left[(\Z_2)^n\right],\,f\right)$
gives rise to two functions
$$
\beta:\left(\mathbb{Z}_2\right)^n\times{}\left(\mathbb{Z}_2\right)^n\to\mathbb{Z}_2,
\qquad
\phi:\left(\mathbb{Z}_2\right)^n\times{}\left(\mathbb{Z}_2\right)^n\times{}\left(\mathbb{Z}_2\right)^n\to\mathbb{Z}_2
$$
such that
\begin{eqnarray}
\label{DeffiC}
u_x\cdot{}u_y&=&
(-1)^{\beta(x,y)}\,
u_y\cdot{}u_x,\\[6pt]
\label{Deffi}
u_x\cdot\left(u_y\cdot{}u_z\right)&=&
(-1)^{\phi(x,y,z)}
\left(u_x\cdot{}u_y\right)\cdot{}u_z,
\end{eqnarray}
for any homogeneous elements $u_x,u_y,u_z\in\mathcal{A}$.
The function $\beta$ obviously satisfies the following properties:
$\beta(x,y)=\beta(y,x)$ and $\beta(x,x)=0$.
Following \cite{AM}, we call the structure $\beta,\phi$
a \textit{graded quasialgebra}.
The functions $\beta$ and $\phi$ can be expressed in terms of the twisting function $f$:
\begin{eqnarray}
\label{bef}
\beta (x,y)&=&f(x,y) +f(y,x),\\[6pt]
\label{phef}
\phi (x,y,z)&=&f(y,z)+f(x+y,z)+f(x,y+z)+f(x,y).
\end{eqnarray}
Note that (\ref{phef}) reads
$$
\phi=\delta{}f.
$$
In particular, $\phi$ is a (trivial) 3-cocycle.
Conversely, given the functions $\beta$ and $\phi$, to what extent is the corresponding
function $f$ uniquely defined?
We will give the answer to this question in Section \ref{IsomSec}.
\begin{ex}
(a)
For the Clifford algebra $\mathit{C}\ell_n$ (and for
$\mathit{C}\ell_{p,q}$ with $p+q=n$),
the function $\beta$ is bilinear:
$$
\beta_{\mathit{C}\ell_n}(x,y)=\sum_{i\not=j}x_iy_j.
$$
The function $\phi\equiv0$ since the twisting function (\ref{Cliffunet}) is a 2-cocycle,
this is of course equivalent to the associativity property.
Every simple graded quasialgebra
with bilinear $\beta$ and $\phi\equiv0$ is a Clifford algebra,
see \cite{OM}.
(b)
For the algebra of octonions $\mathbb{O}$, the function $\beta$ is as follows:
$\beta(x,y)=0$ if either $x=0$, or $y=0$, or $x=y$;
otherwise, $\beta(x,y)=1$.
The function $\phi$ is the determinant of $3\times3$ matrices:
$$
\phi(x,y,z)=
\det\left|
x,y,z
\right|,
$$
where $x,y,z\in\left(\mathbb{Z}_2\right)^3$.
This function is symmetric and trilinear.
\end{ex}
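For instance, taking the standard basis vectors $x=(1,0,0)$, $y=(0,1,0)$ and $z=(0,0,1)$, one finds
$$
\phi(x,y,z)=\det(\mathrm{Id})=1,
$$
so that $u_x\cdot(u_y\cdot{}u_z)=-(u_x\cdot{}u_y)\cdot{}u_z$: this is the familiar non-associativity of the octonions.
Whenever two of the three arguments coincide, the determinant vanishes, in accordance with the alternativity of $\mathbb{O}$.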
\begin{rem}
The notion of graded quasialgebra was defined in \cite{AM1}
in a more general situation where $G$ is an arbitrary
abelian group and the functions
that measure the defect of commutativity and associativity take values
in~$\mathbb{K}^*$ instead of $\mathbb{Z}_2$.
The ``restricted version'' we consider is very special, and
this is the reason why we can say much more about it.
On the other hand, many classical algebras can be treated within
our framework.
\end{rem}
\subsection{The pentagonal and the hexagonal diagrams}\label{DiagSect}
Consider any three homogeneous elements, $u,v,w\in\mathcal{A}$.
The functions $\beta$ and $\phi$
relate the different products, $u(vw), (uv)w,\allowbreak (vu)w$, etc.
The hexagonal diagrams in Figure \ref{5and6}
\begin{figure}[hbtp]
\includegraphics[width=12cm]{HexaPenta3.pdf}
\caption{Two hexagonal and the pentagonal commutative diagrams}
\label{5and6}
\end{figure}
represent different loops in $\mathcal{A}$ that lead to the following identities
\begin{equation}
\label{BetPhi}
\begin{array}{rcl}
\phi(x,y,z)+\beta(x,y+z)+\phi(y,z,x)+\beta(z,x)+\phi(y,x,z)+\beta(x,y)&=&0\,,\\[6pt]
\phi(x,y,z)+\beta(z,y)+\phi(x,z,y)+\beta(z,x)+\phi(z,x,y)+\beta(x+y,z)&=&0\,.
\end{array}
\end{equation}
Note that these identities can be checked directly from
(\ref{bef}) and (\ref{phef}).
In a similar way,
comparing the different products of any four homogeneous elements $t,u,v,w$
(see the pentagonal diagram of Figure \ref{5and6})
leads to the condition
\begin{equation}
\label{DeltaPhi}
\phi(y,z,t)+\phi(x+y,z,t)+\phi(x,y+z,t)+\phi(x,y,z+t)+\phi(x,y,z)=0,
\end{equation}
which is nothing but the 3-cocycle condition $\delta\phi=0$.
We already knew this identity from $\phi=\delta{}f$.
Let us stress that
these commutative diagrams are tautologically satisfied
and give no restriction on $f$.
\subsection{Cohomology $H^*\left(\left(\mathbb{Z}_2\right)^n;\mathbb{Z}_2 \right)$}\label{CSec}
In this section, we recall classical notions and results on the cohomology
of $G=(\mathbb{Z}_2)^n$ with coefficients in $\mathbb{Z}_2$.
We consider the space of cochains, $\mathcal{C}^q=\mathcal{C}^q(G;\mathbb{Z}_2)$, consisting of (arbitrary)
maps in $q$ arguments $c:G\times\cdots\times{}G\to\mathbb{Z}_2$.
The usual coboundary operator $\delta:\mathcal{C}^{q}\to\mathcal{C}^{q+1}$
is defined by
$$
\delta c(g_1,\ldots ,g_{q+1})= c(g_1,\ldots, g_{q})
+
\sum_{i=1}^q c(g_1,\ldots, g_{i-1},g_{i}+g_{i+1},g_{i+2},\ldots, g_{q+1})
+
c(g_2,\ldots ,g_{q+1}),
$$
for all $g_1, \ldots, g_{q+1} \in G$.
This operator satisfies $\delta^2=0$.
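For instance, for $q=1$, the coboundary of a function $b:G\to\mathbb{Z}_2$ is the symmetric 2-cochain
$$
\delta b(x,y)=b(x+y)+b(x)+b(y),
$$
an expression that will appear repeatedly in the sequel.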
A cochain $c$ is called a \textit{$q$-cocycle} if $\delta c=0$,
and a \textit{$q$-coboundary} (or a trivial $q$-cocycle)
if $c=\delta b$ for some cochain $b\in \mathcal{C}^{q-1}$.
The space of $q$-th cohomology, $H^{q}(G;\mathbb{Z}_2)$,
is the quotient space of $q$-cocycles modulo $q$-coboundaries.
We are particularly interested in the case where $q=1, 2$ or $3$.
A fundamental result (cf. \cite{Ade}, p.66) states that the cohomology ring
$H^*(G;\mathbb{Z}_2)$ is isomorphic to the algebra of polynomials
in $n$ commuting variables $e_1,\ldots,e_n$:
$$
H^*(G;\mathbb{Z}_2)\cong \mathbb{Z}_2[e_1,\ldots,e_n].
$$
The basis of $H^q(G;\mathbb{Z}_2)$ is given by the
cohomology classes of the following multilinear
$q$-cochains:
\begin{equation}
\label{BasCoc}
(x^{(1)},\ldots, x^{(q)})\mapsto x^{(1)}_{i_1}\cdots\, x^{(q)}_{i_q},
\qquad
i_1\leq\cdots\leq{}i_q,
\end{equation}
where each $x^{(k)}\in(\mathbb{Z}_2)^n$ is an $n$-tuple of 0 and 1:
$$
x^{(k)}=
(x^{(k)}_1,\ldots,x^{(k)}_n).
$$
The $q$-cocycle \eqref{BasCoc} is identified with the monomial
$e_{i_1}\cdots\,e_{i_q}$.
\begin{ex}
The linear maps $c_i(x)=x_i$, for $i=1,\ldots,n$
provide a basis of $H^{1}(G;\mathbb{Z}_2)$ while
the bilinear maps
$$
c_{ij}(x,y)=x_i\,y_j,
\qquad i\leq j
$$
provide a basis of the second cohomology space $H^{2}(G;\mathbb{Z}_2)$.
\end{ex}
\subsection{Polynomials and polynomial maps}\label{Poly}
The space of all functions on $(\mathbb{Z}_2)^n$ with values in $\mathbb{Z}_2$ is isomorphic to the
quotient space
$$
\mathbb{Z}_2\left[x_1,\ldots,x_n\right]\,/\,(x_i^2-x_i \,:\, i=1,\ldots ,n).
$$
A function $P:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ can be expressed as a polynomial
in $(x_1,\ldots,x_n)$,
but not in a unique way.
Throughout the paper, we identify the function $P$ with the polynomial expression
in which each monomial is minimally represented (i.e., has the lowest possible degree),
so that each function $P$ can be uniquely written in the following form
$$
P=\sum_{k=0}^{n}\;
\sum_{1\leq{}i_1<\cdots<i_k\leq{}n}
\lambda_{i_1\ldots{}i_k}\,x_{i_1}\cdots\,x_{i_k},
$$
where $\lambda_{i_1\ldots{}i_k}\in\{0,1\}$.
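For instance, since $x_i^2=x_i$ as a function on $(\mathbb{Z}_2)^n$, the polynomials $x_1^2+x_1x_2$ and $x_1+x_1x_2$ define the same function, and only the latter is minimally represented.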
\section{The generating function}
In this section we go further into the general theory of twisted group algebra
over $(\mathbb{Z}_2)^n$.
We define the notion of generating function.
To the best of our knowledge, this notion has never been considered before.
It will be fundamental for us, since it allows us to distinguish
the Clifford algebras, the octonions and the two new series we introduce in this paper
from other twisted group algebras over $(\mathbb{Z}_2)^n$ (such as the Cayley-Dickson algebras).
The generating function contains the full information about the algebra,
except for the signature.
\subsection{Generating functions}\label{DefGenFn}
The notion of generating function makes sense for any
$G$-graded quasialgebra $\mathcal{A}$, over an arbitrary abelian group $G$.
We are only interested in the case where $\mathcal{A}$ is a twisted group algebra over $(\mathbb{Z}_2)^n$.
\begin{defi}
Given a $G$-graded quasialgebra, a function
$
\alpha:G\to\mathbb{Z}_2
$
will be called a \textit{generating function}
if the binary function $\beta$ and the ternary function $\phi$
defined by (\ref{DeffiC}) and (\ref{Deffi})
are both determined by $\alpha$ via
\begin{eqnarray}
\label{Genalp1}
\beta(x,y)&=&\alpha(x+y)+\alpha(x)+\alpha(y),\\[6pt]
\label{Genalp2}
\phi(x,y,z)&=&\alpha(x+y+z)\cr
&&+\alpha(x+y)+\alpha(x+z)+\alpha(y+z)\cr
&&+\alpha(x)+\alpha(y)+\alpha(z).
\end{eqnarray}
\noindent
Note that the identity (\ref{Genalp1}) implies that $\alpha$ vanishes
on the zero element $0=(0,\ldots,0)$ of $(\mathbb{Z}_2)^n$, because the corresponding
element $1:=u_0$ is the unit of $\mathcal{A}$ and therefore commutes with
any other element of $\mathcal{A}$.
\end{defi}
The identity (\ref{Genalp1}) means that $\beta$ is the differential
of $\alpha$ in the usual sense of group cohomology.
The second identity (\ref{Genalp2}) suggests the operator of
``second derivation'', $\delta_2$, defined by the right-hand-side, so that
the above identities then read:
$$
\beta=\delta\alpha,
\qquad
\phi=\delta_2\alpha.
$$
The algebra $\mathcal{A}$ is commutative if and only if $\delta\alpha=0$;
it is associative if and only if $\delta_2\alpha=0$.
The cohomological meaning of the operator $\delta_2$
will be discussed in the Appendix.
Note also that formul{\ae} (\ref{Genalp1}) and (\ref{Genalp2})
are known in linear algebra and usually called \textit{polarization}.
This is the way one obtains a bilinear form from a quadratic one
and a trilinear form from a cubic one, respectively.
\begin{ex}
\label{ExGenHO}
(a)
The classical algebras of quaternions $\mathbb{H}$ and of octonions $\mathbb{O}$
have generating functions.
They are of the form:
$$
\alpha(x)=\left\{
\begin{array}{lr}
1,&x\not=0,\\
0,&x=0.
\end{array}
\right.
$$
It is amazing that such simple functions contain the full information about
the structure of $\mathbb{H}$ and $\mathbb{O}$.
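As a simple check, consider the quaternions, i.e., the case $n=2$, and take $x=(1,0)$, $y=(0,1)$, so that $u_x$ and $u_y$ correspond to the quaternions $\mathrm{i}$ and $\mathrm{j}$. Formula (\ref{Genalp1}) gives
$$
\beta(x,y)=\alpha(1,1)+\alpha(1,0)+\alpha(0,1)=1+1+1=1,
$$
recovering the anticommutation relation $\mathrm{i}\,\mathrm{j}=-\mathrm{j}\,\mathrm{i}$.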
(b)
The generating function of $\mathit{C}\ell_n$ is as follows:
\begin{equation}
\label{ClAlp}
\alpha_{\mathit{C}\ell}(x)=\sum_{1\leq{}i\leq{}j\leq{}n}\,x_ix_j.
\end{equation}
Indeed, one checks that the binary function $\beta$ defined by (\ref{Genalp1}) is exactly the
skew-symmetrization of the function
$f=\sum_{1\leq{}i \leq{}j\leq{}n}x_iy_j$.
The function $\phi$ defined by (\ref{Genalp2})
is identically zero, since $\alpha$ is a quadratic polynomial.
\end{ex}
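Let us detail the computation of $\beta$ in part (b) of the above example. By (\ref{Genalp1}),
$$
\beta(x,y)=\alpha_{\mathit{C}\ell}(x+y)+\alpha_{\mathit{C}\ell}(x)+\alpha_{\mathit{C}\ell}(y)
=\sum_{1\leq{}i\leq{}j\leq{}n}\left(x_iy_j+y_ix_j\right)
=\sum_{i\not=j}x_iy_j,
$$
since each diagonal term $x_iy_i+y_ix_i$ cancels modulo 2; this is exactly $\beta_{\mathit{C}\ell_n}$.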
The most important feature of the notion of generating function
is the following.
In the complex case, the generating function contains
the full information about the algebra.
\begin{thm}\label{Thmalpha}
If $\mathcal{A}$ and $\mathcal{A}'$ are two complex twisted group algebras with the same generating function, then
$\mathcal{A}$ and $\mathcal{A}'$ are isomorphic as graded algebras.
\end{thm}
This theorem will be proved in Section \ref{IsomSec}.
In the real case, the generating function determines the algebra
up to the signature.
\subsection{The signature}\label{SigSec}
Consider a twisted group algebra $\mathcal{A}=(\mathbb{K}\left[(\mathbb{Z}_2)^n\right],f)$.
We will always use the following set of generators of $\mathcal{A}$:
\begin{equation}
\label{General}
u_i=u_{(0,\ldots,0,1,0,\ldots0)},
\end{equation}
where 1 stands at $i$-th position.
One has $u_i^2=\pm 1$,
the sign being determined by $f$.
The \textit{signature} is the collection of signs of the squares of the generators~$u_i$.
\begin{defi}
We say that the twisting functions
$f$ and $f'$
differ by a signature if one has
\begin{equation}
\label{Signatur}
f(x,y)-f'(x,y)=x_{i_1}y_{i_1}+\cdots+x_{i_p}y_{i_p},
\end{equation}
where $p\leq{}n$ is an integer.
\end{defi}
Note that $f-f'$ as above is a non-trivial 2-cocycle whenever $p\geq 1$.
Nevertheless, the quasialgebra structures defined by \eqref{bef} and \eqref{phef} are identical:
$\beta=\beta'$ and $\phi=\phi'$.
The signature represents the main difference between the
twisted group algebras over $\mathbb{C}$ and $\mathbb{R}$.
\begin{prop}
\label{SignProp}
If $\mathcal{A}=(\mathbb{C}\left[(\mathbb{Z}_2)^n\right],f)$ and $\mathcal{A}'=(\mathbb{C}\left[(\mathbb{Z}_2)^n\right],f')$
are complex twisted group algebras such that
$f$ and $f'$ differ by a signature, then $\mathcal{A}$ and $\mathcal{A}'$ are isomorphic
as graded algebras.
\end{prop}
\begin{proof}
Let us assume $p=1$ in \eqref{Signatur}, i.e.,
$f(x,y)-f'(x,y)=x_{i_1}y_{i_1}$, the general case will then follow by induction.
Let $u_x$, resp. $u'_x$, be the standard basis elements of $\mathcal{A}$, resp. $\mathcal{A}'$.
Let us consider the map $\theta:\mathcal{A}\to\mathcal{A}'$ defined by
$$
\theta(u_x)=
\left\{
\begin{array}{rl}
\sqrt{-1}\,u'_x, & \text{if } \,x_{i_1}=1,\\[4pt]
u'_x, & \hbox{otherwise}.
\end{array}
\right.
$$
Note that one can write $\theta(u_x)=\sqrt{-1}^{\,x_{i_1}}u'_x$, for all $x$.
Let us show that $\theta$ is a (graded) isomorphism between $\mathcal{A}$ and $\mathcal{A}'$.
On the one hand,
$$
\theta(u_x\cdot u_y)=(-1)^{f(x,y)}\,\theta(u_{x+y})
=(-1)^{f(x,y)}\, \sqrt{-1}^{\,(x_{i_1}+y_{i_1})}\,u'_{x+y}.
$$
On the other hand,
$$
\theta(u_x)\cdot \theta(u_y)=\sqrt{-1}^{\,x_{i_1}}\,u'_x \cdot \sqrt{-1}^{\,y_{i_1}}\,u'_y
=\sqrt{-1}^{\,x_{i_1}}\,\sqrt{-1}^{\,y_{i_1}}\,(-1)^{f'(x,y)}\,u'_{x+y}.
$$
Using the following (surprising) formula
\begin{equation}
\label{Somewhat}
\frac{\sqrt{-1}^{\,x_{i_1}}\sqrt{-1}^{\,y_{i_1}}}{\sqrt{-1}^{\,(x_{i_1}+y_{i_1})}}
=(-1)^{x_{i_1}y_{i_1}},
\end{equation}
we obtain $\theta(u_x\cdot u_y)=\theta(u_x)\cdot \theta(u_y)$.
To understand \eqref{Somewhat}, beware that the power $x_{i_1}+y_{i_1}$
in the denominator is taken modulo~2.
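Indeed, if $x_{i_1}=y_{i_1}=1$, the left-hand side of \eqref{Somewhat} equals $\sqrt{-1}\cdot\sqrt{-1}\,/\,\sqrt{-1}^{\,0}=-1$, while in all other cases both sides equal $1$.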
\end{proof}
In the real case, the algebras $\mathcal{A}$ and $\mathcal{A}'$ may or may not be isomorphic.
We will encounter both situations throughout this paper.
\subsection{Isomorphic twisted algebras}\label{IsomSec}
Let us stress that all the isomorphisms between the twisted group algebras
we consider in this section preserve the grading.
Such isomorphisms are called \textit{graded isomorphisms}.
It is natural to ask under what condition two functions $f$ and $f'$ define
isomorphic algebras.
Unfortunately, we do not know the complete answer to this question
and give here two conditions which are sufficient but certainly not necessary.
\begin{lem}
\label{LemA}
If $f-f'=\delta{}b$ is a coboundary, i.e., $b:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ is a function such that
$$
f(x,y)-f'(x,y)=b(x+y)+b(x)+b(y),
$$
then the corresponding twisted algebras are isomorphic.
\end{lem}
\begin{proof}
The isomorphism is given by the map
$u_x\mapsto(-1)^{b(x)}\,u_x$, for all $x\in{}(\mathbb{Z}_2)^n$.
\end{proof}
\begin{lem}
\label{LemB}
Given a group automorphism $T:(\mathbb{Z}_2)^n\to{}(\mathbb{Z}_2)^n$, the functions $f$ and
$$
f'(x,y)=f(T(x),T(y))
$$
define isomorphic twisted group algebras.
\end{lem}
\begin{proof}
The isomorphism is given by the map
$u_x\mapsto{}u_{T^{-1}(x)}$, for all $x\in{}(\mathbb{Z}_2)^n$.
\end{proof}
Note that the automorphisms of $(\mathbb{Z}_2)^n$ are
just arbitrary linear transformations.
We are now ready to answer the question formulated at the end of Section \ref{QuaSec}.
\begin{prop}
\label{VeryFirstProp}
Given two twisted algebras $\mathcal{A}=(\mathbb{K}\left[(\Z_2)^n\right],f)$ and $\mathcal{A}'=(\mathbb{K}\left[(\Z_2)^n\right],f')$,
the corresponding quasialgebra structures coincide, i.e., $\beta'=\beta$ and $\phi'=\phi$,
if and only if
$$
f(x,y)-f'(x,y)=\delta{}b(x,y)+\sum_{1\leq i \leq n}\lambda_i\,x_iy_i,
$$
where $b:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ is an arbitrary function, and $\lambda_i$ are coefficients in $\mathbb{Z}_2$.
In particular, if $\mathbb{K}=\mathbb{C}$, then $\mathcal{A}\cong\mathcal{A}'$.
\end{prop}
\begin{proof}
If the quasialgebra structures coincide, then
$\phi'=\phi$ implies $\delta{}f'=\delta{}f$, so that $f-f'$ is a 2-cocycle.
We use the information about the
second cohomology space $H^2((\mathbb{Z}_2)^n;\mathbb{Z}_2)$ (see Section \ref{CSec}).
Up to a 2-coboundary,
every non-trivial 2-cocycle is a linear combination of the following
bilinear maps:
$(x,y)\mapsto{}x_iy_i$, for some $i$ and $(x,y)\mapsto{}x_ky_\ell$, for some
$k<\ell$.
One deduces that $f-f'$ is of the form
$$
f(x,y)-f'(x,y)=\delta{}b(x,y)+
\sum_{1\leq{}i\leq{}n}\lambda_i\,x_iy_i+
\sum_{k<\ell}\mu_{k\ell}\,x_ky_\ell.
$$
Since $\beta'=\beta$, one observes that $f-f'$ is symmetric, so that
the last summand vanishes, while the second summand is nothing but
the signature.
The isomorphism in the complex case then follows from Proposition \ref{SignProp}
and Lemma \ref{LemA}.
Conversely, if $f$ and $f'$ are related by the above expression,
then the quasialgebra structures obviously coincide.
\end{proof}
Now, we can deduce Theorem \ref{Thmalpha} as a corollary of Proposition \ref{VeryFirstProp}.
Indeed, if $\mathcal{A}$ and $\mathcal{A}'$ have the same generating function $\alpha$,
then the quasialgebra structure of $\mathcal{A}$ and $\mathcal{A}'$ are the same.
\subsection{Involutions}
Let us mention one more property of generating functions.
Recall that an \textit{involution} on an algebra $\mathcal{A}$ is a linear map
$a\mapsto\bar{a}$ from $\mathcal{A}$ to $\mathcal{A}$ such that
$\overline{ab}=\bar{b}\,\bar{a}$ and $\bar{1}=1$,
i.e., an involution is an anti-automorphism.
Every generating function defines a graded involution
of the following particular form:
\begin{equation}
\label{AntII}
\overline{u_x}=
(-1)^{\alpha(x)}\,u_x.
\end{equation}
\begin{prop}
\label{Malus'}
If $\alpha$ is a generating function, then the linear map defined by
formula (\ref{AntII}) is an involution.
\end{prop}
\begin{proof}
Using (\ref{DeffiC}) and (\ref{Genalp1}), one has
$$
\overline{u_xu_y}=
(-1)^{\alpha(x+y)}\,u_xu_y=
(-1)^{\alpha(x+y)+\beta(x,y)}\,u_y\,u_x=
(-1)^{\alpha(x)+\alpha(y)}\,u_y\,u_x=
\overline{u_y} \,\overline{u_x}.
$$
Hence the result.
\end{proof}
In particular, the generating functions of $\mathbb{H}$ and $\mathbb{O}$, see Example \ref{ExGenHO},
correspond to the canonical involutions, i.e., to the conjugation.
\section{The series $\mathbb{O}_n$ and $\mathbb{M}_n$:
characterization}\label{TheMainS}
In this section, we formulate our first main result.
Theorem \ref{AlphMainTh} concerns the general properties
of twisted $(\mathbb{Z}_2)^n$-algebras with $\phi=\delta{}f$ symmetric.
This result distinguishes a class of algebras of which our algebras
of $\mathbb{O}$- and $\mathbb{M}$-series are the principal representatives.
We will also present several different ways to define the algebras
$\mathbb{O}_n$ and $\mathbb{M}_n$, as well as of $\mathbb{O}_{p,q}$ and $\mathbb{M}_{p,q}$.
\subsection{Symmetric quasialgebras}\label{SymmSec}
An arbitrary twisted group algebra leads to a quasialgebra structure.
One needs to assume some additional conditions
on the ``twisting'' function~$f$ in order to obtain an interesting
class of algebras.
We will be interested in the case where the function $\phi=\delta{}f$,
see formula (\ref{phef}), is symmetric:
\begin{equation}
\label{SymEq}
\phi(x,y,z)=\phi(y,x,z)=\phi(x,z,y).
\end{equation}
This condition seems to be very natural:
it means that, if three elements,
$u_x,u_y$ and $u_z$ form an antiassociative triplet,
i.e., one has $u_x\cdot(u_y\cdot{}u_z)=-(u_x\cdot{}u_y)\cdot{}u_z$,
then this property is independent of the ordering of the elements
in the triplet.
An immediate consequence of the identity (\ref{BetPhi}) is that,
if $\phi$ is symmetric, then it is completely determined by $\beta$:
\begin{equation}
\label{PhiBet}
\begin{array}{rcl}
\phi(x,y,z)&=&\beta(x+y,\,z)+\beta(x,z)+\beta(y,z)\\[4pt]
&=&\beta(x,\,y+z)+\beta(x,y)+\beta(x,z),
\end{array}
\end{equation}
as the ``defect of linearity'' in each argument.
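For instance, \eqref{PhiBet} immediately implies that a symmetric $\phi$ vanishes whenever two of its arguments coincide:
$$
\phi(x,x,z)=\beta(x+x,z)+\beta(x,z)+\beta(x,z)=\beta(0,z)=0,
$$
since $x+x=0$ and the unit $u_0$ commutes with every homogeneous element.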
The following statement is our main result
about the general structure of a twisted
group algebra $\mathcal{A}=(\mathbb{K}[(\mathbb{Z}_2)^n],f)$.
We formulate this result in the slightly more general
context of $(\mathbb{Z}_2)^n$-graded quasialgebras.
\begin{thm}
\label{AlphMainTh}
Given a $(\mathbb{Z}_2)^n$-graded quasialgebra $\mathcal{A}$,
the following conditions are equivalent.
(i)
The function $\phi$ is symmetric.
(ii)
The algebra $\mathcal{A}$ has a generating function.
\end{thm}
\noindent
This theorem will be proved in Section \ref{ExProoS}.
It is now natural to ask under what condition
a function $\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ is a generating function
for some twisted group algebra.
The following statement provides a necessary and sufficient condition.
\begin{prop}
\label{AlpDeg3}
Given a function $\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$,
there exists a twisted group algebra~$\mathcal{A}$ such that $\alpha$
is a generating function of $\mathcal{A}$, if and only if
$\alpha$ is a polynomial of degree $\leq3$.
\end{prop}
\noindent
This proposition will be proved in Section \ref{CubSecG}.
Furthermore, we will show in Section \ref{Unisex} that
the generating function can be chosen in a canonical way.
Theorem \ref{AlphMainTh} has a number of consequences.
In particular, it implies two more important properties of $\phi$.
The function $\phi$ is called \textit{trilinear} if it satisfies
\begin{equation}
\label{AddiT}
\phi(x+y,z,t)=\phi(x,z,t)+\phi(y,z,t),
\end{equation}
and similarly in each argument.
The function $\phi$ is \textit{alternate} if it satisfies
\begin{equation}
\label{AlT}
\phi(x,x,y)=\phi(x,y,x)=\phi(y,x,x)=0,
\end{equation}
for all $x,y\in(\mathbb{Z}_2)^n$.
Let us stress that an algebra satisfying \eqref{AlT}
is \textit{graded-alternative}, i.e.,
$$
u_x\cdot(u_x\cdot{}u_y)=u_x^2\cdot{}u_y
\qquad
(u_y\cdot{}u_x)\cdot{}u_x=u_y\cdot{}u_x^2,
$$
for all homogeneous elements $u_x$ and~$u_y$.
This does not imply that the algebra is alternative.
Let us mention that alternative graded quasialgebras were classified in
\cite{AE}.
The following result is a consequence of Theorem \ref{AlphMainTh} and Proposition \ref{AlpDeg3}.
\begin{cor}
\label{TriCol}
If the function $\phi$ is symmetric, then $\phi$ is trilinear and alternate.
\end{cor}
\noindent
This corollary will be proved
in Section \ref{CubSecG}.
Our next goal is to study two series of algebras
with symmetric function $\phi=\delta{}f$.
Let us notice that the Cayley-Dickson algebras are not of this type,
cf. \cite{AM}.
\subsection{The generating functions
of the algebras $\mathbb{O}_{n}$ and $\mathbb{M}_{n}$}\label{DefAlgNash}
We already defined the complex algebras $\mathbb{O}_{n}$ and $\mathbb{M}_{n}$,
with $n\geq3$, and the real algebras $\mathbb{O}_{p,q}$ and $\mathbb{M}_{p,q}$;
see the Introduction, formul{\ae} (\ref{NashProd}) and (\ref{ForgProd}).
Let us now calculate the associated function $\phi=\delta{}f$,
which is exactly the same for $f=f_\mathbb{O}$ and $f=f_\mathbb{M}$.
One obtains
$$
\phi(x,y,z)=\sum_{i\not=j\not=k}x_iy_jz_k.
$$
This function is symmetric in $x,y,z$ and Theorem \ref{AlphMainTh} implies that
the algebras $\mathbb{O}_n$ and $\mathbb{M}_{n}$ have generating functions.
The explicit formul{\ae} are as follows:
\begin{equation}
\label{NashAlp}
\alpha_\mathbb{O}(x)=\sum_{1\leq{}i<j<k\leq{}n}
x_ix_jx_k+
\sum_{1\leq{}i<j\leq{}n}\,x_ix_j+
\sum_{1\leq{}i\leq{}n}x_i,
\end{equation}
and
\begin{equation}
\label{NashAlpBis}
\alpha_\mathbb{M}(x)=\sum_{1\leq{}i<j<k\leq{}n}
x_ix_jx_k+
\sum_{1\leq{}i\leq{}n}x_i.
\end{equation}
Note that the generating functions $\alpha_\mathbb{O}$ and $\alpha_\mathbb{M}$
are $\mathfrak{S}_n$-\textit{invariant}
with respect to the natural action of the group of permutations $\mathfrak{S}_n$ on~$(\mathbb{Z}_2)^n$.
Thanks to the $\mathfrak{S}_n$-invariance, we can give a very simple description
of the above functions.
Denote by $|x|$ the \textit{weight} of $x\in\left(\mathbb{Z}_2\right)^n$
(i.e., the number of 1 entries in $x$ written as an $n$-tuple of 0 and 1).
The above generating functions, together
with that of the Clifford algebras, depend only on $|x|$ and are 4-periodic:
\begin{equation}
\label{GenFuncTab}
\setlength{\extrarowheight}{3pt}
\begin{array}{|c||c|c|c|c|c|c|c|c|c}
|x| & 1 & 2 & 3 &4 &5 &6 &7 &8 &\cdots\\
\hline
\hline
\alpha_{\mathit{C}\ell} & 1 & 1 & 0 & 0 & 1 &1 &0 &0 &\cdots\\
\hline
\alpha_\mathbb{O} & 1 & 1 & 1 & 0 & 1& 1 & 1 & 0 &\cdots\\
\hline
\alpha_\mathbb{M} & 1 & 0 & 0 & 0 & 1& 0 & 0 & 0 &\cdots\\
\hline
\end{array}
\end{equation}
This table provides the simplest way to use the generating functions
in calculations.
One can deduce the explicit formul{\ae} (\ref{ClAlp}), (\ref{NashAlp})
and (\ref{NashAlpBis})
directly from the table (\ref{GenFuncTab}).
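As an illustration, let $x,y,z$ be the degrees of three distinct generators, so that $|x|=|y|=|z|=1$, $|x+y|=2$ and $|x+y+z|=3$. Formula (\ref{Genalp1}) together with the table gives
$$
\beta(x,y)=\alpha_\mathbb{O}(x+y)+\alpha_\mathbb{O}(x)+\alpha_\mathbb{O}(y)=1+1+1=1,
\qquad
\alpha_\mathbb{M}(x+y)+\alpha_\mathbb{M}(x)+\alpha_\mathbb{M}(y)=0+1+1=0,
$$
so that the generators anticommute in the $\mathbb{O}$-case and commute in the $\mathbb{M}$-case.
Similarly, formula (\ref{Genalp2}) gives $\phi(x,y,z)=1$ in both cases, so that in both series distinct generators antiassociate.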
\subsection{Characterization of the algebras of the
$\mathbb{O}$- and~$\mathbb{M}$-series}\label{UniqReSec}
Let us formulate two uniqueness results
that allow us to give axiomatic definitions of the introduced algebras.
Recall that the group of permutations $\mathfrak{S}_n$ acts on $(\mathbb{Z}_2)^n$
in a natural way.
We will characterize the algebras of the
$\mathbb{O}$- and~$\mathbb{M}$-series in terms of $\mathfrak{S}_n$-invariance.
We observe that, in spite of the fact that the functions $f_\mathbb{O}$ and $f_\mathbb{M}$
are not $\mathfrak{S}_n$-invariant, the corresponding algebras are.
However, we believe that $\mathfrak{S}_n$-invariance is a technical assumption and can be relaxed,
see the Appendix for a discussion.
The first uniqueness result is formulated
directly in terms of the twisting function $f$.
We study the unital twisted algebras $\mathcal{A}=(\mathbb{K}\left[\left(\mathbb{Z}_2\right)^n\right],\,f)$
satisfying the following conditions.
\begin{enumerate}
\item
The function $f$ is a polynomial of degree 3.
\item
The algebra $\mathcal{A}$ is graded-alternative, see (\ref{AlT}).
\item
The set of relations between the generators (\ref{General}) of $\mathcal{A}$ is
invariant with respect to the action
of the group of permutations $\mathfrak{S}_n$.
\end{enumerate}
Since we will use the relations of degree 2 or 3,
the condition (3) means that we assume that the generators
either all pairwise commute or all pairwise anticommute and that
one has either
$$
u_i\cdot{}(u_j\cdot{}u_k)=(u_i\cdot{}u_j)\cdot{}u_k,
\qquad
\hbox{for all}
\qquad
i\not=j\not=k,
$$
or
$$
u_i\cdot{}(u_j\cdot{}u_k)=-(u_i\cdot{}u_j)\cdot{}u_k,
\qquad
\hbox{for all}
\qquad
i\not=j\not=k.
$$
\begin{prop}
\label{Charact}
The algebras $\mathbb{O}_n$ and $\mathbb{M}_n$ are the only
twisted $\left(\mathbb{Z}_2\right)^n$-algebras satisfying the above three conditions.
\end{prop}
\begin{proof}
Since the algebra $\mathcal{A}$ is unital, we have $f(0,.)=f(.,0)=0$.
This implies that $f$ contains no constant term and no terms depending
only on the $x$-variables (or only on the $y$-variables).
The most general twisting function $f$ of degree 3 is of the form
\begin{equation*}
\begin{array}{rcll}
f(x,y)&=&\displaystyle
\sum_{i<j<k}&\left(
\lambda^1_{ijk}\,x_ix_jy_k+\lambda^2_{ijk}\,x_iy_jx_k+\lambda^3_{ijk}\,y_ix_jx_k\right.\\
&&&\left.+\mu^1_{ijk}\,y_iy_jx_k+\mu^2_{ijk}\,y_ix_jy_k+\mu^3_{ijk}\,x_iy_jy_k\right)\\[8pt]
&&+\displaystyle\sum_{i,j}&\nu_{ij}\,x_iy_j,
\end{array}
\end{equation*}
where $\lambda^e_{ijk},\mu^e_{ijk}$ and $\nu_{ij}$ are arbitrary coefficients equal to 0 or 1.
Indeed, the expression of $f$ cannot contain the monomials $x_ix_jy_j$
and $x_iy_iy_j$ because of the condition (2).
By Lemma \ref{LemA}, adding a coboundary to $f$ gives an isomorphic algebra
(as $(\mathbb{Z}_2)^n$-graded algebras).
We may assume that for any $i<j<k$, the coefficient $\mu^1_{ijk}=0$
(otherwise, we add the coboundary of $b(x)=x_ix_jx_k$).
We now compute $\phi=\delta{}f$ and obtain:
$$
\begin{array}{rcl}
\phi(x,y,z)&=&\displaystyle
\sum_{i<j<k}\left(
(\lambda^1_{ijk}+\mu^3_{ijk})\,x_iy_jz_k+
(\lambda^2_{ijk}+\mu^3_{ijk})\,x_iz_jy_k+
(\lambda^1_{ijk}+\mu^2_{ijk})\,y_ix_jz_k\right.\\[6pt]
&&\hskip 1cm
\left.+\lambda^2_{ijk}\,y_iz_jx_k+
(\lambda^3_{ijk}+\mu^2_{ijk})\,z_ix_jy_k+
\lambda^3_{ijk}\,z_iy_jx_k\right).
\end{array}
$$
We can assume that
$$
u_i\cdot{}(u_j\cdot{}u_k)=-(u_i\cdot{}u_j)\cdot{}u_k,
\qquad
i\not=j\not=k.
$$
Indeed, if $u_i\cdot{}(u_j\cdot{}u_k)=(u_i\cdot{}u_j)\cdot{}u_k$ for some values of $i,j,k$
such that $i\not=j\not=k$, then (3) implies the
same associativity relation for all $i,j,k$.
Since $\phi$ is trilinear, this means that $\mathcal{A}$ is associative, so that $\phi=0$.
This can only happen if $\lambda^e_{ijk}=\mu^e_{ijk}=0$ for all $i,j,k$, so that $\deg\,f=2$, contradicting condition (1).
The antiassociativity relations then amount to the system of equations obtained by setting each of the six coefficients in the above expression of $\phi$ equal to 1.
This system has a unique solution $\lambda^1_{ijk}=\lambda^2_{ijk}=\lambda^3_{ijk}=1$
and $\mu^2_{ijk}=\mu^3_{ijk}=0$.
Finally, if all of the generators commute, we obtain $\nu_{ij}=\nu_{ji}$, so that
$\nu_{ij}=0$ up to a coboundary, and hence $f=f_\mathbb{M}$.
If all of the generators anticommute, then,
again up to a coboundary, we obtain $\nu_{ij}=1$ if and only if $i<j$, so that $f=f_\mathbb{O}$.
\end{proof}
The second uniqueness result is formulated
in terms of the generating function.
\begin{prop}
\label{SimSimProp}
The algebras $\mathbb{O}_n$ and $\mathbb{M}_n$
and the algebras~$\mathbb{O}_{p,q}$ and~$\mathbb{M}_{p,q}$ with $p+q=n$,
are the only non-associative twisted $(\mathbb{Z}_2)^n$-algebras over
the field of scalars $\mathbb{C}$ or $\mathbb{R}$ that admit
an $\mathfrak{S}_n$-invariant generating function.
\end{prop}
\begin{proof}
By Proposition \ref{AlpDeg3}, we know that the generating function is a polynomial of degree~$\leq3$.
Every $\mathfrak{S}_n$-invariant polynomial
$\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ of degree $\leq3$ is a linear combination
$$
\alpha=\lambda_3\alpha_3+\lambda_2\alpha_2+\lambda_1\alpha_1+\lambda_0\alpha_0
$$
of the following four
functions:
$$
\alpha_3(x)=\sum_{1\leq{}i<j<k\leq{}n}
x_ix_jx_k,
\qquad
\alpha_2(x)=\sum_{1\leq{}i<j\leq{}n}\,x_ix_j,
\qquad
\alpha_1(x)=\sum_{1\leq{}i\leq{}n}x_i,
\qquad
\alpha_0(x)=1.
$$
Since $\alpha(0)=0$, cf. Section \ref{DefGenFn}, one obtains $\lambda_0=0$.
The function $\alpha_1$ does not contribute to the quasialgebra structure
$\beta=\delta\alpha$ and $\phi=\delta_2\alpha$, so that $\lambda_1$ can be chosen arbitrarily.
Finally, $\lambda_3\not=0$ since otherwise $\phi=0$ and the corresponding
algebra is associative.
We obtain the functions $\alpha_\mathbb{O}=\alpha_3+\alpha_2+\alpha_1$ and $\alpha_\mathbb{M}=\alpha_3+\alpha_1$
as the only possible $\mathfrak{S}_n$-invariant generating functions
that define non-associative algebras.
\end{proof}
Note that relaxing the non-associativity condition $\phi\not\equiv0$ also recovers
the Clifford algebras $\mathit{C}\ell_n$ and $\mathit{C}\ell_{p,q}$, as well as the group algebra itself.
\subsection{Generators and relations}\label{GenRelSect}
Let us now give another definition of the complex algebras~$\mathbb{O}_n$ and~$\mathbb{M}_n$
and of the real algebras $\mathbb{O}_{p,q}$ and $\mathbb{M}_{p,q}$.
We use a purely algebraic approach and present our
algebras in terms of generators and relations.
Consider the generators \eqref{General}.
The generators of $\mathbb{O}_{p,q}$ and $\mathbb{M}_{p,q}$ square to $\pm1$.
More precisely,
\begin{equation}
\label{Squar}
u_i^2=
\left\{
\begin{array}{ll}
1, &i\leq{}p\\
-1,&\hbox{otherwise},
\end{array}
\right.
\end{equation}
where $1=u_{(0,\ldots,0)}$ is the unit.
For the complex algebras~$\mathbb{O}_n$ and~$\mathbb{M}_n$,
one can set $u_i^2=1$ for all $i$.
The remaining relations are independent of the signature.
The main difference between the series $\mathbb{O}$ and $\mathbb{M}$ is that
the generators \textit{anticommute} in the $\mathbb{O}$-case
and \textit{commute} in the $\mathbb{M}$-case:
\begin{equation}
\label{AntiComCom}
u_i\cdot{}u_j=-u_j\cdot{}u_i
\quad
\hbox{in}\quad
\mathbb{O}_n,\;\mathbb{O}_{p,q}
\qquad
u_i\cdot{}u_j=u_j\cdot{}u_i
\quad
\hbox{in}\quad
\mathbb{M}_n,\;\mathbb{M}_{p,q}.
\end{equation}
The third-order relations are determined by the function $\phi$
and therefore these relations are the same for both series:
\begin{eqnarray}
\label{Alternass}
u_i\cdot{}(u_i\cdot{}u_j)&=&u_i^2\cdot{}u_j,\\[4pt]
\label{Antiass}
u_i\cdot{}(u_j\cdot{}u_k)&=&-(u_i\cdot{}u_j)\cdot{}u_k,
\end{eqnarray}
where $i\not=j\not=k$ in the second relation.
Note that the antiassociativity relation (\ref{Antiass})
is the reason why the algebras of the $\mathbb{M}$-series,
although generated by commuting elements,
can nevertheless be simple.
Recall that a Clifford algebra is an algebra with $n$ anticommuting generators satisfying
the relations (\ref{Squar}) and the identity of associativity.
We will now give a very similar definition of the algebras
$\mathbb{O}_n$ and $\mathbb{M}_n$ (as well as $\mathbb{O}_{p,q}$ and $\mathbb{M}_{p,q}$).
The associativity is replaced by the quasialgebra identity.
Define a family of algebras $\mathcal{A}$ with $n$ generators
$u_1,\ldots,u_n$.
Consider the monoid $X_n$ of non-associative monomials in $u_i$
and define a function $\phi:X_n\times{}X_n\times{}X_n\to\mathbb{Z}_2$
satisfying the following two properties:
\begin{enumerate}
\item
$
\phi(u_i,\,u_j,\,u_k)=
\left\{
\begin{array}{rl}
1,& \hbox{if}\; i\not=j\not=k,\\
0,& \hbox{otherwise}.
\end{array}
\right.
$
\medskip
\item
$\phi(u\cdot{}u',\,v,\,w)=\phi(u,\,v,\,w)+\phi(u',\,v,\,w),$ and similar in each variable.
\end{enumerate}
Such a function exists and is unique.
Moreover, $\phi$ is symmetric.
Define an algebra $\mathcal{A}^\mathbb{C}$ or $\mathcal{A}^\mathbb{R}$ (complex or real),
generated by $u_1,\ldots,u_n$ that satisfies
the relations (\ref{Squar}) together with one of the following two relations.
All the generators either anticommute: $u_i\cdot{}u_j=-u_j\cdot{}u_i$, where $i\not=j$, or
commute: $u_i\cdot{}u_j=u_j\cdot{}u_i$, where $i\not=j$.
We will also assume the identity
$$
u\cdot(v\cdot{}w)=(-1)^{\phi(u,v,w)}\,(u\cdot{}v)\cdot{}w,
$$
for all monomials $u,v,w$.
\begin{prop}
\label{InvSnProp}
If the generators anticommute, then $\mathcal{A}^\mathbb{C}\cong\mathbb{O}_n$
and $\mathcal{A}^\mathbb{R}\cong\mathbb{O}_{p,q}$.
If the generators commute, then $\mathcal{A}^\mathbb{C}\cong\mathbb{M}_n$
and $\mathcal{A}^\mathbb{R}\cong\mathbb{M}_{p,q}$.
\end{prop}
\begin{proof}
By definition of $\mathcal{A}=\mathcal{A}^\mathbb{C}$ (resp. $\mathcal{A}^\mathbb{R}$), the elements
$$
u_{i_1\ldots{}i_k}=
u_{i_1}\cdot(u_{i_2}\cdot(\cdots(u_{i_{k-1}}\cdot{}u_{i_k})\!\cdots\!)),
$$
where $i_1<i_2<\cdots<i_k$, form a basis of $\mathcal{A}$.
Therefore, $\dim\mathcal{A}=2^n$.
The linear map sending the generators of $\mathcal{A}$
to the generators (\ref{General}) of $\mathbb{O}_n$ or $\mathbb{M}_n$
($\mathbb{O}_{p,q}$ or $\mathbb{M}_{p,q}$, respectively) is a homomorphism,
since the function~$\phi$ corresponding to these algebras is symmetric and trilinear.
It sends the above basis of $\mathcal{A}$ to that of $\mathbb{O}_n$ or $\mathbb{M}_n$
($\mathbb{O}_{p,q}$ or $\mathbb{M}_{p,q}$, respectively), and is therefore an isomorphism.
\end{proof}
\section{The series $\mathbb{O}_n$ and $\mathbb{M}_n$: properties}\label{TheMainSBis}
In this section, we study properties of the algebras of the series $\mathbb{O}$ and $\mathbb{M}$.
The main result is Theorem \ref{SimProp} providing
a criterion of simplicity.
We describe the first algebras of the series and give the list of isomorphisms
in lower dimensions.
We also define a non-oriented graph encoding the structure of the algebra.
Finally, we formulate open problems.
\subsection{Criterion of simplicity}
The most important property of the algebras under study is simplicity.
Let us stress that
we understand simplicity in the usual sense:
an algebra is called \textit{simple} if it contains no proper nonzero ideal.
Note that, while in the case of commutative associative algebras
simplicity and division are equivalent notions,
in our situation the notion of simplicity is much weaker.
\begin{rem}
This notion should not be confused with that of a graded-simple algebra.
The latter notion is much weaker and means that the algebra contains no graded ideal;
however, it is rather a property of the grading and not
of the algebra itself.
\end{rem}
The following statement is the second main result of this paper.
We will treat the complex and the real cases independently.
\begin{thm}
\label{SimProp}
(i)
The algebra $\mathbb{O}_n$ (resp. $\mathbb{M}_n$)
is simple if and only if $n\not=4m$ (resp. $n\not=4m+2$).
One also has
$$
\mathbb{O}_{4m}\cong\mathbb{O}_{4m-1}\oplus\mathbb{O}_{4m-1},
\qquad
\mathbb{M}_{4m+2}\cong\mathbb{M}_{4m+1}\oplus\mathbb{M}_{4m+1}.
$$
\noindent
(ii) The algebra $\mathbb{O}_{p,q}$ is simple if and only if one of the following conditions is satisfied:
\begin{enumerate}
\item
$p+q\not=4m$,
\item $p+q=4m$ and $p,q$ are odd;
\end{enumerate}
(iii) The algebra $\mathbb{M}_{p,q}$ is simple if and only if one of the following conditions is satisfied:
\begin{enumerate}
\item
$p+q\not=4m+2$,
\item $p+q=4m+2$ and $p,q$ are odd.
\end{enumerate}
\end{thm}
This theorem will be proved in Section \ref{ProoSimProp}.
The arguments developed in the proof of Theorem \ref{SimProp}
allow us to link the complex and the real algebras
in the particular cases below.
Let us use the notation $\mathbb{O}_n^\mathbb{R}$ and $\mathbb{M}_n^\mathbb{R}$
when we consider the algebras $\mathbb{O}_n$ and $\mathbb{M}_n$ as
$2^{n+1}$-dimensional real algebras.
We have the following statement.
\begin{cor}
\label{IsoPr}
(i)
If $p+q=4m$ and $p,q$ are odd, then $\mathbb{O}_{p,q}\cong\mathbb{O}_{p+q-1}^\mathbb{R}$.
(ii)
If $p+q=4m+2$ and $p,q$ are odd, then $\mathbb{M}_{p,q}\cong\mathbb{M}_{p+q-1}^\mathbb{R}$.
\end{cor}
\noindent
This statement is proved in Section \ref{Thmii}.
\begin{rem}
To explain the meaning of the above statement,
we notice that, in the case where the complex algebras
split into a direct sum, the real algebras can still be simple.
In this case, all the simple real algebras are isomorphic to
the complex algebra with $n-1$ generators.
In particular, all the algebras $\mathbb{O}_{p,q}$ and $\mathbb{O}_{p',q'}$
with $p+q=p'+q'=4m$ and $p$ and $p'$ odd are isomorphic
to each other (and similarly for the $\mathbb{M}$-series).
A very similar property holds for the Clifford algebras.
\end{rem}
Theorem \ref{SimProp} immediately implies the following.
\begin{cor}
\label{NoisomProp}
The algebras $\mathbb{O}_{n}$ and $\mathbb{M}_n$ with even $n$ are not isomorphic.
\end{cor}
\noindent
This implies, in particular, that the real algebras $\mathbb{O}_{p,q}$
and $\mathbb{M}_{p',q'}$ with $p+q=p'+q'=2m$ are not isomorphic.
\subsection{The first algebras of the series}
Let us consider the first examples of the introduced algebras.
It is natural to ask if some of the introduced algebras can be isomorphic to the other ones.
\begin{prop}
\label{Sporadic}
(i)
For $n=3$, one has:
$$
\mathbb{O}_{3,0}\cong\mathbb{O}_{2,1}\cong\mathbb{O}_{1,2}\not\cong\mathbb{O}_{0,3}.
$$
The first three algebras are isomorphic to the algebra of split-octonions,
while $\mathbb{O}_{0,3}\cong\mathbb{O}$.
(ii)
For $n=4$, one has:
$$
\mathbb{O}_{4,0}\cong\mathbb{O}_{2,2}\cong\mathbb{O}_{3,0}\oplus\mathbb{O}_{3,0}\,,
\qquad
\mathbb{O}_{0,4}\cong\mathbb{O}_{0,3}\oplus\mathbb{O}_{0,3}.
$$
In particular, $\mathbb{O}_{4,0}$ and $\mathbb{O}_{2,2}$ are not isomorphic to $\mathbb{O}_{0,4}$.
\end{prop}
\begin{proof}
The above isomorphisms are combinations of the general
isomorphisms of types (a) and (b), see Section \ref{IsomSec}.
The involved automorphisms of $(\mathbb{Z}_2)^3$ and $(\mathbb{Z}_2)^4$ are
$$
\begin{array}{lcl}
x_1'=x_1,&&x_1'=x_1,\\[4pt]
x_2'=x_1+x_2,&&x_2'=x_1+x_2,\\[4pt]
x_3'=x_1+x_2+x_3, \;\;\;\;\;\;\;\;\;&&x_3'=x_1+x_2+x_3,\\[4pt]
&&x_4'=x_1+x_2+x_3+x_4.
\end{array}
$$
Then, the twisting functions of the above isomorphic algebras
coincide modulo coboundary.
\end{proof}
Let us notice that the very first algebras of the $\mathbb{O}$-series
are all obtained as a combination of the algebras
of octonions and split-octonions.
In this sense, we do not obtain new algebras among them.
In the $\mathbb{M}$-case, we have the following isomorphism.
\begin{prop}
\label{M3Prop}
One has
$$
\mathbb{M}_{1,2}\cong\mathbb{M}_{0,3}.
$$
\end{prop}
\begin{proof}
This isomorphism can be obtained by the following automorphism of $(\mathbb{Z}_2)^3$.
$$
x_1'=x_1+x_2+x_3,\quad
x_2'=x_2,\quad
x_3'=x_3.
$$
This algebra is not isomorphic to $\mathbb{O}_{0,3}$ or $\mathbb{O}_{3,0}$.
\end{proof}
The next algebras,
$\mathbb{O}_5$ and $\mathbb{M}_5$, as well as all of the real algebras
$\mathbb{O}_{p,q}$ and $\mathbb{M}_{p,q}$
with $p+q=5$, are not combinations of the classical algebras.
Since these algebras are simple, they are not direct sums
of lower-dimensional algebras.
The next statement shows that these algebras are not
tensor products of classical algebras.
Note that the only ``candidate'' for an isomorphism of this kind
is the tensor product of the octonion algebra and the algebra of complex
$(2\times2)$-matrices.
\begin{prop}
\label{IndProp}
Neither of the algebras $\mathbb{O}_5$ and $\mathbb{M}_5$ is isomorphic to the tensor product
of the octonion algebra~$\mathbb{O}$ and the algebra $\mathbb{C}[2]$ of complex
$(2\times2)$-matrices:
$$
\mathbb{O}_5\not\cong\mathbb{O}\otimes\mathbb{C}[2],
\qquad
\mathbb{M}_5\not\cong\mathbb{O}\otimes\mathbb{C}[2].
$$
\end{prop}
\begin{proof}
Let us consider the element $u=u_{(1,1,1,1,0)}$ in $\mathbb{O}_5$
and the element $u=u_{(1,1,0,0,0)}$ in $\mathbb{M}_5$.
Each of these elements has a large centralizer $Z_u$ with $\dim{}Z_u=24$.
Indeed, the above element of $\mathbb{O}_5$
commutes with itself and with every homogeneous element $u_x$
of weight $|x|=0,1,3,5$, as well as with 6 elements of weight $|x|=2$.
The centralizer $Z_u$ is the vector space spanned by these 24 homogeneous elements,
and similarly in the $\mathbb{M}_5$ case.
We will show that the algebra $\mathbb{O}\otimes\mathbb{C}[2]$ does not contain
such an element.
Assume, \textit{ad absurdum}, that an element $u\in\mathbb{O}\otimes\mathbb{C}[2]$
has a centralizer of dimension~$\geq24$.
Consider the subspace $\mathbb{O}\otimes1+1\otimes\mathbb{C}[2]$ of the algebra $\mathbb{O}\otimes\mathbb{C}[2]$.
It is 11-dimensional (the two summands share the unit $1\otimes1$), so that its intersection with $Z_u$ is of dimension at least 3.
It follows that $Z_u$ contains at least two independent elements of the form
$$
z_1=e_1\otimes1+1\otimes{}m_1,
\qquad
z_2=e_2\otimes1+1\otimes{}m_2,
$$
where $e_1$ and $e_2$ are pure imaginary octonions and
$m_1$ and $m_2$ are traceless matrices.
Without loss of generality, we can assume that one of
the following holds
\begin{enumerate}
\item
the generic case:
$e_1,e_2$ and
$m_1,m_2$ are linearly independent and
pairwise anticommute,
\item
$e_2=0$ and $m_1,m_2$ are linearly independent and anticommute,
\item
$m_2=0$ and $e_1,e_2$ are linearly independent and anticommute.
\end{enumerate}
We will give the details of the proof in the case (1).
Let us write
$$
u=u_0\otimes 1 +u_1\otimes m_1 +u_2\otimes m_2 +u_{12}\otimes m_1m_2
$$
where $u_0,u_1,u_2,u_{12}\in \mathbb{O}$.
\begin{lem}
\label{TechnLem}
The element $u$ is a linear combination of the following two elements:
$$
1\otimes1,
\qquad
e_1\otimes{}m_1+e_2\otimes{}m_2-e_1e_2\otimes{}m_1m_2.
$$
\end{lem}
\begin{proof}
Denote by $[\;,\;]$ the usual commutator; one has
$$
\begin{array}{rcl}
[u,z_1]&=&[u_0,e_1]\otimes1+[u_1,e_1]\otimes{}m_1+
\left([u_2,e_1]-2u_{12}\right)\otimes{}m_2+
\left([u_{12},e_1]-2u_{2}\right)\otimes{}m_1m_2,\\[8pt]
[u,z_2]&=&[u_0,e_2]\otimes1+[u_2,e_2]\otimes{}m_2+
\left([u_1,e_2]+2u_{12}\right)\otimes{}m_1+
\left([u_{12},e_2]+2u_{1}\right)\otimes{}m_1m_2.
\end{array}
$$
One obtains $[u_0, e_1]=[u_0,e_2]=0$, so that $u_0$ is proportional to 1.
Furthermore, one also obtains $[u_1,e_1]=0$ and $[u_2,e_2]=0$ that implies
$$
u_1=\lambda_1\,e_1+\mu_1\,1,
\qquad
u_2=\lambda_2\,e_2+\mu_2\,1.
$$
The equations $[u_2,e_1]-2u_{12}=0$ and $[u_1,e_2]+2u_{12}=0$ give
$$
u_{12}=\lambda_2\,e_2e_1
\qquad\hbox{and}\qquad
u_{12}=-\lambda_1\,e_1e_2,
$$
hence $\lambda_1=\lambda_2$, since $e_1$ and $e_2$ anticommute by assumption.
Finally, the equations $[u_{12},e_1]-2u_{2}=0$ and $[u_{12},e_2]+2u_{1}=0$
lead to $\mu_1=\mu_2=0.$
Hence the lemma.
\end{proof}
In the case (1), one obtains a contradiction because of the following statement.
\begin{lem}
\label{TechnLemBis}
One has $\dim{}Z_u\leq22$.
\end{lem}
\begin{proof}
Lemma \ref{TechnLem} implies that the element $u$
belongs to a subalgebra
$$
\mathbb{C}[4]=\mathbb{C}[2]\otimes\mathbb{C}[2]\subset\mathbb{O}\otimes\mathbb{C}[2].
$$
We use the well-known classical fact that, for
an arbitrary non-scalar element $u\in\mathbb{C}[4]$, the dimension of the centralizer inside $\mathbb{C}[4]$:
$$
\{X\in\mathbb{C}[4]\,|\,[X,u]=0\}
$$
is at most 10 (i.e., the codimension is at least 6).
Furthermore, the 4-dimensional space
of the elements $e_3\otimes1$, where $e_3\in\mathbb{O}$ anticommutes with $e_1,e_2$
is transversal to~$Z_u$.
It follows that the codimension of $Z_u$ is at least 10.
Hence the lemma.
\end{proof}
The cases (2) and (3) are less involved.
In the case (2), $u$ is proportional to
$e_1\otimes1$ and one checks that $Z_u=e_1\otimes\mathbb{C}[2]\oplus1\otimes\mathbb{C}[2]$
is of dimension 8.
In the case (3), $u$ is proportional to $1\otimes{}m_1$ so that
$Z_u=\mathbb{O}\otimes1\oplus\mathbb{O}\otimes{}m_1$ is of dimension 16.
In each case, we obtain a contradiction.
\end{proof}
\subsection{The commutation graph}
We associate a non-oriented graph, that we call the commutation graph, to every
twisted group algebra
in the following way.
The vertices of the graph are the elements of $\left(\mathbb{Z}_2\right)^n$.
The (non-oriented) edges $x-y$ join the elements $x$ and $y$ such that
$u_x$ and $u_y$ anticommute.
\begin{prop}
\label{graphprop}
Given a complex twisted group algebra $\mathcal{A}=(\mathbb{C}[(\mathbb{Z}_2)^n],f)$ with symmetric function $\phi=\delta{}f$,
the commutation graph completely determines the
structure of $\mathcal{A}$.
\end{prop}
\begin{proof}
In the case where $\phi$ is symmetric, formula (\ref{PhiBet}) and
Proposition \ref{VeryFirstProp} imply that the graph determines the
structure of the algebra $\mathcal{A}$, up to signature.
\end{proof}
This means that two complex algebras $\mathcal{A}$ and $\mathcal{A}'$ corresponding to the same
commutation graph are isomorphic.
Conversely, two algebras, $\mathcal{A}$ and $\mathcal{A}'$
with different commutation graphs,
are not isomorphic as $(\mathbb{Z}_2)^n$-graded algebras.
However, we do not know if there might exist an isomorphism
that does not preserve the grading.
\begin{ex}
The algebra $\mathbb{M}_{3}$ is the first non-trivial algebra of the series $\mathbb{M}_{n}$.
The corresponding commutation graph is presented in Figure \ref{M3Alg}, together
with the graph of the Clifford algebra $\mathit{C}\ell_3$.
\begin{figure}[hbtp]
\includegraphics[width=11cm]{Fano.pdf}
\caption{The algebras $\mathit{C}\ell_3$ and $\mathbb{M}_3$.}
\label{M3Alg}
\end{figure}
\noindent
The algebra $\mathit{C}\ell_3$ is not simple: $\mathit{C}\ell_3=\mathbb{C}[2]\oplus\mathbb{C}[2]$.
It contains a central element $u_{(1,1,1)}$ corresponding to a
``singleton'' in Figure \ref{M3Alg}.
\end{ex}
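The singleton in Figure \ref{M3Alg} can be confirmed computationally. The following illustrative Python sketch uses the standard commutation rule for monomials in a Clifford algebra, $u_xu_y=(-1)^{|x||y|+\langle{}x,y\rangle}u_yu_x$:

```python
from itertools import product

def beta_Cl(x, y):
    """Clifford sign rule: u_x u_y = (-1)^(|x||y| + <x,y>) u_y u_x."""
    return (sum(x) * sum(y) + sum(a & b for a, b in zip(x, y))) % 2

n = 3
V = list(product([0, 1], repeat=n))
# edge x-y iff u_x and u_y anticommute
edges = {(x, y) for x in V for y in V if x < y and beta_Cl(x, y) == 1}

# the central element u_(1,1,1) of Cl_3 is an isolated vertex ("singleton")
assert all((1, 1, 1) not in e for e in edges)
assert len(edges) > 0
```

Indeed, $\beta_{\mathit{C}\ell}((1,1,1),y)=3|y|+|y|\equiv0$ for every $y$, so no edge touches $(1,1,1)$.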
\begin{rem}
(a)
The defined planar graph is \textit{dual trivalent},
that is, every edge,
represented by a projective line or a circle
(see Figure \ref{M3Alg}), contains exactly 3 elements.
Indeed, any three homogeneous elements
$u_x,u_y$ and $u_{x+y}$ either commute or anticommute
with each other.
This follows from the trilinearity of $\phi$.
(b)
We also notice that the superposition of the graphs
of $\mathit{C}\ell_n$ and $\mathbb{M}_n$ is precisely
the graph of the algebra $\mathbb{O}_n$.
We thus obtain the following ``formula'': $\mathit{C}\ell+\mathbb{M}=\mathbb{O}$.
\end{rem}
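The superposition formula $\mathit{C}\ell+\mathbb{M}=\mathbb{O}$ can be verified by brute force for small $n$. The sketch below is illustrative: it assumes the $\alpha$-values by weight from Table \eqref{GenFuncTab} and the standard Clifford sign rule, and checks that $\beta_{\mathbb{O}}=\beta_{\mathit{C}\ell}+\beta_{\mathbb{M}}$ pointwise, which is exactly the mod-2 superposition of the edge sets:

```python
from itertools import product

# alpha depends only on the weight |x| (cf. Table GenFuncTab):
# O-series: alpha(x) = 0 iff |x| = 0 (mod 4);
# M-series: alpha(x) = 1 iff |x| = 1 (mod 4).
alpha_O = lambda x: 0 if sum(x) % 4 == 0 else 1
alpha_M = lambda x: 1 if sum(x) % 4 == 1 else 0

def beta(alpha, x, y):
    """beta(x,y) = alpha(x+y) + alpha(x) + alpha(y) over Z_2."""
    xy = tuple(a ^ b for a, b in zip(x, y))
    return (alpha(xy) + alpha(x) + alpha(y)) % 2

def beta_Cl(x, y):
    # Standard Clifford sign rule: u_x u_y = (-1)^(|x||y| + <x,y>) u_y u_x.
    return (sum(x) * sum(y) + sum(a & b for a, b in zip(x, y))) % 2

n = 4
V = list(product([0, 1], repeat=n))
# superposition of the graphs: beta_O = beta_Cl + beta_M (mod 2)
assert all((beta_Cl(x, y) + beta(alpha_M, x, y)) % 2 == beta(alpha_O, x, y)
           for x in V for y in V)
```

The mod-2 sum also accounts for the cancellation of edges observed in the example of $\mathbb{M}_4$ and $\mathit{C}\ell_4$ below.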
\begin{figure}[hbtp]
\includegraphics[width=11cm]{kheops.pdf}
\caption{The commutation graph of $\mathbb{M}_4$.}
\label{M4Alg}
\end{figure}
\begin{ex}
The commutation graph of the algebra $\mathbb{M}_4$
is presented in Figure \ref{M4Alg}.
The commutation graph of the Clifford algebra $\mathit{C}\ell_4$ is presented in Figure \ref{C4Alg}.
Note that both algebras, $\mathbb{M}_4$ and $\mathit{C}\ell_4$, are simple.
The superposition of the graphs of $\mathbb{M}_4$ and $\mathit{C}\ell_4$ cancels
all the edges from $(1,1,1,1)$.
Therefore, the element $(1,1,1,1)$ is a singleton in the graph of
the algebra $\mathbb{O}_4$.
This corresponds to the fact that $u_{(1,1,1,1)}$ is central in $\mathbb{O}_4$;
in particular, $\mathbb{O}_4$ is not simple.
\begin{figure}[hbtp]
\includegraphics[width=11cm]{boscop.pdf}
\caption{The commutation graph of $\mathit{C}\ell_4$.}
\label{C4Alg}
\end{figure}
\end{ex}
The planar graph provides a nice way
to visualize the algebra $(\mathbb{K}\left[(\mathbb{Z}_2)^n\right],f)$.
\section{Generating functions: existence and uniqueness}\label{AntiSex}
In this section we prove Theorem \ref{AlphMainTh} and its corollaries.
Our main tool is the notion of generating function.
We show that the structure of all the algebras we consider in this paper
is determined (up to signature) by a single function of one argument
$\alpha:\left(\mathbb{Z}_2\right)^n\to\mathbb{Z}_2.$
This of course simplifies the understanding of these algebras.
\subsection{Existence of a generating function}\label{ExProoS}
Given a $(\mathbb{Z}_2)^n$-graded quasialgebra, let us prove that
there exists a generating function $\alpha$ if and only
if the ternary map $\phi$ is symmetric.
The condition that $\phi$ is symmetric is of course necessary
for the existence of $\alpha$, cf. formula (\ref{Genalp2}); let us prove that this condition is,
indeed, sufficient.
\begin{lem}
\label{dbetZero}
If $\phi$ is symmetric, then $\beta$ is a 2-cocycle: $\delta\beta=0$.
\end{lem}
\begin{proof}
If $\phi$ is symmetric then the identity (\ref{PhiBet}) is satisfied.
In particular, the sum of the two expressions of $\phi$ gives:
$$
\beta(x+y,\,z)+\beta(x,\,y+z)+\beta(x,y)+\beta(y,z)=0
$$
which is nothing but the 2-cocycle condition $\delta\beta=0$.
\end{proof}
Using the information about the second cohomology
space $H^2((\mathbb{Z}_2)^n;\mathbb{Z}_2)$, as in the proof of Proposition \ref{VeryFirstProp},
we deduce that $\beta$ is of the form
$$
\beta(x,y)=\delta\alpha(x,y)+
\sum_{i\in{}I}x_iy_i+
\sum_{(k,\ell)\in{}J}x_ky_\ell,
$$
where $\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ is an arbitrary function and
where $I$ is a subset of $\{1,\ldots,n\}$ and
$J$ is a subset of $\{(k,\ell)\,|\,k<\ell\}$.
Indeed, the second and the third terms are the most general non-trivial
2-cocycles on $(\mathbb{Z}_2)^n$ with coefficients in $\mathbb{Z}_2$.
Furthermore, the function $\beta$ satisfies two properties:
it is symmetric and $\beta(x,x)=0$.
The second property implies that $\beta$ does not contain
the terms $x_iy_i$.
The symmetry of $\beta$ means that whenever there is a term
$x_ky_\ell$, there is $x_\ell{}y_k$,
as well.
But, $x_ky_\ell+x_\ell{}y_k$ is a coboundary of $x_kx_\ell$.
We have proved that $\beta=\delta\alpha$, which is equivalent to the identity \eqref{Genalp1}.
Finally,
using the equality \eqref{PhiBet}, we also obtain the identity \eqref{Genalp2}.
Theorem \ref{AlphMainTh} is proved.
\subsection{Generating functions are cubic}\label{CubSecG}
In this section, we prove Proposition \ref{AlpDeg3}.
We show that a function $\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ is a generating function of a
$(\mathbb{Z}_2)^n$-graded quasialgebra if and only if $\alpha$ is a polynomial
of degree $\leq3$.
The next statement is an application of the pentagonal diagram
in Figure~\ref{5and6}.
\begin{lem}
\label{KeyLem}
A generating function $\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ satisfies the
equation $\delta_3\alpha=0$, where the map $\delta_3$ is defined by
\begin{equation}
\label{DelTrequat}
\begin{array}{rl}
\delta_3\alpha\,(x,y,z,t):=&
\alpha(x+y+z+t)\\[4pt]
&+\alpha(x+y+z)+\alpha(x+y+t)+\alpha(x+z+t)+\alpha(y+z+t)\\[4pt]
&+\alpha(x+y)\!+\!\alpha(x+z)\!+\!\alpha(x+t)\!+\!\alpha(y+z)\!+\!\alpha(y+t)\!+\!\alpha(z+t)\\[4pt]
&+\alpha(x)+\alpha(y)+\alpha(z)+\alpha(t).
\end{array}
\end{equation}
\end{lem}
\begin{proof}
This follows immediately from the fact that $\phi$ is a 3-cocycle:
substitute (\ref{Genalp2}) into the equation $\delta\phi=0$ to obtain
$\delta_3\alpha=0$.
\end{proof}
The following statement characterizes polynomials of degree $\leq3$.
\begin{lem}
\label{ThOrdProp}
A function $\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ is a polynomial of degree $\leq3$
if and only if $\delta_3\alpha=0$.
\end{lem}
\begin{proof}
This is elementary, see also \cite{War,DV}.
\end{proof}
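Both lemmas can be checked by a direct computation. The following illustrative Python sketch verifies that a degree-3 polynomial over $\mathbb{Z}_2$ is annihilated by $\delta_3$, while a degree-4 monomial is not (the sample polynomials are arbitrary choices):

```python
from itertools import product

def delta3(alpha, x, y, z, t):
    """The map delta_3 of equation (DelTrequat), computed over Z_2."""
    add = lambda *vs: tuple(sum(c) % 2 for c in zip(*vs))
    args = [add(x, y, z, t),
            add(x, y, z), add(x, y, t), add(x, z, t), add(y, z, t),
            add(x, y), add(x, z), add(x, t), add(y, z), add(y, t), add(z, t),
            x, y, z, t]
    return sum(alpha(v) for v in args) % 2

n = 4
V = list(product([0, 1], repeat=n))

cubic = lambda x: (x[0]*x[1]*x[2] + x[1]*x[3] + x[0]) % 2   # degree 3
quartic = lambda x: x[0]*x[1]*x[2]*x[3]                     # degree 4

assert all(delta3(cubic, *q) == 0 for q in product(V, repeat=4))
assert any(delta3(quartic, *q) == 1 for q in product(V, repeat=4))
```

For instance, evaluating $\delta_3$ of $x_1x_2x_3x_4$ at the four standard generators, only the term $\alpha(x+y+z+t)$ survives, giving $1$.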
Proposition \ref{AlpDeg3} is proved.
\medskip
Let us now prove Corollary \ref{TriCol}.
If the map $\phi$ is symmetric, then Theorem \ref{AlphMainTh}
implies the existence of the generating function $\alpha$.
The map $\phi$ is then given by (\ref{Genalp2}).
One checks by an elementary calculation that
$$
\phi(x+y,z,t)+\phi(x,z,t)+\phi(y,z,t)=\delta_3\alpha(x,y,z,t).
$$
By Lemma \ref{KeyLem}, one has $\delta_3\alpha=0$.
It follows that $\phi$ is trilinear.
Furthermore, from \eqref{PhiBet}, we deduce that $\phi$ is alternating.
Corollary \ref{TriCol} is proved.
\subsection{Uniqueness of the generating function}\label{Unisex}
Let us show that there is a canonical way to choose a generating function.
\begin{lem}
\label{unipro}
(i)
Given a $(\mathbb{Z}_2)^n$-graded quasialgebra $\mathcal{A}$ with a generating function,
one can choose the generating function in such a way that it
satisfies
\begin{equation}
\label{NormAlp}
\left\{
\begin{array}{rl}
\alpha(0)=0,&\\[4pt]
\alpha(x)=1, & |x|=1.
\end{array}
\right.
\end{equation}
(ii)
There exists a unique generating function of $\mathcal{A}$ satisfying (\ref{NormAlp}).
\end{lem}
\begin{proof}
Part (i).
Every generating function $\alpha$ vanishes on the zero element
$0=(0,\ldots,0)$, cf. Section~\ref{DefGenFn}.
Furthermore, a generating function corresponding to a given algebra $\mathcal{A}$,
is defined up to a 1-cocycle on $(\mathbb{Z}_2)^n$.
Indeed, the functions $\beta=\delta\alpha$ and $\phi=\delta_2\alpha$
that define the quasialgebra structure do not change if one adds a 1-cocycle to $\alpha$.
Since every 1-cocycle is a linear function, we obtain
$$
\alpha(x)\sim\alpha(x)+\sum_{1\leq{}i\leq{}n}\lambda_i\,x_i.
$$
One can therefore normalize $\alpha$ in such a way that $\alpha(x)=1$ for all $x$
such that $|x|=1$.
Part (ii).
The generating function normalized in this way is unique.
Indeed, any other function, say $\alpha'$, satisfying (\ref{NormAlp}) differs from $\alpha$
by a polynomial of degree $\geq2$, so that $\alpha-\alpha'$ cannot be a 1-cocycle.
Therefore, $\beta'\not=\beta$, which means that the quasialgebra structures are different.
\end{proof}
We will assume the normalization (\ref{NormAlp}) in the sequel, whenever we
speak of \textit{the} generating function corresponding to a given algebra.
Let us now consider an algebra $\mathcal{A}$ with $n$ generators $u_1,\ldots,u_n$.
The group of permutations $\mathfrak{S}_n$ acts on $\mathcal{A}$ by permuting the generators.
\begin{cor}
\label{SimLem}
If the group of permutations $\mathfrak{S}_n$ acts on $\mathcal{A}$ by automorphisms,
then the corresponding generating function $\alpha$ is $\mathfrak{S}_n$-invariant.
\end{cor}
\begin{proof}
Let $\alpha$ be a generating function.
Since the algebra $\mathcal{A}$ is stable with respect to the $\mathfrak{S}_n$-action,
the function $\alpha\circ{}\sigma$ is again a generating function.
If, moreover, $\alpha$ satisfies~(\ref{NormAlp}), then
$\alpha\circ{}\sigma$ also satisfies this condition.
The uniqueness Lemma \ref{unipro} implies that
$\alpha\circ{}\sigma=\alpha$.
\end{proof}
Note that the converse statement holds in the complex case,
but fails in the real case.
\subsection{From the generating function to the twisting function}\label{FroSec}
Given an arbitrary polynomial map $\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ of $\deg\alpha\leq3$
such that $\alpha(0)=0$,
there is a simple way to associate a twisting function $f$
such that $(\mathbb{K}[(\mathbb{Z}_2)^n],f)$ admits $\alpha$ as a generating function.
\begin{prop}
\label{XXProp}
There exists a twisting function $f$
satisfying the property
\begin{equation}
\label{XXAlpha}
f(x,x)=\alpha(x).
\end{equation}
\end{prop}
\begin{proof}
Let us give an explicit formula for a twisting function $f$.
The procedure is linear: to every monomial in $\alpha$ we associate
a function in two variables via the following rule:
\begin{equation}
\label{Fakset}
\begin{array}{rcl}
x_ix_jx_k&\longmapsto&x_ix_jy_k+x_iy_jx_k+y_ix_jx_k,\\[4pt]
x_ix_j&\longmapsto&x_iy_j,\\[4pt]
x_i& \longmapsto & x_iy_i
\end{array}
\end{equation}
where $i<j<k$. Setting $y=x$ in each line recovers the corresponding monomial of $\alpha$
(note that $3\,x_ix_jx_k=x_ix_jx_k$ modulo 2), so that (\ref{XXAlpha}) holds.
\end{proof}
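The rule (\ref{Fakset}) and the property (\ref{XXAlpha}) can also be checked mechanically. The following Python sketch is illustrative; the sample $\alpha$ is an arbitrary choice of cubic polynomial:

```python
from itertools import product

def alpha_of(monomials, x):
    """Evaluate alpha, given as a list of monomials (tuples of indices)."""
    return sum(all(x[i] for i in m) for m in monomials) % 2

def f_from_alpha(monomials, x, y):
    """Twisting function built monomial-by-monomial via the rule (Fakset):
       x_i x_j x_k -> x_i x_j y_k + x_i y_j x_k + y_i x_j x_k,
       x_i x_j     -> x_i y_j,
       x_i         -> x_i y_i,   for i < j < k."""
    total = 0
    for m in monomials:
        if len(m) == 3:
            i, j, k = m
            total += x[i]*x[j]*y[k] + x[i]*y[j]*x[k] + y[i]*x[j]*x[k]
        elif len(m) == 2:
            i, j = m
            total += x[i]*y[j]
        else:
            (i,) = m
            total += x[i]*y[i]
    return total % 2

# a sample cubic alpha on (Z_2)^4 with alpha(0) = 0
alpha_monomials = [(0, 1, 2), (1, 3), (2,)]
# the defining property (XXAlpha): f(x, x) = alpha(x)
assert all(f_from_alpha(alpha_monomials, x, x) == alpha_of(alpha_monomials, x)
           for x in product([0, 1], repeat=4))
```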
\section{Proof of the simplicity criterion}\label{ProoSimProp}
In this section, we prove Theorem \ref{SimProp}.
We use the notation $\mathcal{A}$ to refer to any of the algebras
$\mathbb{O}_n, \mathbb{O}_{p,q}$ and $\mathbb{M}_n, \mathbb{M}_{p,q}$.
\subsection{The idea of the proof}\label{IdeaSect}
Our proof of simplicity of a twisted group algebra $\mathcal{A}$ will
be based on the following lemma.
\begin{lem}\label{ASimple}
If for every homogeneous element $u_x$ in $\mathcal{A}$
there exists an element $u_y$ in $\mathcal{A}$ such that $u_x$ and $u_y$
anticommute, then $\mathcal{A}$ is simple.
\end{lem}
\begin{proof}
Let us suppose that there exists a nonzero proper two-sided ideal $\mathcal{I}$ in $\mathcal{A}$.
Every element $u$ in $\mathcal{I}$ is a linear combination of some homogeneous elements of $\mathcal{A}$.
We write
$$
u=\lambda_1\,u_{x_1}+\cdots+\lambda_k\,u_{x_k}.
$$
Among all the elements of $\mathcal{I}$ we choose an element such that the number
$k$ of homogeneous components is the smallest possible.
We can assume that $k\geq 2$; otherwise $u$ is homogeneous, and therefore
$u^2$ is non-zero and proportional to $1$, so that $\mathcal{I}=\mathcal{A}$.
In addition, up to multiplication by $u_{x_1}$ and scalar normalization
we can assume that
$$
u=1+\lambda_2\,u_{x_2}+\cdots+\lambda_k\,u_{x_k}.
$$
If there exists an element $u_y\in \mathcal{A}$ anticommuting with $u_{x_2}$,
then $u\cdot{}u_y-u_y\cdot{}u$ is a nonzero element in $\mathcal{I}$
with a shorter decomposition into homogeneous components.
This contradicts the choice of $u$.
Therefore, $\mathcal{A}$ has no proper ideal.
\end{proof}
We now need to study central elements in $\mathcal{A}$, i.e., the elements commuting
with every element of $\mathcal{A}$.
\subsection{Central elements}
In this section we study the \textit{commutative center} $\mathcal{Z}(\mathcal{A})$ of $\mathcal{A}$, i.e.,
$$
\mathcal{Z}(\mathcal{A})=\{w\in \mathcal{A}| \;w\cdot a=a\cdot w, \;\hbox{for all}\; a\in \mathcal{A}\}.
$$
Note that, in the case where $\mathcal{A}$ admits a generating function,
formula \eqref{PhiBet} implies that the commutative center is contained in the
associative nucleus of~$\mathcal{A}$, so that the commutative center
coincides with the usual notion of center, see \cite{ZSS}, p. 136
for more details.
The unit $1$ of $\mathcal{A}$ is obviously an element of the center.
We say that $\mathcal{A}$ has a trivial center if $\mathcal{Z}(\mathcal{A})=\mathbb{K}\,1$.
Consider the following particular element
$$
z=(1, \ldots, 1)
$$
in $(\mathbb{Z}_2)^n$, with all the components equal to $1$,
and the associated homogeneous element $u_z$ in $\mathcal{A}$.
\begin{lem}
\label{zcentral}
The element $u_z$ in $\mathcal{A}$ is central if and only if
\begin{enumerate}
\item
$n=4m$ in the cases $\mathcal{A}=\mathbb{O}_n, \mathbb{O}_{p,q}$;
\item
$n=4m+2$ in the cases $\mathcal{A}=\mathbb{M}_n, \mathbb{M}_{p,q}$.
\end{enumerate}
\end{lem}
\begin{proof}
The element $u_z$ in $\mathcal{A}$ is central if and only if for all $y\in(\mathbb{Z}_2)^n$ one has $\beta(y,z)=0$.
We use the generating function $\alpha$.
Recall that $\beta(y,z)=\alpha(y+z)+\alpha(y)+\alpha(z)$.
The value $\alpha(x)$ depends only on the weight $|x|$, see Table \eqref{GenFuncTab}.
For every $y$ in $(\mathbb{Z}_2)^n$, one has
$$
|z+y|=|z|-|y|.
$$
Case (1).
According to Table \eqref{GenFuncTab}, one has $\alpha(x)=0$ if and only if
$|x|$ is a multiple of 4.
Assume $n=4m$. One gets $\alpha(z)=0$ and for every $y$ one has $\alpha(y)=0$ if and only if $\alpha(y+z)=0$.
So, in that case, one always has $\alpha(y)=\alpha(y+z)$ and therefore $\beta(y,z)=0$.
Assume $n=4m+r$, $r=1,2$ or $3$.
We can always choose an element $y$ such that $|y|=|r-2|+1$. We get
$$
\alpha(z)=\alpha(y)=\alpha(y+z)=1.
$$
Hence, $\beta(y,z)=1$. This implies that $u_z$ is not central.
Case (2).
According to Table \eqref{GenFuncTab}, one has $\alpha(x)=0$ if and only if
$|x|\not\equiv1 \mod 4$.
Assume $n=4m+2$.
One gets $\alpha(z)=0$ and for every $y$ one has $|y|\equiv1 \mod 4$ if and only if $|y+z|\equiv1 \mod 4$.
So, in that case, one always has $\alpha(y)=\alpha(y+z)$ and therefore $\beta(y,z)=0$.
Assume $n=4m+r$, $r=0,1$ or $3$.
We choose the element $y=(1,0,\ldots,0)$, if $r=0,3$, or $y=(1,1,0,\ldots,0)$, if $r=1$.
We easily compute
$
\beta(y,z)=1.
$
This implies that $u_z$ is not central.
\end{proof}
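Lemma \ref{zcentral} can be confirmed by brute force for small $n$. The following Python sketch is illustrative; it uses the $\alpha$-values by weight from Table \eqref{GenFuncTab} and excludes the degenerate commutative case $n=1$:

```python
from itertools import product

# alpha depends only on the weight |x| (cf. Table GenFuncTab):
# O-series: alpha(x) = 0 iff |x| = 0 (mod 4);
# M-series: alpha(x) = 1 iff |x| = 1 (mod 4).
alpha_O = lambda x: 0 if sum(x) % 4 == 0 else 1
alpha_M = lambda x: 1 if sum(x) % 4 == 1 else 0

def z_is_central(alpha, n):
    """u_z is central iff beta(y, z) = 0 for all y, where
    beta(x, y) = alpha(x+y) + alpha(x) + alpha(y) over Z_2."""
    z = (1,) * n
    def beta(x, y):
        xy = tuple(a ^ b for a, b in zip(x, y))
        return (alpha(xy) + alpha(x) + alpha(y)) % 2
    return all(beta(y, z) == 0 for y in product([0, 1], repeat=n))

# n = 4m for the O-series, n = 4m + 2 for the M-series
assert [n for n in range(2, 9) if z_is_central(alpha_O, n)] == [4, 8]
assert [n for n in range(2, 9) if z_is_central(alpha_M, n)] == [2, 6]
```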
Let us consider the case where $u_z$ is not central.
\begin{lem}
\label{nocentre}
If $u_z$ is not central, then $\mathcal{A}$ has a trivial center.
\end{lem}
\begin{proof}
It suffices to prove that for every homogeneous element $u_x$ in $\mathcal{A}$,
that is not proportional to $1$, there exists an element $u_y$ in $\mathcal{A}$,
such that $u_x$ and $u_y$ anticommute.
Indeed, if $u$ is central, then each homogeneous component of $u$ is central.
Let us fix $x\in (\mathbb{Z}_2)^n$ and the corresponding homogeneous element $u_x\in \mathcal{A}$,
such that $x$ is neither $0$, nor $z$.
We want to find an element $y\in (\mathbb{Z}_2)^n$ such that $\beta(x,y)=1$ or equivalently
$u_x$ anticommutes with $u_y$.
Using the invariance of the functions $\alpha$ and $\beta$ under permutations of the coordinates,
we can assume that $x$ is of the form
$$
x=(1,\ldots,1,0, \ldots, 0),
$$
where the first $|x|$ entries are equal to 1 and the remaining entries are equal to 0.
We assume $0<|x|<n$, so that $x$ starts with 1 and ends with 0.
{\bf Case $\mathcal{A}=\mathbb{O}_n$ or $\mathbb{O}_{p,q}$}.
If $|x|\not=4\ell$, then
we use exactly the same arguments as in the proof of Lemma \ref{zcentral}
in order to find a suitable $y$
(one can also take one of the elements $y=(1,0,\ldots, 0)$ or $y=(0,\ldots, 0, 1)$).
Assume $|x|=4\ell$.
Consider the element
$$
y=(0,1, \ldots,1,0,\ldots,0),
$$
with $|y|=|x|$.
One has $\alpha(x)=\alpha(y)=0$ and $\alpha(x+y)=1$, hence $\beta(x,y)=1$, that is, $u_x$ anticommutes with $u_y$.
\textbf{Case $\mathcal{A}=\mathbb{M}_n$ or $\mathbb{M}_{p,q}$}.
Similarly to the proof of Lemma \ref{zcentral},
if $|x|\not=4\ell+2$, then we can find a $y$ such that $u_y$ anticommutes with $u_x$.
If $|x|=4\ell+2$, then $\alpha(x)=0$.
The element $y=(0,\ldots, 0, 1)$ satisfies $\alpha(y)=1$ and $\alpha(x+y)=0$, so that $\beta(x,y)=1$.
\end{proof}
Consider now the case where $u_z$ is a central element.
There are two different possibilities: ${u_z}^2=1$, or ${u_z}^2=-1$.
\begin{lem}
\label{ADecomp}
If $u_z\in\mathcal{A}$ is a central element and if ${u_z}^2=1,$
then the algebra splits into a direct sum of two subalgebras:
$$
\mathcal{A}=\mathcal{A}^+\oplus \mathcal{A}^-,
$$
where $\mathcal{A}^+:=\mathcal{A}\cdot(1+u_z)$ and $\mathcal{A}^-:=\mathcal{A}\cdot(1-u_z)$.
\end{lem}
\begin{proof}
Using ${u_z}^2=1,$ one immediately obtains
\begin{equation}
\label{calcuz}
\begin{array}{rcl}
(1\pm u_z)^2&=&2\,(1\pm u_z),\\[4pt]
(1+u_z)\cdot(1-u_z)&=&0.
\end{array}
\end{equation}
In addition, using the expression of $\phi$ in terms of $\beta$ given in \eqref{PhiBet}
and the fact that $\beta(\cdot, z)=0$, one deduces that $\phi(\cdot,\cdot,z)=0$ and thus
$a\cdot(b\cdot{}u_z)=(a\cdot{}b)\cdot{}u_z$
for all $a,b\in \mathcal{A}$.
It follows that
\begin{equation}
\label{assoz}
\left(a \cdot(1\pm u_z)\right)
\cdot
\left(b \cdot(1\pm u_z)\right)=
(a \cdot b) \cdot\left((1\pm u_z) \cdot(1\pm u_z)\right)
\end{equation}
for all $a,b\in \mathcal{A}$.
This expression, together with the above computations \eqref{calcuz},
shows that $\mathcal{A}^+$ and $\mathcal{A}^-$ are, indeed, two subalgebras of $\mathcal{A}$ and that they
satisfy $\mathcal{A}^+\cdot\mathcal{A}^-=\mathcal{A}^-\cdot \mathcal{A}^+=0$.
Moreover, for any $a\in \mathcal{A}$, one can write
$$
a=\textstyle \frac{1}{2}\,a \cdot(1+u_z) +\frac{1}{2}\,a \cdot(1-u_z).
$$
This implies the direct sum decomposition $\mathcal{A}=\mathcal{A}^+\oplus\mathcal{A}^-$.
\end{proof}
Notice that the elements $\frac{1}{2}\,(1+u_z)$ and $\frac{1}{2}\,(1-u_z)$
are the units of $\mathcal{A}^+$ and $\mathcal{A}^-$, respectively.
\subsection{Proof of Theorem \ref{SimProp}, part (i)}
If $n\not=4m$, then by Lemma \ref{ASimple} and Lemma \ref{nocentre} we immediately deduce
that $\mathbb{O}_n$ is simple.
If $n=4m$, then $u_z$ is central and, in the complex case, one has
$u_z^2=1$.
By Lemma \ref{zcentral} and Lemma \ref{ADecomp}, we immediately deduce
that $\mathbb{O}_n$ is not simple and one has
$$
\mathbb{O}_{4m}=\mathbb{O}_{4m}\cdot(1+u_z)\oplus \mathbb{O}_{4m}\cdot(1-u_z),
$$
where $z=(1,\ldots,1)\in (\mathbb{Z}_2)^n$.
It remains to show that the algebras $\mathbb{O}_{4m-1}$
and $ \mathbb{O}_{4m}\cdot(1\pm u_z)$ are isomorphic.
Indeed, using the computations \eqref{calcuz} and \eqref{assoz}, one checks that
the map
$$
u_x\longmapsto \textstyle \frac{1}{2}\,u_{(x,0)}\cdot(1\pm u_z),
$$
where $x\in(\mathbb{Z}_2)^{n-1}$, is the required isomorphism.
The proof in the case of $\mathbb{M}_n$ is completely similar.
\subsection{Proof of Theorem \ref{SimProp}, part (ii)}\label{Thmii}
The algebras $\mathbb{O}_{p,q}$ with $p+q\not=4m$
and the algebras $\mathbb{M}_{p,q}$ with $p+q\not=4m+2$
are simple because their complexifications are.
If now $u_z$ is central, then whether $u_z^2=1$ or $u_z^2=-1$ becomes crucial.
Using the expressions for $f_{\mathbb{O}}$ or $f_{\mathbb{M}}$, one computes
\begin{eqnarray*}
f_{\mathbb{O}_{p,q}}(z,z)&=&
\sum_{i<j<k}z_iz_jz_k
+\sum_{i\leq j} z_iz_j
+\sum_{1\leq{}i\leq{}p}z_i \\
&=&\dfrac{n(n-1)(n-2)}{6}+ \dfrac{n(n+1)}{2}+p\\
&\equiv&p \mod2.
\end{eqnarray*}
And similarly, one obtains
$
f_{\mathbb{M}_{p,q}}(z,z)\equiv p\mod 2.
$
It follows that $u_z^2=(-1)^p$.
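The parity count behind this computation can be sanity-checked numerically; the following illustrative sketch covers the $\mathbb{O}$-series case $n=4m$, in which $u_z$ is central:

```python
# When n = 4m, the cubic and quadratic parts of f(z, z) are even,
# so f(z, z) = p (mod 2), i.e. u_z^2 = (-1)^p.
for m in range(1, 6):
    n = 4 * m
    cubic_part = n * (n - 1) * (n - 2) // 6   # number of triples i < j < k
    quadratic_part = n * (n + 1) // 2         # number of pairs i <= j
    assert cubic_part % 2 == 0 and quadratic_part % 2 == 0
```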
If $p$ is even, then $u_z^2=1$, and Lemma \ref{ADecomp}
guarantees that $\mathcal{A}$ is not simple.
Finally, if $u_z$ is central and $p$ is odd, then $u_z^2=-1$.
\begin{lem}
\label{Moulinette}
If $u_z$ is central and $p$ is odd, then
$$
\mathbb{O}_{p,q}\cong\mathbb{O}_{p,q-1}\otimes\mathbb{C},
\qquad
\mathbb{M}_{p,q}\cong\mathbb{M}_{p,q-1}\otimes\mathbb{C}.
$$
\end{lem}
\begin{proof}
We construct an explicit isomorphism from $\mathbb{O}_{p,q-1}\otimes\mathbb{C}$ to $\mathbb{O}_{p,q}$ as follows.
\begin{eqnarray*}
u_x\otimes 1&\longmapsto& u_{(x,0)}\\
u_x\otimes \sqrt{-1} &\longmapsto& u_{(x,0)}\cdot u_z\;,
\end{eqnarray*}
for all $x\in (\mathbb{Z}_2)^{n-1}$.
We check that the above map is indeed an isomorphism of algebras by noticing that
$f_{\mathbb{O}_{p,q}}((x,0),(y,0))=f_{\mathbb{O}_{p,q-1}}(x,y)$.
\end{proof}
Let us show that Lemma \ref{Moulinette} implies that the
(real) algebras $\mathbb{O}_{p,q}$ with $p+q=4m$ and $p$ odd
and the algebras $\mathbb{M}_{p,q}$ with $p+q=4m+2$ and $p$ odd are
simple.
Indeed,
$$
\mathbb{O}_{p,q-1}\otimes_\mathbb{R}\mathbb{C}\cong\mathbb{O}_{p+q-1}
$$
viewed as real algebras.
We then use the following well-known fact.
A simple unital complex algebra viewed as a real algebra remains simple.
The proof of Theorem \ref{SimProp} is complete.
Lemma \ref{Moulinette} also implies Corollary \ref{IsoPr}.
\section{Hurwitz-Radon square identities}\label{Sqrd}
In this section, we use the algebras $\mathbb{O}_n$ (and, in the real case, $\mathbb{O}_{0,n}$) to
give explicit formul{\ae} for solutions of a classical problem
of products of squares.
Recall that the octonion algebra is related to the 8-square identity.
In an arbitrary commutative ring,
the product $(a_1^2+\cdots{}+a_8^2)\,(b_1^2+\cdots{}+b_8^2)$
is again a sum of 8 squares $c_1^2+\cdots{}+c_8^2$,
where $c_k$ are explicitly given by bilinear forms in $a_i$ and $b_j$
with coefficients $\pm1$, see, e.g., \cite{ConSmi}.
This identity is equivalent to the fact that $\mathbb{O}$ is a composition algebra,
that is, for any $a,b\in\mathbb{O}$, the norm of the product is equal to the product of the norms:
\begin{equation}
\label{NormPr}
\mathcal{N}(a\cdot{}b)=\mathcal{N}(a)\,\mathcal{N}(b).
\end{equation}
Hurwitz proved that there is no similar $N$-square identity for $N>8$,
as there is no composition algebra in higher dimensions.
The celebrated Hurwitz-Radon Theorem \cite{Hur,Rad} establishes the maximal
number $r$, as a function of $N$, such that there exists an identity
\begin{equation}
\label{Radon}
\left(a_1^2+\cdots{}+a_N^2\right)
\left(b_1^2+\cdots{}+b_r^2\right)=
\left(c_1^2+\cdots{}+c_N^2\right),
\end{equation}
where $c_k$ are bilinear forms in $a_i$ and $b_j$.
The theorem states that $r=\rho(N)$ is the maximal number,
where $\rho(N)$ is the Hurwitz-Radon function defined as follows.
Write $N$ in the form $N=2^{4m+ \ell}\,N'$, where $N'$ is odd and $\ell=0,1,2$ or $3$, then
$$
\rho(N):=8m+2^\ell.
$$
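As an illustration (a sketch, not part of the original text), the function $\rho$ can be computed directly from this definition:

```python
# Hurwitz-Radon function rho(N), computed from the decomposition
# N = 2^(4m+l) * N' with N' odd and l in {0, 1, 2, 3}.
def rho(N):
    """rho(N) = 8m + 2**l."""
    twos = 0
    while N % 2 == 0:
        N //= 2
        twos += 1
    m, l = divmod(twos, 4)
    return 8 * m + 2 ** l
```

The values $\rho(1),\rho(2),\rho(4),\rho(8)=1,2,4,8$ correspond to the classical real, complex, quaternion and octonion identities, while $\rho(16)=9$ reflects the failure of composition beyond dimension 8.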
It was proved by Gabel \cite{Gab} that the bilinear forms $c_k$ can be chosen with
coefficients~$\pm1$.
Note that the only interesting case is $N=2^n$ since the general case is an immediate
corollary of this particular one.
We refer to \cite{Squa,Sha} for the history,
results and references.
In this section, we give explicit formul{\ae} for the solution to the Hurwitz-Radon equation,
see also \cite{LMO} for further development within this framework.
\subsection{The explicit solution}\label{Expl}
We give an explicit solution of the Hurwitz-Radon equation \eqref{Radon}
for any $N=2^n$ with $n$ not a multiple of 4.
We label the $a$-variables and the $c$-variables by elements of $(\mathbb{Z}_2)^n$.
In order to describe the labeling of the $b$-variables,
we consider the following particular elements of $(\mathbb{Z}_2)^n$:
\begin{equation*}
\begin{array}{rcl}
e_0&:=& (0,0,\ldots,0),\\[4pt]
\overline{e_0}&:=& (1,1,\ldots,1),\\[4pt]
e_i&:=&(0,\ldots,0,1,0,\ldots,0), \text{ where 1 occurs at the $i$-th position,}\\[4pt]
\overline{e_i}&:=&(1,\ldots,1,0,1,\ldots,1), \text{ where 0 occurs at the $i$-th position,}
\end{array}
\end{equation*}
for all $1\leq i\leq n$.
We then introduce the following subset $H_n$ of $(\mathbb{Z}_2)^n$:
\begin{equation}\label{defHn}
\begin{array}{rcl}
H_n&=& \{ e_i, \overline{e_i}, \; 1\leq i\leq n\}, \text{ for } n=1 \mod 4,\\[6pt]
H_n&=& \{ e_i, e_1+e_j,\; 0\leq i\leq n,\; 1<j\leq n\}, \text{ for } n=2 \mod 4,\\[6pt]
H_n&=& \{ e_i, \overline{e_i},\; 0\leq i\leq n\}, \text{ for } n=3 \mod 4.\\[4pt]
\end{array}
\end{equation}
In each case, the subset $H_n$ contains exactly $\rho(2^n)$ elements.
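This cardinality count can be verified directly; the following sketch (our construction, labeling elements of $(\mathbb{Z}_2)^n$ by tuples) builds $H_n$ from the definition \eqref{defHn} and compares $|H_n|$ with $\rho(2^n)$:

```python
# Build H_n as in the definition above and check |H_n| = rho(2^n).
def rho(N):
    twos = 0
    while N % 2 == 0:
        N //= 2
        twos += 1
    m, l = divmod(twos, 4)
    return 8 * m + 2 ** l

def e(i, n):
    # e_0 is the zero vector; e_i has a single 1 at position i
    v = [0] * n
    if i > 0:
        v[i - 1] = 1
    return tuple(v)

def bar(x):
    # complement: e_i -> \bar{e_i}
    return tuple(1 - c for c in x)

def H(n):
    if n % 4 == 1:
        return ({e(i, n) for i in range(1, n + 1)}
                | {bar(e(i, n)) for i in range(1, n + 1)})
    if n % 4 == 2:
        xor = lambda x, y: tuple(a ^ b for a, b in zip(x, y))
        return ({e(i, n) for i in range(0, n + 1)}
                | {xor(e(1, n), e(j, n)) for j in range(2, n + 1)})
    if n % 4 == 3:
        return ({e(i, n) for i in range(0, n + 1)}
                | {bar(e(i, n)) for i in range(0, n + 1)})
```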
We write the Hurwitz-Radon identity in the form
$$
\Big( \sum_{x\in (\mathbb{Z}_2)^n} a_x^2\;\Big)\Big( \sum_{x\in H_n} b_x^2\;\Big)
= \sum_{x\in (\mathbb{Z}_2)^n} c_x^2.
$$
We will establish the following.
\begin{thm}
\label{Solcx}
The bilinear forms
\begin{equation}
\label{ExplSolRad}
c_x=\sum_{y\in H_n}(-1)^{f_{\mathbb{O}}(x+y,y)}\,a_{x+y}b_y,
\end{equation}
where $f_{\mathbb{O}}$ is the twisting function of the algebra $\mathbb{O}_n$ defined in \eqref{NashProd},
are a solution to the Hurwitz-Radon identity.
\end{thm}
In order to prove Theorem \ref{Solcx} we will need to define
the natural norm on $\mathbb{O}_n$.
\subsection{The Euclidean norm}\label{Norm}
Assume that a twisted group algebra $\mathcal{A}=(\mathbb{K}[(\mathbb{Z}_2)^n],f)$
is equipped with a generating function $\alpha$.
Assume furthermore that the twisting function satisfies $f(x,x)=\alpha(x)$, as in \eqref{XXAlpha}.
The involution on $\mathcal{A}$ is defined for every
$a=\sum_{x\in(\mathbb{Z}_2)^n}a_x\,u_x$, where $a_x\in\mathbb{C}$ (or in~$\mathbb{R}$)
are scalar coefficients and $u_x$ are the basis elements,
by the formula
$$
\bar{a}=\sum_{x\in(\mathbb{Z}_2)^n}(-1)^{\alpha(x)}\,a_x\,u_x.
$$
We then define the following norm of an element $a\in\mathcal{A}$:
$$
\mathcal{N}(a):=\left(a\cdot{}\bar{a}\right)_0.
$$
\begin{prop}
\label{ObvPr}
The above norm is nothing but the
Euclidean norm in the standard basis:
\begin{equation}
\label{EuclEq}
\mathcal{N}(a)=\sum_{x\in(\mathbb{Z}_2)^n}a_x^2.
\end{equation}
\end{prop}
\begin{proof}
One has:
$$
\mathcal{N}(a)=\sum{}(-1)^{\alpha(x)}\,a_x^2\,u_x\cdot{}u_x
=\sum{}(-1)^{\alpha(x)+f(x,x)}\,a_x^2.
$$
The result then follows from the assumption $f(x,x)=\alpha(x)$.
\end{proof}
The following statement is a general criterion for $a,b\in\mathcal{A}$ to satisfy the
composition equation (\ref{NormPr}).
This criterion will be crucial for us to establish the square identities.
\begin{prop}
\label{EuclProp}
Elements $a,b\in\mathcal{A}$ satisfy (\ref{NormPr}), if and only if
for all $x,y,z,t\in(\mathbb{Z}_2)^n$ such that
$$
x+y+z+t=0,
\qquad
(x,y)\not=(z,t),
\qquad
a_x\,b_y\,a_z\,b_t\not=0,
$$
one has $\alpha(x+z)=\alpha(y+t)=1$.
\end{prop}
\begin{proof}
Calculating the left-hand side of (\ref{NormPr}), we obtain
$$
\mathcal{N}(a\cdot{}b)=
\sum_{x+y+z+t=0}(-1)^{f(x,y)+f(z,t)}\,a_x\,b_y\,a_z\,b_t
$$
According to (\ref{EuclEq}), the product of the norms on the right-hand side is:
$$
\mathcal{N}(a)\,\mathcal{N}(b)=\sum_{x,y}\,a_x^2\,b_y^2.
$$
It follows that the condition (\ref{NormPr}) is satisfied if and only if
$$
f(x,y)+f(z,t)+f(x,t)+f(z,y)=1,
$$
whenever $(x,y)\not=(z,t)$ and $a_x\,b_y\,a_z\,b_t\not=0$.
Taking into account the linearity of the function (\ref{Fakset})
and substituting $t=x+y+z$, one finally gets (after cancellation):
$$
f(z,x)+f(x,z)+f(x,x)+f(z,z)=1.
$$
In terms of the function $\alpha$ this is exactly the condition $\alpha(x+z)=1$.
Hence the result.
\end{proof}
\subsection{Proof of Theorem \ref{Solcx}}\label{Choose}
Let us apply Proposition \ref{EuclProp} to the case of the algebra $\mathbb{O}_n$.
Given the variables $(a_x)_{x\in (\mathbb{Z}_2)^n}$ and $(b_x)_{x\in H_n}$,
where $H_n$ is the subset defined in \eqref{defHn}, form the following vectors in $\mathbb{O}_n$,
$$
a=\sum_{x\in (\mathbb{Z}_2)^n} a_x\,u_x,
\qquad b=\sum_{y\in H_n} b_y\,u_y.
$$
Taking two distinct elements $y,t\in H_n$ one always has $\alpha_{\mathbb{O}}(y+t)=1$.
Therefore, from Proposition \ref{EuclProp} one deduces that
$\mathcal{N}(a)\mathcal{N}(b)=\mathcal{N}(a\cdot{}b).$
Writing this equality in terms of coordinates of the three elements
$a, b$ and $c=a\cdot{}b$, one obtains the result.
Theorem \ref{Solcx} is proved.
\medskip
Let us give one more classical identity that can be realized in the algebra $\mathbb{O}_n$.
\begin{ex}
The most obvious choice of two elements $a,b\in \mathbb{O}_n$
that satisfy the condition~(\ref{NormPr})
is: $a=a_0\,u_0+\sum{}a_i\,u_i$ and $b=b_0\,u_0+\sum{}b_i\,u_i$.
One immediately obtains in this case the following elementary but elegant identity:
$$
(a_0^2+\cdots+a_n^2)\,(b_0^2+\cdots+b_n^2)=
(a_0\,b_0+\cdots+a_n\,b_n)^2+
\sum_{0\leq{}i<j\leq{}n}(a_i\,b_j-a_j\,b_i)^2,
$$
for an arbitrary $n$, known as the Lagrange identity.
\end{ex}
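The Lagrange identity can be spot-checked numerically; here is a sketch in exact integer arithmetic (the skew terms are $a_ib_j-a_jb_i$, indices $0,\ldots,n$ as in the text):

```python
# Numerical check of the Lagrange identity
# (sum a_i^2)(sum b_i^2) = (sum a_i b_i)^2 + sum_{i<j} (a_i b_j - a_j b_i)^2.
from itertools import combinations
import random

def lagrange_sides(a, b):
    lhs = sum(x * x for x in a) * sum(y * y for y in b)
    rhs = sum(x * y for x, y in zip(a, b)) ** 2 + \
          sum((a[i] * b[j] - a[j] * b[i]) ** 2
              for i, j in combinations(range(len(a)), 2))
    return lhs, rhs

random.seed(0)
a = [random.randint(-9, 9) for _ in range(8)]
b = [random.randint(-9, 9) for _ in range(8)]
lhs, rhs = lagrange_sides(a, b)
```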
\section{Relation to code loops}\label{LaSec}
The constructions of the algebras that we use in this work
are closely related to some constructions in the theory of Moufang Loops.
In particular, they lead to examples of Code Loops \cite{Gri}.
In this section, we apply our approach in order to obtain an explicit construction of the famous Parker Loop.
\bigskip
\paragraph{\bf The loop of the basis elements.}
The structure of a loop is a nonassociative version of that of a group
(see, e.g., \cite{Goo}).
\begin{prop}
\label{MouPr}
The basis elements together with their opposites, $\{\pm u_x, x\in (\mathbb{Z}_2)^n\}$,
in a twisted algebra $(\mathbb{K}\left[(\Z_2)^n\right],f)$, form a loop with respect to the multiplication rule.
Moreover, this loop is a Moufang loop whenever $\phi=\delta{}f$ is symmetric.
\end{prop}
\begin{proof}
The fact that the elements $\pm u_x$ form a loop is evident.
If the function $\phi=\delta f$ is symmetric,
then this loop satisfies the Moufang identity:
$$
u\cdot (v \cdot (u \cdot w))= ((u\cdot v) \cdot u) \cdot w
$$
for all $u,v,w$.
Indeed, the symmetry of $\phi$ implies that $\phi $ is also trilinear and alternate,
see Corollary \ref{TriCol}.
\end{proof}
Let us mention that the Moufang loops associated with the octonions and
split-octonions are important classical objects invented by Coxeter \cite{Cox}.
\bigskip
\paragraph{\bf Code loops.}
The notion of code loops was introduced by Griess \cite{Gri}. We recall the construction and the main results. A doubly even binary code is a subspace $V$ of $(\mathbb{Z}_2)^n$ such that every vector in $V$ has weight a multiple of 4. It was shown that there exists a function $f$ from $V\times V$ to $\mathbb{Z}_2$, called a \textit{factor set} in \cite{Gri}, satisfying
\begin{enumerate}
\item $f(x,x)=\textstyle\frac{1}{4}|x|$,
\item $f(x,y)+f(y,x)=\frac{1}{2} |x\cap y|$,
\item $\delta f(x,y,z) = |x\cap y\cap z|$,
\end{enumerate}
where $|x\cap y|$ (resp. $|x\cap y\cap z|$) is the number of nonzero coordinates in both $x$ and $y$ (resp. all of $x, y, z$).
The associated code loop $\Lc (V)$ is the set $\{\pm u_x, x\in V\}$ together with the multiplication law
$$
u_x\cdot u_y=(-1)^{f(x,y)}\,u_{x+y}.
$$
The most important example of a code loop is the Parker loop, which plays an important r\^ole
in the theory of sporadic finite simple groups.
The Parker loop is the code loop obtained from the Golay code.
This code can be described as the 12-dimensional subspace of $(\mathbb{Z}_2)^{24}$
given as the span of the rows of the following matrix, see \cite{ConSlo},
\begin{tiny}
\begin{equation*}
G=\left(
\begin{array}{cccccccccccccccccccccccc}
1&&&&&&&&&&&&1&0&1&0&0&0&1&1&1&0&1&1\\
&1&&&&&&&&&&& 1&1&0&1&0&0&0&1&1&1&0&1\\
&&1&&&&&&&&&& 0&1&1&0&1&0&0&0&1&1&1&1\\
&&&1&&&&&&&&& 1&0&1&1&0&1&0&0&0&1&1&1\\
&&&&1&&&&&&&& 1&1&0&1&1&0&1&0&0&0&1&1\\
&&&&&1&&&&&&& 1&1&1&0&1&1&0&1&0&0&0&1\\
&&&&&&1&&&&&& 0&1&1&1&0&1&1&0&1&0&0&1\\
&&&&&&&1&&&&& 0&0&1&1&1&0&1&1&0&1&0&1\\
&&&&&&&&1&&&& 0&0&0&1&1&1&0&1&1&0&1&1\\
&&&&&&&&&1&&& 1&0&0&0&1&1&1&0&1&1&0&1\\
&&&&&&&&&&1&& 0&1&0&0&0&1&1&1&0&1&1&1\\
&&&&&&&&&&&1& 1&1&1&1&1&1&1&1&1&1&1&0\\
\end{array}
\right)
\end{equation*}
\end{tiny}
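One can verify this matrix computationally; the following sketch uses our transcription of $G$ (rows $1$–$11$ pair the $i$-th unit vector with a cyclic shift of the pattern $10100011101$ and a final $1$; row $12$ is the special last row) and brute-forces all $2^{12}$ codewords to confirm that the span is doubly even, also computing the intersection numbers $|\ell_i\cap\ell_j|$:

```python
# Check that the rows of G span a doubly even code (every codeword
# weight divisible by 4) and compute the pairwise intersection numbers.
base = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1]  # parity pattern of row 1
rows = []
for i in range(11):
    ident = [1 if k == i else 0 for k in range(12)]
    # right part: cyclic right shift of `base` by i, then a final 1
    rows.append(ident + base[-i:] + base[:-i] + [1])
rows.append([1 if k == 11 else 0 for k in range(12)] + [1] * 11 + [0])

def codeword(mask):
    # XOR of the rows selected by the bits of `mask`
    cw = [0] * 24
    for i in range(12):
        if (mask >> i) & 1:
            cw = [a ^ b for a, b in zip(cw, rows[i])]
    return cw

def inter(u, v):
    # number of coordinates where both vectors equal 1
    return sum(a & b for a, b in zip(u, v))
```

The intersection numbers computed here are the ones entering condition (2) of the factor set.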
\bigskip
\paragraph{\bf An explicit formula for the Parker loop.}
Let us now give the generating function of the Parker loop.
We identify the Golay code with the space $(\mathbb{Z}_2)^{12}$, in such a way
that the $i$-th row of the matrix $G$, denoted $\ell_i$, is identified with the
$i$-th standard basis vector
$e_i=(0, \ldots, 0, 1, 0 \ldots,0)$ of $(\mathbb{Z}_2)^{12}$.
As previously, we write $u_i=u_{e_i}=u_{\ell_i}$ for the corresponding element of the Parker loop.
The coordinates of an element $x\in(\mathbb{Z}_2)^{12}$ are denoted by
$(x_1,\ldots,x_{11},x_{12})$.
\begin{prop}
The Parker loop is given by the following generating function $\alpha$ from
$(\mathbb{Z}_2)^{12}$ to $\mathbb{Z}_2$.
\begin{equation}
\label{PLGF}
\begin{array}{rcl}
\alpha_G(x)&=&\displaystyle
\sum_{1\leq i \leq 11 }x_ix_{i+1}\left(x_{i+5}+x_{i+8}+x_{i+9}\right)
+x_ix_{i+2}\,(x_{i+6}+x_{i+8})\\[12pt]
&&\displaystyle
\; +\; x_{12}\,\Big(\sum_{1\leq i \leq 11}x_i +\sum_{1\leq i <j\leq 11}x_ix_j\Big),
\end{array}
\end{equation}
where the indices of $x_{i+k}$ are understood modulo 11.
\end{prop}
\begin{proof}
The ternary function
$$
\phi(x,y,z)=\delta{}f(x,y,z)= |x\cap y\cap z|
$$
is obviously symmetric in $x,y,z$.
Theorem \ref{AlphMainTh} then implies the existence of a
generating function $\alpha_G$.
By Proposition \ref{AlpDeg3} we know that $\alpha_G$ is a polynomial
of degree $\leq3$.
Moreover, linear terms in $\alpha_G$ do not contribute to the quasialgebra structure
(i.e., they do not contribute to the expressions of $\beta$ and $\phi$, see \eqref{Genalp1}).
To determine the quadratic and cubic terms, we use the following equivalences
$$
\alpha_G \text{ contains the term } x_ix_j, i\not=j
\Longleftrightarrow u_i, u_j \text{ anti-commute,}
$$
$$
\alpha_G \text{ contains the term } x_ix_jx_k, i\not=j\not=k
\Longleftrightarrow u_i, u_j, u_k \text{ anti-associate}.
$$
For instance, the construction of the Parker loop gives that
$u_i$ and $u_j$ commute for all $1\leq i,j \leq11$, since
$|\ell_{i}\cap \ell_j|=4$, for $1\leq i\not=j \leq11$.
Thus, $\alpha_G$ does not contain any of the quadratic terms $x_ix_j$, $1\leq i\not=j \leq11$.
But, $u_{12}$ anti-commutes with $u_i$, $i \leq11$,
since $|\ell_{12}\cap \ell_i|=6$, $i \leq11$.
So that the terms $x_{12}x_i$, $ i \leq11$, do appear in the expression of $\alpha_G$.
Similarly, one has to determine which one of the triples $u_i, u_j, u_k$ anti-associate
to determine the cubic terms in $\alpha_G$.
This yields the expression \eqref{PLGF}.
\end{proof}
The explicit formula for the factor set $f$ in coordinates on $(\mathbb{Z}_2)^{12}$
is immediately obtained by (\ref{Fakset}).
Note that the signature in this case is $(11,1)$, so that we have to add
$x_{12}y_{12}$ to~(\ref{Fakset}).
\begin{rem}
The difference between the loops generated by the basis elements of
$\mathbb{O}_n$ and $\mathbb{M}_n$ and the Parker loop is that the function
\eqref{PLGF} is not $\mathfrak{S}_n$-invariant.
Our classification results cannot be applied in this case.
\end{rem}
We hope that the notion of generating function can be a useful
tool for the study of code loops.
\section{Appendix: linear algebra and differential calculus over $\mathbb{Z}_2$}
The purpose of this Appendix is to relate the algebraic problems we
study to the general framework of linear algebra over $\mathbb{Z}_2$
which is a classical domain.
\bigskip
\paragraph{\bf Automorphisms of $(\mathbb{Z}_2)^n$ and linear equivalence.}
All the algebraic structures on $(\mathbb{Z}_2)^n$ we consider are invariant
with respect to the action of the group of automorphisms
$$
\mathrm{Aut}((\mathbb{Z}_2)^n)\cong\mathrm{GL}(n,\mathbb{Z}_2).
$$
For instance, the generating function $\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$, as well as
$\beta$ and $\phi$, are considered up to the $\mathrm{Aut}((\mathbb{Z}_2)^n)$-equivalence
(called ``congruence'' in the classic literature \cite{Alb}).
\bigskip
\paragraph{\bf Quadratic forms.}
The interest of describing an algebra in terms of a generating function
can be illustrated in the case of the Clifford algebras.
There are exactly two non-equivalent
non-degenerate quadratic forms on $(\mathbb{Z}_2)^{2m}$ with coefficients in $\mathbb{Z}_2$
(see \cite{Alb,Dieu} for the details):
\begin{equation}
\label{Darboux}
\alpha(x)=
x_1x_{m+1}+\cdots+x_mx_{2m}+\lambda\,(x_m^2+x_{2m}^2),
\end{equation}
where $\lambda=0$ or $1$.
Note that sometimes the case $\lambda=1$ is not considered (see \cite{KMRT}, p.xix)
since the extra term is actually linear, for $x_i^2=x_i$.
The corresponding polar bilinear form $\beta=\delta\alpha$
and the trilinear form $\phi=\delta_2\alpha$ do not depend on $\lambda$.
The corresponding twisted group algebra is isomorphic to the
Clifford algebra $\mathit{C}\ell_n$.
The normal form (\ref{Darboux}) is written in the standard Darboux basis;
this formula has several algebraic corollaries.
For instance, we immediately obtain the well-known factorization of the complex Clifford algebras:
$$
\mathit{C}\ell_{2m}\cong
\mathit{C}\ell_2^{\otimes{}m}\cong
\mathbb{C}[2^m],
$$
where $\mathbb{C}[2^m]$ are $(2^m\times2^m)$-matrices.
Indeed, the function (\ref{Darboux}), with $\lambda=0$, is nothing
but the sum of $m$ generating functions of $\mathit{C}\ell_2$.
The other classical symmetry and periodicity theorems for the Clifford algebras
can also be deduced in this way.
Let us mention that bilinear forms over $\mathbb{Z}_2$ are still an interesting subject~\cite{Leb}.
\bigskip
\paragraph{\bf Cubic polynomials.}
In this paper, we were led to consider polynomials $\alpha:(\mathbb{Z}_2)^n\to\mathbb{Z}_2$ of degree~3:
$$
\alpha(x)=\sum_{i<j<k}\alpha^3_{ijk}\,x_ix_jx_k+
\sum_{i<j}\alpha^2_{ij}\,x_ix_j,
$$
where $\alpha^3_{ijk}$ and $\alpha^2_{ij}$ are arbitrary coefficients
(equal to 1 or 0).
It turns out to be far from obvious what it means for $\alpha$ to be ``non-degenerate''.
To every polynomial $\alpha$, we associate a binary function $\beta=\delta\alpha$ and a trilinear form
$\phi=\delta_2\alpha$, see formula (\ref{Genalp2}),
which is of course just the polarization (or linearization) of~$\alpha$.
The form $\phi$ is alternate:
$\phi(x,x,.)=\phi(x,.,x)=\phi(.,x,x)=0$ and depends only on the
homogeneous part of degree 3 of $\alpha$, i.e., only on $\alpha^3_{ijk}$.
There are three different ways to understand the notion of non-degeneracy.
(1)
The most naive way: $\alpha$ (and $\phi$) is non-degenerate
if for all linearly independent $x,y\in(\mathbb{Z}_2)^n$, the linear
function $\phi(x,y,.)\not\equiv0$.
One can show that, with this definition,
{\it there are no non-degenerate cubic forms on $(\mathbb{Z}_2)^{n}$
for $n\geq3$.}
This is of course not the way we take.
(2)
The second way to understand non-degeneracy is as follows.
The trilinear map $\phi$ itself defines an $n$-dimensional algebra.
Indeed, identifying $(\mathbb{Z}_2)^n$ with its dual space,
the trilinear function $\phi$
defines a product $(x,y)\mapsto\phi(x,y,.)$.
One can say that $\phi$ (and $\alpha$) is non-degenerate
if this algebra is simple.
This second way is much more interesting and is related to
many different subjects.
For instance, classification of simple Lie (super)algebras over $\mathbb{Z}_2$
is still an open problem, see \cite{BGL} and references therein.
This definition also depends only on the
homogeneous part of degree 3 of $\alpha$.
(3)
We understand non-degeneracy in yet another way.
We say that $\alpha$ is non-degenerate if for
all linearly independent $x,y$ there exists $z$ such that
$$
\beta(x,z)\not=0,
\qquad
\beta(y,z)=0,
$$
where $\beta=\delta\alpha$.
This is equivalent to the fact that the algebra with the generating
function~$\alpha$ is simple, cf. Section~\ref{ProoSimProp}.
We believe that every non-degenerate (in the above sense) polynomial of degree 3
on~$(\mathbb{Z}_2)^{n}$ is equivalent to one of the two forms (\ref{NashAlp}) and (\ref{NashAlpBis}).
Note that a positive answer would imply the uniqueness results of Section \ref{UniqReSec}
without the $\mathfrak{S}_n$-invariance assumption.
\bigskip
\paragraph{\bf Higher differentials.}
Cohomology of abelian groups with coefficients in $\mathbb{Z}_2$ is
a well-known and quite elementary theory explained in many
textbooks.
Yet, it can offer some surprises.
Throughout this work, we encountered
and extensively used the linear operators $\delta_k$,
for $k=1,2,3$, cf. (\ref{Genalp2}) and~(\ref{DelTrequat}),
that associate a $k$-cochain on $(\mathbb{Z}_2)^n$ to a function.
These operators were defined in \cite{War},
and used in the Moufang loops theory, \cite{Gri,DV,NV}.
Operations of this type are called \textit{natural} or \textit{invariant}
since they commute with the action of $\mathrm{Aut}((\mathbb{Z}_2)^n)$.
The operator $\delta_k$ fits the usual understanding of ``higher derivation'' since
the condition $\delta_k\alpha=0$ is equivalent to the fact that $\alpha$
is a polynomial of degree $\leq{}k$.
The cohomological meaning of $\delta_k$ is as follows.
In the case of an abelian group $G$, the cochain complex with coefficients in $\mathbb{Z}_2$
has a richer structure.
There exist $k$ natural operators acting from $C^k(G;\mathbb{Z}_2)$ to $C^{k+1}(G;\mathbb{Z}_2)$:
\SelectTips{eu}{12}%
\xymatrix{
&C^1(G;\mathbb{Z}_2)\ar@{->}[rr]^\delta
&&C^2(G;\mathbb{Z}_2) \ar@<6pt>[rr]^{\delta_{1,0}} \ar@<-8pt>[rr]^{\delta_{0,1}}
&&C^3(G;\mathbb{Z}_2) \ar@<13pt>[rr]^{\delta_{1,0,0}} \ar@<-13pt>[rr]^{\delta_{0,0,1}} \ar@{->}[rr]^{\delta_{0,1,0}}&&
\;\cdots\\
}
\medskip
\noindent
where $\delta_{0,\ldots,1,\ldots,0}$ is a ``partial differential'', i.e., the differential with respect to
one variable.
For instance, if $\beta\in C^2(G;\mathbb{Z}_2)$ is a function in two variables, then
$$
\delta_{1,0}\beta(x,y,z)=\beta(x+y,z)+\beta(x,z)+\beta(y,z).
$$
In this formula $z$ is understood as a parameter and one can write
$\delta_{1,0}\beta(x,y,z)=\delta\gamma(x,y)$, where $\gamma=\beta(.,z)$.
At each order one has
$$
\delta=\delta_{1,0,\ldots,0}+\delta_{0,1,\ldots0}+\cdots+\delta_{0,\ldots,0,1}.
$$
If $\alpha\in{}C^1(G;\mathbb{Z}_2)$, then an \textit{arbitrary} sequence of the partial derivatives
gives the same result $\delta_k\alpha$; for example, one has
$$
\delta_2\alpha=\delta_{1,0}\circ\delta\alpha=\delta_{0,1}\circ\delta\alpha,
\qquad
\delta_3\alpha=\delta_{1,0,0}\circ\delta_{1,0}\circ\delta\alpha=\cdots=\delta_{0,0,1}\circ\delta_{0,1}\circ\delta\alpha,
$$
etc.
The first of the above equations corresponds to the formula
\eqref{PhiBet} since $\beta=\delta\alpha$.
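These identities are easy to check by machine; the following sketch (our encoding of cochains as dictionaries over $(\mathbb{Z}_2)^3$) implements $\delta$, $\delta_{1,0}$ and $\delta_{0,1}$ and verifies that the two routes to $\delta_2\alpha$ coincide and yield a symmetric $3$-cochain:

```python
# Partial differentials on cochains over (Z_2)^n, and the identity
# delta_2(alpha) = delta_{1,0} o delta(alpha) = delta_{0,1} o delta(alpha).
from itertools import product
import random

n = 3
V = list(product((0, 1), repeat=n))
xor = lambda x, y: tuple(a ^ b for a, b in zip(x, y))

random.seed(1)
alpha = {x: random.randint(0, 1) for x in V}   # random 1-cochain

def delta(a):
    # C^1 -> C^2: delta a(x, y) = a(x+y) + a(x) + a(y)
    return {(x, y): (a[xor(x, y)] + a[x] + a[y]) % 2
            for x in V for y in V}

def d10(b):
    # C^2 -> C^3: differential with respect to the first variable
    return {(x, y, z): (b[(xor(x, y), z)] + b[(x, z)] + b[(y, z)]) % 2
            for x in V for y in V for z in V}

def d01(b):
    # C^2 -> C^3: differential with respect to the second variable
    return {(x, y, z): (b[(x, xor(y, z))] + b[(x, y)] + b[(x, z)]) % 2
            for x in V for y in V for z in V}

b = delta(alpha)
```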
\bigskip
\section{Introduction}
Several cosmological observations, such as the rotation curves of spiral galaxies,
gravitational micro-lensing,
observations of the Virgo and
Coma clusters \cite{virgo,coma}, the Bullet cluster \cite{bullet},
etc., indicate the existence of
a huge amount of non-luminous matter, or dark matter (DM), in the universe.
The Wilkinson Microwave Anisotropy Probe
(WMAP) experiment \cite{wmap}
suggests that about 85\% of the total matter content
of the universe is dark. This constitutes
23\% of the total content of the universe; another 73\%
is dark energy, whereas the remaining 4\% is the known
luminous matter.
this non-luminous matter is mostly
unknown. However, the indirect evidences suggest that
most of them are stable , nonrelativistic (Cold Dark Matter or CDM)
and Weakly Interacting Massive Particles (WIMPs)
\cite{Jungman:1995df, Griest:2000kj, Bertone:2004pz, Murayama:2007ek}.
Despite the wide success of the Standard Model (SM),
explaining CDM poses a challenge,
as no viable CDM candidate
exists within its framework.
Hence, there are attempts to find DM WIMP candidates in
theories beyond the Standard Model.
Phenomenology of different extensions of scalar sector of the
SM had been explored by
many groups
\cite{McDonald:1993ex,Bento:2000ah,Burgess:2000yq,
Davoudiasl:2004be,Schabinger:2005ei,O'Connell:2006wi,
Kusenko:2006rh,Bahat-Treidel:2006kx,Andreas:2005,Yaguna:2008HD,He:2009YD,Andreas:2008XY,
Cirelli:2009uv}.
However, addition of one real scalar singlet to the SM provides
the simplest possible minimal renormalizable extension to the scalar
sector of SM. In addition, invoking a $Z_2$ discrete symmetry under which
the additional singlet is odd, gives rise to the singlet as a viable
DM candidate.
In this work, we explore
the parameter space of the model to accommodate the results
of different dark matter direct detection experiments.
We further predict the event rates and the annual variation of the event rates
to be observed by the Liquid Argon Detector experiment.
In the experiments for direct detection of DM,
the WIMP scatters off the target nucleus
of the material of the detector giving rise to recoil of the nucleus.
The energy of this nuclear recoil is very low ($\sim$ keV).
The signal generated by the nuclear
recoil is measured for direct detection of dark matter.
There are several ongoing experiments for direct dark matter searches.
Some of them are cryogenic detectors, in which the detector material,
such as Germanium,
is kept at a very low temperature
and the nuclear recoil energy is measured
using scintillation, phonon or ionization
techniques. Experiments like CDMS
(Cryogenic Dark Matter Search,
which uses Germanium as the detector material) at the Soudan Mine, Minnesota \cite{CDMS},
use both ionization and phonon techniques.
In phonon technique, the energy
of the recoil nucleus sets up a vibration of the detector
material (Ge crystal for CDMS).
These vibrations, or phonons, propagate to the surface of the detector crystal
and excite quasi-particle states in the materials of the pulse pick-up device.
Finally the heat produced by these quasi-particle states is converted to
pulses by SQUID (Superconducting Quantum Interference Device) amplifiers.
CDMS carries out two experiments - one with Germanium and the other with
Silicon in order to separate the neutron background. As Germanium nucleus
is heavier ($A=73$) than Silicon ($A=28$),
WIMPs interact with $^{73}$Ge
with higher probability than with $^{28}$Si, but neutron being strongly
interacting will not make any
such discrimination. Thus any excess signal at the $^{73}$Ge
detector over the $^{28}$Si in CDMS will be a possible signature for dark matter.
The DAMA experiment at Gran Sasso
(which uses diatomic NaI as the detector material) \cite{DAMA}
employs the scintillation technique for detection of the recoil energy.
There is another class of liquid or gas (generally noble gas) detectors
that measure the recoil energy by the ionization of the detector
gas. The ionization yield is amplified by an avalanche process, and
the charges drift to the top (along the z-axis),
where they are collected by the electrodes to
generate a signal. These types of detectors, known as TPCs
(Time Projection Chambers), are currently gaining a lot of interest
for their effectiveness and resolution in detecting
such direct signals of recoil energy from a DM-nucleon scattering.
As mentioned, they generally use noble gases such as Xenon,
Argon or Fluorine etc. The XENON-10 experiment \cite{XENON10} at Gran Sasso
is a liquid Xenon TPC
with target mass of 13.7 Kg, whereas its upgraded version, the XENON-100
experiment \cite{XENON100} uses 100 Kg of the target mass.
The CoGeNT (Coherent Germanium Neutrino Technology)
experiment \cite{cogent} also uses Ge as the detector material and is designed
to detect dark matter particles with masses below
those probed by CDMS.
Recently the CoGeNT collaboration has reported an excess of events above the
expected background \cite{cogent}.
The ArDM experiment (Argon Dark Matter Experiment) \cite{ArDM} plans to use
1 ton of liquid Argon
for the TPC.
like PICASSO \cite{picasso} at SNOLAB in Canada, but here we
consider
the CDMS, DAMA, CoGeNT and XENON experiments for the present study.
We restrict the relevant couplings of the scalar dark matter
by using the bounds on
dark matter-nucleon scattering cross sections from these experiments
and further, utilising the WMAP experimental data.
We predict possible direct detection
event rate as well as annual variation of event rate in
liquid ArDM experiment for different possible dark matter masses.
The paper is organized as follows. In Section 2 we briefly discuss
the model. The CDMS, XENON, CoGeNT and DAMA bounds used to
constrain the parameter space of the
model are described in Section 3. In Section 4
we present the formalism of direct detection rate calculations and
compute such rates for a liquid Argon detector. Section 5
compute such rates for liquid Ar detector. Section 5
contains summary and conclusions.
\section{Singlet extended Standard Model : A brief outline}
The framework of the simplest scalar sector
extension of the SM involving addition of a real scalar singlet
field to the SM Lagrangian has been
discussed in detail in \cite{Barger:2007im,OConnell:2006wi}.
In this section we present a brief outline of the model
and emphasize those aspects that are relevant
for the discussions to follow.
The most general form of the potential
appearing in the Lagrangian density for scalar sector of this model
is given by \footnote{We used same notations as used in \cite{Barger:2007im,OConnell:2006wi}.}
\begin{eqnarray}
V(H,S)
&=&
\frac{m^2}{2} H^\dagger H
+ \frac{\lambda}{4} {(H^\dagger H)}^2
+ \frac{\delta_1}{2} H^\dagger H S
+ \frac{\delta_2}{2} H^\dagger H S^2 \nonumber\\
&&
+ \left(\frac{\delta_1 m^2}{2\lambda}\right)S
+ \frac{\kappa_2}{2}S^2
+ \frac{\kappa_3}{3}S^3
+ \frac{\kappa_4}{4}S^4
\label{eqpot}
\end{eqnarray}
where $H$ is the complex Higgs field (an $SU(2)$ doublet) and
$S$ is a real scalar gauge singlet that defines our minimal extension
to the scalar sector of SM. The singlet $S$ needs to be stable
in order to be considered as a viable dark matter candidate.
Stability of $S$ is achieved within the theoretical
framework of the model by assuming the potential
to exhibit a $Z_2$ symmetry $S\rightarrow -S$.
This ensures the absence of vertices
involving an odd number of singlet fields $S$
($\delta_1 = \kappa_3 = 0$).
Using unitary gauge, we define
\begin{eqnarray}
H &=& \pmatrix {0 \cr \frac{v+h}{\sqrt{2}}}
\end{eqnarray}
where $h$ is the physical Higgs field and
$v = 246$ GeV is the VEV of the $H$ scalar determined by the parameters
$m$ and $\lambda$ as $v = \sqrt{\frac{-2m^2}{\lambda}}$.
The mass terms of the two scalar fields $h$ and $S$ are
identified as
\begin{eqnarray}
V_{\rm mass}
&=&
\frac{1}{2} (M_h^2 h^2 + M_S^2 S^2)
\end{eqnarray}
where,
\begin{eqnarray}
M_h^2 &=& -m^2 = \lambda v^2/2 \nonumber\\
M_S^2 &=& \kappa_2 + \delta_2 v^2/2
\label{eqms}
\end{eqnarray}
\begin{figure}[t]
\begin{center}
\includegraphics[width=4.5cm, height=4cm, angle=0]{fig1.eps}
\caption{\label{fig:fd}
Diagram for singlet-nucleon elastic scattering via higgs mediation}
\end{center}
\end{figure}
The scalar field $S$ is stable as long as the $Z_2$
symmetry is unbroken and appears to be a
candidate for cold dark matter in the universe.
In the present work, we investigate the prospect of
such a candidate in direct detection experiments through
its scattering off nucleon ($N$) relevant for the detector. The lowest order
diagram for the process has been shown in Fig.\ \ref{fig:fd}.
The cross section corresponding to the elastic scattering ($SN \to SN$) in
the non-relativistic limit is given by \cite{Burgess:2000yq}
\begin{eqnarray}
\sigma^{\rm scalar}_{N}
&=&
\frac{\delta_2^2 v^2 |{\cal A}_N|^2}{4\pi}
\left( \frac{m^2_r}{{M_S}^2{M_h}^4}\right)
\label{eqcross1}
\end{eqnarray}
where $m_r(N,S)= M_N M_S/(M_N + M_S)$ is the reduced mass
of the two-body system in the scattering
$SN \to SN$ and ${\cal A}_N$ is the relevant
matrix element. For the case of non-relativistic nucleons the singlet-nucleus
and singlet-nucleon
elastic scattering cross sections are related by \cite{Burgess:2000yq}
\begin{eqnarray}
\sigma^{\rm scalar}_{\rm nucleus}
&=& \frac{A^2 m^2_r({\rm nucleus},S)}{ m^2_r({\rm nucleon},S)}
\sigma^{\rm scalar}_{\rm nucleon}
\end{eqnarray}
Numerically evaluating the matrix element appearing in Eq.\ (\ref{eqcross1})
the singlet-nucleon elastic scattering cross section can be written
as \cite{Burgess:2000yq}
\begin{eqnarray}
\sigma^{\rm scalar}_{\rm nucleon}
&=&
{(\delta_2)}^2
{\left(\frac{100 ~{\rm GeV}}{M_h}\right)}^4
{\left(\frac{50 ~{\rm GeV}}{M_S}\right)}^2
(5 \times 10^{-42} {\rm cm^2}) .
\label{eqcross2}
\end{eqnarray}
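As a quick numerical cross-check, the scaling of Eq.\ (\ref{eqcross2}) and the $A^2$ enhancement of the nucleus cross section can be evaluated directly. The function names, the use of $0.939$ GeV for the nucleon mass, and the example inputs below are our own illustrative choices rather than part of the model.

```python
def sigma_scalar_nucleon(delta2, M_h, M_S):
    """Singlet-nucleon cross section of Eq. (eqcross2), in cm^2.

    delta2 is dimensionless; M_h and M_S are in GeV.  The overall
    normalisation 5e-42 cm^2 is taken from the text.
    """
    return delta2**2 * (100.0 / M_h)**4 * (50.0 / M_S)**2 * 5e-42


def sigma_scalar_nucleus(sigma_nucleon, A, M_N, M_S):
    """A^2-scaled singlet-nucleus cross section (relation above).

    M_N is the nucleon mass; A*M_N approximates the nuclear mass,
    so the reduced-mass ratio can be formed explicitly.
    """
    mr_nucleus = (A * M_N) * M_S / (A * M_N + M_S)
    mr_nucleon = M_N * M_S / (M_N + M_S)
    return A**2 * (mr_nucleus / mr_nucleon)**2 * sigma_nucleon


# Reference point of Eq. (eqcross2): delta2 = 1, M_h = 100, M_S = 50 GeV
print(sigma_scalar_nucleon(1.0, 100.0, 50.0))  # 5e-42
```

For $A = 1$ the nucleus relation reduces to the nucleon cross section, which is a convenient sanity check.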
It is evident from the above equations that the scattering cross section for the scalar singlet depends on the two couplings $\delta_2$ and $\kappa_2$, and hence so does the dark matter direct detection rate with the scalar singlet as the dark matter candidate. Constraints on the direct detection rates of dark matter from different experiments can therefore also constrain the parameter space ($\delta_2$, $\kappa_2$) of scalar singlet dark matter. In the next section we discuss how some of the recent direct detection experiments limit the scalar singlet model of dark matter.
\begin{figure}[t]
\includegraphics[width=8.2cm, height=8cm, angle=0]{sigvsmd123_114.eps}
\vglue -8.0cm \hglue 8.5cm
\includegraphics[width=8.2cm, height=8cm, angle=0]{sigvsmd123_200.eps}
\caption{\label{fig:svm}
Dashed lines: Scalar singlet-nucleon elastic scattering cross section
($\sigma^{\rm scalar}_{\rm nucleon}$) as a function
of the singlet mass for different values of $|\delta_2|$ with
2 different values of higgs mass $M_h$, 114 GeV(left panel) and 200 GeV(right
panel). Solid lines: 90\% C.L. experimental upper limits on
$\sigma^{\rm DM}_{\rm nucleon}$ from
CDMS 2009 Ge, CDMS Soudan(all), XENON-10 and XENON-100.
The shaded areas corresponding to the 99\% C.L. regions allowed by DAMA
(with and without channelling) are also shown.}
\end{figure}
\section{Constraining the model parameters}
The direct detection rate of scalar singlet dark matter is governed by
the scalar-nucleon cross section ($\sigma^{\rm scalar}_{\rm nucleon}$)
for a given scalar singlet mass $M_S$. Direct detection experiments
such as CDMS, DAMA and XENON can thus constrain the scalar singlet
parameter space. Preliminary studies of
scalar singlet dark matter concerning CDMS and XENON are given in Refs.
\cite{Davoudiasl:2004be, Yaguna:2008HD, He:2009YD}. Also, explanation of
DAMA bounds with scalar singlet dark matter is described
in \cite{Petriello:2008JJ, Andreas:2008XY}.
In this work we make a detailed study of all the direct detection
bounds together with the relic density limits from WMAP and find
the relevant parameter space for the scalar singlet to be a successful
dark matter candidate.
\subsection{Constraints on $\delta_2 $ as a
function of dark matter mass}
\label{sec:3.1}
The coupling $\delta_2$ is of central importance, as it is the only coupling in this model that determines the annihilation of the scalar into other standard particles. Moreover, for a given singlet mass ($M_S$),
$\delta_2$ is the sole coupling that
controls the singlet-nucleon cross section; the quadratic
dependence (Eq.\ (\ref{eqcross2})) in particular means that
this cross section is insensitive to the sign of $\delta_2$.
In Fig.\ \ref{fig:svm} the singlet-nucleon elastic scattering cross section is plotted as a function of the singlet mass (dark matter mass) $M_S$ for different values of $\delta_2$. The plots are presented for two different values of the Higgs mass ($M_h$), namely 114 GeV (left panel) and 200 GeV (right panel).
For comparison, the 90\% C.L. (confidence limit) results obtained from the CDMS II experiment
(CDMS 2009 (Ge)) \cite{cdms2}
are plotted in Fig.\ \ref{fig:svm}, along with the similar results from the combined analysis of the full data set of the Soudan CDMS II results (CDMS Soudan (All)) \cite{cdms2}, and from the XENON-10 \cite{XENON10} and XENON-100 \cite{XENON100} experiments.
One sees from Fig.\ \ref{fig:svm} that the DAMA and CoGeNT results confine
the allowed region to closed contours, unlike the other experiments, which provide
upper bounds on the allowed masses and cross sections of dark matter.
In this context
it may be noted that the consideration of ion channelling in the NaI
crystal of the DAMA
experiment is crucial in the interpretation of its results. The
presence of channelling in NaI affects the allowed mass and cross-section
regions of the
DM particles inferred from the observation of an annual modulation
by the DAMA
collaboration. The effect of channelling has been discussed
extensively in
\cite{gondolochanneling}. For our analysis we consider the allowed
mass-cross-section limits inferred from the observed annual modulation
of DAMA for both cases $-$ with channelling and without channelling.
From Fig.\ \ref{fig:svm} it is seen that for larger values of $\delta_2$,
a higher singlet mass domain is required to stay within the
experimentally allowed region of the
$\sigma^{\rm scalar}_{\rm nucleon} -$ DM mass plane. Also,
for a fixed value of $\delta_2$, the overlap of the
$\sigma^{\rm scalar}_{\rm nucleon} - M_S$ curves with the
allowed region
becomes larger for higher Higgs mass.
In Fig.\ \ref{fig:delcon} we present plots for upper limits of the
range of $|\delta_2|$ (as a function of singlet mass)
that would reproduce cross section values
(computed with Eq.\ \ref{eqcross2})
below the 90\% CL limits of different experiments shown in
Fig.\ \ref{fig:svm}. It is evident from the plots that, as in
Fig.\ \ref{fig:svm}, the allowed values of the coupling $ \delta_2 $
are sensitive to the Higgs mass. Moreover, the minimum allowed value
of the dark matter mass in this model also depends on the Higgs mass.
The appearance of local minima in the low-$M_S$ domain of the plots
is due to the behaviour of the experimental bounds, which show a
sudden upturn followed by a rise of the curves in the low-$M_S$
($M_S \to 0$) region. For a
Higgs mass of 120 GeV (114 GeV) this minimum corresponds to a $|\delta_2|$
value of $\sim 0.5(0.1)$.
\begin{figure}[t]
\includegraphics[width=8.5cm, height=8cm, angle=0]{sigdel114.eps}
\vglue -8.0cm \hglue 8.7cm
\includegraphics[width=8.5cm, height=8cm, angle=0]{sigdel200.eps}
\caption{\label{fig:delcon}
Upper limits on $|\delta_2|$ as a function of dark matter mass from 90\%
CL experimental bounds on spin-independent WIMP-nucleon cross section.
Plots are done for 2 values of Higgs mass:
114 GeV (left panel) and 200 GeV (right panel) .}
\end{figure}
\begin{figure}[t]
\includegraphics[width=5.5cm, height=5cm, angle=0]{soudanall.eps}
\vglue -5.0cm \hglue 5.7cm
\includegraphics[width=5.5cm, height=5cm, angle=0]{xenon.eps}
\vglue -5.0cm \hglue 11.4cm
\includegraphics[width=5.5cm, height=5cm, angle=0]{xenon100.eps}
\caption{\label{fig:dk1} Shaded region: Range of the parameter space
$\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2}$
consistent with 90\% CL limits of WIMP-nucleon scattering cross section
from CDMS Soudan (All) (left panel), XENON-10 (middle panel)
and XENON-100 (right panel)
corresponding to three different values of Higgs mass -
114 GeV, 150GeV, and 200 GeV.
Dashed lines: Different iso-$M_S$ contours in
$\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2}$ plane. The region
spanned by red dots describe the values of model parameters
consistent with WMAP measurements of dark matter relic density :
$0.099<\Omega h^2<0.123$. }
\end{figure}
\begin{figure}[t]
\includegraphics[width=5.5cm, height=5cm, angle=0]{cdms2009.eps}
\vglue -5.0cm \hglue 5.7cm
\includegraphics[width=5.5cm, height=5cm, angle=0]{cdmsdama.eps}
\vglue -5.0cm \hglue 11.4cm
\includegraphics[width=5.5cm, height=5cm, angle=0]{damacogent.eps}
\caption{\label{fig:cdmsdama}
Left panel: (Shaded region) Range of the parameter space
$\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2}$
consistent with 90\% CL limits of WIMP-nucleon scattering cross section
from CDMS 2009 (Ge) for three different values of Higgs mass 114 GeV, 150 GeV
and 200 GeV. Different iso-$M_S$ contours in
$\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2}$ plane are shown
by dashed lines. The region
spanned by red dots describe the values of model parameters
consistent with WMAP measurements
of dark matter relic density :
$0.099<\Omega h^2<0.123$. Middle panel: (small thread-like regions outlined by
dashed lines) Region in parameter space
consistent with (CDMS 2009 Ge + DAMA(with channelling)) limits for $M_h$ = 114 GeV ($1^{st}$ column), 150 GeV ($2^{nd}$ column), 200 GeV ($3^{rd}$ column). The region
allowed by the CDMS 2009 (Ge) limits alone is shaded for the corresponding values of
Higgs mass. The contour for $M_S = 10$ GeV is shown by dotted lines. Right panel: Range of the parameter space
$\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2}$
consistent with 90\% CL limits of WIMP-nucleon scattering corresponding to the lower DM mass regime allowed from DAMA (without channelling) (upper panel)
and CoGeNT (lower panel) for three different values of Higgs mass: 114 GeV,
150 GeV and 200 GeV.}
\end{figure}
\subsection{Constraining the $\delta_2-\kappa_2$ parameter space}
In the previous subsection, the constraining of the parameter $\delta_2$
using direct detection experimental results was addressed.
But as described earlier, the present scalar singlet dark matter
model also depends on the other parameter, namely $\kappa_2$
(Eq.\ (\ref{eqms})).
From Eq.\ (\ref{eqms}) one sees that the singlet mass $M_S$
is governed by the two model
parameters $\delta_2$ and $\kappa_2$.
For a given value of $\delta_2$, real values of $M_S$ can be obtained
only by
excluding all $\kappa_2$ values less than $-\delta_2v^2/2$.
In principle any value of the dark matter mass $M_S$ can be realised by
all $(\delta_2,\kappa_2)$ values satisfying
$\kappa_2 + \delta_2v^2/2 = {M_S}^2$.
However at large $\kappa_2$ values with
$\kappa_2 \gg \delta_2 v^2$, the singlet mass is predominantly $\kappa_2$
driven and is scaled with it as $\sqrt{\kappa_2}$. The interplay between
$\delta_2$ and $\kappa_2$ in setting a given singlet mass is represented
in Fig.\ \ref{fig:dk1} where we plotted (dashed lines) different
iso-$M_S$ contours in
$\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2}$ plane.
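The iso-$M_S$ contours follow directly from Eq.\ (\ref{eqms}). A minimal sketch of how a contour point and the signed $\kappa_2$ coordinate of the figures can be computed is given below; the helper names are our own, and $v = 246$ GeV is taken from the text.

```python
import math

V_EW = 246.0  # electroweak VEV in GeV, as quoted in the text


def kappa2_on_contour(delta2, M_S):
    """kappa_2 fixed by Eq. (eqms): kappa_2 = M_S^2 - delta_2 v^2 / 2."""
    return M_S**2 - delta2 * V_EW**2 / 2.0


def kappa_axis(kappa2):
    """The sign(kappa_2) |kappa_2|^{1/2} coordinate used in the figures."""
    return math.copysign(math.sqrt(abs(kappa2)), kappa2)


def is_physical(delta2, kappa2):
    """A real M_S requires kappa_2 > -delta_2 v^2 / 2, i.e. M_S^2 > 0."""
    return kappa2 + delta2 * V_EW**2 / 2.0 > 0.0
```

The `is_physical` test encodes the exclusion of all $\kappa_2$ values below $-\delta_2 v^2/2$ mentioned above.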
For a given singlet mass $M_S$,
the range of $|\delta_2|$ consistent with CDMS II/XENON-10 limits
on WIMP-nucleon scattering cross section (as
discussed in Sec.\ \ref{sec:3.1}) would then correspond to a segment
of the corresponding iso-$M_S$ contour in
$\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2}$ plane.
Its projection on the $\kappa_2$ axis
would give the corresponding range
of the parameter $\kappa_2$. In Fig.\ \ref{fig:dk1}
the shaded regions represent the domains of the model-parameter space
$\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2}$, that is consistent with 90\% CL limits on WIMP-nucleon elastic
scattering cross sections from analysis of
CDMS Soudan (All) (left panel), XENON-10 (middle panel) and XENON-100 (right panel)
results. The dependence of the allowed model-parameter space
on Higgs mass is shown by plotting the allowed areas
for three different values
of Higgs mass namely 114 GeV, 150 GeV and 200 GeV.
The region of the parameter
space to the left of
$M_S = 0$ line is excluded, as points ($\delta_2,\kappa_2$)
in that region would give negative values of $M_S^2$. Moving from lower $M_S$ to
higher $M_S$ domain in the parameter space
allows more and more room for $(\delta_2,\kappa_2)$
to represent DM-nucleon scattering cross section
consistent with its experimental bounds - a feature also apparent
from Fig.\ \ref{fig:svm} and Fig.\ \ref{fig:delcon} (discussed
in Sec.\ \ref{sec:3.1}). In the very low scalar singlet mass regime
($M_S \sim 0-10$ GeV) the higher peak value of the WIMP-nucleon
scattering cross-section limit concedes
a thread-like extension of the allowed parameter space along
the corresponding iso-$M_S$ contours. The sudden drop of the
experimental limits on the WIMP-nucleon cross section
in the $10-20$ GeV mass
regime severely restricts the width of this
thread-like extension of
the allowed parameter space at $M_S \sim 0 - 10$ GeV.
This drop is more pronounced for the experiment denoted
in this work as ``CDMS Soudan (All)"
than for the ``XENON" experiments, leading to
a more prominent appearance of the thread-like zone
in the left panel of Fig.\ \ref{fig:dk1}. The $\delta_2 - \kappa_2$
parameter zone obtained from the WMAP constraint \cite{Barger:2007im}
($0.099<\Omega h^2 <0.123$, $\Omega$ being the dark matter relic density
and $h$ the Hubble parameter normalized to 100 km sec$^{-1}$ Mpc$^{-1}$)
is also shown by the red dots in Fig.\ \ref{fig:dk1}, for comparison.
From Fig.\ \ref{fig:dk1} it is seen that
the CDMS/XENON upper limits on the WIMP-nucleon scattering cross section
together with the WMAP observation allow only a small $|\delta_2|$ regime
($\la 0.2$) of the model parameter space for the higher values
of $M_S$, although for very small $M_S$ values ($0-10$ GeV) the CDMS limit
concedes more room for $|\delta_2|$ (up to $\sim 1.0$).
The results of the DAMA experiment restrict the variations of
$\sigma^{\rm scalar}_{\rm nucleon}$ with DM mass to two small
contours, represented as shaded areas in Fig.\ \ref{fig:svm}.
They represent the 99\% C.L.
regions in the ($\sigma^{\rm scalar}_{\rm nucleon}$- DM mass) plane.
Interpretation of DAMA results with channelling (without channelling) requires
WIMP-nucleon scattering cross sections of order
$10^{-41} {\rm cm}^2(10^{-40} {\rm cm}^2)$ along with two locally preferred zones of
dark matter mass -
one around $\sim 12$ GeV (referred to as DAMA-a in this work) and the other
around $\sim 70$ GeV (referred to as DAMA-b in this work).
The DAMA solution corresponding to
the large DM mass regime (DAMA-b) is completely
excluded by the observed limits on the WIMP-nucleon scattering
cross section from other direct detection
experiments such as CDMS, XENON etc. The other DAMA solution, corresponding to
the lower DM mass regime (DAMA-a), is also largely disfavoured by
XENON. However, although a small region at the lower DM mass
end of the DAMA-a zone (``with channelling" case)
is barely
consistent at 90\% C.L. with the CDMS experiment denoted in this work as
``CDMS 2009 (Ge)", the corresponding limit of the ``CDMS Soudan (All)"
experiment excludes the entire DAMA-a zone.
In the left panel of Fig.\ \ref{fig:cdmsdama} we have shown region of
$\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2} $
parameter space that corresponds to singlet-nucleon elastic
scattering cross sections within its 90\% C.L. limit from
CDMS 2009 (Ge). The middle panel of Fig.\ \ref{fig:cdmsdama} shows
the $\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2} $ parameter space, consistent with both CDMS 2009 (Ge) and DAMA results (with channelling).
The small thread like regions (marked red and
indicated within closed dashed lines)
in the middle panel of Fig.\ \ref{fig:cdmsdama}
represent the parameter space domain that fits singlet-nucleon
scattering cross section with DM-nucleon cross section consistent with
both CDMS 2009 (Ge) and DAMA limits (with channelling). These are presented
in three
different columns that correspond to three
values of Higgs mass namely 114 GeV, 150 GeV and 200 GeV.
The parameter space consistent with CDMS 2009 (Ge) limit
only for corresponding values of Higgs masses
are also shown in the respective columns by grey shades.
The iso-$M_S$ contour for $M_S = 10$ GeV, spanning through
the DAMA + CDMS 2009 (Ge) allowed regimes, is
also shown in each column of the same figure to illustrate the fact that
parameter space regions
consistent with both DAMA and CDMS 2009 (Ge) correspond
to a singlet dark matter mass of around $\sim 10$ GeV. In the right panel of
Fig.\ \ref{fig:cdmsdama} we have shown the allowed region in
the $\delta_2 - {\rm sign}(\kappa_2)|\kappa_2|^{1/2} $ parameter space
consistent with DAMA-a region without channelling (upper panel) and the
CoGeNT result (the lower panel).
\section{Predictions for Rates at Argon detector}
The Argon detector is a noble liquid detector in which the liquefied noble gas
argon is used as the target for direct detection of WIMPs. Because of its high density
and high atomic number the event rate is expected to be large. Also,
because of its high scintillation and ionization yields, owing to its low
ionization potential, it can effectively discriminate nuclear recoils
from other backgrounds due to $\gamma$-rays or electrons. The ArDM (Argon
Dark Matter) experiment at the surface at CERN uses one such detector,
which envisages one ton of liquid argon in a cylindrical container and has
a provision for three-dimensional imaging of every event.
A strong electric field along the axis of the cylinder drifts
the charge, produced by the ionization of the liquid argon by WIMP-induced
nuclear recoils, to the surface of the liquid.
This charge then enters the gaseous phase of
the detector (TPC), where it is multiplied through an avalanche and finally recorded
by a position-sensitive readout. \\
In this section we estimate WIMP signal rates for such an Argon detector.
The differential rate for dark matter scattering
detected per unit detector mass can be
written as
\begin{eqnarray}
\frac {dR} {d|{\bf q}|^2} = N_T \Phi \frac {d\sigma} {d|{\bf q}|^2} \int f(v) dv
\label{eq:drdqsq}
\end{eqnarray}
where $N_T$ is the number of target nuclei per unit mass of the detector, $\Phi$
is the dark matter flux, and $f(v)$ denotes the distribution of the
dark matter velocity $v$ (in the Earth's frame). The integration
is over all possible kinematic configurations in the scattering process.
$|\bf q|$ is the momentum transferred to the nucleus in
dark matter-nucleus scattering and $\sigma$ is the corresponding
cross section. The recoil energy of the scattered nucleus can be expressed in
terms of the momentum transfer $|\bf q|$ as
\begin{eqnarray}
E_R
&=& |{\bf q}|^2/2M_N
= m^2_r v^2 (1 - \cos\theta)/M_N
\label{eq:recoil}
\end{eqnarray}
where $M_N$ is the nuclear mass, $\theta$ is the scattering angle
in dark matter - nucleus center of momentum frame and $m_r$ is the reduced
mass given by
\begin{eqnarray}
m_r &=& \frac{M_N M_{S}}{M_N + M_{S}}\,\,.
\end{eqnarray}
In the above, $M_{S}$ is the dark matter mass. Expressing
the dark matter
flux $\Phi$ in terms of the local dark matter density $\rho_s$,
the velocity $v$ and the mass $M_{S}$, taking $N_T = 1/M_N$,
and writing $|{\bf q}|^2$ in terms
of the nuclear recoil energy $E_R$, Eq.\ (\ref{eq:drdqsq}) can be rewritten as
\begin{eqnarray}
\frac {dR} {dE_R}
&=&
2 \frac {\rho_{s}} {M_{S}} \frac {d\sigma}
{d |{\bf q}|^2} \int_{v_{min}}^\infty v f(v) dv
\label{eq:drde}
\end{eqnarray}
where
\begin{eqnarray}
v_{\rm min} &=& \left [ \frac {M_{N} E_R} {2m^2_{\rm r}} \right ]^{1/2}
\end{eqnarray}
The dark matter - nucleus differential cross-section for the scalar interaction
is given by \cite{Jungman:1995df}
\begin{eqnarray}
\frac {d\sigma} {d |{\bf q}|^2} = \frac {\sigma^{\rm scalar}}
{4 m_{\rm r}^2 v^2} F^2 (E_R) \,\,\, .
\label{eq:dsdqsq}
\end{eqnarray}
Here $\sigma^{\rm scalar}$ is the dark matter-nucleus scalar cross section
and $F(E_R)$ is the nuclear form factor, given by \cite{helm,engel}
\begin{eqnarray}
F(E_R) &=&
\left [ \frac {3 j_1(q R_1)} {q R_1} \right ]
{\rm exp} \left ( - \frac {q^2s^2}
{2} \right ) \\
R_1 &=& (r^2 - 5s^2)^{1/2} \nonumber \\
r &=& 1.2 A^{1/3} \nonumber
\end{eqnarray}
where $s (\simeq 1 ~{\rm fm})$ is the thickness parameter
of the nuclear surface, $A$ is the mass number of the nucleus,
$j_1(qR_1)$ is the spherical
Bessel function of index 1 and $q = |{\bf q}| = \sqrt{2M_NE_R}$, as follows from
Eq.\ (\ref{eq:recoil}). Assuming the distribution $f(v_{\rm gal})$
of dark matter velocities $(v_{\rm gal})$
with respect to the galactic rest frame to be Maxwellian,
one can obtain the distribution $f(v)$ of the dark matter velocity ($v$) with respect to the Earth's rest frame by
making the transformation
\begin{eqnarray}
{\bf v} = {\bf v}_{\rm gal} - {\bf v}_\oplus
\end{eqnarray}
where ${\bf v}_\oplus$ is the velocity
of the Earth with respect to the Galactic rest
frame, given as a function of time $t$ by
\begin{eqnarray}
v_\oplus &=& v_\odot + v_{\rm orb} \cos\gamma
\cos \left (\frac {2\pi (t - t_0)}{T} \right ) \label{eq:vearth}
\end{eqnarray}
In Eq.\ (\ref{eq:vearth}) $v_\odot$ is the speed of the solar system
in the Galactic rest frame, $T$ ($1$ year) is the period of the Earth's revolution
about the sun, $t_0 = 2^{\rm nd}$ June (the time of the year when the
orbital velocity of the Earth and the velocity of the solar system
point in the same direction) and $\gamma \simeq 60^o$ is the angle
between the Earth's orbital
plane and the Galactic plane. The speed of the solar system $v_\odot$ in the
Galactic rest frame is given by
\begin{eqnarray}
v_\odot &=& v_0 + v_{\rm pec}
\end{eqnarray}
$v_0$ being the circular velocity of the local system at the position of
the solar system and $v_{\rm pec} = 12 ~{\rm km/sec}$, called the
{\it peculiar velocity},
is the speed of the solar system with respect to
the local system. The physical range of $v_0$ is $170\,\, {\rm km/sec} \leq v_0 \leq 270$ km/sec (90\% C.L.) \cite{pec1,pec2}. In this work we adopt the central value $v_0 = 220$ km/sec. The term
$\cos[2\pi (t - t_0)/T]$ in the velocity is responsible for the annual modulation
of the dark matter signal. Introducing a dimensionless quantity $T(E_R)$ as
\begin{eqnarray}
T(E_R)
&=& \frac {\sqrt {\pi}} {2} v_0 \int_{v_{\rm min}}^\infty \frac {f(v)}
{v} dv\,\,
\end{eqnarray}
which can also be expressed as
\begin{eqnarray}
T(E_R) = \frac {\sqrt {\pi}} {4v_\oplus} v_0
\left [ {\rm erf} \left ( \frac{v_{\rm min} + v_\oplus} {v_0} \right )
- {\rm erf} \left ( \frac
{v_{\rm min} - v_\oplus} {v_0} \right ) \right ]
\end{eqnarray}
we obtain from Eqs.\ (\ref{eq:drde}) and (\ref{eq:dsdqsq})
\begin{eqnarray}
\frac {dR} {dE_R}
&=& \frac {\sigma^{\rm scalar}\rho_s} {4v_\oplus M_S
m_{\rm r}^2} F^2 (E_R)
\left[
{\rm erf}\left(\frac{v_{min} + v_\oplus}{v_0}\right)
- {\rm erf}\left(\frac{v_{min} - v_\oplus}{v_0}\right)
\right]
\end{eqnarray}
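The ingredients entering this differential rate can be sketched numerically as below. The conversion constants (nuclear mass $\simeq 0.931A$ GeV, $\hbar c = 0.1973$ GeV fm) and all function names are our own assumptions for illustration; fully consistent units must be restored before comparing with data.

```python
import math

HBARC_GEV_FM = 0.1973  # hbar * c, converts q from GeV to fm^-1 (assumed)
C_KM_S = 2.998e5       # speed of light in km/s


def helm_form_factor(E_R_keV, A, s_fm=1.0):
    """Helm form factor F(E_R) defined above (note the damping exponent)."""
    M_N = 0.931 * A                                            # nuclear mass, GeV
    q = math.sqrt(2.0 * M_N * E_R_keV * 1e-6) / HBARC_GEV_FM   # |q| in fm^-1
    r = 1.2 * A ** (1.0 / 3.0)
    R1 = math.sqrt(r**2 - 5.0 * s_fm**2)
    x = q * R1
    j1 = (math.sin(x) - x * math.cos(x)) / x**2                # spherical Bessel j_1
    return (3.0 * j1 / x) * math.exp(-(q * s_fm) ** 2 / 2.0)


def v_min_km_s(E_R_keV, M_N_GeV, M_S_GeV):
    """v_min = sqrt(M_N E_R / 2 m_r^2), converted to km/s."""
    m_r = M_N_GeV * M_S_GeV / (M_N_GeV + M_S_GeV)
    return C_KM_S * math.sqrt(M_N_GeV * E_R_keV * 1e-6 / (2.0 * m_r**2))


def velocity_bracket(v_min, v_earth, v0):
    """The erf difference multiplying F^2(E_R) in dR/dE_R."""
    return math.erf((v_min + v_earth) / v0) - math.erf((v_min - v_earth) / v0)
```

As a sanity check, $F(E_R) \to 1$ as $E_R \to 0$, and the velocity bracket decreases monotonically as $v_{\rm min}$ grows.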
\begin{figure}[t]
\includegraphics[width=8.5cm, height=8cm, angle=0]{ratear.eps}
\vglue -8.0cm \hglue 8.5cm
\includegraphics[width=8.5cm, height=8cm, angle=0]{annvarar.eps}
\caption{\label{fig:rate}
Left Panel: Plot of predictions for dark matter detection rates (per kg per day per keV)
in Argon detector as a function of observed recoil energy. The plots are shown
for two values of $M_h$ - $114$ GeV and $200$ GeV and for two sets of
($M_S, \delta_2$) values which are consistent with CDMS limits as well as
observed relic density of dark matter (WMAP). In the inset we show
the corresponding
plot for $M_h = 114$ GeV and for two sets of ($M_S, \delta_2$) values simultaneously
consistent with limits on scattering cross section from CDMS 2009 (Ge) and DAMA
and also with WMAP. Right panel: Predicted annual
variation of event rates in Argon detector over one year for $M_h = 114$ (GeV).
The upper panel corresponds to a ($M_S, \delta_2$) value consistent
with CDMS+DAMA+WMAP while the lower panel corresponds to a
($M_S, \delta_2$) value consistent with CDMS+WMAP.}
\end{figure}
The local dark matter
density $\rho_s$ may be taken as 0.3 GeV/cm$^3$.
The observed recoil energy $(E)$ in the
measured response of the detector is a
fraction $(Q_X)$ of the actual recoil energy $(E_R)$
at the time of scattering. This fraction $Q_X = E/E_R$ (called the
quenching factor) is different for different scattered nuclei $X$.
For $^{39}$Ar, $Q_{\rm Ar} = 0.76$.
Thus the differential rate in terms of the observed recoil energy
$E$ for a $^{39}$Ar detector
can be expressed as
\begin{equation}
\frac {\Delta R} {\Delta E} (E) =
\displaystyle\int^{(E + \Delta E)/Q_{\rm Ar}}_{E/Q_{\rm Ar}}
\frac {dR_{\rm Ar}} {dE_R} (E_R) \frac {\Delta E_R} {\Delta E}
\end{equation}
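The conversion from the true recoil spectrum to the observed one can be sketched as a midpoint-rule average over the window $[E/Q_{\rm Ar}, (E+\Delta E)/Q_{\rm Ar}]$. Here `dRdER` stands for any model of $dR_{\rm Ar}/dE_R$ and is supplied by the caller, so the snippet is illustrative only.

```python
def observed_rate(dRdER, E_keV, dE_keV, Q=0.76, n_steps=200):
    """Average dR/dE over [E/Q, (E+dE)/Q] per unit *observed* energy.

    Implements the integral above with a midpoint sum; Q = 0.76 is the
    quenching factor quoted for argon in the text.
    """
    lo, hi = E_keV / Q, (E_keV + dE_keV) / Q
    h = (hi - lo) / n_steps
    integral = sum(dRdER(lo + (i + 0.5) * h) for i in range(n_steps)) * h
    return integral / dE_keV
```

For a flat true-recoil spectrum the observed rate is simply enhanced by $1/Q_{\rm Ar}$, which provides a convenient sanity check of the implementation.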
In left panel of Fig.\ \ref{fig:rate} we show the expected differential
rates (/kg/day/keV) for different observed recoil energies in Argon detector
considering scalar singlet as the dark matter candidate.
Four representative cases are plotted, denoted (a), (b), (c) and (d).
For all the plots (a)-(d), the chosen
values of the coupling $\delta_2$ (as also the corresponding scalar singlet masses $M_S$)
are consistent with the current CDMS and WMAP limits.
All the plots show that the
rate falls off with the increase of recoil energy.
Plots (a) and (b) show the variation of the rate for the same
Higgs mass ($M_h = 114$ GeV) and singlet mass ($M_S = 100$ GeV)
but for different values of the coupling $\delta_2$ ($\delta_2 = -0.1 (-0.03)$
for plot (a) ((b))). Plots (a) and (b) show a decrease of the rate when
$|\delta_2|$ decreases. For example, at a recoil energy of $E = 50$
keV the calculated rates from plots (a) and (b)
are $8.5 \times 10^{-6}$ (for $|\delta_2| = 0.1$)
and $7.2 \times 10^{-7}$ (for $|\delta_2| = 0.03$) respectively, in units
of /kg/day/keV.
Plots (c) and (d) compare the variation of the rates for two different
scalar masses, namely $M_S = 200$ GeV (plot (c)) and $M_S = 100$ GeV
(plot (d)), for the same values of $M_h, \ \delta_2$ (200 GeV, $-0.03$). From
plots (c) and (d) it is seen that, at any particular value of the
recoil energy, the rate increases with decreasing dark matter mass.
For example, at $E = 50$ keV the calculated rates for $M_S= 100$ GeV (plot (d))
and $M_S = 200$ GeV (plot (c)) are $8 \times 10^{-8}$
and $1.4 \times 10^{-8}$ respectively, in units of /kg/day/keV.
This is evident from the
expression for the scalar cross section (Eq.\ (\ref {eqcross2})), which
varies as $M_S^{-2}$, while the direct detection rate is linear in the scalar cross section.
One can compare plots (b) and (d) to see the effect of the Higgs mass
on the rate. For ($M_S, \ \delta_2$) = (100 GeV, $-0.03$), the estimated rate in the
present calculation at
$E = 50$ keV is $7.2 \times 10^{-7}$ (/kg/day/keV) for $M_h = 114$ GeV (plot (b)),
which is reduced to $1.4 \times 10^{-8}$ (/kg/day/keV) for $M_h = 200$
GeV (plot (d)). In the inset of the
same figure we show the calculated prediction of the rates for
$M_S = 10$ GeV, $\delta_2 = 0.4$. This is compatible with
the CDMS and DAMA bounds together, besides satisfying the WMAP
limits. The Higgs mass is kept at 114 GeV. As an example,
for $E = 30$ keV the calculated expected event rate is
$1.4 \times 10^{-3}$ per day for a ton of the detector.
A very distinctive signature of dark matter in the direct detection method
is the annual variation of the detection rate. This periodic variation arises from
the periodic motion of the Earth about the sun, in which the directionality
of the Earth's motion changes continually over the year. As a result, there is
an annual variation in the amount
of dark matter encountered by the Earth. The detection of an annual variation
in direct detection experiments would serve as a smoking-gun signal for the existence
of dark matter.
In the right panel of Fig.\ \ref{fig:rate} we show the calculated annual variation
of the event rate (/kg/day) at different times of the year. In the upper panel,
we have chosen the parameter set that is compatible with the
CDMS, DAMA and WMAP limits (as in the inset of the left panel), and
in the lower panel we give the results for
$M_h = 114$ GeV, $M_S = 100$ GeV and $\delta_2 = -0.03$.
As expected,
the yield is maximum on 2nd June, when the direction of motion of the Earth
is the same as that of the solar system.
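The expected June-2 maximum follows directly from Eq.\ (\ref{eq:vearth}). A hedged sketch is given below; the orbital speed $v_{\rm orb} \simeq 29.8$ km/s is our own assumption, as the text does not quote it, while $v_\odot = v_0 + v_{\rm pec} = 232$ km/s and $\gamma \simeq 60^o$ follow from the values above.

```python
import math


def v_earth_km_s(t_days, v_sun=232.0, v_orb=29.8,
                 gamma_deg=60.0, t0_days=152.5, T_days=365.25):
    """Eq. (eq:vearth): Earth's speed in the Galactic rest frame.

    v_sun = v_0 + v_pec = 220 + 12 km/s from the text; t0_days ~ June 2;
    v_orb is an assumed value of the Earth's orbital speed.
    """
    phase = 2.0 * math.pi * (t_days - t0_days) / T_days
    return v_sun + v_orb * math.cos(math.radians(gamma_deg)) * math.cos(phase)


# The modulation peaks at t = t0 (June 2) and bottoms half a year later:
print(v_earth_km_s(152.5), v_earth_km_s(152.5 + 365.25 / 2))
```

The peak-to-trough difference, $2 v_{\rm orb}\cos\gamma \simeq 30$ km/s, sets the scale of the annual modulation of the rate.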
\section{Summary and Conclusions}
In the present work we consider the simplest extension of the SM, introducing
a real scalar singlet along with a discrete $Z_2$ symmetry
which ensures the stability of the singlet. Such a singlet is a
viable cold dark matter candidate.
The scattering of this singlet dark matter off nuclei in the detector
can be observed by measuring the energy of the recoil nuclei.
The calculated singlet-nucleon scattering cross section in this model
explicitly depends on the coupling $\delta_2$ and implicitly on $\kappa_2$
(as defined in Eq.\ (\ref{eqpot})).
We constrain the $\delta_2 - \kappa_2$
parameter space using the recent bounds on the
WIMP-nucleon scalar cross section as function of WIMP mass,
reported by the CDMS collaboration and also reported by the XENON-10
and XENON-100 collaborations.
The allowed zones in the parameter space follow a typical pattern determined
by the shape of the WIMP mass dependence of the experimental limits
considered here
and the way the scalar singlet mass
is related to $\delta_2$ and $\kappa_2$. The allowed zones vary for different
Higgs masses and they are consistent with WMAP limits.
We investigate the effect of the inclusion of DAMA results (with
channelling) on the
$\delta_2 - \kappa_2$ parameter space but the allowed zone is found to
be extremely small and representative of a very low dark
matter mass (around $\sim 10$ GeV). We also compute the range of
$\delta_2 - \kappa_2$ parameter space consistent with
DAMA (without channelling) and CoGeNT experiments.
Utilising the constrained parameter space we estimate the possible
detection rates and their annual variations for a liquid Argon detector.\\
{\bf Acknowledgments:} We thank Probir Roy and Biswajit Adhikary
for useful discussion. S.C. acknowledges support from the projects
`Centre for Astroparticle Physics' and `Frontiers of Theoretical Physics'
of Saha Institute of Nuclear Physics. A.G. and D.M. acknowledge
the support from the DAE project
`Investigating Neutrinoless Double Beta Decay, Dark Matter and GRB'
of Saha Institute of Nuclear Physics.
\section{Introduction}
\input{section1}
\section{The Model}
\input{section2}
\section{Fixed Points and Stable Manifolds}
\input{section3}
\section{Successful Therapy}
\input{section4}
\section{Conclusion}
\input{section5}
\section{Introduction. An example from theory of ideal gases}
It is well known that the internal energy $U$ is a function of the entropy $S$ and volume $V$ of a system:
\begin{equation}\label{b1}
U=U(S,V).
\end{equation}
It is equally well known that the free energy $F$ is a function of the temperature $T$ and volume $V$ of a system:
\begin{equation}\label{b2}
F=F(T,V).
\end{equation}
We can say that entropy and volume are the eigen-arguments of the internal energy, just as temperature and volume are the eigen-arguments of the free energy. It is also well known that the internal energy of an ideal gas is simply expressed in terms of temperature \cite{b64}:
\begin{equation}\label{b3}
U=U_{0}+C_{V}T,
\end{equation}
where $C_{V}$ is the heat capacity at constant volume.
At the same time, the internal energy of an ideal gas expressed in terms of its eigen-arguments looks more complicated. Indeed, the expression for the entropy of an ideal gas is
\begin{equation}\label{b4}
\frac{S}{\nu}-S_{0}=C_{V}\ln T+R \ln \frac{V}{N}=\ln[(T)^{C_{V}}(\frac{V}{N})^{R}],
\end{equation}
where $\nu$ is the number of moles, $S_{0}$ is a constant, $R$ is the molar gas constant and $N$ is the number of particles \cite{b64}. From (\ref{b4}) one can obtain expressions for the temperature and the internal energy:
\begin{eqnarray}\label{b5}
\nonumber
T=\left(\frac{V}{N}\right)^{-\frac{R}{C_{V}}}\exp\left[\frac{1}{C_{V}}\left(\frac{S}{\nu}-S_{0}\right)\right], \\
U=C_{V}\left(\frac{V}{N}\right)^{-\frac{R}{C_{V}}}\exp\left[\frac{1}{C_{V}}\left(\frac{S}{\nu}-S_{0}\right)\right],
\end{eqnarray}
Comparing with (\ref{b3}) one can see that the internal energy as a function of its eigen-arguments looks more complicated than as a function of a foreign variable. Nevertheless, the expression for the internal energy in the form (\ref{b5}) is more correct from the point of view of thermodynamics. At least one can calculate the temperature $T$ and pressure $P$ by the standard procedure of differentiating the internal energy:
\begin{equation}\label{b6}
T=\frac{\partial U}{\partial S},
\quad P=-\frac{\partial U}{\partial V}.
\end{equation}
One can easily verify the identity of the temperatures following from Eqs.\ (\ref{b5}) and (\ref{b6}) with the help of the equation of state of ideal gases, $RT=PV$.
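This consistency check is easy to carry out numerically. The sketch below (one mole, $S_0=0$, illustrative units, our own function names) differentiates the $U(S,V)$ that follows from the entropy expression (\ref{b4}) and verifies both $T=\partial U/\partial S$ and $PV=RT$; $C_V = \tfrac{3}{2}R$ is an assumed monatomic value.

```python
import math

R = 8.314           # molar gas constant
C_V = 1.5 * R       # assumed monatomic heat capacity at constant volume


def internal_energy(S, VoverN, S0=0.0):
    """U(S, V) following from Eq. (b4), for nu = 1 mole."""
    return C_V * VoverN ** (-R / C_V) * math.exp((S - S0) / C_V)


def check(S=10.0, VoverN=2.0, h=1e-6):
    T = VoverN ** (-R / C_V) * math.exp(S / C_V)                 # temperature
    dUdS = (internal_energy(S + h, VoverN)
            - internal_energy(S - h, VoverN)) / (2.0 * h)        # Eq. (b6)
    P = -(internal_energy(S, VoverN + h)
          - internal_energy(S, VoverN - h)) / (2.0 * h)          # Eq. (b6)
    return T, dUdS, P * VoverN / (R * T)  # the last ratio tests PV = RT
```

Both finite-difference derivatives agree with the analytic relations to the accuracy of the step size $h$.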
Thus, in this example we see that some relations of thermodynamics look simpler in terms of variables which are not their eigen-arguments. In the following sections we will see that this `rule' holds in more general cases as well.
\section{Principle of minimum of free energy}
If a solid consisting of $N$ particles has $n$ structural defects (e.g., vacancies, substituted atoms, etc.) then the equilibrium or steady state can be found from the maximum of the probability distribution function, taken in the form \cite{f55, ll69, s89, g97}:
\begin{equation}\label{b7}
f(n)=C\dfrac{(N+n)!}{N!n!}\exp(-\dfrac{U(n)}{kT}),
\end{equation}
where $C$ is a normalization factor, $U(n)$ is the internal energy depending on the number of structural defects, and $k$ is Boltzmann's constant. The pre-exponential factor describes the combinatorial, that is, entropic, part of the distribution function, connected with the degeneracy of microstates. The exponential factor describes the restrictive part of the distribution function, connected with the overcoming of potential barriers between microstates. In the quadratic approximation
\begin{equation}\label{b8}
U=U_{0}+u_{0}n-\frac{1}{2}u_{1}n^2,
\end{equation}
where $u_{0}$ and $u_{1}$ are some constants.
Absorbing the factors independent of the number of defects $n$ into the inessential constant $C$, the expression (\ref{b7}) can be written in the form:
\begin{equation}\label{b9}
f(n)=C\dfrac{(N+n)!}{n!}\exp(-\dfrac{u_{0}n-\frac{1}{2}u_{1}n^2}{kT}),
\end{equation}
or in a form of product:
\begin{equation}\label{b10}
f(n)=C\prod_{i=1}^{N}(n+i)\exp(-\dfrac{u_{0}n-\frac{1}{2}u_{1}n^2}{kT}).
\end{equation}
By differentiating it, we obtain
\begin{equation}\label{b11}
\frac{\partial f(n)}{\partial n}=(\sum_{k=1}^{N+n}\frac{1}{k}-\sum_{k=1}^{n}\frac{1}{k}-\frac{u_{0}-u_{1}n}{kT})f(n).
\end{equation}
The extremum of the probability distribution function is reached at the $n$ obeying the following transcendental equation:
\begin{equation}\label{b12}
\sum_{k=1}^{N+n}\frac{1}{k}-\sum_{k=1}^{n}\frac{1}{k}-\frac{u_{0}-u_{1}n}{kT}=0.
\end{equation}
From the tabulated asymptotics of partial sums one can find \cite{gr07}:
\begin{equation}\label{b13}
\sum_{k=1}^{n}\frac{1}{k}=C+\ln n +\frac{1}{2n},
\end{equation}
where $C$ is some constant. Substituting (\ref{b13}) into (\ref{b12}), for the case $N\gg n\gg 1$ we obtain:
\begin{equation}\label{b14}
n=N\exp(-\frac{u}{kT}),
\end{equation}
where
\begin{equation}\label{b15}
u\equiv\frac{\partial U}{\partial n}=u_{0}-u_{1}n
\end{equation}
is the energy of a defect. As is evident from the last formula, the defect energy is not strictly constant but depends on the total number of defects. The relation (\ref{b14}) is the equation of state for the equilibrium case; the relation (\ref{b15}) is an equation of state too, but for the more general nonequilibrium case, which includes the equilibrium state as a particular case. Eqs. (\ref{b14}) and (\ref{b15}) must be considered together, as a set of equations determining both the defect energy $u_{e}$ and the defect density $n_{e}$ in the equilibrium state.
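A minimal numerical sketch of solving the coupled equations of state (\ref{b14}) and (\ref{b15}) by fixed-point iteration; all parameter values are illustrative, not taken from the paper:

```python
import math

# Solve n = N*exp(-u/kT) with u = u0 - u1*n self-consistently
# (Eqs. b14 and b15); all parameter values are illustrative.
N, u0, u1, kT = 1.0e6, 0.8, 0.02, 0.05

n = 0.0
for _ in range(200):                     # simple fixed-point iteration
    u = u0 - u1 * n                      # defect energy depends on n (b15)
    n_new = N * math.exp(-u / kT)        # equilibrium condition (b14)
    if abs(n_new - n) < 1e-12:
        break
    n = n_new

# At convergence, both equations of state hold simultaneously.
assert abs(n - N * math.exp(-(u0 - u1 * n) / kT)) < 1e-8
```

The iteration converges quickly here because the feedback of $n$ on $u$ is weak; for stronger coupling a damped iteration or Newton's method would be preferable.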
Thus, the equation of state (\ref{b14}) is obtained from the condition of the most probable state as the maximum of the probability distribution function (\ref{b7}). The same result can be obtained from the principle of minimum free energy. Indeed, the pre-exponential factor in (\ref{b7}) is the thermodynamic probability \cite{f55, bbe92}
\begin{equation}\label{b16}
W=\dfrac{(N+n)!}{N!n!},
\end{equation}
the logarithm of which gives the configurational entropy $S_{c}=k\ln W$. Note that the configurational entropy is a single-valued function of the number of defects; it is entirely independent of the defect energy (and of the temperature as well).
The procedure described above can then be schematically displayed as \cite{s89}
\begin{equation}\label{b17}
W\exp(-\dfrac{U(n)}{kT})\rightarrow max
\end{equation}
or after logarithmic operation in the form
\begin{equation}\label{b18}
\ln W-\dfrac{U(n)}{kT}\rightarrow max.
\end{equation}
Inverting the sign, we come to the free energy minimization principle
\begin{equation}\label{b19}
U(n)-kT\ln W\equiv U-TS_{c}=F_{c}\rightarrow min.
\end{equation}
Nevertheless, this remarkable result contains a contradiction. Indeed, the product $TS_{c}$ entering the definition of the free energy $F_{c}$ is bound energy, which is unavailable for the production of work by the system. On the other hand, the total energy of defects is, for the most part, also energy unavailable for the production of work; only a small part of it remains available. We can therefore write
\begin{equation}\label{b20}
TS_{c}\approx un
\end{equation}
Now we can introduce a new specific kind of free energy by subtracting the bound energy, in the form of the product $un$, from the internal energy (\ref{b8}):
\begin{equation}\label{b21}
\tilde{F_{c}}=U-un=U_{0}+\dfrac{1}{2u_{1}}(u_{0}-u)^{2}.
\end{equation}
Here we have used the equation of state (\ref{b15}) to eliminate the defect density. It is easy to establish that
\begin{equation}\label{b22}
n=-\frac{\partial \tilde{F_{c}}}{\partial u}.
\end{equation}
The relations (\ref{b15}) and (\ref{b22}) form a coupled pair of equations connecting the internal energy $U$ and the modified configurational free energy $\tilde{F_{c}}$ on one side, and the defect density $n$ and defect energy $u$ on the other. One can see that the defect energy is the eigen-argument of the internal energy, while the defect density is the eigen-argument of the modified configurational free energy. In this respect, the exact free energy $F_{c}$, according to (\ref{b16}) and (\ref{b19}), is expressed through the variable $n$, which is not its eigen-argument.
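The reciprocity of the relations (\ref{b15}) and (\ref{b22}) can be checked symbolically; the sketch below (using \texttt{sympy}) starts from the quadratic internal energy (\ref{b8}):

```python
import sympy as sp

n, u = sp.symbols('n u')
U0, u0, u1 = sp.symbols('U_0 u_0 u_1', positive=True)

U = U0 + u0*n - sp.Rational(1, 2)*u1*n**2     # internal energy (b8)
u_of_n = sp.diff(U, n)                        # defect energy u = u0 - u1*n (b15)
n_of_u = sp.solve(sp.Eq(u, u_of_n), n)[0]     # invert: n = (u0 - u)/u1

F = sp.simplify((U - u_of_n*n).subs(n, n_of_u))   # Legendre-like transform (b21)

assert sp.simplify(F - (U0 + (u0 - u)**2/(2*u1))) == 0
assert sp.simplify(-sp.diff(F, u) - n_of_u) == 0  # recovers n = -dF/du (b22)
```

The transform of the quadratic energy is itself quadratic in $u$, and differentiating it with respect to $u$ indeed returns the defect density with the opposite sign.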
We have the same situation as in the ideal-gas example of the previous section. Namely, the free energy expressed through the foreign `argument' obeys a simple fundamental property, the minimization principle, whereas expressed through its eigen-argument it does not, and additional operations are required to find the equilibrium parameters. From the thermodynamic point of view, however, the expression of the free energy through its eigen-argument is more correct, as it allows one to use notions close to those of equilibrium thermodynamics in nonequilibrium cases.
\section{KINETIC EQUATIONS}
Because the energy needed for the formation of a new defect is smaller in the presence of other defects than in a defect-free crystal, the quadratic term in (\ref{b8}) has a negative sign. Note that expression (\ref{b8}) holds both for equilibrium and non-equilibrium states. In this approximation the internal energy is a convex function of the defect number, with a maximum at the point $n=n_{max}$, as shown in Fig. \ref{f1}a.
\begin{figure*}
\includegraphics [width=2.7 in]{fig_1a}
\quad \quad
\includegraphics [width=2.7 in]{fig_1b}
\caption{\label{f1} Plots of the internal (a) and free (b) energy versus its eigen arguments. Tendency of the system to the equilibrium or steady state is indicated by arrows.}
\end{figure*}
In the same approximation the modified configurational free energy is a concave function with a minimum at the point $u=u_{0}$, as shown in Fig. \ref{f1}b.
With relationships (\ref{b14}) and (\ref{b15}), it is easy to show that the steady state corresponds neither to the maximum of the internal energy nor to the minimum of the free energy. The steady state is at point $n=n_{e}$, where
\begin{equation}\label{b23}
u_{e}=\dfrac{\partial U}{\partial n_{e}},
\quad n_{e}=-\dfrac{\partial \tilde{F_{c}}}{\partial u_{e}}.
\end{equation}
Here the additional subscript $e$ denotes the equilibrium value of a variable.
If a system has deviated from the steady state, it should tend back to that state with a speed that is higher the larger the deviation is \cite{m07, m08, m09}:
\begin{equation}\label{b24}
\dfrac{\partial n}{\partial t}=\gamma_{n}(\dfrac{\partial U}{\partial n}-u_{e}),
\quad \dfrac{\partial u}{\partial t}=-\gamma_{u}(\dfrac{\partial \tilde{F_{c}}}{\partial u}-n_{e}),
\end{equation}
Both variants of the kinetic equations are equivalent, and which one to use is a matter of convenience. The form of the kinetic equations (\ref{b24}) is symmetric with respect to the use of the internal and configurational free energy. The signs on the right-hand sides of Eq. (\ref{b24}) are chosen from solution stability, the internal energy being a convex function and the free energy a concave one. On the right-hand side of the well-known Landau-Khalatnikov kinetic equation \cite{lh54}
\begin{equation}\label{b25}
\dfrac{\partial n}{\partial t}=-\gamma \dfrac{\partial F_{c}}{\partial n}
\end{equation}
the ``chemical potential'' is used in the form:
\begin{equation}\label{b26}
\mu =\dfrac{\partial F_{c}}{\partial n}.
\end{equation}
From the thermodynamic point of view, such a variable is not really a chemical potential, as it is specified through an `argument' foreign to the free energy. This does not, however, hinder the use of this notation in practical work, as it directly realizes the minimization principle for the free energy.
If we assume that the equilibrium defect energy $u_{e}$ and number of defects $n_{e}$ change slowly under external action, then we can bring them under the differentiation sign in (\ref{b24}) and define new (shifted) kinds of internal and free energy:
\begin{equation}\label{b27}
\bold{U} =U-u_{e}n,
\quad \tilde{\bold{F_{c}}} =\tilde{F_{c}}-un_{e}.
\end{equation}
Then equations (\ref{b24}) are simplified a little:
\begin{equation}\label{b28}
\dfrac{\partial n}{\partial t}=\gamma_{n}\dfrac{\partial \bold{U}}{\partial n},
\quad \dfrac{\partial u}{\partial t}=-\gamma_{u}\dfrac{\partial \tilde{\bold{F_{c}}}}{\partial u}.
\end{equation}
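The relaxation toward the steady state described by the kinetic equations (\ref{b24}) can be illustrated by a simple Euler integration of the first equation; the rate constant and energies below are illustrative:

```python
# Euler integration of dn/dt = gamma_n*(dU/dn - u_e)  (first of Eqs. b24),
# with dU/dn = u0 - u1*n from the quadratic form (b8).
# All numerical values are illustrative.
u0, u1, u_e, gamma_n = 1.0, 0.1, 0.4, 2.0
n_e = (u0 - u_e) / u1                 # steady state: dU/dn = u_e  =>  n_e = 6.0

n, dt = 0.0, 1e-3
for _ in range(100_000):
    n += dt * gamma_n * ((u0 - u1 * n) - u_e)

assert abs(n - n_e) < 1e-6            # n relaxes exponentially toward n_e
```

With the quadratic energy the equation is linear in $n$, so the deviation from $n_e$ decays exponentially with rate $\gamma_n u_1$.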
The original potentials $U$ and $\tilde{F_{c}}$ are connected by means of a Legendre-like transformation:
\begin{equation}\label{b29}
\tilde{F_{c}}=U-un.
\end{equation}
The shifted potential $\bold{U}$ and $\tilde{\bold{F_{c}}}$ are connected by means of transformation:
\begin{equation}\label{b30}
\tilde{\bold{F_{c}}}=\bold{U}-un-(un_{e}-u_{e}n),
\end{equation}
which differs from the Legendre-like transformation by Poisson-like bracket $[un]=un_{e}-u_{e}n$.
The stationary point of the shifted potentials coincides with a maximum of $\bold{U}$ and with a minimum of $\tilde{\bold{F_{c}}}$. Thus $\bold{U}$ is an effective thermodynamic potential for which the tendency of the original part of the internal energy toward a minimum is completely compensated by the entropic factor. The twice-modified configurational free energy $\tilde{\bold{F_{c}}}$ tends to a minimum, but this tendency differs from that of the original configurational free energy $F_{c}$: the effective potential $\tilde{\bold{F_{c}}}$ tends to a minimum in the space of its eigen-argument $u$, whereas the original free energy $F_{c}$ tends to a minimum in the space of the foreign `argument' $n$.
\section{CONCLUDING REMARKS}
In this paper, a phenomenological approach based on a generalization of the Landau technique is considered. For fast processes thermal fluctuations have no time to exert an essential influence, and the problem can be considered in the mean-field approximation. The approach is based not on an abstract order parameter but on physical parameters of structural defects: their quantity (density) and average energy. A new, more general form of kinetic equations, symmetric with respect to the use of the internal energy $U$ and the modified configurational free energy $\tilde{F_{c}}$, is proposed. In this case, the defect density and the defect energy are related by symmetric differential dependences of the type (\ref{b15}), (\ref{b22}) and (\ref{b23}). Because the defect energy in the steady state is not equal to zero, the extremal principle of vanishing derivative of the free energy with respect to the `order parameter' breaks down in the framework of nonequilibrium evolution thermodynamics. This principle must be replaced by the principle of the tendency toward a steady state. Steady-state characteristics cannot be determined within the phenomenological approach; statistical and microscopic approaches are required.
The present form of kinetic equations can be generalized to all types of regularly or randomly distributed defects.
\begin{acknowledgments}
The work was supported by the budget topic № 0106U006931 of NAS of Ukraine and partially by the Ukrainian state fund of fundamental researches (grant F28.7/060). The author thanks A. Filippov for helpful discussions, and also thanks the Referee for useful remarks and comments.
\end{acknowledgments}
\section{Introduction}
In one-dimensional quantum systems, a completely different behavior
of integer spin chains from that of half-integer spin chains was
predicted by Haldane \cite{Haldane1,Haldane2}.
The antiferromagnetic isotropic
spin-1 model introduced by Affleck, Kennedy, Lieb and Tasaki
(AKLT model) \cite{AKLT},
whose groundstate can be calculated exactly, has been a useful toy model
for the deep understanding of Haldane's prediction of massive behavior
for integer spin chains, leading for instance to the
discovery of a special type of long-range order \cite{DR,Tasaki}.
The AKLT model has been generalized to
higher spin models, anisotropic models, etc \cite{AAH,KSZ1,KSZ2,KSZ3,Osh,TS1,BY,GR,SR,TZX,TZXLN,KM,AHQZ}.
The Hamiltonians are essentially linear combinations
of projection operators with nonnegative coefficients.
In this paper we consider the anisotropic integer spin-$S$ Hamiltonian
\begin{align}
&H=\sum_{k=1}^L H(k,k+1), \\
&H(k,k+1)=\sum_{J=S+1}^{2S}C_J(k,k+1) \pi_{J}(k,k+1),
\end{align}
where $C_J(k,k+1) \ge 0$, and $\pi_{J}(k,k+1)$, which acts
on the $k$-th and $(k+1)$-th site,
is the $U_q(su(2))$ projection operator
from $V_{S} \otimes V_{S}$ onto $V_J$, where $V_j$ is the $(2j+1)$-dimensional
representation of the quantum group $U_q(su(2))$ \cite{Drinfeld,Jimbo}.
We determine the matrix product representation for the groundstate,
which is useful for calculations of correlation functions.
In the $q \to 1$ limit or for $S=1$, it recovers the known results
for the isotropic spin-$S$ model or the anisotropic spin-1 model,
respectively \cite{KSZ2,KSZ3,TS1,TS2}.
Several correlation functions are evaluated from the matrix product representation.
This paper is organized as follows. In the next section, we
briefly review the quantum group $U_q(su(2))$.
By use of the Weyl representation of $U_q(su(2))$, we
construct a boson representation for the valence-bond-solid (VBS) groundstate.
The matrix product representation for the VBS state is constructed in section 3,
from which several correlation
functions are evaluated for $S=2$ and $S=3$. Section 4 is devoted to conclusion.
\section{Schwinger boson representation of the groundstate}
The quantum group $U_q(su(2))$ is defined by generators $X^+, X^-,H$
with relations
\begin{align}
{[} X^+, X^- {]}=\frac{q^H-q^{-H}}{q-q^{-1}}, \ \ [H, X^{\pm}]=\pm 2 X^{\pm}.
\end{align}
The comultiplication is given by
\begin{align}
\Delta(X^+)&=X^+ \otimes q^{H/2}+q^{-H/2} \otimes X^+, \\
\Delta(X^-)&=X^- \otimes q^{H/2}+q^{-H/2} \otimes X^-, \\
\Delta (H)&=H \otimes 1+1 \otimes H.
\end{align}
For convenience, let us define $q$-integer, $q$-factorial and $q$-binomial coefficients
as
\begin{align}
[n]_q=\frac{q^n-q^{-n}}{q-q^{-1}}, \ [n]_q!=\prod_{k=1}^n [k]_q, \
\left[
\begin{array}{c}
n \\
k
\end{array}
\right]_q
=\frac{[n]_q!}{[k]_q![n-k]_q!}.
\end{align}
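For numerical work with these objects, the $q$-deformed quantities can be implemented directly from the definitions; exact rational arithmetic avoids round-off:

```python
from fractions import Fraction

def q_int(n, q):
    """q-integer [n]_q = (q**n - q**(-n)) / (q - q**(-1))."""
    return (q**n - q**(-n)) / (q - q**(-1))

def q_fact(n, q):
    """q-factorial [n]_q! = product of [k]_q for k = 1..n."""
    out = 1
    for k in range(1, n + 1):
        out *= q_int(k, q)
    return out

def q_binom(n, k, q):
    """Symmetric q-binomial coefficient [n over k]_q."""
    return q_fact(n, q) / (q_fact(k, q) * q_fact(n - k, q))

q = Fraction(3, 2)                            # exact rational q avoids round-off
assert q_int(2, q) == q + 1/q                 # [2]_q = q + q^{-1} = 13/6
assert q_binom(5, 2, q) == q_binom(5, 3, q)   # symmetry [n,k]_q = [n,n-k]_q
```

These helpers suffice to evaluate all the $q$-deformed expressions appearing below.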
$U_q(su(2))$ has the Schwinger boson representation
\cite{qboson1,qboson2,qboson3}.
Introducing two $q$-bosons $a$ and $b$ satisfying
\begin{align}
&aa^{\dagger}-q a^{\dagger} a=q^{-N_a}, \ \ bb^{\dagger}-q b^{\dagger} b=q^{-N_b}, \\
&[N_a, a]=-a, \ \ [N_a, a^{\dagger}]=a^{\dagger}, \ \
[N_b, b]=-b, \ \ [N_b, b^{\dagger}]=b^{\dagger},
\end{align}
$U_q(su(2))$ can be realized through the relations
\begin{align}
X^+=a^{\dagger} b, \ \ X^-=b^{\dagger} a, \ \ H=N_a-N_b.
\end{align}
The basis of $(2j+1)$-dimensional representation $V_j$ is given by
\begin{align}
|j,m \ket=\frac{(a^{\dagger})^{j+m}(b^{\dagger})^{j-m}}{([j+m]_q! [j-m]_q!)^{1/2}}
|\mathrm{vac} \ket, \ \ (m=-j, \dots, j).
\end{align}
We construct the VBS groundstate in terms of Schwinger bosons, following
the arguments of \cite{KX}.
Let us denote the $q$-bosons $a$ and $b$ acting on the $l$-th site as $a_l$ and $b_l$.
We utilize the Weyl representation of $U_q(su(2))$
\cite{weyl1,weyl2} for convenience.
$a_l^{\dagger}$ and $b_l^{\dagger}$ are represented as multiplication by
variables $x_l$ and $y_l$ on the space of polynomials $\mathbb{C}[x_l,y_l]$,
respectively.
$a_l$ and $b_l$ are represented as difference operators
\begin{align}
a_l=\frac{1}{(q-q^{-1})x_l}(D_q^{x_l}-D_{q^{-1}}^{x_l}), \ \
b_l=\frac{1}{(q-q^{-1})y_l}(D_q^{y_l}-D_{q^{-1}}^{y_l}),
\end{align}
where
\begin{align}
D_{p}^{x_l}f(x_l,y_l)=f(px_l,y_l), \ \ D_{p}^{y_l}f(x_l,y_l)=f(x_l,py_l).
\end{align}
Then, at the $l$-th site, one has
\begin{align}
X_l^{+}=\frac{x_l}{(q-q^{-1})y_l}(D_q^{y_l}-D_{q^{-1}}^{y_l}), \ \
X_l^{-}=\frac{y_l}{(q-q^{-1})x_l}(D_q^{x_l}-D_{q^{-1}}^{x_l}), \ \
q^{H_l}=D_q^{x_l} D_{q^{-1}}^{y_l}.
\end{align}
The basis of $(2S_l+1)$-dimensional representation $V_{S_{l}}$ is given by
\begin{align}
\{x_l^{S_l+m_l} y_l^{S_l-m_l} \ | \ m_l=-S_l, \dots S_l \}.
\end{align}
The tensor product of two irreducible representations has the following
Clebsch-Gordan decomposition
\begin{align}
V_{S_k} \otimes V_{S_l}= \oplus_{J=|S_k-S_l|}^{S_k+S_l}V_J.
\end{align}
The highest weight vector $v_J \in V_J$ has the following form
\begin{align}
v_J=\sum_{m_k+m_l=J}C_{m_k,m_l}
x_k^{S_k+m_k}y_k^{S_k-m_k} x_l^{S_l+m_l}y_l^{S_l-m_l}.
\end{align}
Since
\begin{align}
X_{kl}^+ v_J=& \Delta(X_{kl}^+) \sum_{m_k+m_l=J}C_{m_k,m_l}
x_k^{S_k+m_k}y_k^{S_k-m_k} x_l^{S_l+m_l}y_l^{S_l-m_l} \nn \\
=&\sum_{m_k=0}^{J-1}
([S_k-m_k]_q q^{J-m_k}C_{m_k,J-m_k}+[S_l-J+m_k+1]_q q^{-m_k-1}
C_{m_k+1,J-m_k-1}) \nn \\
&\times x_k^{S_k+m_k+1}y_k^{S_k-m_k-1} x_l^{S_l+J-m_k}y_l^{S_l-J+m_k},
\end{align}
one has
\begin{align}
C_{m_k,J-m_k}=\frac{(-1)^{S_k-m_k}
\left[
\begin{array}{c}
S_k+S_l-J \\
S_k-m_k
\end{array}
\right]_q
}{
(-1)^{S_k}
\left[
\begin{array}{c}
S_k+S_l-J \\
S_k
\end{array}
\right]_q
}
q^{m_k(J+1)}C_{0,J}. \label{relation1}
\end{align}
Utilizing \eqref{relation1} and
\begin{align}
\prod_{j=1}^m(1-zq^{2j-2})
=\sum_{k=0}^m (-z)^k q^{k(m-1)}
\left[
\begin{array}{c}
m \\
k
\end{array}
\right]_q,
\end{align}
one gets
\begin{align}
v_J=\frac{q^{S_k(J+1)}C_{0,J}}
{(-1)^{S_k}
\left[
\begin{array}{c}
S_k+S_l-J \\
S_k
\end{array}
\right]_q
}
x_k^{S_k-S_l+J}x_l^{S_l-S_k+J}
\prod_{m=1}^{S_k+S_l-J}(x_k y_l-q^{2m-2-S_k-S_l}x_l y_k).
\end{align}
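The $q$-binomial product identity used in this derivation can be checked numerically for arbitrary parameters (the values of $q$, $z$ and $m$ below are arbitrary):

```python
def qi(n, q):  # q-integer [n]_q
    return (q**n - q**(-n)) / (q - 1/q)

def qf(n, q):  # q-factorial [n]_q!
    out = 1.0
    for k in range(1, n + 1):
        out *= qi(k, q)
    return out

def qb(n, k, q):  # symmetric q-binomial coefficient
    return qf(n, q) / (qf(k, q) * qf(n - k, q))

q, z, m = 1.4, 0.7, 5     # arbitrary illustrative values
lhs = 1.0
for j in range(1, m + 1):                      # product over (1 - z q^{2j-2})
    lhs *= 1 - z * q**(2*j - 2)
rhs = sum((-z)**k * q**(k*(m - 1)) * qb(m, k, q) for k in range(m + 1))
assert abs(lhs - rhs) < 1e-9
```

This is the $q$-binomial theorem for the symmetric $q$-binomial coefficients, which is what allows the sum over $m_k$ to be resummed into the product form of $v_J$.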
We are now considering the homogeneous chain, i.e., $S_k=S$ for all $k$.
The highest weight vector $v_S \in V_S \subset V_S \otimes V_S$
is divisible by
$\prod_{m=1}^S(q^m x_k y_l-q^{-m} y_k x_l)$. Moreover, we conjecture the following. \\
\\
{\bf Conjecture} \\
All vectors in $V_j \subset V_S \otimes V_S, \ j=0,1, \dots, S$ are divisible by
$\prod_{m=1}^S(q^m x_k y_l-q^{-m} y_k x_l)$. \\
\\
We have checked this conjecture for several values of $S$.
The vectors for the case $S=2$ are listed in the Appendix.
Based on this conjecture and the property of
projection operators, $\pi_J w_K=\delta_{JK}w_K$ for $w_K \in V_K$,
we have the following $q$-deformed analogue of Lemma 1 in
\cite{KX}. \\
\\
{\bf Lemma} \\
All solutions of
\begin{align}
\pi_J(k,k+1)| \psi \ket=0, \ \ S+1 \le J \le 2S,
\end{align}
for fixed $k$ can be represented in the following form
\begin{align}
| \psi \ket=f(a_k^{\dagger}, b_k^{\dagger},a_{k+1}^{\dagger}, b_{k+1}^{\dagger})
\prod_{m=1}^S(q^m a_k^{\dagger} b_{k+1}^{\dagger}-q^{-m} b_k^{\dagger} a_{k+1}^{\dagger})
| \mathrm{vac} \ket,
\end{align}
where $f(a_k^{\dagger}, b_k^{\dagger},a_{k+1}^{\dagger}, b_{k+1}^{\dagger})$ is
some polynomial in
$a_k^{\dagger}, b_k^{\dagger},a_{k+1}^{\dagger}$ and $b_{k+1}^{\dagger}$. \\
\\
From this Lemma, we find the $q$-deformed VBS groundstate is
\begin{align}
| \Psi \ket_{PBC}=\prod_{k=1}^L \prod_{m=1}^S
(q^m a_k^{\dagger} b_{k+1}^{\dagger}-q^{-m} b_k^{\dagger} a_{k+1}^{\dagger})
|\mathrm{vac} \ket,
\end{align}
where $a_{L+1}=a_1, b_{L+1}=b_1$ for the periodic chain, and
\begin{align}
| \Psi \ket_{p_1,p_2}=Q_{\mathrm{left}}(a_1^{\dagger},b_1^{\dagger};p_1)
\prod_{k=1}^{L-1} \prod_{m=1}^S
(q^m a_k^{\dagger} b_{k+1}^{\dagger}-q^{-m} b_k^{\dagger} a_{k+1}^{\dagger})
Q_{\mathrm{right}}(a_L^{\dagger},b_L^{\dagger};p_2)
|\mathrm{vac} \ket,
\end{align}
where
\begin{align}
Q_{\mathrm{left}}(a_1^{\dagger},b_1^{\dagger};p_1)&=
\left[
\begin{array}{c}
S \\
p_1-1
\end{array}
\right]_q^{1/2}
(a_1^{\dagger})^{S-p_1+1} (b_1^{\dagger})^{p_1-1}, \ \ (p_1=1, \dots S+1), \\
Q_{\mathrm{right}}(a_L^{\dagger},b_L^{\dagger};p_2)&=
\left[
\begin{array}{c}
S \\
p_2-1
\end{array}
\right]_q^{1/2}
(a_L^{\dagger})^{p_2-1} (b_L^{\dagger})^{S-p_2+1}, \ \ (p_2=1, \dots S+1),
\end{align}
for the open chain, generalizing the results of \cite{AAH}.
\section{Matrix product representation}
In the last section, we constructed the $q$-VBS states in terms of
Schwinger bosons.
One can transform them in the matrix product representation
as in \cite{TS1,TS2}, which are
\begin{align}
| \Psi \ket_{PBC}&=\Tr [g_1 \otimes g_2 \otimes \cdots \otimes g_{L-1} \otimes g_L], \\
| \Psi \ket_{p_1,p_2}&=[g^{\mathrm{start}} \otimes g_2 \otimes \cdots g_{L-1} \otimes
g_L]_{p_1,p_2},
\end{align}
where $g_k$ and $g^{\mathrm{start}}$ are $(S+1) \times (S+1)$ matrices whose
matrix elements are given by
\begin{align}
g_k(i,j)=&(-1)^{S-i+1} q^{(2i-2-S)(S+1)/2} \nn \\
&\times
\left(
\left[
\begin{array}{c}
S \\
i-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
j-1
\end{array}
\right]_q
\right)^{1/2}
(a_k^{\dagger})^{S-i+j} (b_k^{\dagger})^{S+i-j} | \mathrm{vac} \ket_k \nn \\
=&(-1)^{S-i+1} q^{(2i-2-S)(S+1)/2} \nn \\
&\times
\left(
\left[
\begin{array}{c}
S \\
i-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
j-1
\end{array}
\right]_q
[S-i+j]_q! [S+i-j]_q!
\right)^{1/2}
|S;j-i \ket_k, \\
g^{\mathrm{start}}(i,j)=
&
\left(
\left[
\begin{array}{c}
S \\
i-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
j-1
\end{array}
\right]_q
[S-i+j]_q! [S+i-j]_q!
\right)^{1/2}
|S;j-i \ket_k.
\end{align}
For $q \to 1$ limit, one recovers the results of \cite{TS2}.
We can also construct the matrix product representation in the following form
\begin{align}
| \Psi \ket_{PBC}&=\Tr [f_1 \otimes f_2 \otimes \cdots \otimes f_{L-1} \otimes f_L],
\end{align}
where
\begin{align}
f_k(i,j)=&(-1)^{S-i+1} q^{(i+j-2-S)(S+1)/2} \nn \\
&\times
\left(
\left[
\begin{array}{c}
S \\
i-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
j-1
\end{array}
\right]_q
[S-i+j]_q! [S+i-j]_q!
\right)^{1/2}
|S;j-i \ket_k,
\end{align}
which reproduces the result for $S=1$ \cite{KSZ2,KSZ3}.
From the matrix product representation, one can formulate correlation functions.
Let $f_j^{\dagger}$ be a matrix replacing the ket vectors of the matrix $f_j$
by the bra vectors.
We define $(S+1)^2 \times (S+1)^2$ matrices $G$ and $G^A$ as
\begin{align}
G_{(m_{j-1},n_{j-1};m_j,n_j)}&=f_j^{\dagger}(m_{j-1},m_j)f_j(n_{j-1},n_j), \\
G^A_{(m_{j-1},n_{j-1};m_j,n_j)}&=f_j^{\dagger}(m_{j-1},m_j)A_jf_j(n_{j-1},n_j).
\end{align}
Explicitly we have
\begin{align}
G_{(a,b;c,d)}=&\delta_{a-b,c-d}(-1)^{a+b}q^{(a+b+c+d-2S-4)(S+1)/2} \nn \\
&\times \left(
\left[
\begin{array}{c}
S \\
a-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
b-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
c-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
d-1
\end{array}
\right]_q
\right)^{1/2} \nn \\
&\times
([S-a+c]_q![S+a-c]_q![S-b+d]_q![S+b-d]_q!)^{1/2}.
\end{align}
The eigenvalues of $G$ for $S=2$ are
\begin{align}
&\lambda_1=[5]_q[4]_q[2]_q, \\
&\lambda_2=\lambda_3=\lambda_4=-[5]_q[2]_q^2, \\
&\lambda_5=\lambda_6=\lambda_7=\lambda_8=\lambda_9=[2]_q^2.
\end{align}
Moreover, we conjecture that the eigenvalues of $G$ for general $S$ are given by
\begin{align}
\lambda(l)=(-1)^l \frac{[2S+1]_q!}{[S+1]_q}
\frac{
\left[
\begin{array}{c}
S \\
l
\end{array}
\right]_q
}
{
\left[
\begin{array}{c}
S+l+1 \\
l
\end{array}
\right]_q
}, \ (l=0,1,\dots,S),
\end{align}
where the degeneracy of $\lambda(l)$ is $2l+1$. \\
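This conjecture can be tested numerically by building $G$ from its matrix elements and diagonalizing; the sketch below does this for $S=2$ at an arbitrary value of $q$:

```python
import numpy as np

def qi(n, q):   # q-integer [n]_q
    return (q**n - q**(-n)) / (q - 1/q)

def qf(n, q):   # q-factorial [n]_q!
    out = 1.0
    for k in range(1, n + 1):
        out *= qi(k, q)
    return out

def qb(n, k, q):  # symmetric q-binomial coefficient
    return qf(n, q) / (qf(k, q) * qf(n - k, q))

S, q = 2, 1.3                 # q is an arbitrary illustrative value
dim = S + 1
G = np.zeros((dim*dim, dim*dim))
for a in range(1, dim + 1):
    for b in range(1, dim + 1):
        for c in range(1, dim + 1):
            for d in range(1, dim + 1):
                if a - b != c - d:          # delta_{a-b, c-d}
                    continue
                val = (-1)**(a + b) * q**((a + b + c + d - 2*S - 4)*(S + 1)/2)
                val *= np.sqrt(qb(S, a-1, q)*qb(S, b-1, q)*qb(S, c-1, q)*qb(S, d-1, q))
                val *= np.sqrt(qf(S-a+c, q)*qf(S+a-c, q)*qf(S-b+d, q)*qf(S+b-d, q))
                G[(a-1)*dim + b - 1, (c-1)*dim + d - 1] = val

eig = np.sort(np.linalg.eigvals(G).real)
# Conjectured spectrum lambda(l), each with degeneracy 2l+1, l = 0..S.
pred = sorted((-1)**l * qf(2*S + 1, q) / qi(S + 1, q) * qb(S, l, q) / qb(S + l + 1, l, q)
              for l in range(S + 1) for _ in range(2*l + 1))
assert np.allclose(eig, pred)
```

For $S=2$ the nine eigenvalues indeed come out as $[5]_q[4]_q[2]_q$, a triply degenerate $-[5]_q[2]_q^2$, and a five-fold $[2]_q^2$, in agreement with the list above.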
For $A=S^z$, one has
\begin{align}
G^{S^z}_{(a,b;c,d)}=&\delta_{a-b,c-d}(d-b)(-1)^{a+b}q^{(a+b+c+d-2S-4)(S+1)/2} \nn \\
&\times \left(
\left[
\begin{array}{c}
S \\
a-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
b-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
c-1
\end{array}
\right]_q
\left[
\begin{array}{c}
S \\
d-1
\end{array}
\right]_q
\right)^{1/2} \nn \\
&\times
([S-a+c]_q![S+a-c]_q![S-b+d]_q![S+b-d]_q!)^{1/2}.
\end{align}
One point function $\bra A \ket$
and two point function $\bra A_1 B_r \ket$
of the periodic chain can be represented as
\begin{align}
\bra A \ket&=(\Tr G^L)^{-1} \Tr G^A G^{L-1}, \label{onepoint} \\
\bra A_1 B_r \ket&=(\Tr G^L)^{-1} \Tr G^A G^{r-2} G^B G^{L-r}. \label{twopoint}
\end{align}
Denoting the eigenvalues and the normalized eigenvectors of $G$ as
$|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_{(S+1)^2}|$ and
$| e_1 \ket, |e_2 \ket, \dots |e_{(S+1)^2} \ket$,
\eqref{onepoint} and \eqref{twopoint} reduce to
\begin{align}
\bra A \ket&=\lambda_1^{-1} \bra e_1|G^A|e_1 \ket, \\
\bra A_1 B_r \ket&=\sum_{n=1}^{(S+1)^2} \lambda_n^{-2}
\left( \frac{\lambda_n}{\lambda_1} \right)^r
\bra e_1|G^A|e_n \ket \bra e_n|G^B|e_1 \ket
\end{align}
in the thermodynamic limit $L \to \infty$.
Let us calculate several correlation functions. For $S=2$,
the probability of finding $S^z=m$ value $\bra P(S^z=m) \ket$ is
\begin{align}
&\bra P(S^z=2) \ket=\bra P(S^z=-2) \ket=\frac{1}{[5]_q}, \\
&\bra P(S^z=1) \ket=\bra P(S^z=-1) \ket=\frac{[2]_q[8]_q}{[5]_q[4]_q^2}, \\
&\bra P(S^z=0) \ket=\frac{[2]_q}{[5]_q[4]_q} \left(1+\frac{[12]_q}{[3]_q[4]_q} \right).
\end{align}
In the $q=1$ limit, $\bra P(S^z=m) \ket=1/5$ for all $m$.
As we move away from $q=1$, $P(S^z=0)$ increases,
i.e., the spins prefer the transverse $x$-$y$ plane. \\
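As a consistency check, the three probabilities must sum to unity for any $q$; this can be confirmed numerically (the value of $q$ below is arbitrary):

```python
def qi(n, q):  # q-integer [n]_q
    return (q**n - q**(-n)) / (q - 1/q)

q = 1.7   # arbitrary illustrative value
P2 = 1 / qi(5, q)
P1 = qi(2, q) * qi(8, q) / (qi(5, q) * qi(4, q)**2)
P0 = qi(2, q) / (qi(5, q) * qi(4, q)) * (1 + qi(12, q) / (qi(3, q) * qi(4, q)))

# Normalization: 2*P(|m|=2) + 2*P(|m|=1) + P(m=0) = 1 for any q.
assert abs(2*P2 + 2*P1 + P0 - 1.0) < 1e-9
```

At $q=1$ each term reduces to $1/5$, while away from $q=1$ the weight shifts toward $P(S^z=0)$ at the expense of $P(S^z=\pm 2)$.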
The spin-spin correlation function $\bra S_1^z S_r^z \ket$ is
\begin{align}
\bra S_1^z S_r^z \ket=&-\frac{[2]_q[3]_q}{[4]_q}
\left( \frac{[2]_q}{[5]_q[4]_q} \right)^r
\Big\{
(q-q^{-1})(q^3-q^{-3})\frac{[6]_q^2}{[3]_q^2[2]_q^2}+[2]_q^2(-[5]_q)^r
\Big\},
\end{align}
which reduces to $-6(-2)^{-r}$ for $q=1$.
$\bra S_1^z S_r^z \ket$
exhibits exponential decay
for large distances, which is a typical behavior of gapful systems.
For $S=3$, one has
\begin{align}
\bra S_1^z S_r^z \ket=&-\frac{[2]_q}{[6]_q[5]_q[3]_q}
\left( \frac{[3]_q}{[7]_q[6]_q[5]_q} \right)^r
\Big\{ (q-q^{-1})^2(q^3-q^{-3})^2([9]_q-(q^2-q^{-2})^2)^2
\frac{[4]_q^2}{[2]_q^2}(-[2]_q)^r \nn \\
&+(q^3-q^{-3})^2 \frac{[8]_q^2[5]_q}{[4]_q^2}([7]_q[2]_q)^r
+([2]_q^4-2 [3]_q)^2 \frac{[6]_q[2]_q}{[3]_q}(-[7]_q[6]_q)^r
\Big\},
\end{align}
which reduces to $-80 (-3)^{r-2} 5^{-r}$ in the $q=1$ limit.
\section{Conclusion}
In this paper, we considered one-dimensional
spin-$S$ $q$-deformed AKLT models.
We derived the Schwinger boson representation
and the matrix product representation for the valence-bond-solid groundstate.
The matrix product representation is practical for calculating
correlation functions. The spin-spin correlation functions exhibit exponential
decay for large distances.
An interesting problem is to calculate the entanglement entropy of this model,
which is a typical quantification of the entanglement of quantum systems.
It is interesting to see how the
entanglement entropy changes as we move away from the isotropic point
\cite{FKR,KHH,XKHK} (see also \cite{KHK,KKKKT} for other VBS states).
\section*{Acknowledgement}
This work was partially supported
by Global COE Program (Global Center of Excellence for
Physical Sciences Frontier) from the Ministry of Education,
Culture, Sports, Science and Technology, Japan.
\section*{Appendix}
We list all the vectors in $V_j \subset V_{S} \otimes V_{S}$, $j=0,1,\dots,S$. \\
$S=2$
\begin{align}
v_2 &\propto x_k^2 x_l^2(qx_k y_l-q^{-1}x_l y_k)(q^2x_k y_l-q^{-2}x_l y_k), \nn \\
(X_{kl}^-)v_2 &\propto x_k x_l(q^{-2}x_k y_l+q^2 x_l y_k)
(qx_k y_l-q^{-1}x_l y_k)(q^2x_k y_l-q^{-2}x_l y_k), \nn \\
(X_{kl}^-)^2 v_2 &\propto \{ q^{-4}x_k^2 y_l^2+(q+q^{-1})^2
x_k x_l y_k y_l+q^4 x_l^2 y_k^2 \}
(qx_k y_l-q^{-1}x_l y_k)(q^2x_k y_l-q^{-2}x_l y_k), \nn \\
(X_{kl}^-)^3 v_2 &\propto y_k y_l(q^{-2}x_k y_l+q^2 x_l y_k)
(qx_k y_l-q^{-1}x_l y_k)(q^2x_k y_l-q^{-2}x_l y_k), \nn \\
(X_{kl}^-)^4 v_2 &\propto y_k^2 y_l^2(qx_k y_l-q^{-1}x_l y_k)(q^2x_k y_l-q^{-2}x_l y_k),
\nn \\
v_1 &\propto x_k x_l(x_k y_l-x_l y_k)
(qx_k y_l-q^{-1}x_l y_k)(q^2x_k y_l-q^{-2}x_l y_k), \nn \\
(X_{kl}^-)v_1 &\propto (q^{-2}x_k y_l+q^2 x_l y_k)
(x_k y_l-x_l y_k)
(qx_k y_l-q^{-1}x_l y_k)(q^2x_k y_l-q^{-2}x_l y_k), \nn \\
(X_{kl}^-)^2v_1 &\propto y_k y_l(x_k y_l-x_l y_k)
(qx_k y_l-q^{-1}x_l y_k)(q^2x_k y_l-q^{-2}x_l y_k), \nn \\
v_0 &\propto (q^{-1}x_k y_l-qx_l y_k)(x_k y_l-x_l y_k)
(qx_k y_l-q^{-1}x_l y_k)(q^2x_k y_l-q^{-2}x_l y_k). \nn
\end{align}
\section{Introduction}\label{aba:sec1}
Electromagnetic properties of the neutrino are among the key issues of
modern particle physics (see \cite{GiuStu09} for a recent review).
It seems quite natural that a massive neutrino would have a nonzero
diagonal or transition magnetic moment. If a neutrino has
non-trivial electromagnetic properties, then neutrino coupling to
photons is possible, and several processes important, for instance,
for astrophysics can exist~\cite{Raf_book96_RafPR99}.
Recently we have proposed a new mechanism of neutrino radiation of
photons that is realized when a relativistic neutrino with nonzero
magnetic moment moves in dense matter. This mechanism was termed
the ``spin light of neutrino'' ($SL\nu$)~\cite{LobStuPLB03}. The
quantum theory of this phenomenon was developed
in~\cite{StuTerPLB05} (see also~\cite{Lob05} and
~\cite{StuJPA_06_08}).
In this paper we extend our studies of $SL\nu$~\cite{StuTerPLB05}
and consider the $SL\nu$ in a more general case when the photon is
emitted in the neutrino radiative decay. The $SL\nu$ considered in
~\cite{StuTerPLB05} was investigated under the condition of equal
masses for the initial and final neutrino states. Here below we
examine the case when the neutrino transition between two
different neutrino mass states is realized. Thus, we consider the
$SL\nu$ mode of the neutrino radiative decay in matter, originating
from the neutrino transition magnetic moment.
It should be noted that the neutrino radiative decay was
considered before by several authors~\cite{Smi78}. It was shown
that the process characteristics are substantially changed if the
presence of a medium is taken into account. In these calculations,
the influence of the background matter was considered only in the
electromagnetic vertex. Here we are going to discuss the impact of
the medium also on the state of the neutrino itself. At the same
time, we are interested in another aspect of the problem and
consider it from the point of view of light emission. Under the
condition of equal initial and final particle masses, the process
becomes equivalent to the $SL\nu$ in matter. With different masses
for the initial and final neutrino states, the spin light becomes
only a constituent channel of the overall process corresponding to
the change of the neutrino helicity. The mechanism of $SL\nu$ is
based on the energy difference between the helicity states of the
particle, arising due to the weak interaction with the background
matter. Hence, our study obviously makes sense if the scale of the
neutrino mass difference is of the order of the spin energy
splitting owing to the interaction with matter.
Let us specify now the process under consideration. We are
considering the decay of one neutrino mass state $\nu_1$ into
another mass state $\nu_2$, assuming that $m_1>m_2$, and restrict
ourselves to only these two neutrino species and, accordingly, two
flavour neutrinos. Keeping in mind that the conditions most
appropriate for the application of the process under study can be
found in the vicinity of neutron stars, we take neutron-rich matter
for the background. In this case a process with the participation
of antineutrinos is more relevant, and it is this process that we
study here. However, for convenience, in what follows we will still
refer to the particles as neutrinos. Since the interactions of
flavour neutrinos with neutron-star matter are the same and
governed by the neutron density, we take equal interactions with
matter for the initial and final massive neutrinos.
\vspace{-0.4cm}
\section{Modified Dirac equation}
The system ``neutrino $\Leftrightarrow$ dense matter'' described
above can be treated mathematically in different ways. Here
we use the powerful method of exact solutions, discussed in a
series of our papers~\cite{StuJPA_06_08}. This method is based
on solutions~\cite{StuTerPLB05} of the modified Dirac equation
for the neutrino in the background medium
\begin{equation}
\{i\gamma_{\mu}\partial^{\mu}-\frac{1}{2}\gamma_{\mu}(1+\gamma^{5})f^{\mu}-m\}\Psi(x)=0,
\label{eq:dirac}
\end{equation}
where, for unpolarized matter at rest,
$f^{\mu}=G_{F}/\sqrt{2} \ (n,\textbf{0})$ with $n$ the matter
number density. The neutrino energy spectrum is then given
by
\begin{equation}
E_\varepsilon=\varepsilon\sqrt{(p-s\alpha
m_{\nu})^{2}+m_{\nu}^{2}}+\alpha m_{\nu}, \ \ \alpha =
\frac{1}{2\sqrt{2}}G_F\frac{n}{m_{\nu}} \label{eq:spektr}
\end{equation}
where $\varepsilon=\pm1$ labels the positive- and negative-energy
branches of the solutions, $s$ is the neutrino helicity and $p$
the neutrino momentum. The exact form of the solutions
$\Psi_{\varepsilon,p,s}(\textbf{r},t)$ can be found in
\cite{StuTerPLB05} and \cite{StuJPA_06_08}.
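To illustrate the helicity splitting that drives the $SL\nu$ mechanism, the following sketch (not part of the original analysis; the parameter values are arbitrary, in units $\hbar=c=1$) evaluates the spectrum \eqref{eq:spektr} for the two helicity states:

```python
import math

# Spectrum E = eps*sqrt((p - s*alpha*m)^2 + m^2) + alpha*m  (eq. spektr).
# All numerical values below are arbitrary illustrations.
def energy(p, m, alpha, s, eps=+1):
    return eps * math.sqrt((p - s * alpha * m) ** 2 + m ** 2) + alpha * m

p, m, alpha = 1.0, 0.1, 0.05          # momentum, mass, matter parameter (hypothetical)
E_minus = energy(p, m, alpha, s=-1)   # left-handed state, s = -1
E_plus  = energy(p, m, alpha, s=+1)   # right-handed state, s = +1

# The weak interaction with matter lifts the helicity degeneracy;
# for small alpha*m the splitting approaches 2*alpha*m * p/sqrt(p^2 + m^2).
splitting = E_minus - E_plus
print(splitting)
```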
\vspace{-0.4cm}
\section{Spin light mode of massive neutrino decay}
The S-matrix element for the decay has the standard form of a
magnetic-moment radiation process:
\begin{equation}
S_{fi}=-(2\pi)^4\mu\sqrt{\frac{\pi}{2\omega L^3}}\delta(E_2-E_1+\omega)
\delta^{3}({\bf p}_2-{\bf p}_1+{\bf k})
\overline{u}_{f}({{\bf e}},{\bf \Gamma}_{fi})u_i.
\label{eq:amplitude}
\end{equation}
Here ${\bf \Gamma}=i\omega\big\{\big[{\bf \Sigma} \times {\bm
\varkappa}\big]+i\gamma^{5}{\bf \Sigma}\big\}$, $u_{i,f}$ are the
spinors for the initial and final neutrino states, ${\bf e}$ is
the photon polarization vector, $\mu$ is the transitional magnetic
moment~\cite{GiuStu09} and $L$ is the normalization length.
In the process, we have the following conservation laws:
\begin{equation}
E_1=E_2+\omega; \ \ \bf{p_1}=\bf{p_2}+\bf{k}.
\label{eq:conservation}
\end{equation}
It is convenient to carry out the computations in dimensionless
variables. For that purpose we introduce the notations
$\gamma=\frac{m_1}{p_1}$, $\kappa=\frac{\alpha
m_1}{p_1}=\frac{\tilde{n}}{p_1}$, $\delta=\frac{\Delta
m^2}{p_{1}^{2}}=\frac{m_{1}^{2}-m_{2}^{2}}{p_{1}^{2}}$. To single
out the spin light part of the radiative decay process we
choose different helicities for the initial and final
neutrinos. Keeping the analogy with the usual $SL\nu$ process,
we take the helicity quantum numbers $s_1=-1$, $s_2=1$. The
solution of the kinematic system (\ref{eq:conservation}) can then
be written in the form
\begin{equation}
\omega=\frac{-(KD+x\kappa^2)+\sqrt{(KD+x\kappa^2)^2-(K^2-\kappa^2)(D^2-\kappa^2)}}{K^2-\kappa^2}
\label{eq:freq}
\end{equation}
with the notations $D=s_{1}\kappa-\delta$,
$\tilde{n}=\frac{1}{2\sqrt{2}}G_F n$ and
$K=\sqrt{(1-s_{1}\kappa)^2+\gamma^2}-x$, where $x=\cos\theta$ and
$\theta$ is the angle between ${\bf p}_1$ and ${\bf k}$.
Carrying out the calculation, we obtain the angular distribution
of the probability for the process:
\begin{equation}
\frac{d\Gamma}{dx}=\mu^2 p_1^3\frac{(K-\omega+x)(\omega K-\kappa-\delta)\,\omega^3S'}
{\sqrt{(KD+x\kappa^2)^2-(K^2-\kappa^2)(D^2-\kappa^2)}},
\label{eq:density}
\end{equation}
where $S'= (1+\beta_1 \beta_2)\big(1-\frac{\omega-x-\omega x + \omega
x^2}{\sqrt{1+\omega^2-2\omega x}}x\big)-(\beta_1 +
\beta_2)\big(x-\frac{\omega-x}{\sqrt{1+\omega^2-2\omega x}}\big)$ and
$\beta_1=\frac{1+\kappa}{\sqrt{(1+\kappa)^2+\gamma^2}}$,
$\beta_2=\frac{\sqrt{1+\omega^2-2\omega x}-\kappa}{K-\omega+x}$.
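As a numerical sanity check (an illustration only, with arbitrary parameter values), the photon frequency \eqref{eq:freq} can be verified to be the root of the quadratic equation in the frequency that follows from the conservation laws:

```python
import math

# The frequency of eq. (freq) is the "+" root of
#   (K^2 - kappa^2) w^2 + 2 (K D + x kappa^2) w + (D^2 - kappa^2) = 0.
# Parameter values below are arbitrary illustrations.
def photon_frequency(gamma, kappa, delta, x, s1=-1):
    K = math.sqrt((1.0 - s1 * kappa) ** 2 + gamma ** 2) - x
    D = s1 * kappa - delta
    a, b, c = K ** 2 - kappa ** 2, K * D + x * kappa ** 2, D ** 2 - kappa ** 2
    return (-b + math.sqrt(b ** 2 - a * c)) / a, (a, b, c)

w, (a, b, c) = photon_frequency(gamma=0.1, kappa=0.3, delta=0.05, x=0.5)
residual = a * w ** 2 + 2 * b * w + c   # should vanish up to rounding
print(w, residual)
```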
The total probability is obtained from eq. (\ref{eq:density}) by
integrating over the angle $\theta$. Although the integral can be
evaluated exactly, the resulting expression is extremely
cumbersome, and we do not reproduce its explicit form here.
\vspace{-0.4cm}
\section{Results and discussion}
It is worthwhile to investigate the asymptotic behaviour of the
probability $\Gamma$ in the three most significant relativistic
limiting cases, keeping only the first order in the small
parameters. In this way we obtain
\begin{equation}
\Gamma=4\mu^2\tilde{n}^3(1+\frac{3}{2}\frac{m_1^2-m_2^2}{\tilde{n}p_1}+\frac{p_1}{\tilde{n}}), \
({\text {ultrahigh density:}}\ 1 \ll \frac{p_1}{m_1} \ll \frac{\tilde{n}}{p_1});
\label{ultrahighdensity}
\end{equation}
\begin{equation}
\Gamma=4\mu^2\tilde{n}^2p_1(1+\frac{\tilde{n}}{p_1}+\frac{m_1^2-m_2^2}
{\tilde{n}p_1}+\frac{3}{2}\frac{m_1^2-m_2^2}{p_1^2}), ({\text {high density:}}\
\frac{m_1^2}{p_1^2}\ll \frac{\tilde{n}}{p_1} \ll 1);
\label{highdensity}
\end{equation}
\begin{equation}
\Gamma\approx\mu^2 \frac{m_1^6}{p_1^3}, ({\text {quasi-vacuum case:}}
\frac{\tilde{n}}{p_1} \ll \frac{m_1}{p_1}\ll 1, m_1 \gg m_2).
\label{lowdensity}
\end{equation}
The results (\ref{ultrahighdensity}) and
(\ref{highdensity}) exhibit the power of the method of exact
solutions: they establish a clear connection between the case
of weakly interacting massive particles with different initial
and final masses and the previously investigated equal-mass
case. Indeed, it is easy to verify that these results reduce
exactly to the results of the $SL\nu$
calculation~\cite{LobStuPLB03}. The asymptotic estimate
(\ref{lowdensity}) cannot be reduced to the $SL\nu$ case and is
thus a new result, characteristic of the decay
process under study. The asymptotic cases
(\ref{ultrahighdensity}), (\ref{highdensity}) and
(\ref{lowdensity}) were calculated under the assumption that the
initial neutrino is relativistic $(\gamma=\frac{m_1}{p_1}\ll 1)$.
In particular, the relativistic motion of the initial neutrino
strongly influences the emitted $SL\nu$ photon energy, since a
larger fraction of the neutrino energy is carried away by the photon.
It is also interesting to investigate the spin light mode in the
radiative decay of a slowly moving (or even stationary) massive
neutrino. This process has been calculated several
times \cite{Smi78}. To relate the results obtained with the
method of exact solutions to those of previous works, we consider
the vacuum case. Taking into account $\gamma=\frac{m_1}{p}\ll 1$,
$\kappa=\frac{\tilde{n}}{p_1}\to 0$, $\delta\equiv\frac{\gamma^2}{2}$,
for the probability of the process we finally get:
\begin{equation}
\Gamma\approx \frac{7}{24}\mu^2 {m_1^3} \sim m_1^5.
\label{vacuum}
\end{equation}
We obtain the same dependence of the probability on the mass of
the decaying neutrino as in the classical papers on radiative
neutrino decay. This justifies the use of the method of exact
solutions of the modified Dirac equation.
{\it Acknowledgements.} One of the authors (A.S.) is thankful to
Kim Milton for the invitation to attend the 9th Conference on
Quantum Field Theory Under the Influence of External Conditions
(Oklahoma, USA) and for the kind hospitality provided in Norman.
\vspace{-0.4cm}
\section{Introduction}\label{intro}
In the Hamiltonian formalism, many classical mechanical systems are described by a manifold, which plays the role of phase space, endowed with a symplectic structure and a choice of Hamiltonian function.
However, symplectic structures are not suitable to describe all classical systems. Mechanical systems with symmetries are described by Poisson structures -- integrable bivector fields -- and systems with constraints are described by closed 2-forms. Systems with both symmetries and constraints are described using Dirac structures, introduced by Ted Courant in the early 1990s \cite{Cou}. Recall that, given a manifold $M$, $TM\oplus T^*M$ is endowed with a natural pairing on the fibers and a bracket on its space of sections, called the (untwisted) Courant bracket. A Dirac structure is a maximal isotropic and involutive subbundle of $TM\oplus T^*M$.
Given a Dirac manifold $M$, one defines the notion of Hamiltonian function -- in physical terms, an observable for the system --
and shows that the set of Hamiltonian functions is endowed with a Poisson algebra structure.\\
Higher analogues of symplectic structures are given by multisymplectic structures \cite{CIDL}\cite{Hammulti} (called $p$-plectic structures in \cite{BHR}), i.e. closed forms $\omega\in \Omega^{p+1}(M)$ such that the bundle map $\tilde{\omega} \colon TM \to \wedge^p T^*M, X \mapsto \iota_X \omega$ is injective. They are suitable to describe certain physical systems arising from classical field theory, as was realized by Tulczyjew in the late 1960s. They are also suitable to describe systems in which particles are replaced by higher dimensional objects such as strings \cite{BHR}.
The recent work of Baez, Hoffnung and Rogers \cite{BHR} and then Rogers \cite{RogersL} shows that on a $p$-plectic manifold $M$
the observables -- consisting of certain differential forms -- have naturally the structure of a Lie $p$-algebra, by which we mean an $L_{\infty}$-algebra \cite{LadaStasheff} concentrated in degrees $-p+1,\dots,0$. This extends the fact, mentioned above, that the observables of classical mechanics form a Lie algebra (indeed, a Poisson algebra).\\
The first part of the present paper arose from the geometric observation that, exactly as symplectic structures are special cases of Dirac structures, multisymplectic structures are special cases of higher analogues of Dirac structures. More precisely, for every $p\ge 1$
we consider $$E^p:=TM\oplus \wedge^pT^*M,$$ a vector bundle endowed with a $\wedge^{p-1}T^*M$-valued pairing and a
bracket on its space of sections. We regard $E^p$ as a higher analogue of split Courant algebroids. We also consider isotropic, involutive subbundles of $E^p$. When the latter are Lagrangian, we refer to them as \emph{higher Dirac structures}.
The following diagram displays the relations between the geometric structures mentioned so far:
\vspace{-0.2cm}
\begin{center}
\includegraphics[scale=.28]{pichDirac}
\end{center}
\vspace{-2.6cm}
In the first part of the paper (\S \ref{haca}-\S \ref{equivmd}) we introduce and study the geometry of isotropic, involutive subbundles of $E^p$. Examples include Dirac structures, closed forms together with a foliation, and a restrictive class of multivector fields.
The main results are
\begin{itemize}
\item Thm. \ref{integ}: a description of all regular higher Dirac structures in terms of familiar geometric data: a (not necessarily closed) differential form and a foliation.
\item Thm \ref{eqint}: higher Dirac structures are \emph{equivalent} to Multi-Dirac structures, at least in the regular case\footnote{Regularity is a technical assumption and is probably not necessary. The physically most relevant examples of Multi-Dirac structures are regular \cite{MultiDirac}.}.\end{itemize}
Recall that Multi-Dirac structures were recently introduced by Vankerschaver, Yoshimura and Marsden \cite{MultiDirac}. They are the geometric structures that allow one to describe the implicit Euler-Lagrange equations (equations of motion) of a large class of field theories, including the treatment of non-holonomic constraints.
By the above equivalence, higher Dirac structures thus acquire a field-theoretic motivation. Further, since higher Dirac structures are simpler to handle than Multi-Dirac structures (which contain some redundancy in their definition), we expect
our work to be useful in the context of field theory too. \\
The second part of the paper is concerned with the algebraic structure on the observables, which turns out to be an $L_{\infty}$-algebra. Further, we investigate an $L_{\infty}$-algebra that can be associated to a manifold without any geometric structure on it, except for a (possibly vanishing) closed differential form defining a
twist. Recall that a closed 2-form on a manifold $M$ (a 2-cocycle for the Lie algebroid $TM$) can be used to obtain a Lie algebroid structure on $E^0=TM\times \ensuremath{\mathbb R}$ \cite[\S 1.1]{MariusPre}, so the sections of the latter form a Lie algebra.
Recall also that Roytenberg and Weinstein \cite{rw} associated a Lie 2-algebra to every Courant algebroid (in particular to $E^1=TM\oplus T^*M$ with Courant bracket twisted by a closed 3-form). Recently Getzler \cite{GetzlerHigherDer} gave an algebraic construction which extends Roytenberg and Weinstein's proof. Applying Getzler's result in a straightforward way one can extend the above results to all $E^p$'s.
Our main results in the second part of the paper (\S \ref{Linfty}-\S \ref{per}) are:
\begin{itemize}
\item Thm. \ref{Liep}: the observables associated to an isotropic, involutive subbundle of $E^p$ form a Lie $p$-algebra.
\item Prop. \ref{ord} and Prop. \ref{ordH}: to $E^p=TM\oplus \wedge^p T^*M$ and to a closed $p+2$- form $H$ on $M$, one can associate a Lie $p+1$-algebra extending the $H$-twisted Courant bracket.
\item
Thm. \ref{mor01}: there is a morphism (with one dimensional kernel) from the Lie algebra associated to $E^0$ and a closed $2$-form into the Lie 2-algebra associated to the Courant algebroid $E^1=TM\oplus T^*M$ with the untwisted Courant bracket.
\end{itemize}
Rogers \cite{RogersCou} observed
that there is an injective morphism -- which can be interpreted as a prequantization map -- from the Lie 2-algebra of observables on a $2$-pletic manifold $(M,\omega)$ into the Lie 2-algebra associated to the Courant algebroid $E^1=TM\oplus T^*M$ endowed with the $\omega$-twisted Courant bracket. We conclude the paper with an attempt to put this into context.\\
\noindent \textbf{Acknowledgments}
I thank Klaus Bering, Ya\"el Fr\'egier, David Iglesias, Camille Laurent, Jo\~ao Martins, Claude Roger, Chris Rogers, Florian Sch\"atz, Pavol {\v{S}}evera and Joris Vankerschaver for helpful discussions, and Jim Stasheff for comments on this note. The first part of Prop. \ref{referee} on integration is due to a referee, whom I hereby thank. I am grateful to a further referee for numerous comments that improved the presentation.
Further I thank Juan Carlos Marrero and Edith Padr\'on for pointing out to me the reference \cite{Hagi}, and Chris Rogers for pointing out \cite{MultiDirac}.
\section{Higher analogues of split Courant algebroids}\label{haca}
Let $M$ be a manifold and $p\ge 0$ an integer.
Consider the vector bundle $$E^p:=TM\oplus \wedge^pT^*M,$$
endowed with the
symmetric pairing on its fibers
\begin{equation*}
\langle \cdot,\cdot \rangle \colon E^p \times E^p \to \wedge^{p-1}T^*M,
\end{equation*}
given by
\begin{equation}\label{pairing}
\langle X+\alpha , Y+\beta \rangle = \iota_{X}\beta + \iota_{Y}\alpha .
\end{equation}
Endow the space of sections of $E^p$ with the \emph{Dorfman bracket}
\begin{equation}\label{dorf}
[\![ X+\alpha , Y+\beta]\!]= [X,Y] + \mathcal{L}_{X} \beta - \iota_{Y} d\alpha .
\end{equation}
The Dorfman bracket satisfies the Jacobi identity and Leibniz rules
\begin{align}
\label{Jacobi}
[\![ e_1,[\![ e_2,e_3]\!]\cc&=[\![ [\![ e_1,e_2]\!] ,e_3]\!]+[\![ e_2,[\![ e_1,e_3]\!]\cc\\
\label{Leib2}
[\![ e_1,fe_2 ]\!]&=f[\![ e_1,e_2 ]\!]+ (pr_{TM}(e_1)f) e_2\\
\label{Leib1}
[\![ fe_1,e_2 ]\!]&=f[\![ e_1,e_2]\!]- (pr_{TM}(e_2)f) e_1+df\wedge \langle e_1,e_2 \rangle
\end{align}
where $e_i\in\Gamma(E^p)$, $f\in C^{\infty}(M)$, and $pr_{TM} \colon E^p \to TM$ is the projection onto the first factor.
The decomposition of the Dorfman bracket into its anti-symmetric and symmetric parts is
\begin{equation}\label{dorcou}
[\![ e_1,e_2]\!] =[\![ e_1,e_2]\!]_{Cou}+\frac{1}{2}d \langle e_1,e_2 \rangle,
\end{equation}
where $$[\![ X+\alpha , Y+\beta]\!]_{Cou}:= [X,Y] + \mathcal{L}_{X} \beta -\mathcal{L}_{Y} \alpha -\frac{1}{2}d(\iota_X \beta- \iota_{Y} \alpha)$$ is known as \emph{Courant bracket}. \\
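The symmetric part in \eqref{dorcou} can be checked directly with Cartan's magic formula $\mathcal{L}_X=d\iota_X+\iota_X d$:

```latex
\begin{align*}
[\![ X+\alpha , Y+\beta ]\!] + [\![ Y+\beta , X+\alpha ]\!]
&= \mathcal{L}_{X}\beta - \iota_{Y}d\alpha
 + \mathcal{L}_{Y}\alpha - \iota_{X}d\beta\\
&= d\iota_{X}\beta + d\iota_{Y}\alpha
 = d\langle X+\alpha , Y+\beta \rangle .
\end{align*}
```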
\begin{remark}
The Dorfman bracket on $E^p$ was already considered by Hagiwara \cite[\S 3.2]{Hagi}, Hitchin \cite{Hi} and Gualtieri \cite[\S 3.8]{Gu}\cite[\S 2.1]{Gu2}.
$(E^p,\langle \cdot,\cdot \rangle , [\![ \cdot,\cdot ]\!])$ is an example of \emph{weak Courant-Dorfman algebra} as introduced by Ekstrand and Zabzine in \cite[Appendix]{ekzab}.
When $p=1$ we recover an instance of split \emph{Courant algebroid} \cite{LWX}. The Courant bracket has been extended to the setting of multivector fields in \cite[\S 4]{MultiDirac}.
\end{remark}
In \cite{Hi,Gu,Gu2} it is remarked that closed $p+1$-forms $B$ on $M$ provide symmetries of the Dorfman bracket (and of the pairing),
by the gauge
transformation
$e^B \colon X+\alpha \mapsto X+\alpha+\iota_XB$.
Further
the Dorfman bracket may be twisted by a closed $p+2$-form $H$, just by adding a term
$\iota_Y\iota_X H$ to the r.h.s. of eq. \eqref{dorf}. We refer to the resulting bracket as \emph{$H$-twisted Dorfman bracket} (this notion will not be used until \S \ref{liM}), and we use the term Dorfman bracket to refer to the untwisted one given by eq. \eqref{dorf}.
\section{Higher analogues of Dirac structures}\label{hads}
In this section we introduce a geometric structure that extends the notion of Dirac structure and multisymplectic form. It is given by a subbundle of $E^p$,
which we require to be involutive and isotropic, since this is needed to associate to it an $L_{\infty}$-algebra of observables in \S \ref{obs}. Further we consider subbundles which are Lagrangian (that is, maximal isotropic) and study in detail their geometry.
\begin{defi}\label{hd} Let $p\ge 1$. Let $L$ be a subbundle of $E^p=TM\oplus \wedge^pT^*M$.
\begin{itemize}
\item
$L$ is \emph{isotropic}
if for all sections $X_i+\alpha_i$:
\begin{equation}\label{isot}
\langle X_1+\alpha_1,X_2+\alpha_2\rangle=0.
\end{equation}
$L$ is \emph{involutive} if for all sections $X_i+\alpha_i$: $$[\![ X_1+\alpha_1,X_2+\alpha_2 ]\!] \in \Gamma(L),$$
where $[\![ \cdot,\cdot ]\!]$ denotes the Dorfman bracket \eqref{dorf}.
\item
$L$ is \emph{Lagrangian} if $$L=L^{\perp}:=\{e \in E^p : \langle e,L\rangle =0 \}.$$
\noindent
(In this case we also refer to $L$ as an \emph{almost Dirac structure of order $p$}.)
\noindent $L$ is a \emph{Dirac structure of order $p$}, or \emph{higher Dirac structure}, if it is Lagrangian and involutive.
\item $L$ is \emph{regular} if $pr_{TM} (L)$ has constant rank along $M$.
\end{itemize}
\end{defi}
\subsection{Involutive isotropic subbundles}
In this subsection we make some simple considerations on involutive isotropic subbundles and present some examples.
The involutive, Lagrangian subbundles of $E^1$ are the \emph{Dirac structures} introduced by Courant \cite{Cou}.
When $p=dim(M)$, isotropic subbundles are forced to lie inside $TM\oplus\{0\}$ or $\{0\}\oplus \wedge^pT^*M$, hence they are uninteresting.
Now, for arbitrary $p$, we look at involutive, isotropic subbundles that project isomorphically onto the first or second summand of $E^p$.
\begin{prop}\label{closedform} Let $p\ge1$. Let $\omega$ be a closed $p+1$-form on $M$. Then
$$graph(\omega):=\{X-\iota_X \omega: X\in TM\}$$
is an isotropic involutive subbundle of $E^p$.
All isotropic involutive subbundles $L\subset E^p$ that project isomorphically onto $TM$ under $pr_{TM} \colon E^p \to TM$ are of the above form.
\end{prop}
\begin{proof}
The subbundle $graph(\omega)$ is isotropic because $\langle X-\iota_X \omega, Y-\iota_Y \omega \rangle = -\iota_X\iota_Y \omega-\iota_Y\iota_X \omega=0$.
To see that $L$ is involutive, use the fact that since $\omega$ is closed $d(\iota_X \omega)=\mathcal{L}_X\omega$ and compute
$$[\![ X-\iota_X \omega, Y-\iota_Y\omega ]\!]=
[X,Y]-\mathcal{L}_X(\iota_Y\omega)+\iota_Y(\mathcal{L}_X\omega)=
[X,Y]- \iota_{[X,Y]}\omega.$$
Let $L\subset E^p$ be a subbundle that projects isomorphically onto $TM$, i.e. $L=\{X+B(X): X\in TM\}$ for some $B\colon TM \to \wedge^pT^*M$. If $L$ is isotropic then
the map $$TM\otimes TM \to \wedge^{p-1}T^*M,\;\;\; X\otimes Y \mapsto \iota_X(B(Y))$$ is skew in $X$ and $Y$, so
$B(X)=-\iota_{X}\omega$ defines a unique $p+1$-form $\omega$, which satisfies $graph(\omega)=L$. If $L$ is involutive then the above computation shows that $\omega$ is a closed form.
\end{proof}
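The isotropy part of the proof above is pointwise multilinear algebra and can be illustrated numerically. The following sketch (an illustration only; it works at a single point of $\mathbb{R}^4$ with $p=2$ and a random $3$-form) represents forms as antisymmetric arrays and checks that the pairing \eqref{pairing} vanishes on $graph(\omega)$:

```python
import itertools
import math

import numpy as np

def parity(perm):
    """Sign of a permutation, computed from its cycle decomposition."""
    sign, seen = 1, [False] * len(perm)
    for i in range(len(perm)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = perm[j], length + 1
            if length % 2 == 0:
                sign = -sign
    return sign

def antisym(T):
    """Fully antisymmetric part of a covariant k-tensor."""
    k = T.ndim
    out = sum(parity(p) * np.transpose(T, p)
              for p in itertools.permutations(range(k)))
    return out / math.factorial(k)

def iota(v, form):
    """Interior product: contract a vector into the first slot of a form."""
    return np.tensordot(v, form, axes=(0, 0))

rng = np.random.default_rng(0)
n, p = 4, 2                                    # R^4, a (p+1) = 3-form
omega = antisym(rng.standard_normal((n,) * (p + 1)))
X, Y = rng.standard_normal(n), rng.standard_normal(n)

# Sections of graph(omega) have the form  X - iota_X omega.
alpha, beta = -iota(X, omega), -iota(Y, omega)

# Pairing <X+alpha, Y+beta> = iota_X beta + iota_Y alpha, a (p-1)-form.
pairing = iota(X, beta) + iota(Y, alpha)
print(abs(pairing).max())                      # ~0: graph(omega) is isotropic
```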
The following generalization of Prop. \ref{closedform} is proven exactly as in the last paragraph of the proof of Prop. \ref{integ}. It provides a wide class of regular isotropic, involutive subbundles.
\begin{cor}\label{oS}
Fix $p\ge 1$. Let $\omega\in \Omega^{p+1}(M)$ be a $p+1$-form and $S$ an integrable distribution on $M$, such that $d\omega|_{\wedge^3 S \otimes \wedge^{p-1}TM}=0.$ Then
$$L:=\{X-\iota_X \omega +\alpha: X\in S, \alpha\in \wedge^p S^{\circ}\}$$
is an isotropic, involutive subbundle of $E^p$.
\end{cor}
\begin{prop}\label{NPiso} Let $1 \le p\le dim(M)-1$. Let $\pi\in \Gamma(\wedge^{p+1} TM)$ be either a Poisson bivector field, a $dim(M)$-multivector field or $\pi=0$.
Then
$$graph(\pi):=\{\iota_{\alpha} \pi + \alpha: \alpha \in \wedge^{p}T^*M\}$$
is an isotropic involutive subbundle of $E^p$.
All isotropic involutive subbundles $L\subset E^p$ that project isomorphically onto $\wedge^pT^*M$ under $pr_{\wedge^pT^*M} \colon E^p \to \wedge^pT^*M$ are of the above form.
\end{prop}
\begin{proof} We write $n:=p+1$, so $\pi$ is an $n$-vector field.
Clearly $graph(\pi)$ is isotropic in the cases $\pi=0$ and $n=2$. For the case $n=dim(M)$ fix a point $x\in M$. We may assume that at $x$ we have $\pi=\pd{x_1}\wedge \dots \wedge \pd{x_n}$ where $\{x_i\}_{i\le n}$ is a coordinate system on $M$. For each $i$ denote $dx_i^C:=dx_1\wedge \dots \widehat{dx_i} \dots \wedge dx_{n}$.
For $i\le j$ at the point $x$ we have
\begin{align}\label{topm}
&\langle \iota_{dx_i^C}\pi+ {dx_i^C},\iota_{dx_j^C}\pi+{dx_j^C} \rangle\\
=&((-1)^{(n-i)+(i-1)}+(-1)^{(n-j)+(j-2)})dx_1\wedge \dots \widehat{dx_i} \dots \widehat{dx_j} \dots \wedge dx_{n}=0 \nonumber
\end{align}
showing that $graph(\pi)$ is isotropic.
It is known that $graph(\pi)$ is involutive if{f} $\pi$ is a Nambu-Poisson multivector field (see \cite[\S 4.2]{Hagi}). For $n=2$ the Nambu-Poisson multivector fields are exactly Poisson bivector field, and for $n=dim(M)$ all $n$-multivector fields are Nambu-Poisson. This concludes the first part of the proof.
Conversely, assume that $L\subset E^{n-1}$ is an isotropic subbundle that projects isomorphically onto $\wedge^{n-1}T^*M$, i.e.
$L=\{A {\alpha} + \alpha: \alpha \in \wedge^{n-1}T^*M\}$ for some map $A \colon \wedge^{n-1}T^*M \to TM$.
Assume that $A$ is not identically zero, and that $n\neq 2, dim(M)$.
In this case we obtain a contradiction to the isotropicity of $L$, as follows.
There is a point $x\in M$ with $A_x\neq 0$. Near $x$ choose coordinates $x_1,\dots,x_{dim(M)}$ (notice that $dim(M)\ge n+1$).
Without loss of generality at $x$ we might assume that $A (dx_1\wedge \dots\wedge dx_{n-1})$ does not vanish.
It does not lie in the span of $\pd{x_1}, \dots,\pd{x_{n-1}}$ since we assume that $L$ is isotropic, so by modifying the coordinates $x_{n},\dots,x_{dim(M)}$
we may assume that $A (dx_1\wedge \dots\wedge dx_{n-1})=\pd{x_{n}}$.
Then $$\big\langle A_x(dx_1\wedge \dots \wedge dx_{n-1})+ dx_1\wedge \dots \wedge dx_{n-1}\;,\;
A_x(dx_3\wedge \dots \wedge dx_{n+1})+dx_3\wedge \dots \wedge dx_{n+1} \big\rangle \neq 0.$$
Indeed the contraction of $A_x(dx_1\wedge \dots \wedge dx_{n-1})=\pd{x_{n}}$
with $dx_3\wedge \dots \wedge dx_{n+1}$ contains the summand $(-1)^{n-3}\cdot
dx_3\wedge \dots \wedge dx_{n-1}\wedge dx_{n+1}$,
whereas the contraction of any vector of $T_xM$ with $dx_1\wedge \dots \wedge dx_{n-1}$ can not contain $dx_{n+1}$. Hence we obtain a contradiction to the isotropicity.
If $A\equiv0$ then clearly $L$ is isotropic.
In the case $n=2$, it is known that $L$ is isotropic if{f} it is the graph of a bivector field $\pi$.
Now consider the case $n=dim(M)$.
For any $i$, let $X_i+{dx_i^C}\in L$. The isotropicity condition implies that $X_i=\lambda_i\pd{x_i}$ for some $\lambda_i\in \ensuremath{\mathbb R}$, and a computation similar to \eqref{topm} implies $\lambda_i=(-1)^{n-i}\lambda_n$ for all $i$, so that
$L=graph(\pi)$ for $\pi=\lambda_n\pd{x_1}\wedge \dots \wedge \pd{x_n}$.
Hence we have shown that $L$ is isotropic if{f} $L$ is the graph of an $n$-vector field where $\pi=0$,
$n=2$ or $n=dim(M)$. As seen earlier, if $graph(\pi)$ is involutive then, in the case $n=2$, $\pi$ has to be a Poisson bivector field.
\end{proof}
We present a class of isotropic involutive subbundles which are not necessarily regular:
\begin{cor}\label{0n}
Let $\Omega$ be a top-degree form on $M$,
and $f\in C^{\infty}(M)$ such that $\Omega_x\neq 0$ at points of $\{x\in M: f(x)=0\}$. Then
$$L:=\{fX-\iota_X\Omega: X\in TM\}$$
is an involutive isotropic subbundle of $E^{dim(M)-1}$.
\end{cor}
\begin{proof} Let $x\in M$. If $f(x)\neq 0$, then nearby $L$ is the graph of $\frac{1}{f}\Omega$,
which being a top-form is closed. Hence, near $x$, $L$ defines an isotropic involutive subbundle by Prop. \ref{closedform}.
Now suppose that $f(x)=0$. Then $L_x$ is just $0+\wedge^{dim(M)-1}T^*_xM$,
so nearby $L$ is the graph of a top multivector field, and by Prop. \ref{NPiso} it is
an isotropic involutive subbundle.
\end{proof}
Notice that the isotropic subbundles described in Prop. \ref{closedform}, Prop. \ref{NPiso}, Cor. \ref{0n} are all Lagrangian (use Lemma \ref{easychar} below).\\
We end this subsection relating
involutive isotropic subbundles with Lie algebroids and Lie groupoids.
\begin{prop}\label{algoid}
Let $L\subset E^p$ be an involutive isotropic subbundle. Then $(L, [\![ \cdot,\cdot ]\!],pr_{TM})$ is a Lie algebroid \cite{CW},
where $pr_{TM} \colon E^p\to TM$ is
the projection onto the first factor.
\end{prop}
\begin{proof}
The restriction of the Dorfman bracket to $\Gamma(L)$ is skew-symmetric because of eq. \eqref{dorcou},
and as seen in eq. \eqref{Jacobi} the Dorfman bracket satisfies the Jacobi identity. The Leibniz rule holds because of eq. \eqref{Leib2}.
\end{proof}
Recall that (integrable) Dirac structures give rise to presymplectic groupoids in the sense of \cite{BCWZ} and, restricting to the non-degenerate case, that
Poisson structures give rise to symplectic groupoids. We generalize this:
\begin{prop}\label{referee} Suppose that the
Lie algebroid $L$ of Prop. \ref{algoid} integrates to a source simply connected Lie groupoid $\Gamma$. Then $\Gamma$ is canonically endowed with a multiplicative closed $p+1$-form $\Omega$.
Further, if $L$ is the graph of a multivector field as in Prop. \ref{NPiso} or the graph of a
multisymplectic form (see \S \ref{intro}), then $\Omega$ is a multisymplectic form.
\end{prop}
\begin{proof}
The first statement follows immediately from recent results of Arias Abad-Crainic, applying \cite[Thm. 6.1]{Abad:2009zr}
to the vector bundle map $\tau \colon L \to \wedge^p T^*M$ given by the projection onto the second factor, which satisfies the assumptions of the theorem since $L$ isotropic and because the Lie algebroid bracket on $L$ is the restriction of the Dorfman bracket.
Concretely, for all $x\in M$ and $e\in L_x$, $X_1,\dots,X_p \in T_xM$, the multiplicative form $\Omega$ is determined by the equation
\begin{equation}\label{henrmulti}
\Omega(e,X_1,\dots,X_p)= \langle pr_{\wedge^{p}T^*M}(e), X_1\wedge \dots \wedge X_p \rangle.
\end{equation}
Here on the l.h.s. we identify the Lie algebroid $L$ with $ker(s_*)|_M$, where $s \colon \Gamma \to M$ is the source map.
Now assume that $L$ is the graph of a multivector field $\pi$ as in Prop. \ref{NPiso}.
First, given a non-zero $e\in L$, it follows that $pr_{\wedge^{p}T^*M}(e)$ is also non-zero, so it pairs non-trivially with some $X_1\wedge \dots \wedge X_p \in \wedge^{p}TM$. Second, given a non-zero $X_1\in TM$, extend it to a non-zero element $X_1\wedge \dots \wedge X_p
\in \wedge^{p}TM$, and choose $\alpha \in \wedge^{p}T^*M$ so that their pairing is non-trivial. Let $e:=\iota_{ \alpha} \pi+\alpha$.
Then the expression \eqref{henrmulti} is non-zero. Since $T\Gamma|_M=TM\oplus ker(s_*)|_M$ and $\Omega|_{\wedge^{p+1}TM}=0$, this
shows that $\Omega$ is multisymplectic at points of $M$. To make the same conclusion at every $g\in \Gamma$, use \cite[eq. (3.4)]{BCWZ}
that the multiplicativity of $\Omega$ implies \begin{equation*}
\Omega_g((R_g)_* e,w_1,\dots,w_p)=\Omega_{x}(e,t_*(w_1),\dots,t_*(w_p))
\end{equation*}
for all $e\in ker(s_*)|_{x}$ and $w_i\in T_g \Gamma$. Here $t\colon \Gamma \to M$ is the target map and $x:=t(g)\in M$.
Last, assume that $L$ is the graph of a multisymplectic form $\omega$ on $M$.
Given a non-zero $e\in L$, say $e=X-\iota_X \omega$, we have by eq. \eqref{henrmulti} that $\iota_e \Omega|_{\wedge^{p}TM}=-\iota_X \omega\neq 0$. Given a non-zero $X_1\in TM$, there is $X\wedge X_2\wedge \dots \wedge X_p
\in \wedge^{p}TM$ with which $\iota_{X_1}\omega$ pairs non-trivially. Let $e:=X-\iota_X \omega$. Then the expression \eqref{henrmulti} is non-zero. This shows that $\Omega$ is multisymplectic at points of $M$, and by the argument above on the whole of $\Gamma$.
\end{proof}
\subsection{Higher Dirac structures}\label{subsec:lag}
In this subsection we characterize Lagrangian subbundles $L\subset E^p$ (i.e. almost Dirac structures of order $p$) and their involutivity.
We start characterizing Lagrangian subbundles at the linear algebra level. Recall first what happens in the case $p=1$. Let $T$ be a vector space. Any $L\subset T\oplus T^*$ such that $L=L^{\perp}$ is determined exactly by the subspace $S:=pr_T(L)$ and a skew-symmetric bilinear form on it \cite{BR}.
Further $dim(S)$ can assume any value between $0$ and $dim(T)$. For $p\ge 2$
the description is more involved, however
it remains true that every Lagrangian subspace of $T\oplus \wedge^p T^*$ can be described by means of a subspace $S\subset T$ (satisfying a dimensional constraint) and a (non-unique) $p+1$-form on $T$.
\begin{prop}\label{linalg}
Fix a vector space $T$ and an integer $p\ge 1$. There is a bijection between
\begin{itemize}
\item Lagrangian subspaces $L\subset T\oplus \wedge^p T^*$
\item \text{ pairs }
$$\begin{cases}
S\subset T &\text{ such that either } dim(S)\le (dim(T)-p) \text{ or }S=T,\\
\Omega \in \wedge^2 S^*\otimes \wedge^{p-1}T^* &\text{ such that }
\Omega \text{ is the restriction of an element of } \wedge^{p+1}T^*.
\end{cases}$$
\end{itemize}
The correspondence is given by
\begin{align*}
L &\mapsto \begin{cases} S:=pr_T(L)\\
\Omega \text{ given by } \iota_X \Omega
=\alpha|_{S\otimes\bigotimes^{p-1}T} \text{ for all } X+\alpha \in L
\end{cases}\\
(S,\Omega)&\mapsto L:=\{X+\alpha: X\in S, \alpha|_{S\otimes\bigotimes^{p-1}T}=\iota_X \Omega
\}.
\end{align*}
\end{prop}
Here we regard $\wedge^n T^*$ as the subspace of $ \bigotimes^n T^*:=T^*\otimes\dots\otimes T^*$ consisting of elements invariant under the odd representation of the permutation group in $n$ elements. Loosely speaking, the restriction on $dim(S)$ arises as follows: when it is not satisfied $\wedge^p S^{\circ}=\{0\}$ and $S\neq T$,
and one can enlarge
$L$ to an isotropic $L'\subset T\oplus \wedge^p T^*$ such that $pr_T(L')$ is strictly larger than $S$.
The proof of Prop. \ref{linalg} is presented
in Appendix \ref{applag}.
An immediate corollary of Prop. \ref{linalg}, which we present without proof, is:
\begin{cor}\label{norom} Fix a vector space $T$ and an integer $p\ge 1$. For any Lagrangian subspace $L\subset T\oplus \wedge^p T^*$ let $(S,\Omega)$ be the corresponding pair as in Prop. \ref{linalg}, and
$\omega\in \wedge^{p+1}T^*$ an arbitrary extension of $\Omega$. Then $L$ can be described in terms of $S$ and $\omega$ as
$$L=\{X+\iota_X \omega +\alpha: X\in S, \alpha\in \wedge^p S^{\circ}\}.$$
\end{cor}
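As a consistency check, for $p=1$ Cor. \ref{norom} recovers Courant's description recalled at the beginning of this subsection: since $\wedge^{1} S^{\circ}=S^{\circ}$, the Lagrangian subspace reads

```latex
L=\{\, X+\iota_X \omega +\xi \;:\; X\in S,\ \xi\in S^{\circ} \,\},
```

so $L$ is determined by the subspace $S=pr_T(L)$ together with the skew-symmetric bilinear form $\omega|_{\wedge^2 S}$, and the dimensional constraint of Prop. \ref{linalg} allows every value of $dim(S)$.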
As an immediate consequence of Lemma \ref{easychar}, we obtain the following dimensional constraints on the singular distribution induced by a Lagrangian subbundle:
\begin{cor}\label{restr} Let $L\subset E^p$ be a Lagrangian subbundle. Denote $S:=pr_{TM}(L)$. Then
\begin{itemize}
\item[a)] $dim(S_x)\in\{0,1,\dots,dim(M)-p,dim(M)\}$ for all $x\in M$
\item[b)] $dim(L_x)=dim(S_x)+
{dim(M)-dim(S_x) \choose p}$ is constant for all $x\in M$.
\end{itemize}
\end{cor}
When $p=1$, so that $L$ is a maximal isotropic subbundle of $TM\oplus T^*M$,
the dimensional constraints of Cor. \ref{restr} do not pose any restriction on $dim(S_x)$. (It is known, however, that
$dim(S_x) \;mod\; 2$ must be constant on $M$.) When $p\ge 2$, Lagrangian subbundles of $E^p$ are quite rigid.
\begin{ex}
Let $p=dim(M)-1$, and let $L$ be a Lagrangian subbundle of $E^p$.
Cor. \ref{restr} a)
implies that at every point $dim(S_x)$ is either $0$, $1$ or $dim(M)$.
Assume that $p\ge 2$.
By Cor. \ref{restr} b), if $rk(S)=1$ at one point then $rk(S)=1$ on the whole of $M$, and the rank 2 bundle $L$ is equal to $S\oplus \wedge^{dim(M)-1} S^{\circ}$.
Otherwise, at
any point $x$ we have either $S_x=T_xM$ or $L_x=0+\wedge^{dim(M)-1}T^*M$.
In the first case by Cor. \ref{norom} we know that, near $x$, $L$ is the graph of a top form. In the second case $L$
projects isomorphically onto the second component $\wedge^{dim(M)-1}T^*M$ near $x$, so by Prop. \ref{NPiso} it must be the graph of a $dim(M)$-vector field.
\end{ex}
Finally, we characterize when a regular Lagrangian subbundle is a higher Dirac structure.
\begin{thm}\label{integ}
Let $M$ be a manifold, fix an integer $ p\ge 1$ and a Lagrangian subbundle $L\subset TM\oplus \wedge^p T^*M$. Assume that $S:=pr_{TM} (L)$ has constant rank along $M$. Choose a form $\omega\in \Omega^{p+1}(M)$ such that $S$ and $\omega$ describe $L$ as in Cor. \ref{norom}.
Then $L$ is involutive if{f} $S$ is an involutive distribution and $d\omega|_{\wedge^3 S \otimes \wedge^{p-1}TM}=0.$
\end{thm}
\begin{proof}
First notice that a $(p+1)$-form $\omega$ as above always exists, as it can be constructed as in Lemma \ref{ext} choosing a (smooth) distribution $C$ on $M$ complementary to $S$. We use the description of $L$ given in Cor. \ref{norom}.
Assume that $L$ is involutive. By Prop. \ref{algoid}, $S$ is an involutive distribution.
Let $X,Y$ be sections of $S$. Using $\mathcal{L}_X\omega=d(\iota_X \omega)+\iota_X d\omega$ we have
$$[\![ X+\iota_X \omega, Y+\iota_Y\omega ]\!]=
[X,Y]+\mathcal{L}_X(\iota_Y\omega)-\iota_Y (\mathcal{L}_X\omega)+\iota_Y \iota_X d\omega=
[X,Y]+\iota_{[X,Y]}\omega+\iota_Y \iota_X d \omega.$$
Since this lies in $L$
we have $\iota_Y \iota_X d \omega \in \wedge^{p} S^{\circ}$ for all sections $X,Y$ of $S$, which is equivalent to $d\omega|_{\wedge^3 S \otimes \wedge^{p-1}TM}=0.$
Conversely, assume the above two conditions on $S$ and $d\omega$. The above computation shows that for all sections $X,Y$ of $S$, the bracket $[\![ X+\iota_X \omega, Y+\iota_Y\omega ]\!]$ lies in $L$.
The brackets of $X+\iota_X \omega$ with sections of $\wedge^p S^{\circ}$ lie in $L$ since, by the involutivity of $S$, locally $\wedge^p S^{\circ}$ admits a frame consisting of $p$-forms $\alpha_i$ which are closed and which hence satisfy $[\![ \alpha_i,\cdot ]\!]=0$. Therefore $L$ is involutive.
\end{proof}
Notice that for $p=1$ (so $d\omega$ is a 3-form) we obtain the familiar statement that a regular almost Dirac structure $L$ is involutive if{f} $pr_{TM}(L)$ is an involutive distribution whose leaves are endowed with closed 2-forms (see \cite[Thm. 2.3.6]{Cou}).
\section{Equivalence of higher Dirac and Multi-Dirac structures}\label{equivmd}
Recently Vankerschaver, Yoshimura and Marsden \cite{MultiDirac} introduced the notion of Multi-Dirac structure. In this section we show that, at least in the regular case, it is equivalent to our notion of higher Dirac structure.
This section does not affect any of the following ones and might be skipped on a first reading.
We recall some definitions from \cite[\S 4]{MultiDirac}. All along we fix an integer $p\ge 1$ and a manifold $M$. In the following the indices $r,s$ range from $1$ to $p$.
Define $$P_r:=\wedge^r TM\oplus \wedge^{p+1-r} T^*M.$$
Define a pairing $P_r\times P_s \to \wedge^{p+1-r-s} T^*M$
by
\begin{equation*}
\langle\!\langle
({Y}, \eta), (\bar{{Y}}, \bar{\eta}) \rangle\!\rangle :=\frac{1}{2}\left( \iota_{\bar{{Y}}}\eta-(-1)^{rs}\iota_{{Y}}\bar{\eta} \right).
\end{equation*}
If $V_s\subset P_s$, then $(V_{s})^{\perp,r}\subset P_r$ is defined by
\begin{equation}\label{perpe}
(V_{s})^{\perp,r}:= \{ ({Y}, \eta) \in P_{r} :
\langle\!\langle({Y}, \eta)\;,\; V_s \rangle\!\rangle=0 \}.
\end{equation}
\begin{defi}\label{MultiDirac}
An \emph{almost multi-Dirac structure of degree $p$} on $M$ consists of subbundles
$(D_1, \ldots, D_{p})$, where
$
D_{r} \subset P_{r}$ for all $r$,
satisfying
\begin{equation} \label{isotropy}
D_{r}=(D_{s})^{\perp, r}
\end{equation}
for all $r,s$ with $r+s \le p+1$.
\end{defi}
\begin{prop}\label{eqla} Fix a manifold $M$ and an integer $p\ge 1$.
There is a bijection
\begin{align*}
\{\text{almost Multi-Dirac structures of degree }p \} &\cong \{\text{almost Dirac structures }L\text{ of order }p \text{ s.t. } \\
&\;\;\;\;\;\;\;L^{\perp, r} \text{ is a subbundle for }r=2,\dots,p\}\\
(D_1,\dots,D_p)&\mapsto D_1.
\end{align*}
\end{prop}
The proof of Prop. \ref{eqla} uses the following extension of Cor. \ref{norom}:
\begin{lemma}\label{mnormal} Fix a vector space $T$ and an integer $p\ge 1$.
Let $L$ be a Lagrangian subspace of $T\oplus \wedge^pT^*$, and define
$D_r:=(L)^{\perp, r}$ for $r=1,\dots,p$. Choose $\omega\in \wedge^{p+1}T^*$ so that $\omega$ and $S:=pr_{T}(L)$ describe $L$ as in Cor. \ref{norom}. Then for all $r$ we have
\begin{equation*}
D_r= \{Y+\iota_Y \omega +\xi: Y\in S\wedge (\wedge^{r-1}T), \xi \in \wedge^{p+1-r} S^{\circ}\}.
\end{equation*}
\end{lemma}
\begin{proof}
`` $\subset$:'' We first claim that
$$pr_{\wedge^r T} (D_r)\subset S\wedge (\wedge^{r-1}T).$$ If $S=T$ this is obvious.
If $S\neq T$, by Prop. \ref{linalg} we have that $\wedge^p S^{\circ} \subset L$ is non-zero.
As $(Y,\eta)\in D_r$ implies $\iota_Y (\wedge^p S^{\circ})= 0$, we conclude that $Y\in S\wedge (\wedge^{r-1}T)$.
Let $(Y,\eta)\in D_r$. For all $(X,\alpha)\in L$ we have $\alpha-\iota_{X}\omega \in \wedge^p S^{\circ}$ by Cor. \ref{norom}, and since $Y\in S\wedge (\wedge^{r-1}T)$ we obtain $\iota_{Y}\alpha=\iota_{Y} (\iota_{X}\omega)$. Hence zero equals
\begin{equation}\label{eiota}
\langle\!\langle
({Y}, \eta),(X,\alpha) \rangle\!\rangle = \iota_X \eta -(-1)^r \iota_{Y}\alpha=
\iota_X \eta -(-1)^r\iota_{Y} (\iota_{X}\omega)=\iota_{X}(\eta-\iota_{Y}\omega),
\end{equation}
that is, $\eta-\iota_{Y}\omega\in \wedge^{p+1-r} S^{\circ}$.
Notice that in the last equality of eq. \eqref{eiota} we used the total skew-symmetry of $\omega$.
`` $\supset$'' follows from eq. \eqref{eiota}.
\end{proof}
\begin{proof}[Proof of Prop. \ref{eqla}]
The map in the statement of Prop. \ref{eqla} is well-defined by eq. \eqref{isotropy} with $r=s=1$.
It is injective as $D_r=(D_{1})^{\perp, r}$ is determined by $D_1$ for $r=2,\dots,p$, again by eq. \eqref{isotropy}.
We now show that it is surjective.
Let $L$ be a Lagrangian subbundle of $E^p$, and assume that
$D_r:=(L)^{\perp, r}$ is a smooth subbundle for $r=1,\dots,p$. We have to show that eq. \eqref{isotropy} holds for all $r,s$ with $r+s\le p+1$.
If $(Y,\eta)\in D_r$ and $(\bar{Y},\bar{\eta})\in D_s$, then $\iota_{Y}\bar{\eta}=\iota_{Y} (\iota_{\bar{Y}}\omega)$ by Lemma \ref{mnormal}, showing $\langle\!\langle
D_r,D_s\rangle\!\rangle=0$ and the
inclusion ``$\subset$''.
For the opposite inclusion take $(Y,\eta)\in (D_{s})^{\perp, r}$ at some point $x\in M$. In particular
$(Y,\eta)$ is orthogonal to $\wedge^{p+1-s} S^{\circ}_x$ (where $S_x:=pr_{T_xM}L$). The latter does not vanish by Prop. \ref{linalg} if $S_x\neq T_xM$, and since $r\le p+1-s$ we conclude that $Y\in S_x\wedge (\wedge^{r-1}T_xM)$. If $S_x=T_xM$ the same conclusion holds. A computation analogous to eq. \eqref{eiota} implies that for all $(\bar{Y},\bar{\eta})\in D_s$ we have $0=\iota_{\bar{Y}}(\eta-\iota_{Y}\omega)$. Since such $\bar{Y}$ span
$S_x\wedge (\wedge^{s-1}T_xM)$
by Lemma \ref{mnormal} applied to $D_s$, from $s\le p+1-r$ it follows that $\eta-\iota_{Y}\omega\in \wedge^{p+1-r} S_x^{\circ}$. Hence by Lemma \ref{mnormal} we have $(Y,\eta)\in D_r$.
\end{proof}
In order to introduce the notion of integrability for almost multi-Dirac structures, as in \cite{MultiDirac} define
$\left[\!\left[\cdot,\cdot \right]\!\right]_{r,s} \colon \Gamma(P_r)\times \Gamma(P_s) \to \Gamma(P_{r+s-1})$ by
\begin{equation*}
\left[\!\left[\left({Y},\eta\right), \left(\bar{{Y}}, \bar{\eta}\right) \right]\!\right]_{r,s}\\
:=
\left( [{Y},\bar{{Y}}], \; \mathcal{L}_{{Y}}\bar{\eta}-(-1)^{(r-1)(s-1)}\mathcal{L}_{\bar{{Y}}}\eta+\frac{(-1)^{r}}{2}d
\left( \iota_{\bar{{Y}}}\eta+(-1)^{rs}\iota_{{Y}}\bar{\eta} \right)
\right).
\end{equation*}
\begin{defi}
An almost Multi-Dirac structure $(D_1, \dots, D_{p})$ is \emph{integrable} if
\begin{equation}
\label{involm}\left[\!\left[D_r, D_s\right]\!\right]_{r,s} \subset D_{r+s-1}
\end{equation}
for all $r, s$ with $r+s \le p$. In that case it is a \emph{Multi-Dirac structure}.
\end{defi}
We call an almost Multi-Dirac structure $(D_1,\dots,D_p)$ \emph{regular} if $pr_{TM} (D_1)$ has constant rank. By Lemma \ref{mnormal}, this is equivalent to $pr_{\wedge^r TM} (D_r)$ having constant rank for $r=1,\dots,p$.
Under this regularity assumption, we obtain an equivalence for integrable structures.
\begin{thm}\label{eqint} Fix a manifold $M$ and an integer $p\ge 1$.
The bijection of Prop. \ref{eqla} restricts to a bijection
\begin{align*}
\{\text{regular Multi-Dirac structures of degree }p \} &\cong \{\text{regular Dirac structures of order }p \}
\end{align*}
\end{thm}
\begin{proof}
If $(D_1,\dots,D_p)$ is a Multi-Dirac structure, by the remark at the end of \cite[\S 4]{MultiDirac}, $D_1$ is involutive w.r.t. the Courant bracket. Therefore it is involutive w.r.t. the Dorfman bracket, that is, it is a Dirac structure of order $p$.
For the converse, notice that if $L$ is a regular Dirac structure then
$L^{\perp, r}$ is always a smooth subbundle by Lemma \ref{mnormal}.
So let $(D_1,\dots,D_p)$ be a regular almost Multi-Dirac structure
with the property that $L:=D_1$ is involutive. Choose $\omega\in \Omega^{p+1}(M)$ so that $(\omega, S:=pr_{TM}(L))$ describe $L$ as in Cor. \ref{norom}. Such a differential form exists by the regularity assumption.
To show that condition \eqref{involm} holds, let $Y\in \Gamma(S\wedge (\wedge^{r-1}TM))$ and $\bar{Y}\in \Gamma(S\wedge (\wedge^{s-1}TM))$.
We have $$\left[\!\left[Y+\iota_Y \omega, \bar{Y}+\iota_{\bar{Y}} \omega\right]\!\right]_{r,s} =\left([Y, \bar{Y}], \iota_{[Y, \bar{Y}] }\omega+(-1)^r \iota_Y\iota_{\bar{Y}} d\omega \right),$$ see for instance \cite[Proof of Thm. 4.5]{MultiDirac}.
Now $\iota_Y\iota_{\bar{Y}} d\omega \in \Gamma(\wedge^{p+2-r-s}S^{\circ})$ by Thm. \ref{integ}, so the above lies in $D_{r+s-1}$ by Lemma \ref{mnormal}.
Further, the involutivity of $S$ implies that locally $\wedge^{p+1-s} S^{\circ}$ admits a frame consisting of closed forms $\alpha_i$. For any choice of functions $f_i$ we have
$$\left[\!\left[Y+\iota_Y \omega, f_i \alpha_i \right]\!\right]_{r,s}
=\mathcal{L}_Y (f_i \alpha_i) + (-1)^{r(s+1)} d \iota_Y(f_i \alpha_i)
=\iota_{Y}(df_i\wedge \alpha_i),$$ which lies in $\Gamma(\wedge^{p+2-r-s}S^{\circ})$
since $Y\in \Gamma(S\wedge (\wedge^{r-1}TM))$ and $\alpha_i \in \Gamma(\wedge^{p+1-s} S^{\circ})$.
\end{proof}
Finally, we comment on how our definition of higher Dirac structure differs from
Hagiwara's Nambu-Dirac structures \cite{Hagi}, which also are an extension of
Courant's notion of Dirac structure.
\begin{remark}\label{Hagi}
A \emph{Nambu-Dirac structure} on a manifold $M$ \cite[Def. 3.1, Def. 3.7]{Hagi} is an involutive subbundle $L\subset E^p$ satisfying
\begin{align}\label{Hagiiso}
&\langle X_1+\alpha_1,X_2+\alpha_2\rangle|_{\wedge^{p-1}(pr_{TM}(L))}=0,\\
\label{hismax}
&\wedge^{p}(pr_{TM}(L))=pr_{\wedge^{p}TM}L^{\perp,p},
\end{align}
where $L^{\perp,p}\subset \wedge^{p}TM\oplus T^*M$ is defined as in eq. \eqref{perpe}. When $p=1$, Nambu-Dirac structures agree with Dirac structures. Graphs of closed forms and of Nambu-Poisson multivector fields are Nambu-Dirac structures.
Our isotropicity condition \eqref{isot} is clearly stronger than \eqref{Hagiiso}. Nevertheless, higher Dirac structures are usually \emph{not} Nambu-Dirac structures, for the former satisfy
$$pr_{TM}(L)\wedge(\wedge^{p-1}TM)= pr_{\wedge^{p}TM}L^{\perp,p}$$ by Lemma \ref{mnormal}, and hence usually do not satisfy \eqref{hismax}.
A concrete instance is given by the 3-dimensional Lagrangian subspace $L\subset T\oplus \wedge^2 T^*$ given as in Cor. \ref{norom} by $T=\ensuremath{\mathbb R}^4$, $S$ equal to the plane
$\{x_3=x_4=0\}$ and $\omega=dx_1\wedge dx_2\wedge dx_3$.\end{remark}
\section{Review: $L_{\infty}$-algebras}\label{Linfty}
In this section we review briefly the notion of $L_{\infty}$-algebra, which generalizes Lie algebras
and was introduced by Stasheff \cite{LadaStasheff} in the 1990s.
We will follow the conventions of Lada-Markl\footnote{Except that on graded vector spaces we take the grading inverse to theirs.}
\cite[\S2,\S5]{LadaMarkl}.
Recall that a \emph{graded vector space} is just a (finite dimensional, real) vector space $V=\oplus_{i\in \ensuremath{\mathbb Z}}V_i$ with a direct sum decomposition into subspaces. An element
of $V_i$ is said to have degree $i$, and we
denote its degree by $|\cdot|$.
For any $n\ge 1$, $V^{\otimes n}$ is a graded vector space, and the symmetric group
acts on it
by the
so-called odd representation: the transposition
of the $k$-th and $(k+1)$-th element acts by
$$v_1\otimes\dots\otimes v_n \mapsto -(-1)^{|v_k||v_{k+1}|} v_1\otimes\dots\otimes
v_{k+1}\otimes v_k \otimes\dots\otimes v_n.$$
The \emph{$n$-th graded exterior product of $V$} is the graded vector space
$\wedge^n V$, consisting of elements of $V^{\otimes n}$ which are fixed by the odd representation of the symmetric group.
\begin{defi}\label{lidef}
An \emph{$L_{\infty}$-algebra} is a graded vector space $V=\bigoplus_{i\in \ensuremath{\mathbb Z}}V_i$ endowed with a sequence of multi-brackets ($n\ge 1$)
\begin{equation*}
l_n \colon \wedge ^n V \to V
\end{equation*}
of degree $2-n$, satisfying the following quadratic relations for each $n\ge 1$:
\begin{align}\label{lijac}
\sum_{i+j=n+1}\sum_{\sigma\in Sh(i,n-i)}\chi(\sigma)(-1)^{i(j-1)}l_j(l_i(v_{\sigma(1)},\dots,v_{\sigma(i)}),
v_{\sigma(i+1)},\dots,v_{\sigma(n)})=0.
\end{align}
Here $Sh(i,n-i)$ denotes the set of $(i,n-i)$-unshuffles, that is, permutations
preserving the order of the first $i$ elements and the order of the last $n-i$ elements.
The sign $\chi(\sigma)$ is given by the action of $\sigma$ on
$v_1\otimes\dots\otimes v_n$ in the odd representation.
\end{defi}
\begin{remark}
1) The quadratic relations imply that the unary bracket $l_1$ squares to zero, so $(V,l_1)$ is a chain complex of vector spaces. Hence $L_{\infty}$-algebras can be viewed as chain complexes with the extra data given by the multi-brackets
$l_n$ for $n\ge 2$.
2) When $V$ is concentrated in degree $0$,
(i.e., only $V_0$ is non-trivial) then $\wedge^n V$ is the usual $n$-th exterior product of $V$, and is concentrated in degree zero. Hence
by degree reasons only the binary bracket $l_2$ is non-zero, and the quadratic relations are simply the Jacobi identity, so we recover the notion of Lie algebra.
\end{remark}
For any $p\ge 1$, we use the term \emph{Lie $p$-algebra} to denote an
$L_{\infty}$-algebra whose underlying graded vector space is concentrated in degrees $-p+1,\dots,0$. Notice that
by degree reasons
only the multi-brackets $l_1,\dots,l_{p+1}$ can be non-zero.
In particular, a Lie 2-algebra consists of a graded vector space $V$
concentrated in degrees $-1$ and $0$, together with
maps
\begin{align*}
d:=&l_1 \colon V\to V\\
[\cdot,\cdot]:=&l_2 \colon \wedge ^2 V \to V\\
J:=&l_3 \colon \wedge ^3 V \to V
\end{align*}
of degrees $1$, $0$ and $-1$ respectively, subject to the quadratic relations.\\
An \emph{$L_{\infty}$-morphism} $\phi \colon V \rightsquigarrow V'$ between $L_{\infty}$-algebras is a sequence of maps ($n\ge 1$)
\begin{equation*}
\phi_n \colon \wedge ^n V \to V'
\end{equation*}
of degree $1-n$, satisfying certain relations, which can be found in
\cite[Def. 5.2]{LadaMarkl} in the case when $V'$ has only the unary and binary bracket.
The first of these relations says that $\phi_1 \colon V \to V'$ must preserve the differentials (unary brackets). We spell out the definition when
$V$ and $V'$ are Lie 2-algebras.
\begin{defi}\label{defmor} Let $(V,d,[\cdot,\cdot],J)$ and $(V',d',[\cdot,\cdot]',J')
$ be Lie 2-algebras.
A \emph{morphism} $\phi \colon V \rightsquigarrow V'$ consists of
linear maps
\begin{align*}
\phi_0 &\colon V_0 \to V'_0\\
\phi_{1} &\colon V_{-1} \to V'_{-1}\\
\phi_2 & \colon\wedge^2 V_0 \to V'_{-1}
\end{align*}
such that
\begin{align} \label{chainmap}d' \circ\phi_{1}&= \phi_{0}\circ d,\\
\label{failjac}d' (\phi_2(x,y))&=\phi_0[x,y]-[\phi_0(x),\phi_0(y)]'
\;\;\;\text{ for all } x,y \in V_0,\\
\label{failjacnew}\phi_2(df,y)&=\phi_1[f,y]-[\phi_1(f),\phi_0(y)]'
\;\;\;\text{ for all } f\in V_{-1},y \in V_0,
\end{align}
and for all $x,y,z \in V_0$:
\begin{align}\label{eight}
&\phi_0(J(x,y,z))-J'(\phi_0(x),\phi_0(y),\phi_0(z))=\\
&\phi_2(x,[y,z])
-\phi_2(y,[x,z])+\phi_2(z,[x,y]) \nonumber \\
+&[\phi_0(x),\phi_2(y,z)]'-[\phi_0(y),\phi_2(x,z)]'
+[\phi_0(z),\phi_2(x,y)]' \nonumber.
\end{align}
\end{defi}
\section{$L_{\infty}$-algebras from higher analogues of Dirac structures}\label{obs}
Courant \cite[\S 2.5]{Cou} associated to every Dirac structure on $M$ a subset of $C^{\infty}(M)$, which we refer to as Hamiltonian functions or observables.
Usually the Hamiltonian vector field associated to such a function is not unique. Nevertheless, the set of Hamiltonian functions is endowed with
a Poisson algebra structure (a Lie bracket compatible with the product of functions).
Baez, Hoffnung and Rogers associate to a $p$-plectic form a set of Hamiltonian $(p-1)$-forms and endow it with a bracket \cite[\S 3]{BHR}. Rogers shows that the bracket can be extended to obtain a Lie $p$-algebra \cite[Thm. 5.2]{RogersL}.
In this section we mimic Courant's definition of the bracket and extend Rogers' results to arbitrary isotropic involutive subbundles.
Let $p\ge 1$ and let $L$ be an isotropic, involutive subbundle of $E^p=TM\oplus \wedge^pT^*M$.
\begin{defi}\label{ham}
A $(p-1)$-form $\alpha \in \Omega^{p-1}(M)$ is called \emph{Hamiltonian} if there exists a smooth vector field $X_{\alpha}$ such that $X_{\alpha}+d\alpha \in \Gamma(L)$. We denote the set of Hamiltonian forms by $\Omega^{p-1}_{ham}(M,L)$.
We refer to $X_{\alpha}$ as \emph{a Hamiltonian vector field} of $\alpha$.
\end{defi}
\begin{remark}\label{ann}
a) Hamiltonian vector fields are unique only up to smooth sections of $L\cap (TM\oplus 0)$.
b) For all $X\in L_x\cap (T_xM\oplus 0)$ and for all $\eta \in pr_{\wedge^pT^*M} L_x$,
$$\iota_X \eta=0.$$ Here $x\in M$ and $pr_{\wedge^pT^*M}$ denotes the projection of $E^p_x$ onto the second component. The above property follows from the fact that there exists $Y\in T_xM$ with $Y+\eta \in L_x$, so
$\iota_X \eta = \langle X+0,Y+\eta \rangle=0$ by the isotropicity of $L$.
\end{remark}
\begin{defi}\label{brs} We define a bracket
$\{\cdot,\cdot\}$ on $\Omega^{p-1}_{ham}(M,L)$ by
\begin{equation*}
\{\alpha,\beta\}:= \iota_{X_{\alpha}} d\beta,
\end{equation*}
where $X_{\alpha}$ is any Hamiltonian vector field for $\alpha$.
\end{defi}
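For instance, when $p=1$ and $L=\{X-\iota_X \omega: X\in TM\}$ for a symplectic form $\omega$ (cf. Remark \ref{graph-o} below), the condition $X_f+df\in \Gamma(L)$ reads $df=-\iota_{X_f}\omega$, and $\{f,g\}=\iota_{X_f}dg=X_f(g)$ is the usual Poisson bracket. Concretely, on $M=\ensuremath{\mathbb R}^2$ with $\omega=dx\wedge dy$ one finds
$$X_f=-\frac{\partial f}{\partial y}\pd{x}+\frac{\partial f}{\partial x}\pd{y}, \;\;\;\;\;
\{f,g\}=\frac{\partial f}{\partial x}\frac{\partial g}{\partial y}-\frac{\partial f}{\partial y}\frac{\partial g}{\partial x}.$$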
\begin{lemma}\label{bracket}
The bracket $\{\cdot,\cdot\}$ is well-defined and skew-symmetric.
It does not satisfy the Jacobi identity, but rather
\begin{equation*}
\{\alpha,\{\beta,\gamma\}\} +c.p.=-d (\iota_{X_{\alpha}}\{\beta,\gamma\})
\end{equation*}
where ``$c.p.$'' denotes cyclic permutations.
\end{lemma}
\begin{proof}
The bracket is well-defined: by Remark \ref{ann}
the ambiguity in
the choice of $X_{\alpha}$ is
a section $X$ of $L\cap (TM\oplus 0)$ and $\iota_X d \beta =0$.
Using $\mathcal{L}_Y=\iota_Yd+d\iota_Y$ one computes
\begin{equation}\label{closed}
[\![ X_{\alpha}+d\alpha, X_{\beta}+d\beta]\!]= [X_{\alpha},X_{\beta}]+ d \{ \alpha, \beta\}.
\end{equation}
Hence $[X_{\alpha}, X_{\beta}]$ is a Hamiltonian vector field for $\{\alpha,\beta\}$, showing that $\Omega^{p-1}_{ham}(M,L)$ is closed under $\{\cdot,\cdot\}$.
The bracket is skew-symmetric because $$0=\langle X_{\alpha}+d\alpha, X_{\beta} +d\beta \rangle=
\{\alpha,\beta\}+\{\beta,\alpha\}.$$
To compute the Jacobiator of $\{\cdot,\cdot\}$ we proceed as in\footnote{There the case $p=1$ is treated, and the term $\iota_{X_{\alpha}}\{\beta,\gamma\} $ vanishes by degree reasons.} \cite[Prop. 2.5.3]{Cou}. Since $L$ is isotropic and involutive we have
\begin{align}
0&=\langle [\![ X_{\alpha}+d\alpha\;,\;X_{\beta}+d\beta ]\!],X_{\gamma}+d\gamma \rangle \nonumber \\
&=\langle [X_{\alpha},X_{\beta}]+d\{\alpha,\beta\}\;,\;X_{\gamma}+d\gamma \rangle \nonumber \\
&= \iota_{[X_{\alpha},X_{\beta}]}d\gamma+ \iota_{X_{\gamma}}d\{\alpha,\beta\} \nonumber \\
&= \left(\{ \alpha,\{\beta,\gamma\}\} +c.p.\right)+d \;(\iota_{X_{\alpha}}\{\beta,\gamma\}). \nonumber
\end{align}
Here the second equality uses eq. \eqref{closed} and the last equality uses $\iota_{[Y,Z]}=[\mathcal{L}_Y,\iota_Z]$.
\end{proof}
\begin{remark}\label{graph-o}
Given a $p$-plectic form $\omega$,
Cantrijn, Ibort and de Le\'on \cite[\S 4]{Hammulti}
define the space of
Hamiltonian $(p-1)$-forms $\alpha$ by the requirement that $d\alpha=-\iota_{X_{\alpha}}\omega$
for a (necessarily unique) vector field $X_{\alpha}$ on $M$, and define
the \emph{semi-bracket} $\{\alpha,\beta\}_s$ by $\iota_{X_{\beta}}\iota_{X_{\alpha}}\omega$.
These notions
coincide with our Def. \ref{ham} and Def. \ref{brs} applied to $graph(\omega):=\{X-\iota_X \omega: X\in TM\}\subset E^{p}$.
\end{remark}
\begin{remark} Given a $p$-plectic form,
in \cite[Def. 3.3]{BHR} the \emph{hemi-bracket} of $\alpha,\beta \in
\Omega^{p-1}_{ham}(M,graph(\omega))$ is also defined, by the formula $\mathcal{L}_{X_{\alpha}}\beta$.
This notion does not extend to the setting of arbitrary isotropic subbundles of $E^p$, since in that setting
the Hamiltonian vector field $X_{\alpha}$ is no longer unique and the above expression depends on it.
For instance, take $M=\ensuremath{\mathbb R}^4$ and consider the closed $3$-form $\theta=dx_1\wedge dx_2 \wedge dx_3$. By Prop. \ref{closedform}, $L=\{X-\iota_X \theta: X\in TM\}$
is an isotropic, involutive subbundle of $E^2$. Both $\pd{x_4}\in \Gamma(L\cap TM)$ and the zero vector field are Hamiltonian vector fields for $\alpha=0$, and
the hemi-bracket of $\alpha$ with $\beta=x_1dx_4+x_4 dx_1$ is not well-defined since
$$\mathcal{L}_{\pd{x_4}}\beta=dx_1\neq 0=\mathcal{L}_{0}\beta.$$ \end{remark}
Rogers \cite[Thm. 5.2]{RogersL} shows that for every $p$-plectic manifold there is an associated $L_{\infty}$-algebra of observables. The statement and the proof generalize in a straightforward way to arbitrary isotropic, involutive subbundles of $E^p=TM\oplus \wedge^p T^*M$.
\begin{thm}\label{Liep}
Let $p\ge 1$ and $L$ be an isotropic, involutive subbundle of $E^p=TM\oplus \wedge^p T^*M$. Then the complex concentrated in degrees $-p+1,\dots,0$
$$
C^{\infty}(M) \overset{d}{\rightarrow}\dots \overset{d}{\rightarrow}\Omega^{p-2}(M)\overset{d}{\rightarrow}\Omega^{p-1}_{ham}(M,L)$$ has a Lie $p$-algebra structure.
The only non-vanishing multibrackets are given by the de Rham differential on $\Omega^{\le p-2}(M)$ and, for $k=2,\dots,p+1$, by
$$
l_k(\alpha_1,\dots,\alpha_k)=
\;\;\;\epsilon(k)\iota_{X_{\alpha_k}}\dots \iota_{X_{\alpha_3}}\{\alpha_1,\alpha_2\}
$$
where $\alpha_1,\dots,\alpha_k \in \Omega^{p-1}_{ham}(M,L)$ and $\epsilon(k)=(-1)^{\frac{k}{2}+1}$ if $k$ is even,
$\epsilon(k)=(-1)^{\frac{k-1}{2}}$ if $k$ is odd.
\end{thm}
\begin{proof}
The expressions for the multibrackets are totally skew-symmetric, as a consequence of the fact that $\{\cdot,\cdot\}$ is skew-symmetric. This and the fact that $\{\cdot,\cdot\}$ is independent of the choice of Hamiltonian vector fields imply that the multibrackets are well-defined.
Clearly $l_k$ has degree $2-k$.
Now we check the $L_{\infty}$ relations \eqref{lijac}. For $n=1$ the relation holds due to $d^2=0$. Now consider the relation \eqref{lijac} for a fixed $n\ge 2$, and let $\alpha_1,\dots,\alpha_n$ be homogeneous elements of the above complex. We will use repeatedly the fact that, for $k\ge2$, the $k$-multibracket vanishes when one of its entries is of negative degree.
For $j\in \{2,\dots,n-2\}$ (so $i\ge 3$), we have
$$l_j(l_i(\alpha_{1}, \dots,\alpha_{i}),\alpha_{i+1}, \dots, \alpha_{n} )=0,$$ as a consequence of the fact that $k$-multibrackets for $k\ge3$ take values in negative
degrees.
For $j=n$ we have
$$l_n(l_1(\alpha_{1}), \alpha_{2}, \dots, \alpha_{n} )=0:$$
if $|\alpha_{1}|=0$ then $l_1(\alpha_{1})$ vanishes, otherwise
$l_1(\alpha_{1})=d\alpha_{1}$ and its Hamiltonian vector field vanishes.
We are left with the summands of \eqref{lijac} with $j=1$ and $j=n-1$.
When $n=2$ we have just one summand $l_1(l_2( \alpha_{\sigma(1)},\alpha_{\sigma(2)}))$
which vanishes by degree reasons.
For $n\ge 3$ it is enough to assume that all the $\alpha_i$'s have degree zero. We have
\begin{align*}
d(l_n(\alpha_{ 1}, \dots,\alpha_{ n}))+
\sum_{\sigma
\in Sh(2,n-2)} \chi(\sigma) l_{n-1}(\{\alpha_{\sigma(1)},\alpha_{\sigma(2)}\},\alpha_{\sigma(3)} \dots, \alpha_{\sigma(n)} ).
\end{align*}
Writing out explicitly the unshuffles in $Sh(2,n-2)$ and the multibrackets we obtain
\begin{align*}
\epsilon(n)&d(\iota_{X_{\alpha_{n}}}\dots\iota_{X_{\alpha_{3}}}\{\alpha_{1},\alpha_{2}\})\\
+ \epsilon(n-1)&\Big[
\sum_{2\le i<j\le n }(-1)^{i+j-1}\iota_{X_{\alpha_{n}}}\dots \widehat{\iota_{X_{\alpha_{j}}}}\dots
\widehat{\iota_{X_{\alpha_{i}}}}
\dots
\iota_{X_{\alpha_{2}}}\{\{\alpha_{i},\alpha_{j}\},\alpha_{1}\} \\
&+ \sum_{3\le j\le n }(-1)^{j}\iota_{X_{\alpha_{n}}}\dots \widehat{\iota_{X_{\alpha_{j}}}}
\dots
\iota_{X_{\alpha_{3}}}\{\{\alpha_{1},\alpha_{j}\},\alpha_{2}\} \\
&+ \iota_{X_{\alpha_n}}\dots
\iota_{X_{\alpha_{4}}}\{\{\alpha_{1},\alpha_{2}\},\alpha_{3}\}
\Big].
\end{align*}
By Lemma \ref{pain}
we conclude that the above expression vanishes.
\end{proof}
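Spelling out Thm. \ref{Liep} for $p=2$, the complex is $C^{\infty}(M) \overset{d}{\rightarrow}\Omega^{1}_{ham}(M,L)$, and since $\epsilon(2)=1$ and $\epsilon(3)=-1$ the non-vanishing multibrackets on Hamiltonian $1$-forms are
$$l_2(\alpha_1,\alpha_2)=\{\alpha_1,\alpha_2\}=\iota_{X_{\alpha_1}}d\alpha_2, \;\;\;\;\;
l_3(\alpha_1,\alpha_2,\alpha_3)=-\iota_{X_{\alpha_3}}\{\alpha_1,\alpha_2\},$$
so we obtain a Lie $2$-algebra $(V,d,[\cdot,\cdot],J)$ as in \S\ref{Linfty}.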
The following Lemma, needed in the proof of Thm. \ref{Liep},
extends \cite[Lemma 3.7]{RogersL}.
\begin{lemma}\label{pain}
Let $p\ge 1$ and $L$ be an isotropic, involutive subbundle of $E^p=TM\oplus \wedge^p T^*M$. Then for any $n\ge 3$, and for all $\alpha_1,\dots,\alpha_n\in \Omega^{p-1}_{ham}(M,L)$ we have
\begin{align*}
d(\iota_{X_{\alpha_n}}\dots\iota_{X_{\alpha_3}}\{\alpha_{1},\alpha_{2}\})=
(-1)^{n+1}&\Big[
\sum_{2\le i<j\le n }(-1)^{i+j-1}\iota_{X_{\alpha_n}}\dots \widehat{\iota_{X_{\alpha_j}}}\dots
\widehat{\iota_{X_{\alpha_i}}}
\dots
\iota_{X_{\alpha_2}}\{\{\alpha_{i},\alpha_{j}\},\alpha_{1}\} \\
&+ \sum_{3\le j\le n }(-1)^{j}\iota_{X_{\alpha_n}}\dots \widehat{\iota_{X_{\alpha_j}}}
\dots
\iota_{X_{\alpha_3}}\{\{\alpha_{1},\alpha_{j}\},\alpha_{2}\} \\
&+ \iota_{X_{\alpha_n}}\dots
\iota_{X_{\alpha_4}}\{\{\alpha_{1},\alpha_{2}\},\alpha_{3}\}
\Big].
\end{align*}
\end{lemma}
\begin{proof}
We proceed by induction on $n$. For $n=3$ the statement holds by Lemma \ref{bracket}. So let $n>3$. To shorten the notation, denote $A:=\iota_{X_{\alpha_{n-1}}}\dots\iota_{X_{\alpha_3}}\{\alpha_{1},\alpha_{2}\}$. Then we have
\begin{align}\label{lhslie}
d(\iota_{X_{\alpha_n}}\dots\iota_{X_{\alpha_3}}\{\alpha_{1},\alpha_{2}\})=d(\iota_{X_{\alpha_n}} A)=\mathcal{L}_{X_{\alpha_n}}A-\iota_{X_{\alpha_n}}dA.
\end{align}
The first term on the r.h.s. of \eqref{lhslie}
becomes
\begin{align*}
&\mathcal{L}_{X_{\alpha_n}}(\iota_{X_{\alpha_{3}}\wedge \dots\wedge {X_{\alpha_{n-1}}}}\{\alpha_{1},\alpha_{2}\})\\
=& \sum_{i=3}^{n-1}(-1)^{i+1}\iota_{X_{\alpha_{n-1}}}\dots \widehat{\iota_{X_{\alpha_i}}}\dots\iota_{X_{\alpha_3}}
\iota_{[X_{\alpha_n},X_{\alpha_i}]
}
\{\alpha_{1},\alpha_{2}\}
+\iota_{X_{\alpha_{n-1}}}\dots \iota_{X_{\alpha_3}}
\mathcal{L}_{X_{\alpha_n}}\{\alpha_{1},\alpha_{2}\}
\\
=&\sum_{i=3}^{n-1}(-1)^{i+1}\iota_{X_{\alpha_{n-1}}}\dots \widehat{\iota_{X_{\alpha_i}}}\dots
\iota_{X_{\alpha_2}} \{\{\alpha_n,\alpha_i\},\alpha_{1}\}
+
\iota_{X_{\alpha_{n-1}}}\dots \iota_{X_{\alpha_3}}(
\{\{\alpha_2,\alpha_n\},\alpha_{1}\}-\{\{\alpha_1,\alpha_n\},\alpha_{2}\}
)\\
=&\sum_{i=2}^{n-1}(-1)^{i}\iota_{X_{\alpha_{n-1}}}\dots \widehat{\iota_{X_{\alpha_i}}}\dots\iota_{X_{\alpha_2}} \{\{\alpha_i,\alpha_n\},\alpha_{1}\}
- \iota_{X_{\alpha_{n-1}}}\dots \iota_{X_{\alpha_3}}\{\{\alpha_1,\alpha_n\},\alpha_{2}\}.\end{align*}
Here in the second equality we used $[X_{\alpha_n},X_{\alpha_i}]=X_{\{\alpha_n,\alpha_i\}}$ (see the proof of Lemma \ref{bracket})
and $$\iota_{X_{\{\alpha_n,\alpha_i\}}}\{\alpha_{1},\alpha_{2}\}=-\iota_{X_{\{\alpha_{n},\alpha_{i}\}}}\iota_{X_{\alpha_2}} d \alpha_1=\iota_{X_{\alpha_2}} \{\{\alpha_n,\alpha_i\},\alpha_{1}\},$$
as well as Cartan's formula for the Lie derivative and Lemma \ref{bracket}.
The second term on the r.h.s. of \eqref{lhslie} can be developed using the induction hypothesis. The resulting expression for the l.h.s. of eq. \eqref{lhslie}
is easily seen to agree with the one in the statement of this lemma.
\end{proof}
\begin{remark} The observables associated by Thm. \ref{Liep} to the zero $(p+1)$-form on $M$ are given by the abelian Lie algebra $\ensuremath{\mathbb R}$ for $p=1$ and by the complex $C^{\infty}(M) \overset{d}{\rightarrow} \Omega^{1}_{closed}(M)$ (with vanishing higher brackets) for $p=2$.
It is a curious coincidence that they agree with the central extensions of observables of $p$-plectic structures given in \cite[Prop. 9.4]{RogersPre} for $p=1$ and $2$ respectively.
\end{remark}
A closed 2-form $B$ on $M$ induces an automorphism of the Courant algebroid $TM\oplus T^*M$ by gauge transformations
(see \S \ref{intro}), and therefore acts on the set of Dirac structures. For instance, the Dirac structure $TM\oplus \{0\}$ is mapped to the graph of $B$.
The Poisson algebras of observables of these two Dirac structures are not isomorphic (unless $B=0$).
Similarly, for $p\ge1$, gauge transformations of $E^p$ by closed $(p+1)$-forms usually do not
induce an isomorphism of the Lie $p$-algebra of observables.
We display a quite trivial operation which, on the other hand, does have this property.
\begin{lemma}\label{lambda} Let $\lambda \in \ensuremath{\mathbb R}-\{0\}$ and consider
\begin{align*}
m_{\lambda} \colon\;\;\;\;\; E^p &\to E^p\\ X+\eta& \mapsto X+\lambda \eta
\end{align*}
Let $L\subset E^p$ be an involutive isotropic subbundle.
Then $m_{\lambda}(L)$ is also an involutive isotropic subbundle, and the Lie $p$-algebras of observables of $L$ and $m_{\lambda}(L)$ are isomorphic.
\end{lemma}
\begin{proof}
$m_{\lambda}$ is an automorphism of the Dorfman bracket $[\![ \cdot,\cdot ]\!]$ and
$\langle m_{\lambda}\cdot,m_{\lambda}\cdot \rangle=\lambda \langle \cdot,\cdot\rangle$. Hence
$m_{\lambda}(L)$ is also involutive and isotropic.
We consider the Lie $p$-algebras of observables associated to $L$ and $m_{\lambda}(L)$ respectively, as in Thm. \ref{Liep}. We denote them by $\mathcal{O}^L$ and $\mathcal{O}^{m_{\lambda}(L)}$ respectively. The underlying complexes coincide, both being
$$ C^{\infty}(M) \overset{d}{\rightarrow}\Omega^1(M)\overset{d}{\rightarrow}\dots\overset{d}{\rightarrow}\Omega^{p-1}_{ham}(M,L).$$
Notice that if $\alpha\in \Omega^{p-1}_{ham}(M,L)$ has Hamiltonian vector field
$X_{\alpha}^L$, then $\lambda \alpha$ is a Hamiltonian $(p-1)$-form for $m_{\lambda}(L)$, and $X_{\alpha}^L$ itself is a Hamiltonian vector field for it. Hence
from Thm. \ref{Liep} it is clear that the unary map
given by multiplication by $\lambda$
$$\phi \colon (\beta_0,\dots,\beta_{p-1}) \mapsto (\lambda \beta_0,\dots, \lambda \beta_{p-1})$$ intertwines the multibrackets of $\mathcal{O}^L$ and $\mathcal{O}^{m_{\lambda}(L)}$, where $\beta_i\in \Omega^i(M)$ for $i< p-1$ and $\beta_{p-1}\in \Omega^{p-1}_{ham}(M,L)$.
Therefore,
setting the higher maps to zero,
we obtain a strict morphism \cite[\S 7]{MarlkDoubek}
of Lie $p$-algebras, which clearly is an isomorphism.
\end{proof}
As an application of Lemma \ref{lambda} we show that
to any compact, connected, orientable $(p+1)$-dimensional manifold ($p\ge 1$)
there is an associated Lie $p$-algebra.
A dual version of this Lie $p$-algebra appeared in \cite[Thm. 6.1]{RogerVol}.
\begin{cor}\label{M3}
Let $M$ be a compact, connected, orientable $(p+1)$-dimensional manifold. For any volume form $\omega$ consider the Lie $p$-algebra associated
to $graph(\omega)$ by Thm. \ref{Liep}, whose underlying complex is
$$ C^{\infty}(M) \overset{d}{\rightarrow}\Omega^1(M)\overset{d}{\rightarrow}\dots\overset{d}{\rightarrow}\Omega^{p-1}(M).$$
(Notice that all $(p-1)$-forms are Hamiltonian). Its isomorphism class is independent of the choice of $\omega$, and therefore depends only on the manifold $M$.
\end{cor}
\begin{proof}
Let $\omega_0$ and $\omega_1$ be two volume forms on $M$. They define non-zero cohomology classes in
$H^{p+1}(M,\ensuremath{\mathbb R})=\ensuremath{\mathbb R}$, so there is a (unique) $\lambda \in \ensuremath{\mathbb R}-\{0\}$ such that $[\omega_1]=
\lambda [\omega_0]$. By Moser's theorem \cite{Moser}
there is a diffeomorphism $\psi$ of $M$ such that $\psi^*(\omega_1)=\lambda \omega_0$.
This explains the first isomorphism in
$$\text{Lie $p$-algebra of }\omega_1 \;\cong\; \text{Lie $p$-algebra of }\lambda \omega_0\;\cong \;\text{Lie $p$-algebra of }\omega_0,$$
whereas the second one holds by Lemma \ref{lambda}.
\end{proof}
\section{Relations to $L_{\infty}$-algebras arising from split Courant algebroids}\label{liM}
In this section we construct an $L_{\infty}$-morphism from a Lie algebra associated to $E^0$ with the $\sigma$-twisted bracket, where $\sigma$ is a closed 2-form, to a Lie 2-algebra associated to $E^1$ with the untwisted Courant bracket (in other words, the Courant bracket twisted by $d\sigma =0$).
We consider again $E^p:=TM\oplus \wedge^pT^*M$.
For $p=0$ we have $E^0=TM \oplus \ensuremath{\mathbb R}$. Fix a closed 2-form $\sigma \in \Omega^2_{closed}(M)$.
Then $\Gamma(E^0)$ with the
$\sigma$-twisted Dorfman bracket
\begin{equation*}
[X+f,Y+g]_{\sigma}=[X,Y]+(X(g)-Y(f))+\sigma(X,Y)
\end{equation*}
is an honest Lie algebra. (See
\cite[\S 3.8]{Gu}, where a geometric interpretation in terms of circle bundles is given too.)
For $p=1$ we have the (untwisted) Courant algebroid $E^1=TM\oplus T^*M$. Roytenberg and Weinstein \cite{rw} associated to it an $L_{\infty}$-algebra. In the version given in \cite[Thm. 4.4]{RogersCou} the underlying complex is
\begin{equation}\label{fctE}
C^{\infty}(M) \overset{d}{\rightarrow}\Gamma(E^1)
\end{equation}
where $d$ is the de Rham differential. The binary bracket $[\cdot,\cdot]'$ is given by
the \emph{Courant} bracket $[\![ \cdot,\cdot ]\!]_{Cou}$ on $\Gamma(E^1)$ and by
$$[e,f]'=-[f,e]':=\frac{1}{2}\langle e, df \rangle$$ for $e\in \Gamma(E^1)$ and $f\in C^{\infty}(M)$.
The trinary bracket $J'$ is given by
$$J'(e_1,e_2,e_3)=-\frac{1}{6}\left( \langle [\![ e_1,e_2 ]\!]_{Cou},e_3\rangle + c.p.\right)$$ for elements of $\Gamma(E^1)$, where ``c.p.'' denotes cyclic permutation.
All other brackets vanish.
We show that there is a canonical morphism between these two Lie 2-algebras:
\begin{thm}\label{mor01} Let $M$ be a manifold and $\sigma \in \Omega^2_{closed}(M)$.
There is a canonical morphism of Lie 2-algebras
\begin{equation}
\phi \colon \left(\Gamma(E^0), [\cdot,\cdot]_{\sigma}\right) \;\;\rightsquigarrow \;\; \left(C^{\infty}(M) \overset{d}{\rightarrow}\Gamma(E^1),[\cdot,\cdot]',J'\right)
\end{equation}
given by
\begin{align*}
&\phi_0 \colon \Gamma(E^0) \to \Gamma(E^1), \;\;\;\;\;\;\;\;\;\;\;\; (X,f)\mapsto (X,df)\\
&\phi_2 \colon \wedge^2\Gamma(E^0) \to C^{\infty}(M) , \;\;\; (X,f),(Y,g)\mapsto \frac{1}{2}\big(X(g)-Y(f)\big)+\sigma(X,Y).
\end{align*}
\end{thm}
\begin{proof}
We check that the conditions of Def. \ref{defmor} are satisfied.
Eq. \eqref{chainmap} is satisfied because $ \Gamma(E^0)$ is concentrated in degree zero.
Eq. \eqref{failjac} is satisfied because for any $X+f,Y+g\in \Gamma(E^0)$ we have
\begin{align*}
& \phi_0\Big[X+f,Y+g\Big]_{\sigma}-\Big[\!\Big[ \phi_0 (X+f),\phi_0 (Y+g)\Big]\!\Big]_{Cou}\\
=&\Big([X,Y]+d\big(X(g)-Y(f)+\sigma(X,Y)\big)\Big)-
\Big ([X,Y]+\frac{1}{2}d(X(g)-Y(f))\Big)
\\
=&d\Big(\phi_2(X+f,Y+g)\Big).
\end{align*}
Eq. \eqref{failjacnew} is satisfied because $\Gamma(E^{0})$ is concentrated in degree zero.
We are left with checking eq. \eqref{eight}.
Let $X+f,Y+g,Z+h\in \Gamma(E^0)$. We want to show that
\begin{align}\label{consist}
-J'(X+df,Y+dg,Z+dh)\;\overset{!}{=}\;&\phi_2\Big(X+f,[Y,Z]
+Y(h)-Z(g)+\sigma(Y,Z)\Big)+c.p.\\+& [ X+df,\phi_2(Y+g,Z+h)]'+c.p. \nonumber
\end{align}
where as usual ``$c.p.$'' denotes cyclic permutation.
The l.h.s. of eq. \eqref{consist} is equal to
\begin{align*}
&\frac{1}{6}\Big( \langle[\![ X+df,Y+dg]\!]_{Cou},Z+dh\rangle \Big)+ c.p. \\
= & \frac{1}{6}\Big( [X,Y](h)+\frac{1}{2}Z(X(g))- \frac{1}{2}Z(Y(f)) \Big)+c.p. \\
=& \frac{1}{4}[X,Y](h) + c.p.
\end{align*}
The r.h.s. is equal to
\begin{align*}
&\frac{1}{2}\Big(X\big(Y(h)-Z(g)+\sigma(Y,Z)\big)-[Y,Z](f)\Big)+\sigma(X,[Y,Z]) + c.p. \\
&+\frac{1}{2}X\Big(\frac{1}{2}(Y(h)-Z(g))+\sigma(Y,Z)\Big)+c.p. \\
=&\frac{3}{4}X\Big(Y(h)-Z(g)\Big)-\frac{1}{2}[Y,Z](f)+c.p.\\
&+\sigma(X,[Y,Z])+X(\sigma(Y,Z))+ c.p. \\
=& \frac{1}{4}[X,Y](h) + c.p.\\
&+d\sigma(X,Y,Z).
\end{align*}
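(Here the last equality uses the standard coordinate-free formula for the exterior derivative of a 2-form, which we recall for convenience:
$$d\sigma(X,Y,Z)=\Big(X\big(\sigma(Y,Z)\big)+\sigma(X,[Y,Z])\Big)+c.p.\,,$$
matching the two cyclic sums in the preceding computation.)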
Since $\sigma$ is a closed form, we conclude that eq. \eqref{consist} is satisfied.
\end{proof}
\section{$L_{\infty}$-algebras from higher analogues of split Courant algebroids}\label{ez}
In this section we apply Getzler's recent construction \cite{GetzlerHigherDer} to obtain an $L_{\infty}$ structure on the complex concentrated in degrees $-r+1,\cdots,0$
\begin{equation}\label{complex}
C^{\infty}(M) \overset{d}{\rightarrow}\cdots
\overset{d}{\rightarrow} \Omega^{r-2}(M)\overset{d}{\rightarrow} \Gamma(E^{r-1})=\chi(M)\oplus \Omega^{r-1}(M),
\end{equation}
for any manifold $M$ and integer $r\ge 2$. When $r=2$ we obtain exactly the Lie 2-algebra given just before Thm. \ref{mor01}.
Let us first recall Getzler's recent theorem \cite[Thm. 3]{GetzlerHigherDer}.
Let $(V,\delta,\textbf{\{}\;,\;\}\!\!\})$ be a differential graded Lie algebra (DGLA).
Getzler endows the graded\footnote{We take the opposite grading as in \cite{GetzlerHigherDer}
so that our differential $\delta$ has degree 1.} vector space
$V^-:=\oplus_{i<0} V_i$
with multibrackets satisfying the relations \cite[Def. 1]{GetzlerHigherDer}, which after a degree shift provide
$V^-[-1]$ with an $L_{\infty}$-algebra structure in the sense of our Def. \ref{lidef}. Notice that $V^-[-1]$ is concentrated in non-positive degrees: its degree $0$ component is $V_{-1}$, its degree $-1$ component is $V_{-2}$, and so on.
The multibrackets are built out of a derived bracket construction using the restriction of the operator $\delta$ to $V_0$, and
the Bernoulli numbers appear as coefficients.
Now let $M$ be a manifold, fix an integer $r\ge 2$, and consider the graded manifold $$T^*[r]T[1]M$$ (see \cite{Dima}\cite[\S 2]{AlbICM}\cite{ALbFlRio} for background material on graded manifolds). $T^*[r]T[1]M$
is endowed with a canonical Poisson structure of degree $-r$:
there is a bracket
$\textbf{\{}\;,\;\}\!\!\}$ of degree $-r$ on the graded commutative algebra of functions $\mathcal{C}:=C(T^*[r]T[1]M)$ such that
$$\big(\mathcal{C} \;,\; \cdot \;,\; \textbf{\{}\;,\;\}\!\!\}\big)$$ is a \emph{Poisson algebra of degree $r$}
\cite[Def. 1.1]{GPA}. This means that $\textbf{\{}\;,\;\}\!\!\}$
defines a (degree zero) graded Lie algebra structure on $\mathcal{C}[r]$, the graded vector space defined by the degree shift $(\mathcal{C}[r])_i:=\mathcal{C}_{r+i}$, and that
$\textbf{\{} a,\cdot \}\!\!\}$ is a degree $|a|-r$ derivation of the product for any homogeneous element $a\in \mathcal{C}$.
More concretely, choose coordinates $x_i$ on $M$, inducing fiber coordinates $v_i$ on $T[1]M$, and conjugate coordinates $P_i$ and $p_i$ on the fibers of $T^*[r]T[1]M\to T[1]M$. The degrees of these generators of $\mathcal{C}$ are $$|x_i|=0,\;\; |v_i|=1,\;\; |P_i|=r,\;\;|p_i|=r-1.$$ Then
\begin{align*}
\textbf{\{} P_i,x_i \}\!\!\}=&1=- \textbf{\{} x_i,P_i \}\!\!\}\\
\textbf{\{} p_i,v_i \}\!\!\}=&1=-(-1)^{r-1}\textbf{\{} v_i,p_i \}\!\!\}
\end{align*}
for all $i$, and all the other brackets between generators vanish.
Notice that the coordinate $v_i$ corresponds canonically to $dx_i\in \Omega^1(M)$ and that $p_i$ corresponds canonically to $\pd{x_i}\in \chi(M)$.
Also notice that $\mathcal{C}$ is concentrated in non-negative degrees, and that there are canonical identifications
\begin{equation}\label{ccforms}
\mathcal{C}_i=\Omega^i(M) \text{ for }0\le i< r-1, \;\;\;\;\;\;\;\;\; \mathcal{C}_{r-1}=\Omega^{r-1}(M)\oplus \chi(M).
\end{equation}
Indeed
for $i<r-1$
the elements of degree $i$ are sums of expressions of the form $f(x)v_{j_1}\dots v_{j_i}$, while for $i=r-1$ they are sums of expressions $f(x)v_{j_1}\dots v_{j_{r-1}}+g(x)p_j$.
The degree $r+1$ function $\mathcal{S}:=\sum v_iP_i$, corresponding to the de Rham differential on $M$, satisfies $\textbf{\{} \mathcal{S},\mathcal{S} \}\!\!\}=0$, hence $\textbf{\{} \mathcal{S},\;\}\!\!\}$ squares to zero. This and the fact that $(\mathcal{C}[r],\textbf{\{}\;,\;\}\!\!\})$ is a graded Lie algebra imply that
\begin{equation}\label{DGLAme}
\big(\mathcal{C}[r], \delta:=\textbf{\{} \mathcal{S},\;\}\!\!\}, \textbf{\{}\;,\;\}\!\!\}\big)
\end{equation}
is a DGLA.
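For concreteness, the identity $\textbf{\{} \mathcal{S},\mathcal{S} \}\!\!\}=0$ can be verified directly in the coordinates introduced above: by the derivation property of the bracket,
$$\textbf{\{} \mathcal{S},\mathcal{S} \}\!\!\}=\sum_{i,j}\textbf{\{} v_iP_i, v_jP_j \}\!\!\}\,,$$
and every summand vanishes because the brackets $\textbf{\{} v_i,v_j \}\!\!\}$, $\textbf{\{} v_i,P_j \}\!\!\}$ and $\textbf{\{} P_i,P_j \}\!\!\}$ between these generators are all zero.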
Hence Getzler's construction can be applied to \eqref{DGLAme}, endowing $(\mathcal{C}[r])^-[-1]=(\oplus_{0\le i \le r-1}\mathcal{C}_i)[r-1]$ (the complex displayed in \eqref{complex}) with an
$L_{\infty}$-algebra structure.
We write out explicitly the multibrackets. The twisted case will be considered in Prop. \ref{ordH} below.
\begin{prop}\label{ord}
Let $M$ be a manifold, $r\ge 2$ an integer. There exists a Lie $r$-algebra structure on the complex \eqref{complex} concentrated in degrees $-r+1,\cdots,0$, that is
\begin{equation*}
C^{\infty}(M) \overset{d}{\rightarrow}\cdots
\overset{d}{\rightarrow} \Omega^{r-2}(M)\overset{d}{\rightarrow} \Gamma(E^{r-1})=\chi(M)\oplus \Omega^{r-1}(M),
\end{equation*}
whose only non-vanishing brackets (up to permutations of the entries) are \begin{itemize}
\item unary bracket: the de Rham differential in negative degrees.
\item binary bracket:
\begin{itemize}
\item[ ] for $e_i\in\Gamma(E^{r-1})$ the Courant bracket as in eq. \eqref{dorcou},
$$[e_1,e_2]=[\![ e_1,e_2 ]\!]_{Cou};$$
\item[ ] for $e=(X,\alpha) \in \Gamma(E^{r-1})$ and $\xi \in \Omega^{\bullet<{r-1}}(M)$,
$$[e,\xi]=\frac{1}{2} \mathcal{L}_{X}\xi.$$ \end{itemize}
\item trinary bracket:
\begin{itemize}
\item[ ] for $e_i\in \Gamma(E^{r-1})$,
$$[e_0,e_1,e_2]=-\frac{1}{6}\left( \langle [\![ e_0,e_1 ]\!]_{Cou},e_2\rangle + c.p.\right);$$
\item[ ] for $\xi \in\Omega^{\bullet<{r-1}}(M)$ and $e_i=(X_i,\alpha_i) \in \Gamma(E^{r-1})$,
\begin{align*}
[\xi,e_1,e_2]
=& -\frac{1}{6}\left(
\frac{1}{2}(\iota_{X_1}\mathcal{L}_{X_2} - \iota_{X_2}\mathcal{L}_{X_1}) +\iota_{[X_1,X_2]} \right) \xi.
\end{align*}
\end{itemize}
\item $n$-ary bracket for $n\ge 3$ with $n$ an \emph{odd} integer:
\begin{itemize}
\item[ ] for $e_i=(X_i,\alpha_i) \in \Gamma(E^{r-1})$, $[e_0,\cdots,e_{n-1}]=\sum_i [X_0,\dots,\alpha_i,\dots,X_{n-1}]$, with
\begin{align*}
[\alpha,X_1,\dots,X_{n-1}]=
\frac{(-1)^{\frac{n+1}{2}} 12 B_{n-1} }{(n-1)(n-2)}
\sum_{1\le i<j\le n-1}(-1)^{i+j+1}\iota_{X_{n-1}}\dots
\widehat{\iota_{X_{j}}}\dots \widehat{\iota_{X_{i}}}\dots
\iota_{X_{1}} [\alpha,X_i,X_j];
\end{align*}
\item[ ]for $\xi \in \Omega^{\bullet<{r-1}}(M)$ and $e_i=(X_i,\alpha_i) \in \Gamma(E^{r-1})$,
\begin{align*}
[\xi,e_1,\cdots,e_{n-1}]=
\frac{(-1)^{\frac{n+1}{2}} 12 B_{n-1} }{(n-1)(n-2)}
\sum_{1\le i<j\le n-1}(-1)^{i+j+1}\iota_{X_{n-1}}
\dots \widehat{\iota_{X_{j}}}\dots \widehat{\iota_{X_{i}}}\dots
\iota_{X_{1}} [\xi,X_i,X_j].
\end{align*}
\end{itemize}
\end{itemize}
Here the $B$'s denote the Bernoulli numbers.
\end{prop}
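Since the Bernoulli numbers govern the coefficients of the higher brackets, it may be useful to see their first values. The following sketch (an illustration added here, not part of the construction) evaluates the coefficient $(-1)^{\frac{n+1}{2}} 12 B_{n-1}/\big((n-1)(n-2)\big)$ of the $n$-ary bracket with exact rational arithmetic:

```python
# Illustrative computation of the numerical coefficients
# (-1)^((n+1)/2) * 12 * B_{n-1} / ((n-1)(n-2)) of the n-ary brackets.
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli numbers (first convention, B_1 = -1/2) via the
    standard recurrence sum_{k<=m} C(m+1,k) B_k = [m == 0]."""
    B = [Fraction(1)]
    for j in range(1, m + 1):
        B.append(-Fraction(sum(comb(j + 1, k) * B[k] for k in range(j)), j + 1))
    return B[m]

def coeff(n):
    """Coefficient of the n-ary bracket, n >= 3 odd."""
    assert n >= 3 and n % 2 == 1
    return Fraction((-1) ** ((n + 1) // 2) * 12, (n - 1) * (n - 2)) * bernoulli(n - 1)

# coeff(3) = 1, coeff(5) = 1/30, coeff(7) = 1/105, ...
```

The choice of convention for $B_1$ is irrelevant here, since $n-1$ is even for odd $n$; for $n=3$ the overall coefficient is $1$, consistent with the trinary bracket as written (the factor $-\frac{1}{6}$ is contained in $[\alpha,X_i,X_j]$ itself).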
\begin{remark}
Bering \cite[\S 5.6]{Bering} shows that the vector fields and differential forms on a manifold $M$ are naturally endowed with multibrackets forming an algebraic structure which generalizes $L_{\infty}$-algebras: the quadratic relations satisfied by Bering's multibrackets have Bernoulli numbers as coefficients.
The multibrackets appearing in Prop. \ref{ord} are similar to Bering's, but differ not only in the coefficients: the expression for $[\xi,e_1,\cdots,e_{n-1}]$ (for $n\ge 3$) also does not appear among Bering's brackets. This is
a consequence of the fact that Getzler's multibrackets are constructed not out of $\delta$, but out of its restriction to $V_0$.\end{remark}
\begin{remark}
We write out the trinary bracket of elements $e_i=(X_i,\alpha_i)\in \Gamma(E^{r-1})$ more explicitly: we have
$[e_0,e_1,e_2]=[\alpha_0,X_1,X_2]-[\alpha_1,X_0,X_2]+[\alpha_2,X_0,X_1]$ with
\begin{align*}
[\alpha_0,X_1,X_2]=
-\frac{1}{6}\left(
\frac{1}{2}(\iota_{X_1}\mathcal{L}_{X_2} - \iota_{X_2}\mathcal{L}_{X_1})+\iota_{[X_1,X_2]} +\iota_{X_1}\iota_{X_2}d\right)\alpha_0.
\end{align*}
\end{remark}
\begin{proof}
Let $X_1,X_2,\dots \in \chi(M)$ and $\xi_1,\xi_2,\dots$ be differential forms on $M$. In the following we identify them with elements of $\mathcal{C}$
as indicated in eq. \eqref{ccforms}, and we adopt the notation introduced in
the text before Prop. \ref{ord}.
The following holds:
\begin{itemize}
\item[a)] If $\xi_i\in \Omega^{k_i}(M)$ for $k_1,k_2$ arbitrary, we have $$\textbf{\{} X_1+\xi_1 , X_2+\xi_2 \}\!\!\} = \iota_{X_1}\xi_2+(-1)^{r-1-k_1} \iota_{X_2}\xi_1.$$
In particular, when $\xi_1,\xi_2 \in \Omega^{r-1}(M)$, we obtain
the pairing $\langle \cdot , \cdot \rangle$ as in eq. \eqref{pairing}.
\item[b)] For any differential form $\xi_1$,
the identity $$\textbf{\{}\mathcal{S}, \xi_1 \}\!\!\} =d \xi_1$$ is immediate in coordinates.
\item[c)] If $\xi_1,\xi_2 \in \Omega^{r-1}(M)$ we have
$$\textbf{\{} \textbf{\{}\mathcal{S}, X_1+\xi_1 \}\!\!\}, X_2+\xi_2 \}\!\!\} = [\![ X_1+\xi_1, X_2+\xi_2 ]\!],$$
the Dorfman bracket as in eq. \eqref{dorf}.
This holds by the
following identities, which we write for $\xi_i \in \Omega^{k_i}(M)$ for arbitrary $k_1,k_2$:
$$\textbf{\{} \textbf{\{}\mathcal{S}, X_1 \}\!\!\}, X_2 \}\!\!\}=[X_1,X_2] \text{ and }\textbf{\{} \textbf{\{}\mathcal{S}, \xi_1 \}\!\!\}, \xi_2 \}\!\!\}=0$$ are checked in coordinates, and
\begin{align*}
\textbf{\{} \textbf{\{}\mathcal{S}, X_1 \}\!\!\}, \xi_2 \}\!\!\}&=\textbf{\{} \mathcal{S}, \textbf{\{} X_1 , \xi_2 \}\!\!\} \}\!\!\}
+ \textbf{\{} X_1 , \textbf{\{} \mathcal{S}, \xi_2 \}\!\!\} \}\!\!\}=d(\iota_{X_1} \xi_2)+\iota_{X_1} d\xi_2=\mathcal{L}_{X_1} \xi_2,\\
\textbf{\{} \textbf{\{}\mathcal{S}, \xi_1 \}\!\!\}, X_2 \}\!\!\}&=-(-1)^{r-1-k_1}\textbf{\{} X_2 , \textbf{\{}\mathcal{S}, \xi_1 \}\!\!\} \}\!\!\}=-(-1)^{r-1-k_1}\iota_{X_2} d\xi_1.
\end{align*}
\item[d)] For $n\ge 3$, and letting $a_i$ be either a vector field $X_i$ or a differential form $\xi_i$ of arbitrary degree (not a sum of both),
$$\textbf{\{} \textbf{\{} \dots \textbf{\{} \mathcal{S}, a_1\}\!\!\}, \dots \}\!\!\}, a_n \}\!\!\}=0$$ except when
exactly one of $a_1,a_2,a_3$ is a differential form and all the remaining $a_i$'s are vector fields.
\end{itemize}
Using this it is straightforward to write out the (graded symmetric) multibrackets of \cite[Thm. 3]{GetzlerHigherDer}, which we denote by $(\cdot,\dots,\cdot)$. More precisely, b) gives the unary bracket, c) gives the binary bracket, c) and d) give the trinary bracket.
For the higher brackets ($n\ge 3$ odd) one uses d) and then a) to compute
\begin{align*}
(\alpha,X_1,\dots,X_{n-1})&=
\frac{c_{n-1}}{c_2}\sum_{\sigma \in \Sigma_{n-1}\;,\; \sigma_1 <\sigma_2}
(-1)^{\sigma}\textbf{\{} \textbf{\{} \dots \textbf{\{} (\alpha,X_{\sigma_1},X_{\sigma_2}),
X_{\sigma_3}
\}\!\!\}, \dots \}\!\!\}, X_{\sigma_{n-1}} \}\!\!\}\\
&=(-1)^{n-2 \choose 2}(n-3)! \frac{c_{n-1}}{c_2}
\sum_{1\le i<j\le n-1}(-1)^{i+j+1}\iota_{X_{n-1}}
\dots \widehat{\iota_{X_{j}}}\dots \widehat{\iota_{X_{i}}}\dots
\iota_{X_{1}} (\xi,X_i,X_j),
\end{align*}
where we abbreviate $c_{n-1}:=\frac{(-1)^{n+1\choose 2}}{(n-1)!}B_{n-1}$. The computation for $[\xi,e_1,\cdots,e_{n-1}]$ with $\xi \in \Omega^{\bullet<{r-1}}(M)$ delivers the same expression and uses the fact that $n$ is odd. The coefficient can be simplified:
$$(-1)^{n-2 \choose 2}(n-3)! \frac{c_{n-1}}{c_2}=\frac{12}{(n-1)(n-2)}B_{n-1}$$
since $n$ is odd and $c_2=\frac{1}{12}$.
This gives us the (graded symmetric) multibrackets $(\cdot,\dots,\cdot)$ of
\cite{GetzlerHigherDer}. As pointed out in \cite{GetzlerHigherDer}, multiplying the $n$-ary bracket by
$(-1)^{n-1 \choose 2}$ delivers (graded symmetric) multibrackets
that satisfy the Jacobi rules given just before \cite[Def. 4.2]{GetzlerAnnals}.
These Jacobi rules coincide with Voronov's \cite[Def. 1]{vor}, and according to \cite[Rem. 2.1]{vor}, the passage from these (graded symmetric) multibrackets to the (graded skew-symmetric) multibrackets satisfying our Def. \ref{lidef} is given as follows: multiply
the multibracket of elements $x_1,\dots,x_n$ by
\begin{equation}\label{vordec}
(-1)^{\tilde{x}_1(n-1)+\tilde{x}_2(n-2)+\dots+\tilde{x}_{n-1}}
\end{equation}
where $\tilde{x}_i$ denotes the degree of $x_i$ as an element
of \eqref{complex}, a complex concentrated in degrees $-r+1,\dots,0$. One easily checks that in all the cases relevant to us \eqref{vordec} does not introduce any sign.
In conclusion, to pass from the conventions of
\cite{GetzlerHigherDer} to the conventions of
our Def. \ref{lidef} we just have to multiply the $n$-ary bracket
$(\cdot,\dots,\cdot)$
by $(-1)^{n-1 \choose 2}$,
which for $n=1,2$ equals $1$ and for $n$ odd equals $(-1)^{\frac{n-1}{2}}$.
\end{proof}
Now let $H\in\Omega^{r+1}_{closed}(M)$ be a closed $(r+1)$-form. $H$ can be viewed as an element $\mathcal{H}$ of $\mathcal{C}_{r+1}$, and
$\textbf{\{} \mathcal{S}-\mathcal{H},\mathcal{S} -\mathcal{H}\}\!\!\}=-2\textbf{\{} \mathcal{S}, \mathcal{H}\}\!\!\}=-2dH=0$. Hence
\begin{equation}\label{dglaH}
\big(\mathcal{C}[r], \delta:=\textbf{\{} \mathcal{S} -\mathcal{H},\;\}\!\!\}, \textbf{\{}\;,\;\}\!\!\}\big)
\end{equation}
is a DGLA, and again we can apply Getzler's construction. We obtain an $L_{\infty}$-algebra structure that extends the $H$-twisted Courant bracket:
\begin{prop}\label{ordH}
Let $M$ be a manifold, $r\ge 2$ an integer, and $H\in\Omega^{r+1}_{closed}(M)$. There exists a Lie $r$-algebra structure on the complex \eqref{complex} concentrated in degrees $-r+1,\cdots,0$, whose only non-vanishing brackets (up to permutations of the entries) are those given in Prop. \ref{ord} and additionally
for $e_i=(X_i,\alpha_i) \in\Gamma(E^{r-1})$:
\begin{itemize}
\item binary bracket:
$$[e_1,e_2]=\iota_{X_2}\iota_{X_1}H$$
\item $n$-ary bracket for $n\ge 3$ with $n$ an \emph{odd} integer:
$$[e_1,\cdots,e_{n}]=(-1)^{\frac{n-1}{2}}\cdot n\cdot B_{n-1}\cdot \iota_{X_n}\dots \iota_{X_1}H.$$
\end{itemize}
\end{prop}
\begin{proof}
It is easy to see (in coordinates, or using that $T[1]M\subset T^*[r]T[1]M$ is Lagrangian) that for any $n\ge 1$, letting $a_i$ be either a vector field $X_i$ or a differential form $\xi_i$ of arbitrary degree (not a sum of both), one has:
$$\textbf{\{} \textbf{\{} \dots \textbf{\{} \mathcal{H}, a_1\}\!\!\}, \dots \}\!\!\}, a_n \}\!\!\}=0$$ except when
all of the $a_i$'s are vector fields $X_i$'s. In this case one obtains
\begin{equation}\label{Hn}
(-1)^{n \choose 2} \iota_{X_n}\dots \iota_{X_1}H
\end{equation}
using a) in the proof of Prop. \ref{ord}.
Denoting by $(\cdot,\dots,\cdot)$ the (graded symmetric) multibrackets as in \cite{GetzlerHigherDer} from the DGLA \eqref{dglaH}, we see that
$(X_1,\dots,X_n)$ is equal to \eqref{Hn} multiplied by $-n!\cdot c_{n-1}$.
In order to pass from the conventions of \cite{GetzlerHigherDer} to those of our Def. \ref{lidef} we multiply by $(-1)^{n-1 \choose 2}$ and obtain the formulae in the statement. \end{proof}
For any $B\in \Omega^r(M)$, the gauge
transformation of $E^{r-1}$ given
by $e^{-B} \colon X+\alpha \mapsto X+\alpha-\iota_XB$ maps the $H$-twisted Courant bracket to the $(H+dB)$-twisted Courant bracket. Properly defining the notion of higher Courant algebroid -- of which the $E^r$'s should be the main examples -- and
extending Prop. \ref{ord} to this general setting would presumably imply that
the $L_{\infty}$-algebras defined by cohomologous differential forms are isomorphic. We show this directly:
\begin{prop}
Let $M$ be a manifold, $r\ge 2$ an integer, and $H\in\Omega^{r+1}_{closed}(M)$. For any $B\in \Omega^r(M)$, there is a strict isomorphism
\begin{center}
$($the Lie $r$-algebra defined by $H)\to($the Lie $r$-algebra defined by $H+dB)$
\end{center}
between the Lie $r$-algebra structures defined as in
Prop. \ref{ordH}
on the complex \eqref{complex}. Explicitly, the isomorphism is given by $e^{-B}$ on $\Gamma(E^{r-1})$ and is the identity elsewhere.
\end{prop}
\begin{proof}
View $B$ as an element $\mathcal{B}\in\mathcal{C}_r$. As $\textbf{\{}\mathcal{B},\;\}\!\!\}$ is a degree zero derivation of the graded Lie algebra
$(\mathcal{C}[r], \textbf{\{}\;,\;\}\!\!\})$ and is nilpotent, it follows that the exponential $\Phi:=e^{\textbf{\{}\mathcal{B},\;\}\!\!\}}$ is an automorphism. Therefore it is an isomorphism of DGLAs $$\Phi \colon \big(\mathcal{C}[r], \delta:=\textbf{\{} \mathcal{S} -\mathcal{H},\;\}\!\!\}, \textbf{\{}\;,\;\}\!\!\}\big) \to \big(\mathcal{C}[r], \Phi\delta \Phi^{-1}, \textbf{\{}\;,\;\}\!\!\}\big).$$
From the formulas for the multibrackets in Getzler's \cite[Thm. 3]{GetzlerHigherDer} it is then clear that $\Phi|_{(\oplus_{0\le i \le r-1}\mathcal{C}_i)[r-1]}$ is a strict isomorphism between the $L_{\infty}$-algebras induced by these two DGLAs.
The differential $\Phi\delta \Phi^{-1}$ on $\mathcal{C}$ is not equal to $\textbf{\{} \mathcal{S} - (\mathcal{H}+ \textbf{\{} \mathcal{S},\mathcal{B} \}\!\!\}),\;\}\!\!\}$, which is the differential associated to $H+dB\in\Omega^{r+1}_{closed}(M)$ as in \eqref{dglaH}. However
on $\oplus_{0\le i \le r-1}\mathcal{C}_i$ the two differentials do agree.
(This follows from the fact that on $\oplus_{0\le i \le r-1}\mathcal{C}_i$ we have $\Phi(y) =y+\textbf{\{}\mathcal{B},y\}\!\!\}$).
This assures that the $L_{\infty}$-algebras induced by the two differentials agree.
\end{proof}
\section{Open questions: the relation between the $L_{\infty}$-algebras of \S\ref{obs} and \S\ref{liM}-\ref{ez}}\label{per}
In this section we speculate about the relations among the $L_{\infty}$-algebras that appeared in \S \ref{obs}--\S\ref{ez} and their higher analogues, and relate them to prequantization.
Let $M$ be a manifold. Given an integer $n\ge0$ and $H\in \Omega^{n+2}_{closed}(M)$, we use the notation $E^n_H$ to denote the vector bundle
$E^n=TM\oplus \wedge^n T^*M$ with the $H$-twisted Dorfman bracket $[\cdot,\cdot]_{H}$.
In particular, $E^n_0$ denotes $TM\oplus \wedge^n T^*M$ with the untwisted Dorfman bracket \eqref{dorf}.
\subsection{Relations between $L_{\infty}$-algebras}\label{relations}
To any $n\ge 0$ and $H\in \Omega^{n+2}_{closed}(M)$, we associated in Prop. \ref{ordH}
a Lie $(n+1)$-algebra
$\mathcal{S}^{E^n_H}$. We ask:
\begin{itemize}
\item[]
Is there a natural $L_{\infty}$-morphism $D$ from
$ \mathcal{S}^{E^n_H}$ to $\mathcal{S}^{E_0^{n+1}}$?
\end{itemize}
When $n=0$ the answer is affirmative by Thm. \ref{mor01}.
Let $p\ge1$ and
$L\subset E^{p}_0$ an involutive isotropic subbundle. Denote by $\mathcal{O}^{L\subset E^p_0}$ the Lie $p$-algebra associated to it in Thm. \ref{Liep}.
Since $L$ is an involutive subbundle of $E_0^p$ it is natural to ask:
\begin{itemize}
\item[] What is the relation between $\mathcal{O}^{L\subset E^p_0}$
and $\mathcal{S}^{E_0^{p}}$?
\end{itemize}
When $L$ is equal to $graph({H})$ for a $p$-plectic form ${H}$, we expect the relation
to be given by an $L_{\infty}$-morphism
\begin{equation*}
P\colon \mathcal{O}^{graph({H})\subset E^p_0} \rightsquigarrow \mathcal{S}^{E_H^{p-1}}
\end{equation*}
with the property that the unary map of the
$L_{\infty}$-morphism $D \circ P$, restricted to the degree zero component, coincides with
\begin{equation}\label{sortapreqmap}
{\Omega}^{p-1}_{ham}(M, graph({H})) \to \Gamma(E^p_0), \;\;\;\;\;\alpha \mapsto X_{\alpha}-d\alpha.
\end{equation}
We summarize the situation in this diagram:\[\xymatrix{
\mathcal{S}^{E^{p-1}_H}\ar@{~>}[r]^D & \mathcal{S}^{E^{p}_0}\\
\ar@{~>}[u]^P\ar@{~>}[ur]_{D \circ P}\mathcal{O}^{graph({H})\subset E^p_0} &
}
\]
\begin{remark}\label{expre}
In the case $p=1$ (so ${H}$ is a symplectic form) the embedding $P$ exists and is given as follows. We have two honest Lie algebras
$$
\mathcal{O}^{graph({H})\subset E^1_0}=(C^{\infty}(M),\{\cdot,\cdot \}),\;\;\;\;\;\;
\mathcal{S}^{E_H^0}=(\Gamma(TM\oplus \ensuremath{\mathbb R}),[\cdot,\cdot]_{H})
$$
where $\{\cdot,\cdot\}$ is the usual Poisson bracket defined by $H$.
The map
$$
P \colon C^{\infty}(M) \to \Gamma(TM\oplus \ensuremath{\mathbb R}),\;\;\;\;
f \mapsto (X_f,-f)
$$
is a Lie algebra morphism. The Lie $2$-algebra morphism $D$ is given by Thm. \ref{mor01}. One computes that the composition consists only of a unary map, given by the Lie algebra morphism \eqref{sortapreqmap}.
\end{remark}
\begin{remark} We interpret $P$ as a prequantization map. Indeed for $p=1$ and an integral form $H$, the Lie algebra $\mathcal{S}^{E_H^{0}}$ can be identified with the space of $S^1$-invariant vector fields on a circle bundle over $M$ \cite[\S 3.8]{Gu}. The composition of $P$ with the action of vector fields on the $S^1$-equivariant complex-valued functions is then a faithful representation of the Lie algebra $\mathcal{O}^{graph({H})\subset E^1_0}=C^{\infty}(M)$, that is, a prequantization representation. For $p=2$ the morphism $P$ is described by Rogers in \cite[Thm. 5.2]{RogersCou} and \cite[Thm. 7.1]{RogersPre}, to which we refer for the interpretation as a prequantization map.
\end{remark}
\subsection{The twisted case} We pose three questions about higher analogues of twisted Dirac structures.
Let $H$ be a closed $(p+1)$-form for $p\ge 2$. Let $L'\subset E^{p-1}_H$
be an isotropic subbundle, involutive w.r.t. the ${H}$-twisted Dorfman bracket.
\begin{itemize}
\item[]
Can one associate to $L'$ an $L_{\infty}$-algebra of observables
$\mathcal{O}^{L'\subset E^{p-1}_H}$?
\end{itemize}
To the author's knowledge, this is not known even in the simplest case, i.e., when $p=2$ and $L'$ is the graph of an $H$-twisted Poisson structure \cite{SW}. In that case one defines in the usual manner a skew-symmetric bracket $\{\cdot,\cdot\}$ on $C^{\infty}(M)$. It does not satisfy the Jacobi identity but rather \cite[eq. (4)]{SW}
$\{\{f,g\},h\} +c.p.=-{H}(X_f,X_g,X_h)$, hence it is natural to wonder if
one can extend
this bracket to an $L_{\infty}$-structure.
\begin{itemize}
\item []
Is there a natural $L_{\infty}$-morphism $D'$ from $\mathcal{O}^{L'\subset E^{p-1}_H}$ to $\mathcal{O}^{graph(H)\subset E^{p}_0}$?
\end{itemize}
This question is motivated by the fact that $L'$ plays the role of a primitive of $H$. In the simple case that $L'$ is the graph of a symplectic form the answer is affirmative, by
the morphism from $(C^{\infty}(M),\{\cdot,\cdot\})$ to $C^{\infty}(M) \overset{d}{\rightarrow}\Omega^1_{closed}(M)$ (a complex with no higher brackets) with vanishing unary map and binary map $\phi_2(f,g)=\{f,g\}$.
\begin{itemize}
\item []
Is there an $L_{\infty}$-morphism from
$ \mathcal{O}^{L'\subset E^{p-1}_H}$ to $\mathcal{S}^{E^{p-1}_H}$, assuming that $L'$ is the graph of a non-degenerate differential form?
\end{itemize}
Such a morphism would be interesting because it could be interpreted as a
weaker (because not injective) version of a prequantization map for $(M,L')$. \\
We summarize the discussion of this whole section in the following diagram, in which for the sake of concreteness and simplicity we take ${H}\in \Omega^3_{closed}(M)$ to be a 2-plectic form and $L'\subset TM\oplus T^*M$ to be a ${H}$-twisted Dirac structure. The arrows denote $L_{\infty}$-morphisms.
\[\xymatrix{
& \mathcal{S}^{E^{1}_H} \ar@{~>}[r]^D & \mathcal{S}^{E^{2}_0}\\
\mathcal{O}^{L'\subset E^{1}_H} \ar@{~>}[r]^{D'}\ar@{~>}[ur] &\ar@{~>}[u]^P\ar@{~>}[ur]_{D \circ P} \mathcal{O}^{graph({H})\subset E_0^2} &
}
\]
We conclude by presenting an interesting example in which the geometric set-up described above applies.
\begin{ex}
Let $G$ be a Lie group whose Lie algebra $\ensuremath{\mathfrak{g}}$ is endowed with a non-degenerate bi-invariant quadratic form $(\cdot,\cdot)_{\ensuremath{\mathfrak{g}}}$. There is a well-defined closed Cartan 3-form $H$, which on $\ensuremath{\mathfrak{g}}=T_eG$ is given by $H(u,v,w)=\frac{1}{2}
(u,[v,w])_{\ensuremath{\mathfrak{g}}}$ \cite[\S 2.3]{MaHenr}.
There is also a canonical $H$-twisted Dirac structure $L'\subset TG\oplus T^*G$: it is given by $L'=\{(v_r-v_l)+\frac{1}{2}(v_r+v_l)^*:v\in \ensuremath{\mathfrak{g}} \}$ where $v_r,v_l$ denote the right and left translations of $v\in \ensuremath{\mathfrak{g}}$ and the quadratic form is used to identify a tangent vector $X\in TG$ with a covector $X^*\in T^*G$
\cite{SW}\cite[Ex. 3.4]{MaHenr}. \end{ex}
\section{Introduction}
Different astronomical observations and measurements
indicate that
more than 80\% of all matter in our Universe is ``dark'',
and that this Dark Matter interacts
at most very weakly with ordinary matter.
Weakly Interacting Massive Particles (WIMPs) $\chi$
arising in several extensions of
the Standard Model of particle physics
with masses roughly between 10 GeV and a few TeV
are one of the leading candidates for Dark Matter%
\cite{SUSYDM96,Bertone05,Steffen08,Bergstrom09}.
Currently,
the most promising method to detect different WIMP candidates
is the direct detection of the recoil energy deposited
by elastic scattering of ambient WIMPs off the target nuclei%
\cite{Smith90,Lewin96}.
The differential event rate
for elastic WIMP--nucleus scattering is given by%
\cite{SUSYDM96}:
\beq
\dRdQ
=
\afrac{\rho_0 \sigma_0}{2 \mchi \mrN^2}
\FQ \int_{\vmin}^{\vmax} \bfrac{f_1(v)}{v} dv
\~.
\label{eqn:dRdQ}
\eeq
Here $R$ is the direct detection event rate,
i.e., the number of events
per unit time and unit mass of detector material,
$Q$ is the energy deposited in the detector,
$\rho_0$ is the WIMP density near the Earth,
$\sigma_0$ is the total cross section
ignoring the form factor suppression,
$F(Q)$ is the elastic nuclear form factor,
$f_1(v)$ is the one--dimensional velocity distribution function
of the WIMPs impinging on the detector,
$v$ is the absolute value of the WIMP velocity
in the laboratory frame.
The reduced mass $\mrN$ is defined by
\beq
\mrN
\equiv \frac{\mchi \mN}{\mchi + \mN}
\~,
\label{eqn:mrN}
\eeq
where $\mchi$ is the WIMP mass and
$\mN$ that of the target nucleus.
Finally,
\mbox{$\vmin = \alpha \sqrt{Q}$}
is the minimal incoming velocity of incident WIMPs
that can deposit the energy $Q$ in the detector
with the transformation constant
\beq
\alpha
\equiv \sfrac{\mN}{2 \mrN^2}
\~,
\label{eqn:alpha}
\eeq
and $\vmax$ is the maximal WIMP velocity
in the Earth's reference frame,
which is related to
the escape velocity from our Galaxy
at the position of the Solar system,
$\vesc~\:\raisebox{-0.7ex}{$\stackrel{\textstyle>}{\sim}$}\:~600$ km/s.
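As a numerical illustration of the kinematics above (this sketch is an addition, not part of the original analysis; the WIMP mass, target nucleus and recoil energy below are illustrative choices), one can evaluate $\vmin=\alpha\sqrt{Q}$ with $\alpha$ as in Eq.~(\ref{eqn:alpha}):

```python
# Illustrative evaluation of v_min = alpha * sqrt(Q) in natural units,
# converted to km/s.  All input values are illustrative choices.
import math

C_KMS = 2.998e5  # speed of light [km/s]

def reduced_mass(m1, m2):
    """Reduced mass m1*m2/(m1+m2)."""
    return m1 * m2 / (m1 + m2)

def v_min_kms(m_chi, m_N, Q_keV):
    """Minimal WIMP speed able to deposit recoil energy Q,
    with alpha = sqrt(m_N / (2 m_rN^2)).  Masses in GeV,
    Q in keV; result in km/s."""
    m_rN = reduced_mass(m_chi, m_N)
    alpha = math.sqrt(m_N / (2.0 * m_rN**2))  # [GeV^(-1/2)]
    return alpha * math.sqrt(Q_keV * 1e-6) * C_KMS

# A 100 GeV WIMP depositing Q = 10 keV on germanium (A = 73):
v = v_min_kms(100.0, 73 * 0.9315, 10.0)  # ~137 km/s
```

For these inputs $\vmin$ comes out well below the Galactic escape velocity, so such recoils are kinematically accessible.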
The total WIMP--nucleus cross section
$\sigma_0$ in Eq.~(\ref{eqn:dRdQ})
depends on the nature of WIMP couplings on nucleons.
Through e.g., squark and Higgs exchanges with quarks,
WIMPs could have a ``scalar'' interaction with nuclei. %
The total cross section for
the spin--independent (SI) scalar interaction
can be expressed as\cite{SUSYDM96,Bertone05}
\beq
\sigmaSI
= \afrac{4}{\pi} \mrN^2 \bBig{Z f_{\rm p} + (A - Z) f_{\rm n}}^2
\~.
\label{eqn:sigma0_scalar}
\eeq
Here $\mrN$ is the reduced mass defined in Eq.~(\ref{eqn:mrN}),
$Z$ is the atomic number of the target nucleus,
i.e., the number of protons,
$A$ is the atomic mass number,
$A-Z$ is then the number of neutrons,
$f_{\rm (p, n)}$ are the effective
scalar couplings of WIMPs on protons p and on neutrons n,
respectively.
Here we have to sum over the couplings
on each nucleon before squaring
because the wavelength associated with the momentum transfer
is comparable to or larger than the size of the nucleus,
the so--called ``coherence effect''.
In addition,
for the lightest supersymmetric neutralino,
and for all WIMPs which interact primarily through Higgs exchange,
the scalar couplings are approximately the same
on protons and on neutrons:
\(
f_{\rm n}
\simeq f_{\rm p}
.
\)
Thus the ``pointlike'' cross section $\sigmaSI$
in Eq.~(\ref{eqn:sigma0_scalar}) can be written as
\beq
\sigmaSI
\simeq \afrac{4}{\pi} \mrN^2 A^2 |f_{\rm p}|^2
= A^2 \afrac{\mrN}{\mrp}^2 \sigmapSI
\~,
\label{eqn:sigma0SI}
\eeq
where $\mrp$ is the reduced mass
of the WIMP mass $\mchi$ and the proton mass $m_{\rm p}$,
and
\beq
\sigmapSI
= \afrac{4}{\pi} \mrp^2 |f_{\rm p}|^2
\label{eqn:sigmapSI}
\eeq
is the SI WIMP--nucleon cross section.
Here the tiny mass difference between a proton and a neutron
has been neglected.
On the other hand,
through e.g., squark and Z boson exchanges with quarks,
WIMPs could also couple to the spin of target nuclei,
an ``axial--vector'' interaction.
The spin--dependent (SD) WIMP--nucleus cross section
can be expressed as\cite{SUSYDM96,Bertone05}:
\beq
\sigmaSD
= \afrac{32}{\pi} G_F^2 \~ \mrN^2
\afrac{J + 1}{J} \bBig{\Srmp \armp + \Srmn \armn}^2
\~.
\label{eqn:sigma0SD}
\eeq
Here $G_F$ is the Fermi constant,
$J$ is the total spin of the target nucleus,
$\expv{S_{\rm (p, n)}}$ are the expectation values of
the proton and neutron group spins,
and $a_{\rm (p, n)}$ are the effective SD WIMP couplings
on protons and on neutrons.
Some relevant spin values of
the most used spin--sensitive nuclei
are given in Table 1.
For the SD WIMP--nucleus interaction,
it is usually assumed that
only unpaired nucleons contribute significantly
to the total cross section,
as the spins of the nucleons in a nucleus
are systematically anti--aligned%
\footnote{
However,
more detailed nuclear spin structure calculations show that
the even group of nucleons has sometimes
also a non--negligible spin
(see Table 1 and
e.g., data given
in Refs.~\refcite{SUSYDM96,Tovey00,Giuliani05,Girard05}).
}.
Under this ``odd--group'' assumption,
the SD WIMP--nucleus cross section can be reduced to
\beq
\sigmaSD
= \afrac{32}{\pi} G_F^2 \~ \mrN^2
\afrac{J + 1}{J} \expv{S_{\rm (p, n)}}^2 |a_{\rm (p, n)}|^2
\~.
\label{eqn:sigma0SD_odd}
\eeq
Correspondingly,
the SD WIMP cross section on protons or on neutrons
is given by
\beq
\sigma_{\chi {\rm (p, n)}}^{\rm SD}
= \afrac{24}{\pi} G_F^2 \~ m_{\rm r, (p, n)}^2 |a_{\rm (p, n)}|^2
\~.
\label{eqn:sigmap/nSD}
\eeq
\begin{table}[t!]
\tbl{
List of the relevant spin values of
the most used spin--sensitive nuclei.
More details can be found in
e.g., Refs.~1, 7, 8, 9
}{
\begin{tabular}{|| c c c c c c c c ||}
\hline
\hline
\makebox[1 cm][c]{Isotope} &
\makebox[0.5cm][c]{$Z$} & \makebox[0.5cm][c]{$J$} &
\makebox[1 cm][c]{$\Srmp$} & \makebox[1 cm][c]{$\Srmn$} &
\makebox[1.2cm][c]{$-\Srmp/\Srmn$} & \makebox[1.2cm][c]{$\Srmn/\Srmp$} &
\makebox[3 cm][c]{Natural abundance (\%)} \\
\hline
\hline
$\rmXA{F}{19}$ & 9 & 1/2 & 0.441 & \hspace{-1.8ex}$-$0.109 &
4.05 & $-$0.25 & 100 \\
\hline
$\rmXA{Na}{23}$ & 11 & 3/2 & 0.248 & 0.020 &
$-$12.40 & 0.08 & 100 \\
\hline
$\rmXA{Cl}{35}$ & 17 & 3/2 & \hspace{-1.8ex}$-$0.059 & \hspace{-1.8ex}$-$0.011 &
$-$5.36 & 0.19 & 76 \\
\hline
$\rmXA{Cl}{37}$ & 17 & 3/2 & \hspace{-1.8ex}$-$0.058 & 0.050 &
1.16 & $-$0.86 & 24 \\
\hline
$\rmXA{Ge}{73}$ & 32 & 9/2 & 0.030 & 0.378 &
$-$0.08 & 12.6 & 7.8 / 86 (HDMS)\cite{Bednyakov08a}\\
\hline
$\rmXA{I}{127}$ & 53 & 5/2 & 0.309 & 0.075 &
$-$4.12 & 0.24 & 100 \\
\hline
$\rmXA{Xe}{129}$ & 54 & 1/2 & 0.028 & 0.359 &
$-$0.08 & 12.8 & 26 \\
\hline
$\rmXA{Xe}{131}$ & 54 & 3/2 & \hspace{-1.8ex}$-$0.009 & \hspace{-1.8ex}$-$0.227 &
$-$0.04 & 25.2 & 21 \\
\hline
\hline
\end{tabular}}
\end{table}
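The relative SD sensitivities implied by Table 1 can be compared by evaluating the nuclear factor $((J+1)/J)\expv{S}^2$ appearing in Eq.~(\ref{eqn:sigma0SD_odd}). The sketch below simply tabulates this factor from the listed spin values (a subset of Table 1):

```python
# (J, <S_p>, <S_n>) copied from Table 1 for a few nuclei
nuclei = {
    "F-19":   (0.5, 0.441, -0.109),
    "Na-23":  (1.5, 0.248,  0.020),
    "Ge-73":  (4.5, 0.030,  0.378),
    "I-127":  (2.5, 0.309,  0.075),
    "Xe-129": (0.5, 0.028,  0.359),
}

def odd_group_factor(J, S):
    """Nuclear factor ((J+1)/J) <S>^2 of Eq. (sigma0SD_odd)."""
    return (J + 1.0) / J * S * S

# proton-group sensitivity (F-19 dominates) ...
fp = {name: odd_group_factor(J, Sp) for name, (J, Sp, Sn) in nuclei.items()}
# ... and neutron-group sensitivity (Xe-129 and Ge-73 dominate)
fn = {name: odd_group_factor(J, Sn) for name, (J, Sp, Sn) in nuclei.items()}
```

This reproduces the usual rule of thumb: fluorine targets probe the SD WIMP--proton coupling, while germanium and xenon targets probe the SD WIMP--neutron coupling.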
Due to the coherence effect with the entire nucleus
shown in Eq.~(\ref{eqn:sigma0SI}),
the cross section for scalar interaction
scales approximately as the square of
the atomic mass of the target nucleus.
Hence,
in most supersymmetric models,
the SI cross section for nuclei with $A~\:\raisebox{-0.7ex}{$\stackrel{\textstyle>}{\sim}$}\:~30$ dominates
over the SD one\cite{SUSYDM96,Bertone05}.
\section{Reconstructing the one--dimensional
velocity distribution function\hspace*{-0.3cm} \\ of halo WIMPs}
As the first step of the development of
these model--independent data analysis procedures,
starting with a time--averaged recoil spectrum $dR / dQ$
and assuming that no directional information exists,
the normalized one--dimensional
velocity distribution function of incident WIMPs, $f_1(v)$,
is solved from Eq.~(\ref{eqn:dRdQ}) directly as\cite{DMDDf1v}
\beq
f_1(v)
= \calN
\cbrac{ -2 Q \cdot \dd{Q} \bbrac{ \frac{1}{\FQ} \aDd{R}{Q} } }\Qva
\~,
\label{eqn:f1v_dRdQ}
\eeq
where the normalization constant $\calN$ is given by
\beq
\calN
= \frac{2}{\alpha}
\cbrac{\intz \frac{1}{\sqrt{Q}}
\bbrac{ \frac{1}{\FQ} \aDd{R}{Q} } dQ}^{-1}
\~.
\label{eqn:calN_int}
\eeq
Note that
the WIMP velocity distribution
reconstructed by Eq.~(\ref{eqn:f1v_dRdQ})
is {\em independent} of the local WIMP density $\rho_0$
as well as of the WIMP--nucleus cross section $\sigma_0$.
However,
in order to use the expressions
(\ref{eqn:f1v_dRdQ}) and (\ref{eqn:calN_int})
for reconstructing $f_1(v)$,
one needs a functional form for the recoil spectrum $dR / dQ$.
In practice
this usually requires a fit to experimental data;
data fitting will re--introduce some model dependence
and complicate the error analysis.
Hence,
expressions that allow one to reconstruct $f_1(v)$
directly from experimental data
(i.e., measured recoil energies)
have been developed%
\cite{DMDDf1v}.
Consider experimental data described by
\beq
{\T Q_n - \frac{b_n}{2}}
\le \Qni
\le {\T Q_n + \frac{b_n}{2}}
\~,
~~~~~~~~~~~~
i
= 1,~2,~\cdots,~N_n,~
n
= 1,~2,~\cdots,~B.
\label{eqn:Qni}
\eeq
Here the total energy range between $\Qmin$ and $\Qmax$
has been divided into $B$ bins
with central points $Q_n$ and widths $b_n$.
In each bin,
$N_n$ events will be recorded.
Since the recoil spectrum $dR / dQ$ is expected
to be approximately exponential,
the following {\em exponential} ansatz
for the {\em measured} recoil spectrum
({\em before} normalization by the exposure $\calE$)
in the $n$th $Q-$bin has been introduced
in order to approximate the spectrum
over a rather wide range\cite{DMDDf1v}:
\beq
\adRdQ_{{\rm expt}, \~ n}
\equiv \adRdQ_{{\rm expt}, \~ Q \simeq Q_n}
\equiv \rn \~ e^{k_n (Q - Q_{s, n})}
\~.
\label{eqn:dRdQn}
\eeq
Here $r_n = N_n / b_n$ is the standard estimator
for $(dR / dQ)_{\rm expt}$ at $Q = Q_n$,
and $k_n$ is the logarithmic slope of
the recoil spectrum in the $n$th $Q-$bin,
which can be computed numerically
from the average value of the measured recoil energies
in this bin:
\beq
\bQn
\equiv \frac{1}{N_n} \sumiNn \abrac{\Qni - Q_n}
= \afrac{b_n}{2} \coth\afrac{k_n b_n}{2}-\frac{1}{k_n}
\~.
\label{eqn:bQn}
\eeq
Then the shifted point $Q_{s, n}$
in the ansatz (\ref{eqn:dRdQn}),
at which the leading systematic error
due to the ansatz
is minimal\cite{DMDDf1v},
can be estimated by
\beq
Q_{s, n}
= Q_n + \frac{1}{k_n} \ln\bfrac{\sinh(k_n b_n/2)}{k_n b_n/2}
\~.
\label{eqn:Qsn}
\eeq
Note that $Q_{s, n}$ differs from the central point of the $n$th bin, $Q_n$.
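A minimal numerical sketch of this step, under assumed (hypothetical) bin parameters: Eq.~(\ref{eqn:bQn}) is solved for $k_n$ by bisection, since its right-hand side is monotonic in $k_n$, and the shifted point then follows from Eq.~(\ref{eqn:Qsn}):

```python
import math

def mean_shift(k, b):
    """RHS of Eq. (bQn): (b/2) coth(k b/2) - 1/k; -> k b^2/12 as k -> 0."""
    x = k * b / 2.0
    if abs(x) < 1e-6:
        return k * b * b / 12.0          # small-k expansion
    return (b / 2.0) / math.tanh(x) - 1.0 / k

def solve_kn(qbar, b, lo=-5.0, hi=5.0):
    """Bisection for the logarithmic slope k_n, given the measured
    average shift qbar = <Q_i - Q_n> in a bin of width b."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_shift(mid, b) < qbar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def shifted_point(Qn, k, b):
    """Q_{s,n} from Eq. (Qsn)."""
    x = k * b / 2.0
    return Qn + math.log(math.sinh(x) / x) / k

# hypothetical bin: center 25 keV, width 10 keV, events leaning low
k = solve_kn(-0.5, 10.0)             # negative slope, as for a falling spectrum
Qs = shifted_point(25.0, k, 10.0)    # shifted point lies below the bin center
```

As expected for a falling spectrum, $k_n < 0$ and $Q_{s, n}$ comes out slightly below $Q_n$.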
Now,
substituting the ansatz (\ref{eqn:dRdQn})
into Eq.~(\ref{eqn:f1v_dRdQ})
and then setting $Q = Q_{s, n}$,
we obtain\cite{DMDDf1v}
\beq
f_{1, {\rm rec}}\abrac{v_{s, n} = \alpha \sqrt{Q_{s, n}}}
= \calN
\bBigg{\frac{2 Q_{s, n} r_n}{F^2(Q_{s, n})}}
\bbrac{\dd{Q} \ln \FQ \bigg|_{Q = Q_{s, n}} - k_n}
\~.
\label{eqn:f1v_Qsn}
\eeq
Here the normalization constant $\calN$
given in Eq.~(\ref{eqn:calN_int})
can be estimated directly from the data:
\beq
\calN
= \frac{2}{\alpha}
\bbrac{\sum_{a} \frac{1}{\sqrt{Q_a} \~ F^2(Q_a)}}^{-1}
\~,
\label{eqn:calN_sum}
\eeq
where the sum runs over all events in the sample.
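The sum in Eq.~(\ref{eqn:calN_sum}) is straightforward to evaluate from an event list. The sketch below assumes a trivial form factor $F = 1$ and a purely hypothetical event sample and $\alpha$; a realistic analysis would pass the appropriate nuclear form factor:

```python
import math

def norm_const(recoils_keV, alpha, FF=lambda Q: 1.0):
    """Normalization constant of Eq. (calN_sum): the integral of
    Eq. (calN_int) is replaced by a sum over all observed recoil
    energies Q_a.  FF(Q) is the form factor; F = 1 is assumed here."""
    s = sum(1.0 / (math.sqrt(Q) * FF(Q)**2) for Q in recoils_keV)
    return 2.0 / (alpha * s)

# hypothetical recoil energies (keV) and a fiducial alpha
events = [8.2, 11.5, 14.1, 19.7, 26.3, 33.0]
N = norm_const(events, alpha=1.0)
```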
\section{Determining the WIMP mass and
the SI WIMP--nucleon coupling\hspace*{-0.35cm}}
By using expressions (\ref{eqn:f1v_dRdQ})
and (\ref{eqn:calN_int})
for reconstructing the WIMP velocity distribution function,
not only the overall normalization constant $\calN$
given in Eq.~(\ref{eqn:calN_int}),
but also the shape of the velocity distribution,
through the transformation $Q = v^2 / \alpha^2$
in Eq.~(\ref{eqn:f1v_dRdQ}),
depends on the WIMP mass $\mchi$
involved in the coefficient $\alpha$.
It is thus crucial to develop a method
for determining the WIMP mass model--independently.
From Eq.~(\ref{eqn:f1v_dRdQ})
and using the exponential ansatz in Eq.~(\ref{eqn:dRdQn}),
the moments of the normalized one--dimensional
WIMP velocity distribution function
can be estimated by\cite{DMDDmchi}
\beqn
\expv{v^n}
&=& \int_{v(\Qmin)}^{v(\Qmax)} v^n f_1(v) \~ dv
\non\\
&=& \alpha^n
\bfrac{2 \Qmin^{(n+1)/2} r(\Qmin) / \FQmin + (n+1) I_n(\Qmin, \Qmax)}
{2 \Qmin^{ 1 /2} r(\Qmin) / \FQmin + I_0(\Qmin, \Qmax)}
\~.
\label{eqn:moments}
\eeqn
Here $v(Q) = \alpha \sqrt{Q}$,
$Q_{\rm (min, max)}$ are the experimental
minimal and maximal cut--off energies,
\beq
r(\Qmin)
\equiv \adRdQ_{{\rm expt},\~Q = \Qmin}
= r_1 \~ e^{k_1 (\Qmin - Q_{s, 1})}
\label{eqn:rmin}
\eeq
is an estimated value of the {\em measured} recoil spectrum
$(dR / dQ)_{\rm expt}$ ({\em before}
the normalization by the exposure $\cal E$) at $Q = \Qmin$,
and $I_n(\Qmin, \Qmax)$ can be estimated through the sum:
\beq
I_n(\Qmin, \Qmax)
= \sum_a \frac{Q_a^{(n-1)/2}}{F^2(Q_a)}
\~,
\label{eqn:In_sum}
\eeq
where the sum runs again over all events in the data set.
Note that
by using Eq.~(\ref{eqn:moments})
$\expv{v^n}$ can be determined
independently of the local WIMP density $\rho_0$,
of the WIMP--nucleus cross section $\sigma_0$,
as well as of the velocity distribution function
of incident WIMPs, $f_1(v)$.
By requiring that
the values of a given moment of $f_1(v)$
estimated by Eq.~(\ref{eqn:moments})
from two experiments
with different target nuclei, $X$ and $Y$, agree,
$\mchi$ appearing in the prefactor $\alpha^n$
on the right--hand side of Eq.~(\ref{eqn:moments})
can be solved as%
\cite{DMDDmchi-SUSY07}:%
\beq
\left. \mchi \right|_{\Expv{v^n}}
= \frac{\sqrt{\mX \mY} - \mX (\calR_{n, X} / \calR_{n, Y})}
{\calR_{n, X} / \calR_{n, Y} - \sqrt{\mX / \mY}}
\~,
\label{eqn:mchi_Rn}
\eeq
where
\beqn
\calR_{n, X}
\equiv \bfrac{2 \QminX^{(n+1)/2} r_X(\QminX) / \FQminX + (n+1) \InX}
{2 \QminX^{ 1 /2} r_X(\QminX) / \FQminX + \IzX}^{1/n}
\~,
\label{eqn:RnX_min}
\eeqn
and $\calR_{n, Y}$ can be defined analogously%
\footnote{
Hereafter,
without special remark
all notations defined for the target $X$
can be defined analogously for the target $Y$
and eventually for the target $Z$.
}.
Here $n \ne 0$,
$m_{(X, Y)}$ and $F_{(X, Y)}(Q)$
are the masses and the form factors of the nuclei $X$ and $Y$,
respectively,
and $r_{(X, Y)}(Q_{{\rm min}, (X, Y)})$
refer to the counting rates for detectors $X$ and $Y$
at the respective lowest recoil energies included in the analysis.
Note that
the general expression (\ref{eqn:mchi_Rn}) can be used
either for spin--independent or for spin--dependent scattering;
one only needs to choose the appropriate form factor
for each case.
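Equation (\ref{eqn:mchi_Rn}) can be checked numerically: for the true WIMP mass the moments satisfy $\alpha_X \calR_{n, X} = \alpha_Y \calR_{n, Y}$, so $\calR_{n, X}/\calR_{n, Y} = \alpha_Y/\alpha_X$, and feeding this ratio back into the formula must recover the input mass. The fiducial masses below are assumptions for illustration:

```python
import math

def alpha(m_chi, m_N):
    """alpha = sqrt(m_N/2)/m_rN from Eq. (alpha), natural units."""
    m_r = m_chi * m_N / (m_chi + m_N)
    return math.sqrt(m_N / 2.0) / m_r

def mchi_from_moments(mX, mY, ratio):
    """Eq. (mchi_Rn): WIMP mass from ratio = R_{n,X}/R_{n,Y}."""
    return (math.sqrt(mX * mY) - mX * ratio) / (ratio - math.sqrt(mX / mY))

# consistency check with a fiducial 100 GeV WIMP, Ge (X) and Xe (Y) targets
mX, mY, m_true = 0.9315 * 73, 0.9315 * 131, 100.0
ratio = alpha(m_true, mY) / alpha(m_true, mX)
print(mchi_from_moments(mX, mY, ratio))  # recovers the input mass, 100 GeV
```

The round trip is an exact algebraic identity, so the recovered mass agrees with the input to floating-point precision.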
On the other hand,
by using the theoretical prediction that
the SI WIMP--nucleus cross section
dominates,
and the fact that
the integral over the one--dimensional WIMP velocity distribution
on the right--hand side of Eq.~(\ref{eqn:dRdQ})
is the minus--first moment of this distribution,
which can be estimated by Eq.~(\ref{eqn:moments}) with $n = -1$,
one can easily find that\cite{DMDDmchi}
\beq
\rho_0 |f_{\rm p}|^2
= \frac{\pi}{4 \sqrt{2}} \afrac{1}{\calE A^2 \sqrt{\mN}}
\bbrac{\frac{2 \Qmin^{1/2} r(\Qmin)}{\FQmin} + I_0}
\abrac{\mchi + \mN}
\~.
\label{eqn:rho0_fp2}
\eeq
Note that
the exposure of the experiment, $\calE$,
appears in the denominator.
Since the unknown factor $\rho_0 |f_{\rm p}|^2$
on the left--hand side above
is identical for different targets,
it leads to a second expression for determining $\mchi$:%
\cite{DMDDmchi}
\beq
\left. \mchi \right|_\sigma
= \frac{\abrac{\mX / \mY}^{5/2} \mY - \mX (\calR_{\sigma, X} / \calR_{\sigma, Y})}
{\calR_{\sigma, X} / \calR_{\sigma, Y} - \abrac{\mX / \mY}^{5/2}}
\~.
\label{eqn:mchi_Rsigma}
\eeq
Here $m_{(X, Y)} \propto A_{(X, Y)}$ has been assumed and
\beq
\calR_{\sigma, X}
\equiv \frac{1}{\calE_X}
\bbrac{\frac{2 \QminX^{1/2} r_X(\QminX)}{\FQminX} + \IzX}
\~.
\label{eqn:RsigmaX_min}
\eeq
Recall that
the basic requirement of the expressions for determining $\mchi$
given in Eqs.~(\ref{eqn:mchi_Rn}) and (\ref{eqn:mchi_Rsigma}) is that,
from two experiments with different target nuclei,
the values of a given moment of the WIMP velocity distribution
estimated by Eq.~(\ref{eqn:moments}) should agree.
This means that
the upper cuts on $f_1(v)$ in two data sets
should be (approximately) equal%
\footnote{
Here the threshold energies of two experiments
have been assumed to be negligibly small.
}.
Since $v_{\rm cut} = \alpha \sqrt{Q_{\rm max}}$,
it requires that\cite{DMDDmchi}
\beq
Q_{{\rm max}, Y}
= \afrac{\alpha_X}{\alpha_Y}^2 Q_{{\rm max}, X}
\~.
\label{eqn:match}
\eeq
Note that
$\alpha$ defined in Eq.~(\ref{eqn:alpha})
is a function of the true WIMP mass.
Thus this relation for matching optimal cut--off energies
can be used only if $\mchi$ is already known.
One possibility to overcome this problem is
to fix the cut--off energy of the experiment with the heavier target,
determine the WIMP mass by either Eq.~(\ref{eqn:mchi_Rn})
or Eq.~(\ref{eqn:mchi_Rsigma}),
and then estimate the cut--off energy for the lighter nucleus
by Eq.~(\ref{eqn:match}) algorithmically\cite{DMDDmchi}.
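A sketch of the cut-off matching of Eq.~(\ref{eqn:match}), with an assumed (already estimated) WIMP mass and fiducial Xe/Ge targets; the nuclear masses are rough illustrative values:

```python
import math

def alpha(m_chi, m_N):
    """alpha = sqrt(m_N/2)/m_rN, Eq. (alpha), natural units."""
    m_r = m_chi * m_N / (m_chi + m_N)
    return math.sqrt(m_N / 2.0) / m_r

def match_qmax(QmaxX, m_chi_est, mX, mY):
    """Eq. (match): Q_{max,Y} = (alpha_X / alpha_Y)^2 Q_{max,X},
    matching the velocity cut v_cut of target Y to that of target X,
    given an estimate of the WIMP mass."""
    return (alpha(m_chi_est, mX) / alpha(m_chi_est, mY))**2 * QmaxX

# fix the heavier target (Xe) at 100 keV; assume an estimated 100 GeV mass
mXe, mGe = 0.9315 * 131, 0.9315 * 73
print(match_qmax(100.0, 100.0, mXe, mGe))  # matched Ge cut-off, in keV
```

For these fiducial values the matched Ge cut-off comes out a few percent below the Xe one, consistent with $\alpha_{\rm Xe} < \alpha_{\rm Ge}$.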
Furthermore,
by combining two or three data sets
with different target nuclei
and making an assumption for
the local WIMP density $\rho_0$,
we can use Eq.~(\ref{eqn:rho0_fp2})
to estimate the {\em squared}
SI WIMP coupling on protons (nucleons),
$|f_{\rm p}|^2$.%
\cite{DMDDfp2-IDM2008,DMDDfp2}
It is important to note that
$|f_{\rm p}|^2$ and $\mchi$
can be estimated {\em separately} and
from experimental data directly
with {\em neither} prior knowledge about each other
{\em nor} about the WIMP velocity distribution.
\section{Determining ratios between
different WIMP--nucleon cross sections\hspace*{-0.55cm}}
\subsection{Determining the ratio between two SD WIMP couplings}
Assuming that
the SD WIMP--nucleus interaction dominates and
substituting the expression (\ref{eqn:sigma0SD})
for $\sigmaSD$ into Eq.~(\ref{eqn:dRdQ})
for two target nuclei $X$ and $Y$,
the ratio between two SD WIMP--nucleon couplings
can be solved analytically as%
\cite{DMDDranap-DM08,DMDDidentification-DARK2009,DMDDranap}%
\beq
\afrac{\armn}{\armp}_{\pm, n}^{\rm SD}
=-\frac{\SpX \pm \SpY \abrac{\calR_{J, n, X} / \calR_{J, n, Y}} }
{\SnX \pm \SnY \abrac{\calR_{J, n, X} / \calR_{J, n, Y}} }
\~,
\label{eqn:ranapSD}
\eeq
for $n \ne 0$.
Here I have defined
\beq
\calR_{J, n, X}
\equiv \bbrac{\Afrac{J_X}{J_X + 1}
\frac{\calR_{\sigma, X}}{\calR_{n, X}}}^{1/2}
\~,
\label{eqn:RJnX}
\eeq
with $\calR_{n, X}$ and $\calR_{\sigma, X}$
defined in Eqs.~(\ref{eqn:RnX_min}) and (\ref{eqn:RsigmaX_min}).
Note that,
firstly,
the expression (\ref{eqn:ranapSD}) for $\armn / \armp$
is {\em independent} of the WIMP mass $\mchi$
and the ratio can thus be determined
from experimental data directly
{\em without} knowing the WIMP mass.
Secondly,
because the couplings in Eq.~(\ref{eqn:sigma0SD}) are squared,
we have two solutions for $\armn / \armp$ here;
if exact ``theory'' values for ${\cal R}_{J, n , (X, Y)}$ are taken,
these solutions coincide for
\beq
\afrac{\armn}{\armp}_{+, n}^{\rm SD}
= \afrac{\armn}{\armp}_{-, n}^{\rm SD}
= \cleft{\renewcommand{\arraystretch}{1}
\begin{array}{l l l}
\D -\frac{\SpX}{\SnX} \~, & ~~~~~~~~ &
{\rm for}~\calR_{J, n, X} = 0 \~, \\ \\
\D -\frac{\SpY}{\SnY} \~, & &
{\rm for}~\calR_{J, n, Y} = 0 \~,
\end{array}}
\label{eqn:ranapSD_coin}
\eeq
which depend only on the properties of target nuclei
(see Table 1).
Moreover,
it can be found from Eq.~(\ref{eqn:ranapSD}) that
one of these two solutions has a pole
between the two coincident values,
which depends simply on the signs of $\SnX$ and $\SnY$:
since $\calR_{J, n, X}$ and $\calR_{J, n, Y}$ are always positive,
if both of $\SnX$ and $\SnY$ are positive or negative,
the ``$-$'' solution $(\armn / \armp)^{\rm SD}_{-, n}$
will diverge and
the ``$+$'' solution $(\armn / \armp)^{\rm SD}_{+, n}$
will be the ``inner'' solution;
in contrast,
if the signs of $\SnX$ and $\SnY$ are opposite,
the ``$-$'' solution $(\armn / \armp)^{\rm SD}_{-, n}$
will be the ``inner'' solution.
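The two branches of Eq.~(\ref{eqn:ranapSD}) are easy to evaluate. The sketch below uses the F-19 ($X$) and Xe-129 ($Y$) spin values from Table 1 and a purely hypothetical ratio $\calR_{J, n, X}/\calR_{J, n, Y} = 1$:

```python
def ranap_pm(SpX, SnX, SpY, SnY, r):
    """Two solutions of Eq. (ranapSD) for a_n/a_p,
    with r = R_{J,n,X}/R_{J,n,Y} (always positive)."""
    plus  = -(SpX + SpY * r) / (SnX + SnY * r)
    minus = -(SpX - SpY * r) / (SnX - SnY * r)
    return plus, minus

# F-19 as X, Xe-129 as Y (spins from Table 1); hypothetical r = 1
p, m = ranap_pm(0.441, -0.109, 0.028, 0.359, 1.0)
```

Here $\SnX < 0 < \SnY$, i.e., the signs are opposite, so the "$-$" branch is the finite "inner" solution, as stated above.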
\subsection{Determining the ratio between
two WIMP--proton cross sections\hspace*{-0.3cm}}
Considering a general combination of
both the SI and SD cross sections
given in Eqs.~(\ref{eqn:sigma0SI}) and (\ref{eqn:sigma0SD}),
we can find that%
\cite{DMDDranap-DM08,DMDDranap}
\beq
\frac{\sigmaSD}{\sigmaSI}
= \afrac{32}{\pi} G_F^2 \~ \mrp^2 \Afrac{J + 1}{J}
\bfrac{\Srmp + \Srmn (\armn / \armp)}{A}^2 \frac{|\armp|^2}{\sigmapSI}
= \calCp \afrac{\sigmapSD}{\sigmapSI}
\~,
\label{eqn:rsigmaSDSI}
\eeq
where $\sigmapSD$ given
in Eq.~(\ref{eqn:sigmap/nSD}) has been used and
\beq
\calCp
\equiv \frac{4}{3} \afrac{J + 1}{J}
\bfrac{\Srmp + \Srmn (\armn/\armp)}{A}^2
\~.
\label{eqn:Cp}
\eeq
Then
the expression (\ref{eqn:dRdQ})
for the differential event rate
should be modified to
\beqn
\adRdQ_{{\rm expt}, \~ Q = \Qmin}
&=& \calE
A^2 \! \afrac{\rho_0 \sigmapSI}{2 \mchi \mrp^2} \!\!
\bbrac{F_{\rm SI}^2(\Qmin) + \afrac{\sigmapSD}{\sigmapSI} \calCp F_{\rm SD}^2(\Qmin)}
\non\\
&~& ~~~~~~~~~~~~ \times
\int_{v(\Qmin)}^{v(\Qmax)} \bfrac{f_1(v)}{v} dv
\~,
\label{eqn:dRdQ_SISD}
\eeqn
where I have used Eq.~(\ref{eqn:sigma0SI}) again.
Now by combining two targets $X$ and $Y$
and assuming that
the integral over the WIMP velocity distribution function
in Eq.~(\ref{eqn:dRdQ_SISD})
estimated by Eq.~(\ref{eqn:moments}) for each target
with suitable experimental maximal and minimal cut--off energies
should be (approximately) equal,
the ratio of the SD WIMP--proton cross section
to the SI one can be solved
analytically as%
\cite{DMDDranap-DM08,DMDDidentification-DARK2009,DMDDranap}
\beq
\frac{\sigmapSD}{\sigmapSI}
= \frac{\FSIQminY (\calR_{m, X}/\calR_{m, Y}) - \FSIQminX}
{\calCpX \FSDQminX - \calCpY \FSDQminY (\calR_{m, X} / \calR_{m, Y})}
\~,
\label{eqn:rsigmaSDpSI}
\eeq
where I have assumed $m_{(X, Y)} \propto A_{(X, Y)}$ and defined
\beq
\calR_{m, X}
\equiv \frac{r_X(\QminX)}{\calE_X \mX^2}
\~.
\label{eqn:RmX}
\eeq
Similarly,
the ratio of the SD WIMP--neutron cross section
to the SI one can be given analogously as%
\footnote{
Here I assumed that $\sigmanSI \simeq \sigmapSI$.
}:
\beq
\frac{\sigmanSD}{\sigmapSI}
= \frac{\FSIQminY (\calR_{m, X}/\calR_{m, Y}) - \FSIQminX}
{\calCnX \FSDQminX - \calCnY \FSDQminY (\calR_{m, X} / \calR_{m, Y})}
\~,
\label{eqn:rsigmaSDnSI}
\eeq
with the definition
\beq
\calCn
\equiv \frac{4}{3} \Afrac{J + 1}{J}
\bfrac{\Srmp (\armp/\armn) + \Srmn}{A}^2
\~.
\label{eqn:Cn}
\eeq
Note here that
one can use expressions
(\ref{eqn:rsigmaSDpSI}) and (\ref{eqn:rsigmaSDnSI})
{\em without} a prior knowledge of the WIMP mass $\mchi$.
Moreover,
$\sigma_{\chi, ({\rm p, n})}^{\rm SD} / \sigmapSI$
are functions of only $\calR_{m, (X, Y)}$,
or, equivalently,
the counting rate
at the experimental minimal cut--off energies,
$r_{(X, Y)}(Q_{{\rm min}, (X, Y)})$,
which can be estimated with events
in the lowest available energy ranges.
On the other hand,
for the general combination of
the SI and SD WIMP--nucleon cross sections,
one can introduce {\em a third nucleus}
with {\em only} the SI sensitivity:
\mbox{
\(
\Srmp_Z
= \Srmn_Z
= 0
,
\)}
i.e.,
\(
{\cal C}_{{\rm p}, Z}
= 0
.
\)
Then the $\armn / \armp$ ratio can be solved analytically as%
\cite{DMDDranap-DM08,DMDDidentification-DARK2009,DMDDranap}:
\beq
\afrac{\armn}{\armp}_{\pm}^{\rm SI + SD}
= \frac{-\abrac{\cpX \snpX - \cpY \snpY}
\pm \sqrt{\cpX \cpY} \vbrac{\snpX - \snpY}}
{\cpX \snpX^2 - \cpY \snpY^2}
\~.
\label{eqn:ranapSISD}
\eeq
Here I have defined
\cheqna
\beqn
\cpX
&\equiv& \frac{4}{3} \Afrac{J_X + 1}{J_X} \bfrac{\SpX}{A_X}^2
\non\\
&~& ~ \times \!
\bbrac{ \FSIQminZ \afrac{\calR_{m, Y}}{\calR_{m, Z}} \!
- \FSIQminY} \!
\FSDQminX
\~,
\label{eqn:cpX}
\eeqn
\cheqnb
\beqn
\cpY
&\equiv& \frac{4}{3} \Afrac{J_Y + 1}{J_Y} \bfrac{\SpY}{A_Y}^2
\non\\
&~& ~ \times \!
\bbrac{ \FSIQminZ \afrac{\calR_{m, X}}{\calR_{m, Z}} \!
- \FSIQminX} \!
\FSDQminY
\~;
\label{eqn:cpY}
\eeqn
\cheqn
and
\(
\snpX
\equiv \SnX / \SpX
.
\)
Note that,
firstly,
$(\armn / \armp)_{\pm}^{\rm SI + SD}$ and $c_{{\rm p}, (X, Y)}$
given in Eqs.~(\ref{eqn:ranapSISD}), (\ref{eqn:cpX}), and (\ref{eqn:cpY})
are functions of only
$r_{(X, Y, Z)}(Q_{{\rm min}, (X, Y, Z)})$,
which can be estimated with events
in the lowest available energy ranges.
Secondly,
while the identification of the ``inner'' solution of
$(\armn / \armp)_{\pm, n}^{\rm SD}$
depends on the signs of $\SnX$ and $\SnY$,
the identification for $(\armn / \armp)_{\pm}^{\rm SI + SD}$
depends {\em not only} on the signs of
$\snpX = \SnX / \SpX$ and $\snpY = \SnY / \SpY$,
{\em but also} on the {\em order} of the two targets.
Moreover,
since in the expression (\ref{eqn:rsigmaSDpSI})
for the ratio of the two WIMP--proton cross sections
there are four sources contributing to the statistical uncertainty,
i.e., ${\cal C}_{{\rm p}, (X, Y)}$ and $\calR_{m, (X, Y)}$,
one can reduce the statistical error
by first choosing a nucleus
with {\em only} the SI sensitivity
as the second target:
\(
\SpY
= \SnY
= 0
,
\)
i.e.,
\(
{\cal C}_{{\rm p}, Y}
= 0.
\)
Then the expression in Eq.~(\ref{eqn:rsigmaSDpSI})
can be reduced to\cite{DMDDranap}
\beq
\frac{\sigmapSD}{\sigmapSI}
= \frac{\FSIQminY (\calR_{m, X} / \calR_{m, Y}) - \FSIQminX}
{\calCpX \FSDQminX}
\~.
\label{eqn:rsigmaSDpSI_even}
\eeq
Then
one chooses a nucleus with a (much) larger
proton (or neutron) group spin
as the first target:
\(
\SpX
\gg \SnX
\simeq 0
.
\)
Now $\calCpX$ given in Eq.~(\ref{eqn:Cp})
becomes (almost) independent of $\armn/\armp$:%
\beq
\calCpX
\simeq \frac{4}{3} \Afrac{J_X + 1}{J_X} \bfrac{\SpX}{A_X}^2
\~.
\label{eqn:CpX_p}
\eeq
\section{Summary and conclusions}
In this article
I reviewed the data analysis procedures
for extracting properties of WIMP--like Dark Matter particles
from direct detection experiments.
These methods are model--independent
in the sense that
neither prior knowledge about
the velocity distribution function of halo Dark Matter
nor about the WIMP mass and its cross sections on target nuclei
is needed.
The only required information
is the recoil energies measured
in experiments
with different target materials.
Once two or more experiments
observe a few tens of recoil events
(in each experiment),
one could in principle already estimate
the mass and the SI coupling on nucleons
as well as ratios between different cross sections
of Dark Matter particles.
All this information
(eventually combined with results from collider
and/or indirect detection experiments)
could then allow us to distinguish between different candidates
for (WIMP--like) Dark Matter particles
proposed in different theoretical models
and to extend our understanding
of particle physics.
\section*{Acknowledgments}
This work
was partially supported
by the BK21 Frontier Physics Research Division under project
no.~BA06A1102 of Korea Research Foundation,
by the National Science Council of R.O.C.
under contract no.~NSC-98-2811-M-006-044,
as well as by
the LHC Physics Focus Group,
National Center of Theoretical Sciences, R.O.C..
\section{Introduction}
GM Cep is a solar type variable star in the $\sim$ 4 Myr-old open cluster Tr 37 \citep{sic04, sic05}, which is located at a distance of 900 pc \citep{con02}. The coordinates of GM Cep are $21^h 38^m 16^s.48$ and $+57^{\circ} 32' 47''.6$. It has a late-type spectral classification of G7V-K0V, with a mass of $\sim$2.1 $M_\sun$ and a radius estimate of 3 - 6 $R_\sun$ \citep{sic08}. A companion star has been hypothesized as part of the physical mechanism for the variability in the GM Cep system, but it has never been observed.
The first recorded photometric data for GM Cep were taken at Sonneberg Observatory \citep{mor39}, with the visual magnitude varying from 13.5 to 15.5 mag. \citet{suy75} showed that GM Cep had a stable period of up to $\sim$100 days, and it was experiencing rapid variation between 14.2 and 16.4 mag. \citet{sic08} listed and summarized the available data in the literature and depicted a long term light curve in multiple wavelengths. This list contains 16 magnitudes in the V band and 5 in the B band, most of which were taken in 2006 and later. The only B-band magnitude before 2006 was taken from \citet{kun86}, with B = 17.31 mag. It is much fainter than any other available B magnitude value, and the simultaneous V magnitude is not significantly high. \citet{sic08} took it as an outlier and did not include it in their analysis. Although the data for GM Cep in the literature span from 1939 until 2007, the time history is rather spotty, and there are few magnitudes before 2000.
\citet{sic08} invoked several possible mechanisms to explain the large rapid variability of GM Cep's optical magnitude, the fast rotation rate, and the strong mid-IR excesses. The rapid variability can be explained by the strong outbursts of FUor systems (which brighten by $\geqslant$ 4 mag), in which the mass accretion rate through the circumstellar disk of a young star increases by orders of magnitude \citep{har96}. Another proto-stellar system, EXor (with outbursts $\geqslant$ 2 mag), was also interpreted as a mass accretion event \citep{leh95} and proposed to be similar to GM Cep. \citet{sic08} also give comparisons between the observational features of GM Cep and several better-known systems. For example, RW Aur, which is often quoted as a triple system, shares the features of a strong and variable P Cygni H$\alpha$ profile, a powerful disk, a large accretion rate, and a strong double-peaked $OI$ emission line with GM Cep \citep{ghe93, ale05, suy75}. Another similar system is GW Ori, a 1-Myr old G5 star with a fast rotation rate of $V\sin i$ = 43 km s$^{-1}$ \citep{bou86}, variability up to 1 mag in JHK\footnote{VizieR Online Data Catalog, II/250 \citep{sam04}}, and strong IR excess \citep{mat91, mat95}. CW Tau, a K3 star, has large magnitude variations of 2 mag\footnotemark[\value{footnote}] , a rapid rotation rate of $V\sin i$ = 28 km s$^{-1}$ \citep{muz98}, a P Cygni H$\alpha$ profile, and a deep, broad $OI$ absorption at 7773 \AA. McNeil's Nebula \citep{mcn04} has its emission line spectrum at optical wavelengths similar to the spectrum of GM Cep. KH 15D, a pre-main sequence binary system with a precessing disk or ring \citep{ham05}, is another system that provides an example of a possible explanation for the mechanism of GM Cep. However, without a long-term light curve of GM Cep, these physical explanations cannot be properly tested, and the observational comparisons cannot be made.
A long-term light curve can be used to search for outbursts, periodicities, repetitive features, and other observational features that these mechanisms predict. To obtain a long-term light curve, we visited Harvard College Observatory and Sonneberg Observatory, searched through the archival plates showing this field, and obtained 186 magnitude estimates from 1895 until 1993. We also collected the 75 visual observations from the database of the American Association of Variable Star Observations (AAVSO) from 2006 to present. A long-term light curve for GM Cep was plotted from these data.
\section{Data}
The majority of the world's archival photographic plates are now preserved at Harvard College Observatory (Cambridge, Massachusetts) and Sonneberg Observatory (Germany). The Harvard collection contains roughly 500,000 archival plates with complete sky coverage from the mid-1880s to 1989, with a gap from 1953 to 1968. A large fraction of these plates are patrol plates, with a typical limiting magnitude (in the B band) of 12-15. There are also many series plates, with larger plate scale and deeper limiting magnitudes ($\sim$15-18). The description of the patrol and series plates can be found on the HCO website\footnote{http://www.cfa.harvard.edu/hco/collect.html}. Most of the patrol plates are not deep enough to show GM Cep. As a result, our search focused on the series plates. Sonneberg Observatory was built in 1925 and has roughly 300,000 plates taken from the early 1930s until present, with patrol plates still ongoing. The magnitude limit of the series plates is $\sim$14-18, so many of these plates are deep enough to show GM Cep. The exposure times for the series plates range from $\sim$ 40 mins to 2 hrs. Most of the archival plates are in blue sensitivity, which closely matches the Johnson B band. Indeed, the Harvard plates provided the original definition of the B band, and the same spectral sensitivity is kept for the photoelectric and CCD magnitudes. With the comparison sequences measured in modern B magnitudes, the differential magnitudes from the old plates are now exactly in the Johnson B-magnitude system.
Before looking through the plates, we set up our own comparison star sequence. The sequence was obtained at Sonoita Research Observatory\footnote{http://www.sonoitaobservatories.org/sonoita\_research\_observatory.html}, located near the town of Sonoita, AZ. The observatory has a 35cm (C14) Schmidt-Cassegrain telescope equipped with an SBIG STL-1000E CCD camera with Johnson-Cousins BVRI filters as well as a clear filter. The pixel scale of the telescope is 1.25 arcsec/pixel, with a 20$\times$20 arcmin field of view. All-sky photometry was obtained, using nightly standards \citep{lan83, lan92} on several photometric nights. The magnitudes and positions of the comparison stars are shown in Table 1.
We searched through all the series plates at Harvard and Sonneberg, and some of the patrol plates at Harvard (specifically, the Damon plates with a scale of 580"/mm and limiting magnitude 14-15 from years 1965 to 1990), and found 186 plates with images of GM Cep. All of these plates have blue sensitivity except for 10 Damon plates (6 DNY plates with visual sensitivity and 4 DNR plates with red sensitivity). We recorded all the plate numbers, dates, and the estimated GM Cep magnitudes.
Each of the GM Cep magnitudes was obtained by taking the average of two or three independent estimations of the same plate. Our magnitude measurements were taken by visually examining each (back-illuminated) plate using a handheld loupe or microscope. Magnitudes were estimated by directly comparing the radius of GM Cep against the radii of nearby comparison stars. On photographic plates, only the objects with magnitude close to the limiting magnitude of the plates show a Gaussian profile. GM Cep is a relatively bright object, for which the central (Gaussian) portion of the star image is saturated. In this case, there is a sharp edge on the star profile, and human eyes are quite good at measuring the radius. The relations between the radii and the magnitudes are given in \citet{sch91}. For our purpose here, as we are choosing comparison stars with comparable brightness on both sides (brighter and fainter) of GM Cep, the relation in such a small region can be approximated as linear, and the uncertainty caused by the non-linear effect is much smaller than the measurement uncertainty itself. The field of GM Cep is not crowded at all, and all the measurements are well performed.
From our experience and the quantitative studies, our visual method is comparable in accuracy with methods based on two-dimensional scans of the plates and with the use of an Iris Diaphragm Photometer \citep{sch08, sch91, sch81, sch83b}. The measurement error on the magnitude estimation varies slightly among different plates. From the experience of the work on archival plates by our group at Louisiana State University, we can take a typical measurement error value of $\sim$ 0.15 mag \citep{sch83a, sch91, sch05, col09, pag09}. According to our data of GM Cep, the magnitude of each plate has been measured 2-3 times, and the average RMS of the different measurements is 0.15 mag, which provides a typical measurement uncertainty for the magnitudes. In an archival plate study of nova QZ Aur \citep{xia10}, we calculated the standard deviation of the data points that are out of its eclipse. The standard deviation comes out to be 0.16 mag, which is compatible with the value we adopted here.
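The averaging and scatter statistics described above can be sketched as follows; the repeated magnitude estimates are hypothetical, for illustration only:

```python
import statistics

# Hypothetical repeated visual estimates (mag) for three plates; the
# adopted magnitude is the mean of the 2-3 independent estimates, and
# the RMS scatter tracks the ~0.15 mag typical measurement error.
estimates = [[14.8, 15.0, 14.9], [15.4, 15.1], [14.2, 14.5, 14.3]]
adopted = [round(statistics.mean(e), 2) for e in estimates]
scatter = [statistics.stdev(e) for e in estimates]
```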
Table 2 records data from Sonneberg and Table 3 records data from Harvard; both tables are sorted in order of ascending time. The first column lists the plate number, the second and third columns give the date on which the plate was taken and the corresponding Julian day number, and the fourth column lists our measured magnitude. Our data trace the long-term behavior of GM Cep from 1895 to 1993. The long-term light curve in the B band is plotted in Figure 1, and the light curve in the most densely sampled interval (1935 to 1945) is plotted in Figure 2. B band data from \citet{sic08} are plotted in the same figure, extending the time range to 2006-2007.
The American Association of Variable Star Observers (AAVSO) has a substantial database of 75 V band magnitudes observed by two amateur astronomers. These data are available upon request at the AAVSO website\footnote{http://www.aavso.org}. The light curve combining the AAVSO data, our V-band magnitudes from the DNY plates, and the data from \citet{sic08} is shown in Figure 3. No measurement uncertainties are available for the AAVSO data, so we take 0.15 mag as a typical measurement error. Figure 4 displays the densely sampled V-band light curve from 2006 to 2009.
\section{Light Curve Analysis}
Different mechanisms have been proposed for the magnitude variations of pre-main-sequence stars, including the rotation of a star with cool or hot spots and the irregular variability of UXor stars \citep{her94}. However, none of these can explain the 2--2.5 mag variations within $\sim$10 days seen in GM Cep \citep{sic08}. Several comparable systems are suggested in \citet{sic08} as possible explanations of the variation, e.g. FUors, EXors, RW Aur, GW Ori, CW Tau, McNeil's Nebula, and KH 15D. KH 15D was ruled out by \citet{sic08}, as it could not explain the high luminosity of GM Cep; all the remaining systems share some common features with GM Cep, as stated in Section 1. \citet{sic08} concluded that the variability is probably dominated by strong increases in the accretion rate. With our long-term light curve, we can analyze the behavior of GM Cep over the past century and compare it against each of these possibilities.
From our data and Figures 1 and 2, we see that the magnitude varies between 13.7 and 16.4, with most of the measurements between 14.0 and 14.5. The light curve between 1938 and 1944 shows both rapid increases and decreases in magnitude (e.g. a $\sim$1.1 mag increase from Aug. 2, 1938 to Aug. 19, 1938, and a $\sim$1 mag decrease from Jul. 25, 1941 to Sep. 15, 1941), in agreement with what \citet{sic08} found. The same rapid variation is found in the AAVSO V band data, as shown in Figures 3 and 4.
We also checked for periodicity in our data; periodicity is expected if GM Cep is a binary system, with strong periodic modulations if the system is like KH 15D. With sufficient data now in hand, we can examine this possibility. First, we constructed a smoothed light curve in each band representing the long-term variation of GM Cep; subtracting this smoothed curve removes the long-timescale variations and lets us search for short-timescale flickering that might be periodic. We ran discrete Fourier transforms on both the V band data from AAVSO and the B band data from Harvard and Sonneberg, and found no significant period in the range 0.5 to 100 days. For the B band data, to further suppress the effect of the long-term variation (especially the dips, which are as large as 2 magnitudes), we selected a subsample of all points with magnitudes between 13.75 and 14.75, which excludes the dips, and another subsample covering 1935 to 1938, where the light curve is roughly constant before a dip (see Figure 2). The same discrete Fourier transforms on these subsamples likewise show no significant period in the range 0.5 to 100 days.
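The periodicity search described above can be sketched with a direct discrete Fourier transform suited to unevenly sampled plate epochs. The following is a minimal illustration, not the code actually used; the light curve, frequency grid, and noise level here are synthetic:

```python
import numpy as np

def dft_power(t, m, freqs):
    """Direct discrete Fourier transform power spectrum for
    unevenly sampled data (epochs t in days, magnitudes m)."""
    dm = m - m.mean()                      # remove the mean level
    # Power at each trial frequency: |sum dm * exp(-2*pi*i*f*t)|^2 / N
    phase = -2j * np.pi * np.outer(freqs, t)
    return np.abs(np.exp(phase) @ dm) ** 2 / len(t)

# Synthetic light curve: a 7-day sinusoidal modulation sampled at
# irregular epochs, mimicking archival plate coverage.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 300.0, 200))
m = 14.2 + 0.3 * np.sin(2 * np.pi * t / 7.0) + rng.normal(0, 0.05, t.size)

freqs = np.linspace(1.0 / 100.0, 1.0 / 0.5, 5000)  # periods 0.5-100 d
power = dft_power(t, m, freqs)
best_period = 1.0 / freqs[np.argmax(power)]
print(round(best_period, 1))  # recovers a period near 7 days
```

Because the plate epochs are irregular, the direct sum is evaluated on an explicit frequency grid rather than with an FFT; a significant period would appear as a dominant peak in `power`.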
We made histograms of the magnitude distribution in both bands, shown in Figure 5. If the variability is caused by accretion, the light curve will consist of episodic outbursts (`shots') superposed on a quiescent state, and the magnitude histogram will show a cut-off at higher magnitudes and an extended tail toward lower (brighter) magnitudes. If the variations are caused by changing extinction, the light curve will consist of dips superposed on a quiescent state of roughly constant magnitude, and the histogram will show a cut-off at lower magnitudes and an extended tail toward higher (fainter) magnitudes. The figure shows a cut-off at magnitude $\sim$14 and a long tail extending to 16.5. We do not see how episodic accretion flares could cause the system to spend most of its time in a nearly constant bright state: with multiple shots superposed (even of constant amplitude), overlapping shots should frequently make the light curve brighter than a single shot, producing a bright tail in the histogram. Since the histogram instead shows a tail toward the \textit{faint} side, we have an effective argument that the variation is not dominated by accretion flares.
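This histogram argument can be checked with a toy simulation (purely schematic; the shot and dip amplitudes and occurrence rates below are invented for illustration): brightening shots skew the magnitude distribution toward the bright (low-magnitude) side, while extinction dips skew it toward the faint side.

```python
import numpy as np

def skewness(x):
    """Sample skewness: positive means a tail toward larger values."""
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

rng = np.random.default_rng(1)
n = 100_000
quiescent = 14.2 + rng.normal(0, 0.05, n)   # flat quiescent state

# Accretion 'shots': occasional brightenings (magnitude decreases).
shots = quiescent - rng.exponential(0.5, n) * (rng.random(n) < 0.2)

# Extinction dips: occasional fadings (magnitude increases).
dips = quiescent + rng.exponential(0.5, n) * (rng.random(n) < 0.2)

print(skewness(shots) < 0)  # True: tail toward bright (low) magnitudes
print(skewness(dips) > 0)   # True: tail toward faint (high) magnitudes
```

The sign of the skewness thus discriminates between the two scenarios, matching the faint-side tail seen in Figure 5 for the extinction-dip case.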
Could the tail be caused by the detection thresholds of our plates? To test this, we checked all our data and sources. Threshold effects are most relevant for the patrol plates from Harvard, i.e. the Damon plates in our data set; all the series plates at Sonneberg and Harvard are deep enough to yield a GM Cep magnitude. We therefore made one magnitude histogram for data from the Sonneberg series plates only and one for data exclusively from the Harvard MC plates; these are shown in Figure 6. The Harvard MC histogram does not show any significant trend (although there are relatively few plates), while the Sonneberg histogram shows a cut-off at $\sim$14 mag and a significant extended tail, as found above. We conclude that the faint tail in the histogram is not caused by threshold effects on the plates.
We are now able to compare GM Cep with the possible mechanisms listed previously. The most obvious property of the light curve is that GM Cep has not undergone any substantial outburst since 1895, and the light curve itself indicates that the variability is due to dips caused by extinction rather than outbursts caused by accretion. This conclusion is confirmed by the magnitude histograms in both B and V bands. FUor stars are characterized by large outbursts, with typical rises of 5 magnitudes in a year or so \citep{har96}. EXor stars show recurrent bursts with amplitudes $> 2$ mag lasting $\leqslant 1$ yr \citep{her01}. Given our sampling and sensitivity, similar features in GM Cep could not have been missed; our long-term data thus rule out an association with FUor or EXor systems. McNeil's Nebula \citep{mcn04}, which shows EXor- or FUor-type eruptions, is also excluded by our light curve. Our data are compatible with more evolved T Tauri systems, whose magnitude variability is typically $\leqslant 0.4$ mag with no significant changes over many years \citep{gra07, gra08}; T Tauri systems are also likely to populate an old cluster such as Tr 37. However, ordinary T Tauri variability cannot produce the huge dips ($\sim 2$ mag) seen in the light curve. Certain unusual T Tauri stars were found to share spectral features with GM Cep \citep{sic08}: RW Aur, GW Ori, and CW Tau. We compared the light curves of these systems with that of GM Cep. RW Aur shows variations of $> 2$ mag, but no dips are found in its long-term light curve from either the literature \citep{ahn57, bec01, pet01} or AAVSO data. GW Ori shows variations of $< 0.2$ mag, with eclipses of $\sim 0.4$ mag \citep{she92, she98}, incompatible with GM Cep. For CW Tau, only a few data points are available in the literature, and no variation was found in long-term photometric monitoring \citep{wal99}.
As a result, none of the light curves of these three systems are compatible with GM Cep. The absence of periodic features in the light curve also excludes the possibility of GM Cep being a KH 15D type star.
\section{Summary}
In this paper, we present data on GM Cep from all available series archival plates and some patrol plates from the Harvard College Observatory and Sonneberg Observatory. We obtained 186 new magnitudes for GM Cep (176 in blue, 6 in visible, and 4 in red) spanning 1895 to 1993. Another 75 V band magnitudes were drawn from the AAVSO database. By combining our archival plate data, the AAVSO data, and the previously published data collected by \citet{sic08}, we constructed long-term B and V band light curves for GM Cep. The B band light curve shows a generally constant magnitude ($\sim$14-14.5) with occasional dips to $\sim$16.5. Fast variations are found in both the B and V band light curves.
The magnitude histograms in both B and V bands show cut-offs at the low-magnitude (bright) end and long extended tails at the high-magnitude (faint) end, which implies that the light curve consists of dips caused by varying extinction, rather than accretion-driven outbursts, superposed on a quiescent state. The lack of large outbursts over the past century implies that GM Cep is not a FUor or EXor star, nor a McNeil's Nebula type object. The lack of periodicity in the light curve also excludes the possibility of GM Cep being a KH 15D type star. Several special cases of T Tauri stars (RW Aur, GW Ori and CW Tau) were checked, but none of their light curves are compatible with that of GM Cep.
\acknowledgements
We thank the many observers and curators for the archival plate collections at Harvard College Observatory and at Sonneberg Observatory. The work in this paper would not be possible without their patient and hopeful work over the last century. We would like to thank Bradley Schaefer, Ashley Pagnotta and Andrew Collazzi for their help and useful discussions. We also thank the amateur astronomers of the AAVSO for providing the V magnitude values of GM Cep in 2006-2008.
\section{Introduction}\label{sec:Intro}
Understanding the physics of galaxy formation has been an active field of study ever since it was demonstrated that galaxies are stellar systems external to our own Milky Way. Modern galaxy formation theory grew out of early studies of cosmology and structure formation and is set within the cold dark matter cosmological model and therefore proceeds via a fundamentally hierarchical paradigm. Observational evidence and theoretical expectations indicate that galaxy formation is an ongoing process which has been occurring over the vast majority of the Universe's history. The goal of galaxy formation theory then is to describe how underlying physical principles give rise to the complicated set of phenomena which galaxies encompass.
Approaches to modelling the complex and non-linear processes of galaxy formation fall into two broad categories: direct hydrodynamical simulation and semi-analytic modelling. The division is of a somewhat fuzzy nature: semi-analytic models frequently make use of N-body simulation merger trees and calibrations from simulations, while simulations themselves are forced to include semi-analytical prescriptions for sub-resolution physics. The direct simulation approach has the advantage of, in principle, providing precise solutions (in the limit of a large number of particles and assuming that numerical artifacts are kept under control), but requires substantial investments of computing resources and is, at present (and for the foreseeable future), more fundamentally limited by our incomplete understanding of the various sub-resolution physical processes incorporated into it. The semi-analytical approach is less precise, but allows for rapid exploration of a wide range of galaxy properties for large, statistically useful samples. A primary goal of the semi-analytic approach is to develop insights into the process of galaxy formation that are comprehensible in terms of fundamental physical processes or emergent phenomena\footnote{A good example of an emergent phenomenon here is dynamical friction. Gravity (in the non-relativistic limit) is described entirely by $1/r^2$ forces and at this level makes no mention of frictional effects. The phenomenon of dynamical friction emerges from the interaction of large numbers of gravitating particles.}.
The problem is therefore one of complexity: can we tease out the underlying mechanisms that drive different aspects of galaxy formation and evolution from the numerous and complicated physical mechanisms at work? The key here is then ``understanding''. One can easily comprehend how a $1/r^2$ force works and can, by extrapolation, understand how this force applies to the billions of particles of dark matter in an N-body simulation. However, it is not directly obvious (at least not to these authors) how a $1/r^2$ force leads to the formation of complex filamentary structures and collapsed virialized objects. Instead, we have developed simplified analytic models (e.g. the Zel'dovich approximation, spherical top-hat collapse models, etc.) which explain these phenomena in terms more accessible to the human intellect. It seems that this is what we must strive for in galaxy formation theory---a set of analytic models that we can comprehend and which allow us to understand the physics, and a complementary set of precision numerical tools to allow us to determine the quantitative outcomes of that physics (in order to make precision tests of our understanding). As such, it is our opinion that no set of numerical simulations of galaxy formation, no matter how precise, will directly result in understanding. Instead, analytic methods, perhaps of an approximate nature, must always be developed (and, of course, checked against those numerical simulations) to allow us to understand galaxy formation.
Modern semi-analytic models of galaxy formation began with \cite{white_galaxy_1991}, drawing on earlier work by \cite{rees_cooling_1977} and \cite{white_core_1978}. Since then numerous studies \citep{kauffmann_formation_1993,cole_recipe_1994,baugh_semianalytic_1999,baugh_epoch_1998,somerville_semi-analytic_1999,cole_hierarchical_2000,benson_effects_2002,hatton_galics-_2003,monaco_morgana_2007} have extended and improved this original framework. Current semi-analytic models have been used to investigate many aspects of galaxy formation including:
\begin{itemize}
\item galaxy counts \pcite{kauffmann_faint_1994,devriendt_galaxy_2000};
\item galaxy clustering \pcite{diaferio_clustering_1999,kauffmann_clustering_1999,kauffmann_clustering_1999-1,baugh_modellingevolution_1999,benson_dependence_2000,benson_nature_2000,wechsler_galaxy_2001,blaizot_galics_2006};
\item galaxy colours and metallicities \pcite{kauffmann_chemical_1998,springel_populatingcluster_2001,lanzoni_galics-_2005,font_colours_2008,nagashima_metal_2005-1};
\item sub-mm and \ifthenelse{\equal{\arabic{IRDone}}{0}}{infrared (IR) \setcounter{IRDone}{1}}{IR}\ galaxies \pcite{guiderdoni_semi-analytic_1998,granato_infrared_2000,baugh_canfaint_2005,lacey_galaxy_2008};
\item abundance and properties of Local Group galaxies \pcite{benson_effects_2002-1,somerville_can_2002};
\item the reionization of the Universe \pcite{devriendt_contribution_1998,benson_non-uniform_2001,somerville_star_2003,benson_epoch_2006};
\item the heating of galactic disks \pcite{benson_heating_2004};
\item the properties of Lyman-break galaxies \pcite{governato_seeds_1998,blaizot_predicting_2003,blaizot_galics-_2004};
\item supermassive black hole formation and \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN} feedback \pcite{kauffmann_unified_2000,croton_many_2006,bower_breakinghierarchy_2006,malbon_black_2007,somerville_semi-analytic_2008,fontanot_many_2009};
\item damped Lyman-$\alpha$ systems \pcite{maller_damped_2001,maller_damped_2003};
\item the X-ray properties of galaxy clusters \pcite{bower_impact_2001,bower_flip_2008};
\item chemical enrichment of the ICM and \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ \pcite{de_lucia_chemical_2004,nagashima_metal_2005};
\item the formation histories and morphological evolution of galaxies \pcite{kauffmann_age_1996,de_lucia_formation_2006,fontanot_reproducingassembly_2007,somerville_explanation_2008}.
\end{itemize}
The goal of this approach is to provide a coherent framework within which the complex process of galaxy formation can be studied. Recognizing that our understanding of galaxy formation is far from complete these models should not be thought of as attempting to provide a ``final theory'' of galaxy formation (although that, of course, remains the ultimate goal), but instead to provide a means by which new ideas and insights may be tested and by which quantitative and observationally comparable predictions may be extracted in order to test current theories.
In order for these goals to be met we must endeavour to improve the accuracy and precision of such models and to include all of the physics thought to be relevant to galaxy formation. The complementary approach of direct numerical (N-body and/or hydrodynamic) simulation has the advantage that it provides high precision, but is significantly limited by computing power, resulting in the need for inclusion of semi-analytic recipes in such simulations. In any case, while a simulation of the entire Universe with infinite resolution would be impressive, the goal of the physicist is to understand Nature through relatively simple arguments\footnote{For example, while it is clear from N-body simulations that the action of $1/r^2$ gravitational forces in a \ifthenelse{\equal{\arabic{CDMDone}}{0}}{cold dark matter (CDM) \setcounter{CDMDone}{1}}{CDM}\ universe leads to dark matter halos with approximately \ifthenelse{\equal{\arabic{NFWDone}}{0}}{Navarro-Frenk-White (NFW) \setcounter{NFWDone}{1}}{NFW}\ density profiles, there is a clear drive to provide simple, analytic models to demonstrate that we understand the underlying physics of these profiles \protect\pcite{taylor_phase-space_2001,barnes_density_2007,barnes_velocity_2007}.}.
The most recent incarnation of the {\sc Galform}\ model was described by \cite{bower_breakinghierarchy_2006}. The major innovation of that work was the inclusion of feedback from \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ which allowed it to produce a very good match to the observed local luminosity functions of galaxies. In particular, the \cite{bower_breakinghierarchy_2006} model was designed to explain the phenomenon of ``downsizing''. While the \cite{bower_breakinghierarchy_2006} model turned out to also give a good match to several other datasets---including stellar mass functions at higher redshifts, the luminosity function at $z=3$ \pcite{marchesini_assessingpredictive_2007}, the abundance of $5<z<6$ galaxies \pcite{mclure_luminosity_2009}, overall colour bimodality \pcite{bower_breakinghierarchy_2006}, morphology \pcite{parry_galaxy_2009}, the global star formation rate and the black hole mass vs. bulge mass relation \pcite{bower_breakinghierarchy_2006}---it fails in several other areas, such as the mass-metallicity relation for galaxies, the sizes of galactic disks \pcite{gonzalez_testing_2009}, the small-scale clustering amplitude \pcite{seek_kim_modelling_2009}, the normalization and environmental dependence of galaxy colours \pcite{font_colours_2008} and the X-ray properties of groups and clusters \pcite{bower_parameter_2010}. Additionally, while the implementation of physics in semi-analytic models must always involve approximations, there are several aspects of the \cite{bower_breakinghierarchy_2006} model which call out for improvement and updating. Chief amongst these is the cooling model---crucial to the implementation of \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback---which retained assumptions about dark matter halo ``formation'' events that made implementing feedback physics difficult.
Our motivation for this work is therefore to attempt to rectify these shortcomings of the \cite{bower_breakinghierarchy_2006} model, by updating the physics of {\sc Galform}, removing unnecessary assumptions and approximations, and adding in new physics that is thought to be important for galaxy formation but which has previously been neglected in {\sc Galform}. In addition, we will systematically explore the available model parameter space to locate a model which agrees as well as possible with a wide range of observational constraints.
In this current work, we describe the advances made in the {\sc Galform}\ semi-analytic model over the past nine years. Our goal is to present a comprehensive model for galaxy formation that agrees as well as possible with current experimental constraints. In future papers we will utilize this model to explore and explain features of the galaxy population through cosmic history.
The remainder of this paper is structured as follows. In \S\ref{sec:Model} we describe the details of our revised {\sc Galform}\ model. In \S\ref{sec:Selection} we describe how we select a suitable set of model parameters. In \S\ref{sec:Results} we present some basic results from our model, while in \S\ref{sec:Effects} we explore the effects of certain physical processes on the properties of model galaxies. Finally, in \S\ref{sec:Discussion} we discuss their implications and in \S\ref{sec:Conclusions} we give our conclusions. Readers less interested in the technicalities of semi-analytic models and how they are constrained may wish to skip \S\ref{sec:Model}, \S\ref{sec:Selection} and most of \S\ref{sec:Results}, and jump directly to \S\ref{sec:Predictions} where we present two interesting predictions from our model and \S\ref{sec:Effects} in which we explore the effects of varying key physical processes.
\section{Model}\label{sec:Model}
In this section we provide a detailed description of our model.
\subsection{Starting Point}
The starting point for this discussion is \cite{cole_hierarchical_2000} and we will refer to that work for details which have not changed in the current implementation. We choose \cite{cole_hierarchical_2000} as a starting point for the technical description of our model as it represents the last point at which the details of the {\sc Galform}\ model were presented as a coherent whole in a single document. As noted in \S\ref{sec:Intro} however, the scientific predecessor of this work is that of \cite{bower_breakinghierarchy_2006}. That paper, and several others, introduced many improvements relative to \cite{cole_hierarchical_2000}, many of which are described in more detail here. A brief chronology of the development of {\sc Galform}\ from \cite{cole_hierarchical_2000} to the present is as follows:\\
\begin{itemize}
\item \cite{cole_hierarchical_2000}: Previous full description of the {\sc Galform}\ model.
\item \cite{granato_infrared_2000}: Detailed dust modelling utilizing {\sc Grasil}\ (see \S\ref{sec:DustModel}).
\item \cite{benson_non-uniform_2001}: Treatment of reionization and the evolution of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ (see \S\ref{sec:IGM}).
\item \cite{bower_impact_2001}: Treatment of heating and ejection of hot material from halos due to energy input (see \S\ref{sec:AGNFeedback}).
\item \cite{benson_effects_2002-1}: Back reaction of reionization and photoionizing background on galaxy formation (see \S\ref{sec:IGM}) and detailed treatment of satellite galaxy dynamics (a somewhat different approach to this is described in \S\ref{sec:Merging} and \S\ref{sec:Stripping}).
\item \cite{benson_what_2003}: Effects of thermal conduction on cluster cooling rates and ``superwind'' feedback from supernovae (described in further detail by \citealt{baugh_canfaint_2005}).
\item \cite{benson_heating_2004}: Heating of galactic disks by orbiting dark matter halos.
\item \cite{nagashima_metal_2005}: Detailed chemical enrichment models (incorporating delays and tracking of individual elements; see \S\ref{sec:NonInstGasEq}).
\item \cite{bower_breakinghierarchy_2006}: Feedback from \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ (see \S\ref{sec:AGNFeedback}).
\item \cite{malbon_black_2007}: Black hole growth (see \S\ref{sec:AGNFeedback}) as applied to the \cite{baugh_canfaint_2005}---see \cite{fanidakis_agn_2009} for a similar (and more advanced) treatment of black holes in the \cite{bower_breakinghierarchy_2006} model.
\item \cite{stringer_formation_2007}: Radially resolved structure of galactic disks.
\item \cite{font_colours_2008}: Ram pressure stripping of cold gas from galactic disks (see \S\ref{sec:Stripping}).
\end{itemize}
\subsection{Executive Summary}
Having developed these treatments of various physical processes one-by-one, our intention is to integrate them into a single baseline model. In addition to the accumulation of many of these improvements (many of which have not previously been utilized simultaneously), the two major modifications to the {\sc Galform}\ model introduced in this work are:\\
\begin{itemize}
\item The removal of discrete ``formation'' events for dark matter halos (which previously occurred each time a halo doubled in mass and caused calculations of cooling and merging times to be reset). This has facilitated a major change in the {\sc Galform}\ cooling model which previously made fundamental reference to these formation events.
\item The inclusion of arbitrarily deep levels of subhalos within subhalos and, as a consequence, the possibility of mergers between satellite galaxies.
\end{itemize}
Aspects of the model that are essentially unchanged from \cite{cole_hierarchical_2000} are listed in \S\ref{sec:Unchanged}. Before launching into the detailed discussion of the model, \S\ref{sec:Changes} provides a quick overview of what has changed between \cite{cole_hierarchical_2000} and the current implementation. In addition to changes to the physics of the model, the {\sc Galform}\ code has been extensively optimized and made OpenMP parallel to permit rapid calculation of self-consistent galaxy/\ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ evolution (see \S\ref{sec:IGM}).
\subsection{Unchanged Aspects}\label{sec:Unchanged}
Below we list aspects of the current implementation of {\sc Galform}\ that are unchanged relative to that published in \cite{cole_hierarchical_2000}.
\begin{itemize}
\item \emph{Virial Overdensities:} Virial overdensities of dark matter halos are computed as described by \cite{cole_hierarchical_2000}, i.e. using the spherical top-hat collapse model for the appropriate cosmology and redshift. Given the mass and virial overdensity of each halo the corresponding virial radii and velocities are easily computed.
\item \emph{Star Formation Rate:} The star formation rate in disk galaxies is given by
\begin{equation}
\dot{\phi} = M_{\rm cold}/\tau_\star \hbox{ where } \tau_\star = \epsilon_\star^{-1} \tau_{\rm disk} (V_{\rm disk}/200\hbox{km s}^{-1})^{\alpha_\star},
\end{equation}
where $M_{\rm cold}$ is the mass of cold gas in the disk, $\tau_{\rm disk}=r_{\rm disk}/V_{\rm disk}$ is the dynamical time of the disk at the half mass radius $r_{\rm disk}$ and $V_{\rm disk}$ is the circular velocity of the disk at that radius. The two parameters $\epsilon_\star$ and $\alpha_\star$ control the normalization of the star formation rate and its scaling with galaxy circular velocity respectively.
\item \emph{Mergers/Morphological Transformation:} The classification of merger events as minor or major follows the logic of \citenote{cole_hierarchical_2000}{\S4.3.2}. However, the rules which determine when a burst of star formation occurs are altered to become:
\begin{itemize}
\item Major merger?
\begin{itemize}
\item Requires $M_{\rm sat}/M_{\rm cen}>f_{\rm burst}$.
\end{itemize}
\item Minor merger?
\begin{itemize}
\item Requires $\left\{\begin{array}{c}M_{\rm cen(bulge)}/M_{\rm cen} < {\rm B/T}_{\rm burst} \\ {\it and} \\ M_{\rm cen(cold)}/M_{\rm cen} \ge f_{\rm gas,burst}.\end{array}\right.$
\end{itemize}
\end{itemize}
where $M_{\rm cen}$ and $M_{\rm sat}$ are the baryonic masses of the central and satellite galaxies involved in the merger respectively, $M_{\rm cen(bulge)}$ is the mass of the bulge component in the central galaxy and $f_{\rm burst}$, $f_{\rm gas,burst}$ and B/T$_{\rm burst}$ are parameters of the model. The parameter B/T$_{\rm burst}$ is intended to inhibit minor merger-triggered bursts in systems that are primarily spheroid dominated (since we may expect that in such systems the minor merger cannot trigger the same instabilities as it would in a disk dominated system and would therefore be unable to drive inflows of gas to the central regions to fuel a burst). We would expect that the value of this parameter should be of order unity (i.e. the system should be spheroid dominated in order that the burst triggering be inhibited).
\item \emph{Spheroid Sizes:} The sizes of spheroids formed through mergers are computed using the approach described by \citenote{cole_hierarchical_2000}{\S4.4.2}.
\item \emph{Calculation of Luminosities:} The luminosities and magnitudes of galaxies are computed from their known stellar populations as described by \citenote{cole_hierarchical_2000}{\S5.1}. (Although note that the treatment of dust extinction has changed; see \S\ref{sec:DustModel}.)
\end{itemize}
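The burst-triggering rules quoted above can be transcribed directly. The following is a schematic sketch of our reading of those rules, not the actual {\sc Galform}\ implementation; the variable names are ours, and the parameter values in the example are illustrative rather than the fitted ones:

```python
def merger_triggers_burst(is_major, m_sat, m_cen, m_cen_bulge,
                          m_cen_cold, f_burst, bt_burst, f_gas_burst):
    """Decide whether a merger triggers a burst of star formation,
    following the rules in the text.  Masses are baryonic: satellite,
    central galaxy, the central's bulge, and its cold gas."""
    if is_major:
        # Major merger: burst requires M_sat/M_cen > f_burst.
        return m_sat / m_cen > f_burst
    # Minor merger: burst requires a disk-dominated, gas-rich central.
    return (m_cen_bulge / m_cen < bt_burst
            and m_cen_cold / m_cen >= f_gas_burst)

# Illustrative parameter values (not the fitted ones):
print(merger_triggers_burst(True, 3.0, 10.0, 1.0, 3.0,
                            f_burst=0.25, bt_burst=1.0,
                            f_gas_burst=0.1))  # True: ratio 0.3 > 0.25
```

The classification of a merger as major or minor itself follows \citenote{cole_hierarchical_2000}{\S4.3.2} and is taken here as an input flag rather than re-derived.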
\subsection{Overview of Changes}\label{sec:Changes}
We list below the changes in the current implementation of {\sc Galform}\ relative to that published in \cite{cole_hierarchical_2000}. These are divided into ``minor changes'', which are typically simple updates of fitting formulas, and ``major changes'', which are significant additions to or modifications of the physics and structure of the model.
\subsubsection{Minor changes}\label{sec:MinorChanges}
\begin{itemize}
\item \emph{Dark matter halo mass function:} [See \S\ref{sec:HaloMassFunction}] \cite{cole_hierarchical_2000} use the \cite{press_formation_1974} mass function for dark matter halos. In this work, we use the more recent determination of \cite{reed_halo_2007} which is calibrated against N-body simulations over a wide range of masses and redshifts.
\item \emph{Dark matter merger trees:} [See \S\ref{sec:MergerTrees}] \cite{cole_hierarchical_2000} use a binary split algorithm utilizing halo merger rates inferred from the extended Press-Schechter formalism \pcite{lacey_merger_1993}. We use an empirical modification of this algorithm proposed by \cite{parkinson_generating_2008} and which provides a much more accurate match to progenitor halo mass functions as measured in N-body simulations.
\item \emph{Density profile of dark matter halos:} [See \S\ref{sec:HaloProfiles}] \cite{cole_hierarchical_2000} employed \ifthenelse{\equal{\arabic{NFWDone}}{0}}{Navarro-Frenk-White (NFW) \setcounter{NFWDone}{1}}{NFW}\ \pcite{navarro_universal_1997} density profiles. We instead use Einasto density profiles \pcite{einasto__1965} consistent with recent findings (\citealt{navarro_inner_2004}; \citealt{merritt_universal_2005}; \citealt{prada_far_2006}).
\item \emph{Density and angular momentum of halo gas:} [See \S\ref{sec:HotGasDist}] \cite{cole_hierarchical_2000} adopted a cored isothermal profile for the hot gas in dark matter halos and furthermore assumed a solid body rotation, normalizing the rotation speed to the total angular momentum of the gas (which was assumed to have the same average specific angular momentum as the dark matter). We choose to adopt the density and angular momentum distributions measured in hydrodynamical simulations by \cite{sharma_angular_2005}.
\item \emph{Dynamical friction timescales:} [See \S\ref{sec:DynFric}] \cite{cole_hierarchical_2000} estimated dynamical friction timescales using the expression derived by \cite{lacey_merger_1993} for isothermal dark matter halos and the distribution of orbital parameters found by \cite{tormen_rise_1997}. In this work, we adopt the fitting formula of \cite{jiang_fitting_2008} to compute dynamical friction timescales and the orbital parameter distribution of \cite{benson_orbital_2005}.
\item \emph{Disk stability:} We retain the same test of disk stability as did \citet{cole_hierarchical_2000} and similarly assume that unstable disks undergo bursts of star formation resulting in the formation of a spheroid\footnote{While the implementation of this physical process is unchanged, \protect\cite{cole_hierarchical_2000} actually ignored this process in their fiducial model, while we include it in our work.}. One slight difference is that we assume that the instability occurs at the largest radius for which the disk is deemed to be unstable rather than at the rotational support radius as \citet{cole_hierarchical_2000} assumed. This prevents galaxies with very low angular momenta from contracting to extremely small sizes (and thereby becoming very highly self-gravitating and unstable) before the stability criterion is tested. Additionally, we allow for different stability thresholds for gaseous and stellar disks. We employ the stability criterion of \cite{efstathiou_stability_1982} such that disks require
\begin{equation}
{V_{\rm d} \over ({\rm G} M_{\rm d}/R_{\rm s})^{1/2}} > \epsilon_{\rm d},
\end{equation}
to be stable, where $V_{\rm d}$ is the disk rotation speed at the half-mass radius, $M_{\rm d}$ is the disk mass and $R_{\rm s}$ is the disk radial scale length. \cite{efstathiou_stability_1982} found that a value of $\epsilon_{\rm d,\star}=1.1$ was applicable for purely stellar disks. \cite{christodoulou_new_1995} demonstrate that an equivalent result for gaseous disks gives $\epsilon_{\rm d,gas}=0.9$. We choose to make $\epsilon_{\rm d,gas}$ a free parameter of the model and enforce $\epsilon_{\rm d,\star}=\epsilon_{\rm d,gas}+0.2$. For disks containing a mixture of stars and gas we linearly interpolate between $\epsilon_{\rm d,\star}$ and $\epsilon_{\rm d,gas}$ using the gas fraction as the interpolating variable. As has been recently pointed out by \cite{athanassoula_disc_2008}, this treatment of the process of disk destabilization, similar to that in other semi-analytic models, is dramatically oversimplified. As \cite{athanassoula_disc_2008} also describes, a more realistic model would need both a much more careful assessment of the disk stability and a consideration of the process of bar formation. This currently remains beyond the ability of our model to address, although it should clearly be a priority area for improvement in semi-analytic models. In {\sc Galform}, we can consider an alternative disk instability treatment in which during an instability event only just enough mass is transferred from the disk to the spheroid component to restabilize the disk. While this does not explore the full range of uncertainties arising from the treatment of this process, it gives at least some idea of how significant they may be. We find that the net result of switching to the alternative treatment of instabilities is to slightly increase the number of bulgeless galaxies at all luminosities, with a corresponding decrease in the numbers of intermediate and pure spheroid galaxies.
The changes, however, do not alter the qualitative trends of morphological mix with luminosity nor global properties of galaxies such as sizes and luminosity functions at $z=0$. At higher redshifts (e.g. $z\ge5$), the change is more significant, with a reduction in star formation rate by a factor of 2--3 resulting from the lowered frequency of bursts of star formation. This change could be offset by adjustments in other parameters, but demonstrates the need for a refined understanding and modelling of the disk instability process in semi-analytic models.
\item \emph{Sizes of galaxies:} [See \S\ref{sec:Sizes}] Sizes of disks and spheroids are determined as described by \citet{cole_hierarchical_2000}, although the equilibrium is solved for in the potential corresponding to an Einasto density profile (used throughout this work) rather than the \ifthenelse{\equal{\arabic{NFWDone}}{0}}{Navarro-Frenk-White (NFW) \setcounter{NFWDone}{1}}{NFW}\ profiles assumed by \citet{cole_hierarchical_2000}, and adiabatic contraction is computed using the methods of \cite{gnedin_response_2004} rather than that of \cite{blumenthal_contraction_1986}.
\end{itemize}
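For illustration, the disk stability test described above can be sketched as follows. This is a minimal sketch, not {\sc Galform}\ code: the function names are ours, the default thresholds are the quoted values of \cite{efstathiou_stability_1982} and \cite{christodoulou_new_1995}, and the example masses and radii are invented.

```python
# Sketch of the disk stability criterion; names and example values are
# illustrative, not taken from the Galform implementation.
G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / Msun

def stability_epsilon(gas_fraction, eps_star=1.1, eps_gas=0.9):
    # Linear interpolation between the stellar (Efstathiou et al. 1982)
    # and gaseous (Christodoulou et al. 1995) thresholds, using the
    # gas fraction as the interpolating variable.
    return gas_fraction * eps_gas + (1.0 - gas_fraction) * eps_star

def disk_is_stable(v_disk, m_disk, r_scale, gas_fraction):
    # Stable if V_d / sqrt(G M_d / R_s) exceeds the threshold.
    return v_disk / (G * m_disk / r_scale) ** 0.5 > stability_epsilon(gas_fraction)
```

A purely stellar disk (`gas_fraction = 0`) recovers the threshold of 1.1, a purely gaseous disk recovers 0.9, and mixed disks lie in between.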
While we class the above as minor changes, the effects of some of these changes can be significant in the sense that reverting to the previous implementation would change some model predictions by an amount comparable to or greater than the uncertainties in the relevant observational data. However, none of these modifications lead to fundamental changes in the behaviour of the model and their effects could all be counteracted by small adjustments in model parameters. This is why we classify them as ``minor'' and do not explore their consequences in any greater detail.
\subsubsection{Major changes}
\begin{itemize}
\item \emph{Spins of dark matter halos:} [See \S\ref{sec:HaloSpins}] In \cite{cole_hierarchical_2000} spins of dark matter halos were assigned randomly by drawing from the distribution of \cite{cole_structure_1996}. In this work, we implement an updated version of the approach described by \cite{vitvitska_origin_2002} to produce spins correlated with the merging history of the halo and consistent with the distribution measured by \cite{bett_spin_2007}.
\item \emph{Removal of discrete formation events:} [See \S\ref{sec:FormEvents}] The discrete ``formation'' events (associated with mass doublings) in merger trees, which \cite{cole_hierarchical_2000} utilized to reset cooling and merging calculations, have been removed. Instead, cooling, merging and other processes related to the merger tree evolve smoothly as the tree grows.
\item \emph{Cooling model:} [See \S\ref{sec:Cooling}] The cooling model has been revised to remove the dependence on halo formation events, to allow for gradual recooling of gas ejected by feedback, and to account for molecular hydrogen cooling, Compton cooling and heating from a photon background.
\item \emph{Ram pressure and tidal stripping:} [See \S\ref{sec:Stripping}] Ram pressure and tidal stripping of hot halo gas, and of stars and \ISM\ gas in galaxies, are now accounted for.
\item \emph{\ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ interaction:} [See \S\ref{sec:IGM}] Galaxy formation is solved simultaneously with the evolution of the intergalactic medium in a self-consistent way: emission from galaxies and \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ ionizes and heats the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}, which in turn suppresses the formation of future generations of galaxies.
\item \emph{Full hierarchy of subhalos:} [See \S\ref{sec:Merging}] All levels of the substructure hierarchy (i.e. subhalos, sub-subhalos, sub-sub-subhalos\ldots) are included in calculations of merging. This allows for satellite-satellite mergers.
\item \emph{Non-instantaneous recycling:} [See \S\ref{sec:NonInstGasEq}] The instantaneous recycling approximation for mass loss, chemical enrichment and feedback has been dropped and the full time- and metallicity-dependences included. All models presented in this work utilize fully non-instantaneous recycling, metal production and supernovae feedback.
\end{itemize}
\subsection{Dark Matter Halos}
\subsubsection{Mass Function}\label{sec:HaloMassFunction}
We assume that the masses of dark matter halos at any given redshift are distributed according to the mass function found by \cite{reed_halo_2007}. Specifically, the mass function is given by
\begin{eqnarray}
{{\rm d} n \over{\rm d}\ln M_{\rm v}} &=& \sqrt{2\over\pi} {\Omega_0\rho_{\rm crit}\over M_{\rm v}} \left|{{\rm d}\ln\sigma\over{\rm d}\ln M}\right| \nonumber \\
& & \times [1+1.047(\omega^{-2p})+0.6G_1+0.4G_2] A^\prime \omega \nonumber \\
& & \times \exp\left(-{1\over 2} \omega^2-0.0325 {\omega^{2p}\over (n_{\rm eff}+3)^2}\right),
\label{eq:HaloMF}
\end{eqnarray}
where ${\rm d} n /{\rm d}\ln M_{\rm v}$ is the number of halos with virial mass $M_{\rm v}$ per unit volume per unit logarithmic interval in $M_{\rm v}$, $\sigma(M)$ is the fractional mass root-variance in the linear density field in top-hat spheres containing, on average, mass $M$, $\delta_{\rm c}(z)$ is the critical overdensity for spherical top-hat collapse at redshift $z$ \pcite{eke_cluster_1996},
\begin{eqnarray}
n_{\rm eff} &=& -6{{\rm d}\ln\sigma\over{\rm d}\ln M}-3, \\
\omega &=& \sqrt{ca} {\delta_{\rm c}(z) \over \sigma}, \\
G_1 &=& \exp\left(-{1\over 2}\left[{(\log\omega-0.788)\over 0.6}\right]^2 \right), \\
G_2 &=& \exp\left(-{1\over 2}\left[{(\log\omega-1.138)\over 0.2}\right]^2 \right),
\end{eqnarray}
$A^\prime = 0.310$, $ca = 0.764$ and $p=0.3$ as in eqns.~(11) and (12) of \cite{reed_halo_2007}\footnote{With minor corrections to the published version (Reed, private communication).}. The mass variance, $\sigma^2(M)$, is computed using the cold dark matter transfer function of \cite{eisenstein_power_1999} together with a scale-free primordial power spectrum of slope $n_{\rm s}$ and normalization $\sigma_8$.
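The fitting formula above can be sketched directly in code. The sketch below is ours, not part of {\sc Galform}: $\sigma(M)$ and its logarithmic slope must be supplied by the caller (e.g. from the \cite{eisenstein_power_1999} transfer function), and we read $\log$ in $G_1$ and $G_2$ as $\log_{10}$.

```python
import numpy as np

A_PRIME, CA, P = 0.310, 0.764, 0.3  # parameters quoted above

def reed_mass_function(m_v, sigma, dlnsigma_dlnm, delta_c, omega0, rho_crit):
    # dn/dln M_v of the Reed et al. (2007) fit.  sigma(M) and
    # dlnsigma/dlnM are assumed inputs from a separate power-spectrum
    # calculation; log in G1 and G2 is taken as log10 here.
    n_eff = -6.0 * dlnsigma_dlnm - 3.0
    omega = np.sqrt(CA) * delta_c / sigma
    g1 = np.exp(-0.5 * ((np.log10(omega) - 0.788) / 0.6) ** 2)
    g2 = np.exp(-0.5 * ((np.log10(omega) - 1.138) / 0.2) ** 2)
    return (np.sqrt(2.0 / np.pi) * omega0 * rho_crit / m_v
            * abs(dlnsigma_dlnm)
            * (1.0 + 1.047 * omega ** (-2.0 * P) + 0.6 * g1 + 0.4 * g2)
            * A_PRIME * omega
            * np.exp(-0.5 * omega ** 2
                     - 0.0325 * omega ** (2.0 * P) / (n_eff + 3.0) ** 2))
```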
When constructing samples of dark matter halos we compute the number of halos, $N_{\rm halo}$, expected in some volume $V$ of the Universe within a logarithmic mass interval, $\Delta\ln M_{\rm v}$, according to this mass function, requiring that the number of halos in the interval never exceeds $N_{\rm max}$ and is never less than $N_{\rm min}$ to ensure a fair sample. We then generate halo masses at random using a Sobol' sequence \pcite{sobol__1967} drawn from a distribution which produces, on average, $N_{\rm halo}$ halos in each interval. This ensures a quasi-random, fair sampling of halos of all masses with no quantization of halo mass and with sub-Poissonian fluctuations in the number of halos in any mass interval.
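A toy version of this quasi-random sampling can be sketched as follows. This is a simplification of the procedure described above: we use a fixed number of halos per logarithmic bin, whereas the full calculation would set the per-bin count from the mass function (clipped to $[N_{\rm min},N_{\rm max}]$); the function name is ours.

```python
import numpy as np
from scipy.stats import qmc

def sample_halo_masses(ln_m_min, ln_m_max, n_bins, n_per_bin):
    # Stratify the logarithmic mass range into bins and place Sobol'
    # points uniformly in ln M within each bin, giving quasi-random,
    # sub-Poissonian sampling with no quantization of halo mass.
    sobol = qmc.Sobol(d=1, scramble=False)
    edges = np.linspace(ln_m_min, ln_m_max, n_bins + 1)
    masses = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        u = sobol.random(n_per_bin)[:, 0]          # quasi-random in [0, 1)
        masses.append(np.exp(lo + u * (hi - lo)))  # uniform in ln M
    return np.concatenate(masses)
```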
\subsubsection{Merger Trees}\label{sec:MergerTrees}
Dark matter halo merger trees, which describe the hierarchical growth of structure in a cold dark matter universe, form the backbone of our model within which the process of galaxy formation proceeds. Merger trees are either constructed through a variant of the extended Press-Schechter Monte Carlo methodology, or are extracted from N-body simulations.
When constructing trees using Monte Carlo methods, we employ the merger tree algorithm described by \cite{parkinson_generating_2008} which is itself an empirical modification of that described by \cite{cole_hierarchical_2000}. We adopt the parameters $(G_0,\gamma_1,\gamma_2)=(0.57,0.38,-0.01)$ that \cite{parkinson_generating_2008} found provided the best fit\footnote{\protect\cite{benson_constraining_2008} found an alternative set of parameters which provided a better match to the evolution of the overall halo mass function but performed slightly less well (although still quite well) for the progenitor halo mass functions. We have chosen to use the parameters of \protect\cite{parkinson_generating_2008} as for the properties of galaxies we wish to get the progenitor masses as correct as possible.} to the statistics of halo progenitor masses measured from the Millennium Simulation by \cite{cole_statistical_2008}. We typically use a mass resolution (i.e. the lowest mass halo which we trace in our trees) of $5\times 10^9h^{-1}M_\odot$, which is sufficient to achieve resolved galaxy properties for all of the calculations considered in this work. An exception is when we consider Local Group satellites (see \S\ref{sec:LocalGroup}), for which we instead use a mass resolution of $10^7h^{-1}M_\odot$. Figure~\ref{fig:MCTrees} shows the resulting dark matter halo mass functions at several different redshifts and demonstrates that they are in good agreement with that expected from eqn.~(\ref{eq:HaloMF}).
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 55mm 200mm 245mm,clip]{Plots/Progenitor_MF.pdf}
\caption{The dark matter halo mass function is shown at a number of different redshifts. Solid lines indicate the mass function expected from eqn.~(\protect\ref{eq:HaloMF}) while points with error bars indicate the mass function constructed using merger trees from our model. The trees in question were initiated at $z=0$ and grown back to higher redshifts using the methods of \protect\cite{parkinson_generating_2008}.}
\label{fig:MCTrees}
\end{figure}
\subsubsection{(Lack of) Halo Formation Events}\label{sec:FormEvents}
\cite{cole_hierarchical_2000} identified certain halos in each dark matter merger tree as being newly formed. ``Formation'' in this case corresponded to the point where a halo had doubled in mass since the previous formation event. The characteristic circular velocity and spin of halos were held fixed between formation events and the time available for hot gas in a halo to cool was measured from the most recent formation event (such that the cooling radius was reduced to zero at each formation event). Additionally, any gas ejected by feedback was only allowed to begin recooling after a formation event, and any satellite halos that had not yet merged with the central galaxy of their host halo were assumed to have their orbits randomized by the formation event and consequently their merger timescales were reset.
While computationally useful, these formation events lack any solid physical basis. As such, we have excised them from our current implementation of {\sc Galform}. Halo properties (virial velocity and spin) now change at each timestep in response to mass accretion. Additionally, the cooling and merging calculations no longer make use of formation events (see \S\ref{sec:Cooling} and \S\ref{sec:Merging} respectively).
\subsubsection{Density Profiles}\label{sec:HaloProfiles}
Recent N-body studies \pcite{navarro_inner_2004,merritt_universal_2005,prada_far_2006} indicate that the density profiles of dark matter halos in \ifthenelse{\equal{\arabic{CDMDone}}{0}}{cold dark matter (CDM) \setcounter{CDMDone}{1}}{CDM}\ cosmologies are better described by the Einasto profile \pcite{einasto__1965} than the \ifthenelse{\equal{\arabic{NFWDone}}{0}}{Navarro-Frenk-White (NFW) \setcounter{NFWDone}{1}}{NFW}\ profile \pcite{navarro_universal_1997}. As such, we use the Einasto density profile,
\begin{equation}
\rho(r) = \rho_{-2} \exp\left( -{2\over \alpha} \left[ \left({r\over r_{-2}}\right)^\alpha- 1 \right] \right),
\end{equation}
where $r_{-2}$ is a characteristic radius at which the logarithmic slope of the density profile equals $-2$ and $\alpha$ is a parameter which controls how rapidly the logarithmic slope varies with radius. To fix the value of $\alpha$ we adopt the fitting formula of \cite{gao_redshift_2008}, truncated so that $\alpha$ never exceeds $0.3$,
\begin{equation}
\alpha = \left\{ \begin{array}{ll} 0.155 + 0.0095\nu^2 & \hbox{if } \nu < 3.907 \\ 0.3 & \hbox{if } \nu \ge 3.907,\end{array} \right.
\end{equation}
where $\nu=\delta_{\rm c}(a)/\sigma(M)$; this fit is a good match to halos in the Millennium Simulation\footnote{\protect\cite{gao_redshift_2008} were not able to probe the behaviour of $\alpha$ in the very high $\nu$ regime. Extrapolating their formula to $\nu > 4$ is not justified and we instead choose to truncate it at a maximum of $\alpha=0.3$.}. The value of $r_{-2}$ for each halo is determined from the known virial radius, $r_{\rm v}$, and the concentration, $c_{-2}\equiv r_{\rm v}/r_{-2}$. Concentrations are computed using the method of \cite{navarro_universal_1997} but with the best-fit parameters found by \cite{gao_redshift_2008}.
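The Einasto profile and the truncated $\alpha(\nu)$ fit can be sketched as follows (function names are ours; the normalization is chosen so that $\rho(r_{-2})=\rho_{-2}$, as in the equation above):

```python
import numpy as np

def einasto_alpha(nu):
    # Gao et al. (2008) fit for the shape parameter, truncated at 0.3
    # for nu >= 3.907.
    return np.where(nu < 3.907, 0.155 + 0.0095 * nu ** 2, 0.3)

def einasto_density(r, rho_m2, r_m2, alpha):
    # Einasto profile; by construction the logarithmic slope
    # dln(rho)/dln(r) = -2 (r/r_m2)^alpha equals -2 at r = r_m2.
    return rho_m2 * np.exp(-(2.0 / alpha) * ((r / r_m2) ** alpha - 1.0))
```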
Various integrals over the density and mass distribution are needed to compute the enclosed mass, angular momentum, velocity dispersion, gravitational energy and so on of the Einasto profile. Some of these may be expressed analytically in terms of incomplete gamma functions \pcite{cardone_spherical_2005}. Expressions for the mass and gravitational potential are provided by \cite{cardone_spherical_2005}. One other integral, the angular momentum of material interior to some radius, can also be found analytically:
\begin{eqnarray}
J(r) &=& \pi^2 V_{\rm rot} \int_0^r r^{\prime (3+\alpha_{\rm rot})} \rho(r^\prime) {\rm d} r^\prime \nonumber \\
&=&\pi^2 V_{\rm rot} \rho_{-2} r_{-2}^{4+\alpha_{\rm rot}} {{\rm e}^{2/\alpha}\over \alpha} \left({\alpha\over 2}\right)^{(4+\alpha_{\rm rot})/\alpha} \nonumber \\
& & \times\Gamma\left({4+\alpha_{\rm rot}\over \alpha},{2 (r/r_{-2})^\alpha\over\alpha}\right),
\end{eqnarray}
where the specific angular momentum at radius $r$ is assumed to be $r V_{\rm rot} (r/r_{\rm v})^{\alpha_{\rm rot}}$ and $\Gamma$ is the lower incomplete gamma function. Other integrals (e.g. gravitational energy) are computed numerically as needed.
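As a numerical cross-check, the closed form (with the exponent on $\alpha/2$ read as $(4+\alpha_{\rm rot})/\alpha$, as required for term-by-term consistency with the integral in the first line) can be compared against direct quadrature; the parameter values below are illustrative, not from the text:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

# Illustrative parameter values (ours):
alpha, a_rot = 0.3, 1.0
rho2, r2, v_rot, r = 1.0, 1.0, 1.0, 2.0

def density(x):
    return rho2 * np.exp(-(2.0 / alpha) * ((x / r2) ** alpha - 1.0))

# Direct quadrature of J(r) = pi^2 V_rot \int_0^r r'^(3+a_rot) rho dr'.
j_num = np.pi ** 2 * v_rot * quad(lambda x: x ** (3 + a_rot) * density(x), 0.0, r)[0]

# Closed form; SciPy's gammainc is the *regularized* lower incomplete
# gamma function, so we multiply by gamma(s) to unregularize it.
s = (4.0 + a_rot) / alpha
u = 2.0 * (r / r2) ** alpha / alpha
j_ana = (np.pi ** 2 * v_rot * rho2 * r2 ** (4.0 + a_rot)
         * np.exp(2.0 / alpha) / alpha * (alpha / 2.0) ** s
         * gammainc(s, u) * gamma(s))
```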
\subsubsection{Angular momentum}\label{sec:HaloSpins}
As first suggested by \cite{hoyle_origin_1949}, and developed further by \cite{doroshkevich_space_1970}, \cite{peebles_origin_1969} and \cite{white_angular_1984}, the angular momentum of a dark matter halo arises from tidal torques exerted by surrounding large-scale structure and is usually characterized by the dimensionless spin parameter,
\begin{equation}
\lambda\equiv {J_{\rm v}|E_{\rm v}|^{1/2}\over{\rm G} M_{\rm v}^{5/2}},
\label{eq:lambdaSpinDef}
\end{equation}
where $J_{\rm v}$ is the angular momentum of the halo and $E_{\rm v}$ its energy (gravitational plus kinetic). The distribution of $\lambda$ has been measured numerous times from N-body simulations \pcite{barnes_angular_1987,efstathiou_gravitational_1988,warren_dark_1992,cole_structure_1996,lemson_environmental_1999} and found to be reasonably well approximated by a log-normal distribution. More recent estimates by \cite{bett_spin_2007} using the Millennium Simulation show a somewhat different form for this distribution:
\begin{equation}
P(\lambda) \propto \left({\lambda\over\lambda_0}\right)^3 \exp\left[-\alpha_\lambda\left({\lambda\over\lambda_0}\right)^{3/\alpha_\lambda}\right],
\end{equation}
where $\alpha_\lambda=2.509$ and $\lambda_0=0.04326$ are parameters.
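Since the distribution is quoted only up to a proportionality constant, the normalization must be fixed numerically; a sketch (function names ours; the distribution is negligible beyond $\lambda\sim1$, so a finite upper limit suffices):

```python
import numpy as np
from scipy.integrate import quad

ALPHA_L, LAMBDA_0 = 2.509, 0.04326  # Bett et al. (2007) parameters

def spin_pdf_unnorm(lam):
    # Unnormalized form of P(lambda) quoted above.
    x = lam / LAMBDA_0
    return x ** 3 * np.exp(-ALPHA_L * x ** (3.0 / ALPHA_L))

# Fix the proportionality constant by normalizing numerically;
# P(lambda) is negligible beyond lambda ~ 1.
NORM = quad(spin_pdf_unnorm, 0.0, 1.0)[0]

def spin_pdf(lam):
    return spin_pdf_unnorm(lam) / NORM
```

Setting ${\rm d}P/{\rm d}\lambda=0$ shows the mode sits at $\lambda=\lambda_0$, which the normalized form reproduces.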
\cite{cole_hierarchical_2000} assigned spins to dark matter halos by drawing them at random from the distribution of \cite{cole_structure_1996}. This approach has the disadvantage that spin is not influenced by the merging history of a given dark matter halo and, furthermore, spin can vary dramatically from one timestep to the next even if a halo experiences no (or only very minor) merging. This was not a problem for \cite{cole_hierarchical_2000}, who made use of the spin of each newly formed halo, ignoring any variation between formation events\footnote{As it seems reasonable to assume that the spins of a halo at two successive formation events, i.e. separated by a factor of two in halo mass, would be only weakly correlated.}. However, in our case, such behaviour would be problematic. We therefore revisit an idea first suggested by \citeauthor{vitvitska_origin_2002}~(\citeyear{vitvitska_origin_2002}; see also \citealt{maller_modelling_2002}). They followed the contribution to the angular momentum of each halo from its progenitor halos (which carry angular momentum in both their internal spin and orbit). Note that the angular momentum still arises via tidal torques (which are responsible for the orbital angular momenta of merging halos).
Halos in the merger tree which have no progenitors are assigned a spin by drawing at random from the distribution of \cite{bett_spin_2007}. For halos with progenitors, we proceed as follows:
\begin{enumerate}
\item Compute the internal angular momenta of all progenitor halos using their previously assigned spin and eqn.~(\ref{eq:lambdaSpinDef});
\item Select orbital parameters (specifically the orbital angular momentum) for each merging pair of progenitors by drawing at random from the distribution found by \cite{benson_orbital_2005};
\item Sum the internal and orbital angular momenta of all progenitors assuming no correlations between the directions of these vectors\footnote{Additionally, we are assuming that mass accretion below the resolution of the merger tree contributes the same mean specific angular momentum as accretion above the resolution.};
\item Determine the spin parameter of the new halo from this summed angular momentum and eqn.~(\ref{eq:lambdaSpinDef}).
\end{enumerate}
\cite{benson_orbital_2005} report orbital velocities for merging halos and give expressions for the angular momenta of those orbits assuming point mass halos. While this will be a reasonable approximation for high mass ratio mergers it will fail for mergers of comparable mass halos. In addition, halo mergers may not necessarily conserve angular momentum in the sense that some material, plausibly with the highest specific angular momentum, may be thrown out during the merging event leaving the final halo with a lower angular momentum. To empirically account for these two factors we divide the orbital angular momentum by a factor of $f_2\equiv 1+M_2/M_1$ (where $M_2<M_1$ are the masses of the dark matter halos). We find that this empirical factor leads to good agreement with the measured N-body spin distribution, but could be justified more rigorously by measuring the angular momentum (accounting for finite size effects) of the progenitor and remnant halos in N-body mergers.
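Steps (i)--(iv) above, together with the empirical reduction factor $f_2$, can be sketched as follows. This is our own schematic, not {\sc Galform}\ code: angular momentum magnitudes are passed in as scalars, and all directions are drawn isotropically per the no-correlation assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_direction():
    # Isotropic unit vector; directions of the various angular momenta
    # are assumed uncorrelated, as in the text.
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def combined_angular_momentum(j_internal, masses, j_orbital):
    # j_internal: scalar internal |J| of each progenitor (from its spin
    #   parameter via the definition of lambda);
    # masses: progenitor masses, largest first;
    # j_orbital: orbital |J| of each progenitor merging with the most
    #   massive one, reduced here by f2 = 1 + M2/M1.
    j_tot = np.zeros(3)
    for j in j_internal:
        j_tot += j * random_direction()
    m1 = masses[0]
    for m2, j_orb in zip(masses[1:], j_orbital):
        f2 = 1.0 + m2 / m1
        j_tot += (j_orb / f2) * random_direction()
    return float(np.linalg.norm(j_tot))
```

The new halo's spin parameter then follows from the returned $|J|$ and the halo's mass and energy.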
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 60mm 200mm 245mm,clip]{Plots/Spin_Model.pdf}
\caption{The distribution of dark matter halo spin parameters. Black lines show measurements of this distribution from the Millennium N-body simulation \protect\citeauthor{bett_spin_2007}~(\protect\citeyear{bett_spin_2007}; B07), for three different group finding algorithms. \protect\cite{bett_spin_2007} note that the ``{\sc tree}'' halos give the most accurate determination of the spin distribution. The red line shows the results of the Monte Carlo model described in this work, using 51625\ Monte Carlo realizations of merger trees spanning a range of masses identical to that used by \protect\cite{bett_spin_2007}.}
\label{fig:SpinModel}
\end{figure}
To test the validity of this approach we generated 51625\ Monte Carlo realizations of merger trees drawn from a halo mass function consistent with that of the Millennium Simulation and with a range of masses consistent with that for which \cite{bett_spin_2007} were able to measure spin parameters and applied the above procedure. Figure~\ref{fig:SpinModel} shows the results of this test. We find remarkably good agreement between the distribution of spin measured by \cite{bett_spin_2007} and the results of our Monte Carlo model. It should be noted that our assumption of no correlation between the various angular momentum vectors of progenitor halos is not correct. However, \cite{benson_orbital_2005} shows that any such correlations are weak. Therefore, given the success of a model with no correlations, we choose to ignore them.
Our results are in good agreement with previous attempts to model the halo spin distribution in this way. \cite{maller_modelling_2002} found good agreement with N-body results using the same principles, although they found that introducing some correlation between the directions of spin and orbital angular momenta improved their fit. \cite{vitvitska_origin_2002} also found generally good agreement with N-body simulations using orbital parameters of halos drawn from an N-body simulation. Both of these earlier calculations relied on much less well calibrated orbital parameter distributions for merging halos and the simulations to which they compared their results had significantly poorer statistics than the Millennium simulation. Our results confirm that this approach to calculating halo spins from a merger history still works extremely well even when confronted with the latest high-precision measures of the spin distribution.
\subsection{Cooling Model}\label{sec:Cooling}
The cooling model described by \cite{cole_hierarchical_2000} determines the mass of gas able to cool in any timestep by following the propagation of the cooling radius in a notional hot gas density profile\footnote{We refer to this as a ``notional'' profile since it is taken to represent the profile before any cooling can occur. Once some cooling occurs presumably the actual profile adjusts in some way to respond to this and so will no longer look like the notional profile, even outside of the cooling radius.} which is fixed when a halo is flagged as ``forming'' and is only updated when the halo undergoes another formation event. The mass of gas able to cool in any given timestep is equal to the mass of gas in this notional profile between the cooling radius at the present step and that at the previous step. The cooling time is assumed to be the time since the formation event of the halo. Any gas which is reheated into or accreted by the halo is ignored until the next formation event, at which point it is added to the hot gas profile of the newly formed halo. The notional profile is constructed using the properties (e.g. scale radius, virial temperature etc.) of the halo at the formation event and retains a fixed metallicity throughout, corresponding to the metallicity of the hot gas in the halo at the formation event.
In this work we implement a new cooling model. We do away with the arbitrary ``formation'' events and instead use a continuously updating estimate of cooling time and halo properties. For the purposes of this calculation we define the following quantities:
\begin{itemize}
\item $M_{\rm hot}$: The current mass of hot (i.e. as yet uncooled) gas remaining in the notional profile;
\item $M_{\rm cooled}$: The mass of gas which has cooled out of the notional profile into the galaxy phase;
\item $M_{\rm reheated}$: The mass of gas which has been reheated (by supernovae feedback) but has yet to be reincorporated back into the hot gas component;
\item $M_{\rm ejected}$: The mass of gas which has been ejected beyond the virial radius of this halo, but which may later reaccrete into other, more massive halos.
\end{itemize}
The notional profile always contains a mass $M_{\rm total}=M_{\rm hot}+M_{\rm cooled}+M_{\rm reheated}$. The properties (density normalization, core radius) are reset, as described in \S\ref{sec:HotGasDist}, at each timestep. The previous infall radius (i.e. the radius within which gas was allowed to infall and accrete onto the galaxy) is computed by finding the radius which encloses a mass $M_{\rm cooled}+M_{\rm reheated}$ (i.e. the mass previously removed from the hot component) in the current notional profile.
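Computing the previous infall radius amounts to inverting the enclosed-mass relation of the notional profile, which can be sketched with a bracketed root-find. For illustration we use an Einasto-like profile in place of the hot-gas profile of \S\ref{sec:HotGasDist}; function names are ours.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma, gammainc

def einasto_enclosed_mass(r, rho2, r2, alpha):
    # Mass within r for an Einasto profile, via the lower incomplete
    # gamma function (SciPy's gammainc is regularized, hence gamma(s)).
    s = 3.0 / alpha
    u = 2.0 * (r / r2) ** alpha / alpha
    return (4.0 * np.pi * rho2 * r2 ** 3 * np.exp(2.0 / alpha) / alpha
            * (alpha / 2.0) ** s * gammainc(s, u) * gamma(s))

def infall_radius(m_target, rho2, r2, alpha, r_vir):
    # Radius enclosing m_target (= M_cooled + M_reheated) in the
    # current notional profile; simple bracketed root-find.
    f = lambda x: einasto_enclosed_mass(x, rho2, r2, alpha) - m_target
    return brentq(f, 1e-6 * r_vir, r_vir)
```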
We aim to compute a time available for cooling for the halo, $t_{\rm avail}$, from which we can compute a cooling radius in the usual way (i.e. by finding the radius in the notional profile at which $t_{\rm cool}=t_{\rm avail}$). In \cite{cole_hierarchical_2000} the time available for cooling is simply set to the time since the last formation event of the halo.
At any time, the rate of cooling per particle is just $\Lambda(T,\hbox{\boldmath $Z$},n_{\rm H},F_\nu) n_{\rm H}$, where $\Lambda(T,\hbox{\boldmath $Z$},n_{\rm H},F_\nu)$ is the cooling function, $n_{\rm H}$ is the number density of hydrogen, $\hbox{\boldmath $Z$}$ is a vector of metallicities (such that the $i^{\rm th}$ component of $\hbox{\boldmath $Z$}$ is the abundance by mass of the $i^{\rm th}$ element) and $F_\nu$ is the spectrum of the background radiation. The total cooling luminosity is then found by multiplying by the number of particles, $N$, in some volume $V$ that we want to consider. If we take this volume to be the entire halo then $N\equiv M_{\rm total}/\mu m_{\rm H}$. If we integrate this luminosity over time, we find the total energy lost through cooling. The total thermal energy in our volume $V$ is just $3Nk_{\rm B}T/2$. The gas will have completely cooled once the energy lost via cooling equals the original thermal energy, i.e.:
\begin{equation}
3Nk_{\rm B}T_{\rm v}/2 = \int_0^t \Lambda(t^\prime) n_{\rm H} N {\rm d} t^\prime,
\end{equation}
where for brevity we write $\Lambda(t)\equiv\Lambda[T_{\rm v}(t),\hbox{\boldmath $Z$}(t),n_{\rm H}(t),F_\nu(t)]$. We can write this as
\begin{equation}
t_{\rm cool} = t_{\rm avail},
\end{equation}
where
\begin{equation}
t_{\rm cool}(t) = {3k_{\rm B}T_{\rm v}(t) \over 2\Lambda(t) n_{\rm H}}
\end{equation}
is the usual cooling time and
\begin{equation}
t_{\rm avail} = {\int_0^t \Lambda(t^\prime) n_{\rm H}(t^\prime) N {\rm d} t^\prime \over \Lambda(t) n_{\rm H}(t) N}
\end{equation}
is the time available for cooling. We can re-write this as
\begin{equation}
t_{\rm avail} = {\int_0^t [T_{\rm v}(t^\prime) N / t_{\rm cool}(t^\prime)] {\rm d} t^\prime \over [T_{\rm v}(t) N / t_{\rm cool}(t)]}.
\label{eq:tAvail}
\end{equation}
In the case of a static halo, where $T_{\rm v}$, $\hbox{\boldmath $Z$}$, $F_\nu$ and $N$ are independent of time, $t_{\rm avail}$ reduces to the time since the halo came into existence as we might expect. For a non-static halo the above makes more physical sense. For example, consider a halo which is below the $10^4$K cooling threshold from time $t=0$ to time $t=t_4$, and then moves above that threshold (with fixed properties after this time). Since $t_{\rm cool}=\infty$ (i.e. $\Lambda(t)=0$) before $t_4$ in this case we find that $t_{\rm avail}=t-t_4$ as expected. Note that since the number of particles, $N$, appears in both the numerator and denominator of eqn.~(\ref{eq:tAvail}) we can, in practice, replace $N$ by $M_{\rm total}$ without changing the resulting time.
The cooling time in the above must be computed for a specific value of the density. We choose to use the cooling time at the mean density of the notional profile at each timestep. This implicitly assumes that the density of each mass element of gas in the notional profile has the same time dependence as the mean density of the profile, i.e. that the profile evolves in a self-similar way and that $\Lambda(t)$ is independent of $n_{\rm H}$ (which will only be true in the collisional ionization limit). This may not be true in general, but serves as an approximation allowing us to describe the cooling of the entire halo with just a single integral\footnote{A more elaborate model could compute a separate integral for each shell of gas, following the evolution of its density as a function of time as the profile evolves due to continued infall and cooling.}.
Having computed the time available for cooling we solve for the cooling radius in the notional profile at which $t_{\rm cool}(r_{\rm cool})=t_{\rm avail}$ (as described in \S\ref{sec:CoolRadius}). We also estimate the largest radius from which gas has had sufficient time to freefall to the halo centre (as described in \S\ref{sec:Freefall}). The current infall radius is taken to be the smaller of the cooling and freefall radii. Any mass between the current infall radius and that at the previous timestep is allowed to infall onto the galaxy during the current timestep---that is, it is transferred from $M_{\rm hot}$ to $M_{\rm cooled}$.
One refinement which must be introduced is to limit the integral
\begin{equation}
{\mathcal E} = {3\over 2} k_{\rm B} \int_0^t [T_{\rm v}(t^\prime) N / t_{\rm cool}(t^\prime)] {\rm d} t^\prime,
\label{eq:tAvailIntegral}
\end{equation}
so that the total radiated energy cannot exceed the total thermal energy of the halo. This limit is given by
\begin{equation}
{\mathcal E}_{\rm max} = {3\over 2} k_{\rm B} T_{\rm v}(t) N {\bar{\rho}_{\rm total} \over \rho_{\rm total}(r_{\rm v})},
\label{eq:tAvailIntegralMax}
\end{equation}
where $\bar{\rho}_{\rm total}$ is the mean density of the notional profile and $\rho_{\rm total}(r_{\rm v})$ is the density of the notional profile at the virial radius. For the entire halo (out to the virial radius) to cool takes longer than for gas at the mean density of the halo to cool, by a factor of $\bar{\rho}_{\rm total} / \rho_{\rm total}(r_{\rm v})$. This is the origin of the ratio of densities in eqn.~(\ref{eq:tAvailIntegralMax}).
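The running integral and its cap can be discretized per timestep as in the sketch below (our own schematic, with $N$ replaced by $M_{\rm total}$ as noted above; the constant $(3/2)k_{\rm B}$ cancels and is dropped):

```python
class CoolingClock:
    # Running estimate of the time available for cooling: discretizes
    # the t_avail integral, capping the accumulated (scaled) radiated
    # energy so it cannot exceed the halo's thermal energy.  Names are
    # ours; the (3/2) k_B prefactor cancels and is omitted.

    def __init__(self):
        self.integral = 0.0  # accumulates T_v M_total / t_cool dt'

    def update(self, dt, t_virial, m_total, t_cool, density_ratio):
        # density_ratio = (mean density of the notional profile) /
        # (density at the virial radius), entering the energy cap.
        self.integral += t_virial * m_total / t_cool * dt
        cap = t_virial * m_total * density_ratio
        self.integral = min(self.integral, cap)
        # t_avail = integral / (T_v M_total / t_cool) at the current time.
        return self.integral / (t_virial * m_total / t_cool)
```

For a static halo this reduces, as in the text, to the elapsed time, and once the cap is reached the available time saturates at `density_ratio * t_cool`.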
We must then consider two additional effects: accretion (\S\ref{sec:Accretion}) and reheating (\S\ref{sec:Reheating}). The cooling model is then fully specified once we specify the distribution of gas in the notional profile (\S\ref{sec:HotGasDist}), determine a cooling radius (\S\ref{sec:CoolRadius}) and freefall radius (\S\ref{sec:Freefall}), and consider how to compute the angular momentum of the infalling gas (\S\ref{sec:CoolAngMom}).
\subsubsection{Accretion}\label{sec:Accretion}
When a halo accretes another halo, we merge their notional gas profiles. Since the integral, ${\mathcal E} = \int (N T_{\rm v}/t_{\rm cool}) {\rm d} t$, that we are computing is the total energy lost, we simply add ${\mathcal E}$ from the accreted halo to that of the halo it accretes into. This gives the total energy lost from the combined notional profile. However, we must consider the fact that only a fraction $M_{\rm hot}/M_{\rm total}$ of the gas from the accreted halo is added to the hot gas reservoir of the combined halo (the mass $M_{\rm cooled}$ from the accreted halo becomes the satellite galaxy while the mass $M_{\rm reheated}$ is added to the reheated reservoir of the new halo to await reincorporation into the hot component; see \S\ref{sec:Reheating}). We simply multiply the integral ${\mathcal E}$ of the accreted halo by this fraction before adding it to the new halo.
Figure \ref{fig:CoolingModels} compares the mean cooled gas fractions in halos of different masses computed using the cooling model described here (green lines) and two previous cooling models used in {\sc Galform}: that of \citeauthor{cole_hierarchical_2000}~(\citeyear{cole_hierarchical_2000}; red lines) and that of \citeauthor{bower_breakinghierarchy_2006}~(\citeyear{bower_breakinghierarchy_2006}; blue lines). The only significant difference between the cooling implementations of \cite{cole_hierarchical_2000} and \cite{bower_breakinghierarchy_2006} is that \cite{bower_breakinghierarchy_2006} allow reheated gas to gradually return to the hot component (and so be available for re-cooling) at each timestep (in the same manner as in the present work), while \cite{cole_hierarchical_2000} simply accumulated this reheated gas and returned it all to the hot component only at the next halo formation event (i.e. after a halo mass doubling). No star or black hole formation was included in these calculations, so consequently there is no reheating of gas, expulsion of gas from the halo or metal enrichment. Additionally, no galaxy merging was allowed. The thick lines show the total cooled fraction in all branches of the merger trees, while the thin lines show the cooled fraction in the main branch of the trees\footnote{We define the main branch of the merger tree as the set of progenitor halos found by starting from the final halo and repeatedly stepping back to the most massive progenitor of the current halo at each time step. It should be noted that this definition is not unique, and can depend on the time resolution of the merger tree. It can also result in situations where the main branch does not correspond to the most massive progenitor halo at a given timestep.}.
The cooling model utilized by \cite{bower_breakinghierarchy_2006} was similar to that of \cite{cole_hierarchical_2000} except that it allowed accreted and reheated gas to rejoin the hot gas reservoir in a continuous manner rather than only at each halo formation event. Additionally, it used the current properties of the halo (e.g. virial temperature) to compute cooling rates rather than the properties of the halo at the previous formation event. As such, the \cite{bower_breakinghierarchy_2006} model contains many features of the current cooling model, but retains the fundamental division of the merger tree into discrete branches as in the \cite{cole_hierarchical_2000} model.
We find that, in general, the cooling model described here predicts a total cooled fraction very close to that predicted by the cooling model of \cite{bower_breakinghierarchy_2006}, the exception being at very early times in low mass halos where it gives a slightly lower value. The difference of course is that the new model does not contain artificial resets in the cooling calculation which, although they make little difference to this statistic, have a strong influence on, for example, calculations of the angular momentum of cooling gas. Both of these models predict somewhat more total cooled mass than the \cite{cole_hierarchical_2000} model. This is due entirely to the allowance of accreted gas to begin cooling immediately.
If we consider the cooled fraction in the main branch of each tree (i.e. the mass in what will become the central galaxy in the final halo) we see rather different behaviour. At early times, the new model tracks the \cite{bower_breakinghierarchy_2006} model. At late times, however, the \cite{bower_breakinghierarchy_2006} model shows a much lower cooling rate while the new model tracks the cooled fraction in the \cite{cole_hierarchical_2000} model quite closely. This occurs in massive halos where, in the \cite{bower_breakinghierarchy_2006} model, the use of the current halo properties to determine cooling rates results in ever increasing cooling times as the virial temperature of the halo increases and the halo density (and hence hot gas density) declines. The \cite{cole_hierarchical_2000} model is less susceptible to this as it computes halo properties based on the halo at formation. The new cooling model produces results comparable to the \cite{cole_hierarchical_2000} model since, while it utilizes the present properties of the halo just as does the \cite{bower_breakinghierarchy_2006} model, it retains a memory of the early properties of the halo.
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 10mm 186mm 265mm,clip]{Plots/Cooling.pdf}
\caption{The mean cooled gas fractions in the merger trees of halos with masses $10^{11}$, $10^{12}$, $10^{13}$, $10^{14}$ and $10^{15}h^{-1}M_\odot$ at $z=0$ are shown by coloured lines. Green lines show results from the cooling model described in this work while red lines indicate the model of \protect\cite{cole_hierarchical_2000} and blue lines the cooling model of \protect\cite{bower_breakinghierarchy_2006}. Solid lines show the total cooled fraction in all branches of the merger trees, while dashed lines show the cooled fraction in the main branch of the trees. For the purposes of this figure, no star or black hole formation was included in these calculations, so consequently there is no reheating of gas, expulsion of gas from the halo or metal enrichment. Additionally, no galaxy merging was allowed. As such, the differences between models arise purely from their different implementations of cooling.}
\label{fig:CoolingModels}
\end{figure}
\subsubsection{Reheating}\label{sec:Reheating}
When gas is reheated (via feedback; \S\ref{sec:Feedback}) we assume that it is heated to the virial temperature of the current halo (i.e. the host halo for satellite galaxies) and is placed into a reservoir $M_{\rm reheated}$. Mass is moved from this reservoir back into the hot gas reservoir on a timescale of order the halo dynamical time, $\tau_{\rm dyn}$. Specifically, mass is returned to the hot phase at a rate
\begin{equation}
\dot{M}_{\rm hot} = \alpha_{\rm reheat} {M_{\rm reheated} \over \tau_{\rm dyn}}
\end{equation}
during each timestep. This effectively undoes the cooling energy loss which caused this gas to cool previously. The energy integral ${\mathcal E}$ is therefore modified by subtracting from it an amount $\Delta N_{\rm reheated} T_{\rm v}$ where $\Delta N_{\rm reheated}$ is the number of particles reheated.
Similarly, the notional profile is allowed to ``forget'' about any cooled gas on a timescale of order the dynamical time (i.e. we assume that the notional profile adjusts to the loss of this gas). This is implemented by removing mass from the cooled reservoir at a rate
\begin{equation}
\dot M_{\rm cooled} = - \alpha_{\rm remove} {M_{\rm cooled} \over \tau_{\rm dyn}}.
\end{equation}
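The two reservoir-transfer rates above can be sketched with a simple explicit timestep update; the default values of $\alpha_{\rm reheat}$ and $\alpha_{\rm remove}$ and all input masses here are illustrative placeholders, not the model's calibrated values:

```python
# Minimal sketch of the reheated-gas return and cooled-gas "forgetting"
# rates of the Reheating section, advanced with one explicit Euler step.
# The min() guards simply prevent a reservoir from going negative when
# dt is large compared to tau_dyn.

def reheating_step(M_hot, M_reheated, M_cooled, dt, tau_dyn,
                   alpha_reheat=1.0, alpha_remove=1.0):
    dM = min(alpha_reheat * M_reheated / tau_dyn * dt, M_reheated)
    M_hot += dM                    # reheated gas rejoins the hot phase
    M_reheated -= dM
    # the notional profile "forgets" cooled gas on a similar timescale
    M_cooled -= min(alpha_remove * M_cooled / tau_dyn * dt, M_cooled)
    return M_hot, M_reheated, M_cooled

M_hot, M_reheated, M_cooled = reheating_step(
    M_hot=1.0e10, M_reheated=2.0e9, M_cooled=4.0e9,
    dt=0.5, tau_dyn=1.0)
```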
\subsubsection{Hot Gas Distribution}\label{sec:HotGasDist}
The hot gas is assumed to be distributed in a notional profile with a run of density consistent with that found in hydrodynamical simulations \pcite{sharma_angular_2005,stringer_formation_2007}. \cite{sharma_angular_2005} performed non-radiative cosmological \ifthenelse{\equal{\arabic{SPHDone}}{0}}{smoothed particle hydrodynamics (SPH) \setcounter{SPHDone}{1}}{SPH}\ simulations and studied the properties of the hot gas in dark matter halos. These simulations are therefore well suited to our purposes since they relate to the notional profile which is defined to be that in the absence of any cooling. The gas density profiles found by \cite{sharma_angular_2005} are well described by the expression:
\begin{equation}
\rho(r) \propto {1 \over (r+r_{\rm core})^3},
\end{equation}
where $r_{\rm core}$ is a characteristic core radius for the profile. We choose to set $r_{\rm core} = a_{\rm core} r_{\rm v}$ where $a_{\rm core}$ is a parameter whose value is the same for all halos at all redshifts. The simulations suggest that $a_{\rm core}\approx 0.05$ \pcite{stringer_formation_2007}, but we will treat $a_{\rm core}$ as a free parameter to be constrained by observational data. The density profile is normalized such that
\begin{equation}
\int_0^{r_{\rm v}} \rho(r) 4 \pi r^2 {\rm d} r = M_{\rm total},
\end{equation}
and the hot gas is assumed to be isothermal at the virial temperature
\begin{equation}
T_{\rm v} = {1 \over 2} {\mu m_{\rm H} \over k_{\rm B}} V_{\rm v}^2
\end{equation}
with a metallicity equal to $Z = M_{Z,{\rm hot}}/M_{\rm hot}$. Initially, $M_{Z,{\rm hot}}=0$ but can become non-zero due to metal production and outflows as a result of star formation and feedback.
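The normalization of the notional profile has a closed form which a short sketch can verify against direct quadrature; the values of $M_{\rm total}$, $r_{\rm v}$ and $a_{\rm core}$ below are illustrative (code units), and the antiderivative is obtained by the substitution $u=r+r_{\rm core}$:

```python
import math

# Sketch of the notional hot-gas profile rho(r) = A/(r + r_core)^3,
# normalized to contain M_total inside r_v. Radii and masses are in
# arbitrary consistent units; a_core = 0.05 follows the simulations.

def profile_normalization(M_total, r_v, a_core=0.05):
    r_c = a_core * r_v
    # Closed form of int_0^{r_v} 4 pi r^2/(r + r_c)^3 dr via u = r + r_c:
    # integrand becomes 1/u - 2 r_c/u^2 + r_c^2/u^3
    def F(u):
        return math.log(u) + 2.0 * r_c / u - r_c**2 / (2.0 * u**2)
    integral = 4.0 * math.pi * (F(r_v + r_c) - F(r_c))
    return M_total / integral

A = profile_normalization(M_total=1.0, r_v=1.0)

def rho(r, r_core=0.05):
    return A / (r + r_core)**3

# Cross-check the enclosed mass numerically with the midpoint rule.
n = 100000
dr = 1.0 / n
M = sum(4.0 * math.pi * ((i + 0.5) * dr)**2 * rho((i + 0.5) * dr) * dr
        for i in range(n))
```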
\subsubsection{Cooling Radius}\label{sec:CoolRadius}
Given the time available for cooling from eqn.~(\ref{eq:tAvail}) the cooling radius is found by solving
\begin{equation}
t_{\rm avail} = {{3 \over 2} (n_{\rm tot}/n_{\rm H})k_{\rm B}T_{\rm v} \over n_{\rm H} (r_{\rm cool}) \Lambda(t)},
\end{equation}
where $n_{\rm tot}$ is the total number density of the atoms in the gas. Due to the dependence of $\Lambda(t)$ on density when a photoionizing background is present (see \S\ref{sec:PhotoEffect}) this equation must be solved numerically.
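Since the cooling time is monotonic in radius for a density profile that declines outward, the numerical solve can be a simple bisection. The sketch below uses an illustrative cored density profile and a constant $\Lambda$; all numerical values are placeholders, not the model's:

```python
import math

# Hedged sketch of the cooling-radius solve: bisect for the radius at
# which the local cooling time equals t_avail. Profile shape, Lambda and
# parameter values are illustrative (cgs for k_B, kpc for radii).

K_B = 1.380649e-16        # erg/K

def n_H(r, n0=1e-2, r_core=10.0):
    """Hydrogen density in cm^-3 for a cored profile; r in kpc."""
    return n0 * (r_core / (r + r_core))**3

def t_cool(r, T_vir=1e6, Lambda=1e-23, ntot_over_nH=2.3):
    return 1.5 * ntot_over_nH * K_B * T_vir / (n_H(r) * Lambda)

def cooling_radius(t_avail, r_v=200.0, tol=1e-8):
    """t_cool(r) increases outward here, so bisection suffices; the
    whole halo has cooled if even the virial radius satisfies it."""
    if t_cool(r_v) <= t_avail:
        return r_v
    lo, hi = 0.0, r_v
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if t_cool(mid) < t_avail:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# By construction the root of t_cool(r) = t_cool(50) is r = 50 kpc.
r_cool = cooling_radius(t_avail=t_cool(50.0))
```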
\subsubsection{Freefall Radius}\label{sec:Freefall}
To compute the mass of gas which can actually reach the centre of a halo potential well at any given time we require that not only has the gas had time to cool but also that it has had time to freefall to the centre of the halo starting from zero velocity at its initial radius. To estimate the maximum radius from which cold gas could have reached the halo centre through freefall we proceed as follows. We compute a time available for freefall in the halo, $t_{\rm avail,ff}$, using eqn.~(\ref{eq:tAvail}), but limit the integral ${\mathcal E}$ (defined in eqn.~\ref{eq:tAvailIntegral}) such that the time available cannot exceed the freefall time at the virial radius. We then solve the freefall equation
\begin{equation}
\int_0^{r_{\rm ff}} {{\rm d} r^\prime \over \sqrt{2[\Phi(r^\prime)-\Phi(r_{\rm ff})]}} = t_{\rm avail,ff},
\end{equation}
where $\Phi(r)$ is the gravitational potential of the halo, for the radius $r_{\rm ff}$ at which the freefall time equals the time available. Only gas within the minimum of the cooling and freefall radii at each timestep is allowed to reach the centre of the halo and become part of the forming galaxy.
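For a point-mass potential the freefall integral has a known closed form, which makes a convenient check of the quadrature and of the root-find for $r_{\rm ff}$; the sketch below uses ${\rm G}M=1$ code units and is illustrative only (the same bisection applies to any spherical halo potential):

```python
import math

# Illustrative solve of the freefall-radius equation. For Phi = -GM/r
# the freefall time from rest at R is t_ff(R) = (pi/2) sqrt(R^3/(2 GM)),
# used here both to cross-check the direct quadrature and to invert for
# the freefall radius.

GM = 1.0

def t_ff_numeric(R, n=200000):
    # Midpoint rule; the inverse-square-root singularity at r = R is
    # integrable, so the midpoint rule converges without special care.
    dr = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += dr / math.sqrt(2.0 * GM * (1.0 / r - 1.0 / R))
    return total

def t_ff_exact(R):
    return 0.5 * math.pi * math.sqrt(R**3 / (2.0 * GM))

def freefall_radius(t_avail, r_v=10.0):
    """Bisect t_ff(R) = t_avail; t_ff increases monotonically with R."""
    lo, hi = 0.0, r_v
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if t_ff_exact(mid) < t_avail:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```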
\subsubsection{Angular Momentum}\label{sec:CoolAngMom}
The angular momentum of gas in the notional halo is tracked using a similar approach as for the mass. We define the following quantities:
\begin{description}
\item [$J_{\rm hot}$:] The total angular momentum in the $M_{\rm hot}$ reservoir of the notional profile;
\item [$J_{\rm cooled}$:] The total angular momentum in the $M_{\rm cooled}$ reservoir of the notional profile;
\item [$J_{\rm reheated}$:] The total angular momentum in the $M_{\rm reheated}$ reservoir of the notional profile;
\item [$j_{\rm new}$:] The specific angular momentum which newly accreted material must have in order to produce the correct change in angular momentum for this halo\footnote{The angular momentum of a halo differs from that of its main progenitor due to an increase in mass, change in virial radius and change in spin parameter. $j_{\rm new}$ is computed by finding the difference in the angular momentum of a halo and its main progenitor and dividing by their mass difference. Note that this quantity can therefore be negative.}.
\end{description}
$J_{\rm cooled}$ and $J_{\rm reheated}$ are initialized to zero at the start of the calculation. $J_{\rm hot}$ is initialized by assuming that any material accreted below the resolution of the merger tree arrives with the mean specific angular momentum of the halo. Angular momentum is then tracked using the following method:
\begin{enumerate}
\item At the start of a time step, all three angular momentum reservoirs from the most massive progenitor halo are added to those of the current halo.
\item We assume that the specific angular momentum of the gas halo is distributed according to the results of \cite{sharma_angular_2005} such that the differential distribution of specific angular momentum, $j$, is given by
\begin{equation}
{1\over M}{{\rm d} M \over {\rm d} j} = {1 \over j_{\rm d}^{\alpha_j}\Gamma(\alpha_j)}j^{\alpha_j-1}{\rm e}^{-j/j_{\rm d}},
\end{equation}
where $\Gamma$ is the gamma function, $M$ is the total mass of gas, $j_{\rm d}=j_{\rm tot}/\alpha_j$ and $j_{\rm tot}$ is the mean specific angular momentum of the gas. The parameter $\alpha_j$ is chosen to be $0.89$, consistent with the median value found by \cite{sharma_angular_2005} in simulated halos. The fraction of mass with specific angular momentum less than $j$ is then given by
\begin{equation}
f(<j) = \gamma\left(\alpha_j,{j\over j_{\rm d}}\right),
\end{equation}
where $\gamma$ is the normalized (lower) incomplete gamma function. Once the mass of gas cooling in any given timestep is known the above allows the angular momentum of that gas to be found. This amount is added to the $J_{\rm cooled}$ reservoir.
\item If $J_{\rm reheated}>0$ then an angular momentum
\begin{equation}
\Delta J_{\rm hot} = \left\{ \begin{array}{ll} J_{\rm reheated} \alpha_{\rm reheat} \Delta t/\tau_{\rm dyn} & \hbox{ if } \alpha_{\rm reheat} \Delta t < \tau_{\rm dyn} \\
J_{\rm reheated}& \hbox{ if } \alpha_{\rm reheat} \Delta t \ge \tau_{\rm dyn} \\
\end{array}
\right.
\end{equation}
is transferred back to the hot phase, consistent with the fraction of mass returned to the hot phase (see \S\ref{sec:Reheating}).
\item When a halo becomes a satellite of a larger halo, $J_{\rm hot}$ of the larger halo is increased by an amount, $j_{\rm new} M_{\rm hot,sat}$. This accounts for the orbital angular momentum of the gas in the satellite halo assuming that, on average, satellites have specific angular momentum of $j_{\rm new}$. We do the same for $J_{\rm reheated}$, assuming that the $M_{\rm reheated}$ reservoir of the satellite arrives with the same specific angular momentum.
\item When gas is ejected from a galaxy disk to join the reheated reservoir it is ejected with the mean specific angular momentum of the disk. Gas ejected during a starburst is also assumed to be ejected with the mean pseudo-specific angular momentum\footnote{As defined by \protect\citeauthor{cole_hierarchical_2000}~(\protect\citeyear{cole_hierarchical_2000}; their eqn.~C11) and equal to the product of the bulge half-mass radius and the circular velocity at that radius.} of the bulge.
\end{enumerate}
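Step (ii) above, drawing the cooled-mass angular momentum from the \cite{sharma_angular_2005} distribution, can be sketched as follows; the series implementation of the normalized incomplete gamma function is included only to keep the example dependency-free, and all numerical inputs are illustrative:

```python
import math

# Sketch of the Sharma & Steinmetz specific-angular-momentum
# distribution: the fraction of hot-gas mass with specific angular
# momentum < j is the regularized lower incomplete gamma function
# P(alpha_j, j/j_d), with j_d = j_tot/alpha_j.

ALPHA_J = 0.89

def reg_lower_gamma(a, x, terms=200):
    """P(a, x) via the standard power series
    gamma(a, x) = x^a e^-x sum_n x^n / (a (a+1) ... (a+n))."""
    if x <= 0.0:
        return 0.0
    total, term = 0.0, 1.0 / a
    for n in range(terms):
        total += term
        term *= x / (a + n + 1)
    return total * x**a * math.exp(-x) / math.gamma(a)

def mass_fraction_below(j, j_tot, alpha_j=ALPHA_J):
    j_d = j_tot / alpha_j
    return reg_lower_gamma(alpha_j, j / j_d)
```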
Because $j_{\rm new}$ can occasionally be negative, it is possible for $J_{\rm hot}<0$ to occur. This, in turn, can lead to a galaxy disk with a negative angular momentum. We do not consider this to be a fundamental problem due to the vector nature of angular momentum. When computing disk sizes we simply consider the magnitude of the disk angular momentum, ignoring the sign.
\subsubsection{Cooling/Heating Rates of Hot Gas in Halos}\label{sec:Cloudy}
The cooling model described above requires knowledge of the cooling function, $\Lambda(T,\hbox{\boldmath $Z$},n_{\rm H},F_\nu)$. Given a gas metallicity and density and the spectrum of the ionizing background we can compute cooling and heating rates for gas in dark matter halos. Calculations were performed with version 08.00 of {\sc Cloudy}, last described by \cite{ferland_cloudy_1998}. In practice, we compute cooling/heating rates as a function of temperature, density and metallicity using the self-consistently computed photon background (\S\ref{sec:IGM}) after each timestep. The rates are computed on a grid which is then interpolated on to find the cooling/heating rate for any given halo.
Chemical abundances are assumed to behave as follows:
\begin{itemize}
\item{} $Z=0$ : ``zero'' metallicity corresponding to the ``primordial'' abundance ratios as used by {\sc Cloudy} version 08.00 (see the \emph{Hazy} documentation of {\sc Cloudy} for details).
\item{} [Fe/H]$<-1$ : ``primordial'' abundance ratios from \cite{sutherland_cooling_1993};
\item{} [Fe/H]$\ge -1$ : Solar abundance ratios as used by {\sc Cloudy} version 08.00 (see the \emph{Hazy} documentation of {\sc Cloudy} for details).
\end{itemize}
However, since our model can track the abundances of individual elements we know the abundances in each cooling halo. In principle, we could recompute a cooling/heating rate for each halo using its specific abundances as input into {\sc Cloudy}. This is computationally impractical however. Instead, we follow the approach of \citet{martinez-serrano_chemical_2008} who perform a \ifthenelse{\equal{\arabic{PCADone}}{0}}{principal components analysis (PCA) \setcounter{PCADone}{1}}{PCA}\ to find the optimal linear combination of abundances which minimizes the variance between cooling/heating rates computed using that linear combination as a parameter and a full calculation using all abundances. The best linear combination turns out to be a function of temperature. We therefore track this linear combination of abundances at 10 different temperatures for all of the gas in our models and use it instead of metallicity when computing cooling/heating rates.
{\bf Compton Cooling:} \citet{cole_hierarchical_2000} allowed hot halo gas to cool via two-body collisional radiative processes. However, as we go to higher redshifts the effect of Compton cooling must be considered. The Compton cooling timescale is given by \pcite{peebles_recombination_1968}:
\begin{equation}
\tau_{\rm Compton} = {3 m_{\rm e}{\rm c} (1+1/x_{\rm e}) \over 8\sigma_{\rm T}aT^4_{\rm CMB}(1-T_{\rm CMB}/T_{\rm e})},
\end{equation}
where $x_{\rm e}=n_{\rm e}/n_{\rm t}$, $n_{\rm e}$ is the electron number density, $n_{\rm t}$ is the number density of all atoms and ions, $T_{\rm CMB}$ is the \ifthenelse{\equal{\arabic{CMBDone}}{0}}{cosmic microwave background (CMB) \setcounter{CMBDone}{1}}{CMB}\ temperature and $T_{\rm e}$ is the electron temperature of the gas.
The electron fraction, $x_{\rm e}$, is determined from photoionization equilibrium computed using {\sc Cloudy} (see above).\\
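The Compton timescale above is straightforward to evaluate directly; the sketch below uses cgs constants and illustrative inputs, and simply exhibits the rapid $(1+z)^{-4}$ shortening of the timescale for $T_{\rm e}\gg T_{\rm CMB}$:

```python
# Direct evaluation of the Compton cooling timescale (cgs units).
# T_CMB = 2.725 (1+z) K; T_e and x_e inputs are illustrative.

M_E     = 9.1093837e-28   # electron mass, g
C_LIGHT = 2.99792458e10   # speed of light, cm/s
SIGMA_T = 6.6524587e-25   # Thomson cross section, cm^2
A_RAD   = 7.5657e-15      # radiation constant, erg cm^-3 K^-4

def tau_compton(z, T_e, x_e):
    T_cmb = 2.725 * (1.0 + z)
    return (3.0 * M_E * C_LIGHT * (1.0 + 1.0 / x_e)
            / (8.0 * SIGMA_T * A_RAD * T_cmb**4 * (1.0 - T_cmb / T_e)))

# At z = 0 the timescale vastly exceeds a Hubble time; by z = 6 it is
# shorter by a factor of ~(1+z)^4.
t0 = tau_compton(z=0.0, T_e=1e6, x_e=1.0)
t6 = tau_compton(z=6.0, T_e=1e6, x_e=1.0)
```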
\noindent {\bf Molecular Hydrogen Cooling:} The molecular hydrogen cooling timescale is found by first estimating the abundance, $f_{{\rm H_2},c}$, of molecular hydrogen that would be present if there were no background of H$_2$-dissociating radiation from stars. For gas with hydrogen number density $n_{\rm H}$ and temperature $T_{\rm v}$ the fraction is \pcite{tegmark_small_1997}:
\begin{eqnarray}
f_{{\rm H_2},c} &=& 3.5 \times 10^{-4}T_3^{1.52} [1+(7.4\times 10^8 (1+z)^{2.13} \nonumber \\
& & \times \exp\left\{-3173/(1+z)\right\}/n_{\rm H 1})]^{-1},
\end{eqnarray}
where $T_3$ is the temperature $T_{\rm v}$ in units of 1000K and $n_{\rm H 1}$ is the hydrogen density in units of cm$^{-3}$. Using this initial abundance we calculate the final H$_2$ abundance, still in the absence of a photodissociating background, as
\begin{equation}
f_{\rm H_2} = f_{{\rm H_2},c}\exp\left({-T_{\rm v} \over 51920\,{\rm K}}\right)
\end{equation}
where the exponential cut-off is included to account for collisional dissociation of H$_2$, as in \citet{benson_epoch_2006}.
Finally, the cooling time-scale due to molecular hydrogen was computed using \pcite{galli_chemistry_1998}:
\begin{equation}
\tau_{{\rm H}_2} = 6.56419 \times 10^{-33} T_{\rm e} f^{-1}_{{\rm H}_2} n^{-1}_{{\rm H} 1} \Lambda^{-1}_{{\rm H}_2},
\end{equation}
where
\begin{equation}
\Lambda_{\rm H_2} = {\Lambda_{\rm LTE} \over 1+n^{\rm cr}/n_{\rm H}},
\end{equation}
with the critical density ratio given by
\begin{equation}
{n^{\rm cr}\over n_{\rm H}} = {\Lambda_{\rm LTE} \over \Lambda_{\rm H_2}[n_{\rm H}\rightarrow0]},
\end{equation}
and
\begin{eqnarray}
\log_{10}\Lambda_{\rm H_2}[n_{\rm H}\rightarrow0] &=& -103+97.59 \log_{10}(T) -48.05 [\log_{10}(T)]^2 \nonumber \\
& &+10.8 [\log_{10}(T)]^3-0.9032 [\log_{10}(T)]^4
\end{eqnarray}
is the cooling function in the low density limit (independent of hydrogen density) and we have used the fit given by \citet{galli_chemistry_1998},
\begin{equation}
\Lambda_{\rm LTE} = \Lambda_r+\Lambda_v
\end{equation}
is the cooling function in local thermodynamic equilibrium and
\begin{eqnarray}
\Lambda_r &=& {1\over n_{\rm H_1}}\left\{{9.5\times10^{-22} T_3^{3.76}\over1+0.12 T_3^{2.1}} \exp\left(-\left[{0.13\over T_3}\right]^3\right) \right. \nonumber \\
& & \left. +3\times10^{-24} \exp\left(-{0.51\over T_3}\right) \right\} \hbox{ergs cm}^3\hbox{ s}^{-1}, \\
\Lambda_v &=& {1\over n_{\rm H_1}}\left\{ 6.7\times 10^{-19} \exp\left(-{5.86\over T_3}\right) \right. \nonumber \\
& & \left. +1.6\times 10^{-18} \exp\left(-{11.7\over T_3}\right)\right\} \hbox{ergs cm}^3\hbox{ s}^{-1}
\end{eqnarray}
are the cooling functions for rotational and vibrational transitions in H$_2$ \pcite{hollenbach_molecule_1979}.
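The assembly of the H$_2$ cooling function from the pieces above, $\Lambda_{\rm H_2} = \Lambda_{\rm LTE}/(1+n^{\rm cr}/n_{\rm H})$, which interpolates between the low-density and LTE limits, can be sketched directly from the quoted fits; all coefficients are those given in the text and the test densities are illustrative:

```python
import math

# Sketch of the Galli & Palla (1998) H2 cooling-function assembly.
# n_H in cm^-3, T in K; Lambda_LTE carries the 1/n_H prefactor of the
# rotational and vibrational fits quoted in the text.

def lambda_low_density(T):
    lt = math.log10(T)
    return 10.0 ** (-103.0 + 97.59 * lt - 48.05 * lt**2
                    + 10.8 * lt**3 - 0.9032 * lt**4)

def lambda_lte(T, n_H):
    T3 = T / 1000.0
    lam_r = (9.5e-22 * T3**3.76 / (1.0 + 0.12 * T3**2.1)
             * math.exp(-(0.13 / T3)**3)
             + 3.0e-24 * math.exp(-0.51 / T3)) / n_H
    lam_v = (6.7e-19 * math.exp(-5.86 / T3)
             + 1.6e-18 * math.exp(-11.7 / T3)) / n_H
    return lam_r + lam_v

def lambda_H2(T, n_H):
    # Lambda = Lambda_LTE / (1 + n_cr/n_H) with
    # n_cr/n_H = Lambda_LTE / Lambda[n_H -> 0]
    lte = lambda_lte(T, n_H)
    return lte / (1.0 + lte / lambda_low_density(T))
```

Note that the combined function correctly limits to the low-density fit for $n_{\rm H}\rightarrow0$ (where $\Lambda_{\rm LTE}\propto 1/n_{\rm H}$ diverges) and to $\Lambda_{\rm LTE}$ at high density.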
The model also allows for an estimate of the rate of molecular hydrogen formation on dust grains using the approach of \citet{cazaux_molecular_2004}. In this case we have to modify equation (13) of \cite{tegmark_small_1997}, which gives the rate of change of the H$_2$ fraction, to account for the dust grain growth path. The molecular hydrogen fraction growth rate becomes:
\begin{equation}
\dot{f} = k_{\rm d} f (1-x-2f) + k_{\rm m} n (1-x-2f) x,
\end{equation}
where $f$ is the fraction of H$_2$ by number, $x$ is the ionization fraction of H which has total number density $n$,
\begin{equation}
k_{\rm d}=3.025\times 10^{-17}{\xi_{\rm d}\over0.01} S_{\rm H}(T) \sqrt{{T_{\rm g}\over100\hbox{K}}} \hbox{cm}^3 \hbox{s}^{-1}
\end{equation}
is the dust formation rate coefficient (\citealt{cazaux_molecular_2004}; eqn.~4), and $k_{\rm m}$ is the effective rate coefficient for H$_2$ formation (\citealt{tegmark_small_1997}; eqn.~14). We adopt the expression given by \citeauthor{cazaux_molecular_2004}~(\citeyear{cazaux_molecular_2004}; eqn.~3) for the H sticking coefficient, $S_{\rm H}(T)$, and set $\xi_{\rm d}=0.53 Z$ for the dust-to-gas mass ratio, as suggested by \cite{cazaux_molecular_2004}, which results in $\xi_{\rm d}\approx 0.01$ for Solar metallicity. This equation must be solved simultaneously with the recombination equation governing the ionized fraction $x$. The solution, assuming $x(t)=x_0/(1+x_0nk_1t)$ and $1-x-2f\approx 1$ as do \cite{tegmark_small_1997}, is
\begin{equation}
f(t) = f_0 {k_{\rm m} \over k_1} \exp\left[ {\tau_{\rm r} +t\over \tau_{\rm d}} \right] \left\{ {\rm E}_{\rm i}\left( {\tau_{\rm r} \over \tau_{\rm d}} \right) - {\rm E}_{\rm i}\left( {\tau_{\rm r} +t \over \tau_{\rm d}} \right) \right\}
\end{equation}
where $\tau_{\rm r}=1/x_0/n_{\rm H}/k_1$, $\tau_{\rm d}=1/n_{\rm H}/k_{\rm d}$, $k_1$ is the hydrogen recombination coefficient and ${\rm E}_{\rm i}$ is the exponential integral.
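The growth equation for the molecular fraction can also be cross-checked by direct numerical integration; the sketch below integrates the ODE of the text with an RK4 step using placeholder rate coefficients and initial conditions (not the paper's values):

```python
# Numerical integration of the H2-formation equation of the text,
# f' = [k_d f + k_m n x] (1 - x - 2 f), with the ionized fraction
# x(t) = x0/(1 + x0 n k1 t) from the recombination equation. All rate
# coefficients (cm^3/s) and initial conditions are illustrative.

def x_of_t(t, x0, n, k1):
    return x0 / (1.0 + x0 * n * k1 * t)

def f_H2(t_end, f0=1e-6, x0=1e-4, n=1.0,
         k1=2.0e-13, k_d=3.0e-17, k_m=1.0e-17, steps=10000):
    dt = t_end / steps
    f, t = f0, 0.0

    def rhs(t, f):
        x = x_of_t(t, x0, n, k1)
        return (k_d * f + k_m * n * x) * (1.0 - x - 2.0 * f)

    for _ in range(steps):          # classical RK4
        a = rhs(t, f)
        b = rhs(t + 0.5 * dt, f + 0.5 * dt * a)
        c = rhs(t + 0.5 * dt, f + 0.5 * dt * b)
        d = rhs(t + dt, f + dt * c)
        f += dt * (a + 2.0 * b + 2.0 * c + d) / 6.0
        t += dt
    return f
```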
\subsection{Sizes and Adiabatic Contraction}\label{sec:Sizes}
The angular momentum content of galactic components is tracked within our model, allowing us to compute sizes for disks and bulges. We follow the same basic methodology as \cite{cole_hierarchical_2000}---simultaneously solving for the equilibrium radii of disks and bulges under the influence of the gravity of the dark matter halo and their own self-gravity and including the effects of adiabatic contraction---but treat adiabatic contraction using updated methods.
For the bulge component with pseudo-specific angular momentum $j_{\rm b}$ the half-mass radius, $r_{\rm b}$, must satisfy
\begin{equation}
j^2_{\rm b} = {\rm G} [M_{\rm h}(r_{\rm b})+M_{\rm d}(r_{\rm b})+M_{\rm b}(r_{\rm b})] r_{\rm b},
\end{equation}
where $M_{\rm h}(r)$, $M_{\rm d}(r)$ and $M_{\rm b}(r)$ are the masses of dark matter, disk and bulge within radius $r$ respectively, and which we can write as
\begin{equation}
c_{\rm b} = [M_{\rm h}(r_{\rm b})+M_{\rm d}(r_{\rm b})+M_{\rm b}(r_{\rm b})] r_{\rm b},
\label{eq:cBulge}
\end{equation}
where $c_{\rm b} = j^2_{\rm b}/{\rm G}$. In the original \cite{blumenthal_contraction_1986} treatment of adiabatic contraction the right-hand side of eqn.~(\ref{eq:cBulge}) is an adiabatically conserved quantity allowing us to write
\begin{equation}
c_{\rm b} = M_{\rm h}^0(r_{\rm b,0}) r_{\rm b,0},
\end{equation}
where $M_{\rm h}^0$ is the unperturbed dark matter mass profile and $r_{\rm b,0}$ the original radius in that profile. This allows us to trivially solve for $r_{\rm b,0}$ and $M_{\rm h}^0(r_{\rm b,0})$ and so, assuming no shell crossing, $M_{\rm h}(r_{\rm b}) = f_{\rm h} M_{\rm h}^0(r_{\rm b,0})$, where $f_{\rm h}$ is the fraction of mass that remains distributed like the halo. Given a disk mass and radius this allows us to solve for $r_{\rm b}$.
In the \cite{gnedin_response_2004} treatment of adiabatic contraction however, $M(r)r$ is no longer a conserved quantity. Instead, $M(\langle\overline{r}\rangle)r$ is the conserved quantity where $\langle\overline{r}\rangle/r_{\rm h} = A_{\rm ac} (r/r_{\rm h})^{w_{\rm ac}}$. In this case, we write
\begin{equation}
r_{\rm b}=\langle\overline{r_{\rm b}^\prime}\rangle = A_{\rm ac} r_{\rm h} (r_{\rm b}^\prime/r_{\rm h})^{w_{\rm ac}}.
\end{equation}
Equation~(\ref{eq:cBulge}) then becomes
\begin{equation}
c_{\rm b}^\prime = [M_{\rm h}(\langle\overline{r_{\rm b}^\prime}\rangle)+M_{\rm d}(\langle\overline{r_{\rm b}^\prime}\rangle)+M_{\rm b}(\langle\overline{r_{\rm b}^\prime}\rangle)] r_{\rm b}^\prime,
\label{eq:cBulgePrime}
\end{equation}
where
\begin{equation}
c_{\rm b}^\prime = {c_{\rm b} \over A_{\rm ac}}\left({r_{\rm b}^\prime\over r_{\rm h}}\right)^{1-w_{\rm ac}}.
\end{equation}
The right-hand side of eqn.~(\ref{eq:cBulgePrime}) is now an adiabatically conserved quantity and we can write
\begin{equation}
c_{\rm b}^\prime = M_{\rm h}^0(\langle\overline{r_{\rm b,0}^\prime}\rangle) r_{\rm b,0}.
\end{equation}
If we know $c_{\rm b}^\prime$ this expression allows us to solve for $r_{\rm b,0}$ and $M_{\rm h}^0(\langle\overline{r_{\rm b,0}^\prime}\rangle)$ which in turn gives $M_{\rm h}(r_{\rm b}) = f_{\rm h} M_{\rm h}^0(\langle\overline{r_{\rm b,0}^\prime}\rangle)$. Of course, to find $c_{\rm b}^\prime$ we need to know $r_{\rm b}$. This equation must therefore be solved iteratively. In practice, for a galaxy containing a disk and bulge, the coupled disk and bulge equations must be solved iteratively in any case, so this does not significantly increase computational demand.
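The iteration can be sketched for the simplified case of a point-like bulge in a halo alone, taking the unperturbed halo to be isothermal ($M^0_{\rm h}(r)=M_{\rm v}r/r_{\rm v}$) so that the conserved-quantity step has a closed form; $A_{\rm ac}$, $w_{\rm ac}$, $f_{\rm h}$ and all inputs are illustrative and ${\rm G}=1$ code units are used:

```python
# Sketch of the iterative Gnedin-style adiabatic-contraction solve for
# the bulge half-mass radius, with a damped fixed-point update. The
# isothermal unperturbed halo makes the inversion of
# M0(<rbar_0>) r_0 = c_b' analytic; a real implementation would solve
# it numerically for an arbitrary profile.

A_AC, W_AC = 0.85, 0.8

def solve_bulge_radius(c_b, M_b, M_v=1.0, r_v=1.0, f_h=0.9, n_iter=200):
    r_b = 0.1 * r_v                                  # initial guess
    for _ in range(n_iter):
        # invert the mean-radius relation r_b = A r_v (r_b'/r_v)^w
        r_bp = r_v * (r_b / (A_AC * r_v)) ** (1.0 / W_AC)
        c_bp = c_b / A_AC * (r_bp / r_v) ** (1.0 - W_AC)
        # conserved quantity M0(<rbar_0>) r_0 = c_bp, isothermal M0
        r_0 = (c_bp * r_v**W_AC / (M_v * A_AC)) ** (1.0 / (1.0 + W_AC))
        M_h = f_h * M_v * A_AC * (r_0 / r_v) ** W_AC
        r_b_new = c_b / (M_h + M_b)   # from j_b^2 = G (M_h + M_b) r_b
        r_b = 0.5 * (r_b + r_b_new)   # damping for stable convergence
    return r_b

r_b = solve_bulge_radius(c_b=0.01, M_b=0.05)
```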
The disk is handled similarly. We have
\begin{equation}
{j^2_{\rm d} \over k^2_{\rm d}} = {\rm G} \left[M_{\rm h}(r_{\rm d})+{k_{\rm h}\over 2}M_{\rm d}+M_{\rm b}(r_{\rm d})\right] r_{\rm d},
\end{equation}
where $k_{\rm h}$ gives the contribution to the rotation curve in the mid-plane and $k_{\rm d}$ relates the total angular momentum of the disk to the specific angular momentum at the half-mass radius \pcite{cole_hierarchical_2000}. This becomes
\begin{equation}
c_{\rm d,2}^\prime = M_{\rm h}^0(\langle\overline{r_{\rm d,0}^\prime}\rangle) r_{\rm d,0},
\end{equation}
where
\begin{equation}
c_{\rm d,2}^\prime = {c_{\rm d,2} \over A_{\rm ac}}\left({r_{\rm d}^\prime\over r_{\rm h}}\right)^{1-w_{\rm ac}},
\end{equation}
and
\begin{equation}
c_{\rm d,2} = {j^2_{\rm d} \over {\rm G} k^2_{\rm d}} - \left({k_{\rm h}\over 2}-{1\over 2}\right) r_{\rm d} M_{\rm d}.
\end{equation}
This system of equations must be solved simultaneously to find the radii of disk and bulge in a given galaxy. Once these are determined, the rotation curve and dark matter density as a function of radius are trivially found from the known baryonic distribution, pre-collapse dark matter density profile and the adiabatic invariance of $M(\langle\overline{r}\rangle)r$.
\subsection{Substructures and Merging}\label{sec:Merging}
N-body simulations of dark matter halos have convincingly shown that substructure persists within dark matter halos for cosmological timescales \pcite{moore_dark_1999}. Moreover, recent ultra-high resolution simulations \pcite{kuhlen_via_2008,springel_aquarius_2008,stadel_quantifyingheart_2009} demonstrate that multiple levels of substructure (e.g. sub-sub-halos) can exist. This ``substructure hierarchy'' is often neglected in semi-analytic models when merging is being considered. For example, \cite{cole_hierarchical_2000} and all other semi-analytic models to date\footnote{\protect\cite{taylor_evolution_2004}, who describe a model of the orbital dynamics of subhalos, do account for the orbital grouping of subhalos arriving as part of a pre-existing bound system (i.e. when a halo becomes a subhalo its own subhalos are given similar orbits in the new host). However, as noted by \cite{taylor_evolution_2005}, they do not include the self-gravity of subhalos and so sub-subhalos do not remain gravitationally bound to their subhalo. As such, sub-subhalos will gradually disperse and cannot merge with each other via dynamical friction.} consider only one level of substructure---a substructure in a group halo which merges into a cluster immediately becomes a substructure of the cluster for the purposes of merging calculations. This is unrealistic and may:
\begin{enumerate}
\item neglect mergers between galaxies in substructures which \cite{angulo_fate_2009} have recently shown to be important for lower mass subhalos;
\item bias the estimation of merging timescales for halos (and their galaxies).
\end{enumerate}
\cite{angulo_fate_2009} examine rates of subhalo-subhalo mergers in the Millennium Simulation and find that, for subhalos with masses below 0.1\% of the mass of the main halo, mergers with other subhalos become as likely as a merger with the central galaxy of the halo. They also find that subhalo-subhalo mergers tend to occur between subhalos that were physically associated before falling into the larger potential. This suggests that a treatment of subhalo-subhalo mergers must consider the interactions between subhalos and not simply consider random encounters as was done, for example, by \cite{somerville_semi-analytic_1999}.
We therefore implement a method to handle an arbitrarily deep hierarchy of substructure. We refer to isolated halos as $S^0$ substructures (i.e. not substructures at all), substructures of $S^0$ halos are called $S^1$ substructures and substructures of $S^n$ halos are $S^{n+1}$ substructures. When a halo forms it is an $S^0$ substructure, and when it first becomes a satellite it becomes an $S^1$ substructure.
For $S^n$ substructures with $n\ge2$ we check at the end of each timestep whether the substructure has been tidally stripped out of its $S^{n-1}$ host. If it has, it is promoted to being an $S^{n-1}$ substructure in the $S^{n-2}$ substructure which hosts its former $S^{n-1}$ host.
\subsubsection{Orbital Parameters}
When a halo first becomes an $S^1$ subhalo it is assigned orbital parameters drawn from the distribution of \cite{benson_orbital_2005} which was measured from N-body simulations. This distribution gives the radial and tangential velocity components of the orbit. For later convenience, we compute from these velocities the radius of a circular orbit with the same energy as the actual orbit, $r_{\rm C}(E)$, and the circularity (the angular momentum of the actual orbit in units of the angular momentum of that circular orbit), $\epsilon$. These are computed using the gravitational potential of the host halo.
\subsubsection{Adiabatic Evolution of Host Potential}
As a subhalo orbits inside of a host halo the gravitational potential of that host halo will evolve due to continued cosmological infall. To model how this evolution affects the orbital parameters of each subhalo we assume that it can be well described as an adiabatic process\footnote{Halos are expected to grow on the Hubble time, while the characteristic orbital time is shorter than this by a factor of $\sqrt{\Delta}$ where $\Delta$ is the overdensity of dark matter halos. This expected validity of the adiabatic approximation has been confirmed in N-body simulations by \protect\cite{book_testingadiabatic_2010}.}. As such, the azimuthal and radial actions of the orbits:
\begin{equation}
J_{\rm a} = {1\over 2\pi}\int_0^{2\pi} r^2 \dot{\phi}{\rm d}\phi,
\end{equation}
and
\begin{equation}
J_{\rm r} = {1\over \pi}\int_{r_{\rm min}}^{r_{\rm max}} \dot{r}{\rm d} r,
\end{equation}
should be conserved (assuming a spherically symmetric potential). Therefore, at each timestep, we compute $J_{\rm a}$ and $J_{\rm r}$ for each satellite from the known orbital parameters in the current host halo potential. We assume these quantities are the same in the new host halo potential and convert them back into new orbital parameters $r_{\rm C}(E)$ and $\epsilon$.
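For a spherical potential the azimuthal action reduces to the conserved angular momentum (since $r^2\dot{\phi}=L$ is constant), so only the radial action requires quadrature. A sketch of that calculation follows (Python with scipy; the function and argument names are ours, and the Kepler potential used in the check is purely illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def radial_action(E, L, potential, r_lo=1e-4, r_hi=1e4, n=4096):
    """J_r = (1/pi) * integral of r-dot dr between the orbital turning points,
    for an orbit of specific energy E and angular momentum L in a spherical
    potential (a callable of radius)."""
    vr2 = lambda r: 2.0 * (E - potential(r)) - (L / r) ** 2  # radial velocity squared
    # bracket pericentre and apocentre on a log grid, then refine by root finding
    grid = np.geomspace(r_lo, r_hi, n)
    bound = np.where(vr2(grid) > 0.0)[0]
    r_min = brentq(vr2, grid[bound[0] - 1], grid[bound[0]])
    r_max = brentq(vr2, grid[bound[-1]], grid[bound[-1] + 1])
    J_r = quad(lambda r: np.sqrt(max(vr2(r), 0.0)), r_min, r_max)[0] / np.pi
    return J_r, r_min, r_max
```

For a Kepler potential $\Phi=-{\rm G}M/r$ (with ${\rm G}M=1$) the radial action is known analytically, $J_{\rm r}={\rm G}M/\sqrt{-2E}-L$, which provides a check of the quadrature.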
\subsubsection{Tidal Stripping of Dark Matter Substructures}
Given the orbital parameters $r_{\rm C}(E)$ and $\epsilon$, we can compute the apocentric and pericentric distances of the orbit of each subhalo. At the end of each timestep, for each subhalo we find the pericentric distance and compute the tidal field of its host halo at that point:
\begin{equation}
{\mathcal D}_{\rm t} = {{\rm d} \over {\rm d} r_{\rm h}}\left[ - {{\rm G} M_{\rm h}(r_{\rm h}) \over r_{\rm h}^2} \right] + \omega^2,
\end{equation}
where $\omega$ is the orbital frequency of the subhalo, and find the radius, $r_{\rm s}$, in the subhalo at which this equals
\begin{equation}
{\mathcal D}_{\rm s} = {{\rm G} M_{\rm s}(r_{\rm s})\over r_{\rm s}^3}.
\end{equation}
The radius at which these two quantities are equal defines the tidal radius, $r_{\rm s}$, of the subhalo.
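This step is a one-dimensional root find; a minimal numerical sketch (Python with scipy; the isothermal profiles in the check are illustrative only, and the value of ${\rm G}$ assumes kpc, km/s and ${\rm M}_\odot$ units):

```python
from scipy.optimize import brentq

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def tidal_radius(D_t, M_sub, r_lo=1e-3, r_hi=1e3):
    """Solve G M_s(r)/r^3 = D_t for the tidal radius r_s, given the tidal field
    D_t of the host (evaluated at orbital pericentre) and the subhalo mass
    profile M_sub.  Assumes the subhalo's mean density decreases outwards, so
    the root in the bracket is unique."""
    return brentq(lambda r: G * M_sub(r) / r**3 - D_t, r_lo, r_hi)
```

For an isothermal subhalo, $M_{\rm s}(r)=V_{\rm s}^2 r/{\rm G}$, the solution reduces to $r_{\rm s}=V_{\rm s}/\sqrt{{\mathcal D}_{\rm t}}$, which checks the numerics.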
\subsubsection{Promotion through the hierarchy}
After computing tidal radii, for each $S^{\ge2}$ subhalo we compute the apocentric distance of its orbit and ask if this exceeds the tidal radius of its host. If it does, the subhalo is assumed to be tidally stripped from its host halo and promoted to an orbit in the host of its host: $S^n\rightarrow S^{n-1}$. To compute orbital parameters of the satellite in this new halo we determine its radius and velocity at the point where it crosses the tidal radius of its old host. These are added vectorially (assuming random orientations) to the position and velocity of its old host at pericentre in the new host. From this new position and velocity, values of $r_{\rm C}(E)$ and $\epsilon$ are computed.
This approach can handle an arbitrarily deep hierarchy of substructure. In practice, the actual depth of the hierarchy will depend on both the mass resolution of the merger trees used and the efficiency with which tidal forces promote substructures through the hierarchy. Given the resolution of the trees used in our calculations we find that most substructures belong to the $S^1$ and $S^2$ levels. However, the deepest substructure level that we have found at $z=0$ is $S^7$.
\subsubsection{Dynamical Friction}\label{sec:DynFric}
We adopt the fitting formula found by \cite{jiang_fitting_2008} to estimate merging timescales for dark matter substructures (and, consequently, the galaxies that they contain). The multiple levels of substructure hierarchy in our model allow for the possibility of satellite-satellite mergers. We intend to compare results from our model with N-body measures of this process in a future work.
When a halo first becomes a satellite, we set a dimensionless merger clock, $x_{\rm DF}=0$. On each subsequent timestep, $x_{\rm DF}$ is incremented by an amount $\Delta t / \tau_{\rm DF}$ where $\tau_{\rm DF}$ is the dynamical friction timescale for the satellite in the current host halo according to the expression of \cite{jiang_fitting_2008}, including the dependence on $r_{\rm C}(E)$. When $x_{\rm DF}=1$ the satellite is deemed to have merged with the central galaxy in the host halo.
When a satellite is tidally stripped out of its current orbital host and promoted to the host above it in the hierarchy the merging clock is reset so that dynamical friction calculations start anew in this new orbital host. This is something of an approximation since the dynamical friction timescale of \cite{jiang_fitting_2008} is calibrated using satellites that enter their halo at the virial radius. As such, it does not explore as wide a range in $r_{\rm C}$ as is required for our models. Furthermore, when promoted to a new orbital host, a satellite will have already lost some mass due to tidal effects. This is not accounted for when computing a new dynamical friction timescale, however, and so may cause us to underestimate merging timescales somewhat.
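The merger clock logic, including the reset on promotion, amounts to the following (illustrative Python with invented names, not {\sc Galform}\ code):

```python
class MergerClock:
    """Dimensionless dynamical-friction merger clock: x_DF starts at 0,
    advances by dt/tau_DF each timestep, and the satellite merges with the
    central galaxy once x_DF reaches 1."""

    def __init__(self):
        self.x = 0.0

    def advance(self, dt, tau_df):
        """Returns True once the satellite should merge."""
        self.x += dt / tau_df
        return self.x >= 1.0

    def reset_on_promotion(self):
        """Restart dynamical friction in the new orbital host after the
        satellite is tidally stripped out of its old host."""
        self.x = 0.0
```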
Dynamical friction also affects the orbital parameters of each subhalo. To simplify matters we follow \cite{lacey_merger_1993} and examine the evolution of these quantities in an isothermal dark matter halo. In such a halo, and for a circular orbit, $r_{\rm C}$ evolves as
\begin{equation}
\left({r_{\rm C} \over r_{\rm C,0}}\right)^2 = 1-{t\over \tau_{\rm DF}}.
\end{equation}
Therefore, after each timestep we update
\begin{equation}
r_{\rm C}^2 \rightarrow r_{\rm C}^2 - r_{\rm C,0}^2 {\Delta t \over \tau_{\rm DF}}.
\end{equation}
The fractional change in $\epsilon$ is assumed to be given by $(\dot{\epsilon}/\epsilon)/(\dot{r}_{\rm C}/r_{\rm C})$ as computed for the current orbit using the expressions of \cite{lacey_merger_1993}. This is a function of $\epsilon$ only and is plotted in Fig.~\ref{fig:Orbital_DynFric_Ratio}. Note that the timescale, $\tau_{\rm DF}$, used here is that from \cite{jiang_fitting_2008} and not the one from \cite{lacey_merger_1993}.
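The per-timestep update of the orbital radius under dynamical friction is then a one-liner (a sketch; repeated application reproduces $r_{\rm C}(t)=r_{\rm C,0}\sqrt{1-t/\tau_{\rm DF}}$ for a fixed $\tau_{\rm DF}$):

```python
import math

def decay_circular_radius(r_c, r_c0, dt, tau_df):
    """One timestep of r_C^2 -> r_C^2 - r_C0^2 * dt/tau_DF (circular orbit in
    an isothermal halo), clamped at zero once the orbit has fully decayed."""
    return math.sqrt(max(r_c * r_c - r_c0 * r_c0 * dt / tau_df, 0.0))
```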
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 60mm 200mm 245mm,clip]{Plots/Orbital_DynFric_Ratio.pdf}
\caption{The ratio $(\dot{\epsilon}/\epsilon)/(\dot{r}_{\rm C}/r_{\rm C})$ for isothermal halos. This ratio is used in solving for the evolution of orbital circularity and orbital radius under the influence of dynamical friction as described in \S\protect\ref{sec:DynFric}.}
\label{fig:Orbital_DynFric_Ratio}
\end{figure}
\subsection{Ram pressure and tidal stripping}\label{sec:Stripping}
We follow \cite{font_colours_2008} and estimate the extent to which ram pressure from the hot atmosphere of a halo may strip away the hot atmosphere of an orbiting subhalo. In addition, we also consider tidal stripping of this hot gas and both ram pressure and tidal stripping of material from galaxies.
Ram pressure and tidal forces are computed at the pericentre of each subhalo's orbit, which we now compute self-consistently with our orbital model (see \S\ref{sec:Merging}). For an $S^i$ subhalo, where $i>1$, we compute the ram pressure force from all halos higher in the hierarchy and take the maximum of these to be the ram pressure force actually felt. The tidal field (i.e. the gradient in the gravitational force across the satellite) includes the centrifugal contribution at the orbital pericentre and is given by:
\begin{equation}
{\mathcal F} = \omega^2 - {{\rm d} \over {\rm d} r} {{\rm G} M(<r) \over r^2} .
\end{equation}
The ram pressure is taken to be
\begin{equation}
P_{\rm ram} = \rho_{\rm hot,host} V_{\rm orbit}^2
\label{eq:RamPressure}
\end{equation}
where $\rho_{\rm hot,host}$ is the density of hot gas in the host halo at the pericentre of the orbit and $V_{\rm orbit}$ is the orbital velocity of the satellite at that position.
\subsubsection{Stripping of hot halo gas}\label{sec:HotStrip}
We find the ram pressure radius in the hot halo gas by solving
\begin{equation}
P_{\rm ram} = \alpha_{\rm ram}{{\rm G} M_{\rm sat}(r_{\rm r}) \over r_{\rm r}} \rho_{\rm hot,sat}(r_{\rm r})
\label{eq:HotRamPressure}
\end{equation}
for $r_{\rm r}$, where $\alpha_{\rm ram}$ is a parameter that we set equal to 2 as suggested by \cite{mccarthy_ram_2008}. Similarly, a tidal radius is found by solving
\begin{equation}
{\mathcal F} = \alpha_{\rm tidal}^3 {{\rm G} M_{\rm sat}(r_{\rm t}) \over r_{\rm t}^3}
\end{equation}
for $r_{\rm t}$, where $\alpha_{\rm tidal}$ is a parameter that we set equal to unity. Once the minimum of the ram pressure and tidal stripping radii has been determined we follow \cite{font_colours_2008} and compute the cooling rate of the remaining, unstripped gas by cooling only the gas within the stripping radius and assuming that stripping does not alter the mean density of gas within this radius. We implement this by giving the satellite a nominal hot gas mass $M_{\rm hot}^\prime = M_{\rm hot} + M_{\rm strip}$ (where $M_{\rm hot}$ is the true hot gas content of the halo) and applying the same cooling algorithm as that used for central galaxies (except limiting the maximum cooling radius to $r_{\rm strip}$ rather than $R_{\rm v}$). This step ensures self-consistency in the treatment of the gas cooling between stripped and unstripped galaxies, and therefore that the colours of satellites are predicted correctly.
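The solution of eqn.~(\ref{eq:HotRamPressure}) is again a one-dimensional root find; a sketch (Python with scipy; the isothermal satellite and the $\rho\propto r^{-2}$ hot gas profile in the check are illustrative, and the ${\rm G}$ value assumes kpc, km/s and ${\rm M}_\odot$ units):

```python
from scipy.optimize import brentq

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def ram_pressure_radius(P_ram, M_sat, rho_hot_sat, alpha_ram=2.0,
                        r_lo=1e-3, r_hi=1e3):
    """Solve P_ram = alpha_ram * G * M_sat(r) * rho_hot_sat(r) / r for the
    radius r_r; hot gas beyond r_r is stripped from the satellite."""
    f = lambda r: alpha_ram * G * M_sat(r) * rho_hot_sat(r) / r - P_ram
    return brentq(f, r_lo, r_hi)
```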
The initial stripping of re-heated gas is the same as for the hot gas, i.e. the same fraction is transferred from the re-heated gas of the satellite to the re-heated gas reservoir of the parent halo. We follow \cite{font_colours_2008} in modelling the time-dependence of the hot gas mass in the satellite halo and refer the reader to that paper for full details. This process introduces one free parameter, $\epsilon_{\rm strip}$, which represents the time-averaged stripping rate after the initial pericentric passage and which we adjust to match observational constraints.
The stripping of satellites is also affected by the growth of the halo in
which the satellite is orbiting. \cite{font_colours_2008} took this effect into account by assigning each satellite galaxy new orbital parameters and deriving a new stripping factor every time the halo doubles in mass compared to the initial stripping event. In the present work we directly follow the evolution of the pericentric radius and velocity of each satellite due to both dynamical friction and host halo mass growth. For this reason, we take a different approach from \cite{font_colours_2008}, computing a new ram pressure radius in each timestep instead of only at every mass doubling event.
Any material stripped away from the subhalo is added to the halo which provided the greatest ram pressure force. For tidal forces, we consider only the contribution from the current orbital host since, if this were exceeded by the tidal force from a parent higher up in the hierarchy, the subhalo would already have been tidally stripped from this orbital host and promoted to a higher level in the hierarchy.
\subsubsection{Stripping of galactic gas and stars}
The effective gravitational pressure that resists the ram pressure force in the disk plane is (for an exponential disk; \citealt{abadi_ram_1999}):
\begin{equation}
P_{\rm grav} = {{\rm G} M_{\rm d} M_{\rm g}\over 4\pi r_{\rm d}^4} x {\rm e}^{-x} \left[I_0\left({x\over 2}\right)K_1\left({x\over 2}\right)-I_1\left({x\over 2}\right)K_0\left({x\over 2}\right)\right],
\end{equation}
where $x=r/r_{\rm d}$ and $I_0$, $I_1$, $K_0$ and $K_1$ are modified Bessel functions. The ram pressure radius is found by solving for the radius at which $P_{\rm grav}=P_{\rm ram}$, where $P_{\rm ram}$ is given by eq.~(\ref{eq:RamPressure}). We assume that any stars in the galaxy which lie beyond the computed tidal radius and any gas which lies beyond the smaller of the tidal and ram pressure radii are instantaneously removed. Stars become part of the diffuse light component of the halo (i.e. that which is known as intracluster light in clusters of galaxies; see \S\ref{sec:ICL}), while gas is added to the reheated reservoir of the host halo. The remaining mass of each component (cold gas, disk and bulge stars) is computed and the specific angular momentum of the remaining material is computed assuming a flat rotation curve:
\begin{eqnarray}
j_{\rm disk}&=&j_{\rm disk 0} \nonumber \\
&&\times
\left[
{
\int_0^{R_\star} \Sigma_\star(R) R^2 {\rm d} R
+
\int_0^{R_{\rm g}} \Sigma_{\rm g}(R) R^2 {\rm d} R
\over
\int_0^\infty \Sigma(R) R^2 {\rm d} R
}
\right]\nonumber\\
&&\times
\left[
{
\int_0^{R_\star} \Sigma_\star(R) R {\rm d} R
+
\int_0^{R_{\rm g}} \Sigma_{\rm g}(R) R {\rm d} R
\over
\int_0^\infty \Sigma(R) R {\rm d} R
}
\right]^{-1} \\
&=& j_{\rm disk 0} \nonumber \\
&&\times
\left\{
f_\star\left[1-\left(1+x_\star+{x_\star^2\over 2}\right){\rm e}^{-x_\star}\right]\nonumber \right. \\
&&+
\left. f_{\rm g}\left[1-\left(1+x_{\rm g}+{x_{\rm g}^2\over 2}\right){\rm e}^{-x_{\rm g}}\right]
\right\}\nonumber\\
&&\times
\left\{
f_\star[1-(1+x_\star){\rm e}^{-x_\star}]
+
f_{\rm g}[1-(1+x_{\rm g}){\rm e}^{-x_{\rm g}}]
\right\}^{-1}
\end{eqnarray}
for the disk (the last line assuming an exponential disk) where $R_\star=r_{\rm tidal}$, $R_{\rm g}=\hbox{min}(r_{\rm tidal},r_{\rm ram})$, $x_\star=R_\star/R_{\rm d}$, $x_{\rm g}=R_{\rm g}/R_{\rm d}$, $f_\star=M_\star/(M_\star+M_{\rm g})$ and $f_{\rm g}=M_{\rm g}/(M_\star+M_{\rm g})$, and
\begin{equation}
j_{\rm sph} = j_{\rm sph 0} {\left. \int_0^{r_{\rm tidal}} \rho_\star(R) R^3 {\rm d} R \right/ \int_0^\infty \rho_\star(R) R^3 {\rm d} R \over \left. \int_0^{r_{\rm tidal}} \rho_\star(R) R^2 {\rm d} R \right/ \int_0^\infty \rho_\star(R) R^2 {\rm d} R}
\end{equation}
for the bulge (and which must be evaluated numerically). Here, $j_{\rm disk 0}$ and $j_{\rm sph 0}$ are the pre-stripping specific angular momenta of the disk and spheroid respectively, $\Sigma_\star(R)$ and $\Sigma_{\rm g}(R)$ are the surface density profiles of stars and gas in the disk prior to stripping and $\rho_\star(R)$ is the stellar density profile in the spheroid prior to stripping. Since {\sc Galform}\ always assumes a de Vaucouleurs spheroid and an exponential disk with stars tracing gas, the stripped components will readjust to these configurations with their new masses and angular momenta. This is, therefore, an approximate treatment of stripping. In particular, some material will always ``leak'' back out beyond the stripping radius and so is easily stripped on the next timestep. Figure~\ref{fig:MassLossSteps} demonstrates that this is not a severe problem, with the remaining mass fraction asymptoting to a near constant value after just a few steps.
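To make the disk-stripping step concrete, the sketch below evaluates the restoring pressure with scipy's modified Bessel functions, finds the ram-pressure radius, and applies the closed-form specific angular momentum factor for an exponential disk (Python; the masses, scale length and the ${\rm G}$ value, in kpc, km/s and ${\rm M}_\odot$ units, are illustrative):

```python
import math
import numpy as np
from scipy.special import i0, i1, k0, k1
from scipy.optimize import brentq

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def p_grav(r, M_disk, M_gas, r_d):
    """Gravitational restoring pressure in the plane of an exponential disk
    (Abadi et al. 1999 form quoted in the text)."""
    x = r / r_d
    bessel = i0(x / 2) * k1(x / 2) - i1(x / 2) * k0(x / 2)
    return G * M_disk * M_gas / (4.0 * np.pi * r_d**4) * x * math.exp(-x) * bessel

def disk_ram_pressure_radius(P_ram, M_disk, M_gas, r_d):
    """Radius at which P_grav = P_ram; gas beyond this radius is stripped."""
    f = lambda r: p_grav(r, M_disk, M_gas, r_d) - P_ram
    return brentq(f, 1e-3 * r_d, 50.0 * r_d)

def disk_j_factor(x_star, x_gas, f_star):
    """j_disk / j_disk0 after truncating stars at x_star = R_star/R_d and gas
    at x_gas = R_gas/R_d, for an exponential disk with a flat rotation curve."""
    f_gas = 1.0 - f_star
    m2 = lambda x: 1.0 - (1.0 + x + 0.5 * x * x) * math.exp(-x)  # Sigma R^2 integral
    m1 = lambda x: 1.0 - (1.0 + x) * math.exp(-x)                # enclosed mass fraction
    return (f_star * m2(x_star) + f_gas * m2(x_gas)) / \
           (f_star * m1(x_star) + f_gas * m1(x_gas))
```

Because the restoring pressure falls monotonically outwards, the root of $P_{\rm grav}=P_{\rm ram}$ in the bracket is unique.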
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 55mm 200mm 245mm,clip]{Plots/MassLossSteps.pdf}
\caption{The remaining mass fraction in an exponential disk in a potential giving a flat rotation curve (and ignoring the disk self-gravity) subjected to tidal truncation at radius $r_{\rm t}/r_{\rm d,0}=0.1$, 0.3, 1.0, 3.0 and 10.0 (from lower to upper lines) after a given number of steps according to our model. The remaining mass fraction quickly converges to a near constant value.}
\label{fig:MassLossSteps}
\end{figure}
\subsection{IGM Interaction}\label{sec:IGM}
\cite{benson_effects_2002-1} introduced methods to simultaneously compute the evolution of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ and the galaxy population in a self-consistent manner such that emission from galaxies ionized and heated the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}, which in turn led to the suppression of future galaxy formation. A major practical limitation of \citeauthor{benson_effects_2002-1}'s~(\citeyear{benson_effects_2002-1}) method was that it required {\sc Galform}\ to be run to generate an emissivity history for the Universe, which was then fed into a model for the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ evolution. The \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ evolution was used to predict the effects on galaxy formation and {\sc Galform}\ was run again. This loop was iterated several times to find a converged solution. This limitation was inherent to the implementation because {\sc Galform}\ was designed to evolve a single merger tree to $z=0$ then move on to the next one.
To circumvent this problem, we have adapted {\sc Galform}\ to allow for multiple merger trees to be evolved simultaneously: each tree is evolved for a single timestep after which the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ evolution for that same timestep is computed. This allows simultaneous, self-consistent evolution of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ and galaxies without the need for iteration.
The model we adopt for the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ evolution is essentially identical to that of \cite{benson_effects_2002-1}, and consists of a uniform \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ (with a clumping factor to account for enhanced recombination and cooling due to inhomogeneities) composed of hydrogen and helium and a photon background supplied by galaxies and \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}. The reader is therefore referred to \cite{benson_effects_2002-1} for a full discussion. Here we will discuss only those aspects that are new or updated.
\subsubsection{Emissivity}
The two sources of photons in our model are quasars and galaxies. For \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ we assume that the \ifthenelse{\equal{\arabic{SEDDone}}{0}}{spectral energy distribution (SED) \setcounter{SEDDone}{1}}{SED}\ has the following shape \pcite{haardt_radiative_1996}:
\begin{equation}
f_\nu(\lambda) \propto \left\{ \begin{array}{ll}
\lambda^{1.5} & \hbox{if } \lambda < 1216\hbox{\AA}; \\
\lambda^{0.8} & \hbox{if } 1216\hbox{\AA} < \lambda < 2500\hbox{\AA}; \\
\lambda^{0.3} & \hbox{if } \lambda > 2500\hbox{\AA}, \\
\end{array}
\right.
\end{equation}
where the normalization of each segment is chosen to give a continuous function and unit energy when integrated over all wavelengths. The emissivity per unit volume from \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ is then
\begin{equation}
\epsilon_{\rm AGN} = f_{\rm esc,AGN} \epsilon_\bullet \dot{\rho}_\bullet {\rm c}^2 f_\nu(\lambda),
\end{equation}
where $\epsilon_\bullet=0.1$ is an assumed radiative efficiency for accretion onto black holes, $\dot{\rho}_\bullet$ is the rate of black hole mass growth per unit volume computed by {\sc Galform}\ and $f_{\rm esc,AGN}$ is an assumed escape fraction for \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN} photons which we fix at $10^{-2}$ to produce a reasonable epoch of HeII reionization.
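The piecewise SED and its normalization can be written out as follows (a sketch; the text leaves the integration range unspecified, so the finite range used here for the normalization, $10$--$10^5\hbox{\AA}$, is our own illustrative choice — some finite upper limit is needed for the $\lambda^{0.3}$ segment to integrate to a finite value):

```python
LAM1, LAM2 = 1216.0, 2500.0   # break wavelengths in Angstroms
SLOPES = (1.5, 0.8, 0.3)      # power-law indices of the three segments

def _amplitudes(lam_min, lam_max):
    """Segment amplitudes giving continuity at the breaks and unit energy."""
    a = [1.0]
    a.append(a[0] * LAM1 ** (SLOPES[0] - SLOPES[1]))   # continuity at 1216 A
    a.append(a[1] * LAM2 ** (SLOPES[1] - SLOPES[2]))   # continuity at 2500 A
    edges = [lam_min, LAM1, LAM2, lam_max]
    # analytic integral of each power-law segment
    total = sum(ai * (hi ** (p + 1) - lo ** (p + 1)) / (p + 1)
                for ai, p, lo, hi in zip(a, SLOPES, edges[:-1], edges[1:]))
    return [ai / total for ai in a]

def agn_sed(lam, lam_min=10.0, lam_max=1.0e5):
    """f_nu(lambda) for the piecewise power-law AGN SED quoted in the text."""
    a = _amplitudes(lam_min, lam_max)
    if lam < LAM1:
        return a[0] * lam ** SLOPES[0]
    if lam < LAM2:
        return a[1] * lam ** SLOPES[1]
    return a[2] * lam ** SLOPES[2]
```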
The emissivity from galaxies was calculated directly by integrating the star formation rate per unit volume predicted by {\sc Galform}\ over time and metallicity to give
\begin{equation}
\epsilon_{\rm gal} = \int_0^{t_{\rm now}} f_{\rm esc,gal}(t^\prime) \dot{M}_{\star}(t^\prime,Z)L_{\nu}(t_{\rm now}-t^\prime,Z[t^\prime]) {\rm d} t^{\prime},
\end{equation}
where $\dot{M}_{\star}(t,Z)$ is the rate of star formation at metallicity $Z$, $L_\nu(t,Z)$ is the integrated luminosity per unit frequency and per Solar
mass of stars formed of a single stellar population of age $t$ and metallicity $Z$ and $f_{\rm esc,gal}$ is the escape fraction of ionizing photons from the galaxy.
The fraction of ionizing photons able to escape from the disk of each galaxy is computed using the expressions derived by \citet{benson_effects_2002} (their eqn.~A4), which generalize the model of \cite{dove_photoionization_1994} in which OB associations with a distribution of luminosities ionize holes in the neutral hydrogen distribution, through which their photons can escape.
The sum of $\epsilon_{\rm AGN}$ and $\epsilon_{\rm gal}$ gives the total emissivity of the galaxies and quasars in the model.
\subsubsection{IGM Ionization State}
The ionization state of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ is computed just as in \cite{benson_effects_2002-1} except that we use effective photo-ionization cross-sections that account for the effects of secondary ionizations and are given by \citeauthor{shull_x-ray_1985} (\citeyear{shull_x-ray_1985}; as re-expressed by \citealt{venkatesan_heating_2001}):
\begin{eqnarray}
\sigma_{\rm H}^\prime(E) &=& \left(1+\phi_{\hbox{\scriptsize H{\sc i}}} { E-E_{\rm H} \over E_{\rm H}} + \phi^*_{\hbox{\scriptsize He{\sc i}}} {E-E_{\rm H}\over 19.95\hbox{eV}}\right) \sigma_{\rm H}(E) \nonumber \\
& & + \left(1+\phi_{\hbox{\scriptsize He{\sc i}}} {E-E_{\rm He}\over E_{\rm He}}\right) \sigma_{\rm He}(E) \\
\sigma_{\rm He}^\prime(E) &=& \left(1+\phi_{\hbox{\scriptsize He{\sc i}}} {E-E_{\rm He} \over E_{\rm He}}\right) \sigma_{\rm He}(E) \nonumber \\
& & + \left(\phi_{\hbox{\scriptsize He{\sc i}}} {E-E_{\rm H}\over24.6\hbox{eV}}\right) \sigma_{\rm H}(E)
\end{eqnarray}
where $\sigma(E)$ is the actual cross section \pcite{verner_analytic_1995} and
\begin{eqnarray}
\phi_{\hbox{\scriptsize H{\sc i}}} &=& 0.3908 (1-x_{\rm e}^{0.4092})^{1.7592}, \\
\phi^*_{\hbox{\scriptsize He{\sc i}}} &=& 0.0246 (1-x_{\rm e}^{0.4049})^{1.6594}, \\
\phi_{\hbox{\scriptsize He{\sc i}}} &=& 0.0554 (1-x_{\rm e}^{0.4614})^{1.6660}.
\end{eqnarray}
\subsubsection{IGM Thermal State}
Heating of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ is treated as in \citet{benson_effects_2002-1} with the exception that we account for heating by secondary electrons. Photoionization heats the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ at a rate of
\begin{equation}
\Sigma_{\rm photo} = \int^{\infty}_0(E-E_i){\rm c}\sigma^\prime(E)n_in_{\gamma}(E) {\mathcal E} {\rm d} E
\end{equation}
where $E_i$ is the ionization potential of species $i$, $n_i$ is the number density of that species, ${\rm c}$ is the speed of light, $\sigma^\prime(E)$ is the effective partial photo-ionization cross-section (accounting for secondary ionizations) for the relevant ionization stage of H or He, $n_\gamma(E)$ is the number density of photons of energy $E$, and the index $i$ runs over the atoms and ions H, H$^+$, He, He$^+$ and He$^{2+}$. In the above, ${\mathcal E}$ accounts for heating by secondary electrons and is given by \pcite{shull_x-ray_1985}:
\begin{equation}
{\mathcal E} = 0.9971 [1-(1-x_{\rm e}^{0.2663})^{1.3163}].
\end{equation}
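These fitting functions are straightforward to code up; a sketch with limiting-case checks (in a fully ionized medium, $x_{\rm e}\rightarrow1$, secondary ionizations vanish and almost all of the photoelectron energy is deposited as heat):

```python
def phi_HI(x_e):
    """Secondary-ionization factor for HI as a function of electron fraction."""
    return 0.3908 * (1.0 - x_e ** 0.4092) ** 1.7592

def phi_HeI_star(x_e):
    """Secondary-ionization factor phi*_HeI."""
    return 0.0246 * (1.0 - x_e ** 0.4049) ** 1.6594

def phi_HeI(x_e):
    """Secondary-ionization factor for HeI."""
    return 0.0554 * (1.0 - x_e ** 0.4614) ** 1.6660

def heating_fraction(x_e):
    """Fraction of photoelectron energy deposited as heat (the factor E in the
    text; Shull & van Steenberg 1985 fit)."""
    return 0.9971 * (1.0 - (1.0 - x_e ** 0.2663) ** 1.3163)
```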
\subsubsection{Suppression of Baryonic Infall into Halos}\label{sec:BaryonSupress}
According to \citet{okamoto_mass_2008}, the mass of baryons which accrete from the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ into a halo after reionization is given by
\begin{equation}
M_{\rm b} = M_{\rm b}^\prime+M_{\rm acc},
\end{equation}
where
\begin{equation}
M_{\rm b}^\prime = \sum_{\rm prog} \exp\left(-{\delta t \over t_{\rm evp}}\right) M_{\rm b},
\end{equation}
and where the sum is taken over the progenitor halos of the current halo, $\delta t$ is the time since the previous timestep and $t_{\rm evp}$ is the timescale for gas to evaporate from the progenitor halo and is given by
\begin{equation}
t_{\rm evp}=\left\{ \begin{array}{ll}
R_{\rm H}/c_{\rm s}(\Delta_{\rm evp}) & \hbox{if } T_{\rm vir} < T_{\rm evp}, \\
\infty & \hbox{if } T_{\rm vir} > T_{\rm evp}.
\end{array} \right.
\end{equation}
Here, $T_{\rm evp}$ is the temperature below which gas will be heated and evaporated from the halo. We follow \cite{okamoto_mass_2008} and compute $T_{\rm evp}$ by finding the equilibrium temperature of gas at an overdensity of $\Delta_{\rm evp}=10^6$. The accreted mass $M_{\rm acc}$ is given by
\begin{equation}
M_{\rm acc} = \left\{ \begin{array}{ll}
{\Omega_{\rm b}\over \Omega_0} M_{\rm v} - M_{\rm b}^\prime & \hbox{if } T_{\rm vir} > T_{\rm acc} \\
0 & \hbox{if } T_{\rm vir} < T_{\rm acc}
\end{array} \right.
\end{equation}
where $T_{\rm acc}$ is the larger of the temperature of \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ gas adiabatically compressed to the density of accreting gas and the equilibrium temperature, $T_{\rm eq}$, at which radiative cooling balances photoheating for gas at the density expected at the virial radius. This ensures that a sensible temperature is used even when the photoionizing background is essentially zero.
The value of $T_{\rm acc}$ is computed at each timestep by searching for where the cooling function (see \S\ref{sec:Cloudy}) crosses zero for the density of gas just accreting at the virial radius (for which we use one third of the halo overdensity; \citealt{okamoto_mass_2008}).
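One step of this accretion bookkeeping looks like the following (illustrative Python; the function names are ours, and a negative accretion term is retained as in the expression above):

```python
import math

def evaporation_timescale(T_vir, T_evp, R_H, c_s):
    """t_evp: sound-crossing time for halos cool enough to be photo-evaporated,
    infinite for halos with T_vir > T_evp."""
    return R_H / c_s if T_vir < T_evp else math.inf

def baryonic_mass(progenitor_M_b, progenitor_t_evp, dt, T_vir, T_acc, M_vir, f_b):
    """M_b = M_b' + M_acc: baryons surviving evaporation from the progenitors
    plus new accretion, the latter suppressed entirely when T_vir < T_acc.
    f_b is the universal baryon fraction Omega_b / Omega_0."""
    M_b_prime = sum(M * math.exp(-dt / t)
                    for M, t in zip(progenitor_M_b, progenitor_t_evp))
    M_acc = f_b * M_vir - M_b_prime if T_vir > T_acc else 0.0
    return M_b_prime + M_acc
```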
\subsection{Recycling and Chemical Evolution}\label{sec:NonInstGasEq}
In \cite{cole_hierarchical_2000} the instantaneous recycling approximation for chemical enrichment was used. While this is a reasonable approximation at $z=0$, it fails at high redshift (where the main sequence lifetimes of the stars which do the majority of the enrichment become comparable to the age of the Universe). It also prevents predictions for abundance ratios (e.g. [$\alpha$/Fe]) from being made and ignores any metallicity dependence in the yield.
\citeauthor{nagashima_metal_2005}~(\citeyear{nagashima_metal_2005}; see also \citealt{nagashima_metal_2005-1}, \citealt{arrigoni_galactic_2009}) previously implemented a non-instantaneous recycling calculation in {\sc Galform}. We implement a similar model here, following their general approach, but with some specific differences.
The fraction of material returned to the \ISM\ by a stellar population as a function of time is given by
\begin{equation}
R(t) = \int_{M(t;Z)}^\infty [M-M_{\rm r}(M;Z)]\phi(M) {{\rm d} M \over M}
\end{equation}
where $\phi(M)$ is the \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ normalized to unit stellar mass, $M_{\rm r}(M;Z)$ is the remnant mass of a star of initial mass $M$ and $M(t;Z)$ is the initial mass of a star with lifetime $t$. Similarly, the yield of element $i$ is given by
\begin{equation}
p_i(t) = \int_{M(t;Z)}^\infty M_i(M_0;Z)\phi(M_0) {{\rm d} M_0\over M_0}
\end{equation}
where $M_i(M_0;Z)$ is the mass of element $i$ produced by stars of initial mass $M_0$. For a specified \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ we compute $R(t;Z)$ and $p_i(t;Z)$ for all times and elements of interest. This means that, unlike most previous implementations of {\sc Galform}, the recycled fraction and yield are not free parameters of the model, but are fixed once an \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ is chosen. However, it should be noted that significant uncertainties remain in calculations of stellar yields, which may therefore influence our calculations. Note that, unlike \citet{nagashima_metal_2005}, we include the full metallicity dependence in these functions. Stellar data are taken from \citet{marigo_chemical_2001} for low and intermediate mass stars and \citet{portinari_galactic_1998} for high mass stars.
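As an illustration of the first of these integrals, the sketch below evaluates $R$ for the population of stars above a given turnoff mass, using a Salpeter-slope IMF and toy remnant masses ($0.6{\rm M}_\odot$ white dwarfs below $8{\rm M}_\odot$, $1.4{\rm M}_\odot$ neutron stars above — illustrative values only, not the metallicity-dependent tables used in the model):

```python
from scipy.integrate import quad

def recycled_fraction(M_turnoff, M_lo=0.1, M_hi=120.0):
    """R for a population whose stars above M_turnoff have died.  The IMF is
    phi(M) ~ M^-1.35 with integral phi dM = 1, so the population has unit
    total mass (the number of stars per unit mass interval being phi(M)/M,
    matching the dM/M measure in the text)."""
    A = 1.0 / quad(lambda M: M ** -1.35, M_lo, M_hi)[0]
    M_rem = lambda M: 0.6 if M < 8.0 else 1.4   # toy remnant masses
    f = lambda M: (M - M_rem(M)) * A * M ** -1.35 / M
    if M_turnoff < 8.0:  # split at the remnant-mass discontinuity for accuracy
        return quad(f, M_turnoff, 8.0)[0] + quad(f, 8.0, M_hi)[0]
    return quad(f, M_turnoff, M_hi)[0]
```

As expected, $R$ grows as the turnoff mass decreases, i.e. as progressively longer-lived stars return their envelopes.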
In {\sc Galform}\ the evolution of gas and stellar masses in a galaxy are controlled by the following equations\footnote{These are identical to those given in \protect\citeauthor{cole_hierarchical_2000}~(\citeyear{cole_hierarchical_2000}; their equations 4.6 and 4.8) except for the explicit inclusion of the recycling terms---\protect\cite{cole_hierarchical_2000} included these using the instantaneous recycling approximation.}:
\begin{eqnarray}
\dot{M}_\star & = & {M_{\rm gas} \over \tau_\star} - \dot{M}_{\rm R} \\
\dot{M}_{\rm gas} & = & -(1+\beta^\prime){M_{\rm gas} \over \tau_\star} + \dot{M}_{\rm R} + \dot{M}_{\rm infall}.
\end{eqnarray}
where
\begin{equation}
\tau_\star = \left\{ \begin{array}{ll} \epsilon_\star^{-1} \tau_{\rm disk} \left({V_{\rm disk} \over 200\hbox{km s}^{-1}}\right)^{\alpha_\star} & \hbox{for disks} \\
f_{\rm dyn} \tau_{\rm bulge} & \hbox{for bursts},
\end{array} \right.
\end{equation}
is the star formation timescale, $\tau_{\rm disk}$ is the dynamical time at the disk half-mass radius, $\tau_{\rm bulge}$ is the dynamical time at the bulge half-mass radius, $f_{\rm dyn}=2$ and $\beta^\prime$ quantifies the strength of supernova feedback (see \S\ref{sec:Feedback}). In \cite{cole_hierarchical_2000}, the instantaneous recycling approximation implies that $\dot{M}_{\rm R}\propto M_{\rm gas} / \tau_\star$, and the cosmological infall term $\dot{M}_{\rm infall}$ is approximated as being constant over each short timestep. This permits a simple solution to these equations. In our case, we retain the assumption of constant $\dot{M}_{\rm infall}$ and further assume that the mass recycling rate, $\dot{M}_{\rm R}$, can be approximated as being constant throughout the timestep\footnote{This will be approximately true if the timestep is sufficiently short that $\ddot{R}\Delta t \ll \dot{R}$.}. We therefore write
\begin{equation}
\dot{M}_{\rm R} = {M_{\rm R,past} + M_{\rm R,now} \over \Delta t},
\end{equation}
where $\Delta t$ is the timestep,
\begin{equation}
M_{\rm R,past} = \int_{t_0}^{t_0+\Delta t} {\rm d} t^{\prime\prime} \int_0^{t_0} {\rm d} t^\prime \dot{M}_\star(t^\prime) \dot{R}(t^{\prime\prime}-t^\prime)
\end{equation}
is the mass of gas returned to the \ISM\ from populations of stars formed in previous timesteps (and is trivially computed from the known star formation rate of the galaxy on past timesteps) and
\begin{equation}
M_{\rm R,now} = \int_{t_0}^{t_0+\Delta t} {\rm d} t^\prime \int_{t^\prime}^{t_0+\Delta t} {\rm d} t^{\prime\prime} \dot{M}_\star(t^\prime) \dot{R}(t^{\prime\prime}-t^\prime),
\end{equation}
is the mass returned to the \ISM\ by star formation during the current timestep. With these approximations, the gas equations always have the solution
\begin{eqnarray}
M_{\rm gas}(t) = M_{\rm gas 0} \exp\left(-{t\over \tau_{\rm eff}}\right) + \dot{M}_{\rm input} \tau_{\rm eff}\left[1-\exp\left(-{t\over \tau_{\rm eff}}\right)\right],
\end{eqnarray}
where $M_{\rm gas 0}$ is the mass of gas at time $t=0$ (measured from the start of the timestep), and
\begin{eqnarray}
\dot{M}_{\rm input} &=& \dot{M}_{\rm infall} \nonumber \\
& & + \left\{\left[{M_{\rm gas 0} \over \tau_{\rm eff}}-{M_{\rm R,past}\over\Delta t}\right]I_{\rm R1}(\Delta t,\tau_{\rm eff}) \right. \nonumber \\
& & \left. + {M_{\rm R,past}\over\Delta t} I_{\rm R0}(\Delta t)\right\} \nonumber \\
& & \times \left\{ (1+\beta^\prime) + [I_{\rm R1}(\Delta t,\tau_{\rm eff}) -I_{\rm R0}(\Delta t)]/\Delta t \right\}^{-1}
\end{eqnarray}
where
\begin{eqnarray}
I_{\rm R0}(t) &=& \int_0^t R(t-t^\prime) {\rm d} t^\prime, \\
I_{\rm R1}(t,\tau) &=& \int_0^t \exp(-t^\prime/\tau) R(t-t^\prime) {\rm d} t^\prime.
\end{eqnarray}
In the above, the effective e-folding timescale for star formation (accounting for \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ driven outflows), $\tau_{\rm eff}$, is given by
\begin{equation}
\tau_{\rm eff} = {\tau_\star \over 1 + \beta^\prime}
\end{equation}
where $\beta^\prime$ measures the strength of \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ feedback and is defined below in eqn.~(\ref{eq:Beta}).
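The two kernels $I_{\rm R0}$ and $I_{\rm R1}$ defined above are simple convolution integrals of the return fraction and can be evaluated by quadrature; a sketch with analytic checks (Python with scipy; the constant and linear $R$ used in the checks are stand-ins for the real tabulated function):

```python
import math
from scipy.integrate import quad

def I_R0(R, t):
    """I_R0(t) = integral from 0 to t of R(t - t') dt'."""
    return quad(lambda tp: R(t - tp), 0.0, t)[0]

def I_R1(R, t, tau):
    """I_R1(t, tau) = integral from 0 to t of exp(-t'/tau) R(t - t') dt'."""
    return quad(lambda tp: math.exp(-tp / tau) * R(t - tp), 0.0, t)[0]
```

For $R(u)=u$ one has $I_{\rm R0}(t)=t^2/2$, and for $R(u)=1$, $I_{\rm R1}(t,\tau)=\tau(1-{\rm e}^{-t/\tau})$, which check the quadrature.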
The evolution of the metal mass is treated in a similar way, assuming a constant rate of input of metals from infall, star formation from previous timesteps and star formation from the current timestep. Metals in the cold gas reservoir of a galaxy are assumed to be uniformly mixed into the gas, such that the reservoir has a uniform metallicity. Metals then flow from the cold gas reservoir into the stellar phase and out into the reheated reservoir at rates proportional to the star formation rate and mass outflow rate respectively, with the constant of proportionality being the cold gas metallicity. Material recycled from stars to the cold phase carries with it metals corresponding to the original metallicity of those stars, augmented by the appropriate metal yield. Finally, gas infalling from the surrounding halo may have been enriched in metals by previous galaxy formation and so deposits metals into the cold phase gas at a rate proportional to the mass infall rate, with the constant of proportionality equal to the (assumed uniform) metallicity of the notional profile gas. Apart from the fact that metals from stellar recycling and yields are not added instantaneously to the cold reservoir, this treatment of metals remains identical to that of \cite{cole_hierarchical_2000}. The net rate of metal mass input to the cold phase (from both cosmological infall and return from stars) is
\begin{eqnarray}
\dot{M}_{Z_i {\rm input}} &=& \dot{M}_{Z_i {\rm infall}} \nonumber \\
& &+ {[{M_{Z_i {\rm gas 0}}\over\tau_{\rm eff}}-{M_{Z_i {\rm R}}^{\rm past}\over\Delta t}]I_{\rm R1}(\Delta t,\tau_{\rm eff}) + {M_{Z_i {\rm R}}^{\rm past}\over\Delta t} I_{\rm R0}(\Delta t) \over \Delta t [(1+\beta) + (I_{\rm R1}(\Delta t,\tau_{\rm eff}) -I_{\rm R0}(\Delta t))\Delta t]} \nonumber \\
& & + {[{M_{\rm gas 0}\over\tau_{\rm eff}}-{M_{\rm R}^{\rm past}\over\Delta t}]I_{\rm p1}(\Delta t,\tau_{\rm eff}) + {M_{\rm R}^{\rm past}\over\Delta t} I_{\rm p0}(\Delta t) \over \Delta t [(1+\beta) + (I_{\rm p1}(\Delta t,\tau_{\rm eff}) -I_{\rm p0}(\Delta t))\Delta t]},
\end{eqnarray}
where $M_{Z_i {\rm R}}^{\rm past}$ is the mass of metal $i$ recycled from star formation in previous timesteps and
\begin{eqnarray}
I_{\rm p0}(t) &=& \int_0^t p(t-t^\prime) {\rm d} t^\prime, \\
I_{\rm p1}(t,\tau) &=& \int_0^t \exp(-t^\prime/\tau) p(t-t^\prime) {\rm d} t^\prime.
\end{eqnarray}
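For a general recycling kernel these integrals have no closed form and must be evaluated numerically at each timestep. A minimal numerical sketch follows, using a toy exponential kernel as a stand-in for the real $R(t)$ (which is computed from the IMF and stellar lifetimes; the function names and parameter values here are illustrative only):

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def recycling_rate(t, t_return=0.01, r_total=0.4):
    # Toy recycled-mass-rate kernel R(t): a total fraction r_total returned
    # on an e-folding time t_return [Gyr]. Illustrative only -- the real
    # R(t) follows from the IMF and stellar lifetimes.
    return (r_total / t_return) * np.exp(-t / t_return)

def I_R0(t, n=4001):
    # I_R0(t) = int_0^t R(t - t') dt'
    tp = np.linspace(0.0, t, n)
    return _trapz(recycling_rate(t - tp), tp)

def I_R1(t, tau, n=4001):
    # I_R1(t, tau) = int_0^t exp(-t'/tau) R(t - t') dt'
    tp = np.linspace(0.0, t, n)
    return _trapz(np.exp(-tp / tau) * recycling_rate(t - tp), tp)
```

The analogous $I_{\rm p0}$ and $I_{\rm p1}$ integrals follow by substituting the yield kernel $p(t)$ for $R(t)$.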
\subsubsection{Star Bursts}
In previous implementations of {\sc Galform}, star bursts were assumed to have an exponentially declining star formation rate. Such a rate results from assuming an instantaneous star formation rate of
\begin{equation}
\dot{M}_\star = {M_{\rm cold}\over \tau_\star},
\label{eq:BurstSFLaw}
\end{equation}
where $\tau_\star$ is a star formation timescale (fixed throughout the duration of the burst), an outflow rate proportional to the star formation rate and a rate of recycling given by $R\dot{M}_\star$. The resulting differential equations have a solution with an exponentially declining star formation rate.
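For reference, the exponential solution is easy to recover: with a recycled fraction $R$ returned instantaneously and constant $\beta$, the cold gas mass obeys
\begin{equation}
\dot{M}_{\rm cold} = -(1+\beta-R)\,\dot{M}_\star = -{(1+\beta-R) \over \tau_\star} M_{\rm cold},
\end{equation}
so $M_{\rm cold}(t)=M_{\rm cold}(0)\exp(-t/\tau_{\rm e})$ with $\tau_{\rm e}=\tau_\star/(1+\beta-R)$, and $\dot{M}_\star=M_{\rm cold}/\tau_\star$ indeed declines exponentially.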
When the instantaneous recycling approximation is dropped, the rate of recycling is no longer proportional to the star formation rate and the differential equations no longer have an exponential solution. We choose to retain the original star formation law (eqn.~\ref{eq:BurstSFLaw}) and solve the differential equations to determine the star formation rate, outflow rate etc.\ as a function of time in the burst. The resulting set of equations has solutions identical to those in \S\ref{sec:NonInstGasEq} but with zero cosmological infall terms. Recycled material and the effects of feedback (see \S\ref{sec:Feedback}) are applied to the gas in the burst during the lifetime of the burst. Any recycling and feedback occurring after the burst is finished are applied to the disk.
In \citet{cole_hierarchical_2000}, while bursts were treated as having finite duration for the purposes of computing the luminosity of their stellar populations at some later time, the change in the mass of the galaxy due to the burst occurred instantaneously. We drop this approximation and correctly follow the change in mass of each component (gas, stars, outflow) during each timestep.
\subsection{Feedback}\label{sec:Feedback}
Feedback from supernovae is also modified to account for the delay between star formation and supernova. In \cite{cole_hierarchical_2000} the outflow rate due to supernovae feedback was
\begin{equation}
\dot{M}_{\rm out} = \beta \dot{M}_\star,
\end{equation}
where
\begin{equation}
\beta = \left({V_{\rm hot}\over V_{\rm galaxy}}\right)^{\alpha_{\rm hot}},
\end{equation}
$V_{\rm hot}$ and $\alpha_{\rm hot}$ are parameters of the model (we allow for two different values of $V_{\rm hot}$, one for quiescent star formation in disks and one for bursts of star formation) and $V_{\rm galaxy}$ is the circular velocity at the half-mass radius of the galaxy. The factor $\beta$ thus determines the strength of feedback and is a function of the depth of the galaxy's potential well. We modify this to
\begin{equation}
\dot{M}_{\rm out} = \beta^\prime \dot{M}_\star,
\end{equation}
where
\begin{equation}
\beta^\prime = \beta { \int_0^t \dot{\phi}_\star(t^\prime) \dot{N}_{\rm SNe}(t-t^\prime)\d t^\prime \over \dot{\phi}_\star(t) N^{\rm (II)}_{\rm SNe}(\infty) }
\label{eq:Beta}
\end{equation}
where $N_{\rm SNe}(t)$ is the total number of \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ (of all types) arising from a single population of stars after time $t$, such that the outflow rate scales in proportion to the current rate of supernovae but produces the same net mass ejection after infinite time (for constant $\beta$). In fact, we compute $\beta$ using the present properties of the galaxy at each timestep. The qualifier ``(II)'' appearing in the quantity $N^{\rm (II)}_{\rm SNe}(\infty)$ in the denominator of eqn.~(\ref{eq:Beta}) indicates that we normalize the outflow rate by reference to the number of supernovae from our adopted Population II \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ (see \S\ref{sec:StellarPop}). This results in the outflow correctly encapsulating any differences in the effective number of supernovae between Population II and III stars. For supernova rates, we assume that all stars with initial masses greater than $8M_\odot$ will result in a Type II supernova, allowing the rate to be found from the lifetimes of these stars and the adopted \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}. We adopt the calculations of \cite{nagashima_metal_2005} to compute the Type Ia \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ rate.
Since $\beta^\prime$ appears in the gas equations of \S\ref{sec:NonInstGasEq} but also depends on the star formation rate during the current timestep, we must iteratively seek a solution for $\beta^\prime$ which is self-consistent with the star formation rate. We find that a simple iterative procedure, with an initial guess of $\beta^\prime=\beta$, converges quickly.
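This fixed-point iteration can be sketched as follows. The two callables are hypothetical stand-ins for the full gas-equation solve and the SNe-rate ratio multiplying $\beta$ in eqn.~(\ref{eq:Beta}); they are not part of the actual code's interface:

```python
def solve_beta_prime(beta, sfr_given_beta, sne_ratio_given_sfr,
                     tol=1e-6, max_iter=100):
    # Fixed-point iteration for the self-consistent beta'.
    # sfr_given_beta(beta') -> star formation rate for this timestep;
    # sne_ratio_given_sfr(sfr) -> the SNe-rate ratio in eqn (Beta).
    beta_prime = beta                      # initial guess beta' = beta
    for _ in range(max_iter):
        sfr = sfr_given_beta(beta_prime)   # re-solve the gas equations
        new = beta * sne_ratio_given_sfr(sfr)
        if abs(new - beta_prime) < tol:
            return new
        beta_prime = new
    return beta_prime
```

Convergence is rapid whenever the dependence of the star formation rate on $\beta^\prime$ is weak over a single timestep, which is the regime encountered in practice.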
When gas is driven out of a galaxy in this way it can be either reincorporated into the $M_{\rm reheated}$ reservoir in the notional hot gas profile of the current halo, or it can be expelled from the halo altogether and allowed to reaccrete only further up the hierarchy once the potential well has become deeper.
We assume that the expelled fraction is given by
\begin{equation}
f_{\rm exp}=\exp\left( - {\lambda_\phi V^2 \over \langle e \rangle} \right),
\end{equation}
such that the rate of mass input to the reheated reservoir is
\begin{equation}
\dot{M}_{\rm reheated} = (1 - f_{\rm exp}) \beta^\prime \dot{M}_\star.
\end{equation}
Here, $\lambda_\phi$ is a dimensionless parameter relating the depth of the potential well to $V^2$ (we set $\lambda_\phi=1$ always), $V$ is the circular velocity of the galaxy disk or bulge (for quiescent or bursting star formation respectively) and $\langle e \rangle$ is the mean energy per unit mass of the outflowing material. We further assume
\begin{equation}
\langle e \rangle ={1\over 2} \lambda_{\rm expel} V^2,
\end{equation}
where $\lambda_{\rm expel}$ is a parameter of order unity relating the energy of the outflowing gas to the potential of the host galaxy, and will be treated as a free parameter to be constrained from observations (we actually allow for $\lambda_{\rm expel}$ to have different values for quiescent and bursting star formation; see \S\ref{sec:Selection}). We then proceed to the parent halo and allow a fraction
\begin{equation}
f_{\rm acc} = \exp\left( - {V_{\rm max}^2 \over \langle e \rangle} \right) - \exp\left( - {V_{\rm v}^2 \over \langle e \rangle } \right)
\end{equation}
to be reaccreted into the hot gas reservoir of the notional profile, where $V_{\rm max}$ is the maximum of $\sqrt{\lambda_{\rm expel}} V$ and any parent halo $V_{\rm v}$ yet found. We then proceed to the parent's parent and repeat the accretion procedure, continuing until the base of the tree is reached. In this way, all of the gas will be reaccreted if the potential well becomes sufficiently deep.
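Note that since $\langle e \rangle = {1\over 2}\lambda_{\rm expel} V^2$, the exponent $\lambda_\phi V^2/\langle e \rangle$ reduces to $2\lambda_\phi/\lambda_{\rm expel}$, independent of $V$. The level-by-level reaccretion walk can then be sketched as below; the plain list of parent virial velocities is a simplified stand-in for the real merger-tree traversal:

```python
import math

def reaccreted_fractions(v_galaxy, lambda_expel, parent_vvirs):
    # Walk up the hierarchy of parent halos, computing the fraction of
    # the expelled gas reaccreted at each level following
    #   f_acc = exp(-V_max^2/<e>) - exp(-V_v^2/<e>).
    # parent_vvirs lists the virial velocities V_v from the first parent
    # halo up to the base of the tree (illustrative interface only).
    e_mean = 0.5 * lambda_expel * v_galaxy ** 2
    v_max = math.sqrt(lambda_expel) * v_galaxy
    fractions = []
    for v_v in parent_vvirs:
        f_acc = math.exp(-v_max ** 2 / e_mean) - math.exp(-v_v ** 2 / e_mean)
        fractions.append(max(f_acc, 0.0))
        v_max = max(v_max, v_v)   # V_max is the largest velocity yet found
    return fractions
```

As the text notes, the successive terms telescope, so all of the gas is eventually reaccreted if the potential well becomes sufficiently deep.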
\subsection{AGN feedback}\label{sec:AGNFeedback}
In recent years, the possibility that feedback from \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN} plays a significant role in shaping the properties of a forming galaxy has come to the forefront \pcite{croton_many_2006,bower_breakinghierarchy_2006,somerville_semi-analytic_2008}. We adopt the black hole growth model of \cite{malbon_black_2007} and the \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback model of \cite{bower_breakinghierarchy_2006} as modified by \cite{bower_flip_2008}. The reader is referred to those papers for a full description of our implementation of \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback.
\subsection{Stellar Populations}\label{sec:StellarPop}
We consider both Pop~II and Pop~III stars. To compute luminosities of Population II stellar populations we employ the most recent version\footnote{Specifically, {\tt v2.0} downloaded from {\tt http://www.astro.princeton.edu/$\sim$cconroy/SPS/} with bug fixes up to January 7, 2010.} of the Conroy, Gunn \& White spectral synthesis library \pcite{conroy_propagation_2009}\footnote{For calculations of \protect\ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ evolution we \emph{do not} use the \protect\cite{conroy_propagation_2009} spectra because they assign stars hotter than $5\times 10^4$K pure blackbody spectra. This leads to an unrealistically large ionizing flux for young, metal rich populations. We therefore instead use the \protect\cite{bruzual_stellar_2003} spectral synthesis library for \protect\ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ evolution calculations.}. We adopt a Chabrier \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ \pcite{chabrier_galactic_2003}
\begin{equation}
\phi(M) \propto \left\{ \begin{array}{ll}\exp\left( - {1\over 2}{ [\log_{10} M/M_{\rm c}]^2 \over \sigma^2} \right) & \hbox{for } M\le 1M_\odot \\
M^{-\alpha} & \hbox{for } M>1M_\odot,
\end{array} \right.
\end{equation}
where $M_{\rm c}=0.08M_\odot$, $\sigma=0.69$, and the two expressions are forced to coincide at $1M_\odot$. Recycled mass fractions, yields and supernova rates are computed self-consistently from this \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ as described in \S\ref{sec:NonInstGasEq} and \S\ref{sec:Feedback} and are shown in Fig.~\ref{fig:Chabrier_NonInstant}.
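For illustration, the two-branch form with the continuity condition can be coded directly. The high-mass slope $\alpha$ is not quoted above; $\alpha=2.3$ (the standard Chabrier 2003 value for ${\rm d}N/{\rm d}M$) is assumed here:

```python
import numpy as np

M_C, SIGMA = 0.08, 0.69   # Chabrier parameters quoted in the text
ALPHA = 2.3               # high-mass slope for dN/dM (assumed; not
                          # stated in the text)

# Continuity constant forcing the two branches to coincide at 1 Msun
_MATCH = np.exp(-0.5 * np.log10(1.0 / M_C) ** 2 / SIGMA ** 2)

def chabrier_imf(m):
    # Un-normalized phi(M): lognormal below 1 Msun, power law above
    m = np.asarray(m, dtype=float)
    lognormal = np.exp(-0.5 * np.log10(m / M_C) ** 2 / SIGMA ** 2)
    powerlaw = _MATCH * m ** (-ALPHA)
    return np.where(m <= 1.0, lognormal, powerlaw)
```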
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 10mm 186mm 265mm,clip]{Plots/Chabrier_NonInstant.pdf}
\caption{Upper, middle and lower panels show the recycled fraction, yield and effective number of supernovae respectively for a Chabrier \protect\ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ (two metallicities, defined as the mass fraction of heavy elements, are shown: 0.0001\ as red lines and 0.0501\ as blue lines) and for metal free Population III stars with type ``A'' \protect\ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ from \protect\cite{tumlinson_chemical_2006} (green lines). \emph{Top-panel:} The fraction of mass from a single stellar population, born at time $t=0$, recycled to the interstellar medium after time $t$. \emph{Middle-panel:} The total metal yield from a single stellar population born at time, $t=0$, after time $t$ is shown by the solid lines. Dotted and dashed lines show the yield of oxygen and iron respectively. \emph{Lower-panel:} Cumulative energy input into the interstellar medium, expressed as the number of equivalent supernovae, per unit mass of stars formed as a function of time. The dotted line indicates the contribution from stellar winds, the solid line the contribution from Type~II supernovae and the dashed line the contribution from Type~Ia supernovae.}
\label{fig:Chabrier_NonInstant}
\end{figure}
For Population~III stars (which we assume form below a critical metallicity of $Z_{\rm crit}=10^{-4}Z_\odot$) we adopt \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ ``A'' from \cite{tumlinson_chemical_2006}. Spectral energy distributions for this \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ as a function of population age were kindly provided by J.~Tumlinson. Lifetimes for these stars are taken from the tabulation given by \cite{tumlinson_cosmological_2003}. Recycled fractions, yields and energies from pair-instability supernovae are computed using the data given by \cite{heger_nucleosynthetic_2002}. Recycled mass fractions, yields and supernova rates are computed self-consistently from these Population III stars as shown in Fig.~\ref{fig:Chabrier_NonInstant} by green lines.
\subsubsection{Extinction by Dust}\label{sec:DustModel}
\citet{cole_hierarchical_2000} introduced a model for dust extinction in galaxies which significantly improved upon earlier ``slab'' models. In that model the mass of dust is assumed to be proportional to the mass and metallicity of the \ISM, to be mixed homogeneously with the \ISM\ (possibly with a different scale height from the stars), and to have properties consistent with the extinction law observed in the Milky Way. To compute the extinction of any galaxy, a random inclination angle is selected and the extinction computed using the results of radiative transfer calculations carried out by \citet{ferrara_atlas_1999}.
Following \citet{gonzalez-perez_massive_2009}, we extend this model\footnote{An alternative method for rapidly computing dust extinction and re-emission within the {\sc Galform} +{\sc Grasil}\ frameworks based on artificial neural networks is described by \protect\cite{almeida_modellingdusty_2009}.} by assuming that some fraction, $f_{\rm cloud}$, of the dust is in the form of dense molecular clouds where the stars form (see \citealt{baugh_canfaint_2005,lacey__2010}). Stars are assumed to form in these clouds and to escape on a timescale of $\tau_{\rm quies}$ (for quiescent star formation in disks) or $\tau_{\rm burst}$ (for star formation in bursts), which is a parameter of the dust model \pcite{granato_infrared_2000}, so these stars spend a significant fraction of their lifetime inside the clouds. Since massive, short-lived stars dominate the \ifthenelse{\equal{\arabic{UVDone}}{0}}{ultraviolet (UV) \setcounter{UVDone}{1}}{UV}\ emission of a galaxy this enhances the extinction at short wavelengths.
To compute emission from dust we assume a far infrared opacity of
\begin{equation}
\kappa = \left\{ \begin{array}{ll}
\kappa_1 (\lambda/\lambda_1)^{-\beta_1} & \hbox{for } \lambda<\lambda_{\rm break} \\
\kappa_1 (\lambda_{\rm break}/\lambda_1)^{-\beta_1} (\lambda/\lambda_{\rm break})^{-\beta_2} & \hbox{for } \lambda>\lambda_{\rm break},
\end{array} \right.
\end{equation}
where the opacity normalization at $\lambda_1=30\mu$m is chosen to be $\kappa_1=140~$cm$^2$/g to reproduce the dust opacity model used in
{\sc Grasil}, as described in \cite{silva_modelingeffects_1998}. The dust grain
model in {\sc Grasil}\ is a slightly modified version of that proposed by \cite{draine_optical_1984}. Both the \cite{draine_optical_1984} and {\sc Grasil}\ dust models have
been adjusted to fit data on dust extinction and emission in the local \ISM\
(with much more extensive \ISM\ dust emission data being used by \citealt{silva_modelingeffects_1998}).
The normalization is set at 30$\mu$m because the dust opacity in the \cite{draine_optical_1984} and {\sc Grasil}\
models is well fit by a power-law longwards of that wavelength, but not shortwards. The dust luminosity is then assumed to be
\begin{equation}
L_\nu = 4\pi\kappa(\nu)B_\nu(T) M_{\rm Z,gas},
\end{equation}
where $B_\nu(T) = [2{\rm h}\nu^3/{\rm c}^2]/[\exp({\rm h}\nu/{\rm k}T)-1]$ is the Planck blackbody spectrum and $M_{\rm Z,gas}$ is the mass of metals in gas. The dust temperature, $T$, is chosen such that the bolometric dust luminosity equals the luminosity absorbed by dust.
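The temperature solve is a one-dimensional root find, since the bolometric emission is monotonic in $T$. A sketch in cgs units using the burst opacity parameters of Table~\ref{tb:DustParams} (the wavelength integration range and bisection bounds are illustrative choices, not values from the text):

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs: erg s, cm/s, erg/K

def _trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def kappa_fir(lam_um, kappa1=140.0, lam1=30.0, lam_break=100.0,
              beta1=1.6, beta2=1.6):
    # Broken power-law opacity [cm^2/g]; defaults are the burst values
    lam_um = np.asarray(lam_um, dtype=float)
    low = kappa1 * (lam_um / lam1) ** (-beta1)
    high = (kappa1 * (lam_break / lam1) ** (-beta1)
            * (lam_um / lam_break) ** (-beta2))
    return np.where(lam_um < lam_break, low, high)

def planck_nu(nu, temp):
    x = np.minimum(H * nu / (KB * temp), 700.0)   # avoid overflow
    return (2.0 * H * nu ** 3 / C ** 2) / np.expm1(x)

def dust_bolometric(temp, m_z_gas, n=4000):
    # L = int 4 pi kappa(nu) B_nu(T) M_Zgas dnu over 1--3000 microns
    lam_um = np.logspace(np.log10(3000.0), 0.0, n)  # nu ascending
    nu = C / (lam_um * 1.0e-4)
    l_nu = 4.0 * np.pi * kappa_fir(lam_um) * planck_nu(nu, temp) * m_z_gas
    return _trapz(l_nu, nu)

def dust_temperature(l_absorbed, m_z_gas, t_lo=3.0, t_hi=300.0):
    # Bisection on T: bolometric emission is monotonic in temperature
    for _ in range(60):
        t_mid = 0.5 * (t_lo + t_hi)
        if dust_bolometric(t_mid, m_z_gas) < l_absorbed:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```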
Values of the parameters used in dust model are given in Table~\ref{tb:DustParams} and were found by \citet{gonzalez-perez_massive_2009} to give the best match to the results of the full {\sc Grasil}\ model.
\begin{table}
\caption{Parameters of the dust model used throughout this work. The parameters are defined in \S\protect\ref{sec:DustModel}.}
\label{tb:DustParams}
\begin{center}
\begin{tabular}{lc}
\hline
{\bf Parameter} & {\bf Value} \\
\hline
$f_{\rm cloud}$ & 0.25 \\
$r_{\rm burst}$ & 1.0 \\
$\tau_{\rm quies}$ & 1~Myr \\
$\tau_{\rm burst}$ & 1~Myr \\
$\lambda_{\rm 1, disk}$ & 30$\mu$m \\
$\lambda_{\rm break, disk}$ & 10000$\mu$m \\
$\beta_{\rm 1, disk}$ & 2.0 \\
$\beta_{\rm 2, disk}$ & 2.0 \\
$\lambda_{\rm 1, burst}$ & 30$\mu$m \\
$\lambda_{\rm break, burst}$ & 100$\mu$m \\
$\beta_{\rm 1, burst}$ & 1.6 \\
$\beta_{\rm 2, burst}$ & 1.6 \\
\hline
\end{tabular}
\end{center}
\end{table}
This extended dust model, including diffuse and molecular cloud dust components, provides a better match to the detailed radiative transfer calculation of dust extinction carried out by the spectrophotometric code {\sc Grasil}\ \pcite{silva_modelingeffects_1998,baugh_predictions_2004,baugh_canfaint_2005,lacey_galaxy_2008} while being orders of magnitude faster, although it does not capture details such as \ifthenelse{\equal{\arabic{PAHDone}}{0}}{polycyclic aromatic hydrocarbon (PAH) \setcounter{PAHDone}{1}}{PAH}\ features.
\cite{fontanot_evaluating_2009} have explored similar models which aim to reproduce the results of {\sc Grasil}\ using simple, analytic prescriptions. They found that by fitting the results from {\sc Grasil}\ they were able to obtain a better match to the extinction in galaxies than previous, simplistic models of dust extinction had been able to attain. In this respect, our conclusions are in agreement with theirs---the model we describe here provides a significantly better match to the results of the full {\sc Grasil}\ model than, for example, the dust extinction model described by \cite{cole_hierarchical_2000}.
At high redshifts model galaxies often undergo periods of near continuous bursting as a result of experiencing disk instabilities on each subsequent timestep. This rather chaotic period of evolution is not well modelled presently---it is treated as a sequence of quiescent gas accretion periods punctuated by instability-triggered bursts while in reality we expect it to correspond more closely to a near continuous, high star formation rate mode somewhere in between the quiescent and bursting behaviour. While our model probably estimates the total amount of star formation during this period reasonably well (as it is controlled primarily by the cosmological infall rate and degree of outflow due to supernovae) we suspect that it does a rather poor job of accounting for dust extinction. After each burst the gas (and hence dust) content of each galaxy is reduced to zero, resulting in no extinction. Our model therefore tends to contain too many dust-free galaxies at high redshifts. To counteract this effect we force galaxies in this regime to be observed during a bursting phase, so that they always experience some dust extinction.
Dust remains one of the most challenging aspects of galaxies to model. We will return to aspects of our model related to dust (utilizing the more detailed {\sc Grasil}\ model) in a future work, but note that even this is unlikely to be sufficient---what is needed is a better understanding of the complicated distribution of dust within galaxies, particularly during these early, chaotic phases.
Indeed, the distribution of star formation within galaxies at $z=3$ to 5 has recently come within reach of observational studies \pcite{stark_formation_2008,elmegreen_bulge_2009,lehnert_physical_2009,swinbank_???_2009}. It seems that this aspect of the model is supported by observational data. A future project will be to compare the internal properties of observed galaxies at these redshifts with those predicted by the model.
\subsection{Absorption by the IGM}
Where necessary, we model the attenuation of galaxy \ifthenelse{\equal{\arabic{SEDDone}}{0}}{spectral energy distribution (SED) \setcounter{SEDDone}{1}}{SED} s by neutral hydrogen in the intervening \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ using the model of \cite{meiksin_colour_2006}.
\section{Model Selection}\label{sec:Selection}
The model described above has numerous free parameters which reflect our ignorance of the details of certain physical processes or order unity uncertainties in (e.g. geometrical) coefficients. To determine suitable values for these parameters we appeal to a broad range of observational data and search the model parameter space to find the best fit model.
The problem of how to implement the computationally challenging problem of fitting a complicated semi-analytic model with numerous free parameters to observational data has been considered before by \cite{henriques_monte_2009} and \cite{bower_parameter_2010}. To constrain model parameters in this work we use the ``Projection Pursuit'' method of \cite{bower_parameter_2010}. We give a brief description of that method here and refer the reader to \cite{bower_parameter_2010} for complete details.
Running a single set of model parameters, including all of the redshifts and wavelengths required for our analysis, is a relatively slow process. In particular, running a model with self-consistently computed \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ evolution is entirely impractical for a parameter space search. We therefore chose to run models without a self-consistently computed \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ or photoionizing background. Even then, each model takes around 2 hours to run on a fast computer. To mimic the effects of a photoionizing background we adopt the ``$V_{\rm cut}$--$z_{\rm cut}$'' model described by \cite{font_modelingmilky_2009}, which they show reproduces the results of the self-consistent calculation quite well. Briefly, this model inhibits cooling of gas in halos with virial velocities below $V_{\rm cut}$ at redshifts below $z_{\rm cut}$. We then include $V_{\rm cut}$ and $z_{\rm cut}$ as parameters in our fitting process.
This approach is not ideal, but is required due to computational limitations. \cite{bower_parameter_2010} show that local (i.e. low redshift) properties of the model are not significantly affected by the inclusion of self-consistent reionization (i.e. those data do not constrain $V_{\rm cut}$ or $z_{\rm cut}$), and, where they are, the ``$V_{\rm cut}$--$z_{\rm cut}$'' model provides a reasonable approximation \pcite{font_modelingmilky_2009}. In any case, as we will discuss below, some manual tuning of parameters is still required after the automated search of parameter space is completed. This manual search is then conducted using the fully self-consistent \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ calculation.
We envision the problem in terms of a multi-dimensional parameter space into which constraints from observational data are mapped. Given the large number of model parameters and the fact that running a single realization of the model requires a significant amount of computer time, we can not perform a simple grid-search of the parameter space on a sufficiently fine grid. Instead, we begin by specifying plausible ranges for model parameters. The ranges considered for each parameter are listed in Table~\ref{tb:PCAranges}---for some parameters we choose to consider the logarithm of the parameter as the variable in our parameter space, to allow for efficient exploration of several decades of parameter value. We scale each model parameter such that it varies between 0 and 1 across this allowed range. We then generate a set of points in this limited and scaled model parameter space using Latin hypercube sampling \pcite{mckay_comparison_1979}, thereby ensuring an efficient coverage of the parameter space. A model is run for each set of parameters and a goodness of fit measure computed.
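The Latin hypercube construction itself is straightforward to sketch: each of the $n$ samples occupies a distinct one of $n$ equal bins in every dimension, guaranteeing stratified coverage:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    # Latin hypercube sample in the unit cube [0,1]^n_dims: each
    # dimension is split into n_samples equal bins and every bin is
    # used exactly once (McKay et al. 1979).
    rng = np.random.default_rng(seed)
    # one random point inside each bin, per dimension
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_dims))) / n_samples
    for d in range(n_dims):          # decouple bin order across dimensions
        rng.shuffle(u[:, d])
    return u
```

The unit-cube samples map to physical parameter values via ${\rm low} + ({\rm high}-{\rm low})\,u$ for each parameter's allowed range.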
\begin{table}
\caption{The allowed ranges for each parameter in our fitting parameter space. For some parameters, we choose to use the logarithm of the parameter to allow efficient exploration of several decades of parameter value.}
\label{tb:PCAranges}
\begin{center}
\begin{tabular}{lr@{.}lr@{.}l}
\hline
{\bf Parameter} & \multicolumn{2}{c}{{\bf Minimum}} & \multicolumn{2}{c}{{\bf Maximum}} \\
\hline
\input{Data/PCA_Table_Ranges}
\hline
\end{tabular}
\end{center}
\end{table}
The choice of a goodness of fit measure is important and non-trivial (see \citealt{bower_parameter_2010}). We do not expect our model to fit all of the constraints in a statistically rigorous manner, as the model is clearly approximate. The Bayesian approach to this issue is to assign a prior assessment of the reliability of the model to each of the data set comparisons and to define a correlation matrix reflecting the a priori connections between datasets. This concept (referred to as ``model discrepancy'' in the statistical literature) is discussed in detail for $z=0$ luminosity function constraints in \cite{bower_parameter_2010}. However, in the present paper, we needed a simpler approach to the problem. We therefore adopted a non-Bayesian methodology of simply summing $\chi^2$ for each dataset that we used. This has the advantage of simplicity, but clearly there may be more appropriate choices for the relative weighting of different data sets: we will explore this issue in a future paper. There is little doubt that a better measure of goodness of fit could be found. In particular, the relative weightings given to each dataset should really reflect how well we think the model performs in that particular quantity, how accurately we think that we have been able to match any observational selection and, inevitably, how much we believe the data itself. These are extremely thorny issues to which, at present, we do not have a good answer.
Specifically, in this work, the goodness of fit measure is taken to be
\begin{equation}
\widetilde{\chi}^2 = \sum_i w_i {\chi^2_i \over N_i},
\end{equation}
where $\chi^2_i$ is the usual goodness of fit measure for dataset $i$, $N_i$ is the number of degrees of freedom in that dataset and $w_i$ is a weight assigned to each dataset. The sum is taken over all datasets shown in \S\ref{sec:Results} and, additionally, cosmological parameters were allowed to vary within the $2\sigma$ intervals permitted by the \cite{dunkley_five-year_2009} constraints, and were included in the goodness of fit measure using a Gaussian prior. When computing $\chi^2$ for each dataset we estimate the error in each datum to be the sum in quadrature of the experimental error and any statistical error present in the model due to the finite number of Monte Carlo merger tree realizations that we are able to carry out. This ensures that two models which differ by an amount comparable to the random noise in the models have similar values of $\chi^2$. The specific datasets used, along with the weights assigned to them (estimated using our best judgement of the reliability of each dataset and {\sc Galform}'s ability to model it) are listed in Table~\ref{tb:constraints}.
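The combination of datasets can be sketched as below. The dictionary layout is a hypothetical interface, and using the number of data points for $N_i$ is a simplification of the degrees-of-freedom count:

```python
import numpy as np

def weighted_goodness_of_fit(datasets):
    # tilde-chi^2 = sum_i w_i chi2_i / N_i, with observational and model
    # (Monte Carlo merger-tree) errors added in quadrature. Each dataset
    # is a dict with 'model', 'data', 'data_err', 'model_err', 'weight'.
    total = 0.0
    for d in datasets:
        model = np.asarray(d['model'], dtype=float)
        data = np.asarray(d['data'], dtype=float)
        var = (np.asarray(d['data_err'], dtype=float) ** 2
               + np.asarray(d['model_err'], dtype=float) ** 2)
        chi2 = np.sum((model - data) ** 2 / var)
        total += d['weight'] * chi2 / data.size
    return total
```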
\begin{table*}
\caption{The set of datasets used as constraints on our model, together with a reference to where the dataset is shown in this paper and the value of the weight, $w_i$, assigned to each constraint.}
\label{tb:constraints}
\begin{tabular}{lcr@{.}l}
\hline
{\bf Constraint} & {\bf Reference} & \multicolumn{2}{c}{Weight ($w_i$)} \\
\hline
\input{Data/Constraints_Table}
\hline
\end{tabular}
\end{table*}
Once a set of models has been run, a principal components analysis is performed on the parameters of those models with $\widetilde{\chi}^2$ values in the lower $10^{\rm th}$ percentile of all models to find which linear combinations of parameters provide the \emph{minimum} variance in goodness of fit. These are the parameter combinations that are most tightly constrained by the observational data. A principal component with low variance implies that this particular combination of the parameters is tightly constrained if the model is likely to produce an acceptable fit. Of course, even if this constraint is satisfied, a good model is not guaranteed; rather we can be confident that if it is not satisfied the fit will not be good\footnote{This is only strictly true if the relationships between $\widetilde{\chi}^2$ and the parameters are approximately linear and unimodal. If there exists a separate small island of good values somewhere, our \ifthenelse{\equal{\arabic{PCADone}}{0}}{principal components analysis (PCA) \setcounter{PCADone}{1}}{PCA}+Latin Hypercube method might happen to miss the region, or it might not exert sufficient pull on the \ifthenelse{\equal{\arabic{PCADone}}{0}}{principal components analysis (PCA) \setcounter{PCADone}{1}}{PCA}\ compared to the large region and might be
subsequently ignored. The advantage of the emulator approach used by \protect\cite{bower_parameter_2010} is that it gives an estimate of the error made by excluding regions from further evaluations.}. When analysing the acceptable region in this way, we also need to bear in mind that the \ifthenelse{\equal{\arabic{PCADone}}{0}}{principal components analysis (PCA) \setcounter{PCADone}{1}}{PCA}\ assumes that the relationships are linear, whereas \cite{bower_parameter_2010} show that the actual acceptable space is curved. This will prevent any of the suggested projections being arbitrarily thin and limit the accuracy of constraints. Nevertheless, the procedure substantially cuts down the volume of parameter space where model evaluations need to be run. These linear combinations are used to define rotated axes in the parameter space within which we select a new set of points, again using Latin hypercube sampling. The process is repeated until a suitably converged model is found\footnote{In practice these calculations were run on distributed computing resources (including machines at the ICC in Durham, TeraGrid and Amazon EC2). Each machine was given an initial small set of models to run. After running each model, the results were transferred back to a central server. Periodically, the server would collate all available results, perform the \ifthenelse{\equal{\arabic{PCADone}}{0}}{principal components analysis (PCA) \setcounter{PCADone}{1}}{PCA}\ and generate a new set of models which it then distributed to all active computing resources.}. This process is not fast, requiring around 150,000~CPU hours\footnote{The authors, feeling the need to help preserve our own small region of one realization of the Universe, purchased carbon offsets to counteract the carbon emissions resulting from this large investment of computing time.}, but does produce a model which is a good match to the input data.
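The PCA step can be sketched as follows: restrict to the best-fitting models, then diagonalize the covariance of their (pre-scaled) parameter vectors; the low-variance eigenvectors are the tightly constrained combinations (a minimal sketch, not the distributed implementation described above):

```python
import numpy as np

def constrained_directions(params, gof, percentile=10.0):
    # PCA of the parameter vectors of the best-fitting models.
    # params: (n_models, n_params), each parameter pre-scaled to [0, 1];
    # gof: (n_models,) goodness-of-fit values. Returns variances
    # (ascending) and the corresponding directions (rows); the
    # lowest-variance combinations are the most tightly constrained.
    good = params[gof <= np.percentile(gof, percentile)]
    cov = np.cov(good, rowvar=False)
    variances, directions = np.linalg.eigh(cov)   # ascending eigenvalues
    return variances, directions.T
```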
Figure~\ref{fig:Constraints} demonstrates the efficacy of our method using four 2D slices through the multi-dimensional parameter space. The colour scale in each panel shows constraints on two of the model parameters, while the projections below and to the left of the panel indicate the constraints on the indicated single parameter. Contours illustrate the relative number of model evaluations which were performed at each point in the plane. It can be clearly seen that our ``Projection Pursuit'' methodology concentrates model evaluations in those regions which are most likely to provide a good fit. The nominal best-fit model is indicated by a yellow star in each panel. Despite the large number of models run, we do not believe that this precise point should be considered the ``best'' model---the dimensionality of the parameter space is so large that we do not believe that it has been sufficiently well mapped to draw this conclusion. Additionally, we need a model discrepancy matrix---without this, we cannot say whether a model is acceptable (in the sense that it should only agree with the data as well as we expect given the level of approximation in the model). Without the discrepancy term, we will tend to overfit the model. Instead, we utilize these results to suggest the region of parameter space in which the best model is likely to be found. We then adjust parameters manually to find the final model (utilizing our intuition of how the model will respond to changes in parameters).
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=85mm,viewport=0mm 55mm 205mm 250mm,clip]{Plots/chi2_Vhotdisk_alphaHot} &
\includegraphics[width=85mm,viewport=0mm 55mm 205mm 250mm,clip]{Plots/chi2_alphareheat_alphacool.pdf} \\
\includegraphics[width=85mm,viewport=0mm 55mm 205mm 250mm,clip]{Plots/chi2_A_w.pdf} &
\includegraphics[width=85mm,viewport=0mm 55mm 205mm 250mm,clip]{Plots/chi2_lambdaExpelDisklambdaExpelBurst.pdf}
\end{tabular}
\end{center}
\caption{Constraints on model parameters shown as 2D slices through the multi-dimensional parameter space. In each panel, the colour scale indicates the value of $\widetilde{\chi}^2$ as shown by the bar above the panel, with the yellow star indicating the best-fit model. Each point in the plane is coloured to correspond to the minimum value of $\widetilde{\chi}^2$ found when projecting over all other dimensions of the parameter space. Contours illustrate the relative number of model evaluations at each point in the plane---from lightest to darkest line colour they correspond to 10, 30, 100 and 300 evaluations per grid cell. Most evaluations are carried out where the best model fits are found, indicating that our method is efficient in concentrating resources where good models are most likely to be found. To each side of the plane, the distribution of $\widetilde{\chi}^2$ is projected over one of the remaining dimensions to show constraints on the indicated parameter. \emph{Top left panel:} Shows the main parameters of the \protect\ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ feedback model, $V_{\rm hot,disk}$ and $\alpha_{\rm hot}$. \emph{Top right panel:} Shows critical parameters controlling the cooling and \protect\ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback models, $\alpha_{\rm reheat}$ and $\alpha_{\rm cool}$. \emph{Lower left panel:} Shows parameters of the adiabatic contraction model which have important consequences for the sizes of galaxies, $A_{\rm ac}$ and $w_{\rm ac}$. \emph{Lower right panel:} Shows parameters of the \protect\ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ feedback model that control the amount of material expelled from halos, $\lambda_{\rm expel,disk}$ and $\lambda_{\rm expel,burst}$.}
\label{fig:Constraints}
\end{figure*}
Interesting constraints and correlations can be seen in Figure~\ref{fig:Constraints}. For example, the combination $\alpha_{\rm hot}$--$V_{\rm hot,disk}$ is quite well constrained and somewhat anti-correlated (such that an increase in $\alpha_{\rm hot}$ can be played off against a decrease in $V_{\rm hot,disk}$). It is immediately clear, for example, that no good model can be found with $\lambda_{\rm expel,disk}\gsim1.5$ while $\lambda_{\rm expel,burst}$ is much less well constrained, but must be larger than about $1.5$.
The principal component vectors from the final set of 36,017\ models are shown in Table~\ref{tb:PCAmatrix}. We note here that these vectors are quite different from those found by \cite{bower_parameter_2010}. This is not too surprising as our implementation of {\sc Galform}\ is quite different from theirs and we constrain our model to a much broader collection of datasets. We will examine the \ifthenelse{\equal{\arabic{PCADone}}{0}}{principal components analysis (PCA) \setcounter{PCADone}{1}}{PCA}\ vectors in greater detail in a future paper, and so restrict ourselves to a brief discussion here. Taking the first \ifthenelse{\equal{\arabic{PCADone}}{0}}{principal components analysis (PCA) \setcounter{PCADone}{1}}{PCA}\ vector for example, we see that it is dominated by $z_{\rm cut}$, $\alpha_\star$ and $\alpha_{\rm hot}$. These parameters all have strong effects on the faint end of luminosity functions. Luminosity functions are abundant in our set of constraints and have been well measured. As such, they provide some of the strongest constraints on the model. It can be seen that an increase in $\alpha_{\rm hot}$, which will flatten the faint end slope of a luminosity function, has a similar effect as a decrease in $\alpha_\star$, which will preferentially reduce rates of star formation in low-mass galaxies and so also flatten the faint end slope. The second \ifthenelse{\equal{\arabic{PCADone}}{0}}{principal components analysis (PCA) \setcounter{PCADone}{1}}{PCA}\ component shows a strong but opposite dependence on $\Omega_{\rm b}$ and $\lambda_{\rm expel,burst}$. Increasing $\Omega_{\rm b}$ results in more fuel for galaxy formation, while increasing $\lambda_{\rm expel,burst}$ causes material to be lost by being expelled from halos.
As we continue to further \ifthenelse{\equal{\arabic{PCADone}}{0}}{principal components analysis (PCA) \setcounter{PCADone}{1}}{PCA}\ vectors the parameter combinations they represent become more complicated and difficult to interpret---the advantage of our methodology is that these complex interactions can be taken into account when exploring the model parameter space.
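The construction underlying Table~\ref{tb:PCAmatrix} amounts to standardizing the parameter vectors of acceptable models, eigen-decomposing their covariance, ranking components by variance and flagging elements with absolute value above $0.33$ as dominant. A minimal sketch follows; the random ``acceptable'' set and the deliberately correlated parameter pair are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented set of acceptable parameter vectors (models x parameters), with a
# deliberately correlated pair in columns 0 and 1.
acceptable = rng.random((500, 8))
acceptable[:, 1] = 0.7 * acceptable[:, 0] + 0.3 * rng.random(500)

# Standardise each parameter, then eigen-decompose the covariance matrix.
z = (acceptable - acceptable.mean(axis=0)) / acceptable.std(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(z, rowvar=False))
order = np.argsort(eigval)[::-1]  # rank by contribution to the variance
eigval, eigvec = eigval[order], eigvec[:, order]

# Dominant elements of each component: |value| > 0.33 (bold in the table).
dominant = [np.flatnonzero(np.abs(eigvec[:, k]) > 0.33) for k in range(8)]
```

In this toy example the first component is dominated by the correlated pair, mirroring how physically degenerate parameters group together in the leading components of the real analysis.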
The differences between our results and those of \mbox{\cite{bower_parameter_2010}} are interesting in their own right. For example, \cite{bower_parameter_2010} found two ``islands'' of good fit in the supernovae feedback parameter space ($V_{\rm hot,disk}$ and $V_{\rm hot,burst}$): a strong feedback island (corresponding approximately to what we find in this work) and a weak feedback island (which we do not find). The weak feedback island is ruled out in the present work as, while a good fit to the galaxy luminosity function can be found in it (as demonstrated by \citealt{bower_parameter_2010}), no good fit to, for example, galaxy sizes can be found.
\begin{sidewaystable*}
\caption{The principal components, rank ordered by their contribution to the variance, $\sigma^2$, from our models. In each row, the dominant elements (those with an absolute value in excess of $0.33$) are shown in bold.}
\label{tb:PCAmatrix}
\begin{tabular}{lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}l}
\hline
\input{Data/PCA_Table_Part1}
\end{tabular}
\end{sidewaystable*}
\begin{sidewaystable*}
\addtocounter{table}{-1}
\caption{\emph{(cont.)} The principal components, rank ordered by their contribution to the variance, $\sigma^2$, from our models. In each row, the dominant elements (those with an absolute value in excess of $0.33$) are shown in bold.}
\begin{tabular}{lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}lr@{.}l}
\hline
\input{Data/PCA_Table_Part2}
\end{tabular}
\end{sidewaystable*}
\section{Results}\label{sec:Results}
In this section we will begin by identifying the best-fit model and will then show results from that model compared to the observational data that were used to constrain the model parameters. With the exception of results shown in \S\ref{sec:Predictions}, all of the data shown in this section were used to constrain the model and, as such, the results do not represent predictions of the model. (In \S\ref{sec:GasPhases} we examine the distribution of gas between different phases as a function of halo mass, while in \S\ref{sec:ICL} we explore the fraction of stellar mass in the intracluster light component of halos. The data shown in these comparisons were \emph{not} used as constraints when searching for the best-fit model.) The overall best-fit model (i.e. that which best describes the union of all datasets) is shown by blue lines. Additionally, we show as magenta lines the best-fit model to each individual dataset (as described in the figure captions) for comparison. We do not claim that the following represents a complete census of the observational data that \emph{could} be used to constrain our galaxy formation model. Instead, we have selected data which spans a range of physical characteristics and redshifts that we think best constrains the physics of our model, while remaining within the limited (although substantial) computational resources at our disposal.
In addition to these best-fit models, we will, where possible, compare our current results with those from the previous implementation of {\sc Galform}\ described by \cite{bower_breakinghierarchy_2006}. Results from the \cite{bower_breakinghierarchy_2006} model are shown by green lines in each figure. We have not included figures for every constraint used in this work---specifically, in many cases we show examples of the constraints only for a limited number of magnitude or redshift ranges. However, all of the constraints used are listed in Table~\ref{tb:constraints} and are discussed in the text.
\subsection{Best Fit Model}
The resulting set of best-fit parameters is listed in Table~\ref{tb:BestFitParams}. We will not investigate the details of these results here, leaving an exploration of which data constrain which parameters and the possibility of alternative, yet acceptable, parameter sets to a future work. The best fit model turns out to be a reasonably good match to local luminosity data, galaxy colours, metallicities, gas content, supermassive black hole masses and constraints on the epoch of reionization, but to perform less well in matching galaxy sizes, clustering and the Tully-Fisher relation. In addition, luminosity functions become increasingly discrepant with the data as we move to higher redshifts. In the remainder of this section we will briefly discuss some important aspects of the best fit parameter set.
\begin{table}
\caption{Parameters of the best fit model used in this work and of the \protect\cite{bower_breakinghierarchy_2006} model. Note that the best-fit model listed here is one that includes self-consistent reionization and evolution of the \protect\ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ (see \S\ref{sec:IGM}) and which has been adjusted to also produce a reasonable reionization history (see \S\ref{sec:IGMResults}). It therefore does not correspond to the location of the best-fit model indicated in Fig.~\ref{fig:Constraints}. Where appropriate, references are given to the article, or section of this work, in which the parameter is described.}
\label{tb:BestFitParams}
\begin{center}
\begin{tabular}{cr@{.}lr@{.}lc}
\hline
& \multicolumn{4}{c}{{\bf Value}} & \\
\cline{2-5}
{\bf Parameter} & \multicolumn{2}{c}{{\bf This Work}} & \multicolumn{2}{c}{{\bf Bower06}} & {\bf Reference} \\
\hline
\hline
\multicolumn{6}{c}{\emph{Cosmological}} \\
$\Omega_0$ & 0&284 & 0&250\\
$\Lambda_0$ & 0&716 & 0&750 \\
$\Omega_{\rm b}$ & 0&04724 & 0&04500\\
$h_0$ & 0&691 & 0&730\\
$\sigma_8$ & 0&807 & 0&900 \\
$n_{\rm s}$ & 0&933 & 1&000 \\
\hline
\multicolumn{6}{c}{\emph{Gas Cooling Model}} \\
$\alpha_{\rm reheat}$ & 2&32 & 1&260 & \S\ref{sec:Reheating} \\
$\alpha_{\rm cool}$ & 0&550 & 0&580 & \S\ref{sec:AGNFeedback} \\
$\alpha_{\rm remove}$ & 0&102 & \multicolumn{2}{c}{N/A} & \S\ref{sec:Reheating} \\
$a_{\rm core}$ & 0&163 & 0&100 & \S\ref{sec:HotGasDist} \\
\hline
\multicolumn{6}{c}{\emph{Adiabatic Contraction}} \\
$A_{\rm ac}$ & 0&742 & 1&000 & \S\ref{sec:Sizes} \\
$w_{\rm ac}$ & 0&920 & 1&000 & \S\ref{sec:Sizes} \\
\hline
\multicolumn{6}{c}{\emph{Star Formation}} \\
$\epsilon_\star$ & 0&0152 & 0&0029 & \cite{cole_hierarchical_2000} \\
$\alpha_\star$ & -3&28 & -1&50 & \cite{cole_hierarchical_2000} \\
\hline
\multicolumn{6}{c}{\emph{Disk Stability}} \\
$\epsilon_{\rm d,gas}$ & 0&743 & 0&800\footnotemark & \S\ref{sec:MinorChanges} \\
\hline
\multicolumn{6}{c}{\emph{Supernovae Feedback}} \\
$V_{\rm hot,disk}$ & 358&0~km/s & 485&0~km/s & \S\ref{sec:Feedback} \\
$V_{\rm hot,burst}$ & 328&0~km/s & 485&0~km/s & \S\ref{sec:Feedback} \\
$\alpha_{\rm hot}$ & 3&36 & 3&20 & \S\ref{sec:Feedback} \\
$\lambda_{\rm expel,disk}$ & 0&785 & \multicolumn{2}{c}{N/A} & \S\ref{sec:Feedback} \\
$\lambda_{\rm expel,burst}$ & 7&36 & \multicolumn{2}{c}{N/A} & \S\ref{sec:Feedback} \\
\hline
\multicolumn{6}{c}{\emph{Ram Pressure Stripping}} \\
$\epsilon_{\rm strip}$ & 0&335 & \multicolumn{2}{c}{N/A} & \S\ref{sec:HotStrip} \\
\hline
\multicolumn{6}{c}{\emph{Merging}} \\
$f_{\rm ellip}$ & 0&0214 & 0&3000 & \cite{cole_hierarchical_2000} \\
$f_{\rm burst}$ & 0&335 & 0&100 & \cite{cole_hierarchical_2000} \\
$f_{\rm gas,burst}$ & 0&331 & 0&100 & \S\ref{sec:Unchanged} \\
$B/T_{\rm burst}$ & 0&672 & \multicolumn{2}{c}{N/A} & \S\ref{sec:Unchanged} \\
\hline
\multicolumn{6}{c}{\emph{Black Hole Growth}} \\
$\epsilon_\bullet$ & 0&0134 & 0&0398 & \S\ref{sec:AGNFeedback} \\
$\eta_\bullet$ & 0&0163 & \multicolumn{2}{c}{N/A} & \S\ref{sec:AGNFeedback} \\
$F_{\rm SMBH}$ & 0&0125 & 0&00500 & \cite{malbon_black_2007} \\
\hline
\end{tabular}
\end{center}
\end{table}
\footnotetext{The \protect\cite{bower_breakinghierarchy_2006} model used a single value of $\epsilon_{\rm d}$ for both gaseous and stellar disks.}
\begin{table*}
\caption{Parameters of the overall best fit model compared to those of models which best fit individual datasets (as indicated by column labels). Parameters which play a key role (as discussed in the relevant subsections of \S\ref{sec:Results}) in helping to obtain a good fit to each dataset are shown in bold type.}
\label{tb:BestModelParams}
\setlength\tabcolsep{1pt}
\begin{center}
\begin{tabular}{lr@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l}
\input{Data/BestModelsTable0}
\end{tabular}
\end{center}
\setlength\tabcolsep{5pt}
\end{table*}
\begin{table*}
\addtocounter{table}{-1}
\caption{\emph{(cont.)} Parameters of the overall best fit model compared to those of models which best fit individual datasets (as indicated by column labels). Parameters which play a key role in helping to obtain a good fit to each dataset are shown in bold type.}
\setlength\tabcolsep{1pt}
\begin{center}
\begin{tabular}{lr@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l@{\hspace{2pt}}r@{.}l}
\input{Data/BestModelsTable1}
\end{tabular}
\end{center}
\setlength\tabcolsep{5pt}
\end{table*}
The cosmological parameters are all close to the \ifthenelse{\equal{\arabic{WMAPDone}}{0}}{\emph{Wilkinson Microwave Anisotropy Probe} (WMAP) \setcounter{WMAPDone}{1}}{WMAP}\ five-year expectations (by construction). The parameters of the gas cooling model are all quite reasonable: the parameters $\alpha_{\rm reheat}$ and $\alpha_{\rm cool}$ are both of order unity as expected, $\alpha_{\rm remove}$ is somewhat smaller but still plausible, while the core radius $a_{\rm core}$ is around 22\% of the virial radius. The parameters of the adiabatic contraction model differ from those proposed by \cite{gnedin_response_2004} but are within the range of values found by \cite{gustafsson_baryonic_2006} when fitting the profiles of dark matter halos in simulations including galaxy formation with feedback. The disk stability parameter, $\epsilon_{\rm d,gas}$, is close to, albeit lower than, the value of $0.9$ suggested by the theoretical work of \cite{christodoulou_new_1995}. The stripping parameter, $\epsilon_{\rm strip}$, is of order unity as expected.
The star formation parameters are reasonable, implying a low efficiency of star formation. The feedback parameters, $V_{\rm hot,disk|burst}$, are much lower than the value of 485~km/s required by \cite{bower_breakinghierarchy_2006} and significantly closer to the value of 200~km/s adopted by \cite{cole_hierarchical_2000}. This is desirable as values around 200~km/s already stretch the \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ energy budget. We also note that the value of $\alpha_{\rm hot}$ is lower than that required by \cite{bower_breakinghierarchy_2006} and closer to the ``natural'' value of $2$, which would imply an efficiency of supernovae energy coupling into feedback that was independent of galaxy properties. The expulsion parameters, $\lambda_{\rm expel,disk|burst}$, are of order unity, as expected.
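As a reminder of how these parameters act, the standard {\sc Galform}\ parameterization (cf. \citealt{cole_hierarchical_2000}) couples the outflow rate to the star formation rate $\psi$ through a mass-loading factor of the approximate form
\begin{equation}
\beta \equiv \dot{M}_{\rm outflow}/\psi = \left(V_{\rm disk}/V_{\rm hot}\right)^{-\alpha_{\rm hot}},
\end{equation}
so that $\alpha_{\rm hot}=2$ makes the kinetic power carried by the outflow per unit star formation, $\propto \beta V_{\rm disk}^{2}$, independent of $V_{\rm disk}$, corresponding to a fixed fraction of the available supernova energy being coupled into feedback.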
The parameters of the merging model imply that mass ratios of 1:10 or greater are required for a major merger, a little low, but within the range of plausibility, while only 1:5 or greater mergers trigger a burst. Minor mergers in which the primary galaxy has at least 34\% gas by mass and at least 34\% of its mass in a disk can also lead to bursts.
Finally, the black hole growth parameters are quite reasonable: black holes radiate at about 9\% of the Eddington luminosity, 5\% of cooling gas reaches the black hole during radio mode feedback and around 0.5\% of gas in a merging event is driven into the black hole.
Overall, the parameters of the best fit model seem reasonable on physical grounds. Given the large dimensionality of the parameter space, the complexity of the model and the various assumptions used in modelling complex physical processes we would not consider these values to be either precise or accurate (which is why we do not quote error bars here), but to merely represent the most plausible values within the context of the {\sc Galform}\ semi-analytic model of galaxy formation.
In addition to this overall best-fit model, we show in Table~\ref{tb:BestModelParams} the parameters which produced the best-fit to subsets of the data (as indicated). We caution that these models were selected from runs without self-consistent reionization and also with relatively few realizations of merger trees, making them noisy. This means that, after re-running these models with many more merger tree realizations it is possible that they will not be such good fits to the data. We do, in fact, find such cases as we will highlight below. Nevertheless, we will refer to this table in the remainder of this section when exploring the ability of our model to match each dataset. We also point out that there is no guarantee that any of these models that provide a good match to an individual dataset are good matches overall---for example, the model which best matches galaxy sizes may produce entirely unacceptable $z=0$ luminosity functions.
\subsection{Star Formation History}
\label{sec:SFH}
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 60mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SFR.pdf}
\caption{The star formation rate per unit comoving volume in the Universe as a function of redshift. Red points show observational estimates from a variety of sources as compiled by \protect\cite{hopkins_evolution_2004} while magenta points show the star formation rate inferred from gamma ray bursts by \protect\cite{kistler_star_2009}. The solid lines show the total star formation rate density from our models, while the dotted and dashed lines show the contribution to this from quiescent star formation in disks and starbursts respectively. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:SFH}
\end{figure}
Figure~\ref{fig:SFH} shows the star formation rate per unit volume as a function of redshift, with symbols indicating observational estimates and lines showing results from our model. Dotted and dashed lines show quiescent star formation in disks and bursts of star formation respectively, while solid lines indicate the sum of these two. The quiescent mode dominates at all redshifts, although we note that at high redshifts model disks are typically unstable and undergo frequent instability events. These galaxies may therefore not look like typical low redshift disk galaxies. The best fit model is in excellent agreement with the star formation rate data from $z=1$ to $z=8$, reproducing the sharp decline in star formation below $z=2$ while maintaining a relatively high star formation rate out to the highest redshifts. Our model lies below the data at $z\lsim1$ despite being a good match to the b$_{\rm J}$-band luminosity function (see \S\ref{sec:z0LFResults}). This suggests some inconsistency in the data analysis, perhaps related to the choice of \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ or the calibration of star formation rate indicators. Indeed, the model which best fits this particular dataset (shown as magenta lines in Fig.~\ref{fig:SFH}) does so by virtue of having a large value of $\epsilon_\star$ (see Table~\ref{tb:BestModelParams}; this increases star formation rates overall) and a small value of $\alpha_{\rm cool}$ (which alters the critical mass scale for \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback and thereby delays the truncation of star formation at low redshifts). While these changes result in a better fit to the star formation rate, they produce entirely unacceptable fits to the luminosity functions (which have too many bright galaxies) and galaxies which are far too depleted of gas.
The \cite{bower_breakinghierarchy_2006} model has a much lower star formation rate density than our best-fit model at $z>0.5$, although it shows a comparable amount of star formation in bursts. (The \cite{bower_breakinghierarchy_2006} model still manages to obtain a good match to the K-band luminosity function at $z=0$ however by virtue of the fact that at $z\lsim1$, where much of the build up of stellar mass occurs, the two models have comparable average star formation rates, and because it uses a different \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ which results in a different mass-to-light ratio. Our best-fit model has 65\% more mass in stars at $z=0$ than the \cite{bower_breakinghierarchy_2006} model, but produces only 35\% more K-band luminosity density, as will be shown in Fig.~\ref{fig:K_LF}, mostly from faint galaxies.) Our best-fit model can be seen to be in significantly better agreement with the data than the \cite{bower_breakinghierarchy_2006} model and nicely reproduces the sharp decline in star formation rate at low redshifts.
\subsection{Luminosity Functions}\label{sec:z0LFResults}
Luminosity functions have traditionally represented an important constraint for galaxy formation models. We therefore include a variety of luminosity functions, spanning a range of redshifts in our constraints.
Figures~\ref{fig:bJ_LF} and \ref{fig:K_LF} show local ($z\approx 0$) luminosity functions from the \ifthenelse{\equal{\arabic{TdFDone}}{0}}{Two-degree Field Galaxy Redshift Survey (2dFGRS) \setcounter{TdFDone}{1}}{2dFGRS}\ (\citealt{norberg_2df_2002}; b$_{\rm J}$ band) and the \ifthenelse{\equal{\arabic{TMASSDone}}{0}}{Two-Micron All Sky Survey (2MASS) \setcounter{TMASSDone}{1}}{2MASS}\ (\citealt{cole_2df_2001}; $K$ band) respectively together with model predictions.
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 55mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/bJ_LF.pdf}
\caption{The $z=0$ b$_{\rm J}$-band luminosity function from our models: the solid lines show the luminosity function after dust extinction is applied while the dotted lines show the statistical error on the model estimate. Red points indicate the observed luminosity function from the \protect\ifthenelse{\equal{\arabic{TdFDone}}{0}}{Two-degree Field Galaxy Redshift Survey (2dFGRS) \setcounter{TdFDone}{1}}{2dFGRS}\ \protect\pcite{norberg_2df_2002}. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the $z=0$ K-band luminosity function (see Fig.~\protect\ref{fig:K_LF}; note that the requirement that this model be a good match to the $z=0$ K-band luminosity function is the reason why the fit here is not as good as that of the overall best-fit model) and green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:bJ_LF}
\end{figure}
It is well established that the faint end slope of the luminosity function, which is flatter than would be naively expected from the slope of the dark matter halo mass function, requires some type of feedback in order to be reproduced in models. The supernovae feedback present in our model is sufficient to flatten the faint end slope of the local luminosity functions and bring it into good agreement with the data in the b$_{\rm J}$ band, except perhaps at the very faintest magnitudes shown. The K-band shows an even flatter faint end slope and this is not as well reproduced by our model.
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 55mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/K_LF.pdf}
\caption{The $z=0$ K-band luminosity function from our models: the solid lines show the luminosity function after dust extinction is applied while the dotted lines show the statistical error on the model estimate. Red points indicate data from the \protect\ifthenelse{\equal{\arabic{TdFDone}}{0}}{Two-degree Field Galaxy Redshift Survey (2dFGRS) \setcounter{TdFDone}{1}}{2dFGRS} +\protect\ifthenelse{\equal{\arabic{TMASSDone}}{0}}{Two-Micron All Sky Survey (2MASS) \setcounter{TMASSDone}{1}}{2MASS}\ \protect\pcite{cole_2df_2001}. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the $z=0$ b$_{\rm J}$-band luminosity function (see Fig.~\protect\ref{fig:bJ_LF}) and green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:K_LF}
\end{figure}
Both our best-fit model and the \cite{bower_breakinghierarchy_2006} model produce good fits to these luminosity functions (although our best fit model produces a break which is slightly too bright in the K-band, indicating that the galaxy colours are not quite right---see \S\ref{sec:Colours}). This is not surprising of course as these were primary constraints used to find parameters for the \cite{bower_breakinghierarchy_2006} model. The \cite{bower_breakinghierarchy_2006} model does give a noticeably better match to the faint end of the K-band luminosity function (although it is far from perfect), due to the higher value of $\alpha_{\rm hot}$ that it adopts (see Table~\ref{tb:BestFitParams}). Unfortunately, this large value of $\alpha_{\rm hot}$ adversely affects the agreement with other datasets and so our best-fit model is forced to adopt a lower value. The important point here is that the \cite{bower_breakinghierarchy_2006} model was designed to fit just these luminosity functions, while the current model is being asked to simultaneously fit a much larger compilation of datasets. This point is further illustrated by the magenta lines in Figs.~\ref{fig:bJ_LF} and \ref{fig:K_LF} which show the model that best matches these two datasets. It achieves a flatter faint end slope by virtue of having quite large values of $\alpha_{\rm hot}$ and $\alpha_{\rm cool}$. This improved match to the faint end is at the expense of the bright end though ($\chi^2$ fitting gives more weight to the faint end, which has more data points with smaller error bars).
Figure~\ref{fig:60mu_z0_LF} shows the 60$\mu$m infrared luminosity function from \cite{saunders_60-micron_1990} (red points) and the corresponding model results (lines). The 60$\mu$m luminosity function constrains the dust absorption and re-emission in our model and so is complementary to the optical and near-\ifthenelse{\equal{\arabic{IRDone}}{0}}{infrared (IR) \setcounter{IRDone}{1}}{IR}\ luminosity functions discussed above. Our best fit model produces a very good match to the data at low luminosities---the sharp cut off at $10^{11}h^{-2}L_\odot$ is artificial and due to the limited number of merger trees which we are able to run and the scarcity of these galaxies (which are produced by massive bursts of star formation). The \cite{bower_breakinghierarchy_2006} model matches well at high luminosities but underpredicts the number of faint galaxies. This is due to the higher frequency of starbursts at low redshifts in the \cite{bower_breakinghierarchy_2006} model (see Fig.~\ref{fig:SFH}), which populate the bright end of the 60$\mu$m luminosity function. It must be kept in mind that absorption and re-emission of starlight by dust is one of the most challenging processes to model semi-analytically, and we expect that approximations made in this work may have significant effects on emission at $60\mu$m. A more detailed study, utilizing {\sc Grasil}, will be presented in a future work. The best fit model to this specific dataset is a good fit to the data, although it has somewhat too many $60\mu$m-bright galaxies. This is achieved by adopting a much lower value of $f_{\rm gas,burst}$, which lets minor mergers trigger bursts more easily. This increases the abundance of bursting galaxies with high star formation rates and fills in the bright end of the $60\mu$m luminosity function.
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 55mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/60mu_z0_LF.pdf}
\caption{The $z=0$ 60$\mu$m luminosity functions from our models are shown by the solid lines. Red points indicate data from \protect\cite{saunders_60-micron_1990}. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:60mu_z0_LF}
\end{figure}
\begin{figure}
\begin{center}
$z=1.0$\\
\vspace{-5mm}\includegraphics[width=80mm,viewport=0mm 55mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/K20_Ks_LF_z1.pdf}
\end{center}
\caption{The $z=1$ K$_{\rm s}$-band luminosity function from our models is shown by the solid lines with dotted lines indicating the statistical uncertainty on the model estimates. Red points indicate data from \protect\cite{pozzetti_k20_2003}. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:K20_Ks_LF}
\end{figure}
\begin{figure*}
\begin{tabular}{ccc}
\includegraphics[width=55mm,viewport=0mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/KLF_Morph_Type1.pdf} &
\includegraphics[width=55mm,viewport=0mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/KLF_Morph_Type3.pdf} &
\includegraphics[width=55mm,viewport=0mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/KLF_Morph_Type5.pdf}
\end{tabular}
\caption{The $z=0$ morphologically segregated K-band luminosity functions from our models. Points indicate the observed luminosity function from \protect\cite{devereux_morphological_2009} for morphological classes as indicated in each panel. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:K_Morpho_LF}
\end{figure*}
Figure~\ref{fig:K20_Ks_LF} shows the K$_{\rm s}$-band luminosity function from the K20 survey \pcite{pozzetti_k20_2003} at $z=1.0$. (The data at $z=0.5$ and $1.5$ were used as constraints also.) The model traces the evolution of the luminosity function quite well but overpredicts the abundance at all redshifts. This is in contrast to the \cite{bower_breakinghierarchy_2006} model, which matches these luminosity functions quite well. This is partly due to the tension between luminosity functions and the star formation rate density of Fig.~\ref{fig:SFH}, which would be better fit if the model produced an even higher star formation rate density. This constraint forces our best-fit model to build up more stellar mass than the \cite{bower_breakinghierarchy_2006} model and, consequently, to overpredict the abundance of galaxies at these redshifts. This tension between luminosity function and star formation rate constraints may in part be due to the difficulties involved with estimating the latter observationally (due to uncertainties in the \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}, calibration of star formation rate indicators and so on; see \citealt{hopkins_normalization_2006} for a detailed examination of these issues). The best-fit model to this specific dataset successfully matches the data at all three redshifts. It achieves this through a combination of relatively high (i.e. less negative) $\alpha_\star$ and a high value of $\alpha_{\rm hot}$. Together, this combination allows for a flatter faint-end slope while maintaining the normalization of the bright end.
In addition to these luminosity functions that include all galaxy types, in Fig.~\ref{fig:K_Morpho_LF} we show the morphologically selected luminosity function of \cite{devereux_morphological_2009} overlaid with model results. We base the morphological classification of model galaxies on \ifthenelse{\equal{\arabic{BTDone}}{0}}{bulge-to-total ratio (B/T) \setcounter{BTDone}{1}}{B/T}\ in dust-extinguished K-band light. We determine the mapping between \ifthenelse{\equal{\arabic{BTDone}}{0}}{bulge-to-total ratio (B/T) \setcounter{BTDone}{1}}{B/T}\ and morphology by requiring that the relative abundance of each type in the model agrees with the data in the interval $-23.5 < M_{\rm K}-5\log_{10}h \le -23.0$; the morphological mix is not enforced outside this magnitude range. Our best-fit model reproduces the broad trends seen in these data---although we find that too many Sb-Sbc galaxies are produced at the highest luminosities. The \cite{bower_breakinghierarchy_2006} model gives a better match to these data overall. The best fit to this particular dataset (magenta lines in Fig.~\ref{fig:K_Morpho_LF}) has a relatively large value of $f_{\rm ellip}$, but is not significantly better than our best-fit model.
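The calibration step described above amounts to placing class boundaries at quantiles of the model B/T distribution in the calibration magnitude bin, so that the cumulative class fractions match those observed. The following sketch illustrates this procedure; the function and argument names are ours, not those of the actual model code.

```python
import numpy as np

def calibrate_bt_thresholds(bt, mag, class_fractions,
                            mag_min=-23.5, mag_max=-23.0):
    """Choose B/T boundaries between morphological classes so that the
    model reproduces the observed class fractions in the calibration
    magnitude bin (hypothetical helper illustrating the procedure in
    the text; not the actual model code).

    class_fractions : observed fractions, ordered from latest type
                      (lowest B/T) to earliest type (highest B/T).
    """
    in_bin = (mag > mag_min) & (mag <= mag_max)
    # Interior class boundaries sit at the cumulative observed fractions.
    cum = np.cumsum(class_fractions)[:-1]
    return np.quantile(bt[in_bin], cum)
```

Galaxies at all other magnitudes are then classified with these fixed thresholds, which is why the morphological mix is only guaranteed to agree with the data inside the calibration bin.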
In addition to these relatively low redshift constraints, we are particularly interested here in examining constraints from the highest redshifts currently observable. Therefore, Fig.~\ref{fig:LyBreakLF} shows the luminosity function of $z\approx 3$ Lyman-break galaxies together with the expectation from our best-fit model (blue line). Model galaxies are drawn from the entire sample of galaxies at $z=3$ found in the model. The model significantly overpredicts the number of luminous galaxies even when internal dust extinction is taken into account (the dashed line in Fig.~\ref{fig:LyBreakLF} shows the luminosity function without the effects of dust extinction). The \cite{bower_breakinghierarchy_2006} model gives a similarly bad match to these data at the bright end (although it is slightly better at low luminosities), producing too many highly luminous galaxies. The best-fit model to this specific dataset turns out not to be a particularly good fit, although it is better than either of the other models shown. The problem here is one of noise. The models run for our parameter space search utilized relatively small numbers of merger tree realizations (to permit them to run in a reasonable amount of time). In this particular case, the model run during the parameter space search looked like a good match to the $z\approx 3$ Lyman-break galaxy luminosity function, but, when re-run with many more merger trees, it turned out that the apparently good fit was partly a result of fortuitous noise. This luminosity function is particularly sensitive to such effects, as the bright end is dominated by rare starburst galaxies.
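The tree-to-tree noise described above can be quantified by a standard bootstrap over merger tree realizations: resample whole trees with replacement, recompute the luminosity function each time, and take the spread. This is an illustrative sketch under assumed data structures (one array of magnitudes per tree), not the procedure actually used in the paper.

```python
import numpy as np

def lf_bootstrap(tree_luminosities, bins, volume, n_boot=200, seed=0):
    """Bootstrap estimate of a model luminosity function and of its
    statistical uncertainty from a finite set of merger trees
    (illustrative sketch; names are ours, not GALFORM's).

    tree_luminosities : list of arrays, galaxy magnitudes per merger tree
    bins              : magnitude bin edges
    volume            : effective volume represented by the tree set
    """
    rng = np.random.default_rng(seed)
    n_trees = len(tree_luminosities)
    phi = []
    for _ in range(n_boot):
        # Resample whole trees with replacement, keeping galaxies that
        # share a tree (and hence are correlated) together.
        pick = rng.integers(0, n_trees, size=n_trees)
        mags = np.concatenate([tree_luminosities[i] for i in pick])
        counts, _ = np.histogram(mags, bins=bins)
        phi.append(counts / (volume * np.diff(bins)))
    phi = np.asarray(phi)
    return phi.mean(axis=0), phi.std(axis=0)
```

Because the bright end is dominated by rare starburst galaxies hosted in few trees, the bootstrap error there is large unless many realizations are used.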
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/LyBreak_LF.pdf}
\caption{The $z=3$ 1700\AA\ luminosity functions from our models are shown by the solid lines with dotted lines showing the statistical uncertainty on the model estimates. The dashed lines indicate the luminosity function when the effects of dust extinction are neglected. Red points indicate the observed luminosity function from \protect\citeauthor{steidel_lyman-break_1999}~(\protect\citeyear{steidel_lyman-break_1999}; circles) and \protect\citeauthor{dickinson_color-selected_1998}~(\protect\citeyear{dickinson_color-selected_1998}; squares). Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:LyBreakLF}
\end{figure}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=80mm,viewport=0mm 55mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/z5_1500A_LF.pdf}
\end{tabular}
\caption{The $z=5$ rest-frame 1500\AA\ luminosity function from our models is shown by the solid lines, with statistical errors indicated by the dotted lines. Red points indicate data from \protect\cite{mclure_luminosity_2009}. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:z5_6_LF}
\end{figure}
Finally, at the highest redshifts for which we presently have statistically useful data, Fig.~\ref{fig:z5_6_LF} shows the rest-frame \ifthenelse{\equal{\arabic{UVDone}}{0}}{ultraviolet (UV) \setcounter{UVDone}{1}}{UV}\ luminosity function at $z=5$ from \cite{mclure_luminosity_2009}. These highest-redshift luminosity functions in principle place a strong constraint on the model. However, the effects of dust become extremely important at these short wavelengths and so our model predictions are less reliable. As such, these constraints are less fundamental than most of the others which we consider. We use our more detailed dust modelling for the \cite{bower_breakinghierarchy_2006} model here even though the original \cite{bower_breakinghierarchy_2006} model used the simpler dust model of \cite{cole_hierarchical_2000}. However, as noted in \protect\S\ref{sec:DustModel}, in our current model we ensure that high-$z$ galaxies which are undergoing near-continuous, instability-driven bursting are observed during the dust phase of the burst. In the \cite{bower_breakinghierarchy_2006} model shown here this is not the case---such systems are almost always observed in a gas- and dust-free state, making them appear much brighter. It is clear that the treatment of these galaxies in terms of punctuated equilibrium of disks is inadequate, and we will return to this issue in more detail in a future work.
The best-fit model again overpredicts the number and/or luminosities of galaxies at these redshifts. The \cite{bower_breakinghierarchy_2006} model performs much worse here, however, drastically overpredicting the number of luminous galaxies. The majority of this difference is due to the treatment of dust in bursts in our current model. The remainder reflects the fact that high-$z$ constraints were not considered when selecting the parameters of the \cite{bower_breakinghierarchy_2006} model---the improved agreement here illustrates the benefits of considering a wide range of datasets when constraining model parameters. The best-fit model to these specific datasets shows a steeper decline at high luminosities and a lower normalization at all luminosities, achieved through a combination of strong feedback (i.e. high $V_{\rm hot,disk}$) and highly efficient star formation with a very strong dependence on galaxy circular velocity. Once again, the best fit here is not particularly good, for the same reasons that the $z=3$ \ifthenelse{\equal{\arabic{UVDone}}{0}}{ultraviolet (UV) \setcounter{UVDone}{1}}{UV}\ luminosity function is not well fit (i.e. the models run to search parameter space use relatively few merger trees, leading to significant noise in these luminosity functions, which depend on galaxies that form in rare halos). Moreover, this fit achieves only a relatively small improvement over the overall best-fit model, at the expense of significantly worse fits to other datasets.
\subsection{Colours}\label{sec:Colours}
The bimodality of the galaxy colour-magnitude diagram has long been understood to convey important information regarding the evolutionary history of different types of galaxy. Recently, semi-analytic models have paid close attention to this diagnostic \pcite{croton_many_2006,bower_breakinghierarchy_2006}. In particular, \cite{font_colours_2008} found that the inclusion of detailed modelling of ram pressure stripping of hot gas from satellite galaxy halos is crucial for obtaining an accurate determination of the colour-magnitude relation. That same model of ram pressure stripping is included in the present work.
\begin{figure*}
\begin{tabular}{ccc}
$-22 < ^{0.1}M_{\rm g}-5\log h \le -21$ & $-20 < ^{0.1}M_{\rm g}-5\log h \le -19$ & $-18 < ^{0.1}M_{\rm g}-5\log h \le -17$ \\
\includegraphics[width=55mm,viewport=0mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_Colours_-22_-21.pdf} &
\includegraphics[width=55mm,viewport=0mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_Colours_-20_-19.pdf} &
\includegraphics[width=55mm,viewport=0mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_Colours_-18_-17.pdf}
\end{tabular}
\caption{$^{0.1}$g$-^{0.1}$r colour distributions for galaxies at $z=0.1$ split by g-band absolute magnitude (see above each panel for magnitude range). Solid lines indicate the distributions from our models while the red points show data from the \protect\ifthenelse{\equal{\arabic{SDSSDone}}{0}}{Sloan Digital Sky Survey (SDSS) \setcounter{SDSSDone}{1}}{SDSS}\ \protect\pcite{weinmann_properties_2006}. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model. Note that the magenta model is selected on the basis of more panels than are shown here.}
\label{fig:SDSS_Colours}
\end{figure*}
Figure~\ref{fig:SDSS_Colours} shows slices of constant magnitude through the colour-magnitude diagram of \cite{weinmann_properties_2006}, overlaid with results from our model. The model is very successful in matching these data, which show the red galaxy component dominating at bright magnitudes and shifting to a mix of red and blue galaxies at fainter magnitudes. The median colours of the blue and red components of the galaxy population are reproduced better in our current model than by that of \cite{bower_breakinghierarchy_2006}, although there is clearly an offset in the blue cloud at faint magnitudes (model galaxies in the blue cloud are slightly too red). Since our model otherwise reproduces the colours of galaxies reasonably well, this offset may be partly due to the limitations of stellar population synthesis models. This problem with the \cite{bower_breakinghierarchy_2006} model was noted by \cite{font_colours_2008}, who demonstrated that a combination of a higher yield of $p=0.04$ in the instantaneous recycling approximation (\cite{bower_breakinghierarchy_2006} assumed a yield of $p=0.02$) and ram pressure stripping of cold gas in galaxy disks led to a much better match to galaxy colours. The yield is not a free parameter in our model; instead, it is determined directly from the \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ and stellar metal yields (see Fig.~\ref{fig:Chabrier_NonInstant}), potentially rising as high as $p=0.04$ after several Gyr. This is very close to the value adopted by \cite{font_colours_2008}, and our model is able to produce a good match to the colours. As we will see later (in \S\ref{sec:GasMetals}), the \cite{bower_breakinghierarchy_2006} model has more serious problems with galaxy metallicities, which are somewhat rectified in our present model, thereby helping us obtain a better match to the galaxy colours.
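To illustrate how the yield follows from the IMF rather than being a free parameter, one common definition is the metal mass ejected per unit mass of stars formed, $p = \int m_Z(m)\,\phi(m)\,{\rm d}m \big/ \int m\,\phi(m)\,{\rm d}m$, neglecting recycling. The sketch below evaluates this integral numerically; the Salpeter-like slope and toy metal yields in the usage example are placeholders, not the IMF or yield tables actually used in the model.

```python
import numpy as np

def population_yield(m, m_metals, phi):
    """Metal yield per unit mass of stars formed, neglecting recycling:
    p = int m_Z(m) phi(m) dm / int m phi(m) dm, where phi(m) is the IMF
    (number per unit mass) and m_Z(m) the metal mass ejected by a star
    of initial mass m. Illustrative only; inputs are placeholders."""
    trapz = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(m))
    return trapz(m_metals * phi) / trapz(m * phi)

# Toy example: Salpeter-like slope, metals ejected only by stars > 8 Msun.
m = np.linspace(0.1, 100.0, 20000)
phi = m ** -2.35
m_metals = np.where(m > 8.0, 0.1 * m, 0.0)
p = population_yield(m, m_metals, phi)
```

With realistic yield tables this quantity also becomes time-dependent once instantaneous recycling is dropped, which is why the effective yield in the model can grow over several Gyr.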
The best-fit model to this specific dataset is a better match than our overall best-fit model for fainter galaxies, although it performs less well at brighter magnitudes. At faint magnitudes it produces a bluer blue-cloud which better matches that which is observed. It achieves this success by having a much larger value (i.e. less negative) of $\alpha_\star$. This parameter controls how star formation rates scale with galaxy mass, with this model having less dependence than any other. This improves the match to galaxy colours (at the expense of steepening the faint end slope of the luminosity function), particularly for fainter galaxies.
\subsection{Scaling Relations}\label{sec:TF}
\begin{figure*}
\begin{tabular}{cccc}
$-20 \le M_{i} < -19$ & $-21 \le M_{i} < -20$ & $-22 \le M_{i} < -21$ & $-23 \le M_{i} < -22$\vspace{-6mm} \\
\includegraphics[width=40mm,viewport=0mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_TF_-20_-19.pdf} &
\includegraphics[width=40mm,viewport=0mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_TF_-21_-20.pdf} &
\includegraphics[width=40mm,viewport=0mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_TF_-22_-21.pdf} &
\includegraphics[width=40mm,viewport=0mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_TF_-23_-22.pdf}
\end{tabular}
\caption{Slices through the i-band Tully-Fisher relation from the \protect\ifthenelse{\equal{\arabic{SDSSDone}}{0}}{Sloan Digital Sky Survey (SDSS) \setcounter{SDSSDone}{1}}{SDSS}\ \protect\pcite{pizagno_tully-fisher_2007} at constant absolute magnitude are shown by red points. Solid lines show results from our models with dotted lines indicating the statistical error on the model estimate. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:SDSS_TF}
\end{figure*}
Fitting the Tully-Fisher relation simultaneously with the luminosity function has been a long-standing challenge for models of galaxy formation (see \cite{dutton_tully-fisher_2008} and references therein). Figure~\ref{fig:SDSS_TF} shows the Tully-Fisher relation from the \ifthenelse{\equal{\arabic{SDSSDone}}{0}}{Sloan Digital Sky Survey (SDSS) \setcounter{SDSSDone}{1}}{SDSS}\ as measured by \cite{pizagno_tully-fisher_2007} together with the result from our best-fit model. The model is in reasonable agreement with the zero point, although somewhat offset to higher velocities, and in good agreement with the luminosity dependence and width of the Tully-Fisher relation. Our new model is a significantly better match to the Tully-Fisher relation than that of \cite{bower_breakinghierarchy_2006}, which produces galaxies with rotation speeds that are systematically too large (particularly for the brightest galaxies). For example, for the most luminous galaxies shown, the \cite{bower_breakinghierarchy_2006} model predicts a population of galaxies with circular velocities of 300--400~km/s or greater---strongly ruled out by observations. The new model, on the other hand, predicts essentially no galaxies in this velocity range. The best-fit model to this particular dataset is a significantly better match than our overall best-fit model. No single parameter is responsible for the improvement, but $\lambda_{\rm expel,burst}$ plays an important role---it is much lower in the best-fit model to the Tully-Fisher data.
\subsection{Sizes}
Figure~\ref{fig:SDSS_Sizes} shows the distribution of galaxy sizes, split by morphological type and magnitude, from the \ifthenelse{\equal{\arabic{SDSSDone}}{0}}{Sloan Digital Sky Survey (SDSS) \setcounter{SDSSDone}{1}}{SDSS}\ \pcite{shen_size_2003}. To morphologically classify model galaxies we utilize the bulge-to-total ratio in dust-extinguished $^{0.1}$r-band light. From the K-band morphologically segregated luminosity function (see \S\ref{sec:z0LFResults}) we find that E and S0 galaxies are those with B/T$>0.714$ for the best fit model. There is no convincing reason to expect this value to correspond precisely to the morphological selection used by \cite{shen_size_2003}, but it is currently our best method to choose a division between early and late types in our model. For simplicity, we employ the same morphological cut for all three models plotted in Fig.~\ref{fig:SDSS_Sizes}. Model results are overlaid as lines. Model galaxies are too large compared to the data, by factors of about two, and the distribution of model galaxy sizes is too broad. This problem is more significant for the fainter galaxies.
\begin{figure*}
\begin{tabular}{ccc}
$-18.0 > \hspace{1mm}^{0.1}\hspace{-1mm}M_{\rm r}-5\log h \ge -18.5$ &
$-20.5 > \hspace{1mm}^{0.1}\hspace{-1mm}M_{\rm r}-5\log h \ge -21.0$ &
$-22.5 > \hspace{1mm}^{0.1}\hspace{-1mm}M_{\rm r}-5\log h \ge -23.0$ \\
\includegraphics[width=55mm,viewport=0mm 5mm 200mm 270mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_Sizes_-18_5_-18.pdf} &
\includegraphics[width=55mm,viewport=0mm 5mm 200mm 270mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_Sizes_-20_5_-20.pdf} &
\includegraphics[width=55mm,viewport=0mm 5mm 200mm 270mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_Sizes_-22_5_-22.pdf} \\
\end{tabular}
\caption{Distributions of galaxy half-light radii (measured in the dust-extinguished face-on r-band light profile) at $z=0.1$ segregated by r-band absolute magnitude and by morphological class. Solid lines show results from our models while dotted lines show the statistical error on the model estimates. Red points show data from the \protect\ifthenelse{\equal{\arabic{SDSSDone}}{0}}{Sloan Digital Sky Survey (SDSS) \setcounter{SDSSDone}{1}}{SDSS}\ \protect\pcite{shen_size_2003}. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:SDSS_Sizes}
\end{figure*}
Figure~\ref{fig:Other_Sizes} shows the distribution of disk sizes from \cite{de_jong_local_2000} with model results overlaid as lines. This permits a more careful comparison with the model as it does not require us to assign morphological types to model galaxies. Model disks are somewhat too large in all luminosity bins considered, and the width of the distribution of disk sizes is broader than that observed.
\begin{figure*}
\begin{tabular}{ccc}
\includegraphics[width=55mm,viewport=0mm 45mm 200mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/disk_size_3.pdf} &
\includegraphics[width=55mm,viewport=0mm 45mm 200mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/disk_size_5.pdf} &
\includegraphics[width=55mm,viewport=0mm 45mm 200mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/disk_size_7.pdf}
\end{tabular}
\caption{Distribution of disk scale lengths for galaxies at $z=0$ segregated by face-on I-band absolute magnitude. Solid lines show results from our models while dotted lines indicate the statistical uncertainty on the model estimates. Red circles show data from \protect\cite{de_jong_local_2000} with upper limits indicated by red triangles. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:Other_Sizes}
\end{figure*}
The \cite{bower_breakinghierarchy_2006} model produces galaxies which are systematically smaller than those in our current best-fit model at bright magnitudes, but larger at faint magnitudes. It also produces a narrower distribution of disk sizes. Our best fit model to these combined size datasets is a rather poor match to the distribution of disk sizes. We find that it is challenging to obtain realistic sizes for disks in our model while simultaneously matching other observational constraints. This problem, which may reflect inaccuracies in the angular momentum of cooling gas, angular momentum loss during cooling or merging, or internal processes which transfer angular momentum out of galaxies, will be addressed in greater detail in a future work.
\subsection{Gas and Metal Content}\label{sec:GasMetals}
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 10mm 195mm 265mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/SDSS_Zgas.pdf}
\caption{Gas-phase metallicity as a function of absolute magnitude from the \protect\ifthenelse{\equal{\arabic{SDSSDone}}{0}}{Sloan Digital Sky Survey (SDSS) \setcounter{SDSSDone}{1}}{SDSS}\ \protect\pcite{tremonti_origin_2004} is shown by the red points. Points show the median value, while error bars indicate the 2.5, 16, 84 and 97.5 percentiles of the distribution. Lines indicate results from our best-fit model. Solid lines indicate the median model relation, dashed lines the 16 and 84 percentiles and dotted lines the 2.5 and 97.5 percentiles, corresponding to the error bars on the data. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model. (Note that dashed and dotted lines are shown only for the best-fit model for clarity.)}
\label{fig:SDSS_Zgas}
\end{figure}
\begin{figure*}
\begin{tabular}{cccc}
{\tiny $-15 < M_{\rm B} -5\log h\le -14$} &
{\tiny $-17 < M_{\rm B} -5\log h\le -16$} &
{\tiny $-19 < M_{\rm B} -5\log h\le -18$} &
{\tiny $-21 < M_{\rm B} -5\log h\le -20$} \\
\includegraphics[width=40mm,viewport=5mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/Zstar_-15_-14.pdf} &
\includegraphics[width=40mm,viewport=5mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/Zstar_-17_-16.pdf} &
\includegraphics[width=40mm,viewport=5mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/Zstar_-19_-18.pdf} &
\includegraphics[width=40mm,viewport=5mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/Zstar_-21_-20.pdf}
\end{tabular}
\caption{Distributions of mean stellar metallicity at different slices of absolute magnitude. Red points show observational data compiled by \protect\cite{zaritsky_h_1994}. Solid lines indicate results from our models while dotted lines show the statistical uncertainty on the model estimate. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:Zstar}
\end{figure*}
\begin{figure}
\includegraphics[width=80mm,viewport=0mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/Gas2Light.pdf}
\caption{Gas (hydrogen) to B-band light ratios at $z=0$ as a function of B-band absolute magnitude. The solid lines show the mean ratio from our models while the dotted lines show the dispersion around the mean. Red points show the mean ratio from a compilation of data from \protect\cite{huchtmeier_h_1988} and \protect\cite{sage_molecular_1993} with error bars indicating the dispersion in the distribution. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model. Model galaxies were selected to have bulge-to-total ratios in B-band light of $0.4$ or less and gas fractions of 3\% or more in order to attempt to match the morphological selection (Sa and later types) in the observations.}
\label{fig:Gas2Light}
\end{figure}
The star formation and supernovae feedback prescriptions in our model can be constrained by measurements of the gas and metal content of galaxies. Figure~\ref{fig:SDSS_Zgas} shows the distribution of gas-phase metallicities from the \ifthenelse{\equal{\arabic{SDSSDone}}{0}}{Sloan Digital Sky Survey (SDSS) \setcounter{SDSSDone}{1}}{SDSS}\ \pcite{tremonti_origin_2004} compared with results from our best-fit model. Model galaxies are drawn from the entire population of galaxies at $z=0.1$. \cite{tremonti_origin_2004} select star forming galaxies---essentially those with well detected H$\beta$, H$\alpha$ and [N{\sc ii}] $\lambda$6584 lines---and also reject galaxies with a significant \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ component. We have not attempted to reproduce these observational selection criteria here\footnote{Both because we cannot, at present, include the \protect\ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ component in the spectra and because it would involve constructing mock catalogues, which would be too expensive during our parameter space search.}, but note that excluding galaxies with very low star formation rates makes negligible difference to our results. The model clearly produces a strong trend of increasing metallicity with increasing luminosity, just as is observed, although the relation is somewhat too steep, resulting in metallicities which are around a factor of two too low at the lowest luminosities plotted. This relation is driven, in the model, by supernovae feedback: in low luminosity galaxies feedback is more efficient at ejecting material, making those galaxies less able to self-enrich. The spread in metallicity at fixed luminosity is also larger than that which is observed.
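The qualitative link between outflow strength and self-enrichment can be seen in the textbook leaky-box solution, in which outflows at $\eta$ times the star formation rate reduce the effective yield from $p$ to $p/(1+\eta)$. This is purely illustrative of why efficient feedback suppresses metallicities in low luminosity galaxies; it is not the chemical evolution scheme used in the model.

```python
import math

def leaky_box_metallicity(p, eta, gas_fraction):
    """Gas-phase metallicity in the leaky-box approximation: outflows
    at eta times the star formation rate reduce the effective yield
    from p to p / (1 + eta). gas_fraction is the remaining fraction of
    the initial gas mass (textbook illustration only)."""
    return (p / (1.0 + eta)) * math.log(1.0 / gas_fraction)
```

Setting $\eta = 0$ recovers the closed-box result; raising $\eta$, as effectively happens for low circular velocity galaxies with mass-dependent feedback, lowers the metallicity reached at a given gas fraction.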
The best-fit model to the metallicity datasets presented in this subsection is actually a worse fit to the gas-phase metallicity than the overall best-fit model, a consequence of tensions between fitting these data and fitting the stellar metallicities and gas fractions.
Figure~\ref{fig:Zstar} shows distributions of mean stellar metallicity in various bins of absolute B-band magnitude. Data, shown by points, are taken from \cite{zaritsky_h_1994}, while results from our best-fit model are shown by lines. For model galaxies, we plot the luminosity-weighted mean metallicity of all stars (i.e. both disk and bulge stars). Although the data are quite noisy, there is, in general, good agreement between the model and these data. The \cite{bower_breakinghierarchy_2006} model fails to match the scaling of metallicity with stellar mass seen in these data. An increase in the yield in this model (from $p=0.02$ to $p=0.04$, as required to better match galaxy colours; \citealt{font_colours_2008}) would improve this situation significantly, but some reduction in the dependence of \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ feedback on galaxy mass is likely still required to obtain the correct scaling.
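For reference, the quantity plotted for model galaxies is a luminosity-weighted mean over the disk and bulge components, which can be computed as in the following sketch (variable names are ours, not the model's):

```python
import numpy as np

def luminosity_weighted_metallicity(L_disk, Z_disk, L_bulge, Z_bulge):
    """Luminosity-weighted mean stellar metallicity combining disk and
    bulge components. Accepts scalars or per-galaxy arrays
    (illustrative helper)."""
    L_disk = np.asarray(L_disk, dtype=float)
    L_bulge = np.asarray(L_bulge, dtype=float)
    return (L_disk * Z_disk + L_bulge * Z_bulge) / (L_disk + L_bulge)
```

Weighting by luminosity rather than mass makes the comparison with spectroscopically derived metallicities more direct, since bright young populations dominate the observed light.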
Finally, Fig.~\ref{fig:Gas2Light} shows the distribution of gas-to-light ratios from a compilation of data compared to results from our best-fit model. Model galaxies are selected to have bulge-to-total ratios in B-band light of $0.4$ or less and gas fractions of 3\% or more in order to attempt to match the morphological selection (Sa and later types) in the observations. The results are somewhat sensitive to the morphological criteria used, a fact which must be taken into account when considering the comparison with the observational data. The model ratio is somewhat too high (too much gas per unit light), but displays approximately the correct dispersion. The \cite{bower_breakinghierarchy_2006} model gets closer to the observed mean for bright galaxies, but shows a dramatic downturn at low luminosities (a result of its very strong supernovae feedback). The best-fit model to this specific dataset is an excellent match to both the mean and dispersion in the gas fraction data. This is achieved primarily via a very low efficiency of star formation (allowing gas fractions to stay high) coupled with strongly velocity-dependent feedback, which helps reproduce the measured slope of this relation.
Overall, the \cite{bower_breakinghierarchy_2006} model performs much less well in matching metallicity and gas content properties. This problem can be traced to the very strong scaling of supernovae feedback strength with galaxy circular velocity adopted in that model, together with its low yield. This strongly suppresses the effective yield in low mass galaxies, making them too metal poor, and likewise strongly suppresses their gas content. These constraints are among the primary drivers causing our best-fit model to adopt a lower value of $\alpha_{\rm hot}$.
\subsection{Clustering}\label{sec:Clustering}
Galaxy clustering places strong constraints on the occupancy of galaxies within dark matter halos and, therefore, the merger rate (amongst other things). To compute the clustering properties of galaxies we make use of the fact that halo occupation distributions are naturally predicted by the {\sc Galform}\ model. We therefore extract halo occupation distributions directly from our best fit model. We then employ the halo model of galaxy clustering \pcite{cooray_halo_2002} to compute two-point correlation functions in redshift space. These are compared to measured redshift-space correlation functions from the \ifthenelse{\equal{\arabic{TdFDone}}{0}}{Two-degree Field Galaxy Redshift Survey (2dFGRS) \setcounter{TdFDone}{1}}{2dFGRS}\ \pcite{norberg_2df_2002} in Fig.~\ref{fig:2dFGRS_Clustering}.
There is excellent agreement between the model and data on large scales (where the two-halo term dominates). On small scales, in the one-halo regime, the model systematically overestimates the correlation function. This discrepancy, which is due to the model placing too many satellite galaxies in massive halos, has been noted and discussed previously by \cite{seek_kim_modelling_2009}. In their study, \cite{seek_kim_modelling_2009} demonstrated that this problem might be resolved by invoking destruction of satellite galaxies by tidal forces and by accounting for satellite-satellite mergers (both processes reduce the number of satellites). The current model includes both of these processes and treats them in a significantly more realistic way than did \cite{seek_kim_modelling_2009}. We find that, in our particular model, they are not enough to bring the model correlation function into agreement with the data on small scales (although they do help). This may indicate that these processes have not been modelled sufficiently accurately, or that our model simply begins with too many satellites. We note that the \cite{bower_breakinghierarchy_2006} model performs similarly well on large scales and somewhat better on small scales (the stronger feedback in this model helps reduce the number of satellite galaxies of a given luminosity in high mass halos), although it still overpredicts the small scale clustering, as has been noted by \cite{seek_kim_modelling_2009}. The best-fit model to the clustering data alone is not very successful. This is again due to the difficulty of computing accurate correlation functions using the relatively small sets of merger trees that we are able to utilize for parameter space searches, and serves as an excellent example of the need to include better estimates of the model uncertainty (i.e. the variance in predictions from the model due to the limited number of merger trees utilized) when computing goodness-of-fit measures.
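On large scales (the two-halo regime, where model and data agree), the halo-model calculation reduces to weighting the halo mass function and halo bias by the occupation function: $\bar n_{\rm g} = \int ({\rm d}n/{\rm d}M)\,\langle N|M\rangle\,{\rm d}M$, $b_{\rm g} = \bar n_{\rm g}^{-1}\int ({\rm d}n/{\rm d}M)\,\langle N|M\rangle\,b(M)\,{\rm d}M$, with $\xi_{\rm gg}(r)\approx b_{\rm g}^2\,\xi_{\rm mm}(r)$ \pcite{cooray_halo_2002}. A minimal numerical sketch of these integrals (not the actual implementation used here, which also includes the one-halo term and redshift-space effects) is:

```python
import numpy as np

def galaxy_density_and_bias(m, dndm, halo_bias, occupation):
    """Mean galaxy number density and large-scale galaxy bias from a
    halo mass function dn/dM, halo bias b(M) and a halo occupation
    distribution <N|M>. Minimal halo-model sketch in the spirit of
    Cooray & Sheth (2002); illustrative only."""
    trapz = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(m))
    n_gal = trapz(dndm * occupation)
    b_gal = trapz(dndm * occupation * halo_bias) / n_gal
    return n_gal, b_gal
```

The small-scale (one-halo) term instead depends on satellite pair counts within individual halos, which is why an excess of satellites in massive halos shows up there rather than in the two-halo regime.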
\begin{figure*}
\begin{tabular}{ccc}
{$-19.0 < M_{\rm b_{\rm J}} -5\log h\le -18.5$} &
{$-19.5 < M_{\rm b_{\rm J}} -5\log h\le -19.0$} &
{$-20.0 < M_{\rm b_{\rm J}} -5\log h\le -19.5$} \\
\includegraphics[width=55mm,viewport=0mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/xi_s_-19.pdf} &
\includegraphics[width=55mm,viewport=0mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/xi_s_-19_5.pdf} &
\includegraphics[width=55mm,viewport=0mm 50mm 200mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/xi_s_-20.pdf}
\end{tabular}
\caption{Redshift space two-point correlation functions of galaxies selected by their b$_{\rm J}$ band absolute magnitude. Solid lines show results from our models while red points indicate data from the \protect\ifthenelse{\equal{\arabic{TdFDone}}{0}}{Two-degree Field Galaxy Redshift Survey (2dFGRS) \setcounter{TdFDone}{1}}{2dFGRS}\ \protect\pcite{norberg_2df_2002}. Model correlation functions are computed using the halo model of clustering \protect\pcite{cooray_halo_2002} with the input halo occupation distributions computed directly from our best-fit model. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:2dFGRS_Clustering}
\end{figure*}
\subsection{Supermassive Black Holes}\label{sec:SMBH}
\begin{figure*}
\begin{tabular}{ccc}
$9 \le \log_{10}(M_{\rm bulge}/h^{-1}M_\odot) < 10$ &
$10 \le \log_{10}(M_{\rm bulge}/h^{-1}M_\odot) < 11$ &
$11 \le \log_{10}(M_{\rm bulge}/h^{-1}M_\odot) < 12$ \\
\includegraphics[width=55mm,viewport=7mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/MSMBH_v_Mbulge_9_10.pdf} &
\includegraphics[width=55mm,viewport=7mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/MSMBH_v_Mbulge_10_11.pdf} &
\includegraphics[width=55mm,viewport=7mm 55mm 205mm 245mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/MSMBH_v_Mbulge_11_12.pdf}
\end{tabular}
\caption{The distribution of supermassive black hole mass in three slices of galaxy bulge mass. Data are taken from \protect\cite{haring_black_2004} and are shown by red points. Solid lines indicate results from our models with dotted lines showing the statistical uncertainty on the model estimate. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:MSMBH}
\end{figure*}
The inclusion of \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback in semi-analytic models of galaxy formation necessitates the inclusion of the supermassive black holes that are responsible for that feedback. As such, it is important to constrain the properties of these black holes to match those that are observed. Figure~\ref{fig:MSMBH} shows the distribution of supermassive black hole masses in three slices of galaxy bulge mass. Points show observational data from \cite{haring_black_2004} while lines show results from our best-fit model. The model is in excellent agreement with the current data. The \cite{bower_breakinghierarchy_2006} model produces nearly identical results for the black hole masses. This is not surprising since, as pointed out by \cite{bower_parameter_2010}, the $F_\bullet$ parameter can be adjusted to achieve a good fit here without significantly affecting any other predictions. For this same reason, the best-fit model to these black hole data is not significantly better than either the \cite{bower_breakinghierarchy_2006} model or the overall best-fit model.
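As a point of reference, the \cite{haring_black_2004} data are well described by a nearly linear black hole--bulge mass relation; a minimal sketch follows, with the fit coefficients quoted approximately from memory of that paper, so they should be treated as indicative rather than exact:

```python
import math

# Approximate Haring & Rix (2004) relation (coefficients quoted from memory,
# indicative only): log10(M_BH/Msun) ~ 8.2 + 1.12 log10(M_bulge / 1e11 Msun)
def mbh_from_bulge(m_bulge_msun):
    """Median black hole mass [Msun] for a given bulge stellar mass [Msun]."""
    return 10.0**(8.2 + 1.12 * math.log10(m_bulge_msun / 1e11))

print("%.2e" % mbh_from_bulge(1e11))  # ~1.6e8 Msun for a 1e11 Msun bulge
```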
\subsection{Local Group}\label{sec:LocalGroup}
\begin{figure}
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/_LocalGroup/LocalGroup_LumFun.pdf}
\caption{The luminosity function of Local Group satellite galaxies in our models. Red points show current observational estimates of the luminosity function from \protect\cite{koposov_luminosity_2008} including corrections for sky coverage and selection probability from \protect\cite{tollerud_hundreds_2008}. Solid lines show the median luminosity functions of model satellite galaxies located in Milky Way-hosting halos, while dotted lines indicate the $10^{\rm th}$ and $90^{\rm th}$ percentiles of the distribution of model luminosity functions. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:LocalGroup_LF}
\end{figure}
The recent discovery of several new satellite galaxies of the Milky Way has led to their abundance and properties being much more robustly known. As a result, they act as a strong constraint on models of galaxy formation, and have attracted significant attention recently \pcite{bullock_reionization_2000,benson_effects_2002,somerville_can_2002,gnedin_fossils_2006,madau_dark_2008,madau_fossil_2008,munoz_probingepoch_2009,bovill_pre-reionization_2009,busha_impact_2009,macci`o_luminosity_2009}. Our model is the only one of which we are aware that follows the formation of these galaxies within the context of a self-consistent model of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ and the global galaxy population, and which fits a broad range of experimental constraints on galaxies and the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}.
To compute the expected properties of Milky Way satellites in our model we simulate a large number of dark matter halos with masses at $z=0$ in the range $2\times 10^{11}$--$3\times 10^{12}h^{-1}M_\odot$. From these, we select only those halos with a virial velocity in the range 125--180~km/s (consistent with recent estimates; \citealt{dehnen_velocity_2006,xue_milky_2008}) and which contain a central galaxy with a bulge-to-total ratio between 5 and 20\% to approximately match the properties of the Milky Way. This step is potentially important, as it ensures that the satellite populations that we consider are consistent with the formation of a Milky Way-like galaxy\footnote{The merging history of a halo will affect both the properties of the central galaxy and the population of satellite galaxies. By selecting only halos whose merger history was suitable to produce a Milky Way we ensure that we are looking only at satellite populations consistent with the presence of such a galaxy.}. In practice, we find that the morphological selection has little effect on the satellite luminosity function. However, the selection of suitable halos based on virial velocity produces a significant reduction (by about a factor of 2) in the number of satellites compared to the common practice of selecting halos with masses of approximately $10^{12}h^{-1}M_\odot$. Halo selection is clearly of great importance when addressing the missing satellite problem. We prefer to use a selection on halo virial velocity here rather than a selection on galaxy stellar mass, as was used by \cite{benson_effects_2002} for example, since we know that the Tully-Fisher relation in our model is incorrect (see \S\ref{sec:TF}) and so selecting on galaxy mass would result in an incorrect sample of halo masses.
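The selection cuts described above can be sketched as a simple filter (the field names here are hypothetical; the model's actual data structures differ):

```python
# Sketch of the Milky Way-like selection described in the text: virial
# velocity in 125-180 km/s and a central galaxy with bulge-to-total ratio
# between 5 and 20%. Dictionary keys are illustrative placeholders.
def is_milky_way_like(halo):
    """Return True if the halo passes the Milky Way-like selection cuts."""
    ok_velocity = 125.0 <= halo["v_virial_km_s"] <= 180.0
    bulge_to_total = halo["central_bulge_mass"] / halo["central_total_mass"]
    return ok_velocity and 0.05 <= bulge_to_total <= 0.20

halos = [
    {"v_virial_km_s": 150.0, "central_bulge_mass": 1.0, "central_total_mass": 10.0},
    {"v_virial_km_s": 200.0, "central_bulge_mass": 1.0, "central_total_mass": 10.0},
]
print([is_milky_way_like(h) for h in halos])  # [True, False]
```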
Figure~\ref{fig:LocalGroup_LF} shows the V-band luminosity function of Milky Way satellite galaxies from our best-fit model compared with the latest observational estimate. Our model is able to produce a sufficient number of the brightest satellites in a small fraction of realizations, although the median lies below the observed luminosity function for the Milky Way. At lower luminosities our best-fit model overpredicts the observed number of satellites by factors of up to 5. It has recently been pointed out \pcite{busha_impact_2009,font_modelingmilky_2009} that inhomogeneous reionization (namely the reionization of the Lagrangian volume of the Milky Way halo by Milky Way progenitors) is an important consideration when computing the abundance of Local Group satellites. In particular, \cite{font_modelingmilky_2009} find a similar level of discrepancy in the luminosity function when they ignore this effect (as we do here) and use a similar feedback model, but demonstrate that consideration of inhomogeneous reionization can reconcile the predicted and observed abundance of satellites. We do not consider inhomogeneous reionization here but, given its likely impact on the luminosity function of Local Group satellites, will return to it in greater detail in a future work. The \cite{bower_breakinghierarchy_2006} model gives a reasonably good match to the data, producing slightly fewer satellites than are observed at all luminosities. The best-fit model to this specific dataset is in good agreement with the observations down to $M_{\rm V}=-5$, but fails to produce fainter satellites. (It also produces very few halo/galaxy pairs which meet our criteria to be deemed ``Milky Way-like'', resulting in poor statistics for this model. 
The models utilized during the parameter space search happened to produce more faint galaxies, resulting in them being judged a good fit---this is another example of where understanding the model uncertainty is of crucial importance.)
Figure~\ref{fig:LocalGroup_Sizes} shows the distribution of half-mass radii for Milky Way satellites split into four bins of V-band absolute magnitude (only two of the bins are shown). The data are sparse, but the model produces galaxies that are smaller than the observed satellites by factors of around 3--6. The \cite{bower_breakinghierarchy_2006} model has the opposite problem, producing faint satellites that are too large but doing well at matching the sizes of brighter satellites. The best-fit model to the Local Group size data alone is not significantly better than the overall best-fit model---the sizes tend to be rather insensitive to most parameters.
\begin{figure*}
\begin{tabular}{cc}
\vspace{-3mm} $-15 < M_{\rm V} \le -10$ & $-10 < M_{\rm V} \le -5$ \\
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/_LocalGroup/LocalGroup_Sizes_-15_-10.pdf} &
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/_LocalGroup/LocalGroup_Sizes_-10_-5.pdf}
\end{tabular}
\caption{The size distribution of Local Group satellite galaxies in our models. Red points show current observational estimates of the size distribution from \protect\cite{tollerud_hundreds_2008}. Solid lines show the size distribution of model satellite galaxies located in Milky Way-hosting halos with dotted lines showing the statistical uncertainty on the model estimate. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:LocalGroup_Sizes}
\end{figure*}
Figure~\ref{fig:LocalGroup_Metallicities} shows the distribution of stellar metallicities for Milky Way satellites split into the same four bins of V-band absolute magnitude (of which only two are shown). Once again, the data are sparse, but the model is seen to predict distributions of metallicity that are too broad compared to those observed. The \cite{bower_breakinghierarchy_2006} model performs poorly here, significantly underestimating the metallicities of the fainter satellites. This problem can be directly traced to the high value of $\alpha_{\rm hot}$ used by the \cite{bower_breakinghierarchy_2006} model, which results in exceptionally strong supernovae feedback, and consequently very low effective yields, for low mass galaxies. The best-fit model to the Local Group metallicity data alone performs much better than the \cite{bower_breakinghierarchy_2006} model and significantly better than the overall best-fit model in reproducing both the trend with luminosity and the scatter at fixed luminosity. This is achieved through a combination of relatively weakly velocity-dependent feedback (i.e. a low value of $\alpha_{\rm hot}$) and a weak scaling of star formation efficiency with velocity. Together, these parameters determine the trend of effective yield with mass and the degree of self-enrichment in these galaxies. However, this weaker feedback and low $\alpha_{\rm hot}$ also result in a steeper faint end slope for the global luminosity function compared to \cite{bower_breakinghierarchy_2006}, thereby giving less success in matching the data in that particular statistic.
\begin{figure*}
\begin{tabular}{cc}
\vspace{-3mm} $-15 < M_{\rm V} \le -10$ & $-10 < M_{\rm V} \le -5$ \\
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/_LocalGroup/LocalGroup_Metallicities_-15_-10.pdf} &
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/_LocalGroup/LocalGroup_Metallicities_-10_-5.pdf}
\end{tabular}
\caption{The metallicity distribution of Local Group satellite galaxies in our models. Red points show current observational estimates of the metallicity distribution from the compilation of \protect\cite{mateo_dwarf_1998} and from \protect\cite{kirby_uncovering_2008}. Solid lines show the metallicity distribution of model satellite galaxies located in Milky Way-hosting halos with dotted lines showing the statistical uncertainty on the model estimate. Blue lines show the overall best-fit model, while magenta lines indicate the best-fit model to this dataset and the green lines show results from the \protect\cite{bower_breakinghierarchy_2006} model.}
\label{fig:LocalGroup_Metallicities}
\end{figure*}
\subsection{IGM Evolution}\label{sec:IGMResults}
As described in \S\ref{sec:IGM}, our model self-consistently evolves the properties of the intergalactic medium along with those of galaxies. In this section we discuss basic properties of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ (and related quantities) from our best-fit model.
Photoheating of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ begins to raise its temperature above the adiabatic expectation at $z\approx 25$, reaching a peak temperature of approximately 15,000~K when hydrogen becomes fully reionized, before cooling to around 2,000~K by $z=0$. Hydrogen is fully reionized by $z=8$. Helium is singly ionized at approximately the same time. There follows an extended period during which helium is partially doubly ionized; it is not fully doubly ionized until much later, around $z=4$.
Figure~\ref{fig:IGM_tau} shows the Gunn-Peterson \pcite{gunn_density_1965} and electron scattering optical depths as a function of redshift. The Gunn-Peterson optical depth rises sharply at the epoch of reionization, becoming optically thick at $z=8$. This rise is offset from that seen in observations of high redshift quasars, suggesting that reionization of hydrogen occurs somewhat too early in our model, although \cite{becker_evolution_2007} have argued that this trend in optical depth does not necessarily coincide with the epoch of reionization, but is instead consistent with a smooth extrapolation of the Lyman-$\alpha$ forest from lower redshifts (our model does not include the Lyman-$\alpha$ forest). The electron scattering optical depth is an excellent match to that inferred from \ifthenelse{\equal{\arabic{WMAPDone}}{0}}{\emph{Wilkinson Microwave Anisotropy Probe} (WMAP) \setcounter{WMAPDone}{1}}{WMAP}\ observations of the cosmic microwave background (i.e. consistent within the errors), suggesting that our model reionizes the Universe at the correct epoch.
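The electron scattering optical depth integral can be illustrated with a much simplified version of the calculation (our own sketch, not the paper's code, assuming instantaneous hydrogen-only reionization at $z_{\rm reion}$, a flat $\Lambda$CDM background, and neglecting helium):

```python
# Illustrative electron-scattering optical depth for instantaneous,
# hydrogen-only reionization at z_reion:
#   tau_e = sigma_T c n_H0 \int_0^{z_reion} (1+z)^2 / H(z) dz
import numpy as np

SIGMA_T = 6.652e-29  # Thomson cross-section [m^2]
C_LIGHT = 2.998e8    # speed of light [m/s]
MPC = 3.086e22       # metres per Mpc

def tau_e(z_reion, n_h0=0.19, h0=70.0, omega_m=0.27):
    """n_h0: comoving hydrogen density [m^-3]; h0: Hubble constant [km/s/Mpc]."""
    H0 = h0 * 1e3 / MPC  # [s^-1]
    z = np.linspace(0.0, z_reion, 2001)
    Hz = H0 * np.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m))
    integrand = (1.0 + z)**2 / Hz  # fully ionized hydrogen (x_e = 1)
    dz = z[1] - z[0]
    integral = dz * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return SIGMA_T * C_LIGHT * n_h0 * integral

print(round(tau_e(8.0), 3))  # ~0.054 for this simplified hydrogen-only case
```

The full model additionally tracks helium ionization and the detailed (non-instantaneous) ionization history, both of which raise the optical depth above this simplified estimate.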
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/IGM_GP_tau.pdf} &
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/IGM_e_Scatter_tau.pdf}
\end{tabular}
\caption{\emph{Left-hand panel:} The Gunn-Peterson \protect\pcite{gunn_density_1965} optical depth as a function of expansion factor and redshift in our best-fit model. Points show observational constraints from \protect\citeauthor{songaila_evolution_2004} (\protect\citeyear{songaila_evolution_2004}; blue points) and \protect\citeauthor{fan_constrainingevolution_2006} (\protect\citeyear{fan_constrainingevolution_2006}; green points). \emph{Right-hand panel:} The electron scattering optical depth to the \protect\ifthenelse{\equal{\arabic{CMBDone}}{0}}{cosmic microwave background (CMB) \setcounter{CMBDone}{1}}{CMB}\ as a function of redshift in our best-fit model. The blue point shows the \protect\ifthenelse{\equal{\arabic{WMAPDone}}{0}}{\emph{Wilkinson Microwave Anisotropy Probe} (WMAP) \setcounter{WMAPDone}{1}}{WMAP} 5 constraint \protect\pcite{dunkley_five-year_2009}.}
\label{fig:IGM_tau}
\end{figure*}
One of the key effects of the reionization of the Universe is to suppress the formation of galaxies in low mass dark matter halos. We find that the accretion temperature, $T_{\rm acc}$, remains approximately constant at around 30,000K below $z=3$, corresponding to a mass scale increasing with time. The filtering mass rises sharply during reionization and remains large until the present day.
We note that the model predicts too much flux at 912\AA\ in the photon background. We suspect that this is due to the fact that our \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ model is uniform. Inclusion of a non-uniform \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ (i.e. the Lyman-$\alpha$ forest) would result in a greater mean optical depth and would reduce the model flux.
\subsection{Additional Results}\label{sec:Predictions}
In this section we present two additional results that were not used to constrain the model, and therefore represent predictions.
\subsubsection{Gas Phases}\label{sec:GasPhases}
While not included in our fitting procedure, it is interesting to examine the distribution of gas between different phases as a function of dark matter halo mass. Figure~\ref{fig:GasPhases} shows the fraction of baryons in hot (including reheated gas), galaxy (cold gas in disks plus stars in disks and spheroids) and ejected (lost from the halo) phases. The \cite{bower_breakinghierarchy_2006} model (which has no ejected material) shows a peak in galaxy phase fraction at $M_{\rm halo}\approx 2\times 10^{11}h^{-1}M_\odot$, with a rapid decline towards lower masses and an asymptote to a constant fraction of 5\% in higher mass halos. This follows the general trend found in semi-analytic models (see, for example, \citealt{benson_nature_2000}) in which supernovae feedback suppresses galaxy formation in low mass halos, while inefficient cooling and \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback do the same in the highest mass halos. In contrast, our best-fit model shows modest ejection of gas in massive halos and a corresponding suppression in the hot gas fraction, although the trends are qualitatively the same as in \cite{bower_breakinghierarchy_2006}. This is different from the dependence of hot gas fraction on halo mass found by \cite{bower_flip_2008}---our current model produces less ejection than found by \cite{bower_flip_2008}, resulting in the hot gas fraction being too high in intermediate mass halos. In particular, the right-hand panel of Fig.~\ref{fig:GasPhases} shows the gas fraction in model halos as a function of hot gas temperature. Model gas fractions were computed within a radius enclosing an overdensity of 2500, just as were the observed data. This radius, and the gas fraction within it, is computed using the dark matter and gas density profiles described in \S\ref{sec:HaloProfiles} and \S\ref{sec:HotGasDist} respectively. 
Compared to the data (magenta points), the \cite{bower_breakinghierarchy_2006} model is a very poor match, showing almost no trend with temperature. Our best fit model also performs poorly, and it is clear that the suppression in hot gas fraction does not have the correct dependence on halo mass\footnote{Given the hot gas profile assumed in our model and the baryon fraction, the largest ratio of hot gas to dark matter mass we could find here in massive halos is $0.10$ (since the gas profile is cored, but the dark matter profile is not).}. In contrast, the \cite{bower_flip_2008} model produced an excellent match to these data (as it was designed to do). We therefore expect that our best-fit model will not give a good match to the X-ray luminosity-temperature relation, and would instead require more efficient ejection, with a stronger dependence on halo mass in the relevant range, to achieve a good fit. We reiterate that these data were not included as a constraint when searching parameter space for the best-fit model. We will return to this issue in future work, including these constraints directly.
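Locating the overdensity radius used above is a simple root-finding problem; the following sketch does it for a pure NFW dark matter profile (our own illustration; the model's actual profiles include a cored gas component and are more elaborate):

```python
# Find r_2500 for an NFW halo: the radius enclosing a mean overdensity of
# 2500, given a concentration c defined at an overdensity of 200. Uses the
# NFW enclosed-mass function m(y) = ln(1+y) - y/(1+y) and bisection.
import math

def nfw_m(y):
    """Dimensionless NFW enclosed mass at radius y = r / r_s."""
    return math.log(1.0 + y) - y / (1.0 + y)

def r2500_over_r200(c, target=2500.0, delta=200.0):
    """Radius enclosing mean overdensity `target`, in units of r_200."""
    lo, hi = 1e-6, 1.0
    for _ in range(100):
        x = 0.5 * (lo + hi)
        # Mean enclosed overdensity at x * r200, in units of delta; this is
        # a decreasing function of x, so move outwards while it exceeds the
        # target ratio.
        over = nfw_m(c * x) / (x**3 * nfw_m(c))
        if over > target / delta:
            lo = x
        else:
            hi = x
    return x

print(round(r2500_over_r200(7.0), 3))  # ~0.317: r_2500 is ~0.3 r_200 for c = 7
```

Because $r_{2500}$ lies well inside the halo, the enclosed gas fraction is sensitive to the inner shapes of both the gas and dark matter profiles, which is why the cored gas profile caps the achievable hot gas fraction as noted in the footnote.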
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/GasPhases.pdf} &
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-BestModel_PostReion/GasPhasesT.pdf}
\end{tabular}
\caption{\emph{Left panel:} Solid lines show the median fraction of baryons in different phases as a function of halo mass, while dotted lines indicate the $10^{\rm th}$ and $90^{\rm th}$ percentiles of the distribution. Red lines show gas in the hot phase (which includes any gas in the $M_{\rm reheated}$ reservoir), blue lines gas in the galaxy phase and green lines gas which has been ejected from the halo. Thin lines indicate results from the \protect\cite{bower_breakinghierarchy_2006} model while thick lines show results from the best-fit model used in this work. \emph{Right panel:} The ratio of hot gas mass to total halo mass as a function of halo virial temperature is shown by the solid red line. Magenta points show data from \protect\cite{sun_chandra_2009} (crosses) and \protect\cite{vikhlinin_chandra_2009} (squares). Both the observed data and the model results are measured within $r_{2500}$ (the radius enclosing an overdensity of 2500). These data were not included as constraints in our search of the model parameter space.}
\label{fig:GasPhases}
\end{figure*}
\subsubsection{Intrahalo Light}\label{sec:ICL}
Stars that are tidally stripped from model galaxies become part of a diffuse intrahalo component which we assume fills the host halo. We can therefore predict the fraction of stars which are found in this intrahalo light as a function of halo mass and compare it to measurements of this quantity. \cite{zibetti_intergalactic_2005} have measured this quantity for clusters, while \cite{mcgee_constraintsintragroup_2009} have measured it for galaxy groups. In Fig.~\ref{fig:ICL} we show their results overlaid on results from our model. Blue points show individual model halos, while the blue line shows the running median of this distribution. The magenta and red points indicate the above-mentioned observational determinations for groups and clusters respectively. Our model predicts an intrahalo light fraction which is a very weak function of halo mass, remaining at 20--25\% over two orders of magnitude in halo mass. At fixed halo mass, there is significant scatter, particularly for the lower mass halos. Our predictions are in agreement with the current observational determinations, given their rather large error bars, and it is clear that in the future such measurements have the potential to provide valuable constraints on models of tidal stripping.
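A running median of the kind plotted in Fig.~\ref{fig:ICL} can be computed as below (an illustrative sketch with synthetic data; the model output format is hypothetical):

```python
# Running median of a scattered quantity (e.g. intrahalo light fraction)
# in equal-width bins of log halo mass, demonstrated on synthetic data.
import numpy as np

def running_median(x, y, n_bins=10):
    """Median of y in equal-width bins of log10(x)."""
    lx = np.log10(x)
    edges = np.linspace(lx.min(), lx.max(), n_bins + 1)
    centres, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (lx >= lo) & (lx < hi)
        if sel.any():
            centres.append(0.5 * (lo + hi))
            medians.append(np.median(y[sel]))
    return np.array(centres), np.array(medians)

rng = np.random.default_rng(0)
halo_mass = 10**rng.uniform(13, 15, 500)           # synthetic masses [Msun/h]
icl_frac = rng.normal(0.22, 0.05, 500).clip(0, 1)  # ~20-25% with scatter
centres, med = running_median(halo_mass, icl_frac)
print(med.round(2))  # roughly flat near ~0.22 across two decades in mass
```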
\begin{figure}
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/ICL.pdf}
\caption{The fraction of stars which are part of the intrahalo light as a function of halo mass. Blue points show individual model halos, while the blue line shows the running median of this distribution. The magenta and red points indicate the observational determinations of \protect\cite{mcgee_constraintsintragroup_2009} and \protect\cite{zibetti_intergalactic_2005} for groups and clusters respectively.}
\label{fig:ICL}
\end{figure}
\section{Effects of Physical Processes}\label{sec:Effects}
In the previous section we explored the effect of varying the parameters of the model on key galaxy properties. We will now instead briefly explore the effects of certain physical processes (those which are either new to this work or have not been extensively examined in the past) on the results of our galaxy formation model. The intent here is not to assess whether these models are ``better'' than our standard model---they all utilize less realistic physical models---but to examine the effects of ignoring certain physical processes or of making certain assumptions. This emphasises one of the key strengths of the semi-analytic approach: the ability to rapidly investigate the importance of different physical processes for the properties of galaxies. Rather than showing all model results in each case, we will show a small selection of model results which best demonstrate the effects of the updated model.
\subsection{Reionization and Photoheating}\label{sec:PhotoEffect}
Our standard model includes a fully self-consistent treatment of the evolution of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ and its back reaction on galaxy formation. Two key physical processes are at work here. The first is the suppression of baryonic infall into halos due to the heating of the \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ by the photoionizing background (see \S\ref{sec:BaryonSupress}). The second is the reduction in cooling rates of gas in halos as a result of photoheating by the same background (see \S\ref{sec:Cloudy}). Here, we compare this standard model to a model with identical parameters, but with these two physical processes switched off. (We retain Compton cooling and molecular hydrogen cooling, but revert to collisional ionization equilibrium cooling curves since there is no photon background in this model.)
\begin{figure*}
\begin{tabular}{cc}
\vspace{-10mm}\hspace{65mm}a & \hspace{65mm}b\\
\vspace{10mm}\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoReionOnly/bJ_LF.pdf} &
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoReionOnly/z5_1500A_LF.pdf} \\
\vspace{-10mm}\hspace{65mm}c & \hspace{65mm}d\\
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoReionOnly/SFR.pdf} &
\includegraphics[width=80mm,viewport=7mm 55mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoReionOnly/_LocalGroup/LocalGroup_LumFun.pdf} \\
\end{tabular}
\caption{Comparisons between our best-fit model (blue lines) and the same model without the effects of suppression of baryonic accretion or photoionization equilibrium cooling (green lines). \emph{Panel a:} The $z=0$ b$_{\rm J}$-band luminosity function as in Fig.~\protect\ref{fig:bJ_LF}. \emph{Panel b:} The $z=5$ 1500\AA\ luminosity function as in Fig.~\protect\ref{fig:z5_6_LF}. \emph{Panel c:} The mean star formation rate density in the Universe as a function of redshift as in Fig.~\protect\ref{fig:SFH}. \emph{Panel d:} The luminosity function of Local Group satellite galaxies as in Fig.~\protect\ref{fig:LocalGroup_LF}.}
\label{fig:NoReion}
\end{figure*}
Figure~\ref{fig:NoReion} shows some of the key effects of making these changes to our best-fit model. In panel ``a'' we show the $z=0$ b$_{\rm J}$-band luminosity function. The model with no baryonic accretion suppression or photoheating (green line) shows a small excess of very bright galaxies relative to the best-fit model (blue line) due to slightly different cooling rates in this model which affect the efficiency of \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback. As shown in panel ``b'' of Fig.~\ref{fig:NoReion}, the $z=5$ and $z=6$ \ifthenelse{\equal{\arabic{UVDone}}{0}}{ultraviolet (UV) \setcounter{UVDone}{1}}{UV}\ luminosity functions are almost identical in this variant model and our best-fit model. At these higher redshifts \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback has yet to become a significant factor in galaxy evolution. A small excess of galaxies is seen in the model with no baryonic accretion suppression or photoheating at the faintest magnitudes plotted. This is as expected---those mechanisms preferentially suppress the formation of very low mass galaxies.
The effects of this change in the \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback can also be seen in panel ``c'', where we show the star formation history of the Universe. At high redshifts, the two models are nearly identical. However, below $z\approx 1.5$, when \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback begins to come into play, the two models diverge as a result of the weakened \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback in this variant model (the difference lies primarily in the quiescent star formation rates---the rates of bursting star formation remain quite similar).
Finally, in panel ``d'', we show the luminosity function of Local Group satellites. There is little difference between this variant model and the best-fit model for satellites brighter than about $M_{\rm V}=-10$---photoheating and baryonic suppression play only a minor role in shaping the properties of these brighter satellites. At fainter magnitudes, the variant model predicts more satellites than the best-fit model---by about a factor of two. Suppression of baryonic accretion and photoheating are therefore clearly important mechanisms for determining the number of satellites in the Local Group, but other baryonic effects (namely \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ feedback) must also be at work in reducing the number of satellites below the number of dark matter subhalos.
\subsection{Orbital Hierarchy}\label{sec:HierarchyEffect}
\begin{figure*}
\begin{tabular}{cc}
\vspace{-10mm}\hspace{65mm}a & \hspace{65mm}b\\
\vspace{10mm}\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoOrbitHierarchy/K_LF.pdf} &
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoOrbitHierarchy/KLF_Morph_Type2.pdf}\\
\vspace{-10mm}\hspace{65mm}c & \hspace{65mm}d\\
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoOrbitHierarchy/xi_s_-17_5.pdf} &
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoOrbitHierarchy/_LocalGroup/LocalGroup_LumFun.pdf}\\
\end{tabular}
\caption{Comparisons between our best-fit model (blue lines) and the same model without a full hierarchy of substructures (green lines). \emph{Panel a:} The $z=0$ b$_{\rm J}$-band luminosity function as in Fig.~\protect\ref{fig:bJ_LF}. \emph{Panel b:} The K-band $z=0$ luminosity function of S0 galaxies as in Fig.~\protect\ref{fig:K_Morpho_LF}. \emph{Panel c:} The redshift space two-point correlation function of galaxies with $-18.5<{\rm b}_{\rm J}\le-17.5$ as in Fig.~\protect\ref{fig:2dFGRS_Clustering}. \emph{Panel d:} The luminosity function of Local Group satellite galaxies as in Fig.~\protect\ref{fig:LocalGroup_LF}.}
\label{fig:NoOrbitalHierarchy}
\end{figure*}
In our standard model, the full hierarchy of substructures (i.e. halos within halos within halos\ldots) is followed (see \S\ref{sec:Merging}). This is in contrast to all previous semi-analytic treatments, in which only the first level of the hierarchy has been considered (i.e. only subhalos, no sub-subhalos etc.). Figure~\ref{fig:NoOrbitalHierarchy} compares results from this variant model (green lines) with those from our best-fit standard model (blue lines). Panel ``a'' of this figure shows the $z=0$ b$_{\rm J}$-band luminosity function of galaxies. Without a hierarchy of substructures we find that this luminosity function is unchanged over most of the range of luminosities shown. The exception is for the brightest galaxies, which become slightly brighter when no hierarchy of substructures is used. These galaxies grow primarily through merging, and this therefore suggests that including a hierarchy of substructures reduces the rate of merging onto these galaxies. At first sight, this seems counterintuitive, as galaxies should have more opportunity to merge as they pass through each level of the hierarchy. In fact, this is not the case. A subhalo may sink within the potential well of a halo and then be tidally stripped, releasing any sub-subhalos it may contain into the halo. These sub-subhalos (which become subhalos in their new host) are placed onto new orbits consistent with their orbital position and velocity at the time at which their subhalo was disrupted. The merging timescale for these orbits plus the time they have already spent orbiting with a subhalo can be longer than the merging timescale they would have been assigned had they been made subhalos as soon as they crossed the virial radius of the host halo. This is due in part to the relatively weak dependence of merging timescale on $r_{\rm C}(E)$ in the \cite{jiang_fitting_2008} fitting formula\footnote{We note that this formula has not been well-tested in the regime in which we are employing it. 
A more detailed study of the merging timescales and orbits of sub-subhalos is clearly warranted.} and partly due to the fact that sub-subhalos are ejected onto relatively energetic orbits (since they effectively gain a kick in velocity as their subhalo no longer holds them in place).
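Schematically, dynamical-friction fitting formulae of this type take the form (an illustrative scaling only, not the exact \cite{jiang_fitting_2008} expression)
\begin{equation}
\tau_{\rm merge} \propto \frac{M_{\rm host}/M_{\rm sat}}{\ln\left(1+M_{\rm host}/M_{\rm sat}\right)}\,f(\epsilon)\left[\frac{r_{\rm C}(E)}{r_{\rm vir}}\right]^{\alpha}\tau_{\rm dyn},
\end{equation}
where $f(\epsilon)$ is a slowly varying function of the orbital circularity and the exponent $\alpha$ is of order unity. Since the mass-ratio term dominates, placing a sub-subhalo onto a somewhat more energetic orbit (i.e. with larger $r_{\rm C}(E)$) changes its merging timescale only modestly.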
\begin{figure*}
\begin{tabular}{cc}
\vspace{-10mm}\hspace{65mm}a & \hspace{65mm}b\\
\vspace{10mm}\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoTidalOrRam/bJ_LF.pdf} &
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoTidalOrRam/SDSS_Colours_-17_-16.pdf}\\
\vspace{-10mm}\hspace{65mm}c & \hspace{65mm}d\\
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoTidalOrRam/xi_s_-17_5.pdf} &
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoTidalOrRam/Gas2Light.pdf}\\
\end{tabular}
\caption{Comparisons between our best-fit model (blue lines) and the same model without the effects of tidal or ram pressure stripping of gas and stars from galaxies and their hot atmospheres (green lines). \emph{Panel a:} The $z=0$ b$_{\rm J}$-band luminosity function as in Fig.~\protect\ref{fig:bJ_LF}. \emph{Panel b:} The $^{0.1}$g$-^{0.1}$r colour distribution for galaxies at $z=0.1$ with $-17<M_{^{0.1}\rm g}\le-16$ as in Fig.~\protect\ref{fig:SDSS_Colours}. \emph{Panel c:} The redshift space two-point correlation function of galaxies with $-18.5<{\rm b}_{\rm J}\le-17.5$ as in Fig.~\protect\ref{fig:2dFGRS_Clustering}. \emph{Panel d:} Gas (hydrogen) to B-band light ratios at $z=0$ as a function of B-band absolute magnitude as in Fig.~\protect\ref{fig:Gas2Light}.}
\label{fig:NoTidalOrRam}
\end{figure*}
Panel ``b'' in Fig.~\ref{fig:NoOrbitalHierarchy} shows that most of the increase in luminosity when the orbital hierarchy is ignored occurs in the S0 morphological class, which, in this model, makes up a significant part of the bright end of the luminosity function. Panel ``c'' shows that the inclusion of the orbital hierarchy makes little difference to the correlation function of galaxies. Mergers between galaxies remain dominated by subhalo-halo interactions, such that this new physics has little impact on the number of pairs of galaxies in massive halos. Finally, panel ``d'' shows the luminosity function of Local Group galaxies. Their numbers are slightly reduced when the orbital hierarchy is ignored, a direct consequence of the slightly increased merger rate.
\subsection{Tidal and Ram Pressure Stripping}\label{sec:StripEffect}
Our standard model incorporates both ram pressure and tidal stripping of gas and stars from galaxies and their hot gaseous atmospheres. We compare this standard model to one in which both of these stripping mechanisms have been switched off. In general, tidal stripping of stars will reduce the luminosity of satellite galaxies. Ram pressure or tidal stripping of gas from galaxies or their hot atmospheres will also reduce the luminosity of satellites and, additionally, may increase the luminosity of central galaxies (since the stripped gas is added to their supply of potential fuel).
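As a rough guide to when gas stripping operates, a ram pressure criterion of the kind introduced by Gunn \& Gott (1972) can be used (shown here as an illustrative condition rather than the exact implementation in our model): gas is removed from a disk at radii where
\begin{equation}
\rho_{\rm ICM} v^2 \gtrsim 2\pi G \Sigma_{\rm gas} \Sigma_{\rm tot},
\end{equation}
where $\rho_{\rm ICM}$ is the density of the ambient medium, $v$ is the orbital speed of the satellite through that medium, and $\Sigma_{\rm gas}$ and $\Sigma_{\rm tot}$ are the local surface densities of gas and of total gravitating material respectively; that is, where the ram pressure exceeds the gravitational restoring force per unit area.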
\begin{figure*}
\begin{tabular}{cc}
\vspace{-10mm}\hspace{65mm}a & \hspace{65mm}b\\
\vspace{10mm}\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoNonInstant/bJ_LF.pdf} &
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoNonInstant/SFR.pdf}\\
\vspace{-10mm}\hspace{65mm}c & \hspace{65mm}d\\
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoNonInstant/SDSS_Colours_-18_-17.pdf} &
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoNonInstant/SDSS_Colours_-22_-21.pdf}\\
\end{tabular}
\caption{Comparisons between our best-fit model (blue lines) and the same model using an instantaneous approximation for recycling, chemical enrichment and \protect\ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ feedback (green lines). \emph{Panel a:} The $z=0$ b$_{\rm J}$-band luminosity function as in Fig.~\protect\ref{fig:bJ_LF}. \emph{Panel b:} The star formation rate density as a function of redshift as in Fig.~\protect\ref{fig:SFH}. \emph{Panel c:} The $^{0.1}$g$-^{0.1}$r colour distribution for galaxies at $z=0.1$ with $-18<M_{^{0.1}\rm g}\le-17$ as in Fig.~\protect\ref{fig:SDSS_Colours}. \emph{Panel d:} The $^{0.1}$g$-^{0.1}$r colour distribution for galaxies at $z=0.1$ with $-22<M_{^{0.1}\rm g}\le-21$ as in Fig.~\protect\ref{fig:SDSS_Colours}.}
\label{fig:NoNonInstant}
\end{figure*}
Figure~\ref{fig:NoTidalOrRam} compares results from the model with no tidal or ram pressure stripping (green lines) with our standard, best-fit model (blue lines). In panel ``a'' we show the b$_{\rm J}$-band luminosity function. At the faintest magnitudes, the model without stripping shows an excess of galaxies relative to the standard model. This is due to low mass galaxies in groups and clusters being stripped of a significant fraction of their stars in the standard model. Conversely, the model without stripping produces fewer of the brightest galaxies (or, more correctly, the bright galaxies that it produces are not quite as luminous as in the standard model). This is a consequence of the fact that ram pressure stripping is able to remove some gas from low mass galaxies, making it available for later accretion onto massive galaxies, allowing those massive galaxies to grow somewhat more luminous. In panel ``b'' we examine the colour distribution of faint galaxies. The model with no stripping produces a shift of galaxies to the blue cloud as expected---with stripping included these galaxies lose their gas supply and quickly turn red.
A further effect of stripping can be seen in panel ``c'' which shows the correlation function of faint galaxies. Without stripping, this is increased on small scales since a greater number of galaxies in massive halos now make it into the luminosity range selected. Tidal stripping of stars (and, to some extent, ram pressure removal of gas) reduces the luminosities of cluster galaxies, lowering the number of galaxy pairs on small scales in a given luminosity range and thereby helping to suppress small scale correlations. Finally, we show in panel ``d'' the gas to light ratio in a model without stripping. In low mass galaxies the resulting ratio is much higher than in our standard case, a direct result of this gas no longer being removed by ram pressure forces. In more massive galaxies there is, instead, a reduction in the gas to light ratio relative to the standard model, arising because much of the gas is now locked away in smaller systems and so is not available for incorporation into larger galaxies.
Although not shown in Fig.~\ref{fig:NoTidalOrRam}, stripping processes have an effect on Local Group galaxies---in the absence of stripping there is a modest increase (by around 50\%) in the number of galaxies brighter than $M_{\rm V}=-10$, but the total number of galaxies is mostly unchanged. Additionally, the sizes of Local Group satellites are larger when stripping processes are ignored, as expected (many of the satellites lose their outer portions due to tidal stripping), while metallicities are mostly unaffected.
\subsection{Non-instantaneous Recycling, Enrichment and Supernovae Feedback}\label{sec:NonInstEffect}
\begin{figure*}
\begin{tabular}{cc}
\vspace{-10mm}\hspace{65mm}a & \hspace{65mm}b\\
\vspace{10mm}\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoNonInstant/disk_size_5.pdf} &
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoNonInstant/Zstar_-20_-19.pdf} \\
\vspace{-10mm}\hspace{65mm}\raisebox{0mm}{c} & \hspace{65mm}\begin{tabular}{c} d \\ \raisebox{-60mm}{e} \end{tabular}\vspace{-60mm}\\
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoNonInstant/Gas2Light.pdf} &
\includegraphics[width=80mm,viewport=0mm 10mm 195mm 265mm,clip]{Plots/HGFv2.0_PostReion-NoNonInstant/SDSS_Zgas.pdf}\\
\end{tabular}
\caption{Comparisons between our best-fit model (blue lines) and the same model with instantaneous recycling, chemical enrichment and \protect\ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ feedback (green lines). \emph{Panel a:} The distribution of disk sizes for galaxies in the range $-20<M_{I,0}-5\log_{10}h\le-19$ as in Fig.~\protect\ref{fig:Other_Sizes}. \emph{Panel b:} The distribution of stellar metallicities for galaxies in the range $-20<M_{\rm B}-5\log_{10}h\le-19$ as in Fig.~\protect\ref{fig:Zstar}. \emph{Panel c:} The ratio of hydrogen gas mass to B-band luminosity as in Fig.~\protect\ref{fig:Gas2Light}. \emph{Panels d \& e:} The gas phase metallicity as a function of absolute magnitude as in Fig.~\protect\ref{fig:SDSS_Zgas}.}
\label{fig:NoNonInstant2}
\end{figure*}
Our standard model utilizes a fully non-instantaneous model of recycling and chemical enrichment from stellar populations and of feedback from \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}. We compare this model with one in which the instantaneous recycling approximation is used and in which \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ feedback occurs instantaneously after star formation. In this model, cooling rates are computed from the total metallicity (rather than accounting for the abundances of individual elements as described in \S\ref{sec:Cooling}) since we cannot track individual elements in this approximation. We adopt a yield of $p=0.04$ and a recycled fraction of $R=0.39$ for this instantaneous recycling model. (These values correspond approximately to the values expected for a single stellar population with a Chabrier \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ and an age of approximately 10~Gyr.)
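For reference, under this approximation (and neglecting inflows and outflows) the gas and its metals obey the standard closed-box relations: for a star formation rate $\psi$,
\begin{equation}
\dot{M}_{\rm gas} = -(1-R)\,\psi, \qquad Z(t) = Z(0) + p_{\rm eff}\ln\left[\frac{M_{\rm gas}(0)}{M_{\rm gas}(t)}\right],
\end{equation}
where $p_{\rm eff}$ is the yield per unit mass locked into long-lived stars. This is a textbook sketch given only to illustrate the approximation; the precise definitions of $p$ and $R$ follow those used elsewhere in this paper.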
Figures~\ref{fig:NoNonInstant} and \ref{fig:NoNonInstant2} compare the results of this model with our best-fit standard model. In Fig.~\ref{fig:NoNonInstant}, panel ``a'' shows that, at $z=0$, the bright end of the b$_{\rm J}$-band luminosity function is shifted brightwards in the instantaneous model. This is a consequence of the increased metal enrichment in this model which increases cooling rates (which both increases the amount of gas that can cool and increases the mass scale at which \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback becomes effective). This trend is reversed at higher redshifts for the UV luminosity function that we consider. Here, the luminosity function is shifted to fainter magnitudes in the instantaneous model. This effect is due to increased dust extinction in the instantaneous model (which is able to build up metals more rapidly, particularly at high redshifts, and so results in dustier galaxies).
Panel ``b'' shows the star formation rate density as a function of redshift. The instantaneous model shows a lower star formation rate at high redshift, and a higher rate at low redshift compared to our standard model. At high redshift this can be seen to be due almost entirely to a change in the rate of bursty star formation. The cause of this is rather subtle: in the non-instantaneous model gas is rapidly locked up into stars at high redshifts and is only slowly returned to the \ISM\ of galaxies. This, coupled with somewhat reduced feedback in the non-instantaneous model (since it takes some time for the \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ to occur after star formation happens) makes disks more massive and therefore more prone to instabilities (see \S\ref{sec:MinorChanges}). The non-instantaneous model has more instability-triggered bursts of star formation at high redshift and there is more gas available to burst in those events. At low redshifts, differences in metal enrichment in hot gas in the instantaneous model result in slightly less efficient \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback and, therefore, a higher star formation rate.
Instantaneous enrichment has a big effect on galaxy colours as indicated in panels ``c'' and ``d'' of Fig.~\ref{fig:NoNonInstant}. At faint magnitudes we find a somewhat better fit to the data in the instantaneous model (the blue and red peaks are more widely separated and the red peak is less populated). However, at bright magnitudes the instantaneous model produces too many blue galaxies and too few red ones, resulting in significant disagreement with the data.
Panel ``a'' of Fig.~\ref{fig:NoNonInstant2} shows the sizes of galaxy disks. Remarkably, the instantaneous model shows a much better match to the data than our standard model\footnote{It is worth noting that the \protect\cite{bower_breakinghierarchy_2006} model uses the instantaneous recycling approximation and also does better at matching galaxy sizes than our current best-fit model.}. This can be traced to a corresponding difference in the distributions of specific angular momenta of disks in the two models, which, in turn, can be traced to the different rates of instability-triggered bursts at high redshifts in the two models. In the non-instantaneous model these happen at a high rate. As a result, the low angular momentum material of these disks is locked up into the spheroid components. Later accretion then results in the formation of disks from higher angular momentum material, resulting in disks that are too large. The stochasticity of this process likewise leads to a large dispersion in disk specific angular momenta and, therefore, sizes. In the instantaneous model the rate of instability-triggered bursts is greatly reduced, allowing disks to retain their early accreted, low angular momentum material, giving smaller disks with less variation in size.
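The link between specific angular momentum and disk size invoked here follows the standard picture (e.g. Mo, Mao \& White 1998), in which a disk that conserves the specific angular momentum of the material from which it forms attains a scale length of order
\begin{equation}
R_{\rm d} \approx \frac{\lambda}{\sqrt{2}}\, r_{\rm vir},
\end{equation}
with $\lambda$ the halo spin parameter (quoted here only as an order-of-magnitude guide, not as our detailed size calculation). Locking the low angular momentum material into the spheroid raises the mean specific angular momentum of the remaining disk material, and hence its size, consistent with the behaviour described above.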
Panel ``b'' shows an example of the distribution of stellar metallicities. Stars in the instantaneous model are enriched to higher metallicities as expected---in the non-instantaneous model it takes time for stars to evolve and produce metals, allowing less enrichment overall. Panels ``c'' and ``d'' show the effects on gas content and metallicity respectively. The gas content is reduced in the instantaneous model and is in excellent agreement with the data. This is a result of the late-time replenishment of the \ISM\ in the non-instantaneous model by material recycled from stars. The instantaneous model produces lower gas phase metallicities, again as a result of the lack of this late-time replenishment which consists of relatively low metallicity material.
\subsection{Adiabatic Contraction}\label{sec:ContractionEffect}
\begin{figure*}
\begin{tabular}{cc}
\vspace{-10mm}\hspace{65mm}a & \hspace{65mm}b\\
\vspace{10mm}\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoContraction/_LocalGroup/LocalGroup_Sizes_-15_-10.pdf} &
\includegraphics[width=80mm,viewport=7mm 50mm 205mm 255mm,clip]{Plots/HGFv2.0_PostReion-NoContraction/SDSS_TF_-21_-20.pdf}\\
\end{tabular}
\caption{Comparisons between our best-fit model (blue lines) and the same model without adiabatic contraction of dark matter halos (green lines). \emph{Panel a:} The distribution of half-light radii for Local Group satellites in the magnitude range $-15 < M_{\rm V} \le -10$ as in Fig.~\protect\ref{fig:LocalGroup_Sizes}. \emph{Panel b:} The Tully-Fisher relation for galaxies in the magnitude range $-21 < M_{i} \le -20$ as in Fig.~\protect\ref{fig:SDSS_TF}.}
\label{fig:NoContraction}
\end{figure*}
Adiabatic contraction of dark matter halos in response to the condensation of baryons is included in our standard model as described in \S\ref{sec:Sizes}. In Fig.~\ref{fig:NoContraction} we compare our standard model with one in which this adiabatic contraction is switched off such that dark matter halo profiles are unchanged by the presence of baryons. Such a change may be expected to result in galaxies which are somewhat larger and more slowly rotating. Panel ``a'' shows the effects on Local Group satellite galaxy sizes. A slight increase in size is seen as expected. For larger galaxies, we see a similar effect. Rotation speeds of galaxies are less affected though---panel ``b'' shows a slice through the Tully-Fisher relation and indicates that switching off adiabatic contraction has actually had little effect on this statistic.
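For reference, the simplest such contraction model (Blumenthal et al. 1986; our implementation in \S\ref{sec:Sizes} differs in detail) treats the quantity $rM(r)$ as an adiabatic invariant for dark matter on circular orbits:
\begin{equation}
r_{\rm f}\left[M_{\rm DM}(r_{\rm f}) + M_{\rm b}(r_{\rm f})\right] = r_{\rm i} M_{\rm i}(r_{\rm i}),
\end{equation}
where dark matter initially enclosed within radius $r_{\rm i}$ in the profile $M_{\rm i}$ ends up at radius $r_{\rm f}$, and $M_{\rm b}$ is the final baryonic mass profile. Switching contraction off simply holds $M_{\rm DM}(r)$ fixed at its initial form, removing the extra central pull of contracted dark matter.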
\section{Discussion}\label{sec:Discussion}
We have described a substantially revised implementation of the {\sc Galform}\ semi-analytic model of galaxy formation. This version incorporates the numerous developments in our understanding of galaxy formation since the last major review of the code \pcite{cole_hierarchical_2000}. Together with
changes to the code to implement black hole feedback \pcite{bower_breakinghierarchy_2006, bower_flip_2008},
ram-pressure stripping \pcite{font_colours_2008} and to track the formation of black holes \pcite{malbon_black_2007}, we have made
fundamental improvements to key physical processes (such as cooling,
re-ionisation, galaxy merging and tidal stripping) and removed a number of limiting assumptions (in particular, instantaneous recycling and chemical enrichment are no longer assumed).
In addition to computing the properties of galaxies, the model now self-consistently solves for the evolution of the intergalactic medium and its influence on later epochs of galaxy formation.
The goals of these changes have been three-fold. Firstly, a prime motivation
has been to remove the code's explicit dependence on discrete halo formation events. In the older code, the mass-doubling events were used to reset halo properties and re-initialise the cooling and free-fall accretion calculations. In turn, this led to abrupt changes in the supply of cold gas to the central galaxy which were often not associated with any particular merging event in the halo's history. The new method avoids such artificial dependencies and leads to smoothly varying gas accretion rates in haloes with smooth accretion histories, producing abrupt changes only during
sufficiently important merging events. The new scheme explicitly tracks
the energetics of material expelled from galaxies by feedback,
and also allows the angular momentum of the feedback and accreted material
to be self-consistently propagated through the code.
Secondly, we have aimed to enhance the range of physical processes
treated in the code so that it incorporates the full range of
effects that are likely to be key in determining galaxy properties.
In particular, we now include careful treatments of galaxy-environment
interactions (tidal and ram-pressure stripping), taking into
account the sub-halo hierarchy present within each halo; we take into
account the self-consistent re-ionisation of the IGM and the impact that
this has on gas supply to early galaxies; and we allow for material
to be ejected from haloes (both by star-formation and AGN), broadening the
range of plausible feedback schemes included in the model. Finally,
the version of the code described here may be driven by accurate Monte Carlo
realisations of halo merger trees. This allows the uncertainty in the
background cosmological parameters to be factored into the model
parameter constraints.
We have also advanced the methodology by which we test the model's
performance by simultaneously comparing the model to a wide range
of observational data. In addition to our conventional approach of
primarily comparing to local optical and near-IR luminosity functions, we
now include luminosity function data covering a much greater range
of redshift and wavelength, the star formation history of the universe,
the distribution of galaxies in colour-space, their gas and metal content,
the Tully-Fisher relation and various observational measurements
of the galaxy size distribution. In addition to these galaxy properties,
we also use the thermal evolution of the IGM as an additional
constraint.
The drawback of introducing additional physical processes is that this
introduces additional parameters into the model. However, we now
believe that we have the tools to efficiently explore high-dimensional
parameter spaces and thus identify strongly constrained parameter
combinations, and the additional freedom in the model is much smaller
than the number of observational constraints.
We performed an extensive search of the new model's parameter space utilizing the ``projection pursuit'' methodology of \cite{bower_parameter_2010} to rapidly search the high-dimensional space.
This allowed us to find a model which is an adequate description of many of the data sets which were used as constraints. In particular, the model is a good match to local luminosity functions and the overall rate of star formation in the Universe while simultaneously producing reasonable distributions of galaxy colours, metallicities, gas fractions and supermassive black hole masses all while predicting a plausible reionization history.
In many of the original data comparisons, the model gives comparable results to \cite{bower_breakinghierarchy_2006}. In other
comparisons (particularly, colours, metallicities and gas fractions) it greatly improves on the older model.
Additionally, most of the model parameters
have shifted relatively little compared to the older model.
Where parameters have changed significantly, it is possible to identify
a direct cause. For example, the minimum timescale on which feedback
material can be re-accreted by a galaxy (which is set by $\alpha_{\rm reheat}$) is shorter for the new model. This makes good sense since a fraction
of feedback material is now expelled from the system through the new
expulsive feedback channel (see \S\ref{sec:Feedback}). Far from indicating a lack of progress, the comparability of the models is a tremendous success. Many of the internal algorithms of the model have been extensively revised: the near stability of the end results suggests a high degree of convergence, and implies that further detailed refinement of many aspects of the model is not required.
Despite this encouraging success, significant discrepancies between the
model and the data remain in many areas. In particular, the sizes of galaxies are too large in our model (and there is too much dispersion in galaxy sizes). This may reflect a breakdown in certain model assumptions (e.g. the conservation of angular momentum of gas during the cooling and collapse phase), or that we are still lacking some key physics in this
part of the model (e.g. dissipative effects during spheroid formation; \citealt{covington_predictingproperties_2008}). In addition to the sizes, our model continues to produce too many satellite galaxies in high mass halos, leading to an overprediction of the small scale clustering amplitude of faint galaxies; and predicts a Tully-Fisher relation offset from that which is observed, despite using the latest models of adiabatic contraction. (We note that \cite{dutton_revised_2007} have demonstrated the difficulty of obtaining a match to the Tully-Fisher relation quite clearly, and have advocated adiabatic expansion or transfer of angular momentum from gas to dark matter to alleviate this problem.) Additionally, at high redshifts the agreement with luminosity function data is relatively poor, but these results are highly sensitive to the very uncertain effects of dust on galaxy magnitudes.
The overall aim of this work was to construct a model that incorporates the majority of our current understanding of galaxy formation and explore the extent to which such a model can reproduce a large body of observational data spanning a range of physical properties, mass scales and redshifts. This is far from being the final word on the progress of this model. Numerous improvements remain to be made---such as the inclusion of a physics-based model of star formation. Nevertheless, the current version has been demonstrated to produce good agreement with a very wide range of observational data. Despite the large number of adjustable parameters, current observational data is more than sufficient to constrain this model---the good agreement with that data should be seen as a confirmation of current galaxy formation theory.
We have not attempted, in this work, to explore in detail which physical processes are responsible for which observed phenomena. That, and an investigation of which data provided constraints on which parameters, will be the subject of a future work. The parameter space searching methodology described in this paper is quite efficient and successful, but is presently limited by two factors. The first is the available computing time and speed of model calculations which limits how fine-grained any parameter space search can be. Further optimization of our galaxy formation code coupled with more and faster computers will alleviate this problem, but it will remain a limitation for the near future. The second limitation is our ignorance about how best to combine constraints from different datasets. Some of the observational data that we would like to use is undoubtedly affected by poorly understood systematic errors. As a result, it is unclear what precedence
should be assigned to each dataset.
For example, given the robustness of the measurements, are we more
interested in the class of models that accurately match the $z=5$ luminosity function, or those that perform better in clustering measurements?
Ideally the model would match both equally well, but underlying systematic
errors may make this impossible.
Furthermore, to utilize the observational data in a statistically correct way we often require more information (e.g. the full covariance matrix rather than just errors on each data point) than is available.
The most formidable challenge, however, is to better understand the uncertainty in each model prediction. This is a combination of the variance introduced by the limited number of dark matter halo merger trees that we are able to simulate and the accuracy of the approximations made in computing a given property in the model. The first of these is relatively straightforward to estimate (for example, via a bootstrap resampling approach), but the second is much more difficult. For example, we are quite sure that calculations of dust extinction in rapidly evolving high redshift galaxies are very uncertain, while calculations of galaxy stellar masses at $z=0$ are much more robust. The difficulty arises in assigning a numerical ``weight'' to the model predictions for these different constraints. Beyond simply making an educated guess, one might envisage comparing predictions of dust extinction from our model with a matched sample of simulated high redshift galaxies in which the complicated dynamics, geometry and radiative transfer could be treated more accurately. The variance between the semi-analytic and numerical simulation results would then give a quantitative estimate of the model uncertainty. The problem with such an approach is that creating such a matched sample is extremely difficult and time consuming.
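As a concrete illustration of the first, straightforward part of this programme, the tree-to-tree variance in any model prediction can be estimated by resampling the set of merger trees with replacement. The sketch below (in Python; the per-tree quantity and its values are hypothetical placeholders, not {\sc Galform}\ output) shows the basic procedure:

```python
import random

def bootstrap_error(samples, n_resample=1000, seed=42):
    """Bootstrap estimate of the standard error on the mean of a
    per-tree model prediction, obtained by resampling the set of
    merger trees with replacement."""
    rng = random.Random(seed)
    n = len(samples)
    resampled_means = []
    for _ in range(n_resample):
        # Draw n trees with replacement and record the mean prediction.
        draw = [samples[rng.randrange(n)] for _ in range(n)]
        resampled_means.append(sum(draw) / n)
    mean = sum(resampled_means) / n_resample
    variance = sum((m - mean) ** 2 for m in resampled_means) / n_resample
    return variance ** 0.5

# Hypothetical per-tree predictions (e.g. mean stellar mass; arbitrary units):
per_tree = [1.0, 1.2, 0.8, 1.1, 0.9, 1.3, 1.05, 0.95]
sigma = bootstrap_error(per_tree)
```

The same resampling applies to any statistic computed over the tree sample (luminosity functions, correlation functions); only the per-tree quantity changes.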
In addition to these uncertainties, we should really include uncertainties arising from non-galaxy formation aspects of the calculation. Good examples of these include the \ifthenelse{\equal{\arabic{IMFDone}}{0}}{initial mass function (IMF) \setcounter{IMFDone}{1}}{IMF}\ (which we are not explicitly trying to predict in our work, but which is uncertain and makes a significant difference to many of our results) and the spectra of stellar populations which have significant uncertainties in some regimes. Understanding these various model uncertainties is extremely challenging, but is crucial if serious parameter space searching in semi-analytic models is to take place.
However, even in the absence of a well-synthesised approach, it is clear
from the data sets we have considered that
certain key problems remain to be tackled in order to produce a model of galaxy formation consistent with a broad range of observed data.
Firstly, the sizes of model galaxies are too large, suggesting a lack of understanding of the physics of angular momentum in galaxies (see \S\ref{sec:Sizes}). It is known that the simple energy-conserving model for merger remnant sizes proposed by \cite{cole_hierarchical_2000} systematically overpredicts the sizes of spheroids and results in too much scatter in their sizes \pcite{covington_predictingproperties_2008}, but it remains unclear how much this will affect the sizes of disks\footnote{Disks feel the gravitational potential of any embedded spheroid, so their sizes will be somewhat reduced if the sizes of spheroids are systematically reduced.} and, furthermore, many spheroids in our model are formed through disk-instabilities rather than mergers---there is, as yet, no good systematic study of how to accurately determine the sizes of such instability-formed spheroids. The disk-instability process itself has significant consequences for the angular momentum content of disks and, as such, a careful examination of this process is called for.
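The energy-conserving prescription in question sets the half-mass radius $r_{\rm new}$ of a merger remnant via virial-theorem energy balance; schematically (see \cite{cole_hierarchical_2000} for the precise definitions),
\begin{equation}
\frac{\left(M_1+M_2\right)^2}{r_{\rm new}} = \frac{M_1^2}{r_1} + \frac{M_2^2}{r_2} + \frac{f_{\rm orbit}}{c}\frac{M_1 M_2}{r_1+r_2},
\end{equation}
where $M_i$ and $r_i$ are the masses and half-mass radii of the progenitors, $c$ is a structural form factor and $f_{\rm orbit}$ parametrises the orbital energy. Because no term accounts for dissipative losses in the gas, remnants tend to come out too large and with too much scatter in size, as found by \cite{covington_predictingproperties_2008}.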
Secondly, despite the inclusion of tidal stripping and satellite-satellite merging, the number of satellite galaxies in high mass halos seems to remain too high, as evidenced by the clustering of galaxies (see \S\ref{sec:Clustering}). Thirdly, the clear tension between luminosity function constraints and those from the inferred star formation rate density must be reconciled.
The model described in this work will provide the basis for further improvements to our modeling of galaxy formation. In the near future we intend to return to the following outstanding issues and examine their importance for the constraints and results presented here in greater detail:
\begin{itemize}
\item when, exactly, disk instabilities occur and precisely what effect they have on the galaxies in which they happen;
\item improved modeling of the sizes of galaxies and how different physical processes affect these sizes;
\item the X-ray properties and hot gas fractions in halos and how these constrain the amount and type of feedback from galaxies;
\item the effects of patchy reionization on Local Group galaxy properties and on the galaxy population as a whole;
\item the importance of the cold mode of gas accretion and how this affects the build up of galaxies at high redshifts (c.f. \citealt{brooks_role_2009});
\item improved modeling of \ifthenelse{\equal{\arabic{AGNDone}}{0}}{active galactic nuclei (AGN) \setcounter{AGNDone}{1}}{AGN}\ feedback utilizing recent estimates of jet power, spin-up rates and the effects of mergers on black hole spin and mass \pcite{boyle_binary_2008,benson_maximum_2009};
\item examination of physically motivated models of star formation and \ifthenelse{\equal{\arabic{SNeDone}}{0}}{supernovae (SNe) \setcounter{SNeDone}{1}}{SNe}\ feedback utilizing the framework of \cite{stringer_formation_2007}.
\end{itemize}
\section{Conclusions}\label{sec:Conclusions}
In this paper we have presented recent developments of the galaxy formation model {\sc Galform}. This extends the model presented in \cite{cole_hierarchical_2000} and
\cite{bower_breakinghierarchy_2006}, adding many additional physical processes
(such as environmental interactions and additional feedback channels), improving the treatment of other key processes (including cooling,
re-ionisation and galaxy merging) and removing unnecessary limiting
assumptions (such as the instantaneous recycling approximation).
The new code is compared to a wide range of observational constraints from
both the local and distant universe and across a wide range of wavelengths.
We navigate through the high-dimensional parameter space using the
``projection pursuit'' method suggested in \cite{bower_parameter_2010},
identifying a model that performs well in many of the observational
comparisons.
We find it impossible to identify a model that matches
all the available datasets well; there are inherent tensions
between the datasets, pointing to some remaining inadequacies in our
understanding and implementation. In particular, the model as it stands
fails to correctly account for the observed distribution of galaxy sizes
and the observed Tully-Fisher relation.
Galaxy formation is an inherently complex and highly non-linear process. As such, it is clear that our understanding of it remains incomplete and our ability to model it imperfect. Nevertheless, huge progress has been made in both of these areas, and we expect that progress will continue at a rapid pace. The model described in this work provides an excellent match to many datasets and is in reasonable agreement with many others; it represents a solid foundation upon which to base further calculations of galaxy formation. In particular, with its parameters well constrained by current data it can be used to make predictions for as yet unprobed regimes of galaxy formation.
The present work is clearly not the last word on the subjects covered
herein, however. In fact, we expect to constantly revise our model in response to new constraints and improved understanding of the physics\footnote{We intend to maintain a ``living document'' describing any such alterations at {\tt www.galform.org}, where we will also make available results from the model via an online database.}. This simply reflects the current state of galaxy formation theory---it is a rapidly developing field about which we are constantly gaining new insight.
\section*{Acknowledgements}
AJB acknowledges support from the Gordon and Betty Moore Foundation and would like to acknowledge the hospitality of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara where part of this work was completed. This research was supported in part by the National Science Foundation under Grant No. NSF PHY05-51164. We thank the {\sc Galform}\ team (Carlton Baugh, Shaun Cole, Carlos Frenk, John Helly and Cedric Lacey) for allowing us to use the collaboratively developed {\sc Galform}\ code in this work. This work has benefited from conversations with numerous people, including Juna Kollmeier, Aparna Venkatesan, Annika Peter, Alyson Brooks and Yu Lu. We thank Simon White and the anonymous referee for suggestions which helped improve the clarity of the original manuscript. We thank Shiyin Shen for providing data in electronic form. We are grateful to the authors of {\sc RecFast} and {\sc Cloudy} for making these valuable codes publicly available and to Charlie Conroy, Jim Gunn, Martin White and Jason Tumlinson for providing \ifthenelse{\equal{\arabic{SEDDone}}{0}}{spectral energy distribution (SED) \setcounter{SEDDone}{1}}{SED} s of single stellar populations. Lauren Porter and Tom Fox contributed code to compute galaxy clustering and \ifthenelse{\equal{\arabic{IGMDone}}{0}}{intergalactic medium (IGM) \setcounter{IGMDone}{1}}{IGM}\ evolution respectively. We gratefully acknowledge the Institute for Computational Cosmology at the University of Durham for supplying a large fraction of the computing time required by this project. This research was supported in part by the National Science Foundation through TeraGrid \pcite{catlett_teragrid:_2007} resources provided by the NCSA and by Amazon Elastic Compute Cloud resources provided by a generous grant from the Amazon in Education program.
\bibliographystyle{mn2e}
\section{Introduction}\label{sec:Intro}
Wind power forecasts are essential for the efficient operation and
integration of wind power into the national grid. Since wind is
variable and wind energy cannot be stored efficiently, there are
risks of power shortages during periods of low wind speed. Wind
turbines may also need to be shut down when wind speeds are too high,
leading to an abrupt drop of power supply. It is extremely important
for power system operators to quantify the uncertainties of wind power
generation in order to plan for system reserve efficiently
[\citet{Doherty2005}]. In addition, wind farm operators require
accurate estimations of the uncertainties of wind power generation to
reduce penalties and maximize revenues from the electricity market
[\citet{Pinson2007}].
Since the work of \citet{Brown1984} in wind speed forecasting using
autoregressive models, there has been an increasing amount of research
in wind speed and wind power forecasts. Most of the early literature
focuses on point forecasts, and in recent years more emphasis
has been placed on probabilistic or density forecasts because of the need to
quantify uncertainties. However, the number of studies on multi-step
density forecasts is still relatively small, not to mention the
evaluation of forecast performances for horizons $h>1$. Early works on
multi-step density forecasts can be found in \citet{Davies1988} and
\citet{Moeanaddin1990}, where the densities are estimated using
recursive numerical quadrature that requires significant computational
time. \citet{Manzan2008} propose a nonparametric way to generate density
forecasts for the U.S. Industrial Production series, which is based on
bootstrap methods. However, Monte Carlo simulations are required and
this approach is also computationally intensive.
One of the approaches to wind power forecasting is to focus on the
modeling of wind speed and then transform the data into wind power
through a power curve [\citet{Sanchez2006}]. An advantage is that
wind speed time series are smoother and more easily described by linear
models. However, a major difficulty is that the shape of the power
curve may vary with time, and also it is difficult to quantify the
uncertainties in calibrating the nonlinear power curve. Another
approach is to transform meteorological forecasts into wind power
forecasts, where ensemble forecasts are generated from sophisticated
numerical weather prediction (NWP) models [\citet{Taylor2006}, \citet{Pinson2009}]. This approach is able to produce reliable wind power
forecasts up to 10 days ahead, but it requires the computation of a
large number of scenarios as well as expensive NWP models. A third approach to wind power forecasting focuses
on the direct statistical modeling of wind power time series. In this
case the difficulty lies in the fact that wind power time series are
highly nonlinear and non-Gaussian. In particular, wind power time
series at individual wind farms always contain long chains of zeros and
sudden jumps from maximum capacity to a low value due to gusts of wind
since turbines have to be shut down temporarily. Nevertheless, it has
been shown that statistical time series models may outperform
sophisticated meteorological forecasts for short forecast horizons
within 6 hours [\citet{Milligan2003}]. Extensive reviews of the short
term state-of-the-art wind power prediction are contained in
\citet{Landberg2003}, \citet{Giebel2003} and
\citet{Costa2008}, in which power curve models, NWP models and
other statistical models are discussed.
In this paper we adopt the third approach and consider modeling the
wind power data directly. We aim at short forecast horizons within 24
hours ahead, since for longer forecast horizons the NWP models may be
more reliable. As mentioned above, wind power time series are highly
nonlinear. Aggregating the individual wind power time series will
smooth out the irregularities, resulting in a time series which is more
appropriately described by linear models under suitable
transformations. Aggregated wind power generation is also more relevant
to power companies since they mainly consider the total level of wind
power generation available for dispatch. Thus, it is economically
important to generate reliable density forecasts for aggregated wind
power generation.
For this reason, as a first study, this paper considers the modeling of
aggregated wind power time series. One may argue that utilizing
spatiotemporal correlations among individual wind farms may improve the
results in forecasting aggregated wind power. We will show in Section
\ref{sec:Evaluation} that this is not the case here, at least by the
use of a simple multiple time series approach. Unless one is interested
in the power generated at individual wind farms, it is more appropriate
to forecast the aggregated wind power as a univariate time series. We
propose two approaches of generating multi-step ahead density forecasts
for wind power generation, and we demonstrate the value of our
approaches using wind power generation from 64 wind farms in Ireland.
In the first approach, we demonstrate that the logistic function is a
suitable transformation to normalize the aggregated wind power data. In
the second approach, we describe the forecast densities by truncated
normal distributions which are governed by two parameters, namely, the
conditional mean and conditional variance. We apply exponential
smoothing methods to forecast the two parameters simultaneously. Since
the underlying model of exponential smoothing is Gaussian, we are able
to obtain multi-step forecasts of the parameters by simple iterations
and thus generate forecast densities as truncated normal distributions.
Although the second approach performs
similarly to the first in terms of our
evaluation of the wind power forecasts, it
has numerous advantages. It is
computationally more efficient, its forecast performances are more
robust, and it provides the flexibility to choose a suitable parametric
function for the density forecasts. It is also valuable when there are
no obvious transformations to normalize the data.
Our paper is organized as follows. In Section \ref{sec:data} we
describe the wind power data that we use in our study. Then we explain
the two approaches of generating multi-step density forecasts in
Section \ref{sec:Approach}. The first approach concerning the logistic
transformation is described in Section \ref{sec:model}, while in
Section \ref{sec:ESTrunNorm} we give the details on the second approach
using exponential smoothing methods and truncated normal distributions.
In Section \ref{sec:Evaluation} we construct 4 benchmarks to gauge the
performances of our approaches, and we evaluate the forecast
performances using various proper scores. Finally, we conclude our
paper in Section \ref{sec:Conclusion}, where we summarize the benefits
of our approaches and discuss important future research directions.
\section{Wind power data}\label{sec:data}
\begin{figure}
\includegraphics{320f01.eps}
\caption{The locations of 64 wind farms in Ireland. There are 68 wind farms and wind power
time series in the raw data, but 4 pairs of wind farms are so close together
that each is essentially an extension of the corresponding older wind farm.
As a result, we consider 64 wind farms here. The wind farms are distributed throughout
Ireland, and Arklow Banks is the only
offshore wind farm.}\label{fig:FarmLocation}
\end{figure}
We consider aggregated wind power generated from 64 wind farms in
Ireland for approximately six months from 13-Jul-2007 to 01-Jan-2008.
The data are recorded every 15 minutes, giving a total number of 16,512
observations during the period. The locations of the wind farms
are
shown in Figure~\ref{fig:FarmLocation}. One of the wind farms, known as Arklow
Banks, is offshore.\footnote{Detailed information of
individual wind farms, such as latitude, longitude and capacity, is
provided by Eirgrid plc and can be found in \citet{Lau2010}.} We
sum up the capacities\footnote{The capacity is the maximum output of a
wind farm when all turbines operate at their maximum nominal power.} of
all wind farms and the total capacity is 792.355~MW. In order to
facilitate comparisons between data sets with different capacities, we
normalize the aggregated wind power by dividing by the total capacity,
that is, 792.355~MW, and so the normalized data is bounded within $[0,1]$.
We have checked that forecast results, in particular, for approaches
involving nonlinear transformations, are in fact insensitive to the
exact value of normalization.\footnote{In our paper the value of
normalization must not be smaller than the total capacity since we will
consider the logistic transformation (\ref{eq:logit}).} We dissect the
data into a training set of about 4 months (the first 11,008 data
\begin{figure}
\includegraphics{320f02.eps}
\caption{Time series of normalized aggregated wind power from 64 wind
farms in Ireland, where the aggregated wind power is normalized by the
total capacity of 792.355~MW. The data are dissected into a training set
and a testing set as shown by the dashed line. About four months of
data are used for parameter estimation, and the remaining two months of
data are used for out-of-sample evaluation.}\label{fig:NormWP_bw}
\end{figure}
\begin{figure}
\includegraphics{320f03.eps}
\caption{First differences of normalized aggregated wind power. It is
clear that the variance changes with time, and there is volatility
clustering as well as sudden spikes. The data are dissected by the
dashed line into a training set and a testing set.}\label{fig:NormWPDiff_bw}
\end{figure}
\begin{figure}
\includegraphics{320f04.eps}
\caption{Sample ACF of the time series of normalized aggregated wind
power up to a lag of 7 days. The autocorrelations decay very slowly. It
shows that the wind power data are highly correlated and may
incorporate long memory effects.} \label{fig:NormWP_ACF_bw}
\end{figure}
\begin{figure}
\includegraphics{320f05.eps}
\caption{Sample ACF of the first differences of normalized aggregated
wind power up to a lag of 7 days. The
dashed lines are the confidence bounds at 2
standard deviations, assuming that the data
follow a Gaussian white noise process. The autocorrelations are
significantly reduced, but they are still significant up to a lag of 7
days.} \label{fig:NormWPDiff_ACF_bw}
\end{figure}
points) for parameter estimation, and a testing set of about two months
(the remaining 5504 data points) for out-of-sample forecast
evaluations. Figures \ref{fig:NormWP_bw} and \ref{fig:NormWPDiff_bw}
show the original and the first differences of the normalized
aggregated wind power respectively. It is clear that wind power data
are nonstationary. The variance is changing with time, showing clusters of high and low variability. Also, there are some occasional spikes.
Figures~\ref{fig:NormWP_ACF_bw} and \ref{fig:NormWPDiff_ACF_bw} show the
autocorrelation function of the wind power and its first differences
respectively. Autocorrelation is significantly reduced by taking first
differences.
Since our aim is to generate short term forecasts up to 24 hours ahead,
we do not focus on modeling any long term seasonality, which often
appears in wind data due to the changing wind patterns throughout the
year. For example, we can model a cycle of 90 days by regressing the
data in the training set with 16 harmonics of sines and cosines with
periods $T = j/(90 \times 96), j = 1, \ldots, 16$. This gives a fitted
time series as shown in Figure \ref{fig:seasonality} with $R^2 = 0.395$.
One may then model the deseasonalized data, but studies
show that results may be worse than those obtained by modeling the
seasonality directly [\citet{Jorgenson1967}]. On the other hand,
we are more interested in the diurnal cycle since it plays a more
important role in intraday forecasts. Diurnal cycles may appear in wind
data due to different temperatures and air pressures during the day and
the night, and wind speeds are sometimes larger during the day when
convection currents are driven by the heating of the sun. Thus, we try
to fit the training data with harmonics of higher frequencies, such as
those with $T = j/96$ where $j$ is an integer. However, results show
that those harmonics cannot help us to explain the variances in the
data, and, thus, we decide to exclude the modeling of any diurnal cycle
in this paper.
\begin{figure}
\includegraphics{320f06.eps}
\caption{Long term seasonality appears in the wind data. We regress the
data in the training set with 16 harmonics of sines and cosines with
periods $T = j/(90 \times 96)$, $j = 1, \ldots, 16$, so that the maximum
period is 90 days. The fit gives an $R^2 = 0.395$. The thin dashed line
is the observed normalized wind power and the solid line is the fitted
time series with a cycle of 90 days. The vertical dashed line dissects
the data into a training set and a testing set.}\label{fig:seasonality}
\end{figure}
\begin{figure}
\includegraphics{320f07.eps}
\caption{Unconditional empirical density of the normalized aggregated
wind power, fitted using the data in the training set. The density is
clearly non-Gaussian since the data are bounded. The density is skewed
and has a sharper peak than the Gaussian distribution. This density
gives the climatology forecast benchmark.}\label{fig:epdf_bar_bw}
\end{figure}
Aggregated wind power time series, although smoother than those from individual wind farms, are non-Gaussian. In particular, they are nonnegative.
Figure~\ref{fig:epdf_bar_bw} shows the unconditional density of aggregated
wind power. This distribution has a sharper peak than the normal
distribution and is also significantly right-skewed. Common
transformations for normalizing wind speed data include the logarithmic
transformation and the square root transformation
[\citet{Taylor2006}]. However, those transformations are shown to
be unsatisfactory for our particular wind power data as demonstrated in Figures
\ref{fig:epdf_log_bar_bw} and \ref{fig:epdf_sqrt_bar_bw}. Nevertheless,
we could transform the wind power data $y_t$ by a logistic
transformation. This can be traced back to the work of
\citet{Johnson1949}, and recently \citet{Bremnes2006} applies
this transformation to model wind power. The logistic transformation is
given by
\begin{equation}\label{eq:logit}
z_t = \log \biggl( \frac{y_t}{1 - y_t} \biggr), \qquad 0 < y_t < 1,
\end{equation}
and the transformed data $z_t$ gives a distribution which can be well
approximated by a Gaussian distribution as shown in Figure
\ref{fig:epdf_logit_bar_bw}. In contrast with individual wind power
data, we do not encounter any values of zero or one and so
(\ref{eq:logit}) is well defined. In Section \ref{sec:model} we apply
this transformation and build a Gaussian model to generate multi-step
density forecasts for wind power.
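As an aside, the transformation pair in (\ref{eq:logit}) and its inverse can be sketched as follows (the function names are illustrative and not part of the original analysis code):

```python
import math

def logit(y):
    """Logistic (logit) transform mapping normalized wind power in (0, 1)
    to the real line, as in equation (1)."""
    if not 0.0 < y < 1.0:
        raise ValueError("normalized wind power must lie strictly in (0, 1)")
    return math.log(y / (1.0 - y))

def inv_logit(z):
    """Inverse transform mapping a real value back to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))
```

Because the aggregated series never attains exactly zero or one, the transform is always well defined; the inverse is needed later to map forecasts on the $z$-scale back to the power scale.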
\begin{figure}
\includegraphics{320f08.eps}
\caption{Density of the wind power data after applying the logarithmic
transformation, which remains non-Gaussian. The logarithmic
transformation is a common transformation to convert wind speed data
into an approximate Gaussian distribution, but is clearly inappropriate
for wind power data. The solid line is the fitted Gaussian distribution
by maximizing the likelihood.} \label{fig:epdf_log_bar_bw}
\end{figure}
\begin{figure}
\includegraphics{320f09.eps}
\caption{Density of the wind power data after applying the square root
transformation, which remains non-Gaussian. The square root
transformation is a common transformation to convert wind speed data
into an approximate Gaussian distribution, but is clearly inappropriate
for wind power data. The solid line is the fitted Gaussian distribution
by maximizing the likelihood.}\label{fig:epdf_sqrt_bar_bw}
\end{figure}
\begin{figure}
\includegraphics{320f10.eps}
\caption{Density of the wind power data after applying the logistic
transformation, which can be well approximated by a Gaussian
distribution. The solid line is the fitted Gaussian distribution by
maximizing the likelihood.} \label{fig:epdf_logit_bar_bw}
\end{figure}
\section{Approaches for density forecasting}\label{sec:Approach}
Since the aim of this paper is to generate multi-step ahead density
forecasts without relying on Monte Carlo simulations, it is important
that our approach can be iterated easily. For this reason, in
both of the following approaches, we consider a Gaussian model at
certain stages so that we can iterate the forecasts in a tractable
manner.
\subsection{Gaussian model for transformed data}\label{sec:model}
In the first approach, we consider the transformation of wind power
data into an approximately Gaussian distribution so that we could
describe the transformed data by a simple Gaussian model, in
particular, the conventional ARIMA--GARCH model with Gaussian
innovations. As discussed in Section \ref{sec:data}, we transform the
wind power data by the logistic function in (\ref{eq:logit}). This
transformation maps the support from $(0,1)$ to the entire real axis,
and Figure \ref{fig:epdf_logit_bar_bw} shows that this results in an
approximately Gaussian distribution.
\begin{figure}
\includegraphics{320f11.eps}
\caption{First differences of the logistic transformed wind power. The
variance is not changing as fast as before, and the amount of
volatility clustering is reduced. However, the time series is still
nonstationary. The data are dissected by the dashed line into a
training set and a testing set.} \label{fig:NormWP_logit_Diff_bw}
\end{figure}
\begin{figure}
\includegraphics{320f12.eps}
\caption{Sample ACF of the first differences of logistic transformed
wind power up to a lag of 7 days. The
dashed lines are the confidence bounds at 2
standard deviations, assuming that the data
follow a Gaussian white noise process. The autocorrelations are
slightly smaller than that for the original data, which is shown in
Figure \protect\ref{fig:NormWPDiff_ACF_bw}.} \label{fig:NormWPLogitDiff_ACF_bw}
\end{figure}
As the wind power data are nonstationary, so are the transformed data,
and we consider the first differences $w_t = z_t - z_{t-1}$. When compared
with the original first differences $y_t - y_{t-1}$ in Figure
\ref{fig:NormWPDiff_bw}, the logistic transformed values $z_t$ have
less volatility clustering and smaller autocorrelations. This is
shown in Figure \ref{fig:NormWP_logit_Diff_bw} and Figure
\ref{fig:NormWPLogitDiff_ACF_bw}, respectively. Thus, we model $z_t$ by
an $\operatorname{ARIMA}(p,1,q$)--$\operatorname{GARCH}(r,s)$ model\footnote{We have also considered
modeling $z_t$ by $\operatorname{ARMA}(p,q$)--$\operatorname{GARCH}(r,s)$ models, but they are not
selected based on the BIC values.}
\begin{eqnarray}
w_t &=& \mu + \sum_{i=1}^{p} \phi_i w_{t-i} + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} + \varepsilon_t,\qquad
\varepsilon_t|\mathcal{F}_{t-1} \stackrel{\mathrm{i.i.d.}}{\sim} N(0,\sigma^2_{\varepsilon;t}),
\nonumber\\[-8pt]\\[-8pt]
\sigma^2_{\varepsilon;t} &=& \omega + \sum_{i=1}^{r} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{s} \beta_j \sigma^2_{\varepsilon;t-j}, \nonumber
\end{eqnarray}
where $w_t = z_t - z_{t-1}$, $\mu, \phi_i, \theta_j, \omega, \alpha_i,
\beta_j$ are constant coefficients satisfying the usual conditions [Tsay (\citeyear{Tsay2005})] and
$\mathcal{F}_t$ consists of all the past values of $z$ up to time $t$.
We also consider an $\operatorname{ARIMA}(p,1,q$) model for $z_t$ with
constant conditional \mbox{variance}
$\operatorname{Var}[\varepsilon_t|\mathcal{F}_{t-1}] =
\sigma^2_{\varepsilon;t} = \sigma^2_\varepsilon$, so as to compare with
the $\operatorname{ARIMA}(p,1,q$)--$\operatorname{GARCH}(r,s$) model. We
select the models by minimizing the Bayesian Information Criteria
(BIC). Parameters are estimated by maximizing the Gaussian likelihood.
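For illustration, the one-step conditional mean and variance recursions of a first-order instance of this model, an $\operatorname{ARIMA}(1,1,1)$--$\operatorname{GARCH}(1,1)$ on the differenced series $w_t$, can be sketched as follows (the parameter values in the usage example are hypothetical, not estimates from the data):

```python
def arma_garch_step(w_prev, eps_prev, sig2_prev,
                    mu, phi, theta, omega, alpha, beta):
    """One-step-ahead conditional mean of w_t and conditional variance
    of the innovation eps_t for an ARMA(1,1)-GARCH(1,1) recursion
    applied to the differenced series w_t = z_t - z_{t-1}."""
    # E[w_t | F_{t-1}] from the ARMA part
    w_hat = mu + phi * w_prev + theta * eps_prev
    # Var[eps_t | F_{t-1}] from the GARCH part
    sig2 = omega + alpha * eps_prev ** 2 + beta * sig2_prev
    return w_hat, sig2
```

Iterating this recursion (with future innovations replaced by their zero conditional mean) yields the multi-step forecasts $\hat{z}_{t+h|t}$ and $\hat{\sigma}^2_{\varepsilon;t+h|t}$ used below.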
The optimal $h$-step ahead forecasts $\hat{z}_{t+h|t}$ and
$\hat{\sigma}^2_{\varepsilon;t+h|t}$ can be easily obtained, and the
corresponding $h$-step ahead density forecast of $Z_{t+h}$ is given by
the Gaussian distribution, that is, $f_{Z_{t+h|t}} \sim
N(\hat{z}_{t+h|t}, \hat{\sigma}^2_{t+h|t})$ so that\break
$\hat{\sigma}^2_{t+h|t} = \operatorname{Var}[z_{t+h}|\mathcal{F}_t]$
can be obtained from $\{ \hat{\sigma}^2_{\varepsilon;t+j|t} \}_{j=1}^h$
in a standard way,\vspace*{1pt} for example, by
expressing the model in a moving average
(MA) representation [Tsay (\citeyear{Tsay2005})]. To restore the density of the normalized aggregated
wind power $Y_{t+h}$, we compute the Jacobian of the transformation in
(\ref{eq:logit}) where $|J| = | dz/dy | = 1/[y(1-y)]$. The density of
$Y_{t+h}$ is then given by $f_{Y_{t+h|t}}(y_{t+h}) = |J| f_{Z_{t+h|t}}
(z_{t+h})$, that is,
\begin{eqnarray}\label{eq:LogitDensity}
f_{Y_{t+h|t}}(y_{t+h}) &=& \frac{1}{y_{t+h} (1-y_{t+h})} \frac{1}{\sqrt{2 \pi
\hat{\sigma}^2_{t+h|t}}}\nonumber\\[-8pt]\\[-8pt]
&&{}\times\exp \biggl[ \biggl( - \biggl( \log \biggl( \frac{y_{t+h}}{1-y_{t+h}} \biggr) - \hat{z}_{t+h|t} \biggr)^2 \biggr)
\big/(2 \hat{\sigma}^2_{t+h|t}) \biggr].\nonumber
\end{eqnarray}
Note that (\ref{eq:LogitDensity}) is the $h$-step ahead conditional
density of $Y_{t+h}$ given the conditional point forecast of
$\hat{z}_{t+h|t}$ at time $t$.
\subsection{Exponential smoothing and truncated normal distribution}\label{sec:ESTrunNorm}
The second approach deals with the original wind power data $y_t$
directly. However, since the data are non-Gaussian, there is a problem
with the iteration of multi-step ahead density forecasts. We handle
this by expressing the $h$-step ahead conditional density as a function
of its first two moments. For instance, the one-step ahead density is
written as $f_{t+1|t}(y ; \hat{\mu}_{t+1|t}, \hat{\sigma}^2_{t+1|t})$,
where $\hat{\mu}_{t+1|t} = \mathrm{E}[y_{t+1} | \mathcal{F}_t]$ is the
conditional mean and $\hat{\sigma}^2_{t+1|t} =
\operatorname{Var}[y_{t+1} | \mathcal{F}_t] =
\operatorname{Var}[\varepsilon_{t+1} | \mathcal{F}_t] =
\hat{\sigma}^2_{\varepsilon;t+1|t}$ is the conditional variance.\footnote{In this paper, $\hat{\sigma}^2_{t+h|t}$ denotes the
conditional variance of the data $y_{t+h}$, while
$\hat{\sigma}^2_{\varepsilon;t+h|t}$ denotes the conditional variance
of the innovation $\varepsilon_{t+h}$, so that in general
$\hat{\sigma}^2_{t+h|t}$ is a function of
$\hat{\sigma}^2_{\varepsilon;t+j|t}$ with $j = 1,\ldots,h$.} At this
moment, we do not attempt to figure out the exact form of the density
function $f_{t+1|t}$. Given any $f_{t+1|t}$ and a model $M$ for the
dynamics, we can always evolve the density function so that
\begin{eqnarray}\label{eq:M}
f_{t+1|t}(y ; \hat{\mu}_{t+1|t}, \hat{\sigma}^2_{t+1|t}) \stackrel{M}{\longrightarrow}
f_{t+h|t}(y ; \hat{\mu}_{t+h|t}, \hat{\sigma}^2_{t+h|t}),\nonumber\\[1pt]\\[-21pt]
\eqntext{\hat{\mu}_{t+h|t} = p^{(h)}_M(\hat{\mu}_{t+1|t}, \ldots, \hat{\mu}_{t+h-1|t}; y_1, \ldots, y_t),} \\
\eqntext{\hat{\sigma}^2_{t+h|t} = q^{(h)}_M(\hat{\sigma}^2_{\varepsilon;t+1|t}, \ldots, \hat{\sigma}^2_{\varepsilon;t+h|t}),\hspace*{51pt}}
\end{eqnarray}
where $\stackrel{M}{\longrightarrow}$ denotes the process of evolving
the dynamics and generating $h$-step ahead density forecasts under the
unknown model $M$, which in practice may require the use of Monte Carlo
simulations. Here $p^{(h)}_M$ and $q^{(h)}_M$ stand for functions that
give the conditional mean and the conditional variance of $y_t$, with
parameters that depend on the model $M$ and the forecast horizon $h$.
It is difficult to obtain any closed form for $f_{t+h|t}$ if the
distribution of innovations $\varepsilon_t$ is non-Gaussian. Thus, we
propose to use a two-step approach to approximate $f_{t+h|t}$. In the
first step, we attempt to model the dynamics of the conditional mean
$\hat{\ell}_{t+h|t}$ and the conditional variance $\hat{s}^2_{t+h|t}$
of the data using a Gaussian model~$G$. This is expressed as
\begin{eqnarray}\label{eq:M2}
\mbox{\textit{Step} 1:}\qquad \hat{\ell}_{t+h|t} &=& p^{(h)}_G(\hat{\ell}_{t+1|t}, \ldots, \hat{\ell}_{t+h-1|t}; y_1, \ldots,
y_t),
\nonumber\\[-8pt]\\[-8pt]
\hat{s}^2_{t+h|t} &=& q^{(h)}_G(\hat{s}^2_{\varepsilon;t+1|t}, \ldots, \hat{s}^2_{\varepsilon;t+h|t}), \nonumber
\end{eqnarray}
where $p^{(h)}_G$ and $q^{(h)}_G$ stand for functions that give the
conditional mean and the conditional variance of $y_{t+h}$, with
parameters that depend on the Gaussian model~$G$ and horizon $h$. In
model~$G$, the innovations are additive and are assumed to be i.i.d.
Gaussian distributed. For example, $G$ can be the conventional
ARIMA--GARCH model with Gaussian innovations. This assumption may be violated in
reality, so $\hat{\ell}_{t+h|t}$ and $\hat{s}^2_{t+h|t}$ obtained from
model $G$ may not be the true conditional mean $\hat{\mu}_{t+h|t}$ and
conditional variance $\hat{\sigma}^2_{t+h|t}$ respectively. They only
serve as proxies to the true values.
Although model $G$ may not describe real situations, we rely on a
second step for remedial adjustments such that the final density
forecast is an approximation to reality. In the second step, we assume
that the $h$-step ahead density $f_{t+h|t}$ can be approximated by a
parametric function $D$, which is characterized by a location parameter
and a scale parameter. In particular, the location parameter and the
scale parameter are obtained from the conditional mean
$\hat{\ell}_{t+h|t}$ and the conditional variance $\hat{s}^2_{t+h|t}$
respectively, which are estimated from the Gaussian model $G$. Thus, we
simply take
\begin{equation}\label{eq:M2_2}
\mbox{\textit{Step} 2:}\qquad f_{t+h|t}(y ; \hat{\mu}_{t+h|t}, \hat{\sigma}^2_{t+h|t}) \approx D (y ; \hat{\ell}_{t+h|t}, \hat{s}^2_{t+h|t})
\end{equation}
as the $h$-step ahead density forecast where $D$ is a function
depending on two parameters only. As a result, the two-step approach
may be able to give a good estimation of $f_{t+h|t}$ if (\ref{eq:M2_2})
is a close approximation. In (\ref{eq:M2_2}) the correct conditional
mean $\hat{\mu}_{t+h|t}$ and conditional variance
$\hat{\sigma}^2_{t+h|t}$ are generated by $p^{(h)}_M(\cdot)$ and
$q^{(h)}_M(\cdot)$ under the true model $M$, while the corresponding
proxy values $\hat{\ell}_{t+h|t}$ and $\hat{s}^2_{t+h|t}$ are generated
by $p^{(h)}_G(\cdot)$ and $q^{(h)}_G(\cdot)$ under a Gaussian model
$G$. Empirical studies will be needed to determine the appropriate
Gaussian model $G$ as well as the best choice $D$ in order to
approximate the final density $f_{t+h|t}$.
For our normalized aggregated wind power $y_t$, choosing $D$ as the
truncated normal distribution bounded within $[0,1]$ gives a good
approximation of $f_{t+h|t}$. Truncated normal distributions have been
applied successfully in modeling bounded, nonnegative data
[\citet{Sanso1999}, \citet{Gneiting2006}]. We consider $D$ to be
parameterized by the location parameter $\hat{\ell}_{t+h|t}$ and the
scale parameter $\hat{s}^2_{t+h|t}$, where $N(\hat{\ell}_{t+h|t},
\hat{s}^2_{t+h|t})$ is the corresponding normal distribution without
truncation. Note that $\hat{\ell}_{t+h|t}$ and $\hat{s}^2_{t+h|t}$ will
be the true conditional mean and conditional variance if the data are
indeed Gaussian. The density function $f_{t+h|t} $ is then given by
(\ref{eq:M2_2}) so that
\begin{eqnarray}\label{eq:TrunNormDensity}
f_{t+h|t}(y; \hat{\mu}_{t+h|t}, \hat{\sigma}^2_{t+h|t}) &=&
\frac{1}{\hat{s}_{t+h|t}}
\biggl( \varphi \biggl( \frac{y-\hat{\ell}_{t+h|t}}{\hat{s}_{t+h|t}} \biggr) \biggr)\nonumber\\[-8pt]\\[-8pt]
&&{}\Big/
\biggl(\Phi \biggl( \frac{1-\hat{\ell}_{t+h|t}}{\hat{s}_{t+h|t}} \biggr) - \Phi \biggl( \frac{-\hat{\ell}_{t+h|t}}{\hat{s}_{t+h|t}}
\biggr)\biggr)\nonumber
\end{eqnarray}
for $y\in(0,1)$, where $\varphi$ and $\Phi$ are the standard normal density and distribution function respectively.
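The truncated normal density (\ref{eq:TrunNormDensity}) requires only the standard normal density and distribution functions; a minimal sketch (with illustrative names, and $\Phi$ computed via the error function) is:

```python
import math

def std_norm_pdf(x):
    """Standard normal density, phi(x)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def std_norm_cdf(x):
    """Standard normal distribution function, Phi(x), via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def trunc_norm_pdf(y, loc, scale):
    """Density of N(loc, scale^2) truncated to [0, 1], as in equation (7)."""
    if not 0.0 <= y <= 1.0:
        return 0.0
    num = std_norm_pdf((y - loc) / scale) / scale
    den = std_norm_cdf((1.0 - loc) / scale) - std_norm_cdf(-loc / scale)
    return num / den
```

The denominator renormalizes the Gaussian mass falling inside $[0,1]$, so the density always integrates to one over the unit interval.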
Instead of directly estimating $\hat{\ell}_{t+h|t}$ and
$\hat{s}^2_{t+h|t}$ using the ARIMA--GARCH models, we find that a
better way is to smooth the two parameters simultaneously by
exponential smoothing methods. Exponential smoothing methods have been
widely and successfully adopted in areas such as inventory forecasting
[\citet{Brown1961}], electricity forecasting
[\citet{Taylor2003}] and volatility forecasting
[\citet{Taylor2004}]. A comprehensive review of exponential
smoothing is given by \citet{Gardner2006}.
\citet{Hyndman2008} provide a state space framework for
exponential smoothing, which further strengthens its value as a
statistical model instead of an ad hoc forecasting procedure.
\citet{Ledolter1978} show that exponential smoothing methods
produce optimal point forecasts if and only if the underlying data
generating process is within a subclass of
$\operatorname{ARIMA}(p,d,q)$ processes. We extend this property and
demonstrate that simultaneous exponential smoothing on the mean and
variance can produce optimal point forecasts if the data follow a
corresponding $\operatorname{ARIMA}(p,d,q)$--$\operatorname{GARCH}(r,s)$
process. This enables us to generate multi-step ahead forecasts for the
parameters $\hat{\ell}_{t+h|t}$ and $\hat{s}^2_{t+h|t}$ by iterating
the underlying ARIMA--GARCH model of exponential smoothing.
\subsubsection{Smoothing the location parameter only}\label{sec:ETSmean}
For the simplest case, let us assume that the conditional variance
of wind power is constant. This means that we only need to smooth the
conditional mean $\ell_t$, while the conditional variance $s^2_t$
will be estimated directly from the data via estimating the variance of
innovations $\hat{s}_{\varepsilon}^2$. From now on, we refer to the
conditional mean as the location parameter and the conditional variance
as the scale parameter so as to remind us that they correspond to the
truncated normal distribution. Again, the $h$-step ahead scale
parameter $\hat{s}_{t+h|t}^2$ is obtained as a function of
$\hat{s}_{\varepsilon}^2$.
By simple exponential smoothing, the smoothed series of the location
parameter $\ell_t$ is given by $S_t$, which is updated according to
\begin{equation}\label{eq:ESupdate}
S_t = \alpha y_t + (1-\alpha)S_{t-1},
\end{equation}
where $y_t$ is the observed wind power at time $t$ and $0<\alpha<1$ is
a smoothing parameter. We initialize the series by setting $S_1 = y_1$,
and the one-step ahead forecast is $\hat{\ell}_{t+1|t} = S_t$.
Iterating (\ref{eq:ESupdate}) gives $\hat{\ell}_{t+h|t} = S_t$.
However, the forecast errors $y_t - \hat{\ell}_{t|t-1}$ are highly
correlated, with a significant lag one sample autocorrelation of
$0.2723$. A simple way to improve the forecast is to add a parameter
$\phi_s$ to account for autocorrelations in the forecast equation
[\citet{Taylor2003}]. We call this simple exponential
smoothing with error correction. The updating equation is still given
by (\ref{eq:ESupdate}), but the forecast equation is modified as
\begin{equation}\label{eq:ESNNECforecast}
\hat{\ell}_{t+1|t} = S_t + \phi_s (y_t - S_{t-1}),
\end{equation}
where $|\phi_s|<1$. Note that it is now possible to obtain negative
values for $\hat{\ell}_{t+1|t}$ in (\ref{eq:ESNNECforecast}) and in
such cases $\hat{\ell}_{t+1|t}$ is obviously not the true conditional
mean. Nevertheless, this is not a problem here since
$\hat{\ell}_{t+1|t}$ essentially serves as the location parameter of
the truncated normal distribution, which can be negative. Following the
taxonomy introduced by \citet{Hyndman2008}, we denote
(\ref{eq:ESupdate}) and (\ref{eq:ESNNECforecast}) as the
$\operatorname{ETS}(A,N,N|\mathit{EC})$ method, where ETS serves both as an abbreviation for
exponential smoothing and as an acronym for error, trend and
seasonality. The $A$ inside the bracket stands for
additive errors in the model, the first $N$ stands for no trend, the
second $N$ stands for no seasonality and $\mathit{EC}$ stands for error
correction.
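The two equations can be sketched as follows; this is a minimal illustration of (\ref{eq:ESupdate}) and (\ref{eq:ESNNECforecast}), with the smoothing parameters $\alpha$ and $\phi_s$ taken as given.

```python
# Minimal sketch of simple exponential smoothing with error correction:
# updating S_t = alpha*y_t + (1-alpha)*S_{t-1}, then the error-corrected
# forecast ell_hat_{t+1|t} = S_t + phi_s*(y_t - S_{t-1}).
def ets_annec(y, alpha, phi_s):
    """Return the one-step ahead forecasts ell_hat_{t+1|t} for t = 2, ..., n."""
    S_prev = y[0]                                   # initialize S_1 = y_1
    forecasts = []
    for t in range(1, len(y)):
        S = alpha * y[t] + (1 - alpha) * S_prev     # updating equation
        forecasts.append(S + phi_s * (y[t] - S_prev))
        S_prev = S
    return forecasts
```

With $\phi_s = 0$ this reduces to plain simple exponential smoothing, whose forecast is just $S_t$.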
By directly iterating (\ref{eq:ESupdate}) and (\ref{eq:ESNNECforecast})
and expressing $\hat{y}_{t+h|t} = \hat{\ell}_{t+h|t}$, we have
\begin{equation}\label{ESNNECforecast_h}
\hat{\ell}_{t+h|t}
= S_t + \frac{\alpha \phi_s (1 - \phi_s^{h-1})}{1-\phi_s}(y_t - S_{t-1}) + \phi_s^h (y_t - S_{t-1})
\end{equation}
for $h>1$. To generate $h$-step ahead forecasts of $\hat{s}_{t+h|t}^2$,
it is important that we identify an underlying model corresponding to
our updating and forecast equations (\ref{eq:ESupdate}) and
(\ref{eq:ESNNECforecast}). It can be easily checked that the
$\operatorname{ETS}(A,N,N|\mathit{EC})$ method is optimal for the
$\operatorname{ARIMA}(1,1,1)$ model, in the sense that the forecasts in
(\ref{eq:ESNNECforecast}) are the minimum mean square error (MMSE)
forecasts. Expressed in the form of an $\operatorname{ARIMA}(1,1,1)$ model with Gaussian
innovations, the $\operatorname{ETS}(A,N,N|\mathit{EC})$ method can be
written as
\begin{equation}\label{eq:ARIMA111_ETS}
w_t = \phi_s w_{t-1} + \varepsilon_t + (\alpha - 1)\varepsilon_{t-1}, \qquad\varepsilon_t \stackrel{\mathrm{i.i.d.}}{\sim} N(0,s_{\varepsilon}^2),
\end{equation}
where $w_t = y_t - y_{t-1}$, $\varepsilon_t$ is the Gaussian innovation
with mean zero and constant variance $s_{\varepsilon}^2$, and $\alpha,
\phi_s$ are the smoothing parameters in (\ref{eq:ESupdate}) and
(\ref{eq:ESNNECforecast}). It can also be easily verified that
(\ref{ESNNECforecast_h}) is identical to the $h$-step ahead forecasts
obtained from the $\operatorname{ARIMA}(1,1,1)$ model in
(\ref{eq:ARIMA111_ETS}). It then follows from the
$\operatorname{ARIMA}(1,1,1)$ model that the $h$-step ahead forecast
variance is given by
\begin{equation}\label{ESNNECforecast_sig_h}
\hat{s}^2_{t+h|t} = \hat{s}_{\varepsilon}^2 \sum_{j=1}^h \Omega_{h-j}^2,
\end{equation}
where $\Omega_{0} = 1, \Omega_{h} = \phi_s^h + \alpha
(1-\phi_s^h)/(1-\phi_s)$ for $h\geq1$, and $\hat{s}^2_{\varepsilon}$ is
the estimated constant variance of the innovations. Note that in this
case, (\ref{ESNNECforecast_sig_h}) is the explicit form of
$\hat{s}^2_{t+h|t} = q^{(h)}_G(\hat{s}_{\varepsilon}^2)$ in (\ref{eq:M2}).
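A sketch of the $h$-step ahead location forecast (\ref{ESNNECforecast_h}) and forecast variance (\ref{ESNNECforecast_sig_h}) follows; $S_t$, $S_{t-1}$, $y_t$ and the estimated innovation variance are taken as given.

```python
# Sketch of the h-step ahead forecasts: ell_hat implements the geometric-sum
# formula for the location, s2_hat the variance with Omega_0 = 1 and
# Omega_k = phi_s^k + alpha*(1 - phi_s^k)/(1 - phi_s).
def ell_hat(S_t, S_prev, y_t, alpha, phi_s, h):
    err = y_t - S_prev
    if h == 1:
        return S_t + phi_s * err
    geom = alpha * phi_s * (1 - phi_s ** (h - 1)) / (1 - phi_s)
    return S_t + geom * err + phi_s ** h * err

def s2_hat(s2_eps, alpha, phi_s, h):
    Omega = lambda k: (1.0 if k == 0
                       else phi_s ** k + alpha * (1 - phi_s ** k) / (1 - phi_s))
    return s2_eps * sum(Omega(h - j) ** 2 for j in range(1, h + 1))
```

For $h = 1$ the variance reduces to $\hat{s}^2_{\varepsilon}$, consistent with $\Omega_0 = 1$.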
Since maximum likelihood estimators are well known to have desirable
asymptotic properties, we estimate the three parameters $\alpha,
\phi_s$ and $\hat{s}^2_{\varepsilon}$ by maximizing the likelihood of
the truncated normal distribution $f_{t+1|t}(y_{t+1};
\hat{\ell}_{t+1|t}, \hat{s}^2_{t+1|t})$. One may also consider
minimizing the mean continuous ranked probability scores (CRPS) of the
density forecasts [\citeauthor{Gneiting2005} (\citeyear{Gneiting2005}, \citeyear{Gneiting2006})], but this
requires a much larger amount of computation. Although it may slightly
improve the density forecasts, minimizing the CRPS is not appealing
here since we aim at generating multi-step forecasts in a
computationally efficient way. After obtaining the parameters, from
(\ref{ESNNECforecast_h}) and (\ref{ESNNECforecast_sig_h}) we can
generate the $h$-step ahead density forecasts using
(\ref{eq:TrunNormDensity}).
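The estimation step can be sketched as below, maximizing the one-step truncated-normal likelihood numerically; the simulated series and starting values are placeholders, not the Irish wind power data or the optimizer settings used in this paper.

```python
# Hedged sketch: estimate alpha, phi_s, s_eps by minimizing the negative
# one-step truncated-normal log likelihood. The series y is a simulated
# placeholder bounded in (0, 1).
import math
import numpy as np
from scipy.optimize import minimize

def _log_phi(x):                          # log standard normal density
    return -0.5 * (x * x + math.log(2.0 * math.pi))

def _Phi(x):                              # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def neg_log_lik(params, y):
    alpha, phi_s, s_eps = params
    S_prev, nll = y[0], 0.0
    for t in range(1, len(y) - 1):
        S = alpha * y[t] + (1 - alpha) * S_prev
        loc = S + phi_s * (y[t] - S_prev)             # one-step location
        a, b = (0 - loc) / s_eps, (1 - loc) / s_eps   # s_eps is 1-step scale
        nll -= (_log_phi((y[t + 1] - loc) / s_eps) - math.log(s_eps)
                - math.log(max(_Phi(b) - _Phi(a), 1e-300)))
        S_prev = S
    return nll

rng = np.random.default_rng(0)
y = np.clip(0.5 + 0.02 * rng.standard_normal(300).cumsum(), 0.01, 0.99)
res = minimize(neg_log_lik, x0=[0.5, 0.1, 0.05], args=(y,),
               bounds=[(0.01, 0.99), (-0.99, 0.99), (1e-3, 1.0)])
```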
\subsubsection{Smoothing both the location and scale parameters simultaneously}
Next, we consider heteroscedasticity for the conditional variances of
wind power. In this case, apart from smoothing the location parameter
$\ell_t$, we also simultaneously smooth the scale parameter $s_t^2$. In
fact, we smooth the variance of innovations $s_{\varepsilon;t}^2$ and
obtain the scale parameter $s_t^2$ as a function of
$s_{\varepsilon;t}^2$ as in (\ref{eq:M2}).
Equipped with the one-step ahead forecast of the location parameter
$\hat{\ell}_{t|t-1}$, we may calculate the squared difference between
$\hat{\ell}_{t|t-1}$ and the observed wind power $y_t$, that is, $(y_t
- \hat{\ell}_{t|t-1})^2$, as the estimated variance
$s^2_{\varepsilon;t}$ at time $t$. Applying simple exponential
smoothing, the smoothed series of $s_{\varepsilon;t}^2$ is given by
$V_t$, which is updated according to
\begin{equation}\label{eq:ESVarupdate}
V_t = \gamma (y_t - \hat{\ell}_{t|t-1})^2 + (1-\gamma) V_{t-1},
\end{equation}
where $y_t$ is the observed wind power at time $t$,
$\hat{\ell}_{t|t-1}$ is obtained by (\ref{eq:ESNNECforecast}) and
$0<\gamma<1$ is a smoothing parameter. We initialize the series by
setting $V_1$ to be the variance of the data in the training set. In
fact, the forecasts are not sensitive to the choice of initial values
due to the size of the data set. The one-step ahead forecast is given
by $\hat{s}^2_{\varepsilon;t+1|t} = V_t$. Again, the forecast errors
are highly correlated and it is better to include an additional
parameter $\phi_v$ in the forecast equation to account for
autocorrelations. The modified forecast equation is then given by
\begin{equation}\label{eq:ESVarECforecast}
\hat{s}^2_{\varepsilon;t+1|t} = V_t + \phi_v [ (y_t - \hat{\ell}_{t|t-1})^2 - V_{t-1}
],
\end{equation}
where $|\phi_v|<1$. Unfortunately, a major drawback of introducing this
extra term in the forecast equation is that negative values of
$\hat{s}^2_{\varepsilon;t+1|t}$ may occur. Although this does not
happen in our data,\vspace*{1pt} we aim at developing a general methodology that
applies to different data sets, and hence we modify our approach and
smooth the log-transformed scale parameter $\log s^2_{\varepsilon;t}$,
for which negative values are allowed. The smoothed series
for $\log s^2_{\varepsilon;t}$ is then given by $\log V_t$. Denoting\vspace*{-3pt}
$\varepsilon_t = y_t - \hat{\ell}_{t|t-1}$ and $e_t = \varepsilon_t /
\sqrt{V_t}$, the estimated logarithmic variance at time $t$ is now chosen to be
$g(e_{t})$ instead of $\log\varepsilon_t^2$ so that
\begin{equation}\label{eq:g}
g(e_t) = \theta ( |e_t| - \mathrm{E}[|e_t|] ),
\end{equation}
where $\theta$ is a constant parameter. This ensures that
$g(e_t)$ is positive for large values of $|e_t|$ and
negative when $|e_t|$ is small. The updating equation and the forecast equation are now written respectively as
\begin{eqnarray}\label{eq:ESlogVarEqn}
\log V_t &=& \gamma g(e_{t}) + (1-\gamma) \log V_{t-1},
\nonumber\\[-8pt]\\[-8pt]
\log \hat{s}^2_{\varepsilon;t+1|t} &=& \log V_t + \phi_v [ g(e_{t}) - \log V_{t-1} ],\nonumber
\end{eqnarray}
which are analogous to (\ref{eq:ESVarupdate}) and
(\ref{eq:ESVarECforecast}), except that a logarithmic\vspace*{1pt} transformation is
taken and $(y_t - \hat{\ell}_{t|t-1})^2$ is replaced by $g(e_{t})$.
We initialize the series by setting $\log V_1 = 0$. In fact, the
smoothing procedure is insensitive to the initial value due to the size
of the data set.
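The log-variance recursion in (\ref{eq:ESlogVarEqn}) can be sketched as follows. Since $e_t = \varepsilon_t/\sqrt{V_t}$ is defined implicitly, the sketch standardizes each innovation by the most recent available smoothed variance (one reading of that definition), and assumes Gaussian innovations so that $\mathrm{E}[|e_t|] = \sqrt{2/\pi}$.

```python
# Hedged sketch of the log-variance smoothing in (eq:ESlogVarEqn).
# Standardization uses the latest smoothed variance; g is centered by
# E|e_t| = sqrt(2/pi) under Gaussian innovations.
import math

def smooth_log_variance(y, ell_hat, gamma, phi_v, theta):
    """One-step ahead forecasts of log s^2_{eps;t+1|t}."""
    log_V = 0.0                                     # initialize log V_1 = 0
    forecasts = []
    for t in range(len(y)):
        e = (y[t] - ell_hat[t]) / math.exp(0.5 * log_V)
        g = theta * (abs(e) - math.sqrt(2.0 / math.pi))
        log_V_new = gamma * g + (1 - gamma) * log_V     # updating equation
        forecasts.append(log_V_new + phi_v * (g - log_V))
        log_V = log_V_new
    return forecasts
```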
Now, the $h$-step ahead forecasts of $\hat{\ell}_{t+h|t}$ are still obtained from (\ref{ESNNECforecast_h}), but to generate $h$-step ahead forecasts of $\hat{s}_{t+h|t}^2$ we need to identify an underlying model for this smoothing method. We summarize our exponential smoothing method for both $\ell_t$ and $s_t^2$ by combining (\ref{eq:ESupdate}), (\ref{eq:ESNNECforecast}) and (\ref{eq:ESlogVarEqn}):
\begin{eqnarray}\label{eq:ESNNEC_ECforecast}
S_t &=& \alpha y_t + (1-\alpha)S_{t-1} , \nonumber \\
\hat{\ell}_{t+1|t} &=& S_t + \phi_s (y_t - S_{t-1}),\nonumber\\[-8pt]\\[-8pt]
\log V_t &=& \gamma g(e_{t}) + (1-\gamma) \log V_{t-1} , \nonumber \\
\log \hat{s}^2_{\varepsilon;t+1|t} &=& \log V_t + \phi_v [ g(e_{t}) - \log V_{t-1} ],\nonumber
\end{eqnarray}
where $g(e_t)$ is given in (\ref{eq:g}) and $e_t$ as defined
previously. There are four smoothing parameters $\alpha, \gamma,
\phi_s, \phi_v$ and a parameter $\theta$ for the estimated logarithmic variance
$g(e_t)$. We adopt the taxonomy similar to that for exponential
smoothing for the location parameter as described in Section
\ref{sec:ETSmean}, and denote (\ref{eq:ESNNEC_ECforecast}) as the
$\operatorname{ETS}(A,N,N|\mathit{EC})$--($A,N,N|\mathit{EC}$) method where
the second bracket of $(A,N,N|\mathit{EC})$ indicates the exponential
smoothing method applied for smoothing the variance. We aim at
identifying (\ref{eq:ESNNEC_ECforecast}) with an ARIMA--GARCH model.
Using (\ref{eq:ARIMA111_ETS}) as the $\operatorname{ARIMA}(1,1,1)$
model for $y_t$ and writing $\varepsilon_t = y_t -
\hat{\ell}_{t|t-1}$, the last equation in (\ref{eq:ESNNEC_ECforecast})
can be written as
\begin{eqnarray}
\log \hat{s}^2_{\varepsilon;t+1|t} &=& \log V_t + \phi_v [ g(e_{t}) - \log V_{t-1} ] \nonumber \\
&=& \gamma g(e_{t}) + (1-\gamma) \log V_{t-1} + \phi_v [ g(e_{t}) - \log V_{t-1} ]\nonumber\\
&=& (\gamma + \phi_v) g(e_{t}) - \phi_v \log V_{t-1} \\
&&{} + (1-\gamma) \{ \log s^2_{\varepsilon;t} - \phi_v [ g(e_{t-1}) - \log V_{t-2} ] \} \nonumber\\
&=& (\gamma + \phi_v) g(e_{t}) - \phi_v g(e_{t-1}) + (1-\gamma) \log s^2_{\varepsilon;t},\nonumber
\end{eqnarray}
where we have used the updating equation in (\ref{eq:ESlogVarEqn}).
This is the exponential GARCH, that is, $\operatorname{EGARCH}(2,1)$ model for the
conditional variance of innovations $\varepsilon_t$
[\citet{Nelson1991}]. Unlike the conventional EGARCH models for
asset prices, $g(e_t)$ is symmetric since there is no reason to expect
volatility to increase when wind power generation drops. In summary,
the exponential smoothing method in (\ref{eq:ESNNEC_ECforecast}) is
optimal for the $\operatorname{ARIMA}(1,1,1)$--$\operatorname{EGARCH}(2,1)$ model, which
can be written as
\begin{eqnarray}\label{eq:ARIMA111GARCH21}
w_t &=& \phi_s w_{t-1} + \varepsilon_t + (\alpha -
1)\varepsilon_{t-1},\qquad
\varepsilon_t | \mathcal{F}_{t-1} \stackrel{\mathrm{i.i.d.}}{\sim}
N(0,s^2_{\varepsilon;t}),\nonumber\\[-8pt]\\[-8pt]
\log s_{\varepsilon;t}^2 &=& (1-\gamma) \log s_{\varepsilon;t-1}^2 + (\gamma + \phi_v) g(e_{t-1}) - \phi_v g(e_{t-2}), \nonumber
\end{eqnarray}
where $w_t = y_t - y_{t-1}$ and $g(e_t)$ is given in (\ref{eq:g}), and
we have assumed Gaussian innovations so that $\mathrm{E}[|e_t|] =
\sqrt{2/\pi}$. Similarly, we estimate the five parameters $\alpha,
\phi_s, \gamma, \phi_v$ and $\theta$ by maximizing the truncated normal
likelihood as mentioned in Section \ref{sec:ETSmean}. Now, equipped with the
$\operatorname{ARIMA}(1,1,1)$--$\operatorname{EGARCH}(2,1)$ model in
(\ref{eq:ARIMA111GARCH21}), the $h$-step ahead forecasts for the scale
parameter $\hat{s}^2_{\varepsilon;t+h|t}$ can be easily obtained
[\citet{Tsay2005}]. Consequently, the $h$-step ahead forecasts
$\hat{s}^2_{t+h|t}$ can be expressed as a function of
$\{\hat{s}^2_{\varepsilon;t+j|t} \}_{j=1}^{h}$, which is analogous to\vspace*{2pt}
(\ref{ESNNECforecast_sig_h}) except that the expression is much more
complicated and, in practice, one would simply iterate the forecasts.
The $h$-step ahead density forecasts can then be obtained using
(\ref{eq:TrunNormDensity}).
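To make the iteration concrete, the EGARCH part of (\ref{eq:ARIMA111GARCH21}) can be iterated as sketched below. The sketch forecasts the log variance directly and uses the fact that $g$ is centered, so future $g(e_{t+j})$ terms have zero expectation under the Gaussian assumption; $g(e_t)$ is known at the forecast origin.

```python
# Hedged sketch: h-step ahead log-variance forecasts from the EGARCH
# recursion log s2_{t+h} = (1-gamma) log s2_{t+h-1}
#                          + (gamma+phi_v) g(e_{t+h-1}) - phi_v g(e_{t+h-2}).
# Future g terms have zero expectation; only g_t = g(e_t) survives at h = 2.
def egarch_log_var_forecasts(log_s2_1step, g_t, gamma, phi_v, H):
    out = [log_s2_1step]                            # h = 1 (given)
    if H >= 2:                                      # h = 2 still sees g(e_t)
        out.append((1 - gamma) * out[-1] - phi_v * g_t)
    for _ in range(3, H + 1):                       # h >= 3: pure decay
        out.append((1 - gamma) * out[-1])
    return out
```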
\section{Forecast evaluations}\label{sec:Evaluation}
\subsection{Benchmark models}
In this section we apply the approaches of density forecasts in Section
\ref{sec:Approach} to forecast normalized aggregated wind power in
Ireland. To evaluate the forecast performances of our approaches, we
compare the results with four simple benchmarks. The first two
benchmarks are the persistence (random walk) forecast and the constant
forecast, which are both obtained as truncated normal distributions in
(\ref{eq:TrunNormDensity}). For the persistence forecast, we estimate
the $h$-step ahead location parameter $\hat{\ell}_{t+h|t}$ and scale
parameter $\hat{s}_{t+h|t}^2$ using the latest observations, that is,
\begin{equation}\label{eq:persistence}
\hat{\ell}_{t+h|t} = y_t,\qquad
\hat{s}_{t+h|t}^2 = \frac{\sum_{j=1}^N (y_{t+1-j} - y_{t-j})^2}{N}
\end{equation}
for $t>N$. We find that taking $N=48$, that is, using data in the past 12 hours, gives an appropriate estimate for $\hat{s}_{t+h|t}^2$.
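The persistence benchmark in (\ref{eq:persistence}) amounts to the following short sketch, with $N = 48$ covering 12 hours of 15-minute observations.

```python
# Sketch of the persistence benchmark: the latest observation as location,
# and the mean of the last N squared first differences as scale.
def persistence_params(y, t, N=48):
    """(ell_hat, s2_hat) at 0-based forecast origin t, valid for t >= N."""
    ell = y[t]
    s2 = sum((y[t + 1 - j] - y[t - j]) ** 2 for j in range(1, N + 1)) / N
    return ell, s2
```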
For the constant forecast, we estimate the constant location parameter
$\hat{\ell}_{t+h|t}$ and scale parameter $\hat{s}_{t+h|t}^2$ using data
in the whole training set. They are given by the sample mean and the
sample variance of the 11,008 observations in the training set, so that
\begin{equation}
\hat{\ell}_{t+h|t} = \hat{\ell} = \frac{\sum_{j=1}^{11{,}008}
y_{j}}{11{,}008},\qquad
\hat{s}_{t+h|t}^2 = \hat{s}^2 = \frac{\sum_{j=1}^{11{,}008} (y_{j} - \hat{\ell})^2}{11{,}007}.
\end{equation}
We have also considered generating the persistence and constant
forecasts using the first approach as described in Section
\ref{sec:model}. However, our results show that the second approach
gives a better benchmark in terms of forecast performance.
On the other hand, the third and the fourth benchmarks are obtained by
estimating empirical densities from the data. The third benchmark is
the climatology forecast, in which an empirical unconditional density
is fitted using data in the whole training set. The density has been
shown in Figure \ref{fig:epdf_bar_bw} previously. The fourth benchmark
is the empirical conditional density forecast. To be in line with the
use of exponential smoothing to estimate the location and scale
parameters in Section \ref{sec:ESTrunNorm}, we consider an
exponentially weighted moving average (EWMA) of a set of empirical
conditional densities. For computational efficiency as well as the
reliability of the density estimates, at each time $t$ we
consider the EWMA of 14 empirical conditional densities
$g_{\mathrm{emp}}(\{ \Lambda_t^{j} \})$, where each of them is fitted
using observations in the past $j$ days with $j = 1,2,\ldots,14$ and
$\{ \Lambda_t^{j} \} = \{ y_{t-96j+1}, y_{t-96j+2}, \ldots, y_{t} \}$
is the set of $(96 \times j)$ latest observations used to fit the
empirical density. Up to an appropriate normalization constant, the
$h$-step ahead EWMA empirical conditional density forecast is given by
\begin{equation}\label{eq:EWMACondDen}
f_{t+h|t}(y) \propto \sum_{j=1}^{14} \lambda (1 - \lambda)^{j-1} g_{\mathrm{emp}}(\{ \Lambda_t^{j} \})
\end{equation}
so that for any fixed forecast origin $t$, the $h$-step ahead density
forecasts are identical for all $h>1$. The smoothing parameter in
(\ref{eq:EWMACondDen}) is estimated to be $\lambda = 0.1988$, which is
obtained by maximizing the log likelihood, that is, $\sum \log
f_{t+1|t}(\lambda; y_{t+1})$, using the data in the training set only.
It is possible to estimate a smoothing parameter for each forecast
horizon $h$. However, the improvements are not significant and, thus,
we simply keep using $\lambda = 0.1988$ for all horizons. Figure
\ref{fig:EWMACondDen} shows the exponential decrease of the weights
being assigned to different empirical densities $g_{\mathrm{emp}}(\{
\Lambda_t^{j} \})$.
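The mixture weights $\lambda(1-\lambda)^{j-1}$ in (\ref{eq:EWMACondDen}), normalized so that the 14-component mixture integrates to one, can be computed as follows (with $\lambda = 0.1988$ as estimated above).

```python
# Normalized EWMA weights lambda*(1-lambda)^(j-1) for j = 1, ..., 14,
# using the smoothing parameter estimated in the text.
lam, J = 0.1988, 14
raw = [lam * (1 - lam) ** (j - 1) for j in range(1, J + 1)]
weights = [w / sum(raw) for w in raw]
```

The weights decrease geometrically in $j$, giving the most recent days the largest influence.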
\begin{figure}
\includegraphics{320f13.eps}
\caption{The exponential decrease of the weights
$\lambda(1-\lambda)^{j-1}$ assigned to the empirical conditional
densities $g_{\mathrm{emp}}(\{ \Lambda_t^{j} \})$ fitted with $j$ days
of latest observations, where $\lambda = 0.1988$ is obtained by
maximizing the likelihood using data in the training set. The EWMA
empirical conditional density forecasts are obtained as the weighted
average of $g_{\mathrm{emp}}(\{ \Lambda_t^{j} \})$.}
\label{fig:EWMACondDen}
\end{figure}
In summary, we consider the following 4 benchmarks and 4 approaches of
generating multi-step density forecasts, and compare their forecast
performances from 15 minutes up to 24 hours ahead:
\begin{enumerate}
\item Persistence forecast [TN]
\item Constant forecast [TN]
\item Climatology forecast [Empirical density]
\item EWMA conditional density forecast [Empirical density]
\item The $\operatorname{ARIMA}(2,1,3)$ model [LT]
\item The $\operatorname{ARIMA}(4,1,3)$--$\operatorname{GARCH}(1,1)$ model [LT]
\item The $\operatorname{ETS}(A,N,N|\mathit{EC})$ method [TN]
\item The $\operatorname{ETS}(A,N,N|\mathit{EC})$--($A,N,N|\mathit{EC}$) method [TN],
\end{enumerate}
where [LT] stands for logistic transformation and [TN] stands
for truncated normal distribution, so as to remind us how the densities
are generated.
\subsection{Point forecasts}
First, let us evaluate the point forecasts generated by different
approaches. We consider the expected values of the density forecasts as
the optimal point forecasts. Given a forecast density, we can obtain
the expected value directly by numerical integration. In particular,
for forecast densities in the form of truncated normal distributions,
one may easily write down the expected value as
\begin{eqnarray}
\hat{y}_{t+h|t} &=& \hat{\ell}_{t+h|t} - \hat{s}_{t+h|t} \biggl(
\biggl(\varphi \biggl(
\frac{1-\hat{\ell}_{t+h|t}}{\hat{s}_{t+h|t}} \biggr) - \varphi
\biggl( \frac{-\hat{\ell}_{t+h|t}}{\hat{s}_{t+h|t}} \biggr)
\biggr)\nonumber\\[-8pt]\\[-8pt]
&&\hphantom{ \hat{\ell}_{t+h|t} - \hat{s}_{t+h|t} \biggl(}
{}\Big/ \biggl(\Phi \biggl(
\frac{1-\hat{\ell}_{t+h|t}}{\hat{s}_{t+h|t}} \biggr) - \Phi
\biggl( \frac{-\hat{\ell}_{t+h|t}}{\hat{s}_{t+h|t}} \biggr)
\biggr) \biggr),\nonumber
\end{eqnarray}
where $\hat{\ell}_{t+h|t}$ and $\hat{s}^2_{t+h|t}$ are the location and
scale parameters of the truncated normal distribution in
(\ref{eq:TrunNormDensity}). Note that due to the truncation, the
distribution may not be symmetric and so the expected value is in
general different from the location parameter, that is,
$\hat{y}_{t+h|t} \neq \hat{\ell}_{t+h|t}$. In fact, referring to
(\ref{eq:M2}), $\hat{\ell}_{t+h|t} = p^{(h)}_G(\hat{\ell}_{t+1|t},
\ldots, \hat{\ell}_{t+h-1|t}; y_1, \ldots, y_t)$ is obtained according
to a Gaussian model $G$, which may not give the true conditional mean
$\hat{y}_{t+h|t}$ of the data, and may even be negative. Since the
final density $f_{t+h|t}$ is only obtained when an appropriate function
$D$ is chosen,\vspace*{1pt} we see that $D$ transforms the conditional mean from
$\hat{\ell}_{t+h|t}$ for Gaussian data to the optimal forecast
$\hat{y}_{t+h|t}$ for our data. This is analogous to calculating
optimal point forecasts when the loss function is asymmetric
[\citet{Christoffersen1997}, \citet{Patton2007}]. Since the normalized
aggregated wind power is bounded within $[0,1]$, the loss function is
always asymmetric unless the conditional mean is $\hat{\ell}_{t+h|t} =
0.5$. When the conditional mean is not the optimal forecast, an
additional term is added to compensate for the asymmetric loss.
\citet{Christoffersen1997} suggest an approximation to calculate
the optimal forecast for conditionally Gaussian data by assuming
$\hat{y}_{t+h|t} = G(\mu_{t+h|t}, \sigma^2_{t+h|t})$, where
$\mu_{t+h|t}, \sigma^2_{t+h|t}$ are the conditional mean and
conditional variance. Their method involves expanding $G$ into a Taylor
series.
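As a small illustrative check, the mean of a normal distribution truncated to $[0,1]$ can be computed from the standard closed form and compared against SciPy's implementation; the truncation pulls the mean away from the location parameter, so $\hat{y}_{t+h|t} \neq \hat{\ell}_{t+h|t}$ in general.

```python
# Illustrative sketch: closed-form mean of N(loc, scale^2) truncated to
# [0, 1], cross-checked against SciPy's truncnorm.mean.
from scipy.stats import norm, truncnorm

def truncnorm_mean(loc, scale):
    a, b = (0 - loc) / scale, (1 - loc) / scale
    return loc - scale * (norm.pdf(b) - norm.pdf(a)) / (norm.cdf(b) - norm.cdf(a))

loc, scale = 0.2, 0.3
a, b = (0 - loc) / scale, (1 - loc) / scale
assert abs(truncnorm_mean(loc, scale)
           - truncnorm.mean(a, b, loc=loc, scale=scale)) < 1e-10
```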
To evaluate the performances of different forecasting approaches, we
calculate $h$-step ahead point forecasts for each of the 5504 values in
the testing set, where $1 \leq h \leq 96$, that is, from 15 minutes up
to 24 hours ahead. For each forecast horizon $h$, we calculate the mean
absolute error (MAE) and the root mean squared error (RMSE) of the
point forecasts, where the mean is taken over the 5504 $h$-step ahead
forecasts in the testing set.
\begin{figure}
\includegraphics{320f14.eps}
\caption{Mean absolute error (MAE) of point forecasts generated by
different approaches for forecast horizons from 15 minutes to 24 hours
ahead. The ARIMA--GARCH models on logistic
transformed data perform best for short
horizons less than 12 hours whereas the
ETS$(A,N,N|EC)$--$(A,N,N|EC)$ method with
truncated normal distribution is best for
horizons greater than 12 hours.}
\label{fig:mae}
\end{figure}
\begin{figure}
\includegraphics{320f15.eps}
\caption{Root mean squared error (RMSE) of point forecasts generated by
different approaches for forecast horizons from 15 minutes to 24 hours
ahead. Results are similar to those under MAE.} \label{fig:rmse}
\end{figure}
Figures \ref{fig:mae} and \ref{fig:rmse} show the results of point
forecasts under MAE and RMSE respectively. The rankings of different
approaches are similar under either MAE or RMSE, except for the
$\operatorname{ETS}(A,N,N|\mathit{EC})$--$(A,N,N|\mathit{EC})$ method which
performs relatively better under MAE than RMSE. It performs the best
under MAE for long forecast horizons beyond 14 hours. On the other
hand, the two ARIMA--GARCH models outperform all other approaches for
short forecast horizons within 12 hours, and are almost as good as the
$\operatorname{ETS}(A,N,N|\mathit{EC})$--$(A,N,N|\mathit{EC})$ method for
horizons beyond 12 hours.
Interestingly, the $\operatorname{ARIMA}(2,1,3)$ model performs almost identically
to the $\operatorname{ARIMA}(4,1,3)$--$\operatorname{GARCH}(1,1)$ model. This phenomenon is in contrast
with that for the ETS methods, where smoothing both the location and
scale parameters does perform much better. It seems that including the
dynamics of the conditional variance in the modeling of the logistic
transformed wind power $z_t$ cannot improve the point forecasts under
MAE or RMSE. This may be explained by Figure \ref{fig:NormWPDiff_bw}
which shows a significantly changing variance in the original wind
power data $y_t$, and by Figure \ref{fig:NormWP_logit_Diff_bw} which
shows a fairly constant variance for $z_t$. We will further investigate
this issue in the evaluation of density forecasts using the
probability integral transform (PIT), where we see that the conditional
variance models are indeed capturing the changes in volatility better
and thus generate more reliable density forecasts.
As discussed in Section \ref{sec:Intro}, one may argue that
spatiotemporal information among individual wind farms should be
deployed to forecast aggregated wind power. To show that it is indeed
better to forecast the aggregated power as a univariate time series, we
consider a simple multiple time series approach. We obtain the best
linear unbiased predictor (BLUP) of wind power generation at a single
wind farm using observations in the neighborhood, where the predictor
is the best in the sense that it minimizes mean square errors. In other
words, it is simply the kriging predictor which is widely applied in
spatial statistics [\citet{Cressie1993}, \citet{Stein1999}]. It can be
easily extended to deal with spatiotemporal data
[\citet{Gneiting2007c}], and more details can be found in
\citet{Lau2010}. Computing the BLUP relies on the knowledge of the
covariances of the process between different sites. In the context of
spatiotemporal data, we obtain the BLUP by calculating the empirical
covariances among the wind power at different spatial as well as
temporal lags.\footnote{One needs to decide the number of temporal lags
to be included in calculating the BLUP. In our case of
empirical covariances, we find that
including temporal lags within the past
hour is generally the best. Forecast
performances deteriorate when one
considers too many temporal lags.} We then substitute the empirical covariances into the formula of
BLUP. We apply this method and obtain 1, 6, 12 and 24 hours ahead point
forecasts for the power generated at each individual wind farm,
aggregate all power and normalize the result by dividing by 792.355~MW.
We compute the RMSE of these aggregated forecasts, and find that
aggregating individual forecasts cannot beat the performances of our
approaches in Section \ref{sec:Approach}. The results are displayed in
Table \ref{tab:SummaryRMSE}. Of course, one may expect that more
sophisticated spatiotemporal models may be able to outperform our
methods here, but such models are of greater interest for individual
power generation than for the aggregated generation discussed in this paper.
\begin{table}
\caption{Summary of point forecast performances of different approaches
under RMSE. The bold numbers indicate the best approach at that
forecast horizon} \label{tab:SummaryRMSE}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccc@{}}
\hline
& \textbf{1 hour} & \textbf{6 hours} & \textbf{12 hours} & \textbf{24 hours} \\
\hline
Persistence forecast & 0.036 & 0.138 & 0.191 & 0.229 \\
Constant forecast & $0.263$ & $0.263$ & $0.263$ & $0.263$ \\
Climatology forecast & 0.285 & 0.285 & 0.285 & 0.285 \\
EWMA conditional density & 0.177 & 0.192 & 0.203 & 0.211 \\
$\operatorname{ARIMA}(2,1,3)$ & 0.032 & ${0.118}$ & ${0.177}$ & $\bolds{0.207}$ \\
$\operatorname{ARIMA}(4,1,3)$--$\operatorname{GARCH}(1,1)$ & $\bolds{0.031}$ & $\bolds{0.117}$ & ${0.177}$ & 0.209 \\
$\operatorname{ETS}(A,N,N|\mathit{EC})$ & 0.032 & 0.126 & 0.193 & 0.230 \\
$\operatorname{ETS}(A,N,N|\mathit{EC})$--($A,N,N|\mathit{EC}$) & 0.034 & 0.126 & $\bolds{0.176}$ & 0.215 \\[3pt]
BLUP (Multiple time series approach) & 0.037 & 0.123 & 0.188 & 0.229 \\
\hline
\end{tabular*}
\end{table}
\subsection{Density forecasts}
For the density forecasts, we use the continuous ranked probability
score (CRPS) to rank the performances. \citet{Gneiting2007a}
discuss the properties of CRPS extensively, showing that it is a
strictly proper score and a lower score always indicates a better
density forecast. CRPS has become one of the popular tools for density
forecast evaluations, especially for ensemble forecasts in meteorology.
We have also analyzed the performances of density forecasts using other
common metrics such as the negative log likelihood (NLL) scores.
However, we advocate the use of CRPS for ranking different approaches
since CRPS is more robust than the NLL scores, which can be severely
affected by a few extreme outliers
[\citet{Gneiting2005}]. One may need to calculate the trimmed mean
of the NLL scores in order to resolve this problem
[\citet{Weigend2000}]. Also, CRPS assesses both the calibration
and the sharpness of the density forecasts, while the NLL scores
assess sharpness only.
Similar to evaluating point forecasts, we generate $h$-step ahead
density forecasts for each of the 5504 values in the testing set where
$1 \leq h \leq 96$. For each $h$-step ahead density forecast
$f_{t+h|t}$, let $F_{t+h|t}$ be the corresponding cumulative
distribution function. The CRPS is computed as
\begin{equation}
\mathit{CRPS} = \int_0^1 [F_{t+h|t}(y) - \mathbf{1}(y-y_{t+h})]^2 \,dy,
\end{equation}
where $\mathbf{1}(\cdot)$ is the indicator function which is equal to
one when the argument is positive. Again, the mean CRPS is taken over
the 5504 $h$-step ahead density forecasts in the testing set.
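The CRPS integral for a single density forecast can be approximated numerically as sketched below; the truncated-normal forecast parameters and the outcome are placeholders, and the indicator equals one when $y > y_{t+h}$.

```python
# Numerical sketch of the CRPS integral on [0, 1] for one hypothetical
# truncated-normal forecast CDF F and observed outcome y_obs.
import numpy as np
from scipy.stats import truncnorm

loc, scale, y_obs = 0.4, 0.15, 0.55       # placeholder forecast and outcome
a, b = (0 - loc) / scale, (1 - loc) / scale
grid = np.linspace(0.0, 1.0, 2001)
F = truncnorm.cdf(grid, a, b, loc=loc, scale=scale)
integrand = (F - (grid > y_obs)) ** 2     # [F(y) - 1(y - y_obs)]^2
crps = float(np.mean(integrand))          # grid mean ~ integral over [0, 1]
```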
\begin{figure}[b]
\includegraphics{320f16.eps}
\caption{Mean continuous ranked probability score (CRPS) of density
forecasts generated by different approaches for forecast horizons from
15 minutes to 24 hours ahead. Rankings are similar to those under MAE
and RMSE in point forecasts.} \label{fig:crps}
\end{figure}
\begin{table}
\caption{Summary of density forecast performances of different
approaches under CRPS. The bold numbers indicate the best approach at
that forecast horizon} \label{tab:SummaryCRPS}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccc@{}}
\hline
& \textbf{1 hour} & \textbf{6 hours} & \textbf{12 hours} & \textbf{24 hours} \\
\hline
Persistence forecast & 0.019 & 0.077 & 0.111 & 0.137 \\
Constant forecast & $0.159$ & $0.159$ & $0.159$ & $0.159$ \\
Climatology forecast & 0.175 & 0.175 & 0.175 & 0.175 \\
EWMA conditional density & 0.098 & 0.111 & 0.120 & 0.127 \\
$\operatorname{ARIMA}(2,1,3)$ & 0.017 & 0.065 & ${0.100}$ & $\bolds{0.119}$ \\
$\operatorname{ARIMA}(4,1,3)$--$\operatorname{GARCH}(1,1)$ & $\bolds{0.016}$ & $\bolds{0.063}$ & $\bolds{0.099}$ & 0.120 \\
$\operatorname{ETS}(A,N,N|\mathit{EC})$ & 0.017 & 0.068 & 0.109 & 0.129 \\
$\operatorname{ETS}(A,N,N|\mathit{EC})$--($A,N,N|\mathit{EC}$) & 0.017 & 0.069 & 0.100 & 0.124 \\
\hline
\end{tabular*}
\end{table}
Figure \ref{fig:crps} shows the performances of density forecasts under
mean CRPS. The rankings are similar to those under MAE and RMSE in
point forecasts. The two ARIMA--GARCH models outperform all other
approaches for all forecast horizons. Table \ref{tab:SummaryCRPS}
summarizes the main results. Again, the performances of the
$\operatorname{ARIMA}(2,1,3)$ model are very similar to those of the
$\operatorname{ARIMA}(4,1,3)$--$\operatorname{GARCH}(1,1)$ model and, in contrast, the
$\operatorname{ETS}(A,N,N|\mathit{EC})$--$(A,N,N|\mathit{EC})$ method is
significantly better than the $\operatorname{ETS}(A,N,N|\mathit{EC})$ method.
To investigate the value of including the dynamics of conditional
variances, we consider the probability integral transform (PIT). For
one-step ahead density forecasts $f_{t+1|t}$, the PIT values are given
by
\begin{equation}
z(y_{t+1}) = \int_0^{y_{t+1}} f_{t+1|t}(y) \,dy.
\end{equation}
\citet{Diebold1998} show that the series of PIT values $z$ are
i.i.d. uniform if $f_{t+1|t}$ coincides with the true underlying density
from which $y_{t+1}$ is generated. For each forecasting approach, we
calculate the percentage of PIT values below the 5th, 50th and
95th quantiles of the $U[0,1]$ distribution, that is, the
percentage of PIT values smaller than 0.05, 0.5 and 0.95 respectively.
We denote them by $P_5, P_{50}$ and $P_{95}$, and calculate the
deviations of the percentages $(P_5-5), (P_{50}-50)$ and $(P_{95}-95)$.
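A minimal sketch of this calibration check follows; the forecast-CDF list and the function names are illustrative, not from the paper:

```python
def pit_values(cdf_forecasts, observations):
    """One-step PIT values z_t = F_{t+1|t}(y_{t+1}), given a list of
    forecast CDF callables and the realized observations."""
    return [F(y) for F, y in zip(cdf_forecasts, observations)]

def pit_deviations(z, quantiles=(0.05, 0.50, 0.95)):
    """Deviations, in percentage points, of the empirical PIT
    proportions from their uniform targets, i.e. (P_5 - 5),
    (P_50 - 50) and (P_95 - 95)."""
    n = len(z)
    devs = []
    for q in quantiles:
        pct = 100.0 * sum(1 for v in z if v < q) / n
        devs.append(pct - 100.0 * q)
    return devs
```

For a perfectly calibrated forecast the PIT values are uniform on $[0,1]$ and all three deviations are near zero; systematic positive or negative deviations signal over-conservative or over-confident densities.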
Figure \ref{fig:PIT_hor1} shows the deviations, where we only focus on
the two ETS methods and the two ARIMA--GARCH models. We see that the
$\operatorname{ETS}(A,N,N|\mathit{EC})$--($A,N,N|\mathit{EC}$) method and the
$\operatorname{ARIMA}(4,1,3)$--$\operatorname{GARCH}(1,1)$ model indeed generate density forecasts which
are better calibrated. In particular, the overall calibration of the
$\operatorname{ETS}(A,N,N|\mathit{EC})$--($A,N,N|\mathit{EC}$) method is the
best, indicating that it provides the most reliable descriptions of the
changing volatility over time. Note that a positive slope in Figure
\ref{fig:PIT_hor1} implies a density forecast which is
over-conservative and has a large spread, while a negative slope
implies the opposite. Thus, for one-step ahead forecasts, the
ARIMA--GARCH models are over-conservative, while the ETS methods are
over-confident.
\begin{figure}
\includegraphics{320f17.eps}
\caption{We calculate the percentages $P_5$, $P_{50}$ and $P_{95}$ of PIT
values smaller than 0.05, 0.5 and 0.95 respectively, and calculate the
deviations $(P_5-5)$, $(P_{50}-50)$ and $(P_{95}-95)$. The
$\operatorname{ETS}(A,N,N|\mathit{EC}$)--($A,N,N|\mathit{EC}$) method and the
$\operatorname{ARIMA}(4,1,3)$--$\operatorname{GARCH}(1,1)$
model indeed generate better calibrated density forecasts. The overall
calibration of the $\operatorname{ETS}(A,N,N|\mathit{EC}$)--($A,N,N|\mathit{EC}$) method is the best,
indicating that it provides the most reliable descriptions of the
changing volatility over time. Note that a positive slope implies a
density forecast which is over-conservative, while a negative slope
implies the opposite.} \label{fig:PIT_hor1}
\end{figure}
Figure \ref{fig:PIT_hor1} only reflects information on the marginal
distributions of the PIT values. \citet{Stein2009} suggests that
it is also valuable to evaluate the distributions conditioned on
volatile periods. It is particularly important to capture the variance
dynamics during times of large volatility, since one generally does not
want to underestimate the risk by issuing an over-confident density
forecast. Underestimating large risks usually
leads to a more disastrous outcome than overestimating small risks.
Following \citet{Stein2009}, we compare the ability of the
approaches in capturing volatility dynamics during the largest 10\% of
variance. To estimate the variance of the data in the testing set, we
directly adopt the persistence forecast $\hat{s}_{\varepsilon;t+1|t}^2$
in (\ref{eq:persistence}), which essentially gives the 12-hour moving
average of realized variance. Figure \ref{fig:EstimatedVariance} shows
the changing variance, where the largest values mostly occur in early
December. The times corresponding to the largest 10\% of variance are
selected and we compare the distribution of $z(y_{t+1})$ at those
times. The PIT diagrams are shown in Figure \ref{fig:PIT_largeV}. It
demonstrates that the ARIMA--GARCH model indeed gives better calibrated
one-step ahead density forecasts than the ARIMA model during volatile
periods. The differences between the two ETS methods are even more
significant, where the $\operatorname{ETS}(A,N,N|\mathit{EC})$ method gives
over-confident density forecasts that underestimate the spread.
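The conditioning step can be sketched as follows. The 48-observation window assumes 15-minute data, so that 12 hours spans 48 points; all names are illustrative:

```python
def rolling_mean(x, window):
    """Simple moving average over a fixed window."""
    return [sum(x[i:i + window]) / window
            for i in range(len(x) - window + 1)]

def volatile_indices(residuals, window=48, top=0.10):
    """Indices whose persistence-variance estimate (moving average of
    squared residuals) falls in the largest `top` fraction; PIT values
    at these times form the conditional histograms."""
    var = rolling_mean([e * e for e in residuals], window)
    cutoff = sorted(var)[int((1.0 - top) * len(var))]
    return [i for i, v in enumerate(var) if v >= cutoff]
```

With the selected indices in hand, one restricts the PIT series to those times and re-draws the histogram, as in the conditional diagrams discussed above.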
\begin{figure}
\includegraphics{320f18.eps}
\caption{Estimated variance of data in the testing set using the
persistence forecast $\hat{s}_{\varepsilon;t+1|t}^2$ in
(\protect\ref{eq:persistence}), which essentially gives the 12-hour moving
average of realized variance. Clearly, the variance changes with time
and the largest values mostly occur in early December.}
\label{fig:EstimatedVariance}
\end{figure}
\begin{figure}
\includegraphics{320f19.eps}
\caption{Histograms of PIT values conditioned on the largest 10\% of
estimated variance, where the one-step ahead density forecasts are
generated using the $\operatorname{ARIMA}(2,1,3)$ model (top left), the
$\operatorname{ARIMA}(4,1,3)$--$\operatorname{GARCH}(1,1)$ model (top right),
the $\operatorname{ETS}(A,N,N|\mathit{EC}$) method
(bottom left) and the $\operatorname{ETS}(A,N,N|\mathit{EC}$)--($A,N,N|\mathit{EC}$) method (bottom
right). The dotted lines correspond to 2 standard deviations from the
uniform density.} \label{fig:PIT_largeV}
\end{figure}
\section{Conclusions and discussions}\label{sec:Conclusion}
In this paper we study two approaches for generating multi-step density
forecasts for bounded non-Gaussian data, and we apply our methods to
forecast wind power generation in Ireland. In the first approach, we
demonstrate that the logistic transformation is a good method to
normalize wind power data which are otherwise highly non-Gaussian and
nonstationary. We fit ARIMA--GARCH models with Gaussian innovations for
the logistic transformed data, and out-of-sample forecast evaluations
show that they generate both superior point and density forecasts for
all horizons from 15 minutes up to 24 hours ahead. A second approach is
to assume that the $h$-step ahead conditional densities are described
by a parametric function $D$ with a location parameter $\hat{\ell}$ and
scale parameter $\hat{s}^2$, namely, the conditional mean and the
conditional variance of $y_t$ that are generated by an appropriate
Gaussian model $G$. Results show that choosing $D$ as the truncated
normal distribution is appropriate for aggregated wind power data, and
in this case $\hat{\ell}$ and $\hat{s}^2$ are the mean and variance of
the original normal distribution respectively. We apply exponential
smoothing methods to generate $h$-step ahead forecasts for the location
and scale parameters. Since the underlying models of the exponential
smoothing methods are Gaussian, we are able to obtain multi-step
forecasts by simple iterations and generate forecast densities as
truncated normal distributions.
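The closed-form moments that make the truncated-normal approach cheap can be sketched directly. This is the standard truncated-normal mean, with the paper's convention that $\hat{\ell}$ and $\hat{s}$ parametrize the untruncated normal; it is not the authors' implementation:

```python
from math import erf, exp, pi, sqrt

def _phi(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def _Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def truncnorm_mean(loc, scale, lower=0.0, upper=1.0):
    """Mean of N(loc, scale^2) truncated to [lower, upper], where loc
    and scale are the parameters of the *original* (untruncated)
    normal. For normalized wind power the bounds are [0, 1]."""
    a = (lower - loc) / scale
    b = (upper - loc) / scale
    return loc + scale * (_phi(a) - _phi(b)) / (_Phi(b) - _Phi(a))
```

Note that truncation pulls the point forecast away from the boundaries, so near full capacity the forecast mean sits strictly below the untruncated location parameter.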
Although the approach using exponential smoothing methods with
truncated normal distributions cannot beat the approach based on
logistic transformed data, it remains a useful alternative for
producing good density forecasts, for several reasons. First, forecast
performances of the exponential smoothing methods are more robust under
different lengths of training data, especially when the size of the
training set is relatively small and the estimated ARIMA--GARCH
models may not extrapolate reliably into the testing set. This
has been demonstrated in our data, where we take 40\% of the data as
the training set and the remaining as the testing set. In such a case,
the $\operatorname{ETS}(A,N,N|\mathit{EC})$--($A,N,N|\mathit{EC}$) method performs
better than the approach with logistic transformed data [Lau (\citeyear{Lau2010})]. Second, in the
first approach using ARIMA--GARCH models, we have to select the best
model using BIC whenever we consider an updated training set. This is
not necessary for the exponential smoothing methods. Third, an
advantage of forecasting by exponential smoothing methods is that it is
computationally more efficient to calculate point forecasts due to the
closed form of the density function we have chosen, namely, the
truncated normal distribution $D$. On the other hand, in the first
approach, we have to transform the Gaussian densities and calculate the
expected value of the transformed densities by numerical integrations,
which require much more computational power. The second and third
points are critical since, in practice, many forecasting problems
require frequent online updating. Finally, the second approach allows
us to choose a parametric function $D$ for the forecast densities,
which gives more flexibility: one may generate improved density
forecasts by testing various possible choices of $D$. This advantage is
particularly important when there are no obvious transformations to
normalize the data, and when there is evidence that supports simple
parametric forecast densities.
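To make the computational contrast above concrete: under the first approach the point forecast is the mean of the back-transformed Gaussian density, which has no closed form and requires quadrature. A sketch, assuming the transform is the plain logit on normalized power in $(0,1)$ (the paper's transform may include additional scaling):

```python
from math import exp, pi, sqrt

def expit(x):
    """Inverse logit, mapping the Gaussian scale back to (0, 1)."""
    return 1.0 / (1.0 + exp(-x))

def mean_inverse_logit(mu, sigma, half_width=8.0, steps=4000):
    """E[expit(X)] for X ~ N(mu, sigma^2), by trapezoidal quadrature
    over mu +/- half_width * sigma."""
    lo, hi = mu - half_width * sigma, mu + half_width * sigma
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        x = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        density = exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))
        total += w * expit(x) * density
    return total * h
```

Each point forecast costs thousands of density evaluations here, versus a single closed-form expression for the truncated normal, which is the efficiency argument made above.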
In summary, we have developed a general approach of generating
multi-step density forecasts for non-Gaussian data. In particular, we
have applied our approaches to generate multi-step density forecasts
for aggregated wind power data, which would be economically valuable to
power companies, national grids and wind farm operators. It would be
interesting and challenging to propose modified methods based on our
current approaches, so that reliable density forecasts for individual
wind power generation could be obtained. Individual wind power time
series are interesting since they are highly nonlinear. Sudden jumps
from maximum capacity to zero may occur due to gusts of winds, and
there may be long chains of zero values because of low wind speeds or
maintenance of turbines. Characteristics of individual wind power
densities include a positive probability mass at zero as well as a
highly right-skewed distribution, and it would be challenging to
generate multi-step density forecasts for individual wind power data.
Another important area of future research is to develop spatiotemporal
models to generate density forecasts for a portfolio of wind farms at
different locations. Recent developments in this area include
\citet{Hering2009}. Some possible approaches include the
process-convolution method developed and studied by
\citet{Higdon1998}, which has been applied to the modeling of
ocean temperatures and ozone concentrations. Another possible approach
is the use of latent Gaussian processes. Those approaches have been
studied by \citet{Sanso1999} who consider the power truncated
normal (PTN) model, and by \citet{Berrocal2008} who consider a
modified version of the PTN model called the two-stage model.
Spatiotemporal models will be important to wind farm investors to
identify potential sites for new farms. It would also be of great
importance to the national grid systems where a large portfolio of wind
farms are connected, and sophisticated spatiotemporal models may be
constructed to improve density forecasts for aggregated wind power by
exploring the correlations of power generations between neighboring
wind farms.
\section*{Acknowledgments}
The wind power generation data in Ireland was
kindly provided by Eirgrid plc. The authors thank
Pierre Pinson, James Taylor, Max Little, the referees and the associate
editors for insightful comments and suggestions.
\section{Introduction}
Graphene, a two-dimensional (2D) allotrope of carbon, is the mother of all
graphitic forms. Graphene shows several fascinating electronic-transport properties, originating
from the linear energy dispersion near the high symmetry corners of the hexagonal Brillouin zone,
which results in effective dynamics of electrons similar to massless Dirac Fermions.\cite{neto109}
But graphene is a zero bandgap semiconductor,\cite{wallace1947} which limits its applications
in electronic devices. In the bulk-form, a band gap can be opened up and tuned by doping graphene with boron or nitrogen~\cite{CNR} or by introducing uniaxial strain.\cite{hua2008,zhen2008}
In graphene nano-ribbons (GNRs), this can be accomplished using the
geometry of the edges: while a GNR with a zigzag edge (ZGNR) has a vanishing gap
(which opens up due to magnetic ordering), a GNR with an arm-chair edge (AGNR)~\cite{son2006}
has a nonzero gap. GNRs can be very useful for practical purposes because their bandgap
can be tuned by changing the ribbon-width.\cite{son2006} Magnetic ground state of pristine ZGNR
can be exploited to explore graphene based spintronics.\cite{young2006}
For any technological applications of graphene, understanding of its
structural stability and mechanical behavior is crucial.
For example, deviation from the perfectly planar structure
in the form of ripples or wrinkles observed in graphene,\cite{meyer2006,fasolino2007}
can have interesting effects on electronic properties. GNRs are known to be susceptible
to structural instabilities at the edges and reconstructions.
\cite{shenoy2008,huang2009,bets2009,koskinen2008,wassmann2008,koskinen2009,caglar2009,gass2008}
Topological defects in the honeycomb carbon lattice, such as Stone-Wales (SW) defects
(pairs of pentagons and heptagons created by 90\textdegree~rotation of a C-C
bond~\cite{stone1986}) occur in graphene~\cite{meyer2008}
and are relevant to its structural and mechanical behavior.\cite{ana2008,Yakobson}
It is important to understand how atomic and electronic structure of quasi 1-D GNRs
is influenced by such defects. In this work, we focus on the effects of the SW defects on
structural stability, electronic and magnetic properties of GNRs.
Deprived of a neighbor, an atom at the edge of GNR has a dangling bond resulting in
an edge compressive stress, which can be relieved by warping,
as analyzed by~\citeauthor{shenoy2008}~\cite{shenoy2008} using a classical potential and
interpreted with a continuum model. \citeauthor{huang2009},\cite{huang2009} on the other hand,
found using first-principles quantum mechanical simulations that such graphene with dangling bonds at the edges would rather undergo SW mediated edge reconstructions to relieve stresses, and consequently have a flat structure. Alternatively, edge stresses in GNRs can be relieved if the dangling bonds are saturated with hydrogen (H-GNR), stabilizing the planar structure relative to the warped one.\cite{huang2009} How SW defect would influence the structure of a H-GNR is not clear and we uncover this in the present work. Although SW defects cost energy,\cite{lusk2008}
they do occur in graphene~\cite{meyer2008} and are shown here to \textit{induce warping instability}
even in H-GNRs.
We organize the paper in the following manner. First, we briefly describe computational details in section~\ref{method}. A discussion follows on various stresses associated with a SW defect in bulk graphene in section~\ref{bg}. We correlate the results obtained in this section to the mechanical properties of edge reconstructed (by SW defects) GNRs. Next, in section~\ref{gnr}, we investigate the properties of such GNRs: first the issue of structural stability in section~\ref{mp}, followed by their electronic properties in section~\ref{ep}. We conclude the paper in section~\ref{con}.
\section{Method}
\label{method}
\begin{figure}
\subfigure[]{\epsfxsize=3.5truecm \epsfbox{fig1a.eps}}
\subfigure[]{\epsfxsize=3.5truecm \epsfbox{fig1b.eps}}
\subfigure[]{\epsfxsize=3.5truecm \epsfbox{fig1c.eps}}
\subfigure[]{\epsfxsize=3.5truecm \epsfbox{fig1d.eps}}
\caption{(color online) Stone Wales defect in bulk graphene; (a) $SW_\|$, (b) $SW_\angle$
($\theta$=60\textdegree), (c) $SW_\angle$ ($\theta$=120\textdegree), where $\theta$ is
the angle of the bond [marked in red (light gray in the gray-scale)] with the horizontal axis in the anti-clockwise direction. (d) Planar stresses acting on the supercell.}
\label{fig1}
\end{figure}
We use first-principles calculations as implemented in the PWSCF code,\cite{pwscf} with
a plane-wave basis set and ultrasoft pseudo-potential, and electron
exchange-correlation is treated with a local density approximation (LDA, Perdew-Zunger
functional). In the literature,
we find use of both LDA\cite{son2006} and GGA\cite{huang2009} in the study of
properties of graphene nanoribbons, and we expect that the choice of
exchange-correlation functional should not affect the main findings of our work much.
We use an energy
cutoff for the plane-wave basis for wavefunctions (charge density) of 30 (240) Ry.
Nanoribbons are simulated using a supercell geometry, with a vacuum layer of $>15$~\AA~between
any two periodic images of the GNR. A \textit{k}-point grid of $12\times1\times1$ ($24\times1\times1$)
points, with the periodic direction of the ribbon along the $x$-axis, is used for sampling
Brillouin-zone integrations for AGNR (ZGNR).
Despite many-body effects in graphene being a subject of active research,
most current experiments support the validity of the band-structure point of view.
Results of DFT calculations are also found to be in remarkable agreement
with those of the Hubbard model\cite{palacios2007}, which takes into account
onsite electron-electron interactions. While we believe that many-body effects
will not drastically alter our results for structure and energetics of Stone-Wales
defects and associated warping, it would indeed be an interesting research problem
to analyze electronic transport properties with many body corrections,
which will not be addressed in this paper.
\section{Results and Discussion}
\subsection{SW Defects in Bulk Graphene}
\label{bg}
We first develop understanding of the stresses associated with SW defects in {\it bulk} graphene
(supercells shown in \fig{fig1}(a), (b) and (c)), and also benchmark our methodology through
comparison with earlier works. The rotated bond that creates the SW defect (marked in red) makes
an angle $\theta$ with the horizontal axis ($x$ axis).
Based on $\theta$, we classify the defects as parallel ($\theta$ = 0\textdegree) and
angled ($\theta\neq$ 0\textdegree) and denote by the symbol $SW_{\|}$ (see~\fig{fig1} (a))
and $SW_\angle$ (see~\fig{fig1} (b) and (c)). We express normal (shear) stress by $\sigma$ ($\tau$).
A positive (negative) $\sigma$ denotes compressive (tensile) stress.
For bulk graphene, the stresses are in the units of eV/\AA$^{2}$, obtained by multiplying
the stress tensor components with the supercell length along $z$ direction.
We show the direction of planar stresses acting on a graphene supercell in \fig{fig1}(d).
For $SW_\|$ defect, $\sigma_x$, $\sigma_y$ and $\tau_{xy}$ are 0.40, -0.27 and 0 eV/\AA${^2}$
respectively. Under rotation $\theta$, stress tensor components transform as
\begin{equation}
\begin{gathered}
\sigma_x(\theta)=\frac{1}{2}(\sigma_x+\sigma_y)+\frac{1}{2}(\sigma_x-\sigma_y)\cos 2\theta+\tau_{xy}\sin 2\theta\\
\sigma_y(\theta)=\frac{1}{2}(\sigma_x+\sigma_y)-\frac{1}{2}(\sigma_x-\sigma_y)\cos 2\theta-\tau_{xy}\sin 2\theta\\
\tau_{xy}(\theta)=-\frac{1}{2}(\sigma_x-\sigma_y)\sin 2\theta+\tau_{xy}\cos 2\theta
\end{gathered}
\end{equation}
When $\theta$=60\textdegree, $\sigma_x$, $\sigma_y$ and $\tau_{xy}$ are -0.10,
0.23 and -0.29 eV/\AA${^2}$ respectively. $SW_\angle$ with $\theta$=120\textdegree~
is same as $\theta$=60\textdegree~in terms of stresses, barring the fact that
$\tau_{xy}$ has opposite sign. The energy cost for a single defect
formation in a 60 atom supercell is 5.4 eV (4.8 eV) for $SW_\|$ ($SW_\angle$), which is in
good agreement with Ref.~\onlinecite{lusk2008}.
The energy difference is due to sizeable long range {\it anisotropic} interactions between the defects (periodic images),
and can be understood in the framework of Ref.~\onlinecite{ertekin2009}.
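The transformation relations and the quoted components can be checked directly; a short illustrative sketch, reproducing the $\theta=60$\textdegree~values from the $SW_\|$ ones (0.40, $-0.27$, 0 eV/\AA$^2$):

```python
from math import cos, sin, radians

def rotate_plane_stress(sx, sy, txy, theta_deg):
    """In-plane stress components after a rotation by theta, using the
    standard plane-stress transformation relations quoted in the text."""
    t = radians(theta_deg)
    m = 0.5 * (sx + sy)   # mean (hydrostatic) part
    d = 0.5 * (sx - sy)   # deviatoric part
    sx_r = m + d * cos(2 * t) + txy * sin(2 * t)
    sy_r = m - d * cos(2 * t) - txy * sin(2 * t)
    txy_r = -d * sin(2 * t) + txy * cos(2 * t)
    return sx_r, sy_r, txy_r
```

Rounding `rotate_plane_stress(0.40, -0.27, 0.0, 60)` to two decimals gives $(-0.10,\ 0.23,\ -0.29)$, matching the text, and $\theta=120$\textdegree~flips only the sign of the shear component.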
GNRs are obtained by cutting the bulk graphene sheet along certain direction $-$ along $x$ ($y$) to
create ZGNR (AGNR) (see \fig{fig1}) and the respective direction becomes the ribbon axis. Based on the analysis presented in the previous paragraph, we can readily predict the nature (sign) of stresses generated by SW defects in a GNR along the ribbon axis and in the transverse direction (i.e. along the ribbon width). However, due to finite thickness, a GNR has the freedom to relax stress along the width by deformation
(expansion/contraction depending on the sign of stress). We find that, in a properly relaxed SW
reconstructed GNR, except the normal stress acting along the ribbon axis, all the other stress tensor components are negligible. Thus, post structural relaxation, compressive (tensile) stress along the ribbon axis remains the only significant term in a $SW_\|$ reconstructed ZGNR (AGNR).
For $SW_\angle$ defect, the sign of induced stress along the ribbon axis is opposite to
that of $SW_\|$.
Based on bulk results, we can also predict the elastic energy cost of SW defect formation in
GNRs. Note that, normal stress created by $SW_\|$ in a particular direction is of higher magnitude
than that generated by $SW_\angle$ defect$-$ in $x$ direction, $\sigma_{\|}/\sigma_\angle=4$, and in $y$ direction,
$\sigma_{\|}/\sigma_\angle=1.2$. Elastic energy cost for defect formation is proportional to the stress. Hence, in a GNR a $SW_\|$ defect is energetically more expensive than a $SW_\angle$ one.
From the above discussion, it is evident that \textit{orientation of the defects with respect to the ribbon axis} plays a vital role in GNRs. We investigate this and its consequences in the rest of this paper.
\subsection{SW Defects in GNRs}
\label{gnr}
\subsubsection{Structural Stability}
\label{mp}
\begin{figure}
\subfigure[Pristine AGNR]{\epsfxsize=4.0truecm \epsfbox{fig2a.eps}}
\subfigure[$A_{\bot 5757}^{2L}$]{\epsfxsize=4.0truecm \epsfbox{fig2b.eps}}
\subfigure[$A_{\bot 757}^{2L}$]{\epsfxsize=4.0truecm \epsfbox{fig2c.eps}}
\subfigure[$A_{\angle 5757}^{2L}$]{\epsfxsize=4.0truecm \epsfbox{fig2d.eps}}
\subfigure[$A_{\angle 57}^{2L}$]{\epsfxsize=4.0truecm \epsfbox{fig2e.eps}}
\subfigure[$A_{\angle 57}^{L}$]{\epsfxsize=4.0truecm \epsfbox{fig2f.eps}}
\subfigure[Pristine ZGNR]{\epsfxsize=4.0truecm \epsfbox{fig2g.eps}}
\subfigure[$Z_{\|575}^{3L}$]{\epsfxsize=4.0truecm \epsfbox{fig2h.eps}}
\subfigure[$Z_{\angle 5757}^{2L}$]{\epsfxsize=4.0truecm \epsfbox{fig2i.eps}}
\subfigure[$Z_{\angle 57}^{2L}$]{\epsfxsize=4.0truecm \epsfbox{fig2j.eps}}
\subfigure[$Z_{\angle 5757}^{3L}$]{\epsfxsize=4.0truecm \epsfbox{fig2k.eps}}
\subfigure[$Z_{\angle 57}^{3L}$]{\epsfxsize=4.0truecm \epsfbox{fig2l.eps}}
\caption{(color online) (a) Pristine AGNR and (b)$-$(f) SW reconstructed AGNRs. (g) Pristine ZGNR and (h)$-$(l) SW reconstructed ZGNRs. Red, appearing light gray in the gray-scale, dots and lines denote the hydrogen atoms and bonds rotated to create SW defects, respectively.}
\label{fig2}
\end{figure}
We first describe the nomenclature for different SW defects in GNRs.
The first letter, $A$ (armchair) or $Z$ (zigzag), denotes the kind of
pristine GNR hosting a SW-defect, and the first subscript denotes the
orientation of SW defect, defined as the angle between the rotated bond and
the ribbon axis. Three possible orientations are
$\|$ ($\theta=$ 0\textdegree), $\bot$ ($\theta=$ 90\textdegree) and
$\angle$ ($\theta\neq$ 0\textdegree ~or $\neq$ 90\textdegree).
The $SW_\|$ defect in bulk graphene described earlier, falls
into two categories in GNRs, $\bot$ and $\|$, to mark its
orientation with AGNR and ZGNR edge respectively. The series of 5's and 7's in the subscript
represent constituent rings of a single defect $-$ pentagons and heptagons. For example,
a $\bot$ defect, with a pair of pentagons and heptagons each in an AGNR is denoted as
$A_{\bot 5757}$ (see \fig{fig2}(b)). Such a defect is away from GNR edges and keeps the
armchair (or zigzag) edge shapes undisturbed (see~\fig{fig2}(b),(d),(i),(k)).
A smaller number of pentagons/heptagons ($575$ or $57$) in the
subscript indicates defects overlapping with the edge, and reconstructed edge shapes typically
differ from that of pristine GNRs (see~\fig{fig2}(c),(e),(f),(h),(j),(l)). The superscript
denotes the length of periodicity along the ribbon axis. $L=3d (\sqrt{3}d)$ for AGNR (ZGNR), where
$d$ is the C-C bond length. Finally, a subscript $w$ is used to differentiate warped ribbons
from planar ones. All the GNRs reported here have H-terminated edges, shown by red dots in \fig{fig2}.
\begin{table}
\caption{Linear density of defects $\eta$ (number of SW defects per unit length), edge formation
energy $E_{edge}$, stress $\sigma$ along the ribbon axis and width W of various GNRs.}
\begin{center}
\begin{tabular}{llllllllllll}
\hline\hline
GNR&$\eta$&$E_{edge}$&\hspace{0.3cm}$\sigma$&W&GNR&$\eta$&$E_{edge}$&\hspace{0.3cm}$\sigma$&W\\
&\tiny{(/\AA)}&\tiny{(eV/\AA)}&\tiny{(eV/\AA)}&\tiny{(\AA)}& &\tiny{(/\AA)}&\tiny{(eV/\AA)}&\tiny{(eV/\AA
)}&\tiny{(\AA)} \\
\hline
AC &0 & 0.04 & 0 & 19.5 & Z &0 &0.10 & 0 & 15.5\\
$A_{\bot 5757}^{2L}$ &0.12 & 0.62 & -11 & 20.1 & $Z_{\parallel 575}^{3L}$ &0.14&0.86 & 19 & 15.0\\
$A_{\bot 757}^{2L}$ &0.12 & 0.51 & -11 & 20.0 & $Z_{\parallel 575w}^{3L}$ &0.14&0.70 & 7 & 14.8\\
$A_{\angle 5757}^{2L}$ &0.12 & 0.51 & 10 & 19.3 & $Z_{\parallel 557}^{2L}$ &0.21&1.66 & 29 & 14.8\\
$A_{\angle 57}^{L}$ &0.24 & 0.86 & 18 & 18.9 &$Z_{\parallel 557w}^{2L}$ &0.21&1.39 & 8 & 14.4\\
$A_{\angle 57w}^{L}$ &0.24 & 0.81 & 11 & 18.8 & $Z_{\angle 5757}^{3L}$ &0.14&0.49 & -5 & 15.9\\
$A_{\angle 57}^{2L}$ &0.12 & 0.40 & 7 & 19.3 & $Z_{\angle 5757}^{2L}$ &0.21&0.53 & -6 & 16.0\\
$A_{\angle 57w}^{2L}$ &0.12 & 0.36 & 3 & 19.0 & $Z_{\angle 57}^{3L}$ &0.14&0.34 & -4 & 16.0\\
& & & & & $Z_{\angle 57}^{2L}$ &0.21&0.37 & -5 & 16.1\\
\hline\hline
\end{tabular}
\end{center}
\label{t1}
\end{table}
We characterize GNRs with two properties: edge formation energy per unit length
and stress along the ribbon axis. The numerical values calculated using first
principles method are reported in Table~\ref{t1}. Edge formation energy per unit length is,
\begin{equation}
E_{edge}=\frac{1}{2L}\left(E_{GNR}-N_CE_{bulk}-\frac{N_H}{2}E_{H_2}\right)
\end{equation}
where $E_{GNR}$, $E_{bulk}$ and $E_{H_2}$ are the total energies of the nanoribbon supercell,
one carbon atom in bulk graphene and of the isolated $H_2$ molecule respectively; $N_C$ ($N_H$) are
the number of carbon (hydrogen) atoms in the supercell. Stress reported here is
$\sigma=(bc)\sigma_{x}$, where $\sigma_{x}$ is the component of stress tensor along
$x$ (ribbon axis) and $b$ and $c$ are the supercell sizes in $y$ and $z$ direction. Other
components of the stress tensor are negligible. $E_{edge}$ is found to be much higher for ZGNR
(0.10 eV/\AA) than for AGNR (0.04 eV/\AA). Our values are slightly higher than
the reported values of 0.08 eV/\AA~(0.03 eV/\AA) for ZGNR (AGNR),\cite{wassmann2008}
obtained using the PBE functional (a GGA functional) for exchange-correlation energy.
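The bookkeeping in the $E_{edge}$ expression can be sketched directly; the energies below are illustrative placeholders, not values from our calculations:

```python
def edge_formation_energy(e_gnr, n_c, e_bulk, n_h, e_h2, length):
    """Edge formation energy per unit length:
    E_edge = (E_GNR - N_C * E_bulk - (N_H / 2) * E_H2) / (2 L),
    where the factor 2L accounts for the two edges of the supercell."""
    return (e_gnr - n_c * e_bulk - 0.5 * n_h * e_h2) / (2.0 * length)
```

By construction, a ribbon whose supercell energy equals the bulk-plus-$H_2$ reference has zero edge energy, and any excess energy localized at the edges appears as a positive $E_{edge}$.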
Edge defects, consisting of fewer number of pentagons and/or heptagons, require less formation energy. For example, consider $A_{\angle 5757}^{2L}$ and $A_{\angle57}^{2L}$, for which
$E_{edge}$ values are 0.51 and 0.40 eV/\AA~respectively. This is true for all SW reconstructed
GNRs if we compare the cases with defects of a particular orientation (see~Table~\ref{t1}).
For varied orientations, $\angle$ SW defects require less formation energy than $\bot$
and $\|$ ones. For example, $E_{edge}$ of $A_{\angle 5757}^{2L}$ is lower by 0.11 eV/\AA~than that
of $A_{\bot 5757}^{2L}$ (consult~Table~\ref{t1} for more such instances). This observation is
consistent with the argument based on our analysis of SW defect in bulk graphene. Note that ribbons with higher linear defect density ($\eta$) have higher $E_{edge}$ and the above comparisons are
meaningful only for edge reconstructed GNRs of similar $\eta$.
Defect orientations with respect to the ribbon edges control the sign of stress induced.
Note that, the sign of stress along the ribbon axis reported in~Table~\ref{t1} matches with the predictions based on our analysis of SW defect in bulk graphene. The ribbon widths (W) vary in the range of 18.8 to 20.1 \AA~for AGNRs and 14.4 to 16.1~\AA~for ZGNRs. This is due to the stress relaxation via deformation in the direction perpendicular to the ribbon axis. For example, an unrelaxed $A_{\bot 5757}^{2L}$ experiences compressive stress along the width and relieves it by expansion in that direction; thus making it slightly wider than the pristine AGNR (consult~Table~\ref{t1}). Similarly, relaxation of the tensile stress along the width by contraction makes a SW reconstructed GNR narrower than pristine GNRs (see~Table~\ref{t1}).
\begin{figure}
\subfigure[$A_{\angle 57w}^L$]{\epsfxsize=8.5truecm \epsfbox{fig3a.eps}}
\subfigure[$A_{\angle 57w}^{2L}$]{\epsfxsize=8.5truecm \epsfbox{fig3b.eps}}
\subfigure[$Z_{\parallel 575w}^{3L}$]{\epsfxsize=8.5truecm \epsfbox{fig3c.eps}}
\caption{(color online) Warped structure of various different edge reconstructed nanoribbons.
Red, green and blue represent elevated, ``flat'' and depressed regions respectively.
In gray-scale, light and dark gray represent ``flat'' and warped regions respectively.
Atoms of a warped GNR are labeled as ``flat'' if $|z|<0.1$\AA, $z$ being
the height of the constituent carbon atoms. For a GNR of strictly flat geometry, $z=0$
for all the carbon atoms.}
\label{fig3}
\end{figure}
In contrast to the in-plane deformation to relax normal stress along the width,
the only way to partially release the normal stress along ribbon axis is by an out-of-plane
deformation $-$ bending. It has been reported that pristine GNRs with dangling bonds relax the
compressive stress by spontaneous warping.\cite{shenoy2008} As shown in~\fig{fig3}, we also find
that SW reconstructed GNRs under compressive stress ($\angle$ ($\|$) in AGNR (ZGNR))
relax by local warping. For example, $\sigma$ ($E_{edge}$) is smaller
by 4 (0.04) eV/\AA~ in $A_{\angle 57w}^{2L}$ than its planar form $A_{\angle 57}^{2L}$
(see~Table~\ref{t1} for more such instances). On the other hand, SW reconstructed GNRs under
tensile stress ($\bot$ ($\angle$) in AGNR (ZGNR)) favor planar structure. In this regard, our
results are in agreement with those of Ref.~\onlinecite{huang2009}, where authors have
considered only the SW defects generating tensile stress and found them to stabilize
the planar geometry.
Our results for the edge reconstructions in $Z_{\angle 57}$ agree well with
earlier work~\cite{wassmann2008}: our estimate of $E_{edge}$ (0.37 eV/\AA)
for $Z_{\angle 57}^{2L}$ is slightly higher than the PBE (a GGA exchange-correlation
functional) estimate (0.33 eV/\AA) of Ref.~\onlinecite{wassmann2008}. However,
we predict here qualitatively different types of reconstructions
in $A_{\angle 57}$ and $Z_{\|575}$ accompanied by warping of H-GNRs,
which originate from different orientations of the SW-defects.
The gain in energy by H-saturation of dangling bonds at the edges of a GNR is so large
that even after the creation of a relatively costly SW-defect, it remains lower in energy
than the GNRs (no H-saturation) studied in Ref.~\onlinecite{huang2009} (a set of DFT
calculations, but the authors did not specify the exchange-correlation functional).
For example, SW-reconstructions at the edges (with dangling bonds) lead to an energy gain of 0.01 (0.18) eV/\AA~ for AGNR (ZGNR).\cite{huang2009} In contrast, we find that SW reconstruction of H-GNRs
{\it costs} at least 0.32 (0.24) eV/\AA~ in AGNR (ZGNR). However, because the
edge energies of pristine H-GNR and GNRs with dangling bonds are of the order of
0.1 eV/\AA~ and 1.0 eV/\AA~ respectively, $E_{edge}$ of the edge
reconstructed H-GNRs presented here (0.36 and 0.34 eV/\AA~ for AGNR and ZGNR)
is much smaller than that (0.99 and 0.97 eV/\AA~ for AGNR and ZGNR) reported in
Ref.~\onlinecite{huang2009}.
So far, we have presented theoretical analysis of Stone-Wales defects
in GNRs with a fixed width (19.5 and 15.5 \AA~ for pristine AGNR and ZGNR respectively).
Since interactions among the SW defects are long-ranged in nature, it will be interesting
to verify how our findings depend on the width of a GNR.
$E_{edge}$ and $\sigma$ for edge reconstructed ribbons of widths 20.7 and 16.9 \AA,
for pristine AGNR and ZGNR respectively, are found to be almost the same as values
reported in Table~\ref{t1}, for GNRs with smaller width. Specifically, our estimates of
$E_{edge}$ and $\sigma$ for $A^L_{\angle 57w}$ type of edge reconstruction are 0.80 eV/\AA~ and
11 eV/\AA~ respectively for wider ribbons. Changes in $E_{edge}$ with width of a GNR are similar
to those in a pristine GNR. Despite long-ranged interactions
among the SW defects, such remarkable insensitivity to the width can be understood by comparing defect concentrations
along the ribbon length and width. Since the defects are located at the edges, the distance between two
adjacent defects across the width is $15-20$ \AA~ (see the W column of Table~\ref{t1}). On the other hand, the
inter-defect distance along the ribbon length is about $4-8$ \AA~ (the inverse of the number reported
in the $\eta$ column of Table~\ref{t1}). Thus, defect-defect interactions along the length of the
ribbon are dominant, explaining the relatively weak dependence of edge properties on the width.
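The spacing comparison above can be checked with a line of arithmetic; the $\eta$ values below are illustrative stand-ins for the Table~\ref{t1} entries, which are not reproduced here:

```python
# Rough numeric version of the spacing argument.  The linear defect
# densities eta are assumed, illustrative values; the edge-to-edge
# distance is the 15-20 Angstrom range quoted in the text.
d_width = 17.5                     # across-width defect spacing (Angstrom)
for eta in (0.125, 0.25):          # assumed defects per Angstrom of edge
    d_length = 1.0 / eta           # along-length spacing: 4-8 Angstrom
    assert d_length < d_width      # neighbours along the length are closer
```

Since the nearest neighbours of any defect lie along the ribbon length, the interaction energy is dominated by the length-wise term.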
We note that the GNRs used in experiments are typically wider than the ones studied here,
and thus our results for edge reconstruction and related phenomena should hold in such cases as well.
The ribbon periodicity ($L$, corresponding to the minimum $E_{edge}$ and $\sigma$ in pristine GNRs) was kept fixed in our analysis, as reported in Table~\ref{t1}. As shown in \fig{fig3}, for certain types of SW reconstructions, buckling relieves $\sigma$ partially and reduces $E_{edge}$. Nevertheless,
there is a small remnant stress, which could be relieved further by allowing the ribbons
to relax along the periodic axis. For example, $E_{edge}$ and $\sigma$ decrease to 0.57 and 1.0 eV/\AA~
respectively, upon relaxation of the periodic length of ribbon $Z^{3L}_{\parallel 575w}$.
This results in an expansion of the ribbon by $4\%$ along its length. Warping still prevails,
though with slightly smaller amplitude and longer wavelength. Thus our results do not change
qualitatively.
We note that relaxation of the periodicity of a GNR involves (a) an elastic energy cost associated with straining of the bulk (central part) of the ribbon and (b) a small energy gain associated with relief of
compressive stress at the edges. The former would dominate in wide ribbons typically used in
experiments, and our results obtained without such relaxation are more relevant to experimental
measurements.
Comparing the $E_{edge}$ values from~Table~\ref{t1}, we conclude that among the edge reconstructed
GNRs: \textit{warped AGNRs and flat ZGNRs are energetically more favorable} than flat AGNRs
and warped ZGNRs. In H-unsaturated GNRs, at an optimal concentration, SW reconstructions
lower the edge energy.\cite{huang2009} We also find that $E_{edge}$ values decrease on
reducing the linear defect density $\eta$ (by embedding the SW defect in a longer supercell).
However, our study is limited to the regime of high $\eta$ only. Whether an
optimal $\eta$, at which the reconstructed edge has lower energy than the pristine edge,
also exists in H-saturated GNRs needs to be investigated and is outside the scope of the present
paper.
\subsubsection{Electronic Properties}
\label{ep}
\begin{figure}
{\epsfxsize=8truecm \epsfbox{fig4.eps}}
\caption{(color online) Density of states (DOS) and scanning tunneling microscope (STM) image of
pristine AGNR visualized with XCRYSDEN~\cite{kokalj}. $E=0$ denotes the Fermi energy.
We have applied a thermal broadening equivalent to room temperature to plot the DOS in this figure and
throughout the rest of the paper. See the text for the definitions of $\rho$, $\rho_e$ and $\rho_i$.
The horizontal bar represents the color scale used in the STM image. The image has been simulated for
a sample bias of -0.3 eV and reflects the spatial distribution of the local density of states (LDOS)
below the Fermi energy.}
\label{fig4}
\end{figure}
Electronic properties of GNRs are sensitive to the geometry of the edges at the boundary. For example,
pristine AGNRs are semiconducting: the bandgap arises from quantum confinement and depends
on the width of the ribbon.\cite{son2006} Pristine ZGNRs also exhibit a gapped energy spectrum, although
of entirely different origin: the gap arises from a localized edge potential generated by
edge magnetization.\cite{son2006} It is well known that the presence of defects or disorder at
the edges can change the electronic, magnetic and transport properties of GNRs to varying
extents.\cite{kumazaki,schubert} In 2D graphene, topological defects such as dislocations or
SW defects give rise to electronic states localized at the defect sites.\cite{ana2008}
The presence of any such defect-induced states near the edges of the SW reconstructed GNRs can have interesting consequences for their electronic and transport properties.
In this section, we analyze electronic properties using the density of states (DOS) and simulated
scanning tunneling microscope (STM) images of pristine and edge reconstructed GNRs.
We decompose total DOS ($\rho$) into the sum of projected
DOS of atoms located at the interior of the ribbon ($\rho_i$) and atoms near the edges ($\rho_e$).
$\rho_e$ is the sum of projected DOS of first two layers of atoms from both the edges.
Since the defects are located at the edges, this decomposition clearly uncovers the differences between the
electronic band structures of a pristine and an edge reconstructed GNR.
Note from \fig{fig3} that
this is the region which undergoes warping (remains flat) if the ribbon edges are under compressive
(tensile) stress. $\rho_i$ includes the projected DOS of the rest of the atoms (located in the region
of the nanoribbon that always remains flat). Depending on the sample bias, STM images help identify the
spatial distribution of local DOS (LDOS) below (-ve bias) and above (+ve bias) the Fermi level ($E_F=0$). These images
should be useful in experimental characterization of GNRs, as well as understanding consequences of such defects
and warping to electronic transport in GNRs.
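The edge/interior decomposition described above can be sketched as follows; the per-atom PDOS arrays and the two-layer geometric criterion are illustrative stand-ins for the output of an actual DFT calculation:

```python
import numpy as np

def decompose_dos(pdos, y, edge_layers=2, layer_spacing=1.42):
    """Split the total DOS into edge (first `edge_layers` atomic layers
    from either edge) and interior contributions.

    pdos : (n_atoms, n_energies) per-atom projected DOS
    y    : (n_atoms,) transverse coordinate across the ribbon width (Angstrom)
    """
    cut = edge_layers * layer_spacing
    edge = (y - y.min() < cut) | (y.max() - y < cut)
    rho_e = pdos[edge].sum(axis=0)     # edge contribution
    rho_i = pdos[~edge].sum(axis=0)    # interior contribution
    return rho_e, rho_i

# toy data: 10 atoms across a 15.5 Angstrom width, 5 energy points each
rng = np.random.default_rng(0)
y = np.linspace(0.0, 15.5, 10)
pdos = rng.random((10, 5))
rho_e, rho_i = decompose_dos(pdos, y)
# by construction the total DOS is recovered: rho = rho_e + rho_i
assert np.allclose(rho_e + rho_i, pdos.sum(axis=0))
```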
The DOS for a 19.5~\AA~wide pristine AGNR with a bandgap of 0.1 eV (\fig{fig4})
shows that both $\rho_i$ and $\rho_e$ contribute to $\rho$ with equal weight and are
symmetric about $E_F$. This symmetry is known as particle-hole symmetry. The
inset of~\fig{fig4} shows that for a sample bias of -0.3 eV, the occupied LDOS
is spread over the entire ribbon. We do not present the corresponding image for a
positive sample bias, which is very similar due to the underlying particle-hole symmetry.
\begin{figure}
{\epsfxsize=8truecm \epsfbox{fig5.eps}}
\caption{(color online) DOS and STM image of $A_{\bot 757}^{2L}$, simulated with a sample bias of +0.2 eV.
The STM image shows the spatial distribution of LDOS above the Fermi energy.}
\label{fig5}
\end{figure}
Edge reconstruction by the $\bot$ SW defect ($7-5-7$ defect) breaks the particle-hole symmetry and a sharp DOS peak
appears above $E_F$ (see~\fig{fig5}), with its primary contribution coming from the edge atoms ($\rho_e>\rho_i$).
The STM image (simulated with a sample bias of +0.2 eV) reveals that
the unoccupied LDOS is localized at the defect sites, located very close to the ribbon edges.
The STM image for a -ve sample bias (not shown here) illustrates that occupied LDOS is spatially distributed over the
entire ribbon, similar to the pristine one.
\begin{figure}
\subfigure[$A_{\angle 57}^{L}$]{\epsfxsize=8truecm \epsfbox{fig6a.eps}}
\subfigure[$A_{\angle 57w}^{L}$]{\epsfxsize=8truecm \epsfbox{fig6b.eps}}
\caption{(color online) DOS and STM image of (a) $A_{\angle 57}^{L}$ and (b) $A_{\angle 57w}^{L}$.
The STM images have been simulated with sample bias of +0.2 and +0.4 eV for (a) and (b) respectively.
They show the spatial distribution of LDOS above the Fermi energy.}
\label{fig6}
\end{figure}
Edge reconstruction by the $\angle$ SW defect ($5-7$ defect) has similar consequences for the electronic band structure of
AGNRs. A sharp peak in the DOS, coming primarily from $\rho_e$, appears above the Fermi energy
(see~\fig{fig6}(a)). Simulated STM image (at sample bias +0.2 eV) confirms that LDOS above the Fermi level
are localized at the edges. For -ve sample bias (not presented here), occupied LDOS is spatially
distributed over the edge as well as the interior atoms. However, for this type of edge reconstruction,
the planar structure is not stable and the ribbon undergoes warping near the edges (see~\fig{fig3}).
As shown in~\fig{fig6}(b), the DOS peak of edge-localized (or rather defect localized) states above the Fermi energy
vanishes in the warped GNR and LDOS above $E_F$ is spatially distributed throughout the ribbon.
This also holds for the LDOS below the Fermi level (STM image not shown here).
This reveals an electronic origin of the defect-induced stress and its anisotropy, rooted in the localized $p$-like
defect state. Delocalization of this state relieves the stress and favors warping.
All the results presented here are for a pristine AGNR of narrow bandgap
(0.1 eV), which undergoes edge reconstructions by various SW defects. We have also investigated
edge reconstructions of a wide-bandgap AGNR. We find qualitative similarities, such as the unoccupied
LDOS localized at the edges, among the edge reconstructed AGNRs of various widths and bandgaps.
However, the magnitude of bandgap depends both on ribbon width and the type of SW reconstruction
present at the edge (i.e. edge shape) and varies over a wide range of values (0.1 eV to 1 eV).
The unoccupied states localized near the edges can have interesting applications in molecule
detection. These states can act as electron acceptors and thus detect suitable
electron-donating molecules.
\begin{figure}
{\epsfxsize=8truecm \epsfbox{fig7.eps}}
\caption{(color online) DOS up to $E_F$(=0.0) and STM image of pristine ZGNR, simulated with a sample bias
of -0.2 eV. The image illustrates the spatial distribution of LDOS below the Fermi energy.}
\label{fig7}
\end{figure}
We have employed the local spin density approximation (LSDA)
in our calculations to explore the possibilities of magnetic ordering in GNRs.
We initialize our calculation with the atoms at the edges of the GNRs
carrying non-zero spin polarization (of the same (opposite) sign at the two edges in the
FM (AFM) configuration). While the magnitude of the spin at the edge atoms changes
in the course of the self-consistency cycle, their relative signs remain the same if
the corresponding magnetic ordering is stable. As mentioned earlier, pristine ZGNRs
have a gapped antiferromagnetic ground state.\cite{son2006,young2006} We illustrate the DOS
and simulated STM image of a pristine ZGNR with a width of 15.5~\AA~in~\fig{fig7}.
The bandgap is 0.3 eV and we show the DOS up to the Fermi level. Note that up and down spin electrons
have similar energy spectrum and are not shown separately. The STM image has been
simulated for a sample bias of -0.2 eV and reveals that LDOS below the Fermi energy is localized
at the zigzag edges. The edges are spin polarized: ferromagnetically coupled along a given
edge and antiferromagnetically between the two opposite edges (not shown here).
\begin{figure}
{\epsfxsize=8truecm \epsfbox{fig8.eps}}
\caption{(color online) DOS and STM image of $Z_{\angle 57}^{2L}$, simulated with a sample bias of
-0.2 eV. The image shows the spatial distribution of occupied LDOS.}
\label{fig8}
\end{figure}
We find that edge reconstructions by SW defects \textit{destroy magnetism}. At high defect density,
this leads to a~\textit{nonmagnetic metallic} ground state and at lower defect
density magnetism survives with a weaker magnitude. In this paper, our investigation is
restricted to the regime of high defect density (where all or most of the zigzag edges have been
reconstructed by SW defects - see~\fig{fig2}(h)-(l)) and we do not discuss the issue of magnetism any further.
The states at $E_F$ arise primarily from the edges for all the zigzag GNRs.
The DOS and simulated STM image (bias voltage -0.2 eV) of a $\angle$ SW ($5-7$ defect) edge reconstructed ZGNR (see \fig{fig8})
reveal a nonzero DOS at $E_F$(=0), indicating a nonmagnetic metallic ground state. The STM image shows the formation
of nearly isolated dimers along the edge. Reconstruction of ZGNR edge by $\parallel$ SW defect
($5-7-5$ defect) does not alter the electronic properties qualitatively. Such ZGNRs are also
nonmagnetic metallic with planar as well as warped geometries (not shown here). However, these are very high energy
edges and are unlikely to be preferred over $5-7$ SW reconstructions in ZGNRs.
The $5-7$ defects can act as an interface in hybrid graphene
and hybrid GNRs, which possess both armchair- and zigzag-like features. Such materials have
remarkable electronic and magnetic properties.\cite{botello2009}
\section{Conclusion}
\label{con}
In conclusion, the sign of the stress induced by a SW defect in a GNR depends on
the orientation of the defect with respect to the ribbon edge, and the relaxation of the
structure to relieve this stress governs its stability. Local warping or wrinkles
arise in the GNR when the stress is compressive, while the structure remains planar otherwise.
The specific consequences to AGNR and ZGNR can be understood from the anisotropy of the
stress induced by a SW defect embedded in bulk graphene. Using the analogy between a SW-defect and a dislocation, it should be possible to capture the interaction between a SW defect in the interior of a GNR and its edge within a continuum framework that includes images of SW-defects in the edges. As the images of SW-defects are also SW-defects, their interactions can be readily captured within the continuum framework of Ref.~\onlinecite{ertekin2009}. Our work shows how warping of GNRs can be nucleated at the SW-defects localized at the edges and be responsible for flake-like
shapes of graphene samples seen commonly in experiments. Such warping results in delocalization of electrons in the defect states. In ZGNRs, magnetic ordering weakens due to the presence of SW defects at the edges
and the ground state is driven towards that of a nonmagnetic metal.
\section{Acknowledgment}
SB thanks Vijay B Shenoy for valuable discussions, suggestions and comments. UVW acknowledges
support from an IBM Faculty award.
\bibliographystyle{apsrev}
\section{Introduction}
\label{intro} Optically carried microwave signals are of special
interest in a wide range of applications, from optical
communication to opto-electronic detection techniques such as Lidar-Radar~\cite{morvan02}, and to fundamental metrology
and spectroscopy. For instance, time and frequency dissemination at long
distances through optical fibres has shown extremely high
performance~\cite{daussy05}. Furthermore, this principle is also widely used in atomic physics. As an example, it is at the basis of
Coherent Population Trapping (CPT) clocks~\cite{CPT}, which
benefit from a reduced size compared to standard microwave
clocks. This method is also used in most of the atom
interferometers to generate Raman lasers for manipulating atomic
wave-packets~\cite{kasevich91}. In fact, it enables the reduction of
propagation noise along the laser paths and of systematic errors. The
signal is composed of two optical frequencies separated by a frequency
in the microwave range. This can be achieved by different means:
directly from a two frequency laser~\cite{morvan02}, by a single
sideband electro-optic modulator~\cite{seeds02}, by filtering two
sidebands from an optical comb~\cite{goldberg83,kefelian06} or
from a phase-lock between two independent
lasers~\cite{santarelli94}, as used in this work. In most of the
applications, these sources are not powerful enough and would
benefit from being amplified without adding extra sidebands or
extra phase noise onto the microwave signal.
\begin{figure}[!h]
\centering \resizebox{8.5cm}{!}{
\includegraphics{lasers_raman.eps}}
\caption{Laser setup. The frequency-reference laser L1 is locked
on a spectroscopy signal. The L2 laser frequency is mixed with the
frequency-reference laser and the optical beat note is compared to
a microwave reference and phase-locked through a Digital Phase and
Frequency Detector (DPFD). L1 and L2 are combined on a polarizing
beam splitter cube and amplified using the same tapered amplifier
(TA). The output power is injected in a polarizing fibre through
an acousto-optical modulator (AOM).} \label{bancraman}
\end{figure}
In this paper, we report a study of the influence of a
semi-conductor tapered amplifier on a two frequency laser system.
This setup is dedicated to the generation of a pair of Raman
lasers at $\lambda$~=~852~nm, with a fixed frequency difference
close to 9.192~GHz for further use in an atom
interferometer~\cite{canuel06}. The experimental setup is
described in section
II. Sections III and IV are dedicated to the characterization and measurement of the optical spectrum and to the analysis of the spurious sideband generation due to non-linear effects in the gain medium~\cite{ferrari99}. Then we measure the extra noise added by the amplifier on the microwave signal in section V. Finally, the impact on our atom interferometer is quantified in section VI.\\
\section{Experimental setup}
The laser setup consists of two external-cavity laser diodes using
SDL~5422 chips. These sources are based on intracavity
wavelength selection by an interference filter~\cite{baillard06}
and benefit from a narrow linewidth (14~kHz) and a wide tunability
(44~GHz). The diodes are regulated around room temperature and
supplied by a current of 80~mA to provide an optical output power
of 45~mW. The frequency locks are achieved by feedback to the
diode current and to the length of the external cavity.
The first laser L1 (Fig.~\ref{bancraman}), is used as an absolute
frequency reference. It is locked 300~MHz red detuned from the
atomic transition between the $|6S_{1/2},F=3\rangle$ and\\
$|6P_{3/2}, F=2\rangle$ states of Caesium (D2 line) using a
frequency modulation spectroscopy technique~\cite{hall81}.
The phase difference between L1 and L2 is locked with the
method described in~\cite{cheinet06} and summarized in the
following. Small amounts of light of L1 and L2 are superimposed on a fast
photoconductor (PhD$_{12}$, Hamamatsu G4176, bandwidth: 15~GHz).
The beat note at $\nu _{12}$~=~9.192~GHz is then mixed with a
reference signal, given by a microwave synthesizer~\cite{yver04}.
The output signal is sent to a Digital Phase and Frequency
Detector (DPFD, MCH~12140) which derives an error signal
proportional to the phase difference between the two lasers. After
shaping and filtering, this output signal is used to generate the
feedback allowing the laser L2 to be phase-locked on L1. In this way, the
features of the microwave reference are mapped onto the optical
signal with a bandwidth of 3.5~MHz.
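The way the loop copies the microwave reference onto the optical beat note within its bandwidth can be illustrated with a minimal first-order loop model; the actual loop filter is more elaborate, and the 3.5~MHz unity-gain bandwidth is the only number taken from the text:

```python
import numpy as np

# First-order phase-lock model with open-loop gain G(f) = f_bw/(i f).
# In-loop noise of the free-running laser is suppressed by |1/(1+G)|^2,
# while the microwave reference noise is copied with |G/(1+G)|^2.
f_bw = 3.5e6                                # unity-gain bandwidth (Hz)
f = np.array([1e3, 1e5, 3.5e6, 1e8])        # Fourier frequencies (Hz)
G = f_bw / (1j * f)
suppression = np.abs(1.0 / (1.0 + G))**2    # residual laser noise
copy = np.abs(G / (1.0 + G))**2             # reference-noise transfer
# well inside the bandwidth, the reference dominates ...
assert suppression[0] < 1e-6 and copy[0] > 0.99
# ... far outside, the free-running laser noise passes unchanged
assert abs(suppression[-1] - 1.0) < 1e-2
```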
In order to provide sufficient optical power, the output signals of L1 and
L2 are injected with the same linear polarization in a GaAs
tapered semiconductor waveguide amplifier (TA,
EYP-TPA 0850-01000-3006 CMT03) pumped by a current of $I$~=~2~A
and stabilized to room temperature. A half-wave plate and a
polarizing cube allow the power ratio between the two lasers to be
adjusted at the input of the TA. In normal operation, 11.2~mW of
L1 and 16.2~mW of L2 are injected in the TA, which runs in a
saturated gain regime. Then the output beam (with a power of
770~mW) passes through an acousto-optical modulator (AOM),
driven at 80~MHz, from which the first diffraction order is coupled
to a polarizing fibre. This scheme enables the laser light to be
pulsed on the atomic system by switching the RF signal of the AOM.
\section{Self-phase and amplitude modulation}
In this part, we study the sideband generation due to simultaneous
phase and amplitude modulation in the gain medium. As two closely
spaced optical frequencies are injected into the TA, it is crucial
to determine the spectral impact of potential non-linear effects
in the semiconductor during the amplification process. Indeed, the
beat note at $\nu_{12}=$ 9.192~GHz between L1 and L2 induces a
strong modulation of the power through the TA which affects the
gain and the optical index in the semiconductor~\cite{ferrari99}.
In this situation, the resulting sidebands could cause undesirable
effects in our experiment: for instance, detuned excitations can
shift the energy levels of our atomic system (the light
shift~\cite{weiss94}), or they could drive unwanted Raman
transitions. For this reason, we conducted simulations which were
compared to experimental measurements.
The total electric field propagating through the TA can be
described by,
\begin{eqnarray}
\label{general} E\left(t,z\right) &=& A\left(t,z\right)
e^{-i\left(\omega_0 t-kz\right)},\\
\nonumber &=&
\left|A\left(t,z\right)\right|e^{i\psi\left(t,z\right)}\times
e^{-i\left(\omega_0 t-kz\right)},
\end{eqnarray}
where $A\left(t,z\right)$ is the wave envelope which varies at the
microwave frequency, $k$ is the wave vector, and $\omega_0$ is the
optical carrier frequency. Denoting by
$P\left(t,z\right)=\left|A\left(t,z\right)\right|^2$ the
envelope power and by $\psi\left(t,z\right)$ its phase, we obtain at
the TA input the modulation profile,
\begin{equation}
\label{PIN} P\left(t,z=0\right)=P_{0}\left(1+m \cdot \cos
\left(2\pi \nu_{12} t+\phi\right) \right),
\end{equation}
where $P_{0}$ is the nominal modulation power, $m$ is the
modulation factor and $\phi$ is the phase difference between L1
and L2 (see Appendix \ref{equations_interactions}). In our case,
we have $P_0 \simeq 27.4$~mW and $m \simeq 0.983$. The amplifier
is then driven between a saturated state and a non-saturated state
at the frequency $\nu_{12}$. In order to calculate the sideband
generation expected from the amplification process, we write,
as for any amplifying medium, the equations describing the interaction between the carriers
and the light field. These were described in detail in
references~\cite{agrawal89,omahony88} for the case of a constant
cross-section amplifier. Taking into account the amplifier splay
(see appendix~\ref{equations_interactions}), these equations
become,
\begin{eqnarray}
\label{Equation_systeme_P}
\frac{\partial P}{\partial z}+\frac{1}{v_g} \frac{\partial P}{\partial t}=gP,\\
\label{Equation_systeme_phi}
\frac{\partial \psi}{\partial z}+\frac{1}{v_g} \frac{\partial \psi}{\partial t}=-\frac{1}{2}\alpha g,\\
\label{Equation_gain_g} \frac{\partial g}{\partial t}=
\frac{g_0-g}{\tau_c}-\frac{gP}{E\mathrm{_{sat0}}\left(1+\mu z\right)},
\end{eqnarray}
where $\tau_{C}$ is the carrier lifetime in the semiconductor,
$E\mathrm{_{sat0}}$ the saturation energy for the initial amplifier cross
section, $\mu$ is the amplifier splay factor, $\alpha$ the
linewidth enhancement factor and $v_g$ is the wave group velocity.
$g$ refers to the linear gain and the small-signal
gain $g_{0}$ is defined as,
\begin{equation}
\label{g0} g_{0}=\Gamma a N_{0} \left(\frac{I}{I_{0}}-1\right),
\end{equation}
where $I_{0}$ and $N_{0}$ are the current and carrier density
required for transparency, $a$ the gain coefficient and $\Gamma$
the confinement factor (see
Appendix~\ref{equations_interactions}).
The sideband generation is due to a phase and an amplitude
modulation resulting from a non-linear gain modulation. This gain
modulation depends on the field amplitude along the amplifier and
leads to a non-linear distortion of the signal at the output of the
TA. A modification of the optical
index~(\ref{Equation_systeme_phi}) induces an optical phase
modulation. Indeed, an increase of the gain modulation exacerbates
the phase and the power modulation distortions at the same time.
The evaluation of this effect requires the set of equations
(\ref{Equation_systeme_P}-\ref{Equation_gain_g}) to be solved.
Since the relaxation time $\tau_c$ and the excitation
characteristic time $1/\nu_{12}$ are of the same order of
magnitude, the usual adiabatic approximation cannot be used.
Therefore, the system has been solved numerically to obtain the
steady state electric field at the output of the TA for comparison
with experimental measurements.
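A minimal numerical sketch of this procedure is given below. It neglects the transit time through the amplifier and the splay ($\mu=0$), and the carrier lifetime and amplifier length are plausible assumed values rather than measured ones, so only the qualitative behaviour (amplification, weak sideband generation, the input modulation depth of Eq.~(2)) should be trusted:

```python
import numpy as np

# Sketch of a time-domain integration of Eqs.(3)-(5).  Simplifications:
# each time step propagates the field instantaneously through all z
# slices (transit time neglected) and the splay is ignored (mu = 0).
# tau_c and L are assumed, typical values, not measured ones.
g0, Esat, alpha = 1.33e3, 8e-12, 6.0       # fitted TA parameters
tau_c, L, nz = 0.5e-9, 2.75e-3, 40         # assumed lifetime, length, slices
dz = L / nz
nu12, s = 9.192e9, 64                      # beat frequency, samples/period
dt = 1.0 / (s * nu12)
P1, P2 = 11.2e-3, 16.2e-3                  # injected powers (W)
m = 2 * np.sqrt(P1 * P2) / (P1 + P2)       # ~0.983, as below Eq.(2)

g = np.full(nz, g0)                        # gain of each slice (1/m)
n_per = 8                                  # recorded beat periods
out = np.zeros(s * n_per, dtype=complex)
n_settle = int(5 * tau_c / dt)             # let carriers reach steady state
for n in range(n_settle + out.size):
    A = np.sqrt(P1) + np.sqrt(P2) * np.exp(-2j * np.pi * nu12 * n * dt)
    for j in range(nz):
        Pj = abs(A)**2
        A *= np.exp((1.0 - 1j * alpha) * g[j] * dz / 2)        # Eqs.(3)-(4)
        g[j] += dt * ((g0 - g[j]) / tau_c - g[j] * Pj / Esat)  # Eq.(5)
    if n >= n_settle:
        out[n - n_settle] = A
spec = np.abs(np.fft.fft(out) / out.size)**2
ratio = spec[-n_per] / spec[0]             # new sideband vs the L1 line
```

The FFT of the steady-state output envelope contains, besides the two injected lines, a weak line at the opposite beat-frequency offset, which is the sideband measured in the next section.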
\section{Optical spectrum measurement}
Due to the weak power expected in each sideband, the usual methods of
measuring an optical spectrum, such as transmission through a
Fabry-Perot cavity or diffraction on a high-resolution grating,
are not suitable. Instead, we use the beat note with a close known
optical frequency in order to achieve the precise measurement of
the different components of the laser beam. The output signal of a
beat note contains a frequency component corresponding to the
difference between the two mixed frequencies. It can be measured
with a fast photodetector. The power of the beat note is then
proportional to the product of the fields of the two components.
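This product rule can be verified with a short simulation of square-law photodetection; the powers used are illustrative:

```python
import numpy as np

# Two optical tones on a fast photodetector: the photocurrent follows
# |E|^2, so the component at the difference frequency has amplitude
# 2*sqrt(P1*P2) -- proportional to the product of the two fields.
P1, P2 = 1.0, 0.04            # illustrative optical powers (a.u.)
df = 9.192e9                  # difference frequency (Hz)
t = np.arange(4096) / (64 * df)            # 64 samples per beat period
E = np.sqrt(P1) + np.sqrt(P2) * np.exp(-2j * np.pi * df * t)
i_pd = np.abs(E)**2                        # detected power (photocurrent)
spec = np.fft.fft(i_pd) / t.size
beat = 2 * np.abs(spec[64])                # bin of the difference frequency
assert np.isclose(beat, 2 * np.sqrt(P1 * P2))
```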
\begin{figure}[!h]
\centering \resizebox{8.5cm}{!}{
\includegraphics{lasers_sonde.eps}}
\caption{Probe laser setup. The frequency-reference laser L3 is
locked on a spectroscopy signal. The L4 laser beam is mixed with
the L3 beam and the beat note signal at 8.7~GHz is servo looped
through a Frequency to Voltage Converter (FVC). The L4 laser, used
to generate the probe beam, is injected in a fiber.}
\label{lasersonde}
\end{figure}
In order to realize a spectrum measurement, we set up a probe
laser L4. As shown on Fig.~\ref{lasersonde}, the frequency of L4
can be adjusted around the Caesium D2 line by comparison to a
reference laser L3. This frequency lock is achieved by measuring
the beat note of the two lasers, through a Frequency to Voltage
Converter (FVC). A reference DC voltage (VC4) allows the frequency
detuning between L3 and L4 to be set around 8.6~GHz. As a result,
L4 is detuned from L1 by $\nu_{14}$ = 8.3~GHz
(Fig.~\ref{beatnote}a). The optical frequency spectrum of the
phase-locked lasers (L1+L2) can be analyzed by superimposing the
probe beam L4 on the photoconductor PhD$_{12}$, which exhibits the
microwave components of the beat note between L1, L2 and L4. As
explained before, mixing three optical frequencies on a
photodetector gives rise to three frequencies in the microwave
domain. In this setup, L2 and L4 are sufficiently close that the
beat note between these two lasers~(950~MHz) is filtered out by
the detection bandwidth. Thus, we verify that before optical
amplification, the microwave spectrum of the PhD$_{12}$ signal is
composed of a component at $\nu_{12}$ of power $P_{12}$ and a
component at frequency $\nu_{14}$ of power $P_{14}$. In order to
investigate the impact of the amplification on the spectral content,
the same beat note is measured again at the output of the
polarizing fibre and displayed on~Fig.~\ref{beatnote}b. Compared
to the expected microwave spectrum, containing the two previous
frequencies ($\nu_{12}$ and $\nu_{14}$), it exhibits an additional
signal at $\nu_{s4}$~=~10.142~GHz of power $P_{s4}$ which is
related to the beat note between L4 and an additional sideband
detuned by 9.197~GHz from L2. Another symmetric sideband at
9.192~GHz above L1 leads to a beating signal with L4 out of the
bandwidth of detection. Small spurious sidebands around $\nu_{12}$
are generated by the acousto-optic modulator~(AOM) at 80~MHz and
can be ignored in the following.
\begin{figure}[!h]
\centering \resizebox{7cm}{!}{
\includegraphics{spectre_schema.eps}}
\resizebox{7cm}{!}{
\includegraphics{spectre.eps}}
\caption{(a) Position of the different components in the optical
spectrum. (b) Measured beat note between L1, L2 and the probe
laser L4 at the output of the polarizing fibre. Three main
frequencies are obtained: $\nu_{12}$ (9.192~GHz), $\nu_{14}$
(8.245~GHz) and $\nu_{s4}$ (10.142~GHz). } \label{beatnote}
\end{figure}
The TA characterization requires the determination of the set of
parameters ($g_0, E\mathrm{_{sat0}},\alpha,I_0, a,N_0$). For this
purpose, we perform three measurements, which are displayed in
Fig.~\ref{measure}. First, we measure $P_{12}$ as a function of
the total optical input power~(Fig.~\ref{measure}a). Then we
record the two magnitudes $P_{14}$ and $P_{s4}$ as a function of
the total input optical power~(Fig.~\ref{measure}b) and the
current~(Fig.~\ref{measure}c). From $P_{s4}/P_{14}$, we deduce the
ratio between the fields of L1 and the sideband.
Using (Fig.~\ref{measure}a) and (Fig.~\ref{measure}b), we infer
$g_0\simeq 1.33\times 10^3$~m$^{-1}$, $E\mathrm{_{sat0}}\simeq
8$~pJ and $\alpha \simeq 6$, which are in agreement with the
values indicated in reference~\cite{agrawal89}. From
$E\mathrm{_{sat0}}$, we get $a = 5.2\times 10^{-15}$~cm$^2$. Then,
we use (Fig.~\ref{measure}c) to work out $I_0 \simeq 0.12$~A and
$N_0 \simeq 1.8\times 10^{20}$~m$^{-3}$. It gives the active
volume $V\simeq 8\times 10^{-13}$~m$^3$, in good agreement with
the geometrical calculation $V\simeq 5\times 10^{-13}$~m$^3$.
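As a quick consistency check of Eq.~(6) with the numbers quoted above (the confinement factor $\Gamma$ is not quoted in this section, so a typical value is assumed):

```python
# Consistency check of Eq.(6): g0 = Gamma * a * N0 * (I/I0 - 1).
# Gamma is an assumed, typical confinement factor; the other numbers
# are the fitted parameters quoted in the text.
Gamma = 0.9                    # assumed confinement factor
a = 5.2e-15 * 1e-4             # gain coefficient, cm^2 -> m^2
N0 = 1.8e20                    # transparency carrier density (m^-3)
I, I0 = 2.0, 0.12              # operating and transparency currents (A)
g0 = Gamma * a * N0 * (I / I0 - 1.0)
print(g0)                      # ~1.3e3 m^-1, close to the fitted g0
```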
\begin{figure}[!h]
\centering \resizebox{8cm}{!}{
\includegraphics{sideband_measurements.eps}}
\caption{(a) Evolution of the microwave signal at frequency
$\nu_{12}$ versus the total optical input power. Variation of the
ratio between the microwave signals $P_{14}$ and $P_{s4}$ at
frequencies $\nu_{14}$ and $\nu_{s4}$ (i.e. ratio of the electric
field in the sideband and in L1) as a function of the total
optical input power in the TA (b) and of the input current in the
TA (c).} \label{measure}
\end{figure}
In normal operation, the ratio ($P_{s4}/P_{14}$) of the microwave
powers is -22~dB, corresponding to an optical power ratio
between the sideband and L1 of $4\times 10^{-5}$. As expected,
increasing the current or the input power leads to an increase in
sideband generation. Finally, we work out from the simulation that
the index modulation is responsible for 90 percent of the sideband
generation and the amplitude modulation for the
remaining 10 percent.
\section{Phase noise measurement}
The residual phase noise between the two lasers is crucial for the
sensitivity of our experiment. Indeed, the laser phase difference
is imprinted on the atom wave-function during the stimulated Raman
transitions~\cite{cheinet08}. The effective phase noise affecting
the interferometer is the sum of different contributions: the
noise induced by the microwave frequency reference~\cite{yver04},
the residual noise in the phase-lock loop~\cite{cheinet06} and the
noise added along the propagation through the TA and the fibre.
The present study is focused on the characterization of these two
last terms.
The additional phase noise is measured by mixing the beat notes of
the two lasers (L1 and L2) before and after the propagation along
the TA and the fibre. The two microwave signals are then amplified
and combined together on a mixer~(ZMX-10G). Their relative phase
$\phi_{A}$ is adjusted with a phase-shifter and set around $\pi/2$
to reach the optimal phase sensitivity at the mixer output. After
filtering the high frequency components, the output signal is
given by,
\begin{equation}
\label{smixer} s_{Mixer}=K_{d} \cos \left(\tilde{\phi_{n}} +
\phi_{A} \right) \approx K_{d} \cdot \tilde{\phi_{n}},
\end{equation}
where $K_{d}$ represents the scaling factor (0.3~V/rad) between
the phase shift and the output level of the mixer.
$\tilde{\phi_{n}}$ is the phase noise between the two beat notes.
The measurement of the power spectral density of phase noise
$S_\phi$ is obtained with a FFT analyzer and displayed on
Fig.~\ref{phasenoise}. It exhibits the phase noise contribution
induced by the TA and the fibre. The measurement is compared to
the detection noise, which is obtained using the same signal in
both inputs of the mixer. This detection noise represents the
lower limit of the phase noise which can be detected by this
setup. No significant additional phase noise from the TA and the
fibre is measured above 1~Hz.
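The conversion from the measured voltage spectral density to $S_\phi$ follows from the small-angle expansion around $\phi_A=\pi/2$, sketched below; the voltage PSD value is illustrative:

```python
import numpy as np

# Around phi_A = pi/2 the mixer output K_d*cos(phi_n + phi_A) is, up
# to a sign, linear in the phase fluctuation phi_n, so a measured
# voltage PSD S_V converts to a phase-noise PSD as S_phi = S_V/K_d^2.
K_d = 0.3                       # mixer scale factor (V/rad)
phi_n = np.linspace(-0.05, 0.05, 101)           # small phase error (rad)
v = K_d * np.cos(phi_n + np.pi / 2)             # = -K_d*sin(phi_n)
lin = -K_d * phi_n                              # linearized response
assert np.max(np.abs(v - lin)) < K_d * 1e-4     # cubic error is tiny here
S_V = 1e-14                     # illustrative voltage PSD (V^2/Hz)
S_phi = S_V / K_d**2            # phase-noise PSD (rad^2/Hz)
```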
\begin{figure}[!h]
\centering \resizebox{8cm}{!}{
\includegraphics{phase_noise.eps}}
\caption{Phase noise power spectral density. The dashed blue curve
shows the noise from the detection system, while the dotted red and
solid black curves represent the noise after the TA
and after the fibre, respectively.} \label{phasenoise}
\end{figure}
\section{Impact on an atom interferometer}
Raman transitions enable the manipulation of atomic wave-packets
by coupling two internal states of an atom using a two photon
transition~\cite{kasevich91}. In order to drive these transitions,
two lasers with a difference of frequency in the microwave range
are needed. This laser setup is implemented to realize the
functions of mirrors and beam splitters for atoms~\cite{borde91}
in a six-axis inertial sensor~\cite{canuel06}. A sequence of three
optical pulses ($\pi/2$, $\pi$, $\pi/2$) of duration $\tau$,
separated by free propagation time~$T$ is used to create the
equivalent of a Mach-Zehnder
interferometer~(Fig.~\ref{interferometer}) for
atoms~\cite{borde02}. Here, we estimate the possible limitations
induced by the common amplification of the two Raman lasers in our
specific device, which come from unwanted Raman transitions due to
sideband generation or from additional phase noise in the TA.
\begin{figure}[!h]
\centering \resizebox{8cm}{!}{
\includegraphics{interferometre.eps}}
\caption{Scheme of the $\pi/2$-$\pi$-$\pi/2$ interferometer. The
Raman beam splitters of duration $\tau$ are separated by a free
evolution time~$T$. Atomic wave-packets are split, deflected and
recombined by the Raman lasers to realize the equivalent of a
Mach-Zehnder interferometer.} \label{interferometer}
\end{figure}
First, the sideband generation in the TA is small enough not to
give rise to significant Raman transitions between laser L1 or L2
and one of the sidebands. The diffraction process is characterized
by the effective Rabi frequency~\cite{moler1992} which scales as
the product of the fields of the two lasers and the inverse of the
Raman detuning. The sideband is 22~dB weaker than the main optical
fields and detuned 9~GHz further from resonance. Therefore the
corresponding Rabi frequency is reduced by almost four orders of
magnitude.
Second, we estimate the additional phase shift from the TA. The
phase shift at the output of the interferometer is deduced from
the transition probability of the atoms between the two coupled
states. It depends on the inertial forces and on the phase
noise between the two lasers~$\Delta\phi_{\mathrm{laser}}$~\cite{cheinet08}, as the phase difference between L1 and
L2~($\phi(t_i)$) is imprinted on the atomic wave function at the
moment of each pulse,
\begin{equation}
\label{laserphase} \Delta\phi_{\mathrm{laser}}=\phi(t_1)-2\phi(t_2)+\phi(t_3).
\end{equation}
The transfer function weighting the phase noise power spectral
density simplifies at low frequency ($f \ll 1/\tau$) to
\begin{equation}
\label{H} \mid H(2\pi f)\mid ^{2} = \frac{4}{\pi^{2} \cdot f^{2}}
\sin^{4} \left(\pi f T \right),
\end{equation}
where $T$, the interaction time, is typically 40~ms. For high
frequencies, the transfer function decreases as a second-order
low-pass filter with an effective cutoff frequency of $1/(4\sqrt{3}\,\tau)$.
The total phase noise added by the TA and the fibre is given by
the integration over the full spectral range of $S_\phi$ weighted
by the transfer function. It leads to shot-to-shot fluctuations
of the output atomic phase of~0.21~mrad. This
contribution is negligible compared to the noise generated by the
present microwave synthesizer (2.51~mrad), or by the best frequency
synthesizers based on current quartz oscillators ($\sim$~1~mrad).
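The weighting procedure just described can be sketched numerically: the code below integrates a phase noise PSD against the low-frequency transfer function $|H(2\pi f)|^{2}$ given above (the flat $S_\phi$ level is an illustrative assumption, not the measured spectrum):

```python
import numpy as np

def H2(f, T=0.04):
    """Low-frequency interferometer transfer function |H(2 pi f)|^2, valid for f << 1/tau."""
    return 4.0 / (np.pi**2 * f**2) * np.sin(np.pi * f * T) ** 4

f = np.logspace(0, 4, 20001)               # 1 Hz .. 10 kHz
S_phi = np.full_like(f, 1e-10)             # illustrative flat PSD, rad^2/Hz

# shot-to-shot variance of the atomic phase: integral of S_phi * |H|^2
integrand = S_phi * H2(f)
var = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f))  # trapezoid rule
print(f"rms atomic phase fluctuation: {1e3 * np.sqrt(var):.3f} mrad")
```

Note the transfer function vanishes at the multiples of $1/T$, so noise at those frequencies does not contribute.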
\section{Conclusion}
To conclude, we demonstrate that amplifying an optically carried
microwave signal with a tapered amplifier induces neither
microwave phase noise nor significant spurious optical sidebands.
This setup therefore does not limit the
sensitivity of our device, which operates with parameters relevant
to cold atom interferometry. The sideband generation has been carefully
characterized and shows a good agreement with the calculations.
This effect becomes more significant as the microwave frequency
gets smaller. For instance, the model shows that similar results
should be found for an interferometer based on Rb atoms (microwave
frequency of 6.834~GHz), with an increase of the sideband
generation as small as 2~dB.
More generally, this shows that a powerful optically carried
microwave signal can be realized in two stages: one generating the
optically carried microwave signal and the other supplying the
optical power. The quality of the amplification does not depend on
a particular method to generate the optical signal and should be
similar when sideband generation in an electro-optic modulator is
used to generate the microwave signal. This can lead to
simplifications of optical systems in atom
interferometry~\cite{nyman06} and of amplification for frequency
and time dissemination at long distance. In addition, this method
could simplify atom interferometer experiments in which several
separated powerful Raman beams are required~\cite{muller08}, as
they can be amplified independently without special care. Finally,
this study has been realized in the steady state regime, and might
be extended to the pulsed regime~\cite{Takase2007} which allows for
higher laser power.
\begin{acknowledgments} We would like to thank G. Lucas-Leclin for
fruitful discussions, E. Foussier for her contribution to the
experimental setup, J. D. Pritchard and C. Garrido Alzar for
careful readings. We also thank the Institut Francilien pour la
Recherche sur les Atomes Froids (IFRAF) and the European Union
(FINAQS STREP NEST project contract no 012986 and EuroQUASAR IQS
project) for financial support. T.L. thanks the DGA for supporting
his work. W.C. thanks IFRAF for supporting his work.
\end{acknowledgments}
\section{Introduction}
In a previous paper \cite{BH2} we have considered \textit{contact pair structures} and studied some properties of their associated metrics.
This notion was first introduced by Blair, Ludden and Yano \cite{Blair2} under the name {\it bicontact} structures, and is a special type of $f$-{\it structure} in the sense of Yano \cite{yano}.
More precisely, a \emph{metric contact pair} on an even dimensional manifold
is a triple $(\alpha_1 , \alpha_2 , g)$, where $(\alpha_1 , \alpha_2)$ is a contact pair (see \cite{BH}) with Reeb vector fields $Z_1$, $Z_2$, and $g$ a Riemannian metric such that $g(X, Z_i)=\alpha_i(X)$, for $i=1,2$, and for which the endomorphism field $\phi$ uniquely defined by $g(X, \phi Y)= (d \alpha_1 + d
\alpha_2) (X,Y)$ verifies
\begin{equation*}\label{d:cpstructure}
\phi^2=-Id + \alpha_1 \otimes Z_1 + \alpha_2 \otimes Z_2 , \;
\phi(Z_1)=\phi(Z_2)=0 .
\end{equation*}
Contact pairs always admit associated metrics with {\it decomposable} structure tensor $\phi$, i.e. $\phi$ preserves the two characteristic distributions of the pair (see \cite{BH2}).
In this paper, we first show that for a given contact pair all such associated metrics have the same volume element. Next we prove that with respect to these metrics the two characteristic foliations are orthogonal and minimal.
We end by giving an example where the leaves of the characteristic foliations are not totally geodesic.
All the differential objects considered in this paper are assumed to be smooth.
\section{Preliminaries on metric contact pairs}\label{s:prelim}
In this section we gather the notions concerning contact pairs
that will be needed in the sequel. We refer the reader to
\cite{Bande1, Bande2, BH, BH2, BH3, BGK, BK} for further information and several
examples of such structures.
\subsection{Contact pairs and their characteristic foliations}\label{s:prelimcp}
Recall that a pair $(\alpha_1, \alpha_2)$ of $1$-forms on a manifold is said
to be a \emph{contact pair} of type $(h,k)$ if:
\begin{eqnarray*}
&\alpha_1\wedge (d\alpha_1)^{h}\wedge\alpha_2\wedge
(d\alpha_2)^{k} \;\text{is a volume form},\\
&(d\alpha_1)^{h+1}=0 \; \text{and} \;(d\alpha_2)^{k+1}=0.
\end{eqnarray*}
Since the form $\alpha_1$ (resp. $\alpha_2$) has constant class
$2h+1$ (resp. $2k+1$), the characteristic distribution $\ker \alpha_1 \cap \ker d\alpha_1$ (resp.
$\ker \alpha_2 \cap \ker d\alpha_2$) is completely integrable and determines the so-called \emph{characteristic
foliation} $\mathcal{F}_1$ (resp. $\mathcal{F}_2$) whose leaves are endowed with a contact form induced by $\alpha_2$ (resp. $\alpha_1$).
The equations
\begin{eqnarray*}
&\alpha_1 (Z_1)=\alpha_2 (Z_2)=1 , \; \; \alpha_1 (Z_2)=\alpha_2
(Z_1)=0 \, , \\
&i_{Z_1} d\alpha_1 =i_{Z_1} d\alpha_2 =i_{Z_2}d\alpha_1=i_{Z_2}
d\alpha_2=0 \, ,
\end{eqnarray*}
where $i_X$ is the contraction with the vector field $X$, determine completely the two vector fields $Z_1$ and $Z_2$, called \textit{Reeb vector fields}.
Notice that $Z_i$ is nothing but the Reeb vector field of the contact form $\alpha_i$ on each leaf of $\mathcal{F}_j$ for $i\neq j$.
The tangent bundle of a manifold $M$ endowed with a contact pair
can be split in different ways. For $i=1,2$, let $T\mathcal F _i$
be the subbundle of $TM$ determined by the characteristic foliation of
$\alpha_i$, $T\mathcal G_i$ the subbundle whose fibers are
given by $\ker d\alpha_i \cap \ker \alpha_1 \cap \ker \alpha_2$
and $\mathbb{R} Z_1, \mathbb{R} Z_2$ the line bundles determined
by the Reeb vector fields. Then we have the following splittings:
\begin{equation*}
TM=T\mathcal F _1 \oplus T\mathcal F _2 =T\mathcal G_1 \oplus
T\mathcal G_2 \oplus \mathbb{R} Z_1 \oplus \mathbb{R} Z_2
\end{equation*}
Moreover we have $T\mathcal F _1=T\mathcal G_1 \oplus \mathbb{R}
Z_2 $ and $T\mathcal F _2=T\mathcal G_2 \oplus \mathbb{R} Z_1 $.
Notice that $d\alpha_1$ (resp. $d\alpha_2$) is symplectic on the vector bundle $T\mathcal G_2$ (resp. $T\mathcal G_1$).
\begin{example}
Take $(\mathbb{R}^{2h+2k+2},\alpha_1 , \alpha_2)$ where $\alpha_1$ (resp. $\alpha_2$) is the Darboux contact form on $\mathbb{R}^{2h+1}$
(resp. on $\mathbb{R}^{2k+1}$).
\end{example}
This is also a local model for all contact pairs of type $(h,k)$ (see \cite{Bande1, BH}). Hence a contact pair manifold is locally a product of two contact manifolds.
\subsection{Contact pair structures}
We recall now the definition of contact pair structure introduced in
\cite{BH2} and some basic properties.
\begin{definition}
A \emph{contact pair structure} on a manifold $M$ is a triple
$(\alpha_1 , \alpha_2 , \phi)$, where $(\alpha_1 , \alpha_2)$ is a
contact pair and $\phi$ a tensor field of type $(1,1)$ such that:
\begin{equation}\label{d:cpstructure}
\phi^2=-Id + \alpha_1 \otimes Z_1 + \alpha_2 \otimes Z_2 , \;
\phi(Z_1)=\phi(Z_2)=0
\end{equation}
where $Z_1$ and $Z_2$ are the Reeb vector fields of $(\alpha_1 ,
\alpha_2)$.
\end{definition}
It is easy to check that $\alpha_i \circ \phi =0$ for $i=1,2$, that the rank of $\phi$ is
equal to $\dim M -2$, and that $\phi$ is almost complex on the vector bundle $T\mathcal G_1 \oplus
T\mathcal G_2$.
Since we are also interested in the induced structures, we recall that
the endomorphism $\phi$ is said to be \textit{decomposable} if
$\phi (T\mathcal{F}_i) \subset T\mathcal{F}_i$, for $i=1,2$.
This condition is equivalent to $\phi(T\mathcal{G}_i)= T\mathcal{G}_i$.
In this case $(\alpha_1 , Z_1 ,\phi)$ (resp.
$(\alpha_2 , Z_2 ,\phi)$) induces, on every leaf of $\mathcal{F}_2$ (resp. $\mathcal{F}_1$), a contact form with structure tensor the restriction
of $\phi$ to the leaf.
\subsection{Metric contact pairs}
On manifolds endowed with contact pair structures it is natural
to consider the following kind of metrics:
\begin{definition}[\cite{BH2}]
Let $(\alpha_1 , \alpha_2 ,\phi )$ be a contact pair structure on
a manifold $M$, with Reeb vector fields $Z_1$ and $Z_2$. A
Riemannian metric $g$ on $M$ is called:
\begin{enumerate}
\item \emph{compatible} if $g(\phi X,\phi Y)=g(X,Y)-\alpha_1 (X)
\alpha_1 (Y)-\alpha_2 (X) \alpha_2 (Y)$ for all vector fields $X$ and $Y$,
\item \emph{associated} if $g(X, \phi Y)= (d \alpha_1 + d
\alpha_2) (X,Y)$ and $g(X, Z_i)=\alpha_i(X)$, for $i=1,2$ and for
all vector fields $X,Y$. \label{ass-metric}
\end{enumerate}
\end{definition}
An associated metric is compatible, but the converse is not true.
\begin{definition}[\cite{BH2}]
A \emph{metric contact pair} (MCP) on a manifold $M$ is a
quadruple $(\alpha_1, \alpha_2, \phi, g)$ where $(\alpha_1,
\alpha_2, \phi)$ is a contact pair structure and $g$ an associated
metric with respect to it. The manifold $M$ is called a MCP manifold.
\end{definition}
Note that the equation
\begin{equation}\label{d:phi}
g(X, \phi Y)= (d \alpha_1 + d\alpha_2) (X,Y)
\end{equation}
determines completely the endomorphism $\phi$.
So we can talk about a metric $g$ \emph{associated to a contact pair} $(\alpha_1 , \alpha_2)$ when $g(X, Z_i)=\alpha_i(X)$, for $i=1,2$, and the endomorphism $\phi$ defined by equation \eqref{d:phi} verifies \eqref{d:cpstructure}.
\begin{theorem}[\cite{BH2}]
For a MCP $(\alpha_1 , \alpha_2, \phi, g)$, the tensor $\phi$ is
decomposable if and only if the characteristic foliations $\mathcal{F}_1 ,
\mathcal{F}_2$ are orthogonal.
\end{theorem}
Using a standard polarization on the symplectic vector bundles $T\mathcal G_i$ (see Section \ref{s:prelimcp}), one can see that for a given contact pair $(\alpha_1, \alpha_2)$ there always exist a decomposable $\phi$ and a metric $g$ such
that $(\alpha_1, \alpha_2, \phi, g)$ is a MCP (see \cite {BH2}).
This can be stated as:
\begin{theorem}[\cite{BH2}]
For a given contact pair on a manifold,
there always exists an associated metric for which the characteristic foliations are orthogonal.
\end{theorem}
Let $(\alpha_1, \alpha_2 ,\phi, g )$ be a MCP on a manifold with decomposable $\phi$.
Then $(\alpha_i, \phi , g)$ induces a contact metric structure on the
leaves of the characteristic foliation $\mathcal{F}_j$ of $\alpha_j$, for $i \neq
j$ (see \cite{BH2}).
\begin{example}\label{mcp-product}
As a trivial example one can take two metric contact manifolds $(M_i, \alpha_i,g_i)$ and consider the MCP $(\alpha_1,\alpha_2,g_1 \oplus g_2)$ on $M_1\times M_2$. The characteristic foliations are given by the two trivial fibrations.
\end{example}
\begin{remark}
To get more examples of MCP on closed manifolds, one can imitate the constructions on flat bundles and Boothby-Wang fibrations given in \cite{BH3}
and adapt suitable metrics on the bases and fibers of these fibrations. See also Example \ref{liegroup} below which concerns a nilpotent Lie group and its closed nilmanifolds.
\end{remark}
\section{Minimal foliations}
Given any compatible metric $g$ on a manifold endowed with a contact pair structure $(\alpha_1, \alpha_2 , \phi)$ of type $(h,k)$,
with Reeb vector fields
$Z_1$ and $Z_2$, one can construct a local basis, called $\phi$-basis.
On an open set, on the orthogonal complement of $Z_1$ and $Z_2$, choose a vector field $X_1$ of
length $1$ and take $\phi X_1$. Then take the orthogonal
complement of $\{Z_1, Z_2, X_1 ,\phi X_1\}$ and so on. By iteration of this procedure,
one obtains a local orthonormal basis
$$
\{ Z_1 , Z_2 , X_1 , \phi X_1 , \cdots , X_{h+k} , \phi X_{h+k}\},
$$
which will be called $\phi$-basis and is the analog of a $\phi$-basis for almost contact structures, or $J$-basis in the case of
an almost complex structure $J$.
If $\phi$ is decomposable and $g$ is an associated metric, the characteristic foliations are orthogonal and it is possible to construct the $\phi$-basis in a better way. Starting with $X_1$ tangent to one of the characteristic foliations,
and with a slight modification of the above
construction, we obtain a $\phi$-basis
$$
\{ Z_1 , X_1 , \phi X_1 , \cdots , X_h , \phi X_h ,Z_2 , Y_1 , \phi Y_1 , \cdots , Y_k ,
\phi Y_k \}
$$
such that $\{ Z_1 , X_1 , \phi X_1 , \cdots , X_h ,
\phi X_h \}$ is a $\phi$-basis for the induced metric contact structures on the
leaves of $\mathcal F_2$, and
$\{Z_2 , Y_1 , \phi Y_1 , \cdots ,
Y_k , \phi Y_k \}$ is a $\phi$-basis for the leaves of $\mathcal
F_1 $.
Using this basis and the formula for the volume form on
contact metric manifolds (see \cite{Blairbook}, for example), one can easily show the following:
\begin{proposition}\label{volumeform}
On a manifold endowed with a MCP $(\alpha_1,
\alpha_2, \phi, g)$ of type $(h,k)$, with a decomposable $\phi$,
the volume element of the Riemannian metric $g$ is given by:
\begin{equation}\label{MCP-volume}
dV= \frac{(-1)^{h+k}}{2^{h+k} h! k!} \alpha_1 \wedge (d\alpha_1) ^h \wedge \alpha_2 \wedge (d\alpha_2) ^k
\end{equation}
\end{proposition}
A direct application of the minimality criterion of Rummler (see \cite{Rum} page 227)
to the volume form on a
MCP manifold yields the following result:
\begin{theorem}
On a MCP manifold $(M, \alpha_1,\alpha_2, \phi,
g)$ with decomposable $\phi$, the characteristic foliations are
minimal.
\end{theorem}
\begin{proof}
Recall the minimality criterion of Rummler: let $\mathcal F$ be a $p$-dimensional foliation on a Riemannian manifold and $\omega$ its characteristic form (i.e. the $p$-form
which vanishes on vectors orthogonal to $\mathcal F$ and whose restriction to $\mathcal F$ is the volume of the induced metric on the leaves). Then
$\mathcal F$ is minimal iff $\omega$ is closed on $T\mathcal F$ (i.e. $d\omega(X_1,...,X_p,Y)=0$ for $X_1$, ..., $X_p$ tangent to $\mathcal F$).
Let $\mathcal F_i$ be the characteristic foliation of $\alpha_i$.
As the volume element of the Riemannian metric $g$ is given by \eqref{MCP-volume}, the characteristic form of $\mathcal F_1$ (resp. $\mathcal F_2$) is,
up to a constant, $\alpha_2 \wedge (d\alpha_2) ^k$ (resp. $\alpha_1 \wedge (d\alpha_1) ^h$). But these forms are closed
by the contact pair condition, and then the
criterion applies directly.
\end{proof}
Since every manifold endowed with a contact pair always admits
an associated metric with decomposable $\phi$, we
recover a statement already proved in \cite{BK}:
\begin{corollary}
On every manifold endowed with a contact pair there exists a
metric for which the characteristic foliations are orthogonal and minimal.
\end{corollary}
\begin{remark}
Although a contact pair manifold is locally a product of two contact manifolds (see Section \ref{s:prelimcp}), an associated metric for which the characteristic foliations are orthogonal is not necessarily locally a product as in Example \ref{mcp-product}. Here is an interesting case:
\end{remark}
\begin{example}\label{liegroup}
Let us consider the simply connected $6$-dimensional nilpotent Lie group $G$ with structure
equations:
\begin{eqnarray*}
&d\omega_3= d\omega_6=0 \; \; , \; \; d\omega_2= \omega_5 \wedge
\omega _6 ,\\
&d\omega_1=\omega_3 \wedge \omega_4 \; \; , \; \;
d\omega_4= \omega_3 \wedge \omega_5 \; \; , \; \; d\omega_5 =
\omega_3 \wedge \omega_6 \, ,
\end{eqnarray*}
where the $\omega_i$'s form a basis for the cotangent space of $G$
at the identity.
The pair $(\omega_1 , \omega_2)$ is a contact pair of type $(1,1)$
with Reeb vector fields $(X_1, X_2)$, the $X_i$'s being dual to
the $\omega_i$'s. The characteristic distribution of $\omega_1$ (resp. $\omega_2$) is spanned by $X_2$, $X_5$ and $X_6$ (resp. $X_1$, $X_3$ and $X_4$).
The left invariant metric
\begin{equation}
g=\omega_1 ^2+\omega_2 ^2+\frac{1}{2}\sum_{i=3}^6 \omega_i ^2
\end{equation}
is associated to the contact pair $(\omega_1 , \omega_2)$ with decomposable structure tensor $\phi$ given by
$\phi(X_6)=X_5$ and $\phi (X_4)=X_3$ .
The characteristic foliations have minimal leaves. Moreover the leaves through the identity of $G$ are Lie subgroups isomorphic to the Heisenberg group.
Notice that these foliations are not totally geodesic since $g(\nabla _{X_4} X_3,X_5)\neq0$ and $g(\nabla _{X_5} X_6,X_3)\neq0$, where $\nabla$ is the Levi-Civita connection of this metric.
So the metric $g$ is not locally a product.
Since the structure constants of the group are rational, there exist lattices $\Gamma$ such that $G/\Gamma$ is
compact. Since the MCP on $G$ is left invariant, it descends to all quotients $G/\Gamma$ and we obtain closed
nilmanifolds carrying the same type of structure.
\end{example}
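The structure equations of Example~\ref{liegroup} can be checked mechanically. In the sketch below (hypothetical helper names; a $p$-form is stored as a dictionary mapping sorted index tuples to coefficients) we verify that $d^2\omega_i=0$ for every basis one-form and that $\omega_1\wedge d\omega_1\wedge\omega_2\wedge d\omega_2$ is a volume form, i.e. that $(\omega_1,\omega_2)$ is a contact pair of type $(1,1)$:

```python
def sort_sign(idx):
    """Sort an index tuple; return (sorted tuple, permutation sign), sign 0 on repeats."""
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return (), 0
    sign = 1
    for _ in range(len(idx)):              # bubble sort, counting transpositions
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return tuple(idx), sign

def wedge(a, b):
    """Wedge product of forms given as {sorted index tuple: coefficient} dicts."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            key, s = sort_sign(ia + ib)
            if s:
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

# structure equations: d(omega_i) as 2-forms on the coframe basis 1..6
d_basis = {1: {(3, 4): 1}, 2: {(5, 6): 1}, 3: {}, 4: {(3, 5): 1}, 5: {(3, 6): 1}, 6: {}}

def d(form):
    """Exterior derivative, extended to wedge monomials by the Leibniz rule."""
    out = {}
    for idx, c in form.items():
        for pos, i in enumerate(idx):
            rest = idx[:pos] + idx[pos + 1:]
            for didx, dc in d_basis[i].items():
                # a 2-form commutes with 1-forms, so d(omega_i) may be moved to the front
                key, s = sort_sign(didx + rest)
                if s:
                    out[key] = out.get(key, 0) + ((-1) ** pos) * s * c * dc
    return {k: v for k, v in out.items() if v}

for i in d_basis:                          # d^2 = 0: the structure equations are consistent
    assert d(d({(i,): 1})) == {}

w1, w2 = {(1,): 1}, {(2,): 1}
vol = wedge(wedge(wedge(w1, d(w1)), w2), d(w2))
print(vol)  # -> {(1, 2, 3, 4, 5, 6): 1}, a volume form
```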
\bibliographystyle{amsalpha}
\section{Introduction}
Over the last few years the Bose-Hubbard (BH) model \cite{Fisher_PRB_40_546} has attracted considerable attention, owing to its realization in terms of ultracold atoms trapped in optical lattices \cite{Jaksch_PRL_81_3108,Greiner_Nature_415_39}.
Among other aspects, the interest in the relation between the BH model with attractive interactions and the nonlinear theory that can be considered its semiclassical counterpart \cite{Bernstein_PhysicaD_68_174,Bernstein_Nonlinearity_3_293,Wright_PhisicaD_69_18} was rekindled \cite{Jack_PRA_71_023610,Buonsante_PRA_72_043620,Oelkers_PRB_75_115119,Javanainen_PRL_101_170405}.
One of the most striking issues in this respect lies in the symmetry properties of the typical states of the theory. The translation symmetry of the quantum states appears to be at odds with the spontaneous symmetry breaking of the localized, {\it self-trapped} semiclassical states, although the same regimes can be identified in the two theories \cite{Jack_PRA_71_023610,Buonsante_PRA_72_043620}.
These apparently conflicting features have been reconciled in a recent work \cite{Javanainen_PRL_101_170405} where it is discussed how a {\it single measurement} in a (inherently linear) quantum system can give rise to the localized, symmetry-breaking result typical of the corresponding nonlinear theory. The point in question is illustrated in some detail for the well known case of a lattice comprising only two sites, and a few further examples are given for a larger lattice, at a single value of the effective parameter governing the system.
Here we extend such analysis in a twofold way. On the one hand, we introduce a suitable notion for the width of a localized state, thereby providing a more quantitative tool for comparing the quantum measurements with the relevant semiclassical predictions. On the other hand, unlike Ref.~\cite{Javanainen_PRL_101_170405}, we do not limit our simulations to the localized regime, but we perform a systematic comparison, exploring a range of effective parameters including the semiclassical bifurcation of the system ground state \cite{Eilbeck_PhysicaD_16_318,Smerzi_PRL_89_170402}.
The analysis of the {\it localization width} shows that the semiclassical delocalization transition corresponds to a crossover in the quantum system, which highlights a {\it finite-population} effect. The sharp transition is recovered only in the limit of infinite boson filling, where the classical results are exact.
Also, the range of effective parameters and lattice sizes we explore includes those highlighting the change in the nature of the bifurcation occurring at the semiclassical level when passing from five-site to six-site lattices \cite{Buonsante_JPB_39_S77,Buonsante_PRE_75_016212}. As we discuss in the following, a signature of this qualitative change is apparent at the quantum level.
The layout of this paper is the following. In Section~\ref{S:system} we review the second-quantized model for attractive bosons hopping across the site of a lattice and the nonlinear theory representing its semiclassical counterpart.
We discuss the localization/delocalization transition occurring in the ground-state of the latter, revisiting the results discussed in Ref.~\cite{Javanainen_PRL_101_170405}. Also, we provide a quantitative measure for localization, which is readily extended to the quantum theory.
Section~\ref{results} contains our results, which are based on the numerical calculation of the ground-state of the quantum system. This is obtained by means of Lanczos diagonalization and {\it population} quantum Monte Carlo simulations. We start with a detailed analysis of the specific case of a three-site lattice (trimer), for which the localization/delocalization transition can be appreciated by a direct visualization of the structure of the ground-state. We then make a systematic analysis of the localization width for several lattice sizes and boson fillings, highlighting the agreement with the semiclassical result for large fillings. This also shows that the qualitative change in the bifurcation pattern of the classical theory taking place for lattices containing more than five sites is recovered at the quantum level.
In Section \ref{S:CS} we discuss the connection between the quantum and classical theory based on the representation of the quantum ground-state as a su$(L)$ coherent state \cite{Buonsante_PRA_72_043620} (also known as {\it Hartree wave function} \cite{Wright_PhisicaD_69_18}).
This allows us to employ the semiclassical ground state to construct an approximation to the quantum ground which includes some of the {\it finite population} effects mentioned above.
\section{The system}
\label{S:system}
We consider a simple Bose-Hubbard model
\begin{equation}
\label{BH}
\hat H = -\sum_{j=0}^{L-1} \left[\frac{U}{2} \left(\hat a_j^\dag\right)^2 \hat a_j^2 + J \left(\hat a_j^\dag \hat a_{j+1}+ \hat a_{j+1}^\dag \hat a_{j}\right)\right],
\end{equation}
with attractive interactions, $U,\,J>0$.
The operators $\hat a_j^\dag$ and $\hat a_j$ create and destroy bosons at lattice site $j$, respectively. The lattice comprises $L$ sites and is periodic, so that
site label $j=L$ is to be identified with $j=0$. The (positive) parameters $U$ and $J$ account for the relative strength of the {\it interaction} and {\it kinetic} term, respectively.
The BH Hamiltonian clearly commutes with the total number of bosons in the system $\hat N=\sum_j \hat n_j$, with $\hat n_j = \hat a_j^\dag \hat a_j$. This means that the eigenstates of Eq.~\eqref{BH} are characterized by a well defined total number of bosons $N$, i.e. that $H$ can be studied within a fixed-number subspace of the infinite Hilbert space. The size of this fixed-number subspace is finite, $s = \binom{N+L-1}{L-1}$, but it becomes computationally prohibitive already for modest lattice sizes and boson populations.
Hamiltonian \eqref{BH} is clearly translation invariant. By virtue of the Bloch theorem, its eigenstates --- and in particular its ground state $|\Psi\rangle$ --- are delocalized over the entire lattice. This means $n_j=\langle\Psi|\hat n_j|\Psi\rangle = \langle\Psi|\hat n_\ell |\Psi\rangle =n_\ell$ for any $j$ and $\ell$.
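For concreteness, the stars-and-bars count of the fixed-number Fock basis, $\binom{N+L-1}{L-1}$, can be checked against a brute-force enumeration (a sketch with hypothetical helper names):

```python
from math import comb
from itertools import product

def fock_dim(N, L):
    """Number of Fock states |nu_1,...,nu_L> with nu_1 + ... + nu_L = N (stars and bars)."""
    return comb(N + L - 1, L - 1)

def fock_states(N, L):
    """Brute-force enumeration of the fixed-number basis (small N and L only)."""
    return [nu for nu in product(range(N + 1), repeat=L) if sum(nu) == N]

assert fock_dim(3, 3) == len(fock_states(3, 3)) == 10
print(fock_dim(256, 16))   # the subspace is already astronomically large for N=256, L=16
```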
The semiclassical nonlinear theory corresponding to the BH model can be obtained by changing the site operators into C-numbers, $\hat a_j \leadsto \psi_j$, whose square modulus has a natural interpretation as the local boson population, $\hat n_j \leadsto |\psi_j|^2$. The semiclassical Hamiltonian thus modeled onto Eq. \eqref{BH} results in a dynamics governed by the so-called {\it discrete nonlinear Schr\"odinger equations} or {\it discrete self-trapping} (DST) {\it equations},
\begin{equation}
\label{DST}
i \dot \psi_j = - U |\psi_j|^2 \psi_j - J \left(\psi_{j+1} + \psi_{j-1}\right)
\end{equation}
where we set $\hbar = 1$ \cite{Eilbeck_PhysicaD_16_318}. The stationary solutions $\psi_j(t)= e^{-i \omega t} \phi_j$ obey the fixed point equations
\begin{equation}
\label{DSTfp}
\omega \phi_j = - U |\phi_j|^2 \phi_j - J \left(\phi_{j+1} + \phi_{j-1}\right)
\end{equation}
Similar to the quantum case, the semiclassical Hamiltonian, as well as equations of motion \eqref{DST} and \eqref{DSTfp}, are translation invariant. However, their nonlinear nature allows of symmetry-breaking solutions.
In particular, it is easy to check that Eq.~\eqref{DSTfp} always has a uniform solution, $\phi_j = \sqrt{N/L}$, independent of the value of the parameters $U$ and $J$. Its frequency and energy are $\omega = -U N/L-2J$ and $E = -U N^2/2L - 2 J N$, respectively, the latter being the lowest possible energy only for sufficiently large values of the effective parameter
\begin{equation}
\label{tau}
\tau = \frac{J}{U N},\qquad \tau > \tau_{\rm d}
\end{equation}
If, conversely, the effective interaction energy $U N$ prevails over the hopping amplitude $J$, so that $\tau < \tau_{\rm d}$, the lowest-energy solution of Eq. \eqref{DSTfp} is localized about one lattice site, thus breaking the translation invariance. This phenomenon is referred to as ({\it discrete}) {\it self-trapping} \cite{Eilbeck_PhysicaD_16_318}. In general, a second critical value
\begin{equation}
\label{taus}
\tau_{\rm s}= \frac{1}{2 L \sin^2 \frac{\pi}{L}} \leq \tau_{\rm d}
\end{equation}
can be recognized for the uniform solution of Eq.~\eqref{DSTfp} such that
for $\tau> \tau_{\rm s}$ the uniform solution is dynamically stable, whereas it is unstable for $\tau< \tau_{\rm s}$ \cite{Smerzi_PRL_89_170402}.
For sufficiently large lattice sizes, $L\geq 6$, the equality applies in Eq.~\eqref{taus}. That is, the uniform solution becomes simultaneously stable {\it and} the ground state of the system as soon as $\tau> \tau_{\rm s} = \tau_{\rm d}$. On smaller lattices the strict inequality applies, $\tau_{\rm s} < \tau_{\rm d}$, and the low-energy solutions of \eqref{DSTfp} exhibit a more complex bifurcation pattern \cite{Buonsante_JPB_39_S77,Buonsante_PRE_75_016212,Note1}.
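Both the uniform fixed point and the stability threshold of Eq.~\eqref{taus} are easy to check numerically (a sketch; the parameter values are illustrative):

```python
import numpy as np

def tau_s(L):
    """Dynamical-stability threshold of the uniform solution, Eq. (taus)."""
    return 1.0 / (2.0 * L * np.sin(np.pi / L) ** 2)

# the uniform state phi_j = sqrt(N/L) solves the DST fixed-point equations
# with frequency omega = -U N/L - 2J (illustrative U, J, N, L values)
U, J, N, L = 1.0, 0.5, 100, 6
phi = np.full(L, np.sqrt(N / L))
rhs = -U * np.abs(phi) ** 2 * phi - J * (np.roll(phi, 1) + np.roll(phi, -1))
omega = -U * N / L - 2.0 * J
assert np.allclose(rhs, omega * phi)

for size in (3, 4, 5, 6, 7):
    print(size, tau_s(size))   # e.g. tau_s(6) = 1/3
```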
A signature of the localization occurring at the semiclassical level for large values of the interaction can be recognized also at the quantum level.
The quantum ground-state can be seen as a symmetric superposition of $L$ states, each localized at one of the $L$ sites of the lattice and closely resembling the symmetry-breaking semiclassical ground-state. At the boson fillings $N/L$ corresponding to Hamiltonians that can be analyzed by means of Lanczos diagonalization, this resemblance is strong only for very small values of the effective parameter $\tau$, while it is washed out by quantum fluctuations at larger $\tau$'s \cite{Buonsante_PRA_72_043620}.
In Ref.~\cite{Javanainen_PRL_101_170405} the connection between the uniform quantum ground state and its localized semiclassical counterpart is further discussed. There it is remarked that the experimental measurement of the observable $\hat {\mathbf{n}}= (\hat n_1,\hat n_2,\cdots,\hat n_L) $ is likely to produce an outcome in strong agreement with the semiclassical result. Indeed such a measurement selects a Fock state of the direct space, i.e. an eigenstate of $\hat {\mathbf n}$,
\begin{equation}
\label{nu}
|\vec{\nu}\rangle=|\nu_1,\nu_2,\cdots,\nu_L\rangle = \prod_{j=1}^L \frac{(a_j^\dag)^{\nu_j}}{\sqrt{\nu_j!}} |0\rangle\!\rangle,
\end{equation}
where $|0\rangle\!\rangle$ is the vacuum of the theory, i.e. $a_j |0\rangle\!\rangle = 0$ for all $j=1,2,\ldots,L$.
Each Fock state is selected with a probability distribution given by $P(\vec{\nu}) = |c_{\vec{\nu}}|^2$, where the $c_{\vec{\nu}}$'s are the coefficients in the expansion
\begin{equation}
\label{qGS}
|\Psi\rangle = {\sum_{\vec{\nu}}}' c_{\vec{\nu}} |\vec{\nu}\rangle
\end{equation}
The prime on the summation symbol signals that the Fock states involved in the expansion belong to the same fixed-number subspace, $\sum_j \nu_j = N$.
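For small systems, the coefficients $c_{\vec{\nu}}$ and the probabilities $P(\vec{\nu})=|c_{\vec{\nu}}|^2$ can be obtained by exact diagonalization. A minimal dense-matrix sketch (illustrative parameters, far smaller than the fillings discussed below; `basis` and `hamiltonian` are hypothetical helper names):

```python
import numpy as np
from math import sqrt
from itertools import product

def basis(N, L):
    """Fixed-number Fock basis: occupation tuples with sum N (small systems only)."""
    return [nu for nu in product(range(N + 1), repeat=L) if sum(nu) == N]

def hamiltonian(N, L, U, J):
    """Dense Bose-Hubbard Hamiltonian, Eq. (BH), in the fixed-number Fock basis."""
    states = basis(N, L)
    index = {nu: i for i, nu in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for i, nu in enumerate(states):
        H[i, i] = -0.5 * U * sum(n * (n - 1) for n in nu)   # (a^dag)^2 a^2 -> n(n-1)
        for j in range(L):                                   # hop j+1 -> j; h.c. added below
            k = (j + 1) % L
            if nu[k] > 0:
                mu = list(nu); mu[k] -= 1; mu[j] += 1
                amp = -J * sqrt(nu[k] * (nu[j] + 1))
                m = index[tuple(mu)]
                H[m, i] += amp
                H[i, m] += amp
    return states, H

states, H = hamiltonian(N=6, L=3, U=0.2, J=1.0)   # illustrative, delocalized regime
vals, vecs = np.linalg.eigh(H)
P = vecs[:, 0] ** 2                                # measurement probabilities P(nu)
n_mean = sum(p * np.array(nu) for p, nu in zip(P, states))
print(n_mean)                                      # uniform <n_j> = N/L, here [2. 2. 2.]
```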
The Authors of Ref.~\cite{Javanainen_PRL_101_170405} first of all recall that for a two-site lattice, $L=2$, the probability distribution $P(\vec{\nu})$ is double peaked when $\tau<\tau_{\rm d}$ \cite{Ho_JLTP_135_257,Zin_EPL_83_64007}, and observe that the mirror-symmetric Fock states corresponding to the peak probabilities reproduce the populations of the symmetry-breaking semiclassical ground states: $(\nu_1,\nu_2) = (|\psi_1|^2,|\psi_2|^2)$, $(\nu_1,\nu_2) = (|\psi_2|^2,|\psi_1|^2)$. Also, by exactly diagonalizing Hamiltonian \eqref{BH} they show that the width of the probability peaks decreases with increasing total number of bosons.
The Authors' observation is further illustrated by considering the case of $N=256$ bosons on a lattice comprising $L=16$ sites. A few Fock states $|\vec{\nu}\rangle$ are {\it sampled} from a quantum Monte Carlo calculation and, after suitable translations, graphically compared with the semiclassical result for the local populations $(|\psi_1|^2, |\psi_2|^2,\cdots,|\psi_L|^2)$. The importance-sampled quantum configurations agree reasonably well with the semiclassical result, although quantum fluctuations are clearly recognizable.
In summary, Ref.~\cite{Javanainen_PRL_101_170405} demonstrates how a measurement in an inherently linear quantum theory could produce a typical outcome of a nonlinear theory, i.e. a localized {\it soliton-like} ground-state.
In the following we further investigate the connection between the ground-state of Eqs. \eqref{BH} and \eqref{DSTfp}, making the comparison more quantitative with the aid of a suitable observable. Also we explore a range of effective parameters $\tau$ including the localization/delocalization threshold $\tau_{\rm d}$, and discuss the signature of this semiclassical transition at the quantum level. In particular we illustrate that the difference in the semiclassical bifurcation pattern distinguishing small lattices from those comprising more than six sites \cite{Buonsante_JPB_39_S77,Buonsante_PRE_75_016212} is apparent also in the quantum data.
\subsection{Soliton width}
As we recall above, for $\tau < \tau_{\rm d}$ the ground-state of the DST equations \eqref{DSTfp} becomes localized at one of the lattice sites, assuming a {\it soliton-like} density profile. Since the soliton peak can be localized at any of the $L$ lattice sites, the ground-state is $L$-fold degenerate. This spontaneous symmetry breaking stems from the nonlinear nature of Eq.~\eqref{DSTfp}. After recalling that we are considering a cyclic lattice, i.e. periodic boundary conditions, we can estimate the (square) width of a localized solution of Eq.~\eqref{DSTfp} as
\begin{equation}
\label{w2}
w(\vec{n}) = \sum_{j=1}^L \frac{n_j}{N}\left[\left(x_j^2 - x_{\rm cm}^2\right)+\left(y_j^2 - y_{\rm cm}^2\right)\right],
\end{equation}
where $x_j = \cos \frac{2\pi}{L} j$ and $y_j = \sin \frac{2\pi}{L} j$ are the coordinates of the $j$-th site of the ring lattice \cite{note3}, $n_j = |\psi_j|^2$ is the boson population at that site and
\begin{equation}
x_{\rm cm}(\vec{n}) = \sum_{j=1}^L \frac{n_j}{N} x_j, \quad y_{\rm cm}(\vec{n}) = \sum_{j=1}^L \frac{n_j}{N} y_j
\end{equation}
are the coordinates of the center of mass of the boson distribution.
When the semiclassical solution is uniform, $n_j = |\psi_j|^2 = N/L$, the center of mass is at the center of the ring lattice, $x_{\rm cm} = y_{\rm cm}=0 $ and the width attains the maximum possible value, $w=1$.
In the opposite limit, the center of mass coincides with the lattice site at which the entire boson population is confined. In this situation the width attains its minimum value, $w=0$. The width of the lowest solution to Eq.~\eqref{DSTfp} is plotted in Fig.~\ref{scW} for some small lattices, $3\leq L \leq 7$. Note that the different bifurcation pattern characterizing the smaller lattices \cite{Buonsante_JPB_39_S77,Buonsante_PRE_75_016212}, $L<6$, is mirrored in the discontinuous character of the width of the ground state. In general an increase of $\tau<\tau_{\rm d}$ results in an increase of the width of the localized state. However, the delocalization is attained continuously at $\tau=\tau_{\rm d}$ only for $L\geq 6$. For smaller lattice sizes the width of the localized ground state has an upper bound smaller than 1, and delocalization is attained {\it catastrophically} as $\tau$ exceeds $\tau_{\rm d}$.
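The two limiting values of the width just discussed can be checked numerically. The sketch below is illustrative and not part of the original analysis (NumPy assumed); it implements Eq.~\eqref{w2} with the ring coordinates $x_j$, $y_j$ defined above:

```python
import numpy as np

def width(n):
    """(Squared) width of a density profile n_j on the ring lattice, Eq. (w2)."""
    n = np.asarray(n, dtype=float)
    L, N = len(n), n.sum()
    j = np.arange(1, L + 1)
    x, y = np.cos(2 * np.pi * j / L), np.sin(2 * np.pi * j / L)
    x_cm, y_cm = n @ x / N, n @ y / N
    # sum_j (n_j/N) [(x_j^2 - x_cm^2) + (y_j^2 - y_cm^2)]
    return (n / N) @ ((x ** 2 - x_cm ** 2) + (y ** 2 - y_cm ** 2))

L, N = 6, 60
w_uniform = width(np.full(L, N / L))       # uniform profile: w = 1
w_localized = width([N] + [0] * (L - 1))   # all bosons on one site: w = 0
print(w_uniform, w_localized)
```

For the uniform profile the center of mass sits at the origin, so the width reduces to $\sum_j (n_j/N)(x_j^2+y_j^2)=1$; for complete localization the center of mass coincides with the occupied site and the width vanishes.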
As we discuss above, a measurement of the quantum observable $\hat{\mathbf{n}}$ selects a Fock state, i.e. a set of integer occupation numbers $\nu_j = \langle\hat n_j\rangle $, with probability $P(\vec{\nu}) = |c_{\vec{\nu}}|^2$ defined by Eq.~\eqref{qGS}. Eq.~\eqref{w2} lends itself to the estimate of the width of the selected Fock state as well, provided that the occupation numbers $\nu_j$ are used in place of $n_j=|\psi_j|^2$.
\begin{figure}
\begin{centering}
\includegraphics[width=8.5 cm]{scWidth.eps}
\caption{\label{scW} Width of the semiclassical ground-state according to Eq.~\eqref{w2} for some lattice sizes, from left to right $3 \leq L \leq 7$. Notice that the transition from a localized ($w<1$) to a uniform state ($w=1$) is discontinuous for $L<6$ \cite{Buonsante_JPB_39_S77,Buonsante_PRE_75_016212}. The dotted lines at the critical values $\tau_{\rm d}$ are guides to the eye. }
\end{centering}
\end{figure}
Taking into account that $x_j^2+y_j^2=1$ and that the occupation numbers in the Fock states add up to $N$, after a few manipulations Eq.~\eqref{w2} becomes
\begin{eqnarray}
w(\vec{\nu}) &=& 1-\frac{1}{N^2}\sum_{j\ell} \cos\left[\frac{2\pi}{L}(j-\ell)\right] \nu_j \nu_\ell \nonumber\\
&=& 1-\frac{1}{N^2}\sum_{j\ell} \cos\left[\frac{2\pi}{L}(j-\ell)\right] \langle \vec{\nu}|\hat n_j \hat n_\ell|\vec{\nu}\rangle
\end{eqnarray}
Therefore, the average of a large number of measurements of $w$ tends to the quantum observable
\begin{eqnarray}
\overline{w} &=& {\sum_{\vec{\nu}}}' P(\vec{\nu}) w(\vec{\nu})\nonumber\\
&=&1-
\frac{1}{N^2}\sum_{j\ell} \cos\left[\frac{2\pi}{L}(j-\ell)\right]\left\langle \hat n_j \hat n_\ell
\right\rangle \nonumber\\
\label{aqW}
&=& 1-\left\langle \frac{1}{2} \left[\hat S\left(\frac{2\pi}{L}\right)+\hat S\left(-\frac{2\pi}{L}\right) \right] \right\rangle
\end{eqnarray}
where
\begin{equation}
\label{ssf}
\hat S(q) = \frac{1}{N^2} \sum_{j\ell} e^{i q(j-\ell)} \hat n_j \hat n_\ell .
\end{equation}
We note that the quantity in Eq.~\eqref{ssf} is formally similar to the {\it static structure factor} for a 1D linear lattice \cite{Roth_PRA_68_023604,Note2}.
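The equivalence between Eq.~\eqref{w2} and the cosine (structure-factor) form above rests only on $x_j^2+y_j^2=1$ and $\sum_j\nu_j=N$; the following sketch (illustrative, NumPy assumed) verifies the identity on random occupation numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 7, 50
nu = rng.multinomial(N, np.ones(L) / L)        # random occupations, sum = N

j = np.arange(1, L + 1)
x, y = np.cos(2 * np.pi * j / L), np.sin(2 * np.pi * j / L)
x_cm, y_cm = nu @ x / N, nu @ y / N
w_direct = (nu / N) @ ((x ** 2 - x_cm ** 2) + (y ** 2 - y_cm ** 2))  # Eq. (w2)

djl = j[:, None] - j[None, :]
w_cos = 1 - (np.cos(2 * np.pi * djl / L) * np.outer(nu, nu)).sum() / N ** 2

# structure-factor form, Eq. (ssf): w = 1 - [S(2*pi/L) + S(-2*pi/L)] / 2
S = lambda q: (np.exp(1j * q * djl) * np.outer(nu, nu)).sum() / N ** 2
w_sf = 1 - (S(2 * np.pi / L) + S(-2 * np.pi / L)).real / 2
print(w_direct, w_cos, w_sf)
```

All three expressions agree to machine precision, since $x_jx_\ell+y_jy_\ell=\cos[\frac{2\pi}{L}(j-\ell)]$.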
\section{Results}
\label{results}
In the following we show that the average width of a quantum ground state, as estimated by Eq.~\eqref{aqW}, reproduces the semiclassical results shown in Fig.~\ref{scW}, the agreement improving as the boson population in the lattice is increased. We thereby make the analysis of Ref.~\cite{Javanainen_PRL_101_170405} more systematic and quantitative. Furthermore, we do not limit our investigation to the region where the semiclassical solution is localized, but explore the transition region as well. We do this for relatively small lattices, but consider one example, the six-site lattice, for which the bifurcation pattern at the transition is qualitatively similar to larger lattices \cite{Buonsante_JPB_39_S77,Buonsante_PRE_75_016212}.
In order to evaluate the quantity in Eq.~\eqref{aqW} we need to obtain the ground-state of Hamiltonian \eqref{BH}. We achieve this by means of two different numerical approaches. When the size of the relevant Hilbert space is sufficiently small, we employ a {\it Lanczos} diagonalization algorithm. Otherwise, we resort to a stochastic method, the so-called {\it population} quantum Monte Carlo algorithm \cite{Iba_TJSAI_16_279}. In both cases we exploit the symmetry granted by the vanishing commutator $[\hat H,\hat N]=0$ by working in the canonical ensemble, i.e. by considering only the occupation-number Fock states $|\vec{\nu}\rangle$ relevant to a given total number of bosons, $N=\sum_j \nu_j$. In order to reduce further the size of the matrix to be analyzed by the Lanczos algorithm, we also take advantage of the translation symmetry of the system. We first of all gather the fixed-number Fock states into equivalence classes determined by lattice translations, which allows us to define the reduced basis:
\begin{equation}
\label{tiF}
|\vec{\nu}_*\rangle = \frac{1}{\sqrt{{\cal N}_{\vec{\nu}}}} \sum_{j=1}^{{\cal N}_{\vec{\nu}}} {\hat D}^{\frac{L}{{\cal N}_{\vec{\nu}}}\,j} |\vec{\nu}\rangle
\end{equation}
In the r.h.s. of this equation $|\vec{\nu}\rangle$ represents any member of an equivalence class, which determines all of the other ${\cal N}_{\vec{\nu}}-1$ members of the same class through the {\it displacement operator} $\hat D$ such that $\hat D \hat a_j \hat D^{-1} = \hat a_{j+1}$. The number ${\cal N}_{\vec{\nu}}$ of states in a class is in general a divisor of the lattice size $L$.
The size of the reduced basis comprising states of the form \eqref{tiF} is the same as the size of the subspace formed by the {\it quasimomentum} Fock states $|\vec{q}\rangle$ such that $\sum_{k=1}^L k \,q_k= \kappa\, L$, with $\kappa \in {\mathbb{Z}}$, i.e. roughly a factor $L$ smaller than the fixed-number Fock space.
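The translation-symmetric reduction of the Fock basis described above can be sketched as follows (illustrative; the class sizes ${\cal N}_{\vec{\nu}}$ appear as orbit lengths under cyclic shifts, and always divide $L$):

```python
def fock_states(L, N):
    """All occupation tuples (nu_1, ..., nu_L) with sum N."""
    if L == 1:
        return [(N,)]
    return [(n,) + rest
            for n in range(N + 1)
            for rest in fock_states(L - 1, N - n)]

def translation_classes(L, N):
    """Equivalence classes of fixed-N Fock states under cyclic translations."""
    seen, classes = set(), []
    for nu in fock_states(L, N):
        if nu not in seen:
            orbit = {nu[k:] + nu[:k] for k in range(L)}  # all cyclic translates
            seen |= orbit
            classes.append(orbit)                        # len(orbit) divides L
    return classes

L, N = 5, 8
states, classes = fock_states(L, N), translation_classes(L, N)
print(len(states), len(classes))   # 495 states, 99 classes: a factor L smaller
```

Here $L=5$ is prime and does not divide $N=8$, so no state is invariant under a nontrivial shift and the reduction is exactly a factor $L$; in general it is only approximately so.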
Before discussing the results for the average quantum width, Eq.~\eqref{aqW}, we analyze the ground state of the three-site lattice in some detail.
\subsection{The trimer}
\begin{figure}
\begin{centering}
\includegraphics[width=8.5 cm]{trimerF.eps}
\caption{\label{trif1} Representation of the probability distribution $P(\vec{\nu})$. Left: the dark lines mark the (flat) surface of the occupation-number Fock space relevant to a trimer containing $N$ bosons. Center: the same surface as viewed from the direction $(0,0,1)$, i.e. projected onto the $(\nu_1,\nu_2)$ plane. Right: the same surface as viewed from the direction normal to it, $(1,1,1)$. Notice that this view highlights the threefold symmetry of the domain of the probability distribution. }
\end{centering}
\end{figure}
For lattices comprising just three sites the probability distribution $P(\vec{\nu})$ for the Fock state $|\vec{\nu}\rangle=|\nu_1,\nu_2,\nu_3\rangle$ selected in an experiment measuring $\hat {\mathbf n}=(\hat n_1,\hat n_2, \hat n_3)$ can be conveniently represented as a two-dimensional density plot. This is made possible by total number conservation, which constrains one of the occupation numbers, say $\nu_3$, to a value depending linearly on the two remaining occupation numbers, $\nu_3=N-\nu_1-\nu_2$. One can therefore regard the probability distribution as a function of the latter alone, $P(\nu_1,\nu_2,N-\nu_1-\nu_2)$.
The portion of the occupation-number Fock space relevant to a trimer containing $N$ bosons is illustrated in the leftmost panel of Figure \ref{trif1}. The same surface as seen from two different points of view is shown in the remaining two panels. In the central panel the point of view is along the direction $(0,0,1)$, which results in a projection onto the $(\nu_1,\nu_2)$ plane. In the rightmost panel the point of view is along the direction $(1,1,1)$, normal to the surface under investigation. This highlights the three-fold symmetry of the surface, which is not apparent in the previous view. Since we are interested in the symmetry of the system, we adopt the second point of view when representing the probability distribution $P(\vec{\nu})$.
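The change of viewpoint described above amounts to choosing orthonormal coordinates in the plane $\nu_1+\nu_2+\nu_3=N$. The sketch below (illustrative; the specific in-plane axes are one standard choice, not taken from the paper) makes the threefold symmetry explicit:

```python
import numpy as np

def project_111(nu):
    """In-plane coordinates for a point on nu1 + nu2 + nu3 = N,
    as seen along the (1, 1, 1) normal (one standard orthonormal choice)."""
    nu1, nu2, nu3 = nu
    return np.array([(nu1 - nu2) / np.sqrt(2),
                     (nu1 + nu2 - 2 * nu3) / np.sqrt(6)])

N = 300
corners = [(N, 0, 0), (0, N, 0), (0, 0, N)]
pts = [project_111(c) for c in corners]
radii = [np.hypot(*p) for p in pts]
angles = np.degrees([np.arctan2(p[1], p[0]) for p in pts])
# the three corners of the domain are equidistant from the center
# and 120 degrees apart, exhibiting the threefold symmetry
print(np.round(radii, 3), np.round(angles, 1))
```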
Figure \ref{trif2} shows the density plots of $P(\vec{\nu})$ for several values of the effective parameter $\tau$ defined in Eq.~\eqref{tau}. The representation is the same as in the rightmost panel of Fig.~\ref{trif1}, but in order to compare different total populations, the tick labels refer to $\nu_j/N$ instead of $\nu_j$. The left and right columns correspond to two different choices for the total population, $N=300$ and $N=900$, respectively. Note that the probability density undergoes a qualitative change as $\tau$ crosses the region of the semiclassical delocalization point, $\tau_{\rm d} = 0.25$. Below this threshold $P(\vec{\nu})$ features three symmetrically positioned peaks. Notice that the probability peaks occur at the same positions as the three equivalent semiclassical ground states. The white cross symbol signals the position of one such semiclassical ground state, namely the one with $\nu_1=\nu_2$ and $\nu_3=N-2\,\nu_1$.
Owing to its symmetry, the three-modal distribution always produces a symmetric expectation value of the occupation numbers, $\langle \hat n_j\rangle = N/3$. However, a single measurement of $(\hat n_1, \hat n_2, \hat n_3)$ is extremely likely to produce a symmetry-broken outcome very similar to the semiclassical result.
\begin{figure}
\begin{centering}
\begin{tabular}{cc}
\includegraphics[width=4.0 cm]{tri_N300_tau0p2.eps} &
\includegraphics[width=4.0 cm]{tri_N900_tau0p2.eps} \\
\includegraphics[width=4.0 cm]{tri_N300_tau0p23.eps} &
\includegraphics[width=4.0 cm]{tri_N900_tau0p23.eps} \\
\includegraphics[width=4.0 cm]{tri_N300_tau0p248.eps} &
\includegraphics[width=4.0 cm]{tri_N900_tau0p248.eps} \\
\includegraphics[width=4.0 cm]{tri_N300_tau0p24918.eps} &
\includegraphics[width=4.0 cm]{tri_N900_tau0p249745.eps} \\
\includegraphics[width=4.0 cm]{tri_N300_tau0p25.eps} &
\includegraphics[width=4.0 cm]{tri_N900_tau0p25.eps}\\
\includegraphics[width=4.0 cm]{tri_N300_tau1.eps} &
\includegraphics[width=4.0 cm]{tri_N900_tau1.eps}
\end{tabular}
\caption{\label{trif2} Density plots of the probability distribution $P(\vec{\nu})$ for some values of the effective parameter $\tau$, Eq.~\eqref{tau}. Left: $N=300$. Right: $N=900$. The probability distribution is represented as described in the rightmost panel of Fig.~\ref{trif1}, but, in order to compare different total populations, the tick labels refer to $\nu_j/N$ instead of $\nu_j$. A white cross symbol signals the position of one of the three equivalent semiclassical ground-states (see text). The density plot is scaled with respect to $P_{\rm M}= \max_{\vec{\nu}} P(\vec{\nu})$.}\end{centering}
\end{figure}
As $\tau$ increases above $\tau_{\rm d}$, the three-modal probability distribution becomes monomodal. In this situation the symmetric expectation value $\langle \hat n_j\rangle = N/3$ is very similar to the outcome of a single measurement, in agreement with what happens at the semiclassical level.
Notice that there is a very narrow range of effective parameters in which the probability is essentially four-modal, featuring a central peak surrounded by three symmetrically positioned peaks. The value of $\tau$ such that the central peak has the same height as the peripheral peaks could be used to define the quantum counterpart of the delocalization threshold $\tau_{\rm d}$.
This situation is demonstrated in the density plots in the fourth row of Fig.~\ref{trif2}. These plots refer to slightly different values of $\tau$, depending on the different total populations. As the population increases, this quantum threshold approaches the semiclassical one, $\tau_{\rm d} = 0.25$. The remaining rows of Fig.~\ref{trif2} refer to the same values of $\tau$ for both populations.
The quantum threshold condition discussed above, $\max_{\vec{\nu}} P(\vec{\nu}) = P(\frac{N}{3},\frac{N}{3},\frac{N}{3})$, requires a tomography of the quantum state.
A substantially equivalent approach makes use of the average width defined in Eq.~\eqref{aqW}.
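For small populations the threshold behaviour can be reproduced by direct diagonalization. The sketch below (illustrative, NumPy only) builds the attractive Bose-Hubbard Hamiltonian for a trimer, extracts its ground state, and evaluates the average width through the cosine form of Eq.~\eqref{aqW}. The relation $J=\tau|U|N/L$ used to set the hopping is an assumed stand-in for the precise definition of $\tau$ in Eq.~\eqref{tau}:

```python
import numpy as np

def fock_basis(L, N):
    if L == 1:
        return [(N,)]
    return [(n,) + r for n in range(N + 1) for r in fock_basis(L - 1, N - n)]

def bose_hubbard(L, N, J, U):
    """H = -J sum_j (a_j^dag a_{j+1} + h.c.) + U/2 sum_j n_j (n_j - 1), ring."""
    st = fock_basis(L, N)
    idx = {s: i for i, s in enumerate(st)}
    H = np.zeros((len(st), len(st)))
    for i, s in enumerate(st):
        H[i, i] = 0.5 * U * sum(n * (n - 1) for n in s)
        for j in range(L):
            k = (j + 1) % L
            for a, b in ((j, k), (k, j)):          # both hopping directions
                if s[a] > 0:
                    t = list(s); t[a] -= 1; t[b] += 1
                    H[idx[tuple(t)], i] -= J * np.sqrt(s[a] * (s[b] + 1))
    return st, H

L, N, U = 3, 18, -1.0
tau = 0.2                                  # assumed tau = J*L/(|U|*N)
st, H = bose_hubbard(L, N, J=tau * abs(U) * N / L, U=U)
E, V = np.linalg.eigh(H)
P = V[:, 0] ** 2                           # P(nu) = |c_nu|^2

nu = np.array(st, float)
j = np.arange(1, L + 1)
cosm = np.cos(2 * np.pi / L * (j[:, None] - j[None, :]))
w_bar = P @ (1 - np.einsum("jl,aj,al->a", cosm, nu, nu) / N ** 2)
print(E[0], w_bar)                         # average width, Eq. (aqW)
```

The fixed-$N$ basis for $L=3$, $N=18$ has only $\binom{20}{2}=190$ states, so dense diagonalization suffices here; the Lanczos and population-QMC machinery of the paper is needed only for the much larger fillings discussed in the text.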
\subsection{Average width}
In this section we demonstrate how the measured average width for a quantum system, Eq.~\eqref{aqW}, reproduces the semiclassical results shown in Fig.~\ref{scW}. Once again we first of all consider the three-site lattice. Figure \ref{sqWL3} shows the average quantum width, Eq.~\eqref{aqW}, for a lattice comprising $L=3$ sites and for different total boson populations, $N=300$, $600$ and $900$ particles.
\begin{figure}
\begin{centering}
\includegraphics[width=8.2 cm]{SQwidth_M3.eps}
\caption{\label{sqWL3} Average quantum width Eq.~\eqref{aqW} for a trimer containing 300, 600 and 900 bosons. The semiclassical result is also plotted for purposes of comparison.}
\end{centering}
\end{figure}
The semiclassical result appearing in Fig.~\ref{scW} is also plotted for purposes of comparison. Notice first of all that the range of $\tau$'s in Fig.~\ref{sqWL3} is much smaller than that in Fig.~\ref{scW}, and outside this range the quantum results corresponding to the three considered populations agree extremely well with the semiclassical findings. The differences between the three sets of quantum data and the semiclassical ones become perceptible in a narrow region surrounding the semiclassical localization/delocalization threshold, signalled by the vertical dashed gray line. Unlike its semiclassical counterpart,
the average quantum width is a continuous function of the effective parameter $\tau$. However, its derivative becomes extremely large in a very narrow region in the proximity of the semiclassical threshold. The width of this region decreases with increasing boson population so that, in the limit $N\to\infty$, the semiclassical discontinuity corresponding to the catastrophe in the bifurcation pattern of the semiclassical ground state \cite{Buonsante_JPB_39_S77,Oelkers_PRB_75_115119} becomes apparent. The points on the almost vertical stretch of the quantum curves correspond to four-modal probability distributions, like in the fourth row of Fig.~\ref{trif2}.
\begin{figure}
\begin{centering}
\includegraphics[width=8.2cm]{SQwidth_M5.eps}
\caption{\label{sqWL5}
Average quantum width, Eq.~\eqref{aqW}, for a $L=5$ lattice and several boson populations. The solid black line for $N=60$ was obtained by means of Lanczos diagonalization, while all the data points are {\it population} QMC results. The dotted lines are mere guides to the eye. The semiclassical result is also plotted for purposes of comparison.}
\end{centering}
\end{figure}
The same qualitative behaviour is observed on lattices comprising four and five sites. The data corresponding to several boson populations are shown in Fig.~\ref{sqWL5} for the latter case.
Even exploiting all of the system symmetries, the Hamiltonian matrix for the $L=5$ lattice becomes extremely challenging for the Lanczos algorithm around fillings of the order of 12 particles per site. We have nevertheless been able to tackle larger fillings by resorting to a stochastic approach, the so-called {\it population} QMC algorithm \cite{Iba_TJSAI_16_279}.
In Fig.~\ref{sqWL5} we also plot the results for $N = 60$ as provided by the Lanczos algorithm. The comparison with the data points for the same population demonstrates the reliability of our population QMC.
As already observed in the trimer case, Fig.~\ref{sqWL3}, outside a narrow range of $\tau$'s the quantum data corresponding to sufficiently large boson fillings overlap very well with the semiclassical result. An incipient discontinuity is recognized in the proximity of the semiclassical threshold, which becomes more and more evident with increasing total population. The very same behaviour is observed in the $L=4$ case, not shown.
The qualitative change at the transition taking place for lattice sizes larger than 5, evident in Fig.~\ref{scW}, is clearly recognizable also at the quantum level.
\begin{figure}
\begin{centering}
\includegraphics[width=8.2 cm]{SQwidth_M6.eps}
\caption{\label{sqWL6} Average quantum width, Eq.~\eqref{aqW}, for a 6-site lattice at different boson populations. The solid line corresponding to $N=36$ bosons has been obtained by means of Lanczos exact diagonalization. The data points on top of it have been obtained by means of {\it population} QMC \cite{Iba_TJSAI_16_279}, as well as the data points for the larger populations. The dotted lines are mere guides to the eye. The semiclassical result is also plotted for purposes of comparison. }
\end{centering}
\end{figure}
In order to illustrate this we consider the threshold situation, i.e. a lattice comprising $L=6$ sites. The data points in Fig.~\ref{sqWL6} --- also obtained by means of the population QMC algorithm ---
illustrate the behaviour of the quantum width at several boson populations. The solid green line once again demonstrates the agreement between QMC and Lanczos algorithm at filling $N/L=6$. As the boson population is increased the data points for the quantum width get closer and closer to the semiclassical result which, at variance with smaller lattices, is continuous at the transition point.
Before concluding this paper we discuss in more detail the connection between the quantum theory of Hamiltonian \eqref{BH} and its semiclassical counterpart.
\section{${\rm su}(L)$ coherent states}
\label{S:CS}
The DST fixed-point equations \eqref{DST} can be derived by assuming that the
system is well described by a trial state of the form
\begin{equation}
\label{suL}
|\vec{\psi}\rangle = \frac{1}{\sqrt{N!}} \left(\sum_{j=1}^L \frac{\psi_j}{\sqrt{N}}\,a_j^\dag\right)^N |0\rangle\!\rangle.
\end{equation}
This was first sketched in Ref.~\cite{Wright_PhisicaD_69_18}, where the trial state \eqref{suL} was referred to as {\it Hartree wave function}, and subsequently recast in terms of the {\it time-dependent variational principle} in Ref.~\cite{Buonsante_PRA_72_043620}, where Eq.~\eqref{suL} was recognized as a su$(L)$ coherent state (CS).
After extremizing a suitable functional, both approaches result in Eq.~\eqref{DST}, except for a correction factor $(N-1)/N$ appearing in the interaction term. This comes about because Eq.~\eqref{suL} is an eigenstate of the total number operator, $\hat N |\vec{\psi}\rangle = N |\vec{\psi}\rangle$, and is consistent with the expected absence of interaction when only one boson is present in the system. This correction provides a better agreement between quantum and semiclassical theories for small boson populations \cite{Buonsante_PRA_72_043620}, but can be safely neglected at most of the fillings considered in the present paper.
The probability that a measurement of $\hat{\mathbf n}$ on the CS~\eqref{suL} selects a Fock state $|\vec{\nu}\rangle$, Eq.~\eqref{nu},
is given by
\begin{equation}
\label{nupsi}
P(\vec{\nu})=|\langle \vec{\nu}|\vec{\psi}\rangle|^2 = \frac{N!}{N^N} \prod_{j=1}^L \frac{\left(|\psi_j|^2\right)^{\nu_j}}{\nu_j!}
\end{equation}
(see e.g. Ref.~\cite{Buonsante_JPA_41_175301}). Note that the above result applies when the total population of the Fock state is the same as that of the CS,
$\langle\vec{\nu}|\hat N|\vec{\nu}\rangle=\sum_j \nu_j=N$. By using standard methods (see e.g. Ref.~\cite{Huang_SM}) it is easy to appreciate that the above probability distribution is sharply peaked at the Fock state best reproducing the set of semiclassical occupation numbers, $\nu_j \approx \langle\vec{\psi}|\hat n_j|\vec{\psi}\rangle = |\psi_j|^2 $ (see also the discussion about Fig.~\ref{trif3} below).
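Equation \eqref{nupsi} is just a multinomial distribution with success probabilities $|\psi_j|^2/N$, which is why it is normalized over the fixed-$N$ Fock states and peaked at $\nu_j \approx |\psi_j|^2$. A quick check for a trimer (illustrative sketch; the populations are arbitrarily chosen, not taken from the paper):

```python
import math

def P_cs(nu, psi2):
    """Eq. (nupsi): P(nu) = N!/N^N * prod_j (|psi_j|^2)^nu_j / nu_j!."""
    N = sum(nu)
    p = math.factorial(N) / N ** N
    for n, w in zip(nu, psi2):
        p *= w ** n / math.factorial(n)
    return p

N, psi2 = 30, (20.0, 6.0, 4.0)             # semiclassical populations, sum = N
triples = [(a, b, N - a - b) for a in range(N + 1) for b in range(N - a + 1)]
total = sum(P_cs(nu, psi2) for nu in triples)
mode = max(triples, key=lambda nu: P_cs(nu, psi2))
print(round(total, 10), mode)              # normalized; peaked at (20, 6, 4)
```

Writing $q_j=|\psi_j|^2/N$ turns Eq.~\eqref{nupsi} into the standard multinomial law $N!\prod_j q_j^{\nu_j}/\nu_j!$, whose mode sits at the integer occupations closest to $Nq_j=|\psi_j|^2$.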
Since the dynamical variables $\psi_j$ are determined by the DST equations \eqref{DST}, the CS breaks the translation symmetry below the localization threshold. That is, denoting by $D$ the matrix producing a cyclic shift of the vector entries, $(D \vec{\nu})_j = \nu_{j+1} $, one gets $P(D^k\vec{\nu}) \neq P(\vec{\nu})$ for $0<k<L$.
In Ref.~\cite{Buonsante_PRA_72_043620} it was shown that, for sufficiently strong interactions, the above symmetry-breaking CS is well described by a uniform superposition of the lowest-energy states of the $L$ quasimomentum blocks of Hamiltonian \eqref{BH}.
Here we take a somewhat complementary standpoint, and consider a uniform superposition of $L$ equivalent, symmetry-breaking CS
\begin{equation}
\label{sCS}
|\vec{\phi}\rangle\!\rangle = \frac{1}{\sqrt{L}}\sum_{t=1}^L |D^t\vec{\phi}\rangle = \frac{1}{\sqrt{L}}\sum_{t=1}^L \hat D^{t}|\vec{\phi}\rangle
\end{equation}
where the displacement quantum operator $\hat D$ is defined at the beginning of Sec. \ref{results}.
The $L$ (nonuniform) entries of the vector $\vec{\phi}$, normalized such that $\langle\!\langle\vec{\phi} |\vec{\phi}\rangle\!\rangle = \langle\vec{\phi} |\vec{\phi}\rangle = N^{-1}\sum_j |\phi_j|^2= 1$, are yet to be determined.
The optimal set of entries is such that the above symmetrized state attains the minimum energy. This entails
\begin{eqnarray}
\label{min1}
\frac{d}{d\phi_j^*}\left[ \langle\!\langle\vec{\phi} |\hat H|\vec{\phi}\rangle\!\rangle -\lambda \left(\langle\!\langle\vec{\phi} |\vec{\phi}\rangle\!\rangle-1\right)\right] &=& 0\\
\label{min2}
\frac{d}{d\lambda}\left[ \langle\!\langle\vec{\phi} |\hat H|\vec{\phi}\rangle\!\rangle -\lambda \left(\langle\!\langle\vec{\phi} |\vec{\phi}\rangle\!\rangle-1\right)\right] &=& 0
\end{eqnarray}
where $\lambda$ is a Lagrange multiplier enforcing the constraint on the norm.
More explicitly this means
\begin{widetext}
\begin{eqnarray}
\lambda \sum_r \Pi^{N-1}_{r} \phi_{h +r}
&=&
- \sum_r
\left[\, \frac{N-1}{N} \Pi^{N-2}_r \,U \phi^*_{h} \phi_{h+r}^2
+J \, \Pi^{N-1}_r \, \left ( \phi_{h+r-1} \, + \phi_{h+r+1} \right )
\right]
\nonumber\\
\label{minim1}
&-& \frac{N-1}{N} \sum_r \phi_{h+r}\sum_s
\left[
\frac{N-2}{N} \Pi^{N-3}_r
\,\frac{U}{2} \, (\phi^*_{s})^2 \phi_{s+r}^2
+J \, \Pi^{N-2}_r \, \left(
\phi_{s+r-1} + \phi_{s+r+1} \right)\, \phi^*_{s}\,
\right],
\end{eqnarray}
\end{widetext}
and
\begin{equation}
\label{minim2}
\sum_r \Pi_r^N = 1
\end{equation}
where we introduced the shorthand notation $\Pi_r~=~N^{-1}\sum_s \phi_s^* \phi_{s+r} $.
Since nonuniform $\vec{\phi}$'s yield $\Pi_0 > |\Pi_k|$ for $k\neq 0$, the only possible solution of Eq.~\eqref{minim2} in the $N\to \infty$ limit is $\Pi_0=1$, i.e. the usual normalization condition $\sum_j |\phi_j|^2=N$. This entails that $\Pi_r^N \stackrel{N\to\infty}{\longrightarrow} \delta_{r, 0}$, which
considerably simplifies Eq.~\eqref{minim1}
\begin{equation}
\label{minim1b}
(\lambda-{\cal E}) \phi_h = -\frac{N-1}{N} U |\phi_h|^2 \phi_h - J(\phi_{h+1}+\phi_{h-1})
\end{equation}
where ${\cal E} = \langle\vec{\phi}|\hat{H}|\vec{\phi}\rangle$. Comparing Eqs.~\eqref{DSTfp} and \eqref{minim1b} shows that, in the large population limit, the parameters $\vec{\phi}$ to be plugged into the symmetrized CS \eqref{sCS} do not need to be obtained from the (numerically demanding) minimization inherent in Eqs.~\eqref{min1} and \eqref{min2}, but can be replaced by the (much more easily determined) normal modes of the dynamical DST equations \eqref{DST}, i.e. the solutions of Eq.~\eqref{DSTfp}. The comparison of Eqs.~\eqref{DSTfp}, \eqref{tau} and \eqref{minim1b} shows that the semiclassical parameter corresponding to Eq.~\eqref{minim1b} must be rescaled by a factor $\frac{N}{N-1}$.
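A localized solution of Eq.~\eqref{minim1b} can be obtained by a simple self-consistent iteration, as in the following sketch. It is illustrative only: the sign convention (the attractive interaction entering as a self-focusing diagonal term with $U>0$ here), the seed, and the mixing parameter are choices of this example, not of the paper:

```python
import numpy as np

def dst_ground_state(L, N, J, U, iters=400, mix=0.5):
    """Self-consistent solution of
    lambda*phi_h = -((N-1)/N)*U*|phi_h|^2*phi_h - J*(phi_{h+1} + phi_{h-1}),
    with normalization sum_j |phi_j|^2 = N (ring boundary conditions)."""
    h = np.arange(L)
    phi = np.sqrt(N / L) * (1 + 0.01 * np.cos(2 * np.pi * h / L))  # seed bump
    phi *= np.sqrt(N / (phi ** 2).sum())
    hop = -J * (np.eye(L, k=1) + np.eye(L, k=-1))
    hop[0, -1] = hop[-1, 0] = -J
    for _ in range(iters):
        Hmf = hop + np.diag(-(N - 1) / N * U * phi ** 2)
        _, v = np.linalg.eigh(Hmf)                 # linearized eigenproblem
        phi = mix * np.sqrt(N) * np.abs(v[:, 0]) + (1 - mix) * phi
        phi *= np.sqrt(N / (phi ** 2).sum())       # keep sum |phi|^2 = N
    return phi

phi = dst_ground_state(L=6, N=60, J=0.5, U=1.0)
print(np.round(phi ** 2, 2))   # soliton-like profile, peaked on one site
```

The small cosine bump in the seed breaks the translation symmetry by hand, mimicking the spontaneous symmetry breaking of the nonlinear equation; the iteration then converges to one member of the $L$-fold degenerate family of localized solutions.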
\begin{figure}
\begin{centering}
\begin{tabular}{ccc}
\includegraphics[width=2.8 cm]{tri_N300_tau0p2_cs.eps} &
\includegraphics[width=2.8 cm]{tri_N300_tau0p24918_cs.eps} &
\includegraphics[width=2.8 cm]{tri_N300_tau0p25_cs.eps}
\end{tabular}
\caption{\label{trif3} Density plots of the probability distribution $P(\vec{\nu})=|\langle\vec{\nu}|\vec{\phi}\rangle\!\rangle|^2$ corresponding to the symmetrized coherent state in Eq.~\eqref{sCS} for a trimer containing $N=300$ bosons.
We choose the same values of the effective parameter $\tau$, Eq.~\eqref{tau} as in the first, third and fourth panel from top in the left column of Fig.~\ref{trif2}. A white cross symbol signals the position of one of the three equivalent semiclassical ground-states (see text). The density plot is scaled with respect to $P_{\rm M}= \max_{\vec{\nu}} P(\vec{\nu})$.}\end{centering}
\end{figure}
In Fig.~\ref{trif3} we show some density plots for the probability distribution $P(\vec{\nu})=|\langle\vec{\nu}|\vec{\phi}\rangle\!\rangle|^2$ corresponding to the symmetrized coherent state in Eq.~\eqref{sCS} for a three-site lattice containing $N=300$ bosons. The parameters $\vec{\phi}$ are not obtained by solving Eqs.~\eqref{min1} and \eqref{min2}. Instead, in keeping with the above discussion, we use the solution of the DST fixed-point equation \eqref{DSTfp}, given the fairly large boson filling. The parameter $\tau$ controlling the relative strength of the tunneling and interaction energy, Eq.~\eqref{tau}, has been chosen as in the first, third and fourth panels in the left column of Fig.~\ref{trif2}, respectively. As we mentioned earlier, the probability density \eqref{nupsi} for the su$(L)$ coherent state \eqref{suL} is strongly peaked at the occupation numbers $\{\nu_j\}$ best reproducing the semiclassical local populations $\{|\phi_j|^2\}$. It is then clear that, in the localized regime, the density of the symmetrized state \eqref{sCS} is qualitatively similar to that of the quantum state analyzed in Fig.~\ref{trif2}. It features three peaks, each relevant to one of the Fock states corresponding to the three equivalent, symmetry-breaking solutions of Eq.~\eqref{DSTfp}. For sufficiently large interactions (small $\tau$'s) the similarity is striking, as was already pointed out from a different perspective in Ref.~\cite{Buonsante_PRA_72_043620}.
The differences between the quantum ground state in Fig.~\ref{trif2} and the symmetrized CS in Fig.~\ref{trif3} become evident in the vicinity of the semiclassical threshold. In particular, at variance with the former, the latter is structurally unable to give rise to a four-modal distribution, as it is clear after comparing the central panel of Fig.~\ref{trif3} and the third panel from top in the left column of Fig.~\ref{trif2}.
Above the delocalization threshold there is no need to symmetrize the CS in Eq.~\eqref{suL}, because the ground-state solution of Eq.~\eqref{DSTfp} becomes translation invariant, $\phi_j = \sqrt{N/L}$, which makes the CS and the corresponding probability density $\tau$-independent. As is clear from the comparison between the rightmost panel in Fig.~\ref{trif3} and the bottom panel in the left column of Fig.~\ref{trif2}, the similarity between the CS and the quantum ground state is already rather satisfactory at $\tau=1$. It improves with increasing $\tau$, and becomes perfect in the noninteracting limit.
We conclude by observing that the (quantum) width of CS \eqref{sCS} as calculated with Eq.~\eqref{aqW} coincides with the semiclassical value Eq.~\eqref{w2} only in the large-$N$ limit. Straightforward calculations show indeed that
\begin{equation}
\bar w({|\phi\rangle\!\rangle})= \frac{N-1}{N} w\left(\left\{|\phi_j|^2\right\}\right)
\end{equation}
The population-dependent prefactor in the previous equation correctly accounts for the expected delocalization when one single boson is present in the system. Also, it provides a better agreement with the exact result in the large-$\tau$ limit. As we discuss above, the same prefactor affects the semiclassical parameter $\tau$, defined in Eq.~\eqref{tau}, to be used in Eq.~\eqref{minim1b}. This correction makes the width of the symmetrized CS closer to that of the quantum ground state in the small-$\tau$ (large interaction) regime.
This should be clear from Fig.~\ref{sqWL6cs}, and shows that the su$(L)$ CS approach to the semiclassical theory is more effective in capturing the effects arising from finite size than the simple replacement of lattice operators by $c$-numbers used in deriving Eq.~\eqref{DST}.
\begin{figure}
\begin{centering}
\includegraphics[width=8.2 cm]{SQwidth_M6_cs.eps}
\caption{\label{sqWL6cs} Comparison of the (squared) width of the semiclassical ground state, the coherent state and the exact quantum ground-state for a six-site lattice. The latter results refer to a total population of $N=36$ bosons. Note that the three curves overlap in the small-$\tau$ region, and that the coherent state provides a better approximation in the large-$\tau$ limit at the relatively small filling considered. }
\end{centering}
\end{figure}
\section{Conclusions}
In this paper we perform a systematic analysis of ground-state properties of a system of attractive lattice bosons, highlighting the correspondences between the (inherently linear) quantum theory for this system and its nonlinear semiclassical counterpart.
Our analysis relies on the introduction of a suitable measure of the width of
the symmetry-breaking {\it soliton-like} ground state characterizing the nonlinear semiclassical theory in the large interaction limit, which is then readily transported to the quantum level.
This quantity allows us to perform a systematic comparison between the semiclassical and quantum ground state, exposing striking similarities and significantly extending the discussion in Ref. \cite{Javanainen_PRL_101_170405}.
On the one hand, the comparison of the semiclassical localized state and its quantum counterpart is made more quantitative. On the other hand, we extend the parameter range to include the localization/delocalization transition occurring in the semiclassical nonlinear theory, and show that it has a clear correspondent at the quantum level. In particular, we demonstrate that the change in the semiclassical bifurcation pattern is maintained also quantum-mechanically.
Our analysis is enriched by a detailed investigation of the three-site case, which makes it possible to visualize directly the structure of the quantum ground state. We also include a somewhat rigorous discussion of the relation between the quantum theory for attractive lattice bosons and its semiclassical version, related to the discrete self-trapping equations. This highlights a {\it finite-population} effect.
\section*{Introduction}
Let $L_m$ be the free Lie algebra of rank $m\geq 2$ over a field $K$ of characteristic 0 with free generators
$x_1,\ldots,x_m$ and let $L_{m,c}=L_m/(L_m''+L_m^{c+1})$ be the free metabelian nilpotent of class $c$
Lie algebra. This is the relatively free algebra of rank $m$ in the variety of Lie
algebras $\mathfrak A^2\cap \mathfrak N_c$, where $\mathfrak A^2$ is the
metabelian (solvable of class 2) variety of Lie algebras and ${\mathfrak N}_c$ is the variety of
all nilpotent Lie algebras of class at most $c$.
The initial goal of our paper was to describe the groups of inner automorphisms $\text{\rm Inn}(L_{m,c})$
and outer automorphisms $\text{\rm Out}(L_{m,c})$ of
the Lie algebra $L_{m,c}$. The automorphism group $\text{\rm Aut}(L_{m,c})$ is a semidirect product of the normal subgroup
$\text{\rm IA}(L_{m,c})$ of the automorphisms which induce the identity map modulo the commutator ideal
of $L_{m,c}$ and the general linear group $\text{\rm GL}_m(K)$.
Since $\text{\rm Inn}(L_{m,c})\subset \text{\rm IA}(L_{m,c})$,
for the description of the factor group
$\text{\rm Out}(L_{m,c})=\text{\rm Aut}(L_{m,c})/\text{\rm Inn}(L_{m,c})$ it is sufficient to know only
$\text{\rm IA}(L_{m,c})/\text{\rm Inn}(L_{m,c})$.
The composition of two inner automorphisms is an inner automorphism obtained by
the Baker-Campbell-Hausdorff formula which gives the solution $z$ of the equation $e^z=e^x\cdot e^y$
for non-commuting $x$ and $y$.
If $x,y$ are the generators of the free associative algebra $A=K\langle x,y\rangle$, then
$z$ is a formal power series in the completion $\widehat{A}$ with respect to the formal power series topology.
The homogeneous components of $z$ are Lie elements.
Hence, it is natural to work in the completion $\widehat{F_m}$ with respect to the formal power series topology
of the free metabelian Lie algebra $F_m=L_m/L''_m$ and to study the groups of its inner automorphisms and
of its continuous outer automorphisms. Then the results for $\text{\rm Inn}(L_{m,c})$ and $\text{\rm Out}(L_{m,c})$
follow easily by factorization modulo $(\widehat{F_m})^{c+1}$.
Gerritzen \cite{G} found a simple version of the Baker-Campbell-Hausdorff
formula when applied to the algebra $\widehat{F_2}$. This provides a nice expression of
the composition of inner automorphisms
in $\widehat{F_m}$.
A result of Shmel'kin \cite{Sh} states that the free metabelian Lie algebra $F_m$ can be embedded into the abelian
wreath product $A_m\text{\rm wr}B_m$, where $A_m$ and $B_m$ are $m$-dimensional abelian Lie algebras
with bases $\{a_1,\ldots,a_m\}$ and $\{b_1,\ldots,b_m\}$, respectively.
The elements of $A_m\text{\rm wr}B_m$ are of the form $\sum_{i=1}^ma_if_i(t_1,\ldots,t_m)+\sum_{i=1}^m\beta_ib_i$,
where $f_i$ are polynomials in $K[t_1,\ldots,t_m]$ and $\beta_i\in K$. This allows us to introduce partial derivatives
in $F_m$ with values in $K[t_1,\ldots,t_m]$ and the Jacobian matrix $J(\phi)$ of an endomorphism $\phi$ of $F_m$.
Restricted to the semigroup $\text{\rm IE}(F_m)$ of endomorphisms of $F_m$ which are identical modulo
the commutator ideal $F_m'$, the map $J:\phi\to J(\phi)$ is a semigroup monomorphism of $\text{\rm IE}(F_m)$
into the multiplicative semigroup of the algebra $M_m(K[t_1,\ldots,t_m])$ of $m\times m$ matrices
with entries from $K[t_1,\ldots,t_m]$.
We give the explicit form of the Jacobian matrices of inner automorphisms of $\widehat{F_m}$ and of
the coset representatives of the continuous outer automorphisms in $\text{\rm IOut}(\widehat{F_m})$.
Finally we transfer the obtained results to the algebra $L_{m,c}$ and obtain the description of $\text{\rm Inn}(L_{m,c})$
and $\text{\rm IOut}(L_{m,c})$.
\section{Preliminaries}
Let $L_m$ be the free Lie algebra of rank $m\geq 2$ over a field $K$ of characteristic 0
with free generators $x_1,\ldots,x_m$ and let $L_{m,c}=L_m/(L_m''+L_m^{c+1})$ be the free
metabelian nilpotent of class $c$ Lie algebra. It is freely generated by
$y_1,\ldots,y_m$, where $y_i=x_i+(L_m''+L_m^{c+1})$, $i=1,\ldots,m$.
We use the commutator notation for the Lie multiplication. Our commutators are left normed:
\[
[u_1,\ldots,u_{n-1},u_n]=[[u_1,\ldots,u_{n-1}],u_n],\quad n=3,4,\ldots.
\]
In particular,
\[
L_m^k=\underbrace{[L_m,\ldots,L_m]}_{k\text{ times}}.
\]
For each $v\in L_{m,c}$, the linear operator $\text{\rm ad}v:L_{m,c}\to L_{m,c}$ defined by
\[
u(\text{\rm ad}v)=[u,v],\quad u\in L_{m,c},
\]
is a derivation of $L_{m,c}$ which is nilpotent and $\text{\rm ad}^cv=0$
because $L_{m,c}^{c+1}=0$.
Hence the linear operator
\[
\exp(\text{\rm ad}v)=1+\frac{\text{\rm ad}v}{1!}+\frac{\text{\rm ad}^2v}{2!}+\cdots
=1+\frac{\text{\rm ad}v}{1!}+\frac{\text{\rm ad}^2v}{2!}+\cdots+\frac{\text{\rm ad}^{c-1}v}{(c-1)!}
\]
is well defined and is an automorphism of $L_{m,c}$.
The set of all such automorphisms forms a normal subgroup of the group of all
automorphisms $\text{\rm Aut}(L_{m,c})$ of $L_{m,c}$.
This group is called the inner automorphism group of $L_{m,c}$ and is denoted by $\text{\rm Inn}(L_{m,c})$.
The factor group $\text{\rm Aut}(L_{m,c})/\text{\rm Inn}(L_{m,c})$ is called the outer automorphism group of
$L_{m,c}$ and is denoted by $\text{\rm Out}(L_{m,c})$. The automorphism group $\text{\rm Aut}(L_{m,c})$
is a semidirect product of the normal subgroup
$\text{\rm IA}(L_{m,c})$ of the automorphisms which induce the identity map modulo the
commutator ideal of $L_{m,c}$ and the general
linear group $\text{\rm GL}_m(K)$. Since $\text{\rm Inn}(L_{m,c})\subset \text{\rm IA}(L_{m,c})$,
for the description of
$\text{\rm Out}(L_{m,c})=\text{\rm Aut}(L_{m,c})/\text{\rm Inn}(L_{m,c})$ it is sufficient to know only
$\text{\rm IA}(L_{m,c})/\text{\rm Inn}(L_{m,c})$.
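As a quick sanity check (not part of the argument above), the smallest interesting case $L_{2,3}$ can be realized concretely and the fact that $\exp(\text{\rm ad}v)$ respects the bracket can be verified numerically. The basis, structure constants, and all names in the sketch below are our own illustrative choices.

```python
from fractions import Fraction as Fr
import random

# Basis of L_{2,3} (m = 2, c = 3): e0 = y1, e1 = y2, e2 = [y1, y2],
# e3 = [y1, y2, y1], e4 = [y1, y2, y2]; all longer commutators vanish.
DIM = 5

def bracket(a, b):
    """Left-normed Lie bracket [a, b] of coefficient vectors over Q."""
    out = [Fr(0)] * DIM
    out[2] = a[0] * b[1] - a[1] * b[0]   # [y1, y2] component
    out[3] = a[2] * b[0] - a[0] * b[2]   # [[y1, y2], y1] component
    out[4] = a[2] * b[1] - a[1] * b[2]   # [[y1, y2], y2] component
    return out

def exp_ad(v, u, c=3):
    """u -> u * exp(ad v), truncated: ad^c v = 0 since L_{2,c}^{c+1} = 0."""
    res, term, fact = [Fr(0)] * DIM, list(u), 1
    for k in range(c):
        res = [r + x / fact for r, x in zip(res, term)]
        term = bracket(term, v)
        fact *= k + 1
    return res

random.seed(1)
rnd = lambda: [Fr(random.randint(-3, 3)) for _ in range(DIM)]
for _ in range(20):
    a, b, v = rnd(), rnd(), rnd()
    # exp(ad v) respects the bracket, i.e. it is an automorphism:
    assert exp_ad(v, bracket(a, b)) == bracket(exp_ad(v, a), exp_ad(v, b))
```

For instance, $y_1\exp(\text{\rm ad}y_2)=y_1+[y_1,y_2]+\frac{1}{2}[y_1,y_2,y_2]$, which the routine reproduces exactly.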
Let $R$ be an algebra over a field $K$ of characteristic 0.
We consider the topology on $R$ induced by the series
$R\supseteq R^2\supseteq R^3\supseteq\cdots$. This is the topology
in which the sets $r+R^k$, $r\in R$, $k\geq 1$, form a basis for the open sets. It is called the
{\it formal power series topology} on $R$.
Let $F_m=L_m/L''_m$ be the free metabelian Lie algebra
of rank $m$. We shall denote the free generators of $F_m$ with the same symbols $y_1,\ldots,y_m$ as the free generators of
$L_{m,c}$, but now $y_i=x_i+L''_m$, $i=1,\ldots,m$. It is well known, see e.g. \cite{Ba}, that
\[
[y_{i_1},y_{i_2},y_{i_{\sigma(3)}},\ldots,y_{i_{\sigma(k)}}]
=[y_{i_1},y_{i_2},y_{i_3},\ldots,y_{i_k}],
\]
where $\sigma$ is an arbitrary permutation of $3,\ldots,k$ and that $F_m'$ has a basis consisting of all
\[
[y_{i_1},y_{i_2},y_{i_3},\ldots,y_{i_k}],\quad 1\leq i_j\leq m,\quad i_1>i_2\leq i_3\leq\cdots\leq i_k.
\]
Let $(F_m)_{(k)}$ be the subspace of $F_m$ spanned by all monomials
of total degree $k$ in $y_1,\ldots,y_m$.
We consider the formal power series topology on $F_m$. The completion
$\widehat{F_m}$ of $F_m$ with respect to this topology
may be identified with the complete direct sum $\widehat{\bigoplus}_{i\geq 1}(F_m)_{(i)}$.
The elements $f\in \widehat{F_m}$ are formal power series
\[
f=f_1+f_2+\cdots,\quad f_i\in (F_m)_{(i)},\quad i=1,2,\ldots.
\]
The composition of two inner automorphisms in $\text{\rm Inn}(L_{m,c})$ is also an inner automorphism.
It can be obtained by
the Baker-Campbell-Hausdorff formula which gives the solution $z$ of the equation $e^z=e^x\cdot e^y$
for non-commuting $x$ and $y$, see e.g. \cite{Bo} and \cite{Se}.
If $x,y$ are the generators of the free associative algebra $A=K\langle x,y\rangle$, then
\[
z=x+y+\frac{[x,y]}{2}-\frac{[x,y,x]}{12}+\frac{[x,y,y]}{12}-\frac{[x,y,x,y]}{24}+\cdots
\]
is a formal power series in the completion $\widehat{A}$ with respect to the formal power series topology.
The homogeneous components of $z$ are Lie elements.
Hence, studying the inner and the outer automorphisms of $L_{m,c}$,
it is convenient to work in the completion $\widehat{F_m}$ and to study the groups of its inner automorphisms and
of its continuous outer automorphisms. Working in the algebra $F_2=L_2/L_2''$
with generators $y_1=x$, $y_2=y$, the element $z\in\widehat{F_2}$ in the
Baker-Campbell-Hausdorff formula has the form
\[
z=x+y+\sum _{a,b\geq 0}c_{ab}[x,y,\underbrace{x,\ldots,x}_{a},\underbrace{y,\ldots,y}_{b}]
=x+y+\sum _{a,b\geq 0}c_{ab}[x,y]\text{\rm ad}^ax\text{\rm ad}^by
\]
\[
=x+y+[x,y]c(\text{\rm ad}x,\text{\rm ad}y),
\]
where
\[
c(t,u)=\sum_{a,b\geq 0}c_{ab}t^au^b\in{\mathbb Q}[[t,u]].
\]
Gerritzen \cite{G} found a nice description of the formal power series in commuting variables $c(t,u)$
which corresponds to $z\in\widehat{F_2}$. Further
we shall use it to obtain an expression of the composition of inner automorphisms
in $\widehat{F_m}$. We present a slightly modified version of the result of Gerritzen due to the fact
that he uses right normed commutators (and not left normed commutators as we do).
Recall that $F_2$ is isomorphic to the Lie subalgebra generated by $x$ and $y$
in the associative algebra $A/C^2$, where $C=A[A,A]A$ is the commutator ideal of $A=K\langle x,y\rangle$.
\begin{proposition}\label{formula of Gerritzen}{\rm(Gerritzen \cite{G})}
If $z\in\widehat{F_2}$ is the solution of the equation $e^z=e^x\cdot e^y$
in the algebra $\widehat{A/C^2}$, then
\[
z=x+y+[x,y]c(\text{\rm ad}x,\text{\rm ad}y),
\]
where
\[
c(t,u)=\frac{e^uh(t)-h(u)}{e^{t+u}-1}\in K[[t,u]],\quad h(v)=\frac{e^v-1}{v}\in K[[v]].
\]
\end{proposition}
\begin{proof}
Following Gerritzen \cite{G}, if $D_v$ is the derivation of the free metabelian algebra $F_2$
defined by $D_v(u)=[v,u]$, $u\in F_2$, then the solution $z\in \widehat{F_2}$ of the equation $e^z=e^x\cdot e^y$
has the form
\[
z=x+y+H_0(D_x,D_y)[x,y],
\]
where
\[
H_0(t,u)=\frac{h(-t)-h(u)}{e^{-t}-e^u}, \quad h(v)=\frac{e^v-1}{v}.
\]
Since $D_x=-\text{\rm ad}x$ and $D_y=-\text{\rm ad}y$, we obtain
\[
z=x+y+H_0(D_x,D_y)[x,y]=x+y+[x,y]H_0(-\text{\rm ad}x,-\text{\rm ad}y)
=x+y+[x,y]c(\text{\rm ad}x,\text{\rm ad}y),
\]
\[
c(t,u)=H_0(-t,-u)=\frac{h(t)-h(-u)}{e^t-e^{-u}}
=\frac{e^uh(t)-h(u)}{e^{t+u}-1}
\]
after easy computations.
\end{proof}
Note that both the numerator and the denominator of $c(t,u)$ are divisible by
$t+u$. After the cancellation of the common factor $t+u$ the denominator becomes an invertible element of $K[[t,u]]$.
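The closed form of $c(t,u)$ can be checked against the Baker-Campbell-Hausdorff coefficients quoted above, namely $c_{00}=1/2$, $c_{10}=-1/12$, $c_{01}=1/12$, $c_{11}=-1/24$. The following sketch (our own choice of tool, using SymPy) performs the expansion.

```python
import sympy as sp

t, u, eps = sp.symbols('t u eps')
h = lambda v: (sp.exp(v) - 1) / v
c = (sp.exp(u) * h(t) - h(u)) / (sp.exp(t + u) - 1)

# Scale (t, u) -> (eps*t, eps*u) so the eps-expansion collects the
# homogeneous components of c; keep total degree <= 2.
ser = sp.series(c.subs({t: eps * t, u: eps * u}), eps, 0, 3).removeO()
poly = sp.Poly(sp.expand(sp.simplify(ser.subs(eps, 1))), t, u)

# Compare with z = x + y + [x,y]/2 - [x,y,x]/12 + [x,y,y]/12 - [x,y,x,y]/24 + ...
assert poly.coeff_monomial(1) == sp.Rational(1, 2)
assert poly.coeff_monomial(t) == sp.Rational(-1, 12)
assert poly.coeff_monomial(u) == sp.Rational(1, 12)
assert poly.coeff_monomial(t * u) == sp.Rational(-1, 24)
```

The quadratic coefficients $c_{20}$ and $c_{02}$ vanish, in agreement with the absence of $[x,y,x,x]$ and $[x,y,y,y]$ terms in the degree-four part of the series.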
Now we collect the necessary information about wreath products, Jacobian matrices
and formal power series. For details and references see e.g. \cite{BD}.
Let $K[t_1,\ldots,t_m]$ be the
(commutative) polynomial algebra over $K$ freely generated by the
variables $t_1,\ldots,t_m$ and let $A_m$ and $B_m$ be the abelian
Lie algebras with bases $\{a_1,\ldots,a_m\}$ and
$\{b_1,\ldots,b_m\}$, respectively. Let $C_m$ be the free right
$K[t_1,\ldots,t_m]$-module with free generators $a_1,\ldots,a_m$.
We give it the structure of a Lie algebra with trivial multiplication.
The abelian wreath product
$A_m\text{\rm wr}B_m$ is equal to the semidirect sum $C_m\leftthreetimes B_m$. The elements
of $A_m\text{\rm wr}B_m$ are of the form
$\sum_{i=1}^ma_if_i(t_1,\ldots,t_m)+\sum_{i=1}^m\beta_ib_i$, where
$f_i$ are polynomials in $K[t_1,\ldots,t_m]$ and $\beta_i\in K$.
The multiplication in $A_m\text{\rm wr}B_m$ is defined by
\[
[C_m,C_m]=[B_m,B_m]=0,
\]
\[
[a_if_i(t_1,\ldots,t_m),b_j]=a_if_i(t_1,\ldots,t_m)t_j,\quad i,j=1,\ldots,m.
\]
Hence $A_m\text{\rm wr}B_m$ is a metabelian Lie algebra and every mapping $\{y_1,\ldots,y_m\}\to A_m\text{\rm wr}B_m$
can be extended to a homomorphism $F_m\to A_m\text{\rm wr}B_m$.
As a special case of the embedding theorem of Shmel'kin \cite{Sh},
the homomorphism $\varepsilon:F_m\to A_m\text{\rm wr}B_m$ defined by
$\varepsilon(y_i)=a_i+b_i$, $i=1,\ldots,m$, is a monomorphism. If
\[
f=\sum[y_i,y_j]f_{ij}(\text{\rm ad}y_1,\ldots,\text{\rm ad}y_m),\quad f_{ij}(t_1,\ldots,t_m)\in K[t_1,\ldots,t_m],
\]
then
\[
\varepsilon(f)=\sum(a_it_j-a_jt_i)f_{ij}(t_1,\ldots,t_m).
\]
The next lemma follows from \cite{Sh}, see also \cite{BD}.
\begin{lemma}\label{metabelian rule}
The element $\sum_{i=1}^ma_if_i(t_1,\ldots,t_m)$ of
$C_m$ belongs to $\varepsilon (F_m')$ if and only if
$\sum_{i=1}^mt_if_i(t_1,\ldots,t_m)=0$.
\end{lemma}
The embedding of $F_m$ into $A_m\text{\rm wr}B_m$ allows us to introduce partial derivatives
in $F_m$ with values in $K[t_1,\ldots,t_m]$. If $f\in F_m$ and
\[
\varepsilon(f)=\sum_{i=1}^m\beta_ib_i+\sum_{i=1}^ma_if_i(t_1,\ldots,t_m),\quad \beta_i\in K,f_i\in K[t_1,\ldots,t_m],
\]
then
\[
\frac{\partial f}{\partial y_i}=f_i(t_1,\ldots,t_m).
\]
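For a concrete illustration (our own small example), take $f=[y_1,y_2,y_1]=[y_1,y_2]\text{\rm ad}y_1$ in $F_2$. The embedding gives $\varepsilon(f)=(a_1t_2-a_2t_1)t_1$, hence $\partial f/\partial y_1=t_1t_2$ and $\partial f/\partial y_2=-t_1^2$, and the condition of Lemma \ref{metabelian rule} holds. A hedged sketch:

```python
import sympy as sp

# Shmel'kin embedding for m = 2: eps([y_i, y_j] f(ad y)) = (a_i t_j - a_j t_i) f.
t1, t2 = sp.symbols('t1 t2')

def partials(terms):
    """Partial derivatives of sum_{(p,q)} [y_p, y_q] * h_pq for m = 2.

    terms: dict {(p, q): h_pq} with p, q in {1, 2}."""
    d = {1: sp.Integer(0), 2: sp.Integer(0)}
    tt = {1: t1, 2: t2}
    for (p, q), f in terms.items():
        d[p] += tt[q] * f      # coefficient of a_p in eps
        d[q] -= tt[p] * f      # coefficient of a_q in eps
    return d[1], d[2]

# f = [y1, y2, y1] = [y1, y2] ad y1  =>  df/dy1 = t1*t2, df/dy2 = -t1**2.
f1, f2 = partials({(1, 2): t1})
assert (f1, f2) == (t1 * t2, -t1**2)

# Lemma (metabelian rule): elements of eps(F_2') satisfy t1*f1 + t2*f2 = 0.
assert sp.expand(t1 * f1 + t2 * f2) == 0
```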
The Jacobian matrix $J(\phi)$ of an endomorphism $\phi$ of $F_m$
is defined as
\[
J(\phi)=\left(\frac {\partial \phi({y_j})}{\partial y_i}\right)
=\left(\begin{matrix}
\frac {\partial\phi({y_1})}{\partial y_1}&\cdots&\frac {\partial \phi({y_m})}{\partial y_1}\\
\vdots&\ddots&\vdots\\
\frac {\partial\phi({y_1})}{\partial y_m}&\cdots&\frac {\partial \phi({y_m})}{\partial y_m}\\
\end{matrix}\right)\in M_m(K[t_1,\ldots,t_m]),
\]
where $M_m(K[t_1,\ldots,t_m])$ is the associative algebra of $m\times m$ matrices with entries from
$K[t_1,\ldots,t_m]$. Let $\text{\rm IE}(F_m)$ be the multiplicative semigroup of all endomorphisms
of $F_m$ which are identical modulo the commutator ideal $F_m'$.
Let $I_m$ be the identity $m\times m$ matrix and let $S$ be the subspace of
$M_m(K[t_1,\ldots,t_m])$ defined by
\[
S=\left \{(f_{ij})\in M_m(K[t_1,\ldots,t_m]) \mid \sum_{i=1}^mt_if_{ij}=0,j=1,\ldots,m\right \}.
\]
Clearly $I_m+S$ is a subsemigroup of the multiplicative semigroup of $M_m(K[t_1,\ldots,t_m])$.
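That the column condition is preserved under multiplication can be confirmed symbolically: if $S_1,S_2$ have columns in $S$, then so does $S_1+S_2+S_1S_2$. The matrices below are our own small examples for $m=3$, built from gradient columns of elements $[y_p,y_q]f$.

```python
import sympy as sp

m = 3
t = sp.symbols('t1:4')          # t1, t2, t3

def column(p, q, f):
    """Gradient column of [y_{p+1}, y_{q+1}] * f (0-based indices p, q)."""
    col = [sp.Integer(0)] * m
    col[p] += t[q] * f
    col[q] -= t[p] * f
    return col

def s_matrix(cols):
    return sp.Matrix(m, m, lambda i, j: cols[j][i])

in_S = lambda M: all(sp.expand(sum(t[i] * M[i, j] for i in range(m))) == 0
                     for j in range(m))

S1 = s_matrix([column(1, 0, t[0] + t[2]**2), column(2, 1, sp.Integer(1)),
               column(2, 0, t[1])])
S2 = s_matrix([column(2, 0, t[1] * t[2]), column(1, 0, sp.Integer(2)),
               column(2, 1, t[0])])
assert in_S(S1) and in_S(S2)

I = sp.eye(m)
P = (I + S1) * (I + S2)
assert in_S(P - I)              # (I + S1)(I + S2) lies in I_m + S again
```

The general reason is that $\sum_i t_i(S_1S_2)_{ij}=\sum_k\bigl(\sum_i t_i(S_1)_{ik}\bigr)(S_2)_{kj}=0$.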
The completion with respect to the formal power series topology of $A_m\text{\rm wr}B_m$
is the semidirect sum $\widehat{C_m}\leftthreetimes B_m$, where
\[
\widehat{C_m}=\bigoplus_{i=1}^ma_iK[[t_1,\ldots,t_m]].
\]
If $\phi\in \text{\rm IE}(\widehat{F_m})$, then $J(\phi)=I_m+(s_{ij})$,
where $s_{ij}\in K[[t_1,\ldots,t_m]]$.
It is easy to check that if $\phi,\psi \in \text{\rm IE}(\widehat{F_m})$ then $J(\phi \psi)=J(\phi)J(\psi)$.
The following proposition is well known, see e.g. \cite{BD}.
\begin{proposition}\label{met}
The map $J:\text{\rm IE}(F_m)\to I_m+S$ defined by
$\phi\to J(\phi)$ is an isomorphism of the semigroups $\text{\rm IE}(F_m)$ and $I_m+S$.
It extends to an isomorphism
between the group $\text{\rm IE}(\widehat{F_m})=\text{\rm IA}(\widehat{F_m})$ of continuous {\rm IA}-automorphisms
of $\widehat{F_m}$ and the multiplicative group $I_m+\widehat{S}$.
\end{proposition}
\section{Main Results}
In this section we give a formula for the multiplication of inner automorphisms of $\widehat{F_m}$. We
also find the explicit form of the Jacobian matrix of the inner automorphisms of $\widehat{F_m}$ and of
the coset representatives of the continuous outer automorphisms in $\text{\rm IOut}(\widehat{F_m})$. Finally we
transfer the obtained results to the algebra $L_{m,c}$ and obtain the description of $\text{\rm Inn}(L_{m,c})$
and $\text{\rm IOut}(L_{m,c})$.
Let $\text{\rm Inn}(\widehat{F_m})$ denote the set of all inner automorphisms of $\widehat{F_m}$ which are of the form
$\exp(\text{\rm ad}u)$, $u\in \widehat{F_m}$. Our first goal is to give a multiplication rule for
$\text{\rm Inn}(\widehat{F_m})$. For $u\in\widehat{F_m}$ we fix the notation $u=\overline{u}+u_0$,
where $\overline{u}$ is the linear component of $u$ and $u_0\in \widehat{F_m'}$.
\begin{theorem}\label{multiplication in Inn}
Let $u,v\in \widehat{F_m}$. Then the solution $w=w(u,v)\in \widehat{F_m}$ of the equation
$\exp(\text{\rm ad}u)\cdot \exp(\text{\rm ad}v)=\exp(\text{\rm ad}w)$ is
\[
w=H(\overline{u},\overline{v})+u_0(1+(\text{\rm ad}\overline{v})c(\text{\rm ad}\overline{u},\text{\rm ad}\overline{v}))
+v_0(1-(\text{\rm ad}\overline{u})c(\text{\rm ad}\overline{u},\text{\rm ad}\overline{v})),
\]
where
\[
H(\overline{u},\overline{v})=\overline{u}+\overline{v}+[\overline{u},\overline{v}]c(\text{\rm ad}\overline{u},\text{\rm ad}\overline{v}),
\]
\[
c(t_1,t_2)=\frac{e^{t_2}h(t_1)-h(t_2)}{e^{t_1+t_2}-1},\quad h(t)=\frac{e^t-1}{t}.
\]
\end{theorem}
\begin{proof}
The adjoint operator $\text{\rm ad}:\widehat{F_m}\to\text{\rm End}_K\widehat{F_m}$ is a Lie algebra homomorphism.
The algebra $\text{\rm ad}(\widehat{F_m})$ is metabelian, as a homomorphic image of $\widehat{F_m}$.
Hence we may apply the formula of Gerritzen from Proposition \ref{formula of Gerritzen}
for $x=\text{\rm ad}u$, $y=\text{\rm ad}v$, $z=\text{\rm ad}w$. Therefore
\[
\text{\rm ad}w=\text{\rm ad}u+\text{\rm ad}v+[\text{\rm ad}u,\text{\rm ad}v]c(\text{\rm ad}(\text{\rm ad}u),\text{\rm ad}(\text{\rm ad}v))
\]
\[
=\text{\rm ad}u+\text{\rm ad}v+\text{\rm ad}([u,v]c(\text{\rm ad}u,\text{\rm ad}v)).
\]
Since the algebra $\widehat{F_m}$ has no centre, the adjoint representation is faithful and
\[
w=u+v+[u,v]c(\text{\rm ad}u,\text{\rm ad}v).
\]
The metabelian law gives that
\[
[u,v]c(\text{\rm ad}u,\text{\rm ad}v)=[\overline{u}+u_0,\overline{v}+v_0]c(\text{\rm ad}(\overline{u}+u_0),\text{\rm ad}(\overline{v}+v_0))
\]
\[
=([\overline{u},\overline{v}]+[u_0,\overline{v}]-[v_0,\overline{u}])c(\text{\rm ad}\overline{u},\text{\rm ad}\overline{v}),
\]
\[
w=(\overline{u}+u_0)+(\overline{v}+v_0)
+([\overline{u},\overline{v}]+[u_0,\overline{v}]-[v_0,\overline{u}])c(\text{\rm ad}\overline{u},\text{\rm ad}\overline{v})
\]
\[
=H(\overline{u},\overline{v})+u_0(1+(\text{\rm ad}\overline{v})c(\text{\rm ad}\overline{u},\text{\rm ad}\overline{v}))
+v_0(1-(\text{\rm ad}\overline{u})c(\text{\rm ad}\overline{u},\text{\rm ad}\overline{v})).
\]
\end{proof}
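The multiplication rule can be verified exactly in the finite-dimensional quotient $L_{2,3}$, where every series terminates. The encoding below is our own illustrative realization; it compares the matrix of $\exp(\text{\rm ad}u)\exp(\text{\rm ad}v)$ with that of $\exp(\text{\rm ad}w)$ for $w=w(u,v)$ as in the theorem.

```python
from fractions import Fraction as Fr
import random

# Basis of L_{2,3}: e0 = y1, e1 = y2, e2 = [y1,y2], e3 = [y1,y2,y1], e4 = [y1,y2,y2].
DIM = 5

def bracket(a, b):
    out = [Fr(0)] * DIM
    out[2] = a[0] * b[1] - a[1] * b[0]
    out[3] = a[2] * b[0] - a[0] * b[2]
    out[4] = a[2] * b[1] - a[1] * b[2]
    return out

def ad_matrix(v):
    """Matrix of ad v in the row-vector convention u -> u(ad v) = [u, v]."""
    E = [[Fr(i == j) for j in range(DIM)] for i in range(DIM)]
    return [bracket(E[i], v) for i in range(DIM)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(DIM)) for j in range(DIM)]
            for i in range(DIM)]

def exp_ad_matrix(v):
    A = ad_matrix(v)
    I = [[Fr(i == j) for j in range(DIM)] for i in range(DIM)]
    M, P, fact = I, I, 1
    for k in range(1, 3):              # ad^3 v = 0 in L_{2,3}
        P = mat_mul(P, A)
        fact *= k
        M = [[M[i][j] + P[i][j] / fact for j in range(DIM)] for i in range(DIM)]
    return M

# Homogeneous coefficients c_ab of c(t1, t2); terms of total degree >= 2
# act as zero on the commutator ideal of L_{2,3}.
C = {(0, 0): Fr(1, 2), (1, 0): Fr(-1, 12), (0, 1): Fr(1, 12), (1, 1): Fr(-1, 24)}

def apply_c(x, ubar, vbar):
    """x * c(ad ubar, ad vbar), truncated."""
    res = [Fr(0)] * DIM
    for (a, b), coef in C.items():
        y = x
        for _ in range(a):
            y = bracket(y, ubar)
        for _ in range(b):
            y = bracket(y, vbar)
        res = [r + coef * z for r, z in zip(res, y)]
    return res

def w_of(u, v):
    """w(u, v) as in the theorem: H(ubar, vbar) + u0(1 + ad(vbar) c) + v0(1 - ad(ubar) c)."""
    ubar = u[:2] + [Fr(0)] * 3; u0 = [Fr(0)] * 2 + u[2:]
    vbar = v[:2] + [Fr(0)] * 3; v0 = [Fr(0)] * 2 + v[2:]
    H = [a + b + c for a, b, c in
         zip(ubar, vbar, apply_c(bracket(ubar, vbar), ubar, vbar))]
    tu = apply_c(bracket(u0, vbar), ubar, vbar)
    tv = apply_c(bracket(v0, ubar), ubar, vbar)
    return [H[i] + u0[i] + tu[i] + v0[i] - tv[i] for i in range(DIM)]

random.seed(2)
for _ in range(10):
    u = [Fr(random.randint(-2, 2)) for _ in range(DIM)]
    v = [Fr(random.randint(-2, 2)) for _ in range(DIM)]
    lhs = mat_mul(exp_ad_matrix(u), exp_ad_matrix(v))
    assert lhs == exp_ad_matrix(w_of(u, v))
```

For $u=y_1$, $v=y_2$ this recovers $w=y_1+y_2+\frac{1}{2}[y_1,y_2]-\frac{1}{12}[y_1,y_2,y_1]+\frac{1}{12}[y_1,y_2,y_2]$, the class-3 truncation of the Baker-Campbell-Hausdorff series.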
Our next objective is to give the explicit form of the Jacobian matrix of the inner automorphisms of $\widehat{F_m}$.
\begin{theorem}\label{Jacobian of inner autos}
Let $u=\overline{u}+u_0\in \widehat{F_m}$, where
\[
\overline{u}=\sum_{r=1}^mc_ry_r, \quad c_r\in K, r=1,\ldots,m,
\]
is the linear component of $u$ and
\[
u_0=\sum_{p>q} [y_p,y_q]h_{pq}(\text{\rm ad}y_q,\ldots,\text{\rm ad}y_m).
\]
Then
\[
J(\exp(\text{\rm ad}u))=J(\exp(\text{\rm ad}\overline{u}))+D_0T=I_m+(\overline{D}+D_0)T,
\]
\[
\overline{D}=\left(\frac{\partial[y_j,\overline{u}]}{\partial y_i}\right),
\quad D_0=\left(\frac{\partial[y_j,u_0]}{\partial y_i}\right),
\]
\[
T=\sum_{k\geq 0}\frac{t^k}{(k+1)!},\quad t=\sum_{r=1}^mc_rt_r.
\]
More precisely
\[
\overline{D}=\left(
\begin{array}{cccc}
c_2t_2+c_3t_3+\cdots+c_mt_m&-c_1t_2&\cdots&-c_1t_m\\
-c_2t_1&c_1t_1+c_3t_3+\cdots+c_mt_m&\cdots&-c_2t_m\\
\vdots&\vdots&\ddots&\vdots\\
-c_mt_1&-c_mt_2&\cdots&\sum_{k=1}^{m-1}c_kt_k
\end{array}
\right),
\]
\[
D_0=\left(
\begin{array}{cccc}
-t_1f_1&-t_2f_1&\cdots&-t_mf_1\\
-t_1 f_2&-t_2f_2&\cdots&-t_mf_2\\
\vdots&\vdots&\ddots&\vdots\\
-t_1f_m&-t_2f_m&\cdots&-t_mf_m
\end{array}
\right),
\]
where
\[
f_i=\sum_{p>q} \frac{\partial\left([y_p,y_q]h_{pq}(\text{\rm ad}y_q,\ldots,\text{\rm ad}y_m)\right)}{\partial y_i}
\]
\[
=\sum_{q=1}^{i-1}t_qh_{iq}(t_q,\ldots,t_m)-\sum_{p=i+1}^mt_ph_{pi}(t_i,\ldots,t_m).
\]
\end{theorem}
\begin{proof}
If $u=\overline{u}+u_0$, then
\[
\exp(\text{\rm ad}u)(y_j)=y_j+[y_j,u]\sum_{k\geq 0}\frac{(\text{\rm ad}\overline{u})^k}{(k+1)!},\quad j=1,\ldots,m,
\]
because $u_0\in\widehat{F_m'}$ and $\text{\rm ad}u_0$ acts trivially on the commutator ideal of $\widehat{F_m}$. Hence
\[
\exp(\text{\rm ad}u)(y_j)=y_j+[y_j,\overline{u}+u_0]\sum_{k\geq 0}\frac{(\text{\rm ad}\overline{u})^k}{(k+1)!}
\]
\[
=y_j+[y_j,\overline{u}]\sum_{k\geq 0}\frac{(\text{\rm ad}\overline{u})^k}{(k+1)!}
+[y_j,u_0]\sum_{k\geq 0}\frac{(\text{\rm ad}\overline{u})^k}{(k+1)!}
\]
\[
=\exp(\text{\rm ad}\overline{u})(y_j)+[y_j,u_0]\sum_{k\geq 0}\frac{(\text{\rm ad}\overline{u})^k}{(k+1)!}.
\]
Easy calculations give
\[
\frac{\partial\exp(\text{\rm ad}\overline{u})(y_j)}{\partial y_i}
=\delta_{ij}+\frac{\partial[y_j,\overline{u}]}{\partial y_i}\sum_{k\geq 0}\frac{t^k}{(k+1)!},
\]
\[
\frac{\partial}{\partial y_i}[y_j,u_0]\sum_{k\geq 0}\frac{(\text{\rm ad}\overline{u})^k}{(k+1)!}
=\frac{\partial [y_j,u_0]}{\partial y_i}\sum_{k\geq 0}\frac{t^k}{(k+1)!},
\]
\[
i,j=1,\ldots,m,\quad t=\sum_{p=1}^mc_pt_p,
\]
where $\delta_{ij}$ is the Kronecker delta.
This gives the expression
\[
J(\exp(\text{\rm ad}u))=I_m+(\overline{D}+D_0)T,
\quad T=\sum_{k\geq 0}\frac{t^k}{(k+1)!},
\]
\[
\overline{D}=\left(\frac{\partial[y_j,\overline{u}]}{\partial y_i}\right),
\quad D_0=\left(\frac{\partial[y_j,u_0]}{\partial y_i}\right).
\]
Since
\[
\frac{\partial[y_j,\overline{u}]}{\partial y_i}=\frac{\partial}{\partial y_i}\left[y_j,\sum_{r=1}^mc_ry_r\right]
=\frac{\partial}{\partial y_i}\sum_{r\neq j}c_r[y_j,y_r]
=\begin{cases}
\sum_{r\neq j}c_rt_r&i=j,\\
-c_it_j&i\neq j,\end{cases}
\]
we obtain the desired form of the matrix $\overline{D}$. Further,
\[
[y_j,u_0]=-\left(\sum_{p>q} [y_p,y_q]h_{pq}(\text{\rm ad}y_q,\ldots,\text{\rm ad}y_m)\right)\text{\rm ad}y_j
\]
\[
\frac{\partial[y_j,u_0]}{\partial y_i}=-t_j\sum_{p>q}\frac{\partial [y_p,y_q]}{\partial y_i}h_{pq}(t_q,\ldots,t_m),
\quad
\frac{\partial [y_p,y_q]}{\partial y_i}
=\begin{cases}
t_q&p=i,\\
-t_p&q=i,\\
0&p,q\not=i,
\end{cases}
\]
\[
\frac{\partial[y_j,u_0]}{\partial y_i}=-t_jf_i(t_1,\ldots,t_m),
\]
\[
f_i(t_1,\ldots,t_m)=\sum_{q=1}^{i-1}t_qh_{iq}(t_q,\ldots,t_m)-\sum_{p=i+1}^mt_ph_{pi}(t_i,\ldots,t_m).
\]
In this way we obtain the explicit form of the matrix $D_0$.
\end{proof}
Now we shall find the coset representatives of the normal subgroup $\text{\rm Inn}(\widehat{F_m})$
of the group $\text{\rm IA}(\widehat{F_m})$ of IA-automorphisms of $\widehat{F_m}$, i.e.,
we shall find a set of IA-automorphisms $\theta$ of $\widehat{F_m}$ such that
the factor group $\text{\rm IOut}(\widehat{F_m})=\text{\rm IA}(\widehat{F_m})/\text{\rm Inn}(\widehat{F_m})$
of the outer IA-automorphisms of $\widehat{F_m}$
is presented as the disjoint union of the cosets
$\text{\rm Inn}(\widehat{F_m})\theta$.
\begin{theorem}\label{iouthatF}
Let $\Theta$ be the set of automorphisms $\theta$ of $\widehat{F_m}$ with Jacobian matrix of the form
\[
J(\theta)=I_m+\left(\begin{array}{llll}
s(t_2,\ldots,t_m)&f_{12}&\cdots&f_{1m}\\
t_1q_2(t_2,t_3,\ldots,t_m)+r_2(t_2,\ldots,t_m)&f_{22}&\cdots&f_{2m}\\
t_1q_3(t_3,\ldots,t_m)+r_3(t_2,\ldots,t_m)&f_{32}&\cdots&f_{3m}\\
\ \ \ \ \ \ \ \vdots&\ \ \vdots&\ \ddots&\ \ \vdots\\
t_1q_m(t_m)+r_m(t_2,\ldots,t_m)&f_{m2}&\cdots&f_{mm}\\
\end{array}\right),
\]
where $s,q_i,r_i,f_{ij}\in K[[t_1,\ldots,t_m]]$ are formal power series without constant terms
and satisfy the conditions
\[
s+\sum_{i=2}^mt_iq_i=0,\quad \sum_{i=2}^mt_ir_i=0,\quad \sum_{i=1}^mt_if_{ij}=0,\quad j=2,\ldots,m,
\]
$r_i=r_i(t_2,\ldots,t_m)$, $i=2,\ldots,m$, does not depend on $t_1$, $q_i(t_i,\ldots,t_m)$,
$i=2,\ldots,m$, does not depend on $t_1,\ldots,t_{i-1}$
and $f_{12}$ does not contain a summand $dt_2$, $d\in K$.
Then $\Theta$ consists of coset representatives of the subgroup $\text{\rm Inn}(\widehat{F_m})$
of the group $\text{\rm IA}(\widehat{F_m})$ and $\text{\rm IOut}(\widehat{F_m})$ is a disjoint union of
the cosets $\text{\rm Inn}(\widehat{F_m})\theta$, $\theta\in \Theta$.
\end{theorem}
\begin{proof}
Let $A=I_m+(f_{ij})\in I_m+\widehat{S}$,
\[
f_{11}=s,\quad f_{i1}=t_1q_i+r_i,\quad i=2,\ldots,m,
\]
be an $m\times m$ matrix satisfying the conditions of the theorem. The equation
\[
s+\sum_{i=2}^mt_iq_i=0
\]
implies that
\[
t_1s+\sum_{i=2}^mt_i(t_1q_i)=0.
\]
Hence Lemma \ref{metabelian rule} gives that there exists an $f_1$ in the commutator ideal of $\widehat{F_m}$
such that
\[
\frac{\partial f_1}{\partial y_1}=s,\quad \frac{\partial f_1}{\partial y_i}=t_1q_i,\quad i=2,\ldots,m.
\]
Similarly, the conditions
\[
\sum_{i=2}^mt_ir_i=0,\quad \sum_{i=1}^mt_if_{ij}=0,\quad j=2,\ldots,m,
\]
imply that there exist $f_1',f_j$, $j=2,\ldots,m$, in $\widehat{F_m'}$ with
\[
\frac{\partial f_1'}{\partial y_1}=0,\quad \frac{\partial f_1'}{\partial y_i}=r_i,\quad i=2,\ldots,m,
\]
\[
\frac{\partial f_j}{\partial y_i}=f_{ij},\quad i=1,\ldots,j,\quad j=2,\ldots,m.
\]
This means that $A$ is the Jacobian matrix of a certain IA-automorphism of $\widehat{F_m}$.
Now we shall show that for any $\psi\in \text{\rm IA}(\widehat{F_m})$ there exists an inner automorphism
$\phi=\exp(\text{\rm ad}u)\in\text{\rm Inn}(\widehat{F_m})$ and an automorphism $\theta$ in $\Theta$ such that
$\psi=\exp(\text{\rm ad}u)\cdot \theta$.
Let $\psi$ be an arbitrary element of $\text{\rm IA}(\widehat{F_m})$ and let
\[
\psi(y_1)=y_1+\sum_{k>l} [y_k,y_l]f_{kl}(\text{\rm ad}y_l,\ldots,\text{\rm ad}y_m),
\]
\[
\psi(y_2)=y_2+\sum_{k>l} [y_k,y_l]g_{kl}(\text{\rm ad}y_l,\ldots,\text{\rm ad}y_m),
\]
where $f_{kl}=f_{kl}(t_l,\ldots,t_m),g_{kl}=g_{kl}(t_l,\ldots,t_m)\in K[[t_1,\ldots,t_m]]$.
Let us denote the $m\times 2$ matrix consisting of the first two columns of $J(\psi)$ by
$J(\psi)_2$. Then $J(\psi)_2$ is of the form
\[
J(\psi)_2=\left(\begin{array}{cccc}
1-t_2f_{21}-t_3f_{31}-\cdots-t_mf_{m1}&-t_2g_{21}-t_3g_{31}-\cdots-t_mg_{m1}\\
t_1f_{21}-t_3f_{32}-\cdots-t_mf_{m2}&1+t_1g_{21}-t_3g_{32}-\cdots-t_mg_{m2}\\
t_1f_{31}+t_2f_{32}-\cdots-t_mf_{m3}&\ast\\
\vdots&\vdots\\
t_1f_{m1}+\cdots+t_{m-1}f_{m(m-1)}&\ast\\
\end{array}\right),
\]
where we have denoted by $\ast$ the corresponding entries of the second column of
the Jacobian matrix of $\psi$. Let
\[
c_1=-g_{21}(0,\ldots,0),\quad
c_k=f_{k1}(0,\ldots,0),\quad k=2,\ldots,m,
\]
and let us define
\[
\phi_0=\exp(\text{\rm ad}\overline{u}),\quad \overline{u}=\sum_{i=1}^mc_iy_i.
\]
Then
\[
J(\phi_0)=I_m+\left(\begin{array}{llll}
\sum_{i\not=1}c_it_i&-c_1t_2&\cdots&-c_1t_m\\
-c_2t_1&\sum_{i\not=2}c_it_i&\cdots&-c_2t_m\\
-c_3t_1&-c_3t_2&\cdots&-c_3t_m\\
\ \ \ \ \ \ \ \vdots&\ \ \vdots&\ \ddots&\ \ \vdots\\
-c_mt_1&-c_mt_2&\cdots&\sum_{i\not=m}c_it_i\\
\end{array}\right)+B_0,
\]
where the entries of the $m\times m$ matrix $B_0$ are elements of $K[[t_1,\ldots,t_m]]$
which do not contain constant and linear terms. Hence
\[
J(\phi_0\psi)_2=\left(\begin{array}{cccc}
1+t_1s_1(t_1,\ldots,t_m)+s_2(t_2,\ldots,t_m)&g(\widehat{t_2})\\
t_1^2p_2(t_1,\ldots,t_m)+t_1q_2(t_2,\ldots,t_m)+r_2(t_2,\ldots,t_m)&\ast\\
t_1^2p_3(t_1,\ldots,t_m)+t_1q_3(t_2,\ldots,t_m)+r_3(t_2,\ldots,t_m)&\ast\\
\vdots&\vdots\\
t_1^2p_m(t_1,\ldots,t_m)+t_1q_m(t_2,\ldots,t_m)+r_m(t_2,\ldots,t_m)&\ast\\
\end{array}\right),
\]
where $t_1s_1(t_1,\ldots,t_m)$ and $s_2(t_2,\ldots,t_m)$
have no linear terms, and
$g(\widehat{t_2})$ does not contain a summand of the form $dt_2$, $d\in K$.
In the first column of $J(\phi_0\psi)_2$ we have collected the components $t_1^2p_i$ divisible by $t_1^2$,
then the components $t_1q_i$ divisible by $t_1$ only (but not by $t_1^2$)
and finally the components $r_i$ which do not depend on $t_1$, $i=2,\ldots,m$.
By Lemma \ref{metabelian rule} we obtain
\[
t_1^2(s_1+t_2p_2+\cdots+t_mp_m)=0,
\]
\[
t_1(s_2+t_2q_2+\cdots+t_mq_m)=0,
\]
\[
t_2r_2+\cdots+t_mr_m=0.
\]
Let us define $T_s=\{t_s,\ldots,t_m\}$ and rewrite $J(\phi_0\psi)_2$ as
\[
J(\phi_0\psi)_2=\left(\begin{array}{llll}
1-t_1t_2p_2-\cdots-t_1t_mp_m-t_2q_2-\cdots-t_mq_m&g(\widehat{t_2})\\
t_1^2p_2+t_1q_2(T_2)+r_2(T_2)&*\\
t_1^2p_3+t_1q_3(T_2)+r_3(T_2)&*\\
\ \ \ \ \ \ \ \ \ \vdots&\vdots\\
t_1^2p_m+t_1q_m(T_2)+r_m(T_2)&*\\
\end{array}\right).
\]
Now we define
\[
\phi_1=\exp(\text{\rm ad}u_1),\quad u_1=\sum_{i=2}^m[y_i,y_1]p_i(\text{\rm ad}y_1,\ldots,\text{\rm ad}y_m).
\]
The Jacobian matrix of $\phi_1$ has the form
\[
J(\phi_1)=I_m+\left(\begin{array}{llll}
t_1\sum_{i\not=1}t_ip_i&t_2\sum_{i\not=1}t_ip_i&\cdots&t_m\sum_{i\not=1}t_ip_i\\
-t_1^2p_2&-t_1t_2p_2&\cdots&-t_1t_mp_2\\
\ \ \ \ \ \ \ \vdots&\ \ \vdots&\ \ddots&\ \ \vdots\\
-t_1^2p_m&-t_1t_2p_m&\cdots&-t_1t_mp_m\\
\end{array}\right).
\]
The element $u_1$ belongs to the commutator ideal of $\widehat{F_m}$ and
the linear operator $\text{\rm ad}u_1$ acts trivially on $\widehat{F_m'}$.
Hence $\exp(\text{\rm ad}u_1)$ restricted to $\widehat{F_m'}$ is the identity map.
Since the automorphism $\phi_0\psi$ is IA, we obtain that
\[
\phi_0\psi(y_j)\equiv y_j\quad (\text{\rm mod }\widehat{F_m'}),
\quad \phi_1(\phi_0\psi(y_j))=\phi_0\psi(y_j)+y_j\text{\rm ad}u_1.
\]
Easy calculations give that
\[
J(\phi_1\phi_0\psi)_2=\left(\begin{array}{cccc}
1-t_2p_2-\cdots-t_mp_m&g(\widehat{t_2})\\
t_1q_2(T_2)+r_2(T_2)&\ast\\
t_1q_3(T_2)+r_3(T_2)&\ast\\
\vdots&\vdots\\
t_1q_m(T_2)+r_m(T_2)&\ast\\
\end{array}\right).
\]
Now we write $q_i(T_2)$ in the form
\[
q_i(T_2)=t_2q_i'(T_2)+q_i''(T_3),\quad i=3,\ldots,m,
\]
and define
\[
\phi_2=\exp(\text{\rm ad}u_2),\quad
u_2=\sum_{i=3}^m[y_i,y_2]q_i'(\text{\rm ad}y_2,\ldots,\text{\rm ad}y_m).
\]
Then we obtain that
\[
J(\phi_2\phi_1\phi_0\psi)_2=\left(\begin{array}{cccc}
1-t_2p_2-\cdots-t_mp_m&g(\widehat{t_2})\\
t_1Q_2(T_2)+r_2(T_2)&\ast\\
t_1q_3''(T_3)+r_3(T_2)&\ast\\
\vdots&\vdots\\
t_1q_m''(T_3)+r_m(T_2)&\ast\\
\end{array}\right),
\]
\[
Q_2(T_2)=q_2(T_2)-\sum_{i=3}^mt_iq_i'(T_2).
\]
Repeating this process we construct inner automorphisms $\phi_3,\ldots,\phi_{m-1}$ such that
\[
\theta=\phi_{m-1}\cdots\phi_2\phi_1\phi_0\psi,
\]
\[
J(\phi_{m-1}\cdots\phi_2\phi_1\phi_0\psi)_2=\left(\begin{array}{cccc}
1+s(T_2)&g(\widehat{t_2})\\
t_1Q_2(T_2)+r_2(T_2)&\ast\\
t_1Q_3(T_3)+r_3(T_2)&\ast\\
\vdots&\vdots\\
t_1Q_m(T_m)+r_m(T_2)&\ast\\
\end{array}\right),
\]
\[
s(T_2)=-t_2p_2(T_2)-\cdots-t_mp_m(T_2).
\]
Hence, starting from an arbitrary coset of IA-automorphisms $\text{\rm Inn}(\widehat{F_m})\psi$,
we found that it contains an automorphism $\theta\in\Theta$ with Jacobian matrix
prescribed in the theorem.
Now, let $\theta_1$ and $\theta_2$ be two different automorphisms in $\Theta$ with
$\text{\rm Inn}(\widehat{F_m})\theta_1=\text{\rm Inn}(\widehat{F_m})\theta_2$. Hence,
there exists a nonzero element $u\in \widehat{F_m}$
such that $\theta_1=\exp(\text{\rm ad}u)\theta_2$. Direct calculations show that this
is in contradiction with the form of $J(\theta_1)$.
\end{proof}
\begin{example}
When $m=2$ the results of Theorems \ref{Jacobian of inner autos} and \ref{iouthatF}
have the following simple form. If $u=\overline{u}+u_0$,
\[
\overline{u}=c_1y_1+c_2y_2,\quad u_0=[y_2,y_1]h(\text{\rm ad}y_1,\text{\rm ad}y_2),
\quad h(t_1,t_2)\in K[[t_1,t_2]],
\]
then the Jacobian matrix of the inner automorphism $\exp(\text{ad}u)$ is
\[
J(\exp(\text{ad}u))=I_2+\left(\begin{matrix}
(c_2+t_1h)t_2&(-c_1+t_2h)t_2\\
-(c_2+t_1h)t_1&-(-c_1+t_2h)t_1\\
\end{matrix}\right)\sum_{k\geq 0}\frac{(c_1t_1+c_2t_2)^k}{(k+1)!}.
\]
The Jacobian matrix of the outer automorphism $\theta\in\Theta$ is
\[
J(\theta)=I_2+\left(\begin{matrix}
t_2f_1(t_2)&t_2f_2(t_1,t_2)\\
-t_1f_1(t_2)&-t_1f_2(t_1,t_2)\\
\end{matrix}\right),\quad f_2(0,0)=0.
\]
\end{example}
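Both displayed matrices can be checked to satisfy the defining condition of $S$, i.e. $t_1f_{1j}+t_2f_{2j}=0$ column by column, consistent with Proposition \ref{met}. A short symbolic check, with $h$, $f_1$, $f_2$ kept as unspecified functions (our own illustrative setup):

```python
import sympy as sp

t1, t2, c1, c2 = sp.symbols('t1 t2 c1 c2')
h = sp.Function('h')(t1, t2)     # arbitrary series, kept symbolic
f1 = sp.Function('f1')(t2)
f2 = sp.Function('f2')(t1, t2)
T = sp.Symbol('T')               # stands for the common factor sum (c1 t1 + c2 t2)^k / (k+1)!

# S-parts of the two Jacobian matrices from the example (m = 2).
J_inner = sp.Matrix([[ (c2 + t1*h)*t2, (-c1 + t2*h)*t2],
                     [-(c2 + t1*h)*t1, -(-c1 + t2*h)*t1]]) * T
J_theta = sp.Matrix([[ t2*f1,  t2*f2],
                     [-t1*f1, -t1*f2]])

for J in (J_inner, J_theta):
    for j in range(2):
        # Each column lies in S: t1*f_{1j} + t2*f_{2j} = 0.
        assert sp.expand(t1*J[0, j] + t2*J[1, j]) == 0
```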
Recall that the augmentation ideal $\omega$ of the polynomial algebra $K[t_1,\ldots,t_m]$ consists of the polynomials
without constant terms and its completion $\widehat{\omega}$ is the ideal of $K[[t_1,\ldots,t_m]]$ of all formal power
series without constant terms. The elements of the commutator ideal of
the free metabelian nilpotent of class $c$ Lie algebra $L_{m,c}$ are of the form
\[
u_0=\sum_{p>q} [y_p,y_q]h_{pq}(\text{\rm ad}y_q,\ldots,\text{\rm ad}y_m),
\]
where $h_{pq}(t_q,\ldots,t_m)$ belongs to the factor algebra $K[t_1,\ldots,t_m]/\omega^{c-1}$
and may be identified with a polynomial of degree $\leq c-2$. The partial derivative $\partial u/\partial x_i$
belongs to $K[t_1,\ldots,t_m]/\omega^c$ and
may be considered as a polynomial of degree $\leq c-1$. Similarly, the Jacobian matrix of an
endomorphism of $L_{m,c}$ is considered modulo the ideal $M_m(\omega^c)$ of $M_m(K[t_1,\ldots,t_m])$.
As a consequence of our Theorems \ref{Jacobian of inner autos} and \ref{iouthatF}
for $\text{\rm Inn}(\widehat{F_m})$ and $\text{\rm IOut}(\widehat{F_m})$ we
immediately obtain the description of the groups of inner and outer automorphisms of
$L_{m,c}$. We shall give the results for the Jacobian matrices only. The multiplication rule for
the group $\text{\rm Inn}(L_{m,c})$ from Theorem \ref{multiplication in Inn} can be stated similarly.
\begin{corollary}
\text{\rm (i)} Let $u=\overline{u}+u_0\in L_{m,c}$, where
\[
\overline{u}=\sum_{r=1}^mc_ry_r, \quad c_r\in K, r=1,\ldots,m,
\]
is the linear component of $u$ and
\[
u_0=\sum_{p>q} [y_p,y_q]h_{pq}(\text{\rm ad}y_q,\ldots,\text{\rm ad}y_m),
\quad h_{pq}(t_1,\ldots,t_m)\in K[t_1,\ldots,t_m]/\omega^{c-1}.
\]
Then the Jacobian matrix of the inner automorphism $\exp(\text{\rm ad}u)$ is
\[
J(\exp(\text{\rm ad}u))=J(\exp(\text{\rm ad}\overline{u}))+D_0T\equiv I_m+(\overline{D}+D_0)T \quad (\text{\rm mod }M_m(\omega^c)),
\]
\[
\overline{D}=\left(\frac{\partial[y_j,\overline{u}]}{\partial y_i}\right),
\quad D_0\equiv\left(\frac{\partial[y_j,u_0]}{\partial y_i}\right) \quad (\text{\rm mod }M_m(\omega^c)),
\]
\[
T=\sum_{k=0}^{c-1}\frac{t^k}{(k+1)!},\quad t=\sum_{r=1}^mc_rt_r.
\]
\text{\rm (ii)}
The automorphisms with the following Jacobian matrices are
coset representatives of the subgroup $\text{\rm Inn}(L_{m,c})$
of the group $\text{\rm IA}(L_{m,c})$:
\[
J(\theta)=I_m+\left(\begin{array}{llll}
s(t_2,\ldots,t_m)&f_{12}&\cdots&f_{1m}\\
t_1q_2(t_2,t_3,\ldots,t_m)+r_2(t_2,\ldots,t_m)&f_{22}&\cdots&f_{2m}\\
t_1q_3(t_3,\ldots,t_m)+r_3(t_2,\ldots,t_m)&f_{32}&\cdots&f_{3m}\\
\ \ \ \ \ \ \ \vdots&\ \ \vdots&\ \ddots&\ \ \vdots\\
t_1q_m(t_m)+r_m(t_2,\ldots,t_m)&f_{m2}&\cdots&f_{mm}\\
\end{array}\right),
\]
where $s,q_i,r_i,f_{ij}\in \omega/\omega^c$, i.e., are polynomials of degree $\leq c-1$ without constant terms.
They satisfy the conditions
\[
s+\sum_{i=2}^mt_iq_i\equiv 0,\quad \sum_{i=2}^mt_ir_i\equiv 0,\quad \sum_{i=1}^mt_if_{ij}\equiv 0
\quad (\text{\rm mod }\omega^{c+1}), \quad j=2,\ldots,m,
\]
$r_i=r_i(t_2,\ldots,t_m)$, $i=1,\ldots,m$, does not depend on $t_1$, $q_i(t_i,\ldots,t_m)$,
$i=2,\ldots,m$, does not depend on $t_1,\ldots,t_{i-1}$
and $f_{12}$ does not contain a summand $dt_2$, $d\in K$.
\end{corollary}
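For orientation (this remark is ours, not part of the original statement), the polynomial $T$ appearing in part (i) of the corollary is the degree-$(c-1)$ truncation of a familiar entire function, which explains its appearance in the Jacobian matrix of $\exp(\text{\rm ad}u)$:

```latex
% Since \sum_{k\ge 0} t^k/(k+1)! = (e^t - 1)/t, the series T in part (i)
% is the truncation of (e^t - 1)/t to degree c-1:
T=\sum_{k=0}^{c-1}\frac{t^{k}}{(k+1)!}
 \;\equiv\;\frac{e^{t}-1}{t} \pmod{t^{c}},
\qquad t=\sum_{r=1}^{m}c_{r}t_{r}.
```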
\section*{Acknowledgements}
The second named author is grateful to the Institute of Mathematics and Informatics of
the Bulgarian Academy of Sciences for the creative atmosphere and the warm hospitality during his visit when
most of this project was carried out.
\section{Introduction}
Incommensurately-modulated antiferromagnetic (AF) correlations appear to be a universal feature in the hole-doped high-temperature superconducting cuprates La$_{2-x}$Sr$_{x}$CuO$_{4}$ (LSCO) and YBa$_{2}$Cu$_{3}$O$_{6+y}$.~\cite{Birgeneau_JPSJ06}
Neutron-scattering measurements of the low-energy dynamical AF correlations in underdoped samples reveal peaks at the wave vectors $(0.5\pm\delta, 0.5)$ and $(0.5, 0.5\pm\delta)$ in reciprocal lattice units (r.l.u.) of the CuO$_{2}$ square lattice (1 r.l.u. $\sim 1.67$~\AA$^{-1}$). Here $\delta$ is referred to as the incommensurability. These magnetic signals disperse inwards, become commensurate in an intermediate energy range, and then disperse outwards, exhibiting the characteristic ``hour-glass''-shaped spectrum.~\cite{hayd04,tran04}
In LSCO, such low-energy incommensurate (IC) magnetic correlations disappear together with the superconductivity in the heavily overdoped region, indicating a strong correlation between superconductivity and AF correlations.~\cite{waki_04,waki_07,lips_07} However, in cuprates with higher $T_c$'s, such as the Bi-, Tl-, and Hg-based systems, reports of magnetic fluctuations are very sparse~\cite{xu_cm09} and clear IC AF correlations have not been observed, whereas commensurate resonant magnetic scattering in the superconducting state has been observed.~\cite{Bi,Tl} Thus, there are gaps in the experimental record relevant to understanding the role of magnetism in the mechanism of superconductivity.
Very recently, it has been reported that the replacement of a small amount of Cu sites by magnetic Fe$^{3+}$ ions in the La$_{2-x}$Sr$_{x}$CuO$_{4}$ (LSCO) and overdoped Bi$_{1.75}$Pb$_{0.35}$Sr$_{1.90}$CuO$_{6+y}$ (BPSCO) systems dramatically stabilizes a short-range (SR) IC magnetic correlation.~\cite{Fujita_09CM,hira_09} In the latter case, neutron-scattering measurements clarified the SR IC magnetic correlation below 40 K,~\cite{hira_09} unveiling potential IC spin correlations in the single-layered Bi-based cuprate system. Remarkably, the observed incommensurability is $\delta = 0.21(1)$~r.l.u. This value is approximately equal to the effective hole concentration of the sample, consistent with a trend found in underdoped LSCO, but greatly exceeding the well-known saturation limit of $\sim0.125$ r.l.u. in the LSCO system.~\cite{Yamada_98}
There have been a number of reports that non-magnetic and magnetic impurities, such as Zn and Ni, substituted for Cu atoms in superconducting LSCO stabilize IC magnetic order.~\cite{zn1,zn2,ni1,ni2}
The induced order is attributed to a static stripe order resulting from trapping of hole carriers by the impurities.
On the other hand, the disappearance of low-energy IC magnetic correlations in the overdoped LSCO~\cite{waki_07} implies no stripe correlation to be stabilized by impurities in the overdoped region. Thus, the newly found IC magnetic correlation in Fe-doped overdoped BPSCO may have a different origin from stripe order.
In order to clarify this issue, we performed resistivity and neutron diffraction studies under magnetic fields, since the effect of magnetic fields on the stripe magnetic order has been well studied in LSCO and related compounds.
This problem should give new insights into the origin of the IC magnetic correlations of the high-temperature superconducting cuprates in the overdoped metallic regime.
We find that the sample shows a negative magnetoresistive effect: that is, the resistivity decreases when a magnetic field is applied along the $c$-axis. This effect grows below 40~K where the SR IC magnetic correlation sets in. Neutron diffraction reveals that an applied magnetic field slightly reduces the IC magnetic correlation. These effects are in contrast with the effect of magnetic fields on stripes in the LSCO system, in which the stripes with $\delta\sim0.12$~r.l.u. are generally stable in magnetic fields; in fact, static order is typically enhanced by an applied field in the underdoped regime.\cite{Katano_00,Lake_02,Boris_02,Chang_08}
As an alternative, dilute magnetic moments in a metallic alloy may be a relevant model for the present case. Fe spins in the metallic background in the overdoped BPSCO may behave as Kondo scatterers. This induces the Kondo effect in the resistivity and magnetic correlation appears by the RKKY interaction.
Magnetic fields compete with the exchange coupling between conduction electrons and impurity moments in Kondo systems, and such an effect might explain our observation that the Fe-induced magnetic order is reduced by magnetic fields.
\section{Experimental details}
The single crystal of Bi$_{1.75}$Pb$_{0.35}$Sr$_{1.90}$Cu$_{0.91}$Fe$_{0.09}$O$_{6+y}$ (Fe-doped BPSCO) used in the neutron scattering experiments is identical to that studied in Ref.~\onlinecite{hira_09}. The hole concentration estimated from the Fermi surface area measured by angle-resolved photoemission spectroscopy (ARPES)~\cite{Sato_unpub} is $p \sim 0.23$; thus, the hole carriers are overdoped in the present sample. Small pieces cut from the same crystal rod were used for the magnetization and resistivity measurements. The sample shows no superconductivity down to 1.6~K. From the neutron scattering measurements, the crystal structure remains orthorhombic down to the lowest temperature measured. The lattice constants are $a=5.30$~\AA~ and $b=5.37$~\AA~ at room temperature, with a corresponding lattice unit of the CuO$_2$ square lattice of $3.77$~\AA.
Magnetization measurements were performed using a superconducting quantum interference device (SQUID) magnetometer. A crystal with typical size of $2 \times 2 \times 2$~mm$^3$ was fixed using a plastic straw.
For the in-plane resistivity measurements, single crystals were cut and shaped with typical size of $2.0 \times 0.8 \times 0.03$~mm$^3$. Then, four electrodes were attached by heating hand-painted gold-paste at $300^\circ$C, and samples were wired by silver paste and gold wires. After these procedures, the in-plane resistivity measurements under magnetic fields were carried out by a standard DC four-terminal method. Magnetic fields were applied up to 15 T parallel to the $c$-axis by a superconducting magnet.
Neutron diffraction measurements were carried out using the TAS-1 triple-axis spectrometer at the research reactor JRR-3, Tokai, Japan, and the HB-1 triple-axis spectrometer at the High Flux Isotope Reactor (HFIR), Oak Ridge National Laboratory, USA.
For both instruments, the wavelength, $\lambda$, of incident neutrons was selected by a pyrolytic graphite (PG) monochromator, while the energy of diffracted neutrons was determined by a PG analyzer. A PG filter was placed in the neutron path to eliminate neutrons with wavelengths of $\lambda/2, \lambda/3$, and so on.
The TAS-1 spectrometer was used for measurements without magnetic fields, using neutrons with $\lambda=2.359$~\AA, corresponding to the energy $E = 14.7$~meV. The horizontal divergences of the neutron beam upstream and downstream of the monochromator, and upstream and downstream of the analyzer, were $40'$, $80'$, $80'$, and $240'$, respectively.
The HB-1 spectrometer was used for the measurements with magnetic fields up to 5~T, applied parallel to the $c$-axis of the sample, using neutrons with $\lambda=2.462$~\AA, corresponding to the energy $E = 13.5$~meV. The horizontal beam divergences were $40'$, $60'$, $80'$, and $200'$, respectively.
The Fe-doped BPSCO crystal was mounted in a refrigerator with the $a$ and $b$ axes laid in the horizontal neutron scattering plane, so that the $(H, K, 0)$ reflections were accessible. In the present paper, we will use orthorhombic notation, in which the antiferromagnetic propagation vector of the basal CuO$_2$ plane corresponds to either $(1, 0, 0)$ or $(0, 1, 0)$.
\section{Magnetization}
\begin{figure}
\includegraphics[width=8cm]{figure01_2.eps}
\caption{(Color online) Temperature dependence of (a) the inverse in-plane magnetic susceptibility $\chi^{-1}$ of Fe 9\%-doped Bi2201 and (b) its derivative. The arrow marks $T_m$, the magnetic ordering temperature determined by neutron scattering. (c) $M$--$H$ curves at 2~K and 40~K measured after cooling in zero field.}
\end{figure}
We first present the results of the magnetization measurements.
Figure 1(a) shows the temperature ($T$) dependence of the inverse magnetic susceptibility $\chi^{-1}$ measured in a 1~T magnetic field applied perpendicular to the $c$-axis. Here, $\chi$ has been evaluated by subtracting the background magnetic susceptibility arising from the sample holder (a plastic straw). In the high-temperature region, $\chi^{-1}$ is nearly linear in $T$; therefore, the system shows paramagnetism following the Curie-Weiss law. The extrapolated negative intercept on the temperature axis suggests weak antiferromagnetic spin correlations.
This Curie-Weiss behavior comes dominantly from doped Fe spins, as the previous study~\cite{hira_09} indicates that the susceptibility increases linearly with the Fe concentration.
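For reference, the form being compared with the data is the standard Curie--Weiss law (the symbols $C_{\mathrm{CW}}$ and $\Theta$ below are our notation, not fitted values from this work):

```latex
% Curie-Weiss law: the inverse susceptibility is linear in T with a
% T-axis intercept \Theta; a negative intercept (\Theta < 0) indicates
% antiferromagnetic correlations among the Fe moments.
\chi(T)=\frac{C_{\mathrm{CW}}}{T-\Theta},
\qquad
\chi^{-1}(T)=\frac{T-\Theta}{C_{\mathrm{CW}}},
\qquad \Theta<0 .
```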
However, at low temperatures, $\chi^{-1}$ deviates from the Curie-Weiss law. This is clearly seen in Fig.~1(b), where the derivative of $\chi^{-1}$ is plotted. As $T$ decreases, the derivative reaches a maximum at $\sim 40$~K and starts to decrease below that temperature.
On the other hand, this compound shows true spin freezing at $T_{sg} \sim 9$~K into a cluster spin-glass state, in which the spins in each cluster are antiferromagnetically correlated. The deviation from the Curie-Weiss law below $\sim 40$~K might be a precursor of the cluster glass state, since 40~K is close to the temperature where the SR magnetic correlation sets in, as we will show later.
At very low temperatures, $T < T_{\rm sg}$, the system shows a weak ferromagnetic character.
Figure 1(c) shows $M$--$H$ curves at 2 and 40~K measured with $H \parallel c$ after cooling in zero field. The data at 40~K demonstrate that the system is paramagnetic, whereas the data at 2~K exhibit a small hysteresis loop, indicating that the system has a weak ferromagnetic response along the $c$-axis.
We confirmed that there is no hysteresis in the in-plane magnetization at 2~K. We will return to the nature of the magnetic interactions and the relevance of these results in Sec.~\ref{sc:disc}.
\section{Resistivity}
\begin{figure}
\includegraphics[width=8cm]{figure02.eps}
\caption{(Color online) Temperature dependence of the in-plane resistivity $\rho_{ab}$ under magnetic fields of 0, 0.5, 1, 1.5, 2, 3, 4, 5, 6, 7, 9, 11, 13, and 15~T applied along the $c$-axis. The inset shows the $H$-dependence of $\rho_{ab}$ at 2~K.}
\end{figure}
Figure 2 shows the $T$-dependence of the in-plane resistivity $\rho_{ab}$ measured in various magnetic fields. The resistivity shows an upturn at low temperature of the form $\ln(1/T)$, and no superconductivity appears down to the lowest temperature, 1.6~K. For $0 \leq H \leq 2$~T, the data sets overlap, while for $H > 2$~T and $T \alt 40$~K, $\rho_{ab}$ is reduced by magnetic fields, demonstrating a negative magnetoresistive effect. The inset shows $\rho_{ab}$ as a function of magnetic field at the lowest temperature; the negative magnetoresistive effect appears for $H \agt 2$~T.
\begin{figure}
\includegraphics[width=8cm]{figure03_2.eps}
\caption{(Color online) Magnetic field dependence of $\Delta\rho_{ab}/\rho_{ab}(H=0T)$ at selected temperatures. $\Delta\rho_{ab}$ is the reduction of $\rho_{ab}$ by magnetic fields. The inset shows $\Delta\rho_{ab}(H=15T)/\rho_{ab}(H=0T)$ as a function of temperature.}
\end{figure}
The negative magnetoresistive effect is seen only at low temperatures. In Fig.~3, we plot $\Delta\rho_{ab}/\rho_{ab}(H=0{\rm T})$ as a function of $H$, where $\Delta\rho_{ab}$ is the change in resistivity caused by magnetic fields. At 70~K, $\Delta\rho_{ab}$ is nearly zero up to 15~T, and $|\Delta\rho_{ab}|$ increases as temperature decreases. The inset shows the $T$-dependence of $\Delta\rho_{ab}/\rho_{ab}(H=0{\rm T})$ at 15~T. It is shown that the negative magnetoresistive effect becomes prominent below 40~K, which is, again, close to the temperature where the SR magnetic correlation sets in. This fact indicates that the SR magnetic correlation is closely related to the negative magnetoresistive effect.
\section{Neutron diffraction}
Results of the neutron diffraction study in zero field, performed at the TAS-1 spectrometer, are summarized in Fig.~4.
Figure 4 (a) shows neutron diffraction intensity measured at reciprocal lattice points of $(H, K, L)=(1+q, -q, 0)$ in the orthorhombic notation.
The difference between the data at 4~K and 80~K is plotted in Fig.~4(b).
The solid line is the result of a fit to a resolution-convoluted two-dimensional Lorentzian. The fit gives peaks at $q = 0.205(5)$.
Note that the position $(1, 0, 0)$ and the deviation defined as $(q, -q, 0)$ are equivalent to the antiferromagnetic wave vector $(0.5, 0.5, 0)$ and the deviation $(q, 0, 0)$ in reciprocal lattice units of the CuO$_2$ square lattice, as illustrated in Fig.~4(e). Thus, the incommensurability $\delta$ is 0.205(5).
In identifying the magnetic scattering, we have focused on temperature-dependent features in order to discriminate from a number of spurious peaks that are present in the raw data. (The same approach was used in Ref.~\onlinecite{hira_09}.)
The sharp peak at $q=0$ in Fig.~4(a) may be due to either multiple scattering or the contribution of a small fraction of neutrons with $\lambda/2$ diffracted from $(2, 0, 0)$. Other sharp peaks at $q=-0.35$ and $0.5$ may originate from nuclear peaks of small, misoriented grains, as the sample is a mosaic crystal with imperfections; note that the intensities of all peaks in Fig.~4(a) are comparable to the background level.
We note that the background of the difference plot, Fig.~4(b), has a finite value, suggesting a $T$-dependent background scattering. To examine this effect, we compare the neutron intensities at $q=-0.2$ and $-0.45$, where the former is located at the top of one IC peak and the latter is far from that peak. Figure 4(c) shows the $T$-dependence of the intensities at these positions. Even at $q=-0.45$ the intensity increases moderately with decreasing temperature. This might be attributed to paramagnetic scattering from the Fe spins, which increases with decreasing $T$. As we saw in Fig.~1, the system indeed shows paramagnetic behavior in its magnetization. Thus, the $T$-dependence of the IC magnetic peak should be obtained by subtracting this paramagnetic component. Figure 4(d) shows the difference between the intensities at $q=-0.2$ and $-0.45$. The SR IC magnetic correlation clearly sets in at 40~K, which we define as $T_m$.
We note that there are small peak structures at $q \sim \pm 0.2$ even at 80~K in Fig. 4 (a). However, the $T$-dependence in Fig.~4(d) demonstrates that these peaks are independent of $T$ up to 200~K, suggesting that their origin is not magnetic.
The effect of magnetic field on the IC magnetic peak is shown in Fig. 5, which shows the difference in neutron intensity at 5~K and 70~K. The zero-field data are shown by open symbols and the data with a magnetic field of 5~T are indicated by closed symbols. For the measurement under magnetic field, the magnetic field of 5~T was applied at 5~K after cooling in zero-field. Then data were collected at 5~K and 70~K; the sample was heated with the magnetic field kept at 5~T.
The solid lines are the results of fits to resolution-convoluted two-dimensional Lorentzians with the background fixed at zero. The fits give $\delta=0.221(16)$ for the data at $0$~T and $\delta=0.247(35)$ for the data at $5$~T.
The results show that the IC magnetic peaks have no tendency to be enhanced by the application of magnetic fields. On the contrary, the magnetic peaks appear to be slightly suppressed by the magnetic field.
\begin{figure}
\includegraphics[width=8cm]{figure04_2.eps}
\caption{(Color online) (a) Neutron scattering intensity on the trajectory of $(1+q, -q)$ at 4 and 80~K without magnetic field measured at the TAS-1 spectrometer. (b) Difference of neutron intensities at 4~K and 80~K.
The solid line is the result of a fit to a Lorentzian function convoluted with the instrumental resolution.
(c) Temperature dependence of neutron scattering intensity at $q=0.2$ (peak position) and $q=0.45$ (off-peak position). (d) Difference between intensities at $q=0.2$ and $0.45$ as a function of temperature.
(e) Incommensurate peak geometry in reciprocal lattice units. The four spots around $(H, K)=(1, 0)$ represent the IC magnetic peaks. The arrow indicates the scan trajectory of $(1+q, -q)$. The axes labeled $H$ and $K$ define the reciprocal lattice in the orthorhombic notation; those labeled $H'$ and $K'$ define the reciprocal lattice of the CuO$_2$ square lattice. (f) CuO$_2$ square lattice. The square drawn with a solid line is the orthorhombic unit cell; the square drawn with a dashed line is the unit cell of the CuO$_2$ square lattice.
}
\end{figure}
\section{Discussion}
\label{sc:disc}
In this section, we discuss the possible origin of the SR IC magnetic correlations induced by the Fe-doping in the overdoped BPSCO system.
\begin{figure}
\includegraphics[width=8cm]{figure05_3.eps}
\caption{(Color online) Difference of neutron scattering intensities at 5~K and 70~K measured at the HB-1 spectrometer under magnetic fields of 0~T and 5~T along the $c$-axis.
Solid lines are the results of fits to a Lorentzian function convoluted with the instrumental resolution.
}
\end{figure}
\subsection{RKKY interaction}
Our results show that the SR IC magnetic correlation induced by Fe-doping in the overdoped BPSCO system tends to be slightly suppressed by an external magnetic field. Correspondingly, the conductivity increases in an external magnetic field, i.e., the system shows a negative magnetoresistive effect. In addition, the spin system freezes into a glass state at low temperature.
A model to consider is that of dilute magnetic alloys, such as Cu alloyed with a few percent of Mn. Beyond the direct analogy of dilute magnetic moments doped into a metallic system, these systems exhibit spin-glass ordering at low temperature, with static magnetic correlations at an IC wave vector,\cite{Tsunoda_92,Lamelas_95} and an upturn in the resistivity at low temperature, termed the Kondo effect.\cite{Mydosh_93} The magnetic correlations are attributed to the RKKY interaction ({i.e.}, the coupling of local moments by conduction electrons), and the IC wave vector appears to correspond to $2k_F$, where $k_F$ is the Fermi wave vector.\cite{Lamelas_95}
The present sample is located in the overdoped region, having a hole concentration of $p \sim 0.23$/Cu estimated from ARPES measurements.~\cite{Sato_unpub} An ARPES study of (Bi,Pb)$_{2}$(Sr,La)$_{2}$CuO$_{6+y}$ by Kondo {\it et al.}\cite{Kondo_09} indicates that correlation effects, such as the pseudogap, are greatly reduced for overdoped samples. That observation is consistent with neutron-scattering studies of LSCO indicating that antiferromagnetic correlations become quite weak with overdoping.\cite{waki_07,lips_07} Thus, it is reasonable to consider the Fe spins doped into the metallic background of the overdoped BPSCO as Kondo scatterers that reduce the mobility of the charge carriers.\cite{Alloul_09,Balatsky_06}
The resistivity at low temperature varies as $\ln(1/T)$, which is consistent with Kondo behavior.
Scattering of the conduction electrons by the Fe moments could induce magnetic correlations at $2k_F$,
resulting in magnetic clusters.
Compensation of the Fe moments by conduction electrons is, at best, only partial. Magnetic fields would easily couple to the Fe moments and suppress the $s$-$d$ exchange interaction between the local moments and the conduction electrons, which is the source of both the Kondo effect and the RKKY interaction.
Therefore the IC magnetic correlation is depressed by moderate magnetic fields, and the system shows the negative magnetoresistive effect. The negative magnetoresistive effect is also discussed on the basis of the Kondo effect in Refs.~\onlinecite{Sekitani_03} and \onlinecite{ZnBi2201}.
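The resistivity argument above can be summarized schematically by the standard dilute-Kondo form (the decomposition below is illustrative; the coefficients are our notation and are not fitted in this work):

```latex
% Low-temperature resistivity of a dilute Kondo system: a residual term,
% a lattice contribution, and the impurity term c_imp ln(1/T) that
% produces the observed upturn and is suppressed as the field decouples
% the local moments from the conduction electrons.
\rho_{ab}(T)\simeq\rho_{0}+\rho_{\mathrm{lat}}(T)
 +c_{\mathrm{imp}}\ln\frac{1}{T}.
```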
\subsection{Stripe order}
There are several reports of impurity-induced magnetic order in the under- and optimally-doped 214 compounds.~\cite{zn1,zn2,ni1,ni2} In those cases, the induced order is a static stripe order around the impurity atoms owing to the trapping of hole carriers by the impurities, and the ordered regions appear to behave as non-superconducting islands.~\cite{Nachumi_98} We next consider whether this picture might be applicable to the present case.
Let us start with the large spin incommensurability of $\delta \sim 0.21$ found in Fe-doped BPSCO. If we try to interpret this in terms of coupled charge and spin stripes, then the corresponding spin period of about 5 Cu-O-Cu spacings would imply a charge stripe period of 2.5 lattice spacings. That would require the charge and spin stripes to be not much wider than a single row of Cu atoms.
Given that stripe order involves competition between strongly-correlated antiferromagnetism and the kinetic energy of the doped holes, it seems unlikely that such narrow domains
could be energetically favorable, especially in the overdoped regime.
For reference,
the spin modulations in LSCO and Nd-doped LSCO tend to saturate at about 8 lattice spacings for $x>\frac18$.\cite{Birgeneau_JPSJ06}
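The period arithmetic behind these estimates, made explicit (in units of the Cu-O-Cu spacing; the symbols $\lambda_{\mathrm{spin}}$, $\lambda_{\mathrm{charge}}$ are our notation):

```latex
% Spin modulation period = 1/delta; in the standard stripe picture the
% charge period is half the spin period.  For delta = 0.125 one recovers
% the saturation value of about 8 lattice spacings quoted above.
\lambda_{\mathrm{spin}}=\frac{1}{\delta}\approx\frac{1}{0.21}\approx 4.8\approx 5,
\qquad
\lambda_{\mathrm{charge}}=\frac{\lambda_{\mathrm{spin}}}{2}\approx 2.5 .
```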
The response to an applied magnetic field is another area of contrasting behavior. Underdoped LSCO, La$_{2-x}$Ba$_{x}$CuO$_{4}$ (LBCO), and Nd-doped LSCO, particularly in the vicinity of 1/8 hole concentration, have a strong tendency towards static stripe order.
Previous neutron scattering studies of these materials under magnetic fields revealed that the stripe order is enhanced for LSCO~\cite{Katano_00,Lake_02,Boris_02,Chang_08}, or stays constant for LBCO~\cite{Dunsiger_08,Wen_08} and Nd-doped LSCO~\cite{waki_03}; in any case, the stripe order is never weakened by fields up to at least 7~T.
These facts suggest that stripe order is robust against magnetic field, which is opposite to the present result.
Negative magnetoresistance has been observed in a limited number of cuprates, generally in cases where superconductivity is absent. Examples include lightly-doped LSCO,~\cite{Preyer_91} electron-doped thin films,~\cite{Sekitani_03} and Zn-doped Bi$_{2}$Sr$_{2-x}$La$_{x}$CuO$_{6+y}$~\cite{ZnBi2201}; in the case of La$_{1.79}$Eu$_{0.2}$Sr$_{0.01}$CuO$_4$, the decrease in $c$-axis resistivity due to an applied field was shown unambiguously to be due to spin scattering effects.\cite{Hucker_09} More commonly, positive magnetoresistance is observed.\cite{Kimura_96,Balakirev_98} The salient point here is that negative magnetoresistance has not been observed in stripe ordered systems. In the cases of LSCO, LBCO and Nd-doped LSCO with stripe order near the 1/8 hole concentration, the resistivity at low temperatures is generally increased due to the reduction in $T_c$ by applying a magnetic field, while the ``normal'' state resistivity above the zero-field $T_c$ is relatively insensitive to applied fields.\cite{Katano_00,Boe_96,Adachi_05,Li_07} Thus, the negative magnetoresistance observed in Fe-doped BPSCO is another argument against a stripe interpretation.
\subsection{Remarks}
Based on the above discussion, we conclude that the present sample Bi$_{1.75}$Pb$_{0.35}$Sr$_{1.90}$Cu$_{0.91}$Fe$_{0.09}$O$_{6+y}$ is analogous to the dilute magnetic alloys, in which Kondo behavior is relevant, and therefore that the SR IC magnetic correlation induced by Fe-doping in the present ``overdoped'' BPSCO system originates from correlations mediated by the polarization of conduction electrons, i.e., the RKKY interaction. The magnetic incommensurability should reflect the Fermi surface topology. In contrast, magnetic or non-magnetic impurities doped into ``underdoped'' cuprates induce a stripe-ordered state. In that case, the incommensurability corresponds to the inverse of the stripe modulation period. This difference between the underdoped and overdoped regimes might come from the difference in electronic character: the former is in the strongly-correlated regime while the latter is in the metallic regime.
It is remarkable that the magnetic correlation in stripes in the underdoped superconducting region and that by the RKKY interaction in the present overdoped compound have the same modulation direction along the Cu-O-Cu bond. Furthermore, the incommensurabilities for
both cases
follow
the $p=\delta$ relation.
This raises the possibility of an interesting test of the nature of the magnetic excitations. In the under- to optimally-doped regimes, the magnetic spectrum is characterized by an ``hour-glass'' dispersion. There has been controversy over the extent to which this spectrum derives from local moments or from conduction electrons. Can such a spectrum be observed in the present Fe-doped BPSCO sample? If so, it would lend support to the argument that the conduction electrons are largely responsible for the magnetic response near optimal doping. If not, it would be consistent with the view that the hour-glass spectrum is a consequence of strongly-correlated antiferromagnetism. Either way, measurements of the spin dynamics of this sample would yield interesting results.
\section{Summary}
Fe impurities in the overdoped BPSCO system induce a SR IC magnetic correlation with the unexpectedly large incommensurability $\delta=0.21$ below 40~K. We have studied the magnetic field dependence of the magnetic correlation by neutron diffraction and the field dependence of the in-plane resistivity. The magnetic peaks observed by neutron scattering show a small reduction in an applied magnetic field, and the resistivity shows a clear negative magnetoresistive effect below 40~K. Such behavior is different from that typical of stripe-ordered LSCO, LBCO, and Nd-doped LSCO, where the stripe order is robust against magnetic fields.
The present results show greater similarities to dilute magnetic alloys, in which the Kondo effect is relevant. The Fe spins in the overdoped metallic background produce a Kondo effect, which results in the upturn in the resistivity at low temperature of the form $\ln(1/T)$. On the other hand, the SR magnetic correlation is induced by the RKKY interaction.
Magnetic fields couple to the Fe moments, competing with the exchange interaction between the local moments and conduction electrons. This effect disturbs both the Kondo effect and the RKKY interaction, resulting in the reduction of the spatially-modulated magnetic correlations and the negative magnetoresistive effect.
\begin{acknowledgments}
We thank K. Kaneko, M. Matsuda, M. Fujita, and J. A. Fernandez-Baca for invaluable discussion. We also thank S. Okayasu for his help in SQUID measurements.
This work is part of the US-Japan Cooperative Program on Neutron scattering. The work at the HFIR at ORNL was partially funded by the Division of Scientific User Facilities, Office of Basic Energy Science, US Department of Energy. The study performed at JRR-3 at Tokai was carried out under the Common-Use Facility Program of JAEA, and the Quantum Beam Technology Program of JST. Magnetoresistive measurement was performed at the High Field Laboratory for Superconducting Materials, Institute for Materials Research, Tohoku University. We acknowledge financial support by Grant-in-Aid from the Ministry of Education, Culture, Sports, Science and Technology. JMT is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Contract No.~DE-AC02-98CH110886.
\end{acknowledgments}
\section{Introduction}
The {\em ultradiscrete} periodic Toda lattice is
an integrable system described by
a piecewise-linear map \cite{KimijimaTokihiro02}.
Recently, its algebro-geometric aspects have been clarified
\cite{IT08,IT09,IT09b}
by applying tropical geometry, a combinatorial algebraic geometry
that has developed rapidly during this decade \cite{EKL06,IMS-Book, SpeyerSturm04}.
This system has tropical spectral curves, and it has been proved
that its general isolevel set is isomorphic to the tropical
Jacobian of the tropical hyperelliptic curve,
and that its general solution is written in terms of
the tropical Riemann theta function.
The key to the solution is the tropical analogue of Fay's trisecant identity
for a special family of hyperelliptic curves \cite{IT09}.
On the other hand,
there exists a generalization of {\em discrete} periodic Toda lattice $T(M,N)$,
where $M$ (resp. $N$) is a positive integer
which denotes the level of generalization (resp. the periodicity)
of the system.
The $M=1$ case, $T(1,N)$, is the original discrete Toda lattice
with period $N$.
When $\gcd(M,N)=1$, $T(M,N)$ reduces to a special case
of the integrable multidiagonal Lax matrix \cite{vanMoerMum79},
and the general solution to $T(M,N)$ has recently been constructed \cite{Iwao09}.
The aim of this paper is twofold:
the first is to introduce the tropical analogue of Fay's trisecant identity,
not only for hyperelliptic but also for more general tropical curves.
The second is to study
the generalized {\em ultradiscrete} periodic Toda lattice $\mT(M,N)$
by applying the tropical Fay trisecant identity,
as a continuation of the study of $\mT(1,N)$ \cite{IT08,IT09,IT09b}.
This paper is organized as follows:
in \S 2 we review some notion of tropical geometry
\cite{Mikha05,MikhaZhar06,Iwao08}, and
introduce the tropical analogue of Fay's trisecant identity
(Theorem \ref{tropicalFay}) by applying the correspondence of integrations
over complex and tropical curves \cite{Iwao08}.
In \S 3 we introduce the generalization of the discrete periodic Toda lattice
$T(M,N)$ and its ultradiscretization $\mT(M,N)$.
We reconsider the integrability of $T(M,N)$
(Proposition \ref{prop:integrability}).
In \S 4 we demonstrate the general solution
to $\mT(3,2)$, and give conjectures on $\mT(M,N)$
(Conjectures \ref{conj:1} and \ref{conj:2}).
In closing the introduction, we make a brief remark on
the interesting close relation between
the ultradiscrete periodic Toda lattice and
the {\em periodic box and ball system} (pBBS) \cite{KimijimaTokihiro02},
which generalizes to a relation between $\mT(M,N)$ and
the pBBS with $M$ kinds of balls
\cite{NagaiTokihiroSatsuma99,Iwao09b}.
When $M=1$, the relation is explained at the level of
tropical Jacobian \cite{IT08}.
We expect that our conjectures on $\mT(M,N)$ also account for
the tropical geometrical aspects of the recent results
\cite{KunibaTakagi09} on the pBBS of $M$ kinds of balls.
\section{Tropical curves and Riemann's theta function}
\subsection{Good tropicalization of algebraic curves}
Let $K$ be a subfield of $\C$ and
$K_{\ve}$ be the field of convergent Puiseux series
in ${\bf e} := {\rm e}^{-1/\ve}$ over $K$.
Let $\val:K_{\ve}\to \Q\cup \{\infty\}$ be the natural valuation with respect to
$\ee$.
Any polynomial $f_{\ve}$ in $K_{\ve}[x,y]$ is expressed uniquely as
\[\textstyle
f_{\ve}=\sum_{w=(w_1,w_2)\in \Z^2}{a_w(\ve)x^{w_1}y^{w_2}},\qquad
a_w(\ve)\in K_{\ve}.
\]
Define the tropical polynomial $\Val(X,Y;f_{\ve})$ associated with $f_{\ve}$
by the formula:
$\textstyle
\Val(X,Y;f_{\ve}):=\min_{w\in\Z^2}{[\val(a_w)+w_1X+w_2Y]}.
$
We call $\Val(X,Y;f_{\ve})$ the {\em tropicalization} of $f_{\ve}$.
We take $f_{\ve} \in K_{\ve}[x,y]$.
Let $C^0(f_{\ve})$ be the affine algebraic
curve over $K_{\ve}$ defined by $f_{\ve}=0$.
We write $C(f_{\ve})$ for
the complete curve over $K_{\ve}$
such that
$C(f_{\ve})$ contains $C^0(f_{\ve})$ as a dense open subset,
and that $C(f_{\ve}) \setminus C^0(f_{\ve})$ consists of non-singular points.
The \textit{tropical curve} $TV(f_{\ve})$ is a subset of $\R^2$
defined by:
\[
TV(f_{\ve})=\left\{(X,Y)\in\R^2\,\left\vert\,
\begin{array}{l}
\mbox{the function $\Val(X,Y;f_{\ve})$}\\
\mbox{ is not smooth at $(X,Y)$ }
\end{array}
\right\}\right..
\]
Denote $\Lambda(X,Y;f_{\ve}):=\{w\in \Z^2\,\vert\, \Val(X,Y;f_{\ve})
=\val(a_w)+w_1X+w_2Y
\}$.
The definition of the tropical curve can be restated as:
\[
TV(f_{\ve})=\{(X,Y)\in\R^2\,\vert\,
\sharp\Lambda(X,Y;f_{\ve})\geq 2
\}.
\]
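As an illustration (a hypothetical numerical sketch, not part of the original text), the two equivalent descriptions of $TV(f_{\ve})$ can be checked on a toy example, the tropical line $\min[0,X,Y]$, by computing $\Lambda(X,Y;f_{\ve})$ from a table of term valuations:

```python
def Val(X, Y, terms):
    """Tropical polynomial min_w [val(a_w) + w1*X + w2*Y].

    terms maps exponents (w1, w2) to the valuation val(a_w) of the
    corresponding coefficient (a toy stand-in for f in K_e[x, y])."""
    return min(v + w1 * X + w2 * Y for (w1, w2), v in terms.items())

def Lam(X, Y, terms, tol=1e-9):
    """The set Lambda(X, Y; f): exponents attaining the minimum."""
    m = Val(X, Y, terms)
    return {w for w, v in terms.items() if abs(v + w[0] * X + w[1] * Y - m) < tol}

# tropical line  min[0, X, Y]: the valuations of 1, x, y are all 0
line = {(0, 0): 0, (1, 0): 0, (0, 1): 0}
assert len(Lam(0, 0, line)) == 3  # the trivalent vertex of TV(f)
assert len(Lam(0, 5, line)) == 2  # a point on an edge of TV(f)
assert len(Lam(1, 2, line)) == 1  # a smooth point of Val, off the curve
```

A point lies on the tropical curve exactly when the minimum is attained at least twice, i.e.\ $\sharp\Lambda \geq 2$.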
For $P=(X,Y)\in \R^2$, we define $f_{\ve}^P:=\sum_{w\in\Lambda(X,Y;f_{\ve})}{a_w x^{w_1}y^{w_2}}$.
To make use of the results of tropical geometry for
real/complex analysis,
we introduce the following condition as a criterion
of genericness of tropical curves.
\begin{definition}\label{def2.1}
We say $C(f_{\ve})$ has a \textit{good tropicalization} if:
\begin{enumerate}
\def\labelenumi{(\theenumi)}
\item $C(f_{\ve})$ is an irreducible reduced non-singular curve over $K_{\ve}$,
\item $f^P_{\ve}=0$ defines an affine reduced
non-singular curve in $(K_{\ve}^\times)^2$ for all
$P\in TV(f_{\ve})$ (maybe reducible).
\end{enumerate}
\end{definition}
\begin{remark}
The notion of a {\em good tropicalization} was first introduced in
\cite[Section 4.3]{Iwao08}.
The above definition gives essentially the same notion.
\end{remark}
\subsection{Smoothness of tropical curves}
For the tropical curve $\Gamma := TV(f_{\ve})$,
we define the set of vertices $V(\Gamma)$:
\begin{align*}
V(\Gamma) = \{(X,Y) \in \Gamma ~|~ \sharp\Lambda(X,Y;f_{\ve})\geq 3 \}.
\end{align*}
We call each connected component of $\Gamma \setminus V(\Gamma)$
an edge of $\Gamma$.
For an edge $e$, let $\xi_e = (n,m) \in \Z^2$ with $\gcd(n,m) = 1$
be its {\it primitive tangent vector}.
Note that the vector $\xi_e$ is uniquely determined up to sign.
\begin{definition}\cite[\S 2.5]{Mikha05}
The tropical curve $\Gamma$ is {\it smooth} if:
\begin{enumerate}
\def\labelenumi{(\theenumi)}
\item All the vertices are trivalent, {\it i.e.}
$\sharp\Lambda(X,Y;f_{\ve})=3$ for all $(X,Y) \in V(\Gamma)$.
\item For each trivalent vertex $v \in V(\Gamma)$,
let $\xi_1, \xi_2$ and $\xi_3$ be the primitive tangent vectors of
the three outgoing edges from $v$. Then we have
$\xi_1 + \xi_2 + \xi_3 = 0$ and $|\xi_i \wedge \xi_j| = 1$
for $i \neq j \in \{1,2,3\}$.
\end{enumerate}
\end{definition}
When $\Gamma$ is smooth, the genus of $\Gamma$ is $\dim H_1(\Gamma,\Z)$.
\subsection{Tropical Riemann's theta function}
For an integer $g \in \Z_{>0}$,
a positive definite symmetric matrix $B \in M_g(\R)$ and
$\beta \in \R^g$ we define a function on $\R^g$ as
$$
q_\beta({\bf m},{\bf Z}) = \frac12 {\bf m}B{\bf m}^{\bot}+
{\bf m}({\bf Z}+\beta B)^{\bot}
\qquad ({\bf Z} \in \R^g, ~ {\bf m} \in \Z^g).
$$
The tropical Riemann's theta function $\Theta(\bZ;B)$ and its
generalization $\Theta[\beta](\bZ;B)$ are given by \cite{MikhaZhar06,IT09}
\begin{align*}
&\Theta({\bf Z};B)=\min_{{\bf m}\in \Z^g} q_0({\bf m},{\bf Z}),
\\
&\Theta[\beta]({\bf Z};B) = \frac12 \beta B \beta^{\bot}+
\beta {\bf Z}^{\bot}+
\min_{{\bf m} \in \Z^g} q_\beta({\bf m},{\bf Z}).
\end{align*}
Note that $\Theta[0](\bZ;B) = \Theta(\bZ;B)$.
The function $\Theta[\beta](\bZ;B)$ satisfies the quasi-periodicity:
\begin{align*}
\Theta[\beta]({\bf Z}+{\bf l}B)
=
-\frac{1}{2} {\bf l}B{\bf l}^{\bot} -{\bf l}{\bf Z}^{\bot}
+\Theta[\beta]({\bf Z}) \qquad ({\bf l}\in \Z^g).
\end{align*}
We also write $\Theta({\bf Z})$ and $\Theta[\beta]({\bf Z})$
for $\Theta({\bf Z};B)$ and $\Theta[\beta]({\bf Z};B)$
without confusion.
We write ${\bf n} = \arg_{{\bf m} \in \Z^g} q_\beta({\bf m},{\bf Z})$
when $\min_{{\bf m} \in \Z^g} q_\beta({\bf m},{\bf Z})
= q_\beta({\bf n},{\bf Z})$.
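The quasi-periodicity above can be verified numerically. The following sketch (illustrative code with an ad hoc truncation radius, not part of the paper) evaluates $\Theta(\bZ;B)$ by minimizing over a finite box of lattice points, which suffices because $B$ is positive definite:

```python
import numpy as np

def theta(Z, B, R=8):
    """Truncated tropical theta: min over m in [-R, R]^g of (1/2) m B m^T + m Z^T.

    Since B is positive definite, for moderate Z the true minimizer lies
    well inside the truncation box, so the truncation is exact here."""
    g = len(Z)
    best = np.inf
    for m in np.ndindex(*([2 * R + 1] * g)):
        m = np.array(m) - R
        best = min(best, 0.5 * m @ B @ m + m @ Z)
    return best

B = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite, g = 2
Z = np.array([0.3, -0.7])
l = np.array([1, -1])

# quasi-periodicity: Theta(Z + l B) = Theta(Z) - (1/2) l B l^T - l Z^T
lhs = theta(Z + l @ B, B)
rhs = theta(Z, B) - 0.5 * (l @ B @ l) - l @ Z
assert abs(lhs - rhs) < 1e-9
```

The identity follows by the substitution ${\bf m} \mapsto {\bf m} - {\bf l}$ in the defining minimum, which the check reproduces numerically.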
\subsection{Tropical analogue of Fay's trisecant identity}
For each $\bar{\ve} \in \R_{>0}$,
we write $C(f_{\bar{\ve}})$ for the base change of $C(f_{\ve})$
to $\C$ via a map
$\iota : ~K_{\ve} \to \C$ given by $\ve \mapsto \bar{\ve}$.
\begin{theorem}\label{Trop-period}\cite[Theorem 4.3.1]{Iwao08}
Assume $C(f_{\ve})$ has a good tropicalization and
$C(f_{\bar{\ve}})$ is non-singular.
Let $B_{\bar{\ve}}$ and $B_T$ be the period matrices
for $C(f_{\bar{\ve}})$ and $TV(f_{\ve})$ respectively.
Then we have the relation
$$
\frac{2\pi\bar{\ve}}{\sqrt{-1}} B_{\bar{\ve}}
\sim B_T \quad (\bar{\ve} \to 0).
$$
(It follows from the assumption that
the genera of $C(f_{\ve})$ and $C(f_{\bar{\ve}})$ coincide.)
\end{theorem}
A nice application of this theorem is to give
the tropical analogue of Fay's trisecant identity.
For the algebraic curve $C(f_{\ve})$ of Theorem \ref{Trop-period},
we have the following:
\begin{theorem}\label{tropicalFay}
We continue the hypothesis and notation in Theorem \ref{Trop-period},
and assume $TV(f_{\ve})$ is smooth.
Let $g$ be the genus of $C(f_{\ve})$ and
$(\alpha,\beta) \in \frac12 \Z^{2g}$ be a non-singular
odd theta characteristic for $\Jac(C(f_{\ve}))$.
For $P_1, P_2, P_3, P_4$
on the universal covering space of $TV(f_{\ve})$,
we define the sign $s_i\in \{\pm 1\}$ $(i=1,2,3)$ as
$s_i=(-1)^{k_i}$,
where
\begin{align*}
\begin{split}
k_1&=2\alpha\cdot
\left(\arg_{{\bf m} \in \Z^g} q_\beta({\bf m},\int_{P_3}^{P_2})+
\arg_{{\bf m} \in \Z^g} q_\beta({\bf m},\int_{P_1}^{P_4})\right),\\
k_2&=2\alpha\cdot
\left(\arg_{{\bf m} \in \Z^g} q_\beta({\bf m},\int_{P_3}^{P_1})+
\arg_{{\bf m} \in \Z^g} q_\beta({\bf m},\int_{P_4}^{P_2})\right),\\
k_3&=1+2\alpha\cdot
\left(\arg_{{\bf m} \in \Z^g} q_\beta({\bf m},\int_{P_3}^{P_4})+
\arg_{{\bf m} \in \Z^g} q_\beta({\bf m},\int_{P_1}^{P_2})\right).
\end{split}
\end{align*}
Set the functions $F_1(\bZ), F_2(\bZ), F_3(\bZ)$ of
$\bZ \in \R^g$ as
\begin{align*}
\begin{split}
F_1(\bZ)&=\Theta({\bf Z}+\int_{P_1}^{P_3})+\Theta({\bf Z}+\int_{P_2}^{P_4})
+\Theta[\beta](\int_{P_3}^{P_2})+\Theta[\beta](\int_{P_1}^{P_4}),\\
F_2(\bZ)&=\Theta({\bf Z}+\int_{P_2}^{P_3})+\Theta({\bf Z}+\int_{P_1}^{P_4})
+\Theta[\beta](\int_{P_3}^{P_1})+\Theta[\beta](\int_{P_4}^{P_2}),\\
F_3(\bZ)&=\Theta({\bf Z}+\int_{P_1+P_2}^{P_3+P_4})+\Theta({\bf Z})
+\Theta[\beta](\int_{P_4}^{P_3})+\Theta[\beta](\int_{P_1}^{P_2}).
\end{split}
\end{align*}
Then, the formula
\begin{align}\label{trop-fay}
F_i(\bZ)=\min[F_{i+1}(\bZ),F_{i+2}(\bZ)]
\end{align}
holds if $s_i=\pm 1,s_{i+1}=s_{i+2}=\mp 1$ for $i \in \Z / 3 \Z$.
\end{theorem}
This theorem generalizes \cite[Theorem 2.4]{IT08},
where $C(f_{\ve})$ is a special hyperelliptic curve.
We introduce the following lemma for later convenience:
\begin{lemma}\label{lemma:theta-ch}\cite[Proposition 2.1]{IT08}
Let $C$ be a hyperelliptic curve of genus $g$ and take
$\beta = (\beta_j)_j \in \frac{1}{2}\Z^g$.
Set $\alpha = (\alpha_j)_j \in \frac{1}{2}\Z^g$ as
$$
\alpha_j = - \frac{1}{2} \delta_{j,i-1} + \frac{1}{2} \delta_{j,i},
$$
where $i$ is defined by the condition
$\beta_j=0 ~(1\leq j \leq i-1)$ and $\beta_i\neq 0$ mod $\Z$.
Then $(\alpha,\beta)$ is a non-singular odd theta characteristic
for $\Jac(C)$.
\end{lemma}
\subsection{Tropical Jacobian}
When the positive definite symmetric
matrix $B \in M_g(\R)$ is the period matrix of
a smooth tropical curve $\Gamma$,
the $g$-dimensional real torus $J(\Gamma)$ defined by
$$
J(\Gamma) := \R^g / \Z^g B
$$
is called the tropical Jacobian \cite{MikhaZhar06} of $\Gamma$.
\section{Discrete and ultradiscrete generalized Toda lattice}
\subsection{Generalized discrete periodic Toda lattice $T(M,N)$}
Fix $M, N \in \Z_{>0}$.
Let $T(M,N)$ be the generalization of discrete periodic Toda lattice
defined by the difference equations \cite{Iwao09,NagaiTokihiroSatsuma99}
\begin{align}
\label{slN-dpToda}
\begin{split}
&I_n^{t+1} + V_{n-1}^{t+\frac{1}{M}}
= I_n^t + V_n^t,
\\
&V_n^{t+\frac{1}{M}} I_n^{t+1} = I_{n+1}^t V_n^t,
\end{split}
\quad (n \in \Z /N \Z, ~t \in \Z/M),
\end{align}
on the phase space $T$:
\begin{align}\label{spaceT}
\begin{split}
&\Big\{(I_{n}^t, I_{n}^{t+\frac{1}{M}}, \ldots, I_n^{t+\frac{M-1}{M}}, V_n^t)_{n=1, \ldots,N} \in \C^{(M+1)N}
\\
& \qquad ~\Big|~
\prod_{n=1}^N V_n^t, ~\prod_{n=1}^N I_n^{t+\frac{k}{M}} ~(k=0,\ldots,M-1)
\text{ are distinct } \Big\}.
\end{split}
\end{align}
Eq. \eqref{slN-dpToda} can be written in the Lax form
$$
L^{t+1}(y) M^t(y) = M^t(y) L^t(y),
$$
where
\begin{align}\label{MNLax}
&L^t(y) = M^t(y) R^{t+\frac{M-1}{M}}(y) \cdots R^{t+\frac{1}{M}}(y) R^t(y),
\\
\nonumber
&R^t(y) =
\begin{pmatrix}
I_2^t & 1 & & & \\
& I_3^t & 1 & & \\
& & \ddots & \ddots & \\
& & & I_{N}^t & 1 \\
y& & & & I_1^t \\
\end{pmatrix},
\quad
M^t(y) =
\begin{pmatrix}
1 & & & & \frac{V_1^t}{y} \\
V_2^t & 1 & & \\
& V_3^t & 1 \\
& & \ddots & \ddots & \\
& & & V_{N}^t & 1 \\
\end{pmatrix}.
\end{align}
The Lax form ensures that the characteristic polynomial
$\Det(L^t(y) - x \mathbb{I}_N)$ of the Lax matrix $L^t(y)$
is independent of $t$, namely, the coefficients of
$\Det(L^t(y) - x \mathbb{I}_N)$ are integrals of motion of $T(M,N)$.
Assume $\gcd(M,N)=1$ and
set $d_j = [\frac{(M+1-j)N}{M}] ~(j=1,\ldots,M)$.
We consider three spaces for $T(M,N)$:
the phase space $T$ \eqref{spaceT},
the coordinate space $L$ for the Lax matrix $L^t(y)$ \eqref{MNLax},
and the space $F$ of the spectral curves.
The two spaces $L$ and $F$ are given by
\begin{align}
&L = \{(a_{i,j}^t,b_i^t)_{i=1,\ldots,M, ~j=1,\dots,N} \in \C^{(M+1)N} \},
\\
\begin{split}\label{slN-dpF}
&F = \Big\{ y^{M+1} + f_M(x) y^{M} + \cdots + f_1(x) y + f_{0}
\in \C[x,y] ~\Big|~
\\
& \qquad \qquad
\deg_x f_j(x) \leq d_j ~(j=1,\ldots, M),
~-f_1(x) \text{ is monic}
\Big\},
\end{split}
\end{align}
where each element in $L$ corresponds to the matrix:
\begin{align*}
L^t(y) =
\begin{pmatrix}
a_{1,1}^t & a_{2,2}^t & \cdots & a_{M,M}^t & 1 & & & \frac{b_N^t}{y}\\
b_1^t & a_{1,2}^t & a_{2,3}^t & \cdots & a_{M,M+1}^t& 1 & \\
& \ddots & \ddots & \ddots & & \ddots & \ddots \\[2mm]
& & \ddots & & & & a_{M,N-1}^t & 1 \\
y & & & \ddots& & & a_{M-1,N-1}^t & a_{M,N}^t \\
y a^t_{M,1} & y & & & & & & a_{M-1,N}^t \\
\vdots & \ddots & \ddots & & & \ddots & \ddots & \vdots\\[2mm]
y a^t_{2,1} & \cdots & y a^t_{M,M-1} & y & & & b_{N-1}^t & a_{1,N}^t \\
\end{pmatrix}.
\end{align*}
Define two maps $\psi : T \to L$ and $\phi : L \to F$ given by
\begin{align*}
&\psi((I_{n}^t, I_{n}^{t+\frac{1}{M}}, \ldots, I_n^{t+\frac{M-1}{M}}, V_n^t)_{n=1, \ldots,N}) = L^t(y)
\\
&\phi(L^t(y)) = (-1)^{N+1} y \Det(L^t(y) - x \mathbb{I}_N).
\end{align*}
Via the map $\psi$ (resp. $\phi \circ \psi$), we can regard
$F$ as a set of polynomial functions on $L$ (resp. $T$).
We write $n_F$ for the number of polynomial functions in $F$,
which is $\displaystyle{n_F = \frac{1}{2}(M+1)(N+1)}$.
\begin{proposition}\label{prop:integrability}
The $n_F$ functions in $F$ are functionally independent in $\C[T]$.
\end{proposition}
To prove this proposition we use the following:
\begin{lemma}\label{lemma:psi}
Define
$$
I^{t+\frac{k}{M}} = \prod_{n=1}^N
I_n^{t+\frac{k}{M}},
\quad
V^t = \prod_{n=1}^N V_n^t ~~(t \in \Z, ~ k=0,\ldots,M-1).
$$
The Jacobian of $\psi$ does not vanish iff
$I^{t+\frac{k}{M}} \neq I^{t+\frac{j}{M}}$ for $0 \leq k < j \leq M-1$
and $I^{t+\frac{k}{M}} \neq V^t$ for $0 \leq k \leq M-1$.
\end{lemma}
\begin{proof}
Since the dimensions of $T$ and $L$ are the same,
the Jacobian matrix of $\psi$ is an $(M+1)N$ by $(M+1)N$ matrix.
By elementary transformations, one sees that the Jacobian matrix is
block diagonalized into $M+1$ blocks of size $N$ by $N$,
and the Jacobian is factorized as
$$
\pm \Det B \cdot \prod_{k=1}^{M-1} \Det A^{(k)},
$$
where $A^{(k)}$ and $B$ are
\begin{align*}
&A^{(k)} = P(I^{t+\frac{k}{M}}, I^{t+\frac{k-1}{M}})
P(I^{t+\frac{k+1}{M}}, I^{t+\frac{k-1}{M}}) \cdots
P(I^{t+\frac{M-1}{M}}, I^{t+\frac{k-1}{M}})
\\
&\hspace{8cm} (k=1,\ldots,M-1),
\\
&B = P(I^{t+\frac{M-1}{M}},V^t) P(I^{t+\frac{M-2}{M}},V^t) \cdots
P(I^{t+\frac{1}{M}},V^t) P(I^{t},V^t),
\\[1mm]
&P(J,K) = \begin{pmatrix}
J_1 & & & & -K_N \\[1mm]
-K_1 & J_2 & & & \\
& \ddots & \ddots & & \\
& & \ddots & \ddots & \\[1mm]
& & & -K_{N-1} & J_N
\end{pmatrix} \in M_N(\C).
\end{align*}
Since
$\Det P(J,K) = \prod_{n=1}^N J_n - \prod_{n=1}^N K_n$,
we obtain
$$
\Det B \cdot \prod_{k=1}^{M-1} \Det A^{(k)}
= \prod_{0 \leq k \leq M-1} (I^{t+\frac{k}{M}}-V^t) \cdot
\prod_{0 \leq j < k \leq M-1} (I^{t+\frac{k}{M}}-I^{t+\frac{j}{M}}).
$$
Thus the claim follows.
\end{proof}
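The factorization $\Det P(J,K) = \prod_{n=1}^N J_n - \prod_{n=1}^N K_n$ used above is easy to confirm numerically; the following is an illustrative check (not part of the original proof):

```python
import numpy as np

def P(J, K):
    """The cyclic bidiagonal matrix P(J, K) from the Jacobian factorization:
    J_n on the diagonal, -K_1,...,-K_{N-1} on the subdiagonal,
    and -K_N in the upper-right corner."""
    N = len(J)
    A = np.diag(np.asarray(J, dtype=float))
    for n in range(1, N):
        A[n, n - 1] = -K[n - 1]
    A[0, N - 1] = -K[N - 1]
    return A

J, K = [2, 3, 5, 7], [1, 4, 6, 8]
lhs = round(np.linalg.det(P(J, K)))
rhs = 2 * 3 * 5 * 7 - 1 * 4 * 6 * 8   # prod J - prod K = 210 - 192
assert lhs == rhs == 18
```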
\begin{remark}
The above lemma also holds for $\gcd(M,N) > 1$.
\end{remark}
\begin{proof}(Proposition \ref{prop:integrability}) \\
Take a generic $f \in F$ such that the algebraic curve $C_f$ given by
$f=0$ is smooth. The genus $g$ of $C_f$ is $\frac{1}{2} (N-1)(M+1)$,
and we have $\dim L = n_F + g$.
Due to the result by Mumford and van Moerbeke
\cite[Theorem 1]{vanMoerMum79},
the isolevel set $\phi^{-1}(f)$ is isomorphic to the affine part of
the Jacobian variety $\mathrm{Jac}(C_f)$ of $C_f$,
which implies $\dim_{\C} \phi^{-1}(f) = g$.
Thus $F$ has to be a set of $n_F$ functionally
independent polynomials in $\C[L]$.
Then the claim follows from Lemma \ref{lemma:psi}.
\end{proof}
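As a sanity check on the dimension count used in this proof (an illustrative script with names chosen here, not from the paper), one can verify $n_F = \frac{1}{2}(M+1)(N+1)$ and $n_F + g = \dim L = (M+1)N$ for several coprime pairs $(M,N)$:

```python
from math import gcd

def dims(M, N):
    """Coefficient counts for T(M, N), assuming gcd(M, N) = 1."""
    d = [((M + 1 - j) * N) // M for j in range(1, M + 1)]   # d_j = [(M+1-j)N/M]
    # free coefficients of F: f_0 plus the f_j's, minus the monic normalization
    n_F = M + sum(d)
    g = (N - 1) * (M + 1) // 2   # genus of a generic spectral curve
    return n_F, g

for M, N in [(1, 4), (2, 3), (3, 2), (3, 5), (4, 7)]:
    assert gcd(M, N) == 1
    n_F, g = dims(M, N)
    assert n_F == (M + 1) * (N + 1) // 2
    assert n_F + g == (M + 1) * N   # = dim L
```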
\subsection{Generalized ultradiscrete periodic Toda lattice $\mT(M,N)$}
We consider the difference equation
\eqref{slN-dpToda} on the phase space $T_{\ve}$:
\begin{align*}
&\Big\{ (I_{n}^t, I_{n}^{t+\frac{1}{M}}, \ldots,
I_n^{t+\frac{M-1}{M}}, V_n^t)_{n=1, \ldots,N} \in K_{\ve}^{N(M+1)}
\\
& \qquad \qquad ~\Big|~
\val \bigl(\prod_{n=1}^N I_n^{t+\frac{k}{M}}\bigr)
< \val \bigl(\prod_{n=1}^N V_n^{t}\bigr)
~(k=0,\ldots,M-1),
\\
& \qquad \quad \qquad
\val \bigl(\prod_{n=1}^N I_n^{t+\frac{k}{M}}\bigr) ~(k=0,\ldots,M-1)
\text{ are distinct }
\Big\}.
\end{align*}
We assume $\gcd(M,N) = 1$.
Let $F_{\ve} \subset K_{\ve}[x,y]$ be the set of polynomials
over $K_{\ve}$ defined by a formula similar to \eqref{slN-dpF}, \textit{i.e.}
\[
F_{\ve} =
\Big\{ y^{M+1}+\sum_{j=0}^{M}{f_j(x)y^j}
\in K_{\ve}[x,y] ~\Big|~
\deg_x f_j \leq d_j, ~
-f_1(x) \text{ is monic}
\Big\}.
\]
The {\em tropicalization} of the above system becomes
the generalized ultradiscrete periodic Toda lattice
$\mT(M,N)$, which is the piecewise-linear map:
\begin{align}
\begin{split}\label{ud-pToda}
&Q_n^{t+1} = \min[W_n^t, Q_n^t - X_n^t], \\
&W_n^{t+\frac{1}{M}} = Q_{n+1}^t + W_n^t - Q_n^{t+1},
\end{split}
\quad (n \in \Z/N \Z, ~t \in \Z/M),
\\ \nonumber
&\text{where }
X_n^t = \min_{k=0,\ldots,N-1}[\sum_{j=1}^k (W_{n-j}^t-Q_{n-j}^t)],
\end{align}
on the phase space $\mT$:
\begin{align*}
\mT &=
\Big\{(Q_{n}^t, Q_{n}^{t+\frac{1}{M}}, \ldots, Q_n^{t+\frac{M-1}{M}}, W_n^t)_{n=1,\ldots,N} \in \R^{(M+1)N}
\\
& \qquad \qquad
~\Big|~ \sum_n Q_n^{t+\frac{k}{M}} < \sum_n W_n^{t} ~ (k=0,\ldots,M-1),
\\
& \qquad \quad \qquad
\sum_n Q_n^{t+\frac{k}{M}} ~(k=0,\ldots,M-1)
\text{ are distinct } \Big\}.
\end{align*}
Here we set $\val(I_n^t) = Q_n^t$ and $\val(V_n^t) = W_n^t$.
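The piecewise-linear map \eqref{ud-pToda} is explicit, so it can be iterated directly. The sketch below (hypothetical code; it performs the single substep $(Q^t, W^t) \mapsto (Q^{t+1}, W^{t+\frac{1}{M}})$) checks the conservation of $\sum_n Q_n + \sum_n W_n$, which follows from the second equation by telescoping over $n \in \Z/N\Z$:

```python
def ud_toda_substep(Q, W):
    """One substep (Q^t, W^t) -> (Q^{t+1}, W^{t+1/M}) of the map (ud-pToda)."""
    N = len(Q)
    Qn = [0.0] * N
    for n in range(N):
        # X_n^t = min_{k=0..N-1} sum_{j=1}^k (W_{n-j} - Q_{n-j}), indices mod N
        s, X = 0.0, 0.0
        for k in range(1, N):
            s += W[(n - k) % N] - Q[(n - k) % N]
            X = min(X, s)
        Qn[n] = min(W[n], Q[n] - X)
    Wn = [Q[(n + 1) % N] + W[n] - Qn[n] for n in range(N)]
    return Qn, Wn

Q, W = [0, 2, 1], [3, 4, 5]   # sum Q < sum W, as required on the phase space
Qn, Wn = ud_toda_substep(Q, W)
assert sum(Qn) + sum(Wn) == sum(Q) + sum(W)   # conserved quantity
```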
The tropicalization of $F_{\ve}$ becomes the space of tropical polynomials
on $\mT$:
\begin{align}\label{slM-udF}
\begin{split}
\mathcal{F} =
&\Bigl\{
\min \Bigl[ (M+1) Y, ~
\min_{j=1,\ldots,M}
\bigl[jY + \min[d_j X + F_{j,d_j},\ldots,
\\
& \hspace{2.5cm}
X+F_{j,1},F_{j,0}]\bigr],
~ F_0 \Bigr] ~\Big|~ F_{1,d_1} = 0, ~F_{j,i},
F_0 \in \R
\Bigr\}.
\end{split}
\end{align}
We write $\Phi$ for the induced map $\mT \to \mF$.
\subsection{Spectral curves for $T(M,N)$ and good tropicalization}
We continue to assume $\gcd(M,N)=1$.
\begin{proposition}\label{prop:MNToda-curve}
For a generic point $\tau \in T_{\ve}$,
that is, a point in a certain Zariski open subset of $T_{\ve}$,
the spectral curve $\phi \circ \psi(\tau)$ has a good tropicalization.
\end{proposition}
To show this proposition, we use the following lemma:
\begin{lemma}\label{lemma:MN}
Fix $l \in K_{\ve}[x,y]$ and set $h_t = y^{M+1} - x^N y + tl$,
where $t\in K_{\ve}$.
Then $C^0(h_t)$ is non-singular in $(K_{\ve}^\times)^2$ except for
finitely many $t$.
\end{lemma}
\begin{proof}
Fix $a,b \in \Z$ with $Ma-Nb=1$
(always possible since $\gcd(M,N)=1$).
Then the map $\nu: (K_{\ve}^\times)^2 \to (K_{\ve}^\times)^2;
~(x,y) \mapsto (u, v) = (x^N/y^M, x^a/y^b)$ is holomorphic.
The push forward of $h_t/y^{M+1}$ by $\nu$ becomes
$$
\tilde{h} := (h_t/y^{M+1})_\ast = (1-u+t\tilde{l})
\quad (\tilde{l} \in K_{\ve}[u,v,u^{-1},v^{-1}]).
$$
By the following claim, $C^0(\tilde{h})$ is non-singular for all but finitely many $t$.
\begin{claim}
Fix $f,g \in K_{\ve}[u,v]$ such that $C^0(f)$ is non-singular,
and $f$ and $g$ are coprime to each other.
Define
$$
U = \{ t \in K_{\ve} ~|~ C^0(f+tg) \text{ is singular }\} \subset K_{\ve}.
$$
Then $U$ is a finite algebraic set.
\end{claim}
Then the lemma follows.
\end{proof}
\begin{proof}(Proposition \ref{prop:MNToda-curve})
\\
Recall the definition of good tropicalization (Definition \ref{def2.1}).
Part (1) follows immediately from Proposition \ref{prop:integrability}.
Part (2):
For any $f_{\ve} \in F_{\ve}$, one easily checks that if
two points
$P_1,P_2\in TV(f_{\ve})$ lie on the same
edge of the tropical curve, then $f_{\ve}^{P_1}=f_{\ve}^{P_2}$.
This fact implies that the set $\{f_{\ve}^P\,\vert\, P\in TV(f_{\ve})\}$ is finite.
Therefore, the set
\begin{align*}
\bigtriangleup=\{f_{\ve}\in F_{\ve}\,\vert\, C^{0}(f^P_{\ve})
&\mbox{ is non-reduced or singular in $(K_{\ve}^\times)^2$}
\\
& \hspace{3cm}\mbox{ for some } P\in TV(f_{\ve}) \}
\end{align*}
is a union of finitely many
non-trivial algebraic subsets of $F_{\ve}\simeq K_{\ve}^{n_F}$.
Using Proposition \ref{prop:integrability}
(with the map $\iota$ with any $\bar{\ve} \in \R_{>0}$)
and Lemma \ref{lemma:MN},
we conclude that $(\phi\circ\psi)^{-1}(\bigtriangleup)\subset T_{\ve}$
is an analytic subset with positive codimension.
(We need Lemma \ref{lemma:MN} when $f_{\ve}^P$ includes $y^{M+1} - x^N y$.)
\end{proof}
\section{General solutions to $\mathcal{T}(M,N)$}
\subsection{Bilinear equation}
In the following we use a notation $[t] = t ~{\rm mod}~ 1$
for $t \in \frac{\Z}{M}$.
The following proposition gives the bilinear form for
$\mT(M,N)$:
\begin{proposition}
Let $\{T_n^t \}_{n \in \Z; t \in \frac{\Z}{M}}$ be a set of
functions with the quasi-periodicity
$T_{n+N}^t = T_n^t + (an + bt +c)$
for some $a,b,c \in \R$.
Fix $\delta^{[t]}, \theta^{[t]} \in \R$ such that
$$
(a) \text{ $\delta^{[t]} + \theta^{[t]}$ does not depend on $t$},
\qquad
(b) ~2b-a < N \theta^{[t]} \text{ for } t \in \Z/M.
$$
Assume $T_n^t$ satisfies
\begin{align}\label{tau-function}
T_n^t + T_n^{t+1+\frac{1}{M}} =
\min[ T_n^{t+1} + T_n^{t+\frac{1}{M}},
T_{n-1}^{t+1+\frac{1}{M}} + T_{n+1}^t + \theta^{[t]}].
\end{align}
Then $T_n^t$ gives a solution to \eqref{ud-pToda}
via the transformation:
\begin{align}\label{QW-T}
\begin{split}
&Q_n^t = T_{n-1}^{t} + T_{n}^{t+\frac{1}{M}}
- T_{n-1}^{t+\frac{1}{M}} - T_{n}^{t}+\delta^{[t]},
\\
&W_n^t = T_{n-1}^{t+1} + T_{n+1}^t-
T_{n}^{t} - T_{n}^{t+1} + \delta^{[t]}+\theta^{[t]}.
\end{split}
\end{align}
\end{proposition}
We omit the proof since it is essentially the same as that of the $M=1$ case
in \cite[\S 3]{IT09}.
\begin{remark}
Via \eqref{QW-T}, \eqref{ud-pToda} is directly related to
\begin{align*}
&T_n^t + T_n^{t+1+\frac{1}{M}} =
T_n^{t+1} + T_n^{t+\frac{1}{M}} + X_{n+1}^t,
\\ \nonumber
&X_n^t = \min_{j=0,\ldots,M-1}
\bigl[
j \theta^{[t]}
+ T_{n-j-1}^{t+\frac{1}{M}}+T_{n-j-1}^{t+1}+T_n^t+T_{n-1}^t
\\ \nonumber & \hspace{3cm}
-(T_{n-1}^{t+1}+T_{n-1}^{t+\frac{1}{M}}+T_{n-j}^t+T_{n-j-1}^t) \bigr].
\end{align*}
This is shown to be equivalent to \eqref{tau-function}
under the quasi-periodicity of $T_n^t$.
See \cite[Propositions 3.3 and 3.4]{IT09} for the proof.
\end{remark}
\subsection{Example: $\mT(3,2)$}\label{sec:32}
We demonstrate a general solution to $\mT(3,2)$.
Take a generic point $\tau \in T_{\ve}$;
then the spectral curve $C(f_{\ve})$ for $T(3,2)$ over $K_{\ve}$
is given by the zero locus of $f_{\ve} = \phi \circ \psi (\tau) \in F_{\ve}$:
$$
f_{\ve} = y^4 + y^3 f_{30} + y^2(x f_{21} + f_{20}) + y(-x^2+x f_{11}+f_{10})
+ f_0.
$$
Due to Proposition \ref{prop:MNToda-curve},
$C(f_{\ve})$ has a good tropicalization.
The tropical curve $\Gamma :=TV(f_{\ve})$ in $\R^2$
is the set of points where $\xi := \Val(X,Y;f_{\ve})$ is not differentiable:
$$
\min \bigl[ 4 Y, 3Y + F_{30}, 2 Y + \min[X+F_{21},F_{20}],
Y + \min[2X,X+F_{11},F_{10}], F_0 \bigr].
$$
We assume that $\Gamma$ is smooth; then its genus is $g=2$.
See Figure \ref{GammaC} for $\Gamma$,
where we set the basis $\gamma_1, \gamma_2$ of $\pi_1(\Gamma)$.
\begin{figure}
\begin{center}
\unitlength=1.2mm
\begin{picture}(80,80)(0,0)
\put(0,5){\line(1,0){80}}
\thicklines
\put(0,0){\line(1,1){5}}
\put(5,5){\line(1,1){15}}
\put(5,5){\line(4,1){20}}
\put(25,10){\line(2,1){10}}
\put(20,20){\line(1,0){15}}
\put(35,20){\line(2,1){15}}
\put(20,60){\line(-1,1){20}}
\put(20,60){\line(2,-1){30}}
\put(50,45){\line(1,0){30}}
\put(50,27.5){\line(1,0){30}}
\put(35,15){\line(1,0){30}}
\put(25,10){\line(1,0){30}}
\put(35,15){\line(0,1){5}}
\put(20,20){\line(0,1){40}}
\put(50,45){\line(0,-1){17.5}}
\put(25,10){\circle*{1.5}} \put(23,6){$A_1$}
\put(35,15){\circle*{1.5}} \put(33,11){$A_2$}
\put(50,27.5){\circle*{1.5}} \put(48,23.5){$A_3$}
\put(50,45){\circle*{1.5}} \put(49,47){$R$}
\put(20,60){\circle*{1.5}} \put(19,62){$P$}
\put(5,5){\circle*{1.5}} \put(6,2){$Q$}
\thinlines
\put(35,36){\oval(20,20)}
\put(33,25.2){$>$}
\put(33,28){$\gamma_1$}
\put(23,15){\oval(10,6)}
\put(21,11.2){$>$}
\put(21,14){$\gamma_2$}
\end{picture}
\caption{Tropical spectral curve $\Gamma$ for $\mT(3,2)$}\label{GammaC}
\end{center}
\end{figure}
The period matrix $B$ for $\Gamma$ becomes
$$
B = \begin{pmatrix}
2 F_0 - 7 F_{11} + F_{20} & F_{11} - F_{20} \\
F_{11} - F_{20} & F_{11} + F_{20}
\end{pmatrix},
$$
and the tropical Jacobian $J(\Gamma)$ of $\Gamma$ is
$$
J(\Gamma) = \R^2 / \Z^2 B.
$$
Using the six points $P$, $Q$, $R$, $A_1$, $A_2$, $A_3$ on the universal
covering space of $\Gamma$, we define the following vectors:
\begin{align*}
&\vec{L} = \int_P^Q = (F_0-3F_{11},F_{11}),
\\
&\vec{\lambda}_1 = \int_Q^{A_3} = (F_{10}-2F_{11},-F_{20}),
\quad
\vec{\lambda}_2 = \int_Q^{A_2} = (0, F_{20}-F_{30}),
\\
&\vec{\lambda}_3 = \int_Q^{A_1} = (0, F_{30}),
\quad
\vec{\lambda} = \int_R^P = (F_{10}-2F_{11},0).
\end{align*}
Here the path $\gamma_{Q \to A_3}$ from $Q$ to $A_3$ is chosen
so that $\gamma_{Q \to A_3} \cap \gamma_1 \cap \gamma_2 \neq \emptyset$.
Note that
$\vec{\lambda} = \vec{\lambda}_1 + \vec{\lambda}_2 + \vec{\lambda}_3$
holds.
\begin{proposition}\label{prop:T32}
Fix $\bZ_0 \in \R^2$.
The tropical theta function $\Theta(\bZ;B)$ satisfies
the following identities:
\begin{align}\label{T32Fay}
\begin{split}
&\Theta(\bZ_0) + \Theta(\bZ_0+\vec{\lambda}+\vec{\lambda}_i)
\\ &\quad
= \min [ \Theta(\bZ_0+\vec{\lambda}) + \Theta(\bZ_0+\vec{\lambda}_i),
\Theta(\bZ_0-\vec{L})
+ \Theta(\bZ_0+\vec{L}+\vec{\lambda}+\vec{\lambda}_i)
+ \theta_i],
\end{split}
\end{align}
for $i=1,2,3$, where $\theta_1 = F_0-3 F_{11}$ and
$\theta_2 = \theta_3 = F_0-2 F_{11}$.
\end{proposition}
\begin{proof}
Since the curve $C(f_{\ve})$ is hyperelliptic,
we fix a non-singular odd theta characteristic as
$(\alpha,\beta)=((\frac{1}{2},0),(\frac{1}{2},\frac{1}{2}))$
following Lemma \ref{lemma:theta-ch}.
By setting $(P_1,P_2,P_3,P_4) = (R,Q,P,A_{4-i})$ in Theorem \ref{tropicalFay}
for $i=1,2,3$, we obtain \eqref{T32Fay}.
\end{proof}
Now it is easy to show the following:
\begin{proposition}
(i) Fix $\bZ_0 \in \R^2$ and $\{i,j\} \subset \{1,2,3\}$,
and define $T_n^t$ by
\begin{align*}
\begin{split}
&T_n^t = \Theta(\bZ_0-\vec{L}n+\vec{\lambda}t),
\\
&T_n^{t+\frac{1}{3}}
= \Theta(\bZ_0-\vec{L}n+\vec{\lambda}t+\vec{\lambda}_i),
\\
&T_n^{t+\frac{2}{3}}
= \Theta(\bZ_0-\vec{L}n+\vec{\lambda}t+\vec{\lambda}_i+\vec{\lambda}_j),
\end{split}
\quad (t \in \Z).
\end{align*}
Then they satisfy the bilinear equation \eqref{tau-function}
with
$\theta^{[0]} = \theta_i$, $\theta^{[\frac{1}{3}]} = \theta_j$ and
$\theta^{[\frac{2}{3}]} = \theta_k$,
where $\{k\} = \{1,2,3\} \setminus \{i,j\}$.
\\
(ii) With (i) and
$\delta^{[\frac{k}{3}]}=F_0-2F_{11}-\theta^{[\frac{k}{3}]} ~(k=0,1,2)$,
we obtain a general solution to $\mT(3,2)$.
\end{proposition}
\begin{remark}
Depending on the choice of $\{i,j\} \subset \{1,2,3\}$,
we have $3!=6$ types of solutions.
This suggests a claim for the isolevel set $\Phi^{-1}(\xi)$:
$$
\Phi^{-1}(\xi) \simeq J(\Gamma)^{\oplus 6}.
$$
\end{remark}
\subsection{Conjectures on $\mT(M,N)$}
We assume $\gcd(M,N)=1$ again.
Let $\Gamma$ be the smooth tropical curve given by
the set of points where a tropical polynomial $\xi \in \mF$
\eqref{slM-udF} is not differentiable.
We fix the basis of $\pi_1(\Gamma)$ given by
$\gamma_{i,j} ~(i=1,\ldots,M, ~j=1,\ldots,d_i)$ as in
Figure \ref{fig:GammaMN}.
The genus $g = \frac{1}{2}(N-1)(M+1)$ of $\Gamma$ can be obtained
by summing up $d_j$ from $j=1$ to $\max_{j=1,\ldots,M} \{j ~|~ d_j\geq 1\}$.
\begin{figure}
\begin{center}
\unitlength=1.2mm
\begin{picture}(80,80)(0,0)
\put(0,5){\line(1,0){80}}
\thicklines
\put(0,0){\line(1,1){5}}
\put(5,5){\line(1,1){10}}
\put(15,15){\line(5,1){5}}
\put(15,15){\line(1,2){5}}
\put(20,25){\line(0,1){4}}
\put(20,25){\line(4,1){4}}
\put(5,5){\line(5,1){10}}
\put(25,10){\line(-4,-1){5}}
\put(25,10){\line(2,1){10}}
\put(35,20){\line(-1,0){5}}
\put(35,20){\line(2,1){5}}
\put(20,60){\line(-1,1){20}}
\put(20,60){\line(2,-1){5}}
\put(60,45){\line(1,0){20}}
\put(60,45){\line(-4,1){16}}
\put(44,49){\line(-3,1){9}}
\put(50,27.5){\line(1,0){30}}
\put(50,27.5){\line(-2,-1){3}}
\put(35,15){\line(1,0){30}}
\put(25,10){\line(1,0){30}}
\put(44,49){\line(0,-1){5}}
\put(35,15){\line(0,1){5}}
\put(20,60){\line(0,-1){20}}
\put(20,43){\line(1,0){4}}
\put(50,27.5){\line(0,1){5}}
\put(60,38){\line(0,1){7}}
\put(60,38){\line(-2,-1){4}}
\put(60,38){\line(1,0){20}}
\put(25,10){\circle*{1.5}} \put(23,6){$A_1$}
\put(35,15){\circle*{1.5}} \put(33,11){$A_2$}
\put(50,27.5){\circle*{1.5}} \put(48,23.5){$A_i$}
\put(20,60){\circle*{1.5}} \put(19,62){$P$}
\put(5,5){\circle*{1.5}} \put(6,2){$Q$}
\put(60,38){\circle*{1.5}} \put(58,34){$A_M$}
\put(60,45){\circle*{1.5}} \put(59,47){$R$}
\put(44,35){$\vdots$}
\put(23,33){$\vdots$}
\put(38,25){$\ldots$}
\put(27,22){$\ldots$}
\put(20,12){$\ldots$}
\thinlines
\put(54,41){\circle{7}}
\put(52.5,36.8){$>$} \put(51,40){$\gamma_{1,d_1}$}
\put(25,50){\circle{7}}
\put(23.5,45.8){$>$} \put(23,49){$\gamma_{1,1}$}
\put(22,20.5){\circle{5}}
\put(20.5,17.4){$>$} \put(8,20){$\gamma_{M-1,1}$}
\put(31,16.5){\circle{5}}
\put(29.5,13.4){$>$} \put(36,17){$\gamma_{M,d_M}$}
\put(15,10.5){\circle{5}}
\put(13.5,7.4){$>$} \put(5,12){$\gamma_{M,1}$}
\end{picture}
\caption{Tropical spectral curve $\Gamma$ for $\mT(M,N)$}
\label{fig:GammaMN}
\end{center}
\end{figure}
Fix three points $P$, $Q$, $R$ on the universal covering space
$\tilde{\Gamma}$ of $\Gamma$ as in Figure \ref{fig:GammaMN}, and define
\begin{align*}
\vec{L} = \int_P^Q, \qquad
\vec{\lambda} = \int_R^P.
\end{align*}
Fix $A_i ~(i=1,\ldots,M)$ on $\tilde{\Gamma}$ as in Figure \ref{fig:GammaMN},
such that
\begin{align*}
\vec{\lambda}_i = \int_Q^{A_{M+1-i}} ~ (i=1,\ldots,M)
\end{align*}
satisfy $\vec{\lambda} = \sum_{i=1}^M \vec{\lambda}_i$.
We expect that
the bilinear form \eqref{tau-function} is obtained as a consequence of
the tropical Fay's identity \eqref{trop-fay},
by setting $(P_1,P_2,P_3,P_4) = (R,Q,P,A_i)$ in Theorem \ref{tropicalFay}.
Our conjectures are the following:
\begin{conjecture}\label{conj:1}
Let $\mathcal{S}_M$ be the symmetric group of order $M$.
Fix $\bZ_0 \in \R^g$ and $\sigma \in \mathcal{S}_M$, and set
$$
T_n^{t+\frac{k}{M}} = \Theta(\bZ_0 - \vec{L}n + \vec{\lambda} t +
\sum_{i=1}^{k} \vec{\lambda}_{\sigma(i)})
$$
for $k=0,\ldots,M-1$.
Then the following hold:
\\
(i) $T_n^t$ satisfy \eqref{tau-function} with some $\theta^{[t]}$.
\\
(ii) $T_n^t$ gives a general solution to $\mT(M,N)$ via \eqref{QW-T}.
\end{conjecture}
\begin{conjecture}\label{conj:2}
The above solution induces an isomorphism
from $J(\Gamma)^{\oplus M!}$ to the isolevel set $\Phi^{-1}(\xi)$.
\end{conjecture}
\begin{remark}
In the case of $\mT(1,g+1)$ and $\mT(2g-1,2)$,
the smooth tropical spectral curve $\Gamma$ is
hyperelliptic and has genus $g$.
For $\mT(1,g+1)$, Conjectures \ref{conj:1} and \ref{conj:2}
are completely proved \cite{IT09,IT09b}.
For $\mT(3,2)$, Conjecture \ref{conj:1} is shown in \S \ref{sec:32}.
\end{remark}
\section*{Acknowledgements}
R.~I. thanks the organizers of the international conference
``Infinite Analysis 09
--- New Trends in Quantum Integrable Systems''
held at Kyoto University in July 2009,
for giving her an opportunity to give a talk.
She also thanks Takao Yamazaki for advice on the manuscript.
R.~I. is partially supported by Grant-in-Aid for Young Scientists (B)
(19740231).
S.~I. is supported by KAKENHI
(21-7090).
\section{Introduction}
Since the discovery of AGN, variability has been established as a main property
of the population, and it was among the first to be explored \citep{KBSmith63}.
The luminosities of AGN have been observed to vary across the whole electromagnetic
range, and the majority of objects exhibit continuum variations of
about 20\% on timescales of months to years \citep{KBHook94}. From a physical
point of view, variations can set limits on the size of the central emitting
region and the differences in the variability properties in the X-ray, optical
and radio bands provide important information on the underlying structure. The
mechanism of variability itself is still unknown and a variety of models have
been proposed \citep{KBTerl92,KBHawk93,KBKawa98}. Thus, the study of the AGN
variability is very important and can put constraints on the models describing
the AGN energy source and the AGN structure.
On the other hand, supernovae (SN) are very powerful cosmological probes and
their systematic discovery outside the local Universe has led to major scientific
results, like the confirmation that the Universe is accelerating \citep{KBRiess98,KBPerl99}.
Such studies require well sampled light curves and large statistical samples which can
be achieved by monitoring wide areas of the sky to very faint limiting magnitudes.
This kind of surveys produce huge amounts of data that can be suitable
also for other scientific studies. For example, given that the time sampling is adequate,
data gathered during SN searches can be used to detect AGN through variability.
One of the main purposes of this work is to explore this possibility and create
suitable tools for the efficient selection of AGN in such databases. The two
projects that have provided us with their data are: the Southern inTermediate
Redshift ESO Supernova Search \citep[STRESS,][]{KBBott08} and the
ESSENCE (Equation of State: SupErNovae trace Cosmic Expansion) survey \citep{KBmikn07}.
\section{AGN in the STRESS survey}
The STRESS survey \citep{KBBott08} includes 16 fields with
multi-band information. Each of these covers an area of 0.3\,deg$^2$
and has been monitored for 2 years with the ESO/MPI 2.2m telescope.
For this study we choose to use the so-called AXAF field, which is centered at
$\alpha$=03:32:23.7, $\delta$=-27:55:52 (J2000) and overlaps with various surveys,
which provide us with further data, such as the COMBO-17 survey \citep{KBWolf03} with
measurements in 5 wide and 12 narrow filters, resulting in a low resolution
spectrum, the ESO Imaging Survey \citep[EIS,][]{KBarn01}, the GOODS survey \citep{KBgiav04}
and the two X-ray surveys, Chandra Deep Field South \citep[CDFS,][]{KBGiac02} and
the Extended-CDFS survey \citep[ECDFS,][]{KBLeh05}.
For our variability study of the AXAF field we have used 8 epochs
obtained in the V band, during the period 1999-2001, thus covering 2 years.
For each source detected in at least 5 epochs, we have measured the average magnitude
and its r.m.s. variation, which is then compared with a 3$\sigma$ threshold
obtained by averaging the r.m.s. in bins of magnitude. Details on the calculation
of the variability threshold can be found in \citet{KBtre08}. This procedure has
yielded a catalogue of 132 AGN candidates down to V=24~mag.
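The selection step can be sketched as follows (a minimal illustration in Python; the function and parameter names are ours, and the actual threshold computation follows \citet{KBtre08}):

```python
import numpy as np

def variability_candidates(light_curves, bin_width=0.5, n_sigma=3.0, min_epochs=5):
    """Flag a source as variable when the r.m.s. of its light curve exceeds
    n_sigma times the average r.m.s. of sources of similar mean magnitude
    (computed in magnitude bins). NaN entries mark missing epochs."""
    mean_mag, rms = [], []
    for lc in light_curves:
        good = np.asarray(lc, float)
        good = good[~np.isnan(good)]
        if good.size < min_epochs:                 # require detection in >= 5 epochs
            mean_mag.append(np.nan)
            rms.append(np.nan)
            continue
        mean_mag.append(good.mean())
        rms.append(good.std(ddof=1))
    mean_mag, rms = np.array(mean_mag), np.array(rms)

    is_candidate = np.zeros(mean_mag.size, dtype=bool)
    ok = ~np.isnan(mean_mag)
    edges = np.arange(mean_mag[ok].min(),
                      mean_mag[ok].max() + bin_width, bin_width)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = ok & (mean_mag >= lo) & (mean_mag < hi)
        if in_bin.sum() < 2:
            continue
        threshold = n_sigma * rms[in_bin].mean()   # 3-sigma variability cut
        is_candidate[in_bin] = rms[in_bin] > threshold
    return mean_mag, rms, is_candidate
```

A source well above the binned r.m.s. threshold is flagged, while sources with fewer than 5 detections are discarded, as in the text.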
Despite all the active surveys in our area, only 31\% of our candidates have
public spectroscopic information. For this reason we have performed a
spectroscopic follow-up using EMMI at the ESO/NTT (La Silla). We obtained
low resolution spectra for 27 sources belonging to the bright
part of our sample (V$<$21.3mag). We now have 55\% of our candidates
spectroscopically confirmed. The remaining objects are typically fainter than what we have been
able to observe so far. Based on this dataset a complex picture emerges. Out of
the 27 sources for which we have obtained spectra, 17 are Broad Line AGNs (BLAGNs),
1 is a normal galaxy, and a considerable number (7) are Narrow Emission Line
Galaxies (NELGs). The remaining two sources are stars. The spectra of all the sources
we have observed with details about their properties are presented by \citet{KBbout09}.
\begin{figure}[!ht]
\plottwo{boutsia_k_fig1.eps}{boutsia_k_fig2.eps}
\caption{Luminosity in the R band vs. luminosity in the hard (2-8keV) X-ray band (left panel).
Large dots indicate BLAGNs, diamonds represent NELGs and squares are normal galaxies.
The smaller symbols indicate non variable sources detected in the X-rays in our field. The objects
with ID 94 and ID 125 indicate sources with X-ray measurements derived from the recent
2Ms survey \citep{KBluo08}. The source with ID 26 has an optical spectrum typical
of a BLAGN but would be selected neither by color techniques nor by X-ray emission. Redshift histogram of the AXAF variable candidates
(right panel) that have been confirmed as BLAGN based on their optical spectra. The black line refers to all variable
candidates with known redshift and the shaded section represents the part of the redshifts determined during our campaign.}
\end{figure}
Among the normal galaxies and the NELGs, which make up the low luminosity part
of our candidates, we may distinguish two groups. A fraction of these sources (7/14)
displays high variability and their colours ($\ub$ and $\bv$) as well as X-ray to optical ratio
(X/O), are consistent with AGN (see Fig.1). The other 7 sources are less variable, have lower X/O
ratios and their colours are dominated by the host galaxy. According to our analysis,
these latter sources have properties consistent with Low Luminosity AGN (LLAGN),
contaminated by the light of the host galaxy. All these sources have extended morphologies
and would not have been detected by the color technique, which is limited to point-like
sources, nor by their X-ray emission since they are not detected in the hard X-ray
band (2-8keV) despite the 1Ms exposure time for the CDFS. For the NELGs with the
necessary lines detected to place them in the diagnostic diagram \citep{KBkew06},
we find that they tend to lie in the locus of the composite sources.
Out of the 65 known BLAGNs in our field, 47 (72\%) are found to display significant
variability. The confirmed BLAGNs of our sample have an average X/O of 0.55. This
value is consistent with the X/O of 0.31 obtained by \citet{KBfior03} for optically
selected samples, while the X-ray selected sources present a ratio of X/O$\sim$1.2.
This is a further indication that by using variability as a selection technique we probe
a different part of the AGN population, favouring the identification of X-ray weak
sources. This fact, in combination with the known correlation between variability
amplitude and luminosity (in the sense that AGN of lower luminosity show larger variability
amplitudes) makes variability an ideal tool for selecting LLAGNs. Still, 45\% of our
candidates remain without optical spectroscopy because of their faintness. In order
to better understand the complex LLAGN population, spectroscopy to fainter flux
limits is needed.
\section{AGN in the ESSENCE survey}
The ESSENCE survey \citep{KBmikn07,KBWV07} has been active for 6 years (2002-2007) and
was carried out with the Blanco 4m telescope at the Cerro Tololo Inter-American
Observatory (CTIO). The cadence of the observations was every other night,
for 20 nights around New Moon, for 3 months per year in the R and I band.
This resulted in very well sampled light curves for all sources in the 12deg$^2$ field.
In order to test our variability method we have used only part of the available
light curves that cover a 2 year period. As our previous experience
in the AXAF field has shown, such a timespan is a good compromise between selecting AGN and
discriminating against supernova outbursts. In such a wide time baseline with an
average of 30 epochs per light curve, the variation caused by a SN is well limited
in time and gets diluted, so the source does not appear variable. In fact, fewer
than 10 of our variable sources turned out to be known SNe discovered by the survey.
The adopted strategy for the ESSENCE dataset is not exactly the same as in the AXAF
field, mainly because we have used the light curves produced directly by the pipeline
of the survey for the needs of the project. Here the variability threshold was
linked to the noise of the data and not to the intrinsic variability of the distribution.
To minimize spurious detections, the sources had to show significant variability
in both bands in order to be classified as candidates. Following this criterion we have
created a list of $\sim$4800 variable objects down to a magnitude of 22 in the R band.
Since our light curves are composed of a large number of epochs
(an average of 30 epochs in each light curve), we may also derive the structure
function (SF) for each object. The SF is a way of quantifying
time variability and, following \cite{KBDeVries05}, is defined as:
\begin{equation}
S(\tau) = \left\{\frac{1}{N(\tau)}\sum_{i<j}\left[m(t_{i})-m(t_{j})\right]^{2}\right\}^{1/2}
\end{equation}
where $N(\tau)$ is the number of epoch pairs for which $t_{i}-t_{j}=\tau$, over which the
corresponding magnitude differences are summed. We have defined subsamples of candidates
depending on the shape of their SF. A flat SF shows that the examined time lag is longer
than the characteristic timescale of variability and should indicate sources that are not variable
or whose variability timescale is much shorter than the probed period. In the case
of AGN, variability is known to increase with time and for the time baseline of the 2 years
sampled by our light curves, it should result in an ascending SF. Thus, for our spectroscopic
follow-up we have chosen candidates with ascending or generally non-flat SF. In Fig.2 we
can see examples of light curves and SFs of sources that were subsequently confirmed as AGN
by our follow-up.
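A minimal sketch of this computation (illustrative only; names are ours, and the pair binning simply implements the definition above):

```python
import numpy as np

def structure_function(t, m, lag_bins):
    """First-order structure function: for each bin of time lag tau,
    S = sqrt( mean over pairs (i < j) with |t_j - t_i| in the bin
              of [m(t_i) - m(t_j)]^2 )."""
    t, m = np.asarray(t, float), np.asarray(m, float)
    i, j = np.triu_indices(t.size, k=1)           # all epoch pairs i < j
    lags = np.abs(t[j] - t[i])
    d2 = (m[j] - m[i]) ** 2
    sf = np.full(len(lag_bins) - 1, np.nan)
    for b, (lo, hi) in enumerate(zip(lag_bins[:-1], lag_bins[1:])):
        sel = (lags >= lo) & (lags < hi)
        if sel.any():
            sf[b] = np.sqrt(d2[sel].mean())
    return sf
```

A source whose variability grows with time lag yields an ascending SF, while a non-variable source yields an SF consistent with zero (or pure noise), matching the classification used for the follow-up.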
\begin{figure}[!ht]
\plottwo{boutsia_k_fig3.eps}{boutsia_k_fig4.eps}
\plottwo{boutsia_k_fig5.eps}{boutsia_k_fig6.eps}
\caption{Typical light curves in both R and I band (left panel) and binned structure
functions in the R band (right panel) for two variable candidates that were confirmed
as BLAGN after our spectroscopic follow-up. Notice the ascending shape of the SF for both
sources although the shape of their light curves is not comparable.}
\end{figure}
\begin{figure}[!ht]
\plottwo{boutsia_k_fig7.eps}{boutsia_k_fig8.eps}
\caption{Distribution of the variability measurement $\hat{\sigma}_{R}$ versus the R band magnitude
for all sources that we have observed during our spectroscopic follow-up. The large dots represent
the confirmed AGN, the square represents the source which is a normal galaxy and the line
shows the adopted variability threshold. The histogram shows the redshift range of the AGN that were
observed in the spectroscopic follow-up of our sample.}
\end{figure}
During the same spectroscopic run with EMMI at NTT, we have obtained low resolution
spectra for 58 sources that belong to the subsample with the ascending SF and their
magnitudes range between 18.5 and 20.5 in the R band. 53 (91\%) were confirmed as
Broad Line AGN with a secure redshift determination. 3 sources show only one
broad emission line and a power-law continuum, typical of AGN; although we cannot
claim an accurate redshift for these sources, we can still consider them as \textit{bona
fide} AGN. This brings our success rate to $\sim$97\%. The remaining sources show
absorption features, and one of them is recognized as a normal galaxy. In Fig.3 we show
the position of the observed sources in the distribution of their variability versus their
magnitude. The average redshift of the observed candidates is $<$z$>$=1.40. Details about
the variability method and the comparison of our sample with other AGN samples existing
in the same fields will be presented in Boutsia et al. \citetext{in preparation}.
\section{Conclusions}
We have applied a variability selection method to data collected by SN searches
in order to detect AGN through variability. We have been very successful in
detecting new AGNs, which had escaped traditional selection techniques and have
confirmed a large number of already known AGNs in these fields. This proves that
the AGN field can benefit from such synergic AGN-SN surveys. After a spectroscopic
follow-up, a considerable fraction of our variable candidates turned out to be ``variable galaxies''
with narrow emission lines and properties consistent with LLAGNs diluted by the host galaxy.
By combining the criterion for variability with a secondary criterion concerning the shape
of the SF, we created highly reliable AGN samples, since $\sim$97\% of our candidates
belonging to such a sample were confirmed as BLAGN. In an era when large survey telescopes
are being developed, such variability studies can give valuable feedback both for
determining the strategy of the observations as well as for the development of software
and pipelines that will allow the scientific community to fully exploit the huge datasets
that will be produced.
\section{Introduction}
The functional renormalization group approach to correlated fermion systems has been of great help to detect different types of instabilities and collective order within many different models. This holds, in particular, for the two-dimensional Hubbard model which is hoped to improve our understanding of superconductivity in the high-$T_c$-cuprates.\cite{zanchi1,zanchi2,halbothmetzner,halbothmetzner2,honisalmi,salmhofer,honerkamp01,katanin} Most studies presented so far rely on the flow of the momentum-dependent four-fermion vertex. (For analogous work on four-quark vertices see \cite{Ellwanger,Meggiolaro}.) They are performed in the so-called $N$-patch scheme where the Fermi surface is discretized into $N$ patches, and the angular dependence of the fermionic four-point function is evaluated only for one momentum on each directional patch.
The approach presented here is based on the introduction of fermionic bilinears corresponding to different types of possible orders (partial bosonization) that was developed and used before in \cite{bbw00,bbw04,bbw05,kw07,krahlmuellerwetterich,simon}. It is also inspired by the efficient parametrization method for the fermionic four-point vertex proposed in \cite{husemann}. The link between the two approaches is given by the fact that different channels of the fermionic four-point function, defined by their (almost) singular momentum structure, correspond to different types of possible orders which are described by different composite bosonic fields.
The advantages of our method are, first, that it allows one to treat the complex momentum dependence of the fermionic four-point function in an efficient, simplified way, involving only a small number of coupled flow equations and, second, that it permits one to follow the renormalization group flow into phases of broken symmetry. A comparative disadvantage is that the $N$-patch approach offers a better resolution of contributions from many channels. (In principle, both approaches can be combined.) In this paper we focus on the first of these two aspects. Spontaneous symmetry breaking was already addressed for antiferromagnetism (AF) \cite{bbw04, bbw05} in the Hubbard model close to half filling and for $d$-wave superconductivity in an effective Hubbard-type model with a dominating coupling in the $d$-wave channel.\cite{kw07}
\begin{figure}[t]
\includegraphics[width=50mm,angle=0.]{mitSimon_Bosonexchange.eps}
\caption{\small{Schematic picture of bosonization of the four-fermion vertex. Solid lines correspond to fermions, the dashed line to a complex (Cooper pair) boson, the wiggly line to a real boson representing a particle-hole state in the spin or charge density wave channel.}}
\label{bosonization}
\end{figure}
The main idea behind partial bosonization, namely, to represent the fermionic four-point vertex by a certain number of exchange bosons, is graphically shown in Fig. \ref{bosonization}. The dependence of the four-point vertex on three external frequencies and momenta (or simply ``momenta'', as we are going to write for short) is parametrized in terms of bosonic propagators together with Yukawa couplings which describe the interaction between one boson and two fermions. The different bosonic channels are distinguished according to the structure of their momentum dependence which may possibly become singular due to a zero of the inverse boson propagator. The momentum-independent part of the four-fermion vertex may either be distributed onto the different bosonic channels or be kept fixed as a purely fermionic coupling. In order to avoid the arbitrariness encountered when one chooses the first of these two options, we adopt the second for our computations. This may be regarded as a prototype for a combination of partial bosonization with the $N$-patch method in the sense of keeping only one patch and setting the four-fermion coupling to a constant value.
Although for a numerically exact treatment of the four-fermion vertex an infinity of bosonic fields would have to be considered in principle, a small number of well-chosen fields may suffice for a reasonable quantitative precision. Which fields have to be included in order to capture the relevant physics depends on the model under investigation. In the case of the two-dimensional Hubbard model at small next-to-nearest-neighbor hopping $|t'|$, a magnetic boson $\mathbf{m}$ and a $d$-wave Cooper pair boson $d$ are needed because they correspond to the instabilities that occur. In order to avoid too poor a momentum resolution of the four-fermion vertex, we also include an $s$-wave Cooper pair boson field $s$ and a charge density boson field $\rho$. Other types of bosons are needed in other contexts---for instance, a $d$-wave charge density boson for the study of Pomeranchuk instabilities or a $p$-wave boson for triplet superconductivity at larger values of $|t'|$ away from the van Hove filling \cite{honisalmi}. With the restriction to the bosons $\mathbf{m},d,s$ and $\rho$, supplemented by a pointlike four-fermion vertex, we will show that interesting results and a semiquantitative understanding can be found in a rather simple truncation.
The two-dimensional Hubbard model \cite{hubbard,kanamori,gutzwiller} on a square lattice has attracted a lot of attention in the past 25 years because it is thought to cover important aspects of the physics of the high-$T_c$ cuprates. In analogy to the phase diagram of the cuprates, it shows antiferromagnetic order at half-filling and is believed to exhibit $d$-wave superconducting order (dSC) away from half filling.\cite{anderson} Today there are many studies which predict the $d$-wave instability to be the dominating one in a certain range of parameters aside from half filling \cite{miyake,loh,bickers,lee,millis,monthoux,scalapino,bickersscalapinowhite,bulut,pruschke,maier,senechal,maierjarrellscalapino,maiermacridin}, for a systematic overview see \cite{scalapinoreview}. The picture is also confirmed by some strikingly simple scaling approaches \cite{schulz,dzyaloshinskii,lederer} and finds further support within more elaborate renormalization group studies such as \cite{zanchi1,zanchi2,halbothmetzner,halbothmetzner2,honisalmi,salmhofer,honerkamp01,metznerreissrohe,reiss}.
\section{Method and approximation}
The starting point of our treatment is the exact flow equation for the effective average action or flowing action,\cite{cw93}
\begin{equation} \label{floweq}
\partial_k \Gamma_k = \frac{1}{2} \rm{STr} \,
\left(\Gamma^{(2)}_k + R_k\right)^{-1} \partial_k R_k =
\frac{1}{2} \rm{STr}\,\tilde \partial_k \,\left(\ln (\Gamma^{(2)}_k + R_k)\right)\,.
\end{equation}
The dependence on the renormalization group scale $k$ is introduced by adding a regulator $R_k$ to the full inverse propagator $\Gamma^{(2)}_k$. In Eq. (\ref{floweq}), $\rm{STr}$ denotes a supertrace, which sums over momenta, frequencies, and internal indices, while $\tilde \partial_k$ is the scale derivative acting only on the infrared (IR) regulator $R_k$. The Hamiltonian of the system under consideration is taken into account by the initial condition $\Gamma_{k=\Lambda}=S$ of the renormalization flow, where $\Lambda$ denotes some very large UV scale and $S$ is the microscopic action in a functional integral formulation of the Hubbard model. In the IR limit ($k\to 0$) the flowing action $\Gamma_k$ equals the full effective action $\Gamma=\Gamma_{k\to 0}$, which is the generating functional of one-particle-irreducible (1PI) vertex functions.
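The structure of Eq. (\ref{floweq}) can be illustrated in zero dimensions, where the functional integral becomes an ordinary integral and the supertrace a single mode (a toy example, not part of our truncation): with regulator $R_k=k^2$, the flow of the full potential reads $\partial_k U_k(\phi)=\frac{1}{2}\,\partial_k R_k\,\big(U_k''(\phi)+R_k\big)^{-1}$, and integrating it on a grid reproduces the exact relation $\Gamma''(0)=1/\langle\phi^2\rangle$:

```python
import numpy as np

def flow_potential(m2=1.0, lam=1.0, Lambda=20.0, dk=1e-3, h=0.05, phi_max=4.0):
    """Zero-dimensional toy of the exact flow equation: integrate
    dU_k/dk = k / (U_k'' + k^2)  (regulator R_k = k^2) for the full
    potential on a phi grid, from k = Lambda down to k ~ 0.
    Returns U''(0) at the end of the flow, which should equal 1/<phi^2>."""
    phi = np.arange(-phi_max, phi_max + h, h)
    U = 0.5 * m2 * phi**2 + lam * phi**4 / 24.0       # U_Lambda = classical action
    k = Lambda
    while k > 0.01:
        Upp = np.empty_like(U)
        Upp[1:-1] = (U[2:] - 2.0 * U[1:-1] + U[:-2]) / h**2
        Upp[0], Upp[-1] = Upp[1], Upp[-2]             # crude boundary closure
        U -= dk * k / (Upp + k**2)                    # explicit Euler step to k - dk
        k -= dk
    i0 = int(np.argmin(np.abs(phi)))                  # grid point closest to phi = 0
    return (U[i0 + 1] - 2.0 * U[i0] + U[i0 - 1]) / h**2
```

In zero dimensions no truncation is needed, so the flow endpoint agrees (up to discretization errors) with a direct evaluation of $\langle\phi^2\rangle$ by quadrature; in the full model the analogous equation acts on a functional and truncations such as ours become necessary.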
We employ a compact notation with $Q=(\omega_n,\mathbf{q})$, where $\omega_n=2\pi nT$ for bosonic and $\omega_n=(2n+1)\pi T$ for fermionic fields, and
\begin{eqnarray}\label{eq:sumdefinition}
&&\quad\sum\limits_Q=T\sum\limits_{n=-\infty}^\infty \int\limits_{-\pi}^\pi \frac{d^2q}{(2\pi)^2}\,,\nonumber\\
&&\delta(Q-Q')=T^{-1}\delta_{n,n'}(2\pi)^2\delta^{(2)}(\mathbf{q}-\mathbf{q'})\,.
\end{eqnarray}
The components of the momentum $\mathbf q$ are measured in units of the inverse lattice distance $\mathrm{a}^{-1}$. The discreteness of the lattice is reflected by the $2\pi$-periodicity of the momenta $\mathbf{q}$.
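As a quick consistency check of these conventions (illustrative, not part of the derivation), the standard fermionic Matsubara sum $T\sum_n(\omega_n^2+\xi^2)^{-1}=\tanh\big(\xi/2T\big)/(2\xi)$ is easily reproduced numerically:

```python
import numpy as np

T, E = 0.1, 0.7                           # temperature and energy xi, units of t
n = np.arange(-100000, 100000)
w_f = (2 * n + 1) * np.pi * T             # fermionic Matsubara frequencies
lhs = T * np.sum(1.0 / (w_f**2 + E**2))   # truncated Matsubara sum
rhs = np.tanh(E / (2.0 * T)) / (2.0 * E)  # closed form of the same sum
```

The truncated sum converges to the closed form as the frequency cutoff grows, since the summand decays as $\omega_n^{-2}$.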
Although Eq. \eqref{floweq} is an exact flow equation, it can only be solved approximately. In particular, a truncation has to be specified for the flowing action, indicating which of the (infinitely many) 1PI vertex functions are actually taken into account. Our ansatz for the flowing action includes contributions for the electrons, for the bosons in the magnetic, charge, and $s$-wave and $d$-wave superconducting channels, and for interactions between fermions and bosons,
\begin{eqnarray}
\Gamma_k[\chi]&=&\Gamma_{F,k}[\chi]+\Gamma_{Fm,k}[\chi]+\Gamma_{F\rho,k}[\chi]+\Gamma_{Fs,k}[\chi]+\Gamma_{Fd,k}[\chi]\nonumber\\
&&+\Gamma_{m,k}[\chi]+\Gamma_{\rho,k}[\chi]+\Gamma_{s,k}[\chi]+\Gamma_{d,k}[\chi]\,.
\end{eqnarray}
The collective field $\chi=(\mathbf m,\rho,s,s^*,d,d^*,\psi,\psi^*)$ includes both fermion fields $\psi,\psi^*$ and boson fields $\mathbf m,\rho,s,s^*,d,d^*$.
The purely fermionic part $\Gamma_{F}[\chi]$ (the dependence on the scale $k$ is always implicit in what follows) of the flowing action consists of a two-fermion kinetic term $\Gamma_{F\rm{kin}}$, a momentum-independent four-fermion term $\Gamma_F^U$, and the momentum-dependent four-fermion terms $\Gamma_F^m$, $\Gamma_F^\rho$, $\Gamma_F^s$ and $\Gamma_F^d$,
\begin{eqnarray}
\Gamma_{F}[\chi]=\Gamma_{F\rm{kin}}+\Gamma_{F}^U+\Gamma_F^m+\Gamma_F^\rho+\Gamma_F^s+\Gamma_F^d\,.
\end{eqnarray}
The fermionic kinetic term is given by
\begin{eqnarray}\label{fermprop}
\Gamma_{F\rm{kin}}=\sum_{Q}\psi^{\dagger}(Q)P_F(Q)\psi(Q)\,,
\end{eqnarray}
with inverse fermion propagator
\begin{eqnarray}\label{PF}
P_{F}(Q)=i\omega_n+\xi(\mathbf q)
\,,\end{eqnarray}
where we take for the dispersion relation of the free electrons
\begin{equation}
\xi(\mathbf q)=-\mu-2t(\cos q_x +\cos q_y)-4t' \cos q_x\cos q_y\,.
\end{equation}
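For $t'=0$ and $\mu=0$ this dispersion is particle-hole symmetric and perfectly nested, $\xi(\mathbf q+\mathbf Q)=-\xi(\mathbf q)$ with $\mathbf Q=(\pi,\pi)$, and the van Hove points $(\pi,0)$, $(0,\pi)$ lie on the Fermi surface --- the features behind the antiferromagnetic instability near half filling. A short numerical check (our own sketch, units $t=1$):

```python
import numpy as np

def xi(qx, qy, t=1.0, tp=0.0, mu=0.0):
    """Tight-binding dispersion; momenta in units of the inverse lattice spacing."""
    return -mu - 2.0 * t * (np.cos(qx) + np.cos(qy)) \
           - 4.0 * tp * np.cos(qx) * np.cos(qy)

# van Hove point on the Fermi surface at half filling (tp = mu = 0):
#   xi(pi, 0) = 0
# perfect nesting: xi(q + (pi, pi)) = -xi(q)
```

Both properties follow from $\cos(q+\pi)=-\cos q$ and are lost once $t'\neq 0$ or $\mu\neq 0$.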
The momentum-independent part of the four-fermion coupling is identical to the Hubbard interaction $U$. In our truncation, this coupling is not modified during the flow. The corresponding part of the effective action therefore reads
\begin{eqnarray}
\Gamma_{F}^U &=&\frac{1}{2}\sum_{K_1,K_2,K_3,K_4}U\,\delta\left( K_1-K_2+K_3-K_4 \right)\,\nonumber\\&&\hspace{0.5cm}\times\,\big\lbrack\psi^\dagger(K_1)\psi(K_2)\big\rbrack\,\big\lbrack\psi^\dagger(K_3)\psi(K_4)\big\rbrack\,.
\end{eqnarray}
In this work, as in \cite{zanchi1,zanchi2,halbothmetzner,halbothmetzner2,honisalmi,salmhofer,honerkamp01,katanin}, contributions to the fermionic self-energy are neglected. Instead, we focus on the momentum dependence of the fermionic four-point function $\lambda_F(K_1,K_2,K_3,K_4)$, which, due to energy-momentum conservation $K_4=K_1-K_2+K_3$, is a function of three independent momenta. We decompose this vertex into a sum of four functions $\lambda_F^m(Q)$, $\lambda_F^\rho(Q)$, $\lambda_F^s(Q)$ and $\lambda_F^d(Q)$, each depending on only one particular combination of the $K_i$, which correspond to the four different bosons taken into account. This is inspired by the singular momentum structure of the leading contributions to the four-fermion vertex. In our ansatz for the effective average action these functions enter as
\begin{align}
\Gamma_F^m=-\frac{1}{2}\sum_{K_1,K_2,K_3,K_4}\lambda_F^m(K_1-K_2)\,\delta\left( K_1-K_2+K_3-K_4 \right)\nonumber\\
\times\,\big\lbrack \psi^\dagger(K_1)\boldsymbol\sigma\psi(K_2) \big\rbrack\cdot\big\lbrack \psi^\dagger(K_3)\boldsymbol\sigma\psi(K_4) \big\rbrack\,,\label{aform}\\
\Gamma_F^\rho=-\frac{1}{2}\sum_{K_1,K_2,K_3,K_4}\lambda_F^\rho(K_1-K_2)\,\delta\left( K_1-K_2+K_3-K_4 \right)\nonumber\\
\times\,\big\lbrack \psi^\dagger(K_1)\psi(K_2) \big\rbrack\,\big\lbrack \psi^\dagger(K_3)\psi(K_4) \big\rbrack
\end{align}
for the real bosons, and, for the superconducting bosons, as
\begin{align}
\Gamma_F^s=\sum_{K_1,K_2,K_3,K_4}\lambda_F^s(K_1+K_3)\,\delta\left( K_1-K_2+K_3-K_4 \right)\nonumber\\
\times\,\big\lbrack \psi^\dagger(K_1)\epsilon\psi^*(K_3) \big\rbrack\,\big\lbrack \psi^T(K_2)\epsilon\psi(K_4) \big\rbrack\,,\\
\Gamma_F^d=\sum_{K_1,K_2,K_3,K_4}\lambda_F^d(K_1+K_3)\,\delta\left( K_1-K_2+K_3-K_4 \right)\nonumber\\
\times\, f_d(( K_1-K_3)/2)\, f_d(( K_2-K_4)/2)\nonumber\\
\times\,\big\lbrack \psi^\dagger(K_1)\epsilon\psi^*(K_3) \big\rbrack\,\big\lbrack \psi^T(K_2)\epsilon\psi(K_4) \big\rbrack\,,\label{dform}
\end{align}
where $\boldsymbol\sigma=(\sigma^1,\sigma^2,\sigma^3)$ is the vector of the Pauli matrices, the matrix $\epsilon$ is defined as $\epsilon=i\sigma^2$, and the function
\begin{equation}
f_d(Q)=f_d(\mathbf q)=\frac{1}{2}\left( \cos{q_x}-\cos{q_y} \right)\,
\end{equation}
is the $d$-wave form factor which is kept fixed during the flow.
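The qualitative behavior of this form factor is simple to exhibit (a small numerical sketch of the equation above): it takes the extremal values $\mp 1$ at the antinodal points $(\pi,0)$ and $(0,\pi)$, vanishes along the Brillouin-zone diagonals, and changes sign under a $90^\circ$ rotation.

```python
import numpy as np

def f_d(qx, qy):
    """d_{x^2-y^2} form factor of the equation above; kept fixed during the flow."""
    return 0.5 * (np.cos(qx) - np.cos(qy))

# f_d(pi, 0) = -1, f_d(0, pi) = +1  (antinodes)
# f_d(q, q) = 0                     (nodes on the diagonals)
# f_d(qy, qx) = -f_d(qx, qy)        (sign change under 90-degree rotation)
```

These sign changes are what distinguish the $d$-wave Cooper channel from the $s$-wave one in Eqs. (\ref{aform})--(\ref{dform}).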
In a first step, contributions to the four-fermion vertex are distributed onto the couplings $\lambda_F^m,\;\lambda_F^\rho,\;\lambda_F^s,\;\lambda_F^d$, depending on their momentum dependence. Partial bosonization comes into play at this stage as the absorption of these contributions by the corresponding Yukawa couplings and bosonic propagators. More concretely, this means that the couplings $\lambda_F^m,\;\lambda_F^\rho,\;\lambda_F^s,\;\lambda_F^d$ are set to zero by introducing a scale-dependence of the bosonic fields, which in turn generates additional contributions to the various Yukawa couplings. The technique by means of which this is achieved is called flowing bosonization or rebosonization.\cite{GiesWett,floerchi} We describe it in some detail in Appendix A. In consequence, the complicated spin and momentum dependence of the fermionic four-point function $\lambda_F(K_1,K_2,K_3,K_4)$, as it emerges during the flow, will be captured by the momentum dependence of the propagators of the bosons and the couplings between bosons and fermions.
The interactions between electrons and composite bosons are taken into account in our ansatz for the flowing action by Yukawa-type vertices of the form
\begin{eqnarray}\label{GFak}
\Gamma_{Fm}&=&-\!\sum_{K,Q,Q'}\bar h_m(K)\;\mathbf{m}(K)\cdot[\psi^{\dagger}(Q)\boldsymbol{\sigma}\psi(Q')]\;\delta(K-Q+Q')\,,\nonumber\\
\Gamma_{F\rho}&=&-\!\sum_{K,Q,Q'}\bar h_\rho(K)\;\rho(K)\,[\psi^{\dagger}(Q)\psi(Q')]\;\delta(K-Q+Q')\,,\nonumber\\
\Gamma_{Fs}&=&-\!\sum_{K,Q,Q'}\bar h_s(K)\,\left(s^*(K)\,[\psi^{T}(Q)\epsilon\psi(Q')]\right.\\
&&\hspace{1.5cm}\left.-s(K)\,[\psi^{\dagger}(Q)\epsilon\psi^*(Q')]\right)\;\delta(K-Q-Q')\,,\nonumber\\
\Gamma_{Fd}&=&-\!\sum_{K,Q,Q'}\bar h_d(K)f_d\,\left((Q-Q')/2\right)\left(d^*(K)\,[\psi^{T}(Q)\epsilon\psi(Q')]\right.\nonumber\\
&&\hspace{1.5cm}\left.-d(K)\,[\psi^{\dagger}(Q)\epsilon\psi^*(Q')]\right)\;\delta(K-Q-Q')\,.\nonumber
\end{eqnarray}
Note the presence of the $d$-wave form factor in the second-to-last line. To determine the $k$-dependence of the Yukawa couplings $\bar h_m, \bar h_\rho, \bar h_s, \bar h_d$ is a central task within our approach.
The purely bosonic parts of the effective action are characterized by the bosonic propagators. For the magnetic boson, for instance, the inverse propagator is given by $\tilde P_{m}(Q)\equiv P_{m}(Q)+\bar m_m^2$, where $\bar m_m^2$ is its minimal value and $P_{m}(Q)$ is the (strictly positive) so-called kinetic term. The contributions to the effective average action where the bosonic propagators appear are
\begin{eqnarray}
\Gamma_{m}&=&\frac{1}{2}\sum_{Q}\mathbf{m}^{T}(-Q)\left(P_{m}(Q)+\bar m_m^2\right)\mathbf{m}(Q)\,,\label{m_masse}\\
\Gamma_{\rho}&=&\frac{1}{2}\sum_{Q}\rho(-Q)\left(P_{\rho}(Q)+\bar m_\rho^2\right)\rho(Q)\,,\\
\Gamma_{s}&=&\sum_{Q}s^*(Q)\left(P_{s}(Q)+\bar m_s^2\right)s(Q)\,,\\
\Gamma_{d}&=&\sum_{Q}d^*(Q)\left(P_{d}(Q)+\bar m_d^2\right)d(Q)\,.
\end{eqnarray}
Our parametrization of the frequency and momentum dependence of the bosonic propagators and the Yukawa couplings is described in Appendix B. In contrast to the decomposition of the fermionic four-point vertex proposed in \cite{husemann}, our bosonic propagators exhibit an explicit frequency dependence.
In the present paper, the purely bosonic parts of the flowing action are confined to the bosonic propagators. Higher-order purely bosonic interactions are currently being investigated and will be included in forthcoming work.
\section{Initial Conditions and Regulators}
At the microscopic scale $k=\Lambda$ the flowing action must be equivalent to the microscopic action of the Hubbard model, so the initial value of the four-fermion coupling must correspond to the Hubbard interaction $U$. The bosonic fields decouple completely at this scale, so the initial values of the Yukawa couplings are
\begin{equation}
\bar h_m|_\Lambda=\bar h_\rho|_\Lambda=\bar h_s|_\Lambda=\bar h_d|_\Lambda=0\,.
\end{equation}
The purely bosonic part of the effective action at the initial scale is set to
\begin{eqnarray}\label{eq:initialcond}
\Gamma_{m}|_{\Lambda}=\mathbf m^T\cdot\mathbf m\,,\quad \Gamma_{\rho}|_{\Lambda}=\rho^T\rho\,,\\
\Gamma_{s}|_{\Lambda}=s^*s\,,\quad \Gamma_{d}|_{\Lambda}=d^*d\,.\nonumber
\end{eqnarray}
In other words, we take $\bar m_{i,\Lambda}^2=t^2$ and then use units $t=1$ and $P_{i,\Lambda}=0$. The choice $\bar m_{i,\Lambda}^2=t^2$ amounts to an arbitrary choice for the normalization of the bosonic fields, which are introduced as redundant auxiliary fields at the scale $\Lambda$, where they do not couple to the electrons. Of course, this changes during the flow, where the bosons are transformed into dynamical composite degrees of freedom, with nonzero Yukawa couplings and a nontrivial momentum dependence of their propagators.
In addition to the truncation of the effective average action, regulator functions for both fermions and bosons have to be specified. We use ``optimized cutoffs'' \cite{litim1,litim2} for both fermions and bosons. The regulator function for fermions is given by
\begin{eqnarray}
R^F_k(Q)=\rm{sgn}(\xi(\mathbf q))\left(k-|\xi(\mathbf q)|\right)\Theta(k-|\xi(\mathbf q)|)\,,
\end{eqnarray}
the regulator functions for the real bosons are given by
\begin{eqnarray}\label{regulator}
R^{m/\rho}_k(Q)=A_{m/\rho}\cdot(k^2-F_{c/i}(\mathbf q,\hat q))\Theta(k^2-F_{c/i}(\mathbf q,\hat q))
\,\end{eqnarray}
allowing for an incommensurability $\hat q$ with $F_{c/i}$ as defined in Appendix B. Regulator functions for the Cooper-pair bosons are of the same form, but no incommensurability needs to be accounted for in these cases.
\section{Flow equations}
The flow equations for the couplings follow from projection of the flow equation for the flowing action onto the different monomials of the fields. The right-hand sides of these flow equations are given by the one-particle-irreducible diagrams having an appropriate number of external lines, including a scale derivative $\tilde \partial_k$ acting only on the IR regulator $R_k$. Diagrams contributing to the flow of boson propagators are shown in Fig. \ref{bosepropkorr}.
\begin{figure}[t]
\includegraphics[width=45mm,angle=0.]{mitSimon_Bosepropkorr.eps}
\caption{\small{1PI diagrams contributing to the flow of bosonic propagators. Wiggly lines denote real bosons (particle-hole channels), dashed lines complex bosons (Cooper pair channels).}}
\label{bosepropkorr}
\end{figure}
Once some bosonic mass term $\bar m_i^2$ changes sign from positive to negative during the flow, this signals the divergence of the four-fermion vertex function in the corresponding channel. A negative mass term indicates local order, since at a given coarse-graining scale $k$ the effective average action evaluated at constant field has a minimum for a nonzero value of the boson field. The largest temperature where at fixed values of $U,t',\mu$ one of the mass terms $\bar m_i^2$ changes sign during the flow is called the pseudocritical temperature $T_{pc}$. It can also be described as the largest temperature where short-range order sets in. If this order persists for $k$ reaching a macroscopic scale, the model exhibits effectively spontaneous symmetry breaking, associated in our model with (either commensurate or incommensurate) antiferromagnetism or $d$-wave superconductivity. The largest temperature for which local order persists up to some $k$ corresponding to the inverse size of a macroscopic sample is the true critical temperature $T_c$. In this paper we focus on the symmetric regime where we have a positive mass term and stop the flow once a mass term reaches zero. We plan to address the symmetry-broken regimes in a future work.
\begin{figure}[t]
\includegraphics[width=50mm,angle=0.]{mitSimon_Yukawacomplexreal.eps}\vspace{0.5cm}
\includegraphics[width=65mm,angle=0.]{mitSimon_Yukawadirect.eps}
\caption{\small{1PI diagrams which directly contribute to the flow of the Yukawa couplings.}}
\label{yukdir}
\end{figure}
The flow equations for the Yukawa couplings consist of a direct contribution and an ``indirect'' contribution resulting from flowing bosonization, see Appendix A. Diagrams contributing directly to the flow of the Yukawa couplings are shown in Fig. \ref{yukdir}, those that contribute via flowing bosonization are displayed in Fig. \ref{ff}. Since we choose to distribute contributions from flowing bosonization only onto the Yukawa couplings and not onto the masses, it is crucial to include a momentum dependence of the Yukawa coupling $\bar h_m$ in the magnetic channel in order to account for the emergence of the $d$-wave superconducting instability. Otherwise the contribution of the particle-particle box diagram (the first in the lower line of Fig. \ref{ff}) to the $d$-wave coupling would be underestimated.
\begin{figure}[t]
\includegraphics[width=40mm,angle=0.]{mitSimon_femfourpoint.eps}\vspace{0.5cm}
\includegraphics[width=60mm,angle=0.]{mitSimon_fourfermionU.eps}\vspace{0.5cm}
\includegraphics[width=80mm,angle=0.]{mitSimon_Boxes.eps}
\caption{\small{1PI diagrams contributing to the flow of the Yukawa couplings via flowing bosonization.}}
\label{ff}
\end{figure}
In order to demonstrate how the contributions to the four-fermion vertex are taken into account via flowing bosonization, we discuss the case of the purely fermionic loop diagrams shown in the upper line of Fig. \ref{ff}. As long as no scale dependence due to the regulator function has been introduced, they are given by
\begin{eqnarray}
\Delta\Gamma^F_F&=&-\frac{U^2}{2}\sum_{K_1,K_2,K_3,K_4}\sum_P \\
&&\hspace{-0.6cm}\left( \frac{1}{P_F(P)P_F(P+K_2-K_3)} + \frac{1}{P_F(P)P_F(-P+K_1+K_3)}\right) \nonumber\\
&&\hspace{-0.6cm}\delta\left( K_1-K_2+K_3-K_4 \right)\big\lbrack \psi^\dagger(K_1)\psi(K_2) \big\rbrack\cdot\big\lbrack \psi^\dagger(K_3)\psi(K_4) \big\rbrack\,.\nonumber
\end{eqnarray}
In order to obtain the resulting contribution to the fermionic four-point vertex function $\Delta\Gamma^{F\,(4)}_F$, we have to take the fourth functional derivative of $\Delta\Gamma^F_F$ with respect to the fermionic fields. It is given by
\begin{eqnarray}
&&\Delta\Gamma^{F\,(4)}_F(K_1,K_2,K_3,K_4)=\label{oneloop}\\
&&\frac{1}{4}\frac{\delta^4}{\delta\psi^*_\alpha(K_1)\delta\psi_\beta(K_2)\delta\psi^*_\gamma(K_3)\delta\psi_\delta(K_4)}\Delta\Gamma_F^F \nonumber\\
&&=-\frac{U^2}{4}\sum_P\Big\lbrace \frac{4\,S_{\alpha\gamma;\beta\delta}}{P_F(P)P_F(-P+K_1+K_3)}\nonumber\\
&&- \frac{\delta_{\alpha\delta}\delta_{\gamma\beta}}{P_F(P)P_F(P+K_2-K_1)}
+ \frac{\delta_{\alpha\beta}\delta_{\gamma\delta}}{P_F(P)P_F(P+K_2-K_3)}
\Big\rbrace\nonumber\,,
\end{eqnarray}
where $S_{\alpha\gamma;\beta\delta}=\frac{1}{2}\left( \delta_{\alpha\beta}\delta_{\gamma\delta} - \delta_{\alpha\delta}\delta_{\gamma\beta} \right)$ denotes the singlet projection. The last two lines of Eq. \eqref{oneloop} can be compared to the fourth derivative with respect to the fields of the right hand sides of Eqs. \eqref{aform}\,-\,\eqref{dform}. This allows one to obtain the loop corrections to the four-fermion couplings $\lambda_F^m,\;\lambda_F^\rho,\;\lambda_F^s,\;\lambda_F^d$ introduced there. The second-to-last line of Eq. \eqref{oneloop} can be absorbed by the $s$-boson, the last line by the $\mathbf m$- and $\rho$-bosons. No contribution to the $d$-boson arises at this stage.
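The distribution of this last line onto the bosonic channels below rests on the SU(2) completeness relation $\delta_{\alpha\delta}\delta_{\gamma\beta}=\frac{1}{2}\left( \delta_{\alpha\beta}\delta_{\gamma\delta}+\sigma^j_{\alpha\beta}\sigma^j_{\gamma\delta} \right)$. As a quick numerical sanity check (not part of the derivation), the identity can be verified with explicit Pauli matrices:

```python
import numpy as np

# Pauli matrices sigma^j, j = 1, 2, 3
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])
delta = np.eye(2)

# Left-hand side: delta_{alpha delta} delta_{gamma beta}
lhs = np.einsum('ad,gb->abgd', delta, delta)

# Right-hand side: (1/2)(delta_{alpha beta} delta_{gamma delta}
#                        + sigma^j_{alpha beta} sigma^j_{gamma delta})
rhs = 0.5 * (np.einsum('ab,gd->abgd', delta, delta)
             + np.einsum('jab,jgd->abgd', sigma, sigma))

assert np.allclose(lhs, rhs)
```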
To determine how the last line of Eq. \eqref{oneloop} should be distributed onto the $\mathbf m$- and $\rho$-bosonic channels, we use the identity $\delta_{\alpha\delta}\delta_{\gamma\beta}=\frac{1}{2}\left( \delta_{\alpha\beta}\delta_{\gamma\delta}+\sigma^j_{\alpha\beta}\sigma^j_{\gamma\delta} \right)$\,. All terms now have the same structure as those appearing in the fourth functional derivative of Eq. \eqref{aform}. We obtain the following loop contributions to $\lambda_F^m,\;\lambda_F^\rho,\;\lambda_F^s$:
\begin{eqnarray}
\left(\Delta\lambda_F^m\right)^F(K_1-K_2)=-\frac{U^2}{2}\sum_P\frac{1}{P_F(P)P_F(P+K_2-K_1)}\,,\nonumber\\
\left(\Delta\lambda_F^\rho\right)^F(K_1-K_2)=-\frac{U^2}{2}\sum_P\frac{1}{P_F(P)P_F(P+K_2-K_1)}\,,\label{simpleloop}\\
\left(\Delta\lambda_F^s\right)^F(K_1+K_3)=\frac{U^2}{2}\sum_P\frac{1}{P_F(P)P_F(-P+K_1+K_3)}.\nonumber
\end{eqnarray}
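To give a feeling for the relative size of these contributions, the Matsubara sums in Eq. \eqref{simpleloop} can be evaluated in closed form for a free dispersion. The sketch below assumes a nearest-neighbor tight-binding band $\epsilon_{\mathbf p}=-2t(\cos p_x+\cos p_y)$ at half filling ($t'=\mu=0$), where perfect nesting gives $\epsilon_{\mathbf p+\mathbf\Pi}=-\epsilon_{\mathbf p}$; the values of $U$ and $T$ are purely illustrative:

```python
import numpy as np

t, U, T = 1.0, 3.0, 0.3        # hopping, interaction, temperature (illustrative)
beta = 1.0 / T
N = 64                          # linear size of the momentum grid
k = 2.0 * np.pi * np.arange(N) / N
kx, ky = np.meshgrid(k, k)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))    # half filling: t' = mu = 0

# Closed-form Matsubara sums  T sum_n [ (i w_n - e1)(i w_n - e2) ]^{-1}:
#   zero transfer (e2 = e1):              f'(e) = -beta / (4 cosh^2(beta e / 2))
#   transfer Pi (e2 = -e1, nesting):      -tanh(beta e / 2) / (2 e)
x = 0.5 * beta * eps
safe = np.where(np.abs(eps) > 1e-12, eps, 1.0)
bubble_Pi = np.where(np.abs(eps) > 1e-12,
                     -np.tanh(x) / (2.0 * safe),
                     -beta / 4.0)             # limit e -> 0 of -tanh/(2e)
bubble_0 = -beta / (4.0 * np.cosh(x) ** 2)

# Loop contributions as in Eq. (simpleloop): Delta lambda = -(U^2/2) sum_P ...
dlam_Pi = -(U ** 2 / 2.0) * bubble_Pi.mean()
dlam_0 = -(U ** 2 / 2.0) * bubble_0.mean()
print(dlam_Pi, dlam_0)   # both positive, with the (pi,pi) channel larger
```

Since $\tanh(y)/y\geq \mathrm{sech}^2(y)$ pointwise, the contribution at transfer momentum $\Pi$ always exceeds the one at zero transfer at half filling, in line with the dominance of the antiferromagnetic channel.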
The $k$ dependence of $\lambda_F^m,\;\lambda_F^\rho,\;\lambda_F^s$ is obtained from the one-loop expressions \eqref{simpleloop} by adding the infrared cutoff $R_k^F$ to the inverse fermionic propagator and by applying the formal derivative $\tilde\partial_k=(\partial_kR_k^F)\partial/\partial R_k^F$ under the summation. For $\lambda_F^m$, for example, one obtains
\begin{equation}
\partial_k\lambda_F^m(Q)=\tilde\partial_k\Delta\lambda_F^m(Q)\,, \label{looptoflow}
\end{equation}
where the formal derivative $\tilde\partial_k$ should be read as acting under the loop summation of terms contributing to $\Delta\lambda_F^m(Q)$. Note that $(\Delta\lambda_F^m)^F(Q)$ is only part of the complete loop contribution $\Delta\lambda_F^m(Q)$, namely, the one which arises from the two diagrams shown in the first line of Fig. \ref{ff}. The complete $\Delta\lambda_F^m(Q)$ is obtained if the diagrams shown in Fig. \ref{ff} are all taken together.
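The mechanics of Eq. \eqref{looptoflow} can be illustrated with a one-dimensional toy integral in place of the fermionic loop. Assuming a Litim-type regulator $R_k(p)=(k^2-p^2)\theta(k^2-p^2)$ added to a toy inverse propagator $p^2+m^2$ (not the fermionic $P_F$ of the text), the derivative $\tilde\partial_k$, which acts only through $R_k$, reproduces the full $k$-derivative of the regulated loop, since $k$ enters only via the regulator:

```python
import numpy as np
from scipy.integrate import quad

m2, k, Lam = 0.5, 1.0, 10.0     # toy mass^2, RG scale, UV cutoff

def bubble(kk):
    """B(k) = int_0^Lam dp [p^2 + R_k(p) + m2]^{-2}; with the Litim
    regulator, p^2 + R_k(p) = max(p^2, k^2)."""
    integrand = lambda p: 1.0 / (max(p ** 2, kk ** 2) + m2) ** 2
    val, _ = quad(integrand, 0.0, Lam, points=[kk])
    return val

# tilde-derivative: differentiate only through R_k under the integral;
# d_k R_k = 2k for p < k, and the regulated propagator is flat there:
tilde, _ = quad(lambda p: 2.0 * k * (-2.0) / (k ** 2 + m2) ** 3, 0.0, k)

h = 1e-4
fd = (bubble(k + h) - bubble(k - h)) / (2.0 * h)  # brute-force d_k B(k)
exact = -4.0 * k ** 2 / (k ** 2 + m2) ** 3        # analytic result

print(tilde, fd, exact)
```

All three numbers agree: for this regulator the flow integrand is supported only on $p<k$, where the regulated propagator is constant.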
\begin{figure}[t]
\includegraphics[width=80mm,angle=0.]{mitSimon_fermloop_label.eps}
\caption{\small{Schematic picture of the bosonization of loop contributions to the four-fermion vertex. The terms indicated by the three dots correspond to loop diagrams having internal bosonic lines.}}
\label{bosonization_label}
\end{figure}
In our partially bosonized approach, the fermion loop contributions to the momentum-dependent four-fermion vertex in Eq. \eqref{simpleloop} are fully accounted for by the exchange of the bosons $\mathbf m,\rho$ and $s$. This is shown schematically in Fig. \ref{bosonization_label}. In the language of boson exchange, the momentum dependence of the coupling in, for instance, the magnetic channel can be taken into account by the momentum dependence of the expression $\bar h_m^2(K_1-K_2)\tilde P_m^{-1}(K_1-K_2)$. In practice, we keep $\lambda_F^m=0$ during the flow and account for the loop-generated $\Delta\lambda_F^m$ by a corresponding change $\Delta\bar h_m^2$. We note that only the combination $\bar h_m^2\tilde P_m^{-1}$ appears in the computations as long as the only momentum dependence of the Yukawa couplings is that of the boson momentum. In fact, by a momentum-dependent rescaling of the fields for the $\mathbf m$-boson it is in principle possible to arbitrarily attribute parts of the momentum dependence to $\bar h_m^2$ or to $\tilde P_m$. Nevertheless, introducing the two factors $\bar h_m^2$ and $\tilde P_m^{-1}$ instead of only $\lambda_F^m$ is useful if one wants to approach spontaneous symmetry breaking. It has the advantage that instead of having to deal with the divergent coupling $\lambda_F^m$, one only needs to account for the mass term changing its sign. The term containing $\bar m_m^2$ in Eq. \eqref{m_masse}, which is quadratic in the boson field, becomes part of the effective potential for the magnetic boson in a more extended truncation. Our description of the $k$-dependent flow of $\lambda_F^m$ by means of the $k$-dependent quantities $\bar h_m$ and $\tilde P_m$ (and analogously for $\lambda_F^\rho$ and $\lambda_F^s$) is achieved formally by a $k$-dependent nonlinear field redefinition, see Appendix A, Eq. \eqref{newfield}.
At momentum $Q=0$, for example, the contribution to the flow of the momentum-dependent Yukawa couplings due to the diagrams in the first line of Fig. \ref{ff}, according to Eqs. \eqref{rebosrho}, \eqref{rebosm}, is given by
\begin{eqnarray}
\left(\partial_k \bar h_{m/\rho}^2(0)\right)^F&=&-\frac{U^2}{2}\tilde P_{m/\rho}(0)\sum_P\tilde\partial_k\frac{1}{P_F(P)P_F(P)}\,,\label{highscales1}\\
\left(\partial_k \bar h_{s}^2(0)\right)^F&=&\frac{U^2}{2}\tilde P_{s}(0)\sum_P\tilde\partial_k\frac{1}{P_F(P)P_F(-P)}\,.\label{highscales2}
\end{eqnarray}
At this level, we have described the exact one-loop perturbative result for the momentum-dependent four-fermion vertex in terms of boson exchange. The concept of the flowing action, however, allows for a ``renormalization group improvement'' which is obtained by $k$-dependent ``running couplings'' or vertices. In the purely fermionic flows \cite{zanchi1,zanchi2,halbothmetzner,halbothmetzner2,honisalmi,salmhofer,honerkamp01,katanin} the constant coupling $U$ would be replaced by the full momentum- and $k$-dependent four-fermion vertex. In our partially bosonized approach, where we keep only a constant four-fermion coupling $U$, this renormalization group improvement is generated by the diagrams involving internal bosonic lines, shown in Fig. \ref{yukdir} and the second and third lines of Fig. \ref{ff}. It is at this level that our truncation for the momentum dependence of the Yukawa couplings and inverse boson propagators as well as the restriction to a certain number of bosons starts to matter.
The momentum dependence of the four-fermion vertex which is generated by boson exchange is much more complicated than the simple form \eqref{simpleloop}. We therefore have to decide how to distribute these contributions onto the different boson exchange channels. To this end, we adopt an approximation where the momentum-dependence of the four-fermion couplings $\lambda_F^m,\;\lambda_F^\rho,\;\lambda_F^s,\;\lambda_F^d$ can be identified with the dependence of the diagrams in Figs. \ref{yukdir} and \ref{ff} on the so-called transfer momentum. This momentum is defined as the difference between the momenta attached to the two fermionic propagators in each diagram. Particle-hole diagrams are absorbed by the real bosons and particle-particle diagrams by the complex Cooper pair bosons. All diagrams are evaluated at external momenta $L=(\pi T,\pi,0)$ and $L'=(\pi T,0,\pi)$ and transfer momenta $0=(0,0,0)$ and $\Pi=(0,\pi,\pi)$. For small values of $|\mu|$ and $|t'|$, the (spatial parts of) momenta $L$ and $L'$ are close to the Fermi surface and the density of states is rather large there, so that this choice will capture the relevant physics for not too large $|\mu|$ and $|t'|$. Where more than one combination of external momenta $\pm L$ and $\pm L'$ is compatible with the condition that the transfer momentum is either $0$ or $\Pi$, we take the average over them. For the coupling in the $d$-wave channel, the evaluation of the contributing diagrams is discussed in more detail in the next section.
While the contributions to the Yukawa couplings in Eqs. \eqref{highscales1} and \eqref{highscales2} are proportional to $U^2$ and therefore present already for large $k$, the diagrams shown in Fig. \ref{yukdir} and in the second and third lines of Fig. \ref{ff} start to have an influence on the flow of the Yukawa couplings only after nonzero Yukawa couplings have been generated due to Eqs. \eqref{highscales1} and \eqref{highscales2} in the first place. In perturbation theory, they would correspond to higher order effects $\sim U^3$ and $U^4$. (Perturbatively, every Yukawa coupling counts as $U$.) The flow of the couplings in the magnetic and charge density channels starts to differ as soon as the diagrams shown in the first line of Fig. \ref{yukdir} become important. They contribute positively to the coupling in the magnetic channel but negatively to the couplings in the charge density and superconducting $s$-wave channels. This explains why among the three Yukawa couplings $\bar h_m\,,\bar h_\rho\,,\bar h_s$ the dominating one is $\bar h_m$, although in accordance with Eqs. \eqref{highscales1} and \eqref{highscales2} all three are generated with equal size at early stages of the flow. Due to the comparatively large Yukawa coupling $\bar h_m$ the mass term $\bar m_m^2$ is driven fastest toward zero by the diagrams in Fig. \ref{bosepropkorr}. We can therefore understand why the charge density and $s$-wave superconducting channels never become critical in the range of parameters investigated.
\section{Coupling in the $d$-wave channel}
The generation of a coupling in the $d$-wave channel in the framework used here has already been discussed in an earlier work.\cite{krahlmuellerwetterich} The $d$-wave Yukawa coupling $\bar h_d$ arises during the renormalization flow due to the first diagram in the lower line of Fig. \ref{ff}, which is the only particle-particle box graph. The coupling is extracted from contributions due to this graph by means of the prescription
\begin{eqnarray}\label{eq:flowlamnda_d}
\Delta\lambda_F^d(\mathbf l,\mathbf l')
&=&\frac{1}{2}\big\{\Delta\Gamma^{(4),pp}_{F,s}(L,L,-L,-L)
\\
& &\hspace{0.4cm}
-\Delta\Gamma^{(4),pp}_{F,s}(L,L',-L,-L')\big\}\nonumber\,,
\end{eqnarray}
where the subscript $s$ denotes the singlet and the superscript $pp$ the particle-particle part of the four-point vertex. The momentum vectors $L$ and $L'$ are defined as in the previous section. For a motivation of this definition of the $d$-wave coupling see \cite{krahlmuellerwetterich}. The contribution from the particle-particle box diagram to the $s$-wave superconducting channel is obtained by adding, instead of subtracting, the two terms on the right-hand side of Eq. \eqref{eq:flowlamnda_d}. The $s$- and $d$-wave superconducting channels of the four-fermion coupling can be described as those parts of its singlet particle-particle contribution which are symmetric ($s$-wave) and antisymmetric ($d$-wave) under a rotation by $90^\circ$ of the outgoing electrons with respect to the incoming electrons. In our approximation, the first diagram in the second line of Fig. \ref{ff} contributes only to the $s$-wave channel.
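The effect of this prescription is transparent for a separable model vertex. Assuming, purely for illustration, a singlet particle-particle vertex of the form $\Gamma^{pp}_s = a + b\,d(\mathbf k)d(\mathbf k')$ with $d(\mathbf k)=\cos k_x-\cos k_y$, the antisymmetric combination extracts the $d$-wave amplitude $b$, while the symmetric combination extracts the $s$-wave part $a$:

```python
import numpy as np

def d(kvec):                      # d-wave form factor cos kx - cos ky
    return np.cos(kvec[0]) - np.cos(kvec[1])

a, b = 0.7, 0.25                  # illustrative s- and d-wave amplitudes

def gamma_pp(kvec, kpvec):        # toy separable singlet pp vertex
    return a + b * d(kvec) * d(kpvec)

L = np.array([np.pi, 0.0])        # spatial part of L  = (pi T, pi, 0)
Lp = np.array([0.0, np.pi])       # spatial part of L' = (pi T, 0, pi)

lam_d = 0.5 * (gamma_pp(L, L) - gamma_pp(L, Lp))
lam_s = 0.5 * (gamma_pp(L, L) + gamma_pp(L, Lp))

# d(L) = -2 and d(L') = +2, so the prescription yields 4b and a:
assert np.isclose(lam_d, 4 * b) and np.isclose(lam_s, a)
```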
Once a coupling in the $d$-wave channel has been generated through the particle-particle box diagram, it is further enhanced due to the direct contribution shown as the first graph in the second line of Fig. \ref{yukdir}. Since this graph, which is itself proportional to the Yukawa coupling $\bar h_d$, contributes positively to the flow of $\bar h_d$, it can lead to a growth of this coupling without bounds, i.e., to an instability in the $d$-wave channel. This instability will be the result of antiferromagnetic spin fluctuations (corresponding to the wiggly internal line of the diagram mentioned), so that our finding of a $d$-wave instability through this contribution supports the idea, proposed and defended in \cite{miyake,loh,bickers,lee,millis,monthoux,scalapino}, that antiferromagnetic spin fluctuations are responsible for $d$-wave superconductivity in the two-dimensional Hubbard model (and maybe also in the cuprates insofar as the Hubbard model serves as a guide to the relevant cuprate physics).
That the particle-particle graph in the second line of Fig. \ref{yukdir} is crucial for the emergence of a $d$-wave instability arising from antiferromagnetic fluctuations is mirrored by the fact that this diagram has the same momentum structure as the BCS gap equation. In the presence of an interaction which in momentum space is maximal around the $(\pi,\pi)$-points---a condition which is fulfilled when antiferromagnetic spin fluctuations dominate---the gap solving this equation exhibits $d$-wave symmetry.
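This momentum-space argument can be made quantitative in a schematic static model. The sketch below (all parameters illustrative, and not the flow equations of the text) builds a repulsive interaction peaked at $(\pi,\pi)$ together with a smeared pairing bubble from the half-filled tight-binding band, and evaluates the quadratic form of the linearized BCS kernel $-V(\mathbf k-\mathbf k')$ for a $d$-wave and an $s$-wave trial gap:

```python
import numpy as np

N, t, delta, c = 24, 1.0, 0.3, 5.0   # grid size, hopping, smearing, peak sharpness
k = 2.0 * np.pi * np.arange(N) / N
KX, KY = np.meshgrid(k, k)
kx, ky = KX.ravel(), KY.ravel()

eps = -2.0 * t * (np.cos(kx) + np.cos(ky))           # half-filled band
chi = 1.0 / (2.0 * np.sqrt(eps ** 2 + delta ** 2))   # smeared pairing bubble

# Repulsive interaction, maximal at transfer momentum (pi, pi):
qx = kx[:, None] - kx[None, :]
qy = ky[:, None] - ky[None, :]
V = 1.0 / (1.0 + c * (2.0 + np.cos(qx) + np.cos(qy)))

def gain(form):
    """Quadratic form of the linearized gap kernel -V(k - k')."""
    w = np.sqrt(chi) * form
    w = w / np.linalg.norm(w)
    return -w @ V @ w

g_d = gain(np.cos(kx) - np.cos(ky))   # changes sign under k -> k + (pi, pi)
g_s = gain(np.ones_like(kx))          # nodeless trial gap

print(g_d, g_s)   # g_d > 0 (pairing attraction), g_s < 0
```

Because the $d$-wave form factor changes sign under $\mathbf k\to\mathbf k+(\pi,\pi)$, the repulsive peak at that transfer momentum produces a positive pairing gain for it, while the nodeless $s$-wave trial gap always yields a negative one.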
\section{Numerical results}
We now come to the discussion of the numerical results we have obtained at small next-to-nearest-neighbor hopping $|t'|/t\leq0.1$. For values of $|t'|$ and $|\mu|$ which are larger than those for which we show results in Fig. \ref{phasediag}, there is, in addition to the tendency to antiferromagnetism which is present already at large scales $k\gg0$, a tendency toward ferromagnetism which becomes important at lower scales. Due to our simple parametrization of the inverse magnetic propagator $\tilde P_m$, see Eqs. \eqref{eq:apropparam1}\,-\,\eqref{eq:apropparaminkomm} in Appendix B, and due to our choice of external momenta for evaluating the diagrams shown in Figs. \ref{yukdir} and \ref{ff}, we may overestimate magnetic fluctuations at larger values of $|\mu|$ and $|t'|$. Hence, we do not show any results for these larger values.
In the range of parameters investigated, we find that either antiferromagnetism or $d$-wave superconductivity is the leading instability. In agreement with previous findings, the coupling in the $d$-wave channel emerges due to antiferromagnetic fluctuations. In the parameter regime where this coupling is enhanced most strongly it competes with (and is driven by) the coupling in the \textit{incommensurate} antiferromagnetic (iAF) channel which was studied in detail in Ref. \onlinecite{simon}, where the same framework was used as here.
In the upper panels of Figs. \ref{AFdom} and \ref{SCdom}, the flow of the different channels of the fermionic four-point vertex is displayed at fixed $t'/t=-0.01$ and different values of $U/t=2.5\,,3\,,3.5$. The lower panels show the flow of the corresponding bosonic masses, which approximate unrenormalized inverse susceptibilities in channels which are close to critical. As expected, the antiferromagnetic coupling grows fastest and remains the dominant one for small to intermediate values of $|\mu|$; for a representative case, see Fig. \ref{AFdom}. For $\mu/t<-0.28$, however, the $d$-wave coupling diverges at higher temperatures than the antiferromagnetic coupling; for an example of this kind of scenario, see Fig. \ref{SCdom}. The couplings in the charge density wave and superconducting $s$-wave channels are also considerably enhanced in both cases, and their influence is quantitatively important although they do not diverge.
\begin{figure}[t]
\includegraphics[width=70mm,angle=0.]{Ats001mu021T0188.eps}
\includegraphics[width=70mm,angle=0.]{Ats001mu021T0188susc.eps}
\caption{\small{Upper panel: flow of the four-fermion vertex in the different channels for $U/t=3$, $t'/t=-0.01$, $\mu/t=-0.12$ and $T/t=0.188$. The shorthands used in the legend are defined as $\lambda_{F,ac}\equiv\bar h_m^2(\Pi)/\tilde P_{m}(\Pi)$, $\lambda_{F,\rho}\equiv\bar h_\rho^2(\Pi)/\tilde P_{\rho}(\Pi)$, $\lambda_{F,s}\equiv\bar h_s^2(0)/\tilde P_s(0)$ and $\lambda_{F,d}\equiv\bar h_d^2(0)/\tilde P_d(0)$. Lower panel: flow of the minima of the inverse bosonic propagators (bosonic mass terms).}}
\label{AFdom}
\end{figure}
\begin{figure}[t]
\includegraphics[width=70mm,angle=0.]{Ats001mu032T0109.eps}
\includegraphics[width=70mm,angle=0.]{Ats001mu032T0109susc.eps}
\caption{\small{Same as Fig. \ref{AFdom} for $\mu/t=-0.32$ and $T/t=0.109$. In addition to the couplings defined in Fig. \ref{AFdom} we plot $\lambda_{F,ai}\equiv\bar h_m^2(\Pi-\hat Q)/\tilde P_{m}(\Pi-\hat Q)$ (coupling in the incommensurate AF channel), where $\hat Q=(0,\hat q,0)$ with $\hat q$ the size of the incommensurability. For these parameters the coupling in the $d$-wave channel diverges first. In the magnetic channel, incommensurate antiferromagnetic fluctuations (long-dashed lines) dominate over commensurate ones (short-dashed lines).}}
\label{SCdom}
\end{figure}
In Fig. \ref{phasediag} the highest temperature at a given value of $\mu$ is plotted for which one of the boson masses drops to zero at some scale $\bar k$, signaling the onset of local order on a typical length scale $\bar k^{-1}$ in the corresponding channel. These ``pseudocritical temperatures'' $T_{pc}$ are shown for $t'/t=-0.01$ (upper panel) and $t'/t=-0.1$ (lower panel) and different values of $U$. Pseudocritical temperatures for antiferromagnetism are higher by a factor of $\approx 3$ than those presented in our last paper.\cite{simon} This is mainly due to the neglect of the fermionic wave function renormalization and of quartic bosonic couplings in the present paper. Both of these would suppress the growth of the four-fermion vertex and hence the emergence of local order. These contributions are omitted here for the sake of a simple and nevertheless systematic approach to the four-fermion vertex. They will be included in a forthcoming work where also the bosonic vertex functions that directly couple together the different types of bosons will be taken into consideration. We recall that often the true critical temperature $T_c$ is found to be substantially smaller than the pseudocritical temperature $T_{pc}$.\cite{bbw04,kw07}
For some pairs of parameters $U$ and $t'$ there exists a range of values of $\mu$ where incommensurate antiferromagnetism has the largest pseudocritical temperature; for others there are no such values of $\mu$, see Fig. \ref{phasediag}. In the range of $\mu$ where the transition from (either commensurate or incommensurate) antiferromagnetic to $d$-wave superconducting order occurs in Fig. \ref{phasediag}, there is an extremely close competition between the couplings in the commensurate and incommensurate antiferromagnetic and $d$-wave superconducting channels. Which of them diverges first may in part depend on the truncation, so the inclusion of fermionic self-energy contributions and quartic bosonic couplings may have an important effect on the size and existence of regions exhibiting local incommensurate antiferromagnetic order.
Most existing studies using the framework of the fermionic functional renormalization group focus on the flow of the four-fermion vertex, as we do in the present work, so we can compare our results with theirs. The most recent results for the phase diagram of the two-dimensional Hubbard model at varying $\mu$ and fixed $t'$, presented in \cite{andrei}, are obtained by means of an $N$-patch scheme restricted to the flow of the four-fermion vertex. That work uses a temperature-flow scheme, where the temperature serves as a flow parameter running from infinity down to a nonzero value at which a first vertex function reaches some critical value. The temperature $T^*$ where the divergence of the four-fermion vertex occurs (it is obtained in \cite{andrei} by means of a polynomial fit of the inverse susceptibilities) may be compared to our pseudocritical temperature $T_{pc}$. Both correspond, albeit in different ways, to the divergence of the four-fermion vertex and the onset of local order. However, whereas in the temperature-flow scheme the temperature appears as a flow parameter, it is kept fixed in our approach.
\begin{figure}[t]
\includegraphics[width=70mm,angle=0.]{Aphasediag001.eps}
\includegraphics[width=70mm,angle=0.]{Aphasediag01.eps}
\caption{\small{Pseudocritical temperatures $T_{pc}$ for $t'/t=-0.01$ (upper panel) and $t'/t=-0.1$ (lower panel) at different values of $U/t=3.5$ (upper lines), $U/t=3$ (middle lines) and $U/t=2.5$ (lower lines). Short-dashed lines denote the onset of commensurate antiferromagnetic order (cAF), long-dashed lines, appearing in small regions at larger values of $-\mu$, the onset of incommensurate antiferromagnetic order (iAF). Solid lines indicate the onset of $d$-wave superconducting order (dSC).}}
\label{phasediag}
\end{figure}
The results in \cite{andrei} are in complete \textit{qualitative} agreement with ours. This concerns, for instance, the dependence of local order on the values of $U$ and $t'$. If $U$ is increased, the divergence of vertex functions (equivalent to the emergence of local order) occurs already at higher temperatures and the divergence of the coupling in the $d$-wave channel, for the small values of $|t'|$ considered, happens closer to the van Hove filling $\mu=4t'$. There is also agreement on the fact that $d$-wave superconductivity, when its coupling is enhanced most strongly, competes mostly with incommensurate rather than commensurate antiferromagnetism (cAF) as the dominant instability.
The quantitative comparison between our results and those in \cite{andrei} has to be handled with some care: for $T\leq T_{pc}$ our flow is stopped at a nonzero scale $k$. For quantities evaluated at $k\neq0$, the detailed implementation of the infrared cutoff has an effect on the results. (This contrasts with results for temperatures above the pseudocritical line in Fig. \ref{phasediag}, where the four-fermion vertex never diverges, such that we can extrapolate to $k=0$. For $k=0$, any residual dependence on the cutoff scheme is an indication of the shortcomings of a given truncation.) Despite this caveat, the comparison remains instructive. We find that for $t'/t=-0.1$ and $U/t=3.5$ the maximal values of $T_{pc}$ and $T^*$ as functions of $\mu$ differ by a factor of about $4/3$, and slightly more for $U/t=2.5$. For the onset of $d$-wave superconducting order, the pseudocritical temperature $T_{pc}$ as a function of $\mu$ shown in Fig. \ref{phasediag} is larger than $T^*_{dSC}$ obtained in \cite{andrei} by a factor of at least $2$ for both $U/t=2.5$ and $U/t=3.5$. The difference gets more pronounced with increasing distance from half filling. As already mentioned, a possible source of quantitative shortcomings of our calculations is that magnetic fluctuations may be overestimated due to our simple parametrization of the momentum dependence of the magnetic boson propagator (see Eqs. \eqref{eq:apropparam1}\,-\,\eqref{eq:apropparaminkomm}). Furthermore, the diagrams shown in Figs. \ref{yukdir} and \ref{ff} are evaluated at $(0,\pi)$ and $(\pi,0)$, which is adequate only for not too large values of $|\mu|$ where the Fermi surface is close to the boundary of the Brillouin zone. When magnetic fluctuations are generally overestimated and antiferromagnetic fluctuations are dominant, the critical scales for the onset of $d$-wave superconducting order and hence the pseudocritical temperatures can be expected to come out too large. 
We recall here that both the present work and Ref.\ \onlinecite{andrei} neglect the renormalization of the fermionic propagator, which is expected to have a sizeable lowering effect on the value of $T_{pc}$.
\section{Conclusion}
In this work we have shown that the functional renormalization group approach to correlated fermion systems based on partial bosonization can account for the competition between the antiferromagnetic and superconducting instabilities in the two-dimensional Hubbard model. We have studied the emergence of a coupling in the $d$-wave channel and its divergence in a certain parameter range as a consequence of antiferromagnetic spin fluctuations. In a nutshell, this result confirms the spin-fluctuation route to $d$-wave superconductivity in the two-dimensional Hubbard model.
Our treatment of the fermionic four-point vertex paves the way for a unified treatment of spontaneous symmetry breaking in the two-dimensional Hubbard model. In a next step, self-energy corrections to the electrons as well as quartic bosonic couplings can be included in our approach. This may shed light on the question of coexistence of different types of order, which so far has been addressed in the framework of the functional renormalization group only on the basis of a mean field approach replacing the flow of vertex functions at lower scales.\cite{metznerreissrohe,reiss} In a final step, a unified treatment of the flow of vertex functions in both the symmetric and symmetry-broken regimes can be given within the present approach.
{\bf Acknowledgments}: We are grateful to C. Husemann, A. Katanin and M. Salmhofer for useful discussions. SF acknowledges support by Studienstiftung des Deutschen Volkes.
\section{Introduction}
In the past few years the application of quantum information concepts to some
long-standing problems has led to a deeper understanding of those problems
\cite{osterloh2002} and,
as a consequence,
to the formulation of new methods to solve (or calculate) them. For example,
the simulability of many body problems is determined by the amount of
entanglement shared between the spins of the system \cite{schuch2008}.
There are a number of quantities that can be calculated in order to analyze the
information carried by a given state, including the entanglement of formation
\cite{wootters1998},
the fidelity \cite{tano}, several kinds of entropies, entanglement witnesses,
and so on.
Which quantity is more adequate, or accessible, to calculate depends, of
course, on the problem.
In the case of atomic or few-body systems with continuous degrees of freedom a
rather natural quantity is the von Neumann entropy; it has been used to study a
number of problems: the helium-like atom \cite{osenda2007}, \cite{osenda2008},
generation of
entanglement via scattering \cite{schmuser2006}, the dynamical entanglement of
small molecules \cite{liu2008}, and entanglement in Hooke's
atom \cite{coe2008}.
In quantum dots, most quantum information studies focus on the amount of
entanglement carried by its eigenstates \cite{dot-states,abdullah2009}, or in
the
controllability of the
system \cite{control}. Both approaches are driven by the possible use of a
quantum dot as
the physical realization of a qubit\cite{loss1998}. The controllability of the
system is
usually investigated (or performed) between the states with the lowest lying
eigenenergies \cite{imamoglu1999}.
Besides the possible use of quantum dots as quantum information
devices, there are proposals to use them as photodetectors. The proposal is
based on the use of resonance states because of their properties,
in particular their
large scattering cross section compared to those of bound states
\cite{sajeev2008}.
{ The resonance states are slowly decaying scattering states
characterized
by a large but finite lifetime. Resonances are also signaled by sharp,
Lorentzian-type peaks in the scattering matrix. In many cases of interest where
complex scaling (analytic dilatation) techniques can be applied, resonance
energies show up as isolated complex eigenvalues of the rotated Hamiltonian
\cite{reinhardt1996}.} Under this
transformation the bound states remain exactly preserved and the resonance states are
exposed as ${\cal L}^2$ functions of the rotated Hamiltonian. Resonance states
can be observed in two-electron quantum dots \cite{bylicki2005,sajeev2008}
and two-electron atoms \cite{dubau1998}.
Recently, a work by Ferr\'on, Osenda, and Serra \cite{ferron2009} studied
the behavior of the
von Neumann entropy associated with ${\mathcal L}^2$
approximations
of resonance states of two electron quantum dots. In particular that work
was focused on the resonance state that arises when the ground state { loses} its
stability, {\em i.e.} the quantum dot does not have two electron bounded states
any more. { Varying} the parameters of the quantum dot allows the energy to cross the
ionization threshold that separates the region where the two electron ground
state is stable from the region where the quantum dot { loses} one electron.
In reference \cite{ferron2009} it was found that the von Neumann entropy
provides a way to obtain the real part of the energy of the resonance, in other
words, the von Neumann entropy provides a stabilization method. The numerical
approximation used in \cite{ferron2009} allowed one to obtain only a reduced
number of energy levels (in a region where the spectrum is continuous), so
their method provided the real part of the energy of the resonance only for a
discrete set of parameters, and this set could not be chosen {\em a priori}.
Notwithstanding this, Ferr\'on {\em et al.}
conjectured that there
is a well defined function $S(E_r)$, the von Neumann entropy of the resonance
state, which has a well defined value for every value of the real part of the
energy of the resonance, $E_r$.
In this work we will show, if $\lambda$ is the external
parameter that
drives the quantum dot through the ionization threshold, that the
entropy $S(E_r(\lambda))$ is a smooth function of $\lambda$. We also show that
the resonance state entropy calculated by Ferr\'on {\em et al.} is correct near
the ionization threshold.
We have studied other
quantities, besides the entropy, that are good witnesses of the presence of a
resonance.
One of them is the Fidelity, which has been widely used
\cite{zanardi2006,zanardi2007,zanardi2009} in the detection of
non-analytical behavior in the spectrum of quantum systems. The analysis of the
Fidelity
provides a method to obtain the real part of the resonance energy from
variational eigenstates. We introduce the {\em Double
Orthogonality} function (DO) that
measures changes in quantum
states and detects the resonance region. The DO compares the
extended continuum states and the state near the resonance, also providing the
real part of the resonance energy.
The paper is organized as follows. In Section \ref{sec-two} we present the
model and briefly explain the technical details to obtain approximate
eigenvalues,
eigenfunctions, and the density of states for the problem. In Section
\ref{sec-three} the fidelity is used to obtain the
real
part of the resonance energy and the Double Orthogonality is introduced as an
alternative method. In Section~\ref{sec-four} the linear entropy and the
expectation value of the Coulombian repulsion are studied using complex scaling
methods. Finally, in Section~\ref{sec-conclu} we discuss our results and
present our conclusions.
\section{The Model and basic results}
\label{sec-two}
There are many models of quantum dots, with different symmetries and
interactions. In this work we consider a model with spherical symmetry, with
two
electrons interacting via the Coulomb repulsion. The main results should not be
affected by the particular
potential choice as it is already known that the near threshold
behavior and other critical
quantities (such as the critical exponents of the energy and other
observables)
are mostly determined by the range of the involved potentials
\cite{pont_serra_jpa08}. Therefore
to model the dot potential we use a short-range potential suitable to apply
the complex
scaling method. After this considerations we propose the following
Hamiltonian $H$ for the system
\begin{equation}
\label{hamiltoniano}
H = -\frac{\hbar^2}{2m} \nabla_{{\mathbf r}_1}^2
-\frac{\hbar^2}{2m} \nabla_{{\mathbf r}_2}^2 + V(r_1)+V(r_2)+
\frac{e^2}{\left|{\mathbf r}_2-{\mathbf r}_1\right|} ,
\end{equation}
where $V(r)=-(V_0/r_0^2)\, \exp{(-r/r_0)}$, ${\mathbf r}_i$ is the
position operator of electron $i=1,2$, and $r_0$ and $V_0$
determine the range and depth of the dot potential.
After re-scaling with $r_0$, in atomic units the Hamiltonian of Eq.
(\ref{hamiltoniano}) can be written as
\begin{equation}
\label{hamil}
H = -\frac{1}{2} \nabla_{{\mathbf r}_1}^2
-\frac{1}{2} \nabla_{{\mathbf r}_2}^2 -V_0 e^{-r_1}-V_0
e^{-r_2} +
\frac{\lambda}{\left|{\mathbf r}_2-{\mathbf r}_1\right|} ,
\end{equation}
where $\lambda=r_0$.
We choose the exponential binding potential to take advantage of its analytical
properties. In particular, this potential is well behaved and
the energy of the resonance states can be calculated using complex scaling
methods. So, besides its simplicity, the exponential potential allows us to
obtain the energy of the resonance state independently, providing a check on our
results. The threshold energy $\varepsilon$ of the Hamiltonian of
Eq.~(\ref{hamil}), that is, the one-body ground-state energy, can be calculated
exactly~\cite{galindo} and is given by
\begin{equation}
J_{2\sqrt{-2\varepsilon}}\left(2\sqrt{2V_0}\right)=0,
\end{equation}
where $J_{\nu}(x)$ is the Bessel function of the first kind.
The discrete spectrum and the resonance states of the model given by
Eq. (\ref{hamil}) can be obtained approximately
using ${\cal L}^2$
variational functions \cite{bylicki2005,kruppa1999}. So, if
$\left|\psi_j(1,2)\right\rangle$ are the exact eigenfunctions of the
Hamiltonian, we look for variational approximations
\begin{equation}\label{variational-functions}
\left|\psi_j(1,2)\right\rangle \, \simeq\,
\left|\Psi_j(1,2)\right\rangle \, =\,
\sum_{i=1}^M c^{(j)}_{i} \left| \Phi_i
\right\rangle \, ,\;\; c^{(j)}_{i} = (\mathbf{c}^{(j)})_i
\;\;;\;\;j=1,\cdots,M \,.
\end{equation}
\noindent where the $\left| \Phi_i \right\rangle$ must be chosen adequately and $M$ is the
basis set size.
Since we are interested in the behavior of the system near the
ground-state ionization
threshold, we choose as basis set s-wave singlets given by
\begin{equation}\label{basis}
\left| \Phi_i\right\rangle \equiv \left| n_1,n_2;l\right\rangle =
\left( \phi_{n_1}({r}_1) \, \phi_{n_2}({r}_2) \right)_s
\mathcal{Y}_{0,0}^l (\Omega_1,\Omega_2) \, \chi_{s} \, ,
\end{equation}
where $n_2\leq n_1$, $l\leq n_2$, $\chi_{s}$ is the singlet spinor,
and the $\mathcal{Y}_{0,0}^l
(\Omega_1,\Omega_2) $ are given by
\begin{equation}\label{angular-2e}
\mathcal{Y}_{0,0}^l (\Omega_1,\Omega_2)\,=\, \frac{(-1)^l}{\sqrt{2l+1}} \,
\sum_{m=-l}^{l} (-1)^m Y_{l\,m}(\Omega_1) Y_{l\, -m}(\Omega_2) \, ,
\end{equation}
{\em i.e.} they are eigenfunctions of the total angular momentum with zero
eigenvalue and
the $Y_{l\, m}$ are the spherical harmonics. Note also that $\mathcal{Y}_{0,0}^l$
is a real function since it is symmetric in the particle index. The radial term
$(\phi_{n_1}({r}_1) \phi_{n_2}({r}_2))_s$
has the appropriate symmetry for a singlet state,
\begin{equation}\label{radial-sym}
(\phi_{n_1}({r}_1) \phi_{n_2}({r}_2))_s \,=\, \frac{\phi_{n_1}(r_1)
\phi_{n_2}(r_2)+ \phi_{n_1}(r_2) \phi_{n_2}(r_1)}{
\left[ 2\,(1+\langle n_1 | n_2
\rangle^2 ) \right]^{1/2}}
\end{equation}
\noindent where
\begin{equation}\label{int-prod}
\left\langle n_1|n_2 \right\rangle = \int_0^{\infty} r^2 \phi_{n_1}(r)
\phi_{n_2}(r) \, dr
\, \, ,
\end{equation}
\noindent and the $\phi$'s are chosen to satisfy $\langle n_1|n_1\rangle
= 1$. The numerical results are obtained by taking
the Slater type forms for the orbitals
\begin{equation}\label{slater-type}
\phi^{(\alpha)}_{n}({r}) = \left[ \frac{\alpha^{2n+3}}{(2n+2)!}\right]^{1/2} r^n
e^{-\alpha r/2} .
\end{equation}
\noindent where $\alpha$ is a non-linear parameter of the basis. It is clear that,
in terms of the functions defined in Eq. (\ref{basis}), the variational
eigenfunctions read as
\begin{equation}\label{variational-eigen}
\left|\Psi^{(\alpha)}_i(1,2)\right\rangle = \sum_{n_1 n_2 l} c^{(i),(\alpha)}_{n_1 n_2 l}
\left|
n_1, n_2;l;\alpha\right\rangle \, ,
\end{equation}
\noindent where $n_1\geq n_2\geq l \ge 0$; the basis set size is then given by
\begin{equation}
M = \sum_{n_1=0}^N \sum_{n_2=0}^{n_1} \sum_{l=0}^{n_2} 1 \,=\,
\frac{1}{6} (N+1) (N+2) (N+3)\; ,
\end{equation}
so we refer to the basis set size using both $N$ and $M$. In Eq.
(\ref{variational-eigen}) we
added $\alpha$ as a basis index to indicate that in general the
variational eigenfunction is $\alpha$-dependent.
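As a quick numerical check of the counting formula above (an illustrative Python sketch of our own, not part of the original derivation), one can enumerate the admissible triples $(n_1, n_2, l)$ directly:

```python
# Count the basis states |n1, n2; l> with N >= n1 >= n2 >= l >= 0 and
# compare the count against the closed form M = (N+1)(N+2)(N+3)/6.
def basis_size(N):
    return sum(1
               for n1 in range(N + 1)
               for n2 in range(n1 + 1)
               for l in range(n2 + 1))

def closed_form(N):
    return (N + 1) * (N + 2) * (N + 3) // 6

for N in (0, 1, 5, 14):
    assert basis_size(N) == closed_form(N)

print(closed_form(14))  # N = 14, the value used in the figures: M = 680
```

For $N=14$, the basis size used in most of the calculations below, this gives $M=680$.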
The matrix elements of the kinetic energy, the Coulombic repulsion between the
electrons and other mathematical details involving the functions
$ \left| n_1, n_2 ;l;\alpha\right\rangle$ are given in
references~\cite{osenda2008b},
\cite{pablo-variational-approach}. We only show here for completeness
the matrix elements of the exponential potential in the basis of
Eq.~(\ref{slater-type}),
\begin{equation}\label{mat-exp}
\left\langle n\left|\, e^{-r} \,\right| n'\right\rangle
= \int^{\infty}_{0}
\phi_n(r)\phi_{n'}(r)\, e^{-r} \,r^2\,\textrm{d}r =
\left(\frac{\alpha}{1+\alpha}\right)^{n+n'+3}\frac{(2+n+n')!}{\sqrt{
(2n+2)!\, (2n'+2)!}}.
\end{equation}
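The closed form of Eq.~(\ref{mat-exp}) can be verified by brute force. The following Python sketch (an illustration using the normalization of Eq.~(\ref{slater-type}); the helper names are ours) compares a simple quadrature of the integral with the analytical expression:

```python
import math

def phi(n, alpha, r):
    # Slater-type orbital of Eq. (slater-type)
    norm = math.sqrt(alpha ** (2 * n + 3) / math.factorial(2 * n + 2))
    return norm * r ** n * math.exp(-alpha * r / 2)

def me_numeric(n, m, alpha, rmax=60.0, steps=60000):
    # Composite trapezoidal rule for int_0^inf phi_n phi_m e^{-r} r^2 dr
    h = rmax / steps
    total = 0.0
    for i in range(steps + 1):
        r = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * phi(n, alpha, r) * phi(m, alpha, r) * math.exp(-r) * r * r
    return total * h

def me_exact(n, m, alpha):
    # Closed form of Eq. (mat-exp)
    return ((alpha / (1 + alpha)) ** (n + m + 3) * math.factorial(2 + n + m)
            / math.sqrt(math.factorial(2 * n + 2) * math.factorial(2 * m + 2)))

for n, m in [(0, 0), (1, 2), (3, 3)]:
    assert abs(me_numeric(n, m, 2.0) - me_exact(n, m, 2.0)) < 1e-5

print(me_exact(0, 0, 2.0))  # equals 8/27 for alpha = 2
```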
\begin{figure}[ht]
\begin{center}
\psfig{figure=fig1_pont.eps,width=16cm}
\end{center}
\caption{\label{avoidedcross}(color on-line) (a) Behavior
of the
variational eigenvalues $E_j^{(\alpha)}(\lambda)$ (black lines) for $N=14$ and non-linear parameter $\alpha=2$. The red dashed line
corresponds to the threshold energy $\varepsilon\simeq -1.091$.
Note that the avoided crossings between the variational eigenvalues
are clearly visible. (b) The same variational eigenvalues as in
(a) (black lines), together with the energy calculated using the complex scaling method
(green line) for a rotation angle $\theta=\pi/10$. }
\end{figure}
Resonance states have isolated complex eigenvalues, $E_{res}=E_r -i \Gamma/2,\;
\Gamma > 0$, whose eigenfunctions are not square-integrable. These
states can be interpreted as quasi-bound states of
energy $E_r$ and inverse lifetime $\Gamma$. For the Hamiltonian of Eq.
(\ref{hamil}), the resonance energies belong to the interval
$(\varepsilon,0)$ \cite{reinhardt1996}.
The resonance states can be analyzed using the
spectrum obtained with a basis of ${\cal L}^2$ functions (see \cite{ferron2009} and
References therein).
The levels above the threshold have several avoided crossings
that ``surround'' the real part of the
energy of the resonance state.
The presence of a resonance
can be made evident by looking at the eigenvalues
obtained numerically. Figure~\ref{avoidedcross} shows a typical
spectrum obtained from the variational method. This
figure shows the behavior of the variational eigenvalues
$E_j^{(\alpha)}$ as functions of the parameter $\lambda$. The results shown were
obtained using $N=14$ and $\alpha=2.0$. The value of $\alpha$ was chosen in
order to obtain the best approximation for the energy of the ground state in
the region of $\lambda$ where it exists. The figure shows
clearly that for
$\lambda<\lambda_{th}\simeq1.54$ there is only one bound state. Above the
threshold the variational approximation provides a finite number of solutions
with energy below zero, but there is no
clear-cut criterion to choose the value of the variational parameter. However,
it is possible to estimate $E_r(\lambda)$ by calculating $E_j^{(\alpha)}$ for many
different values of the variational parameter (see Kar and Ho \cite{kar2004}).
Figures~\ref{densidad-0-1}(a) and (b) show the numerical results for the first and second
eigenvalues, respectively, for different values of the variational parameter
$\alpha$.
The figure also shows the behavior of the ground-state energy (below the
threshold) and the
real part of the energy of the resonance calculated using complex scaling
(above the threshold); this curve is used as a reference. The behavior of the
lowest variational eigenvalue $E_1^{(\alpha)}(\lambda)$ is rather clear. Below the
threshold, $E_1^{(\alpha)}(\lambda)$ is rather insensitive to the actual value of
$\alpha$; the differences between $E_1^{(\alpha=2)}(\lambda)$ and $E_1^{(\alpha=6)}(\lambda)$ are smaller
than the width of the lines shown in the figure. Above the threshold the
behavior changes: the curve for a given value of $\alpha$ has two well-defined
regions, and in each region $E_1$ is basically a straight line. The two straight
lines have different slopes, and the change in slope is
located around $E_r(\lambda)$.
In the case of $E_2^{(\alpha)}(\lambda)$ there are three regions; in each
one of them the
curve for a given value of $\alpha$ is basically a straight line, with a
different slope in each region. A feature that appears rather clearly is that, for
fixed $\lambda$, the
density of levels per unit energy is not uniform,
even though the
curves $E_j^{(\alpha_i)}(\lambda)$ are drawn for forty equally spaced
$\alpha_i$'s between $\alpha=2.0$ and $\alpha=6.0$. This
fact has been observed previously \cite{mandelshtam1993}, and the density of
states can be
written in terms of two contributions, a localized one and an extended one. The
localized density of states is attributed to the presence of the resonance
state; conversely, the extended density of states is attributed to the continuum
of states in $(\varepsilon, 0)$.
\begin{figure}[ht]
\begin{center}
\psfig{figure=fig2a_pont.eps,width=8cm}
\psfig{figure=fig2b_pont.eps,width=8cm}
\end{center}
\caption{\label{densidad-0-1}(color on-line) (a) The first variational
state energy {\em vs} $\lambda$, for different values of the variational
parameter $\alpha$. From bottom to top $\alpha$ increases its value from
$\alpha=2$ (dashed blue line) to $\alpha=6$ (dashed orange line). The real part
of the resonance eigenvalue obtained using complex
scaling ( $\theta=\pi/10$) is also shown (green line). (b) Same as (a) but for the
second state energy.}
\end{figure}
The localized density of states $\rho(E)$ can be expressed as \cite{kar2004,mandelshtam1993}
\begin{equation}\label{densidad_sin_suma}
\rho(E) = \left|\frac{\partial
E(\alpha)}{\partial \alpha}\right|^{-1} .
\end{equation}
Since we are dealing with a variational approximation, we calculate
\begin{equation}\label{densidad_cal}
\rho(E_j^{(\alpha_i)}(\lambda)) = \left|
\frac{E_j^{(\alpha_{i+1})}(\lambda) -
E_j^{(\alpha_{i-1})}(\lambda)}{\alpha_{i+1} - \alpha_{i-1}}\right|^{-1} .
\end{equation}
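Equation (\ref{densidad_cal}) is a plain central difference. As an illustration (a Python sketch with synthetic data in place of the variational eigenvalues; the function names are ours), a level that develops a plateau in $\alpha$ produces a sharp peak in $\rho(E)$ at the plateau energy:

```python
def local_density(alphas, energies):
    """Central-difference estimate of rho(E) = |dE/dalpha|^{-1},
    Eq. (densidad_cal); alphas and energies are the sampled alpha_i
    and E_j^{(alpha_i)}(lambda). Returns (E, rho) pairs."""
    pairs = []
    for i in range(1, len(alphas) - 1):
        dE = energies[i + 1] - energies[i - 1]
        dA = alphas[i + 1] - alphas[i - 1]
        pairs.append((energies[i], abs(dA / dE)))
    return pairs

# Synthetic level: a plateau at E = -1 (dE/dalpha -> 0 near alpha = 4)
# mimics the stabilization of a variational level at the resonance energy.
alphas = [2.0 + 0.1 * i for i in range(41)]
energies = [-1.0 + 0.01 * (a - 4.0) ** 3 for a in alphas]
E_peak, rho_peak = max(local_density(alphas, energies), key=lambda p: p[1])
assert abs(E_peak + 1.0) < 1e-9   # the peak of rho sits at the plateau energy
```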
Figure~\ref{densidad} shows the typical behavior of $\rho_j(E)\equiv
\rho(E_j^{(\alpha_i)}(\lambda))$ for several
eigenenergies and $\lambda=2.25$. The real and imaginary parts of the
resonance
energy, $E_r(\lambda)$ and $\Gamma$ respectively, can be obtained from
$\rho(E)$; see for example \cite{kar2004} and references therein. This method
provides an independent way to obtain $E_{res}$, besides the method of
complex scaling.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{fig3_pont.eps}
\caption{\label{densidad} (color on-line) The density of states
$\rho(E)$ for $\lambda=2.25$ and basis set size $N=14$. The results were
obtained using Eq. (\ref{densidad_sin_suma}) and correspond to, from top to
bottom, the second (black line), third (dashed red line), fourth (green line)
and fifth (dashed blue line) levels.}
\end{center}
\end{figure}
The values of $E_r(\lambda)$ and $\Gamma(\lambda)$ are obtained by performing
a nonlinear fit of $\rho(E)$ with a Lorentzian function,
\begin{equation}
\rho(E)=\rho_0 + \frac{A}{\pi}\frac{\Gamma/2}{\left[(E-E_r)^2
+(\Gamma/2)^2\right]}.
\end{equation}
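The fit can be done with any nonlinear least-squares routine; for noiseless data the parameters can even be read off directly, since $E_r$ is the peak position and $\Gamma$ the full width at half maximum. A minimal Python sketch (our own helper, applied to synthetic data with values of the order of those in Table~\ref{tabla1}):

```python
import math

def lorentzian_parameters(Es, rhos):
    """Estimate (E_r, Gamma) from sampled rho(E): E_r is the peak position,
    Gamma the full width at half maximum above the background rho_0
    (estimated here as the smallest sample)."""
    rho0 = min(rhos)
    peak = max(range(len(Es)), key=lambda i: rhos[i])
    half = rho0 + 0.5 * (rhos[peak] - rho0)
    lo = peak
    while lo > 0 and rhos[lo] > half:            # walk left to half maximum
        lo -= 1
    hi = peak
    while hi < len(Es) - 1 and rhos[hi] > half:  # walk right to half maximum
        hi += 1
    return Es[peak], Es[hi] - Es[lo]

# Synthetic rho(E) with E_r = -0.745 and Gamma = 0.02
E_r, G, rho0, A = -0.745, 0.02, 1.0, 2.0
Es = [-0.85 + 1e-4 * i for i in range(2001)]
rhos = [rho0 + (A / math.pi) * (G / 2) / ((E - E_r) ** 2 + (G / 2) ** 2)
        for E in Es]
Er_est, G_est = lorentzian_parameters(Es, rhos)
assert abs(Er_est - E_r) < 1e-4 and abs(G_est - G) < 1e-3
```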
One drawback of this method is
evident: for each $\lambda$ there are several $\rho_j(E)$ (in fact, one for each
variational level), and since each $\rho_j(E)$ provides a value for
$E^j_r(\lambda)$ and $\Gamma^j(\lambda)$, one has to choose which one is the
best.
Kar and Ho \cite{kar2004} solve this problem by fitting all the $\rho_j(E)$ and
keeping, as the best values for $E_r(\lambda)$ and $\Gamma(\lambda)$, the fitting
parameters with the smallest $\chi^2$ value. At least for their data, the best
fit (the smallest $\chi^2$) usually corresponds to the largest $n$.
This fact has a clear interpretation: if the numerical method approximates
$E_r(\lambda)$ with $E^{(\alpha)}_n(\lambda)$, a large $n$ means that the numerical
method is able to provide a large number of approximate levels, and so the
continuum of states in $(\varepsilon,0)$ is ``better'' approximated.
It is worth remarking that the results obtained from the complex scaling
method and from the density of states are in excellent
agreement; see Table~\ref{tabla1}.
\section{Fidelity and double orthogonality functions}
\label{sec-three}
Since the work of Zanardi {\em et al.} \cite{zanardi2006,zanardi2007} there has
been a growing interest in the fidelity approach as a means to study quantum
phase transitions \cite{zanardi2006}, the information-theoretic differential
geometry of quantum phase transitions (QPT's) \cite{zanardi2007}, or
the disordered quantum $XY$ model \cite{zanardi2009}. In all these cases the
fidelity is used to detect a change of behavior of the states of a quantum
system. For example, if $\lambda$ is the external parameter that drives a system
through a QPT, the fidelity is the overlap ${\mathcal F} =
\langle\Psi(\lambda-\delta \lambda), \Psi(\lambda+\delta \lambda)\rangle $,
where $\Psi$ is the ground state of the system. It has been shown that ${\mathcal
F}$ is a good detector of critical behavior in ordered \cite{zanardi2006} and
disordered systems \cite{zanardi2009}.
In the following we will show that the
energy levels calculated using the variational approximation show critical
behavior near the energy of the resonance, moreover the curve $E_r(\lambda)$
can be obtained from the fidelity.
Figure \ref{fidel1} shows the behavior of the function ${\mathcal G}_n= 1-F_n$,
where $F_n
= |\langle\Psi_n(\lambda), \Psi_n(\lambda+\delta \lambda)\rangle|^2$, and
$\Psi_n$ is the $n$-th eigenstate obtained with the variational approach.
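The mechanism behind the peaks of ${\mathcal G}_n$ can be illustrated with a two-level toy model (our own sketch, not the variational problem itself): a Hamiltonian with diagonal entries $\pm\lambda$ and off-diagonal coupling $g$ has an avoided crossing of gap $2g$ at $\lambda=0$; the ground state rotates rapidly there, so ${\mathcal G}=1-F$ peaks at the crossing and is negligible elsewhere.

```python
import math

def ground_state(lam, g):
    # Normalized ground state of the 2x2 matrix [[lam, g], [g, -lam]]
    E = -math.hypot(lam, g)               # lower eigenvalue
    vx, vy = g, E - lam                   # solves (lam - E) vx + g vy = 0
    n = math.hypot(vx, vy)
    return vx / n, vy / n

def G(lam, g, dlam=1e-2):
    ax, ay = ground_state(lam, g)
    bx, by = ground_state(lam + dlam, g)
    return 1.0 - (ax * bx + ay * by) ** 2  # G = 1 - F

g = 0.05                                   # small gap -> sharp avoided crossing
lams = [-1.0 + 1e-3 * i for i in range(2001)]
peak = max(lams, key=lambda l: G(l, g))
assert abs(peak) < 0.02     # G peaks at the avoided crossing (lam near 0)
assert G(0.8, g) < 1e-4     # and is tiny far from it
```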
\begin{figure}[ht]
\begin{center}
\psfig{figure=fig4_pont.eps,width=7cm}
\end{center}
\caption{\label{fidel1}(color on-line) The upper panel shows the behavior of
${\mathcal G}_n$, for $n=1, \ldots, 7$. Each function ${\mathcal G}_n$ has
two peaks, except for $n=1$. Since one of the peaks of ${\mathcal G}_n$
coincides with one of the peaks of ${\mathcal G}_{n+1}$, only one
peak for each level $n$ is apparent. From left to right, the visible line at
each peak corresponds to $n=2, \ldots, 7$ (cyan, red, yellow, blue, grey,
brown lines, respectively). The $n=1$ line has no visible peak (black).
The lower panel shows the variational eigenlevels $n=1, \ldots, 7$ (with the
same color convention used in the upper panel), and $E_{r}(\lambda)$ (green
dashed line). The black vertical dashed lines connecting both panels show the
value of $\lambda$ where each ${\mathcal G}_n$ has its minimum, $\lambda_n^f$.
The red dots in the lower panel correspond to $E_n(\lambda^f_n)$.}
\end{figure}
The behavior of ${\mathcal G}$ is quite simple. The value of ${\mathcal G}$ is
very small, except near the avoided
crossings, where it increases rather steeply (at least
for small $n$). This
is so because near the avoided crossing the overlap
$|\langle\Psi_n(\lambda), \Psi_n(\lambda+\delta \lambda)\rangle|^2\rightarrow
0$. More precisely, $|\langle\Psi_n(\lambda),
\Psi_n(\lambda+\delta \lambda)\rangle|^2\rightarrow 0$ near points where
$E^{(\alpha)}_n(\lambda)$ has non-analytical behavior. It is for this reason that the
fidelity is a good detector of quantum phase transitions
\cite{zanardi2006,zanardi2007,zanardi2009}. In a first-order QPT
the energy of the ground state is non-analytical, and in a second-order QPT the
gap at the avoided crossing between the ground state and the first excited
state goes to zero in the thermodynamic limit.
The previous argument explains why ${\mathcal G_1}$ has only one peak, while
all the other functions ${\mathcal G_n}$ have two: the number of peaks
equals the number of avoided crossings of each level. However, since the resonance
state lies somewhere between the avoided crossings, it is natural to ask which
feature of the fidelity signals the presence of the resonance. For a given
level $n$ the value of the energy is fixed, so we must pick a distinctive
feature of ${\mathcal G}_n$ that is present for some $\lambda^f_n$ such that
$ E_r(\lambda^f_n) \simeq E_n(\lambda^f_n)$ (from here on we will use $E_n$ and
$E_n^{(\alpha)}$ interchangeably). It turns out that
$\lambda^f_n$ is the value of $\lambda$ at which ${\mathcal G}_n$ attains its
minimum between its two peaks. Figure~\ref{fidel1} shows the points
$E_n(\lambda^f_n)$. In Table~\ref{tabla1} we tabulate the real part
of the energy calculated using DO, complex
scaling, fidelity, and density of states, for the five values of $\lambda^f_n$
shown in
Figure~\ref{fidel1}. The numerical values obtained using the fidelity and
density of states methods are identical up to five figures, and the relative error
between the energies
obtained is less than 0.25\%.
\begin{table}[floatfix]
\caption{\label{tabla1} Resonance energy obtained by four different methods.
The basis size is $N=14$.}
\centering
\begin{tabular}{c c c c r r r r r}
\hline\hline \\[-2.0ex]
\multirow{2}{*}{\makebox[2.5cm][c]{$\lambda_{DO}^n$}} & &
\multirow{2}{*}{\makebox[2.5cm][c]{$DO$}} &
\multirow{2}{*}{\makebox[2.5cm][c]{Complex}} &
\multicolumn{5}{c}{Fidelity and Density of States} \\
\cline{5-9}
& & & \makebox[2.5cm][c]{ Scaling}& $n=2$ & $n=3$
& $n=4$ & $n=5$ & $n=6$\\ [0.5ex]
\hline
1.755 \scriptsize{(n=2)} & E & -0.99075 & -0.99098 & -0.99011 & \---- & \---- & \---- & \---- \\%inserting body of the table
& $\alpha$ & 2.0 & 2.0 & 1.560 & \---- & \----
&\---- & \---- \\
1.8625 \scriptsize{(n=3)} & E &-0.93427 & -0.93452 & -0.93434 & -0.93383 & -0.93303 & \---- & \---- \\
& $\alpha$ & 2.0 & 2.0 & 2.448 & 1.787 & 1.414
&\---- & \---- \\
2.02 \scriptsize{(n=4)} & E &-0.85498 & -0.85556 & -0.85581 & -0.85564 & -0.85531 & -0.85486 & \---- \\
& $\alpha$ & 2.0 & 2.0 & 3.339 & 2.435 & 1.906
&1.519 & \---- \\
2.255 \scriptsize{(n=5)} & E & -0.74329 & -0.74538 & -0.74518 & -0.74527 & -0.74519 & -0.74521 & -0.74514 \\
& $\alpha$ & 2.0 & 2.0 & 4.262 & 3.098 & 2.414
&1.936 & 1.574 \\
2.61 \scriptsize{(n=6)} & E &-0.58276 & -0.59077 & -0.58825&-0.58910 &-0.58942 &-0.58965 & -0.58979\\
& $\alpha$ & 2.0 & 2.0 & 5.248 & 3.799 & 3.799
&2.373 & 1.936 \\ [1ex]
\hline
\end{tabular}
\end{table}
The idea of detecting the resonance
state energy with functions depending on the inner product could be taken a
step further. To this end we consider the functions
\begin{equation}
\label{dort}
DO_n(\lambda) = |\langle \Psi_n(\lambda_{L}), \Psi_n(\lambda) \rangle|^2 +
|\langle \Psi_n(\lambda_{R}), \Psi_n(\lambda)\rangle|^2, \quad \mbox{for} \;
\lambda_{L} < \lambda < \lambda_{R},
\end{equation}
where $\lambda_{L}$ and $\lambda_{R}$ are two given coupling values.
It is clear from the definition that \mbox{$0 \leq DO_n(\lambda) \leq 2$}.
If there are
no resonances between $\lambda_{L}$ and $\lambda_{R}$, the wave function
is roughly independent of $\lambda$, so $DO_n(\lambda) \simeq 2$.
However, the
scenario is different when a resonance is present between
$\lambda_{L}$ and $\lambda_{R}$.
In this case, the avoided crossings for a given state
$\Psi_n$ are located approximately at
$\lambda^{av}_{L}$
and $\lambda^{av}_{R}$, where $L$($R$) stands for the leftmost (rightmost)
avoided crossing. Requesting that
$\lambda_{L}<\lambda^{av}_{L}<\lambda^{av}_{R}<\lambda_{R}$,
it follows that $\langle \Psi_n(\lambda_L)|
\Psi_n(\lambda_R) \rangle \,\simeq\,0$.
With this prescription, the $DO_n$ functions are rather independent of the actual
values chosen for $\lambda_{L}$ and
$\lambda_{R}$.
For a
given $n$, $DO_n(\lambda)$ measures how much the state $\Psi_n(\lambda)$
differs from the extended states $\Psi_n(\lambda_{R})$ and
$\Psi_n(\lambda_{L})$.
We look for the states with minimum $DO_n$; in the same fashion as for
the fidelity, we obtain values $E_r(\lambda^n_{DO}) \simeq
E_n(\lambda^n_{DO})$, where $\lambda^n_{DO}$ is defined by
$DO_n(\lambda^n_{DO}) = \min_{\lambda} DO_n(\lambda)$. Figure~\ref{dortfig}
shows the behavior of $DO_n$ obtained for
the same parameters as those used in Figure~\ref{fidel1}, and we compare
the values of $E_n(\lambda^n_{DO})$ with energy values
obtained using complex scaling methods in Table~\ref{tabla1}. The curves in
Figure~\ref{dortfig} show that, outside the resonance region, the states
$\Psi_n$ change very little when $\lambda$ changes, so $DO_n \simeq 1$. Inside
the resonance region, $(\lambda_{L}^{av},\lambda_{R}^{av})$, the functions
$DO_{n}$ change
abruptly. The width in $\lambda$ over which a given $DO_n$ changes abruptly
apparently depends on the width of the resonance, $\Gamma$, but so far we have
not been
able to relate the two quantities.
\begin{figure}[floatfix]
\begin{center}
\psfig{figure= fig5_pont.eps,width=7cm}
\end{center}
\caption{\label{dortfig}(color on-line) The lower panel shows the
variational energy levels $E_{n}(\lambda)$, from bottom to top, for $n=1,\ldots,
7$ (the black, green, red, yellow, blue, grey and brown continuous lines,
respectively); $E_{r}(\lambda)$ (the dashed dark green line);
$E_n(\lambda_{DO}^n)$ (blue squares) and $E_n(\lambda_{n}^f)$ (red dots).
The upper panel shows the behavior of $DO_n$ {\em vs} $\lambda$ for
$n=2,\ldots, 7$. The color convention for the $DO_n$ is the same used in the
lower panel. The black dot-dashed vertical lines show the location of the
points $\lambda_{DO}^n$. }
\end{figure}
From Table~\ref{tabla1} and Figure~\ref{dortfig} it is
rather clear that, although the fidelity and the $DO_n$ provide approximate
values for $E_r(\lambda)$ at different sets of $\lambda$'s, both sets belong
to the ``same'' curve, {\em i.e.} the same curve up to the numerical
inaccuracies. Both methods would give the same results when
$|\lambda^{av}_{R}-\lambda^{av}_{L} | \rightarrow 0$, but for finite $N$ the
fidelity measures how fast the state changes when
$\lambda\rightarrow\lambda+\Delta\lambda$, while the $DO$ measures how much a
state differs from the extended states located on both sides of the resonance
state.
\section{The entropy}
\label{sec-four}
If $\hat{\rho}^{red}$ is the reduced density operator for one electron
\cite{ferron2009}, then the
von
Neumann entropy ${\mathcal S}$ is given by
\begin{equation}\label{von-neumann-entropy}
{\mathcal S} = -\mathrm{tr}(\hat{\rho}^{\mathrm{red}}
\log_2{\hat{\rho}^{\mathrm{red}}}) ,
\end{equation}
and the linear entropy $S_{\mathrm{lin}}$ is given by \cite{abdullah2009}
\begin{equation}\label{linear-entropy}
{\mathcal S}_{\mathrm{lin}} = 1-\mathrm{tr}\left[(\hat{\rho}^{\mathrm{red}})^2
\right],
\end{equation}
where the reduced density operator is
\begin{equation}\label{rho-red-def}
\hat{\rho}^{\mathrm{red}}(\mathbf{r}_1, \mathbf{r}^{\prime}_1) =
\mathrm{tr}_2 \left| \Psi \right\rangle \left\langle \Psi \right| \, ,
\end{equation}
where the trace is taken over one electron, and $\left|\Psi \right\rangle$ is
the total two-electron wave function. Both entropies,
Eqs. (\ref{von-neumann-entropy}) and (\ref{linear-entropy}), can be used to analyze
how much entanglement a given state has. One can choose either entropy
out of convenience. In this paper we will use the linear entropy. For
a discussion about the similarities between the two entropies see
Reference~\cite{abdullah2009} and references therein.
As the exact two-electron wave function is not
available, we use instead the variational approximation of Eq.
(\ref{variational-eigen}).
As has been noted in previous works (see \cite{osenda2007} and References
therein), when the total wave function factorizes into spatial and spinorial
components it is possible to single out both contributions; the analysis of
the behavior of the entropy then reduces to the analysis of the behavior of
the spatial part $S$, since the spinorial contribution is constant. In this
case, if $\varphi (\mathbf{r}_1, \mathbf{r}_2)$ is the two-electron
wave function and $\rho^{red}(\mathbf{r}_1, \mathbf{r}^{\prime}_1)$ is given by
\begin{equation}
\rho^{red}(\mathbf{r}_1, \mathbf{r}^{\prime}_1) = \int
\varphi^{\star}(\mathbf{r}_1, \mathbf{r}_2) \varphi(\mathbf{r}^{\prime}_1,
\mathbf{r}_2) \; d\mathbf{r}_2 ,
\end{equation}
then the linear entropy $S_{\mathrm{lin}}$ can be calculated as
\begin{equation}
S_{\mathrm{lin}}= 1 -\sum_i \lambda_i^2 ,
\end{equation}
where the $\lambda_i$ are the eigenvalues of $\rho^{red}$ and are given by
\begin{equation}
\int \rho^{red}(\mathbf{r}_1, \mathbf{r}^{\prime}_1) \phi_i(\mathbf{r}^{\prime}_1)
\; d\mathbf{r}^{\prime}_1 = \lambda_i \phi_i(\mathbf{r}_1) \, .
\end{equation}
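For a spatial wave function expanded in an orthonormal one-particle basis, $\varphi(\mathbf{r}_1,\mathbf{r}_2)=\sum_{ij}C_{ij}f_i(\mathbf{r}_1)f_j(\mathbf{r}_2)$ with real symmetric $C$, the reduced density matrix is simply $CC^{T}$ and the $\lambda_i$ are its eigenvalues. A two-orbital Python sketch (our own illustration, not the variational calculation) reproduces the limiting values used below:

```python
import math

def linear_entropy(C):
    """S_lin = 1 - sum_i lambda_i^2, where the lambda_i are the eigenvalues
    of rho_red = C C^T for a real symmetric 2x2 coefficient matrix C."""
    norm = math.sqrt(sum(c * c for row in C for c in row))
    C = [[c / norm for c in row] for row in C]   # normalize the state
    a = C[0][0] ** 2 + C[0][1] ** 2              # rho_red = C C^T
    d = C[1][0] ** 2 + C[1][1] ** 2
    b = C[0][0] * C[1][0] + C[0][1] * C[1][1]
    s = math.sqrt(((a - d) / 2) ** 2 + b * b)    # eigenvalues of [[a,b],[b,d]]
    l1, l2 = (a + d) / 2 + s, (a + d) / 2 - s
    return 1.0 - (l1 ** 2 + l2 ** 2)

# A product state f(r1) f(r2) is unentangled: S_lin = 0.
assert abs(linear_entropy([[1.0, 0.0], [0.0, 0.0]])) < 1e-12
# The symmetrized product (f g + g f)/sqrt(2) of two orthogonal orbitals
# gives S_lin = 1/2, the value S_c quoted below for the ionized system.
assert abs(linear_entropy([[0.0, 1.0], [1.0, 0.0]]) - 0.5) < 1e-12
```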
Figure \ref{entro-lin} shows the behavior of the linear
entropy for several variational levels. The meaning of each curve has been
extensively discussed in Reference \cite{ferron2009}. We include a brief
discussion here for completeness.
\begin{figure}[floatfix]
\begin{center}
\psfig{figure=fig6_pont.eps,width=8cm}
\end{center}
\caption{\label{entro-lin} The figure shows the behavior of
$S_{\mathrm{lin}}(\Psi_j^{(\alpha)})$, where the $\Psi_j^{(\alpha)}$ are the variational
eigenstates corresponding to the first seven energy levels shown in Figure
\ref{avoidedcross} for $N=14$ and $\alpha=2.0$. All the curves $S_{\mathrm{lin}}(\Psi_j^{(\alpha)})$, except the one corresponding
to $S_{\mathrm{lin}} (\Psi_1^{(\alpha)})$, have a single minimum located at
$\lambda_j^S$, {\em i.e.} $S_{\mathrm{lin}}(\Psi_j^{(\alpha)}(\lambda_j^S)) =
\min_{\lambda} S_{\mathrm{lin}}(\Psi_j^{(\alpha)}(\lambda))$. If $i<j$
then $\lambda_i^S < \lambda_j^S$. }
\end{figure}
When the two-electron quantum dot loses an electron, the state of
the system can be
described as one electron
bound to the dot potential and one unbound electron {\em at infinity}; as
a consequence, the
spatial wave function can be written as a symmetrized product
of one-electron wave functions, so $S_{\mathrm{lin}}=S_c=1/2$. Therefore, if only
bound and continuum states are considered, the entropy has a discontinuity
when $\lambda$ crosses the threshold value $\lambda_{th}$. The picture
changes significantly when resonance states are considered.
The resonance state keeps its two electrons
``bound'' before the ionization for a finite time given by the inverse of
the imaginary part of the energy. Of course, the lifetime of a bound
state is infinite while the lifetime of a resonance state is finite. In
Reference~\cite{ferron2009} it is suggested
that it is possible to construct a smooth function
$S(E_r(\lambda))$ that ``interpolates'' between the minima of the functions
$S(\Psi_j)$ shown in Figure~\ref{entro-lin}. This assumption was justified by
arguments similar to those used in the present work, {\em i.e.} if we
call
$\lambda_n^S$ the value of $\lambda$ where $S(\Psi_n)$ attains its minimum, then
$E_n(\lambda_n^S)$ follows approximately the curve $E_r(\lambda)$. As
Ferr\'on {\em et al.} \cite{ferron2009} used only one variational parameter
$\alpha$, it
seemed natural to pick the minimum value of $S(\Psi_n)$ as the feature that
signaled the presence of the resonance state.
Until now we have exploited the fact that $E_r(\lambda)$, at a given
$\lambda$, can be approximated by variational eigenvalues corresponding to different
values of the variational parameter, say $E_r(\lambda) \simeq
E_{n}^{(\alpha)}(\lambda) \simeq E_{n^{\prime}}^{(\alpha^{\prime})}(\lambda)$
(the
superscript $\alpha$ is written explicitly to emphasize that the eigenvalues correspond
to different variational parameters $\alpha$ and $\alpha^{\prime}$). There is
no problem in approximating an exact eigenvalue with different variational
eigenvalues. But, from the point of view of the entropy, there is a problem
since, in general, $S(\Psi_n^{(\alpha)}(\lambda)) $ is not close to
$S(\Psi_n^{(\alpha^{\prime})}(\lambda))$. Moreover, as has
been stressed in Reference~\cite{schuch2008}, a given numerical method may
calculate the spectrum of a quantum system accurately, yet be hopelessly
inaccurate for the calculation of the entanglement. In
few body systems there is evidence that there is a strong correlation between
the entanglement and the Coulombian repulsion between the components of the
system
\cite{osenda2007,coe2008,abdullah2009,osenda2008b}. Because of this correlation
we will carefully investigate the behavior of the Coulombian repulsion between
the electrons in our model.
For the Hamiltonian of Eq. (\ref{hamil}), and $\psi \in {\cal L}^2$ an eigenvector of $H$ with
eigenvalue $E$, the Hellmann-Feynman theorem gives
\begin{equation}\label{h-f}
\frac{\partial E}{\partial \lambda} = \left\langle \psi \right|
\frac{1}{r_{12}}\left| \psi\right\rangle.
\end{equation}
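Equation (\ref{h-f}) can be checked on any parameter-dependent Hermitian matrix. The Python sketch below (a generic two-level stand-in for Eq.~(\ref{hamil}); the matrices are arbitrary choices of ours) verifies that the numerical derivative of the ground-state energy matches the expectation value of the perturbation:

```python
import math

def ground(H):
    # Ground eigenpair of a real symmetric 2x2 matrix [[a, b], [b, d]]
    a, b, d = H[0][0], H[0][1], H[1][1]
    s = math.sqrt(((a - d) / 2) ** 2 + b * b)
    E = (a + d) / 2 - s
    vx, vy = (b, E - a) if b else ((1.0, 0.0) if a < d else (0.0, 1.0))
    n = math.hypot(vx, vy)
    return E, (vx / n, vy / n)

# H(lam) = H0 + lam * V, with V standing in for the repulsion 1/r12
H0 = [[-2.0, 0.3], [0.3, -1.0]]
V = [[0.5, 0.1], [0.1, 0.8]]
H = lambda lam: [[H0[i][j] + lam * V[i][j] for j in range(2)] for i in range(2)]

lam, h = 1.3, 1e-6
dE = (ground(H(lam + h))[0] - ground(H(lam - h))[0]) / (2 * h)  # l.h.s.
_, (px, py) = ground(H(lam))
expV = px * px * V[0][0] + 2 * px * py * V[0][1] + py * py * V[1][1]  # r.h.s.
assert abs(dE - expV) < 1e-6   # Hellmann-Feynman: dE/dlam = <psi|V|psi>
```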
We use both sides of Eq. (\ref{h-f}) to analyze how the variational
approximation works for expectation values of observables other than the
Hamiltonian. The r.h.s. of Eq. (\ref{h-f}) is well defined if we
use ${\cal L}^2$ functions as the approximate variational eigenfunctions.
\begin{figure}[floatfix]
\begin{center}
\psfig{figure=fig7_pont.eps,width=8cm}
\end{center}
\caption{\label{hell-fey} The figure shows the expectation values of the Coulombian
repulsion for the variational states $\Psi^{(\alpha)}_n$, $n=1,\ldots,8$ with
$N=14$ and $\alpha=2.0$. Also shown is the curve $\frac{\partial E_r}{\partial \lambda}$
obtained from the complex energy of the complex scaling method.}
\end{figure}
To evaluate the l.h.s. of Eq. (\ref{h-f}) we take advantage of the fact that we
have found, independently, the real part of the
resonance eigenvalue, $E_r(\lambda)$, using complex
scaling methods. Figure~\ref{hell-fey} shows the behavior of
$\frac{dE_r}{d\lambda}$ and the Coulombian repulsion between the two
electrons, $\langle \frac{1}{r_{12}} \rangle_n$, where
$\langle \ldots\rangle_n$ stands for the expectation value calculated with
$\Psi^{(\alpha)}_n$. The behavior of $\langle \frac{1}{r_{12}}\rangle_n$ is
quite simple to analyze: where the linear entropy of $\Psi^{(\alpha)}_n$
has a valley, the
expectation value $\langle \frac{1}{r_{12}} \rangle_n$ has a peak.
Where the expectation value $\langle \frac{1}{r_{12}}
\rangle_n$ has its maximum, the corresponding linear entropy has its
minimum. This inverse behavior of the entropy and the Coulombian repulsion
has been observed previously \cite{abdullah2009,osenda2008b}.
For a given variational parameter $\alpha$, and for small $n$, $\langle
\frac{1}{r_{12}} \rangle_n$ has its maximum very close to the curve
$\frac{dE_r}{d\lambda}$. Besides, the shape of both curves near the maximum of
$\langle\frac{1}{r_{12}} \rangle_n$ is very similar; in this sense our
variational approach gives a good approximation not only for $E_r(\lambda)$ but
also for its derivative.
For larger values of $n$ the maximum of $\langle
\frac{1}{r_{12}} \rangle_n $ departs from the curve of
$\frac{dE_r}{d\lambda}$, and the shapes of the curves near this maximum are quite
different. We proceed as before: changing $\alpha$, we obtain a good
approximation for $\frac{dE_r}{d\lambda}$ up to a certain value
$\lambda_{rep}$. For any $\lambda$ smaller than $\lambda_{rep}$, there
is a pair
$n,\alpha$ such that $\langle
\frac{1}{r_{12}} \rangle_{n,\alpha}$ is locally close to
$\frac{dE_r}{d\lambda}$ and the slopes of both curves are (up to numerical
errors) the same; see Figure~\ref{r12-a-n}.
\begin{figure}[floatfix]
\begin{center}
\psfig{figure=fig8_pont.eps,width=8cm}
\end{center}
\caption{\label{r12-a-n}(color-online) The figure shows the expectation value
$\langle 1/r_{12} \rangle_2^{(\alpha)}$ vs. $\lambda$ for a basis size $N=14$ and $\alpha=2,\,\ldots\,,3.5$
in $0.1$ steps and for $\alpha=4,\,\ldots\,,5.5$ in $0.5$ steps (solid black
lines). The real (cyan dotted) and imaginary (orange line)
parts of $\langle 1/r_{12} \rangle_\theta$ ($\theta=\pi/10$),
and the derivative of the
real part of the complex-scaled energy are also shown. }
\end{figure}
Apparently there is no way to push the variational method further, at least
keeping the same basis set, in order to obtain a
better approximation than the one depicted in Figure~\ref{r12-a-n}. The
difficulty seems to be deeper than a mere limitation of the variational
method used up to this point. We can clarify this issue using the properties
of the complex scaling method. Let us call $\phi^{\theta}$ the eigenvector
such that
\begin{equation}\label{complex-eigen}
H(\theta) \phi^{\theta} = E_{res} \phi^{\theta} ,
\end{equation}
where $H(\theta)$ is the Hamiltonian obtained from the complex scaling
transformation \cite{moisereport}, and $\theta$ is the angle of ``rotation''.
The eigenvector $\phi^{\theta}$ depends on $\theta$, but for $\theta$ large
enough the eigenvalue $E_{res}$ does not depend on $\theta$. As pointed out by
Moiseyev \cite{moisereport}, the real part of the expectation value
of a complex scaled observable is the physical measurable quantity, while the
imaginary part gives the uncertainty of measuring the real part. Moreover,
the
physical measurable quantity must be $\theta$ independent as is, for example,
the eigenvalue $E_{res}$.
The eigenvector $\phi^{\theta}$ can be normalized using the condition
\begin{equation}\label{complex-norm}
\langle (\phi^{\theta})^{\star} |\phi^{\theta} \rangle =1.
\end{equation}
Since $\phi^{\theta}$ is normalized, we get that
\begin{equation}\label{hell-fey-exp}
\frac{\partial E_{res}}{\partial \lambda} = \left\langle
(\phi^{\theta})^{\star} \right| \frac{e^{-i\theta} }{r_{12}} \left|\phi^{\theta}
\right\rangle = \left\langle\frac{1}{r_{12}} \right\rangle_{\theta},
\end{equation}
where we have used that, under the complex scaling
transformation,
\begin{equation}
\frac{1}{r_{12}}\rightarrow \frac{e^{-i\theta}}{r_{12}},
\end{equation}
and defined the quantity $ \left\langle\frac{1}{r_{12}}
\right\rangle_{\theta}$. This generalized Hellmann--Feynman theorem is also
valid for Gamow states \cite{hfg}.
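As a numerical sanity check of this generalized Hellmann--Feynman relation, the following Python sketch (illustrative only: $H_0$ and $V$ are random symmetric stand-ins, not the two-electron Hamiltonian, and the rotation angle is arbitrary) verifies that the finite-difference derivative of a complex eigenvalue of $H(\lambda)=H_{0}+\lambda e^{-i\theta}V$ agrees with the c-product expectation value of $e^{-i\theta}V$, with the normalization $\phi^{T}\phi=1$ of Eq.~(\ref{complex-norm}).

```python
import numpy as np

# Toy check of the generalized Hellmann-Feynman theorem with the c-product
# (no complex conjugation in the bra).  H0 and V are random stand-ins.
rng = np.random.default_rng(0)
n = 6
H0 = rng.standard_normal((n, n)); H0 = (H0 + H0.T) / 2
V  = rng.standard_normal((n, n)); V  = (V + V.T) / 2

theta, lam = 0.3, 0.7

def lowest_eig(lam):
    H = H0 + lam * np.exp(-1j * theta) * V   # complex-symmetric H(theta)
    w, vecs = np.linalg.eig(H)
    k = np.argmin(w.real)
    phi = vecs[:, k]
    phi = phi / np.sqrt(phi @ phi)           # c-product normalization phi^T phi = 1
    return w[k], phi

E, phi = lowest_eig(lam)
hf = phi @ (np.exp(-1j * theta) * V) @ phi   # <e^{-i theta}/ "V" >_theta, c-product

h = 1e-6                                     # finite-difference dE/d(lambda)
dE = (lowest_eig(lam + h)[0] - lowest_eig(lam - h)[0]) / (2 * h)
print(abs(dE - hf))                          # tiny: derivative equals expectation
```

The agreement relies on the matrix being complex symmetric, so the left eigenvector is the transpose (not the conjugate) of the right one.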
Figure~\ref{r12-a-n} shows the behavior of the expectation value
$ \left\langle\frac{1}{r_{12}} \right\rangle_{\theta}$ as a function of
$\lambda$. It is clear that the real part of the expectation value $
\left\langle\frac{1}{r_{12}}
\right\rangle_{\theta}$
coincides with $\frac{\partial E_{r}}{\partial \lambda}$. More interestingly,
$\lambda_{rep}$ is where the imaginary part of $ \left\langle\frac{1}{r_{12}}
\right\rangle_{\theta}$ becomes noticeable. From this fact we conclude that it
is not possible to adequately approximate the Coulomb repulsion of a
resonance state, or its entropy, using only real ${\cal L}^2$ variational functions,
despite their success when dealing with the resonance-state spectrum.
We define the complex scaled density operator of the resonance state by
\begin{equation}\label{complex-rho}
\rho^{\theta} = \left|\phi^{\theta} \right\rangle \left\langle
(\phi^{\theta})^{\star} \right|,
\end{equation}
and the complex linear entropy
\begin{equation}\label{complex-entropy}
S^{\theta} = 1 - {\mathrm tr} (\rho^{\theta}_{red})^2 ,
\end{equation}
where
\begin{equation}\label{complex-reducida}
\rho^{\theta}_{red} = {\mathrm tr}_2 \rho^{\theta},
\end{equation}
and $\phi^{\theta}$ is the eigenvector of Eq. (\ref{complex-eigen}). {
This definition is motivated by the fact that the density operator should be
the projector onto the
space spanned by $\left|\phi^{\theta}\right\rangle$. As the normalization Eq.
(\ref{complex-norm}) requires the {\em bra} to be conjugated, $\rho^\theta$ is
the adequate projector to use.}
Because
of the normalization, Eq. (\ref{complex-norm}), we have that ${\mathrm tr}
\rho^{\theta} = {\mathrm tr}\rho^{\theta}_{red}=1$ {\em despite the fact that both
density operators have complex eigenvalues}.
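These definitions can be made concrete on a toy two-site pure state. The sketch below (with arbitrary illustrative amplitudes, not those of an actual resonance) normalizes with the c-product of Eq.~(\ref{complex-norm}), builds $\rho^{\theta}$ without conjugating the bra, traces out the second particle, and evaluates Eq.~(\ref{complex-entropy}); the trace of $\rho^{\theta}_{red}$ is $1$ even though its eigenvalues are complex.

```python
import numpy as np

# Illustrative two-site "resonance" amplitudes (2x2 coefficient matrix)
phi = np.array([0.6 + 0.2j, 0.1 - 0.3j, 0.4 + 0.1j, 0.2 + 0.5j]).reshape(2, 2)
phi = phi / np.sqrt(np.sum(phi * phi))        # c-product normalization (no conjugate)

rho = np.einsum('ij,kl->ijkl', phi, phi)      # |phi><(phi*)| : no conjugation
rho_red = np.einsum('ijkj->ik', rho)          # trace out particle 2

print(np.trace(rho_red))                      # equals 1 up to rounding
S = 1 - np.trace(rho_red @ rho_red)           # complex linear entropy S^theta
print(S)
```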
\begin{figure}[floatfix]
\begin{center}
\psfig{figure=fig9_pont.eps,width=16cm}
\end{center}
\caption{\label{fig-complex-entropy}(color-online) Figure (a) shows the linear
entropy for the same values as in figure \ref{entro-lin} (magenta dashed lines).
Also shown is the real part of the complex linear entropy for several values
of the complex rotation angle $\theta=\frac{\pi}{5},\, \frac{\pi}{10},\,
\frac{\pi}{20},\, \frac{\pi}{30},\, \frac{\pi}{40}$ (black empty diamonds, red dots,
green squares, blue triangles, yellow empty dots).
Figure (b) shows the imaginary part of the complex linear entropy for the same
values as in (a). }
\end{figure}
Figure~\ref{fig-complex-entropy} shows that up to a certain value of $\lambda$
the real part of $S^{\theta}$ closely follows an envelope containing the minima
of the functions $S(\Psi_n)$. However, for $\lambda$ large enough, $S^{\theta}$
departs from the functions $S(\Psi_n)$. It is worth mentioning that for
$\theta$ large enough $S^{\theta}$ {\bf does not depend on} $\theta$. On the
other hand, far away from the threshold, the complex scaling method requires
larger values of $\theta$ to isolate the resonance-state eigenenergy, but in
this regime the method becomes unstable. In light of the numerical evidence,
near the threshold the entropy calculated by Ferr\'on {\em et al.}
is basically correct, but for larger values of $\lambda$ the amount of
entanglement of the
resonance state should be characterized by $S^{\theta}$ and not by any of the
$S(\Psi_n)$.
\section{Summary and conclusions}
\label{sec-conclu}
We have presented numerical calculations of the behavior of the fidelity
and the double-orthogonality functions $DO_n(\lambda)$. The numerical results
show that it is possible to obtain $E_r(\lambda)$ with great accuracy, for
selected values of $\lambda$, without employing any stabilization method.
These two methods to find $E_r(\lambda)$ do not depend on particular
assumptions about the model or on the variational method used to find approximate
eigenfunctions above the threshold; their success depends on the ability of the
approximate eigenstates to detect the non-analytical changes in the spectrum.
The fidelity has been extensively used to detect quantum phase
transitions in spin systems \cite{zanardi2006}, the behavior of
quasi-integrable systems \cite{weinstein2005}, thermal phase transitions
\cite{quan2009}, etc. To the best of our knowledge this work is the first
attempt to apply the concept of fidelity to resonance states and to the
characterization of spectral properties of a system with non-normalizable
eigenstates. Besides, it is remarkable that the fidelity and the double
orthogonality give the real part of the resonance eigenvalue using only real
variational functions. This energy as a function of $\lambda$ is obtained by
moving the nonlinear parameter $\alpha$ but
{\em without} the fitting required by standard
stabilization methods. Moreover, as shown by the tabulated values in
Table~\ref{tabla1}, the fidelity provides $E_r(\lambda)$ as accurately as the
density-of-states method, with considerably less numerical effort.
We proposed a definition of the resonance entropy
based on a complex-scaled extension of the usual definition. The extension
implies that the reduced density operator is not Hermitian and has complex
eigenvalues, resulting in a complex entropy.
The real and imaginary
parts of the complex entropy are $\theta$ independent, as should be expected for
the expectation value of an observable \cite{moisereport}. This
independence gives support to the interpretation of the real part of the entropy
as the amount of entanglement of the resonance state.
Other kinds of resonances, such as those that arise from the perturbation of
bound states embedded in
the continuum, could be studied by applying the same quantum information
methods used in this paper. Work in this direction is in progress.
\acknowledgments
We would like to acknowledge SECYT-UNC, CONICET and FONCyT
for partial financial support of this project.
\section{Introduction}\label{IntroSec}
Scalars are the elements $s$ used in scalar multiplication $s\cdot v$,
yielding for instance a new vector for a given vector $v$. Scalars are
elements in some algebraic structure, such as a field (for vector
spaces), a ring (for modules), a group (for group actions), or a
monoid (for monoid actions).
A categorical description of scalars can be given in a monoidal
category $\cat{C}$, with tensor $\otimes$ and tensor unit $I$, as the
homset $\cat{C}(I,I)$ of endomaps on $I$. In~\cite{KellyL80} it is
shown that such homsets $\cat{C}(I,I)$ always form a commutative
monoid; in~\cite[\S3.2]{AbramskyC09} this is called the `miracle' of
scalars. More recent work in the area of quantum computation has led
to renewed interest in such scalars, see for
instance~\cite{AbramskyC04,AbramskyC09}, where it is shown that the
presence of biproducts makes this homset $\cat{C}(I,I)$ of scalars a
semiring, and that daggers $\dag$ make it involutive. These are first
examples where categorical structure (a category which is monoidal or
has biproducts or daggers) gives rise to algebraic structure (a set
with a commutative monoid, semiring or involution structure). Such
correspondences form the focus of this paper, not only those between
categorical and algebraic structure, but also involving a third
element, namely structure on endofunctors (especially monads). Such
correspondences will be described in terms of triangles of
adjunctions.
To start, we describe the basic triangle of adjunctions that we shall
build on. At this stage it is meant as a sketch of the setting, and
not as an exhaustive explanation. Let $\aleph_{0}$ be the category
with natural numbers $n\in\ensuremath{\mathbb{N}}$ as objects. Such a number $n$ is
identified with the $n$-element set
$\underline{n}=\{0,1,\ldots,n-1\}$. Morphisms $n\rightarrow m$ in
$\aleph_0$ are ordinary functions $\underline{n}\rightarrow
\underline{m}$ between these finite sets. Hence there is a full and
faithful functor $\aleph_{0} \hookrightarrow \Cat{Sets}\xspace$. The underline
notation is useful to avoid ambiguity, but we often omit it when no
confusion arises and write the number $n$ for the set $\underline{n}$.
\begin{figure}
\label{SetsTriangle}
$$
\vcenter{\xymatrix@R-0pc@C+.5pc{
& & \Cat{Sets}\xspace\ar@/_2ex/ [ddll]_{\begin{array}{c}\scriptstyle A\mapsto \\[-.7pc]
\scriptstyle A\times(-)\end{array}}
\ar@/_2ex/ [ddrr]_(0.2){\begin{array}{c}\scriptstyle A\mapsto \\[-.7pc]
\scriptstyle A\times(-)\end{array}\hspace*{-1.5pc}} \\
& \dashv\;\; & & \dashv & \\
\Cat{Sets}\xspace^{\Cat{Sets}\xspace}\ar @/_2ex/[rrrr]_{\textrm{restrict}}
\ar@/_2ex/ [uurr]_(0.6){(-)(1)} & & & &
\Cat{Sets}\xspace^{\aleph_0}\ar@/_2ex/ [uull]_{(-)(1)}
\ar @/_2ex/[llll]_{\textrm{left Kan}}^{\raisebox{-.7pc}{$\bot$}}
}}
$$
\caption{Basic triangle of adjunctions.}
\end{figure}
Now consider the triangle in Figure~\ref{SetsTriangle}, with functor
categories at the two bottom corners. We briefly explain the arrows
(functors) in this diagram. The downward arrows
$\Cat{Sets}\xspace\rightarrow\Cat{Sets}\xspace^{\Cat{Sets}\xspace}$ and $\Cat{Sets}\xspace\rightarrow\Cat{Sets}\xspace^{\aleph_0}$
describe the functors that map a set $A\in\Cat{Sets}\xspace$ to the functor $X
\mapsto A\times X$. In the other, upward direction right adjoints are
given by the functors $(-)(1)$ describing ``evaluate at unit 1'', that
is $F\mapsto F(1)$. At the bottom the inclusion $\aleph_{0}
\hookrightarrow \Cat{Sets}\xspace$ induces a functor $\Cat{Sets}\xspace^{\Cat{Sets}\xspace} \rightarrow
\Cat{Sets}\xspace^{\aleph_0}$ by restriction: $F$ is mapped to the functor
$n\mapsto F(n)$. In the reverse direction a left adjoint is obtained
by left Kan extension~\cite[Ch.~X]{MacLane71}. Explicitly, this left
adjoint maps a functor $F\colon\aleph_{0}\rightarrow\Cat{Sets}\xspace$ to the
functor $\mathcal{L}(F)\colon\Cat{Sets}\xspace\rightarrow\Cat{Sets}\xspace$ given by:
$$\begin{array}{rcl}
\mathcal{L}(F)(X)
& = &
\Big(\coprod_{i\in\ensuremath{\mathbb{N}}}F(i)\times X^{i}\Big)/\!\sim,
\end{array}$$
\noindent where $\sim$ is the least equivalence relation such that,
for each $f\colon n\rightarrow m$ in $\aleph_0$,
$$\begin{array}{rcl}
\kappa_m(F(f)(a), v)
& \sim &
\kappa_n(a, v\mathrel{\circ} f),
\qquad\mbox{where }a\in F(n)\mbox{ and }v\in X^{m}.
\end{array}$$
\auxproof{
We first prove the adjunction
$\Cat{Sets}\xspace\leftrightarrows\Cat{Sets}\xspace^{\aleph_0}$. It involves a bijective
correspondence:
$$\begin{bijectivecorrespondence}
\correspondence[in $\Cat{Sets}\xspace^{\aleph_0}$]
{\xymatrix{A\times(-)\ar[r]^-{\sigma} & F}}
\correspondence[in \Cat{Sets}\xspace]
{\xymatrix{A\ar[r]_-{f} & F(1)}}
\end{bijectivecorrespondence}$$
\noindent Given a natural transformation $\sigma$ one defines
$\overline{\sigma}(a) = \sigma_{1}(a,0)$, where $0\in 1$. In the
other direction, given $f$ one takes, for $a\in A$ and $i\in n$,
$$\begin{array}{rcl}
\overline{f}_{n}(a,i)
& = &
F\Big(1\stackrel{i}{\rightarrow}n\Big)(f(a)) \;\in\; F(n).
\end{array}$$
\noindent It is not hard to see that $\overline{f}$ is natural, since
for $g\colon n\rightarrow m$ in $\aleph_0$ we get:
$$\begin{array}{rcl}
\big(\overline{f}_{m} \mathrel{\circ} A\times g\big)(a, i)
& = &
\overline{f}_{m}(a, g(i)) \\
& = &
F\big(1\stackrel{g(i)}{\rightarrow}m\big)(f(a)) \\
& = &
F\big(1\stackrel{i}{\rightarrow}n\stackrel{g}{\rightarrow} m\big)(f(a)) \\
& = &
F(g)\Big(F\big(1\stackrel{i}{\rightarrow}n\big)(f(a))\Big) \\
& = &
\big(F(g) \mathrel{\circ} \overline{f}_{n}\big)(a,i).
\end{array}$$
\noindent Further,
$$\begin{array}{rcl}
\overline{\overline{\sigma}}_{n}(a,i)
& = &
F\big(1\stackrel{i}{\rightarrow}n\big)(\overline{\sigma}(a)) \\
& = &
F\big(1\stackrel{i}{\rightarrow}n\big)(\sigma_{1}(a,0)) \\
& = &
\sigma_{n}((A\times i)(a, 0)) \\
& = &
\sigma_{n}(a, i) \\
\overline{\overline{f}}(a)
& = &
\overline{f}_{1}(a,0) \\
& = &
F\big(1\stackrel{0}{\rightarrow}1\big)(f(a)) \\
& = &
f(a).
\end{array}$$
We turn to the adjunction $\Cat{Sets}\xspace^{\Cat{Sets}\xspace} \leftrightarrows \Cat{Sets}\xspace^{\aleph_0}$.
It involves:
$$\begin{bijectivecorrespondence}
\correspondence[in $\Cat{Sets}\xspace^{\Cat{Sets}\xspace}$]
{\xymatrix{\mathcal{L}(F)\ar[r]^-{\sigma} & G}}
\correspondence[in $\Cat{Sets}\xspace^{\aleph_0}$]
{\xymatrix{F\ar[r]_-{\tau} & G}}
\end{bijectivecorrespondence}$$
\noindent where $\mathcal{L}(F)$ describes the left Kan extension described
above. Given $\sigma$ define $\overline{\sigma}_{n}\colon F(n)
\rightarrow G(n)$ by $\overline{\sigma}_{n}(a) =
\sigma_{n}([\kappa_{n}(a,\ensuremath{\mathrm{id}}_{n})])$, where $[\kappa_{n}(a,\ensuremath{\mathrm{id}}_{n})]
\in \mathcal{L}(F)(n) = \big(\coprod_{i}F(i)\times n^{i}\big)/\!\sim$. This
yields a natural transformation, since for $f\colon n\rightarrow m$
in $\aleph_0$,
$$\begin{array}{rcl}
\big(G(f) \mathrel{\circ} \overline{\sigma}_{n}\big)(a)
& = &
G(f)\big(\sigma_{n}([\kappa_{n}(a,\ensuremath{\mathrm{id}}_{n})])\big) \\
& = &
\sigma_{m}\big(\mathcal{L}(F)(f)([\kappa_{n}(a,\ensuremath{\mathrm{id}}_{n})])\big) \\
& = &
\sigma_{m}([\kappa_{n}(a,f\mathrel{\circ}\ensuremath{\mathrm{id}}_{n})]) \\
& = &
\sigma_{m}([\kappa_{n}(a,\ensuremath{\mathrm{id}}_{m} \mathrel{\circ} f)]) \\
& = &
\sigma_{m}([\kappa_{m}(F(f)(a),\ensuremath{\mathrm{id}}_{m})]) \\
& = &
\big(\overline{\sigma}_{m} \mathrel{\circ} F(f)\big)(a).
\end{array}$$
\noindent In the other direction, given $\tau$ we take $\overline{\tau}_{X}
\colon \mathcal{L}(F)(X) \rightarrow G(X)$ by:
$$\begin{array}{rcl}
\overline{\tau}_{X}([\kappa_{i}(a,g)])
& = &
G\big(i\stackrel{g}{\rightarrow}X\big)(\tau_{i}(a)) \;\in\; G(X).
\end{array}$$
\noindent This yields a natural transformation since for $f\colon
X\rightarrow Y$ in $\Cat{Sets}\xspace$ we have:
$$\begin{array}{rcl}
\big(G(f) \mathrel{\circ} \overline{\tau}_{X}\big)([\kappa_{i}(a, g)])
& = &
G(f)\Big(G\big(i\stackrel{g}{\rightarrow}X\big)(\tau_{i}(a))\Big) \\
& = &
\Big(G\big(i\stackrel{f\mathrel{\circ} g}{\rightarrow}Y\big)(\tau_{i}(a))\Big) \\
& = &
\overline{\tau}_{Y}([\kappa_{i}(a,f\mathrel{\circ} g)]) \\
& = &
\big(\overline{\tau}_{Y} \mathrel{\circ} \mathcal{L}(F)(f)\big)([\kappa_{i}(a,g)]).
\end{array}$$
\noindent Finally,
$$\begin{array}{rcl}
\overline{\overline{\sigma}}_{X}([\kappa_{i}(a,g)])
& = &
G\big(i\stackrel{g}{\rightarrow}X\big)(\overline{\sigma}_{i}(a)) \\
& = &
G\big(i\stackrel{g}{\rightarrow}X\big)(\sigma_{i}([\kappa_{i}(a,\ensuremath{\mathrm{id}}_{i})])) \\
& = &
\sigma_{n}(\mathcal{L}(F)(g)([\kappa_{i}(a,\ensuremath{\mathrm{id}}_{i})])) \\
& = &
\sigma_{n}([\kappa_{i}(a,g)]) \\
\overline{\overline{\tau}}_{n}(a)
& = &
\overline{\tau}_{n}([\kappa_{n}(a,\ensuremath{\mathrm{id}}_{n})]) \\
& = &
G\big(n\stackrel{\ensuremath{\mathrm{id}}}{\rightarrow}n\big)(\tau_{n}(a)) \\
& = &
\tau_{n}(a).
\end{array}$$
}
\noindent The adjunction on the left in Figure~\ref{SetsTriangle} is
then in fact the composition of the other two. The adjunctions in
Figure~\ref{SetsTriangle} are not new. For instance, the one at the
bottom plays an important role in the description of analytic functors
and species~\cite{Joyal86}, see
also~\cite{Hasegawa02,AdamekV08,Curien08}. The category of presheaves
$\Cat{Sets}\xspace^{\aleph_0}$ is used to provide a semantics for binding,
see~\cite{FiorePT99}. What is new in this paper is the systematic
organisation of correspondences in triangles like the one in
Figure~\ref{SetsTriangle} for various kinds of algebraic structures
(instead of sets).
\begin{itemize}
\item There is a triangle of adjunctions for monoids, monads, and
Lawvere theories, see Figure~\ref{MonoidTriangleFig}.
\item This triangle restricts to commutative monoids, commutative
monads, and symmetric monoidal Lawvere theories, see
Figure~\ref{ComMonoidTriangleFig}.
\item There is also a triangle of adjunctions for commutative
semirings, commutative additive monads, and symmetric monoidal
Lawvere theories with biproducts, see Figure~\ref{CSRngTriangleFig}.
\item This last triangle restricts to involutive commutative
semirings, involutive commutative additive monads, and dagger
symmetric monoidal Lawvere theories with dagger biproducts, see
Figure~\ref{ICSRngTriangleFig} below.
\end{itemize}
\noindent These four figures with triangles of adjunctions provide a
quick way to get an overview of the paper (the rest is just hard
work). The triangles capture fundamental correspondences between basic
mathematical structures. As far as we know they have not been made
explicit at this level of generality.
The paper is organised as follows. It starts with a section containing
some background material on monads and Lawvere theories. The triangle
of adjunctions for monoids, much of which is folklore, is developed in
Section~\ref{MonoidSec}. Subsequently, Section~\ref{AMndSec} forms an
intermezzo; it introduces the notion of additive monad, and proves
that a monad $T$ is additive if and only if in its Kleisli category
$\mathcal{K}{\kern-.2ex}\ell(T)$ coproducts form biproducts, if and only if in its
category $\textsl{Alg}\xspace(T)$ of algebras products form
biproducts. These additive monads play a crucial role in
Sections~\ref{SemiringMonadSec} and~\ref{Semiringcatsec} which develop
a triangle of adjunctions for commutative semirings. Finally,
Section~\ref{InvolutionSec} introduces the refined triangle with
involutions and daggers.
The triangles of adjunctions in this paper are based on many detailed
verifications of basic facts. We have chosen to describe all
constructions explicitly but to omit most of these verifications,
certainly when these are just routine. Of course, one can continue and
try to elaborate deeper (categorical) structure underlying the
triangles. In this paper we have chosen not to follow that route, but
rather to focus on the triangles themselves.
\section{Preliminaries}\label{PrelimSec}
We shall assume a basic level of familiarity with category theory,
especially with adjunctions and monads. This section recalls some
basic facts and fixes notation. For background information we refer
to~\cite{Awodey06,Borceux94,MacLane71}.
In an arbitrary category \Cat{C} we write finite products as
$\times,1$, where $1\in\cat{C}$ is the final object. The projections
are written as $\pi_{i}$ and tupling as $\tuple{f_{1}}{f_{2}}$. Finite
coproducts are written as $+$ with initial object $0$, and with
coprojections $\kappa_i$ and cotupling $[f_{1},f_{2}]$. We write $!$,
both for the unique map $X \to 1$ and the unique map $0 \to X$. A
category is called distributive if it has both finite products and
finite coproducts such that functors $X\times(-)$ preserve these
coproducts: the canonical maps $0\rightarrow X\times 0$, and $(X\times
Y)+(X\times Z) \rightarrow X\times (Y+Z)$ are isomorphisms. Monoidal
products are written as $\otimes, I$ where $I$ is the tensor unit,
with the familiar isomorphisms: $\alpha\colon X\otimes (Y\otimes Z)
\congrightarrow (X\otimes Y)\otimes Z$ for associativity, $\rho\colon
X\otimes I\congrightarrow X$ and $\lambda\colon I\otimes
X\congrightarrow X$ for unit, and in the symmetric case also
$\gamma\colon X\otimes Y\congrightarrow Y\otimes X$ for swap.
We write $\Cat{Mnd}\xspace(\Cat{C})$ for the category of monads on a category
\Cat{C}. For convenience we write $\Cat{Mnd}\xspace$ for $\Cat{Mnd}\xspace(\Cat{Sets}\xspace)$. Although
we shall use strength for monads mostly with respect to finite
products $(\times, 1)$ we shall give the more general definition
involving monoidal products $(\otimes, I)$. A monad $T$ is called
strong if it comes with a `strength' natural transformation $\ensuremath{\mathsf{st}}\xspace$ with
components $\ensuremath{\mathsf{st}}\xspace\colon T(X)\otimes Y\rightarrow T(X\otimes Y)$,
commuting with unit $\eta$ and multiplication $\mu$, in the sense that
$\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \eta\otimes\ensuremath{\mathrm{id}} = \eta$ and $\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \mu\otimes\ensuremath{\mathrm{id}}
= \mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace$. Additionally, for the familiar
monoidal isomorphisms $\rho$ and $\alpha$,
$$\hspace*{-1em}\xymatrix@C-1pc{
T(Y)\otimes I\ar[r]^-{\ensuremath{\mathsf{st}}\xspace}\ar[dr]_{\rho} & T(Y\otimes I)\ar[d]^{T(\rho)}
&
T(X)\otimes (Y\otimes Z)\ar[rr]^-{\ensuremath{\mathsf{st}}\xspace} \ar[d]_-{\alpha} & &
T(X\otimes (Y\otimes Z)) \ar[d]^{T(\alpha)} \\
& T(Y)
&
(T(X)\otimes Y)\otimes Z\ar[r]^-{\ensuremath{\mathsf{st}}\xspace\otimes\ensuremath{\mathrm{id}}}&
T(X\otimes Y)\otimes Z\ar[r]^-{\ensuremath{\mathsf{st}}\xspace} &
T((X\otimes Y)\otimes Z)
}$$
\noindent Also, when the tensor $\otimes$ is a cartesian product
$\times$ we sometimes write these $\rho$ and $\alpha$ for the obvious
maps.
The category $\Cat{StMnd}\xspace(\Cat{C})$ has monads with strength $(T,\ensuremath{\mathsf{st}}\xspace)$ as
objects. Morphisms are monad maps commuting with strength. The
monoidal structure on \Cat{C} is usually clear from the context.
\begin{lemma}
\label{SetsStrengthLem}
Monads on \Cat{Sets}\xspace are always strong w.r.t.\ finite products, in a canonical
way, yielding a functor $\Cat{Mnd}\xspace(\Cat{Sets}\xspace) = \Cat{Mnd}\xspace \rightarrow \Cat{StMnd}\xspace =
\Cat{StMnd}\xspace(\Cat{Sets}\xspace)$.
\end{lemma}
\begin{proof}
For every functor $T\colon\Cat{Sets}\xspace\rightarrow\Cat{Sets}\xspace$, there exists a
strength map $\ensuremath{\mathsf{st}}\xspace\colon T(X)\times Y \rightarrow T(X\times Y)$, namely
$\ensuremath{\mathsf{st}}\xspace(u,y) = T(\lam{x}{\tuple{x}{y}})(u)$. It makes the above diagrams
commute, and also commutes with unit and multiplication in case $T$ is
a monad. Additionally, strengths commute with natural transformations
$\sigma\colon T\rightarrow S$, in the sense that $\sigma \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace =
\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (\sigma\times\ensuremath{\mathrm{id}})$. \hspace*{\fill}$\QEDbox$
\end{proof}
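Lemma~\ref{SetsStrengthLem} can be made concrete by taking the list functor as a stand-in for $T$. The Python sketch below (an illustration only, not part of the formal development) implements $\ensuremath{\mathsf{st}}\xspace(u,y) = T(\lam{x}{\tuple{x}{y}})(u)$ and checks that strength commutes with a natural transformation $\sigma$, here list reversal.

```python
# Canonical strength for the list functor on Sets, as a stand-in for a monad T.
def fmap(f, u):                 # T(f) for T = list
    return [f(x) for x in u]

def st(u, y):                   # st(u, y) = T(x |-> (x, y))(u)
    return fmap(lambda x: (x, y), u)

sigma = lambda u: u[::-1]       # a natural transformation T => T (reversal)

u, y = [1, 2, 3], 'a'
print(sigma(st(u, y)))          # [(3, 'a'), (2, 'a'), (1, 'a')]
print(st(sigma(u), y))          # the same: sigma . st = st . (sigma x id)
```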
\auxproof{
For $u\in T(X)$ and $y\in Y$,
$$\begin{array}{rcl}
\big(\sigma \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace\big)(u, y)
& = &
\sigma(T(\lam{x}{\tuple{x}{y}})(u)) \\
& = &
\big(\sigma \mathrel{\circ} T(\lam{x}{\tuple{x}{y}})\big)(u) \\
& = &
\big(S(\lam{x}{\tuple{x}{y}}) \mathrel{\circ} \sigma\big)(u) \\
& = &
\ensuremath{\mathsf{st}}\xspace(\sigma(u), y) \\
& = &
\big(\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (\sigma\times\ensuremath{\mathrm{id}})\big)(u,y).
\end{array}$$
}
Given a general strength map $\ensuremath{\mathsf{st}}\xspace\colon T(X)\otimes Y \rightarrow
T(X\otimes Y)$ in a \textit{symmetric} monoidal category one can
define a swapped $\ensuremath{\mathsf{st}}\xspace'\colon X\otimes T(Y) \rightarrow T(X\otimes Y)$
as $\ensuremath{\mathsf{st}}\xspace' = T(\gamma) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \gamma$, where $\gamma \colon
X\otimes Y \congrightarrow Y\otimes X$ is the swap map. There are now
in principle two maps $T(X)\otimes T(Y) \rightrightarrows T(X\otimes
Y)$, namely $\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace$ and $\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace)
\mathrel{\circ} \ensuremath{\mathsf{st}}\xspace'$. A strong monad $T$ is called commutative if these two
composites $T(X)\otimes T(Y) \rightrightarrows T(X\otimes Y)$ are the
same. In that case we shall write $\ensuremath{\mathsf{dst}}\xspace$ for this (single) map, which
is a monoidal transformation, see also~\cite{Kock71a}. The powerset
monad $\mathcal{P}$ is an example of a commutative monad, with $\ensuremath{\mathsf{dst}}\xspace
\colon \mathcal{P}(X)\times\mathcal{P}(Y)\rightarrow \mathcal{P}(X\times Y)$
given by $\ensuremath{\mathsf{dst}}\xspace(U,V) = U\times V$. Later we shall see other examples.
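For the powerset monad this commutativity can be checked directly on small finite sets. The sketch below (illustrative only) computes the two composites $\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace$ and $\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace'$ and confirms that both equal $\ensuremath{\mathsf{dst}}\xspace(U,V) = U\times V$.

```python
from itertools import chain

# The powerset monad P on finite sets, with its strength and swapped strength.
Tmap = lambda g, S: frozenset(g(a) for a in S)        # P(g)
mu   = lambda SS: frozenset(chain.from_iterable(SS))  # union: P(P(X)) -> P(X)
st   = lambda U, y: frozenset((x, y) for x in U)      # st : P(X) x Y -> P(X x Y)
stp  = lambda x, V: frozenset((x, v) for v in V)      # st': X x P(V) -> P(X x V)

U, V = frozenset({1, 2}), frozenset({'a', 'b'})
dst1 = mu(Tmap(lambda p: stp(*p), st(U, V)))          # mu . P(st') . st
dst2 = mu(Tmap(lambda p: st(*p),  stp(U, V)))         # mu . P(st) . st'

print(dst1 == dst2 == frozenset({(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')}))  # True
```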
We write $\mathcal{K}{\kern-.2ex}\ell(T)$ for the Kleisli category of a monad $T$, with
$X\in\Cat{C}$ as objects, and maps $X\rightarrow T(Y)$ in \Cat{C} as
arrows. For clarity we sometimes write a fat dot $\klafter$ for
composition in Kleisli categories, so that $g \klafter f = \mu \mathrel{\circ}
T(g) \mathrel{\circ} f$. The inclusion functor $\Cat{C}\rightarrow \mathcal{K}{\kern-.2ex}\ell(T)$ is
written as $J$, where $J(X) = X$ and $J(f) = \eta \mathrel{\circ} f$. A map of
monads $\sigma\colon T\rightarrow S$ yields a functor
$\mathcal{K}{\kern-.2ex}\ell(\sigma)\colon \mathcal{K}{\kern-.2ex}\ell(T) \rightarrow \mathcal{K}{\kern-.2ex}\ell(S)$ which is the identity on
objects, and maps an arrow $f$ to $\sigma\mathrel{\circ} f$. This functor
$\mathcal{K}{\kern-.2ex}\ell(\sigma)$ commutes with the $J$'s. One obtains a functor $\mathcal{K}{\kern-.2ex}\ell
\colon \Cat{Mnd}\xspace(\cat{C}) \to \Cat{Cat}$, where $\cat{Cat}$ is the category
of (small) categories.
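Kleisli composition $g \klafter f = \mu \mathrel{\circ} T(g) \mathrel{\circ} f$ can likewise be illustrated for the powerset monad; the maps $f$ and $g$ below are arbitrary examples chosen for this sketch. It also checks that $\eta$ is a two-sided unit for $\klafter$, as required for $J$ to be a functor.

```python
# Kleisli composition for the powerset monad P, with unit eta(x) = {x}.
eta = lambda x: frozenset({x})

def kleisli(g, f):              # g after f in Kl(P): mu . P(g) . f, flattened
    return lambda x: frozenset(y for u in f(x) for y in g(u))

f = lambda n: frozenset({n, n + 1})   # example maps N -> P(N)
g = lambda n: frozenset({2 * n})

print(kleisli(g, f)(3) == frozenset({6, 8}))   # True
print(kleisli(f, eta)(3) == f(3))              # True: eta is a right unit
print(kleisli(eta, f)(3) == f(3))              # True: eta is a left unit
```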
We will use the following standard result.
\begin{lemma}
\label{KleisliStructLem}
For $T\in\Cat{Mnd}\xspace(\Cat{C})$, consider the generic statement ``if $\Cat{C}$
has $\diamondsuit$ then so does $\mathcal{K}{\kern-.2ex}\ell(T)$ and $J\colon\Cat{C}
\rightarrow \mathcal{K}{\kern-.2ex}\ell(T)$ preserves $\diamondsuit$'s'', where $\diamondsuit$ is
some property. This holds for:
{\renewcommand{\theenumi}{(\roman{enumi})}
\begin{enumerate}
\item $\diamondsuit$ = (finite coproducts $+, 0$), or in fact any colimits;
\item $\diamondsuit$ = (monoidal products $\otimes, I$), in case
the monad $T$ is commutative;
\end{enumerate}}
\end{lemma}
\begin{proof}
Point \textit{(i)} is obvious; for \textit{(ii)}
one defines the tensor on morphisms in $\mathcal{K}{\kern-.2ex}\ell(T)$ as:
$$\begin{array}{rcl}
\big(X\stackrel{f}{\rightarrow} T(U)\big) \otimes
\big(Y\stackrel{g}{\rightarrow} T(V)\big)
& = &
\big(X\otimes Y \stackrel{f\otimes g}{\longrightarrow} T(U)\otimes T(V)
\stackrel{\ensuremath{\mathsf{dst}}\xspace}{\longrightarrow} T(U\otimes V)\big).
\end{array}$$
\noindent Then: $J(f)\otimes J(g) = \ensuremath{\mathsf{dst}}\xspace \mathrel{\circ} ((\eta \mathrel{\circ}
f)\otimes (\eta \mathrel{\circ} g)) = \eta \mathrel{\circ} (f\otimes g) = J(f\otimes
g)$. \hspace*{\fill}$\QEDbox$
\end{proof}
\auxproof{
Proof of the fact that $\mathcal{K}{\kern-.2ex}\ell(\sigma)$ preserves coproducts:
For $\kappa_1 \colon X \to X+Y$ in $\mathcal{K}{\kern-.2ex}\ell(T)$,
\[\mathcal{K}{\kern-.2ex}\ell(\sigma)(\kappa_1) = \sigma_{X+Y} \mathrel{\circ} \eta^T \mathrel{\circ} \kappa_1 = \eta^S \mathrel{\circ} \kappa_1\]
as $\sigma$ commutes with $\eta$.
For $f\colon X \to Z$, $g\colon Y \to Z$ in $\mathcal{K}{\kern-.2ex}\ell(T)$, i.e. $f\colon X \to T(Z)$, $g\colon Y \to T(Z)$ in $\cat{C}$,
\[\mathcal{K}{\kern-.2ex}\ell(\sigma)(\cotuple{f}{g}) = \sigma_Z \mathrel{\circ} \cotuple{f}{g} = \cotuple{\sigma_Z \mathrel{\circ} f}{\sigma_Z \mathrel{\circ} g} = \cotuple{\mathcal{K}{\kern-.2ex}\ell(\sigma)(f)}{\mathcal{K}{\kern-.2ex}\ell(\sigma)(g)}.\]
The functor $\mathcal{K}{\kern-.2ex}\ell(\sigma)$ preserves monoidal products since:
$$\begin{array}{rcl}
\mathcal{K}{\kern-.2ex}\ell(\sigma)(f) \otimes \mathcal{K}{\kern-.2ex}\ell(\sigma)(g)
& = &
\ensuremath{\mathsf{dst}}\xspace \mathrel{\circ} ((\sigma \mathrel{\circ} f) \otimes (\sigma \mathrel{\circ} g)) \\
& \stackrel{(*)}{=} &
\sigma \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace \mathrel{\circ} (f\otimes g) \\
& = &
\mathcal{K}{\kern-.2ex}\ell(\sigma)(f\otimes g),
\end{array}$$
\noindent where the marked equality holds since $\sigma$ commutes
with strength:
$$\begin{array}{rcll}
\sigma \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace
& = &
\sigma \mathrel{\circ} \mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \\
& = &
\mu \mathrel{\circ} \sigma \mathrel{\circ} T(\sigma) \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace
& \mbox{$\sigma$ is a map of monads} \\
& = &
\mu \mathrel{\circ} \sigma \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace' \mathrel{\circ} \ensuremath{\mathrm{id}}\otimes\sigma) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace
& \mbox{$\sigma$ commutes with strength} \\
& = &
\mu \mathrel{\circ} S(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \sigma \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (\ensuremath{\mathrm{id}}\otimes\sigma) \\
& = &
\mu \mathrel{\circ} S(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (\sigma\otimes\ensuremath{\mathrm{id}})
\mathrel{\circ} (\ensuremath{\mathrm{id}}\otimes\sigma) \\
& = &
\ensuremath{\mathsf{dst}}\xspace \mathrel{\circ} (\sigma\otimes\sigma).
\end{array}$$
}
As in this lemma we sometimes formulate results on monads in full
generality, \textit{i.e.}~for arbitrary categories, even though our
main results---see Figures~\ref{MonoidTriangleFig},
\ref{ComMonoidTriangleFig}, \ref{CSRngTriangleFig}
and~\ref{ICSRngTriangleFig}---only deal with monads on \Cat{Sets}\xspace. These
results involve algebraic structures like monoids and semirings, which
we interpret in the standard set-theoretic universe, and not in
arbitrary categories. Such greater generality is possible, in
principle, but it does not seem to add enough to justify the
additional complexity.
Often we shall be interested in a ``finitary'' version of the Kleisli
construction, corresponding to the Lawvere
theory~\cite{Lawvere63a,HylandP07} associated with a monad. For a
monad $T\in\Cat{Mnd}\xspace$ on $\Cat{Sets}\xspace$ we shall write $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ for the
category with natural numbers $n\in\ensuremath{\mathbb{N}}$ as objects, regarded as
finite sets $\underline{n} = \{0,1,\ldots, n-1\}$. A map $f\colon
n\rightarrow m$ in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ is then a function $\underline{n}
\rightarrow T(\underline{m})$. This yields a full inclusion
$\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T) \hookrightarrow \mathcal{K}{\kern-.2ex}\ell(T)$. It is easy to see that a map
$f\colon n\rightarrow m$ in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ can be identified with an
$n$-cotuple of elements $f_{i}\in T(\underline{m})$, which may be seen as $m$-ary
terms/operations.
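This identification of maps with cotuples can be made concrete. The following Python sketch is our own illustration (not part of the development), taking the finite powerset monad as an assumed example: a map $n\rightarrow m$ in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ is an $n$-cotuple of subsets of $\underline{m}$, composed in the Kleisli manner.

```python
# Illustration (not from the text): the finitary Kleisli category of the
# finite powerset monad T on Sets.  A map n -> m is a function
# {0,...,n-1} -> T({0,...,m-1}), i.e. an n-cotuple of subsets of m.

def eta(x):
    """Unit of the powerset monad: x |-> {x}."""
    return frozenset([x])

def kleisli(g, f):
    """Kleisli composition: apply f, then g elementwise, then union (mu)."""
    return lambda x: frozenset(y for z in f(x) for y in g(z))

# A map 3 -> 2, presented as a 3-cotuple of elements of T(2):
f = {0: frozenset([0]), 1: frozenset([0, 1]), 2: frozenset()}
# A map 2 -> 2:
g = {0: frozenset([0, 1]), 1: frozenset([1])}

h = kleisli(g.get, f.get)                # composite map 3 -> 2
assert h(1) == frozenset([0, 1])
assert kleisli(eta, f.get)(1) == f[1]    # eta acts as the identity map
```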
By the previous lemma the category $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ has coproducts
given on objects simply by the additive monoid structure $(+, 0)$ on
natural numbers. There are obvious coprojections $n\rightarrow n+m$,
using $\underline{n+m} \cong \underline{n}+\underline{m}$. The
identities $n+0 = n = 0+n$ and $(n+m)+k = n + (m+k)$ are in fact the
familiar monoidal isomorphisms. The swap map is an isomorphism $n+m
\cong m+n$ rather than an identity $n+m = m+n$.
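The coproduct structure can be animated as well. In the following Python sketch (again our own illustration with the powerset monad, and with $\underline{n+m} \cong \underline{n}+\underline{m}$ encoded by index offsets, a choice of ours), the coprojections arise from the unit $\eta$ and cotupling is case analysis.

```python
# Illustration: finite coproducts in Kl_N(T) for the powerset monad,
# with underline(n+m) = underline(n) + underline(m) encoded by offsets.

def eta(x):
    return frozenset([x])

def kappa1(n, m):
    """First coprojection n -> n+m in Kl_N(T): i |-> {i}."""
    return lambda i: eta(i)

def kappa2(n, m):
    """Second coprojection m -> n+m: j |-> {n+j}."""
    return lambda j: eta(n + j)

def cotuple(f, g, n):
    """Cotuple [f,g]: n+m -> k, from f: n -> k and g: m -> k."""
    return lambda i: f(i) if i < n else g(i - n)

f = lambda i: frozenset([i])        # a map 2 -> 3
g = lambda j: frozenset([2])        # a map 1 -> 3
h = cotuple(f, g, 2)                # [f,g]: 3 -> 3
assert h(0) == f(0)                 # [f,g] restricted along kappa1 is f
assert h(2) == g(0)                 # [f,g] restricted along kappa2 is g
```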
In general, a Lawvere theory is a small category $\cat{L}$ with
natural numbers $n\in\ensuremath{\mathbb{N}}$ as objects, and $(+,0)$ on $\ensuremath{\mathbb{N}}$ forming
finite coproducts in $\cat{L}$. It forms a categorical version of a
term algebra, in which maps $n\rightarrow m$ are understood as
$n$-tuples of terms $t_i$ each with $m$ free variables. Formally a
Lawvere theory involves a functor $\aleph_{0}\rightarrow\cat{L}$ that
is the identity on objects and preserves finite coproducts ``on the
nose'' (up-to-identity) as opposed to up-to-isomorphism. A morphism of
Lawvere theories $F\colon\cat{L}\rightarrow\cat{L'}$ is a functor that
is the identity on objects and strictly preserves finite coproducts.
This yields a category \Cat{Law}\xspace.
\begin{corollary}
\label{Mnd2FCCatCor}
The finitary Kleisli construction $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$ for monads on $\Cat{Sets}\xspace$
yields a functor $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}: \Cat{Mnd}\xspace \rightarrow \Cat{Law}\xspace$. \hspace*{\fill}$\QEDbox$
\end{corollary}
\section{Monoids}\label{MonoidSec}
The aim of this section is to replace the category \Cat{Sets}\xspace of sets at
the top of the triangle in Figure~\ref{SetsTriangle} by the category
\Cat{Mon}\xspace of monoids $(M,\cdot,1)$, and to see how the corners at the
bottom change in order to keep a triangle of adjunctions. Formally,
this can be done by considering monoid objects in the three categories
at the corners of the triangle in Figure~\ref{SetsTriangle} (see
also~\cite{FiorePT99,Curien08}) but we prefer a more concrete
description. The results in this section, which are summarised in
Figure~\ref{MonoidTriangleFig}, are not claimed to be new, but are
presented in preparation for further steps later in this paper.
\begin{figure}
$$\begin{array}{c}
{\[email protected]@C+.5pc{
& & \Cat{Mon}\xspace\ar@/_2ex/ [ddll]_{\cal A}\ar@/_2ex/ [ddrr]_(0.4){\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}} \\
& \dashv & & \dashv & \\
\Cat{Mnd}\xspace\ar @/_2ex/[rrrr]_{\mathcal{K}{\kern-.2ex}\ell_\ensuremath{\mathbb{N}}}
\ar@/_2ex/ [uurr]_(0.6){\;{\mathcal{E}} \cong \mathcal{H}\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}} & & \bot & &
\Cat{Law}\xspace\ar@/_2ex/ [uull]_{\mathcal{H}}\ar @/_2ex/[llll]_{\mathcal{T}}
}} \\ \\[-1em]
\mbox{where}\quad
\left\{\begin{array}{ll}
{\cal A}(M) = M\times (-) & \mbox{action monad} \\
\mathcal{E}(T) = T(1) & \mbox{evaluation at singleton set 1} \\
\mathcal{H}(\Cat{L}) = \Cat{L}(1,1) &
\mbox{endo-homset of $1\in\Cat{L}$} \\
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T) & \mbox{Kleisli category restricted to objects $n\in\ensuremath{\mathbb{N}}$} \\
\mathcal{T}(\cat{L}) = T_{\cat{L}} & \mbox{monad associated with Lawvere theory \cat{L}.}
\end{array}\right.
\end{array}$$
\caption{Basic relations between monoids, monads and Lawvere theories.}
\label{MonoidTriangleFig}
\end{figure}
We start by studying the interrelations between monoids and monads. In
principle this part can be skipped, because the adjunction on the left
in Figure~\ref{MonoidTriangleFig} between monoids and monads follows
from the other two by composition. But we do make this adjunction
explicit in order to completely describe the situation.
The following result is standard. We only sketch the proof.
\begin{lemma}
\label{Mon2MndLem}
Each monoid $M$ gives rise to a monad ${\cal A}(M) = M\times(-)\colon
\Cat{Sets}\xspace \rightarrow \Cat{Sets}\xspace$. The mapping $M\mapsto {\cal A}(M)$ yields
a functor $\Cat{Mon}\xspace\rightarrow\Cat{Mnd}\xspace$.
\end{lemma}
\begin{proof}
For a monoid $(M,\cdot,1)$ the unit map $\eta \colon X\rightarrow M\times X =
{\cal A}(M)(X)$ is $x\mapsto (1,x)$. The multiplication $\mu \colon M\times
(M\times X)\rightarrow M\times X$ is $(s,(t,x)) \mapsto (s\cdot
t,x)$. The standard strength map $\ensuremath{\mathsf{st}}\xspace\colon (M\times X)\times Y
\rightarrow M\times (X\times Y)$ is given by $\ensuremath{\mathsf{st}}\xspace((s,x),y) =
(s,(x,y))$. Each monoid map $f\colon M\rightarrow N$ gives rise to a
map of monads with components $f\times \ensuremath{\mathrm{id}}\colon M\times
X\rightarrow N\times X$. These components commute with
strength. \hspace*{\fill}$\QEDbox$
\end{proof}
The monad $\mathcal{A}(M) = M\times(-)$ is called the `action monad',
as its category of Eilenberg-Moore algebras consists of $M$-actions
$M\times X\rightarrow X$ and their morphisms. The monoid elements act
as scalars in such actions.
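The data in the proof of Lemma~\ref{Mon2MndLem} are easily animated. The following Python sketch is our own illustration, taking as an assumed example the monoid of strings under concatenation; it spells out unit, multiplication and strength of the action monad and checks the monad laws on a sample.

```python
# Illustration: the action monad A(M) = M x (-) for the (assumed)
# monoid M of strings under concatenation, with unit "".

def eta(x):
    """Unit: x |-> (1, x), here the empty string paired with x."""
    return ("", x)

def mu(t):
    """Multiplication: (s, (t, x)) |-> (s.t, x)."""
    (s, (t2, x)) = t
    return (s + t2, x)

def st(p, y):
    """Strength: ((s, x), y) |-> (s, (x, y))."""
    (s, x) = p
    return (s, (x, y))

# Monad laws, checked on a sample element of A(M)(A(M)(A(M)(X))):
v = ("a", ("b", ("c", 1)))
assert mu(mu(v)) == mu(("a", mu(("b", ("c", 1)))))   # associativity
assert mu(eta(("a", 1))) == ("a", 1)                 # left unit law
assert mu(("a", eta(1))) == ("a", 1)                 # right unit law
assert st(("a", 1), 2) == ("a", (1, 2))
```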
Conversely, each monad (on $\Cat{Sets}\xspace$) gives rise to a monoid. In the
following lemma we prove this in more generality. For a category
$\cat{C}$ with finite products, we denote by
$\cat{Mon}(\cat{C})$ the category of monoids in $\cat{C}$,
\textit{i.e.}~the category of objects $M$ in $\cat{C}$ carrying a
monoid structure $1 \rightarrow M \leftarrow M \times M$ with
structure preserving maps between them.
\begin{lemma}
\label{Mnd2MonLem}
Each strong monad $T$ on a category $\cat{C}$ with finite products gives
rise to a monoid $\mathcal{E}(T) = T(1)$ in $\cat{C}$. The
mapping $T \mapsto T(1)$ yields a functor $\Cat{StMnd}\xspace(\cat{C}) \to
\Cat{Mon}\xspace(\cat{C})$.
\end{lemma}
\begin{proof}
For a strong monad $(T, \eta, \mu, \ensuremath{\mathsf{st}}\xspace)$, we define a multiplication
on $T(1)$ by $\mu \mathrel{\circ} T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \colon T(1) \times T(1)
\to T(1)$, with unit $\eta_1 \colon 1 \to T(1)$. Each monad map
$\sigma\colon T \to S$ gives rise to a monoid map $T(1) \to S(1)$ by
taking the component of $\sigma$ at $1$. \hspace*{\fill}$\QEDbox$
\end{proof}
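For a concrete instance (our own illustration): for the list monad $T$ one has $T(1)\cong\ensuremath{\mathbb{N}}$ via length, and the multiplication $\mu \mathrel{\circ} T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace$ of the proof becomes ordinary multiplication of natural numbers. A Python sketch:

```python
# Illustration: the monoid E(T) = T(1) for the list monad, where
# T(1) = lists over the singleton {*}.  The recipe mu . T(pi2) . st
# sends (a, b) to len(a) * len(b) copies of *.

STAR = "*"

def st(pair):
    """Strength T(1) x T(1) -> T(1 x T(1)): pair each * in a with b."""
    a, b = pair
    return [(x, b) for x in a]

def T_pi2(t):
    """T(pi2): T(1 x T(1)) -> T(T(1))."""
    return [b for (_, b) in t]

def mu(tt):
    """Multiplication of the list monad: flattening."""
    return [x for b in tt for x in b]

def mult(a, b):
    return mu(T_pi2(st((a, b))))

assert mult([STAR] * 2, [STAR] * 3) == [STAR] * 6   # 2 * 3 = 6
assert mult([], [STAR]) == []                       # 0 is absorbing
assert mult([STAR], [STAR] * 5) == [STAR] * 5       # eta_1 = [*] is the unit
```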
\auxproof{
We check the unit laws diagrammatically:
$$\[email protected]{
T(1)\ar[r]^-{\cong}\ar@{=}[ddr] &
T(1)\times 1\ar[r]^-{\ensuremath{\mathrm{id}}\times\eta_1}\ar[d]^{\ensuremath{\mathsf{st}}\xspace} &
T(1)\times T(1)\ar[d]^{\ensuremath{\mathsf{st}}\xspace}
&
T(1)\ar[r]^-{\cong}\ar@{=}[ddr] &
1\times T(1)\ar[r]^-{\eta_{1}\times\ensuremath{\mathrm{id}}}\ar[dr]_-{\eta}\ar[dd]^{\pi_2} &
T(1)\times T(1)\ar[d]^{\ensuremath{\mathsf{st}}\xspace} \\
&
T(1\times 1)\ar[r]^-{T(\ensuremath{\mathrm{id}}\times\eta)}\ar[d]^{T(\pi_{2})} &
T(1\times T(1))\ar[d]^{T(\pi_{2})}
&
&
&
T(1\times T(1))\ar[d]^{T(\pi_{2})} \\
&
T(1)\ar[r]^-{T(\eta)}\ar@{=}[dr] &
T^{2}(1)\ar[d]^{\mu}
&
&
T(1)\ar[r]^-{\eta}\ar@{=}[dr] &
T^{2}(1)\ar[d]^{\mu} \\
&
&
T(1)
&
&
&
T(1)
}$$
The associativity of multiplication follows from the following diagram.
$$\xymatrix{
(T(1)\times T(1))\times T(1)\ar[r]^-{\ensuremath{\mathsf{st}}\xspace\times\ensuremath{\mathrm{id}}}\ar[d]_{\alpha}^{\cong} &
T(1\times T(1))\times T(1)\ar[r]^-{T(\pi_{2})\times\ensuremath{\mathrm{id}}}\ar[d]^{\ensuremath{\mathsf{st}}\xspace} &
T^{2}(1)\times T(1)\ar[r]^-{\mu}\ar[d]^{\ensuremath{\mathsf{st}}\xspace} &
T(1)\times T(1)\ar[ddd]^{\ensuremath{\mathsf{st}}\xspace} \\
T(1)\times (T(1)\times T(1))\ar[dd]_{\ensuremath{\mathrm{id}}\times\ensuremath{\mathsf{st}}\xspace} &
T((1\times T(1))\times T(1))\ar[r]^-{T(\pi_{2}\times\ensuremath{\mathrm{id}})}
\ar[d]_{T(\alpha)}^{\cong} &
T(T(1)\times T(1))\ar[dd]^{T(\ensuremath{\mathsf{st}}\xspace)} & \\
&
T(1\times (T(1)\times T(1)))\ar[d]^{T(\ensuremath{\mathrm{id}}\times\ensuremath{\mathsf{st}}\xspace)} \\
T(1)\times T(1\times T(1))\ar[d]_{\ensuremath{\mathrm{id}}\times T(\pi_{2})}\ar[r]^-{\ensuremath{\mathsf{st}}\xspace} &
T(1\times T(1\times T(1)))\ar[d]_{T(\ensuremath{\mathrm{id}}\times\ensuremath{\mathsf{st}}\xspace)}\ar[r]^-{T(\pi_{2})} &
T^{2}(1\times T(1))\ar[r]^-{\mu}\ar[d]^{T^{2}(\pi_{2})} &
T(1\times T(1))\ar[d]^{T(\pi_{2})} \\
T(1)\times T^{2}(1)\ar[d]_{\ensuremath{\mathrm{id}}\times\mu}\ar[r]^-{\ensuremath{\mathsf{st}}\xspace} &
T(1\times T^{2}(1))\ar[d]^{T(\ensuremath{\mathrm{id}}\times\mu)}\ar[r]^-{T(\pi_{2})} &
T^{3}(1)\ar[r]^-{\mu}\ar[d]^{T(\mu)} &
T^{2}(1)\ar[d]^-{\mu} \\
T(1)\times T(1)\ar[r]_-{\ensuremath{\mathsf{st}}\xspace} &
T(1\times T(1))\ar[r]_-{T(\pi_{2})} &
T^{2}(1)\ar[r]_-{\mu} & T(1)
}$$
}
The swapped strength map $\ensuremath{\mathsf{st}}\xspace'$ gives rise to a swapped multiplication
on $T(1)$, namely $\mu \mathrel{\circ} T(\pi_1) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace' \colon T(1) \times
T(1) \to T(1)$, again with unit $\eta_1$. It corresponds to $(a,b)
\mapsto b\cdot a$ instead of $(a,b)\mapsto a\cdot b$ as in the
lemma. In case $T$ is a commutative monad, the two multiplications
coincide, as we prove in Lemma~\ref{CMnd2CMonLem}.
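The difference between the two multiplications is visible already for the action monad $\mathcal{A}(M)$ of a non-commutative monoid $M$, where $T(1) = M\times 1$ recovers $M$. A Python sketch (our own illustration, with $M$ the string monoid):

```python
# Illustration: for T = A(M) = M x (-) with M the string monoid,
# T(1) = M x {()} recovers M; the strength st and the swapped strength
# st' induce the two opposite multiplications on T(1).

def st(a, b):
    """Strength: T(1) x T(1) -> T(1 x T(1))."""
    s, x = a
    return (s, (x, b))

def st_swap(a, b):
    """Swapped strength st': T(1) x T(1) -> T(T(1) x 1)."""
    t, y = b
    return (t, (a, y))

def fmap(f):
    """Functor action of T = M x (-)."""
    return lambda p: (p[0], f(p[1]))

def mu(p):
    """Multiplication of A(M): multiply in M."""
    s, (t, x) = p
    return (s + t, x)

pi1 = lambda p: p[0]
pi2 = lambda p: p[1]

def mult(a, b):
    return mu(fmap(pi2)(st(a, b)))        # gives a.b

def mult_swap(a, b):
    return mu(fmap(pi1)(st_swap(a, b)))   # gives b.a

a, b = ("ab", ()), ("c", ())
assert mult(a, b) == ("abc", ())
assert mult_swap(a, b) == ("cab", ())
```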
The functors defined in the previous two Lemmas~\ref{Mon2MndLem} and
\ref{Mnd2MonLem} form an adjunction. This result goes back
to~\cite{Wolff73}.
\begin{lemma}
\label{AdjMndMonLem}
The pair of functors $\mathcal{A} \colon \Cat{Mon}\xspace \rightleftarrows \Cat{Mnd}\xspace
\colon \mathcal{E}$ forms an adjunction $\mathcal{A} \dashv \mathcal{E}$, as on the
left in Figure~\ref{MonoidTriangleFig}.
\end{lemma}
\begin{proof}
For a monoid $M$ and a (strong) monad $T$ on \Cat{Sets}\xspace there are
(natural) bijective correspondences:
$$\begin{bijectivecorrespondence}
\correspondence[in \Cat{Mnd}\xspace]{\xymatrix{\mathcal{A}(M)\ar[r]^-{\sigma} & T}}
\correspondence[in \Cat{Mon}\xspace]{\xymatrix{M\ar[r]_-{f} & T(1)}}
\end{bijectivecorrespondence}$$
\noindent Given $\sigma$ one defines a monoid map $\overline{\sigma}
\colon M\rightarrow T(1)$ as:
$$\xymatrix{
\overline{\sigma} = \Big(M\ar[r]^-{\rho^{-1}}_-{\cong} &
M\times 1 = \mathcal{A}(M)(1)\ar[r]^-{\sigma_1} & T(1)\Big),
}$$
\noindent where $\rho^{-1} = \tuple{\ensuremath{\mathrm{id}}}{!}$ in this cartesian case.
Conversely, given $f$ one gets a monad map $\overline{f}
\colon \mathcal{A}(M) \rightarrow T$ with components:
$$\xymatrix{
\overline{f}_{X} = \Big(M\times X\ar[r]^-{f\times\ensuremath{\mathrm{id}}} &
T(1)\times X\ar[r]^-{\ensuremath{\mathsf{st}}\xspace} &
T(1\times X)\ar[r]^-{T(\lambda)}_-{\cong} & T(X)\Big),
}$$
\noindent where $\lambda = \pi_{2} \colon 1\times X\congrightarrow
X$. Straightforward computations show that these assignments indeed
give a natural bijective correspondence. \hspace*{\fill}$\QEDbox$
\end{proof}
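The transpose $f \mapsto \overline{f}$ can also be traced through concretely. The following Python sketch is our own illustration, under assumed choices: $M = (\ensuremath{\mathbb{N}},\cdot,1)$, $T$ the list monad with $T(1)\cong\ensuremath{\mathbb{N}}$ by length, and a hypothetical monoid map $f$ sending $n$ to the $n$-element list over the singleton.

```python
# Illustration: transposing a monoid map f: M -> T(1) to a monad map
# f-bar: A(M) -> T, for M = (N, *, 1) and T the list monad.

def f(n):
    """Assumed monoid map N -> T(1): n |-> n copies of *."""
    return ["*"] * n

def transpose(f):
    """f-bar with components M x X -> T(X), via strength and T(lambda)."""
    return lambda n, x: [x] * len(f(n))

Fbar = transpose(f)
assert Fbar(1, "a") == ["a"]                        # commutes with units

# Commutes with multiplication: both routes A(M)(A(M)(X)) -> T(X) agree.
n, m, x = 2, 3, "a"
via_monoid = Fbar(n * m, x)                         # first mu in A(M)
inner = Fbar(m, x)                                  # then flatten in T
via_monad = [y for ys in [inner] * len(f(n)) for y in ys]
assert via_monoid == via_monad == ["a"] * 6
```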
\auxproof{
We briefly check the bijective correspondences:
$$\begin{array}{rcll}
\overline{\overline{\sigma}}_{X}
& = &
T(\pi_{2}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (\overline{\sigma}\times\ensuremath{\mathrm{id}}) \\
& = &
T(\pi_{2}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (\sigma_{1}\times\ensuremath{\mathrm{id}}) \mathrel{\circ}
(\tuple{\ensuremath{\mathrm{id}}}{!}\times\ensuremath{\mathrm{id}}) \\
& = &
T(\pi_{2}) \mathrel{\circ} \sigma_{1\times X} \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ}
(\tuple{\ensuremath{\mathrm{id}}}{!}\times\ensuremath{\mathrm{id}})
& \mbox{since $\sigma$ commutes with strength} \\
& = &
\sigma_{X} \mathrel{\circ} (M\times\pi_{2}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ}
(\tuple{\ensuremath{\mathrm{id}}}{!}\times\ensuremath{\mathrm{id}})
& \mbox{by naturality of $\sigma$} \\
& \stackrel{(*)}{=} &
\sigma_{X} \mathrel{\circ} \ensuremath{\mathrm{id}} \\
& = &
\sigma_{X},
\end{array}$$
\noindent where the marked equation holds since for $a\in M$
and $x\in X$
$$\begin{array}{rcl}
\big((M\times\pi_{2}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ}
(\tuple{\ensuremath{\mathrm{id}}}{!}\times\ensuremath{\mathrm{id}})\big)(a,x)
& = &
\big((M\times\pi_{2}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace\big)((a,*),x) \\
& = &
(M\times\pi_{2})(a,(*,x)) \\
& = &
(a,x).
\end{array}$$
\noindent Next,
$$\begin{array}{rcll}
\overline{\overline{f}}
& = &
\overline{f}_{1} \mathrel{\circ} \tuple{\ensuremath{\mathrm{id}}}{!} \\
& = &
T(\pi_{2}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (f\times\ensuremath{\mathrm{id}})\mathrel{\circ} \tuple{\ensuremath{\mathrm{id}}}{!} \\
& = &
T(\pi_{1}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (f\times\ensuremath{\mathrm{id}})\mathrel{\circ} \tuple{\ensuremath{\mathrm{id}}}{!}
& \mbox{since $\pi_{1} = \pi_{2} \colon 1\times 1\rightarrow 1$} \\
& = &
\pi_{1} \mathrel{\circ} (f\times\ensuremath{\mathrm{id}})\mathrel{\circ} \tuple{\ensuremath{\mathrm{id}}}{!}
& \mbox{by~\eqref{StrengthMonoidal}} \\
& = &
f \mathrel{\circ} \pi_{1} \mathrel{\circ} \tuple{\ensuremath{\mathrm{id}}}{!} \\
& = &
f.
\end{array}$$
For $f \colon M \to T(1)$, $\overline{f}$ is a monad morphism $\mathcal{A}(M) \to T$:\\
\begin{itemize}
\item Naturality: let $h \colon X \to Y$ be a set-map
$$
\begin{array}{rcl}
T(h) \mathrel{\circ} \overline{f}_X &=&T(h) \mathrel{\circ} T(\pi_2) \mathrel{\circ} st \mathrel{\circ} f \times id \\
&=&
T(\pi_2) \mathrel{\circ} T(id \times h) \mathrel{\circ} st \mathrel{\circ} f \times id \\
&=&
T(\pi_2) \mathrel{\circ} st \mathrel{\circ} id \times h \mathrel{\circ} f \times id\\
&=&
T(\pi_2) \mathrel{\circ} st \mathrel{\circ} f\times id \mathrel{\circ} id \times h\\
&=&
\overline{f}_Y \mathrel{\circ} \mathcal{A}(M)(h)
\end{array}
$$
\item Commutativity with $\eta$:
\xymatrix{
X \ar[r]^{\eta} \ar[d]_{\lambda^{-1}} & M \times X \ar[d]^{f \times id}\\
1 \times X \ar[r]^{\eta \times id} \ar[dd]_{\lambda} \ar[rd]_{\eta} & T(1) \times X \ar[d]^{st} \\
&T(1 \times X) \ar[d]^{T(\pi_2) = T(\lambda)}\\
X \ar[r]_{\eta} &T(X)
}
\item Commutativity with $\mu$: \\
\begin{sideways}
\xymatrix{
M\times(M\times X) \ar[dddddd]_{\mu}\ar[rr]^{id \times (f \times id)} && M \times (T(1) \times X) \ar[d]_{f \times id} \ar[rr]^{id \times \ensuremath{\mathsf{st}}\xspace} && M \times T(1 \times X) \ar[d]_{f \times id} \ar[r]^{id \times T(\pi_2)} & M \times T(X) \ar[d]^{f \times id}\\
&& T(1)\times(T(1)\times X) \ar[rr]^{id \times st} \ar[dr]^{\ensuremath{\mathsf{st}}\xspace} \ar[dl]_{\alpha} && T(1) \times T(1 \times X) \ar[r]^{id \times T(\pi_2)} \ar[d]^{\ensuremath{\mathsf{st}}\xspace} & T(1) \times T(X) \ar[d]^{\ensuremath{\mathsf{st}}\xspace}\\
& (T(1) \times T(1))\times X \ar[d]_{\ensuremath{\mathsf{st}}\xspace} && T(1 \times (T(1) \times X)) \ar[ddl]^{T(\alpha)} \ar[ddd]^{T(\pi_2)} \ar[r]^{T(id \times \ensuremath{\mathsf{st}}\xspace)} & T(1 \times T(1 \times X)) \ar[r]^{T(id \times \pi_2)} \ar[ddd]_{T(\pi_2)} & T(1 \times T(X)) \ar[ddd]^{T(\pi_2)} \\
& T(1 \times T(1)) \times X \ar[d]_{T(\pi_2) \times id}\ar[dr]^{\ensuremath{\mathsf{st}}\xspace} &&&& \\
& T^2(1) \times X \ar[dd]_{\mu \times id} \ar[rrd]^{\ensuremath{\mathsf{st}}\xspace} & T((1 \times T(1)) \times X) \ar[dr]^{T(\pi_2 \times id)} &&& \\
&&&T(T(1) \times X) \ar[r]^{T(\ensuremath{\mathsf{st}}\xspace)} & T^2(1 \times X) \ar[r]^{T^2(\pi_2)} \ar[d]_{\mu} & T^2(X) \ar[d]^{\mu}\\
M \times X \ar[r]^{f \times id} & T(1) \times X \ar[rrr]^{\ensuremath{\mathsf{st}}\xspace} &&& T(1 \times X) \ar[r]^{T(\pi_2)}& T(X)
}
\end{sideways}
\end{itemize}
For $\sigma: \mathcal{A}(M) \to T$, $\overline{\sigma}$ is a monoid morphism:
\begin{itemize}
\item Preservation of 1 follows from the fact that $\sigma$ commutes with $\eta_1$ \\
\item Preservation of the multiplication: \\
\xymatrix{
M \times M \ar[r] \ar[ddd]_{\cdot} \ar[rdd] & (M\times 1) \times (M \times 1) \ar[d]_{\ensuremath{\mathsf{st}}\xspace} \ar[r]^{\sigma \times id} & T(1) \times (M\times 1) \ar[r]^{id \times \sigma} \ar[d]_{\ensuremath{\mathsf{st}}\xspace} & T(1) \times T(1) \ar[d]^{\ensuremath{\mathsf{st}}\xspace} \\
&M \times (1 \times (M \times 1)) \ar[r]^{\sigma} \ar[d]_{id \times \pi_2} & T(1 \times (M \times 1)) \ar[d]_{T(\pi_2)} \ar[r]^{T(id \times \sigma)} & T(1 \times T(1)) \ar[d]^{T(\pi_2)}\\
& M \times (M \times 1) \ar[r]^{\sigma} \ar[d]_{\mu} & T(M \times 1) \ar[r]^{T(\sigma)} & T^2(1) \ar[d]^{\mu}\\
M \ar[r] & M \times 1 \ar[rr]^{\sigma} &&T(1)
}
\end{itemize}
For naturality, consider:
$$\begin{bijectivecorrespondence}
\correspondence[in \Cat{Mnd}\xspace]{\xymatrix{\mathcal{A}(N)\ar[r]^-{\mathcal{A}(f)} & \mathcal{A}(M) \ar[r]^-{\sigma} & T \ar[r]^-{\tau} & S}}
\correspondence[in \Cat{Mon}\xspace]{\xymatrix{N\ar[r]_-{f} & M \ar[r]_-{\overline{\sigma}} & T(1) \ar[r]_-{\tau_1} & S(1)}}
\end{bijectivecorrespondence}$$
Then,
$$
\begin{array}{rcl}
\overline{\tau \mathrel{\circ} \sigma \mathrel{\circ} \mathcal{A}(f)} &=& (\tau \mathrel{\circ} \sigma \mathrel{\circ} \mathcal{A}(f))_1 \mathrel{\circ} \tuple{id}{!} \\
&=&
\tau_1 \mathrel{\circ} \sigma_1 \mathrel{\circ} (\mathcal{A}(f))_1 \mathrel{\circ} \tuple{id}{!}\\
&=&
\tau_1 \mathrel{\circ} \sigma_1 \mathrel{\circ} f \times id \mathrel{\circ} \tuple{id}{!}\\
&=&
\tau_1 \mathrel{\circ} \sigma_1 \mathrel{\circ} \tuple{id}{!} \mathrel{\circ} f \\
&=&
\tau_1 \mathrel{\circ} \overline{\sigma} \mathrel{\circ} f
\end{array}
$$
}
Notice that, for a monoid $M$, the unit of the above adjunction is
the isomorphism $\smash{M \stackrel{\cong}{\rightarrow} M\times 1 =
\mathcal{A}(M)(1) = (\mathcal{E} \mathrel{\circ} \mathcal{A})(M)}$, whose inverse is the
projection. Hence the adjunction is a reflection.
We now move to the bottom of Figure \ref{MonoidTriangleFig}. The
finitary Kleisli construction yields a functor from the category of
monads to the category of Lawvere theories (Corollary
\ref{Mnd2FCCatCor}). This functor has a left adjoint, as is proven in
the following two standard lemmas.
\begin{lemma}
\label{GLaw2MndLem}
Each Lawvere theory $\cat{L}$ gives rise to a monad $T_{\cat{L}}$ on
$\Cat{Sets}\xspace$, which is defined by
\begin{equation}
\label{LMEqn}
\begin{array}{rcl}
T_{\cat{L}}(X)
& = &
\Big(\coprod_{i\in\ensuremath{\mathbb{N}}}\cat{L}(1,i)\times X^{i}\Big)/\!\sim,
\end{array}
\end{equation}
\noindent where $\sim$ is the least equivalence relation such that,
for each $f\colon i\rightarrow m$ in $\aleph_{0} \hookrightarrow
\cat{L}$,
$$\begin{array}{rcl}
\kappa_m (f \mathrel{\circ} g, v)
& \sim &
\kappa_i(g, v\mathrel{\circ} f),
\qquad\mbox{where }g\in \cat{L}(1,i)\mbox{ and }v\in X^{m}.
\end{array}$$
\noindent Finally, the mapping $\cat{L} \mapsto T_{\cat{L}}$ yields a
functor $\mathcal{T}\colon\Cat{Law}\xspace \to \Cat{Mnd}\xspace$.
\end{lemma}
\begin{proof}
For a Lawvere theory $\cat{L}$, the unit map $\eta \colon X \to
T_{\cat{L}}(X) = \Big(\coprod_{i\in\ensuremath{\mathbb{N}}}\cat{L}(1,i)\times
X^{i}\Big)/\!\sim$ is given by
$$
\begin{array}{rcl}
x &\mapsto& [\kappa_1(id_1,x)].
\end{array}
$$
The multiplication $\mu\colon
T_{\cat{L}}^2(X)\rightarrow T_{\cat{L}}(X)$ is given by:
$$\begin{array}{rcl}
\mu([\kappa_{i}(g,v)])
& = &
[\kappa_{j}((g_{0}+\cdots+g_{i-1})\mathrel{\circ} g, [v_{0},\ldots, v_{i-1}])] \\
& & \qquad \mbox{where }g\colon 1\rightarrow i, \mbox{ and }
v\colon i\rightarrow T_{\cat{L}}(X) \mbox{ is written as} \\
& & \qquad\qquad v(a) = \kappa_{j_{a}}(g_{a}, v_{a}), \mbox{ for }a<i, \\
& & \qquad \mbox{and } j = j_{0} + \cdots + j_{i-1}.
\end{array}$$
\noindent It is straightforward to show that this map $\mu$ is
well-defined and that $\eta$ and $\mu$ indeed define a monad structure
on $T_{\cat{L}}$.
For each morphism of Lawvere theories $F\colon \cat{L} \to \cat{K}$,
one may define a monad morphism $\mathcal{T}(F) \colon T_{\cat{L}} \to
T_{\cat{K}}$ with components $\mathcal{T}(F)_{X} \colon [\kappa_i(g,v)] \mapsto
[\kappa_i(F(g),v)]$. This yields a functor $\mathcal{T} \colon \Cat{Law}\xspace \to
\Cat{Mnd}\xspace$. Checking the details is left to the reader.\hspace*{\fill}$\QEDbox$
\end{proof}
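To see the construction~\eqref{LMEqn} at work, here is our own illustration under an assumed encoding: take $\cat{L}$ to be the Lawvere theory of monoids and encode a map $1\rightarrow i$ in $\cat{L}$ as a word over $\{0,\ldots,i-1\}$. The relation $\sim$ then identifies precisely those pairs denoting the same list, so $T_{\cat{L}}$ is isomorphic to the list monad; the multiplication below follows the formula in the proof.

```python
# Illustration (assumed encoding): L = Lawvere theory of monoids; a map
# 1 -> i in L is a word over {0,...,i-1}; an element of T_L(X) is a
# pair (g, v) with v in X^i, taken up to the relation of the lemma.

def ev(g, v):
    """Canonical representative: the list denoted by [kappa_i(g, v)]."""
    return [v[k] for k in g]

def eta(x):
    """Unit: x |-> [kappa_1(id_1, x)]."""
    return ((0,), (x,))

def mu(g, v):
    """Multiplication, following the proof, with v[a] = (g_a, v_a)."""
    shifted, vals, off = [], [], 0
    for ga, va in v:                  # form the coproduct g_0 + ... + g_{i-1}
        shifted.append(tuple(k + off for k in ga))
        vals.extend(va)
        off += len(va)
    # compose with g: substitute the shifted word g_a for each letter a
    word = tuple(k for a in g for k in shifted[a])
    return (word, tuple(vals))

# mu is flattening, under the identification with the list monad:
g, v = (0, 1, 0), (((0, 0), ("a", "b")), ((), ()))
assert ev(*mu(g, v)) == ["a", "a", "a", "a"]
assert ev(*eta("x")) == ["x"]
```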
\auxproof{
\begin{enumerate}
\item $\mu$ is well-defined\\
Let $f \colon i \to m$ in $\aleph_0$, $g \in \cat{L}(1,i)$, $v \colon m \to \mathcal{T}(\cat{L})(X)$.\\
Write $v(a) = \kappa_{j_a}(h_a,v_a)$
$$
\begin{array}{rcll}
\lefteqn{\mu([\kappa_m(f \mathrel{\circ} g,v)])}\\
&=& [\kappa_{j_0 + \ldots + j_{m-1}}((h_0 + \ldots h_{m-1}) \mathrel{\circ} (f \mathrel{\circ} g), [v_0, \ldots v_{m-1}])]\\
&=&
[\kappa_{j_0 + \ldots + j_{m-1}}([\kappa_{f(0)}, \ldots \kappa_{f(i-1)}] \mathrel{\circ} (h_{f(0)} + \ldots h_{f(i-1)}) \mathrel{\circ} g, [v_0, \ldots v_{m-1}])]\\
&=&
[\kappa_{j_{f(0)} + \ldots +j_{f(i-1)}}((h_{f(0)} + \ldots h_{f(i-1)}) \mathrel{\circ} g, [v_0, \ldots v_{m-1}] \mathrel{\circ} [\kappa_{f(0)}, \ldots \kappa_{f(i-1)}])] &\text{eq.rel.}\\
&=&
[\kappa_{j_{f(0)} + \ldots +j_{f(i-1)}}((h_{f(0)} + \ldots h_{f(i-1)}) \mathrel{\circ} g, [v_{f(0)}, \ldots v_{f(m-1)}])]\\
&=&
\mu([\kappa_i(g,v \mathrel{\circ} f)])
\end{array}
$$
\item $\mu \mathrel{\circ} \eta \colon \mathcal{T}(\cat{L})(X) \to \mathcal{T}(\cat{L})(X) = id$\\
$$
\begin{array}{rcll}
(\mu \mathrel{\circ} \eta)([\kappa_i(g,v)]) &=& \mu([\kappa_1(\idmap_1,[\kappa_i(g,v)])]) \\
&=& [\kappa_i(g \mathrel{\circ} \idmap,v)] = [\kappa_i(g,v)]
\end{array}
$$
\item $\mu \mathrel{\circ} T(\eta) \colon \mathcal{T}(\cat{L})(X) \to \mathcal{T}(\cat{L})(X) = id$\\
$$
\begin{array}{rcll}
(\mu \mathrel{\circ} T\eta)([\kappa_i(g,v)]) &=& \mu([\kappa_i(g, \eta \mathrel{\circ} v)])\\
&=&
[\kappa_{1+\ldots+1}((\idmap+\ldots+\idmap)\mathrel{\circ} g, [v_0 + \ldots v_{i-1}])]\\
&=&
[\kappa_i(g,v)]
\end{array}
$$
\item $\mu \mathrel{\circ} T\mu = \mu \mathrel{\circ} \mu \colon \mathcal{T}(\cat{L})^3(X) \to \mathcal{T}(\cat{L})(X)$\\
Let $[\kappa_i(g,v)] \in \mathcal{T}(\cat{L})^3(X)$, write
$$
\begin{array}{rcl}
v(a) &=& [\kappa_{j_a}(g_a, v_a)] \,\text{where}\, g_a \colon 1 \to j_a \,\text{and}\, v_a \colon j_a \to \mathcal{T}(\cat{L})(X)
\end{array}
$$
$$
\begin{array}{rcll}
v_a(b) &=& [\kappa_{m^{a}_b}(h^a_b, w^{a}_b)]
\end{array}
$$
$$
\begin{array}{rcll}
\lefteqn{(\mu \mathrel{\circ} T\mu)([\kappa_i(g,v)])}\\
&=& \mu([\kappa_i(g, \mu \mathrel{\circ} v)])\\
&=&
\mu([\kappa_i(g, \lambda a.[\kappa_{m^a_0+\ldots+m^a_{j_a-1}}((h^a_0+\ldots h^a_{j_a-1})\mathrel{\circ} g_a, [w^a_0,\ldots,w^a_{j_a-1}])])\\
&=&
[\kappa_{m^0_0+\ldots+m^0_{j_1-1} + \ldots +m^{i-1}_{j_{i-1}-1}}(((h^0_0 + \ldots h^0_{j_0-1})\mathrel{\circ} g_0 + \ldots)\mathrel{\circ} g, [[w^0_0, \ldots w^0_{j_0-1}], \ldots])\\
&=&
[\kappa_{m^0_0+\ldots+m^0_{j_1-1} + \ldots +m^{i-1}_{j_{i-1}-1}}(((h^0_0 + \ldots h^i_{j_i-1}) \mathrel{\circ} (g_0 + \ldots g_{i-1})\mathrel{\circ} g, [w^0_0, \ldots w^{i-1}_{j_{i-1}-1}])\\
&=&
\mu([\kappa_{j_0 + \ldots j_{i-1}}((g_0 + \ldots g_{i-1}) \mathrel{\circ} g, [v_0, \ldots v_{i-1}])])\\
&=&
(\mu \mathrel{\circ} \mu)([\kappa_i(g,v)])
\end{array}
$$
\item For $G \colon \cat{L} \to \cat{K}$, $\mathcal{T}(G)$ is a monad morphism.\\
Preservation of $\eta$:
$$
\begin{array}{rcll}
(\mathcal{T}(G) \mathrel{\circ} \eta)(x) &=& \mathcal{T}(G)([\kappa_1(\idmap_1,x)])\\
&=&
[\kappa_1(G(\idmap_1),x)]\\
&=&
[\kappa_1(\idmap_1,x)]\\
&=&
\eta(x)
\end{array}
$$
Commutes with $\mu$:
$$
\begin{array}{rcl}
\lefteqn{(\mu \mathrel{\circ} \mathcal{T}(G) \mathrel{\circ} \mathcal{T}(\cat{L})(\mathcal{T}(G)))([\kappa_i(g,v)])}\\
&=&
(\mu \mathrel{\circ} \mathcal{T}(G))([\kappa_i(g,\mathcal{T}(G)\mathrel{\circ} v)])\\
&=&
\mu([\kappa_i(G(g),\mathcal{T}(G)\mathrel{\circ} v)])\\
&=&
[\kappa_{j_0+\ldots j_{i-1}}((G(g_0)+\ldots+G(g_{i-1}))\mathrel{\circ} G(g),[v_0,\ldots,v_{i-1}])]\\
&=&
[\kappa_{j_0+\ldots j_{i-1}}((G(g_0+\ldots+g_{i-1})\mathrel{\circ} g),[v_0,\ldots,v_{i-1}])]\\
&=&
\mathcal{T}(G)([\kappa_{j_0+\ldots j_{i-1}}((g_0+\ldots+g_{i-1})\mathrel{\circ} g,[v_0,\ldots,v_{i-1}])])\\
&=&
(\mathcal{T}(G)\mathrel{\circ}\mu)([\kappa_i(g,\mathcal{T}(G)\mathrel{\circ} v)])
\end{array}
$$
Naturality:\\
For a map $h \colon X \to Y$ in $\Cat{Sets}\xspace$,
$$ \mathcal{T}(\cat{L})(h) \colon \Big(\coprod_{i\in\ensuremath{\mathbb{N}}}\cat{L}(1,i)\times X^{i}\Big)/\!\sim \,\to \Big(\coprod_{i\in\ensuremath{\mathbb{N}}}\cat{L}(1,i)\times Y^{i}\Big)/\!\sim,$$
is given by
$$
[\kappa_i(g,w)] \mapsto [\kappa_i(g,h \mathrel{\circ} w)].
$$
$$
\begin{array}{rcll}
(\mathcal{T}(G)_Y \mathrel{\circ} \mathcal{T}(\cat{L})(h))([\kappa_i(g,v)])
&=&
(\mathcal{T}(G)_Y([\kappa_i(g,h \mathrel{\circ} v)])\\
&=&
[\kappa_i(G(g),h \mathrel{\circ} v)]\\
&=&
\mathcal{T}(\cat{K})(h)([\kappa_i(G(g),v)])\\
&=&
(\mathcal{T}(\cat{K})(h) \mathrel{\circ} \mathcal{T}(G)_X)([\kappa_i(g,v)])
\end{array}
$$
\item Functoriality of $\mathcal{T}$\\
$$
\begin{array}{rcll}
\mathcal{T}(\idmap)_X([\kappa_i(g,v)]) &=& [\kappa_i(id(g),v)] = [\kappa_i(g,v)]
\end{array}
$$
and
$$
\begin{array}{rcll}
\mathcal{T}(F \mathrel{\circ} G)_X([\kappa_i(g,v)]) = [\kappa_i((F \mathrel{\circ} G)(g),v)] = (\mathcal{T}(F) \mathrel{\circ} \mathcal{T}(G))([\kappa_i(g,v)])
\end{array}
$$
\end{enumerate}
}
\begin{lemma}
\label{AdjMndLvTLem}
The pair of functors $\mathcal{T} \colon \Cat{Law}\xspace \rightleftarrows \Cat{Mnd}\xspace
\colon \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$ forms an adjunction $\mathcal{T} \dashv \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$, as at
the bottom in Figure~\ref{MonoidTriangleFig}.
\end{lemma}
\begin{proof}
For a Lawvere theory $\cat{L}$ and a monad $T$ there are
(natural) bijective correspondences:
$$\begin{bijectivecorrespondence}
\correspondence[in \Cat{Mnd}\xspace]{\xymatrix{\mathcal{T}(\cat{L})\ar[r]^-{\sigma} & T}}
\correspondence[in \Cat{Law}\xspace]{\xymatrix{\cat{L}\ar[r]_-{F} & \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)}}
\end{bijectivecorrespondence}$$
\noindent Given $\sigma$, one defines a \Cat{Law}\xspace-map
$\overline{\sigma} \colon \cat{L} \to \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ which is the
identity on objects and sends a morphism $f \colon n \to m$ in
$\cat{L}$ to the morphism
$$
\xymatrix{
n \ar[rrr]^-{\lam{i<n}{[\kappa_m(f \mathrel{\circ} \kappa_i,\idmap_m)]}} &&&
\mathcal{T}(\cat{L})(m) \ar[r]^-{\sigma_m} & T(m)
}$$
\noindent in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$.
Conversely, given $F$, one defines a monad morphism $\overline{F}$
with components $\overline{F}_X \colon \mathcal{T}(\cat{L})(X) \to T(X)$
given, for $i \in \ensuremath{\mathbb{N}}$, $g \colon 1 \to i \in \cat{L}$ and $v \in
X^i$, by:
$$
\begin{array}{rcll}
[\kappa_i(g,v)] &\mapsto& (T(v) \mathrel{\circ} F(g))(*),
\end{array}
$$
where $*$ is the unique element of $1$. \hspace*{\fill}$\QEDbox$
\end{proof}
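The formula for $\overline{F}$ takes a concrete form under assumed encodings (our own illustration: $\cat{L}$ the Lawvere theory of monoids with maps $1\rightarrow i$ as words over $\{0,\ldots,i-1\}$, and $T$ the list monad):

```python
# Illustration (assumed encodings): F: L -> Kl_N(T) sends a word
# g: 1 -> i to the function 1 -> T(i) picking the word itself as a
# list of indices; then F-bar evaluates terms.

def F(g):
    """Image of g: 1 -> i under F, as a function 1 -> T(i)."""
    return lambda star: list(g)

def T(v):
    """Functor action T(v): T(i) -> T(X), for v in X^i."""
    return lambda xs: [v[k] for k in xs]

def Fbar(g, v):
    """F-bar_X([kappa_i(g, v)]) = (T(v) . F(g))(*)."""
    return T(v)(F(g)(()))

assert Fbar((0, 1, 0), ("a", "b")) == ["a", "b", "a"]
assert Fbar((), ()) == []    # the unit of the theory denotes the empty list
```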
\auxproof{
\begin{enumerate}
\item $\overline{\sigma}$ is a morphism in $\Cat{Law}\xspace$\\
\begin{itemize}
\item $\overline{\sigma}$ is a functor\\
For $n \in \ensuremath{\mathbb{N}}$, $i \in n$,
$$
\begin{array}{rcll}
\overline{\sigma}(\idmap_n)(i) &=& \sigma_n ([\kappa_n(\idmap_n \mathrel{\circ} \kappa_i, \idmap_n)])\\
&=&
\sigma_n([\kappa_n(\kappa_i, \idmap_n)])\\
&=&
\sigma_n([\kappa_1(\idmap_1, \idmap_n \mathrel{\circ} \kappa_i)])&\text{def. equivalence rel.}\\
&=&
\sigma_n([\kappa_1(\idmap_1, i)])\\
&=&
(\sigma_n \mathrel{\circ} \eta )(i)\\
&=&
\eta(i)&\text{$\sigma$ preserves $\eta$}\\
&=&
\idmap_n^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)}(i)
\end{array}
$$
For $n \xrightarrow{f} m \xrightarrow{g} k$,
$$
\begin{array}{rcll}
(\overline{\sigma}(g) \klafter \overline{\sigma}(f))(i)
&=&
(\mu \mathrel{\circ} T(\overline{\sigma}(g)) \mathrel{\circ} \overline{\sigma}(f))(i)\\
&=&
(\mu \mathrel{\circ} T(\overline{\sigma}(g) \mathrel{\circ} \sigma_m)(\kappa_m(f \mathrel{\circ} \kappa_i, \idmap_m))\\
&=&
(\mu \mathrel{\circ} \sigma_{T(k)} \mathrel{\circ} \mathcal{T}(\cat{L})(\overline{\sigma}(g)))(\kappa_m(f \mathrel{\circ} \kappa_i, \idmap_m)) \\
&=&
(\mu \mathrel{\circ} \sigma_{T(k)})(\kappa_m(f \mathrel{\circ} \kappa_i, \overline{\sigma}(g)))\\
&=&
(\mu \mathrel{\circ} \sigma_{T(k)})(\kappa_m(f \mathrel{\circ} \kappa_i, \sigma_k \mathrel{\circ} \lambda j \in m [\kappa_k(g \mathrel{\circ} \kappa_j,\idmap_k)]))\\
&=&
(\mu \mathrel{\circ} \sigma_{T(k)}\mathrel{\circ} \mathcal{T}(\cat{L})(\sigma))(\kappa_m(f \mathrel{\circ} \kappa_i, \lambda j \in m [\kappa_k(g \mathrel{\circ} \kappa_j,\idmap_k)]))\\
&=&
(\sigma_k \mathrel{\circ} \mu)(\kappa_m(f \mathrel{\circ} \kappa_i, \lambda j \in m [\kappa_k(g,\kappa_j,\idmap_k)]))\\
&=&
\sigma_k([\kappa_{\coprod_m k}\big((g\mathrel{\circ}\kappa_1 + \ldots + g \mathrel{\circ} \kappa_m)\mathrel{\circ}(f\mathrel{\circ} \kappa_i),[\idmap_k,\ldots \idmap_k]\big)\\
&=&
\sigma_k([\kappa_{\coprod_m k}\big((g+\ldots+g) \mathrel{\circ} (\kappa_1 + \ldots +\kappa_m)\mathrel{\circ}(f\mathrel{\circ} \kappa_i\big), \nabla)])\\
&=&
\sigma_k([\kappa_k\big(\nabla \mathrel{\circ} (g+\ldots+g) \mathrel{\circ} (\kappa_1 + \ldots +\kappa_m)\mathrel{\circ}(f\mathrel{\circ} \kappa_i\big), \idmap_k)])\\
&&
\text{equivalence relation}\\
&=&
\sigma_k([\kappa_k(g \mathrel{\circ} f \mathrel{\circ} \kappa_i, \idmap_k)])\\
&=&
\overline{\sigma}(g \mathrel{\circ} f)(i)
\end{array}
$$
\item $\overline{\sigma}$ preserves coproducts\\
Consider $\tilde{\kappa_1} \colon n \to n+m$ in $\cat{L}$
$$
\begin{array}{rcll}
\overline{\sigma}(\kappa_1)(i) &=& \sigma_{n+m}([\kappa_{n+m}(\tilde{\kappa_1} \mathrel{\circ} \kappa_i, \idmap_{n+m})])\\
&=&
\sigma_{n+m}([\kappa_1(\idmap_1, \tilde{\kappa_1}(i))]) &\text{equiv. rel.}\\
&=&
(\sigma_{n+m}\mathrel{\circ} \eta \mathrel{\circ} \tilde{\kappa_1})(i)\\
&=&
(\eta \mathrel{\circ} \tilde{\kappa_1}) (i)\\
&=&
\kappa_1^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}}(i)
\end{array}
$$
\end{itemize}
\item $\overline{F}\colon \mathcal{T}(\cat{L}) \to T$ is monad morphism.\\
\begin{itemize}
\item $\overline{F}$ is well-defined.\\
Let $f \colon i \to m$ in $\aleph_0$, $g \colon 1 \to i$ in $\cat{L}$, $w\colon m \to X$ in $\Cat{Sets}\xspace$. Then
$$
\begin{array}{rcll}
T(w) \mathrel{\circ} F(f_{\cat{L}} \mathrel{\circ} g) &=& T(w) \mathrel{\circ} (F(f_{\cat{L}}) \klafter F(g))\\
&=&
T(w) \mathrel{\circ} \mu \mathrel{\circ} T(F(f_{\cat{L}})) \mathrel{\circ} F(g)\\
&=&
T(w) \mathrel{\circ} \mu \mathrel{\circ} T(\eta) \mathrel{\circ} T(f) \mathrel{\circ} F(g)\\
&=&
T(w) \mathrel{\circ} T(f) \mathrel{\circ} F(g)\\
&=&
T(w \mathrel{\circ} f) \mathrel{\circ} F(g)
\end{array}
$$
\item Naturality of $\overline{F}$.\\
Let $h \colon X \to Y$,
$$
\begin{array}{rcll}
(T(h) \mathrel{\circ} \overline{F}_X)([\kappa_i(g,w)]) &=& (T(h) \mathrel{\circ} T(w) \mathrel{\circ} F(g))(*)\\
&=&
(T(h \mathrel{\circ} w) \mathrel{\circ} F(g))(*)\\
&=&
\overline{F}_Y([\kappa_i(g, h \mathrel{\circ} w)])\\
&=&
(\overline{F}_Y \mathrel{\circ} \mathcal{T}(\cat{L})(h))([\kappa_i(g,w)])
\end{array}
$$
\item Preservation of $\eta$
$$
\begin{array}{rcll}
(\overline{F}_X \mathrel{\circ} \eta)(x) &=& \overline{F}_X([\kappa_1(\idmap_1,x)])\\
&=&
(T(x) \mathrel{\circ} F(\idmap_1))(*) \\
&=&
(T(x) \mathrel{\circ} \eta_1)(*) &(\eta_1 = \idmap_1^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}})\\
&=&
\eta(x) &\text{naturality of $\eta$}
\end{array}
$$
\item Preservation of $\mu$\\
Let $[\kappa_i(g,v)] \in \mathcal{T}(\cat{L})^2(X) = \coprod_i \cat{L}(1,i) \times \mathcal{T}(\cat{L})(X)^i$ and write, for $a < i$, $v(a) = [\kappa_{j_a}(g_a,v_a)]$, then
$$
\begin{array}{rcl}
\lefteqn{(\overline{F}_X \mathrel{\circ} \mu)([\kappa_i(g,v)])}\\
&=& \overline{F}_X([\kappa_{j_0+\ldots+j_{i-1}}((g_0 + \ldots + g_{i-1}) \mathrel{\circ} g,[v_0,\ldots,v_{i-1}])])\\
&=&
(T([v_0,\ldots,v_{i-1}])\mathrel{\circ} F((g_0 + \ldots + g_{i-1}) \mathrel{\circ} g))(*)\\
&=&
(T([v_0,\ldots,v_{i-1}])\mathrel{\circ} F(g_0 + \ldots + g_{i-1}) \klafter F(g))(*)\\
&=&
(T([v_0,\ldots,v_{i-1}])\mathrel{\circ} \mu^T \mathrel{\circ} T(F(g_0 + \ldots + g_{i-1})) \mathrel{\circ} F(g))(*)\\
&=&
(T([v_0,\ldots,v_{i-1}])\mathrel{\circ} \mu^T \mathrel{\circ} T(F([\kappa_0 \mathrel{\circ} g_0, \ldots, \kappa_{i-1} \mathrel{\circ} g_{i-1}])) \mathrel{\circ} F(g))(*)\\
&=&
(T([v_0,\ldots,v_{i-1}])\mathrel{\circ} \mu^T \mathrel{\circ} T([F(\kappa_0) \klafter F(g_0), \ldots, F(\kappa_{i-1}) \klafter F(g_{i-1})]) \mathrel{\circ} F(g))(*)\\
&=&
(T([v_0,\ldots,v_{i-1}])\mathrel{\circ} \mu^T \mathrel{\circ} T([\mu \mathrel{\circ} T(F(\kappa_0)) \mathrel{\circ} F(g_0), \ldots ]) \mathrel{\circ} F(g))(*)\\
&=&
(T([v_0,\ldots,v_{i-1}])\mathrel{\circ} \mu^T \mathrel{\circ} T([\mu \mathrel{\circ} T(\eta \mathrel{\circ} \kappa_0) \mathrel{\circ} F(g_0), \ldots ]) \mathrel{\circ} F(g))(*)\\
&=&
(T([v_0,\ldots,v_{i-1}])\mathrel{\circ} \mu^T \mathrel{\circ} T([T(\kappa_0) \mathrel{\circ} F(g_0), \ldots, T(\kappa_{i-1}) \mathrel{\circ} F(g_{i-1})]) \mathrel{\circ} F(g))(*)\\
&=&
(T([v_0,\ldots,v_{i-1}])\mathrel{\circ} \mu^T \mathrel{\circ} T([T(\kappa_0), \ldots, T(\kappa_{i-1})] \mathrel{\circ} (F(g_0) + \ldots + F(g_{i-1}))) \mathrel{\circ} F(g))(*)\\
&=&
(\mu^T \mathrel{\circ} T^2([v_0,\ldots,v_{i-1}]) \mathrel{\circ} T([T(\kappa_0), \ldots, T(\kappa_{i-1})] \mathrel{\circ} (F(g_0) + \ldots + F(g_{i-1}))) \mathrel{\circ} F(g))(*)\\
&=&
(\mu^T \mathrel{\circ} T(T([v_0,\ldots,v_{i-1}]) \mathrel{\circ} [T(\kappa_0), \ldots, T(\kappa_{i-1})] \mathrel{\circ} (F(g_0) + \ldots + F(g_{i-1}))) \mathrel{\circ} F(g))(*)\\
&=&
(\mu^T \mathrel{\circ} T([T(v_0),\ldots,T(v_{i-1})] \mathrel{\circ} (F(g_0) + \ldots + F(g_{i-1}))) \mathrel{\circ} F(g))(*)\\
&=&
(\mu^T \mathrel{\circ} T([T(v_0) \mathrel{\circ} F(g_0),\ldots,T(v_{i-1}) \mathrel{\circ} F(g_{i-1})]) \mathrel{\circ} F(g))(*)\\
&=&
(\mu^T \mathrel{\circ} T([* \mapsto (T(v_0) \mathrel{\circ} F(g_0))(*),\ldots,* \mapsto (T(v_{i-1}) \mathrel{\circ} F(g_{i-1}))(*)]) \mathrel{\circ} F(g))(*)\\
&=&
(\mu^T \mathrel{\circ} T(\overline{F}_X \mathrel{\circ} [* \mapsto [\kappa_{j_0}(g_0,v_0)],\ldots,* \mapsto [\kappa_{j_{i-1}}(g_{i-1},v_{i-1})]]) \mathrel{\circ} F(g))(*)\\
&=&
(\mu^T \mathrel{\circ} T(\overline{F}_X \mathrel{\circ} [* \mapsto v(0),\ldots,* \mapsto v(i-1)]) \mathrel{\circ} F(g))(*)\\
&=&
(\mu^T \mathrel{\circ} T(\overline{F}_X \mathrel{\circ} v) \mathrel{\circ} F(g))(*)\\
&=&
(\mu^T \mathrel{\circ} \overline{F}_{T(X)})([\kappa_i(g,\overline{F}_X \mathrel{\circ} v)])\\
&=&
(\mu^T \mathrel{\circ} \overline{F}_{T(X)} \mathrel{\circ} \mathcal{T}(\cat{L})(\overline{F}_X))([\kappa_i(g,v)])
\end{array}
$$
\end{itemize}
\item $\overline{\overline{\sigma}} = \sigma$\\
$\sigma_X \colon \mathcal{T}(\cat{L})(X) = \coprod_i \cat{L}(1,i) \times X^i \to T(X)$
$$
\begin{array}{rcll}
\overline{\overline{\sigma}}([\kappa_i(g,w)]) &=& (T(w) \mathrel{\circ} \overline{\sigma}(g))(*)\\
&=&
(T(w) \mathrel{\circ} \sigma_i)([\kappa_i(g \mathrel{\circ} \kappa_0,\idmap)])\\
&=&
(T(w) \mathrel{\circ} \sigma_i)([\kappa_i(g,\idmap)])&(\kappa_0 \colon 1 \to 1 = \idmap)\\
&=&
(\sigma_X \mathrel{\circ} \mathcal{T}(\cat{L})(w))([\kappa_i(g,\idmap)])\\
&=&
\sigma_X([\kappa_i(g,w)])
\end{array}
$$
\item $\overline{\overline{F}} = F$\\
$F \colon \cat{L} \to \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$. Let $f \colon n \to m$ in $\cat{L}$.
Then $\overline{\overline{F}}(f) \colon n \to T(m)$.
$$
\begin{array}{rcll}
\overline{\overline{F}}(f)(i) &=& \overline{F}_m([\kappa_m(f \mathrel{\circ} \kappa_i,\idmap_m)])\\
&=&
(T(\idmap_m) \mathrel{\circ} F(f \mathrel{\circ} \kappa_i))(*)\\
&=&
(F(f) \klafter F(\kappa_i))(*)\\
&=&
(\mu \mathrel{\circ} T(F(f)) \mathrel{\circ} \eta \mathrel{\circ} \kappa_i)(*)\\
&=&
(\mu \mathrel{\circ} \eta \mathrel{\circ} F(f))(i) &\text{naturality of $\eta$}\\
&=&
F(f)(i)
\end{array}
$$
\item Naturality\\
For naturality consider:
$$\begin{bijectivecorrespondence}
\correspondence[in \Cat{Mnd}\xspace]{\xymatrix{\mathcal{T}(\cat{K})\ar[r]^-{\mathcal{T}(G)} & \mathcal{T}(\cat{L}) \ar[r]^-{\sigma} & T \ar[r]^-{\tau} & S}}
\correspondence[in \Cat{Law}\xspace]{\xymatrix{\cat{K}\ar[r]_-{G} & \cat{L} \ar[r]_-{\overline{\sigma}} & \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T) \ar[r]_-{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\tau)} & \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(S)}}
\end{bijectivecorrespondence}$$
Let $f \colon n \to m$ in $\cat{K}$.
\end{enumerate}
$$
\begin{array}{rcll}
\overline{\tau \mathrel{\circ} \sigma \mathrel{\circ} \mathcal{T}(G)}(f)(i)
&=&
(\tau \mathrel{\circ} \sigma \mathrel{\circ} \mathcal{T}(G))_m([\kappa_m(f \mathrel{\circ} \kappa_i^{\cat{K}},\idmap_m)])\\
&=&
(\tau_m \mathrel{\circ} \sigma_m)([\kappa_m(G(f \mathrel{\circ} \kappa_i^{\cat{K}}), \idmap_m)])\\
&=&
(\tau_m \mathrel{\circ} \sigma_m)([\kappa_m(G(f) \mathrel{\circ} \kappa_i^{\cat{L}}, \idmap_m)])\\
&=&
(\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\tau) \mathrel{\circ} \sigma_m)([\kappa_m(G(f) \mathrel{\circ} \kappa_i^{\cat{L}}, \idmap_m)])\\
&=&
(\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\tau) \mathrel{\circ} \overline{\sigma})(G(f))(i)\\
&=&
(\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\tau) \mathrel{\circ} \overline{\sigma} \mathrel{\circ} G)(f)(i)\\
\end{array}
$$
}
Finally, we consider the right-hand side of Figure \ref{MonoidTriangleFig}.
For each category $\cat{C}$ and object $X$ in $\cat{C}$, the homset
$\cat{C}(X,X)$ is a monoid, with composition as multiplication and the
identity as unit. The mapping $\cat{L} \mapsto
\mathcal{H}(\cat{L}) = \cat{L}(1,1)$ defines a functor
$\Cat{Law}\xspace \to \Cat{Mon}\xspace$. This functor is right adjoint to the
composite functor $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \circ \mathcal{A}$.
\begin{lemma}
\label{AdjCatMonLem}
The pair of functors $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \mathrel{\circ} \mathcal{A} \colon
\Cat{Mon}\xspace \rightleftarrows \Cat{Law}\xspace \colon \mathcal{H}$ forms an
adjunction $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \mathrel{\circ} \mathcal{A} \dashv \mathcal{H}$, as
on the right in Figure~\ref{MonoidTriangleFig}.
\end{lemma}
\begin{proof}
For a monoid $M$ and a Lawvere theory $\cat{L}$ there are
(natural) bijective correspondences:
$$\begin{bijectivecorrespondence}
\correspondence[in \Cat{Law}\xspace]{\xymatrix{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(M)\ar[r]^-{F} & \cat{L}}}
\correspondence[in \Cat{Mon}\xspace]{\xymatrix{M\ar[r]_-{f} & \mathcal{H}(\cat{L})}}
\end{bijectivecorrespondence}$$
\noindent Given $F$ one defines a monoid map $\overline{F}
\colon M\rightarrow \mathcal{H}(\cat{L}) = \cat{L}(1,1)$ by
$$s \mapsto F(1 \xrightarrow{\tuple{\lam{x}{s}}{!}} M \times 1).$$
Note that $1 \xrightarrow{\tuple{\lam{x}{s}}{!}} M \times 1 =
\mathcal{A}(M)(1)$ is an endomap on $1$ in
$\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(M)$. Since $F$ is the identity on objects it
sends this endomap to an element of $\cat{L}(1,1)$.
Conversely, given a monoid map $f\colon M\rightarrow \cat{L}(1,1)$ one
defines a \Cat{Law}\xspace-map $\overline{f} \colon \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(M)
\rightarrow \cat{L}$. It is the identity on objects and sends a
morphism $h \colon n \to m$ in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(M)$,
\textit{i.e.} $h \colon n \to M \times m$ in $\Cat{Sets}\xspace$, to the morphism
$$\xymatrix@C+1pc{
\overline{f}(h) = \Big(n
\ar[rr]^-{\big[\kappa_{h_{2}(i)} \mathrel{\circ} f(h_{1}(i))\big]_{i < n}}
&&m \Big)}.
$$
\noindent Here we write $h(i)\in M\times m$ as a pair $(h_{1}(i),
h_{2}(i))$. We leave further details to the reader. \hspace*{\fill}$\QEDbox$
\end{proof}
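As an informal illustration (not part of the formal development), Kleisli composition in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(M)$ multiplies the monoid labels accumulated along the way. The following Python sketch takes $M$ to be the monoid of strings under concatenation; all names in it are ours.

```python
# Sketch: morphisms n -> m in the Kleisli category of the writer monad
# A(M) = M x (-) are functions i |-> (s, j) with s in M and j < m.
# Here M is the monoid of strings under concatenation.

def kleisli_compose(g, h):
    """(g after h)(i) = (h1(i) * g1(h2(i)), g2(h2(i)))."""
    def composite(i):
        s, j = h(i)          # run h first, collecting its label s
        t, k = g(j)          # then run g from where h landed
        return (s + t, k)    # multiply the labels in M (concatenation)
    return composite

def kleisli_id(n):
    """Identity n -> n: attach the monoid unit (the empty string)."""
    return lambda i: ("", i)

# h : 2 -> 3 and g : 3 -> 2 in this Kleisli category
h = lambda i: [("a", 2), ("b", 0)][i]
g = lambda j: [("x", 1), ("y", 1), ("z", 0)][j]

gh = kleisli_compose(g, h)
print([gh(i) for i in range(2)])   # labels multiply along the composite
```

The identity laws hold because the empty string is the unit of $M$.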
\auxproof{
\begin{enumerate}
\item $\overline{F} \colon M \to \cat{L}(1,1)$ is monoid morphism.\\
For $s \in M$, define $\underline{s} = 1 \xrightarrow{\tuple{\lam{x}{s}}{!}} M \times 1$.
$$
\begin{array}{rcll}
\overline{F}(s) \cdot \overline{F}(t) &=& F(\underline{s}) \cdot F(\underline{t})\\
&=&
F(\underline{t}) \mathrel{\circ} F(\underline{s})\\
&=&
F(\underline{t} \klafter \underline{s})\\
&=&
F(\mu \mathrel{\circ} (\idmap \times \underline{t}) \mathrel{\circ} \underline{s})\\
&=&
F(\underline{s \cdot t}) = \overline{F}(s \cdot t).
\end{array}
$$
and
$$
\overline{F}(1) = F(\underline{1}) = F(\idmap_1) = \idmap_{F(1)} = \idmap_I.
$$
\item $\overline{f}\colon (\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \mathrel{\circ} \mathcal{A})(M) \to \cat{L}$ is a morphism in \Cat{Law}\xspace.
First we show that $\overline{f}$ is a functor.\\
Let $h \colon n \to M\times m$ and $g \colon m \to M \times p$ be morphisms in $(\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \mathrel{\circ} \mathcal{A})(M)$.
$$
\begin{array}{rcl}
g \klafter h &=& n \xrightarrow{h} M \times m \xrightarrow{\idmap \times g} M \times (M \times p) \xrightarrow{\mu} M \times p\\
&=&
i \mapsto (h_1(i),h_2(i)) \mapsto (h_1(i), (g_1(h_2(i)), g_2(h_2(i)))) \mapsto (h_1(i) \cdot g_1(h_2(i)), g_2(h_2(i)))
\end{array}
$$
Using this, we see that
$$
\begin{array}{rcl}
\overline{f}(g) \mathrel{\circ} \overline{f}(h) &=& \big[\kappa_{g_{2}(j)} \mathrel{\circ} f(g_{1}(j))\big]_{j \in m}\mathrel{\circ} \big[\kappa_{h_{2}(i)} \mathrel{\circ} f(h_{1}(i))\big]_{i \in n}\\
&=&
\big[\big[\kappa_{g_{2}(j)} \mathrel{\circ} f(g_{1}(j))\big]_{j \in m}\mathrel{\circ} \kappa_{h_{2}(i)} \mathrel{\circ} f(h_{1}(i))\big]_{i \in n}\\
&=&
\big[\kappa_{g_{2}(h_2(i))} \mathrel{\circ} f(g_{1}(h_2(i))) \mathrel{\circ} f(h_{1}(i))\big]_{i \in n}\\
&=&
\big[\kappa_{g_{2}(h_2(i))} \mathrel{\circ} f(h_{1}(i)\cdot g_{1}(h_2(i)))\big]_{i \in n}\\
&=&
\big[\kappa_{(g\klafter h)_2(i)} \mathrel{\circ} f((g\klafter h)_1(i))\big]_{i \in n}
\;=\; \overline{f}(g \klafter h)\\
\end{array}
$$
$\overline{f}$ also preserves the identity: $\idmap \colon n \to M \times n, i \mapsto (1,i)$ and therefore
$$
\begin{array}{rcl}
\overline{f}(\idmap_n) &=& \big[\kappa_{\idmap_{2}(i)} \mathrel{\circ} f(\idmap_{1}(i))\big]_{i \in n}\\
&=&
\big[\kappa_i \mathrel{\circ} f(1)\big]_{i \in n}\\
&=&
\big[\kappa_i \mathrel{\circ} \idmap\big]_{i \in n}\\
&=&
\idmap
\end{array}
$$
So $\overline{f}$ is a functor.
$\overline{f}$ preserves the coproduct structure as the canonical map
$$
\overline{f}(n) + \overline{f}(m) \xrightarrow{\tuple{\overline{f}(\kappa_1)}{\overline{f}(\kappa_2)}} \overline{f}(n+m)
$$
is the identity map, and therefore certainly an isomorphism.
\item $\overline{\overline{f}} = f$
$$
\overline{\overline{f}}(s) = \overline{f}(\underline{s}) = f(\underline{s}_1(*)) = f(s)
$$
\item $\overline{\overline{F}}=F$
It is clear that $\overline{\overline{F}}=F$ on objects. Now let $h \colon n \to M \times m$.
$$
\begin{array}{rcll}
\overline{\overline{F}}(h) &=& \big[\kappa_{h_{2}(i)} \mathrel{\circ} \overline{F}(h_{1}(i))\big]_{i \in n}\\
&=&
\big[\kappa_{h_{2}(i)} \mathrel{\circ} F(\underline{h_{1}(i)})\big]_{i \in n}\\
&=&
\big[F(\kappa_{h_{2}(i)}) \mathrel{\circ} F(\underline{h_{1}(i)})\big]_{i \in n}&\text{$F$ preserves coproducts}\\
&=&
\big[F(\kappa_{h_{2}(i)} \mathrel{\circ} \underline{h_{1}(i)})\big]_{i \in n}\\
&=&
\big[F(h \mathrel{\circ} \kappa_i)\big]_{i \in n}&\text{(1)}\\
&=&
F(h)
\end{array}
$$
where (1) relies on the fact that
$$
\begin{array}{rcl}
\kappa_{h_{2}(i)} \mathrel{\circ} \underline{h_{1}(i)} &=& 1 \xrightarrow{\underline{h_{1}(i)}} M\times 1 \xrightarrow{\idmap \times \kappa_{h_{2}(i)}} M \times (M \times m) \xrightarrow{\mu} M \times m \\
&&
* \mapsto (h_1(i),*) \mapsto (h_1(i),(1,h_2(i))) \mapsto (h_1(i),h_2(i)) = h(i)
\end{array}
$$
\item Naturality\\
Consider:
$$\begin{bijectivecorrespondence}
\correspondence[in \Cat{Law}\xspace]{\xymatrix{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(N)\ar[r]^-{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(f)} & \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(M) \ar[r]^-{F} & \cat{L} \ar[r]^-{G} & \cat{K}}}
\correspondence[in \Cat{Mon}\xspace]{\xymatrix{N\ar[r]_-{f} & M \ar[r]_-{\overline{F}} & \cat{L}(1,1) \ar[r]_-{\mathcal{H}(G)} & \cat{K}(1,1)}}
\end{bijectivecorrespondence}$$
To prove: $\overline{G \mathrel{\circ} F \mathrel{\circ} \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(f)} = \mathcal{H}(G) \mathrel{\circ} \overline{F} \mathrel{\circ} f$.
$$
\begin{array}{rcll}
\overline{G \mathrel{\circ} F \mathrel{\circ} \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(f)}(s)
&=&
(G \mathrel{\circ} F \mathrel{\circ} \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}(f))(\underline{s})\\
&=&
(G \mathrel{\circ} F)((f \times \idmap) \mathrel{\circ} \underline{s}) &\text{definition $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}$}\\
&=&
(G \mathrel{\circ} F)(\underline{f(s)})\\
&=&
(\mathcal{H}(G)\mathrel{\circ} \overline{F}\mathrel{\circ} f)(s)
\end{array}
$$
\end{enumerate}
}
Given a monad $T$ on $\Cat{Sets}\xspace$, $\mathcal{H}\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T) =
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)(1,1) = \Cat{Sets}\xspace(1, T(1))$ is a monoid, where the multiplication is
given by
$$
(1\xrightarrow{a} T(1))\cdot (1 \xrightarrow{b} T(1)) =
\big( 1 \xrightarrow{a} T(1) \xrightarrow{T(b)} T^2(1)
\xrightarrow{\mu} T(1)\big).
$$
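For a concrete instance one may take $T$ to be the list monad (our choice of example; the text does not fix a monad): a map $1 \to T(1)$ is then a list of units, the multiplication above is monad bind, and on lengths it is multiplication of natural numbers. A brief Python sketch, identifying a map $1 \to T(1)$ with its value:

```python
# The monoid Sets(1, T(1)) for T the list monad: an element of T(1) is a
# list of units, and a . b = mu(T(b)(a)) replaces each element of a by a
# copy of b, so lengths multiply.

def unit(x):
    return [x]                      # eta for the list monad

def mult(a, b):
    """a . b = mu o T(b) o a, as a function of the values of a and b."""
    return [y for _ in a for y in b]

two   = [(), ()]                    # an element of T(1) of size 2
three = [(), (), ()]

print(len(mult(two, three)))        # 2 * 3 = 6
```

The unit `unit(())` is neutral on both sides, matching the monoid laws.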
\noindent The functor $\mathcal{E}: \Cat{Mnd}\xspace(\cat{C}) \to \Cat{Mon}\xspace(\cat{C})$, defined
in Lemma \ref{Mnd2MonLem} also gives a multiplication on
$\Cat{Sets}\xspace(1,T(1)) \cong T(1)$, namely $\mu \mathrel{\circ} T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace
\colon T(1) \times T(1) \to T(1)$. These two multiplications coincide,
as demonstrated in the following diagram,
$$
\xymatrix@C+1pc{
1 \ar[r]_-{a}\ar @/^4ex/ [rrr]^-{\tuple{a}{b}} &
T(1) \ar[r]^-{\rho^{-1}} \ar[rd]^{T(\rho^{-1})}
\ar@/_{6ex}/[ddrr]_{T(b)} &
T(1) \times 1 \ar[r]_-{\ensuremath{\mathrm{id}} \times b} \ar[d]^{\ensuremath{\mathsf{st}}\xspace} &
T(1) \times T(1) \ar[d]^{\ensuremath{\mathsf{st}}\xspace}\ar @/^2ex/ [ddr]^{\cdot} & \\
&& T(1 \times 1) \ar[r]_-{T(\ensuremath{\mathrm{id}} \times b)} & T(1 \times T(1))
\ar[d]^{T(\lambda)} & \\
&&&T^2(1) \ar[r]^{\mu}&T(1)
}
$$
\noindent In fact, $\mathcal{E} \cong \mathcal{H}\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$, which completes
the picture from Figure \ref{MonoidTriangleFig}.
\subsection{Commutative monoids}\label{ComMonoidSubsec}
In this subsection we briefly summarize what will change in the triangle
in Figure~\ref{MonoidTriangleFig} when we restrict ourselves to
commutative monoids (at the top). This will lead to commutative
monads, and to tensor products. The latter are induced by
Lemma~\ref{KleisliStructLem}. The new situation is described in
Figure~\ref{ComMonoidTriangleFig}. For the adjunction between
commutative monoids and commutative monads we start with the following
basic result.
\begin{lemma}
\label{CMnd2CMonLem}
Let $T$ be a commutative monad on a category \Cat{C} with finite
products. The monoid $\mathcal{E}(T) = T(1)$ in $\Cat{C}$ from
Lemma~\ref{Mnd2MonLem} is then commutative.
\end{lemma}
\begin{proof}
Recall that the multiplication on $T(1)$ is given by $\mu \mathrel{\circ}
T(\lambda) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \colon T(1) \times T(1) \to T(1)$ and
commutativity of the monad $T$ means $\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace
= \mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace'$ where $\ensuremath{\mathsf{st}}\xspace' = T(\gamma) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace
\mathrel{\circ} \gamma$, for the swap map $\gamma$, see
Section~\ref{PrelimSec}. Then:
$$\hspace*{-2em}\begin{array}[b]{rcl}
\mu \mathrel{\circ} T(\lambda) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \gamma
& = &
\mu \mathrel{\circ} T(T(\lambda) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \gamma \\
& = &
T(\lambda) \mathrel{\circ} \mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \gamma \\
& = &
T(\rho) \mathrel{\circ} \mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace' \mathrel{\circ} \gamma
\quad\mbox{by commutativity of \rlap{$T$,}}\\
& & \qquad\mbox{and because $\rho = \lambda
\colon 1\times 1\rightarrow 1$} \\
& = &
\mu \mathrel{\circ} T(T(\rho) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \gamma) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \\
& = &
\mu \mathrel{\circ} T(\rho \mathrel{\circ} \gamma) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \\
& = &
\mu \mathrel{\circ} T(\lambda) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace.
\end{array}\eqno{\square}$$
\end{proof}
The proof of the next result is easy and left to the reader.
\begin{lemma}
\label{CMon2CMndLem}
A monoid $M$ is commutative (Abelian) if and only if the associated
monad $\mathcal{A}(M) = M\times (-)\colon \Cat{Sets}\xspace\rightarrow\Cat{Sets}\xspace$ is
commutative (as described in Section~\ref{PrelimSec}). \hspace*{\fill}$\QEDbox$
\end{lemma}
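Lemma~\ref{CMon2CMndLem} can be checked on examples: for the writer monad $\mathcal{A}(M) = M\times(-)$, the two double-strength composites $\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace$ and $\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace'$ send $((m,x),(n,y))$ to $(m\cdot n,(x,y))$ and $(n\cdot m,(x,y))$ respectively, so they agree exactly when $M$ is commutative. A Python sketch (our encoding; `mul` stands for the monoid operation):

```python
# The two double-strength composites for the writer monad A(M) = M x (-).

def dst_left(p, q, mul):
    """mu o T(st') o st: multiply the first label before the second."""
    (m, x), (n, y) = p, q
    return (mul(m, n), (x, y))

def dst_right(p, q, mul):
    """mu o T(st) o st': multiply the second label before the first."""
    (m, x), (n, y) = p, q
    return (mul(n, m), (x, y))

times  = lambda a, b: a * b   # commutative monoid (N, *, 1)
concat = lambda a, b: a + b   # non-commutative monoid of strings

p, q = (2, "x"), (3, "y")
print(dst_left(p, q, times) == dst_right(p, q, times))    # agree over (N, *)

r, s = ("ab", 0), ("cd", 1)
print(dst_left(r, s, concat) == dst_right(r, s, concat))  # differ over strings
```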
Next, we wish to define an appropriate category \Cat{SMLaw}\xspace of Lawvere
theories with symmetric monoidal structure $(\otimes, I)$. In order to
do so we need to take a closer look at the category $\aleph_0$
described in the introduction. Recall that $\aleph_0$ has $n\in\ensuremath{\mathbb{N}}$
as objects whilst morphisms $n\rightarrow m$ are functions
$\underline{n} \rightarrow \underline{m}$ in \Cat{Sets}\xspace, where, as
described earlier, $\underline{n} = \{0,1,\ldots,n-1\}$. This category
$\aleph_0$ has a monoidal structure, given on objects by
multiplication $n\times m$ of natural numbers, with $1\in\ensuremath{\mathbb{N}}$ as
tensor unit. Functoriality involves a (chosen) coordinatisation, in
the following way. For $f\colon n\rightarrow p$ and $g\colon
m\rightarrow q$ in $\aleph_{0}$ one obtains $f\otimes g \colon n\times
m\rightarrow p\times q$ as a function:
$$\begin{array}{rcl}
f\otimes g
& = &
\ensuremath{\mathsf{co}}\xspace_{p,q}^{-1} \mathrel{\circ} (f\times g) \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{n,m}
\;\colon\; \underline{n\times m}\longrightarrow \underline{p\times q},
\end{array}$$
\noindent where $\ensuremath{\mathsf{co}}\xspace$ is a coordinatisation function
$$\xymatrix@C+.5pc{
\underline{n\times m} = \{0,\ldots, (n\times m)-1\}
\ar[r]^-{\textsf{co}_{n,m}}_-{\cong} &
\{0,\ldots, n-1\} \times \{0,\ldots, m-1\} =
\underline{n}\times\underline{m},
}$$
\noindent given by
\begin{equation}
\label{CooEqn}
\begin{array}{rcl}
\ensuremath{\mathsf{co}}\xspace(c) = (a,b)
& \Leftrightarrow &
c = a\cdot m + b.
\end{array}
\end{equation}
\noindent We may write the inverse $\ensuremath{\mathsf{co}}\xspace^{-1} \colon \underline{n}
\times\underline{m} \rightarrow \underline{n\times m}$ as a small
tensor, as in $a\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} b = \ensuremath{\mathsf{co}}\xspace^{-1}(a,b)$. Then: $(f\otimes
g)(a\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} b) = f(a)\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} g(b)$. The monoidal isomorphisms in
$\aleph_0$ are then obtained from $\Cat{Sets}\xspace$, as in
$$\xymatrix{
\gamma^{\aleph_0} = \Big(\underline{n\times m}\ar[r]^-{\ensuremath{\mathsf{co}}\xspace} &
\underline{n}\times\underline{m}\ar[r]^-{\gamma^{\Cat{Sets}\xspace}} &
\underline{m}\times\underline{n}\ar[r]^-{\ensuremath{\mathsf{co}}\xspace^{-1}} &
\underline{m\times n}\Big).
}$$
\noindent Thus $\gamma^{\aleph_0}(a\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} b) = b\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}}
a$. Similarly, the associativity map $\alpha^{\aleph_0} \colon n\otimes
(m\otimes k) \rightarrow (n\otimes m)\otimes k$ is determined as
$\alpha^{\aleph_0}(a\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} (b\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} c)) = (a\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} b)\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}}
c$. The maps $\rho\colon n\times 1\rightarrow n$ in $\aleph_0$ are
identities.
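Concretely, the coordinatisation~\eqref{CooEqn} is division with remainder, and the tensor of functions acts coordinatewise on indices. A Python sketch (our encoding of the index arithmetic, not part of the formal development):

```python
# The coordinatisation (CooEqn): co(c) = (a, b) iff c = a*m + b, which is
# exactly Python's divmod(c, m).

def co(c, m):
    """Split an index c of n*m into coordinates (a, b)."""
    return divmod(c, m)

def co_inv(a, b, m):
    """The small tensor a (x) b = a*m + b."""
    return a * m + b

def tensor(f, g, m, q):
    """f (x) g : n*m -> p*q as co^{-1} o (f x g) o co."""
    def fg(c):
        a, b = co(c, m)
        return co_inv(f(a), g(b), q)
    return fg

def gamma(c, n, m):
    """The swap n*m -> m*n: gamma(a (x) b) = b (x) a."""
    a, b = co(c, m)
    return co_inv(b, a, n)
```

For instance, `gamma` composed with itself is the identity, as the symmetry of the tensor requires.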
\auxproof{
Formally,
$$\begin{array}{rcl}
\alpha^{\aleph_0}
& = &
\ensuremath{\mathsf{co}}\xspace_{n\times m,k}^{-1} \mathrel{\circ} (\ensuremath{\mathsf{co}}\xspace_{n,m}^{-1}\times\ensuremath{\mathrm{id}}) \mathrel{\circ}
\alpha^{\Cat{Sets}\xspace} \mathrel{\circ} (\ensuremath{\mathrm{id}}{}\times\ensuremath{\mathsf{co}}\xspace_{m,k})\mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{n,m\times k}.
\end{array}$$
}
This tensor $\otimes$ on $\aleph_0$ distributes over sum: the
canonical distributivity map $(n\otimes m)+(n\otimes k) \rightarrow
n\otimes (m+k)$ is an isomorphism. Its inverse maps $a\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} b \in
n\otimes (m+k)$ to $a\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} b \in \underline{n\times m}$ if $b<m$,
and to $a\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} (b-m)\in\underline{n\times k}$ otherwise.
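The inverse of the distributivity map can likewise be written out on indices; the following Python sketch (our index bookkeeping) tags each result with the summand it lands in:

```python
# Inverse of the canonical map (n (x) m) + (n (x) k) -> n (x) (m+k):
# a (x) b goes to the left summand n x m when b < m, and otherwise to the
# right summand n x k as a (x) (b - m).

def distr_inv(c, n, m, k):
    a, b = divmod(c, m + k)         # c = a (x) b in n (x) (m+k)
    if b < m:
        return ("left", a * m + b)          # a (x) b in n x m
    return ("right", a * k + (b - m))       # a (x) (b-m) in n x k
```

Running over all indices of $n\otimes(m+k)$ yields each element of the coproduct exactly once, confirming bijectivity on a small case.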
We thus define the objects of the category \Cat{SMLaw}\xspace to be symmetric
monoidal Lawvere theories $\cat{L}\in\Cat{Law}\xspace$ for which the map
$\aleph_{0}\rightarrow\cat{L}$ strictly preserves the monoidal
structure that has just been described via multiplication $(\times,
1)$ of natural numbers; additionally the coproduct structure must be
preserved, as in \Cat{Law}\xspace. Morphisms in \Cat{SMLaw}\xspace are morphisms in \Cat{Law}\xspace that
strictly preserve this tensor structure. We note that for
$\cat{L}\in\Cat{SMLaw}\xspace$ we have a distributivity $n\otimes m + n\otimes k
\congrightarrow n\otimes (m+k)$, since this isomorphism lies in the
range of the functor $\aleph_{0}\rightarrow\cat{L}$.
By Lemma~\ref{KleisliStructLem} we know that the Kleisli category
$\mathcal{K}{\kern-.2ex}\ell(T)$ is symmetric monoidal if $T$ is commutative. In order to see
that the finitary Kleisli category $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)\in\Cat{Law}\xspace$ is
also symmetric monoidal, we have to use the coordinatisation map described
in~\eqref{CooEqn}. For $f\colon n\rightarrow p$ and $g\colon m
\rightarrow q$ in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ we then obtain $f\otimes g \colon
n\times m\rightarrow p\times q$ as
$$\xymatrix@C-.5pc{
f\otimes g = \Big(\underline{n\times m}\ar[r]^-{\ensuremath{\mathsf{co}}\xspace} &
\underline{n}\times\underline{m}\ar[r]^-{f\times g} &
T(\underline{p})\times T(\underline{q})\ar[r]^-{\ensuremath{\mathsf{dst}}\xspace} &
T(\underline{p}\times\underline{q})\ar[r]^-{T(\ensuremath{\mathsf{co}}\xspace^{-1})} &
T(\underline{p\times q})\Big).
}$$
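For a commutative monad such as finite powerset (our choice of example; the text does not fix a monad), $\ensuremath{\mathsf{dst}}\xspace$ is given by cartesian product, and this tensor can be sketched in Python as follows:

```python
# f (x) g in the finitary Kleisli category of the finite-powerset monad
# T(X) = P(X): dst is cartesian product of sets, and co/co_inv are the
# coordinatisation maps of (CooEqn).  All encodings are ours.

def dst(S, T):
    """Double strength for finite powerset: cartesian product."""
    return {(x, y) for x in S for y in T}

def kl_tensor(f, g, m, q):
    """f (x) g : n*m -> T(p*q) as T(co^{-1}) o dst o (f x g) o co."""
    def fg(c):
        a, b = divmod(c, m)                                # co
        return {x * q + y for (x, y) in dst(f(a), g(b))}   # T(co^{-1})
    return fg

# f : 2 -> P(2) and g : 3 -> P(3) as Kleisli maps
f = lambda a: {0, 1} if a == 0 else {1}
g = lambda b: {(b + 1) % 3}

fg = kl_tensor(f, g, 3, 3)
print(fg(0))    # a=0, b=0: dst({0,1}, {1}) re-coordinatised into 2*3
```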
We recall from~\cite{KellyL80} (see
also~\cite{AbramskyC04,AbramskyC09}) that for a monoidal category
$\cat{C}$ the homset $\cat{C}(I,I)$ of endomaps on the tensor unit
forms a commutative monoid. This applies in particular to Lawvere
theories $\cat{L}\in\Cat{SMLaw}\xspace$, and yields a functor $\mathcal{H}\colon
\Cat{SMLaw}\xspace \rightarrow \Cat{Mon}\xspace$ given by $\mathcal{H}(\cat{L}) =
\cat{L}(1,1)$, where $1\in\cat{L}$ is the tensor unit. Thus we almost
have a triangle of adjunctions as in
Figure~\ref{ComMonoidTriangleFig}. We only need to check the
following result.
\auxproof{ Starting from the middle $I$ on the left, $t\mathrel{\circ} s\colon
I\rightarrow I$ is the upper path $t \mathrel{\circ} \rho^{-1} \mathrel{\circ} \lambda
\mathrel{\circ} s$ to the middle $I$ on the right. Similarly $s\mathrel{\circ} t$ is
the lower path.
$$\xymatrix@C13ex@R-1ex{
I \ar_-{\lambda}^-{\cong}[r] & I \otimes I \ar@{=}[r] & I \otimes
I \ar^-{\ensuremath{\mathrm{id}} \otimes t}[d] \ar^-{\cong}_-{\rho^{-1}}[r] & I \ar^-{t}[d] \\
I \ar^-{\cong}_-{\lambda=\rho}[r] \ar^-{s}[u] \ar_-{t}[d] & I
\otimes I \ar^-{s \otimes \ensuremath{\mathrm{id}}}[u]
\ar_-{\ensuremath{\mathrm{id}} \otimes t}[d] \ar^-{s \otimes t}[r] & I \otimes I
\ar^-{\cong}_-{\lambda^{-1}=\rho^{-1}}[r] & I \\
I \ar_-{\rho}^-{\cong}[r] & I \otimes I \ar@{=}[r] & I \otimes I
\ar_-{s \otimes \ensuremath{\mathrm{id}}}[u] \ar^-{\cong}_-{\lambda^{-1}}[r] & I. \ar_-{s}[u]
}$$
}
\begin{lemma}
\label{LMCommLem}
The functor $\mathcal{T}\colon\Cat{Law}\xspace\rightarrow\Cat{Mnd}\xspace$ defined in~\eqref{LMEqn}
restricts to $\Cat{SMLaw}\xspace \rightarrow \Cat{CMnd}\xspace$. Further, this restriction is
left adjoint to $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\colon \Cat{CMnd}\xspace\rightarrow \Cat{SMLaw}\xspace$.
\end{lemma}
\begin{proof}
For $\cat{L}\in\Cat{SMLaw}\xspace$ we define a map
$$\xymatrix@C-.5pc{
\mathcal{T}(\cat{L})(X)\times \mathcal{T}(\cat{L})(Y)\ar[rr]^-{\ensuremath{\mathsf{dst}}\xspace} & &
\mathcal{T}(\cat{L})(X\times Y) \\
\big([\kappa_{i}(g,v)], [\kappa_{j}(h,w)]\big)\ar@{|->}[rr] & &
[\kappa_{i\times j}(g\otimes h, (v\times w) \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j})],
}$$
\noindent where $g\colon 1\rightarrow i$ and $h\colon 1\rightarrow j$
in $\cat{L}$ yield $g\otimes h\colon 1 = 1\otimes 1 \rightarrow
i\otimes j = i\times j$, and $\ensuremath{\mathsf{co}}\xspace$ is the coordinatisation
function~\eqref{CooEqn}. Then one can show that both $\mu \mathrel{\circ}
\mathcal{T}(\cat{L})(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace$ and $\mu \mathrel{\circ} \mathcal{T}(\cat{L})(\ensuremath{\mathsf{st}}\xspace)
\mathrel{\circ} \ensuremath{\mathsf{st}}\xspace'$ are equal to $\ensuremath{\mathsf{dst}}\xspace$. This makes $\mathcal{T}(\cat{L})$ a
commutative monad.
In order to check that the adjunction $\mathcal{T} \dashv \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$
restricts, we only need to verify that the unit $\cat{L} \rightarrow
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\mathcal{T}({\cat{L}}))$ strictly preserves tensors. This is
easy. \hspace*{\fill}$\QEDbox$
\auxproof{
We shall use the following formulation of multiplication $\mu\colon
\mathcal{T}(\cat{L})(\mathcal{T}(\cat{L})(X))\rightarrow \mathcal{T}(\cat{L})(X)$:
$$\begin{array}{rcl}
\mu([\kappa_{i}(g,v)])
& = &
[\kappa_{j}((g_{0}+\cdots+g_{i-1})\mathrel{\circ} g, [v_{0},\ldots, v_{i-1}])] \\
& & \qquad \mbox{where }g\colon 1\rightarrow i, \mbox{ and }
v\colon i\rightarrow \mathcal{T}(\cat{L})(X) \mbox{ is written as} \\
& & \qquad\qquad v(a) = [\kappa_{j_{a}}(g_{a}, v_{a})], \mbox{ for }a<i, \\
& & \qquad \mbox{and } j = j_{0} + \cdots + j_{i-1}.
\end{array}$$
\noindent We note that the strength map $\ensuremath{\mathsf{st}}\xspace\colon \mathcal{T}(\cat{L})(X)
\times Y \rightarrow \mathcal{T}(\cat{L})(X\times Y)$ is given by:
$$\begin{array}{rcl}
([\kappa_{i}(g, v)], y)
& \longmapsto &
[\kappa_{i}(g, \lam{a<i}{(v(a), y)})].
\end{array}$$
We use that the following ``left distributivity'' map is an identity
in $\aleph_0$.
\begin{equation}
\label{LeftDistrEqn}
\xymatrix@R-2pc@C-1pc{
\mathsf{ld} = \Big(i\times j = j+\cdots+j\ar[r]^-{=} &
1\otimes j+\cdots+ 1\otimes j\ar[r]^-{\cong} &
(1+\cdots+1)\otimes j = i\otimes j\Big)
}
\end{equation}
\noindent Thus, omitting equivalence brackets,
$$\begin{array}{rcl}
\lefteqn{\big(\mu \mathrel{\circ} \mathcal{T}(\cat{L})(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace)
(\kappa_{i}(g,v), \kappa_{j}(h,w))} \\
& = &
\big(\mu \mathrel{\circ} \mathcal{T}(\cat{L})(\ensuremath{\mathsf{st}}\xspace')\big)
\big(\kappa_{i}(g, \lam{a<i}{(v(a), \kappa_{j}(h,w))})\big) \\
& = &
\mu\big(\kappa_{i}(g, \lam{a<i}{\ensuremath{\mathsf{st}}\xspace'(v(a), \kappa_{j}(h,w))})\big) \\
& = &
\mu\big(\kappa_{i}(g, \lam{a<i}{\kappa_{j}(h,
\lam{b<j}{(v(a),w(b))})})\big) \\
& = &
\kappa_{i\times j}((h+\cdots+h)\mathrel{\circ} g, [\lam{b<j}{(v(a),w(b))}]_{a<i}) \\
& = &
\kappa_{i\times j}((h+\cdots+h)\mathrel{\circ} g, (v\times w)\mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j}) \\
& = &
\kappa_{i\times j}(\mathsf{ld} \mathrel{\circ} (h+\cdots+h)\mathrel{\circ} g,
(v\times w)\mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j}) \\
& & \qquad \mbox{because this $\mathsf{ld}$ from~\eqref{LeftDistrEqn}
is the identity in $\aleph_0$} \\
& = &
\kappa_{i\times j}(g\otimes h, (v\times w)\mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j})
\qquad \mbox{see below} \\
& = &
\ensuremath{\mathsf{dst}}\xspace(\kappa_{i}(g,v), \kappa_{j}(h,w)).
\end{array}$$
\noindent We still need to check:
$$\xymatrix@C+1.8pc{
1\ar[r]^-{g}\ar[dd]_{\lambda^{-1}}^{=} &
i = 1+\cdots+1\ar[r]^-{h+\cdots+h}\ar[d]^{\lambda^{-1}+\cdots+\lambda^{-1}}
\ar @/_12ex/ [dd]_{\lambda^{-1}} &
j+\cdots+j\ar[d]_{\lambda^{-1}+\cdots+\lambda^{-1}}^{=}
\ar @/^12ex/[dd]^{\mathsf{ld}} \\
&
1\otimes 1+\cdots+ 1\otimes 1\ar[d]^{[\kappa_{a}\otimes\ensuremath{\mathrm{id}}]_{a<i}}_{\cong}
\ar[r]^-{\ensuremath{\mathrm{id}}\otimes h+\cdots+ \ensuremath{\mathrm{id}}{}\otimes h} &
1\otimes j+\cdots+ 1\otimes j\ar[d]_{[\kappa_{a}\otimes\ensuremath{\mathrm{id}}]_{a<i}}^{\cong} \\
1\otimes 1\ar[r]^-{g\otimes\ensuremath{\mathrm{id}}} &
(1+\cdots+1)\otimes 1\ar[r]^-{\ensuremath{\mathrm{id}}\otimes h} &
(1+\cdots+1)\otimes j\rlap{$\;=i\times j$}
}$$
\noindent Next, for the other composite we get:
$$\begin{array}{rcl}
\lefteqn{\big(\mu \mathrel{\circ} \mathcal{T}(\cat{L})(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace')
(\kappa_{i}(g,v), \kappa_{j}(h,w))} \\
& = &
\big(\mu \mathrel{\circ} \mathcal{T}(\cat{L})(\ensuremath{\mathsf{st}}\xspace)\big)
\big(\kappa_{j}(h, \lam{b<j}{(\kappa_{i}(g,v), w(b))})\big) \\
& = &
\mu\big(\kappa_{j}(h, \lam{b<j}{\ensuremath{\mathsf{st}}\xspace(\kappa_{i}(g,v), w(b))})\big) \\
& = &
\mu\big(\kappa_{j}(h, \lam{b<j}{\kappa_{i}(g,
\lam{a<i}{(v(a),w(b))})})\big) \\
& = &
\kappa_{j\times i}((g+\cdots+g)\mathrel{\circ} h, [\lam{a<i}{(v(a),w(b))}]_{b<j}) \\
& = &
\kappa_{j\times i}((g+\cdots+g)\mathrel{\circ} h, (v\times w) \mathrel{\circ} \gamma^{\Cat{Sets}\xspace}
\mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{j,i}) \\
& = &
\kappa_{j\times i}((g+\cdots+g)\mathrel{\circ} h, (v\times w) \mathrel{\circ}
\ensuremath{\mathsf{co}}\xspace_{i,j} \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j}^{-1} \mathrel{\circ} \gamma^{\Cat{Sets}\xspace}
\mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{j,i}) \\
& = &
\kappa_{j\times i}((g+\cdots+g)\mathrel{\circ} h, (v\times w) \mathrel{\circ}
\ensuremath{\mathsf{co}}\xspace_{i,j} \mathrel{\circ} \gamma^{\aleph_{0}}) \\
& = &
\kappa_{i\times j}(\gamma^{\aleph_{0}} \mathrel{\circ} (g+\cdots+g)\mathrel{\circ} h,
(v\times w) \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j}) \\
& = &
\kappa_{i\times j}(\gamma^{\aleph_{0}} \mathrel{\circ} \mathsf{ld} \mathrel{\circ}
(g+\cdots+g)\mathrel{\circ} h, (v\times w) \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j}) \\
& = &
\kappa_{i\times j}(\gamma^{\aleph_{0}} \mathrel{\circ} (h\otimes g),
(v\times w) \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j}) \\
& = &
\kappa_{i\times j}((g\otimes h) \mathrel{\circ} \gamma^{\aleph_{0}},
(v\times w) \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j}) \\
& = &
\kappa_{i\times j}(g\otimes h, (v\times w) \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{i,j}) \\
& & \qquad \mbox{because $\gamma^{\aleph_{0}}\colon 1\otimes 1\rightarrow
1\otimes 1$ is the identity} \\
& = &
\ensuremath{\mathsf{dst}}\xspace(\kappa_{i}(g,v), \kappa_{j}(h,w)).
\end{array}$$
For the adjunction $\mathcal{T}\colon \Cat{SMLaw}\xspace \leftrightarrows \Cat{CMnd}\xspace \colon
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$ we only need to check that the unit $\eta\colon \cat{L}
\rightarrow \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\mathcal{T}(\cat{L}))$ is a map in \Cat{SMLaw}\xspace. On
objects it is of course the identity. It sends a morphism
$f\colon n\rightarrow p$ in \cat{L} to the map
$\eta(f)\colon n\rightarrow T_{\cat{L}}(p)$ given by:
$$\begin{array}{rcl}
\eta(f)(i)
& = &
\kappa_{p}(f\mathrel{\circ} \kappa_{i}, \ensuremath{\mathrm{id}}_{p}),
\end{array}$$
\noindent see the proof of Lemma~\ref{AdjMndLvTLem}. We need to check
that it preserves tensor. For an additional map $g\colon m\rightarrow
q$ in \cat{L} we get, for $i<n\times m$, say $i = a\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}} b = a\cdot
m + b$, for $a<n$ and $b<m$,
$$\begin{array}{rcl}
\big(\eta(f)\otimes\eta(g)\big)(i)
& = &
\big(T_{\cat{L}}(\ensuremath{\mathsf{co}}\xspace_{p,q}^{-1}) \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace \mathrel{\circ} (\eta(f)\times \eta(g))
\mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{n,m}\big)(i) \\
& = &
T_{\cat{L}}(\ensuremath{\mathsf{co}}\xspace_{p,q}^{-1})(\ensuremath{\mathsf{dst}}\xspace(\eta(f)(a), \eta(g)(b))) \\
& = &
T_{\cat{L}}(\ensuremath{\mathsf{co}}\xspace_{p,q}^{-1})(\ensuremath{\mathsf{dst}}\xspace(\kappa_{p}(f\mathrel{\circ}\kappa_{a}, \ensuremath{\mathrm{id}}_{p}),
\kappa_{q}(g \mathrel{\circ} \kappa_{b}, \ensuremath{\mathrm{id}}_{q}))) \\
& = &
T_{\cat{L}}(\ensuremath{\mathsf{co}}\xspace_{p,q}^{-1})(\kappa_{p\times q}(
(f\mathrel{\circ}\kappa_{a})\otimes (g\mathrel{\circ}\kappa_{b}), \ensuremath{\mathsf{co}}\xspace_{p,q})) \\
& = &
\kappa_{p\times q}((f\otimes g) \mathrel{\circ} (\kappa_{a}\otimes\kappa_{b}),
\ensuremath{\mathsf{co}}\xspace_{p,q}^{-1} \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{p,q}) \\
& \smash{\stackrel{*}{=}} &
\kappa_{p\times q}((f\otimes g)\mathrel{\circ} \kappa_{i}, \ensuremath{\mathrm{id}}_{p\times q}) \\
& = &
\eta(f\otimes g)(i)
\end{array}$$
\noindent where the marked equation holds because in $\aleph_0$
(and hence in $\cat{L}$),
$$\begin{array}{rcl}
\kappa_{a}\otimes \kappa_{b}
& = &
\ensuremath{\mathsf{co}}\xspace_{n,m}^{-1} \mathrel{\circ} (\kappa_{a}\times \kappa_{b}) \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{1,1} \\
& = &
\mathrel{\raisebox{.05pc}{$\scriptstyle \otimes$}}\; \mathrel{\circ} (\kappa_{a}\times \kappa_{b}) \mathrel{\circ} \ensuremath{\mathsf{co}}\xspace_{1,1} \\
& = &
\kappa_{i}.
\end{array}$$
}
\end{proof}
\begin{figure}
$$\xymatrix@C+.5pc{
& & \Cat{CMon}\xspace\ar@/_2ex/ [ddll]_{\cal A}\ar@/_2ex/ [ddrr]_(0.4){\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{A}} \\
& \dashv & & \dashv & \\
\Cat{CMnd}\xspace\ar @/_2ex/[rrrr]_{\mathcal{K}{\kern-.2ex}\ell_\ensuremath{\mathbb{N}}}
\ar@/_2ex/ [uurr]_(0.6){\;{\mathcal{E}} \cong \mathcal{H}\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}} & & \bot & &
\Cat{SMLaw}\xspace\ar@/_2ex/ [uull]_{\mathcal{H}}\ar @/_2ex/[llll]_{\mathcal{T}}
}$$
\caption{Commutative version of Figure~\ref{MonoidTriangleFig}, with
commutative monoids, commutative monads and symmetric monoidal
Lawvere theories.}
\label{ComMonoidTriangleFig}
\end{figure}
\section{Additive monads}\label{AMndSec}
Having an adjunction between commutative monoids and commutative
monads (Figure~\ref{ComMonoidTriangleFig}) raises the question whether
we may also define an adjunction between commutative semirings and some
specific class of monads. It will turn out that so-called additive
commutative monads are needed here. In this section we will define and
study such additive (commutative) monads and see how they relate to
biproducts in their Kleisli categories and categories of algebras.
We consider monads on a category $\cat{C}$ with both
finite products and coproducts. If, for a monad
$T$ on $\cat{C}$, the object $T(0)$ is final---\textit{i.e.}~satisfies
$T(0) \cong 1$---then $0$ is both initial and final in the Kleisli
category $\mathcal{K}{\kern-.2ex}\ell(T)$. Such an object that is both initial and final is
called a \emph{zero object}.
The converse also holds: if $0 \in \mathcal{K}{\kern-.2ex}\ell(T)$ is a zero object, then
$T(0)$ is final in $\cat{C}$. Although we don't use this in the
remainder of this paper, we also mention a related result on
the category of Eilenberg-Moore algebras. The proofs are simple and
are left to the reader.
\begin{lemma}\label{zerolem}
For a monad $T$ on a category $\cat{C}$ with finite products $(\times,
1)$ and coproducts $(+,0)$, the following statements are equivalent.
\begin{enumerate}
\renewcommand{\theenumi}{(\roman{enumi})}
\item $T(0)$ is final in $\cat{C}$;
\item $0 \in \mathcal{K}{\kern-.2ex}\ell(T)$ is a zero object;
\item $1 \in \textsl{Alg}\xspace(T)$ is a zero object. \hspace*{\fill}$\QEDbox$
\end{enumerate}
\end{lemma}
\auxproof{
(i) $\Rightarrow$ (ii):
For each object $X$, there is a unique map $0 \to T(X)$ in $\cat{C}$
(i.e. $0 \to X$ in $\mathcal{K}{\kern-.2ex}\ell(T)$), by initiality of $0$. Furthermore there
is a unique map $X \to T(0)$ in $\cat{C}$ (i.e. $X \to 0$ in
$\mathcal{K}{\kern-.2ex}\ell(T)$), as by (i) $T(0)$ is final.\\
(ii) $\Rightarrow$ (i):
As 0 is a zero object in $\mathcal{K}{\kern-.2ex}\ell(T)$, there is for each object $X \in
\cat{C}$, a unique map $X \to 0$ in $\mathcal{K}{\kern-.2ex}\ell(T)$ (i.e. a unique map $X \to
T(0)$ in $\cat{C}$) so $T(0)$ is final.\\
(i) $\Rightarrow$ (iii):
As the free construction $\cat{C} \to \textsl{Alg}\xspace(T), X \mapsto (T^2(X)
\xrightarrow{\mu} T(X))$ preserves coproducts, $(T^2(0)
\xrightarrow{\mu} T(0))$ is initial in $\textsl{Alg}\xspace(T)$. By (i), $T(0) \cong 1$,
so $1$ is initial in $\textsl{Alg}\xspace(T)$. Clearly $1$ is also final, so it is a
zero object.\\
(iii) $\Rightarrow$ (i):
As $T(0)$ is initial in $\textsl{Alg}\xspace(T)$ and (by assumption) 1 is a
zero-object in $\textsl{Alg}\xspace(T)$ (hence in particular also initial), $T(0)
\cong 1$ and $T(0)$ is final.
}
A zero object yields, for any pair of objects $X, Y$, a unique ``zero
map'' $0_{X,Y}\colon X \to 0 \to Y$ between them. In a Kleisli
category $\mathcal{K}{\kern-.2ex}\ell(T)$ for a monad $T$ on $\cat{C}$, this zero map $0_{X,Y}
\colon X \to Y$ is the following map in $\cat{C}$
\begin{equation}
\label{KleisliZeroMap}
\xymatrix{
0_{X,Y} =
\Big(X\ar[r]^-{!_X} & 1 \cong T(0) \ar[r]^-{T(!_Y)} & T(Y)\Big).
}
\end{equation}
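As a concrete illustration (not part of the formal development), the zero map can be computed for the list monad on sets: $T(0)$ is the set of lists over the empty set, whose only element is the empty list, so $T(0)$ is final. A minimal Python sketch:

```python
# Illustration for the list monad T(X) = lists over X.
# T(0), the lists over the empty set, contains only the empty list,
# so T(0) is final and 0 is a zero object in the Kleisli category.

def zero_map(x):
    """The zero map 0_{X,Y} : X -> T(Y), factoring through T(0)."""
    t0 = []                   # !_X followed by 1 ~ T(0): the empty list
    return [y for y in t0]    # T(!_Y) sends the empty list to itself
```

Every element of $X$ is sent to the same value, namely the single element of $T(0)$ pushed forward into $T(Y)$.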
\auxproof{
We check that this map is the composite of the two unique maps
$X\dashrightarrow 0 \dashrightarrow Y$ in $\mathcal{K}{\kern-.2ex}\ell(T)$, where
$X\dashrightarrow 0$ is $X \stackrel{!}{\rightarrow} 1 \congrightarrow
T(0)$, and $0\dashrightarrow Y$ is $!_{T(Y)}$, as in:
$$
\xymatrix{
X \ar[r]^{!} & 1 \ar[r]^-{\cong} &T(0) \ar[rr]^-{T(!_{TY})} \ar[rd]_-{T(!_{Y})} && T^2(Y) \ar[r]^-{\mu} & T(Y) \\
&&&T(Y) \ar[ru]^-{T(\eta)} \ar[rru]_-{\ensuremath{\mathrm{id}}}
}
$$
}
\noindent For convenience, we make some basic properties of this zero
map explicit.
\begin{lemma}
\label{zeroproplem}
Assume $T(0)$ is final, for a monad $T$ on $\cat{C}$. The resulting
zero maps $0_{X,Y} \colon X \to T(Y)$ from~\eqref{KleisliZeroMap} make
the following diagrams in \cat{C} commute
$$\xymatrix@R-0.5pc{
X\ar[r]^-{0}\ar[dr]_{0} & T^{2}(Y)\ar[d]^{\mu}
& T(X)\ar[l]_-{T(0)}\ar[dl]^{0}
&
X\ar[r]^-{0}\ar[d]_{f}\ar[dr]_-{0} & T(Y)\ar[d]^{T(f)}
&
X\ar[r]^-{0}\ar[dr]_{0} & T(Y)\ar[d]^{\sigma_Y}
\\
& T(Y) &
&
Y\ar[r]_-{0} & T(Z) &
& S(Y)
}$$
\noindent where $f\colon Y\rightarrow Z$ is a map in \cat{C} and
$\sigma\colon T\rightarrow S$ is a map of monads. \hspace*{\fill}$\QEDbox$
\end{lemma}
\auxproof{
We write $\langle\rangle$ for the unique map
$X\rightarrow 1\congrightarrow T(0)$ to the final object $0$ in
$\mathcal{K}{\kern-.2ex}\ell(T)$ in:
$$\begin{array}{rcll}
\mu \mathrel{\circ} 0_{X,TY}
& = &
\mu \mathrel{\circ} T(!_{T(Y)}) \mathrel{\circ} \langle\rangle \\
& = &
\mu \mathrel{\circ} T(\eta) \mathrel{\circ} T(!_{Y}) \mathrel{\circ} \langle\rangle \\
& = &
T(!_{Y}) \mathrel{\circ} \langle\rangle \\
& = &
0_{X,Y} \\
\mu \mathrel{\circ} T(0_{X,Y})
& = &
\mu_{Y} \mathrel{\circ} T(T(!_{Y}) \mathrel{\circ} \langle\rangle) \\
& = &
T(!_{Y}) \mathrel{\circ} \mu_{0} \mathrel{\circ} \langle\rangle \\
& = &
T(!_{Y}) \mathrel{\circ} \langle\rangle
& \mbox{since $T(0)$ is final} \\
& = &
0_{T(X), Y} \\
0_{Y,Z} \mathrel{\circ} f
& = &
T(!_{Z}) \mathrel{\circ} \langle\rangle \mathrel{\circ} f \\
& = &
T(!_{Z}) \mathrel{\circ} \langle\rangle \\
& = &
0_{X,Z} \\
T(f) \mathrel{\circ} 0_{X,Y}
& = &
T(f) \mathrel{\circ} T(!_{Y}) \mathrel{\circ} \langle\rangle \\
& = &
T(!_{Z}) \mathrel{\circ} \langle\rangle \\
& = &
0_{X,Z} \\
\sigma_{Y} \mathrel{\circ} 0_{X,Y}
& = &
\sigma_{Y} \mathrel{\circ} T(!_{Y}) \mathrel{\circ} \langle\rangle \\
& = &
S(!_{Y}) \mathrel{\circ} \sigma_{0} \mathrel{\circ} \langle\rangle \\
& = &
S(!_{Y}) \mathrel{\circ} \langle\rangle
& \mbox{by uniqueness of maps to 1 in} \\
& & & T(0) \stackrel{\sigma_0}{\rightarrow} S(0) \congrightarrow 1 \\
& = &
0_{X,Y}.
\end{array}$$
}
Still assuming that $T(0)$ is final, the zero
map~\eqref{KleisliZeroMap} enables us to define a canonical map
\begin{equation}
\label{bcDef}
\xymatrix@C+1pc{
\ensuremath{\mathsf{bc}}\xspace \stackrel{\textrm{def}}{=} \Big(T(X+Y)\ar[rr]^-{
\tuple{\mu \mathrel{\circ} T(p_1)}{\mu \mathrel{\circ} T(p_2)}}
& & T(X) \times T(Y)\Big),
}
\end{equation}
\noindent where
\begin{equation}
\label{kleisliprojdef}
\xymatrix{
p_{1} \stackrel{\textrm{def}}{=}
\Big(X+Y \ar[rr]^-{\cotuple{\eta}{0_{Y,X}}} & & T(X)\Big),
&
p_{2} \stackrel{\textrm{def}}{=}
\Big(X+Y \ar[rr]^-{\cotuple{0_{X,Y}}{\eta}} & & T(Y)\Big).
}
\end{equation}
\noindent Here we assume that the underlying category \cat{C} has both
finite products and finite coproducts. The abbreviation ``\ensuremath{\mathsf{bc}}\xspace'' stands
for ``bicartesian'', since this map connects the coproducts and
products. The auxiliary maps $p_{1},p_{2}$ are sometimes called
projections, but should not be confused with the (proper) projections
$\pi_{1},\pi_{2}$ associated with the product $\times$ in \cat{C}.
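To give these definitions a concrete reading, here is a small sketch of $p_1$, $p_2$ and \ensuremath{\mathsf{bc}}\xspace for the list monad, with elements of a coproduct encoded as tagged pairs (this encoding is an assumption of the illustration, not fixed by the text):

```python
def p1(c):
    tag, v = c
    return [v] if tag == 'L' else []   # p1 = [eta, 0_{Y,X}]

def p2(c):
    tag, v = c
    return [v] if tag == 'R' else []   # p2 = [0_{X,Y}, eta]

def bc(t):
    """bc = <mu . T(p1), mu . T(p2)> : T(X+Y) -> T(X) x T(Y)."""
    return ([x for c in t for x in p1(c)],
            [y for c in t for y in p2(c)])
```

For example, `bc([('L', 1), ('R', 'a'), ('L', 2)])` yields `([1, 2], ['a'])`: the left and right components of the list, separately.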
We continue by listing a series of properties of this map \ensuremath{\mathsf{bc}}\xspace that
will be useful in what follows.
\begin{lemma}
\label{bcproplem}
In the context just described, the map $\ensuremath{\mathsf{bc}}\xspace \colon T(X+Y) \to
T(X)\times T(Y)$ in~\eqref{bcDef} has the following properties.
{\renewcommand{\theenumi}{(\roman{enumi})}
\begin{enumerate}
\item\label{natbcprop} This \ensuremath{\mathsf{bc}}\xspace is a natural transformation,
and it commutes with any monad map $\sigma\colon T\rightarrow S$, as in:
$$\xymatrix{
T(X+Y)\ar[r]^-{\ensuremath{\mathsf{bc}}\xspace}\ar[d]|{T(f+g)} & T(X)\times T(Y)\ar[d]|{T(f)\times T(g)}
&
T(X+Y)\ar[r]^-{\ensuremath{\mathsf{bc}}\xspace}\ar[d]|{\sigma_{X+Y}}
& T(X)\times T(Y)\ar[d]|{\sigma_{X}\times\sigma_{Y}}
\\
T(U+V)\ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} & T(U)\times T(V)
&
S(X+Y)\ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} & S(X)\times S(Y)
}$$
\item\label{natmonoidalbcprop} It also commutes with the monoidal
isomorphisms (for products and coproducts in $\cat{C}$):
$$\begin{array}{c}
\xymatrix{
T(X+0)\ar[r]^-{\ensuremath{\mathsf{bc}}\xspace}\ar[dr]_{T(\rho)}^{\cong} & T(X)\times T(0)\ar[d]^{\rho}_{\cong}
&
T(X+Y)\ar[r]^-{\ensuremath{\mathsf{bc}}\xspace}\ar[d]_{T(\cotuple{\kappa_2}{\kappa_1})}^{\cong}
& T(X)\times T(Y)\ar[d]^{\tuple{\pi_2}{\pi_1}}_{\cong}
\\
& T(X)
&
T(Y+X)\ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} & T(Y)\times T(X)
} \\
\\[-1em]
\xymatrix{
T((X+Y)+Z) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} \ar[d]_-{T(\alpha)}^-{\cong} & T(X+Y)\times T(Z) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace \times id} & (T(X)\times T(Y))\times T(Z) \ar[d]^-{\alpha}_-{\cong}
\\
T(X+(Y+Z)) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} & T(X)\times T(Y+Z) \ar[r]^-{id \times \ensuremath{\mathsf{bc}}\xspace} & T(X)\times (T(Y) \times T(Z))
}
\end{array}$$
\item\label{comEtaMubcprop} The map \ensuremath{\mathsf{bc}}\xspace interacts with $\eta$ and
$\mu$ in the following manner:
$$
\xymatrix{
X+Y \ar[d]_-{\eta} \ar[dr]^-{\tuple{p_1}{p_2}} &\\
T(X+Y) \ar[r]_-{\ensuremath{\mathsf{bc}}\xspace} & T(X) \times T(Y)
}
$$
$$\hspace*{-1em}\xymatrix{
T^2(X+Y) \ar[r]^-{\mu} \ar[d]_-{T(\ensuremath{\mathsf{bc}}\xspace)}& T(X+Y) \ar[dd]^{\ensuremath{\mathsf{bc}}\xspace}
\hspace*{-1em} & \hspace*{-1em}
T(T(X) + T(Y)) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} \ar[d]|{T(\cotuple{T(\kappa_1)}{T(\kappa_2)})}& T^2(X) \times T^2(Y) \ar[dd]^{\mu \times \mu}
\\
T(T(X)\times T(Y)) \ar[d]|{{\tuple{T(\pi_1)}{T(\pi_2)}}} &
\hspace*{-1em} & \hspace*{-1em}
T^2(X+Y) \ar[d]_-{\mu}&
\\
T^2(X) \times T^2(Y)\ar[r]^-{\mu \times \mu}& T(X) \times T(Y)
\hspace*{-1em} & \hspace*{-1em}
T(X+Y) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} & T(X) \times T(Y)
}
$$
\item\label{comStbcprop} If $\cat{C}$ is a distributive category,
$\ensuremath{\mathsf{bc}}\xspace$ commutes with strength $\ensuremath{\mathsf{st}}\xspace$ as follows:
$$\hspace*{-1em}\xymatrix@C-1pc{
T(X+Y)\times Z \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace \times \idmap} \ar[d]_-{\ensuremath{\mathsf{st}}\xspace}&
(T(X) \times T(Y))\times Z \ar[r]^-{\mathit{dbl}} &
(T(X) \times Z) \times (T(Y) \times Z) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace \times \ensuremath{\mathsf{st}}\xspace}
\\
T((X+Y) \times Z) \ar[r]^-{\cong} &T((X\times Z) + (Y\times Z)) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} &T(X \times Z) \times T(Y \times Z)
}
$$
\noindent where $\mathit{dbl}$ is the ``double'' map
$\tuple{\pi_1 \times \idmap}{\pi_2 \times \idmap}\colon (A\times B)\times C
\rightarrow (A\times C)\times (B\times C)$.
\end{enumerate}
}
\end{lemma}
\begin{proof}
These properties are easily verified, using Lemma~\ref{zeroproplem}
and the fact that the projections $p_i$ are natural, both in $\cat{C}$
and in $\mathcal{K}{\kern-.2ex}\ell(T)$. \hspace*{\fill}$\QEDbox$
\end{proof}
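Although the verification is routine, the unit and multiplication squares of point (iii) can be checked concretely for the list monad. The following sketch (reusing the tagged-pair encoding of coproducts, an assumption of this illustration) tests both squares on sample data:

```python
def p1(c): return [c[1]] if c[0] == 'L' else []   # [eta, 0]
def p2(c): return [c[1]] if c[0] == 'R' else []   # [0, eta]

def bc(t):
    return ([x for c in t for x in p1(c)],
            [y for c in t for y in p2(c)])

def eta(z): return [z]                          # unit of the list monad
def mu(tt): return [c for t in tt for c in t]   # multiplication (concat)

def eta_square(z):
    # bc . eta = <p1, p2>
    return bc(eta(z)) == (p1(z), p2(z))

def mu_square(tt):
    # bc . mu = (mu x mu) . <T(pi1), T(pi2)> . T(bc)
    pairs = [bc(t) for t in tt]                 # T(bc)
    return bc(mu(tt)) == (mu([p[0] for p in pairs]),
                          mu([p[1] for p in pairs]))
```

Both predicates hold for all inputs of the appropriate shape; the functions only spot-check what Lemma~\ref{bcproplem} proves in general.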
\auxproof{
Naturality of the projections in $\cat{C}$: for $f\colon X\rightarrow U$ and
$g\colon Y\rightarrow V$ one has:
$$\begin{array}{rcl}
p_{1} \mathrel{\circ} (f+g)
& = &
\cotuple{\eta}{0} \mathrel{\circ} (f+g) \\
& = &
\cotuple{\eta \mathrel{\circ} f}{0 \mathrel{\circ} g} \\
& = &
\cotuple{T(f) \mathrel{\circ} \eta}{T(f) \mathrel{\circ} 0} \\
& = &
T(f) \mathrel{\circ} \cotuple{\eta}{0} \\
& = &
T(f) \mathrel{\circ} p_{1}.
\end{array}$$
\noindent Similarly, for $f\colon X\rightarrow T(U)$ and $g\colon
Y\rightarrow T(V)$ one has naturality wrt.\ the coproducts in
$\mathcal{K}{\kern-.2ex}\ell(T)$ since:
$$\begin{array}{rcl}
p_{1} \klafter (f+g)
& = &
\mu \mathrel{\circ} T(\cotuple{\eta}{0}) \mathrel{\circ}
\cotuple{T(\kappa_{1}) \mathrel{\circ} f}{T(\kappa_{2}) \mathrel{\circ} g} \\
& = &
\mu \mathrel{\circ} \cotuple{T(\eta) \mathrel{\circ} f}{T(0) \mathrel{\circ} g} \\
& = &
\cotuple{\mu \mathrel{\circ} T(\eta) \mathrel{\circ} f}{\mu \mathrel{\circ} T(0) \mathrel{\circ} g} \\
& = &
\cotuple{\mu \mathrel{\circ} \eta \mathrel{\circ} f}{0 \klafter g} \\
& = &
\cotuple{\mu \mathrel{\circ} T(f) \mathrel{\circ} \eta}{0} \\
& = &
\cotuple{\mu \mathrel{\circ} T(f) \mathrel{\circ} \eta}{\mu \mathrel{\circ} T(f) \mathrel{\circ} 0}
\\
& = &
\mu \mathrel{\circ} T(f) \mathrel{\circ} \cotuple{\eta}{0} \\
& = &
f \klafter p_{1}
\end{array}$$
For naturality of \ensuremath{\mathsf{bc}}\xspace assume $f\colon X\rightarrow U$ and
$g\colon Y\rightarrow V$. Then:
$$\begin{array}{rcl}
(T(f)\times T(g)) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace
& = &
(T(f) \times T(g)) \mathrel{\circ} \tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \\
& = &
\tuple{T(f) \mathrel{\circ} \mu \mathrel{\circ} T(p_{1})}{T(g) \mathrel{\circ} \mu \mathrel{\circ} T(p_{2})} \\
& = &
\tuple{\mu \mathrel{\circ} T(T(f) \mathrel{\circ} p_{1})}{\mu \mathrel{\circ} T(T(g) \mathrel{\circ} p_{2})} \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1} \mathrel{\circ} (f+g))}{\mu \mathrel{\circ} T(p_{2} \mathrel{\circ} (f+g))} \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \mathrel{\circ} T(f+g) \\
& = &
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} T(f+g).
\end{array}$$
The map $\ensuremath{\mathsf{bc}}\xspace$ commutes with a monad map $\sigma$ since:
$$\xymatrix@C+1pc{
T(X+Y) \ar[rr]^-{\tuple{T(\cotuple{\eta^T}{0})}{T(\cotuple{0}{\eta^T})}} \ar[rrd]|{\tuple{T(\cotuple{\eta^S}{0})}{T(\cotuple{0}{\eta^S})}} \ar[dd]_{\sigma} && T^2(X) \times T^2(Y) \ar[r]^{\mu \times \mu} \ar[d]^{T\sigma \times T\sigma} & T(X) \times T(Y) \ar[dd]^{\sigma \times \sigma} \\
&& TS(X) \times TS(Y) \ar[d]^{\sigma_{S(X)} \times \sigma_{S(Y)}} &\\
S(X+Y) \ar[rr]^-{\tuple{S(\cotuple{\eta^S}{0})}{S(\cotuple{0}{\eta^S})}} && S^2(X) \times S^2(Y) \ar[r]^{\mu \times \mu} & S(X) \times S(Y)
}$$
Commutation with $\rho$'s:
$$\begin{array}{rcl}
\pi_{1} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace
& = &
\pi_{1} \mathrel{\circ} \tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \\
& = &
\mu \mathrel{\circ} T(\cotuple{\eta}{0_{0,X}}) \\
& = &
\mu \mathrel{\circ} T(\cotuple{\eta}{\eta \mathrel{\circ} \,!_{X}}) \\
& = &
\mu \mathrel{\circ} T(\eta \mathrel{\circ} \cotuple{\ensuremath{\mathrm{id}}}{!}) \\
& = &
\cotuple{\ensuremath{\mathrm{id}}}{!}.
\end{array}$$
Commutation with swap maps:
$$\begin{array}{rcl}
\tuple{\pi_2}{\pi_1} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace
& = &
\tuple{\pi_2}{\pi_1} \mathrel{\circ} \tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{2})}{\mu \mathrel{\circ} T(p_{1})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\cotuple{0}{\eta})}{\mu \mathrel{\circ} T(\cotuple{\eta}{0})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\cotuple{\eta}{0} \mathrel{\circ} \cotuple{\kappa_2}{\kappa_1})}
{\mu \mathrel{\circ} T(\cotuple{0}{\eta} \mathrel{\circ} \cotuple{\kappa_2}{\kappa_1})} \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \mathrel{\circ}
T(\cotuple{\kappa_2}{\kappa_1}) \\
& = &
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} T(\cotuple{\kappa_2}{\kappa_1})
\end{array}$$
Commutation with $\alpha$'s:
$$\begin{array}{rcl}
\lefteqn{\tuple{\pi_{1} \mathrel{\circ} \pi_{1}}{
\tuple{\pi_{2}\mathrel{\circ} \pi_{1}}{\pi_2}} \mathrel{\circ} (\ensuremath{\mathsf{bc}}\xspace\times\ensuremath{\mathrm{id}}) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace} \\
& = &
\tuple{\pi_{1} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \pi_{1}}{
\tuple{\pi_{2} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \pi_{1}}{\pi_{2}}} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1}) \mathrel{\circ} \pi_{1} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace}{
\tuple{\mu \mathrel{\circ} T(p_{2}) \mathrel{\circ} \pi_{1} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace}
{\pi_{2} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace}} \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1}) \mathrel{\circ} \mu \mathrel{\circ} T(p_{1})}{
\tuple{\mu \mathrel{\circ} T(p_{2}) \mathrel{\circ} \mu \mathrel{\circ} T(p_{1})}
{\mu \mathrel{\circ} T(p_{2})}} \\
& = &
\tuple{\mu \mathrel{\circ} \mu \mathrel{\circ} T(T(p_{1}) \mathrel{\circ} p_{1})}{
\tuple{\mu \mathrel{\circ} \mu \mathrel{\circ} T(T(p_{2}) \mathrel{\circ} p_{1})}
{\mu \mathrel{\circ} T(p_{2})}} \\
& = &
\tuple{\mu \mathrel{\circ} T(\mu) \mathrel{\circ} T(T(p_{1}) \mathrel{\circ} \cotuple{\eta}{0})}{
\tuple{\mu \mathrel{\circ} T(\mu) \mathrel{\circ} T(T(p_{2}) \mathrel{\circ} \cotuple{\eta}{0})}
{\mu \mathrel{\circ} T(p_{2})}} \\
& = &
\tuple{\mu \mathrel{\circ} T(\mu \mathrel{\circ} \cotuple{\eta \mathrel{\circ} p_{1}}{0})}{
\tuple{\mu \mathrel{\circ} T(\mu \mathrel{\circ} \cotuple{\eta\mathrel{\circ} p_{2}}{0})}
{\mu \mathrel{\circ} T(p_{2})}} \\
& = &
\tuple{\mu \mathrel{\circ} T(\cotuple{p_{1}}{0})}{
\tuple{\mu \mathrel{\circ} T(\cotuple{p_{2}}{0})}
{\mu \mathrel{\circ} T(p_{2})}} \\
& \stackrel{(*)}{=} &
\tuple{\mu \mathrel{\circ} T(\cotuple{p_1}{0})}
{\tuple{\mu \mathrel{\circ} T(\cotuple{\cotuple{0}{\eta}}{0})}
{\mu \mathrel{\circ} T(\cotuple{\cotuple{0}{0}}{\eta})}} \\
& = &
\tuple{\mu \mathrel{\circ} T(\cotuple{p_1}{0})}
{\tuple{\mu \mathrel{\circ} T(\mu) \mathrel{\circ}
T(\cotuple{\cotuple{0}{\eta\mathrel{\circ}\eta}}{0 \mathrel{\circ} \eta})}
{\mu \mathrel{\circ} T(\mu) \mathrel{\circ}
T(\cotuple{\cotuple{0}{0\mathrel{\circ}\eta}}{\eta\mathrel{\circ}\eta})}} \\
& = &
\tuple{\mu \mathrel{\circ} T(\cotuple{p_1}{0})}
{\tuple{\mu \mathrel{\circ} \mu \mathrel{\circ} T^{2}(p_{1})}
{\mu \mathrel{\circ} \mu \mathrel{\circ} T^{2}(p_{2})}
\mathrel{\circ} T(\cotuple{\cotuple{0}{T(\kappa_{1}) \mathrel{\circ} \eta}}
{T(\kappa_{2}) \mathrel{\circ} \eta})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\cotuple{p_1}{0})}
{\tuple{\mu \mathrel{\circ} T(p_{1}) \mathrel{\circ} \mu}{\mu \mathrel{\circ} T(p_{2}) \mathrel{\circ} \mu}
\mathrel{\circ} T(\cotuple{\cotuple{0}{T(\kappa_{1}) \mathrel{\circ} \eta}}
{T(\kappa_{2}) \mathrel{\circ} \eta})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\cotuple{p_1}{0})}
{\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \mu \mathrel{\circ} T(\cotuple{\cotuple{0}{\eta\mathrel{\circ}\kappa_{1}}}
{\eta\mathrel{\circ}\kappa_{2}})} \\
& = &
(\ensuremath{\mathrm{id}}\times\ensuremath{\mathsf{bc}}\xspace) \mathrel{\circ}
\tuple{\mu \mathrel{\circ} T(
\cotuple{\cotuple{\eta}{0\mathrel{\circ}\kappa_{1}}}
{0\mathrel{\circ}\kappa_{2}})}
{\mu \mathrel{\circ} T(
\cotuple{\cotuple{0}{\eta\mathrel{\circ}\kappa_{1}}}
{\eta\mathrel{\circ}\kappa_{2}})} \\
& = &
(\ensuremath{\mathrm{id}}\times\ensuremath{\mathsf{bc}}\xspace) \mathrel{\circ} \\
& & \quad
\tuple{\mu \mathrel{\circ} T(p_{1} \mathrel{\circ}
\cotuple{\cotuple{\kappa_1}{\kappa_{2}\mathrel{\circ}\kappa_{1}}}
{\kappa_{2}\mathrel{\circ}\kappa_{2}})}
{\mu \mathrel{\circ} T(p_{2} \mathrel{\circ}
\cotuple{\cotuple{\kappa_1}{\kappa_{2}\mathrel{\circ}\kappa_{1}}}
{\kappa_{2}\mathrel{\circ}\kappa_{2}})} \\
& = &
(\ensuremath{\mathrm{id}}\times\ensuremath{\mathsf{bc}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ}
T(\cotuple{\cotuple{\kappa_1}{\kappa_{2}\mathrel{\circ}\kappa_{1}}}
{\kappa_{2}\mathrel{\circ}\kappa_{2}}).
\end{array}$$
\noindent The marked equation $\stackrel{(*)}{=}$ uses $[0,0] = 0$,
which holds since $[0,0] \mathrel{\circ} \kappa_{i} = 0 = 0 \mathrel{\circ} \kappa_{i}$.
Commutation with $\eta$:
$$\begin{array}{rcl}
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \eta
& = &
\tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \mathrel{\circ} \eta \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1}) \mathrel{\circ} \eta }{\mu \mathrel{\circ} T(p_{2}) \mathrel{\circ} \eta } \\
& = &
\tuple{\mu \mathrel{\circ} \eta \mathrel{\circ} p_{1}}{\mu \mathrel{\circ} \eta \mathrel{\circ} p_{2}} \\
& = &
\tuple{p_1}{p_2}.
\end{array}$$
Commutation with $\mu$, involving two diagrams:
$$\begin{array}{rcl}
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \mu
& = &
\tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \mathrel{\circ} \mu \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1}) \mathrel{\circ} \mu}{\mu \mathrel{\circ} T(p_{2}) \mathrel{\circ} \mu} \\
& = &
\tuple{\mu \mathrel{\circ} \mu \mathrel{\circ} T^{2}(p_{1})}
{\mu \mathrel{\circ} \mu \mathrel{\circ} T^{2}(p_{2})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\mu) \mathrel{\circ} T^{2}(p_{1})}
{\mu \mathrel{\circ} T(\mu) \mathrel{\circ} T^{2}(p_{2})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\mu \mathrel{\circ} T(p_{1}))}{\mu \mathrel{\circ} T(\mu \mathrel{\circ} T(p_{2}))} \\
& = &
\mu\times\mu \mathrel{\circ} \tuple{T(\mu \mathrel{\circ} T(p_{1}))}{T(\mu \mathrel{\circ} T(p_{2}))} \\
& = &
\mu\times\mu \mathrel{\circ} \tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ}
T(\tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})}) \\
& = &
\mu\times\mu \mathrel{\circ} \tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ} T(\ensuremath{\mathsf{bc}}\xspace).
\end{array}$$
\noindent And similarly,
$$\begin{array}{rcl}
\lefteqn{\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \mu \mathrel{\circ} T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})})} \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \mathrel{\circ}
\mu \mathrel{\circ} T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})}) \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1}) \mathrel{\circ} \mu \mathrel{\circ}
T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})})}
{\mu \mathrel{\circ} T(p_{2}) \mathrel{\circ} \mu \mathrel{\circ}
T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})})} \\
& = &
\tuple{\mu \mathrel{\circ} \mu \mathrel{\circ} T^{2}(p_{1}) \mathrel{\circ}
T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})})}
{\mu \mathrel{\circ} \mu \mathrel{\circ} T^{2}(p_{2}) \mathrel{\circ}
T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\mu) \mathrel{\circ}
T(\cotuple{T(p_{1}\mathrel{\circ}\kappa_{1})}{T(p_{1}\mathrel{\circ}\kappa_{2})})}
{\mu \mathrel{\circ} T(\mu) \mathrel{\circ}
T(\cotuple{T(p_{2} \mathrel{\circ} \kappa_{1})}{T(p_{2}\mathrel{\circ}\kappa_{2})})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\mu \mathrel{\circ} \cotuple{T(\eta)}{T(0)})}
{\mu \mathrel{\circ} T(\mu \mathrel{\circ} \cotuple{T(0)}{T(\eta)})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\cotuple{\mu \mathrel{\circ} T(\eta)}{\mu \mathrel{\circ} T(0)})}
{\mu \mathrel{\circ} T(\cotuple{\mu \mathrel{\circ} T(0)}{\mu \mathrel{\circ} T(\eta)})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\cotuple{\mu \mathrel{\circ} \eta}{\mu \mathrel{\circ} 0})}
{\mu \mathrel{\circ} T(\cotuple{\mu \mathrel{\circ} 0}{\mu \mathrel{\circ} \eta})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\mu) \mathrel{\circ} T(\cotuple{\eta}{0})}
{\mu \mathrel{\circ} T(\mu) \mathrel{\circ} T(\cotuple{0}{\eta})} \\
& = &
\tuple{\mu \mathrel{\circ} \mu \mathrel{\circ} T(p_{1})}
{\mu \mathrel{\circ} \mu \mathrel{\circ} T(p_{2})} \\
& = &
\mu\times\mu \mathrel{\circ} \tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \\
& = &
\mu\times\mu \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace.
\end{array}$$
Finally, for commutation of the diagram with the doubling map $\mathit{dbl}$,
we first note that in a distributive category the following diagram
commutes.
$$\xymatrix{
(X+Y)\times Z\ar[rr]^-{p_{1}\times\ensuremath{\mathrm{id}}} & & T(X)\times Z\ar[d]^{\ensuremath{\mathsf{st}}\xspace} \\
(X\times Z)+(Y\times Z)\ar[u]^{\mathit{dist}}_{\cong}\ar[rr]^-{p_1}
& & T(X\times Z)
}$$
\noindent where $\mathit{dist} =
\cotuple{\kappa_{1}\times\ensuremath{\mathrm{id}}}{\kappa_{2}\times\ensuremath{\mathrm{id}}}$ is the
canonical distribution map. Commutation of this diagram holds because:
$$\begin{array}{rcl}
\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} p_{1}\times\ensuremath{\mathrm{id}} \mathrel{\circ} \mathit{dist}
& = &
\cotuple{\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} p_{1}\times\ensuremath{\mathrm{id}} \mathrel{\circ} \kappa_{1}\times\ensuremath{\mathrm{id}}}
{\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} p_{1}\times\ensuremath{\mathrm{id}} \mathrel{\circ} \kappa_{2}\times\ensuremath{\mathrm{id}}} \\
& = &
\cotuple{\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \eta\times\ensuremath{\mathrm{id}}}{\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} 0\times\ensuremath{\mathrm{id}}} \\
& \stackrel{(*)}{=} &
\cotuple{\eta}{0} \\
& = &
p_{1},
\end{array}$$
\noindent where the marked equation holds because $\ensuremath{\mathsf{st}}\xspace \mathrel{\circ}
0\times\ensuremath{\mathrm{id}} = 0$, which follows from distributivity $0\times Z \cong
0$, so that $T(0\times Z)$ is final in:
$$\xymatrix@C+1pc{
Y\times Z\ar[r]^-{!\times\ensuremath{\mathrm{id}}}\ar@ /^5ex/[rr]^-{0\times\ensuremath{\mathrm{id}}}\ar[d]_{!} &
T(0)\times Z\ar[r]^-{T(!)\times\ensuremath{\mathrm{id}}}\ar[d]^{\ensuremath{\mathsf{st}}\xspace} &
T(X)\times Z\ar[d]^{\ensuremath{\mathsf{st}}\xspace} \\
T(0)\ar[r]_-{\cong}\ar@ /_5ex/[rr]_-{T(!)} &
T(0\times Z)\ar[r]_-{T(!\times\ensuremath{\mathrm{id}})} & T(X\times Z)
}$$
\noindent Now we can prove that the diagram in
Lemma~\ref{bcproplem}.\ref{comStbcprop} commutes:
$$\begin{array}{rcl}
\lefteqn{\ensuremath{\mathsf{st}}\xspace\times\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \mathit{dbl} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace\times\ensuremath{\mathrm{id}}} \\
& = &
\ensuremath{\mathsf{st}}\xspace\times\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} \tuple{\pi_{1}\times\ensuremath{\mathrm{id}}}{\pi_{2}\times\ensuremath{\mathrm{id}}}
\mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace\times\ensuremath{\mathrm{id}} \\
& = &
\tuple{\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (\mu \mathrel{\circ} T(p_{1}))\times\ensuremath{\mathrm{id}}}
{\ensuremath{\mathsf{st}}\xspace \mathrel{\circ} (\mu \mathrel{\circ} T(p_{2})) \times\ensuremath{\mathrm{id}}} \\
& = &
\tuple{\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} T(p_{1})\times\ensuremath{\mathrm{id}}}
{\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \mathrel{\circ} T(p_{2}) \times\ensuremath{\mathrm{id}}} \\
& = &
\tuple{\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} T(p_{1}\times\ensuremath{\mathrm{id}}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace}
{\mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} T(p_{2} \times\ensuremath{\mathrm{id}}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace} \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1} \mathrel{\circ} \mathit{dist}^{-1})}
{\mu \mathrel{\circ} T(p_{2} \mathrel{\circ} \mathit{dist}^{-1})} \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace\\
& = &
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} T(\mathit{dist}^{-1}) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace.
\end{array}$$
}
The definition of the map $\ensuremath{\mathsf{bc}}\xspace$ also makes sense for arbitrary
set-indexed (co)products (see~\cite{Jacobs10a}), but here we only
consider finite ones. Such generalised $\ensuremath{\mathsf{bc}}\xspace$-maps also satisfy
(suitable generalisations of) the properties in Lemma~\ref{bcproplem}
above.
We will study monads for which the canonical map $\ensuremath{\mathsf{bc}}\xspace$ is an
isomorphism. Such monads will be called `additive monads'.
\begin{definition}
A monad $T$ on a category $\cat{C}$ with finite products $(\times,
1)$ and finite coproducts $(+,0)$ will be called \emph{additive} if
$T(0)\cong 1$ and if the canonical map $\ensuremath{\mathsf{bc}}\xspace\colon T(X+Y) \to T(X)
\times T(Y)$ from~\eqref{bcDef} is an isomorphism.
\end{definition}
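Not every monad with $T(0)\cong 1$ is additive. For the list monad, for instance, \ensuremath{\mathsf{bc}}\xspace has a section but is not injective, since it forgets how the two summands are interleaved. A quick sketch (tagged-pair encoding of $+$ assumed, as before):

```python
def bc(t):
    # bc for the list monad: split off the left and right components.
    return ([v for tag, v in t if tag == 'L'],
            [v for tag, v in t if tag == 'R'])

def bc_section(xs, ys):
    # A right inverse of bc: bc(bc_section(xs, ys)) == (xs, ys) always.
    return [('L', x) for x in xs] + [('R', y) for y in ys]

# But bc identifies lists that differ only in interleaving, so it is
# a split epi and not an isomorphism: the list monad is not additive.
```

The finite multiset and powerset monads, by contrast, do have \ensuremath{\mathsf{bc}}\xspace an isomorphism, since they carry no ordering information to lose.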
We write $\cat{AMnd}(\cat{C})$ for the category of additive monads on
$\cat{C}$ with monad morphisms between them, and similarly
$\cat{ACMnd}(\cat{C})$ for the category of additive and commutative
monads on $\cat{C}$.
A basic result is that additive monads $T$ induce a commutative monoid
structure on the objects $T(X)$. This result is sometimes taken as the
definition of additivity of monads
(\textit{cf.}~\cite{GoncharovSM09}).
\begin{lemma}
\label{AdditiveMonadMonoidLem}
\label{Mnd2MonLem2}
Let $T$ be an additive monad on a category $\cat{C}$ and $X$ an object
of $\cat{C}$. There is an addition $+$ on $T(X)$ given by
$$
\xymatrix{
+ \stackrel{\textrm{def}}{=} \Big(T(X) \times T(X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} &
T(X+X) \ar[r]^-{T(\nabla)} & T(X)\Big),
}
$$
\noindent where $\nabla = \cotuple{\ensuremath{\mathrm{id}}}{\ensuremath{\mathrm{id}}}$. Then:
{\renewcommand{\theenumi}{(\roman{enumi})}
\begin{enumerate}
\item this $+$ is commutative and associative,
\item and has unit $0_{1,X}\colon 1 \to T(X)$;
\item this monoid structure is preserved by maps $T(f)$ as well as by
multiplication $\mu$;
\item the mapping $(T,X) \mapsto (T(X), +, 0_{1,X})$ yields a
functor $\mathcal{A}d \colon \cat{AMnd}(\cat{C}) \times \cat{C} \to
\cat{CMon}(\cat{C})$.
\end{enumerate}}
\end{lemma}
\begin{proof}
The first three statements follow by the properties of $\ensuremath{\mathsf{bc}}\xspace$ from
Lemma~\ref{bcproplem}. For instance, $0$ is a (right) unit for $+$ as
demonstrated in the following diagram.
$$\xymatrix{
T(X) \ar[rr]^-{\rho^{-1}}_-{\cong}\ar[drr]^(0.6){\cong}_-{T(\rho^{-1})} & &
T(X)\times T(0) \ar[rr]^-{\ensuremath{\mathrm{id}} \times T(!)} \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & &
T(X)\times T(X) \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}}\ar@ /^10ex/[dd]^{+}
\\
& & T(X+0) \ar[rr]^-{T(\ensuremath{\mathrm{id}} +\, !)} \ar[drr]_-{T(\rho)}^-{\cong} & &
T(X+X) \ar[d]^-{T(\nabla)}
\\
&&&&T(X)
}
$$
Regarding (iv) we define, for a pair of morphisms $\sigma\colon T \to S$ in $\cat{AMnd}(\cat{C})$ and $f\colon X \to Y$ in $\cat{C}$,
$$\mathcal{A}d((\sigma, f)) = \sigma \mathrel{\circ} T(f) \colon T(X) \to S(Y),$$ which
is equal to $S(f) \mathrel{\circ} \sigma$ by naturality of
$\sigma$. Preservation of the unit by $\mathcal{A}d((\sigma, f))$ follows from
Lemma \ref{zeroproplem}. The following diagram demonstrates
that addition is preserved.
$$
\xymatrix{
T(X) \times T(X) \ar[rr]^-{T(f)\times T(f)} \ar[d]_-{\ensuremath{\mathsf{bc}}\xspace^{-1}} && T(Y)\times T(Y) \ar[r]^{\sigma \times \sigma} \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & S(Y)\times S(Y) \ar[dd]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}}
\\
T(X+X) \ar[rr]^-{T(f+f)} \ar[rrd]_-{\sigma} \ar[dd]_-{T(\nabla)} && T(Y+Y) \ar[rd]^-{\sigma}
\\
&&S(X+X) \ar[r]^-{S(f+f)} \ar[d]_-{S(\nabla)} & S(Y+Y) \ar[d]^-{S(\nabla)}
\\
T(X) \ar[rr]^-{\sigma} && S(X) \ar[r]^-{S(f)} & S(Y)
}
$$
\noindent where we use point~\ref{natbcprop} of Lemma \ref{bcproplem}
and the naturality of $\sigma$. It is easily checked that this mapping
defines a functor.\hspace*{\fill}$\QEDbox$
\end{proof}
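The monoid structure of this lemma can be computed concretely. The following sketch is our own illustration, assuming the finite powerset monad on sets as an example of an additive monad; it implements $\ensuremath{\mathsf{bc}}\xspace$, its inverse, and $+ = T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1}$, and checks the monoid laws. For the powerset monad, $+$ is union and $0_{1,X}$ picks out the empty set.

```python
# Illustration (not from the paper): the finite powerset monad P on Sets.
# Elements of a coproduct X + Y are tagged as ('l', x) / ('r', y).

def bc(u):
    """Bicartesian map P(X+Y) -> P(X) x P(Y): U |-> (U ∩ X, U ∩ Y)."""
    return (frozenset(x for t, x in u if t == 'l'),
            frozenset(y for t, y in u if t == 'r'))

def bc_inv(a, b):
    """Inverse P(X) x P(Y) -> P(X+Y): tag both sets and take the union."""
    return frozenset({('l', x) for x in a} | {('r', y) for y in b})

def codiag(u):
    """P(nabla): P(X+X) -> P(X), direct image along the codiagonal [id, id]."""
    return frozenset(x for _, x in u)

def plus(a, b):
    """Addition + = P(nabla) ∘ bc^{-1}; for P this is just union."""
    return codiag(bc_inv(a, b))

zero = frozenset()                       # 0: 1 -> P(X) picks the empty set

a, b, c = frozenset({1, 2}), frozenset({2, 3}), frozenset({4})
assert bc(bc_inv(a, b)) == (a, b)        # bc is an isomorphism here
assert plus(a, b) == a | b               # + is union
assert plus(a, zero) == a                # 0 is a unit
assert plus(a, b) == plus(b, a)          # commutativity
assert plus(plus(a, b), c) == plus(a, plus(b, c))  # associativity
```

\noindent Preservation of $(+, 0)$ by maps $P(f)$, as in point (iii), corresponds here to the familiar fact that direct images preserve unions and the empty set.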
\auxproof{
Commutativity holds since $+ \mathrel{\circ} \tuple{\pi_2}{\pi_1} = +$, see:
$$
\xymatrix{
T(X)\times T(X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} \ar[d]_-{\tuple{\pi_2}{\pi_1}} &
T(X+X) \ar[d]_-{T(\cotuple{\kappa_2}{\kappa_1})} \ar[rd]^-{T(\nabla)}
\\
T(X)\times T(X) \ar[r]_-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T(X+X) \ar[r]_-{T(\nabla)} & T(X)
}
$$
\noindent Associativity:
$$
\xymatrix{
(T(X)\times T(X))\times T(X) \ar[r]^-{\alpha} \ar[d]_-{\ensuremath{\mathsf{bc}}\xspace^{-1} \times\ensuremath{\mathrm{id}}} &
T(X) \times (T(X)\times T(X)) \ar[r]^-{\ensuremath{\mathrm{id}} \times \ensuremath{\mathsf{bc}}\xspace^{-1}} &
T(X) \times T(X+X) \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} \ar[r]^-{\ensuremath{\mathrm{id}} \times T(\nabla)} &
T(X) \times T(X) \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}}
\\
T(X+X) \times T(X) \ar[d]_-{T(\nabla) \times \ensuremath{\mathrm{id}}} \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} &
T((X+X)+X) \ar[r]^-{\alpha'} \ar[d]^-{T(\nabla +\ensuremath{\mathrm{id}})} &
T(X+(X+X)) \ar[r]^-{T(\ensuremath{\mathrm{id}}+\nabla)} & T(X+X) \ar[d]^-{T(\nabla)}
\\
T(X)\times T(X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T(X+X) \ar[rr]^-{T(\nabla)} && T(X)
}
$$
\noindent Preservation by maps $T(f)$:
$$\begin{array}{rcl}
+ \mathrel{\circ} (T(f) \times T(f))
& = &
T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (T(f)\times T(f)) \\
& = &
T(\nabla) \mathrel{\circ} T(f+f) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \\
& = &
T(f) \mathrel{\circ} T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \\
& = &
T(f) \mathrel{\circ} +
\end{array}$$
\noindent Similarly for $\mu$,
$$\begin{array}{rcl}
+ \mathrel{\circ} (\mu\times\mu)
& = &
T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (\mu\times\mu) \\
& = &
T(\nabla) \mathrel{\circ} \mu \mathrel{\circ} T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})})
\mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \\
& = &
\mu \mathrel{\circ} T^{2}(\nabla) \mathrel{\circ}
T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})}) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \\
& = &
\mu \mathrel{\circ}
T(T(\nabla) \mathrel{\circ} \cotuple{T(\kappa_{1})}{T(\kappa_{2})}) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \\
& = &
\mu \mathrel{\circ}
T(\cotuple{\ensuremath{\mathrm{id}}}{\ensuremath{\mathrm{id}}}) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \\
& = &
\mu \mathrel{\circ} +.
\end{array}$$
}
\auxproof{
Direct proof of the fact that T(X) with the given addition is a commutative monoid:
To show that $0_{1,X}$ indeed serves as a unit, consider:
$$
\xymatrix{
T(X) \ar[r]^-{\rho^{-1}} & T(X)\times 1 \ar[r]^-{id \times !^{-1}} & T(X)\times T(0) \ar[rr]^-{id \times T(!)} \ar[d]_-{\ensuremath{\mathsf{bc}}\xspace^{-1}}&& T(X)\times T(X) \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}}
\\
&&T(X+0) \ar[rr]^-{T(id + !)} \ar[drr]_-{T(\cotuple{id}{!})} && T(X+X) \ar[d]^-{T(\nabla)}
\\
&&&&T(X)
}
$$
Note that
$$
\begin{array}{rcl}
\rho \mathrel{\circ} \idmap\times ! \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace &=& \pi_1 \mathrel{\circ} \tuple{\mu \mathrel{\circ} T(p_1)}{!}\\
&=&
\mu \mathrel{\circ} T(\cotuple{\eta}{0})\\
&=&
\mu \mathrel{\circ} T(\eta) \mathrel{\circ} T(\cotuple{id}{!})\\
&=&
T(\cotuple{id}{!}),
\end{array}
$$
which proves the right identity law. The left identity law is shown similarly.
Associativity:
$$
\xymatrix{
(T(X)\times T(X))\times T(X) \ar[r]^-{\alpha} \ar[d]_-{\ensuremath{\mathsf{bc}}\xspace^{-1} \times id} & T(X) \times (T(X)\times T(X)) \ar[r]^-{\ensuremath{\mathrm{id}} \times \ensuremath{\mathsf{bc}}\xspace^{-1}} & T(X) \times T(X+X) \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} \ar[r]^-{id \times T(\nabla)} & T(X) \times T(X) \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}}
\\
T(X+X) \times T(X) \ar[d]_-{T(\nabla) \times id} \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T((X+X)+X) \ar[r]^-{\alpha'} \ar[d]^-{T(\nabla +id)} & T(X+(X+X)) \ar[r]^-{T(id+\nabla)} & T(X+X) \ar[d]^-{T(\nabla)}
\\
T(X)\times T(X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T(X+X) \ar[rr]^-{T(\nabla)} && T(X)
}
$$
Commutativity:
$$
\xymatrix{
T(X)\times T(X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} \ar[d]_-{\tuple{\pi_2}{\pi_1}} & T(X+X) \ar[d]_-{T(\cotuple{\kappa_2}{\kappa_1})} \ar[rd]^-{T(\nabla)}
\\
T(X)\times T(X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T(X+X) \ar[r]^-{T(\nabla)} & T(X)
}
$$
}
By Lemma \ref{KleisliStructLem}, for a monad $T$ on a category
$\cat{C}$ with finite coproducts, the Kleisli construction yields a
category $\mathcal{K}{\kern-.2ex}\ell(T)$ with finite coproducts. Below we will prove that,
under the assumption that $\cat{C}$ also has products, these
coproducts form biproducts in $\mathcal{K}{\kern-.2ex}\ell(T)$ if and only if $T$ is
additive. Again, as in Lemma \ref{zerolem}, a related result holds for
the category $\textsl{Alg}\xspace(T)$.
\begin{definition}
\label{biprodcatdef}
A \emph{category with biproducts} is a category $\cat{C}$ with a zero
object $0 \in \cat{C}$, such that, for any pair of objects $A_1, A_2
\in \cat{C}$, there is an object $A_1 \oplus A_2 \in \cat{C}$ that is
both a product with projections $\pi_i: A_1 \oplus A_2 \to A_i$ and a
coproduct with coprojections $\kappa_i: A_i \to A_1 \oplus A_2$, such
that
$$\begin{array}{rcl}
\pi_j \mathrel{\circ} \kappa_i & = & \left\{
\begin{array}{ll}
\ensuremath{\mathrm{id}}_{A_i} & \text{if }\, i = j \\
0_{A_i, A_j} & \text{if }\, i \ne j.
\end{array} \right.
\end{array}$$
\end{definition}
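A familiar instance, recorded here only for orientation: in the category of abelian groups the direct sum $A_{1} \oplus A_{2}$, with the trivial group as zero object, is a biproduct via

```latex
$$\kappa_{1}(a) = (a, 0), \qquad \pi_{i}(a_{1}, a_{2}) = a_{i},
\qquad\mbox{so that}\qquad
\pi_{1} \mathrel{\circ} \kappa_{1} = \ensuremath{\mathrm{id}}_{A_{1}}, \quad
\pi_{2} \mathrel{\circ} \kappa_{1} = 0_{A_{1}, A_{2}},$$
```

\noindent and symmetrically for $\kappa_{2}$; here $0_{A_{1}, A_{2}}$ is the zero homomorphism.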
\begin{theorem}
\label{AMnd2BCat}
For a monad $T$ on a category $\cat{C}$ with finite products $(\times,
1)$ and coproducts $(+,0)$, the following are equivalent.
\begin{enumerate}
\renewcommand{\theenumi}{(\roman{enumi})}
\item $T$ is additive;
\item the coproducts in $\cat{C}$ form biproducts in
the Kleisli category $\mathcal{K}{\kern-.2ex}\ell(T)$;
\item the products in $\cat{C}$ yield biproducts in
the category of Eilenberg-Moore algebras $\textsl{Alg}\xspace(T)$.
\end{enumerate}
\end{theorem}
Here we shall only use this result for Kleisli categories, but we
include the result for algebras for completeness.
\begin{proof}
First we assume that $T$ is additive and show that $(+,0)$ is a
product in $\mathcal{K}{\kern-.2ex}\ell(T)$. As projections we take the maps $p_i$
from~\eqref{kleisliprojdef}. For Kleisli maps $f\colon Z\rightarrow
T(X)$ and $g\colon Z\rightarrow T(Y)$ there is a tuple via the map
\ensuremath{\mathsf{bc}}\xspace, as in
$$\xymatrix{
\tuple{f}{g}_{\mathcal{K}{\kern-.2ex}\ell} \stackrel{\textrm{def}}{=} \Big(Z \ar[r]^-{\tuple{f}{g}} &
T(X) \times T(Y) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T(X + Y)\Big).
}$$
\noindent One obtains $p_{1} \klafter \tuple{f}{g}_{\mathcal{K}{\kern-.2ex}\ell} = \mu \mathrel{\circ}
T(p_{1}) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \tuple{f}{g} = \pi_{1} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace
\mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \tuple{f}{g} = \pi_{1} \mathrel{\circ} \tuple{f}{g} =
f$. Remaining details are left to the reader.
Conversely, assuming that the coproduct $(+,0)$ in $\cat{C}$ forms a biproduct in $\mathcal{K}{\kern-.2ex}\ell(T)$, we have to show that the bicartesian map $\ensuremath{\mathsf{bc}}\xspace \colon T(X+Y) \to T(X) \times T(Y)$ is an isomorphism.
As $+$ is a biproduct, there exist projection maps $q_i \colon X_1+X_2 \to X_i$ in $\mathcal{K}{\kern-.2ex}\ell(T)$ satisfying
$$\begin{array}{rcl}
q_j \klafter \kappa_i & = & \left\{
\begin{array}{ll}
\ensuremath{\mathrm{id}}_{X_i} & \text{if }\, i = j \\
0_{X_i, X_j} & \text{if }\, i \ne j.
\end{array} \right.
\end{array}$$
\noindent From these conditions it follows that $q_i = p_i$, where
$p_i$ is the map defined in~\eqref{kleisliprojdef}. The ordinary
projection maps $\pi_i \colon T(X_1) \times T(X_2) \to T(X_i)$ are
maps $T(X_1) \times T(X_2) \to X_i$ in $\mathcal{K}{\kern-.2ex}\ell(T)$. Hence, as + is a
product, there exists a unique map $h \colon T(X_1) \times T(X_2) \to
X_1 + X_2$ in $\mathcal{K}{\kern-.2ex}\ell(T)$, \textit{i.e.} $h \colon T(X_1) \times T(X_2)
\to T(X_1 + X_2)$ in $\cat{C}$, such that $p_1 \klafter h = \pi_1$ and
$p_2 \klafter h = \pi_2$. It is readily checked that this map $h$ is
the inverse of $\ensuremath{\mathsf{bc}}\xspace$.
To prove the equivalence of $\textit{(i)}$ and $\textit{(iii)}$, first
assume that the monad $T$ is additive. In the category $\textsl{Alg}\xspace(T)$ of
algebras there is the standard product
$$\xymatrix@C-.5pc{
\Big(T(X)\ar[r]^-{\alpha} & X\Big)\times\Big(T(Y)\ar[r]^-{\beta} & Y\Big)
\stackrel{\textrm{def}}{=}
\Big(T(X\times Y)\ar[rrr]^-{\tuple{\alpha\mathrel{\circ} T(\pi_{1})}
{\beta\mathrel{\circ} T(\pi_{2})}} & & & X\times Y\Big).
}$$
\noindent In order to show that $\times$ also forms a coproduct in
$\textsl{Alg}\xspace(T)$, we first show that for an arbitrary algebra $\gamma\colon
T(Z)\rightarrow Z$ the object $Z$ carries a commutative monoid
structure. We do so by adapting the structure $(+,0)$ on $T(Z)$ from
Lemma~\ref{AdditiveMonadMonoidLem} to $(+_{Z}, 0_Z)$ on $Z$ via
$$\begin{array}{rcl}
+_{Z}
& \stackrel{\textrm{def}}{=} &
\xymatrix{
\Big(Z\times Z\ar[r]^-{\eta\times\eta} &
T(Z)\times T(Z)\ar[r]^-{+} & T(Z)\ar[r]^-{\gamma} & Z\Big)
} \\[-.3pc]
0_{Z}
& \stackrel{\textrm{def}}{=} &
\xymatrix{\Big(1 \ar[r]^-{0} &
T(Z)\ar[r]^-{\gamma} & Z\Big)
}
\end{array}$$
\noindent This monoid structure is preserved by homomorphisms of
algebras. Now, we can form coprojections $k_{1} = \tuple{\ensuremath{\mathrm{id}}}{0_{Y}
\mathrel{\circ}\;!} \colon X\rightarrow X\times Y$, and a cotuple of algebra
homomorphisms $\smash{(TX\stackrel{\alpha}{\rightarrow}X)
\stackrel{f}{\longrightarrow} (TZ\stackrel{\gamma}{\rightarrow}Z)}$
and $\smash{(TY\stackrel{\beta}{\rightarrow}Y)
\stackrel{g}{\longrightarrow} (TZ\stackrel{\gamma}{\rightarrow}Z)}$
given by
$$\xymatrix@C-.5pc{
\cotuple{f}{g}_{\textsl{Alg}\xspace} \stackrel{\textrm{def}}{=}
\Big(X\times Y\ar[r]^-{f\times g} & Z\times Z\ar[r]^-{+_Z} & Z\Big).
}$$
\noindent Again, remaining details are left to the reader.
Finally, to show that \textit{(iii)} implies \textit{(i)}, consider
the algebra morphisms:
$$\xymatrix@C-1pc{
\Big(T^2(X_i) \ar[r]^-{\mu} & T(X_i)\Big)\ar[rr]^-{T(\kappa_i)} & &
\Big(T^2(X_1 + X_2)\ar[r]^-{\mu} &T(X_1 +X_2)\Big).
}$$
\noindent The free functor $\cat{C}\rightarrow \textsl{Alg}\xspace(T)$ preserves
coproducts, so these $T(\kappa_{i})$ form a coproduct diagram in
$\textsl{Alg}\xspace(T)$. As $\times$ is a coproduct in $\textsl{Alg}\xspace(T)$, by assumption, the
cotuple $\cotuple{T(\kappa_1)}{T(\kappa_2)} \colon T(X_1) \times
T(X_2) \to T(X_1+X_2)$ in $\textsl{Alg}\xspace(T)$ is an isomorphism. The
coprojections $\ell_{i}\colon T(X_{i}) \rightarrow T(X_{1})\times
T(X_{2})$ satisfy $\ell_{1} =
\tuple{\pi_{1}\mathrel{\circ}\ell_{1}}{\pi_{2}\mathrel{\circ}\ell_{1}} =
\tuple{\idmap}{0}$, and similarly, $\ell_{2} = \tuple{0}{\idmap}$. Now we
compute:
$$\begin{array}{rcl}
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \cotuple{T(\kappa_1)}{T(\kappa_2)} \mathrel{\circ} \ell_{1}
& = &
\tuple{\mu\mathrel{\circ} T(p_{1})}{\mu\mathrel{\circ} T(p_{2})} \mathrel{\circ} T(\kappa_{1}) \\
& = &
\tuple{\mu \mathrel{\circ} T(p_{1} \mathrel{\circ} \kappa_{1})}
{\mu \mathrel{\circ} T(p_{2} \mathrel{\circ} \kappa_{1})} \\
& = &
\tuple{\mu \mathrel{\circ} T(\eta)}{\mu \mathrel{\circ} T(0)} \\
& = &
\tuple{\idmap}{0} \\
& = &
\ell_{1}.
\end{array}$$
\noindent Similarly, $\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \cotuple{T(\kappa_1)}{T(\kappa_2)}
\mathrel{\circ} \ell_{2} = \ell_{2}$, so that $\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ}
\cotuple{T(\kappa_1)}{T(\kappa_2)} = \idmap$, making $\ensuremath{\mathsf{bc}}\xspace$ an
isomorphism. \hspace*{\fill}$\QEDbox$
\end{proof}
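As a concrete sanity check on the theorem, separate from the proof: the multiset monad $M$ over the semiring $\mathbb{N}$ is a standard example of an additive monad, a Kleisli map $Z \to M(X)$ amounts to an $\mathbb{N}$-valued matrix, and Kleisli composition unfolds to matrix multiplication. The sketch below, our own illustration with multisets represented as Python dictionaries, verifies the biproduct equations $p_{j} \klafter \kappa_{i} = \ensuremath{\mathrm{id}}$ or $0$ of Definition~\ref{biprodcatdef} directly.

```python
# Illustration (not the paper's construction): the multiset monad M over N.
# A multiset over X is a dict from elements to multiplicities; a Kleisli
# map f: Z -> M(X) is then an N-valued matrix with rows indexed by Z.

def eta(x):
    """Unit of M: x |-> the singleton multiset."""
    return {x: 1}

def kleisli(f, g):
    """Kleisli composite of f: Z -> M(X) and g: X -> M(Y); unfolding the
    definitions, this is matrix multiplication over N."""
    def h(z):
        out = {}
        for x, m in f(z).items():
            for y, n in g(x).items():
                out[y] = out.get(y, 0) + m * n
        return out
    return h

# Coproduct X1 + X2 as tagged elements, with coprojections k_i = eta ∘ kappa_i
# and projections p_1 = [eta, 0], p_2 = [0, eta] as in the proof above.
k1 = lambda x: eta(('l', x))
k2 = lambda x: eta(('r', x))
p1 = lambda u: eta(u[1]) if u[0] == 'l' else {}
p2 = lambda u: eta(u[1]) if u[0] == 'r' else {}

assert kleisli(k1, p1)(7) == eta(7)   # p_1 after k_1 = id
assert kleisli(k2, p2)(7) == eta(7)   # p_2 after k_2 = id
assert kleisli(k1, p2)(7) == {}       # p_2 after k_1 = 0
assert kleisli(k2, p1)(7) == {}       # p_1 after k_2 = 0
```

\noindent In this representation the inverse $h$ of $\ensuremath{\mathsf{bc}}\xspace$ from the proof simply merges the two tagged components of a pair of multisets, recovering the block decomposition of a matrix along a biproduct.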
\auxproof{
\textbf{Proof (i) implies (ii) and (i) implies (iii):}
Unicity of the product map:\\
Suppose $h \colon Z \to X + Y$ is such that $p_i \klafter h = f_i$. To prove: $h = \tuple{f_1}{f_2}_{\mathcal{K}{\kern-.2ex}\ell}$, that is,
$$
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \tuple{f_1}{f_2} = h.
$$
This is equivalent to
$$
\tuple{p_1 \klafter h}{p_2 \klafter h} = \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} h.
$$
This is shown as follows:
$$
\begin{array}{rcl}
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} h &=& \tuple{\mu \mathrel{\circ} T(p_1)}{\mu \mathrel{\circ} T(p_2)} \mathrel{\circ} h \\
&=&
\tuple{\mu \mathrel{\circ} T(p_1) \mathrel{\circ} h}{\mu \mathrel{\circ} T(p_2) \mathrel{\circ} h} \\
&=&
\tuple{p_1 \klafter h}{p_2 \klafter h}
\end{array}
$$
Properties of the composition of projections with coprojections
$k_{i} = \eta \mathrel{\circ} \kappa_{i}$
$$
\begin{array}{rcl}
p_1 \klafter k_1 &=& \mu \mathrel{\circ} T(p_1) \mathrel{\circ} \eta \mathrel{\circ} \kappa_1 \\
&=&
\mu \mathrel{\circ} \eta \mathrel{\circ} p_1 \mathrel{\circ} \kappa_1\\
&=&
\cotuple{\eta}{0} \mathrel{\circ} {\kappa_1}\\
&=&
\eta \,\,(=id_{\mathcal{K}{\kern-.2ex}\ell})
\end{array}
$$
$$
\begin{array}{rcl}
p_1 \klafter k_2 &=& \mu \mathrel{\circ} T(p_1) \mathrel{\circ} \eta \mathrel{\circ} \kappa_2 \\
&=&
\cotuple{\eta}{0} \mathrel{\circ} \kappa_2\\
&=&
0
\end{array}
$$
In the category of algebras we first prove that $(+_{\gamma}, 0_{\gamma})$ is a
commutative monoid. We first check that they are homomorphisms of
algebras. For $0_{\gamma}$ this holds because $T(0)$ is final and
the map $0\colon T(0)\rightarrow T(Z)$ is $T(!_{Z})$, so that:
$$\xymatrix{
T^{2}(0)\ar[d]_{\mu_{0} = \;!}\ar[r]^-{T(0)} &
T^{2}(Z)\ar[d]_{\mu_Z}\ar[r]^-{T(\gamma)} & T(Z)\ar[d]^{\gamma} \\
T(0)\ar[r]^-{0} & T(Z)\ar[r]^-{\gamma} & Z
}$$
\noindent Similarly, the addition $+_\gamma$ is a homomorphism of
algebras $\gamma\times\gamma\rightarrow\gamma$, as shown in:
$$\begin{array}{rcl}
\lefteqn{+_{\gamma} \mathrel{\circ} (\gamma\times\gamma)} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \mathrel{\circ}
\tuple{\gamma\mathrel{\circ} T(\pi_{1})}{\gamma\mathrel{\circ} T(\pi_{2})} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \mathrel{\circ} \gamma\times\gamma \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} T(\gamma)\times T(\gamma) \mathrel{\circ} \eta\times\eta \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \\
& = &
\gamma \mathrel{\circ} T(\gamma) \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \\
& = &
\gamma \mathrel{\circ} \mu \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \mu\times\mu \mathrel{\circ} \eta\times\eta \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \mu\times\mu \mathrel{\circ} T(\eta)\times T(\eta) \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \\
& = &
\gamma \mathrel{\circ} \mu \mathrel{\circ} + \mathrel{\circ}
\tuple{T(\eta \mathrel{\circ} \pi_{1})}{T(\eta \mathrel{\circ} \pi_{2})} \\
& = &
\gamma \mathrel{\circ} \mu \mathrel{\circ} + \mathrel{\circ}
\tuple{T(\pi_{1} \mathrel{\circ} \eta\times\eta)}{T(\pi_{2} \mathrel{\circ} \eta\times\eta)} \\
& = &
\gamma \mathrel{\circ} \mu \mathrel{\circ} T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ} T(\eta\times\eta) \\
& = &
\gamma \mathrel{\circ} \mu \mathrel{\circ} T^{2}(\nabla) \mathrel{\circ}
T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})}) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ} T(\eta\times\eta) \\
& = &
\gamma \mathrel{\circ} T(\nabla) \mathrel{\circ} \mu \mathrel{\circ}
T(\cotuple{T(\kappa_{1})}{T(\kappa_{2})}) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ} T(\eta\times\eta) \\
& = &
\gamma \mathrel{\circ} T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \mu\times\mu \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ} T(\eta\times\eta) \\
& & \qquad\mbox{by Lemma~\ref{bcproplem}} \\
& = &
\gamma \mathrel{\circ} T(\nabla) \mathrel{\circ} \mu \mathrel{\circ} T(\ensuremath{\mathsf{bc}}\xspace^{-1}) \mathrel{\circ}
T(\eta\times\eta) \\
& & \qquad\mbox{again by Lemma~\ref{bcproplem}} \\
& = &
\gamma \mathrel{\circ} \mu \mathrel{\circ} T^{2}(\nabla) \mathrel{\circ} T(\ensuremath{\mathsf{bc}}\xspace^{-1}) \mathrel{\circ}
T(\eta\times\eta) \\
& = &
\gamma \mathrel{\circ} T(\gamma) \mathrel{\circ} T(T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ}
\eta\times\eta) \\
& = &
\gamma \mathrel{\circ} T(\gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta) \\
& = &
\gamma \mathrel{\circ} T(+_{\gamma}).
\end{array}$$
We continue with the unit law $+_{\gamma} \mathrel{\circ}
(\ensuremath{\mathrm{id}}\times 0_{\gamma}) = \pi_{1} \colon Z\times T(0) \rightarrow Z$,
where we omit the isomorphism $T(0)\congrightarrow 1$ for convenience
(or simply take $T(0)$ as final object).
$$\begin{array}{rcl}
+_{\gamma} \mathrel{\circ} (\ensuremath{\mathrm{id}}\times 0_{\gamma})
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \mathrel{\circ}
\ensuremath{\mathrm{id}}\times \gamma \mathrel{\circ} \ensuremath{\mathrm{id}}\times T(!_{Z}) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \ensuremath{\mathrm{id}}\times T(\gamma) \mathrel{\circ}
\eta\times\eta \mathrel{\circ} \ensuremath{\mathrm{id}}\times T(!_{Z}) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} T(\gamma)\times T(\gamma) \mathrel{\circ}
T(\eta)\times\eta \mathrel{\circ} \eta\times T(!_{Z}) \\
& & \qquad \mbox{since } \gamma \mathrel{\circ} \eta = \ensuremath{\mathrm{id}} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} T(\gamma)\times T(\gamma) \mathrel{\circ}
\eta\times\eta \mathrel{\circ} \eta\times T(!_{Z}) \\
& & \qquad \mbox{since } T(\eta) \mathrel{\circ} \eta = \eta \mathrel{\circ} \eta \\
& = &
\gamma \mathrel{\circ} T(\gamma) \mathrel{\circ} + \mathrel{\circ}
\eta\times\eta \mathrel{\circ} \eta\times T(!_{Z}) \\
& = &
\gamma \mathrel{\circ} \mu \mathrel{\circ} + \mathrel{\circ}
\eta\times\eta \mathrel{\circ} \eta\times T(!_{Z}) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \mu\times\mu \mathrel{\circ}
\eta\times\eta \mathrel{\circ} \eta\times T(!_{Z}) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \ensuremath{\mathrm{id}}{}\times T(!_{Z}) \mathrel{\circ} \eta\times\ensuremath{\mathrm{id}} \\
& = &
\gamma \mathrel{\circ} \pi_{1} \mathrel{\circ} \eta \times \ensuremath{\mathrm{id}} \\
& & \qquad \mbox{by the unit law for $+$} \\
& = &
\gamma \mathrel{\circ} \eta \mathrel{\circ} \pi_{1} \\
& = &
\pi_{1}
\end{array}$$
\noindent Commutativity is easy:
$$\begin{array}{rcl}
+_{\gamma} \mathrel{\circ} \tuple{\pi_{2}}{\pi_{1}}
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \mathrel{\circ} \tuple{\pi_{2}}{\pi_{1}} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \tuple{\eta\mathrel{\circ}\pi_{2}}{\eta\mathrel{\circ}\pi_{1}} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ}
\tuple{\pi_{2} \mathrel{\circ} \eta\times\eta}{\pi_{1}\mathrel{\circ}\eta\times\eta} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \tuple{\pi_{2}}{\pi_{1}} \mathrel{\circ} \eta\times\eta \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta
\qquad\mbox{by commutativity of $+$ on $T(Z)$} \\
& = &
+_{\gamma}.
\end{array}$$
\noindent Associativity also holds via associativity of $+$ on $T(Z)$,
using $\alpha =
\tuple{\tuple{\pi_1}{\pi_{1}\mathrel{\circ}\pi_{2}}}{\pi_{2}\mathrel{\circ}\pi_{2}}
\colon Z\times (Z\times Z)\rightarrow (Z\times Z)\times Z$. It
requires a bit more work.
$$\begin{array}{rcl}
\lefteqn{+_{\gamma} \mathrel{\circ} (+_{\gamma}\times\ensuremath{\mathrm{id}}) \mathrel{\circ} \alpha} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \mathrel{\circ}
((\gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta)\times\ensuremath{\mathrm{id}}) \mathrel{\circ} \alpha \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} ((\eta \mathrel{\circ} \gamma \mathrel{\circ} +)\times\ensuremath{\mathrm{id}})
\mathrel{\circ} ((\eta\times\eta)\times\eta) \mathrel{\circ} \alpha \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} (T(\gamma) \mathrel{\circ} \eta)\times (T(\gamma) \mathrel{\circ} T(\eta))
\mathrel{\circ} (+\times\ensuremath{\mathrm{id}}) \mathrel{\circ} \alpha \mathrel{\circ} (\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} T(\gamma) \mathrel{\circ} + \mathrel{\circ} (\eta\times T(\eta))
\mathrel{\circ} (+\times\ensuremath{\mathrm{id}}) \mathrel{\circ} \alpha \mathrel{\circ} (\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} \mu \mathrel{\circ} + \mathrel{\circ} (\eta\times T(\eta))
\mathrel{\circ} (+\times\ensuremath{\mathrm{id}}) \mathrel{\circ} \alpha \mathrel{\circ} (\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} (\mu\times\mu) \mathrel{\circ} (\eta\times T(\eta))
\mathrel{\circ} (+\times\ensuremath{\mathrm{id}}) \mathrel{\circ} \alpha \mathrel{\circ} (\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} (+\times\ensuremath{\mathrm{id}}) \mathrel{\circ} \alpha \mathrel{\circ}
(\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} (\ensuremath{\mathrm{id}}\times +) \mathrel{\circ} (\eta\times(\eta\times\eta))
\qquad\mbox{by associativity of +} \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} (\mu\times\mu) \mathrel{\circ} (T(\eta)\times \eta)
\mathrel{\circ} (\ensuremath{\mathrm{id}}\times +) \mathrel{\circ} (\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} \mu \mathrel{\circ} + \mathrel{\circ} (T(\eta)\times \eta)
\mathrel{\circ} (\ensuremath{\mathrm{id}}\times +) \mathrel{\circ} (\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} T(\gamma) \mathrel{\circ} + \mathrel{\circ} (T(\eta)\times \eta)
\mathrel{\circ} (\ensuremath{\mathrm{id}}\times +) \mathrel{\circ} (\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} (T(\gamma) \mathrel{\circ} T(\eta))\times (T(\gamma) \mathrel{\circ} \eta)
\mathrel{\circ} (\ensuremath{\mathrm{id}}\times +) \mathrel{\circ} (\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} (\ensuremath{\mathrm{id}}\times(\eta \mathrel{\circ} \gamma)) \mathrel{\circ}
(\ensuremath{\mathrm{id}}\times +) \mathrel{\circ} (\eta\times(\eta\times\eta)) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} (\eta\times\eta) \mathrel{\circ}
(\ensuremath{\mathrm{id}}\times(\gamma\mathrel{\circ} + \mathrel{\circ} \eta\times\eta)) \\
& = &
+_{\gamma} \mathrel{\circ} (\ensuremath{\mathrm{id}}\times +_{\gamma}).
\end{array}$$
Preservation by algebra homomorphism
$\smash{(TZ\stackrel{\gamma}{\rightarrow}Z)
\stackrel{h}{\longrightarrow} (TW\stackrel{\delta}{\rightarrow}W)}$
follows from:
$$\begin{array}{rcl}
+_{\delta} \mathrel{\circ} (h\times h)
& = &
\delta \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \mathrel{\circ} h\times h \\
& = &
\delta \mathrel{\circ} + \mathrel{\circ} T(h)\times T(h) \mathrel{\circ} \eta\times\eta \\
& = &
\delta \mathrel{\circ} T(h) \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} \gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} +_{\gamma} \\
h \mathrel{\circ} 0_{\gamma}
& = &
h \mathrel{\circ} \gamma \mathrel{\circ} 0 \\
& = &
\delta \mathrel{\circ} T(h) \mathrel{\circ} 0 \\
& = &
\delta \mathrel{\circ} 0 \\
& = &
0_{\delta}.
\end{array}$$
Now we can prove the cotuple laws:
$$\begin{array}{rcl}
\cotuple{f}{g}_{\textsl{Alg}\xspace} \mathrel{\circ} k_{1}
& = &
+_{\gamma} \mathrel{\circ} f\times g \mathrel{\circ} \tuple{\ensuremath{\mathrm{id}}}{0_{\beta} \mathrel{\circ} \;!} \\
& = &
+_{\gamma} \mathrel{\circ} f\times g \mathrel{\circ} \ensuremath{\mathrm{id}}\times 0_{\beta} \mathrel{\circ}
\tuple{\ensuremath{\mathrm{id}}}{!} \\
& = &
+_{\gamma} \mathrel{\circ} \ensuremath{\mathrm{id}}\times 0_{\gamma} \mathrel{\circ} f\times \ensuremath{\mathrm{id}} \mathrel{\circ}
\tuple{\ensuremath{\mathrm{id}}}{!} \\
& = &
\pi_{1} \mathrel{\circ} f\times \ensuremath{\mathrm{id}} \mathrel{\circ} \tuple{\ensuremath{\mathrm{id}}}{!} \\
& = &
f \mathrel{\circ} \pi_{1} \mathrel{\circ} \tuple{\ensuremath{\mathrm{id}}}{!} \\
& = &
f.
\end{array}$$
\noindent What we forgot to check is that $\cotuple{f}{g}_{\textsl{Alg}\xspace}$
is a homomorphism of algebra $\alpha\times\beta \rightarrow \gamma$.
This holds since:
$$\begin{array}{rcl}
\cotuple{f}{g}_{\textsl{Alg}\xspace} \mathrel{\circ} (\alpha\times\beta)
& = &
+_{\gamma} \mathrel{\circ} f\times g \mathrel{\circ}
\tuple{\alpha \mathrel{\circ} T(\pi_{1})}{\beta \mathrel{\circ} T(\pi_{2})} \\
& = &
+_{\gamma} \mathrel{\circ} f\times g \mathrel{\circ} \alpha\times\beta \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \\
& = &
+_{\gamma} \mathrel{\circ} \gamma\times \gamma \mathrel{\circ} T(f)\times T(g) \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \\
& = &
+_{\gamma} \mathrel{\circ} \gamma\times \gamma \mathrel{\circ}
\tuple{T(f\mathrel{\circ} \pi_{1})}{T(g\mathrel{\circ} \pi_{2})} \\
& = &
+_{\gamma} \mathrel{\circ} \gamma\times \gamma \mathrel{\circ}
\tuple{T(\pi_{1}) \mathrel{\circ} T(f\times g)}{T(\pi_{2}) \mathrel{\circ} T(f\times g)} \\
& = &
+_{\gamma} \mathrel{\circ} \gamma\times \gamma \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ} T(f\times g) \\
& = &
+_{\gamma} \mathrel{\circ}
\tuple{\gamma \mathrel{\circ} T(\pi_{1})}{\gamma \mathrel{\circ} T(\pi_{2})} \mathrel{\circ}
T(f\times g) \\
& = &
\gamma \mathrel{\circ} T(+_{\gamma}) \mathrel{\circ} T(f\times g) \\
& & \qquad\mbox{since $+_\gamma$ is a homomorphism of algebras} \\
& = &
\gamma \mathrel{\circ} T(\cotuple{f}{g}_{\textsl{Alg}\xspace})
\end{array}$$
\noindent If $h\colon X\times Y\rightarrow Z$ is also an algebra
homomorphism satisfying $h \mathrel{\circ} k_{1} = f$ and $h\mathrel{\circ} k_{2} = g$,
then:
$$\begin{array}{rcl}
\lefteqn{\cotuple{f}{g}_{\textsl{Alg}\xspace}} \\
& = &
+_{\gamma} \mathrel{\circ} (f\times g) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} \eta\times\eta \mathrel{\circ}
(h\mathrel{\circ} k_{1})\times (h\mathrel{\circ} k_{2}) \\
& = &
\gamma \mathrel{\circ} + \mathrel{\circ} T(h\mathrel{\circ} k_{1})\times T(h\mathrel{\circ} k_{2}) \mathrel{\circ}
\eta\times\eta \\
& = &
\gamma \mathrel{\circ} T(h) \mathrel{\circ} + \mathrel{\circ} T(k_{1})\times T(k_{2}) \mathrel{\circ}
\eta\times\eta \\
& = &
h \mathrel{\circ} \tuple{\alpha\mathrel{\circ} T(\pi_{1})}{\beta \mathrel{\circ} T(\pi_{2})} \mathrel{\circ}
+ \mathrel{\circ} \\
& & \qquad
T(\ensuremath{\mathrm{id}}\times(\beta\mathrel{\circ} T(!)) \mathrel{\circ} \rho^{-1})\times
T((\alpha\mathrel{\circ} T(!))\times \ensuremath{\mathrm{id}}\mathrel{\circ} \lambda^{-1})
\mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} \alpha\times\beta \mathrel{\circ} \tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ}
+ \mathrel{\circ} \\
& & \qquad
T((\alpha\mathrel{\circ}\eta)\times(\beta\mathrel{\circ} T(!)) \mathrel{\circ} \rho^{-1})\times
T((\alpha\mathrel{\circ} T(!))\times (\beta\mathrel{\circ}\eta)\mathrel{\circ} \lambda^{-1})
\mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} \alpha\times\beta \mathrel{\circ} \tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ}
+ \mathrel{\circ} T(\alpha\times\beta)\times T(\alpha\times\beta) \mathrel{\circ} \\
& & \qquad
T(\eta\times T(!)\mathrel{\circ} \rho^{-1})\times
T(T(!)\times \eta \mathrel{\circ} \lambda^{-1}) \mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} \alpha\times\beta \mathrel{\circ} \tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ}
T(\alpha\times\beta) \mathrel{\circ} + \mathrel{\circ} \\
& & \qquad
T(\eta\times T(!)\mathrel{\circ} \rho^{-1})\times
T(T(!)\times \eta \mathrel{\circ} \lambda^{-1}) \mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} \alpha\times\beta \mathrel{\circ} T(\alpha)\times T(\beta) \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ} T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \\
& & \qquad
T(\eta\times T(!)\mathrel{\circ} \rho^{-1})\times
T(T(!)\times \eta \mathrel{\circ} \lambda^{-1}) \mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} \alpha\times\beta \mathrel{\circ} \mu\times \mu \mathrel{\circ}
\tuple{T(\pi_{1})}{T(\pi_{2})} \mathrel{\circ} T(\nabla) \mathrel{\circ} \\
& & \qquad
T\Big(\big(\eta\times T(!)\mathrel{\circ} \rho^{-1}\big) +
\big(T(!)\times \eta \mathrel{\circ} \lambda^{-1}\big)\Big)
\mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \eta\times\eta \\
& \stackrel{(*)}{=} &
h \mathrel{\circ} \alpha\times\beta \mathrel{\circ} \mu\times \mu \mathrel{\circ}
\tuple{T(p_{1})}{T(p_{2})} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} \alpha\times\beta \mathrel{\circ}
\tuple{\mu \mathrel{\circ} T(p_{1})}{\mu \mathrel{\circ} T(p_{2})} \mathrel{\circ}
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} \alpha\times\beta \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ}
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \eta\times\eta \\
& = &
h \mathrel{\circ} \alpha\times\beta \mathrel{\circ} \eta\times\eta \\
& = &
h,
\end{array}$$
\noindent where the marked equation $\stackrel{(*)}{=}$ holds because:
$$\begin{array}{rcl}
\lefteqn{\pi_{1} \mathrel{\circ} \nabla \mathrel{\circ}
\Big(\big(\eta\times T(!)\mathrel{\circ} \rho^{-1}\big) +
\big(T(!)\times \eta \mathrel{\circ} \lambda^{-1}\big)\Big)} \\
& = &
\pi_{1} \mathrel{\circ}
\big[\eta\times T(!)\mathrel{\circ} \rho^{-1}, \;
T(!)\times \eta \mathrel{\circ} \lambda^{-1}\big] \\
& = &
\big[\pi_{1} \mathrel{\circ} \eta\times T(!)\mathrel{\circ} \rho^{-1}, \;
\pi_{1} \mathrel{\circ} T(!)\times \eta \mathrel{\circ} \lambda^{-1}\big] \\
& = &
\big[\eta \mathrel{\circ} \pi_{1} \mathrel{\circ} \rho^{-1}, \;
T(!) \mathrel{\circ} \pi_{1} \mathrel{\circ} \lambda^{-1}\big] \\
& = &
\cotuple{\eta}{T(!) \mathrel{\circ} \;!} \qquad \mbox{since } \\
& & \qquad
\xymatrix{\big(T(X)\ar[r]^-{\rho^{-1}}_-{\cong} &
T(X)\times T(0)\ar[r]^-{\pi_{1}}_-{\cong} & T(X)\big) = \ensuremath{\mathrm{id}}_{T(X)} } \\
& & \qquad
\xymatrix{\big(T(Y)\ar[r]^-{\lambda^{-1}}_-{\cong} &
T(0)\times T(Y)\ar[r]^-{\pi_{1}}_-{\cong} & T(0)\big) = \; !_{T(Y)} } \\
& = &
\cotuple{\eta}{0} \\
& = &
p_{1}.
\end{array}$$
\textbf{Proof of (ii) implies (i) and (iii) implies (i):}
\renewcommand{\theenumi}{(\roman{enumi})}
\begin{enumerate}
\item Kleisli category.\\
First we show that $q_i = p_i \colon X_1+X_2 \to T(X_i)$.
$$
\begin{array}{rcll}
q_1 \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace} &=& \mu \mathrel{\circ} \eta \mathrel{\circ} q_1 \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace}\\
&=&
\mu \mathrel{\circ} Tq_1 \mathrel{\circ} \eta \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace}\\
&=&
q_1 \klafter \kappa_1^{\mathcal{K}{\kern-.2ex}\ell}\\
&=&
\idmap^{\mathcal{K}{\kern-.2ex}\ell} = \eta
\end{array}
$$
And similarly $q_1 \mathrel{\circ} \kappa_2^{\Cat{Sets}\xspace} = q_1 \klafter \kappa_2^{\mathcal{K}{\kern-.2ex}\ell} = 0$, hence $q_1 = \cotuple{\eta}{0} = p_1$.
Let $h$ be the map defined in the proof above. Then $\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} h = \idmap$, as
$$
\begin{array}{rcll}
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} h &=& \tuple{\mu \mathrel{\circ} T(p_1)}{\mu \mathrel{\circ} T(p_2)} \mathrel{\circ} h\\
&=&
\tuple{\mu \mathrel{\circ} T(p_1)\mathrel{\circ} h}{\mu \mathrel{\circ} T(p_2)\mathrel{\circ} h}\\
&=&
\tuple{p_1 \klafter h}{p_2 \klafter h} \\
&=&
\tuple{\pi_1}{\pi_2} = \idmap
\end{array}
$$
Also $h \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace = \ensuremath{\mathrm{id}} \colon T(X_1+X_2) \to T(X_1+X_2)$. To prove this, note that $h \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace$ is a map $T(X_1+X_2) \to X_1 + X_2$ in $\mathcal{K}{\kern-.2ex}\ell(T)$. Composing with the projections in $\mathcal{K}{\kern-.2ex}\ell(T)$ yields:
$$
p_1 \klafter (h \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace) = \mu \mathrel{\circ} Tp_1 \mathrel{\circ} h \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace = (p_1 \klafter h) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace = \pi_1 \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace
$$
and similarly $p_2 \klafter (h \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace) = \pi_2 \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace$. Hence
$$
h \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace = \tuple{\pi_1 \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace}{\pi_2 \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace}_{\mathcal{K}{\kern-.2ex}\ell} = \tuple{\mu \mathrel{\circ} Tp_1}{\mu \mathrel{\circ} Tp_2} = \idmap,
$$
as $p_i \klafter \ensuremath{\mathrm{id}} = \mu \mathrel{\circ} Tp_i$.
So $h$ is the inverse of $\ensuremath{\mathsf{bc}}\xspace$.
\item $\textsl{Alg}\xspace(T)$\\
As, by assumption, $\times$ is a biproduct in $\textsl{Alg}\xspace(T)$, there exist
coprojections $l_i \colon (T(A) \to A) \to (T(A\times B) \to A\times
B)$. By definition of the biproduct, $\pi_1 \mathrel{\circ} l_1 = \idmap$ and
$\pi_2 \mathrel{\circ} l_1 = 0$. The map $0 \colon A \to B$ in $\textsl{Alg}\xspace(T)$ is the
unique map $A \xrightarrow{!} 1 \xrightarrow{!} B$. Considering the
diagram
$$
\xymatrix{
T(1) \ar[r]^{\cong} \ar[d]_{!} & T^2(0) \ar[d]^{\mu} \ar[r]^{T^2(!)} & T^2(B) \ar[r]^{T(\beta)} \ar[d]^{\mu} & T(B) \ar[d]^{\beta}\\
1 \ar[r]^{\cong} & T(0) \ar[r]^-{T(!)} & T(B) \ar[r]^-{\beta} &B
}
$$
we see that $\beta \mathrel{\circ} T(!) \colon 1 \to B$ is an algebra morphism, and hence it is the unique map $1 \to B$. So
$$
l_1 = \tuple{\idmap}{\beta \mathrel{\circ} T(!) \mathrel{\circ} !}
$$
Write $g = \cotuple{T(\kappa_1)}{T(\kappa_2)} \colon T(X_1) \times T(X_2) \to T(X_1+X_2)$. We prove that $g$ is the inverse of $\ensuremath{\mathsf{bc}}\xspace$.
First we show that $\ensuremath{\mathsf{bc}}\xspace$ is a morphism in $\textsl{Alg}\xspace(T)$. We have to prove
that the following diagram commutes:
$$
\xymatrix{
T^2(X_1 + X_2) \ar[r]^-{T\ensuremath{\mathsf{bc}}\xspace} \ar[d]_{\mu} & T(T(X_1) \times T(X_2)) \ar[d]^-{\tuple{\mu \mathrel{\circ} T\pi_1}{\mu \mathrel{\circ} T\pi_2}}\\
T(X_1+X_2) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} & T(X_1) \times T(X_2)
}
$$
$$
\begin{array}{rcll}
\tuple{\mu \mathrel{\circ} T\pi_1}{\mu \mathrel{\circ} T\pi_2} \mathrel{\circ} T\ensuremath{\mathsf{bc}}\xspace &=& \tuple{\mu \mathrel{\circ} T\mu \mathrel{\circ} T^2p_1}{\mu \mathrel{\circ} T\mu \mathrel{\circ} T^2p_2}\\
&=&
\tuple{\mu \mathrel{\circ} \mu \mathrel{\circ} T^2p_1}{\mu \mathrel{\circ} \mu \mathrel{\circ} T^2p_2}\\
&=&
\tuple{\mu \mathrel{\circ} Tp_1 \mathrel{\circ} \mu}{\mu \mathrel{\circ} Tp_2 \mathrel{\circ} \mu}\\
&=&
\tuple{\mu \mathrel{\circ} Tp_1}{\mu \mathrel{\circ} Tp_2} \mathrel{\circ} \mu = \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \mu
\end{array}
$$
Now we consider $\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g \colon T(X_1) \times T(X_2) \to T(X_1) \times T(X_2)$. The coprojection $l_1 \colon T(X_1) \to T(X_1) \times T(X_2)$ is given by
$$
l_1 = \tuple{\idmap}{\mu \mathrel{\circ} T(!) \mathrel{\circ} !} = \tuple{\idmap}{T(!)\mathrel{\circ} !}
$$
Note that:
$$
\begin{array}{rcll}
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g \mathrel{\circ} l_1 &=& \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} T\kappa_1\\
&=&
\tuple{\mu \mathrel{\circ} T(\cotuple{\eta}{0})}{\mu \mathrel{\circ} T(\cotuple{0}{\eta})} \mathrel{\circ} T\kappa_1 &\text{where}\,0=X_1 \xrightarrow{!} T(0) \xrightarrow{T(!)} T(X_2)\\
&=&
\tuple{\mu \mathrel{\circ} T\eta}{\mu \mathrel{\circ} T(0)}\\
&=&
\tuple{\idmap}{\mu \mathrel{\circ} T^2(!) \mathrel{\circ} T(!)}\\
&=&
\tuple{\idmap}{T(!) \mathrel{\circ} \mu \mathrel{\circ} T(!)}\\
&=&
\tuple{\idmap}{T(!) \mathrel{\circ} !} = l_1 &\text{$\mu \mathrel{\circ} T(!)$ maps into the final object}\\
\end{array}
$$
Similarly $\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g \mathrel{\circ} l_2 = l_2$. Hence, by the property of the coproduct, $\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g = \idmap$.
To prove that $g \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace = \ensuremath{\mathrm{id}} \colon T(X_1+X_2) \to T(X_1+X_2)$, first note that the free construction
$$
F \colon \cat{C} \leftrightarrows \textsl{Alg}\xspace(T) \colon U,
$$
where $F(X) = \big(T^2(X) \xrightarrow{\mu} T(X)\big)$, being a left adjoint, preserves coproducts.
Hence $T^2(X_1 + X_2) \xrightarrow{\mu} T(X_1+X_2)$ is the coproduct of $T^2(X_1) \xrightarrow{\mu} T(X_1)$ and $T^2(X_2) \xrightarrow{\mu} T(X_2)$ with coprojection $T\kappa_1$ and $T\kappa_2$.
$$
\begin{array}{rcll}
g \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} T\kappa_1 &=& g \mathrel{\circ} \tuple{\mu \mathrel{\circ} T\eta}{\mu \mathrel{\circ} T(0)}\\
&=&
g \mathrel{\circ} \tuple{\idmap}{\mu \mathrel{\circ} T(0)}\\
&=&
g \mathrel{\circ} l_1 = T\kappa_1
\end{array}
$$
Similarly $g \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} T\kappa_2 = T\kappa_2$. Hence, by the property of the coproduct, $g\mathrel{\circ}\ensuremath{\mathsf{bc}}\xspace =\idmap$. So $g$ is the inverse of $\ensuremath{\mathsf{bc}}\xspace$.
\end{enumerate}
}
It is well-known (see for instance~\cite{KellyL80,AbramskyC04}) that a
category with finite biproducts $(\oplus, 0)$ is enriched over
commutative monoids: each homset carries a commutative monoid
structure $(+,0)$, and this structure is preserved by pre- and
post-composition. The addition operation $+$ on homsets is obtained
as
\begin{equation}
\label{HomsetPlus}
\xymatrix{
f+g \stackrel{\textrm{def}}{=} \Big(X\ar[r]^-{\tuple{\ensuremath{\mathrm{id}}}{\ensuremath{\mathrm{id}}}} &
X\oplus X\ar[r]^-{f\oplus g} &
Y\oplus Y\ar[r]^-{\cotuple{\ensuremath{\mathrm{id}}}{\ensuremath{\mathrm{id}}}} & Y\Big).
}
\end{equation}
\noindent The zero map is the neutral element for this addition. One can
also describe a monoid structure on each object $X$ as
\begin{equation}
\label{ObjMon}
\xymatrix{
X\oplus X\ar[r]^-{\cotuple{\ensuremath{\mathrm{id}}}{\ensuremath{\mathrm{id}}}} & X &
0.\ar[l]_-{0}
}
\end{equation}
\noindent We have just seen that the Kleisli category of an additive
monad has biproducts, using the addition operation from
Lemma~\ref{AdditiveMonadMonoidLem}. When we apply the
description~\eqref{ObjMon} to such a Kleisli category with its biproducts,
we recover precisely the original addition from
Lemma~\ref{AdditiveMonadMonoidLem}, since the codiagonal $\nabla =
\cotuple{\ensuremath{\mathrm{id}}}{\ensuremath{\mathrm{id}}}$ in the Kleisli category is given by $T(\nabla)
\mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1}$.
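As an illustrative aside (ours, not part of the formal development), the homset addition~\eqref{HomsetPlus} can be made concrete for the multiset monad $M_{\ensuremath{\mathbb{N}}}$ of Definition~\ref{MultisetDef} below: a Kleisli map $X \to T(Y)$ sends $x$ to a finite-support function $\{y \mapsto \text{multiplicity}\}$, and $f+g = \cotuple{\ensuremath{\mathrm{id}}}{\ensuremath{\mathrm{id}}} \mathrel{\circ} (f\oplus g) \mathrel{\circ} \tuple{\ensuremath{\mathrm{id}}}{\ensuremath{\mathrm{id}}}$ becomes pointwise addition. A minimal Python sketch, with names of our own choosing:

```python
# Hedged illustration: homset addition in the Kleisli category of the
# finite-multiset monad over N.  A multiset is a dict {y: multiplicity}
# with zero entries dropped; Kleisli maps are functions x -> dict.

def kleisli_sum(f, g):
    """(f+g)(x) is the pointwise sum of the multisets f(x) and g(x)."""
    def h(x):
        out = dict(f(x))
        for y, t in g(x).items():
            out[y] = out.get(y, 0) + t
        return {y: v for y, v in out.items() if v != 0}
    return h

f = lambda x: {x: 2}                 # x |-> 2x
g = lambda x: {x: 1, x.upper(): 3}   # x |-> 1x + 3X

assert kleisli_sum(f, g)('a') == {'a': 3, 'A': 3}
# the zero map x |-> {} is the neutral element:
zero = lambda x: {}
assert kleisli_sum(f, zero)('a') == f('a')
```

The zero Kleisli map acting as neutral element matches the role of the zero map in the commutative-monoid enrichment described above.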
\subsection{Additive commutative monads}
In the remainder of this section we focus on the category
$\Cat{ACMnd}\xspace(\cat{C})$ of monads that are both additive and commutative on
a distributive category \cat{C}. As usual, we simply write $\Cat{ACMnd}\xspace$ for
$\Cat{ACMnd}\xspace(\Cat{Sets}\xspace)$. For $T \in \Cat{ACMnd}\xspace(\cat{C})$, the Kleisli
category $\mathcal{K}{\kern-.2ex}\ell(T)$ is both symmetric monoidal---with $(\times,1)$ as
monoidal structure, see Lemma~\ref{KleisliStructLem}---and has
biproducts $(+,0)$. Moreover, it is not hard to see that this monoidal
structure distributes over the biproducts via the canonical map
$(Z\times X)+(Z\times Y)\rightarrow Z\times (X+Y)$ that can be lifted
from \cat{C} to $\mathcal{K}{\kern-.2ex}\ell(T)$.
We shall write $\Cat{SMBLaw}\xspace \hookrightarrow \Cat{SMLaw}\xspace$ for the category of
symmetric monoidal Lawvere theories in which $(+,0)$ form not only
coproducts but biproducts. Notice that a projection $\pi_{1}\colon
n+m\rightarrow n$ is necessarily of the form $\pi_{1} =
\cotuple{\idmap}{0}$, where $0 \colon m\rightarrow n$ is the zero map
$m\rightarrow 0 \rightarrow n$. The tensor $\otimes$ distributes over
$(+,0)$ in \Cat{SMBLaw}\xspace, as it already does so in \Cat{SMLaw}\xspace. Morphisms in
\Cat{SMBLaw}\xspace are functors that strictly preserve all the structure.
The following result extends Corollary~\ref{Mnd2FCCatCor}.
\begin{lemma}
The (finitary) Kleisli construction on a monad yields a functor
$\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \colon \Cat{ACMnd}\xspace \rightarrow \Cat{SMBLaw}\xspace$.
\end{lemma}
\begin{proof}
It follows from Theorem~\ref{AMnd2BCat} that $(+,0)$ form biproducts
in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$, for $T$ an additive commutative monad (on
\Cat{Sets}\xspace). This structure is preserved by functors $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma)$,
for $\sigma\colon T\rightarrow S$ in \Cat{ACMnd}\xspace. \hspace*{\fill}$\QEDbox$
\auxproof{
$$
\begin{array}[b]{rcl}
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma)(k_{i})
& = &
\sigma \mathrel{\circ} \eta \mathrel{\circ} \kappa_{i} \\
& = &
\eta \mathrel{\circ} \kappa_{i} \\
& = &
k_{i} \\
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma)(p_{1})
& = &
\sigma \mathrel{\circ} \cotuple{\eta}{0} \\
& = &
\cotuple{\sigma\mathrel{\circ}\eta}{\sigma\mathrel{\circ} 0} \\
& = &
\cotuple{\eta}{0} \\
& = &
p_{1} \\
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma)(\cotuple{f}{g}_{\mathcal{K}{\kern-.2ex}\ell})
& = &
\sigma \mathrel{\circ} \cotuple{f}{g} \\
& = &
\cotuple{\sigma\mathrel{\circ} f}{\sigma \mathrel{\circ} g} \\
& = &
\cotuple{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma)(f)}{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma)(g)}_{\mathcal{K}{\kern-.2ex}\ell} \\
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma)(\tuple{f}{g}_{\mathcal{K}{\kern-.2ex}\ell})
& = &
\sigma \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \tuple{f}{g} \\
& = &
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (\sigma\times\sigma) \mathrel{\circ} \tuple{f}{g} \\
& = &
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \tuple{\sigma\mathrel{\circ} f}{\sigma \mathrel{\circ} g} \\
& = &
\tuple{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma)(f)}{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma)(g)}_{\mathcal{K}{\kern-.2ex}\ell}.
\end{array}
$$
}
\end{proof}
We have already seen in Lemma \ref{LMCommLem} that the functor $\mathcal{T}
\colon \Cat{Law}\xspace\rightarrow\Cat{Mnd}\xspace$ defined in Lemma \ref{GLaw2MndLem}
restricts to a functor between symmetric monoidal Lawvere theories and
commutative monads. We now show that it also restricts to a functor
between symmetric monoidal Lawvere theories with biproducts and
commutative additive monads. Again, this restriction is left adjoint
to the finitary Kleisli construction.
\begin{lemma}
\label{LMCSRngLem}
The functor $\mathcal{T}\colon\Cat{SMLaw}\xspace\rightarrow\Cat{CMnd}\xspace$ from Lemma
\ref{LMCommLem} restricts to $\Cat{SMBLaw}\xspace \rightarrow \Cat{ACMnd}\xspace$. Further,
this restriction is left adjoint to the finitary Kleisli construction
$\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\colon \Cat{ACMnd}\xspace\rightarrow \Cat{SMBLaw}\xspace$.
\end{lemma}
\begin{proof}
First note that $T_{\cat{L}}(0)$ is final:
$$
\begin{array}{rclclcl}
T_{\cat{L}}(0) &=& \textstyle{\coprod_i}\,\cat{L}(1,i) \times 0^i &\cong& \cat{L}(1,0) \times 0^0 &\cong& 1,
\end{array}
$$
\noindent where the last isomorphism follows from the fact that
$(+,0)$ is a biproduct in $\cat{L}$ and hence $0$ is terminal. The
resulting zero map $0_{X,Y} \colon X \to T(Y)$ is given by
$$
\begin{array}{rcl}
x &\mapsto& [\kappa_0(!\colon 1 \to 0, ! \colon 0 \to Y)].
\end{array}
$$
To prove that the bicartesian map $\ensuremath{\mathsf{bc}}\xspace \colon T_{\cat{L}}(X+Y) \to
T_{\cat{L}}(X) \times T_{\cat{L}}(Y)$ is an isomorphism, we introduce
some notation. For $[\kappa_i(g,v)] \in T_{\cat{L}}(X+Y)$, where $g
\colon 1 \to i$ and $v \colon i \to X+Y$, we form the pullbacks (in
\Cat{Sets}\xspace)
$$\xymatrix{
i_{X}\ar[d]_{v_X}\ar[r]\pullback & i\ar[d]^-{v} &
i_{Y}\ar[d]^{v_Y}\ar[l]\pullback[dl] \\
X\ar[r]^-{\kappa_{1}} & X+Y & Y\ar[l]_-{\kappa_{2}}
}$$
\noindent By universality of coproducts we can write $i = i_{X} + i_{Y}$
and $v = v_{X}+v_{Y} \colon i_{X}+i_{Y} \rightarrow X+Y$. Then we
can also write $g = \tuple{g_X}{g_Y} \colon 1\rightarrow i_{X}+i_{Y}$.
Hence, for $[\kappa_i(g,v)] \in T_{\cat{L}}(X+Y)$,
\begin{equation}
\label{LawMndbcEqn}
\begin{array}{rcl}
\ensuremath{\mathsf{bc}}\xspace([\kappa_i(g,v)])
& = &
\big([\kappa_{i_X}(g_{X}, v_{X})], \, [\kappa_{i_Y}(g_{Y}, v_{Y})]\big).
\end{array}
\end{equation}
\noindent It then easily follows that the map $T_{\cat{L}}(X) \times
T_{\cat{L}}(Y) \to T_{\cat{L}}(X+Y)$ defined by
$$
\begin{array}{rcll}
([\kappa_i(g,v)], [\kappa_j(h,w)])& \mapsto & [\kappa_{i+j}(\langle g, h\rangle,v+w)]
\end{array}
$$
is the inverse of $\ensuremath{\mathsf{bc}}\xspace$.
Checking that the unit of the adjunction
$\mathcal{T}\colon\Cat{SMLaw}\xspace\leftrightarrows\Cat{CMnd}\xspace\colon \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$ preserves the
product structure is left to the reader. This proves that the
restricted functors also form an adjunction.\hspace*{\fill}$\QEDbox$
\end{proof}
\auxproof{
To see that the map $\ensuremath{\mathsf{bc}}\xspace = \tuple{\mu \mathrel{\circ} T_{\cat{L}}p_1}{\mu \mathrel{\circ} T_{\cat{L}}p_2}$ is indeed given as above, note that for $[\kappa_i(g,v)] \in T_{\cat{L}}(X+Y)$
$$
(\mu \mathrel{\circ} T([\eta,0]))([\kappa_i(g,v)]) = \mu([\kappa_i(g, [\eta,0] \mathrel{\circ} v)])
$$
and
$$
([\eta,0] \mathrel{\circ} v)(a) = \left\{
\begin{array}{ll}
\kappa_1([\idmap,a]) & \text{if}\, v(a) \in X \\
\kappa_0([!,!]) & \text{otherwise}.
\end{array} \right.
$$
Write $f$ for the map $T(X) \times T(Y) \to T(X+Y)$, $([\kappa_i(g,v)], [\kappa_j(h,w)]) \mapsto [\kappa_{i+j}(\langle g, h\rangle,v+w)]$. Claim: $f$ is the inverse of $\ensuremath{\mathsf{bc}}\xspace$.
$$
\begin{array}{rcll}
(f \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace)([\kappa_i(g,v)]) &=& [\kappa_{i_X+i_Y}\big(\tuple{\langle \pi_{x_0}, \ldots, \pi_{x_{i_X-1}}\rangle \mathrel{\circ} g}{\langle \pi_{y_0}, \ldots, \pi_{y_{i_Y-1}}\rangle \mathrel{\circ} g},\\
&&
v \mathrel{\circ} [\kappa_{x_0}, \ldots \kappa_{x_{i_X-1}}] + v \mathrel{\circ} [\kappa_{y_0}, \ldots \kappa_{y_{i_Y-1}}]\big)]\\
&=&
[\kappa_i\big(\langle \pi_{x_0}, \ldots, \pi_{x_{i_X-1}}, \pi_{y_0}, \ldots, \pi_{y_{i_Y-1}}\rangle \mathrel{\circ} g,\\
&&
v \mathrel{\circ}[\kappa_{x_0}, \ldots \kappa_{x_{i_X-1}},\kappa_{y_0}, \ldots \kappa_{y_{i_Y-1}}]\big)]\\
&=&
[\kappa_i(g,v \mathrel{\circ}[\kappa_{x_0}, \ldots \kappa_{x_{i_X-1}},\kappa_{y_0}, \ldots \kappa_{y_{i_Y-1}}] \mathrel{\circ} \langle \pi_{x_0}, \ldots, \pi_{x_{i_X-1}}, \pi_{y_0}, \ldots, \pi_{y_{i_Y-1}}\rangle)]\\
&=&
[\kappa_i(g,v)]
\end{array}
$$
$$
\begin{array}{rcll}
(\ensuremath{\mathsf{bc}}\xspace\mathrel{\circ} f)([\kappa_i(g,v)], [\kappa_j(h,w)])
&=&
\ensuremath{\mathsf{bc}}\xspace([\kappa_{i+j}(\langle g, h\rangle,v+w)])\\
&=&
\big([\kappa_i(\langle \pi_0, \ldots \pi_{i-1} \rangle \mathrel{\circ} \langle g, h\rangle, (v+w) \mathrel{\circ} [\kappa_0, \ldots \kappa_{i-1}])],\\
&&
[\kappa_j(\langle \pi_i, \ldots \pi_{i+j} \rangle \mathrel{\circ} \langle g, h\rangle, (v+w) \mathrel{\circ} [\kappa_i, \ldots \kappa_{i+j}])]\big)\\
&=&
([\kappa_i(g,v)], [\kappa_j(h,w)])
\end{array}
$$
Hence, $\ensuremath{\mathsf{bc}}\xspace$ is an isomorphism.
To show that $\eta = \overline{\idmap_{T_{\cat{L}}}} \colon \cat{L} \to (\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathrel{\circ}\mathcal{T})(\cat{L})$ preserves the product structure consider $n+m \xrightarrow{\pi_1^{\cat{L}}} n$ in $\cat{L}$. We will show that $\overline{\idmap}(\pi_1^{\cat{L}}) \colon n+m \to T_{\cat{L}}(n) = \pi_1^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T_{\cat{L}})} = [\eta, 0]$ (where this last $\eta$ is the unit of the monad $T_{\cat{L}}$). For $i \in n$,
$$
\begin{array}{rcll}
\overline{\idmap}(\pi_1^{\cat{L}})(i) &=& [\kappa_n(\pi_1^{\cat{L}}\mathrel{\circ}\kappa_i, \idmap_n)]\\
&=&
[\kappa_1(\idmap, \pi_1 \mathrel{\circ} \kappa_i)] &\text{eq. rel}\\
&=&
[\kappa_1(\idmap, i)]\\
&=&
\eta(i)
\end{array}
$$
For $i \in m$,
$$
\begin{array}{rcll}
\overline{\idmap}(\pi_1^{\cat{L}})(i) &=& [\kappa_n(\pi_1^{\cat{L}}\mathrel{\circ}\kappa_i, \idmap_n)]\\
&=&
[\kappa_n(\pi_1^{\cat{L}}\mathrel{\circ}\kappa_2 \mathrel{\circ} \kappa_{i-n}, \idmap_n)] &(\text{where}\, \kappa_{i-n} \colon 1 \to m)\\
&=&
[\kappa_n(0_{m,n} \mathrel{\circ} \kappa_{i-n}, \idmap_n)] \\
&=&
[\kappa_n(! \mathrel{\circ} ! \mathrel{\circ} \kappa_{i-n}, \idmap_n)]\\
&=&
[\kappa_0(! \mathrel{\circ} \kappa_{i-n}, \idmap_n \mathrel{\circ} !)]\\
&=&
[\kappa_0(!,!)]\\
&=&
0_{m,n}(i)
\end{array}
$$
}
In the next two sections we will see how additive commutative monads
and symmetric monoidal Lawvere theories with biproducts relate to
commutative semirings.
\section{Semirings and monads}\label{SemiringMonadSec}
This section starts with some clarification about semirings and
modules. Then it shows how semirings give rise to certain
``multiset'' monads, which are both commutative and additive. It is
shown that the ``evaluate at unit 1''-functor yields a map in the
reverse direction, giving rise to an adjunction, as before.
A commutative semiring in \Cat{Sets}\xspace consists of a set $S$ together with
two commutative monoid structures, one additive $(+,0)$ and one
multiplicative $(\cdot, 1)$, where the latter distributes over the
former: $s\cdot 0 = 0$ and $s\cdot (t+r) = s\cdot t + s\cdot r$. For
more information on semirings, see~\cite{Golan99}. Here we only
consider commutative ones. Typical examples are the natural
numbers $\ensuremath{\mathbb{N}}$, the non-negative rationals $\mathbb{Q}_{\geq 0}$, and
the non-negative reals $\mathbb{R}_{\geq 0}$.
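As a quick sanity check (our own illustration, not drawn from the paper), the semiring equations can be verified exhaustively for another standard commutative semiring, the two-element Boolean one $(\{0,1\}, \vee, 0, \wedge, 1)$:

```python
# Hedged check: the Boolean semiring ({0,1}, or, 0, and, 1) satisfies
# s.0 = 0 and s.(t+r) = s.t + s.r, the two distributivity equations.

add = lambda s, t: s | t   # Boolean "or" as the additive monoid
mul = lambda s, t: s & t   # Boolean "and" as the multiplicative monoid

for s in (0, 1):
    assert mul(s, 0) == 0
    for t in (0, 1):
        for r in (0, 1):
            assert mul(s, add(t, r)) == add(mul(s, t), mul(s, r))
```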
One way to describe semirings categorically is by considering the
additive monoid $(S,+,0)$ as an object of the category \Cat{CMon} of
commutative monoids, carrying a multiplicative monoid structure
$\smash{I \stackrel{1}{\rightarrow} S \stackrel{\cdot}{\leftarrow}
S\otimes S}$ in this category \Cat{CMon}. The tensor guarantees that
multiplication is a bihomomorphism, and thus distributes over
addition.
In the present context of categories with finite products we do not
need to use these tensors and can give a direct categorical
formulation of such semirings, as a pair of monoids $\smash{1
\stackrel{0}{\rightarrow} S \stackrel{+}{\leftarrow} S\times S}$ and
$\smash{1 \stackrel{1}{\rightarrow} S \stackrel{\cdot}{\leftarrow}
S\times S}$ making the following distributivity diagrams commute.
$$\xymatrix{
S\times 1\ar[r]^-{\ensuremath{\mathrm{id}}\times 0}\ar[d]_-{!} &
S\times S\ar[d]^{\cdot}
&
(S\times S)\times S\ar[r]^-{\mathit{dbl}}\ar[d]_{+\times\ensuremath{\mathrm{id}}} &
(S\times S)\times (S\times S)\ar[r]^-{\cdot\times\cdot} &
S\times S\ar[d]^{+} \\
1 \ar[r]^-{0}& S
&
S\times S\ar[rr]^-{\cdot} & & S
}$$
\noindent where $\mathit{dbl} =
\tuple{\pi_{1}\times\ensuremath{\mathrm{id}}}{\pi_{2}\times\ensuremath{\mathrm{id}}}$ is the doubling map
that was also used in Lemma~\ref{bcproplem}. With the obvious notion
of homomorphism between semirings this yields a category
$\Cat{CSRng}\xspace(\Cat{C})$ of (commutative) semirings in a category \Cat{C}
with finite products.
Associated with a semiring $S$ there is a notion of module over
$S$. It consists of a commutative monoid $(M,0,+)$ together with a
(multiplicative) action $\star\colon S\times M\rightarrow M$ that is
an additive bihomomorphism, that is, the action preserves the additive
structure in each argument separately. We recall that the properties
of an action are given categorically by $\star \mathrel{\circ} (\cdot \times
\ensuremath{\mathrm{id}}) = \star \mathrel{\circ} (\ensuremath{\mathrm{id}} \times \star) \mathrel{\circ} \alpha^{-1} \colon
(S\times S) \times M \to M$ and $\star \mathrel{\circ} (1\times\ensuremath{\mathrm{id}}) = \pi_2
\colon 1 \times M \to M$. The fact that $\star$ is an additive
bihomomorphism is expressed by
$$\xymatrix@C-1pc{
S\times (M\times M)\ar[r]^-{\mathit{dbl'}}\ar[dd]_{\ensuremath{\mathrm{id}}\times +} &
(S\times M)\times (S\times M)\ar[d]^-{\star\times\star} &
(S\times S)\times M\ar[l]_-{\mathit{dbl}}\ar[dd]^{+\times\ensuremath{\mathrm{id}}} \\
& M\times M\ar[d]^{+} \\
S\times M\ar[r]^-{\star} & M & S\times M\ar[l]_-{\star}
}$$
\noindent where $\mathit{dbl}'$ is the obvious duplicator of $S$.
Preservation of zeros is simply $\star \mathrel{\circ} (0\times \ensuremath{\mathrm{id}}) = 0
\mathrel{\circ} \pi_{1} \colon 1\times M\rightarrow M$ and $\star \mathrel{\circ}
(\ensuremath{\mathrm{id}}\times 0) = 0 \mathrel{\circ} \pi_{2} \colon S\times 1\rightarrow M$.
\auxproof{
$$\xymatrix{
1\times M\ar[r]^-{0\times\ensuremath{\mathrm{id}}}\ar[d]_{\pi_1} &
S\times M\ar[d]^{\star} &
S\times 1\ar[l]_-{\ensuremath{\mathrm{id}}\times 0}\ar[d]^{\pi_{2}} \\
1\ar[r]^-{0} & M & 1\ar[l]_-{0}
}$$
}
We shall assemble such semirings and modules in one category
$\mathcal{M}od(\Cat{C})$ with triples $(S, M, \star)$ as objects, where
$\star\colon S\times M\rightarrow M$ is an action as above. A
morphism $(S_{1},M_{1},\star_{1})\rightarrow (S_{2},M_{2},\star_{2})$
consists of a pair of morphisms $f\colon S_{1}\rightarrow S_{2}$
and $g\colon M_{1} \rightarrow M_{2}$ in \Cat{C} such that $f$
is a map of semirings, $f$ is a map of monoids, and the actions
interact appropriately: $\star_{2} \mathrel{\circ} (f\times g) = g
\mathrel{\circ} \star_{1}$.
\subsection{From semirings to monads}
To construct an adjunction between semirings and additive commutative
monads we start by defining, for each commutative semiring $S$, the
so-called multiset monad on $S$ and show that this monad is both
commutative and additive.
\begin{definition}\label{MultisetDef}
For a semiring $S$, define a ``multiset'' functor
$M_{S}\colon\Cat{Sets}\xspace\rightarrow\Cat{Sets}\xspace$ on a set $X$ by
$$\begin{array}{rcl}
M_{S}(X)
& = &
\set{\varphi\colon X\rightarrow S}{\ensuremath{\mathrm{supp}}(\varphi)\mbox{ is finite}},
\end{array}$$
\noindent where $\ensuremath{\mathrm{supp}}(\varphi) = \setin{x}{X}{\varphi(x) \neq 0}$
is called the support of $\varphi$. For a function $f\colon
X\rightarrow Y$ one defines $M_{S}(f) \colon M_{S}(X) \rightarrow
M_{S}(Y)$ by:
$$\begin{array}{rcl}
M_{S}(f)(\varphi)(y)
& = &
\sum_{x\in f^{-1}(y)}\varphi(x).
\end{array}$$
\end{definition}
\noindent Such a multiset $\varphi\in M_{S}(X)$ may be written as
formal sum $s_{1}x_{1}+\cdots+s_{k}x_{k}$, where $\ensuremath{\mathrm{supp}}(\varphi) =
\{x_{1}, \ldots, x_{k}\}$ and $s_{i} = \varphi(x_{i})\in S$ describes
the ``multiplicity'' of the element $x_{i}$. In this notation one can
write the application of $M_S$ on a map $f$ as
$M_{S}(f)(\sum_{i}s_{i}x_{i}) = \sum_{i}s_{i}f(x_{i})$. Functoriality
is then obvious.
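For concreteness, here is a short Python sketch (ours, purely illustrative) of the multiset functor for $S = \ensuremath{\mathbb{N}}$, representing a multiset $\varphi$ as a finite-support dictionary $\{x : \varphi(x)\}$:

```python
# Hedged illustration: the multiset functor M_S for S = N, with multisets
# as finite-support dicts {x: multiplicity}, zero entries dropped.

def M(f, phi):
    """M_S(f)(phi)(y) = sum of phi(x) over x in f^{-1}(y)."""
    out = {}
    for x, s in phi.items():
        out[f(x)] = out.get(f(x), 0) + s
    return {y: s for y, s in out.items() if s != 0}

# In formal-sum notation: M_S(f)(2a + 3b) = 2 f(a) + 3 f(b).
phi = {'a': 2, 'b': 3}
assert M(str.upper, phi) == {'A': 2, 'B': 3}
assert M(lambda x: 'c', phi) == {'c': 5}   # collapsing adds multiplicities
assert M(lambda x: x, phi) == phi          # identities are preserved
```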
\begin{lemma}
\label{CSRng2CAMndLem}
For each semiring $S$, the multiset functor
$M_S$ forms a commutative and additive monad, with unit and multiplication:
$$\xymatrix{
X\ar[r]^-{\eta} & M_{S}(X)
& &
M_{S}(M_{S}(X))\ar[r]^-{\mu} & M_{S}(X) \\
x \ar@{|->}[r] & 1x
& &
\sum_{i}s_{i}\varphi_{i} \ar@{|->}[r] &
\lamin{x}{X}{\sum_{i}s_i\varphi_{i}(x)}.
}$$
\end{lemma}
\begin{proof}
The verification that $M_S$ with these $\eta$ and $\mu$ indeed forms
a monad is left to the reader. We mention that for commutativity and
additivity the relevant maps are given by:
$$\xymatrix@C-1pc{
M_{S}(X)\times M_{S}(Y)\ar[r]^-{\ensuremath{\mathsf{dst}}\xspace} & M_{S}(X\times Y)
&
M_{S}(X+Y)\ar[r]^-{\ensuremath{\mathsf{bc}}\xspace} & M_{S}(X)\times M_{S}(Y) \\
(\varphi,\psi)\ar@{|->}[r] & \lam{(x,y)}{\varphi(x)\cdot\psi(y)}
&
\chi\ar@{|->}[r] & (\chi\mathrel{\circ}\kappa_{1}, \chi\mathrel{\circ}\kappa_{2}).
}$$
\noindent Clearly, $\ensuremath{\mathsf{bc}}\xspace$ is an isomorphism, making $M_S$ additive. \hspace*{\fill}$\QEDbox$
\end{proof}
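The maps $\eta$, $\mu$, $\ensuremath{\mathsf{dst}}\xspace$ and $\ensuremath{\mathsf{bc}}\xspace$ of the lemma can be sketched concretely for $S = \ensuremath{\mathbb{N}}$ (our own illustration; since Python dictionaries are not hashable, an outer multiset for $\mu$ is represented as a list of (multiset, scalar) pairs):

```python
# Hedged sketch of the monad structure on M_S, S = N, with multisets as
# finite-support dicts {x: multiplicity}.

def eta(x):
    return {x: 1}                # x |-> 1x

def mu(Phi):
    """sum_i s_i phi_i  |->  the multiset x |-> sum_i s_i phi_i(x)."""
    out = {}
    for phi, s in Phi:           # Phi is a list of (multiset, scalar) pairs
        for x, t in phi.items():
            out[x] = out.get(x, 0) + s * t
    return {x: v for x, v in out.items() if v != 0}

def dst(phi, psi):
    """(phi, psi) |-> the multiset (x, y) |-> phi(x) . psi(y)."""
    return {(x, y): s * t for x, s in phi.items() for y, t in psi.items()}

def bc(chi, X, Y):
    """chi |-> (chi o kappa_1, chi o kappa_2), splitting along X + Y."""
    return ({x: s for x, s in chi.items() if x in X},
            {y: s for y, s in chi.items() if y in Y})

assert mu([({'a': 2}, 3), ({'a': 1, 'b': 1}, 2)]) == {'a': 8, 'b': 2}
assert dst({'a': 2}, {'u': 3}) == {('a', 'u'): 6}
chi = {'a': 2, 'b': 1, 'u': 4}
l, r = bc(chi, {'a', 'b'}, {'u'})
assert l == {'a': 2, 'b': 1} and r == {'u': 4}
assert {**l, **r} == chi   # bc is invertible, as additivity requires
```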
\begin{lemma}\label{SRng2CAMndProp}
The assignment $S \mapsto M_S$ yields a functor $\mathcal{M} \colon
\Cat{CSRng}\xspace \to \Cat{ACMnd}\xspace$.
\end{lemma}
\begin{proof}
Every semiring homomorphism $f \colon S \to R$ gives rise to a
monad morphism $\mathcal{M}(f) \colon M_S \to M_R$ with components
defined by $\mathcal{M}(f)_X (\sum_{i}s_{i}x_{i}) =
\sum_{i}f(s_{i})x_{i}$. It is left to the reader to check that
$\mathcal{M}(f)$ is indeed a monad morphism. \hspace*{\fill}$\QEDbox$
\end{proof}
\auxproof{
$\mathcal{M}(f)$ is a monad morphism:
\begin{itemize}
\item Naturality: let $g \colon X \to Y$ in $\Cat{Sets}\xspace$,
$$
\begin{array}{rcl}
\mathcal{M}(f)_Y \mathrel{\circ} M_S(g)(\sum_{i}s_{i}x_{i}) &=& (\sum_{i}f(s_{i})g(x_{i}))\\
&=&
M_R(g) \mathrel{\circ} \mathcal{M}(f)_X(\sum_{i}s_{i}x_{i})
\end{array}
$$
\item Commutativity with $\eta$:
$$
\begin{array}{rcl}
\mathcal{M}(f) \mathrel{\circ} \eta^S (x) &=& \mathcal{M}(f)(1^S x) \\
&=&
f(1)x = 1^R x \\
&=&
\eta^R(x)
\end{array}
$$
\item Commutativity with $\mu$:
$$
\begin{array}{rcl}
\lefteqn{\textstyle\mu \mathrel{\circ} \mathcal{M}(f)_{M_R(X)} \mathrel{\circ}
M_S(\mathcal{M}(f)_X)(\sum_{i}s_{i}(\sum_{j}t_{ij}x_{j}))} \\
&=&
\mu \mathrel{\circ} \mathcal{M}(f)_{M_T(X)}(\sum_{i}s_{i}(\sum_{j}f(t_{ij})x_{j}))\\
&=&
\mu(\sum_{i}f(s_{i})(\sum_{j}f(t_{ij})x_{j}))\\
&=&
\sum_{j}(\sum_i^R f(s_{i})f(t_{ij}))x_{j}\\
&=&
\sum_{j}f(\sum_i^R s_{i}t_{ij})x_{j}\\
&=&
\mathcal{M}(f)_X(\sum_i^T s_{i}t_{ij}x_{j})\\
&=&
\mathcal{M}(f)_X \mathrel{\circ} \mu (\sum_{i}s_{i}(\sum_{j}f(t_{ij})x_{j}))
\end{array}
$$
\end{itemize}
Functoriality of $\mathcal{M}$:\\
For $S \xrightarrow{f} T \xrightarrow{g} R$,
$$
\mathcal{M}(g)_X \mathrel{\circ} \mathcal{M}(f)_X (\sum_i s_i x_i) = \sum_i g(f(s_i)) x_i = \mathcal{M}(g \mathrel{\circ} f)(\sum_i s_i x_i)
$$
For $S \xrightarrow{id} S$,
$$
\mathcal{M}(id)(\sum_i s_i x_i) = \sum_i id(s_i) x_i = \sum_i s_i x_i
$$
}
For a semiring $S$, the category $\textsl{Alg}\xspace(M_{S})$ of algebras of the
multiset monad $M_S$ is (equivalent to) the category
$\mathcal{M}od_{S}(\Cat{C}) \hookrightarrow \mathcal{M}od(\Cat{C})$ of modules over
$S$. This is not used here, but just mentioned as background
information.
\subsection{From monads to semirings}
A commutative additive monad $T$ on a category $\cat{C}$ gives rise to
two commutative monoid structures on $T(1)$, namely the multiplication
defined in Lemma \ref{CMnd2CMonLem} and the addition defined in Lemma
\ref{Mnd2MonLem2} (considered for $X=1$). In case the category
$\cat{C}$ is distributive these two operations turn $T(1)$ into a semiring.
\begin{lemma}
\label{CAMnd2CSRngLem}
Each commutative additive monad $T$ on a distributive category
$\cat{C}$ with terminal object $1$ gives rise to a semiring $\mathcal{E}(T)
= T(1)$ in $\cat{C}$. The mapping $T \mapsto \mathcal{E}(T)$ yields a functor
$\Cat{ACMnd}\xspace(\cat{C}) \to \Cat{CSRng}\xspace(\cat{C})$.
\end{lemma}
\begin{proof}
For a commutative additive monad $T$ on $\cat{C}$, addition on
$T(1)$ is given by $T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \colon T(1) \times
T(1) \to T(1)$ with unit $0_{1,1} \colon 1 \to T(1)$ as in
Lemma~\ref{Mnd2MonLem2}; the multiplication is given by $\mu \mathrel{\circ}
T(\lambda) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace \colon T(1) \times T(1) \to T(1)$ with unit
$\eta_1 \colon 1 \to T(1)$ as in Lemma \ref{CMnd2CMonLem}.
It was shown in the lemmas just mentioned that both addition and
multiplication define a commutative monoid structure on $T(1)$. The
following diagram proves distributivity of multiplication over
addition.
$$
\xymatrix{
(T(1) \times T(1))\times T(1) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1} \times id} \ar[d]_-{dbl} & T(1+1)\times T(1) \ar[r]^-{T(\nabla)\times id} \ar[d]_-{\ensuremath{\mathsf{st}}\xspace} & T(1) \times T(1) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace}
\\
(T(1) \times T(1))\times (T(1)\times T(1)) \ar[d]_-{\ensuremath{\mathsf{st}}\xspace\times \ensuremath{\mathsf{st}}\xspace} & T((1+1) \times T(1)) \ar[r]^-{T(\nabla \times id)} \ar[d]_-{\cong} & T(1\times T(1)) \ar[ddd]^-{T(\lambda)}
\\
T(1\times T(1)) \times T(1\times T(1)) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} \ar[d]_-{T(\lambda) \times T(\lambda)} & T(1\times T(1) + 1 \times T(1)) \ar[d]_-{T(\lambda + \lambda)}
\\
T^2(1) \times T^2(1) \ar[dd]_-{\mu\times\mu} \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}}& T(T(1)+T(1)) \ar[rd]^-{T(\nabla)} \ar[d]_-{T(\cotuple{T(\kappa_1)}{T(\kappa_2)})}
\\
&T^2(1+1) \ar[r]^-{T^2(\nabla)} \ar[d]_-{\mu} & T^2(1) \ar[d]^-{\mu}
\\
T(1)\times T(1) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T(1+1) \ar[r]^-{T(\nabla)} & T(1)
}
$$
\noindent Here we rely on Lemma~\ref{bcproplem} for the commutativity
of the upper and lower square on the left.
In a distributive category $0 \cong 0 \times X$, for every object
$X$. In particular $T(0 \times T(1)) \cong T(0) \cong 1$ is final.
This is used to obtain commutativity of the upper-left square of the
following diagram proving $0 \cdot s = 0$:
$$
\xymatrix{
T(1) \ar[r]^-{\cong} \ar[d]^-{!} & T(0) \times T(1) \ar[r]^-{T(!) \times id} \ar[d]^-{\ensuremath{\mathsf{st}}\xspace} & T(1) \times T(1) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace}
\\
T(0) \ar@/_{6ex}/[ddrr]_{T(!)} \ar[r]^-{\cong} \ar[rd]^{T(!)} & T(0 \times T(1)) \ar[d]^-{T(\lambda)} \ar[r]^-{T(!\times id)} & T(1 \times T(1)) \ar[d]^-{T(\lambda)}
\\
&T^2(1) \ar[r]^{id} & T^2(1) \ar[d]^-{\mu}
\\
&&T(1)
}
$$ For a monad morphism $\sigma \colon T \to S$, we define
$\mathcal{E}(\sigma) = \sigma_1 \colon T(1) \to S(1)$. By Lemma
\ref{Mnd2MonLem}, $\sigma_1$ commutes with the multiplicative
structure. As $\sigma_1 = \sigma_1 \mathrel{\circ} T(\idmap) = \mathcal{A}d((\sigma,\idmap))$,
it follows from Lemma \ref{Mnd2MonLem2} that $\sigma_1$ also commutes
with the additive structure and is therefore a
$\Cat{CSRng}\xspace$-homomorphism.\hspace*{\fill}$\QEDbox$
\end{proof}
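For $T = M_S$ the lemma specialises in a way one can check by hand (our own illustration): $T(1) = \{\varphi \colon 1 \to S\} \cong S$, and the abstract addition $T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1}$ and multiplication $\mu \mathrel{\circ} T(\lambda) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace$ become the semiring operations of $S$. A sketch for $S = \ensuremath{\mathbb{N}}$, with $1 = \{*\}$:

```python
# Hedged check: T(1) for the multiset monad over N recovers N itself.
# A multiset over the one-element set {'*'} is {'*': s}, i.e. a scalar.

STAR = '*'

def norm(phi):
    return {x: s for x, s in phi.items() if s != 0}

def add(phi, psi):   # T(nabla) o bc^{-1}: merge the two multisets over 1
    return norm({STAR: phi.get(STAR, 0) + psi.get(STAR, 0)})

def mul(phi, psi):   # mu o T(lambda) o st: multiply the two scalars
    return norm({STAR: phi.get(STAR, 0) * psi.get(STAR, 0)})

two, three, four = {STAR: 2}, {STAR: 3}, {STAR: 4}
assert add(two, three) == {STAR: 5}
assert mul(two, three) == {STAR: 6}
# distributivity, as proved in the lemma:
assert mul(two, add(three, four)) == add(mul(two, three), mul(two, four))
# s . 0 = 0, where 0 is the empty multiset:
assert mul(two, {}) == {}
```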
\subsection{Adjunction between monads and semirings}
The functors defined in the Lemmas \ref{SRng2CAMndProp} and
\ref{CAMnd2CSRngLem}, considered on $\cat{C} = \Cat{Sets}\xspace$, form an
adjunction $\mathcal{M} \colon \Cat{CSRng}\xspace \leftrightarrows
\Cat{ACMnd}\xspace \colon \mathcal{E}$. To prove this adjunction we first show that
each pair $(T,X)$, where $T$ is a commutative additive monad on a
category $\cat{C}$ and $X$ an object of $\cat{C}$, gives rise to a
module on $\cat{C}$ as defined at the beginning of this section.
\begin{lemma}
\label{Mnd2ModLem}
Each pair $(T,X)$, where $T$ is a commutative additive monad on a
category $\cat{C}$ and $X$ is an object of $\cat{C}$, gives rise to a
module $\mathcal{M}od(T,X) = (T(1), T(X), \star)$. Here $T(1)$ is the
commutative semiring defined in Lemma \ref{CAMnd2CSRngLem} and $T(X)$
is the commutative monoid defined in Lemma \ref{Mnd2MonLem2}. The
action map is given by $\star = T(\lambda) \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace \colon T(1)
\times T(X) \to T(X)$. The mapping $(T,X) \mapsto \mathcal{M}od(T,X)$ yields a
functor $\Cat{ACMnd}\xspace(\cat{C}) \times \cat{C} \to \mathcal{M}od(\cat{C})$.
\end{lemma}
\begin{proof}
Checking that $\star$ defines an appropriate action requires some work
but is essentially straightforward, using the properties from Lemma
\ref{bcproplem}. For a pair of maps $\sigma \colon T \to S$ in
$\Cat{ACMnd}\xspace(\cat{C})$ and $g \colon X \to Y$ in $\cat{C}$, we define a map
$\mathcal{M}od(\sigma,g)$ by
$$
\xymatrix{(T(1), T(X), \star^T) \ar[rr]^-{(\sigma_1, \sigma_Y \mathrel{\circ} T(g))} && (S(1), S(Y), \star^S).}
$$
\noindent Note that, by naturality of $\sigma$, one has $\sigma_Y
\mathrel{\circ} T(g) = S(g) \mathrel{\circ} \sigma_X$. It easily follows that this
defines a $\mathcal{M}od(\cat{C})$-map and that the assignment is
functorial.\hspace*{\fill}$\QEDbox$
\end{proof}
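For $T = M_S$ the action $\star = T(\lambda) \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace$ admits a concrete reading (our own illustrative sketch, for $S = \ensuremath{\mathbb{N}}$): a scalar $s \in T(1)$ rescales every multiplicity of a multiset in $T(X)$, and this is an additive bihomomorphism as the lemma requires.

```python
# Hedged sketch of the module action T(1) x T(X) -> T(X) for T = M_N.

def star(phi, psi):
    """Multiply every multiplicity in psi by the scalar phi('*')."""
    s = phi.get('*', 0)
    return {x: s * t for x, t in psi.items() if s * t != 0}

def plus(psi1, psi2):
    """The commutative-monoid structure on T(X): pointwise addition."""
    out = dict(psi1)
    for x, t in psi2.items():
        out[x] = out.get(x, 0) + t
    return {x: v for x, v in out.items() if v != 0}

s, psi1, psi2 = {'*': 3}, {'a': 1}, {'a': 1, 'b': 2}
assert star(s, psi2) == {'a': 3, 'b': 6}
# additive bihomomorphism in the second argument: s*(x+y) = s*x + s*y
assert star(s, plus(psi1, psi2)) == plus(star(s, psi1), star(s, psi2))
# preservation of zero: s * 0 = 0
assert star(s, {}) == {}
```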
\auxproof{First note that $\star$ may also be described as follows.
$$
\begin{array}{rcl}
\star &=& T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace\\
&=&
T(\pi_2) \mathrel{\circ} \mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace\\
&=&
\mu \mathrel{\circ} T^2(\pi_2) \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace') \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace\\
&=&
\mu \mathrel{\circ} T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace
\end{array}
$$
\begin{itemize}
\item $\star$ defines an action:
\begin{enumerate}
\item $ s \star 0 = 0$:
$$
\xymatrix{
T(1) \ar[r]^-{\tuple{id}{!^{-1}}} \ar[dd]_-{!} & T(1)\times T(0) \ar[r]^-{T(id) \times T(!)} \ar[d]_-{\ensuremath{\mathsf{dst}}\xspace} &T(1) \times T(X) \ar[d]^-{\ensuremath{\mathsf{dst}}\xspace}
\\
&T(1\times 0) \ar[r]^-{T(id\times !)} \ar[d]_-{T(\pi_2)} & T(1\times X) \ar[d]^-{T(\pi_2)}
\\
1 \ar[r]^-{\cong} & T(0) \ar[r]^-{T(!)} & T(X)
}
$$
The left square commutes as $T(0)$ is terminal. The upper right square commutes by naturality of $\ensuremath{\mathsf{dst}}\xspace$.
\item $0 \star x = 0$:
$$
\xymatrix{
T(X) \ar[d]_-{!} \ar[r]^-{\cong} & T(0) \times T(X) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace} \ar[r]^-{T(!) \times \idmap} & T(1) \times T(X) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace}
\\
T(0) \ar@/_{6ex}/[ddrr]_{T(!)} \ar[r]^-{T(\cong)} \ar[rd]_-{T(!)} & T(0 \times T(X)) \ar[d]^-{T(\pi_2)} \ar[r]^-{T(!\times\idmap)} & T(1\times T(X)) \ar[d]^-{T(\pi_2)}\\
&T^2(X) \ar[r]^-{id} & T^2(X) \ar[d]^-{\mu}\\
&&T(X)}
$$
The upper left square commutes as the category $\cat{C}$ is assumed to be distributive and therefore $T(0 \times T(X)) \cong T(0) \cong 1$ is terminal.
\item $1 \star x = x$:
$$
\begin{array}{rcll}
T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace \mathrel{\circ} (\eta \times \idmap)
&=&
T(\pi_2) \mathrel{\circ} \mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace' \mathrel{\circ} (\eta \times \idmap)\\
&=&
T(\pi_2) \mathrel{\circ} \mu \mathrel{\circ} T(\ensuremath{\mathsf{st}}\xspace) \mathrel{\circ} T(\eta \times \idmap) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace' &\text{naturality of $\ensuremath{\mathsf{st}}\xspace'$}\\
&=&
T(\pi_2) \mathrel{\circ} \mu \mathrel{\circ} T(\eta) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace'\\
&=&
T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace'\\
&=&
\pi_2 \colon 1 \times T(X) \to T(X)
\end{array}
$$
\item $s \star (x+y) = s\star x + s\star y$
$$
\xymatrix{
T(1) \times (T(X)\times T(X)) \ar[r]^-{\ensuremath{\mathrm{id}} \times \ensuremath{\mathsf{bc}}\xspace^{-1}} \ar[d]^-{dbl} & T(1) \times T(X+X) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace'} \ar[r]^-{\ensuremath{\mathrm{id}} \times T(\nabla)} & T(1) \times T(X) \ar[dd]^-{\ensuremath{\mathsf{st}}\xspace'}
\\
(T(1)\times T(X))\times(T(1)\times T(X)) \ar[d]_-{\ensuremath{\mathsf{st}}\xspace'\times\ensuremath{\mathsf{st}}\xspace'} & T(T(1) \times (X+X)) \ar[dr]^-{T(id \times \nabla)} \ar[d]^-{\cong}
\\
T(T(1) \times X)\times T(T(1)\times X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} \ar[d]_-{T(\ensuremath{\mathsf{st}}\xspace)\times T(\ensuremath{\mathsf{st}}\xspace)} & T(T(1) \times X + T(1) \times X) \ar[r]^-{T(\nabla)} \ar[d]^-{T(\ensuremath{\mathsf{st}}\xspace + \ensuremath{\mathsf{st}}\xspace)}&
T(T(1)\times X) \ar[dd]^-{T(\ensuremath{\mathsf{st}}\xspace)}
\\
T^2(1\times X) \times T^2(1\times X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} \ar[dd]_-{\mu\times\mu} & T(T(1\times X)+T(1\times X)) \ar[rd]^-{T(\nabla)} \ar[d]_-{T(\cotuple{T(\kappa_1)}{T(\kappa_2)})}
\\
&T^2(1 \times X + 1 \times X) \ar[r]^-{T^2(\nabla)} \ar[d]^-{\mu} & T^2(1\times X) \ar[d]^-{\mu}
\\
T(1 \times X)\times T(1\times X) \ar[d]_-{T(\pi_2)\times T(\pi_2)} \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T(1\times X + 1\times X) \ar[d]^-{T(\pi_2+\pi_2)} \ar[r]^-{T(\nabla)} &T(1\times X) \ar[d]^-{T(\pi_2)}
\\
T(X)\times T(X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T(X+X) \ar[r]^-{T(\nabla)} &T(X)}
$$
\item $(s+t)\star x = (s\star x) + (t\star x)$
$$
\xymatrix{
(T(1)\times T(1))\times T(X) \ar[d]_{dbl'} \ar[r]^{\ensuremath{\mathsf{bc}}\xspace^{-1}\times \idmap} & T(1+1)\times T(X) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace} \ar[r]^-{T(\nabla) \times \idmap} & T(1) \times T(X) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace}
\\
(T(1)\times T(X))\times(T(1)\times T(X)) \ar[d]_-{\ensuremath{\mathsf{st}}\xspace\times\ensuremath{\mathsf{st}}\xspace} & T((1+1)\times T(X)) \ar[d]^-{T(\cong)} \ar[r]^-{T(\nabla \times \idmap)} & T(1 \times T(X)) \ar[dd]^-{T(\pi_2)}
\\
T(1\times T(X)) \times T(1\times T(X)) \ar[d]_-{T(\pi_2)\times T(\pi_2)} \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} &T(1\times T(X) + 1\times T(X)) \ar[d]^-{T(\pi_2 + \pi_2)}
\\
T^2(X) \times T^2(X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} \ar[dd]_-{\mu \times \mu} & T(T(X)+T(X)) \ar[r]^-{T(\nabla)} \ar[d]_-{T(\cotuple{T(\kappa_1)}{T(\kappa_2)})} & T^2(X) \ar[dd]^-{\mu}
\\
&T^2(X+X)\ar[d]^-{\mu} \ar[ru]_-{T^2(\nabla)}
\\
T(X)\times T(X) \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} & T(X+X) \ar[r]^-{T(\nabla)} &T(X)
}
$$
\item $(s\cdot t) \star x = s\star(t \star x)$
$$
\begin{sideways}
\xymatrix{
(T(1)\times T(1)) \times T(X) \ar[rr]^-{\ensuremath{\mathsf{st}}\xspace\times\idmap} \ar[d]^-{\alpha^{-1}} && T(1\times T(1)) \times T(X) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace} \ar[r]^-{T(\pi_2) \times \idmap} & T^2(1) \times T(X) \ar[r]^-{\mu \times \idmap} \ar[d]^-{\ensuremath{\mathsf{st}}\xspace} & T(1) \times T(X) \ar[dd]^-{\ensuremath{\mathsf{st}}\xspace}
\\
T(1)\times (T(1)\times T(X)) \ar[d]^-{\idmap\times \ensuremath{\mathsf{st}}\xspace} \ar[r]^-{\ensuremath{\mathsf{st}}\xspace} & T(1\times(T(1)\times T(X))) \ar[r]^-{T(\alpha)} \ar[d]^-{T(\ensuremath{\mathrm{id}} \times\ensuremath{\mathsf{st}}\xspace)} \ar@/_{4ex}/[rr]_{T(\pi_2)} & T((1\times T(1)) \times T(X)) \ar[r]^-{T(\pi_2\times \idmap)} & T(T(1)\times T(X))\ar[d]^-{T(\ensuremath{\mathsf{st}}\xspace)}
\\
T(1)\times T(1\times T(X)) \ar[d]^-{\ensuremath{\mathrm{id}} \times T(\pi_2)} \ar[r]^-{\ensuremath{\mathsf{st}}\xspace} & T(1\times T(1\times T(X))) \ar[d]^-{T(\ensuremath{\mathrm{id}} \times T(\pi_2))} && T^2(1 \times T(X)) \ar[r]^-{\mu} \ar[d]^-{T^2(\pi_2)} & T(1\times T(X)) \ar[d]^-{T(\pi_2)}
\\
T(1) \times T^2(X) \ar[r]^-{\ensuremath{\mathsf{st}}\xspace} \ar[d]^-{\ensuremath{\mathrm{id}} \times\mu} & T(1\times T^2(X)) \ar[d]^-{T(\ensuremath{\mathrm{id}} \times \mu)} && T^3(X) \ar[r]^-{\mu} \ar[d]^-{T(\mu)} & T^2(X) \ar[d]^-{\mu}
\\
T(1)\times T(X) \ar[r]^-{\ensuremath{\mathsf{st}}\xspace} & T(1\times T(X)) \ar[rr]^-{T(\pi_2)} && T^2(X) \ar[r]^-{\mu} & T(X)}
\end{sideways}
$$
\end{enumerate}
\item $\mathcal{M}od(\sigma,g) \colon \mathcal{M}od(T,X) \to \mathcal{M}od(S,Y)$ is a map of modules.
In Lemma \ref{CAMnd2CSRngLem} we have shown already that $\sigma_1$ is
a $\Cat{SRng}\xspace$-morphism. The map $\sigma_Y \mathrel{\circ} T(g)$ is a
$\cat{Mon}(\cat{C})$-map as it preserves $0$:
$$
\xymatrix{
1 \ar[r]^-{\cong} \ar[rd]_-{\cong} & T(0) \ar[r]^-{T(!)} \ar[d]_-{\sigma_0} & T(X) \ar[d]^-{\sigma_X} \ar[r]^-{T(g)} & T(Y) \ar[d]^-{\sigma_Y}
\\
&S(0) \ar[r]^-{S(!)} \ar@/_{4ex}/[rr]_{S(!)}& S(X) \ar[r]^-{S(g)} & S(Y)
}
$$
and the addition:
$$
\begin{array}{rcl}
\sigma_Y \mathrel{\circ} T(g) \mathrel{\circ} T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1}
&=&
S(g) \mathrel{\circ} \sigma_X \mathrel{\circ} T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1}\\
&=&
S(g) \mathrel{\circ} S(\nabla) \mathrel{\circ} \sigma_{X+X} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1}\\
&=&
S(g) \mathrel{\circ} S(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (\sigma_X \times \sigma_X) \\
&=&
S(\nabla) \mathrel{\circ} S(g+g) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (\sigma_X \times \sigma_X)\\
&=&
S(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (S(g) \times S(g)) \mathrel{\circ} (\sigma_X \times \sigma_X)\\
&=&
S(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (S(g) \mathrel{\circ} \sigma_X)\times (S(g) \mathrel{\circ} \sigma_X)\\
&=&
S(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (\sigma_Y \mathrel{\circ} T(g))\times (\sigma_Y \mathrel{\circ} T(g))
\end{array}
$$
Furthermore the pair preserves the action:
$$
\xymatrix{
T(1)\times T(X) \ar[r]^{\sigma_1 \times \sigma_X} \ar[d]^-{\ensuremath{\mathsf{dst}}\xspace} \ar@/_{8ex}/[dd]_{\star} & S(1) \times S(X) \ar[d]_-{\ensuremath{\mathsf{dst}}\xspace} \ar[r]^-{\ensuremath{\mathrm{id}} \times S(g)} & S(1) \times S(Y) \ar[d]_-{\ensuremath{\mathsf{dst}}\xspace} \ar@/^{8ex}/[dd]^{\star}
\\
T(1\times X) \ar[r]^-{\sigma_{1\times X}} \ar[d]^-{T(\pi_2)} & S(1\times X) \ar[r]^-{S(\ensuremath{\mathrm{id}} \times g)} \ar[d]_-{S(\pi_2)} & S(1\times Y) \ar[d]_-{S(\pi_2)}
\\
T(X) \ar[r]^-{\sigma_X} & S(X) \ar[r]^-{S(g)} & S(Y)
}
$$
\item Functoriality
$\mathcal{M}od$ preserves the identity map, as
$$
\begin{array}{rcl}
\mathcal{M}od(\idmap_{(T,X)}) &=& \mathcal{M}od((\idmap_T,\idmap_X))\\
&=&
((\idmap_T)_1, (\idmap_T)_X \mathrel{\circ} T(\idmap_X))\\
&=&
(\idmap_{T(1)}, \idmap_{T(X)})
\end{array}
$$
And it preserves the composition as, for $(T,X) \xrightarrow{(\tau, f)} (S, Y) \xrightarrow{(\sigma, g)} (R,Z)$,
$$
\begin{array}{rcl}
\mathcal{M}od(\sigma, g) \mathrel{\circ} \mathcal{M}od(\tau, f)
&=&
(\sigma_1, \sigma_Z \mathrel{\circ} S(g)) \mathrel{\circ} (\tau_1, \tau_Y \mathrel{\circ} T(f))\\
&=&
(\sigma_1 \mathrel{\circ} \tau_1, \sigma_Z \mathrel{\circ} S(g) \mathrel{\circ} \tau_Y \mathrel{\circ} T(f))\\
&=&
((\sigma \mathrel{\circ} \tau)_1, \sigma_Z \mathrel{\circ} \tau_Z \mathrel{\circ} T(g) \mathrel{\circ} T(f))\\
&=&
((\sigma \mathrel{\circ} \tau)_1, (\sigma \mathrel{\circ} \tau)_Z \mathrel{\circ} T(g \mathrel{\circ} f))\\
&=&
\mathcal{M}od(\sigma \mathrel{\circ} \tau, g \mathrel{\circ} f)
\end{array}
$$
\end{itemize}
}
\begin{lemma}
\label{AdjCSR2ACMLem}
The pair of functors $\mathcal{M} \colon \Cat{CSRng}\xspace \leftrightarrows
\Cat{ACMnd}\xspace \colon \mathcal{E}$ forms an adjunction, $\mathcal{M} \dashv
\mathcal{E}$.
\end{lemma}
\begin{proof}
For a semiring $S$ and a commutative additive monad $T$ on \Cat{Sets}\xspace there are
(natural) bijective correspondences:
$$\begin{bijectivecorrespondence}
\correspondence[\qquad in \Cat{ACMnd}\xspace]
{\xymatrix{\llap{$M_{S} = \;$}\mathcal{M}(S)\ar[r]^-{\sigma} & T}}
\correspondence[\qquad in \Cat{CSRng}\xspace]
{\xymatrix{S\ar[r]_-{f} & \mathcal{E}(T)\rlap{$\;=T(1)$}}}
\end{bijectivecorrespondence}$$
Given $\sigma \colon M_S \to T$, one defines a semiring map $\overline{\sigma} \colon S \to T(1)$ by
$$
\xymatrix{
\overline{\sigma} = \Big( S \ar[rr]^-{\lambda s. (\lambda x. s)} && M_S(1) \ar[r]^-{\sigma_1} & T(1) \Big).}
$$
Conversely, given a semiring map $f \colon S \to T(1)$, one gets a monad map $\overline{f} \colon M_S \to T$ with components:
$$\xymatrix{
M_S(X) \ar[r]^-{\overline{f}_X} & T(X)
\quad\mbox{given by}\quad
\textstyle{\sum_i} s_ix_i \ar@{|->}[r] &
\textstyle{\sum_i} f(s_i) \star \eta(x_i),
}$$
\noindent where the sum on the right hand side is the addition in
$T(X)$ defined in Lemma \ref{Mnd2MonLem2} and $\star$ is the action of
$T(1)$ on $T(X)$ defined in Lemma \ref{Mnd2ModLem}.
Showing that $\overline{f}$ is indeed a monad morphism requires some
work. In doing so one may rely on the properties of the action and on
Lemma \ref{AdditiveMonadMonoidLem}. The details are left to the
reader. \hspace*{\fill}$\QEDbox$
\end{proof}
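For the multiset monad itself the transposition $f \mapsto \overline{f}$ is easy to compute: when $T = M_R$, the map $\overline{f}_X$ simply applies $f$ to every coefficient of a formal sum. A small Python sketch (illustrative only; the name \texttt{transpose} is ours):

```python
# The transpose f-bar of a semiring map f : S -> T(1), specialised to
# T = M_R: f-bar applies f to every coefficient of a formal sum
# (coefficients mapped to 0 are dropped from the support).

def transpose(f):
    def f_bar(phi):            # phi in M_S(X), encoded as a dict X -> S
        return {x: f(s) for x, s in phi.items() if f(s) != 0}
    return f_bar

# Hypothetical example: the support map N -> B into the Boolean
# semiring (or, and), sending s to (s > 0).
f = lambda s: s > 0
f_bar = transpose(f)
result = f_bar({'a': 3, 'b': 0})   # 'b' has coefficient f(0) = False
```

Here $\overline{f}_X(\sum_i s_i x_i) = \sum_i f(s_i) \star \eta(x_i)$ collapses to coefficientwise application of $f$, since in $M_R(X)$ the action $\star$ is scalar multiplication and $\eta(x)$ is the singleton multiset $1\cdot x$.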
\auxproof{
The adjunction is proved as follows:
\begin{enumerate}
\item Given $\sigma$, $\overline{\sigma}$ is a $\Cat{CSRng}\xspace$-morphism:
\begin{itemize}
\item Preservation of 0:
$$
\begin{array}{rcll}
\overline{\sigma}(0_S) &=& \sigma_1(0\cdot *) &* \, \text{is the unique element of the terminal set}\\
&=&
\sigma_1(M_S(!)(\emptyset)) &\text{empty function is the unique element of $M_S(0)$}\\
&&&M_S(!) \colon M_S(0) \to M_S(1), \emptyset \mapsto 0\cdot *\\
&=&
T(!) \mathrel{\circ} \sigma_0(\emptyset)\\
&=&
0^T &M_S(0) \cong 1 \cong T(0)
\end{array}
$$
\item Preservation of the addition:
$$
\begin{array}{rcll}
\overline{\sigma}(a+b) &=& \sigma_1((a+b)\cdot *)\\
&=&
\sigma_1 \mathrel{\circ} M_S(\nabla) \mathrel{\circ} (\ensuremath{\mathsf{bc}}\xspace^{M_S})^{-1}(\tuple{a\cdot *}{b\cdot*}) &\text{use definition of $\ensuremath{\mathsf{bc}}\xspace$ in $M_S$}\\
&=&
T(\nabla) \mathrel{\circ} \sigma \mathrel{\circ} (\ensuremath{\mathsf{bc}}\xspace^{M_S})^{-1}(\tuple{a\cdot *}{b\cdot*})\\
&=&
T(\nabla)\mathrel{\circ} (\ensuremath{\mathsf{bc}}\xspace^{T})^{-1}\mathrel{\circ} \sigma \times \sigma(\tuple{a\cdot *}{b\cdot*})\\
&=&
T(\nabla)\mathrel{\circ} (\ensuremath{\mathsf{bc}}\xspace^{T})^{-1}(\tuple{\sigma(a\cdot *)}{\sigma(b\cdot*)})\\
&=&
\overline{\sigma}(a) + \overline{\sigma}(b)
\end{array}
$$
\item Preservation of 1:
$$
\begin{array}{rcl}
\overline{\sigma}(1) &=& \sigma_1(1\cdot*)\\
&=&
\sigma_1 \mathrel{\circ} \eta^{M_S}(*)\\
&=&
\eta^T(*) = 1
\end{array}
$$
\item Preservation of the multiplication
$$
\begin{array}{rcll}
\overline{\sigma}(a\cdot b) &=& \sigma_1((a\cdot b)\cdot *)\\
&=&
\sigma_1 \mathrel{\circ} \mu \mathrel{\circ} M_S(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace^{M_S}(\tuple{a\cdot*}{b\cdot*})\\
&=&
\mu \mathrel{\circ} T(\pi_2)\mathrel{\circ} \ensuremath{\mathsf{st}}\xspace^T \mathrel{\circ} \sigma \times \sigma (\tuple{a\cdot*}{b\cdot*}) &\text{see diagram}\\
&=&
\overline{\sigma}(a)\cdot\overline{\sigma}(b)
\end{array}
$$
$$
\xymatrix{
M_S(1)\times M_S(1) \ar[d]_{\ensuremath{\mathsf{st}}\xspace} \ar[r]^-{\sigma \times id} & T(1) \times M_S(1) \ar[d]_-{\ensuremath{\mathsf{st}}\xspace} \ar[r]^-{id \times \sigma} &T(1)\times T(1) \ar[d]^-{\ensuremath{\mathsf{st}}\xspace}
\\
M_S(1 \times M_S(1)) \ar[d]_-{M_S(\pi_2)} \ar[r]^-{\sigma} & T(1\times M_S(1)) \ar[d]^-{T(\pi_2)} \ar[r]^-{T(id \times \sigma)} &T(1\times T(1)) \ar[d]^-{T(\pi_2)}
\\
M_S^2(1) \ar[r]^-{\sigma} \ar[d]_-{\mu} &TM_S(1) \ar[r]^-{T\sigma} & T^2(1) \ar[d]^-{\mu}
\\
M_S(1) \ar[rr]^{\sigma} && T(1)}
$$
\end{itemize}
\item Given $f \colon S \to T(1)$, $\overline{f} \colon M_S \to T$ is a $\Cat{ACMnd}\xspace$-morphism.
\begin{itemize}
\item Naturality:
$$
\begin{array}{rcll}
T(g) \mathrel{\circ} \overline{f}_X
&=&
T(g)(\sum^{T(X)}f(s_i) \star \eta(x_i))\\
&=&
\sum^{T(Y)}T(g)\big(f(s_i) \star \eta(x_i)\big) &\text{T(g) preserves +, Lemma \ref{AdditiveMonadMonoidLem}}\\
&=&
\sum^{T(Y)} f(s_i) \star \big(T(g)(\eta(x_i))\big) &\text{same argument as above}\\
&=&
\sum^{T(Y)} f(s_i) \star \eta(g(x_i)) &\text{naturality of $\eta$}\\
&=&
\overline{f}_Y(\sum_i s_i\, g(x_i))\\
&=&
\overline{f}_Y \mathrel{\circ} M_S(g)(\sum s_ix_i)
\end{array}
$$
\item $\overline{f}$ commutes with $\eta$:
$$
\begin{array}{rcll}
\overline{f}\mathrel{\circ}\eta^{M_S}(x)
&=&
\overline{f}(1\cdot x)\\
&=&
f(1) \star\eta^T(x)\\
&=&
1 \star \eta^T(x) &\text{$f$ preserves 1}\\
&=&
\eta^T(x)
\end{array}
$$
\item $\overline{f}$ commutes with $\mu$:\\
Let $\alpha \in M_S^2(X)$, say $\alpha = \sum_i s_i\sum_j t_{ij} x_{j}$
$$
\begin{array}{rcll}
\lefteqn{\mu^T \mathrel{\circ} \overline{f}_{T(X)} \mathrel{\circ} M_S(\overline{f}_X)(\alpha)}\\
&=&
\mu^T \mathrel{\circ} \overline{f}_{T(X)}\big(\sum_i s_i \sum_j^{T(X)} f(t_{ij})\star\eta(x_{j})\big)\\
&=&
\mu^T\big(\sum_i^{T^2(X)} f(s_i) \star \eta(\sum_j^{T(X)} f(t_{ij})\star\eta(x_{j}))\big)\\
&=&
\sum_i^{T(X)} \mu \big(f(s_i) \star \eta(\sum_j^{T(X)} f(t_{ij})\star\eta(x_{j}))\big)
&\text{$\mu$ commutes with $+$, Lemma \ref{AdditiveMonadMonoidLem}}\\
&=& \sum_i^{T(X)} f(s_i) \star \sum_j^{T(X)} f(t_{ij})\star\eta(x_{j})&
(1)\\
&=& \sum_i^{T(X)}\sum_j^{T(X)} f(s_it_{ij})\star \eta(x_{j})&
\text{properties action}\\
&&&\text{$f$ pres. mult.}\\
&=&
\sum_j^{T(X)}\sum_i^{T(X)} f(s_it_{ij})\star \eta(x_{j})&
\text{com. of $+$ in $T(X)$}\\
&=&
\sum_j^{T(X)}(\sum_i^{T(X)} f(s_it_{ij}))\star \eta(x_{j})&
\text{property action}\\
&=&
\sum_j^{T(X)}(f(\sum_i^S s_it_{ij}))\star\eta(x_{j})&
\text{$f$ preserves $+$}\\
&=&
\overline{f}_X(\sum_j(\sum_i^S s_it_{ij})x_{j})\\
&=&
\overline{f}_X \mathrel{\circ} \mu^{M_S}(\alpha)
\end{array}
$$
(1) relies on the fact that for $a \in T(1), b \in T(X)$, $a\star b = \mu(a \star \eta(b))$:
$$
\begin{array}{rcl}
\mu(a \star \eta(b)) &=& \mu \mathrel{\circ} T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace
\mathrel{\circ} \ensuremath{\mathrm{id}} \times \eta (a,b)\\
&=&
\mu \mathrel{\circ} T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace (a,b) \\
&=&
T(\pi_2)\mathrel{\circ}\ensuremath{\mathsf{dst}}\xspace(a,b)\\
&=&
a\star b
\end{array}
$$
\end{itemize}
\item $\overline{\overline{f}} = f$:
$$
\begin{array}{rcl}
\overline{\overline{f}}(s) &=& \overline{f}_1(s*)\\
&=&
f(s)\star\eta(*)\\
&=&
f(s) \star 1^{T(1)} = f(s)
\end{array}
$$
\item $\overline{\overline{\sigma}} = \sigma$:
$$
\begin{array}{rcll}
\overline{\overline{\sigma}}_X(\sum s_ix_i)
&=&
\sum^{T(X)} \overline{\sigma}(s_i) \star^T \eta^T(x_i)\\
&=&
\sum^{T(X)} \sigma_1(s_i*) \star^T (\sigma_X \mathrel{\circ} \eta^{M_S}) (x_i)\\
&=&
\sum^{T(X)} \sigma_1(s_i*) \star^T \sigma_X(1x_i)\\
&=&
\sum^{T(X)} \sigma_X((s_i*) \star^{M_S} (1x_i)) &\text{$\sigma$ preserves $\star$}\\
&=&
\sum^{T(X)} \sigma_X(s_ix_i) &\text{def. $\ensuremath{\mathsf{dst}}\xspace$ for $M_S$}\\
&=&
\sigma_X(\sum s_ix_i) &\sigma_X \text{preserves +, Lemma \ref{AdditiveMonadMonoidLem}}
\end{array}
$$
\item Naturality: consider
$$
\begin{bijectivecorrespondence}
\correspondence[in \Cat{ACMnd}\xspace] {\xymatrix{M_S\ar[r]^-{\mathcal{M}(f)} & M_R \ar[r]^-{\sigma} & T \ar[r]^-{\tau} & V}}
\correspondence[in \Cat{CSRng}\xspace]{\xymatrix{S\ar[r]_-{f} & R \ar[r]_-{\overline{\sigma}} & T(1) \ar[r]_-{\tau_1} & V(1)}}
\end{bijectivecorrespondence}
$$
$$
\begin{array}{rcll}
\overline{\tau \mathrel{\circ} \sigma \mathrel{\circ} \mathcal{M}(f)}(s)
&=&
(\tau \mathrel{\circ} \sigma \mathrel{\circ} \mathcal{M}(f))_1(s*)\\
&=&
(\tau_1 \mathrel{\circ} \sigma_1)(f(s)*)\\
&=&
(\tau_1 \mathrel{\circ} \overline{\sigma} \mathrel{\circ} f) (s)
\end{array}
$$
\end{enumerate}
}
Notice that the unit of the above adjunction, $S \to
\mathcal{E}\mathcal{M}(S) = M_S(1)$, is an isomorphism. Hence this
adjunction is in fact a coreflection.
\section{Semirings and Lawvere theories}\label{Semiringcatsec}
In this section we will extend the adjunction between commutative
monoids and symmetric monoidal Lawvere theories depicted in
Figure~\ref{ComMonoidTriangleFig} to an adjunction between commutative
semirings and symmetric monoidal Lawvere theories with biproducts,
\textit{i.e.}~between the categories $\Cat{CSRng}\xspace$ and $\Cat{SMBLaw}\xspace$.
\subsection{From semirings to Lawvere theories}\label{CSRng2CatSbSec}
Composing the multiset functor $\mathcal{M} \colon \Cat{CSRng}\xspace \to \Cat{ACMnd}\xspace$
from the previous section with the finitary Kleisli construction
$\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$ yields a functor from \Cat{CSRng}\xspace to \Cat{SMBLaw}\xspace. This functor may
be described in an alternative (isomorphic) way by assigning to every
semiring $S$ the Lawvere theory of matrices over $S$, which is defined
as follows.
\begin{definition}\label{MatrCatDef}
For a semiring $S$, the Lawvere theory $\mathcal{M}at(S)$ of \emph{matrices
over $S$} has as morphisms $n \to m$, for $n, m \in \mathbb{N}$, the
functions (in $\Cat{Sets}\xspace$) $n \times m \to S$,
\textit{i.e.}~$n\times m$ matrices over $S$. The identity
$\idmap_n \colon n \to n$ is given
by the identity matrix:
$$\begin{array}{rcl}
\idmap_n(i,j) & = & \left\{
\begin{array}{ll}
1 & \text{if }\, i = j \\
0 & \text{if }\, i \ne j.
\end{array} \right.
\end{array}$$
\noindent The composition of $g \colon n \to m$ and $h \colon m \to p$
is given by matrix multiplication:
$$
(h \mathrel{\circ} g)(i,k) = \textstyle \sum_j g(i,j) \cdot h(j,k).
$$
The coprojections $\kappa_1 \colon n \to n+m$ and $\kappa_2 \colon m
\to n+m$ are given by
$$
\kappa_1(i,j) = \left\{
\begin{array}{ll}
1 & \text{if }\, i = j \\
0 & \text{otherwise}.
\end{array} \right.
\quad\quad\quad
\kappa_2(i,j) = \left\{
\begin{array}{ll}
1 & \text{if }\, j \ge n \,\text{and}\, j-n=i \\
0 & \text{otherwise}.
\end{array} \right.
$$
\end{definition}
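Composition in $\mathcal{M}at(S)$ is ordinary matrix multiplication with $+$ and $\cdot$ taken in $S$. As an informal illustration (the encoding of a semiring as a tuple of operations is ours), the following Python sketch checks the identity law over the Boolean semiring, for which $\mathcal{M}at(S)$ is the usual theory of relations between finite sets:

```python
# Generic matrix multiplication over a semiring, i.e. composition in
# Mat(S). A semiring is encoded (our convention) as a tuple
# (zero, one, plus, times); a morphism n -> m as an n x m nested list.

def identity(n, sr):
    zero, one, _, _ = sr
    return [[one if i == j else zero for j in range(n)] for i in range(n)]

def compose(h, g, sr):
    """(h . g)(i, k) = sum_j g(i, j) * h(j, k), as in the definition."""
    zero, _, plus, times = sr
    n, m, p = len(g), len(h), len(h[0])
    out = [[zero] * p for _ in range(n)]
    for i in range(n):
        for k in range(p):
            acc = zero
            for j in range(m):
                acc = plus(acc, times(g[i][j], h[j][k]))
            out[i][k] = acc
    return out

# The Boolean semiring: Mat(B) is the Lawvere theory of relations.
bool_sr = (False, True, lambda a, b: a or b, lambda a, b: a and b)
g = [[True, False], [True, True]]   # a morphism 2 -> 2
```

Composing \texttt{g} with \texttt{identity(2, bool\_sr)} on either side returns \texttt{g} itself, matching the unit law of the theory.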
\begin{lemma}
\label{KleisliMatLem}
The assignment $S \mapsto \mathcal{M}at(S)$ yields a functor $\Cat{CSRng}\xspace \to
\Cat{Law}\xspace$. The two functors $\mathcal{M}at \mathcal{E}$ and $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \colon \Cat{ACMnd}\xspace
\to \cat{Law}$ are naturally isomorphic.
\end{lemma}
\begin{proof}
A map of semirings $f \colon S \to R$ gives rise to a functor
$\mathcal{M}at(f) \colon \mathcal{M}at(S) \to \mathcal{M}at(R)$ which is the identity on
objects and which acts on morphisms by post-composition: $h \colon n
\times m \to S$ in $\mathcal{M}at(S)$ is mapped to $f \mathrel{\circ} h \colon n
\times m \to R$ in $\mathcal{M}at(R)$. It is easily checked that $\mathcal{M}at(f)$ is
a morphism of Lawvere theories and that the assignment is functorial.
To prove the second claim we define two natural
transformations. First we define $\xi \colon \mathcal{M}at \mathcal{E} \to
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$ with components $\xi_T \colon \mathcal{M}at(T(1)) \to
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ that are the identity on objects and send a morphism
$h \colon n \times m \to T(1)$ in $\mathcal{M}at(T(1))$ to the morphism
$\xi_T(h)$ in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ given by
$$
\xymatrix{
\xi_T(h){=}
\Big(n \ar[rr]^-{\langle h(\_,j) \rangle_{j \in m}} && T(1)^m \ar[r]^-{\ensuremath{\mathsf{bc}}\xspace_m^{-1}}& T(m)\Big),
}
$$
where $\ensuremath{\mathsf{bc}}\xspace_m^{-1}$ is the inverse of the generalised bicartesian map
$$
\xymatrix{
\ensuremath{\mathsf{bc}}\xspace_m = \Big(T(m) \, = \, T(\coprod_m 1) \ar[r]& T(1)^m\Big).
}
$$
\noindent And secondly, in the reverse direction, we define $\theta
\colon \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \to \mathcal{M}at \mathcal{E}$ with components $\theta_T \colon
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T) \to \mathcal{M}at(T(1))$ that are the identity on objects and
send a morphism $g \colon n \to T(m)$ in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ to the
morphism $\theta_T(g) \colon n \times m \to T(1)$ in $\mathcal{M}at(T(1))$
given by
\begin{equation}
\label{Kl2MatEqn}
\begin{array}{rcl}
\theta_T(g)(i,j) & = & (\pi_j \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace_m \mathrel{\circ} g) (i).
\end{array}
\end{equation}
\noindent It requires some work, but is relatively straightforward to
check that the components $\xi_T$ and $\theta_T$ are $\Cat{Law}\xspace$-maps. To
prove preservation of the composition by $\xi_T$ and $\theta_T$ one
uses the definition of addition and multiplication in $T(1)$ and
(generalisations of) the properties of the map $\ensuremath{\mathsf{bc}}\xspace$ listed in Lemma
\ref{bcproplem}. A short computation shows that the functors are each
other's inverses. The naturality of both $\xi$ and $\theta$ follows
from (a generalisation of) point \ref{natbcprop} of Lemma
\ref{bcproplem}.\hspace*{\fill}$\QEDbox$
\end{proof}
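Specialised to the multiset monad $M_S$, the isomorphism of the lemma is a matter of reshuffling data: a Kleisli map $g \colon n \to M_S(m)$ sends each $i$ to a formal sum $\sum_j s_{ij}\,j$, and $\theta$ reads off the matrix $(s_{ij})$. A Python sketch (an informal aside; the function names are ours):

```python
# The correspondence theta/xi of the lemma for T = M_S (S = N here):
# a Kleisli map g : n -> M_S(m), with each g(i) a dict {j: s_ij},
# carries exactly the data of an n x m matrix over S.

def theta(g, n, m):
    """Kleisli map to matrix: theta(g)(i, j) = coefficient of j in g(i)."""
    return [[g(i).get(j, 0) for j in range(m)] for i in range(n)]

def xi(h):
    """Matrix to Kleisli map: row i becomes the formal sum sum_j h[i][j]*j."""
    return lambda i: {j: s for j, s in enumerate(h[i]) if s != 0}

g = lambda i: {0: i + 1, 1: 2 * i}   # a Kleisli map 2 -> M_N(2)
h = theta(g, 2, 2)
```

Applying \texttt{theta} after \texttt{xi} (or conversely, up to dropping zero coefficients) recovers the original data, mirroring the statement that $\xi_T$ and $\theta_T$ are each other's inverses.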
\auxproof{
$\mathcal{M}at(f) \colon \mathcal{M}at(S) \to \mathcal{M}at(T)$ is a functor as it preserves the identity:
$$
\mathcal{M}at(f)(\idmap_n^S) = f \mathrel{\circ} \idmap_n^S = \idmap_n^R,
$$
($f$ is a semiring homomorphism and therefore preserves $0$ and $1$) and composition:
$$
\begin{array}{rcll}
\mathcal{M}at(f)(h \mathrel{\circ} g)(i,j) &=& f(\sum_k h(k,j)\cdot g(i,k))\\
&=&
\sum_k f(h(k,j))\cdot f(g(i,k)) &\text{$f$ preserves addition and multiplication}\\
&=&
\mathcal{M}at(f)(h) \mathrel{\circ} \mathcal{M}at(f)(g).
\end{array}
$$
$\mathcal{M}at(f)$ strictly preserves finite coproducts as
$$
\begin{array}{rcll}
\mathcal{M}at(f)(\kappa_i^{\mathcal{M}at(S)}) &=& f \mathrel{\circ} \kappa_i^{\mathcal{M}at(S)}\\
&=&
\kappa_i^{\mathcal{M}at(R)},
\end{array}
$$
where the last equality follows from the fact that the matrices representing the coprojections consist of only zeros and ones, and that $f$, being a semiring homomorphism, satisfies $f(0) = 0$ and $f(1) = 1$.
Furthermore $\mathcal{M}at$ itself is a functor $\Cat{CSRng}\xspace \to \Cat{Law}\xspace$, as $\mathcal{M}at(\idmap_S) = \idmap_{\mathcal{M}at(S)}$ and
$$
\begin{array}{rcl}
\mathcal{M}at(f_2 \mathrel{\circ} f_1)(h) &=& (f_2 \mathrel{\circ} f_1) \mathrel{\circ} h = f_2 \mathrel{\circ} (f_1 \mathrel{\circ} h)\\
&=& \mathcal{M}at(f_2) \mathrel{\circ} \mathcal{M}at(f_1)(h)
\end{array}
$$
Now we show that the two functors $\mathcal{M}at\mathcal{E}$ and $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$ are naturally isomorphic.
\begin{itemize}
\item $\xi_T: \mathcal{M}at\mathcal{E}(T) = \mathcal{M}at(T(1)) \to \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$ is a $\Cat{Law}\xspace$-map.\\
\begin{enumerate}
\item $\xi_T$ preserves the identity:
$$
\begin{array}{rcll}
\xi_T(\idmap_n^{\mathcal{M}at(T(1))}) &=&
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle \idmap_n^{\mathcal{M}at(T(1))} \mathrel{\circ} \lambda i. \tuple{i}{j}\rangle_{j \in n}\\
&=&
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle p_j \rangle_{j \in n} &\text{where $p_j$ is in \eqref{kleisliprojdef}}\\
&=&
\eta_n = \idmap_n^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)} &\text{Lemma \ref{bcproplem}\ref{comEtaMubcprop}}
\end{array}
$$
\item $\xi_T$ preserves the composition:\\
Let $g \colon n\times m \to T(1)$ and $h \colon m \times p \to T(1)$.
$$
\begin{array}{rcll}
\xi(h \mathrel{\circ} g)(\_) &=& \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle (h \mathrel{\circ} g)(\_,k)\rangle_{k \in p}\\
&=&
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle \sum_{j\in m} h(j,k) \cdot g(\_,j)\rangle_{k\in p} &\text{definition of composition in $\mathcal{M}at(T(1))$}
\end{array}
$$
$$
\begin{array}{rcll}
(\xi(h) \mathrel{\circ} \xi(g))(\_) &=& \mu \mathrel{\circ} T(\xi(h)) \mathrel{\circ} \xi(g)\\
&=&
\mu \mathrel{\circ} T(\ensuremath{\mathsf{bc}}\xspace^{-1}) \mathrel{\circ} T(\langle h(\_,k)\rangle_{k\in p}) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle g(\_,j)\rangle_{j \in m}\\
&=&
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \mu^p \mathrel{\circ} \langle T(\pi_k)\rangle_{k\in p} \mathrel{\circ} T(\langle h(\_,k)\rangle_{k\in p}) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle g(\_,j)\rangle_{j \in m}&\text{Lemma \ref{bcproplem}\ref{comEtaMubcprop}}\\
&=&
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle \mu \mathrel{\circ} T(h(\_,k))\rangle_{k \in p} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle g(\_,j)\rangle_{j \in m}
\end{array}
$$
So it suffices to show that, for $k \in p$,
$$
\sum_{j\in m} h(j,k) \cdot g(\_,j) = \mu \mathrel{\circ} T(h(\_,k)) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle g(\_,j)\rangle_{j \in m}
$$
using the definition of $+$ and $\cdot$ in $T(1)$ this is demonstrated as follows ($j \in m$ everywhere):
$$
\xymatrix@C1pc{
n \ar[r]^-{\langle g(\_,j)\rangle_{j}} & T(1)^m \ar[r]^-{\cong} \ar[rdd]_{\idmap} & (T(1) \times 1)^m \ar[d]_{\ensuremath{\mathsf{st}}\xspace^m} \ar[rr]^-{\prod_{j} \ensuremath{\mathrm{id}} \times \kappa_j} && (T(1) \times m)^m \ar[rr]^-{\prod_{j} \ensuremath{\mathrm{id}} \times h(\_,k)} && (T(1) \times T(1))^m \ar[d]^-{\ensuremath{\mathsf{st}}\xspace^m} \ar[rdd]^-{\cdot}
\\
&&T(1 \times 1)^m \ar[rr]^-{\prod_{j} T(\ensuremath{\mathrm{id}} \times \kappa_j)} \ar[d]^-{T(\pi_2)^m}&& T(1 \times m)^m \ar[rr]^-{\prod_{j} T(\ensuremath{\mathrm{id}} \times h(\_,k))} && T(1 \times T(1))^m \ar[d]^-{T(\pi_2)^m}
\\
&&T(1)^m \ar[rr]^-{\prod_{j} T(\kappa_j)} \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}} && T(m)^m \ar[rr]^-{\prod_{j} T(h(\_,k))} && T^2(1)^m \ar[r]^-{\mu^m} \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}}& T(1)^m \ar[d]^-{\ensuremath{\mathsf{bc}}\xspace^{-1}}
\\
&T(\coprod_m 1) \ar[r]^-{\cong} & T(m) \ar[rrrrd]_-{T(h(\_,k))} \ar[rr]^-{T(\coprod_j \kappa_j)} &&T(\coprod_j m) \ar[rr]^-{T(\coprod_j h(\_,k))}&&T(\coprod_j T(1)) \ar[d]^-{T(\nabla)}&T(m) \ar[d]^-{T(\nabla)}\\
&&&&&&T^2(1) \ar[r]^-{\mu} &T(1)
}
$$
Commutation of the lower right square follows by point (iii) of Lemma \ref{Mnd2MonLem2}.
\item $\xi_T$ preserves coproducts:\\
Let $\kappa_1^{\mathcal{M}at(T(1))} \colon n \to n+m$ be the coprojection in $\mathcal{M}at(T(1))$. To prove:
$$
\xi_T(\kappa_1^{\mathcal{M}at(T(1))}) = \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle \kappa_1^{\mathcal{M}at(T(1))}(\_,j)\rangle_{j\in n+m} = \eta \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace} = \kappa_1^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)}
$$
This is shown as follows:
$$
\begin{array}{rcll}
\ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \eta \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace} &=& \langle p_j \rangle_{j \in n+m} \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace} &\text{Lemma \ref{bcproplem}(iii)}\\
&=&
\langle \kappa_1^{\mathcal{M}at(T(1))}(\_,j)\rangle_{j\in n+m},
\end{array}
$$
where the last equality follows from the fact that $0^{T(1)} = 0_{1,1}(*)$ and $1^{T(1)} = \eta_1(*)$
\end{enumerate}
\item $\theta_T \colon \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T) \to \mathcal{M}at(T(1))$ is a $\Cat{Law}\xspace$-map.
\begin{enumerate}
\item $\theta_T$ preserves the identity:
$$
\begin{array}{rcll}
\theta_T(\idmap_n^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)})(\_,j) &=&
\pi_j \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \eta_n\\
&=&
\pi_j \mathrel{\circ} \langle p_j \rangle_{j \in n} &\text{Lemma \ref{bcproplem}\ref{comEtaMubcprop}}\\
&=&
p_j = \idmap_n^{\mathcal{M}at(T(1))}(\_,j)
\end{array}
$$
\item $\theta_T$ preserves the composition:\\
Let $g\colon n \to T(m)$ and $h\colon m \to T(p)$. Then $\theta_T(h \mathrel{\circ} g) \colon n \times p \to T(1)$. We fix the second coordinate and consider it as a map $n \to T(1)$.
$$
\begin{array}{rcll}
\lefteqn{(\theta(h)\mathrel{\circ}\theta(g))(\_,k)}\\
&=&
\sum_j \theta(h)(j,k) \cdot \theta(g)(\_,j)\\
&=&
T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \mu^m \mathrel{\circ} T(\pi_2)^m \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace^m \mathrel{\circ} \prod_j (\ensuremath{\mathrm{id}} \times \theta(h)(\_,k)) \mathrel{\circ} \prod_j (\ensuremath{\mathrm{id}} \times \kappa_j) \mathrel{\circ} \\ &&(\rho^{-1})^m \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g\\
&=&
T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \mu^m \mathrel{\circ} T(\pi_2)^m \mathrel{\circ} \prod_j T(\ensuremath{\mathrm{id}} \times \theta(h)(\_,k)) \mathrel{\circ} \prod_j T(\ensuremath{\mathrm{id}} \times \kappa_j) \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace^m \mathrel{\circ} \\ && (\rho^{-1})^m \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g\\
&=&
T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \mu^m \mathrel{\circ} \prod_j T(\theta(h)(\_,k)) \mathrel{\circ} \prod_j T(\kappa_j) \mathrel{\circ} T(\pi_2)^m \mathrel{\circ} \ensuremath{\mathsf{st}}\xspace^m \mathrel{\circ} \\ && (\rho^{-1})^m \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g\\
&=&
T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \mu^m \mathrel{\circ} \prod_j T(\theta(h)(\_,k)) \mathrel{\circ} \prod_j T(\kappa_j) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g\\
&=&
\mu \mathrel{\circ} T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \prod_j T(\theta(h)(\_,k)) \mathrel{\circ} \prod_j T(\kappa_j) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g\\
&=&
\mu \mathrel{\circ} T(\nabla) \mathrel{\circ} T(\coprod_j \theta(h)(\_,k)) \mathrel{\circ} T(\coprod_j \kappa_j) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g\\
&=&
\mu \mathrel{\circ} T(\theta(h)(\_,k)) \mathrel{\circ} g(\_)\\
&=&
\mu \mathrel{\circ} T(\pi_k) \mathrel{\circ} T(\ensuremath{\mathsf{bc}}\xspace) \mathrel{\circ} T(h) \mathrel{\circ} g\\
&=&
\mu \mathrel{\circ} \pi_k \mathrel{\circ} \langle T(\pi_k)\rangle_{k\in p} \mathrel{\circ} T(\ensuremath{\mathsf{bc}}\xspace) \mathrel{\circ} T(h) \mathrel{\circ} g\\
&=&
\pi_k \mathrel{\circ} \mu^p \mathrel{\circ} \langle T(\pi_k)\rangle_{k\in p} \mathrel{\circ} T(\ensuremath{\mathsf{bc}}\xspace) \mathrel{\circ} T(h) \mathrel{\circ} g\\
&=&
\pi_k \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \mu \mathrel{\circ} T(h) \mathrel{\circ} g\\
&=&
\theta(h \mathrel{\circ} g)(\_,k)
\end{array}
$$
\item $\theta_T$ preserves the coproduct structure\\
Consider $\kappa_1^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)} = \eta \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace} \colon n \to T(n+m)$. Then
$$
\theta_T(\kappa_1^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)}) \colon n \times (n+m) \to T(1)
$$
and
$$
\begin{array}{rcll}
\theta_T(\kappa_1^{\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)})(\_,j) &=& \pi_j \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace_m \mathrel{\circ} \eta \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace}\\
&=&
\pi_j \mathrel{\circ} \langle p_j \rangle_{j \in n+m} \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace} &\text{(Lemma \ref{bcproplem}(iii))}\\
&=&
p_j \mathrel{\circ} \kappa_1^{\Cat{Sets}\xspace}\\
&=&
\kappa_1^{\mathcal{M}at(T(1))}(\_,j)
\end{array}
$$
\end{enumerate}
\item Naturality of $\xi \colon \mathcal{M}at\mathcal{E} \to \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}$.\\
Let $\sigma \colon T \to V$ and $h \colon n \times m \to T(1)$ in $(\mathcal{M}at\mathcal{E})(T)$.
$$
\begin{array}{rcl}
(\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma) \mathrel{\circ} \xi_T)(h)
&=&
\sigma_m \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle h \mathrel{\circ} \lambda i.\tuple{i}{j} \rangle_{j \in m}\\
&=&
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \sigma_1^m \mathrel{\circ} \langle h \mathrel{\circ} \lambda i.\tuple{i}{j} \rangle_{j \in m}\\
&=&
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle (\sigma_1 \mathrel{\circ} h) \mathrel{\circ} \lambda i.\tuple{i}{j} \rangle_{j \in m}\\
&=&
(\xi_V \mathrel{\circ} (\mathcal{M}at\mathcal{E})(\sigma))(h)
\end{array}
$$
\item Naturality of $\theta \colon \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \to \mathcal{M}at\mathcal{E}$.\\
Let $\sigma \colon T \to V$ and $g \colon n \to T(m)$ in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$.
$$
\begin{array}{rcl}
((\mathcal{M}at\mathcal{E})(\sigma)\mathrel{\circ} \theta_T)(g)(\_,j)
&=&
\sigma_1 \mathrel{\circ} \pi_j \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g\\
&=&
\pi_j \mathrel{\circ} \sigma_1^m \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g\\
&=&
\pi_j \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \sigma_m \mathrel{\circ} g \\
&=&
(\theta_V \mathrel{\circ} \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(\sigma))(g)(\_,j)
\end{array}
$$
\item $\xi_T \mathrel{\circ} \theta_T = id$.\\
Let $g \colon n \to T(m)$ in $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T)$.
$$
\begin{array}{rcl}
(\xi_T \mathrel{\circ} \theta_T)(g) &=& \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle \theta_T(g) \mathrel{\circ} \lambda i.\tuple{i}{j}\rangle_{j \in m}\\
&=&
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle \pi_j \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g \rangle_{j \in m}\\
&=&
\ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} g\\
&=&
g
\end{array}
$$
\item $\theta_T \mathrel{\circ} \xi_T = id$.\\
Let $h \colon n\times m \to T(1)$ in $\mathcal{M}at(T(1))$.
$$
\begin{array}{rcl}
(\theta_T \mathrel{\circ} \xi_T)(h)(i,j) &=& \pi_j \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \xi_T(h)(i)\\
&=&
\pi_j \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} \langle h \mathrel{\circ} \lambda x.\tuple{x}{k}\rangle_{k \in m}(i)\\
&=&
\pi_j\big(\langle h(i,k)\rangle_{k\in m}\big)\\
&=&
h(i,j)
\end{array}
$$
\end{itemize}
}
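The correspondence between Kleisli maps $n \to T(m)$ and matrices $n \times m \to T(1)$ established above can be illustrated concretely. The following Python sketch (our illustration, not part of the formal development) implements $\theta_T$ and $\xi_T$ for the multiset monad over the natural numbers, representing an element of $T(m)$ as a dictionary from indices to multiplicities and a Kleisli map $n \to T(m)$ as a list of such dictionaries.

```python
# Illustration only: T = multiset monad M_N. An element of T(m) is a dict
# {j: multiplicity}; a Kleisli map g: n -> T(m) is a list of such dicts.
# The names theta/xi mirror the natural transformations in the text.

def theta(g, m):
    """theta(g)(i, j) = the multiplicity of j in the multiset g(i)."""
    return {(i, j): g[i].get(j, 0) for i in range(len(g)) for j in range(m)}

def xi(h, n, m):
    """xi(h)(i) = the multiset with multiplicity h(i, j) at j (zeros dropped)."""
    return [{j: h[(i, j)] for j in range(m) if h[(i, j)] != 0}
            for i in range(n)]
```

On sample data the two maps are mutually inverse, matching the computations $\xi_T \circ \theta_T = \mathrm{id}$ and $\theta_T \circ \xi_T = \mathrm{id}$ above.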
The pair of functors $\mathcal{M} \colon \Cat{CSRng}\xspace \leftrightarrows
\Cat{ACMnd}\xspace \colon \mathcal{E}$ forms a reflection, $\mathcal{E}\mathcal{M} \cong \idmap$
(Lemma~\ref{AdjCSR2ACMLem}). Combining this with the previous
proposition, it follows that also the functors $\mathcal{M}at,
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{M} \colon \Cat{CSRng}\xspace \to \Cat{Law}\xspace$ are naturally
isomorphic. Hence, the functor $\mathcal{M}at \colon \Cat{CSRng}\xspace \to \Cat{Law}\xspace$ may be
viewed as a functor from commutative semirings to symmetric monoidal
Lawvere theories with biproducts. For a commutative semiring $S$ the
projection maps $\pi_1 \colon n+m \to n$ and $\pi_2 \colon n + m \to
m$ in $\mathcal{M}at(S)$ are defined analogously to the coprojection maps
from Definition \ref{MatrCatDef}. For a pair of maps $g \colon m \to
p$, $h \colon n \to q$, the tensor product $g \otimes h \colon (m
\times n) \to (p \times q)$ is given by the function $g \otimes h \colon (m \times
n) \times (p \times q) \to S$ defined as
$$\begin{array}{rcl}
(g \otimes h)((i_0,i_1),(j_0,j_1))
& = &
g(i_0,j_0) \cdot h(i_1,j_1),
\end{array}$$
where $\cdot$ is the multiplication from $S$.
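Composition and tensor in $\mathcal{M}at(S)$ are easy to compute with directly. The following Python sketch (an illustration under the representation of a matrix $n \to m$ as a dictionary on $n \times m$, here over the semiring of natural numbers) implements composition as matrix multiplication and the tensor product by the formula above.

```python
# Illustration only: morphisms n -> m of Mat(S) as dicts {(i, j): s},
# over the semiring (N, +, 0, *, 1).

def compose(g, h, n, m, p):
    """(h . g)(i, k) = sum_j g(i, j) * h(j, k), for g: n -> m, h: m -> p."""
    return {(i, k): sum(g[(i, j)] * h[(j, k)] for j in range(m))
            for i in range(n) for k in range(p)}

def tensor(g, h, dims):
    """(g (x) h)((i0, i1), (j0, j1)) = g(i0, j0) * h(i1, j1),
    for g: m -> p and h: n -> q, with dims = (m, n, p, q)."""
    m, n, p, q = dims
    return {((i0, i1), (j0, j1)): g[(i0, j0)] * h[(i1, j1)]
            for i0 in range(m) for i1 in range(n)
            for j0 in range(p) for j1 in range(q)}
```

For $2\times 2$ matrices this composition is exactly the usual matrix product, and the tensor product is the Kronecker product written entrywise.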
\subsection{From Lawvere theories to semirings}\label{Cat2CSRngSec}
In Section~\ref{ComMonoidSubsec}, just after Lemma~\ref{CMon2CMndLem},
we have already seen that the homset $\cat{L}(1,1)$ of a Lawvere
theory $\cat{L} \in \Cat{SMLaw}\xspace$ is a commutative monoid, with
multiplication given by composition of endomaps on $1$. In case $\cat{L}$ also has biproducts we have, by
\eqref{HomsetPlus}, an addition on this homset, which is preserved by
composition. Combining those two monoid structures yields a semiring
structure on $\cat{L}(1,1)$. This is standard, see
\textit{e.g.}~\cite{AbramskyC04,KellyL80,Heunen10a}. The assignment of
the semiring $\cat{L}(1,1)$ to a Lawvere theory $\cat{L} \in \Cat{SMBLaw}\xspace$
is functorial and we denote this functor, as in
Section~\ref{ComMonoidSubsec}, by $\mathcal{H} \colon \Cat{SMBLaw}\xspace \to
\Cat{CSRng}\xspace$.
\subsection{Adjunction between semirings and Lawvere theories}
Our main result is the adjunction on the right in the triangle of
adjunctions for semirings, see Figure~\ref{CSRngTriangleFig}.
\begin{lemma}
\label{CSRng2CatAdjLem}
The pair of functors $\mathcal{M}at \colon \Cat{CSRng}\xspace \rightleftarrows \Cat{SMBLaw}\xspace
\colon \mathcal{H}$, forms an adjunction $\mathcal{M}at \dashv \mathcal{H}$.
\end{lemma}
\begin{proof}
For $S \in \Cat{CSRng}\xspace$ and $\cat{L} \in \Cat{SMBLaw}\xspace$ there are
(natural) bijective correspondences:
$$\begin{bijectivecorrespondence}
\correspondence[in \Cat{SMBLaw}\xspace]{\xymatrix{\mathcal{M}at(S)\ar[r]^-{F} & \cat{L}}}
\correspondence[in \Cat{CSRng}\xspace]{\xymatrix{S\ar[r]_-{f} & \mathcal{H}(\cat{L})}}
\end{bijectivecorrespondence}$$
\
\noindent Given $F$ one defines a semiring map $\overline{F}
\colon S\rightarrow \mathcal{H}(\cat{L}) = \cat{L}(1,1)$ by
$$
s \mapsto F(1 \times 1 \xrightarrow{\lam{x}{s}} S).
$$
\noindent Note that $1 \times 1 \xrightarrow{\lam{x}{s}} S$ is an
endomap on $1$ in $\mathcal{M}at(S)$ which is mapped by $F$ to an element of
$\cat{L}(1,1)$.
Conversely, given $f$ one defines a \Cat{SMBLaw}\xspace-map $\overline{f} \colon
\mathcal{M}at(S) \rightarrow \cat{L}$ which sends a morphism $h \colon n \to m$
in $\mathcal{M}at(S)$, \textit{i.e.} $h \colon n \times m \to S$ in $\Cat{Sets}\xspace$, to
the following morphism $n\rightarrow m$ in $\cat{L}$, forming an
$n$-cotuple of $m$-tuples
$$\xymatrix@C+1pc{
\overline{f}(h) = \Big(n
\ar[rr]^-{\big[\big\langle f(h(i,j))\big\rangle_{j < m}\big]_{i < n}}
&&m\Big)}.
$$
It is readily checked that $\overline{F} \colon S \to \cat{L}(1,1)$ is
a map of semirings. To show that $\overline{f} \colon \mathcal{M}at(S) \to
\cat{L}$ is a functor one has to use the definition of the semiring
structure on $\cat{L}(1,1)$ and the properties of the biproduct on
$\cat{L}$. One easily verifies that $\overline{f}$ preserves the
biproduct. To show that it also preserves the monoidal structure one
has to use that, for $s, t \in \cat{L}(1,1)$, $s \otimes t = t\mathrel{\circ} s
\,(=s \mathrel{\circ} t)$. \hspace*{\fill}$\QEDbox$
\end{proof}
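The map $\overline{f}$ acts on a matrix by applying $f$ entrywise, and its functoriality can be checked numerically. The Python sketch below (an illustration, with a semiring passed as explicit `add`, `mul`, `zero` data) verifies that applying a semiring map entrywise commutes with matrix composition, here for the map $\mathbb{N} \to \mathbb{B}$, $n \mapsto n > 0$, into the Boolean semiring.

```python
# Illustration only: matrices as dicts {(i, j): s}; the semiring is given
# by explicit add/mul/zero so the same code works over N and over Bool.

def compose(g, h, m, add, mul, zero):
    """(h . g)(i, k) = sum_j g(i, j) * h(j, k) in the given semiring."""
    rows = {i for (i, _) in g}
    cols = {k for (_, k) in h}
    out = {}
    for i in rows:
        for k in cols:
            acc = zero
            for j in range(m):
                acc = add(acc, mul(g[(i, j)], h[(j, k)]))
            out[(i, k)] = acc
    return out

def mat_map(f, g):
    """Apply a semiring map f entrywise, as in the definition of f-bar."""
    return {ij: f(s) for ij, s in g.items()}
```

The check below confirms $\overline{f}(h \circ g) = \overline{f}(h) \circ \overline{f}(g)$ in this concrete instance.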
\auxproof{
\begin{enumerate}
\item $\overline{F}$ is a semiring homomorphism
\begin{itemize}
\item $\overline{F}$ preserves $\cdot$
$$
\begin{array}{rcll}
\overline{F}(s\cdot t) &=& F(1 \times 1 \xrightarrow{{\lambda x. (s\cdot t)}} S)\\
&=&
F((1 \times 1 \xrightarrow{\lam{x}{s}} S) \mathrel{\circ} (1 \times 1 \xrightarrow{{\lambda x. t}} S))\\
&=&
F(1 \times 1 \xrightarrow{\lam{x}{s}} S) \mathrel{\circ} F(1 \times 1 \xrightarrow{{\lambda x. t}} S)\\
&=&
F(1 \times 1 \xrightarrow{{\lambda x. t}} S) \cdot^{\mathcal{H}(\cat{L})} F(1 \times 1 \xrightarrow{\lam{x}{s}} S)\\
&=&
\overline{F}(t) \cdot \overline{F}(s)\\
&=& \overline{F}(s) \cdot \overline{F}(t)
&\text{commutativity}
\end{array}
$$
\item $\overline{F}$ preserves $+$
$$
\begin{array}{rcll}
\overline{F}(s + t) &=& F(1 \times 1 \xrightarrow{{\lambda x. (s+t)}} S)\\
&=& F(1 \xrightarrow{\tuple{s}{t}} 1 \oplus 1 \xrightarrow{\nabla} 1)\\
&=& I \xrightarrow{\tuple{F(s)}{F(t)}} I \oplus I \xrightarrow{\nabla} I &\text{$F$ preserves all structure}\\
&=& F(s) + F(t)
\end{array}
$$
\item $\overline{F}$ preserves 0
$$
\begin{array}{rcl}
\overline{F}(0) &=& F(1 \times 1 \xrightarrow{{\lambda x. 0}} S)\\
&=& F(1 \xrightarrow{!} 0 \xrightarrow{!} 1)\\
&=& F(1)\xrightarrow{F(!)} F(0) \xrightarrow{F(!)} F(1)\\
&=& 1 \xrightarrow{!} 0 \xrightarrow{!} 1 = 0_{\mathcal{H}(\cat{L})}
\end{array}
$$
\item $\overline{F}$ preserves 1
$$
\begin{array}{rcl}
\overline{F}(1) &=& F(1 \times 1 \xrightarrow{{\lambda x. 1}} S)\\
&=& F(id_1)\\
&=& id_1\\
&=& 1_{\mathcal{H}(\cat{L})}
\end{array}
$$
\end{itemize}
\item $\overline{f}$ is a \Cat{SMBLaw}\xspace-morphism
\begin{itemize}
\item $\overline{f}$ is a functor\\
Let $n$ be an object of $\mathcal{M}at(S)$,
$$\xymatrix@C+1pc{
\overline{f}(id_n) = \Big(n
\ar[rr]^-{\big[\big\langle f(id_n(i,j))\big\rangle_{j\in n}\big]_{i \in n}}
&&n\Big)}.
$$
As $f$ is a semiring homomorphism, it preserves $0$ and $1$. Hence
$$
f(\idmap_n(i,j)) = \left\{
\begin{array}{ll}
id_1 & \text{if }\, i = j \\
0_{1,1} & \text{if }\, i \ne j.
\end{array} \right.
$$
It follows that $\overline{f}(\idmap_n) = \idmap_{\bigoplus_n I}$ (using the properties of the biproduct).\\
\\
Let $g \colon n \to m$ and $h \colon m \to p$ in $\mathcal{M}at(S)$.
$$
\begin{array}{rcl}
\overline{f}(h \mathrel{\circ} g) &=& \big[\big\langle f(h \mathrel{\circ} g(i,k))\big\rangle_{k\in p}\big]_{i \in n}\\
&=&
\big[\big\langle f(\sum_j h(j,k) \cdot g(i,j))\big\rangle_{k\in p}\big]_{i \in n}\\
&=&
\big[\big\langle f(\sum_j g(i,j) \cdot h(j,k))\big\rangle_{k\in p}\big]_{i \in n}\\
&=&
\big[\big\langle \sum_j f(g(i,j)) \cdot f(h(j,k))\big\rangle_{k\in p}\big]_{i \in n}\\
&=&
\big[\big\langle \nabla \mathrel{\circ} \oplus_{j \in m} ((f(h(j,k)) \mathrel{\circ} f(g(i,j)))) \mathrel{\circ} \Delta \big\rangle_{k\in p}\big]_{i \in n}\\
&=&
\big[\big\langle \nabla \mathrel{\circ} (\oplus_{j \in m} f(h(j,k))) \mathrel{\circ} (\oplus_{j\in m} f(g(i,j))) \mathrel{\circ} \Delta \big\rangle_{k\in p}\big]_{i \in n}\\
&=&
\big[\big\langle [f(h(j,k))]_{j\in m} \mathrel{\circ} \big\langle f(g(i,j)) \big\rangle_{j\in m} \big\rangle_{k\in p}\big]_{i \in n}\\
&=&
\big[\big\langle [f(h(j,k))]_{j\in m} \big\rangle_{k\in p} \mathrel{\circ} \big\langle f(g(i,j)) \big\rangle_{j\in m} \big]_{i \in n}\\
&=&
\big\langle [f(h(j,k))]_{j\in m} \big\rangle_{k\in p} \mathrel{\circ} \big[\langle f(g(i,j)) \rangle_{j\in m} \big]_{i \in n}\\
&=&
\big[\big\langle f(h(j,k))\big\rangle_{k\in p}\big]_{j\in m} \mathrel{\circ} \big[\langle f(g(i,j)) \rangle_{j\in m} \big]_{i \in n}\\
&=&
\overline{f}(h) \mathrel{\circ} \overline{f}(g)
\end{array}
$$
\item $\overline{f}$ preserves the biproduct structure\\
We have to show that the canonical maps:
$$
n +m = \overline{f}(n \oplus m) \xrightarrow{\tuple{\overline{f}(\pi_1)}{\overline{f}(\pi_2)}} \overline{f}(n) \oplus \overline{f}(m) = n+m
$$
and
$$
n+m = \overline{f}(n) \oplus \overline{f}(m) \xrightarrow{\cotuple{\overline{f}(\kappa_1)}{\overline{f}(\kappa_2)}} \overline{f}(n \oplus m) = n+m
$$
are mutually inverse.
$$
\begin{array}{rcl}
\tuple{\overline{f}(\pi_1)}{\overline{f}(\pi_2)} &=& \tuple{\big[\langle f(\pi_1(x,i))\rangle_{i \in n}\big]_{x \in n+m}}{\big[\langle f(\pi_2(x,j))\rangle_{j \in m}\big]_{x \in n+m}}\\
&=&
\tuple{\big\langle [f(\pi_1(x,i))]_{x \in n+m}\big\rangle_{i \in n}}{\big\langle [f(\pi_2(x,j))]_{x \in n+m}\big\rangle_{j \in m}}\\
&=&
\idmap_{n+m}
\end{array}
$$
And similarly $\cotuple{\overline{f}(\kappa_1)}{\overline{f}(\kappa_2)} = \idmap_{\oplus_{n+m}I}$
\item $\overline{f}$ preserves the monoidal structure\\
On objects:
$$
\begin{array}{rcll}
\overline{f}(n)\otimes \overline{f}(m) &=& n \times m\\
&=&
\overline{f}(n\times m)
\end{array}
$$
Furthermore we have to show that for $h_1 \colon m \to p$ and $h_2 \colon n \to q$ in $\mathcal{M}at(S)$, $\overline{f}(h_1 \otimes h_2) = \overline{f}(h_1) \otimes \overline{f}(h_2)$. By way of example we consider $h_1 \colon 2 \to 1$ and $h_2 \colon 2 \to 1$, and the canonical identities:
$$
\beta_1 = 1 \otimes 1 + 1 \otimes 1 + 1 \otimes 1 + 1 \otimes 1 \xrightarrow{[\kappa_1 \mathrel{\circ} (\ensuremath{\mathrm{id}} \otimes \kappa_1), \kappa_1 \mathrel{\circ} (\ensuremath{\mathrm{id}} \otimes \kappa_2),\kappa_2 \mathrel{\circ} (\ensuremath{\mathrm{id}} \otimes \kappa_1), \kappa_2 \mathrel{\circ} (\ensuremath{\mathrm{id}} \otimes \kappa_2)]} 1 \otimes (1+1) + 1 \otimes (1+1)
$$
and
$$
\beta_2 = 1 \otimes (1+1) + 1 \otimes (1+1) \xrightarrow{[\kappa_1 \otimes \idmap, \kappa_2 \otimes \idmap]} (1+1) \otimes (1+1)
$$
We have to show:
$$
(\overline{f}(h_1) \otimes \overline{f}(h_2)) \mathrel{\circ} \beta_2 \mathrel{\circ} \beta_1 = \overline{f}(h_1 \otimes h_2)
$$
We just show
$$
(\overline{f}(h_1) \otimes \overline{f}(h_2)) \mathrel{\circ} \beta_2 \mathrel{\circ} \beta_1 \mathrel{\circ} \kappa_1 = \overline{f}(h_1 \otimes h_2) \mathrel{\circ} \kappa_1,
$$
showing equality under composition with the other three coprojections is done in a similar way.
$$
\begin{array}{rcll}
\lefteqn{(\overline{f}(h_1) \otimes \overline{f}(h_2)) \mathrel{\circ} \beta_2 \mathrel{\circ} \beta_1 \mathrel{\circ} \kappa_1} \\
&=&
(\overline{f}(h_1) \otimes \overline{f}(h_2)) \mathrel{\circ} \beta_2 \mathrel{\circ} \kappa_1 \mathrel{\circ} (\ensuremath{\mathrm{id}} \otimes \kappa_1)\\
&=&
(\overline{f}(h_1) \otimes \overline{f}(h_2)) \mathrel{\circ} (\kappa_1 \otimes \idmap) \mathrel{\circ} (\ensuremath{\mathrm{id}} \otimes \kappa_1)\\
&=&
(\overline{f}(h_1) \mathrel{\circ} \kappa_1) \otimes (\overline{f}(h_2) \mathrel{\circ} \kappa_1)\\
&=&
f(h_1(0,0)) \otimes f(h_2(0,0)) &\text{(def. $\overline{f}$)}\\
&=&
f(h_2(0,0)) \mathrel{\circ} f(h_1(0,0)) &(s \otimes t = t \mathrel{\circ} s)\\
&=&
f(h_2(0,0) \cdot h_1(0,0)) &(\text{$f$ pres. mult.})\\
&=&
\overline{f}(h_1 \otimes h_2) \mathrel{\circ} \kappa_1 &(\text{def. $\otimes$ in $\mathcal{M}at(S)$})
\end{array}
$$
\end{itemize}
\item $\overline{\overline{F}} = F$\\
Let us denote $\underline{s} = 1 \times 1 \xrightarrow{\lam{x}{s}} S$.\\
Clearly $\overline{\overline{F}} = F$ on objects. Now let $h \colon n \to m$ in $\mathcal{M}at(S)$, \textit{i.e.}~$h \colon n\times m \to S$ in $\Cat{Sets}\xspace$; by definition
$$
\xymatrix@C+1pc{
\overline{\overline{F}}(h) = \Big(n
\ar[rr]^-{\big[\big\langle \overline{F}(h(i,j))\big\rangle_{j\in m}\big]_{i \in n}}
&&m\Big)}.
$$
$$
\begin{array}{rcll}
\big[\big\langle \overline{F}(h(i,j))\big\rangle_{j\in m}\big]_{i \in n} &=& \big[\big\langle F(\underline{h(i,j)})\big\rangle_{j\in m}\big]_{i \in n} &\text{definition $\overline{F}$}\\
&=&
F(\big[\big\langle \underline{h(i,j)}\big\rangle_{j\in m}\big]_{i \in n}) &\text{$F$ preserves the biproduct structure}\\
&=&
F(h) &\text{definition of (co)tuples in $\mathcal{M}at(S)$}
\end{array}
$$
\item $\overline{\overline{f}} = f$\\
$$
\begin{array}{rcll}
\overline{\overline{f}}(s) &=& \overline{f}(\underline{s})\\
&=&
I \xrightarrow{f(\underline{s}(*,*))} I &\text{definition $\overline{f}$}\\
&=&
f(s)
\end{array}
$$
\item For naturality consider
$$\begin{bijectivecorrespondence}
\correspondence[in \Cat{SMBLaw}\xspace]{\xymatrix{\mathcal{M}at(S)\ar[r]^-{\mathcal{M}at(g)} & \mathcal{M}at(R) \ar[r]^-{F} & \cat{L} \ar[r]^-{G} & \cat{K}}}
\correspondence[in \Cat{CSRng}\xspace]{\xymatrix{S\ar[r]_-{g} & R \ar[r]_-{\overline{F}} & \mathcal{H}(\cat{L}) \ar[r]_-{\mathcal{H}(G)} & \mathcal{H}(\cat{K})}}
\end{bijectivecorrespondence}$$
Then
$$
\begin{array}{rcll}
\overline{G \mathrel{\circ} F \mathrel{\circ} \mathcal{M}at(g)}(s) &=& G \mathrel{\circ} F \mathrel{\circ} \mathcal{M}at(g)(\underline{s})\\
&=&
G \mathrel{\circ} F (\underline{g(s)})\\
&=&
G \mathrel{\circ} \overline{F}(g(s)) &\text{definition $\overline{F}$}\\
&=&
\mathcal{H}(G) \mathrel{\circ} \overline{F} \mathrel{\circ} g(s) &\text{definition $\mathcal{H}(G)$}
\end{array}
$$
\end{enumerate}
}
The results of Sections~\ref{SemiringMonadSec} and~\ref{Semiringcatsec}
are summarized in Figure~\ref{CSRngTriangleFig}.
\begin{figure}
$$\xymatrix@R-.5pc@C+.5pc{
& & \Cat{CSRng}\xspace\ar@/_2ex/ [ddll]_{\cal M}
\ar@/_2ex/ [ddrr]_(0.3){\mathcal{M}at \cong \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{M}\;\;} \\
& \dashv & & \dashv & \\
\Cat{ACMnd}\xspace\ar @/_2ex/[rrrr]_{\mathcal{K}{\kern-.2ex}\ell_\ensuremath{\mathbb{N}}}
\ar@/_2ex/ [uurr]_(0.6){\;{\mathcal{E}} \cong \mathcal{H}\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}} & & \bot & &
\Cat{SMBLaw}\xspace\ar@/_2ex/ [uull]_{\mathcal{H}}\ar @/_2ex/[llll]_{\mathcal{T}}
}$$
\caption{Triangle of adjunctions relating commutative semirings,
commutative additive monads, and symmetric monoidal Lawvere theories
with biproducts.}
\label{CSRngTriangleFig}
\end{figure}
\section{Semirings with involutions}\label{InvolutionSec}
In this final section we enrich our approach with involutions.
Actually, such involutions could have been introduced for monoids
already. We have not done so for practical reasons: involutions on
semirings give the most powerful results, combining daggers on
categories with both symmetric monoidal and biproduct structure.
An involutive semiring (in \Cat{Sets}\xspace) is a semiring $(S, +, 0, \cdot, 1)$
together with a unary operation $*$ that preserves the addition and
multiplication, \textit{i.e.}~$(s+t)^* = s^*+t^*$ and $0^* = 0$, and
$(s\cdot t)^* = s^* \cdot t^*$ and $1^*=1$, and is involutive,
\textit{i.e.}~$(s^*)^* = s$. The complex numbers with conjugation form
an example. We denote the category of involutive semirings, with
homomorphisms that preserve all structure, by $\Cat{ICSRng}\xspace$.
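As a numeric sanity check of these equations (not a proof), the following Python snippet verifies the involutive semiring axioms for the complex numbers with conjugation, on sample values.

```python
# Illustration only: conjugation as the involution * on the semiring C.

def star(z):
    """The involution on C: complex conjugation."""
    return z.conjugate()

s, t = 2 + 3j, 1 - 4j
assert star(s + t) == star(s) + star(t) and star(0j) == 0j   # * preserves (+, 0)
assert star(s * t) == star(s) * star(t) and star(1 + 0j) == 1  # * preserves (., 1)
assert star(star(s)) == s                                      # * is involutive
```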
The adjunction $\mathcal{M} \colon \Cat{CSRng}\xspace \leftrightarrows \cat{ACMnd}
\colon \mathcal{E}$ considered in Lemma~\ref{AdjCSR2ACMLem} may be restricted
to an adjunction between involutive semirings and so-called involutive
commutative additive monads (on \Cat{Sets}\xspace), which are commutative additive
monads $T$ together with a monad morphism $\zeta\colon T \to T$
satisfying $\zeta \mathrel{\circ} \zeta = id$. We call $\zeta$ an involution on
$T$, just as in the semiring setting. A morphism between such monads
$(T,\zeta)$ and $(T',\zeta')$, is a monad morphism $\sigma \colon T
\to T'$ preserving the involution, \textit{i.e.}~satisfying $\sigma
\mathrel{\circ} \zeta = \zeta' \mathrel{\circ} \sigma$. We denote the category of
involutive commutative additive monads by $\Cat{IACMnd}\xspace$.
\begin{lemma}
The functors $\mathcal{M} \colon \Cat{CSRng}\xspace \leftrightarrows \Cat{ACMnd}\xspace
\colon \mathcal{E}$ from Lemma \ref{SRng2CAMndProp} and Lemma
\ref{CAMnd2CSRngLem} restrict to a pair of functors $\mathcal{M}
\colon \Cat{ICSRng}\xspace \leftrightarrows \Cat{IACMnd}\xspace \colon \mathcal{E}$. The restricted
functors form an adjunction $\mathcal{M} \dashv \mathcal{E}$.
\end{lemma}
\begin{proof}
Given a semiring $S$ with involution $*$, we may define an
involution $\zeta$ on the multiset monad $\mathcal{M}(S) = M_S$ with
components
$$
\zeta_X \colon M_S(X) \to M_S(X), \qquad
\textstyle\sum_i s_ix_i \mapsto \sum_i s_i^*x_i.
$$
Conversely, for an involutive monad $(T, \zeta)$, the map $\zeta_1$ gives an involution on the semiring $\mathcal{E}(T) = T(1)$.
A simple computation shows that the unit and the counit of the
adjunction $\mathcal{M} \colon \Cat{CSRng}\xspace \leftrightarrows \Cat{ACMnd}\xspace \colon
\mathcal{E}$ from Lemma~\ref{AdjCSR2ACMLem} preserve the involution (on
semirings and on monads respectively). Hence the restricted functors
again form an adjunction. \hspace*{\fill}$\QEDbox$
\end{proof}
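The involution $\zeta$ is easy to realise concretely. In the Python sketch below (our illustration for $S = \mathbb{C}$ with conjugation), a multiset $\sum_i s_i x_i$ is a dictionary $\{x_i \mapsto s_i\}$ and $\zeta$ conjugates the multiplicities; the snippet checks $\zeta \circ \zeta = \mathrm{id}$ and $\zeta \circ \eta = \eta$ on sample data.

```python
# Illustration only: the multiset monad M_S for S = C, with multisets
# as dicts {element: complex multiplicity}.

def zeta(phi):
    """The involution on M_S(X): conjugate each multiplicity."""
    return {x: s.conjugate() for x, s in phi.items()}

def eta(x):
    """Unit of the multiset monad: the singleton multiset 1 * x."""
    return {x: 1 + 0j}

phi = {'a': 2 + 1j, 'b': 3 - 4j}
assert zeta(zeta(phi)) == phi        # zeta . zeta = id
assert zeta(eta('a')) == eta('a')    # zeta . eta = eta, since 1* = 1
```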
\auxproof{
\begin{itemize}
\item $\mathcal{M} \colon \Cat{CSRng}\xspace \to \Cat{ACMnd}\xspace$ restricts to a functor $\Cat{ICSRng}\xspace \to \Cat{IACMnd}\xspace$.\\
Using Lemma \ref{SRng2CAMndProp}, it only remains to show that $\zeta \colon M_S \to M_S$ is a monad morphism such that $\zeta \mathrel{\circ} \zeta = \idmap$ and that, for a morphism of involutive semirings $g \colon S \to R$, $\mathcal{M}(g)$ preserves the involution.
\begin{enumerate}
\item Naturality\\
Let $f \colon X \to Y$, then
$$
\begin{array}{rcl}
(\zeta_Y \mathrel{\circ} M_S(f))(\sum_i s_i x_i) &=& \zeta_Y(\sum_i s_if(x_i))\\
&=&
\sum_i s_i^*f(x_i)\\
&=&
(M_S(f) \mathrel{\circ} \zeta_X)(\sum_i s_i x_i)
\end{array}
$$
\item Commutativity with $\eta$:
$$
\zeta \mathrel{\circ} \eta (x) = \zeta(1x) = 1^*x = 1x = \eta(x)
$$
\item Commutativity with $\mu$:
$$
\begin{array}{rcll}
\lefteqn{ (\mu \mathrel{\circ} M_S\zeta_X\mathrel{\circ}\zeta_{M_S(X)})(\sum_i s_i \sum_j t_{ij}x_{j})}\\
&=&
(\mu \mathrel{\circ} M_S\zeta_X)(\sum_i s_i^* \sum_j t_{ij}x_{j})\\
&=&
\mu(\sum_i s_i^* \sum_j t_{ij}^*x_{j})\\
&=&
\sum_j(\sum_i^S s_i^*t_{ij}^*)x_j &\text{definition $\mu$}\\
&=&
\sum_j(\sum_i^S s_it_{ij})^*x_j &\text{$*$ commutes with $\cdot$ and $+$}\\
&=&
(\zeta_X \mathrel{\circ} \mu)(\sum_i s_i \sum_j t_{ij}x_j)
\end{array}
$$
\item $\zeta \mathrel{\circ} \zeta = \idmap$
$$
(\zeta \mathrel{\circ} \zeta)(\sum_i s_i x_i) = \zeta(\sum_i s_i^*x_i) = \sum_i s_i^{**}x_i = \sum_i s_i x_i.
$$
\item Let $g \colon S \to R$. To prove: $\mathcal{M}(g) \mathrel{\circ} \zeta^S = \zeta^R \mathrel{\circ} \mathcal{M}(g)$.\\
$$
\begin{array}{rcl}
(\mathcal{M}(g) \mathrel{\circ} \zeta^S)(\sum_i s_i x_i) &=& \mathcal{M}(g) (\sum_i s_i^* x_i)\\
&=&
\sum_i g(s_i^*) x_i\\
&=&
\sum_i (g(s_i))^* x_i\\
&=&
(\zeta^R \mathrel{\circ} \mathcal{M}(g))(\sum_i s_ix_i)
\end{array}
$$
\end{enumerate}
\item $\mathcal{E}\colon \Cat{ACMnd}\xspace \to \Cat{CSRng}\xspace$ restricts to a functor $\Cat{IACMnd}\xspace \to \Cat{ICSRng}\xspace$:\\
Using Lemma \ref{CAMnd2CSRngLem} it remains to show that $\zeta_1$ is an involution on $T(1)$, and that, for a morphism $\sigma \colon T \to T'$, $\mathcal{E}(\sigma) = \sigma_1$ preserves the involution. This last point is trivial as, by definition of morphisms in $\Cat{IACMnd}\xspace$, $\sigma \mathrel{\circ} \zeta = \zeta \mathrel{\circ} \sigma$. As for the first point:
\begin{enumerate}
\item $(a+b)^* = a^* + b^*$\\
Addition on $T(1)$ is given by $T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \colon T(1) \times T(1) \to T(1)$.
$$
\begin{array}{rcll}
\zeta_1 \mathrel{\circ} T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} &=& T(\nabla) \mathrel{\circ} \zeta_{1+1} \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} & \text{Naturality of $\zeta$}\\
&=&
T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (\zeta_1 \times \zeta_1) & \text{Lemma \ref{bcproplem}(i)}
\end{array}
$$
\item $(a\cdot b)^* = a^*b^*$\\
Multiplication on $T(1)$ is given by $T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace \colon T(1) \times T(1) \to T(1)$.
$$
\begin{array}{rcll}
\zeta_1 \mathrel{\circ} T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace &=& T(\pi_2) \mathrel{\circ} \zeta_{1\times 1} \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace & \text{Naturality of $\zeta$}\\
&=&
T(\pi_2) \mathrel{\circ} \ensuremath{\mathsf{dst}}\xspace \mathrel{\circ} (\zeta \times \zeta) & \text{Naturality of $\ensuremath{\mathsf{dst}}\xspace$}
\end{array}
$$
\item $a^{**} = a$
$$
a^{**} = \zeta_1(\zeta_1(a)) = (\zeta_1 \mathrel{\circ} \zeta_1)(a) = \idmap(a) = a.
$$
\end{enumerate}
\item The restricted functors again form an adjunction.\\
Consider the bijective correspondences defined in
Lemma~\ref{AdjCSR2ACMLem}.
\begin{itemize}
\item $\overline{\sigma} \colon S \to T(1)$ preserves the involution.
$$
\begin{array}{rcll}
\overline{\sigma}(s^*) &=& \sigma_1(s^*\,{*})\\
&=&
\sigma_1(\zeta^{M_S}(s\,{*}))\\
&=&
\zeta^T(\sigma_1(s\,{*}))\\
&=&
(\overline{\sigma}(s))^*
\end{array}
$$
\item $\overline{f} \colon M_S \to T$ preserves the involution.
$$
\begin{array}{rcll}
(\overline{f} \mathrel{\circ} \zeta^{M_S})(\sum_i s_ix_i)
&=&
\overline{f}(\sum_i s_i^*x_i)\\
&=&
\sum^{T(X)}_if(s_i^*)\star\eta^T(x_i) &\text{$\star$ is the action of $T(1)$ on $T(X)$}\\
&=&
\sum^{T(X)}_if(s_i)^*\star\eta^T(x_i)\\
&=&
\sum^{T(X)}_i\zeta_1^T(f(s_i))\star\zeta^T(\eta^T(x_i)) &\text{$\zeta_1$ is the involution on $T(1)$}\\
&&&\text{and $\zeta \mathrel{\circ} \eta = \eta$}\\
&=&
\sum^{T(X)}_i\zeta^T((f(s_i))\star\eta^T(x_i))&\text{By Lemma \ref{Mnd2ModLem}, $\mathcal{M}od(\zeta, \idmap) = (\zeta_1,\zeta)$}\\
&&&\text{is a map of modules}\\
&=&
\zeta^T(\sum^{T(X)}_if(s_i)\star\eta^T(x_i))&\text{(1)}\\
&=&
(\zeta^T\mathrel{\circ}\overline{f})(\sum_i s_ix_i)
\end{array}
$$
(1) follows from the fact that
$$
T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1} \mathrel{\circ} (\zeta \times \zeta) = \zeta \mathrel{\circ} T(\nabla) \mathrel{\circ} \ensuremath{\mathsf{bc}}\xspace^{-1},
$$
using Lemma \ref{bcproplem}(i) and naturality of $\zeta$.
\end{itemize}
\end{itemize}
}
The adjunction $\mathcal{M}at \colon \Cat{CSRng}\xspace \rightleftarrows \Cat{SMBLaw}\xspace \colon
\mathcal{H}$ from Lemma~\ref{CSRng2CatAdjLem} may also be restricted
to involutive semirings. To do so, we have to consider dagger
categories. A dagger category is a category $\cat{C}$ with a functor
$\dagger \colon \cat{C}\ensuremath{^{\mathrm{op}}} \to \cat{C}$ that is the identity on
objects and satisfies, for all morphisms $f \colon X \to Y$,
$(f^{\dagger})^{\dagger} = f$. The functor $\dagger$ is called a
dagger on $\cat{C}$. Combining this dagger with the categorical
structure we studied in Section \ref{Semiringcatsec} yields a
so-called dagger symmetric monoidal category with dagger biproducts,
that is, a category $\cat{C}$ with a symmetric monoidal structure
$(\otimes, I)$, a biproduct structure $(\oplus,0)$ and a dagger
$\dagger$, such that, for all morphisms $f$ and $g$, $(f \otimes
g)^{\dagger} = f^{\dagger} \otimes g^{\dagger}$, all the coherence
isomorphisms $\alpha$, $\rho$ and $\gamma$ are dagger isomorphisms
and, with respect to the biproduct structure, $\kappa_i =
\pi_i^{\dagger}$. Here a dagger isomorphism is an isomorphism $f$
satisfying $f^{-1} = f^{\dagger}$. Further details may be found
in~\cite{AbramskyC04,AbramskyC09,Heunen10a}.
We will denote the category of dagger symmetric monoidal Lawvere
theories with dagger biproducts such that the monoidal structure
distributes over the biproduct structure by \Cat{DSMBLaw}\xspace. Morphisms in
\Cat{DSMBLaw}\xspace are maps in \Cat{SMBLaw}\xspace that (strictly) commute with the daggers.
\begin{lemma}
The functors $\mathcal{M}at \colon \Cat{CSRng}\xspace \rightleftarrows \Cat{SMBLaw}\xspace \colon
\mathcal{H}$ defined in Section \ref{Semiringcatsec} restrict to a
pair of functors $\mathcal{M}at \colon \Cat{ICSRng}\xspace \rightleftarrows \Cat{DSMBLaw}\xspace
\colon \mathcal{H}$. The restricted functors form an adjunction,
$\mathcal{M}at \dashv \mathcal{H}$.
\end{lemma}
\begin{proof}
For an involutive semiring $S$, we may define a dagger on the Lawvere
theory $\mathcal{M}at(S)$ by assigning to a morphism $f\colon n \to m$ in
$\mathcal{M}at(S)$ the morphism $f^{\dagger} \colon m \to n$ given by
\begin{equation}
\label{MatDagEqn}
\begin{array}{rcl}
f^{\dagger}(i,j)
& = &
f(j,i)^{*}.
\end{array}
\end{equation}
\noindent Some short and straightforward computations show that the
functor $\dagger$ is indeed a dagger on $\mathcal{M}at(S)$, which interacts
appropriately with the monoidal and biproduct structure.
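For $S = \mathbb{C}$ with conjugation, the dagger of \eqref{MatDagEqn} is precisely the conjugate transpose. The Python sketch below (an illustration with matrices as dictionaries, not part of the formal development) checks involutivity and contravariance with respect to composition on sample matrices.

```python
# Illustration only: f_dagger(i, j) = f(j, i)*, i.e. the conjugate
# transpose, for matrices f: n -> m over C stored as dicts {(i, j): z}.

def dagger(f):
    """Swap indices and conjugate entries."""
    return {(j, i): z.conjugate() for (i, j), z in f.items()}

def compose(g, h, m):
    """(h . g)(i, k) = sum_j g(i, j) * h(j, k)."""
    rows = {i for (i, _) in g}
    cols = {k for (_, k) in h}
    return {(i, k): sum(g[(i, j)] * h[(j, k)] for j in range(m))
            for i in rows for k in cols}

f = {(0, 0): 1 + 2j, (0, 1): 3j, (1, 0): 2 + 0j, (1, 1): 1 - 1j}
g = {(0, 0): 2 - 1j, (0, 1): 1 + 0j, (1, 0): 0j, (1, 1): 4 + 0j}
assert dagger(dagger(f)) == f                                 # involutive
assert compose(dagger(g), dagger(f), 2) == dagger(compose(f, g, 2))  # contravariant
```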
\auxproof{
\begin{enumerate}
\item $\dagger$ is a functor\\
Clearly $\idmap^{\dagger} = \idmap$, as $1^* = 1$ and $0^* = 0$. Preservation of the composition is shown as follows:
$$
\begin{array}{rcll}
(f \mathrel{\circ} g)^{\dagger}(i,j) &=& ((g \mathrel{\circ} f)(j,i))^*\\
&=&
(\sum_l g(j,l)\cdot f(l,i))^*\\
&=&
\sum_l g(j,l)^*\cdot f(l,i)^*\\
&=&
\sum_l g(l,j)^{\dagger}\cdot f(i,l)^{\dagger}\\
&=&
\sum_l f(i,l)^{\dagger}\cdot g(l,j)^{\dagger}\\
&=&
(g^{\dagger}\mathrel{\circ} f^{\dagger})(i,j)
\end{array}
$$
\item $f^{\dagger\dagger} = f$
$$
f^{\dagger\dagger}(i,j) = (f^{\dagger}(j,i))^* = f(i,j)^{**} = f(i,j).
$$
\item $\pi_i^{\dagger} = \kappa_i$
\item $(h\otimes g)^{\dagger} = h^{\dagger}\otimes g^{\dagger}$\\
Let $h \colon n \times p \to S$ and $g \colon m \times q \to S$, then $(h\otimes g)^{\dagger} \colon (p \times q) \times (n \times m) \to S$.
$$
\begin{array}{rcll}
(h\otimes g)^{\dagger}((x_0, x_1),(y_0,y_1)) &=& ((h \otimes g)((y_0, y_1),(x_0, x_1)))^*\\
&=&
(h(y_0,x_0) \cdot g(y_1,x_1))^*\\
&=&
(h(y_0,x_0))^* \cdot (g(y_1,x_1))^*\\
&=&
h^{\dagger}(x_0, y_0) \cdot g^{\dagger}(x_1,y_1)\\
&=&
(h^{\dagger} \otimes g^{\dagger})((x_0, x_1),(y_0,y_1))
\end{array}
$$
\item $\dagger$ interacts appropriately with the other symmetric
monoidal structure.\\ As the matrices representing the morphisms
$\alpha \colon (n \otimes m)\otimes k \to n \otimes (m\otimes k)$,
$\rho \colon n \otimes 1 \to n$, $\lambda \colon 1 \otimes n \to n$
and $\gamma \colon n \otimes m \to m \otimes n$ consist only of
zeros and ones, one easily sees that $\alpha^{\dagger} =
\alpha^{-1}$ etc.
\item $Mat(f) \colon \mathcal{M}at(S) \to \mathcal{M}at(R)$ is a morphism in
$\Cat{DSMBLaw}\xspace$, for all semiring morphisms $f \colon S \to R$.\\ We
have already seen that $\mathcal{M}at(f)$ preserves the biproduct and
monoidal structure. As to the dagger,
$$
\mathcal{M}at(f)(h^{\dagger})(i,j) = f(h^{\dagger}(i,j)) = f(h(j,i)^*) = (f(h(j,i)))^* = (\mathcal{M}at(f)(h))^{\dagger}(i,j).
$$
\end{enumerate}
}
For a dagger symmetric monoidal Lawvere theory $\cat{L}$ with dagger
biproducts, it easily follows from the properties of the dagger that
it induces an involution on the semiring
$\mathcal{H}(\cat{L}) = \cat{L}(1,1)$, namely via $s\mapsto s^{\dag}$.
\auxproof{
\begin{enumerate}
\item $* (=\dagger) \colon \cat{L}(1,1) \to \cat{L}(1,1)$ is an
involution.\\ Preservation of the addition:
$$
\begin{array}{rcll}
(a+b)^* &=& (\nabla \mathrel{\circ} (a \oplus b) \mathrel{\circ} \Delta)^{\dagger}\\
&=&
\Delta^{\dagger} \mathrel{\circ} (a \oplus b)^{\dagger} \mathrel{\circ} \nabla^{\dagger}\\
&=&
\nabla \mathrel{\circ} (a^{\dagger} \oplus b^{\dagger}) \mathrel{\circ} \Delta\\
&=&
a^*+b^*
\end{array}
$$
Here we use the fact that in a dagger biproduct category: $\nabla^{\dagger} = \Delta$, $\Delta^{\dagger} = \nabla$ and $(a \oplus b)^{\dagger} = a^{\dagger} \oplus b^{\dagger}$.
Preservation of the multiplication:
$$
(a \cdot b)^* = (b \mathrel{\circ} a)^{\dagger} = a^{\dagger} \mathrel{\circ} b^{\dagger} = b^* \cdot a^* = a^* \cdot b^*
$$
and involutiveness:
$$
a^{**} = a^{\dagger\dagger} = a
$$
\item For $F \colon \cat{L} \to \cat{D}$, $\mathcal{H}(F) \colon
\cat{L}(1,1) \to \cat{D}(1,1)$ preserves the involution.
$$
\mathcal{H}(F)(a^*) = F(a^*) = F(a^{\dagger}) = (F(a))^{\dagger} = F(a)^*.
$$
\end{enumerate}
}
The unit and the counit of the adjunction $\mathcal{M}at \colon \Cat{CSRng}\xspace
\leftrightarrows \Cat{SMBLaw}\xspace$ from Lemma~\ref{CSRng2CatAdjLem} preserve
the involution and the dagger respectively. Hence, also the restricted
functors form an adjunction. \hspace*{\fill}$\QEDbox$
\end{proof}
\auxproof{
\begin{enumerate}
\item $\overline{F} \colon S \to \cat{L}(I,I)$ preserves the involution.
$$
\overline{F}(s^*) = F(\underline{s^*}) = F((\underline{s})^{\dagger}) = (F(\underline{s}))^{\dagger} = (F(\underline{s}))^* = (\overline{F}(s))^*,
$$
where $\underline{s} = 1 \times 1 \xrightarrow{\lam{x}{s}} S$.
\item $\overline{f} \colon \mathcal{M}at(S) \to \cat{L}$ preserves the dagger.\\
Let $h \colon n \to m$ in $\mathcal{M}at(S)$. Then $h^{\dagger} \colon m \to n$ and
$$
\begin{array}{rcll}
\overline{f}(h^{\dagger}) &=& \big[\big\langle f(h^{\dagger}(j,i))\big\rangle_{i\in n}\big]_{j \in m}\\
&=&
\big[\big\langle f((h(i,j))^*)\big\rangle_{i\in n}\big]_{j \in m}&\text{definition of $\dagger$ on $\mathcal{M}at(S)$}\\
&=&
\big[\big\langle (f(h(i,j)))^{\dagger}\big\rangle_{i\in n}\big]_{j \in m} &\text{$\dagger$ is the involution on $\cat{L}(I,I)$}\\
&=&
\big\langle\big[ f(h(i,j))\big]_{i\in n}\big\rangle_{j \in m}^{\dagger} &\text{$\dagger$ interacts with the biproduct structure}\\
&=&
\big[\big\langle f(h(i,j))\big\rangle_{j\in m}\big]_{i \in n}^{\dagger} &\text{property of the biproduct}\\
&=&
(\overline{f}(h))^{\dagger}
\end{array}
$$
\end{enumerate}
}
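For the concrete involutive semiring $S=\mathbb{C}$, the dagger on $\mathcal{M}at(S)$ appearing above is the familiar conjugate transpose of complex matrices. The identities used in these proofs can then be checked numerically; the following is only an illustrative sketch (Python with NumPy), not part of the formal development:

```python
import numpy as np

# For the involutive semiring S = C, the dagger on Mat(S) is the
# conjugate transpose: (h^dag)(i, j) = (h(j, i))^*.
def dagger(h):
    return h.conj().T

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))  # g : 2 -> 3
h = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))  # h : 4 -> 2

# Involutivity: dagger(dagger(h)) = h.
assert np.allclose(dagger(dagger(h)), h)
# Contravariance: (g . h)^dag = h^dag . g^dag.
assert np.allclose(dagger(g @ h), dagger(h) @ dagger(g))
# On scalars (1x1 matrices) the dagger is complex conjugation,
# which preserves addition and multiplication:
a, b = np.array([[1 + 2j]]), np.array([[3 - 1j]])
assert np.allclose(dagger(a + b), dagger(a) + dagger(b))
assert np.allclose(dagger(a @ b), dagger(a) @ dagger(b))  # C is commutative
```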
To complete our last triangle of adjunctions, recall that, for the
Lawvere theory associated with an (involutive commutative additive)
monad $T$, $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T) \cong \mathcal{M}at(\mathcal{E}(T))$, see
Proposition~\ref{KleisliMatLem}. Hence, using the previous two
lemmas, the finitary Kleisli construction restricts to a functor
$\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}} \colon \Cat{IACMnd}\xspace \to \Cat{DSMBLaw}\xspace$. For the other direction we
use the following result.
\begin{lemma}
\label{InvLMCommLem}
The functor $\mathcal{T}\colon\Cat{SMBLaw}\xspace\rightarrow\Cat{ACMnd}\xspace$ from
Lemma~\ref{LMCommLem} restricts to $\Cat{DSMBLaw}\xspace \rightarrow \Cat{IACMnd}\xspace$,
and yields a left adjoint to $\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\colon \Cat{IACMnd}\xspace\rightarrow
\Cat{DSMBLaw}\xspace$.
\end{lemma}
\begin{proof}
To start, for a Lawvere theory $\cat{L}\in\Cat{DSMBLaw}\xspace$ with dagger $\dag$
we have to define an involution $\zeta\colon T_{\cat{L}} \rightarrow
T_{\cat{L}}$. For a set $X$ this involves a map
$$\xymatrix{
\llap{$T_{\cat{L}}(X) =\;$}
\big(\coprod_{i\in\ensuremath{\mathbb{N}}}\cat{L}(1,i)\times X^{i}\big)/\!\sim
\ar[r]^-{\zeta_X} &
\big(\coprod_{i\in\ensuremath{\mathbb{N}}}\cat{L}(1,i)\times X^{i}\big)/\!\sim
\rlap{$\;=T_{\cat{L}}(X)$} \\
[\kappa_{i}(g,v)]\ar@{|->}[r] &
[\kappa_{i}(\langle g_{0}^{\dag}, \ldots, g_{i-1}^{\dag}\rangle, v)],
}$$
\noindent where $g\colon 1\rightarrow i$ is written as $g = \langle
g_{0}, \ldots, g_{i-1}\rangle$ using that $i = 1+\cdots+1$ is not only
a sum, but also a product. Clearly, $\zeta$ is natural, and satisfies
$\zeta\mathrel{\circ}\zeta = \ensuremath{\mathrm{id}}{}$. This $\zeta$ is also a map of monads;
commutation with the multiplication $\mu$ requires commutativity of
composition in the homset $\cat{L}(1,1)$.
The unit of the adjunction $\eta\colon \cat{L} \rightarrow
\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}(T_{\cat{L}}) \cong \mathcal{M}at(T_{\cat{L}}(1))$ commutes with
daggers, since for $f\colon n\rightarrow m$ in \cat{L} we get
$\eta(f)^{\dag} = \eta(f^{\dag})$ via the following argument in
$\mathcal{M}at(T_{\cat{L}}(1))$. For $i<n$ and $j<m$,
$$\begin{array}[b]{rcll}
\eta(f)^{\dag}(i,j)
& = &
\eta(f)(i,j)^{*}
& \mbox{by~\eqref{MatDagEqn}} \\
& = &
\pi_{j}\ensuremath{\mathsf{bc}}\xspace_{m}(\kappa_{m}(f \mathrel{\circ} \kappa_{i}, \ensuremath{\mathrm{id}}_{m}))^{*}
& \mbox{by~\eqref{Kl2MatEqn}} \\
& = &
\kappa_{1}(\pi_{j} \mathrel{\circ} f \mathrel{\circ} \kappa_{i}, \ensuremath{\mathrm{id}}_{1})^{*}
& \mbox{by definition of $\ensuremath{\mathsf{bc}}\xspace$, see~\eqref{LawMndbcEqn}} \\
& = &
\kappa_{1}(\pi_{j} \mathrel{\circ} f \mathrel{\circ} \kappa_{i}, \ensuremath{\mathrm{id}})^{\dag}
& \mbox{since $(-)^{*} = (-)^{\dag}$ on $T_{\cat{L}}(1)$} \\
& = &
\kappa_{1}((\pi_{j} \mathrel{\circ} f \mathrel{\circ} \kappa_{i})^{\dag}, \ensuremath{\mathrm{id}}) \\
& = &
\kappa_{1}(\kappa_{i}^{\dag} \mathrel{\circ} f^{\dag} \mathrel{\circ} \pi_{j}^{\dag}, \ensuremath{\mathrm{id}}) \\
& = &
\kappa_{1}(\pi_{i} \mathrel{\circ} f^{\dag} \mathrel{\circ} \kappa_{j}, \ensuremath{\mathrm{id}}) \\
& = &
\eta(f^{\dag})(i,j).
\end{array}\eqno{\square}$$
\auxproof{
We check commutation with $\eta$ and $\mu$.
$$\begin{array}{rcl}
\lefteqn{\big(\zeta \mathrel{\circ} \eta\big)(x)} \\
& = &
\zeta(\kappa_{1}(\ensuremath{\mathrm{id}}_{1}, x)) \\
& = &
\kappa_{1}((\ensuremath{\mathrm{id}}_{1})^{\dag}, x) \\
& = &
\kappa_{1}(\ensuremath{\mathrm{id}}_{1}, x) \\
& = &
\eta(x) \\
\lefteqn{\big(\mu \mathrel{\circ} T_{\cat{L}}(\zeta) \mathrel{\circ}
\zeta\big)(\kappa_{i}(g, v))} \\
& = &
\big(\mu \mathrel{\circ} T_{\cat{L}}(\zeta)\big)
(\kappa_{i}(\langle (\pi_{a}\mathrel{\circ} g)^{\dag}\rangle_{a<i}, v)) \\
& = &
\mu(\kappa_{i}(\langle (\pi_{a}\mathrel{\circ} g)^{\dag}\rangle_{a<i},
\zeta \mathrel{\circ} v)) \\
& = &
\mu(\kappa_{i}(\langle (\pi_{a}\mathrel{\circ} g)^{\dag}\rangle_{a<i},
\lam{a<i}{\kappa_{j_a}(\langle (\pi_{b} \mathrel{\circ} h_{a,b})^{\dag} \rangle_{b<j_a},
w_a)})) \\
& & \qquad \mbox{if }v(a) = \kappa_{j_a}(h_{a}, w_{a}) \\
& = &
\kappa_{j}((\langle (\pi_{b} \mathrel{\circ} h_{0})^{\dag} \rangle_{b<j_0} + \cdots +
\langle (\pi_{b} \mathrel{\circ} h_{i-1})^{\dag} \rangle_{b<j_{i-1}}) \mathrel{\circ}
\langle (\pi_{a}\mathrel{\circ} g)^{\dag}\rangle_{a<i}, \\
& & \qquad [w_{0}, \ldots, w_{i-1}]), \qquad
\mbox{where }j = j_{0} + \cdots + j_{i-1} \\
& \smash{\stackrel{(*)}{=}} &
\kappa_{j}(\langle (\pi_{b} \mathrel{\circ} \pi_{a} \mathrel{\circ}
(h_{0}+\cdots+h_{i-1})\mathrel{\circ} g)^{\dag}
\rangle_{a<i, b<j_{a}}, [w_{0}, \ldots, w_{i-1}]) \\
& = &
\zeta(\kappa_{j}((h_{0}+\cdots+h_{i-1})\mathrel{\circ} g, [w_{0}, \ldots, w_{i-1}])) \\
& = &
\big(\zeta \mathrel{\circ} \mu)(\kappa_{i}(g, v)).
\end{array}$$
\noindent The marked equation holds by commutativity of composition
in $\cat{L}(1,1)$, see:
$$\begin{array}{rcl}
\lefteqn{\pi_{b} \mathrel{\circ} \pi_{a} \mathrel{\circ}
\langle (\pi_{b} \mathrel{\circ} \pi_{a} \mathrel{\circ}
(h_{0}+\cdots+h_{i-1})\mathrel{\circ} g)^{\dag} \rangle_{a<i, b<j_{a}}} \\
& = &
(\pi_{b} \mathrel{\circ} \pi_{a} \mathrel{\circ} (h_{0}+\cdots+h_{i-1})\mathrel{\circ} g)^{\dag} \\
& = &
(\pi_{b} \mathrel{\circ} h_{a} \mathrel{\circ} \pi_{a} \mathrel{\circ} g)^{\dag} \\
& = &
(\pi_{a} \mathrel{\circ} g)^{\dag} \mathrel{\circ} (\pi_{b} \mathrel{\circ} h_{a})^{\dag} \\
& \smash{\stackrel{\textrm{comm}}{=}} &
(\pi_{b} \mathrel{\circ} h_{a})^{\dag} \mathrel{\circ} (\pi_{a} \mathrel{\circ} g)^{\dag} \\
& = &
\pi_{b} \mathrel{\circ} \langle (\pi_{b} \mathrel{\circ} h_{a})^{\dag} \rangle_{b<j_a} \mathrel{\circ}
\pi_{a} \mathrel{\circ} \langle (\pi_{a}\mathrel{\circ} g)^{\dag}\rangle_{a<i} \\
& = &
\pi_{b} \mathrel{\circ} \pi_{a} \mathrel{\circ}
(\langle (\pi_{b} \mathrel{\circ} h_{0})^{\dag} \rangle_{b<j_0} + \cdots +
\langle (\pi_{b} \mathrel{\circ} h_{i-1})^{\dag} \rangle_{b<j_{i-1}}) \mathrel{\circ}
\langle (\pi_{a}\mathrel{\circ} g)^{\dag}\rangle_{a<i}.
\end{array}$$
}
\end{proof}
In the definition of the involution $\zeta$ on the monad $T_{\cat{L}}$
in this proof we have used that $+$ is a (bi)product in the Lawvere
theory \cat{L}, namely when we decompose the map $g\colon 1\rightarrow
i$ into its components $\pi_{a} \mathrel{\circ} g\colon 1\rightarrow 1$ for
$a<i$. We could have avoided this biproduct structure by first taking
the dagger $g^{\dag} \colon i\rightarrow 1$, and then precomposing
with coprojections $g^{\dag} \mathrel{\circ} \kappa_{a} \colon 1 \rightarrow
1$. Taking daggers of these maps, cotupling, and taking the dagger
of the result then yields the same map. This is relevant if one wishes to consider
involutions/daggers in the context of monoids, where products in the
corresponding Lawvere theories are lacking.
\auxproof{
These two approaches are the same since if we have biproducts:
$$\begin{array}{rcl}
\big[(g^{\dag} \mathrel{\circ} \kappa_{0})^{\dag}, \ldots,
(g^{\dag} \mathrel{\circ} \kappa_{i-1})^{\dag}\big]^{\dag}
& = &
\big[\kappa_{0}^{\dag} \mathrel{\circ} g^{\dag\dag}, \ldots,
\kappa_{i-1}^{\dag} \mathrel{\circ} g^{\dag\dag}\big]^{\dag} \\
& = &
\big[\pi_{0} \mathrel{\circ} g, \ldots, \pi_{i-1} \mathrel{\circ} g\big]^{\dag} \\
& = &
\langle (\pi_{0}\mathrel{\circ} g)^{\dag}, \ldots, (\pi_{i-1}\mathrel{\circ} g)^{\dag} \rangle.
\end{array}$$
}
By combining the previous three lemmas we obtain the triangle of
adjunctions shown in Figure~\ref{ICSRngTriangleFig}. This concludes our
survey of the interrelatedness of scalars, monads and categories.
\begin{figure}
$$\xymatrix@C+.5pc{
& & \Cat{ICSRng}\xspace\ar@/_2ex/ [ddll]_{\cal M}
\ar@/_2ex/ [ddrr]_(0.3){\mathcal{M}at \cong \mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}\mathcal{M}\;\;} \\
& \dashv & & \dashv & \\
\Cat{IACMnd}\xspace\ar @/_2ex/[rrrr]_{\mathcal{K}{\kern-.2ex}\ell_\ensuremath{\mathbb{N}}}
\ar@/_2ex/ [uurr]_(0.6){\;{\mathcal{E}} \cong \mathcal{H}\mathcal{K}{\kern-.2ex}\ell_{\ensuremath{\mathbb{N}}}} & & \bot & &
\Cat{DSMBLaw}\xspace\ar@/_2ex/ [uull]_{\mathcal{H}}\ar @/_2ex/[llll]_{\mathcal{T}}
}$$
\caption{Triangle of adjunctions starting from involutive commutative
semi\-rings, with involutive commutative additive monads, and dagger
symmetric monoidal Lawvere theories with dagger biproducts.}
\label{ICSRngTriangleFig}
\end{figure}
\bibliographystyle{plain}
\section{Introduction}
After the breakthroughs of
2005~\cite{Pretorius:2005gq,Campanelli:2005dd,Baker:2005vv},
state-of-the-art numerical relativity (NR) codes can nowadays
routinely evolve (spinning) coalescing binary black hole
systems with comparable masses and extract the gravitational wave (GW)
signal with high accuracy~\cite{Hannam:2007ik,Hannam:2007wf,
Sperhake:2008ga,Campanelli:2008nk,Scheel:2008rj,
Chu:2009md,Mosta:2009rr,Pollney:2009yz}.
However, despite these considerable improvements,
the numerical computation of coalescing black hole binaries where
the mass ratio is considerably different from 1:1 is
still challenging. To date, the mass ratio 10:1 (without spin)
remains the highest that it has been possible to evolve numerically
through the transition from inspiral to plunge and merger with
reasonable accuracy~\cite{Gonzalez:2008bi,Lousto:2010tb}.
In recent years, work at the interface between analytical
and numerical relativity, notably using the effective-one-body (EOB)
resummed analytical formalism~\cite{Buonanno:1998gg,Buonanno:2000ef,
Damour:2000we,Damour:2001tu,Buonanno:2005xu,Damour:2009ic},
has demonstrated the possibility of using NR results
to develop accurate analytical models of dynamics and
waveforms of coalescing black-hole binaries~\cite{Buonanno:2007pf,
Damour:2007yf,Damour:2007vq,Damour:2008te,Boyle:2008ge,
Buonanno:2009qa,Damour:2009kr,Pan:2009wj,Barausse:2009xi}.
By contrast, when the mass ratio is large, approximation methods
based on black-hole perturbation theory are expected to yield
accurate results, thereby enlarging our knowledge of black-hole
binaries from a complementary perspective.
In addition, when the larger black-hole mass is in the range
$10^5M_\odot$-$10^7M_\odot$, the GWs emitted by the radiative
inspiral of the small object fall within the sensitivity band
of the proposed space-based detector LISA~\cite{lisa1,lisa2},
so that an accurate modeling of these extreme-mass-ratio inspirals
(EMRIs) is one of the goals of current GW research.
The first calculation of the complete gravitational waveform emitted
during the transition from inspiral to merger in the
extreme-mass-ratio limit was performed in
Refs.~\cite{Nagar:2006xv,Damour:2007xr}, thanks to the combination of
2.5PN Pad\'e resummed radiation reaction force~\cite{Damour:1997ub}
with Regge-Wheeler-Zerilli perturbation
theory~\cite{Regge:1957td,Zerilli:1970se,Nagar:2005ea,Martel:2005ir}.
This test-mass laboratory was then used to understand,
element by element, the physics that enters the dynamics
and waveforms during the transition from inspiral to plunge
(followed by merger and ringdown), providing important inputs
for EOB-based analytical models. In particular, it helped to:
(i) discriminate between two expressions of the resummed radiation
reaction force;
(ii) quantify the accuracy of the resummed multipolar
waveform;
(iii) quantify the effect of non-quasi-circular corrections
(both to waveform and radiation reaction);
(iv) qualitatively understand
the process of generation of quasi-normal modes (QNMs); and
(v) improve the matching procedure of the ``insplunge'' waveform to a
``ringdown'' waveform with (several) QNMs.
In the same spirit, the multipolar expansion of the
gravitational wave luminosity of a test particle on
circular orbits in a Schwarzschild
background~\cite{Cutler:1993vq,Yunes:2008tw,Pani:2010em}
was helpful to devise an improved resummation
procedure~\cite{Damour:2007yf,Damour:2008gu}
of the PN (Taylor-expanded) multipolar
waveform~\cite{Kidder:2007rt,Blanchet:2008je}. Such a resummation
procedure is one of the cardinal elements of what we think is
presently the best EOB analytical
model~\cite{Damour:2009kr,Pan:2009wj}.
Similarly, Ref.~\cite{Yunes:2009ef} compared ``calibrated''
EOB-resummed waveforms~\cite{Boyle:2008ge} with Teukolsky-based
perturbative waveforms, and confirmed that the EOB framework
is well suited to model EMRIs for LISA.
In addition, recent numerical achievements in the calculation of the
conservative gravitational self-force (GSF) for circular orbits
in a Schwarzschild background~\cite{Detweiler:2008ft,Barack:2009ey,Barack:2010tm}
prompted the interplay between post-Newtonian (PN) and
GSF efforts~\cite{Blanchet:2009sd,Blanchet:2010zd}, and
EOB and GSF efforts~\cite{Damour:2009sm}.
In particular, the information coming from GSF data helped
to break the degeneracy (among some EOB parameters) which was left
after using comparable-mass NR data to constrain the EOB
formalism~\cite{Damour:2009sm}. (See also Ref.~\cite{Barausse:2009xi}
for a different way to incorporate GSF results in EOB).
In this paper we present a revisited computation of the
GW emission from the transition from inspiral to plunge in the
test-mass limit. We improve the previous calculation of
Nagar et al.~\cite{Nagar:2006xv} in two aspects:
one numerical and the other analytical. The first is that
we use a more accurate (4th-order) numerical
algorithm to solve the Regge-Wheeler-Zerilli equations numerically;
this allows us to capture the higher order multipolar information
(up to $\ell=8$) more accurately than in~\cite{Nagar:2006xv}.
The second aspect is that we have replaced the
2.5PN Pad\'e resummed radiation reaction force
of~\cite{Nagar:2006xv} with the 5PN resummed one that relies
on the results of Ref.~\cite{Damour:2008gu}.
The aim of this paper is then two-fold.
On the one hand, our new test-mass perturbative framework allows us to
describe in full, and with high accuracy, the properties of
the gravitational radiation emitted during the transition
inspiral-plunge-merger and ringdown, {\it without making the adiabatic
approximation} which is the hallmark of most existing approaches to
the GW emission by EMRI systems~\cite{Glampedakis:2002cb,Hughes:2005qb}.
We compute the multipolar waveform up to $\ell=8$,
discuss the relative weight
of each multipole during the nonadiabatic plunge phase,
and describe the structure of the ringdown.
In addition, from the multipolar waveform we also compute
the total recoil, or kick, imparted to the system by the
wave emission, thereby complementing NR results~\cite{Gonzalez:2008bi}.
On the other hand, we can use our upgraded test-mass laboratory
to provide inputs for the EOB formalism, notably for completing
the EOB multipolar waveform during the late-inspiral, plunge and
merger. As a first step in this direction, we show that the
analytically resummed radiation reaction
introduced in~\cite{Damour:2008gu,Damour:2009kr}
gives an {\it excellent fractional agreement} ($\sim 10^{-3}$)
with the angular momentum flux computed \`a la Regge-Wheeler-Zerilli
even during the plunge phase.
This paper is organized as follows. In Sec.~II we give a summary
of the formalism employed. In Sec.~III we describe the multipolar
structure of the waveforms; details on the energy and angular
momentum emitted during the plunge-merger-ringdown transition
are presented as well as an analysis of the ringdown phase.
Section~IV presents the computation of the final kick,
emphasizing the importance of high multipoles.
The following Sec.~V is devoted to some consistency checks: on the
one hand, we discuss the aforementioned agreement between the
mechanical angular momentum loss and GW energy flux during the
plunge; on the other hand, we investigate the influence
of EOB ``self-force'' terms (both in the conservative and in the
nonconservative parts of the dynamics)
on the waveforms. We present a summary of our findings in Sec.~VI.
In Appendix~A we supply some technical details related to our
numerical framework, while in Appendix~B we list some useful numbers.
We use geometric units with $c=G=1$.
\section{Analytic framework}
\label{sec:dynamics}
\subsection{Relative dynamics}
The relative dynamics of the system is modeled by specializing the
EOB dynamics to the small-mass limit. The formalism that we use
here is the specialization to the test-mass limit of the improved
EOB formalism introduced in Ref.~\cite{Damour:2009kr} that crucially
relies on the ``improved resummation'' procedure of the multipolar
waveform of Ref.~\cite{Damour:2008gu}.
Let us recall that the EOB approach to the general relativistic
two-body dynamics is a {\it nonperturbatively resummed} analytic
technique which has been developed in
Refs.~\cite{Buonanno:1998gg,Buonanno:2000ef,Damour:2000we,Damour:2001tu,Buonanno:2005xu}.
This technique uses, as basic input, the results of PN theory,
such as: (i) PN-expanded equations of motion for two pointlike bodies,
(ii) PN-expanded radiative multipole moments, and (iii) PN-expanded
energy and angular momentum fluxes at infinity. For the moment, the
most accurate such results are the 3PN conservative
dynamics~\cite{Damour:2001bu,Blanchet:2003gy}, the 3.5PN energy
flux~\cite{Blanchet:2001aw,Blanchet:2004bb,Blanchet:2004ek} for the $\nu\neq0$
case, and 5.5PN~\cite{Tanaka:1997dj} accuracy for the $\nu=0$ case.
Then the EOB approach ``packages'' this PN-expanded information in special
{\it resummed} forms which extend the validity of the PN results
beyond the expected weak-field-slow-velocity regime into (part of)
the strong-field-fast-motion regime.
In the EOB approach the relative dynamics of a binary
system of masses $m_1$ and $m_2$ is described
by a Hamiltonian $H_{\rm EOB}(M,\mu)$ and a radiation reaction force
${\cal F}_{\rm EOB}(M,\mu)$, where $M\equiv m_1+m_2$ and $\mu\equiv m_1m_2/M$.
In the general comparable-mass case $H_{\rm EOB}$
has the structure $H_{\rm EOB}(M,\mu)=M\sqrt{1+2\nu(\hat{H}_\nu - 1)}$
where $\nu\equiv \mu/M\equiv m_1m_2/(m_1+m_2)^2$ is the symmetric mass ratio.
In the test mass limit that we are considering, $\nu\ll 1$, we can expand
$H_{\rm EOB}$ in powers of $\nu$. After subtracting inessential constants
we get a Hamiltonian per unit ($\mu$) mass
$\hat{H}=\lim_{\nu \to 0}(H-{\rm const.})/\mu=\lim_{\nu\to 0}\hat{H}_\nu$.
As in Refs.~\cite{Nagar:2006xv,Damour:2007xr}, we use the tortoise
radial coordinate $r_*=r + 2M\log[r/(2M)-1]$ and, correspondingly, replace the radial
momentum $P_R$ by the conjugate momentum $P_{R_*}$ of $R_*$, so that
the specific Hamiltonian has the form
\begin{equation}
\hat{H} = \sqrt{ A\left( 1+ \frac{p_{\varphi}^2}{\hat{r}^2} \right)+p_{r_*}^2} \ .
\end{equation}
Here we have introduced dimensionless variables ${\hat{r}}\equiv R/M$, ${\hat{r}}_*\equiv
R_*/M$, $p_{r_*}\equiv P_{R_*}/\mu$, $p_\varphi \equiv P_\varphi/(\mu M)$
and $A=1-2/{\hat{r}}$. Hamilton's canonical equations for $({\hat{r}},{\hat{r}}_*,p_{r_*},p_{\varphi})$
in the equatorial plane ($\theta=\pi/2$) yield
\begin{align}
\label{eob:1}
\dot{{\hat{r}}}_* &= \dfrac{p_{r_*}}{\hat{H}} \ , \\
\label{eob:2}
\dot{{\hat{r}}} &= \dfrac{A}{\hat{H}}p_{r_*} \equiv v_r \ , \\
\label{eob:3}
\dot{\varphi} &= \dfrac{A}{\hat{H}}\dfrac{p_\varphi}{{\hat{r}}^2} \equiv \Omega \ ,\\
\label{eob:4}
\dot{p}_{r_*} &= -\dfrac{{\hat{r}}-2}{{\hat{r}}^3\hat{H}}\left[p_{\varphi}^2\left(\dfrac{3}{{\hat{r}}^2}-\dfrac{1}{{\hat{r}}}\right)+1\right] \ , \\
\label{eob:5}
\dot{p}_\varphi &= \hat{\cal F}_\varphi \ .
\end{align}
Note that the quantity $\Omega$ is dimensionless and represents the orbital frequency
in units of $1/M$.
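For concreteness, the right-hand side of Eqs.~\eqref{eob:1}--\eqref{eob:5} in the conservative limit (radiation reaction switched off) can be sketched as follows; this is only an illustrative Python snippet, not the production code used here. On a circular geodesic, $p_{r_*}=0$ with $p_\varphi=\hat{r}/\sqrt{\hat{r}-3}$ (a standard Schwarzschild result), it reproduces $\dot{p}_{r_*}=0$ and the Kepler-like law $\Omega=\hat{r}^{-3/2}$:

```python
import numpy as np

def eob_rhs(state):
    """Conservative right-hand side of Eqs. (1)-(5): a test particle in
    Schwarzschild, dimensionless variables, radiation reaction set to 0."""
    r, phi, p_rstar, p_phi = state
    A = 1.0 - 2.0 / r
    H = np.sqrt(A * (1.0 + p_phi**2 / r**2) + p_rstar**2)
    r_dot = A * p_rstar / H                        # Eq. (2)
    Omega = A * p_phi / (H * r**2)                 # Eq. (3), orbital frequency
    p_rstar_dot = -(r - 2.0) / (r**3 * H) * (
        p_phi**2 * (3.0 / r**2 - 1.0 / r) + 1.0)   # Eq. (4)
    return np.array([r_dot, Omega, p_rstar_dot, 0.0])

# Circular orbit at r = 7: p_rstar = 0, p_phi = r / sqrt(r - 3) = 3.5.
rhs = eob_rhs([7.0, 0.0, 0.0, 7.0 / np.sqrt(4.0)])
# rhs[0] = rhs[2] = 0 (no radial motion), rhs[1] = 7**-1.5 = Omega.
```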
In these equations the extra term $\hat{\cal F}_{\varphi}$
[of order $O(\nu)$] represents the non conservative part of
the dynamics, namely the radiation reaction force.
Following~\cite{Damour:2006tr,Nagar:2006xv,Damour:2007xr}, we use the
following expression:
\be
\label{eq:rr}
\hat{{\cal F}}_\varphi \equiv -\dfrac{32}{5}\nu \Omega^5 {\hat{r}}^4 \hat{f}_{\rm DIN}(v_\varphi),
\end{equation}
where $v_\varphi = \hat{r}\Omega$ is the azimuthal velocity and
$\hat{f}_{\rm DIN}=F^{\ell_{\rm max}}/F_{\rm Newt}$ denotes the (Newton-normalized)
energy flux up to multipolar order $\ell_{\rm max}$ (in the $\nu=0$ limit)
resummed according to the ``improved resummation'' technique
of Ref.~\cite{Damour:2008gu}. This resummation procedure is based on a
particular multiplicative decomposition of the multipolar gravitational
waveform. The energy flux is written as
\be
F^{\ell_{\rm max}}=\dfrac{1}{8\pi}\sum_{\ell =2}^{\ell_{\rm max}} \sum_{m=1}^{\ell} (m\Omega)^2|r h_{\ell m}|^2,
\end{equation}
where $h_{\ell m}$ is the factorized waveform of~\cite{Damour:2008gu},
\be
h_{\ell m}=h_{\ell m}^{(N,\epsilon)}\hat{S}_{\rm eff}^{(\epsilon)}T_{\ell m}e^{{\rm i} \delta_{\ell m}}\rho^\ell_{{\ell m}},
\end{equation}
where $h_{\ell m}^{(N,\epsilon)}$ represents the Newtonian contribution
given by Eq.~(4) of~\cite{Damour:2008gu}, $\epsilon=0$ (or $1$) for
$\ell+m$ even (odd), $\hat{S}^{(\epsilon)}_{\rm eff}$ is the effective
``source'', Eqs.~(15-16) of~\cite{Damour:2008gu}; $T_{\ell m}$ is
the ``tail factor'' that resums an infinite number of ``leading logarithms''
due to tail effects, Eq.~(19) of~\cite{Damour:2008gu}; $\delta_{\ell m}$
is a residual phase correction, Eqs.~(20-28) of~\cite{Damour:2008gu};
and $\rho_{\ell m}$ is the residual modulus
correction, Eqs.~(C1-C35) in~\cite{Damour:2008gu}.
In our setup we truncate the sum over $\ell$ at $\ell_{\rm max}=8$.
We refer the reader to Fig.~1 (b) of~\cite{Damour:2008gu}
to appreciate the ability of the new resummation
procedure to reproduce the actual flux (computed numerically) for
the sequence of circular orbits in Schwarzschild\footnote{We mention that, although
Ref.~\cite{Damour:2008gu} also proposes to further (Pad\'e) resum
the residual amplitude corrections $\rho_{\ell m}$
(and, in particular, the dominant one, $\rho_{22}$) to
improve the agreement with the ``exact'' data, we prefer
not to include any of these sophistications here.
This is motivated by the fact that,
along the sequence of circular orbits, all the different
choices are practically equivalent up to (and sometimes below)
the adiabatic last stable orbit (LSO) at $r=6M$ (see in this respect their Fig.~5).
In practice our $\rho_{22}$ actually corresponds to the Taylor-expanded
version (at 5PN order) of the residual amplitude correction,
denoted $T_5[\rho_{22}]$ in~\cite{Damour:2008gu}.}.
\subsection{Gravitational wave generation}
\label{sec:rwz}
The computation of the gravitational waves
generated by the relative dynamics follows the
same lines as Refs.~\cite{Nagar:2006xv,Damour:2007xr},
and relies on the numerical solution, in the time
domain, of the Regge-Wheeler-Zerilli equations for
metric perturbations of the Schwarzschild black hole
with a point-particle source.
Once the dynamics from Eqs.~\eqref{eob:1}-\eqref{eob:5}
is computed, one needs to solve numerically (for each multipole
$(\ell,m)$ of even (e) or odd (o) parity)
a pair of decoupled partial differential
equations
\begin{equation}
\label{eq:rwz}
\partial_t^2\Psi^{(\rm e/o)}_{\ell m}-\partial_{r_*}^2\Psi^{(\rm e/o)}_{\ell m} + V^{(\rm
e/o)}_{\ell}\Psi^{(\rm e/o)}_{\ell m} = S^{(\rm e/o)}_{\ell m} \
\end{equation}
with source terms $S^{(\rm e/o)}_{\ell m}$ linked to the dynamics of
the binary. Following~\cite{Nagar:2006xv}, the sources are written
in the functional form
\begin{align}
\label{source:standard}
S^{(\rm e/o)}_{\ell m} & = G^{(\rm e/o)}_{\ell m}(r,t)\delta(r_*-R_*(t)) \nonumber \\
& + F^{(\rm e/o)}_{\ell m}(r,t)\partial_{r_*}\delta(r_*-R_*(t)) \ ,
\end{align}
with $r$-dependent [rather than $R(t)$-dependent] coefficients
$G(r)$ and $F(r)$. The explicit expression of the sources is
given in Eqs.~(20-21) of~\cite{Nagar:2006xv}, to which we
refer the reader for further technical details.
We mention, however, that in our approach the distributional
$\delta$-function is approximated by a narrow
Gaussian of finite width $\sigma\ll M$. In Ref.~\cite{Nagar:2006xv}
it was already pointed out that, if $\sigma$ is sufficiently
small and the resolution is sufficiently high (so that the
Gaussian can be cleanly resolved) this approximation is
competitive with other approaches that employ a mathematically
more rigorous treatment of the
$\delta$-function~\cite{Lousto:1997wf,Martel:2001yf,Sopuerta:2005gz}
(see in this respect Table~1 and Fig.~2 of Ref.~\cite{Nagar:2006xv}).
That analysis motivates us to use the same representation
of the $\delta$-function also in this paper, but together
with an improved numerical algorithm to solve the wave equations.
In fact, the solution of Eqs.~\eqref{eq:rwz}
is now provided via the method of lines by means of a 4th-order Runge-Kutta
algorithm with 4th-order finite differences used to approximate the space
derivatives. This yields better accuracy in the waveforms (using resolutions
comparable to those of Ref.~\cite{Nagar:2006xv}), and allows us to better
resolve the higher multipoles. More details about the numerical implementation,
convergence properties, accuracy, and comparison with published results
are given in Appendix~\ref{sec:numerical}.
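The scheme just described (method of lines with 4th-order centered finite differences in space and a classical 4th-order Runge-Kutta integrator in time) can be illustrated on a toy problem. The Python sketch below evolves the homogeneous wave equation $\partial_t^2\Psi=\partial_{r_*}^2\Psi$ on a periodic grid, i.e. with $V=S=0$ for simplicity; it shows only the algorithmic skeleton, not the actual code of Appendix~A:

```python
import numpy as np

def d2(psi, h):
    """4th-order centered finite-difference second derivative,
    on a periodic grid with spacing h."""
    return (-np.roll(psi, 2) + 16*np.roll(psi, 1) - 30*psi
            + 16*np.roll(psi, -1) - np.roll(psi, -2)) / (12.0*h**2)

def evolve(psi, pi, h, dt, n_steps, V=0.0):
    """Method of lines for psi_tt = psi_xx - V*psi (source omitted),
    integrated with the classical 4th-order Runge-Kutta scheme."""
    def rhs(u):
        p, q = u                      # q = d psi / dt
        return np.array([q, d2(p, h) - V*p])
    u = np.array([psi, pi])
    for _ in range(n_steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5*dt*k1)
        k3 = rhs(u + 0.5*dt*k2)
        k4 = rhs(u + dt*k3)
        u = u + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
    return u[0], u[1]

# Standing wave psi(x,0) = sin(x), psi_t(x,0) = 0 has the exact
# solution psi(x,t) = sin(x)*cos(t); measure the numerical error.
N, L = 200, 2.0*np.pi
h = L/N
x = np.arange(N)*h
dt, n = 0.5*h, 64
psi, pi = evolve(np.sin(x), np.zeros(N), h, dt, n)
err = np.max(np.abs(psi - np.sin(x)*np.cos(n*dt)))
```

With these resolutions the error is dominated by the $O(h^4)$ spatial truncation and stays far below $10^{-6}$, which is the qualitative behavior one expects from the 4th-order convergence discussed in Appendix~A.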
From the numerically calculated master functions $\Psi^{(\rm e/o)}_{\ell m}$,
one can then obtain, in the limit $r\to\infty$, the $h_+$ and $h_\times$
gravitational-wave polarization amplitudes
\begin{align}
\label{eq:hplus_cross}
h_+-{\rm i}h_{\times} = \dfrac{1}{r}\sum_{\ell\geq 2,m}\sqrt{\frac{(\ell+2)!}{(\ell-2)!}}
\left(\Psi^{(\rm e)}_{\ell m}
+{\rm i}\Psi^{(\rm o)}_{\ell m}\right)
\;_{-2}Y^{\ell m} \ ,
\end{align}
where $\;_{-2}Y^{\ell m}\equiv\,_{-2}Y^{\ell m}(\theta,\varphi)$ are
the $s=2$ spin-weighted spherical harmonics~\cite{goldberg67}.
From this expression, all the interesting second-order quantities
follow. The emitted power,
\be
\label{eq:dEdt}
\dot{E} = \frac{1}{16\pi}\sum_{\ell\geq 2,m}\frac{(\ell+2)!}{(\ell-2)!}
\left(\left|\dot{\Psi}^{(\rm o)}_{\ell m}\right|^2 +
\left|\dot{\Psi}^{(\rm e)}_{\ell m}\right|^2\right)\;,
\end{equation}
the angular momentum flux
\be
\label{eq:dJdt}
\dot{J} = \frac{1}{32\pi}\sum_{\ell\geq 2,m}\bigg\{{\rm i}m\frac{(\ell+2)!}{(\ell -2)!}
\left[\dot{\Psi}^{(\rm e)}_{\ell m}\Psi^{(\rm e)*}_{\ell m}
+\dot{\Psi}_{\ell m}^{({\rm o})}\Psi^{(\rm o)*}_{\ell m}\right] + c.c.\bigg\} \ ,
\end{equation}
and the linear momentum flux~\cite{Thorne:1980ru,Pollney:2007ss,Ruiz:2007yx}
\begin{align}
\label{eq:dPdt}
{\cal F}^{\bf P}_x + {\rm i}{\cal F}^{\bf P}_y &= \dfrac{1}{8\pi}\sum_{\ell\geq 2,m}
\bigg[{\rm i} a_{{\ell m}} \dot{\Psi}_{{\ell m}}^{({\rm e})}\dot{\Psi}^{({\rm o})*}_{\ell,m+1}\nonumber\\
&+b_{{\ell m}}\left(\dot{\Psi}^{(\rm e)}_{{\ell m}}\dot{\Psi}^{(\rm
e)*}_{\ell+1,m+1}+\dot{\Psi}^{(\rm o)}_{{\ell m}}\dot{\Psi}^{(\rm o)*}_{\ell
+1,m+1}\right)\bigg] \ ,
\end{align}
with
\begin{align}
a_{{\ell m}} & = 2(\ell-1)(\ell+2)\sqrt{(\ell-m)(\ell+m+1)}\\
b_{\ell m} & =\dfrac{(\ell +3)!}{(\ell +1)(\ell -2)!}\sqrt{\dfrac{(\ell + m
+1)(\ell+m+2)}{(2\ell+1)(2\ell +3)}} .
\end{align}
\section{Relative dynamics and waveforms}
\label{res:waves}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.35\textwidth]{fig1.eps}
\caption{ \label{fig:trj} Transition from quasicircular inspiral
orbit to plunge. The initial position is $r_0=7M$ and the mass ratio is $\nu=10^{-3}$.}
\end{center}
\end{figure}
Let us now consider the dynamics and waveforms obtained within our
new setup. Evidently, at the {\it qualitative} level our results are
analogous to those of Refs.~\cite{Nagar:2006xv,Damour:2007xr}.
By contrast, at the {\it quantitative} level, dynamics and waveforms
are slightly different due to the new, more accurate, radiation reaction force.
The particle is initially at $r=7M$. The dynamics is initiated with the
so called post-circular initial data for ($p_\varphi$,$p_{r}$)
introduced in Ref.~\cite{Buonanno:2000ef} and specialized to the $\mu\to 0$
limit (see Eqs.~(9)-(13) of Ref.~\cite{Nagar:2006xv}). Because of the
smallness of the value of $\mu$ we are using, this approximation
is sufficient to guarantee that the initial eccentricity is
negligible. To better model the extreme-mass-ratio
regime, we considered three values of the mass ratio $\nu$,
namely $\nu=\{10^{-2},\, 10^{-3},\,10^{-4}\}$.
The values of $\nu$ are chosen so that the particle passes
through a long (when $\nu\leq 10^{-3}$) quasicircular adiabatic
inspiral before entering the nonadiabatic plunge phase.
Fig.~\ref{fig:trj} displays the relative trajectory for
$\nu=10^{-3}$. The system executes about 40 orbits
before crossing the LSO at $r=6M$ and plunging
into the black hole.
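In the $\nu\to 0$ limit, the leading (circular-geodesic) part of these initial data is the standard Schwarzschild relation $p_\varphi=\hat{r}/\sqrt{\hat{r}-3}$, whose minimum at $\hat{r}=6$ marks the LSO. A two-line check (Python; the $O(\nu)$ post-circular correction of Eqs.~(9)-(13) of Ref.~\cite{Nagar:2006xv} is deliberately not included):

```python
import numpy as np

def p_phi_circular(r):
    """Dimensionless angular momentum p_phi = P_phi/(mu*M) of a circular
    geodesic at r = R/M in Schwarzschild (standard result; the leading,
    nu -> 0, term of the post-circular initial data)."""
    return r / np.sqrt(r - 3.0)

print(p_phi_circular(7.0))   # initial radius r0 = 7M -> 3.5
print(p_phi_circular(6.0))   # LSO: minimum of p_phi, 2*sqrt(3) ~ 3.4641
```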
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{fig2.eps}
\caption{\label{fig:full_wave}Complete $\ell=m=2$ gravitational (Zerilli)
waveform corresponding to the dynamics depicted in Fig.~\ref{fig:trj}.
The waveform is extracted at $r_*^{\rm obs}/M=1000$.}
\end{center}
\end{figure*}
The main multipolar contribution to the gravitational signal is clearly
the $\ell=m=2$ one. The real part of the corresponding
waveform is displayed in Fig.~\ref{fig:full_wave}. It is
extracted at $r_*^{\rm obs}=1000M$ and is shown
versus the observer's retarded time $u=(t^{\rm obs}-r_*^{\rm obs})/M$.
Note how the amplitude of the long wavetrain emitted during the
adiabatic quasicircular inspiral grows very slowly for about
$4000M$, until the transition from inspiral to plunge
around the crossing of the adiabatic LSO frequency.
In the following, however, we focus on the higher-order
multipolar contributions to the waveform, as they are
particularly relevant in our test-mass setup. The computation
of these multipoles and their inclusion in the analysis is
one of the new results of this paper
\footnote{We note that calculations up to $\ell=4$ were already performed
in Refs.~\cite{Nagar:2006xv,Damour:2007xr}, but
no higher-order multipolar waveforms were either shown or discussed
in detail. The present calculations rely strongly on the newly
developed 4th-order code. An explicit comparison between the two
codes is shown in Appendix~\ref{sec:numerical}.
}.
Figure~\ref{fig:module_multipoles} summarizes the information
about the multipolar waveforms up to $\ell=8$.
The left panels show the moduli (normalized by the mass ratio $\nu$),
while the right panels show the corresponding instantaneous gravitational
wave frequencies $M\omega_{\ell m}$.
We show, for each value of $\ell$, the dominant
(even-parity) ones, i.e. those with $m=\ell$, together with some
subdominant (odd-parity) ones. The comparison between the moduli
highlights how the amplitude of higher modes, that is almost
negligible during the adiabatic inspiral, can be magnified of about
factor two (see the $\ell=2$, $m=1$ case) or three
(see the $m=\ell=6$ case) during the nonadiabatic
plunge phase. This fact is expected to have some relevance in
those computations that are dominated by the nonadiabatic
plunge phase, like the computation of the recoil velocity
imparted to the center of mass of the system due to the
linear momentum carried away by GWs~\cite{Damour:2006tr}.
As we will see in Sec.~\ref{sec:recoil}, high-order multipoles
are, in fact, needed to obtain an accurate result.
An analysis of the relative importance of the different multipoles
based on energy considerations is the subject of
Sec.~\ref{sec:qgeoplunge}.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig3a.eps}\qquad
\includegraphics[width=0.45\textwidth]{fig3b.eps}\\
\vspace{5 mm}
\includegraphics[width=0.45\textwidth]{fig3c.eps}\qquad
\includegraphics[width=0.45\textwidth]{fig3d.eps}\\
\caption{\label{fig:module_multipoles}Multipolar structure of the waveform.
The left panels exhibit the moduli; the right panels the instantaneous
gravitational wave frequencies for some representative multipoles.
Note the oscillation pattern during ringdown (especially in the $\ell=2$,
$m=1$ modulus and frequency) due to the interference between positive
and negative frequency QNMs. The waveforms refer to the $\nu=10^{-3}$ mass ratio.}
\end{center}
\end{figure*}
As for the instantaneous GW frequency, the right panels
of Fig.~\ref{fig:module_multipoles} show the same kind of
behavior for each multipole: $M\omega_{\ell m}$ is approximately
equal to $m\Omega$ during the inspiral, then grows abruptly during
the nonadiabatic plunge phase until it saturates at the ringdown frequency
(indicated by dashed lines in the plot). As already pointed out
in Refs.~\cite{Nagar:2006xv,Damour:2007xr}, the oscillation pattern
that is clearly visible for some multipoles is due to the simultaneous
(but asymmetric) excitation of the positive and negative frequency
QNMs of the black hole. We shall give details on this phenomenon in
Sec.~\ref{sec:fitringdown}.
\subsection{Quasiuniversal plunge}
\label{sec:qgeoplunge}
In this section we discuss in quantitative terms the relative contribution
of each multipole during the plunge, merger and ringdown phase.
The analysis is based on the energy and angular momentum computed from the
emitted GW. While these quantities represent a ``synthesis'' of the
information we need, their computation and interpretation have some
subtle points that are discussed below.
For an (adiabatic) sequence of circular orbits, this information
was originally obtained by Cutler et al.~\cite{Cutler:1993vq};
for the radial plunge of a particle initially at rest at infinity,
the classical work of Davis, Ruffini, Press and Price~\cite{Davis:1971gg}
found that about $90\%$ of the total energy is quadrupole ($\ell=2$)
radiation, and about $8\%$ is octupole ($\ell=3$) radiation.
Concerning the transition from quasicircular inspiral to plunge,
Ref.~\cite{Nagar:2006xv} performed a (preliminary) calculation
of the total energy and angular momentum losses during a ``plunge''
phase (that was defined by the condition $r<5.9865M$, with $\nu=0.01$)
followed by merger and ringdown, computing all the multipolar
contributions up to $\ell=4$ (see Table~2 in~\cite{Nagar:2006xv}).
We will follow up and improve the calculation of Ref.~\cite{Nagar:2006xv}.
Let us first point out some conceptual difficulties.
Any computation of the losses during the transition from
inspiral to plunge in our setup will depend both on the
value of $\nu$ and on the initial time from which one starts the
integration of the fluxes (for instance, on the time at which one
defines the beginning of the ``plunge'' phase).
It follows that, if robust and meaningful results are desired, the
calculation has to be focused on the part of the waveforms
that is quasiuniversal (i.e., with
negligible dependence on $\nu$). As was pointed
out in~\cite{Nagar:2006xv}, the quasiuniversal
behavior reached in the $\nu\to 0$ limit is linked to the
quasigeodesic character of the plunge motion, which
approaches the geodesic which starts from the LSO in the
infinite past with zero radial velocity.
In this respect, let us recall that, as shown in
Ref.~\cite{Buonanno:2000ef}, the transition from the adiabatic
inspiral to the nonadiabatic plunge is {\it not sharp}, but rather
{\it blurred}, namely it occurs in a radial domain around the LSO
which scales with $\nu$ as $r-6M\sim \alpha M\nu^{2/5}$,
with the radial velocity scaling as $v_r\sim -\beta\nu^{3/5}$.
In practical terms, this means that the quasiuniversal, quasigeodesic
plunge does not really start at $r_0 = 6M$, but at
about $r_0/M\sim 6-\alpha\nu^{2/5}$.
In Ref.~\cite{Buonanno:2000ef}, using a 2.5PN Pad\'e resummed
radiation reaction, the coefficients $\alpha$ and $\beta$ were determined
to be $\alpha_{\rm 2.5PN}=1.89$ and $\beta_{\rm 2.5PN}=-0.072$.
However, since our setup is based on the 5PN resummed radiation
reaction force, we do not expect those numbers to remain unchanged,
so that they do not represent for us a reliable estimate to extract
the part of the waveforms we are interested in.
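For orientation, the blurring-zone estimate $r_0/M\sim 6-\alpha\nu^{2/5}$ is easy to evaluate; a minimal sketch with the 2.5PN value $\alpha=1.89$ (indicative only, since, as just discussed, this coefficient is not expected to carry over unchanged to our 5PN-resummed setup):

```python
# Onset radius of the quasigeodesic plunge, r0/M ~ 6 - alpha*nu^(2/5),
# evaluated with the 2.5PN coefficient alpha = 1.89 of Buonanno & Damour.
ALPHA_25PN = 1.89

def r0_over_M(nu, alpha=ALPHA_25PN):
    """Blurring-zone estimate of the plunge-start radius, in units of M."""
    return 6.0 - alpha * nu**0.4

radii = {nu: r0_over_M(nu) for nu in (1e-2, 1e-3, 1e-4)}
# nu = 1e-3 gives r0 ~ 5.88M, the 2.5PN value quoted later in the text.
```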
Taking a pragmatic approach, we can determine this quasiuniversal
region by contrasting our simulations at different $\nu$, so as to
see when the dependence on $\nu$ is
sufficiently ``small'' (say, at the $1\%$ level in the energy
and angular momentum losses; see below).
Figure~\ref{fig:conv_with_mu} displays the ``convergence''
of the $\ell=m=2$ modulus (upper panel) and frequency (lower panel)
for the three values of $\nu$. For convenience, the waveforms
have been time-shifted so that the maxima of the waveform
modulus (located at $u-u_{\rm max}=0$ in the figure)
coincide. The plot clearly shows that the late-time part
of the waveform converges to some
``universal'' pattern that progressively approximates
the ``exact'' $\nu=0$ case. Note that, at the visual level,
amplitudes and frequencies for $\nu=\{10^{-3},\,10^{-4}\}$
look barely distinguishable during the late part of
the plunge, entailing a very weak dependence on the
properties of radiation reaction. From this analysis we
can assume a quasiuniversal and quasigeodesic plunge
starting at about $u-u_{\rm max}=-50M$ (vertical dashed line),
which corresponds to $M\omega_{22}\simeq 0.167$, i.e., about
$1.23\times (2\Omega_{\rm LSO})$ (for reference, the horizontal
line in the lower panel of the figure indicates the
$2\Omega_{\rm LSO}$ frequency)~\footnote{Note that
the radial separation that corresponds to $u-u_{\rm max}=-50M$
is $r\simeq 5.2M$ [more precisely, $r\simeq 5.199M$ ($5.228M$)
for $\nu=10^{-4}$ ($\nu=10^{-3}$)], i.e., we have a $13\%$
difference with the value $5.88M$ obtained using the former
EOB analysis with $\alpha=1.89$.}.
We integrate the multipolar energy and angular momentum
fluxes from $u-u_{\rm max}=-50M$ onwards and sum over all the multipoles
up to $\ell=8$. The outcome of this computation is listed
in Table~\ref{tab:table1} for $\nu=\{10^{-3},\, 10^{-4}\}$.
Note that the agreement of these numbers at the level of $1\%$
is a good indication of the quasigeodesic character of the
dynamics behind the part of the waveform that we have selected.
The numerical information of Table~\ref{tab:table1} is
completed by Tables~\ref{tab_app1}-\ref{tab_app2}
in Appendix~\ref{sec:losses}, where we list the values and
the relative weight of each partial multipolar contribution.
Coming thus to the main conclusion of this analysis, it turns out
that the $\ell=m=2$ multipole contributes about $58\%$ ($62\%$)
of the total energy (angular momentum),
the $\ell=m=3$ about $20\%$ ($20\%$), the $\ell=m=4$ about
$8\%$ ($7.6\%$) and the $\ell=m=5$ about $3.5\%$ ($3.3\%$).
As for the odd-parity multipoles, the dominant
one, $\ell=2$, $m=1$, contributes $4.3\%$ of the total energy
and $2.3\%$ of the total angular momentum.
We again refer the reader to Appendix~\ref{sec:losses} for the
complete quantitative information.
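The percent-level $\nu$-consistency quoted above can be checked directly from the entries of Table~\ref{tab:table1}; a two-line check (using the $\nu=10^{-4}$ run as reference):

```python
# Relative nu-dependence of the plunge+merger+ringdown losses of
# Table (tab:table1), using the nu = 1e-4 run as reference.
dE = {1e-3: 0.47688, 1e-4: 0.47097}   # M*DeltaE/mu^2
dJ = {1e-3: 3.48918, 1e-4: 3.44271}   # DeltaJ/mu^2

rel_dE = abs(dE[1e-3] - dE[1e-4]) / dE[1e-4]   # ~1.3e-2
rel_dJ = abs(dJ[1e-3] - dJ[1e-4]) / dJ[1e-4]   # ~1.3e-2
```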
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig4.eps}
\caption{\label{fig:conv_with_mu} ``Convergence'' of the waveform
as $\nu\to 0$. Retarded times have been shifted so that the
zero coincides with the maximum of the waveform modulus
$|\Psi^{(\rm e)}_{22}|$ for each value of $\nu$. The horizontal
dashed line indicates the adiabatic LSO frequency. The vertical dashed
line conventionally identifies the beginning of an approximately
quasiuniversal and quasigeodesic plunge phase.}
\end{center}
\end{figure}
\subsection{Ringdown}
\label{sec:fitringdown}
Let us now focus on the analysis of the waveform during the ringdown only.
Our main aim here is to extract quantitative information from the
oscillations that are apparent in the gravitational wave frequency (and
modulus) during ringdown (see Fig.~\ref{fig:module_multipoles}).
As explained in Sec.~IIIB of
Ref.~\cite{Damour:2007xr}, the physical interpretation
of this phenomenon is clear: it is due to an asymmetric
excitation of the positive and negative frequency QNMs of the black hole,
triggered by the ``sign'' of the particle motion (clockwise or counterclockwise).
The modes that have the {\it same sign} as $m\Omega$ are the dominant
ones, while those with opposite sign are less excited (smaller amplitude).
Since QNMs are basically excited by a resonance mechanism, their
strength (amplitude) for a given multipole $(\ell,m)$ depends
on their ``distance'' to the critical (real) exciting frequency
$m\Omega_{\rm max}$ of the source, where $\Omega_{\rm max}$ indicates
the maximum of the orbital frequency.
In our setup, the particle is inspiralling counterclockwise
(i.e., $\Omega >0$), therefore the positive frequency QNMs
are more excited than the negative frequency ones.
The amount of (relative) excitation will depend on $m$.
This QNM ``interference'' phenomenon was noted and explained already
in Refs.~\cite{Nagar:2006xv,Damour:2007xr}, although no quantitative
information was actually extracted from the numerical data. We
perform this quantitative analysis here.
\begin{table}[t]
\caption{\label{tab:table1}Total energy and angular momentum emitted
during the quasiuniversal, quasigeodesic plunge phase, the merger and
ringdown (defined by the condition
$M\omega_{22}\gtrsim 0.167$; see Fig.~\ref{fig:conv_with_mu}).}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{lcc}
$\nu$ & $M\Delta E/\mu^2$ & $\Delta J/\mu^2$ \\
\hline \hline
$10^{-3}$ & 0.47688 & 3.48918 \\
$10^{-4}$ & 0.47097 & 3.44271 \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
The waveform during the ringdown has the structure
\be
\label{eq:qnm}
\Psi_{{\ell m}}^{(\rm e/o)}=\sum_n C_{\ell m n}^+ e^{-\sigma^+_{\ell n}t} + \sum_n
C_{\ell m n}^- e^{-\sigma^-_{\ell n}t},
\end{equation}
where we use the notation of Refs.~\cite{Damour:2006tr,Damour:2007xr},
and denote by
$\sigma_{\ell n}^{\pm}=\alpha_{\ell n}\pm {\rm i}\omega_{\ell n}$
the QNM complex frequencies and by $C^{\pm}_{\ell m n}$ the corresponding complex amplitudes.
For each value of $\ell$, $n$ indicates the order of the mode,
$\alpha_{\ell n}$ its inverse damping time and $\omega_{\ell n}$ its
frequency. For example,
defining $a_{\ell m n}e^{{\rm i} {\vartheta}_{\ell m n}}\equiv C^{-}_{\ell m n}/C^+_{\ell m n}$,
in the presence of only one QNM (e.g., the fundamental one, $n=0$)
the instantaneous frequency computed from Eq.~(\ref{eq:qnm}) reads
\bea
\label{eq:qnm:interference}
\omega_{{\ell m}}^{(\rm e/o)}&=&
-\Im\left(\frac{\dot{\Psi}_{\ell m}^{(\rm e/o)}}{\Psi_{\ell m}^{(\rm e/o)}}\right) \\
&=& \frac{\left(1-a_{\ell m 0}^2\right)\omega_{\ell 0}}
{1+a_{\ell m 0}^2+2a_{\ell m 0}\cos\left(2 \omega_{\ell 0} t +
\vartheta_{\ell m 0}\right)}\nonumber .
\eea
This simple formula illustrates that, if the two modes are equally excited
($a_{\ell m 0}=1$), there is destructive interference and the
instantaneous frequency vanishes; on the contrary, if one mode (say the
positive one) is more excited than the other, the instantaneous frequency
oscillates around a constant value and tends to
$\omega_{\ell 0}$ as $C_{\ell m 0}^-\rightarrow 0$.
In general, one can use a more sophisticated version of
Eq.~\eqref{eq:qnm:interference}, which includes various overtones for
a given multipolar order, as a template to fit the instantaneous GW frequency
and to measure the various $a_{\ell m n}$ and $\vartheta_{\ell m n}$
during the ringdown.
For simplicity, we concentrate here only on the measurement of $a_{\ell m 0}$,
and we use directly Eq.~\eqref{eq:qnm:interference}. To perform such
a fit\footnote{For
this particular investigation we use the $\nu=10^{-2}$ data. The reason
for this choice is that, in our grid setup, the waveforms are practically
causally disconnected from the boundaries and we have a longer and cleaner
ringdown than in the other two cases.}
(with a least-squares method) we consider only the part of the
ringdown that is dominated by the fundamental (least-damped) QNM, i.e.,
the ``plateau of oscillations'' approximately starting at $u/M=4340$
in the right panels of Fig.~\ref{fig:module_multipoles}.
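To illustrate the fitting template, one can synthesize a two-mode signal of the form of Eq.~\eqref{eq:qnm} and check that its instantaneous frequency follows Eq.~\eqref{eq:qnm:interference}; a minimal sketch (the amplitude ratio $a$ and phase $\theta$ below are illustrative values, not the fitted ones):

```python
import numpy as np

# Two-mode ringdown Psi = C+ exp(-sigma+ t) + C- exp(-sigma- t), with
# sigma(+/-) = alpha +/- i*omega (fundamental l=2 Schwarzschild values).
alpha, omega = 0.08896, 0.37367
a, theta = 7.27e-2, 0.4                    # illustrative C-/C+ ratio and phase
Cp = 1.0
Cm = Cp * a * np.exp(1j * theta)

t = np.linspace(0.0, 60.0, 200001)
psi = Cp * np.exp(-(alpha + 1j*omega)*t) + Cm * np.exp(-(alpha - 1j*omega)*t)

# instantaneous frequency omega_inst = -Im(psidot/psi), by finite differences
omega_inst = -np.imag(np.gradient(psi, t) / psi)

# closed form of the interference formula
omega_an = (1 - a**2)*omega / (1 + a**2 + 2*a*np.cos(2*omega*t + theta))
```

The oscillation extrema, $\omega_{\ell 0}(1\pm a)/(1\mp a)$, are what a least-squares fit of the measured frequency constrains.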
\begin{table}[t]
\caption{\label{tab:rngdwn:fit} Fit of QNM interference with Eq.~(\ref{eq:qnm:interference})
for a representative sample of multipoles.
The numbers refer to $\nu=10^{-2}$. The $M\omega_{\ell 0}$
column lists the values of the fundamental QNM frequencies gathered from
the literature~\cite{Chandrasekhar:1975zza,Chandrasekhar:1985kt,
Leaver:1985ax,Berti:2005ys} (see also Ref.~\cite{Berti:2009kk} for a recent
review and for highly accurate computations).
By contrast, the primed values are obtained from our numerical
data by fitting the ringdown frequency for \emph{both} $a_{\ell m 0}$ and
$M\omega_{\ell0}$. Note the good consistency between the two methods.}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{llcccc}
$\ell$ & $m$ & $a_{\ell m 0}$ & $a'_{\ell m 0}$ & $M\omega_{\ell0}$ & $M\omega'_{\ell0}$\\
\hline \hline
2 & 1 & 7.2672$\times10^{-2}$ & 7.2678$\times10^{-2}$ & 0.37367 & 0.37369 \\
2 & 2 & 4.8476$\times10^{-3}$ & 4.848$\times10^{-3}$ & 0.37367 & 0.37361 \\
3 & 1 & 9.3403$\times10^{-2}$ & 9.3403$\times10^{-2}$ & 0.59944 & 0.59944 \\
3 & 2 & 8.008$\times10^{-3}$ & $8.011\times10^{-3}$ & 0.59944 & 0.59936 \\
3 & 3 & 5.5471$\times10^{-4}$ & 5.5477$\times10^{-4}$ & 0.59944 & 0.59943 \\
4 & 1 & 9.1560$\times10^{-2}$ & 9.1559$\times10^{-2}$ & 0.80917 & 0.80918 \\
4 & 2 & 9.1433$\times10^{-3}$ & 9.1435$\times10^{-3}$ & 0.80917 & 0.80917 \\
4 & 3 & 9.0473$\times10^{-4}$ & 9.0475$\times10^{-4}$ & 0.80917 & 0.80917 \\
4 & 4 & 6.382$\times10^{-5}$ & 6.379$\times10^{-5}$ & 0.80917 & 0.80918 \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
The fundamental frequency $\omega_{\ell 0}$ ($n=0$) has been used as a given input,
and we fit for the amplitude ratio $a_{\ell m 0}$ and relative
phase $\vartheta_{\ell m 0}$. The outcome of the fit for some multipoles
is exhibited in Table~\ref{tab:rngdwn:fit}. Note that in the primed
columns we also list the values that one obtains by fitting
{\it also} for the frequency $\omega_{\ell 0}'$. We obtain perfectly
consistent results.
Note that for the multipole $\ell=8$ we were obliged to compute
the frequency only in this way, since we could not find this number in the results
of~\cite{Berti:2009kk}: we obtain the value $M\omega_{8 0}=1.60619$.
The table quantifies that the strongest interference pattern, which always
occurs for $m=1$ (for any $\ell$), corresponds to a relative contribution of
the negative frequency mode of about $9\%$.
This trend holds for all values of $\ell$. For example,
we have $a_{810}=9.54\times 10^{-2}$ and $a_{710}=9.48\times 10^{-2}$.
Note finally that the presence of the negative mode for $\ell=2$, $m=1$
also shows up in the corresponding modulus $|\Psi_{21}|/\nu$, with the
characteristic oscillating pattern superposed on the
exponential decay (see top-left panel of Fig.~\ref{fig:module_multipoles}).
[See also Ref.~\cite{Hadar:2009ip} for an analytical treatment
of the ringdown excitation amplitudes during the plunge].
\section{Gravitational recoil}
\label{sec:recoil}
Let us now come to the computation of the gravitational
recoil, or ``kick'', imparted to the system due to the
anisotropic emission of gravitational radiation.
The calculation of these kicks in general relativity
has been carried out in a variety of ways which, before 2005,
relied mainly on analytical and semianalytical techniques.
In particular, let us mention that, after the pioneering
calculation of Fitchett~\cite{Fitchett:1983} and
Fitchett and Detweiler~\cite{Fitchett:1984qn}, earlier estimates
included a perturbative calculation~\cite{Favata:2004wz},
a close-limit calculation~\cite{Sopuerta:2006wj}
and a post-Newtonian calculation valid during
the inspiral phase only~\cite{Blanchet:2005rj}.
This latter calculation has been recently improved
by bringing together post-Newtonian theory and the
close limit approximation, yielding close agreement
with purely NR results~\cite{LeTiec:2009yg}.
In addition, a first attempt to compute the final
kick within the EOB approach~\cite{Damour:2006tr} yielded
an {\it analytical understanding} (before any numerical results
were available) of the qualitative behavior of the kick
velocity (notably the so-called ``antikick'' phase),
as driven by the intrinsically nonadiabatic character
of the plunge phase. This preliminary EOB
calculation was then improved in~\cite{Schnittman:2007sn},
which also included inputs from NR simulations.
On the numerical side, after the pioneering computation of
Baker et al.~\cite{Baker:2006vn}, there has been a
plethora of computations of the kick from spinning black-hole
binaries, focusing in particular on the so-called superkick
configurations. By contrast, for the nonspinning case,
Refs.~\cite{Gonzalez:2006md,Gonzalez:2008bi} represent
to date the largest span of mass ratios for which
the final kick velocity is known (see also Ref.~\cite{Pollney:2007ss}
for the nonprecessing, equal-mass spinning case).
In addition, the use of semianalytical models prompted a deeper
understanding of the structure of the gravitational recoil
as computed in NR simulations~\cite{Schnittman:2007ij}.
However, despite all these numerical efforts, to date
there are no ``numerical'' computations of the final recoil
velocity in the $\nu\to 0$ limit: the only estimates
rely on fits to NR data of the form
\be
\label{eq:kick}
v^{\rm kick}=A\nu^2\sqrt{1-4\nu}(1+B\nu )\ ,
\end{equation}
with the coefficient $A$ giving the extrapolated value in
the $\nu\to 0$ limit~\cite{Gonzalez:2006md,Gonzalez:2008bi}.
The aim of this section is to provide a value of $A$ that
comes from an actual (numerical) computation within
perturbation theory.
It is convenient to treat the kick velocity
vector imparted to the system by GW emission as a complex
quantity, i.e.
$v\equiv v_x+{\rm i} v_y$.
By integrating Eq.~\eqref{eq:dPdt} in time and by changing the
sign, the (complex) velocity accumulated by the system up to
a certain time $t$ is given by
\be
\label{eq:kick_one}
v \equiv v_x + {\rm i}v_y= -\dfrac{1}{M}\int_{-\infty}^t\left({\cal F}_x^{\bf P} + {\rm i}{\cal F}_y^{\bf P}\right) dt' .
\end{equation}
Since in practical situations one always deals with a
finite timeseries for the linear momentum flux, it is not
possible to begin the integration at $t=-\infty$, but rather
at a finite initial time $t_0$. This amounts to the need
of fixing a (vectorial) integration constant $v_0$
that accounts for the velocity that the system has acquired in
evolving from $t=-\infty$ to $t=t_0$, i.e.
\be
\label{eq:kick_two}
v = v_0 -\dfrac{1}{M}\int_{t_0}^t\left({\cal F}_x^{\bf P} + {\rm i}{\cal F}_y^{\bf P}\right) dt'.
\end{equation}
As emphasized in Ref.~\cite{Pollney:2007ss}, the proper
inclusion of $v_0$ is crucial to get the correct (monotonic)
qualitative and quantitative behavior of the time evolution
of the magnitude $|v|$ of the recoil velocity.
Typically, not only can the final value
of $|v|$ be wrong by about $10\%$, but one can also
have spurious oscillations in $|v|$ during the inspiral phase
if $v_0$ is not properly determined or simply set to zero.
See in this respect Sec.~IVA of Ref.~\cite{Pollney:2007ss}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig5.eps}
\caption{\label{fig:hodograph}Parametric plot of
$v_x$ versus $v_y$ (for $\nu=10^{-3}$) obtained from
Eq.~\eqref{eq:kick_two} with
$v_0/\nu^2 = (1.1549-{\rm i}\,1.0644)\times 10^{-3}$.
The analogous plot with $v_0=0$ is shown in the inset.}
\end{center}
\end{figure}
As in Ref.~\cite{Pollney:2007ss}, the numerical determination
of $v_0$ can be done with the help of the ``hodograph'', i.e.,
a parametric plot of the instantaneous velocity vector in the
complex velocity plane $(v_x,v_y)$. This hodograph is displayed
in Fig.~\ref{fig:hodograph} for $\nu=10^{-3}$.
Let us focus first on the inset, which exhibits the outcome
of the time integration with $v_0=0$.
Note that the center of the inspiral (corresponding to the
velocity accumulated during the quasiadiabatic inspiral phase)
is displaced with respect to the correct value $v=(0,0)$,
which corresponds to $v_0=0$ at $t=-\infty$.
The components $v_x^0$ and $v_y^0$ are determined
as the translational ``shifts'' that one needs to add
(in both $v_x$ and $v_y$) so that the ``center'' of this
inspiral is approximately zero. For $\nu=10^{-3}$, this
operation yields
$v_x^0/\nu^2=1.1549\times 10^{-3}$ and $v_y^0/\nu^2=-1.0644\times
10^{-3}$; the result is displayed in the main panel of Fig.~\ref{fig:hodograph}.
This judicious choice of the integration constant is such that
the modulus $|v|=\sqrt{v_x^2 + v_y^2}$ of the accumulated recoil
velocity grows essentially monotonically in time and no spurious
oscillations are present during the inspiral phase. This is
emphasized by Fig.~\ref{fig:recoil}. In the figure, we show,
as a solid line, the modulus of the {\it total} accumulated
kick velocity versus observer's retarded time (as before, waveforms
are extracted at $r_*^{\rm obs}/M=1000$). This ``global'' computation
is done including in the sum of Eq.~\eqref{eq:dPdt} all the partial
multipolar contributions up to $\ell=7$ (which actually means
considering also the interference terms between
$\ell=7$ and $\ell=8$ modes). To guide the eye, we added
a vertical dashed line locating the maximum of
$|\Psi^{(\rm e)}_{22}|$, that approximately corresponds to
the dynamical time when the particle crosses the light-ring.
Note the typical shape of $|v|$, with a clean local
maximum and the so-called ``antikick'' behavior, which is
qualitatively identical to the corresponding curves
computed (for different mass ratios) in NR simulations
(see for example Fig.~1 of Ref.~\cite{Schnittman:2007ij}
for the 2:1 mass ratio case).
In addition to the total recoil magnitude computed up to $\ell=7$,
we also display on the same plot the ``partial'' contributions, i.e.
computations of $|v|$ where we truncate the sum over
$\ell$ in Eq.~\eqref{eq:dPdt}
at a given value $\ell^*<7$. In the figure we show
(as various types of nonsolid lines) the evolution of the
recoil for $2\leq\ell^*\leq 6$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig6.eps}
\caption{\label{fig:recoil}Time-evolution of the magnitude of the recoil
velocity. The figure shows the monotonic ``multipolar'' convergence
to the final result. The plot refers to mass ratio $\nu=10^{-3}$.}
\end{center}
\end{figure}
Note that each partial-$\ell$ contribution to the linear momentum
flux has been integrated in time (with the
related choice of integration constants) before performing the
vectorial sum to obtain the total $v$. The fact that each
curve nicely grows monotonically without spurious oscillations
during the late-inspiral phase is a convincing indication of the
robustness of the procedure we used to determine $v_0^{\ell^*}$
by means of hodographs\footnote{Note that the procedure can actually
be automated by determining the ``barycenter'' of the adiabatic
inspiral in the $(v_x,v_y)$ plane corresponding to the early evolution.}.
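The automated variant mentioned in the footnote can be sketched as follows; the data below are synthetic and purely illustrative (not the paper's flux), only the recentering logic reflects the procedure described in the text:

```python
import numpy as np

# Automated hodograph recentering: after integrating the flux with v0 = 0,
# estimate v0 as minus the barycenter of the early adiabatic loop in the
# complex (vx, vy) plane.
def estimate_v0(v_raw, early_mask):
    return -np.mean(v_raw[early_mask])

# synthetic check (illustrative data, NOT the paper's flux):
t = np.linspace(0.0, 4000.0, 40001)
v0_true = 1.1549e-3 - 1.0644e-3j             # shift values quoted in the text
spiral = 2e-4*(t/t[-1])*np.exp(1j*0.07*t)    # slowly growing inspiral loop
v_raw = spiral - v0_true                     # what a v0 = 0 integration yields
v0_est = estimate_v0(v_raw, t < 2000.0)
v = v0_est + v_raw                           # recentered accumulated velocity
```

The estimate works because the oscillatory part of the loop averages nearly to zero over many orbits, leaving only the missing constant.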
\begin{table}[t]
\caption{\label{tab:recoil}Magnitude of the final and maximum kick
velocities for the three values of $\nu$ considered. The last row
lists the values extrapolated to $\nu=0$ from $\nu=\{10^{-3},10^{-4}\}$ data.}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{lcc}
$\nu$ & $|v^{\rm end}|/\nu^2$ & $|v^{\rm max}|/\nu^2$ \\
\hline \hline
$10^{-2}$ & 0.043234 & 0.050547 \\
$10^{-3}$ & 0.044401 & 0.052058 \\
$10^{-4}$ & 0.044587 & 0.052298 \\
\hline
{\bf 0} & {\bf 0.0446} & {\bf 0.0523} \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
The figure highlights at a visual level the importance of the
high multipoles in achieving an accurate result. To give some
meaningful numbers, if we consider $\ell^*=6$ we obtain
$|v_6^{\rm fin}|/\nu^2=0.0437$ (i.e. a $1.7\%$ difference),
while $|v_5^{\rm fin}|/\nu^2= 0.0424$ (a $4.5\%$ difference).
The information conveyed by Fig.~\ref{fig:recoil} is completed by
Table~\ref{tab:recoil}, where we list the final value of the
modulus of the recoil velocity of the center of mass $|v^{\rm end}|/\nu^2$
(as well as the corresponding maximum value $|v^{\rm max}|/\nu^2$)
obtained in our setup for the three values of $\nu$ that we have
considered. The computation of the kick for the other values of $\nu$
is procedurally identical and thus we show only the final numbers.
The good agreement between the three numbers is consistent with
the interpretation that the recoil is almost completely determined
by the nonadiabatic plunge phase of the system (as emphasized
in Ref.~\cite{Damour:2006tr}), and thus it is almost unaffected
by the details of the inspiral phase. Because of the late-plunge
consistency between the waveforms shown above
for $\nu=\{10^{-3},\,10^{-4}\}$, we have decided to extrapolate the
corresponding values of the kick to $\nu=0$. The resulting
numbers are listed (in bold) in the last row of Table~\ref{tab:recoil}.
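For definiteness, the bold entries are consistent with a simple linear-in-$\nu$ extrapolation of the two smallest-$\nu$ rows of Table~\ref{tab:recoil} (the linear form is our assumption here):

```python
import numpy as np

# Linear-in-nu extrapolation of the Table (tab:recoil) kick magnitudes
# from nu = {1e-3, 1e-4} down to nu = 0 (assumed form v(nu) = v0 + c*nu).
nu = np.array([1e-3, 1e-4])
v_end = np.array([0.044401, 0.044587])   # |v_end|/nu^2
v_max = np.array([0.052058, 0.052298])   # |v_max|/nu^2

v_end_0 = np.polyfit(nu, v_end, 1)[1]    # intercept, ~0.0446
v_max_0 = np.polyfit(nu, v_max, 1)[1]    # intercept, ~0.0523
```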
\begin{table}[t]
\caption{\label{tab:other_recoil}Recoil velocities in the test-mass limit
obtained by (extrapolating) different finite-mass results.
Our ``best'' value is shown in bold. See text for explanations.}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{lcc}
Reference & $|v^{\rm end}|/\nu^2$ & \\
\hline \hline
Gonz\'alez {\it et al.}~\cite{Gonzalez:2006md} & 0.04 & \\
\hline
Damour and Gopakumar~\cite{Damour:2006tr} & [0.010,\,0.035] &\\
Schnittman and Buonanno~\cite{Schnittman:2007sn} & [0.018,\,0.041] &\\
Sopuerta {\it et al.}~\cite{Sopuerta:2006wj} & [0.023,\,0.046] & \\
Le Tiec, Blanchet and Will~\cite{LeTiec:2009yg} & 0.032 & \\
\hline
This work & {\bf 0.0446} & \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
In Table~\ref{tab:other_recoil} we compare the value of the
final recoil with that (extrapolated to the test-mass limit)
obtained from NR simulations~\cite{Gonzalez:2006md} and with
semianalytical or seminumerical predictions,
like the EOB~\cite{Damour:2006tr,Schnittman:2007sn} and
the close-limit approximation~\cite{Sopuerta:2006wj}
(which all give a range, with rather large error bars),
and the recent calculation of Le Tiec et al.~\cite{LeTiec:2009yg}
based on a hybrid post-Newtonian--close-limit approach.
We conclude this section by discussing in more detail the comparison
of our result with the NR-extrapolated value. Since the NR-extrapolated
value that we list in Table~\ref{tab:other_recoil} was obtained using
{\it only} the data of Ref.~\cite{Gonzalez:2006md} (without the 10:1
mass ratio simulation of~\cite{Gonzalez:2008bi}), we have decided to
redo the fit with all the NR data together (kindly provided to
us by the authors).
To improve the sensitivity of the fit when $\nu$ gets small, we first
factor out the $\nu^2$ dependence in the data (i.e., we consider
$v^{\rm NR}/\nu^2$, by continuity with the test-mass result).
We then fit the data with the function
\begin{equation}
\label{kickfit}
\tilde{f}(\nu) = A \sqrt{1-4\nu}\left( 1+ B \nu \right) \ .
\end{equation}
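Note that, since $\tilde{f}(\nu)$ is linear in the pair $(A,\,AB)$, the fit reduces to an ordinary least-squares solve; a minimal sketch (with synthetic data used only to check the recovery, not the NR data of Refs.~\cite{Gonzalez:2006md,Gonzalez:2008bi}):

```python
import numpy as np

# Fit v/nu^2 = A*sqrt(1-4*nu)*(1+B*nu). With s = sqrt(1-4*nu) the model is
# A*s + (A*B)*(nu*s): linear in (A, A*B), i.e. a plain least-squares solve.
def fit_kick(nu, v_over_nu2):
    s = np.sqrt(1.0 - 4.0*nu)
    X = np.column_stack([s, nu*s])
    coef, *_ = np.linalg.lstsq(X, v_over_nu2, rcond=None)
    A = coef[0]
    return A, coef[1]/A

# synthetic recovery check (NOT the NR data of Gonzalez et al.)
nu_data = np.array([0.10, 0.14, 0.18, 0.20, 0.22])
v_data = 0.044*np.sqrt(1.0 - 4.0*nu_data)*(1.0 - 1.3012*nu_data)
A, B = fit_kick(nu_data, v_data)
```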
Table~\ref{tab:kickfit} displays the results of the fit obtained using:
the NR data of Ref.~\cite{Gonzalez:2006md} (consistent with the published result),
first row; the joint information of
Refs.~\cite{Gonzalez:2006md,Gonzalez:2008bi},
second row; and the NR data of~\cite{Gonzalez:2006md,Gonzalez:2008bi} together
with the test-mass result calculated in this paper. Note that the NR fits are
perfectly consistent with the test-mass value: in particular, our extrapolated
value $|v^{\rm end}|/\nu^2=0.0446$ agrees to within $1.5\%$ with the
value of $A$ obtained from the fit to the most complete NR information
(in bold in Table~\ref{tab:kickfit}).
The information in the table is completed by Fig.~\ref{fig:fick}, which
displays $\tilde{f}(\nu)$ (as a dash-dot line) obtained from the complete
NR data of Refs.~\cite{Gonzalez:2006md,Gonzalez:2008bi}. Note the good
visual agreement between this extrapolation and the test-mass point as $\nu\to 0$.
For contrast, we also show on the plot (as a dashed line) the outcome of the
fit with the simple Newtonian-like formula ($B=0$)~\cite{Fitchett:1983}.
We also tested the effect of adding a quadratic correction
[i.e. a term $C\nu^2$ in the polynomial multiplying the square root
in $\tilde{f}(\nu)$], but we found that it does not really improve
the description of the data.
\begin{table}[t]
\caption{\label{tab:kickfit}Fit coefficients for the final magnitude of the kick
velocity from NR simulations as a function of $\nu$, Eq.~\eqref{kickfit}.
See text for discussion.}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{lcc}
Data & $A$ & $B$ \\
\hline \hline
Gonz\'alez et al.~\cite{Gonzalez:2006md} & 0.04070 & -0.9883 \\
Gonz\'alez et al.~\cite{Gonzalez:2006md,Gonzalez:2008bi} & {\bf 0.04396} & {\bf -1.3012} \\
Gonz\'alez et al.~\cite{Gonzalez:2006md,Gonzalez:2008bi}+ This work & 0.04446 & -1.3482\\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig7.eps}
\caption{\label{fig:fick} Results of the fit of
NR data of Refs.~\cite{Gonzalez:2006md,Gonzalez:2008bi} using
Eq.~\eqref{kickfit}. Note the good agreement between the NR-extrapolation
and our test-mass result. The bottom panel contains the relative
difference with the data. This plot corresponds to the second row
of Table~\ref{tab:kickfit}, without the test-mass point. See text for details.}
\end{center}
\end{figure}
\section{Consistency checks}
\label{sec:checks}
In this section, we finally come to the discussion of some internal
consistency checks of our approach. These consist of:
(i) the verification of the consistency between the mechanical angular
momentum loss (as driven by our analytical, resummed radiation
reaction force) and the actual gravitational wave angular momentum flux
computed from the waves (as a follow-up of a similar analysis
done in Ref.~\cite{Damour:2007xr});
(ii) a brief analysis of the influence on the (quadrupolar) waveform
of the higher-order $\nu$-dependent EOB corrections entering
the conservative and nonconservative part of the relative dynamics.
\subsection{Angular momentum loss}
One of the results of Ref.~\cite{Damour:2007xr} concerned
the comparison between the mechanical angular momentum loss
provided by the resummed radiation reaction $\hat{{\cal F}}_{\varphi}$
and the angular momentum flux computed from the multipolar waveform.
At that time, the main focus of Ref.~\cite{Damour:2007xr} was on
the use of the ``exact'' instantaneous gravitational wave
angular momentum flux $\dot{J}$ [see Eq.~\eqref{eq:dJdt}]
to discriminate between two different expressions of the
2.5PN Pad\'e resummed angular momentum flux
${\cal F}_\varphi^{\rm 2.5PN}$ that are degenerate during
the adiabatic early inspiral.
In addition, in that setup it was also possible to:
(i) check consistency between $\dot{J}$ and $-{\cal F}_\varphi$
during the inspiral and early plunge; (ii) argue that non-quasicircular
corrections in the radiation reaction are needed to produce
a good agreement between the ``analytical'' and the exact
angular momentum fluxes also during the plunge, almost up
to merger; and (iii) show that the ``exact'' flux is practically
insensitive to (any kind of) NQC corrections.
Since we are now using a new radiation reaction force with
respect to Ref.~\cite{Damour:2007xr}, it is interesting to
redo the comparison between the Regge-Wheeler-Zerilli ``exact''
flux and the ``analytical'' mechanical loss computed along the
relative dynamics.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig8.eps}
\caption{\label{fig:dJdt_vs_time}Comparison between two angular
momentum losses: the GW flux (solid line) computed \`a la Regge-Wheeler-Zerilli
including up to $\ell=8$ radiation multipoles, and the mechanical angular momentum
loss $-{\cal F}_\varphi$ (dashed line). The two vertical lines correspond (from left to right)
to the particle crossing respectively the adiabatic LSO location ($r=6M$) and
the light-ring location ($r=3M$). The plot refers to $\nu=10^{-3}$.}
\end{center}
\end{figure}
The result of this comparison is displayed in Fig.~\ref{fig:dJdt_vs_time},
which is the analog of (part of) Fig.~2 of Ref.~\cite{Damour:2007xr}.
We show in the figure the mechanical angular momentum loss
(with reversed sign) $-\hat{{\cal F}}_\varphi/\nu$ versus the
mechanical time $t/M$ together
with the instantaneous angular momentum flux $\dot{J}/\nu$
(computed from $\Psi^{(\rm e/o)}_{{\ell m}}$ including all
contributions up to $\ell=8$) versus observer's retarded time.
Note that we did not introduce here a possible shift between the
mechanical time $t$ and the observer's retarded time $u$. Since such
a shift is certainly expected to exist, our results should be
viewed as giving a lower bound on the agreement
between ${\cal F}_\varphi$ and $\dot{J}$.
Note the very good visual agreement, not only above the LSO
(vertical dashed line) but also {\it below} the LSO,
and actually almost during the {\it entire} plunge phase.
In fact, the agreement between the two fluxes remains
visually very good almost up to the merger
(approximately identified by the maximum of the
$\ell=m=2$ waveform, see dash-dot line
in the inset)\footnote{Following the line of reasoning of~\cite{Damour:2007xr},
the result displayed in the figure tells us that most of the non-quasicircular
corrections to the waveforms (and energy flux) are already automatically taken
into account in our resummed flux, due to its intrinsic
dependence on $p_{r_*}$ through the Hamiltonian, so that one
might only need to add, pragmatically, corrections that are very
small in magnitude. These issues deserve more careful
investigation in forthcoming studies.}.
We inspect this agreement at a more quantitative level in
Fig.~\ref{fig:DeltaJ}, where we plot the (relative) difference between
$\dot{J}/\nu$ and $-\hat{{\cal F}}_\varphi/\nu$ versus (twice)
the orbital frequency. The inset shows the relative difference from
initial frequency to $2\Omega_{\rm max}$, where $\Omega_{\rm max}$
is the maximum of the orbital frequency.
The main panel is a close-up centered around the LSO frequency.
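The comparison underlying Fig.~\ref{fig:DeltaJ} amounts to interpolating the two losses onto a common grid in $2\Omega$ and forming their fractional difference. The following is a minimal sketch with synthetic power-law fluxes; the grids, the functional form, and the artificial $10^{-3}$ offset are assumptions for illustration only, not the paper's data:

```python
import numpy as np

# Toy stand-ins for the two losses (NOT the paper's data): the "exact"
# GW flux dJ/dt and the mechanical loss -F_phi, each sampled on its own
# grid of 2*Omega.  The Newtonian-like power law is an arbitrary choice.
x_J = np.linspace(0.02, 0.27, 400)               # 2*Omega grid for dJ/dt
dJdt = x_J ** (7.0 / 3.0)
x_F = np.linspace(0.02, 0.27, 523)               # different grid for -F_phi
Fphi = x_F ** (7.0 / 3.0) * (1.0 + 1e-3 * x_F)   # small artificial offset

# Interpolate both onto a common grid of twice the orbital frequency
# and form the fractional difference between the two losses.
grid = np.linspace(0.05, 0.26, 300)
Fphi_on_grid = np.interp(grid, x_F, Fphi)
rel_diff = (np.interp(grid, x_J, dJdt) - Fphi_on_grid) / Fphi_on_grid
print(np.max(np.abs(rel_diff)))
```

For this toy offset the fractional difference stays at the few-$10^{-4}$ level, of the same order as the late-inspiral agreement quoted in the text.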
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig9.eps}
\caption{\label{fig:DeltaJ}Difference between mechanical angular
momentum loss and GW energy flux shown versus twice the orbital frequency
$2\Omega$. The vertical line locates the adiabatic LSO frequency. The main
panel focuses on the inspiral phase, while the inset shows the full range
up to $2\Omega_{\rm max}$, where $\Omega_{\rm max}$ indicates the
maximum of the orbital frequency.}
\end{center}
\end{figure}
Note that the relative difference is of the order of $10^{-3}$
during the late inspiral and the plunge, increasing to only
about $10\%$ just before merger.
We have performed the same analysis for $\nu=10^{-2}$ and $\nu=10^{-4}$,
obtaining similar results. This is an indication that we have reached
the limit of accuracy of our resummation procedure, a limit that is
evidently more apparent during the late part of the plunge.
It is however remarkable that the fractional difference
is so small, confirming the validity of the
improved $\rho$-resummation of Ref.~\cite{Damour:2008gu}.
In this respect, we mention in passing that this fractional difference
can be made even smaller by further Pad\'e resumming the residual amplitude
corrections $\rho_{\ell m}$ in a proper way. This route was explored in
Ref.~\cite{Damour:2008gu} for the $\rho_{22}$ amplitude, yielding indeed
better agreement with the ``exact'' circularized waveform amplitude.
A more detailed analysis of these delicate issues lies beyond the
scope of this paper, but will be investigated in future work.
\subsection{Influence of dynamical ``self-force''
$\nu$-dependent effects on the waveforms.}
\label{sec:eob_full}
In the work that we have presented so far we have included
in the relative dynamics only the leading order part of the
radiation reaction force, namely the one proportional to $\nu$.
This allowed us to compute, consistently as shown above,
Regge-Wheeler-Zerilli-type waveforms.
In doing so we have neglected all the finite-$\nu$ effects
that are important in the (complete) EOB description of the
two-body problem, that is: (i) $\nu$-dependent corrections to the
conservative part of the dynamics\footnote{These corrections
come in both from the resummed EOB Hamiltonian $H_{\rm EOB}$ with
the double-square-root structure and from the EOB radial
potential $A(r,\nu)$.} and (ii) higher order $\nu$ dependent
corrections in the nonconservative part of the dynamics,
i.e. corrections entering in the definition of
the angular momentum flux $\hat{{\cal F}}_{\varphi}$.
In this section we want to quantify the effects entailed by
these corrections on our result. To do so, we switch on
the ``self-force'' $\nu$-dependent corrections in the Hamiltonian
and in the flux defining the complete EOB relative dynamics and
we compute EOB waveforms for $\nu=\{10^{-2},10^{-3},10^{-4}\}$.
Since this analysis aims at giving us only a general quantitative
idea of the effect of ``self-force'' corrections, we restrict
ourselves to the computation of the $\ell=m=2$ ``insplunge''
waveform, without the matching to QNMs~\cite{Damour:2009kr}.
Note also that we neglect the non-quasi-circular corrections
advocated in Eq.~(5) of~\cite{Damour:2009kr}.
(See also Ref.~\cite{Damour:2009ic}).
\begin{table}[t]
\caption{\label{tab:table5} Accumulated phase difference (computed
from $\omega_1=0.10799$ up to $\omega_2\equiv 2\Omega_{\rm LSO}
= 0.13608$) [in radians] between $\ell=m=2$ EOB waveforms.
$\Delta\phi_{\rm LSO}^{\rm EOB_{5PN}}$ is the phase difference accumulated
between the EOB$_{\rm 5PN}$ and the EOB$_{\rm testmass}$ insplunge
waveforms, while $\Delta\phi_{\rm LSO}^{\rm EOB_{\rm 1PN}}$ is the
phase difference accumulated between the EOB$_{\rm 1PN}$ and the
EOB$_{\rm testmass}$ insplunge waveforms.
See text for more precise explanations.}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{lcc}
$\nu$ & $\Delta\phi_{\rm LSO}^{\rm EOB_{5PN}}$ [rad] & $\Delta\phi_{\rm
LSO}^{\rm EOB_{1PN}}$ [rad] \\
\hline \hline
$10^{-2}$ & 3.2 & 0.40 \\
$10^{-3}$ & 3.8 & 0.43 \\
$10^{-4}$ & 4.1 & 0.44 \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
For each value of $\nu=\{10^{-2},10^{-3},10^{-4}\}$, we compute
three insplunge $h_{22}$ resummed waveforms of increasing
physical complexity.
The first, EOB$_{\rm testmass}$ insplunge waveform, is obtained within
the ${\cal O}(\nu)$ approximation used so far; i.e., we set to zero
all the $\nu$ dependent EOB corrections in $H_{\rm EOB}$
and in the normalized flux,
$\hat{f}_{\rm DIN}\equiv \hat{f}_{\rm DIN}(v_\varphi;\nu=0)$.
The second, EOB$_{\rm 5PN}$ insplunge waveform, is computed from
the full EOB dynamics, with the complete $H_{\rm EOB}$
and $\nu$-dependent (Newton normalized)
flux $\hat{f}_{\rm DIN}(v_\varphi;\nu)$ inserted
in Eq.~\eqref{eq:rr}. The radial potential $A(u;a_5,a_6,\nu)$ is given
by the Pad\'e resummed form of Eq.~(2) of Ref.~\cite{Damour:2009kr}
and $a_5$ and $a_6$ are EOB flexibility parameters that take into
account 4PN and 5PN corrections in the conservative
part of the dynamics. They have been constrained by comparison
with numerical results~\cite{Damour:2009kr,Damour:2009sm}.
Following~\cite{Damour:2009sm}, we use here the values
$a_5=-22.3$ and $a_6=+252$ as ``best choice''.
The third, EOB$_{\rm 1PN}$ insplunge waveform, is obtained by keeping the same
flux $\hat{f}_{\rm DIN}(v_\varphi;\nu)$ of
the EOB$_{\rm 5PN}$ case, but only part of the
EOB Hamiltonian. More precisely, we restrict the effective
Hamiltonian $\hat{H}_{\rm eff}$ to the 1PN level.
This practically means using $A(r;0)\equiv 1-2M/r$ and
dropping the $p_{r_*}^4/r^2$ correction term that
enters in $\hat{H}_{\rm eff}$ at the 3PN level.
See Eq.~(1) in~\cite{Damour:2009sm}.
We compute the relative phase difference, accumulated
between frequencies $(\omega_1,\omega_2)$, between the
EOB$_{\rm testmass}$ waveform and the other two.
We chose $\omega_1=0.10799$, which corresponds to the initial
(test-mass) GW frequency, and $\omega_2=2\Omega_{\rm LSO}\simeq 0.13608$.
Instead of comparing the waveforms versus time, we found
it convenient to compare them versus frequency, as follows.
For each waveform, we compute the auxiliary quantity
\be
Q_{\omega} = \dfrac{\omega^2}{\dot{\omega}}.
\end{equation}
This quantity measures the effective number of GW cycles spent
around GW frequency $\omega$ (and correspondingly weighs the
signal-to-noise ratio~\cite{Damour:2000gg}), and is a useful
diagnostic for comparing the relative phasing accuracy of various
waveforms~\cite{DNTrias}.
Then, the gravitational wave phase $\phi_{(\omega_1,\omega_2)}$
accumulated between frequencies $(\omega_1,\omega_2)$ is
given by
\be
\label{eq:phi}
\phi_{(\omega_1,\omega_2)}=\int_{\omega_1}^{\omega_2} Q_{\omega}d\log\omega.
\end{equation}
We can then define the relative dephasing accumulated between two
waveforms as
\be
\Delta\phi_{(\omega_1,\omega_2)}^{{\rm EOB}_{n{\rm
PN}}}=\int_{\omega_1}^{\omega_2}\Delta Q_{\omega}^{{\rm EOB}_{n{\rm PN}}}d\log(\omega),
\end{equation}
where $\Delta Q_{\omega}^{{\rm EOB}_{n{\rm PN}}}\equiv Q_{\omega}^{{\rm EOB}_{n{\rm PN}}}-Q_{\omega}^{\rm EOB_{testmass}}$.
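As a sanity check on Eq.~\eqref{eq:phi}, $Q_\omega$ and the accumulated phase can be evaluated numerically for any frequency evolution $\omega(t)$. The exponential chirp below is a hypothetical toy model, chosen only because its phase integral is known in closed form ($Q_\omega=\omega/k$, so $\phi=(\omega_2-\omega_1)/k$); the parameter values are arbitrary:

```python
import numpy as np

# Hypothetical toy chirp omega(t) = omega0 * exp(k*t), for which
# Q_w = omega^2/omega_dot = omega/k, so the accumulated phase
# int Q_w dln(omega) equals (omega_2 - omega_1)/k exactly.
omega0, k = 0.10799, 0.05
t = np.linspace(0.0, 5.0, 20001)
omega = omega0 * np.exp(k * t)

omega_dot = np.gradient(omega, t)   # numerical d(omega)/dt
Q = omega**2 / omega_dot            # effective number of GW cycles at omega

# Trapezoidal quadrature of the phase integral over dln(omega)
lnw = np.log(omega)
phi = np.sum(0.5 * (Q[1:] + Q[:-1]) * np.diff(lnw))

phi_exact = (omega[-1] - omega[0]) / k
print(phi, phi_exact)   # agree to within the finite-difference error
```

The same finite-difference recipe applies to a tabulated EOB $\omega(t)$, where no closed form is available.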
The results of this comparison are contained in Table~\ref{tab:table5}.
Note the influence of the correction due to the conservative
part of the self force. Since this correction changes the location
of the adiabatic $r$-LSO position~\cite{Buonanno:1998gg}, it
entails a larger effect on the late-time portion of the
binary dynamics and waveforms, resulting in a larger
accumulated dephasing.
\section{Conclusions}
\label{sec:end}
We have presented a new calculation of the gravitational wave emission
generated through the transition from adiabatic inspiral to plunge, merger
and ringdown of a binary system of nonspinning black holes in the extreme
mass ratio limit. We have used a Regge-Wheeler-Zerilli perturbative approach
completed by a leading-order, EOB-based radiation reaction force. With respect
to previous work, we have improved (i) on the numerical algorithm used to
solve the Regge-Wheeler-Zerilli equations and (ii) on the analytical
definition of the improved EOB-resummed radiation reaction force.
Our main achievements are listed below.
\begin{enumerate}
\item
We computed the complete multipolar waveform up to multipolar order $\ell=8$.
We focused on the relative impact (at the level of energy
and angular momentum losses) of the subdominant multipoles during the
part of the plunge that can be considered quasiuniversal
(and quasigeodesic) to a good approximation.
We analyzed also the structure of the ringdown waveform
at the quantitative level. In particular, we measured
the relative amount of excitation of the fundamental QNMs with positive
and negative frequency. We found that, for each value of $\ell$,
the largest excitation of the negative modes always occurs for $m=1$
and is of the order of $9\%$ of the corresponding positive mode.
\item
The central numerical result of the paper is the computation of the
gravitational recoil, or kick, imparted to the center of
mass of the system due to the anisotropic emission of gravitational waves.
We have discussed the influence of high modes in the multipolar expansion
of the recoil. We showed that one has to consider $\ell\geq 4$
to have a $\sim 10\%$ accuracy in the final kick.
We found for the magnitude of the final and maximum recoil velocity
the values $|v^{\rm end}|/\nu^2=0.0446$ and $|v^{\rm max}|/\nu^2=0.0523$.
The value of the final recoil shows a remarkable agreement ($<2\%$) with
the one extrapolated from a sample of NR simulations,
$|v^{\rm end}_{\rm NR}|/\nu^2\simeq 0.0439$.
\item
The ``improved resummation'' for the radiation reaction used in this paper
yields better consistency between mechanical angular momentum
losses and gravitational wave energy flux than the previously employed
Pad\'e resummed procedure. In particular, we found an agreement between
the angular momentum fluxes of the order of $0.1\%$ during the plunge
(well below the LSO), with a maximum disagreement of the order of $10\%$
reached around the merger.
This is a detailed piece of evidence that EOB waveforms computed via the
resummation procedure of~\cite{Damour:2008gu} can yield accurate input
for LISA-oriented science.
\end{enumerate}
While writing this paper, we became aware of a similar calculation
of the final recoil by Sundararajan, Khanna and Hughes~\cite{PKH}.
Their calculation is based on a different method to treat the
transition from inspiral to
plunge (see Refs.~\cite{Sundararajan:2007jg,Sundararajan:2008bw,Sundararajan:2008zm}
and references therein). In the limiting case of a nonspinning binary,
their results for the final and maximum kick are fully consistent with ours.
\acknowledgements
We thank Thibault Damour for fruitful discussions,
inputs and a careful reading of the manuscript.
We also acknowledge useful correspondence with
Scott Hughes, Gaurav Khanna and Pranesh Sundararajan,
who kindly made us aware of their results before publication.
We are grateful to Bernd Br\"ugmann, Mark Hannam, Sascha Husa,
Jos\'e~A.~Gonz\'alez, and Ulrich Sperhake for giving us access to their NR data.
Computations were performed on the INFN Beowulf clusters
{\tt Albert} at the University of Parma and the {\tt Merlin}
cluster at IHES. We also thank Roberto De Pietri, Fran\c{c}ois Bachelier, and
Karim Ben Abdallah for technical assistance
and E.~Berti for discussions.
SB is supported by DFG Grant SFB/Transregio~7 ``Gravitational Wave Astronomy''.
SB thanks IHES for hospitality and support
during the development of this work.
\section*{Abstract}
Complex, non-additive genetic interactions are common and can be
critical in determining phenotypes. Genome-wide association studies
(GWAS) and similar statistical studies of linkage data, however,
assume additive models of gene interactions in looking for
associations between genotype and phenotype. In general, these
statistical methods view the compound effects of multiple genes on a
phenotype as a sum of partial influences of each individual gene and
can often miss a substantial part of the heritable effect. Such
methods do not make use of any biological knowledge about underlying
genotype-phenotype mechanisms. Modeling approaches from the Artificial
Intelligence field that incorporate deterministic knowledge into
models while performing statistical analysis can be applied to include
prior knowledge in genetic analysis. We chose to use the most general
such approach, Markov Logic Networks (MLNs), that employs first-order
logic as a framework for combining deterministic knowledge with
statistical analysis. Using simple, logistic regression-type MLNs we
have been able to replicate the results of traditional statistical
methods. Moreover, we show that even with quite simple models we are
able to go beyond finding independent markers linked to a phenotype by
using joint inference that avoids an independence assumption. The
method is applied to genetic data on yeast sporulation, a phenotype
known to be governed by non-linear interactions between genes. In
addition to detecting all of the previously identified loci associated
with sporulation, our method is able to identify four additional loci
with small effects on sporulation. Since their effect on sporulation
is small, these four loci were not detected with standard statistical
methods that do not account for dependence between markers due to gene
interactions. We show how gene interactions can be detected using more
complex models, which in turn can be used as a general framework for
incorporating systems biology with genetics. Such future work that
embodies systems knowledge in probabilistic models is proposed.
\section*{Author Summary}
We have taken up the challenge of devising a framework for the
analysis of genetic data that is fully functional in the usual
statistical correlation analysis used in genome-wide association
studies, but also capable of incorporating prior knowledge about
biological systems relevant to the genetic phenotypes. We develop a
general genetic analysis approach that meets this challenge. We adapt
an AI method for learning models, called Markov Logic Networks, that
is based on the fusion of Markov Random Fields with first order logic.
Our adaptation of the Markov Logic Network method for genetics allows
very complex constraints and a wide variety of model classes to be
imposed on probabilistic, statistical analysis. We illustrate the use
of the method by analyzing a data set based on sporulation efficiency
from yeast, in which we demonstrate gene interactions and identify a
number of new loci involved in determining the phenotype.
\section*{Introduction}
Genome-wide association studies (GWAS) have allowed the detection of
many genetic contributions to complex phenotypes in humans (see
\emph{www.genome.gov}). Studies of biological networks of different
kinds, including genetic regulatory networks, protein-protein
interaction networks and others, have made it clear, however, that
gene interactions are abundant and are therefore of likely importance
for genetic analysis~\cite{Manolio09}. Complex, non-additive
interactions between genetic variations are very common and can
potentially play a crucial role in determining
phenotypes~\cite{Brem05,Drees05,Carter07,CarterDudley09}. GWAS and
similar statistical methods such as classical QTL studies generally
assume additive models of gene interaction that attempt to capture a
compound effect of multiple genes on a phenotype as a sum of partial
influences of each individual gene~\cite{HirschhornDaly05,McCarthy08}.
These statistical methods also assume no biological knowledge about
the underlying processes or phenotypes. Since biological networks are
complex, and since variations are numerous, unconstrained searches for
associations between genotype and phenotype require large population
samples, and can succeed only in detecting a limited range of effects.
Without imposing any constraints based on biological knowledge
searching for gene interactions is very challenging, particularly when
input data consist of different data types coming from various
sources.
The major question that motivated this work is ``\emph{Can we
constrain traditional statistical approaches by using biological
knowledge to define some known networks that influence patterns in
the data, and can such approaches produce more complete genetic
models?}'' For example, we might use the patterns present in the
genotype data to build more predictive models based on both genotype
and phenotype data. Note that the problem of using biological
knowledge to constrain a model of genetic interaction is closely
connected to the problem of integrating various types of data in a
single model. In this article we employ a known Artificial
Intelligence (AI) approach (Markov Logic Networks) to reformulate the
problem of defining and finding genetic models in a general way and
use it to facilitate detection of non-additive gene interactions.
This approach allows us to lay the foundations for studies of
essentially any kind of genetic model, which we demonstrate for a
relatively simple model.
Markov Logic Networks (MLNs) constitute one of the most general approaches to
statistical relational learning, a sub-field of machine learning that
combines two kinds of modeling: probabilistic graphical models, namely
Markov Random Fields, and first-order logic. Probabilistic graphical
models, first proposed by Pearl~\cite{Pearl88}, offer a way to
represent joint probability distributions of sets of random variables
in a compact fashion. A graphical structure describing the
probabilistic independence relationships in these models allows the
development of numerous algorithms for learning and inference and
makes these models a good choice for handling uncertainty and noise in
data. On the other hand, first-order logic allows us to represent and
perform inferences over complex, relational domains. Propositional
(Boolean) logic, which biologists are most familiar with, describes
the truth state on the level of specific instances, while first-order
logic allows us to make assertions about the truth state of relations
between subsets (classes) of instances. Moreover, using first-order
logic we can represent recursive and potentially infinite structures
such as Markov chains where a temporal dependency of the current state
on the state at the previous time step can be instantiated to an
infinite time series. Thus, first order logic is a very flexible
choice for representing general knowledge, like that we encounter in
biology.
MLNs merge probabilistic graphical models and first-order logic in a
framework that gains the benefits of both representations. Most
importantly, the logic component of MLNs provides an interface for
adding biological knowledge to a model through a set of first-order
constraints. At the same time, MLNs can be seen as a generalization of
probabilistic graphical models since any distribution represented by
the latter can be represented by the former, and this representation
is more compact due to the first-order logic component. Even so,
various learning and inference algorithms for probabilistic graphical
models are applicable to MLNs and are thereby enhanced with logic
inference.
One key advantage of logic-based probabilistic modeling methods, and
in particular MLNs, is that they allow us to work easily with data
that are not independent and identically distributed (not i.i.d.).
Many statistical and machine learning methods assume that the input
data is i.i.d., a very strong, and usually artificial, property that
most biological problems do not share. For instance, biological
variables most often have a spatial or temporal structure, or can even
be explicitly described in a relational database with multiple
interacting relations. MLNs thus provide a means for
non-i.i.d. learning and joint inference of a model. While the input
data used in GWAS and in other genetic studies are rich in complex
statistical interdependencies between the data points, MLNs can easily
deal with any of these data structures.
There are various modeling techniques that employ both probabilistic
graphical models and first-order
logic~\cite{Poole93,NgoHaddawy97,GlesnerKoller95,SatoKameya97,DeRaedt07,Friedman99,KerstingDeRaedt00,Pless06,RichardsonDomingos06}.
Many of them impose different restrictions on the underlying logical
representation in order to be able to map the logical knowledge base
to a graphical model. One common restriction employed, for example,
in~\cite{SatoKameya97,DeRaedt07,KerstingDeRaedt00,Pless06} is to use
only \emph{clausal} first-order formulas of the form $b_1 \land b_2
\land \ldots \land b_n \Rightarrow h$ that are used to represent
cause-effect relationships. The majority of the methods, such as
those introduced
in~\cite{NgoHaddawy97,GlesnerKoller95,KerstingDeRaedt00}, use Bayesian
networks, directed graphical models, as the probabilistic
representation. However, there are a few
approaches~\cite{Pless06,RichardsonDomingos06} that instead use Markov
Random Fields, undirected graphical models, to perform inferences.
We use Markov Logic Networks~\cite{RichardsonDomingos06} that merge
unrestricted first-order logic with Markov Random Fields, and as a
result use the most general probabilistic logic-based modeling
approach. In this paper we demonstrate this MLN-based approach to
understanding complex systems and data sets. Like Ref.~\cite{Yi05},
which proposed a Bayesian approach that can be used to infer models for
QTL detection, our MLN-based approach is a model inference method that
goes beyond just hypothesis testing. Moreover, in this paper we
describe how we have adapted and applied MLN to genetic analysis so
that complex biological knowledge can be included in the models. We
have applied the method to a relatively simple genetic system and data
set, the analysis of the genetics of sporulation efficiency in the
budding yeast \emph{Saccharomyces cerevisiae}. In this system,
recently analyzed by Cohen and co-workers~\cite{Gerke09}, two
genetically and phenotypically diverse yeast strains, whose genomes
were fully characterized, were crossed and the progeny studied for the
genetics of the sporulation phenotype. This provided a genetically
complex phenotype with a well-defined genetic context to which to
apply our method.
\section*{Methods}
\subsection*{Markov Random Fields}
Consider a set of random variables of the same type, $\mathbf{X}=\{X_i:1
\le i \le N\}$, and a set of possible values (an alphabet),
$\mathbf{A}=\{A_i:1 \le i \le M\}$, so that any variable can take any
value from $A_1$ to $A_M$ (it is easy to extend this to the case of
multiple variable types). Consider also a graph, $G$, whose vertices
represent variables, $\mathbf{X}$, and whose edges represent
probabilistic dependencies among the vertices such that a local Markov
property is met. The local Markov property for a random variable $X_i$
can be formally written as $\Pr(X_i=A_j \mid \mathbf{X} \setminus \{X_i\})=\Pr(X_i=A_j \mid N(X_i))$
which states that the state of the random variable $X_i$ is conditionally
independent of all other variables given $X_i$'s neighbors $N(X_i)$,
$N(X_i) \subseteq \mathbf{X} \setminus \{X_i\}$. Let $\mathbf{C}$
denote the set of all cliques in $G$, where a clique is a subgraph
that contains an edge for every pair of its nodes (a complete
subgraph). Consider a configuration, $\gamma$, of $\mathbf{X}$ that
assigns each variable, $X_i$, a value from $\mathbf{A}$. We denote
the space of all configurations as
$\mathbf{\Gamma}=\mathbf{A}^{\mathbf{X}}$. A restriction of $\gamma$
to the variables of a specific clique $C$ is denoted by $\gamma_C$.
A \emph{Markov Random Field} (MRF) is defined on $\mathbf{X}$ by a
graph $G$ and a set of potentials $\mathbf{V} = \{ V_C(\gamma_C): C
\in \mathbf{C}, \gamma \in \mathbf{\Gamma} \}$ assigned to the cliques
of the graph. Using cliques allows us to explicitly define the
topology of models, making MRFs convenient for modeling long-range,
higher-order connections between variables. We encode the
relationships between the variables using the clique potentials. By
the Hammersley-Clifford theorem, a joint probability distribution
represented by an MRF is given by the following Gibbs distribution
\begin{equation}
\Pr(\gamma) = \frac{1}{Z} \prod_{C \in \mathbf{C}} \exp \left(-V_C(\gamma_C) \right),
\label{eq1}
\end{equation}
where the so-called partition function $Z = \sum_{\gamma \in
\mathbf{\Gamma}} \prod_{C \in \mathbf{C}} \exp(-V_C(\gamma_C))$
normalizes the probability to ensure that $\sum_{\gamma \in
\mathbf{\Gamma}} \Pr(\gamma) = 1$.
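For very small graphs, the Gibbs distribution of Eq.~\eqref{eq1} can be evaluated by brute force, enumerating every configuration to compute $Z$ explicitly. The three-variable binary chain and the pair potential below are purely illustrative assumptions:

```python
import itertools
import math

# Three binary variables X1, X2, X3 on a chain: cliques {X1,X2}, {X2,X3}.
def V(a, b):
    """Toy pair potential favouring equal neighbouring values."""
    return 0.0 if a == b else 1.0

configs = list(itertools.product([0, 1], repeat=3))
unnorm = {g: math.exp(-V(g[0], g[1]) - V(g[1], g[2])) for g in configs}
Z = sum(unnorm.values())                    # partition function
prob = {g: u / Z for g, u in unnorm.items()}

print(sum(prob.values()))                   # sums to 1 by construction
print(prob[(0, 0, 0)] > prob[(0, 1, 0)])    # agreeing neighbours favoured
```

Such exhaustive enumeration grows as $M^N$ and is only a correctness check; realistic MRFs require the approximate inference algorithms mentioned below.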
Without loss of generality we can represent a Markov Random Field as a
log-linear model~\cite{Pietra97}:
\begin{equation}
\Pr(\gamma)=\frac{1}{Z} \exp \left(\sum_i w_i f_i(\gamma) \right),
\label{eq2}
\end{equation}
where $f_i: \mathbf{\Gamma} \rightarrow \mathbb{R}$ are functions
defining features of the MRF and $w_i \in \mathbb{R}$ are the weights
of the MRF. Usually, the features are indicators of the presence or
absence of some attribute, and hence are binary. For instance, we can
consider a feature function that is $1$ when some $X_i$ has a
particular value and $0$ otherwise. Using these types of indicators,
we can make $M$ different features for $X_i$ that can take on $M$
different values. Given some configuration $\gamma_{\mathbf{X}
\setminus \{X_i\}}$ of all the variables $\mathbf{X}$ except $X_i$,
we can have a different weight for this configuration whenever $X_i$
has a different value. The weights for these features capture the
affinity of the configuration $\gamma_{\mathbf{X} \setminus \{X_i\}}$
for each value of $X_i$. Note that the functions defining features can
overlap in arbitrary ways providing representational flexibility.
One simple mapping of a traditional MRF to a log-linear MRF is to use
a single feature $f_i$ for each configuration $\gamma_C$ of every
clique $C$ with the weight $w_i = - V_C(\gamma_C)$. Even though in
this representation the number of features (the number of
configurations) increases exponentially as the size of cliques
increases, the Markov Logic Networks described in the next section
attempt to reduce the number of features involved in the model
specification by using logical functions of the cliques'
configurations.
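The clique-potential-to-feature mapping just described, with one binary indicator feature per clique configuration and weight $w_i=-V_C(\gamma_C)$, can be checked directly on a minimal example. The three-variable chain and the pair potential $V$ here are illustrative assumptions:

```python
import itertools
import math

def V(a, b):                     # illustrative pair potential (assumption)
    return 0.0 if a == b else 1.0

cliques = [(0, 1), (1, 2)]       # index pairs: a three-variable binary chain

# One binary indicator feature per (clique, clique configuration),
# with weight w = -V_C(gamma_C).
features = [(c, cfg, -V(*cfg))
            for c in cliques
            for cfg in itertools.product([0, 1], repeat=2)]

def loglinear_unnorm(g):
    # exp(sum_i w_i f_i(g)); feature f_i fires iff clique c is in state cfg
    return math.exp(sum(w for (i, j), cfg, w in features
                        if (g[i], g[j]) == cfg))

def potential_unnorm(g):
    return math.exp(-sum(V(g[i], g[j]) for i, j in cliques))

for g in itertools.product([0, 1], repeat=3):
    assert abs(loglinear_unnorm(g) - potential_unnorm(g)) < 1e-12
print("log-linear features reproduce the clique potentials")
```

Since the two unnormalized weights agree configuration by configuration, the normalized distributions agree as well.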
Given an MRF, a general problem is to find a configuration $\gamma$
that maximizes the probability $\Pr(\gamma)$. Since the space
$\mathbf{\Gamma}$ is very large, performing an exhaustive search is
intractable. For many applications, there are two kinds of
information available: prior knowledge about the constraints imposed
on the simultaneous configuration of connected variables; and
observations about these variables for a particular instance of the
problem. The constraints constitute the model of the world and reflect
statistical dependencies between values of the neighbors captured in
an MRF. For example, when modeling gene association with phenotype,
the restrictions on the likelihood of configurations of co-expressed
genes may be cast as an MRF with cliques of size 2 and 3 (see
figure~\ref{fig1}). In the next section, we give a biological example
involving construction of MRFs with cliques of size 3 and 4, and
provide more mathematical details.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=4in]{mrf-example.eps}
\end{center}
\caption{\textbf{An example of a Markov Random Field.} The MRF
represents four genes influencing a phenotype with potentials
$V_1, \ldots, V_4$ (blue edges). This model restricts genetic
interactions to two pair-wise interactions with potentials $V_5,
V_6$ (red cliques).}
\label{fig1}
\end{figure}
\subsection*{Markov Logic Networks: Mapping First-Order Logic to Markov Random Fields}
Markov Logic Networks merge Markov Random Fields with first-order
logic. In first-order logic (FOL) we distinguish \textbf{constants}
and \textbf{variables} that represent objects and their classes in a
domain, as well as \textbf{functions} specifying mappings between
subgroups of objects, and \textbf{predicates} representing relations
among objects or their attributes. We call a predicate \emph{ground}
if all its variables are assigned specific values. To illustrate,
consider a study of gene interactions through the phenotypic
comparison of wild type strains, single mutants, and double mutants
such as the one presented in~\cite{Drees05}. Consider a set of
constants representing genes, $\{ \mathtt{g1,g2} \}$, gene interaction
labels, $\{ \mathtt{A,B} \}$, and difference between phenotype values
of mutants, $\{ \mathtt{0,1,2} \}$. We define the following
predicates: $\mathtt{RelWS/2}$ (a 2-argument predicate which captures
a relation between a wild type and a single mutant),
$\mathtt{RelWD/3}$ (a relation between a wild type and a double
mutant), $\mathtt{RelSS/3}$ (a relation between two single mutants),
$\mathtt{RelSD/4}$ (a relation between a single mutant and a double
mutant), $\mathtt{Int/3}$ (an interaction between two genes). Using
FOL we can define a knowledge base consisting of two formulas:
$$
\begin{array}{l}
\displaystyle \mathtt{\forall x,y \in \{g1,g2\},\forall c \in \{0,1\},\forall v,u \in \{0,1,2\},
Int(x,y,c) \Rightarrow (RelWS(x,v) \Leftrightarrow RelSD(y,x,y,u))}\\
\displaystyle \mathtt{\forall x,y \in \{g1,g2\},\forall c \in \{0,1\},\forall v,u,w \in \{0,1,2\},
RelWS(x,v) \land RelWS(y,u) \land RelWD(x,y,w) \Rightarrow Int(x,y,c).}
\end{array}
$$
The first rule represents the knowledge that depending on the type of
interaction between two genes, there is a dependency between
$\mathtt{RelWS(x,v)}$ and $\mathtt{RelSD(y,x,y,u)}$ relations. The
second rule captures the knowledge that three relations,
$\mathtt{RelWS(x,v)}$, $\mathtt{RelWS(y,u)}$, and
$\mathtt{RelWD(x,y,w)}$, together determine the type of gene
interaction.
Note that first-order formulas define relations between (potentially
infinite) groups of objects or their attributes. Formulas in FOL can
be seen as relational templates for constructing models in
propositional logic. Therefore, FOL offers a compact way of
representing and aggregating relational data. For example, the two
first-order formulas above can be replaced with 288 propositional
formulas (72 groundings of the first and 216 of the second), since the
variables $\mathtt{x,y,c}$ can each be assigned 2 different values and
the variables $\mathtt{u,v,w}$ can each be assigned 3 values.
Moreover, using the representational power of FOL, we can specify infinite
structures such as temporal relations, e.g., $\mathtt{Expression(e1,
t1) \land NextTimeStep(t1, t2) \Rightarrow Expression(e2, t2)}$,
that can give rise to a theoretically infinite number of propositions.
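To make the grounding arithmetic concrete, a short Python sketch can enumerate the propositional instantiations of the two formulas (the domain names follow the example above):

```python
from itertools import product

# Domains from the example knowledge base.
genes = ["g1", "g2"]   # values for x, y
labels = [0, 1]        # values for c
phenos = [0, 1, 2]     # values for v, u, w

# The first formula quantifies over x, y, c, v, u.
n_formula1 = len(list(product(genes, genes, labels, phenos, phenos)))

# The second formula additionally quantifies over w.
n_formula2 = len(list(product(genes, genes, labels, phenos, phenos, phenos)))

total = n_formula1 + n_formula2
print(n_formula1, n_formula2, total)  # 72 216 288
```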
The principal limitation of any strictly formal logic system is that
it is not suitable for real applications where data contain
uncertainty and noise. For example, the formulas specified earlier
hold for the real data most of the time, but not always. If there is
at least one data point where a formula does not hold, then the entire
model is disregarded as being false. Allowing only the two states,
true or false, is equivalent to allowing only probability values $1$
or $0$. Markov Logic Networks relax this constraint: a model with
unsatisfied formulas is still allowed, only with a lower probability.
The model with the smallest number of unsatisfied formulas is then
the most probable.
Markov Logic Networks (MLNs) extend FOL by assigning a weight to each
formula indicating its probabilistic strength. An MLN is a collection
of first-order formulas $F_i$ with associated weights $w_i$. For each
variable of a Markov Logic Network there is a finite set of constants
representing the domain of the variable. A Markov Logic Network
together with its corresponding constants is mapped to a Markov Random
Field as follows.
Given the set of all predicates of an MLN, every ground predicate of the
MLN corresponds to one random variable of a Markov Random Field whose
value is $1$ if the ground predicate is true and $0$ otherwise.
Similarly, every ground formula of $F_i$ corresponds to one feature of
the log-linear Markov Random Field whose value is $1$ if the ground
formula is true and $0$ otherwise. The weight of the feature in the
Markov Random Field is the weight $w_i$ associated with the formula
$F_i$ in the Markov Logic Network.
From the original definitions (\ref{eq1}) and (\ref{eq2}) and the fact
that features of false ground formulas are equal to $0$, the
probability distribution represented by a \emph{ground} Markov Logic
Network is given by
\begin{equation}
\Pr(\gamma)=\frac{1}{Z} \exp \left(\sum_i w_i n_i(\gamma)\right)
= \frac{1}{Z} \prod_i \exp \left(-V_i(\gamma_i)n_i(\gamma)\right),
\label{eq3}
\end{equation}
where $n_i(\gamma)$ is the number of true ground formulas of the
formula $F_i$ in the state $\gamma$ (which directly corresponds to our
data), and $\gamma_i$ is the configuration (state) of the ground
predicates appearing in $F_i$. $V_i$ is the potential function assigned
to the clique corresponding to $F_i$, with
$\exp(-V_i(\gamma_i))=\exp(w_i)$, i.e., $V_i(\gamma_i)=-w_i$.
Note that this probability distribution would change if we changed the
original set of constants. Thus, one can view MLNs as templates
specifying classes of Markov Random Fields, just as FOL formulas are
templates specifying propositional formulas.
Figure~\ref{fig2} illustrates a portion of a Markov Random Field
corresponding to the ground MLN. We assume a set of constants and an
MLN specified by the knowledge base from the example above, where a
weight is assigned to each formula.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=4in]{mln-example.eps}
\end{center}
\caption{ \textbf{An example of a subnetwork of a Markov Random
Field unfolded from a Markov Logic Network program.} Each node
of the MRF corresponds to a ground predicate (a predicate with
variables substituted with constants). The nodes of all ground
predicates that appear in a single formula form a clique such as
the one highlighted with red. The blue triangular cliques
correspond to the first formula of the MLN and are assigned the
weight of the formula (2.1). The larger rectangular cliques, such
as the one colored red, correspond to the second formula of the
MLN with the weight 1.5.}
\label{fig2}
\end{figure}
\subsection*{An Example: Application to Yeast Sporulation Dataset}
We applied our method to a data set generated by Cohen and
co-workers~\cite{Gerke09}. They generated and characterized a set of
374 progeny of a cross between two yeast strains that differ widely in
their efficiency of sporulation (a wine and an oak strain). For each
of the progeny the sporulation efficiency was measured and assigned a
normalized real value between 0 and 1. To generate a discrete value
set we then binned and mapped the sporulation efficiencies into 5
integer values. Each yeast progeny strain was genotyped at 225
markers that are uniformly distributed along the genome. Each marker
takes on one of two possible values indicating whether it derived from
the oak or wine parent genotype.
Using Markov Logic Networks, we first model the effect of a single
marker on the phenotype, i.e., sporulation efficiency. Define a
logistic-regression type model with the following set of formulas:
\begin{equation}
\mathtt{\forall strain \in \{1, \ldots ,374\}, \hspace{5pt} G(strain,m,g)
\Rightarrow E(strain,v), \hspace{5pt} w_{m,g,v}},
\label{eq4}
\end{equation}
for every marker $\mathtt{m}$ under consideration (at this point we
consider one marker in a model), genotype value $\mathtt{g} \in \{
\mathtt{A},\mathtt{B}\}$, and phenotype value $\mathtt{v} \in
\{\mathtt{1}, \ldots, \mathtt{5}\}$. This Markov Logic Network
contains two predicates, $\mathtt{G}$ and $\mathtt{E}$. Predicate
$\mathtt{G}$ denotes markers' genotype values across yeast crosses,
e.g., $\mathtt{G(strain,M1,B)}$ captures all yeast crosses for which
the genotype of a marker $\mathtt{M1}$ is $\mathtt{B}$. Similarly,
predicate $\mathtt{E}$ denotes the phenotype (sporulation efficiency)
across yeast crosses, for instance, $\mathtt{E(strain,1)}$ captures
all yeast strains for which the level of sporulation efficiency is
$\mathtt{1}$. The Markov Logic Network (\ref{eq4}) contains 10
formulas, 1 marker of interest times 2 possible genotype values times
5 possible phenotype values. Each formula represents a pattern that
holds true across all yeast crosses (indicated by the variable
$\mathtt{strain}$) with the same strength (indicated by the weight
$\mathtt{w_{m,g,v}}$). In other words, the weight
$\mathtt{w_{m,g,v}}$ represents the fitness of the corresponding
formula across all strains.
Instantiations of the predicate $\mathtt{G}$ represent a set of
predictor variables, whereas instantiations of the predicate
$\mathtt{E}$ represent a set of target variables (\ref{eq4}). There
are 748 ground predicates of $\mathtt{G}$ (assuming we handle only one
marker in a model) and 1870 ground predicates of $\mathtt{E}$. Each
ground predicate corresponds to a random variable in the corresponding
Markov Random Field (see the previous section for more details).
Since our MLN contains 10 formulas and there are 374 possible
instantiations for each formula, the corresponding log-linear Markov
Random Field contains 3740 features, one for each instantiation of
every formula.
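The counts above follow directly from the domain sizes; a few lines of Python make the arithmetic explicit:

```python
strains = 374
genotype_values = 2    # A or B
phenotype_values = 5   # binned sporulation efficiency levels
markers_in_model = 1

ground_G = strains * markers_in_model * genotype_values           # 748
ground_E = strains * phenotype_values                             # 1870
formulas = markers_in_model * genotype_values * phenotype_values  # 10
features = formulas * strains                                     # 3740

# Total ground predicates (the N used later for the data vector).
N = ground_G + ground_E                                           # 2618
assert (ground_G, ground_E, formulas, features, N) == (748, 1870, 10, 3740, 2618)
```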
\subsection*{Learning the Weights of MLNs}
Each data point in the original dataset corresponds to one ground
predicate (either $\mathtt{E}$ or $\mathtt{G}$ in our example). For
example, the information that a genotype value of a marker
$\mathtt{M71}$ in a strain $\mathtt{S13}$ is equal to $\mathtt{A}$
corresponds to a ground predicate $\mathtt{G(S13,M71,A)}$. Therefore,
the original dataset can be represented with a collection of ground
predicates that logically hold, which in turn is described as a data
vector $\mathbf{d} = \langle d_1, \ldots, d_N \rangle$, where $N$ is
the number of all possible ground predicates ($N=2618$ in this
example). An element $d_i$ of the vector $\mathbf{d}$ is equal to
$1$, if the $i$th ground predicate (assuming some order) is true and
thus is included in our collection, and $0$ otherwise. Note that this
vector representation is possible under a \emph{closed world
assumption} stating that all ground predicates that are not listed
in our collection are assumed to be false.
In order to carry out training of a Markov Logic Network we can use
standard gradient-based methods for likelihood maximization. The
learning proceeds by iteratively improving the weights of the model.
At the $j$th step, given weights $\mathbf{w}^{(j)}$, we compute
$\nabla_{\mathbf{w}^{(j)}} L(\mathbf{w}^{(j)})$, the gradient of the
likelihood, which is the objective function we maximize. We then
improve the weights by moving in the direction of the gradient,
$\mathbf{w}^{(j+1)} = \mathbf{w}^{(j)} + \alpha
\nabla_{\mathbf{w}^{(j)}} L(\mathbf{w}^{(j)})$, where $\alpha$ is the
step size.
Recall that the likelihood is given by
\begin{equation}
L(\mathbf{w} \mid \gamma)
= \Pr(\gamma \mid \mathbf{w})
= \frac{1}{Z} \exp \left(\sum_i w_i n_i(\gamma)\right)
= \frac{\exp \left(\sum_i w_i n_i(\gamma)\right)}
{\sum_{\gamma' \in \mathbf{\Gamma}} \exp \left(\sum_i w_i n_i(\gamma')\right)},
\label{eq5}
\end{equation}
where $\gamma$ is a state (also called a configuration) of the set of
random variables $\mathbf{X}$ and $\mathbf{\Gamma}$ is a space of all
possible states. Therefore, the log-likelihood is
\begin{equation}
\log L(\mathbf{w} \mid \gamma)
= \log \Pr(\gamma \mid \mathbf{w})
= \sum_i w_i n_i(\gamma)
- \log \left[ \sum_{\gamma' \in \mathbf{\Gamma}} \exp\left(\sum_i w_i n_i(\gamma')\right) \right].
\label{eq6}
\end{equation}
We now derive the gradient with respect to the network weights,
\begin{equation}
\begin{array}{l}
\displaystyle \frac{\partial}{\partial w_j} \log L(\mathbf{w} \mid \gamma)
= n_j(\gamma) - \frac{1}{Z}
\frac{\partial}{\partial w_j} \sum_{\gamma' \in \mathbf{\Gamma}} \exp \left(\sum_i w_i n_i(\gamma')\right)\\
\displaystyle = n_j(\gamma) - \frac{1}{Z}
\sum_{\gamma' \in \mathbf{\Gamma}} \left[ n_j(\gamma')\exp \left(\sum_i w_i n_i(\gamma')\right) \right]\\
\displaystyle = n_j(\gamma)
- \sum_{\gamma' \in \mathbf{\Gamma}} \left[ n_j(\gamma')L(\mathbf{w} \mid \gamma') \right].
\end{array}
\label{eq7}
\end{equation}
Note that the sum is computed over \emph{all possible} variable states
$\gamma'$. The above expression shows that each component of the
gradient is the difference between the number of true instances of the
corresponding formula $F_j$ (the number of true ground formulas of
$F_j$) and the expected number of true instances of $F_j$ according to
the current model. However, computing either component of this
difference is intractable.
Since the exact number of true ground formulas cannot be tractably
computed from data \cite{RichardsonDomingos06}, the number is
approximated by sampling the instances of the formula and checking
their truth values according to the data.
On the other hand, it is also intractable to compute the expected
number of true ground formulas as well as the likelihood
$L(\mathbf{w} \mid \gamma')$. The former involves inference over the
model, whereas the latter requires computing the partition function $Z
= \sum_{\gamma' \in \mathbf{\Gamma}} \exp \left(\sum_i w_i
n_i(\gamma') \right)$. One solution, proposed
in~\cite{RichardsonDomingos06}, is to maximize the
\emph{pseudo-likelihood}
\begin{equation}
\hat{L}(\mathbf{w} \mid \gamma) = \prod_{j=1}^N \Pr(\gamma_j \mid \gamma_{MB_j};\mathbf{w}),
\label{eq8}
\end{equation}
where $\gamma_j$ is a restriction of the state $\gamma$ to the $j$th
ground predicate and $\gamma_{MB_j}$ is a restriction of $\gamma$ to
what is called a Markov blanket of the $j$th ground predicate (the
state of the Markov blanket according to our data). We elected to use
this approach. Similar to the original definition of the Markov
blanket in the context of Bayesian networks \cite{Pearl88}, the
\emph{Markov blanket} of a ground predicate is the set of other ground
predicates that appear together with it in some ground formula. Using the yeast
sporulation example, the set of ground predicates $\{ \mathtt{ \forall
m, \forall g \mid G(S1,m,g) } \}$ is the Markov blanket of
$\mathtt{E(S1,1)}$ due to the knowledge base (\ref{eq4}).
Maximization of pseudo-likelihood is computationally more efficient
than maximization of likelihood, since it does not involve inference
over the model, and thus does not require marginalization over a large
number of variables. Currently, we use the limited-memory
Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm from the
\emph{Alchemy} implementation of MLNs \cite{Kok09} to optimize the
pseudo-likelihood.
\subsection*{Using MLNs for Querying}
After learning is complete, we have a trained Markov Logic Network
that can be used for various types of inference. In particular, we
can answer such queries as ``\emph{what is the probability that a
ground predicate $Q$ is true given that every predicate from a set
(a conjunction) of ground predicates $Ev=\{E_1, \ldots, E_m\}$ is
true?}'' The ground predicate $Q$ is called a \emph{query} and the
set $Ev$ is called the \emph{evidence}. Answering this query amounts
to computing the probability $\Pr(Q \mid E_1 \land \ldots \land E_m,
MLN)$. Using the product rule for probabilities we get
\begin{equation}
\begin{array}{l}
\displaystyle \Pr(Q \mid E_1 \land \ldots \land E_m, MLN)
= \frac{\Pr(Q \land E_1 \land \ldots \land E_m \mid MLN)}{\Pr(E_1 \land \ldots \land E_m \mid MLN)}\\
\displaystyle = \frac{\sum_{\gamma \in \mathbf{\Gamma}_Q \cap \mathbf{\Gamma}_E} \Pr(\gamma \mid MLN)}
{\sum_{\gamma \in \mathbf{\Gamma}_E} \Pr(\gamma \mid MLN)},
\end{array}
\label{eq9}
\end{equation}
where $\mathbf{\Gamma}_P$ is the set of \emph{all possible
configurations} where a ground predicate $P$ is true, and
$\mathbf{\Gamma}_E = \mathbf{\Gamma}_{E_1} \cap \ldots \cap
\mathbf{\Gamma}_{E_m}$.
Computing the above probabilistic query is intractable for the
majority of real application problems, including the computational
problems arising in systems biology.
Therefore, we need to approximate $\Pr(Q \mid E_1 \land \ldots \land
E_m, MLN)$, which can be done using various sampling-based algorithms.
Markov Logic Networks adopt a \emph{knowledge-based model
construction} approach consisting of two steps: 1. constructing the
smallest Markov Random Field from the original MLN that is sufficient
for computing the probability of the query, and 2. performing
inference over the Markov Random Field using traditional
approaches. One of the commonly
used inference algorithms is Gibbs sampling where at each step we
sample a ground predicate $X_j$ given its Markov blanket. In order to
define the probability of the node $X_j$ being in the state $\gamma_j$
given the state of its Markov blanket we use the earlier notation.
Given the $j$th ground predicate $X_j$, all the formulas containing
$X_j$ are denoted by $\mathbf{F}_j$. We denote the Markov blanket of
$X_j$ as $MB_j$ and a restriction of $\gamma$ to the Markov blanket
(the state of the Markov blanket) as $\gamma_{MB_j}$. Similarly, a
restriction of the state $\gamma$ to $X_j$ is denoted as
$\gamma_j$. Recall that each formula $F$ of a Markov Logic Network
corresponds to a feature of a Markov Random Field, where the feature's
value is the truth value $f$ of the formula $F$ depending on states
$\gamma_1, \ldots, \gamma_k$ of the ground predicates $X_1, \ldots,
X_k$ constituting the formula and denoted by $f=F|_{\gamma_1, \ldots,
\gamma_k}$. Note that, for the ground predicate $X_j$,
$F|_{\gamma_1, \ldots, \gamma_k}$ can also be
written as $F|_{\gamma_j, \gamma_{MB_j}}$. Using this notation we can
express the probability of the node $X_j$ to be in the state
$\gamma_j$ when its Markov blanket is in the state $\gamma_{MB_j}$ as
\begin{equation}
\Pr(\gamma_j \mid \gamma_{MB_j})
= \frac{\exp \left( \sum_{F_i \in \mathbf{F}_j} w_i F_i|_{\gamma_j, \gamma_{MB_j}} \right)}
{\sum_{t=0}^1 \exp \left( \sum_{F_i \in \mathbf{F}_j} w_i F_i|_{X_j=t, \gamma_{MB_j}} \right)}.
\label{eq10}
\end{equation}
In Gibbs sampling, we let the Markov chain converge and then
estimate the probability that a conjunction of ground predicates is
true by counting the fraction of samples from the estimated
distribution in which all the ground predicates hold. The Markov chain
is run multiple times in order to handle situations where the
distribution has multiple local maxima, so that the chain does not
remain trapped at one of the peaks. One of the current
implementations, called \emph{Alchemy}~\cite{Kok09}, attempts to
reduce the burn-in time of the Gibbs sampler by applying a local
search algorithm for the weighted satisfiability problem, called
MaxWalkSat~\cite{Selman93}.
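The sampler itself can be sketched in a few lines of Python: a scan-order Gibbs chain over a toy weighted-formula model, with the estimate checked against exact enumeration (the model and weights are illustrative, and this omits Alchemy's MaxWalkSat initialization):

```python
import random
from itertools import product
from math import exp

# Toy model: 3 binary variables and two weighted formulas.
w = [1.0, 0.6]

def counts(state):
    a, b, c = state
    return [int(a == b), int(b == c)]

def weight(state):
    return exp(sum(wi * ni for wi, ni in zip(w, counts(state))))

def gibbs_estimate(query, n_samples=20000, burn_in=2000, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(3)]
    hits = 0
    for it in range(burn_in + n_samples):
        j = it % 3  # scan order over the variables
        state[j] = 1
        p1 = weight(tuple(state))
        state[j] = 0
        p0 = weight(tuple(state))
        # Sample variable j from its conditional given the rest (eq. 10).
        state[j] = 1 if rng.random() < p1 / (p0 + p1) else 0
        if it >= burn_in and query(state):
            hits += 1
    return hits / n_samples

# Exact marginal Pr(a == b) by enumeration, for comparison.
Z = sum(weight(s) for s in product([0, 1], repeat=3))
exact = sum(weight(s) for s in product([0, 1], repeat=3) if s[0] == s[1]) / Z

est = gibbs_estimate(lambda s: s[0] == s[1])
assert abs(est - exact) < 0.1  # sampling noise is well below this tolerance
```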
\subsection*{Components of the Computational Method}
The general overview of the computational method is given in
figure~\ref{fig3}. In the first step, the method traverses the set of
all markers and assigns an error score to each marker indicating its
predictive power. The error score of a marker corresponds to the
performance of an MLN (\ref{eq4}) based on this single marker (the
variable $\mathtt{m}$ of (\ref{eq4}), which chooses the locations of
the markers used in the model, has only one value, the location of
this single marker). The algorithm selects markers whose error scores
are considerably lower than the average: we selected the outliers that
are 3 standard deviations below the mean.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.99\textwidth]{setup.eps}
\end{center}
\caption{ \textbf{Components of the computational method.} This
figure illustrates three major computational components. We used
cross-validation to estimate the goodness of fit of a model. Panel
\textbf{A} depicts 4-fold cross-validation. The data is
partitioned into four sets and at each iteration a model is
trained on three sets and then tested on the fourth set resulting
in four prediction error scores that are then averaged for a total
cross-validation error. Panel \textbf{B} shows details of model
training and testing. Using training data we estimate the weights
of each formula of an MLN template. The trained MLN is evaluated
on testing data resulting in an error score. We shuffle the order
in the training data and the order of the testing data and repeat
the training and testing. The error scores after each evaluation
are averaged to produce the average error score of the MLN. Panel
\textbf{C} shows how the components illustrated in panels
\textbf{A} and \textbf{B} are combined to search for the most
informative markers. At the $N$th iteration of the search, given
a data set and set of $N-1$ fixed markers, the method traverses
the set of all markers and evaluates models constructed using the
fixed markers and a selected marker. Note that at the $N$th
iteration, $N$-marker models consider the fixed markers ($N-1$) as
possible interactors with the next marker we found. The method
then selects the outlier model, adds the corresponding marker to
the set of fixed markers, and repeats the traversal of the
markers. The method stops when no outliers are found.}
\label{fig3}
\end{figure}
The current version of the method greedily selects a marker with the
lowest score and appends it to a list of fixed markers (which is
initially empty). The algorithm then repeats the search for an outlier
with the lowest score, but this time a marker's score is computed
using an MLN (\ref{eq4}) that estimates a joint effect of the marker
under consideration together with all of the currently fixed markers
on a phenotype (the variable $\mathtt{m}$ takes on the locations of
all fixed markers and the marker under consideration). At each
iteration the algorithm expands the list of fixed markers and rescans
all of the remaining markers for outliers. The method stops as soon as
no more outliers are detected and returns the fixed markers as
potential loci associated with the phenotype.
Our scanning for predictive genetic markers can be seen as an instance
of the \emph{variable selection} problem. We use cross-validation to
compare probabilistic models and select the one with the best fit to
the data and the smallest number of informative markers. Using
cross-validation we assess how a model generalizes to an independent
dataset, addressing the model overfitting problem and facilitating
unbiased outlier detection. Particularly, we use $K$-fold
cross-validation which is illustrated in
figure~\ref{fig3}(\textbf{A}). The data set is arbitrarily split into
$K$ folds, $\mathbf{D}_1, \ldots, \mathbf{D}_K$, and $K$ iterations
are performed. The $i$th iteration of cross-validation selects
$\bigcup_{j \ne i} \mathbf{D}_j$ as a training dataset and
$\mathbf{D}_i$ as a testing dataset. The model is then trained on the
training dataset and the performance of the model is assessed using
the testing dataset resulting in a prediction error. The average of
prediction errors from $K$ steps, called a cross-validation error, is
used as the score of the model. In the case of the yeast sporulation
efficiency dataset introduced earlier, we used 11-fold cross-validation (since a
population of 374 yeast strains can be evenly partitioned into 11
subsets with 34 strains in each). The results were generally
insensitive to the cross-validation parameters.
Recall that we distinguish two types of variables, the \emph{target
variables} whose values are predicted, and the \emph{predictor
variables} whose values are used to predict the values of the target
variables. In the example above, sporulation efficiency of a yeast
strain is a target variable, whereas genotype markers are predictor
variables. Note that in some cases we can treat variables as both
targets and predictors (e.g. gene expression in eQTL datasets).
During the evaluation phase in the $i$th iteration of cross-validation
we consider the testing dataset $\mathbf{D}_i$. Using the
knowledge-based model construction approach, we build a Markov Random Field that is
small yet sufficient to infer the values of all target variables in
the set $\mathbf{D}_i$. The target variables are inferred based on the
values of the predictor variables from $\mathbf{D}_i$ (see section
``Using MLNs for Querying'').
The model prediction of a target variable $X$ that can take on any
value from $\{x_1,x_2,x_3\}$ can be represented as a vector
$\hat{\mathbf{v}}=\langle p_1, p_2, p_3 \rangle$, where $p_j =
\Pr(X=x_j \mid \mathbf{D}_i, \Theta_{MRF})$ is the probability of $X$
to take on a value $x_j$ given the testing data $\mathbf{D}_i$ and the
Markov Random Field with parameters $\Theta_{MRF}$. On the other hand,
the actual value of $X$ (provided in the testing dataset
$\mathbf{D}_i$) can be represented as a vector $\mathbf{v}=\langle
v_1, v_2, v_3 \rangle$, where $\forall j \ne k, v_j = 0$ and $v_k = 1$
iff $X=x_k$ in $\mathbf{D}_i$. Then the prediction error should
measure the difference between the prediction $\hat{\mathbf{v}}$ and
the true value $\mathbf{v}$. We used the Euclidean distance
$d(\hat{\mathbf{v}},\mathbf{v})$ to compute the prediction error.
This approach might make comparison to other approaches difficult
since the error is bounded not by $0$ and $1$, but by $0$ and
$\sqrt{M}$, where $M$ is the size of the domain of $X$ ($3$ in our
example). Further computation is required to obtain standard
characteristics such as model accuracy or explained variance. On the
other hand, the Euclidean distance is certainly sufficient for
comparing the predictions of different models.
Due to the approximate nature of learning and inference in MLNs (see
sections ``Learning the Weights of MLNs'' and ``Using MLNs for
Querying''), two structurally identical models, trained on two data
sets that differ only in the order of the samples, can generate
predictions with slight differences. This is due to the fundamental
path-dependency of learning and inference algorithms in
knowledge-based model construction. For example, the order of
training data affects the order in which the Markov Random Field is
built, which in turn affects the way the approximate reasoning is
performed over the field. Path-dependency introduces artificial noise
into predictions and considerably reduces our ability to distinguish a
signal with a small magnitude (such as a possible minor effect of a
genetic locus on a phenotype) from a background.
In order to reduce the effect of path-dependency on overall model
prediction we shuffle the input data set and average the resulting
predictions. We employed an iterative approach, based on shuffling,
for denoising. At each iteration the model is retrained and
reevaluated on newly shuffled data and the running mean of the model
prediction is computed (see figure~\ref{fig3}(\textbf{B})). The
method incrementally computes the prediction average until achieving
convergence, namely until the difference between the running average
at the two consecutive iterations is smaller than $Th$ for $W$
consecutive steps. The parameters $Th$ and $W$ are directly connected
with the total amount of shuffling and re-estimation performed as
illustrated in figure~\ref{fig4}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=3in]{shuffling.eps}
\end{center}
\caption{ \textbf{The amount of shuffling depends on the threshold
and the size of the window.} Note that the number of iterations
until convergence of the algorithm increases when the threshold
decreases or the window size increases. Note also that selecting
tight stopping parameters tends to allow the algorithm to identify
more informative markers.}
\label{fig4}
\end{figure}
In order to perform rigorous denoising, we select a low value for
the threshold $Th$ ($0.0001$) and a large window size $W$ ($10$).
The shuffling-based denoising procedure is applied at each iteration
of the cross-validation. Averaging of the predictions after data
shuffling reduces the amount of artificial noise enabling the overall
method for detection of genetic loci to distinguish markers with a
smaller effect on the phenotype (the algorithm detects more
informative markers as illustrated in figure~\ref{fig4}).
There are many different strategies to search for the most informative
subset of genetic markers. In this section we used a greedy approach
in order to illustrate the general MLN-based modeling framework
presented in this paper. In the next section we show that MLN-based
modeling that accounts for dependencies between markers through
joint-inference allows us to find interesting biological results by
using a greedy search method. In order to be confident that the fixed
markers are meaningful, we manually selected markers at each iteration
of the search from the set of outliers and arrived at a similar set of
candidate loci (within the same local region).
\section*{Results}
The analyses presented in this paper are based on the dataset
from~\cite{Gerke09} containing both phenotype (sporulation efficiency)
and genotype information of yeast strains derived from one
intercross. The results are obtained using our method that searches
for the largest set of genetic markers with the strongest compound
effect on the phenotype. All the detailed information on the
computational components of our method is presented in the Methods
section.
In their paper Gerke et al.~\cite{Gerke09} identified 5 markers that
have an effect on sporulation efficiency including 2 markers whose
effect seems to be very small. Moreover, Gerke et al. provided
evidence for non-linear interactions between 3 major loci. The
presence of confirmed markers with various effect and non-linear
interactions make the dataset from~\cite{Gerke09} an ideal choice for
testing our computational method.
Our method allows us to define and to use essentially any genetic
model. First we used a simple regression-type model that mimics simple
statistical approaches, like GWAS. At the first stage, the method
compares the markers according to their individual predictive power.
The amount of the effect of a marker on the phenotype is estimated by
computing a prediction error score from a regression model based
solely on this marker. The top line on the left panel in figure~\ref{fig5}
illustrates the error scores of all markers ordered by location ($X$
axis). In figure~\ref{fig5} we observe three loci (around markers 71,
117, and 160) with the strongest effect on sporulation efficiency,
which were identified in~\cite{Gerke09}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.99\textwidth]{errors.eps}
\end{center}
\caption{ \textbf{Prediction of sporulation efficiency using a
subset of markers.} This figure shows the execution of the
algorithm plotted as error scores vs markers for a number of
models. The top line in the left plot shows error scores of the
models based only on a single marker plotted on the $X$-axis. A
green star indicates the outlier marker with the smallest error
and purple stars depict other outliers (markers whose error
differs from the mean error by 3 standard deviations). The second
line shows the error scores of the models based on a corresponding
marker together with the previously selected marker (indicated
with the green star). All the following lines are interpreted in
a similar way. The left plot shows five markers with a large
effect. The rest of the identified markers (six markers with a
small effect) are illustrated in the right plot.}
\label{fig5}
\end{figure}
At the next stage, our method adds the marker with the strongest
effect (marker 71) to all the following models. This allows us to
compare all other markers according to their predictive power \emph{in
conjunction} with the fixed marker 71. This time the prediction
error score of a marker indicates how well this marker \emph{together}
with the marker 71 predicts the sporulation efficiency. The score is,
thus, computed from the regression model based on two markers: the
marker 71 and a second marker. This is an important distinction from
traditional GWAS where the searches for multiple influential markers
are performed independently from each other. In our approach, using
MLNs and in particular joint-inference, the compound effect of markers
is estimated allowing us to see possible interactions between markers.
The method continues the iterations and selects 11 markers before the
error no longer improves sufficiently and the computation stops.
Among the selected markers, 5 are the same loci previously identified
in~\cite{Gerke09} (markers 71, 117, 160, 123, 79), 3 are the markers
next to the loci with the strongest effect (markers 72, 116, 161), and
3 are the new markers that have not been reported before (markers 57,
14, 20). In addition, the method identifies another marker (marker
130) as a candidate for a locus that has an effect on sporulation
efficiency, although this marker was not selected due to its weak
predictive power. Notice that even with a relatively simple model,
such as logistic regression, and a quite stringent criterion for
outliers (3 standard deviations from the mean, a $p$-value $0.003$ for
a normal distribution) we are able to exceed the number of previously
identified candidate loci. We argue that our method is more efficient at
discovering markers with a very low individual effect on phenotype
that have non-trivial interactions with other sporulation-affecting
loci due to the use of joint-inference of MLNs.
There are several distinct properties of our method that are important
to note. First, although the method selects the neighboring markers of
the three strongest loci, it does not select a neighbor immediately
after the original locus has been identified, because there are better
markers to be found. For example, after selecting the first marker 71,
the method finds markers 117 and 160, and only then selects marker 72,
which is the neighbor of 71. The method selects the next strongest
marker at each stage that maximally increases the compound effect of
selected markers. Second, our method does not find markers that do not
add sufficient predictive power. The criterion for outliers determines
when the method stops and determines the confidence that the added
markers have a real effect on the phenotype.
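The selection loop can be sketched as follows. This is a schematic stand-in, not the paper's implementation: an ordinary least-squares error score replaces the MLN joint-inference score, and the data, function names, and stopping threshold are hypothetical.

```python
import numpy as np

def prediction_error(X, y):
    """In-sample RMS error of a least-squares fit; a crude stand-in for the
    prediction error score computed from the MLN-based regression model."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(np.sqrt(np.mean((X1 @ coef - y) ** 2)))

def greedy_select(genotypes, phenotype, min_gain=1e-2):
    """At each stage, fix the already-selected markers and add the marker
    that most reduces the joint prediction error; stop once the
    improvement is no longer sufficient."""
    selected = []
    best_err = prediction_error(np.empty((len(phenotype), 0)), phenotype)
    while len(selected) < genotypes.shape[1]:
        scores = {m: prediction_error(genotypes[:, selected + [m]], phenotype)
                  for m in range(genotypes.shape[1]) if m not in selected}
        m_best = min(scores, key=scores.get)
        if best_err - scores[m_best] < min_gain:
            break
        selected.append(m_best)
        best_err = scores[m_best]
    return selected
```

On synthetic data with two planted informative markers, the loop recovers them in order of effect size and then stops, mirroring the stage-wise behavior described above.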
For each new marker (57, 14, 20, 130) we examined all genes that were
nearby (different actual distances were used). For example for the
marker 14 we considered all genes located between markers 13 and
15. Table~\ref{tab1} shows genes located near these newly identified
markers that are involved in either meiosis or in sporulation.
\begin{table}[!ht]
\caption{
\bf{Candidate genes near the new informative markers.}}
\begin{tabular}{|c|p{40pt}|p{30pt}|p{45pt}|c|p{220pt}|}
\hline
Marker & Euclidean Error & Coord. & Candidate Genes & GO & Description\\
\hline
57 & 0.7061 & 6, 103743 & YFL039C, ACT1 & S & Actin, structural protein involved in cell polarization, endocytosis, and other cytoskeletal functions\\
& & & YFL037W, TUB2 & M & Beta-tubulin; associates with alpha-tubulin (Tub1p and Tub3p) to form tubulin dimer, which polymerizes to form microtubules\\
& & & YFL033C, RIM15 & M & Glucose-repressible protein kinase involved in signal transduction during cell proliferation in response to nutrients, specifically the establishment of stationary phase; identified as a regulator of IME2; substrate of Pho80p-Pho85p kinase\\
& & & YFL029C, CAK1 & M & Cyclin-dependent kinase-activating kinase required for passage through the cell cycle, phosphorylates and activates Cdc28p\\
& & & YFL009W, CDC4 & M & F-box protein required for G1/S and G2/M transition, associates with Skp1p and Cdc53p to form a complex, SCFCdc4, which acts as ubiquitin-protein ligase directing ubiquitination of the phosphorylated CDK inhibitor Sic1p\\
& & & YFL005W, SEC4 & S & Secretory vesicle-associated Rab GTPase essential for exocytosis\\
\hline
14 & 0.7009 & 2, 656824 & YBR180W, DTR1 & S & Putative dityrosine transporter, required for spore wall synthesis; expressed during sporulation; member of the major facilitator superfamily (DHA1 family) of multidrug resistance transporters\\
& & & YBR186W, PCH2 & M & Nucleolar component of the pachytene checkpoint, which prevents chromosome segregation when recombination and chromosome synapsis are defective; also represses meiotic interhomolog recombination in the rDNA\\
\hline
130 & 0.7010 & 11, 447373 & YKR029C, SET3 & M & Defining member of the SET3 histone deacetylase complex which is a meiosis-specific repressor of sporulation genes; necessary for efficient transcription by RNAPII\\
& & & YKR031C, SPO14 & S & Phospholipase D, catalyzes the hydrolysis of phosphatidylcholine, producing choline and phosphatidic acid; involved in Sec14p-independent secretion; required for meiosis and spore formation; differently regulated in secretion and meiosis\\
\hline
20 & 0.6972 & 3, 188782 & YCR033W, SNT1 & M & Subunit of the Set3C deacetylase complex that interacts directly with the Set3C subunit, Sif2p; putative DNA-binding protein\\
\hline
\end{tabular}
\begin{flushleft}
The list of genes located near the new markers identified by the
MLN-based method. The table shows only the genes that are involved
in sporulation or meiosis. Specific information for the genes can
be found at \emph{www.yeastgenome.org}.
\end{flushleft}
\label{tab1}
\end{table}
The simple logistic regression-type model that was used can be
summarized using the first-order formula $\mathtt{G(strain,m,g)}
\Rightarrow \mathtt{E(strain,v)}$, which captures the effect of a
subset of markers on phenotype. In order to investigate gene-gene
interactions we used a pair-wise model which can be summarized by the
formula $\mathtt{G(strain,m1,g1)} \land \mathtt{G(strain,m2,g2)}
\Rightarrow \mathtt{E(strain,v)}$. The pair-wise model subsumes the
simple regression model, since whenever $\mathtt{m1}$ and
$\mathtt{m2}$ are identical, the pair-wise MLN is mapped to the same
set of cliques as those from the simple MLN. However, the pair-wise
model defines the dependencies between two loci and a phenotype that
are mapped to an additional set of 3-node cliques. The pair-wise
model allows us \emph{explicitly} to account for the pair-wise gene
interactions. When using the pair-wise model, the joint-inference is
performed over an MLN where possible interactions between two markers
are specified with first-order formulas.
The assumption inherent in genome-wide analyses is that a simple
additive effect can be observed when applying the pair-wise model to
loci that do not interact: the compound effect is essentially a sum of
the individual effects of each locus. On the other hand, for two
interacting markers, the pair-wise model is expected to predict a
larger-than-additive compound effect. Since the pair-wise model
incorporates possible interactions, the prediction error of this model
should be smaller than the error of a simple model by a factor that
corresponds to how much the interaction information helps to improve
the prediction.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=3in]{compare_simple_and_71.eps}
\end{center}
\caption{ \textbf{Investigating the 71-117 loci interaction.} This
figure compares a standard genome-wide scan made by a simple
regression model based on a single marker (red line) and a scan
made by a pair-wise model based on two markers, one of which is
preset to 71 (blue line). The green lines represent the size of
the leftmost red peak corresponding to the difference $d$ between
the baseline prediction error of the simple model and the error
$error_S(71)$. Pink bars represent how much the difference between
$error_S(117)$ and $error_{PW}(71,117)$ is larger than $d$. The
large size of the leftmost pink bar indicates a strong interaction
between markers 71 and 117.}
\label{fig6}
\end{figure}
By using the pair-wise model, we investigated the presence of
interactions between markers 71, 117, 160 which correspond to loci
with the strongest effect on sporulation efficiency. We denote the
prediction error of a simple regression model based on markers $M_1,
\ldots, M_n$ as $error_S(M_1, \ldots, M_n)$, and the error of a
pair-wise model based on markers $M_1, \ldots, M_n$ as
$error_{PW}(M_1, \ldots, M_n)$. Figure~\ref{fig6} compares the
prediction errors of the simple regression model based on one marker
(red line) and the errors of the pair-wise model based on two markers
one of which is preset to 71 (blue line). Note that the baseline
prediction error of the pair-wise model is the same as $error_S(71)$,
which means that on average the choice of a second marker in the
pair-wise model does not affect the prediction. There are, however, 2
markers that visibly improve the prediction, namely markers 117 and
160. Note that the prediction error $error_{PW}(71,160)$ (the right
blue peak) is lower than $error_S(160)$ (the rightmost red peak) by
only a value roughly equal to the difference between the average
prediction errors of simple and pair-wise models (this value is equal
to the size of the leftmost red peak). The reduction of the
prediction error, when combining markers 71 and 160, is additive,
suggesting that there is no interaction between these two markers. On
the other hand, if we look at the effect of combining markers 71 and
117, we can see that the prediction improvement using the pair-wise
model based on both markers (the size of the left blue peak) is
considerably more than just a sum of prediction improvements of two
simple models independently (the leftmost and the middle red peaks).
The non-additive improvement suggests that there is an interaction
between the markers 71 and 117.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=3in]{compare_simple_and_117.eps}
\end{center}
\caption{ \textbf{Investigation of 117-71 and 117-160 loci
interactions.} This figure compares a standard genome-wide scan
using a single-marker model and a scan using a pair-wise model
based on two markers, one of which is preset to 117. See the
caption of figure~\ref{fig6} for more details. Both pink bars
indicate the presence of non-additive interactions between markers
117 and 71 and markers 117 and 160.}
\label{fig7}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=3in]{compare_simple_and_160.eps}
\end{center}
\caption{ \textbf{Investigating an interaction between markers 117
and 160.} This figure compares a standard genome-wide scan using
a single-marker model and a scan using a pair-wise model based on
two markers, one of which is preset to 160. See the caption of
figure~\ref{fig6} for more details. Note $error_{PW}(71,160)$ that
is almost the same as $error_{PW}(117,160)$ even though
$error_S(71)$ is considerably lower than $error_S(117)$. The tall
pink bar on the right side of the figure indicates a non-additive
interaction between markers 117 and 160. On the other hand, the
pink bar on the left is almost non-existent indicating absence of
an interaction between markers 71 and 160.}
\label{fig8}
\end{figure}
Figures~\ref{fig7} and \ref{fig8} show a similar analysis to that
illustrated in figure~\ref{fig6} performed on the other two
markers. The analysis shown in figure~\ref{fig7} confirms the
interactions between markers 71 and 117 and, additionally, suggests
that there is an interaction between markers 117 and 160.
Figure~\ref{fig8} confirms the interaction between 117 and 160 and the
absence of the interaction between 71 and 160. One can see from
figure~\ref{fig8} that the leftmost blue peak indicating
$error_{PW}(71,160)$ is a sum of $error_S(71)$ and $error_S(160)$ (the
pink bar next to the left pink star is extremely short). On the other
hand, the rightmost blue peak is a lot more than just a sum of
individual errors (the pink bar is tall). In fact,
$error_{PW}(117,160)$ is almost the same as $error_{PW}(71,160)$.
The two predicted interactions, 71-117 and 117-160, were
experimentally identified in~\cite{Gerke09}. The strength of these
interactions is significant enough to immediately stand out during the
analysis in figure~\ref{fig6}. We next applied this analysis to the
set of all nine identified loci (71, 117, 160, 123, 57, 14, 130, 79,
20) in order to quantify possible interactions between every pair of
markers. For each two markers $A$ and $B$ from the set of loci we
compute the prediction errors of a simple model based solely on either
$A$ or $B$, denoted as $error_S(A)$ and $error_S(B)$. We also compute
the prediction error of a pair-wise model based on both $A$ and $B$,
denoted as $error_{PW}(A,B)$. Consequently the size of a possible
interaction between $A$ and $B$, denoted as $i(A,B)$, is estimated
using the following expression:
\begin{equation}
\begin{array}{l}
\displaystyle i(A,B) = d(A,B) - d(A) - d(B)\\
\displaystyle = (median - error_{PW}(A,B)) - (median - error_S(A)) - (median - error_S(B))\\
\displaystyle = error_S(A) + error_S(B) - error_{PW}(A,B) - median.
\end{array}
\label{eq11}
\end{equation}
Here $median$ is a baseline of prediction error of a simple model
based on a single marker. We averaged the errors over $10$
independently computed iterations. We next determined how high the
value $i(A,B)$ should be in order to confidently predict an
interaction between markers $A$ and $B$. We selected $36$ pairs of
random markers that were not from the set of nine
informative markers and computed $i(A,B)$ for each pair. Since we do
not expect any interactions between random, non-informative markers,
their $i(A,B)$ values are used to estimate a confidence interval for
no interaction. We computed a mean and standard deviation of the set
of $i(A,B)$ values corresponding to the randomly chosen markers. It
is estimated that a value of $2.54$ standard deviations away from the
mean completely covers the set of $i(A,B)$ values for all random
markers, and we therefore argue there is a strong likelihood of
interaction between markers $A$ and $B$ whenever $i(A,B)$ is more than
$3$ standard deviations away from the mean. Whenever $i(A,B)$ is less
than $3$ but more than $2.54$ standard deviations away from the mean,
we argue there is a probable interaction between $A$ and $B$.
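The scoring-and-thresholding procedure can be sketched as follows; this is a schematic stand-in with hypothetical error tables, since the real $error_S$ and $error_{PW}$ values come from the MLN models.

```python
import numpy as np

def interaction_score(err_s, err_pw, median, a, b):
    """i(A,B) = error_S(A) + error_S(B) - error_PW(A,B) - median,
    as in equation (11)."""
    return err_s[a] + err_s[b] - err_pw[(a, b)] - median

def interaction_calls(i_values, null_scores, hi=3.0, lo=2.54):
    """Classify candidate pairs against a null distribution of i(A,B)
    values computed from random, presumably non-interacting pairs:
    strong if beyond hi standard deviations, probable if between lo and hi."""
    mu, sd = np.mean(null_scores), np.std(null_scores)
    z = {pair: abs(v - mu) / sd for pair, v in i_values.items()}
    strong = [p for p, s in z.items() if s >= hi]
    probable = [p for p, s in z.items() if lo <= s < hi]
    return strong, probable
```

For instance, with hypothetical values $error_S(71)=0.66$, $error_S(117)=0.68$, $error_{PW}(71,117)=0.58$ and a median baseline of $0.70$, the score is $i(71,117)=0.06$, which is then compared against the spread of the null scores.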
Estimated interactions between identified loci are illustrated as a
network of marker interactions in figure~\ref{fig9} where the color of
each link represents the level of confidence of the corresponding
interaction. We repeated the estimation of interactions by randomly
selecting another $36$ pairs of markers and computing the confidence
intervals for the new set ($2.32$ standard deviations). The probable
interactions identified from the first experiment were confirmed in
the second experiment. Two possible interactions, however, were
identified in one experiment but not the other and are depicted in
figure~\ref{fig9} with dashed links. This is a result of a slightly
shifted mean of the second set of $36$ random marker pairs relative to
the first set, and the marginal size of the effect. Two interactions
with very large $i(A,B)$ values, 71-117 and 117-160, were previously
identified in~\cite{Gerke09}. We also found several smaller
interactions illustrated in figure~\ref{fig9} that have not been
identified before. Note that since we measure absolute values
$i(A,B)$, it is not a surprise that interactions 71-117 and 117-160
are so large, since the corresponding loci (71, 117, 160) are by far
the strongest. Locus 117, which is involved in the strongest
interactions, corresponds to the gene \emph{IME1}~\cite{Gerke09}, the
master regulator of meiosis (\emph{www.yeastgenome.org}). Since
\emph{IME1} is a very important sporulation gene, it is entirely
reasonable that this gene is central to our interaction network
(figure~\ref{fig9}).
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.99\textwidth]{interactions1.eps}
\end{center}
\caption{ \textbf{Estimated network of gene-gene interactions.} This
figure shows a network of estimated interactions between loci
based on genetic data and sporulation phenotype. The color of the
links corresponds to the level of confidence of interactions. The
three darkest colors are associated with the most likely
interactions. The fourth color is associated with an interaction
123-117 that scored high in the first experiment (more than $3$
standard deviations away from the mean), but lower in the second
experiment ($2.5$ standard deviations), although still above the
confidence level ($2.32$ standard deviations). Two possible
interactions depicted with the dashed lines with the lightest
color scored above the confidence level in the first experiment
($2.54$ standard deviations), but below the confidence level in
the second experiment ($1$ standard deviation). Loci 160, 71, 117,
123, 79 were previously identified in~\cite{Gerke09}. Moreover,
the genes of the three major loci were detected: \emph{IME1}
(locus 117), \emph{RME1} (locus 71), and \emph{RSF1} (locus 160)
\cite{Gerke09}. The two strongest interactions, 160-117 and
117-71, were also identified in~\cite{Gerke09}.}
\label{fig9}
\end{figure}
\section*{Discussion}
The method presented in this paper provides a framework for using
virtually any genetic model in a genome-wide study because of the high
representational power of MLNs. This power stems from the use of
general, first-order logic conjoined to probabilistic
reasoning. Moreover, the use of knowledge-based model approaches that
build models based on both data and a \emph{relevant} set of first
order formulas~\cite{Wellman92} allows us to efficiently incorporate
prior biological knowledge into a genetic study. The generality of
MLNs allows greater representational power than most modeling
approaches. The general approach can be viewed as a seamless
unification of statistical analysis, model learning and hypothesis
testing.
As opposed to standard genome-wide approaches to genetics which assume
additivity, the aim of the method described in this article is not to
return values corresponding to the strength of individual effects of
each marker. Our method aims at discovering the loci that are
involved in determining the phenotype. The method computes the error
scores for each marker in the context of the others representing the
strength of each marker's effect in combination with other markers. It
will be valuable in the future to derive a scoring technique for markers
that can be compared directly with the results of traditional
approaches. In general, our approach provides a way of searching for
the best model predicting the phenotype from the genetic loci. Since
the model and the corresponding joint-inference methodology
incorporate the relations between the model variables, we are able to
begin a quantitative exploration of possible interactions between
genetic loci.
Our method shows promise in that it can accommodate complex models
with internal relationships among the variables specified. The
development of a succinct and clear language and grammar based on FOL
for the description of (probabilistic) biological systems knowledge
will be critical for the widespread application of this method to
genetic analyses. Achieving this goal will also represent a
significant step toward the fundamental integration of systems biology
and the analysis and modeling of networks with genetics.
Additionally, the development of the biological language describing
useful biological constraints can alleviate the computational burden
associated with model inference. MLN-based methods, such as ours, that
perform both logic and probabilistic inferences are computationally
expensive. While increases in computing power steadily reduce the
magnitude of this problem, there are other approaches that will be
necessary. Given such a focused biological language, we could tailor
the learning and inference algorithms specifically to our needs and
thus reduce the overall computational complexity of the method.
Another future direction will be to find fruitful connections between
a previously developed information theory-based approach to gene
interactions~\cite{Carter09} with this AI-derived approach. There are
several other clear applications of this approach in the field of
biology: for example, the problem discussed here shares many
similarities with the problems of data integration.
Biomarker identification, particularly for panels of biomarkers, is
another important problem that involves many challenges of data
integration and that can benefit from our MLN-based approach. Similar
to GWAS or QTL mapping, where we search for genetic loci that are
linked with a phenotype of interest, in biomarker detection we search
for proteins or miRNAs that are associated with a disease state in a
set of patient data. Just as in genetics, we can represent biomarkers
as a network because there are various underlying biological
mechanisms that govern the development of a disease. Often the most
informative markers from a medical point of view have weak signals by
themselves. MLNs can allow us to incorporate partial knowledge about
the underlying biological processes to account for the
inter-dependencies making the detection of the informative biomarkers
more effective. It is clear that the approach described here has the
potential to integrate the biomarker problem with human genetics, a
key problem in the future development of personalized medicine.
\section*{Acknowledgments}
The authors are grateful to Aimee Dudley, Barak Cohen and Greg Carter
for stimulating discussions and also Barak Cohen for sharing
experimental data. This work was supported by a grant from the NSF
FIBR program to DG, and the University of Luxembourg-Institute for
Systems Biology Program. We also thank Dudley, Tim Galitski, Andrey
Rzhetsky and especially Greg Carter for comments on the manuscript.
\section*{Introduction}\label{S:0}
We consider an isothermal binary alloy of two species $A$ and $B$, and denote by $u\in [-1,1]$ the ratio between the two components. By thermodynamic arguments, and under a mass conservation property, Cahn and Hilliard described a fourth-order model for the evolution of an isotropic system of nonuniform composition or density. They introduced a free energy density $\bar{f}$ to define a chemical potential, and used it in the classical transport equation (see \cite{C}, \cite{CH1} and \cite{CH2}). The total free energy $\mathcal{F}$ of the binary alloy is a volume integral on $\Omega$ of this free energy density (bulk free energy):
\begin{equation}\label{Eq:0.1}
\mathcal{F} := \int_{\Omega} \bar{f}(u,\nabla u, \nabla^2u, \dots) \ \dd V.
\end{equation}
They assumed $\bar{f}$ to be a function of $u$ and its spatial derivatives.
A truncated Taylor expansion of $\bar{f}$ has thus the following general form:
\begin{equation}\label{Eq:0.2}
\bar{f}(u) \sim f(u) + L \cdot \nabla u + K_{1}\otimes \nabla^2u+\nabla u \cdot K_{2} \cdot \nabla u,
\end{equation}
where $\nabla$ is the Nabla operator.
By symmetry arguments, they showed that $L = \vec{0}$ and that $K_{1}$ and $K_{2}$ are homothetic operators. Moreover, they used Neumann boundary
conditions to cancel the term in $\nabla^2u$, which yields
\begin{equation}\label{Eq:0.3}
\mathcal{F} := \int_{\Omega} \left( f(u) + \kappa |\nabla u|^2 \right)\dd V,
\end{equation}
where $\kappa$ is a parameter (often denoted $\varepsilon^2/2$) which is referred to as the {\em gradient coefficient}.\\
Then, the chemical potential $w$ is defined by:
\begin{equation}\label{Eq:0.4}
w := f'(u) - 2\kappa \Delta u.
\end{equation}
$\Delta$ is the Laplace operator. If we denote by $J$ the flux and by $\mathcal{M}(u)$ the mobility, the classical Fick law provides the following equations:
\begin{equation}\label{Eq:0.5}
\partial_{t}u = -\nabla \cdot J \text{ and } J = -\mathcal{M}(u) \nabla w.
\end{equation}
Finally, the Cahn-Hilliard equation takes the following general form:
\begin{equation}\label{Eq:0.6}
\left\{\begin{array}{ll}
\partial_{t} u = \nabla\cdot\left[\mathcal{M}(u) \nabla w\right],&\text{ on } \Omega\subset \R^d,\\
\\
w = \psi(u)-\varepsilon^2\Delta u ,&\text{ on } \Omega\subset \R^d,\\
\\
\nabla u \cdot \nu = 0 = \nabla w \cdot \nu, &\text{ on } \partial\Omega,\\
\end{array}\right.
\end{equation}
where $t$ denotes the time variable, $\varepsilon$ ($=\sqrt{2\kappa}$) is a measure of the interfacial thickness, $\psi$ ($=f'$) is a nonlinear term, $\mathcal{M}$ is the mobility function, $\nu$ is the outward pointing unit normal on the boundary $\partial\Omega$.
It is well known that the Cahn-Hilliard equation is a gradient flow in $\mathrm{H}^{-1}$ with
Lyapunov energy functional $\mathcal{F}$.\par\smallskip
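This dissipation can be seen by a short formal computation (assuming sufficient smoothness, and using both boundary conditions of \eqref{Eq:0.6} in the integrations by parts):
\begin{displaymath}
\frac{\dd}{\dd t}\,\mathcal{F}(u)
= \int_{\Omega} \left( f'(u) - 2\kappa \Delta u \right) \partial_{t}u \ \dd V
= \int_{\Omega} w \, \nabla\cdot\left[\mathcal{M}(u)\nabla w\right] \dd V
= - \int_{\Omega} \mathcal{M}(u)\,|\nabla w|^2 \ \dd V \;\leq\; 0.
\end{displaymath}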
For a regular uniform alloy, the free energy $f$ is explicitly given by:
\begin{equation}\label{Eq:0.7}
f : u \mapsto N_{m} k_{B} T_{c}\frac{1-u^2}{2} + N_{m}k_{B}T\left[\frac{1+u}{2}\ln\left(\frac{1+u}{2}\right)+\frac{1-u}{2}\ln\left(\frac{1-u}{2}\right)\right],
\end{equation}
where $k_{B}$ is the Boltzmann constant, $N_{m}$ a molecular density, $T$ the temperature and $T_{c}> T$ the critical temperature. Thus the nonlinear term $\psi$ is:
\begin{equation}\label{Eq:0.8}
\psi:=f' : u \mapsto -N_{m} k_{B} T_{c}u + \frac{N_{m}k_{B}T}{2}\ln\left(\frac{1+u}{1-u}\right),
\end{equation}
which is singular at $u=\pm1$. These singularities give rise to the first difficulty in a numerical study, so this function $\psi$ is often replaced by the derivative of the classic quartic double-well potential, where $f$ takes the following form:
\begin{equation}\label{Eq:0.9}
f : u \mapsto \frac{1}{4}\left(1-u^2\right)^2,
\end{equation}
with derivative:
\begin{equation}\label{Eq:0.10}
\psi : u \mapsto u^3-u.
\end{equation}
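To make the contrast concrete, the following short sketch compares the two derivatives near a pure phase; the normalization $N_{m}k_{B}=1$ and the values $T=0.8$, $T_{c}=1$ are ours, chosen only for illustration.

```python
import math

def psi_log(u, T=0.8, Tc=1.0, nm_kb=1.0):
    """Derivative (0.8) of the logarithmic free energy, with N_m k_B = nm_kb;
    singular as u -> +-1."""
    return -nm_kb * Tc * u + 0.5 * nm_kb * T * math.log((1.0 + u) / (1.0 - u))

def psi_poly(u):
    """Derivative (0.10) of the quartic double-well surrogate."""
    return u**3 - u

# psi_log diverges near the pure phases while the surrogate stays bounded
for u in (0.5, 0.9, 0.99, 0.999):
    print(f"u={u}: log {psi_log(u):+.3f}  poly {psi_poly(u):+.4f}")
```

Both derivatives vanish at $u=0$, but the logarithmic one grows without bound as $u\to\pm1$, which is precisely the numerical difficulty mentioned above.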
The Cahn-Hilliard equation has been extensively studied in the case where $\psi$ is replaced by a polynomial function (see \cite{CH1}, \cite{LANGER} and \cite{MR763473}). Furthermore, this model has been used successfully for describing phase separation phenomena, see for example the survey \cite{MR1657208}, and the references therein, or other recent results on spinodal decomposition and nucleation in \cite{MR1232163, MR2342011, MR1214868, MR1637817, MR1753703, MR1712442, MR1763320, MR2048517}. Recently, Ma and Wang have studied the stationary solutions of the Cahn-Hilliard equation (see \cite{MAWANG}).
The case of non smooth $\psi$ has been the object of much less research (see \cite{MR1123143} and \cite{MR1327930}).
Several other simplifications are often made. The mobility $\mathcal M$ is often assumed to be constant and the physical parameters are set to $1$, as we have done above in \eqref{Eq:0.9}. For a more physically relevant choice of mobility, we mention~\cite{MR1300532}, where the form $\mathcal{M}(u)=\max\{0,1-u^2\}$ is proposed. Among the physical parameters, $\varepsilon$ plays a peculiar role since it may lead to different asymptotic behaviors and equilibria (see \cite{MR1950337} and section \ref{S:3}). The study of the evolution as $\varepsilon \rightarrow 0$ is of great importance: in particular, a constant mobility leads to a Mullins-Sekerka evolution (nonlocal coupling) whereas a degenerate mobility leads to a purely local geometric motion (see \cite{MR1742748}). Furthermore, when the interface thickness is of the order of a nanometer, an artificially large parameter $\varepsilon$ is often used to regularize the numerical problem. When a fine resolution is out of reach, a change in the height of the barrier between wells in the free energy density, coupled with a change in $\varepsilon$, allows simulations with larger length scales (see \cite{MR2464502} for details).
\par\medskip
The evolution of the solution of~\eqref{Eq:0.6} can essentially be split into two stages. The first one is the \emph{spinodal decomposition} described in section \ref{S:2}, where the two species quickly separate from each other. Over longer times, the evolution is slower, and the solution tends to reduce its interfacial energy. These two stages require different methods for an efficient global simulation. In the beginning, a very small time step and a fine grid resolution allow accurate computation, but this is not appropriate for capturing long-time behaviors. Adaptive time-stepping and/or an adaptive mesh can therefore improve the efficiency of the algorithms. However, in the long-time evolution, the interfaces have to be precisely captured, so that a global adaptive mesh cannot be used. In the literature, many technical ideas have been studied: adaptive refinement of the time-stepping or of the mesh, $\mathcal{C}^1$ elements (see \cite{MR2464502}), multigrid resolution (see \cite{MR2183612}).\\
We propose here an alternative method using high-degree $\mathcal{C}^0$ Lagrange nodal finite elements with a constant mobility $\mathcal{M}\equiv1$. The use of the $p$-version (increasing the polynomial degree, see \cite{MR615529}) instead of the $h$-version (decreasing the mesh-step) has proved to be efficient for propagation \cite{MR2084226,MR1353516,MR1445739}, corner singularities \cite{MR947469}, or oscillating problems \cite{MR2340008}. The numerical results obtained here with the finite element library \textsc{M\'elina}~\cite{Melina} show that this method is suitable in the Cahn-Hilliard framework as well.
Our paper is organized as follows: in section \ref{S:1}, we briefly describe the discretization (in both time and space), including the nonlinear solver and the high-degree finite elements we used. Sections \ref{S:2} and \ref{S:3} are respectively devoted to the numerical results for the one-dimensional and the two-dimensional problem. We investigate the performance of our method through different quantitative and qualitative aspects of the Cahn-Hilliard equation: comparison to an explicit profile-solution in 1D (see section \ref{S:2}), spinodal decomposition (see section \ref{S:2}), discussion about polynomial approximations of the logarithmic potential (see section \ref{S:2}), impact of the temperature and the parameter $\varepsilon$ (see sections \ref{S:2} and \ref{S:3}), long-time behavior and asymptotic stable states (see section \ref{S:3}). The numerical results are compared with existing ones in the literature, validating our approach.
\section{Discretization}\label{S:1}
\subsection{Space-Time schemes}\label{S:1.1}
We start with the description of the time discretization. Given a large integer $N$, a time step $\tau$, and initial data $(w_{0},u_{0})$, we denote by $(w_{n},u_{n})_{n \leq N}$ the sequence of approximations at uniformly spaced
times $t_n=n\tau$. The backward Euler scheme is given by:
\begin{equation}\label{Eq:1.1}
\left\{\begin{array}{ll}
\frac{u_{n+1}-u_{n}}{\tau} = \Delta w_{n+1},\\
\\
w_{n+1} = \psi(u_{n+1})-\varepsilon^2\Delta u_{n+1}.
\end{array}\right.
\end{equation}
A Crank-Nicolson scheme could easily be implemented, but our experiments show that it gives results quite similar to the ones we shall present in the sequel.
These schemes generalize immediately to our setting. We denote by $\langle\cdot,\cdot\rangle$ the scalar product in $\mathrm{L}^2(\Omega)$. We use the standard Sobolev space $\mathrm{H}^1(\Omega)$
equipped with the seminorm
\begin{displaymath}
|h|_{1} = \|\nabla h\|_{\mathrm{L}^2},
\end{displaymath}
and with the norm
\begin{displaymath}
\|h\|_{1} = \left(|h|_{1}^2 + \|h\|^2_{\mathrm{L}^2}\right)^{1/2}.
\end{displaymath}
The weak form of the equation \eqref{Eq:1.1} reads:
\begin{equation}\label{Eq:1.3}
\left\{\begin{array}{ll}
\langle u_{n+1}-u_{n},\chi\rangle = -\tau\langle \mathcal{M}(u_{n+1}) \nabla w_{n+1},\nabla\chi\rangle, \text{ for all } \chi \in X_{1},\\
\\
\langle w_{n+1},\xi\rangle = \langle \psi(u_{n+1}),\xi\rangle+\langle\varepsilon^2\nabla u_{n+1},\nabla\xi\rangle, \text{ for all } \xi \in X_{2},
\end{array}\right.
\end{equation}
where $X_{1}$ and $X_{2}$ are the spaces of test functions ($\mathrm{H}^1(\Omega)$ for example). We discretize in space by continuous finite elements. Given a polygonal domain $\Omega$, for a small parameter $h>0$, we partition $\Omega$ into a set $\mathcal{T}^h$ of disjoint open elements $K$ such that $h=\max_{K \in \mathcal{T}^h} (\mathrm{diam}(K))$ and $\mathop{\bigcup}_{K \in \mathcal{T}^h}\overline{K} = \overline\Omega$.
Thus, we define the finite element space
\begin{equation}\label{Eq:1.4}
V^h = \left\{\chi \in \mathcal{C}(\bar{\Omega}) : \chi\big|_{K} \in \mathbb{P} \text{ for all } K \in \mathcal{T}^h\right\},
\end{equation}
where $\mathbb{P}$ is a space of polynomial functions, see section \ref{S:1.3}. We denote by $(\varphi_{j})_{j\in J}$ the standard basis of nodal functions. Thus, for $u$ and $v \in \mathcal{C}(\overline{\Omega})$, we define the \emph{lumped scalar product} by:
\begin{displaymath}
\langle u , v \rangle^h := \sum_{i,j} \langle u,\varphi_{i}\rangle\langle v,\varphi_{j}\rangle\langle \varphi_{i},\varphi_{j}\rangle.
\end{displaymath}
The scheme \eqref{Eq:1.3} can be rewritten in the fully discrete form, just by replacing the continuous scalar product with the lumped scalar product.
We denote $\mathbf{u}= (u_{j})_{j\in J}$ and $\mathbf{w}= (w_{j})_{j\in J}$, the finite dimensional representation of $u$ and $w$ (we omit here the subscript $n$ of the time scheme). Then we define the matrices $\mathbf{A}$ and $\mathbf{M}$, whose coefficients are given by the following relations:
\begin{displaymath}
\begin{array}{rcll}
[\mathbf{A}]_{ij} &:=& \langle \nabla\varphi_{i}, \nabla\varphi_{j} \rangle,&\text{``stiffness'' matrix}, \text{ for all } i,j \in J,\\
\\
\left[\mathbf{M}\right]_{ij} &:=& \langle \varphi_{i}, \varphi_{j} \rangle,&\text{``mass'' matrix}, \text{ for all } i,j \in J.\\
\end{array}
\end{displaymath}
For each time-step, given the previous solution $(\mathbf{w}_{n},\mathbf{u}_{n})$, the pair $(\mathbf{w}_{n+1},\mathbf{u}_{n+1})$ is the solution of the system
\begin{equation}\label{Eq:1.5}
\left\{\begin{array}{llcl}
\tau\mathbf{A} \mathbf{w}_{n+1} &+ \mathbf{M} \mathbf{u}_{n+1} &=& \mathbf{M} \mathbf{u}_{n},\qquad\qquad\quad\!\!\\
\\
\mathbf{M}\mathbf{w}_{n+1} &- \varepsilon^2 \mathbf{A}\mathbf{u}_{n+1}- \mathbf{M}\mathbf{\Psi}(\mathbf{u}_{n+1}) &=&0,
\end{array}\right.
\end{equation}
where $\mathbf{\Psi}$ is the pointwise (nodal) application of $\psi$, and $(\mathbf{w}_{0},\mathbf{u}_{0})$ is the finite dimensional representation of the initial data. The system \eqref{Eq:1.5} is clearly block-symmetric. A proof of the convergence of this scheme can be found in \cite{MR1609678}.
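The structure of the system \eqref{Eq:1.5} can be sketched as follows with a one-dimensional $P_1$ toy assembly in Python (not the \textsc{M\'elina} code used for the actual computations; $\tau$ and $\varepsilon^2$ are arbitrary illustrative values), checking in particular the claimed block symmetry:

```python
import numpy as np

def assemble_1d(nodes):
    """P1 stiffness and mass matrices on a 1D mesh."""
    n = len(nodes)
    A, M = np.zeros((n, n)), np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        M[e:e + 2, e:e + 2] += h * np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0
    return A, M

tau, eps2 = 1e-3, 0.07                 # illustrative values
nodes = np.linspace(0.0, 1.0, 21)
A, M = assemble_1d(nodes)
# linear part of the scheme: [[tau*A, M], [M, -eps2*A]]
L = np.block([[tau * A, M], [M, -eps2 * A]])
print(np.allclose(L, L.T))             # block-symmetric, as claimed
print(np.allclose(A @ np.ones(nodes.size), 0.0))  # constants lie in the kernel of A
```

The last line is the ingredient behind the mass conservation discussed in the next subsection.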
\subsection{Nonlinear solver}\label{S:1.2}
At each time step, we use a Newton procedure to solve the implicit nonlinear system~\eqref{Eq:1.5}.
For \eqref{Eq:1.5}, we define the operator $\mathbf{L}$ by:
\begin{equation*}
\mathbf{L} = \left(\begin{array}{cc}
\tau\mathbf{A} &\mathbf{M} \\
\mathbf{M} &- \varepsilon^2 \mathbf{A}
\end{array}\right).
\end{equation*}
Then denote by $\mathbf{S}$ the matrix acting on the solution at the previous time-step in the backward Euler scheme,
\begin{equation*}
\mathbf{S} = \left(\begin{array}{cc}
0 &\mathbf{M} \\
0&0
\end{array}\right).
\end{equation*}
Denote also by $\mathbf{G}$ the following operator:
\begin{equation*}
\mathbf{G} (\mathbf{w},\mathbf{u}) :=\left(\begin{array}{c}
0\\-\mathbf{M}\mathbf{\Psi}(\mathbf{u})
\end{array}\right).
\end{equation*}
Finally denote by $\mathbf{Y}_{n}$ the couple $(\mathbf{w}_{n},\mathbf{u}_{n})$ for each $n \leq N$.
The backward Euler scheme at each time-step satisfies the following formula:
\begin{equation}\label{Eq:1.7}
\mathbf{L} \mathbf{Y}_{n+1} + \mathbf{G} (\mathbf{Y}_{n+1}) - \mathbf{S} \mathbf{Y}_{n} = 0.
\end{equation}
The Newton iterates $(\mathbf{Y}_{n}^k:=(\mathbf{w}_{n}^k,\mathbf{u}_{n}^k))_{k\in\N}$ satisfy for each $n\leq N$
\begin{equation}\label{Eq:1.8}
\left\{\begin{array}{lcl}
\mathbf{Y}_{n}^0 &=& \mathbf{Y}_{n},\\
\\
\mathbf{Y}_{n}^{k+1} &=& \mathbf{Y}_{n}^{k} - \left[\mathbf{L}+D_{\mathbf{G}}\left(\mathbf{Y}_{n}^{k}\right)\right]^{-1} \left[\mathbf{L}\mathbf{Y}_{n}^{k}+\mathbf{G}\left(\mathbf{Y}_{n}^{k}\right)-\mathbf{S}\mathbf{Y}_{n}\right], \text{ for all } k \in \N,
\end{array}\right.
\end{equation}
where $D_{\mathbf{G}}\left(\mathbf{Y}_{n}^{k}\right)$ is the differential of $\mathbf{G}$ at the point $\mathbf{Y}_{n}^{k}$. In practice, we stop the procedure at $k=k_{n}$ when the residual is small enough, and define $\mathbf{Y}_{n+1}:= \mathbf{Y}_{n}^{k_{n}}$. Each Newton step of \eqref{Eq:1.8} requires the solution of a linear system, which is handled with a biconjugate gradient method.\\
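A toy version of one time-step solved by the Newton iteration \eqref{Eq:1.8} might look as follows (a sketch under several simplifying assumptions: one-dimensional $P_1$ matrices, a dense direct solve in place of the biconjugate gradient method, and the illustrative quartic nonlinearity $\psi(u)=u^3-u$); it also exhibits the mass conservation discussed below:

```python
import numpy as np

def assemble_1d(nodes):
    """P1 stiffness A and mass M on a 1D mesh."""
    n = len(nodes)
    A, M = np.zeros((n, n)), np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        M[e:e + 2, e:e + 2] += h * np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0
    return A, M

def newton_step(A, M, u_n, tau, eps2, psi, dpsi, tol=1e-12, kmax=50):
    n = len(u_n)
    w, u = np.zeros(n), u_n.copy()       # Y_n^0 = Y_n
    for _ in range(kmax):
        # residual L*Y^k + G(Y^k) - S*Y_n of the scheme
        r = np.concatenate([tau * A @ w + M @ u - M @ u_n,
                            M @ w - eps2 * A @ u - M @ psi(u)])
        if np.linalg.norm(r) < tol:
            break
        # Jacobian L + D_G(Y^k)
        J = np.block([[tau * A, M],
                      [M, -eps2 * A - M @ np.diag(dpsi(u))]])
        d = np.linalg.solve(J, r)        # dense stand-in for BiCG
        w, u = w - d[:n], u - d[n:]
    return w, u

nodes = np.linspace(0.0, 1.0, 41)
A, M = assemble_1d(nodes)
rng = np.random.default_rng(0)
u0 = 0.05 * rng.standard_normal(nodes.size)   # small random initial data
w1, u1 = newton_step(A, M, u0, tau=1e-4, eps2=0.01,
                     psi=lambda u: u**3 - u, dpsi=lambda u: 3 * u**2 - 1)
ones = np.ones(nodes.size)
print(abs(ones @ M @ (u1 - u0)))  # mass is conserved up to the solver tolerance
```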
When the nonlinear term is logarithmic, we should deal with the singularities at $\pm 1$. However, in all our computations the solution stays far from $\pm 1$, so that no special care is needed. This is expected: it is known that in the one-dimensional case the solution satisfies an $\rm{L}^\infty$ bound which is strictly less than one (see \cite{MR1182511}). The same result has not been proved in higher dimensions, but it is probably true.\\
A simple remark shows that mass is conserved by the full scheme. Indeed, if we multiply the first block row of the second equation in system \eqref{Eq:1.8} by the vector $\mathbb{I}:=(1,1,...,1)$, which represents the constant function $1 \in V^{h}$ and lies in the kernel of $\mathbf{A}$, we get for all $k \in \N$:
\begin{equation*}
\mathbb{I}\,\mathbf{M}\mathbf{u}_{n}^{k} = \mathbb{I}\,\mathbf{M}\mathbf{u}_{n}.
\end{equation*}
\subsection{Implementation with high degree finite elements}\label{S:1.3}
The finite element library \textsc{M\'elina}~\cite{Melina} provides Lagrangian nodal elements of order up to $64$ (the nodes may be chosen as the Gauss-Lobatto points to avoid the Runge phenomenon for large degrees). It can thus be used as a $p$-version code -- see~\cite{MR615529} -- or even to implement spectral methods -- see~\cite{MR1470226}. In the following results, we use quadrangular elements for two-dimensional computations, with degree from $1$ to $10$; we use the notation $Q_{i}$, $i\in\{1,2,3,4,5,6,7,8,9,10\}$, for these elements. This strategy is justified by the fact that the expected solution is smooth but may present a thin interface; since high degree polynomials are able to capture high frequencies, they are well suited to such situations. Comparisons are shown below between degree $1$ on a refined mesh and degree $10$ on a coarse mesh, demonstrating the efficiency of the method in terms of both accuracy and computational cost.
\section{Cahn-Hilliard evolution. Polynomial approximation of the logarithm}\label{S:2}
The temperature plays a crucial role in the evolution of the solution. The function $\psi$ defined in \eqref{Eq:0.8} depends on two temperatures, $T$ and $T_{c}$.
When the temperature $T$ is greater than the critical temperature $T_{c}$, the second derivative of $\psi$ is non-negative, so $\psi$ is convex and has a single minimum: we say that $\psi$ has a single-well profile.
The solution then tends to this unique minimum and the alloy exists in a single homogeneous state.
But when the temperature $T$ of the alloy is lowered below the critical temperature $T_{c}$, the function $\psi$ changes from a single well into a double well (see Figure \ref{FIG:A}), and the solution rapidly separates into two phases of nearly homogeneous concentration. This phenomenon is referred to as \emph{spinodal decomposition}. If the initial concentration belongs to the region where the energy density is concave, i.e. between the two \emph{spinodal points} $\sigma_-$ and $\sigma_+$ (see Figure~\ref{FIG:A}), the homogeneous state is unstable.
The concentrations of the two regions composing the mixture after a short stabilization take values near the so-called \emph{binodal points} $\beta_-$ and $\beta_+$ (see also Figure~\ref{FIG:A}), defined by
\begin{equation}\label{Eq:2.6}
f'(\beta_{-}) = f'(\beta_{+}) = \frac{f(\beta_{+})-f(\beta_{-})}{\beta_{+}-\beta_{-}}, \quad \text{ with } \beta_{-} < \beta_{+}.
\end{equation}
If the free energy is symmetric, the binodal points are the minima of each well; in the general case they lie on a double tangent line (see \cite{MR2464502}).
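Numerically, the spinodal and binodal points are simple root-finding problems. Here is a sketch in Python, assuming the logarithmic free energy has the standard Cahn-Hilliard form consistent with the expansion \eqref{Eq:2.7} (so that, in this symmetric case, the double tangent condition \eqref{Eq:2.6} reduces to $f'(\beta_{\pm})=0$, and $f''(u)=T/(1-u^2)-T_{c}$ gives the spinodal points in closed form):

```python
import math

def bisect(f, a, b):
    """Plain bisection: assumes a sign change of f on [a, b]."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

T, Tc = 0.8, 1.0                                # illustrative temperatures, T < Tc
fp = lambda u: 0.5 * T * math.log((1 + u) / (1 - u)) - Tc * u   # f'(u)
sigma = math.sqrt(1.0 - T / Tc)                 # spinodal point: f''(sigma) = 0
beta = bisect(fp, sigma, 1.0 - 1e-12)           # binodal point: f'(beta) = 0
print(f"spinodal +/-{sigma:.6f}, binodal +/-{beta:.6f}")
```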
\begin{figure}[htp]
\centering
\includegraphics[angle=0,width=12cm]{\SDP}
\caption{Free energy density for two different temperatures.}\label{FIG:A}
\end{figure}
The spinodal decomposition is represented in the first two graphs of Figure~\ref{FIG:E1} or Figure \ref{FIG:E2}. \begin{figure}[htp]
\centering
\subfigure[t=0]{
\label{FIG:E1.a}
\includegraphics[angle=0,width=6cm]{\SPINOQUARTICUN}}
\hspace{0.3cm}
\subfigure[t=0.01]{
\label{FIG:E1.b}
\includegraphics[angle=0,width=6cm]{\SPINOQUARTICDEUX}}
\\
\vspace{10pt}
\subfigure[t=0.05]{
\label{FIG:E1.c}
\includegraphics[angle=0,width=6cm]{\SPINOQUARTICTROIS}}
\hspace{0.3cm}
\subfigure[t=0.2]{
\label{FIG:E1.d}
\includegraphics[angle=0,width=6cm]{\SPINOQUARTICQUATRE}}
\\
\vspace{10pt}
\subfigure[t=0.6]{
\label{FIG:E1.e}
\includegraphics[angle=0,width=6cm]{\SPINOQUARTICCINQ}}
\hspace{0.3cm}
\subfigure[t=1]{
\label{FIG:E1.f}
\includegraphics[angle=0,width=6cm]{\SPINOQUARTICSIX}}
\caption{Spinodal decomposition under the classic quartic double-well potential.}
\label{FIG:E1}
\end{figure}
\begin{figure}[htp]
\centering
\subfigure[t=0]{
\label{FIG:E2.a}
\includegraphics[angle=0,width=6cm]{\SPINOLOGUN}}
\hspace{0.3cm}
\subfigure[t=0.01]{
\label{FIG:E2.b}
\includegraphics[angle=0,width=6cm]{\SPINOLOGDEUX}}
\\
\vspace{10pt}
\subfigure[t=0.05]{
\label{FIG:E2.c}
\includegraphics[angle=0,width=6cm]{\SPINOLOGTROIS}}
\hspace{0.3cm}
\subfigure[t=0.2]{
\label{FIG:E2.d}
\includegraphics[angle=0,width=6cm]{\SPINOLOGQUATRE}}
\\
\vspace{10pt}
\subfigure[t=0.6]{
\label{FIG:E2.e}
\includegraphics[angle=0,width=6cm]{\SPINOLOGCINQ}}
\hspace{0.3cm}
\subfigure[t=1]{
\label{FIG:E2.f}
\includegraphics[angle=0,width=6cm]{\SPINOLOGSIX}}
\caption{Spinodal decomposition under a logarithmic potential.}
\label{FIG:E2}
\end{figure}
Over longer times, the separated regions evolve to reduce their interfacial energies. These diffuse interfaces shorten in an effect
resembling surface tension on a sharp interface, as the material fronts move to reduce their own curvature (see \cite{MR1401172} and \cite{MR997638}).
Finally, the solution reaches an equilibrium whose location and form depend on the total initial concentration (see \cite{MR1950337}). Nevertheless, this equilibrium is always a solution whose interface has minimal measure. In Figures \ref{FIG:E1} and \ref{FIG:E2}, this phenomenon is observed in the last four graphs.
Figure \ref{FIG:E1} corresponds to an evolution under the classic quartic double-well potential \eqref{Eq:0.9}
with unscaled coefficients, whereas Figure \ref{FIG:E2} corresponds to an evolution under the logarithmic potential \eqref{Eq:0.7}. They are both simulated on a $12\times12$ mesh with $Q_{1}$ polynomial elements, and the parameter $\varepsilon$ is such that $\varepsilon^2 = 0.07$.
We see that the evolutions are quite similar and lead to the same stationary state. For these two evolutions, we can compare the difference of the energies or the $\rm{L}^2$ norm of the difference (see the next paragraph).
Note that the polynomial approximation of the logarithm does not change the qualitative behavior: the
same patterns appear and the long time behavior is very similar. The only notable difference is that with
the logarithmic nonlinearity the dynamics are slower. This is particularly clear in graphs (b): the spinodal
decomposition is almost complete only for the polynomial potential. Similarly, in graphs (f), we see that at time
$t=1$ the logarithmic evolution has not yet reached equilibrium.
We have observed this in all our tests.
The second evolution is often illustrated by the classical benchmark cross, which can be considered as a qualitative validation of the numerical method. This long time behavior is illustrated in Figure~\ref{FIG:D}. Starting from a cross-shaped initial condition, the interface first diffuses from the arbitrary width of the initial condition to the equilibrium interface width. Next, the solution reduces its interfacial energy and tends to a circular form. In the total free energy \eqref{Eq:0.3}, the term with the free energy function $f$ is responsible for the spinodal decomposition, whereas the gradient term is responsible for the interfacial reduction. This phenomenon has been simulated on a $256 \times 256$ mesh with $Q_{3}$ polynomial elements. Figures \ref{FIG:D} (a) and (b) are obtained with
the quartic nonlinearity. We see in Figures \ref{FIG:D} (c) and (d) that, again, the qualitative behavior is
very similar with the logarithm.
\begin{figure}[htp]
\centering
\subfigure[Quartic potential - t=0]{
\label{FIG:D.a}
\includegraphics[angle=0,width=6cm]{\CROIX}}
\hspace{0.3cm}
\subfigure[t=1]{
\label{FIG:D.b}
\includegraphics[angle=0,width=6cm]{\BULLE}}
\\
\vspace{10pt}
\subfigure[Logarithmic potential - t=0]{
\label{FIG:D.c}
\includegraphics[angle=0,width=6cm]{\CROIXLOG}}
\hspace{0.3cm}
\subfigure[t=1]{
\label{FIG:D.d}
\includegraphics[angle=0,width=6cm]{\BULLELOG}}
\caption{Evolution of a cross-shaped initial condition to a bubble.}
\label{FIG:D}
\end{figure}
It is difficult to measure precisely the qualitative difference between the two evolutions. The only physical
quantity which can be measured in two dimensions is the energy; a detailed study of this aspect is
performed below. In the one-dimensional case, we are moreover able to measure the interface width. We will see that
the quartic nonlinearity tends to thicken the interface.
The replacement of the logarithmic free energy by the quartic one has been done by many authors in order to
avoid numerical and theoretical difficulties raised by the singular values $\pm1$. More generally, we can discuss the approximation of the logarithm by polynomial functions. We consider the $2n$-th order polynomial Taylor expansion $f_{2n}$:
\begin{equation}\label{Eq:2.7}
f_{2n} := u \mapsto \left(T_{c}\left(\frac{1-u^2}{2}\right) + T\left[-\ln(2) +\sum_{p=1}^{n} \frac{u^{2p}}{2p(2p-1)}\right]\right) + K_{2n}.
\end{equation}
It is defined up to an additive constant $K_{2n}$, which is apparently arbitrary. However,
it is preferable to choose it so that the energy of a solution $u$
\begin{equation}
\mathcal{F}_{2n} (u) := \int_{\Omega} \left( f_{2n}(u) + \kappa |\nabla u|^2 \right)\dd V
\end{equation}
is well defined on unbounded domains. Since the solution is expected to converge to one
of the binodal values, it is natural to choose $K_{2n}$ so that $f_{2n}$ vanishes at those points. We always make this choice.
We have seen above that the quartic approximation does not seem to change the
qualitative behaviour drastically, except that the evolution is faster. We now perform a quantitative study
to measure more precisely the effect of the polynomial approximation.
The spinodal and binodal points are drawn in Figure \ref{FIG:B} for various $n$. When $n$ increases, the spinodal and binodal points converge to the corresponding values for the logarithmic potential. However, the convergence is rather slow (see Figures \ref{FIG:B} and \ref{FIG:C}).
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{{\Large \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! Polynomial degree}}
\psfrag{Ylabel}{{\Large \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! Error with the exact point}}
\includegraphics[angle=0,width=12cm]{\Points}
\caption{Polynomial and logarithmic spinodal and binodal points.}\label{FIG:B}
\end{psfrags}
\end{figure}
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{{\Large \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! Polynomial degree}}
\psfrag{Ylabel}{{\Large \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! Logarithmic error with the exact point}}
\includegraphics[angle=0,width=12cm]{\ConvergencePoints}
\caption{Rate of convergence of the polynomial points.}\label{FIG:C}
\end{psfrags}
\end{figure}
In the one-dimensional case, it is possible to study the thickness of the interface.
Let us consider the domain $\Omega=\mathbb R$ and the quartic potential
\begin{equation}\label{Eq:2.1}
\psi_{4} := u \mapsto - T_{c}u + T\left(u+\frac{u^3}{3}\right),
\end{equation}
which is the derivative of
\begin{equation}\label{Eq:2.2}
f_{4} := u \mapsto T_{c}\left(\frac{1-u^2}{2}\right) + T\left[\frac{u^2}{2} +\frac{u^4}{12}\right]+ K_{4}.
\end{equation}
Then a stationary solution of the Cahn-Hilliard equation~\eqref{Eq:0.6} can be explicitly computed (under a constant mobility $\mathcal{M}(u)\equiv1$), see~\cite{MR983721}:
\begin{equation}\label{Eq:2.3}
u_{4} : x\mapsto u_{+}\tanh\left( x\mu\right),
\end{equation}
where
\begin{equation}\label{Eq:2.4}
u_{+} = \sqrt{3\left(\frac{T_c}{T}-1\right)} \quad\text{ and }\quad \mu = \frac{\sqrt{T_c-T}}{\varepsilon\sqrt{2}}.
\end{equation}
It is important to remark that the solution is constrained to $[-u_{+},u_{+}]$.
We can define a characteristic length $\ell$ (see Figure \ref{FIG:F}), corresponding to the width of the region containing the main variations of a solution $u$:
\begin{displaymath}
\ell := \frac{|\lim_{x\rightarrow +\infty} u(x)|+|\lim_{x\rightarrow -\infty} u(x)|}{\text{slope at the interface point}},
\end{displaymath}
where the interface point is the point $x_{0}$ where $u(x_{0})=0$. We can then compute this length explicitly and obtain:
\begin{equation}\label{Eq:ell}
\ell =\frac{2u_+}{u'_{4}(0)} =\frac{2\varepsilon\sqrt{2}}{\sqrt{T_{c}-T}}.
\end{equation}
Cahn and Hilliard have defined a parameter $\lambda:=\frac{2\varepsilon\sqrt{2}}{\sqrt{T_{c}}}$ in order to characterize the interface length. With this parameter $\lambda$ we obtain the following expression for $\ell$:
\begin{equation}\label{Eq:2.5}
\ell = \frac{\lambda}{\sqrt{1-\frac{T}{T_{c}}}}.
\end{equation}
Cahn and Hilliard have shown that, in the case of the logarithmic free energy density, the interface length
is of the same order. This suggests that the quartic double-well approximation preserves important features of the solution.
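The explicit profile \eqref{Eq:2.3}--\eqref{Eq:2.4} and the width \eqref{Eq:ell} can be checked numerically; here is a short Python sketch (the values of $T$, $T_{c}$ and $\varepsilon$ are arbitrary illustrative choices):

```python
import math

T, Tc, eps = 0.8, 1.0, 0.05                     # illustrative values
u_plus = math.sqrt(3.0 * (Tc / T - 1.0))
mu = math.sqrt(Tc - T) / (eps * math.sqrt(2.0))
u = lambda x: u_plus * math.tanh(mu * x)
psi4 = lambda v: -Tc * v + T * (v + v**3 / 3.0)
# second derivative of u_plus * tanh(mu x)
upp = lambda x: -2.0 * u_plus * mu**2 * math.tanh(mu * x) * (1.0 - math.tanh(mu * x)**2)
# residual of the stationary equation eps^2 u'' = psi_4(u), sampled across the interface
res = max(abs(eps**2 * upp(x) - psi4(u(x))) for x in (-0.3, -0.1, 0.0, 0.1, 0.3))
ell = 2.0 * eps * math.sqrt(2.0) / math.sqrt(Tc - T)   # interface width of (Eq:ell)
print(res, ell)
```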
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{-u+}{{\huge$-u_{+}$}}
\psfrag{u+}{{\huge$u_{+}$}}
\psfrag{Slope mu}{{\LARGE\color{blue}Slope $\mu$}}
\psfrag{l}{{\LARGE\color{blue}$\ell$}}
\includegraphics[angle=0,width=10cm]{\Interface}
\caption{Interface length for the solution $u_{4}$.}\label{FIG:F}
\end{psfrags}
\end{figure}
In Figure~\ref{FIG:G}, we present the numerical solution for $\Omega=[0,1]$ (blue stars), and the ``tanh-profile'' whose coefficients $u_+$ and $\mu$ have been fitted to the data. The fitting of $u_{+}$ corresponds to the value of the solution on the boundaries of the domain $\Omega$, and the fitting of $\mu$ corresponds to a least squares fit between the numerical solution and a ``tanh-profile'' solution interpolated on the same mesh. The ``tanh-profile'' (defined over $\mathbb R$) may be considered as a good approximation of the solution on $\Omega =[0,1]$ since the interface is very thin.
The numerical solution is computed with 35 $Q_{3}$-elements.
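The fitting procedure can be sketched in Python on synthetic data standing in for the finite element solution (the linearisation $\operatorname{arctanh}(u/u_{+})=\mu x$ used below is our own simplification of the least squares fit, valid away from the saturated plateaus):

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, uplus_true = 6.3, 0.87                    # illustrative "exact" coefficients
x = np.linspace(-0.5, 0.5, 35)
data = uplus_true * np.tanh(mu_true * x) + 1e-4 * rng.standard_normal(x.size)
u_plus = 0.5 * (abs(data[0]) + abs(data[-1]))      # plateau value at the boundaries
mask = np.abs(data) < 0.95 * u_plus                # keep points inside the interface
y = np.arctanh(data[mask] / u_plus)                # linearised model: y = mu * x
mu_fit = (x[mask] @ y) / (x[mask] @ x[mask])       # least squares through the origin
print(u_plus, mu_fit)
```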
\begin{figure}[htp]
\centering
\includegraphics[angle=0,width=12cm]{\FitCurveExemple}
\caption{Fitted curve on the ``tanh-profile''.}
\label{FIG:G}
\end{figure}
We have also measured numerically the interface width in the quartic and logarithmic cases.
This width is plotted for various $\varepsilon$ in Figure \ref{FIG:GG}. As expected from formula \eqref{Eq:ell},
it varies linearly with $\varepsilon$. But, for $\varepsilon$ not too
small, the interface is thinner for the logarithmic equation: the quartic approximation
introduces a non-negligible extra diffusivity.
\begin{figure}[htp]
\centering
\includegraphics[angle=0,width=12cm]{\LongueurInterface}
\caption{Length of the interface for the quartic and logarithmic potentials.}
\label{FIG:GG}
\end{figure}
We can also compare the total free energies. Denote by $u$ the solution of a simulation with the \emph{logarithmic} function $f$ and by
$(u_{2n})_{n\geq2}$ the family of solutions of the simulations with the polynomial functions $(f_{2n})_{n\geq2}$. For the energy, we take as reference the logarithmic total free energy, and we study
\begin{equation}
|\mathcal{F}(u_{2n}) - \mathcal{F}(u)|.
\end{equation}
On Figure~\ref{FIG:H}, the evolution of the logarithm of this
quantity is plotted during a classical spinodal decomposition in dimension one.
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{F4}{{\Large$f_{4}$}}
\psfrag{F6}{{\Large$f_{6}$}}
\psfrag{F8}{{\Large$f_{8}$}}
\psfrag{F10}{{\Large$f_{10}$}}
\psfrag{F12}{{\Large$f_{12}$}}
\psfrag{F14}{{\Large$f_{14}$}}
\psfrag{F16}{{\Large$f_{16}$}}
\includegraphics[angle=0,width=12cm]{\Enerlog}
\end{psfrags}
\caption{Polynomial energies versus logarithmic energy.}\label{FIG:H}
\end{figure}
We can see large peaks at the beginning and smoother peaks between iterations $500$ and $700$. These peaks appear when the solution evolves rapidly and its topological form changes. For instance, they correspond to the changes between the fourth and fifth images of Figure~\ref{FIG:E1}, and between the fifth and sixth images. After iteration $750$, all the solutions are in an asymptotically stable state and the energies no longer change. \par
For a quartic potential ($n=2$ i.e. $f_{4}$ in Figure \ref{FIG:H}), the energy error is significant and
the polynomial approximation is not good in that respect.
We could as well have shown the evolution of
$$
|\mathcal{F}_{2n}(u_{2n}) - \mathcal{F}(u)|.
$$
In fact, it is very similar and does not bring new information.
From a mathematical point of view, it is interesting to study the error in $\rm{L}^2$ norm:
$$
\left(\int_\Omega |u_{2n}-u|^2 \,\dd x\right)^{1/2}.
$$
We see in Figure \ref{FIG:HH} that for $n=2$ the error is large. It decreases with $n$ but is still
significant for $n=3$. For $n\geq 6$, it is negligible.
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{F4}{{\Large$f_{4}$}}
\psfrag{F6}{{\Large$f_{6}$}}
\psfrag{F8}{{\Large$f_{8}$}}
\psfrag{F10}{{\Large$f_{10}$}}
\psfrag{F12}{{\Large$f_{12}$}}
\psfrag{F14}{{\Large$f_{14}$}}
\psfrag{F16}{{\Large$f_{16}$}}
\includegraphics[angle=0,width=12cm]{\ErrLDEUXlog}
\end{psfrags}
\caption{$\rm{L}^2$ errors between the polynomial solutions and the logarithmic solution.}\label{FIG:HH}
\end{figure}
Figures \ref{FIG:H:2D} and \ref{FIG:HH:2D} present the same quantities for a two-dimensional
spinodal decomposition. We observe the same quantitative difference. Note that we clearly see
that the energy evolution slows down as the degree $n$ grows.
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Title}{{\Large \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! Two dimensional case}}
\psfrag{Ylabel}{{\Large \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! Log$_{10}($Energy$_{2n}$ - Energy$_{log})$}}
\psfrag{Xlabel}{{\Large \!\!\!\!\!\!\!\!\!\! Time iterations}}
\psfrag{F4}{{\Large$f_{4}$}}
\psfrag{F6}{{\Large$f_{6}$}}
\psfrag{F8}{{\Large$f_{8}$}}
\psfrag{F10}{{\Large$f_{10}$}}
\psfrag{F12}{{\Large$f_{12}$}}
\psfrag{F14}{{\Large$f_{14}$}}
\psfrag{F16}{{\Large$f_{16}$}}
\includegraphics[angle=0,width=12cm]{\EnerlogDEUXD}
\end{psfrags}
\caption{Polynomial energies versus logarithmic energy.}\label{FIG:H:2D}
\end{figure}
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Title}{{\Large \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! Two dimensional case}}
\psfrag{Ylabel}{}
\psfrag{Xlabel}{{\Large \!\!\!\!\!\!\!\!\!\! Time iterations}}
\psfrag{F4}{{\Large$f_{4}$}}
\psfrag{F6}{{\Large$f_{6}$}}
\psfrag{F8}{{\Large$f_{8}$}}
\psfrag{F10}{{\Large$f_{10}$}}
\psfrag{F12}{{\Large$f_{12}$}}
\psfrag{F14}{{\Large$f_{14}$}}
\psfrag{F16}{{\Large$f_{16}$}}
\includegraphics[angle=0,width=12cm]{\ErrLDEUXlogDEUXD}
\end{psfrags}
\caption{$\rm{L}^2$ errors between the polynomial solutions and the logarithmic solution.}\label{FIG:HH:2D}
\end{figure}
We conclude that the classical quartic approximation of the free energy may be considered as a good
approximation for qualitative behaviour but it produces a significant error and accelerates the dynamics.
If precision is required, one should consider an approximation with a higher order polynomial.
\section{Validation of the numerical method. Choice of the degree of the elements}\label{S:3}
In Figure \ref{FIG:12B}, we present a numerical solution at different times.
It is a $Q_{1}$ solution on a mesh with 100 elements under the quartic double-well potential.
In Figure \ref{FIG:12B.f}, the solution has reached its stable state and takes the binodal values $\pm 1$ on the boundary.
\begin{figure}[htp]
\centering
\subfigure[t=0]{
\label{FIG:12B.a}
\includegraphics[angle=0,width=4.5cm]{\SpinoAAA}}
\hspace{0.3cm}
\subfigure[t=0.1]{
\label{FIG:12B.b}
\includegraphics[angle=0,width=4.5cm]{\SpinoAAB}}
\hspace{0.3cm}
\subfigure[t=0.5]{
\label{FIG:12B.c}
\includegraphics[angle=0,width=4.5cm]{\SpinoAAC}}
\\
\vspace{10pt}
\subfigure[t=10]{
\label{FIG:12B.d}
\includegraphics[angle=0,width=4.5cm]{\SpinoAAD}}
\hspace{0.3cm}
\subfigure[t=20]{
\label{FIG:12B.e}
\includegraphics[angle=0,width=4.5cm]{\SpinoAAE}}
\hspace{0.3cm}
\subfigure[t=50]{
\label{FIG:12B.f}
\includegraphics[angle=0,width=4.5cm]{\SpinoAAF}}
\caption{$Q_{1}$ solution on a mesh with 100 elements.}
\label{FIG:12B}
\end{figure}
In our first set of tests, we start from the same initial state near the ``tanh-profile'' solution. The evolutions are driven by the quartic potential function. We wait for all the solutions to stabilize and study the error on the energies and on the slopes of the interface.
Remark that we can compute explicitly the energy of the exact solution; since the energy of a numerical simulation decreases in time, it should converge to the energy of the exact solution.
Figure \ref{FIG:12} shows the evolutions of the errors between the numerical energies and the exact energy according to the degree of the polynomial space $\PP$. Before the 800th iteration in time, the solutions are not stable: they try to minimize their energies. After the 800th iteration, all the solutions are in a stable state. We can see that the evolutions are qualitatively similar at the beginning, but the $Q_{1}$, $Q_{2}$ and $Q_{3}$ elements do not reach the tolerance zone, whereas the other elements do. Nevertheless, $Q_{2}$ and $Q_{3}$
give a very good result.
\begin{figure}[htp]
\centering
\includegraphics[angle=0,width=12cm]{\MOVINGENERGY}
\caption{Energies during an evolution.}\label{FIG:12}
\end{figure}
The slope of the interface is an essential physical quantity, so we have compared the errors on the slopes between the numerical solutions and the theoretical solution. Note that these slopes correspond to the values of the derivatives of the numerical solutions at the interface, and our finite elements do not have $\mathcal{C}^1$ regularity.
Under the quartic double-well potential \eqref{Eq:2.2}, we have an explicit slope $\mu$ for the stationary solution. In Figure~\ref{FIG:2}, we present the numerical solutions for $\Omega=[0,1]$ (blue stars) together with the ``tanh-profiles'' whose coefficients $u_+$ and $\mu$ have been fitted to the data, exactly as for Figure~\ref{FIG:G}.
\begin{figure}[htp]
\centering
\subfigure[Mesh 18 - Q1]{
\label{FIG:2.a}
\includegraphics[angle=0,width=4.5cm]{\FitCurveMAQAZZ}}
\hspace{0.3cm}
\subfigure[Mesh 36 - Q1]{
\label{FIG:2.b}
\includegraphics[angle=0,width=4.5cm]{\FitCurveMBQAZZ}}
\hspace{0.3cm}
\subfigure[Mesh 72 - Q1]{
\label{FIG:2.c}
\includegraphics[angle=0,width=4.5cm]{\FitCurveMCQAZZ}}
\\
\vspace{10pt}
\subfigure[Mesh 9 - Q2]{
\label{FIG:2.d}
\includegraphics[angle=0,width=4.5cm]{\FitCurveMAQBZZ}}
\hspace{0.3cm}
\subfigure[Mesh 18 - Q2]{
\label{FIG:2.e}
\includegraphics[angle=0,width=4.5cm]{\FitCurveMBQBZZ}}
\hspace{0.3cm}
\subfigure[Mesh 36 - Q2]{
\label{FIG:2.f}
\includegraphics[angle=0,width=4.5cm]{\FitCurveMCQBZZ}}
\\
\vspace{10pt}
\subfigure[Mesh 6 - Q3]{
\label{FIG:2.g}
\includegraphics[angle=0,width=4.5cm]{\FitCurveMAQCZZ}}
\hspace{0.3cm}
\subfigure[Mesh 12 - Q3]{
\label{FIG:2.h}
\includegraphics[angle=0,width=4.5cm]{\FitCurveMBQCZZ}}
\hspace{0.3cm}
\subfigure[Mesh 24 - Q3]{
\label{FIG:2.i}
\includegraphics[angle=0,width=4.5cm]{\FitCurveMCQCZZ}}
\caption{Fitted curves on the ``tanh-profile''.}
\label{FIG:2}
\end{figure}
If we want to compare the solutions of a $Q_{1}$ simulation and a $Q_{10}$ simulation, we need to compare the two simulations at the same complexity, which, up to the inversion of the
linear systems, corresponds to a similar computational cost. In the one-dimensional case, the complexity corresponds to the value $\text{degree} \times \text{number of elements}$. For a $Q_{10}$ simulation, we thus only need a mesh with $10$ times fewer elements than for a $Q_{1}$ simulation.
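In one dimension this notion of complexity is easy to make precise: continuous $Q_p$ elements on $N$ elements share the interface nodes and carry $pN+1$ degrees of freedom, so equal products $p\times N$ give comparable problem sizes. A trivial Python check:

```python
# Continuous Q_p elements on N one-dimensional elements share the interface nodes,
# leaving p*N + 1 degrees of freedom; equal products p*N give comparable sizes.
def dof_1d(p, N):
    return p * N + 1

pairs = [(1, 30), (2, 15), (3, 10), (10, 3)]   # same complexity p*N = 30
print([dof_1d(p, N) for p, N in pairs])        # [31, 31, 31, 31]
```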
Figure \ref{FIG:2} represents the numerical solution on mesh grids with three different complexities, 18, 36 and 72, and with polynomial functions of degree 1, 2 and 3. For instance, for the $Q_{2}$ elements, this corresponds to mesh grids with 9, 18 and 36 elements. If we increase the number of elements or the degree of the polynomial space $\PP$, then we obtain a better approximation of the slope of the ``tanh-profile'' solution. But at the same complexity, the curves are qualitatively similar. Figures \ref{FIG:3ter.a} and \ref{FIG:3ter.b} show the evolution of this approximation error according to the complexity for $Q_{1}$, $Q_{2}$ and $Q_{3}$ simulations. In Figure \ref{FIG:3ter.b}, we use a logarithmic scale in order to compare the rates of convergence.
\begin{figure}[htp]
\centering
\subfigure[Versus the number of elements of the mesh]{
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\label{FIG:3ter.a}
\includegraphics[angle=0,width=6cm]{\SlopesQWMeshW}
\end{psfrags}}
\hspace{1cm}
\subfigure[Versus the logarithm of the number of elements of the mesh]{
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\label{FIG:3ter.b}
\includegraphics[angle=0,width=6cm]{\SlopesQWLogMeshW}
\end{psfrags}}
\caption{Comparison between the errors on the slope under the same complexity.}
\label{FIG:3ter}
\end{figure}
\clearpage
We conclude that, for $Q_{1}$, $Q_{2}$ or $Q_{3}$ elements, a finer mesh yields a better approximation. But the $Q_{2}$ and $Q_{3}$ elements seem to reach saturation faster: they only need 500 elements in order to reach a $10^{-5}$ precision, whereas the $Q_{1}$ elements need 5000 elements! Figure \ref{FIG:3ter.b} highlights this faster convergence of the approximation error on the slope. But the $Q_{2}$ and $Q_{3}$ elements seem to have a similar rate before reaching the saturation zone.
If we fix the complexity, we can test which degree of the polynomial space $\PP$ provides the best accuracy.
Figure \ref{FIG:4bis} shows the approximation error according to the degree of the polynomial space $\PP$ at the same complexity, quantified by the number of degrees of freedom (DoF) of the finite element space.
\begin{figure}[htp]
\centering
\subfigure[DoF 90]{
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\psfrag{Q1}{\!\!\huge{\color{blue}$Q_{1}$}}
\psfrag{Q2}{\!\!\!\!\huge{\color{blue}$Q_{2}$}}
\psfrag{Q3}{\!\!\!\!\huge{\color{blue}$Q_{3}$}}
\psfrag{Q4}{\!\!\!\!\huge{\color{blue}$Q_{4}$}}
\psfrag{Q5}{\!\!\!\!\huge{\color{blue}$Q_{5}$}}
\psfrag{Q6}{\!\!\!\!\huge{\color{blue}$Q_{6}$}}
\psfrag{Q7}{\!\!\!\!\huge{\color{blue}$Q_{7}$}}
\psfrag{Q8}{\!\!\!\!\huge{\color{blue}$Q_{8}$}}
\psfrag{Q9}{\!\!\!\!\huge{\color{blue}$Q_{9}$}}
\psfrag{Q10}{\!\!\!\!\huge{\color{blue}$Q_{10}$}}
\label{FIG:4bis.a}
\includegraphics[angle=0,width=4.5cm]{\SlopesDLA}
\end{psfrags}}
\hspace{0.3cm}
\subfigure[DoF 300]{
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\psfrag{Q1}{\!\!\huge{\color{blue}$Q_{1}$}}
\psfrag{Q2}{\!\!\!\!\huge{\color{blue}$Q_{2}$}}
\psfrag{Q3}{\!\!\!\!\huge{\color{blue}$Q_{3}$}}
\psfrag{Q4}{\!\!\!\!\huge{\color{blue}$Q_{4}$}}
\psfrag{Q5}{\!\!\!\!\huge{\color{blue}$Q_{5}$}}
\psfrag{Q6}{\!\!\!\!\huge{\color{blue}$Q_{6}$}}
\psfrag{Q7}{\!\!\!\!\huge{\color{blue}$Q_{7}$}}
\psfrag{Q8}{\!\!\!\!\huge{\color{blue}$Q_{8}$}}
\psfrag{Q9}{\!\!\!\!\huge{\color{blue}$Q_{9}$}}
\psfrag{Q10}{\!\!\!\!\huge{\color{blue}$Q_{10}$}}
\label{FIG:4bis.b}
\includegraphics[angle=0,width=4.5cm]{\SlopesDLB}
\end{psfrags}}
\hspace{0.3cm}
\subfigure[DoF 630]{
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\psfrag{Q1}{\!\!\huge{\color{blue}$Q_{1}$}}
\psfrag{Q2}{\!\!\!\!\huge{\color{blue}$Q_{2}$}}
\psfrag{Q3}{\!\!\!\!\huge{\color{blue}$Q_{3}$}}
\psfrag{Q4}{\!\!\!\!\huge{\color{blue}$Q_{4}$}}
\psfrag{Q5}{\!\!\!\!\huge{\color{blue}$Q_{5}$}}
\psfrag{Q6}{\!\!\!\!\huge{\color{blue}$Q_{6}$}}
\psfrag{Q7}{\!\!\!\!\huge{\color{blue}$Q_{7}$}}
\psfrag{Q8}{\!\!\!\!\huge{\color{blue}$Q_{8}$}}
\psfrag{Q9}{\!\!\!\!\huge{\color{blue}$Q_{9}$}}
\psfrag{Q10}{\!\!\!\!\huge{\color{blue}$Q_{10}$}}
\label{FIG:4bis.c}
\includegraphics[angle=0,width=4.5cm]{\SlopesDLC}
\end{psfrags}}
\caption{Error on the slope versus the degree of the polynomial space $\PP$ under the same complexity.}
\label{FIG:4bis}
\end{figure}
Under the same complexity, we see on Figure \ref{FIG:4bis} that high-degree elements still provide better approximations than $Q_{1}$ elements.
Although very high-degree elements always provide better approximations than low-degree elements,
the slopes on Figure \ref{FIG:4bis.c} of the curves for low degrees suggest that $Q_{3}$ elements are a good choice.
Elements of higher degree increase the computation time for matrix inversion without a worthwhile gain in accuracy.
Figure \ref{FIG:L} shows the error on the energies according to the complexity for $Q_{1}$, $Q_{2}$, $Q_{3}$ and $Q_{4}$ elements. As for the slopes, the error decreases as the number of elements of the mesh increases. Whereas the error on the slopes reaches a $10^{-4}$ precision before saturation, the error on the energy reaches the tolerance zone for $Q_{4}$ elements on a mesh with $500$ elements. Figure \ref{FIG:M} shows the logarithm of the error according to the logarithm of the complexity. The evolution is linear for the finest meshes, with a good rate, so we can estimate the order of convergence: we find order $2$ for $Q_{1}$ elements, order $4$ for $Q_{2}$ elements, order $4$ for $Q_{3}$ elements and order $6$ for $Q_{4}$ elements. Note that the error on the energies should be of the same order as the $\rm{H}^1$ error.
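The orders of convergence quoted above are read off as slopes in log-log scale. The following Python sketch illustrates the computation; the error values are hypothetical placeholders mimicking a fourth-order element, not the measured data behind Figure \ref{FIG:M}.

```python
import numpy as np

# Hypothetical (complexity, energy-error) samples mimicking a fourth-order
# element; the measured values behind Figure FIG:M are not reproduced here.
complexity = np.array([40.0, 80.0, 160.0, 320.0, 640.0])
errors = np.array([2.1e-2, 1.3e-3, 8.2e-5, 5.1e-6, 3.2e-7])

# In log-log scale, errors ~ C * complexity^(-order), so the slope of the
# regression line gives minus the order of convergence.
slope, _ = np.polyfit(np.log10(complexity), np.log10(errors), 1)
order = -slope
print(f"estimated order of convergence: {order:.2f}")
```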
\begin{figure}[htp]
\centering
\subfigure[Errors according to the complexity]{
\label{FIG:L}
\includegraphics[angle=0,width=6cm]{\ErrEnergiesQZIV}}
\hspace{0.3cm}
\subfigure[Errors according to the logarithm of the complexity]{
\label{FIG:M}
\includegraphics[angle=0,width=6cm]{\ErrEnergiesQZIVLog}}
\caption{Errors on the energy according to the complexity.}
\label{FIG:LM}
\end{figure}
Now, we fix the complexity and compare the approximation error on the energy according to the degree of the polynomial space $\PP$. For the complexities $90$, $180$, $300$ and $630$, we have drawn the decimal logarithm of the errors on Figure \ref{FIG:17}.
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\psfrag{DL90}{\Large{\color{blue}DoF90}}
\psfrag{DL180}{\Large{\color{red}DoF180}}
\psfrag{DL300}{\Large{\color{green}DoF300}}
\psfrag{DL630}{\Large{\color{black}DoF630}}
\includegraphics[angle=0,width=12cm]{\ErrEnergiesDLWW}
\caption{Errors on the energy according to the degree of the polynomial space $\PP$ under fixed complexity.}
\label{FIG:17}
\end{psfrags}
\end{figure}
Again, under the same complexity, increasing the degree of the polynomial space $\PP$ provides better approximations, except on the coarse grids. We conclude that, for a fixed mesh (fine enough), high degrees provide a better approximation. But for each complexity there seems to be a saturation, since the $Q_{6}$, $Q_{7}$, $Q_{8}$, $Q_{9}$ and $Q_{10}$ elements yield almost the same errors. We conclude that we should use elements of high degree, but it is not necessary to choose the highest. We also have to take into account the computational cost and the precision of our inverse solver. Indeed, even if the complexity is the same, the finite element matrices do not have the same profile. For instance, the bandwidth of the ``mass'' matrix for $Q_{10}$ elements is much larger than for $Q_{1}$ elements. Figures \ref{FIG:LM} and \ref{FIG:17} indicate that $Q_2$ and $Q_3$ elements are a good compromise to ensure good
results without increasing the computational cost too much.
In the two-dimensional case, the results are shown on Figure \ref{FIG:LLMM}; the behaviour is similar.
\begin{figure}[htp]
\centering
\subfigure[Errors according to the complexity]{
\label{FIG:LL}
\includegraphics[angle=0,width=6cm]{\ErrEnergiesQZVDEUXD}}
\hspace{0.3cm}
\subfigure[Errors according to the logarithm of the complexity]{
\label{FIG:MM}
\includegraphics[angle=0,width=6cm]{\ErrEnergiesQZVDEUXDLog}}
\caption{Errors on the energy according to the complexity in the two-dimensional case.}
\label{FIG:LLMM}
\end{figure}
The energy and the interface are essential physical quantities. From a mathematical
point of view, it is also important to study the $\rm{L}^2$ error.
Figures \ref{FIG:3L2ter.a} and \ref{FIG:3L2ter.b} show the $\rm{L}^2$ error according to the complexity for $Q_{1}$, $Q_{2}$, $Q_{3}$, $Q_{4}$ and $Q_{5}$ elements. On Figure \ref{FIG:3L2ter.b}, we have used a logarithmic scale in order to compare the convergence rate.
\begin{figure}[htp]
\centering
\subfigure[Versus the complexity.]{
\label{FIG:3L2ter.a}
\includegraphics[angle=0,width=6cm]{\ErreursQWMeshW}}
\hspace{1cm}
\subfigure[Versus the decimal logarithm of the complexity.]{
\label{FIG:3L2ter.b}
\includegraphics[angle=0,width=6cm]{\ErreursQWLogMeshW}}
\caption{Comparison between the $\rm{L}^2$ errors under the same complexity.}
\label{FIG:3L2ter}
\end{figure}
We have computed the order of convergence. Extrapolating the lines, we can determine the complexity required to reach saturation.
\begin{displaymath}
\begin{array}{cccc}
\text{ Degrees }&\text{ Order }&\text{ Complexity for saturation }&\text{ Grid for saturation }\\
1&1.9960&423829&423829\\
2&3.9682&3558&1779\\
3&4.0216&4110&1370\\
4&4.9119&2364&591\\
5&5.9040&1475&295\\
\end{array}
\end{displaymath}
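The saturation complexities in the table are obtained by extrapolating the fitted log-log lines down to the saturation level of the error. A minimal sketch of this extrapolation, where the intercept and the saturation level are assumed values rather than the fitted ones:

```python
import math

def saturation_complexity(order, intercept, err_sat):
    """Complexity N at which the fitted line log10(err) = intercept - order*log10(N)
    reaches the saturation level err_sat."""
    return 10 ** ((intercept - math.log10(err_sat)) / order)

# Assumed intercept and saturation floor, with the order 1.9960 from the table:
print(saturation_complexity(1.9960, 2.5, 1e-9))
```

As expected, a higher order of convergence reaches saturation at a much smaller complexity.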
Again, $Q_2$ and $Q_3$ elements give very good results for a reasonable computational cost.
We have decided to prefer $Q_3$ elements because it seems that they provide better results on
the interface length as shown on Figure \ref{FIG:4bis}.
\section{Stationary states}\label{S:4}
The Cahn-Hilliard equation has many asymptotic equilibria (see \cite{MR1331565}, \cite{MR1263907} and \cite{MR763473}). In the one-dimensional case, a state can be described by the number of interfaces and their positions. On Figure \ref{FIG:9}, we show four states which are numerically stable. More than one interface can be observed only for small $\varepsilon$: only when the interface is very thin,
{\it i.e.} for small $\varepsilon$, do the interfaces not interact. Note that the energy increases
with the number of interfaces.
In fact, this is a bifurcation phenomenon: when $\varepsilon$ crosses critical values, bifurcations
occur and additional stationary solutions appear.
\begin{figure}[htp]
\centering
\includegraphics[angle=0,width=8cm]{\Bosses}
\caption{Four numerically stable states.}
\label{FIG:9}
\end{figure}
In \cite{MR1950337}, the authors consider the stationary states of \eqref{Eq:0.6} on the square. They numerically study the solutions of the following semi-linear elliptic equation.
\begin{equation}\label{Eq:3.1}
\left\{\begin{array}{ll}
c = u-u^3+\varepsilon^2\Delta u ,&\text{ on } \Omega,\\
\\
\nabla u \cdot \nu = 0, &\text{ on } \partial\Omega,
\end{array}\right.
\end{equation}
together with the mass constraint:
\begin{equation}\label{Eq:3.2}
\frac{1}{|\Omega|}\int_{\Omega} u(x) \dd x = m,
\end{equation}
where $\Omega= [0,1]^2$ is the square, and $c \in \R$ and $m\in \R$ are parameters. They study stationary solutions in the three-dimensional parameter space $\left(c,m,1/\varepsilon^2\right)$.
For this system and for all $\varepsilon$, a trivial solution is given by the constant solution $u\equiv m$ with $c = m -m^3$. The linearization around $u \equiv m$ of \eqref{Eq:3.1} under the mass constraint reads
\begin{equation}\label{Eq:3.3}
\left\{\begin{array}{ll}
0 = \left(1-3m^2\right)u+\varepsilon^2\Delta u ,&\text{ on } \Omega,\\
\\
\nabla u \cdot \nu = 0, &\text{ on } \partial\Omega.
\end{array}\right.
\end{equation}
Let $v_{r}$ be an eigenfunction of the Laplace-Neumann operator on $\Omega$ defined in \eqref{Eq:4.1}, with eigenvalue $r\in \R^+$; then $v_{r}$ is a solution of \eqref{Eq:3.3} when
\begin{equation}\label{Eq:3.4}
\frac{1}{\varepsilon^2} = \frac{r}{1-3m^2} \text{ for } |m| < \frac{1}{\sqrt{3}}.
\end{equation}
But $\sigma_{+} = 1/\sqrt{3}$ for the quartic double-well potential, so this equality shows that bifurcations may occur only for $m$ in the spinodal region. For the square domain $\Omega = [0,1]^2$, the eigenfunctions are:
\begin{equation}
v_{r}(x,y)=v_{k,l} (x,y) := \cos(\pi k x) \cos(\pi l y) \quad\text{ for } (x,y) \in [0,1]^2,
\end{equation}
with $(k,l)\in \N^2$ such that $r = (k^2+l^2) \pi^2$. For the mode $v_{1,1}$ (i.e. $r=2\pi^2$), we obtain nontrivial solutions bifurcating at $u\equiv \pm m^*$ with $m^* = \sqrt{\left(1-\varepsilon^2r\right)/3}$. We fix $m=0$, so that the bifurcations occur at $1 =\varepsilon^2 r$.
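The thresholds given by \eqref{Eq:3.4} are straightforward to evaluate. The sketch below computes, for a mode $v_{k,l}$ on the unit square, the value of $\varepsilon$ at which that mode bifurcates from the constant state $u\equiv m$:

```python
import math

def bifurcation_eps(k, l, m=0.0):
    """Largest eps at which mode v_{k,l} bifurcates from u = m on [0,1]^2,
    i.e. 1/eps^2 = r / (1 - 3 m^2) with r = (k^2 + l^2) pi^2."""
    if abs(m) >= 1.0 / math.sqrt(3.0):
        return None  # m outside the spinodal region: no bifurcation
    r = (k**2 + l**2) * math.pi**2
    return math.sqrt((1.0 - 3.0 * m**2) / r)

# For m = 0 the mode v_{1,1} (r = 2 pi^2) bifurcates at eps = 1/(pi sqrt(2)):
print(bifurcation_eps(1, 1))   # ~0.2251
print(bifurcation_eps(4, 1))   # the v_{4,1} mode appears for smaller eps
```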
The asymptotic equilibria described in \cite{MR1950337} are asymptotic solutions of the dynamical evolution. For instance, we have obtained the $v_{1,1}$ mode as a stationary solution of a dynamical evolution (see Figure \ref{FIG:V.c}). A random start may lead to different modes, and in the long run we only observe the most stable of them. Figures \ref{FIG:V.a} and \ref{FIG:V.b} show the stable states that we see most of the time.
All the symmetrical states are also stable. In \cite{MAWANG}, the authors have studied the global attractor on a square and they have proved
that, after the first bifurcation, there exist 4 minimal attractors (see Theorem 4.2 in \cite{MAWANG}) obtained by symmetrization of Figure \ref{FIG:V.a}. The other stable states
shown here appear after subsequent bifurcations. Starting the simulation with well chosen initial data, we have been able to recover dynamically all the stable states described in \cite{MR1950337}. If we choose the mode $v_{4,1}+v_{1,4}$ (which is the last mode studied by Maier-Paape and Miller), we see on Figure~\ref{FIG:V.d} the asymptotic equilibria that we have obtained.
\begin{figure}[htp]
\centering
\subfigure[Mode $v_{0,1}$]{
\label{FIG:V.a}
\includegraphics[angle=0,width=6cm]{\BANDE}}
\hspace{0.3cm}
\subfigure[Mode $v_{0,1}$+$v_{1,0}$]{
\label{FIG:V.b}
\includegraphics[angle=0,width=6cm]{\CERCLE}}
\\
\vspace{10pt}
\subfigure[Mode $v_{1,1}$]{
\label{FIG:V.c}
\includegraphics[angle=0,width=6cm]{\MODEONZE}}
\hspace{0.3cm}
\subfigure[Mode $v_{1,4}$+$v_{4,1}$]{
\label{FIG:V.d}
\includegraphics[angle=0,width=6cm]{\HAUTMODE}}
\caption{Asymptotic equilibria in the Maier-Paape-Miller nomenclature.}
\label{FIG:V}
\end{figure}
In general, the stable states depend strongly on the eigenvalues of the Laplace operator on $\Omega$. Let $\left(\rho_{k}\right)_{k\in\N}$ and $\left(v_{\rho_{k}}\right)_{k\in\N}$ be the eigenvalues and eigenvectors of the following problem:
\begin{equation}\label{Eq:4.1}
\left\{\begin{array}{ll}
-\Delta v_{\rho_{k}} = \rho_{k}v_{\rho_{k}} ,&\text{ on } \Omega\subset \R^n,\\
\\
\nabla v_{\rho_{k}} \cdot \nu = 0, &\text{ on } \partial\Omega,\\
\\
\int_{\Omega} v_{\rho_{k}}(\theta) \dd \theta = 0.
\end{array}\right.
\end{equation}
In \cite{MAWANG}, the authors have studied the bifurcations and the global attractors of the Cahn-Hilliard problem. In their nomenclature, they consider the following Cahn-Hilliard equation:
\begin{equation}\label{Eq:3.6}
\left\{\begin{array}{ll}
\partial_{t} v = \Delta w,&\text{ on } \Omega\subset \R^n,\\
\\
w = -\lambda v + \gamma_{2} v^2 + \gamma_{3}v^3-\Delta v ,&\text{ on } \Omega\subset \R^n,\\
\\
\nabla v \cdot \nu = 0 = \nabla w \cdot \nu, &\text{ on } \partial\Omega,\\
\end{array}\right.
\end{equation}
where $\lambda$, $\gamma_{2}$ and $\gamma_{3}$ are parameters. If $u$ is a solution of the system \eqref{Eq:0.6} on $\Omega$, then $v$ is a solution of \eqref{Eq:3.6} on $\Omega/\sqrt{\varepsilon}$ if we define, for all $t \in \R$ and $x \in \Omega/\sqrt{\varepsilon}$:
\begin{equation}\label{Eq:3.7}
v : (t,x) \mapsto u(t,x\sqrt{\varepsilon}).
\end{equation}
The correspondence is given by the following equalities.
\begin{equation}\label{Eq:3.8}
\lambda := \frac{1}{\varepsilon}, \quad\gamma_{2} := 0 \text{ and } \gamma_{3} := \frac{1}{\varepsilon}.
\end{equation}
They prove that the first bifurcation occurs when their parameter $\lambda$ exceeds a particular value. For our problem, this bifurcation occurs when $\frac{1}{\varepsilon^2} > \rho_{1}$.
Below, we study this first bifurcation and illustrate theoretical results of \cite{MAWANG}.
\subsection{Asymptotic stable states on a rectangle}\label{S:4.1}
In the case of a rectangular domain $\Omega = [0,2]\times[0,1]$, the hypotheses of Theorem 4.1 in \cite{MAWANG} hold. Accordingly, if $\frac{1}{\varepsilon^2}>\rho_{1} := \frac{\pi^2}{4}$, then there exist exactly two attractors $\pm u_{\varepsilon}$ which can be expressed as
\begin{equation}
\label{e4.1}
\pm u_{\varepsilon}(x,y) = \pm \frac{2\varepsilon}{\sqrt{3}}\sqrt{\frac{1}{\varepsilon^2}-\frac{\pi^2}{4}} \cos \left(\frac{\pi x}{2}\right) + \sqrt{\varepsilon}\ o\left(\left|\frac{1}{\varepsilon^2}-\frac{\pi^2}{4}\right|^{1/2}\right), \quad x \in [0,2], y \in [0,1].
\end{equation}
We define the approximated attractors $\pm v_{\varepsilon}$ by
\begin{displaymath}
\pm v_{\varepsilon}(x) := \pm C(\varepsilon) \sqrt{\frac{1}{\varepsilon^2}-\frac{\pi^2}{4}} \cos \left(\frac{\pi x}{2}\right), \quad x \in [0,2], y \in [0,1],
\end{displaymath}
where $C(\varepsilon)$ is a constant depending on $\varepsilon$. It is chosen in order to minimize
the $\rm{L}^2$ norm of $u'_\varepsilon - v_\varepsilon$.
For multiple values of the parameter $\varepsilon$ around the value $\frac{2}{\pi}$, we have obtained the corresponding numerical stationary states $u'_{\varepsilon}$.
We have checked numerically the validity of formula \eqref{e4.1}.
We study the following quantity
\begin{displaymath}
\frac{\|u'_{\varepsilon}-v_{\varepsilon}\|_{2}}{\|v_{\varepsilon}\|_{2}} = \frac{\|u'_{\varepsilon}-v_{\varepsilon}\|_{2}}{C(\varepsilon)\sqrt{\left(\frac{1}{\varepsilon^2}-\frac{\pi^2}{4}\right)}}.
\end{displaymath}
This relative $\rm{L}^2$ norm should converge to zero. On Figure \ref{FIG:P}, we have plotted the decimal logarithm of this relative $\rm{L}^2$ norm according to the decimal logarithm of $\frac{1}{\varepsilon^2} -\frac{\pi^2}{4}$.
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\includegraphics[angle=0,width=8cm]{\BifurcationsRectangleOrdre}
\caption{Convergence of the ``bifurcated'' solutions on a rectangular domain.}
\label{FIG:P}
\end{psfrags}
\end{figure}
Figure \ref{FIG:P} is consistent with the expected theoretical results. We can see that the relative $\rm{L}^2$ error converges to $0$ as $\frac{1}{\varepsilon^2}$ converges to $\frac{\pi^2}{4}$. We can even refine formula \eqref{e4.1} and find the exponent $\alpha_{rectangular}$ such that
\begin{displaymath}
\| u'_{\varepsilon} - v_{\varepsilon} \|_{2} \sim \tilde{C} \left|\frac{1}{\varepsilon^2}-\frac{\pi^2}{4}\right|^{\alpha_{rectangular}}
\end{displaymath}
where $\tilde{C}$ is an unknown constant. We find that the exponent $\alpha_{rectangular} = 1/2+0.98908$, almost $3/2$.
Moreover, since we know explicitly the attractor, we can verify that our minimal constant $C(\varepsilon)$ is near $\frac{2\varepsilon}{\sqrt{3}}$.
On Figure \ref{FIG:T}, we have drawn the logarithm of our minimal constant $C(\varepsilon)$ according to the logarithm of $\varepsilon$.
\begin{figure}
\centering
\begin{psfrags}
\psfrag{Ordre Clambda VS EPS en Log.}{}
\includegraphics[angle=0,width=8cm]{\ConvergenceCstRectangle}
\caption{Order of convergence of the minimal constant $C(\varepsilon)$.}\label{FIG:T}
\end{psfrags}
\end{figure}
We find
\begin{equation}
C(\varepsilon) \sim 1.0682\ \varepsilon^{0.83603},
\end{equation}
which is consistent with the fact that $C(\varepsilon)$ converges to $\lim_{\varepsilon\rightarrow\frac{2}{\pi}}\frac{2\varepsilon}{\sqrt{3}} = \frac{4}{\pi\sqrt{3}}$.
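The exponent and prefactor above are obtained by a least-squares fit in log-log scale. In the sketch below, the samples of $C(\varepsilon)$ are synthesized from the reported law purely to illustrate the fitting procedure; they are not the measured values.

```python
import math
import numpy as np

# Samples of the fitted constant C(eps); generated here from the reported
# law C(eps) ~ 1.0682 * eps^0.83603 to illustrate the log-log regression.
eps = np.array([0.55, 0.58, 0.60, 0.62, 0.635])
C = 1.0682 * eps**0.83603

slope, logc = np.polyfit(np.log(eps), np.log(C), 1)
prefactor = math.exp(logc)
print(f"C(eps) ~ {prefactor:.4f} * eps^{slope:.5f}")

# The theoretical limit at eps = 2/pi is 2*eps/sqrt(3) = 4/(pi*sqrt(3)):
print(4.0 / (math.pi * math.sqrt(3.0)))   # ~0.7351
```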
The segment may be seen as a degenerate rectangle.
On the segment $[0,1]$, the eigenvalues of \eqref{Eq:4.1} are $\rho_{k}:=k^2\pi^2$ for all $k\in\N$. According to Theorem 4.2 in \cite{MAWANG}, there is a bifurcation when $\frac{1}{\varepsilon^2} > \pi^2$. Moreover, Remark 4.2 in \cite{MAWANG} states that there exist two minimal attractors $\pm u_{\varepsilon}$ which can be expressed as
\begin{equation}\label{Eq:EquivalentSegment}
\pm u_{\varepsilon}(x) = \pm C(\varepsilon) \sqrt{\left(\frac{1}{\varepsilon^2}-\pi^2\right)} \cos \left(\pi x\right) + \sqrt{\varepsilon}\ o\left(\left|\frac{1}{\varepsilon^2}-\pi^2\right|^{1/2}\right), \quad x \in [0,1],
\end{equation}
where $C(\varepsilon)$ is a constant which can depend on $\varepsilon$. Again, we define the approximated attractors $\pm v_{\varepsilon}$ by
\begin{displaymath}
\pm v_{\varepsilon}(x) := \pm C(\varepsilon) \sqrt{\left(\frac{1}{\varepsilon^2}-\pi^2\right)} \cos \left(\pi x\right), \quad x \in [0,1].
\end{displaymath}
For multiple values of the parameter $\varepsilon$ around the value $\frac1\pi$, we have obtained the corresponding numerical stationary states $u'_{\varepsilon}$. We choose the constant $C(\varepsilon)$ in order to minimize the $\rm{L}^2$ norm of $u'_{\varepsilon}-v_{\varepsilon}$ and study the convergence of $u'_{\varepsilon}$ to $v_{\varepsilon}$
using the quantity
\begin{displaymath}
\frac{\|u'_{\varepsilon}-v_{\varepsilon}\|_{2}}{\|v_{\varepsilon}\|_{2}} = \frac{\sqrt{2}\,\|u'_{\varepsilon}-v_{\varepsilon}\|_{2}}{C(\varepsilon)\sqrt{\left(\frac{1}{\varepsilon^2}-\pi^2\right)}}.
\end{displaymath}
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\includegraphics[angle=0,width=8cm]{\BifurcationsSegmentOrdre}
\caption{Convergence of the ``bifurcated'' solutions on a segment.}
\label{FIG:N}
\end{psfrags}
\end{figure}
Figure \ref{FIG:N} is consistent with the expected theoretical results. We can see that the relative $\rm{L}^2$ error converges to $0$ as $\frac{1}{\varepsilon^2}$ converges to $\pi^2$,
and we find the exponent $\alpha_{segment}$ such that
\begin{displaymath}
\|u'_{\varepsilon} - v_{\varepsilon} \|_{2} \sim \tilde{C} \left|\frac{1}{\varepsilon^2}-\pi^2\right|^{\alpha_{segment}}
\end{displaymath}
where $\tilde{C}$ is an unknown constant. We have found $\alpha_{segment}=1/2 + 1.0254$, again almost
$3/2$.
We have said that the constants $C(\varepsilon)$ have been numerically chosen in order to minimize the $\rm{L}^2$ norm of $u'_{\varepsilon}-v_{\varepsilon}$. If we extend the results of \cite{MAWANG}, we expect that the constant $C(\varepsilon) \sim \frac{2}{\sqrt{3}}\varepsilon$.
Thus, on Figure \ref{FIG:NN}, we have drawn the logarithm of our minimal constant $C(\varepsilon)$ according to the logarithm of $\varepsilon$.
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Ordre Clambda VS sqrt(EPS) en Log.}{}
\includegraphics[angle=0,width=8cm]{\ConvergenceCstSegment}
\caption{Order of convergence of the minimal constant $C(\varepsilon)$.}
\label{FIG:NN}
\end{psfrags}
\end{figure}
If we study the slope, we find
\begin{equation}
C(\varepsilon) \sim 1.0844\ \varepsilon^{0.94594},
\end{equation}
which is again consistent with the theoretical formula.
\subsection{Asymptotic stable states on smooth domains}\label{S:4.3}
For a smooth domain $\Omega$, the hypotheses of Theorem 3.1 in \cite{MAWANG} hold.
We have considered an ellipse, on which the first eigenvalue $\rho_{1}\simeq 0.8776$ is simple. On Figure \ref{FIG:23}, we have drawn the corresponding first eigenvector.
\begin{figure}[htp]
\begin{center}
\includegraphics[angle=0,width=8cm]{\EigenEllipse}
\caption{First eigenvector on the ellipse.}\label{FIG:23}
\end{center}
\end{figure}
If $\frac{1}{\varepsilon^2} > \rho_{1}$, the problem \eqref{Eq:3.6} has two steady states $\pm u_{\varepsilon}$ which can be expressed as
\begin{displaymath}
\pm u_{\varepsilon} = \pm C(\varepsilon) \sqrt{\left(\frac{1}{\varepsilon^2}-\rho_{1}\right)} v_{\rho_{1}} + \sqrt{\varepsilon}\ o\left(\left|\frac{1}{\varepsilon^2}-\rho_{1}\right|^{1/2}\right),
\end{displaymath}
where $C(\varepsilon)$ is a constant which can depend on $\varepsilon$. We define the approximated attractors $\pm v_{\varepsilon}$ by
\begin{displaymath}
\pm v_{\varepsilon} := \pm C(\varepsilon) \sqrt{\left(\frac{1}{\varepsilon^2}-\rho_{1}\right)} v_{\rho_{1}},
\end{displaymath}
where $v_{\rho_{1}}$ is a fixed eigenvector.
On Figure \ref{FIG:25}, we have drawn one of the steady states.
\begin{figure}[htp]
\begin{center}
\includegraphics[angle=0,width=8cm]{\SingularEllipse}
\caption{Steady state on the ellipse.}\label{FIG:25}
\end{center}
\end{figure}
For multiple values of the parameter $\varepsilon$ around the value $\frac{1}{\sqrt{\rho_{1}}}$, we have obtained the corresponding numerical stationary states $u'_{\varepsilon}$. As in Section \ref{S:4.1}, we choose the constant $C(\varepsilon)$ in order to minimize the $\rm{L}^2$ norm of $u'_{\varepsilon}-v_{\varepsilon}$ and study the convergence of $u'_{\varepsilon}$ to $v_{\varepsilon}$. We consider the quantity
\begin{displaymath}
\frac{\|u'_{\varepsilon}-v_{\varepsilon}\|_{2}}{\|v_{\varepsilon}\|_{2}} = \frac{\|u'_{\varepsilon}-v_{\varepsilon}\|_{2}}{C(\varepsilon)\sqrt{\left(\frac{1}{\varepsilon^2}-\rho_{1}\right)}\|v_{\rho_{1}}\|_{2}}.
\end{displaymath}
According to Theorem 3.1 in \cite{MAWANG}, this relative $\rm{L}^2$ norm should converge to zero.
On Figure \ref{FIG:O}, we have drawn the decimal logarithm of this relative $\rm{L}^2$ norm according to the decimal logarithm of $\frac{1}{\varepsilon^2} - \rho_{1}$.
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\includegraphics[angle=0,width=8cm]{\BifurcationsSegmentOrdre}
\caption{Convergence of the ``bifurcated'' solutions on the ellipse.}
\label{FIG:O}
\end{psfrags}
\end{figure}
Figure \ref{FIG:O} corroborates the expected theoretical results.
We can see that the relative $\rm{L}^2$ error converges to $0$ as $\frac{1}{\varepsilon^2}$ converges to $\rho_{1}$. Then we compute the exponent $\alpha_{ellipse}$ such that
\begin{displaymath}
\|u'_{\varepsilon} - v_{\varepsilon} \|_{2} \sim \tilde{C} \left|\frac{1}{\varepsilon^2}-\rho_{1}\right|^{\alpha_{ellipse}}
\end{displaymath}
where $\tilde{C}$ is an unknown constant. We find the exponent $\alpha_{ellipse}=1/2 + 0.9580$.
\subsection{Asymptotic stable states on a trapezoid}\label{S:4.4}
We investigate whether the results of \cite{MAWANG} extend to non-smooth domains. We have tested a trapezoid, on which the first eigenvalue $\rho_{1} \simeq 2.2417$ is simple. On Figure \ref{FIG:R}, we have drawn the corresponding first eigenvector.
The two steady states $\pm u_{\varepsilon}$ should be expressed as
\begin{displaymath}
\pm u_{\varepsilon}(x) = \pm C(\varepsilon) \sqrt{\left(\frac{1}{\varepsilon^2}-\rho_{1}\right)} v_{\rho_{1}} + \sqrt{\varepsilon}\ o\left(\left|\frac{1}{\varepsilon^2}-\rho_{1}\right|^{1/2}\right), \quad x \in [0,1],
\end{displaymath}
where $C(\varepsilon)$ is a constant which can depend on $\varepsilon$.
We define the approximated attractors $\pm v_{\varepsilon}$ by
\begin{displaymath}
\pm v_{\varepsilon}(x) := \pm C(\varepsilon) \sqrt{\left(\frac{1}{\varepsilon^2}-\rho_{1}\right)} v_{\rho_{1}} , \quad x \in [0,1],
\end{displaymath}
where $v_{\rho_{1}}$ is a fixed eigenvector.
On Figure \ref{FIG:S}, we have drawn a steady state.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=8cm]{EigenTrapezoid1D.eps} %%%%%%%{EigenTrapezoid3D.eps}
\caption{Eigenvector of the first eigenvalue on the trapezoid.}\label{FIG:R}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=8cm]{\CercleMD}
\caption{Steady state on the trapezoid.}\label{FIG:S}
\end{center}
\end{figure}
For multiple values of the parameter $\varepsilon$ around the value $\frac{1}{\sqrt{\rho_{1}}}$, we have obtained the corresponding numerical stationary states $u'_{\varepsilon}$.
As in section \ref{S:4.1}, if we choose the constant $C(\varepsilon)$ in order to minimize the $\rm{L}^2$ norm of $u'_{\varepsilon}-v_{\varepsilon}$, we can study the convergence of $u'_{\varepsilon}$ to $v_{\varepsilon}$. We have found that
\begin{displaymath}
\frac{\|u'_{\varepsilon}-v_{\varepsilon}\|_{2}}{\|v_{\varepsilon}\|_{2}} = \frac{\|u'_{\varepsilon}-v_{\varepsilon}\|_{2}}{C(\varepsilon)\sqrt{\left(\frac{1}{\varepsilon^2}-\rho_{1}\right)}\|v_{\rho_{1}}\|_{2}}
\end{displaymath}
does not converge to $0$. It seems that the bifurcation is different in this case.
On Figure \ref{FIG:Q}, we have drawn the decimal logarithm of
${\|u'_{\varepsilon}-v_{\varepsilon}\|_{2}}$ according to the decimal logarithm of $\frac{1}{\varepsilon^2} - \rho_{1}$.
\begin{figure}[htp]
\centering
\begin{psfrags}
\psfrag{Title}{}
\psfrag{Xlabel}{}
\psfrag{Ylabel}{}
\includegraphics[angle=0,width=8cm]{\BifurcationsTrapezeOrdre}
\caption{Convergence of the ``bifurcated'' solutions on the trapezoidal domain.}
\label{FIG:Q}
\end{psfrags}
\end{figure}
We find that
\begin{displaymath}
\|u'_{\varepsilon} - v_{\varepsilon} \|_{2} \sim \tilde{C} \left|\frac{1}{\varepsilon^2}-\rho_{1}\right|^{\alpha_{trapezoid}},
\end{displaymath}
with $\alpha_{trapezoid}=0.49937$. Thus this difference is of the same order as each term, which explains why the relative error does not converge to $0$.
As in the case of the square, we can find numerically stable states corresponding to the next modes in the nomenclature of Maier-Paape and Miller in \cite{MR1950337}. We have found 4 numerically stable states (see Figure \ref{FIG:29}),
with energies that have been drawn on Figure \ref{FIG:30} against the length of the interface.
We can clearly see the linear dependence between the two (the red line is the least-squares linear regression).
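The regression line of Figure \ref{FIG:30} is a standard least-squares fit. The sketch below uses placeholder (length, energy) pairs, not the measured values of the four stable states:

```python
import numpy as np

# Placeholder (interface length, energy) pairs for four stable states;
# the measured values behind Figure FIG:30 are not reproduced here.
length = np.array([0.52, 0.78, 0.95, 1.30])
energy = np.array([0.41, 0.62, 0.75, 1.03])

# Least-squares regression line energy ~ a * length + b (the red line).
a, b = np.polyfit(length, energy, 1)
residual = energy - (a * length + b)
print(f"slope {a:.3f}, intercept {b:.3f}, max residual {np.abs(residual).max():.4f}")
```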
\begin{figure}[htp]
\begin{center}
\includegraphics[angle=0,width=6cm]{\BandeH}
\hspace{1cm}
\includegraphics[angle=0,width=6cm]{\CercleBG}
\vspace{10pt}
\includegraphics[angle=0,width=6cm]{\CercleHG}
\hspace{1cm}
\includegraphics[angle=0,width=6cm]{\CercleMD}
\caption{Numerically stable states on the trapezoid.}\label{FIG:29}
\end{center}
\end{figure}
\clearpage
\begin{figure}[htp]
\begin{center}
\includegraphics[angle=0,width=8cm]{\EnergiesTrapezoid}
\caption{Energies of the numerically stable states on the trapezoid according to the length of the interface.}\label{FIG:30}
\end{center}
\end{figure}
\noindent{\bf \large Acknowledgments:} We thank Prof. Arnaud Debussche for fruitful discussions about this subject.
\bibliographystyle{abbrv}
\section{Introduction}
Distributed control problems have long proved challenging
for control engineers. In 1968, Witsenhausen~\cite{Witsenhausen68}
gave a counterexample showing that even a seemingly simple distributed
control problem can be hard to solve. For the counterexample,
Witsenhausen chose a two-stage distributed LQG system and provided a
nonlinear control strategy that outperforms all linear laws. It is now
clear that the non-classical information pattern of Witsenhausen's
problem makes it quite challenging\footnote{In words of Yu-Chi
Ho~\cite{YuChiHoCDC08}, ``the simplest problem becomes the hardest
problem.''}; the optimal strategy and the optimal costs for the
problem are still unknown --- non-convexity makes
the search for an optimal strategy hard~\cite{bansalbasar,baglietto,
LeeLauHo}. Discrete approximations of the problem~\cite{hochang} are
even NP-complete\footnote{More precisely, results in~\cite{papadimitriou} imply that the discrete counterparts to the Witsenhausen counterexample are NP-complete if the assumption of Gaussianity of the primitive random variables is relaxed. Further, it is also shown in~\cite{papadimitriou} that with this relaxation, a polynomial time solution to the original \textit{continuous} problem would imply $P=NP$, and thus conceptually the relaxed continuous problem is also hard.}~\cite{papadimitriou}.
In the absence of a solution, research on the counterexample has bifurcated into two different directions. Since there is no known systematic approach to obtain provably optimal solutions, a body of literature (e.g.~\cite{baglietto}~\cite{LeeLauHo}~\cite{marden} and the references therein) applies search heuristics to explore the space of possible control actions and obtain intuition into the structure of good strategies. Work in this direction has also yielded considerable insight into addressing non-convex problems in general.
In the other direction, the emphasis is on understanding the role of \textit{implicit communication} in the counterexample. In distributed control, control actions not only attempt to reduce the immediate control costs, they can also communicate relevant information to other controllers to help them reduce costs. Witsenhausen~\cite[Section 6]{Witsenhausen68} and Mitter and Sahai~\cite{AreaExamPaper} aim at developing systematic constructions based on implicit communication. Witsenhausen's two-point quantization strategy is motivated by the optimal strategy for two-point symmetric distributions of the initial state~\cite[Section 5]{Witsenhausen68} and it outperforms linear strategies for certain parameter choices. Mitter and Sahai~\cite{AreaExamPaper} propose multipoint-quantization strategies that, depending on the problem parameters, can outperform linear strategies by an arbitrarily-large factor.
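The advantage of quantization over linear strategies is easy to check by Monte Carlo simulation of the two-stage cost $k^2 E[u_1^2]+E[(x_1-u_2)^2]$. The sketch below compares the best scalar linear strategy with a simple uniform-quantization strategy; the parameter values $k=0.2$, $\sigma_0=5$ are illustrative choices, and the quantizer is a plain uniform one rather than the optimized constructions of~\cite{AreaExamPaper}.

```python
import numpy as np

rng = np.random.default_rng(0)
k2, sigma0, n = 0.04, 5.0, 100_000          # illustrative: k = 0.2, sigma_0 = 5
x0 = rng.normal(0.0, sigma0, n)              # initial state
noise = rng.normal(0.0, 1.0, n)              # observation noise at controller 2

def linear_cost(a):
    """Linear first stage x1 = a*x0, LMMSE second stage u2 = b*y."""
    x1 = a * x0
    y = x1 + noise
    b = (a * sigma0) ** 2 / ((a * sigma0) ** 2 + 1.0)
    return k2 * np.mean((x1 - x0) ** 2) + np.mean((x1 - b * y) ** 2)

def quant_cost(delta):
    """Force x1 onto a uniform grid of spacing delta; controller 2 decodes
    by rounding its observation y to the nearest grid point."""
    x1 = delta * np.round(x0 / delta)
    y = x1 + noise
    u2 = delta * np.round(y / delta)
    return k2 * np.mean((x1 - x0) ** 2) + np.mean((x1 - u2) ** 2)

best_linear = min(linear_cost(a) for a in np.linspace(0.0, 1.0, 101))
best_quant = min(quant_cost(d) for d in np.linspace(2.0, 12.0, 101))
print(f"best linear cost {best_linear:.3f}, best quantization cost {best_quant:.3f}")
```

In the regime $k\sigma_0 = 1$ simulated here, the quantization strategy beats every scalar linear law by a wide margin, which is the phenomenon the counterexample exhibits.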
Various modifications to the counterexample investigate if misalignment of these two goals of control and implicit communication makes the problems hard~\cite{bansalbasar,basarCDC08,rotkowitzCDC08,RotkowitzLall,Allerton09Paper,rotkowitz} (see~\cite{WitsenhausenJournal} for a survey of other such modifications). Of particular interest are two works, those of Rotkowitz and Lall~\cite{RotkowitzLall}, and Rotkowitz~\cite{rotkowitz}. The first work~\cite{RotkowitzLall} shows that with extremely fast, infinite-capacity, and perfectly reliable external channels, the optimal controllers are linear not just for the Witsenhausen's counterexample (which is a simple observation), but for more general problems as well. This suggests that allowing for an external channel between the two controllers in Witsenhausen's counterexample might simplify the problem. However, when the channel is not perfect, Martins~\cite{MartinsSideInfo} shows that finding optimal solutions can be hard\footnote{Martins shows that nonlinear strategies that do not even use the external channel can outperform linear ones that do use the channel where the external channel SNR is high. As is suggested by what David Tse calls the ``deterministic perspective" (along the lines of~\cite{DeterministicModel,SalmanThesis,DeterministicApproach}), linear strategies do not make good use of the external channel because they only communicate the ``most significant bits'' --- which can anyway be estimated reliably at the second controller. So if the uncertainty in the initial state is large, the external channel is only of limited help and there may be substantial advantage in having the controllers talk through the plant. A similar problem is considered by Shoarinejad et al in~\cite{shoarinejad}, where noisy side information of the source is available at the receiver. 
Since this formulation is even more constrained than that in~\cite{MartinsSideInfo}, it is clear that nonlinear strategies outperform linear for this problem as well.}. A closer inspection of the problem in~\cite{MartinsSideInfo} reveals that nonlinear strategies can outperform linear ones by an arbitrarily large factor for any fixed SNR on the external channel. Even to make good use of the external channel resource, one needs nonlinear strategies.
The second work~\cite{rotkowitz} shows that if one considers the induced norm instead of the original expected quadratic cost, linear control laws are optimal and easy to find. The induced norm formulation is therefore easy to solve, and at the same time, it makes no assumptions on the state and the noise distributions. This led Doyle to ask if Witsenhausen's counterexample (with expected quadratic cost) is at all relevant~\cite{PathsAhead} --- after all, not only is the LQG formulation more constrained, it is also harder to solve.
The question thus becomes what norm is more appropriate, and the answer must come from what is relevant in practical situations. In practice, one usually knows the ``typical'' amplitude of the noise and the initial state, or at least rough bounds on them. The induced-norm formulation may therefore be quite conservative: since no assumptions are made on the state and the noise, it requires budgeting for completely arbitrary behavior of state and noise --- they can even collude to raise the costs for the chosen strategy. To see how conservative the induced-norm formulation can be, notice the following: even allowing for colluding state and noise, mere knowledge of a bound on the noise amplitude suffices to have quantization-based nonlinear strategies outperform linear strategies by an arbitrarily large factor (with the expected cost replaced by a hard budget; the proof is simpler than that in~\cite{AreaExamPaper} and is left as an exercise to the interested reader for reasons of limited space). Conceptually, the LQG formulation is only abstracting some knowledge of noise and initial state behavior. In practical situations where such knowledge exists, designs based on an induced-norm formulation (and linear strategies) may be needlessly expensive because they budget for impossible events.
The fact that nonlinear strategies can be arbitrarily better brings us to a question that has received little attention in the literature --- how far are the proposed nonlinear strategies from the optimal? It is believed that the strategies of Lee, Lau and Ho~\cite{LeeLauHo} are close to optimal. In Section~\ref{sec:conclusions}, we will see that these strategies can be viewed as an instance of the ``dirty-paper coding'' strategy in information theory, and quantify their advantage over pure quantization-based strategies. Despite their improved performance, there is no guarantee that these strategies are indeed close to optimal\footnote{The search in~\cite{LeeLauHo} is not exhaustive. The authors first find a good quantization-based solution. Inspired by piecewise-linear strategies (from the neural-network-based search of Baglietto \textit{et al.}~\cite{baglietto}), each quantization step is broken into several small sub-steps to approximate a piecewise linear curve.
}. Witsenhausen~\cite[Section 7]{Witsenhausen68} derived a lower bound on the costs that is loose in the interesting regimes of small $k$ and large $\sigma_0^2$~\cite{WitsenhausenJournal,CDC09paper}, and hence is insufficient to obtain any guarantee on the gap from optimality.
Towards obtaining such a guarantee, a strategic simplification of the problem was introduced in~\cite{CDCWitsenhausen,WitsenhausenJournal} where we consider an asymptotically-long vector version of the problem. This problem is related to a toy communication problem that we call ``Assisted Interference Suppression'' (AIS) which is an extension of the dirty-paper coding (DPC)~\cite{CostaDirtyPaper} model in information theory. There has been a burst of interest in extensions to DPC in information theory mainly along two lines of work --- multi-antenna Gaussian channels, and the ``cognitive-radio channel.'' For multi-antenna Gaussian channels, a problem of much theoretical and practical interest, DPC turns out to be the optimal strategy (see~\cite{ShamaiBroadcast} and the references therein). The ``cognitive radio channel'' problem was formulated by Devroye \textit{et al.}~\cite{Devroye1}. This inspired much work in asymmetric cooperation between nodes~\cite{JovicicViswanath,KimStateAmplification,MerhavMasking,KhistiLatticeMDPC,KotagiriLaneman}. In our work~\cite{WitsenhausenJournal,CDCWitsenhausen}, we developed a new lower bound on the optimal performance of the vector Witsenhausen problem. Using this bound, we show that vector-quantization-based strategies attain within a factor of $4.45$ of the optimal cost for all problem parameters in the limit of infinite vector length. Further, combinations of linear and DPC-based strategies attain within a factor of $2$ of the optimal cost. This factor was later improved to $1.3$ in~\cite{ITW09Paper} by improving the lower bound. While a constant-factor result does not establish true optimality, such results are often helpful in the face of intractable problems like those that are otherwise NP-hard \cite{ApproximationBook}. This constant-factor spirit has also been useful in understanding other stochastic control problems
\cite{CogillLall06, CogillLallHespanha07} and in the asymptotic analysis of problems in multiuser wireless communication \cite{EtkinOneBit, DeterministicModel}.
While the lower bound in~\cite{WitsenhausenJournal} holds for all vector lengths, and hence for the scalar counterexample as well, the ratio of the costs attained by the strategies of~\cite{AreaExamPaper} and the lower bound diverges in the limit
$k\rightarrow 0$ and $\sigma_0\rightarrow\infty$. This suggests that there is a significant finite-dimensional aspect of the problem that
is being lost in the infinite-dimensional limit: either quantization-based strategies are bad, or the lower bound of~\cite{WitsenhausenJournal} is very loose. This effect is
elucidated in \cite{CDC09paper} by deriving a different lower bound showing
that quantization-based strategies indeed attain within a
constant\footnote{The constant is large in \cite{CDC09paper}, but as
this paper shows, this is an artifact of the proof rather than
reality.} factor of the optimal cost for Witsenhausen's original
problem. The bound in~\cite{CDC09paper} is in the spirit of Witsenhausen's original lower
bound, but is more intricate. It captures the idea that observation
noise can force a second-stage cost to be incurred unless the first
stage cost is large.
In this paper, we revert to the line of attack initiated by the vector
simplification of \cite{WitsenhausenJournal}. In Section~\ref{sec:notation}, we formally state the vector version of the counterexample. To obtain
good control strategies, we observe that the action of the first controller in the quantization-based strategy of~\cite{AreaExamPaper} can be thought of as forcing the state to a point on a one-dimensional
\textit{lattice}. Extending this idea, in Section~\ref{sec:lattice}, we provide lattice-based quantization strategies for finite-dimensional spaces and analyze their performance.
Building upon the vector
lower bound of~\cite{WitsenhausenJournal}, a new lower bound is derived in Section~\ref{sec:lowerbound} which is in the spirit of large-deviations-based information-theoretic bounds for finite-length communication problems\footnote{An alternative Central Limit Theorem (CLT)-based approach has also been used in the information-theory literature~\cite{Baron,VerduCLT,VerduDispersion}. In~\cite{VerduCLT,VerduDispersion}, the approach is used to obtain extremely tight approximations at moderate blocklengths for Shannon's noisy communication problem.}
(e.g.~\cite{Gallager,PinskerNoFeedback,OurUpperBoundPaper,
waterslide}). In particular, our new bound extends the tools
in~\cite{waterslide} to a setting with unbounded distortion measure. In Section~\ref{sec:ratio}, we combine the lattice-based upper bound (Section~\ref{sec:lattice}) and the large-deviations lower bound (Section~\ref{sec:lowerbound}) to show that lattice-based quantization strategies attain within a constant factor of the optimal cost for any finite length, uniformly over all problem parameters. For example, this constant factor is numerically found to be smaller than $8$ for the original scalar problem. We also provide a constant factor that holds uniformly over all vector lengths.
To understand the significance of the result, consider the following. At $k=0.01$ and $\sigma_0=500$, the cost attained by the optimal linear scheme is close to $1$. The cost attained by a quantization-based\footnote{The quantization points are regularly spaced about $9.92$ units apart. This results in a first stage cost of about $8.2\times 10^{-4}$ and a second stage cost of about $6.7\times 10^{-5}$.} scheme is $8.894\times 10^{-4}$. Our new lower bound on the cost is $3.170\times 10^{-4}$. Despite the small value of the lower bound,
the ratio of the quantization-based upper bound and the lower bound
for this choice of parameters is less than three!
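The first-stage cost quoted above is consistent with the standard uniform-quantization approximation: when $\sigma_0$ is much larger than the spacing $\Delta$, the quantization error is roughly uniform on $(-\Delta/2,\Delta/2)$, giving a first-stage cost of about $k^2\Delta^2/12$. A minimal Monte Carlo sketch (the simulation is ours; the parameter values are those of the example above):

```python
import random

random.seed(0)
k, sigma0, delta = 0.01, 500.0, 9.92  # parameter values from the example above
n = 200_000

# First controller: force x0 to the nearest point of the lattice delta*Z.
sq_err = 0.0
for _ in range(n):
    x0 = random.gauss(0.0, sigma0)
    u1 = delta * round(x0 / delta) - x0
    sq_err += u1 * u1
first_stage_cost = k**2 * sq_err / n

# Uniform-quantization-error approximation: k^2 * delta^2 / 12.
analytic = k**2 * delta**2 / 12
print(first_stage_cost, analytic)  # both close to the quoted 8.2e-4
```

Both numbers come out close to the $8.2\times 10^{-4}$ quoted in the footnote.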
We conclude in Section~\ref{sec:conclusions} outlining directions of future research and speculating on the form of finite-dimensional strategies (following~\cite{WitsenhausenJournal}) that we conjecture might be optimal.
\section{Notation and problem statement}
\label{sec:notation}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.45]{infotheory}
\caption{Block-diagram for vector version of Witsenhausen's counterexample of length $m$. }
\label{fig:infotheory}
\end{center}
\end{figure}
Vectors are denoted in bold. Upper case tends to be used for random
variables, while lower case symbols represent their realizations. $W(m,k^2,\sigma_0^2)$ denotes the vector version of
Witsenhausen's problem of length $m$, defined as follows (shown in Fig.~\ref{fig:infotheory}):
\begin{itemize}
\item The initial state $\m{X}_0$ is Gaussian, distributed $\mathcal{N}(0,\sigma_0^2\mathbb{I}_m)$, where $\mathbb{I}_m$ is the identity matrix of size $m\times m$.
\item The state transition functions describe the state evolution with time. The state transitions are linear:
\begin{eqnarray*}
\m{X}_1 &=&\m{X}_0+\m{U}_1,\;\;\;\text {and}\\
\m{X}_2 &=&\m{X}_1-\m{U}_2.
\end{eqnarray*}
\item The outputs observed by the controllers:
\begin{eqnarray}
\nonumber\m{Y}_1&=&\m{X}_0,\;\;\;\text{ and}\\
\m{Y}_2&=&\m{X}_1+\m{Z},
\label{eq:outputs}
\end{eqnarray}
where $\m{Z}\sim \mathcal{N}(0,\sigma_Z^2\mathbb{I}_m)$ is Gaussian distributed observation noise.
\item The control objective is to minimize the expected cost, averaged over the random realizations of $\m{X}_0$ and $\m{Z}$. The total cost is a quadratic function of the state and the input given by the sum of two terms:
\begin{eqnarray*}
J_1(\m{x}_1,\m{u}_1) &=& \frac{1}{m}k^2\|\m{u}_1\|^2,\; \text{and}\\
J_2(\m{x}_2,\m{u}_2)&=&\frac{1}{m}\|\m{x}_2\|^2
\end{eqnarray*}
where $\|\cdot\|$ denotes the usual Euclidean 2-norm.
The cost expressions are normalized by the vector-length $m$ to allow for natural comparisons between different vector-lengths. A control strategy is denoted by $\gamma=(\gamma_1,\gamma_2)$, where $\gamma_i$ is the function that maps the observation $\m{y}_i$ at $\co{i}$ to the control input $\m{u}_i$. For a fixed $\gamma$, $\m{x}_1=\m{x}_0+\gamma_1(\m{x}_0)$ is a function of $\m{x}_0$. Thus the first stage cost can instead be written as a function $J_1^{(\gamma)}(\m{x}_0)=J_1(\m{x}_0+\gamma_1(\m{x}_0),\gamma_1(\m{x}_0))$ and the second stage cost can be written as $J_2^{(\gamma)}(\m{x}_0,\m{z})=J_2(\m{x}_0+\gamma_1(\m{x}_0)-\gamma_2(\m{x}_0+\gamma_1(\m{x}_0)+\m{z}),\gamma_2(\m{x}_0+\gamma_1(\m{x}_0)+\m{z}))$.
For given $\gamma$, the expected costs (averaged over $\m{x}_0$ and $\m{z}$) are denoted by $\bar{J}^{(\gamma)}(m,k^2,\sigma_0^2)$ and $\bar{J}_i^{(\gamma)}(m,k^2,\sigma_0^2)$ for $i=1,2$. We define the optimal cost $\bar{J}_{\min}(m,k^2,\sigma_0^2)$ as follows
\begin{equation}
\bar{J}_{\min}(m,k^2,\sigma_0^2):=\inf_{\gamma}\bar{J}^{(\gamma)}(m,k^2,\sigma_0^2).
\end{equation}
\end{itemize}
We note that for the scalar case of $m=1$, the problem is Witsenhausen's original counterexample~\cite{Witsenhausen68}.
Observe that scaling $\sigma_0$ and $\sigma_Z$ by the same factor essentially does not change the problem --- the solution can also be scaled by the same factor (with the resulting cost scaling quadratically with it). Thus, without loss of generality, we assume that the variance of the Gaussian observation noise is $\sigma_Z^2=1$ (as is also assumed in~\cite{Witsenhausen68}). The pdf of the initial state $\m{X}_0$ is denoted by $f_0(\cdot{})$, and that of the noise $\m{Z}$ by $f_Z(\cdot{})$. In our proof techniques, we also consider a hypothetical observation noise $\m{Z}_G\sim\mathcal{N}(0,\sigma_G^2\mathbb{I}_m)$ with variance $\sigma_G^2\geq 1$. The pdf of this test noise is denoted by $f_G(\cdot{})$. We use $\psi(m,r)$ to denote $\Pr(\|\m{Z}\|\geq r)$ for $\m{Z}\sim\mathcal{N}(0,\mathbb{I}_m)$.
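The tail function $\psi(m,r)$ is the tail of a chi distribution with $m$ degrees of freedom; for $m=1$ and $m=2$ it has the closed forms $\mathrm{erfc}(r/\sqrt{2})$ and $e^{-r^2/2}$. A small numerical sketch (ours, for illustration) estimates it by Monte Carlo for general $m$ and checks these cases:

```python
import math
import random

def psi_mc(m, r, n=200_000, seed=1):
    """Monte Carlo estimate of psi(m, r) = Pr(||Z|| >= r) for Z ~ N(0, I_m)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        sq_norm = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(m))
        if sq_norm >= r * r:
            hits += 1
    return hits / n

def psi_closed(m, r):
    """Closed-form chi-distribution tails for m = 1 and m = 2."""
    if m == 1:
        return math.erfc(r / math.sqrt(2.0))
    if m == 2:
        return math.exp(-r * r / 2.0)
    raise ValueError("closed form given only for m = 1, 2")

print(psi_mc(1, 1.0), psi_closed(1, 1.0))  # both near 0.317
print(psi_mc(2, 1.0), psi_closed(2, 1.0))  # both near 0.607
```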
Subscripts in expectation expressions denote the random variable being averaged over (e.g. $\expectp{\m{X}_0,\m{Z}_G}{\cdot{}}$ denotes averaging over the initial state $\m{X}_0$ and the test noise $\m{Z}_G$).
\section{Lattice-based quantization strategies}
\label{sec:lattice}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{LatticeCovering}
\includegraphics[width=9cm]{LatticePacking}
\end{center}
\caption{Covering and packing for the 2-dimensional hexagonal
lattice. The packing-covering ratio for this lattice is $\xi=\frac{2}{\sqrt{3}}
\approx 1.15$~\cite[Appendix C]{Fischer}. The first controller
forces the initial state $\m{x}_0$ to the lattice point nearest to
it. The second controller estimates $\m{\widehat{x}}_1$ to be the
lattice point at the center of a packing sphere if $\m{y}_2$ falls in
one of the packing spheres. Otherwise, it essentially gives up and
estimates $\m{\widehat{x}}_1=\m{y}_2$, the received output itself. A
hexagonal lattice-based scheme would perform better for the 2-D
Witsenhausen problem than the square lattice (of
$\xi=\sqrt{2}\approx1.41$~\cite[Appendix C]{Fischer}) because it has
a smaller $\xi$.}
\label{fig:lattice}
\end{figure}
Lattice-based quantization strategies are the natural generalizations of scalar quantization-based strategies~\cite{AreaExamPaper}. An introduction to lattices can be found
in~\cite{SloaneLattices,almosteverything}. Relevant definitions are
reviewed below. $\mathcal{B}$ denotes the unit ball in $\mathbb{R}^m$.
\begin{definition}[Lattice]
An $m$-dimensional lattice $\Lambda$ is a discrete set of points in $\mathbb{R}^m$ such that if $\m{x},\m{y}\in\Lambda$, then $\m{x}+\m{y}\in \Lambda$, and if $\m{x}\in\Lambda$, then $-\m{x}\in\Lambda$.
\end{definition}
\begin{definition}[Packing and packing radius]
Given an $m$-dimensional lattice $\Lambda$ and a radius $r$, the set
$\Lambda+r\mathcal{B}$ is a \textit{packing} of Euclidean $m$-space if
for all points $\m{x},\m{y}\in\Lambda$,
$(\m{x}+r\mathcal{B})\bigcap(\m{y}+r\mathcal{B})=\emptyset$. The packing
radius $r_p$ is defined as $r_p:=\sup\{r:\Lambda+ r\mathcal{B}
\;\text{is a packing} \}$.
\end{definition}
\begin{definition}[Covering and covering radius]
Given an $m$-dimensional lattice $\Lambda$ and a radius $r$, the set
$\Lambda+r\mathcal{B}$ is a \textit{covering} of Euclidean $m$-space
if $\mathbb{R}^m\subseteq \Lambda + r\mathcal{B}$. The covering radius
$r_c$ is defined as $r_c:=\inf\{r:\Lambda+ r\mathcal{B} \;\text{is a
covering} \}$.
\end{definition}
\begin{definition}[Packing-covering ratio]
The \textit{packing-covering ratio} (denoted by $\xi$) of a lattice
$\Lambda$ is the ratio of its covering radius to its packing radius,
$\xi=\frac{r_c}{r_p}$.
\end{definition}
Because it creates no ambiguity, we do not include the dimension $m$
and the choice of lattice $\Lambda$ in the notation of $r_c$, $r_p$
and $\xi$, though these quantities depend on $m$ and
$\Lambda$.
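As a concrete example (the cubic lattice is our illustration, not one of the lattices in the figure): for $\mathbb{Z}^m$, the packing radius is $r_p=1/2$, the covering radius is $r_c=\sqrt{m}/2$ (attained at $(1/2,\ldots,1/2)$), and hence $\xi=\sqrt{m}$. A short sketch verifies numerically that no point is farther than $r_c$ from the lattice:

```python
import math
import random

def dist_to_cubic_lattice(x):
    """Distance from x to the nearest point of the integer lattice Z^m."""
    return math.sqrt(sum((xi - round(xi)) ** 2 for xi in x))

m = 3
r_p = 0.5                    # half the minimum distance between lattice points
r_c = math.sqrt(m) / 2.0     # attained at the deep hole (1/2, ..., 1/2)
xi = r_c / r_p               # packing-covering ratio: sqrt(m)

# Every sampled point should lie within r_c of the lattice.
rng = random.Random(2)
worst = max(dist_to_cubic_lattice([rng.uniform(-5.0, 5.0) for _ in range(m)])
            for _ in range(100_000))
print(worst, r_c)  # worst-case sampled distance never exceeds r_c
```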
For a given dimension $m$, a natural control strategy that uses a lattice $\Lambda$ of covering radius $r_c$ and packing radius $r_p$ is as follows. The first controller uses the input $\m{u}_1$ to force the state $\m{x}_0$ to the lattice point nearest to $\m{x}_0$. The second controller estimates $\m{x}_1$ to be the lattice point nearest to $\m{y}_2$. For analytical ease, we instead consider an inferior strategy where the second controller estimates $\m{x}_1$ to be a lattice point only if the lattice point lies within the sphere of radius $r_p$ around $\m{y}_2$. If no lattice point exists in the sphere, the second controller estimates $\m{x}_1$ to be $\m{y}_2$, the received vector itself. The actions $\gamma_1(\cdot{})$ of $\co{1}$ and $\gamma_2(\cdot{})$ of $\co{2}$ are therefore given by
\begin{eqnarray*}
\gamma_1(\m{x}_0) &=& -\m{x}_0+\underset{{\m{x}_1}\in\Lambda }{\text{arg min}}\;\|\m{x}_1-\m{x}_0\|^2,\\
\gamma_2(\m{y}_2) &=& \left\{\begin{array}{cc}
\m{\widetilde{x}}_1 & \text{if}\;\exists\; \m{\widetilde{x}}_1\in\Lambda\;\text{s.t.}\; \|\m{y}_2-\m{\widetilde{x}}_1\|^2< r_p^2\\
\m{y}_2 & \text{otherwise}
\end{array}\right. .
\end{eqnarray*}
The event where there exists no such $\m{\widetilde{x}}_1\in\Lambda$ is referred to as \textit{decoding failure}. In the following, we denote $\gamma_2(\m{y}_2)$ by $\m{\widehat{x}}_1$, the estimate of $\m{x}_1$.
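For $m=1$ with $\Lambda=\Delta\mathbb{Z}$, we have $r_p=r_c=\Delta/2$, decoding failure (almost surely) never occurs, and both controllers reduce to rounding to the nearest multiple of $\Delta$. A Monte Carlo sketch of this strategy (the parameter values $k=0.2$, $\sigma_0=5$, $\Delta=6$ are our own illustrative choices) shows it beating the best of the two simple linear strategies, whose cost is $\min(k^2\sigma_0^2,\ \sigma_0^2/(\sigma_0^2+1))$:

```python
import random

random.seed(3)
k, sigma0, delta = 0.2, 5.0, 6.0   # illustrative parameters (our choice)
n = 200_000

total = 0.0
for _ in range(n):
    x0 = random.gauss(0.0, sigma0)
    x1 = delta * round(x0 / delta)       # gamma_1: quantize to the lattice
    y2 = x1 + random.gauss(0.0, 1.0)     # unit-variance observation noise
    xhat1 = delta * round(y2 / delta)    # gamma_2: nearest lattice point
    total += k**2 * (x1 - x0) ** 2 + (x1 - xhat1) ** 2
avg_cost = total / n

linear_best = min(k**2 * sigma0**2, sigma0**2 / (sigma0**2 + 1))
print(avg_cost, linear_best)  # the lattice strategy wins by a wide margin
```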
\begin{theorem}
\label{thm:upperbound}
Using a lattice-based strategy (as described above) for $W(m,k^2,\sigma_0^2)$ with $r_c$ and $r_p$ the covering and the packing radius for the lattice, the total average cost is upper bounded by
\begin{eqnarray*}
\bar{J}^{(\gamma)}(m,k^2,\sigma_0^2)\leq \inf_{P\geq 0} k^2P +
\left(\sqrt{\psi(m+2,r_p)}+\sqrt{\frac{P}{\xi^2}}\sqrt{\psi(m,r_p)}\right)^2,
\end{eqnarray*}
where $\xi=\frac{r_c}{r_p}$ is the packing-covering ratio for the
lattice, and $\psi(m,r)=\Pr(\|\m{Z}\|\geq r)$. The following looser bound also holds
\begin{eqnarray*}
\bar{J}^{(\gamma)}(m,k^2,\sigma_0^2)
\leq \inf_{P> \xi^2}
k^2P+\left(1+\sqrt{\frac{P}{\xi^2}}\right)^2
e^{-\frac{mP}{2\xi^2}+\frac{m+2}{2}\left(1+\lon{\frac{P}{\xi^2}}\right)}.
\end{eqnarray*}
\end{theorem}
\textit{Remark}: The latter loose bound is useful for analytical manipulations when proving explicit bounds on the ratio of the upper and lower bounds in Section~\ref{sec:ratio}.
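For $m=1$ (where $\xi=1$, so $r_p=\sqrt{P}$), the first bound of Theorem~\ref{thm:upperbound} can be evaluated using the closed-form chi tails $\psi(1,r)=\mathrm{erfc}(r/\sqrt{2})$ and $\psi(3,r)=\mathrm{erfc}(r/\sqrt{2})+r\sqrt{2/\pi}\,e^{-r^2/2}$. A sketch (the grid search and the choice $k=0.01$ are ours):

```python
import math

def psi(m, r):
    """Chi-distribution tail Pr(||Z|| >= r), closed forms for m = 1 and m = 3."""
    if m == 1:
        return math.erfc(r / math.sqrt(2.0))
    if m == 3:
        return (math.erfc(r / math.sqrt(2.0))
                + r * math.sqrt(2.0 / math.pi) * math.exp(-r * r / 2.0))
    raise ValueError("closed form given only for m = 1, 3")

def upper_bound_m1(k, P):
    """First bound of the theorem for m = 1, xi = 1 (so r_p = sqrt(P))."""
    rp = math.sqrt(P)
    return k**2 * P + (math.sqrt(psi(3, rp))
                       + math.sqrt(P) * math.sqrt(psi(1, rp))) ** 2

k = 0.01  # small-k regime, where the best linear cost is close to 1
best = min(upper_bound_m1(k, 0.01 * i) for i in range(1, 10_000))
print(best)  # orders of magnitude below the linear cost
```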
\begin{proof}
Note that because $\Lambda$ has a covering radius of $r_c$, $\|\m{x}_1-\m{x}_0\|^2\leq r_c^2$. Thus the first stage cost is bounded above by $\frac{1}{m}k^2r_c^2$. A tighter bound can be provided for a specific lattice and finite $m$ (for example, for $m=1$, the first stage cost is approximately $k^2\frac{r_c^2}{3}$ if $r_c^2 \ll \sigma_0^2$ because the distribution of $\m{x}_0$ conditioned on it lying in any of the quantization bins is approximately uniform at least for the most likely bins).
For the second stage, observe that
\begin{eqnarray}
\label{eq:eachpoint}
\expectp{\m{X}_1,\m{Z}}{\|\m{X}_1-\m{\widehat{X}}_1\|^2}=\expectp{\m{X}_1}{\expectp{\m{Z}}{\|\m{X}_1-\m{\widehat{X}}_1\|^2|\m{X}_1}}.
\end{eqnarray}
Denote by $\mathcal{E}_m$ the event $\{\|\m{Z}\|^2\geq r_p^2\}$. Observe that under the event $\mathcal{E}_m^c$, $\m{\widehat{X}}_1=\m{X}_1$, resulting in a zero second-stage cost. Thus,
\begin{eqnarray*}
\expectp{\m{Z}}{\|\m{X}_1-\m{\widehat{X}}_1\|^2|\m{X}_1}&=&\expectp{\m{Z}}{\|\m{X}_1-\m{\widehat{X}}_1\|^2\indi{\mathcal{E}_m}|\m{X}_1}+\expectp{\m{Z}}{\|\m{X}_1-\m{\widehat{X}}_1\|^2\indi{\mathcal{E}_m^c}|\m{X}_1}\\
&=&\expectp{\m{Z}}{\|\m{X}_1-\m{\widehat{X}}_1\|^2\indi{\mathcal{E}_m}|\m{X}_1}.
\end{eqnarray*}
We now bound the squared-error under the error event $\mathcal{E}_m$, when either $\m{x}_1$ is decoded erroneously, or there is a decoding failure. If $\m{x}_1$ is decoded erroneously to a lattice point $\m{\widetilde{x}}_1\neq \m{x}_1$, the squared-error can be bounded as follows
\begin{eqnarray*}
\|\m{x}_1-\m{\widetilde{x}}_1\|^2 = \|\m{x}_1-\m{y}_2+\m{y}_2- \m{\widetilde{x}}_1\|^2
\leq \left(\|\m{x}_1-\m{y}_2\|+\|\m{y}_2- \m{\widetilde{x}}_1\|\right)^2
\leq \left(\|\m{z}\|+r_p\right)^2.
\end{eqnarray*}
If $\m{x}_1$ is decoded as $\m{y}_2$, the squared-error is simply
$\|\m{z}\|^2$, which we also upper bound by
$\left(\|\m{z}\|+r_p\right)^2$. Thus, under event $\mathcal{E}_m$,
the squared error $\|\m{x}_1-\m{\widehat{x}}_1\|^2$ is bounded above
by $\left(\|\m{z}\|+r_p\right)^2$, and hence
\begin{eqnarray}
\label{eq:andhence}
\expectp{\m{Z}}{\|\m{X}_1-\m{\widehat{X}}_1\|^2|\m{X}_1}&\leq& \expectp{\m{Z}}{\left(\|\m{Z}\|+r_p\right)^2\indi{\mathcal{E}_m}|\m{X}_1}\nonumber\\
&\overset{(a)}{=}&\expectp{\m{Z}}{\left(\|\m{Z}\|+r_p\right)^2\indi{\mathcal{E}_m}},
\end{eqnarray}
where $(a)$ uses the fact that the pair $(\m{Z},\indi{\mathcal{E}_m})$ is independent of $\m{X}_1$. Now, let $P=\frac{r_c^2}{m}$, so that the first stage cost is at most
$k^2P$. The following lemma helps us derive the upper bound.
\begin{lemma}
\label{lem:upperbound}
For a given lattice with $r_p^2=\frac{r_c^2}{\xi^2}=\frac{mP}{\xi^2}$, the following bound holds
\begin{eqnarray*}
\frac{1}{m}\expectp{\m{Z}}{\left(\|\m{Z}\|+r_p\right)^2\indi{\mathcal{E}_m}}
\leq \left(\sqrt{\psi(m+2,r_p)}+\sqrt{\frac{P}{\xi^2}}\sqrt{\psi(m,r_p)}\right)^2.
\end{eqnarray*}
The following (looser) bound also holds as long as $P>\xi^2$,
\begin{eqnarray*}
\frac{1}{m}\expectp{\m{Z}}{\left(\|\m{Z}\|+r_p\right)^2\indi{\mathcal{E}_m}}\leq \left(1+\sqrt{\frac{P}{\xi^2}}\right)^2e^{-\frac{mP}{2\xi^2}+\frac{m+2}{2}\left(1+\lon{\frac{P}{\xi^2}}\right)}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
See Appendix~\ref{app:upperbound}.
\end{proof}
The theorem now follows from~\eqref{eq:eachpoint},~\eqref{eq:andhence} and Lemma~\ref{lem:upperbound}.
\end{proof}
\section{Lower bounds on the cost}
\label{sec:lowerbound}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{LowerBoundDerivationMay5}
\caption{A pictorial representation of the proof for the lower bound
assuming $\sigma_0^2=30$. The solid curves show the vector lower bound
of~\cite{WitsenhausenJournal} for various values of observation
noise variances, denoted by $\sigma_G^2$. Conceptually, multiplying these curves by the
probability of that channel behavior yields the shadow curves for the particular $\sigma_G^2$, shown by dashed curves. The scalar lower bound is
then obtained by taking the maximum of these shadow curves. The circles at points along the scalar bound curve indicate the
optimizing value of $\sigma_G$ for obtaining that point on the
bound.}
\label{fig:rainbow2}
\end{center}
\end{figure}
Bansal and Basar~\cite{bansalbasar} use information-theoretic techniques related to rate-distortion and channel capacity to show the optimality of linear strategies in a modified version of Witsenhausen's counterexample where the cost function does not contain a product of two decision variables. Following the same spirit, in~\cite{WitsenhausenJournal} we derive the following lower bound for Witsenhausen's counterexample itself.
\begin{theorem}
\label{thm:oldbound}
For $W(m,k^2,\sigma_0^2)$, if for a strategy $\gamma(\cdot{})$ the average power $\frac{1}{m}\expectp{\m{X}_0}{\|\m{U}_1\|^2}=P$, the following lower bound holds on the second stage cost
\begin{equation*}
\bar{J}_2^{(\gamma)}(m,k^2,\sigma_0^2) \geq \left( \left(\sqrt{\kappa(P,\sigma_0^2)} - \sqrt{P}\right)^+ \right)^2,
\end{equation*}
where $(\cdot{})^+$ is shorthand for $\max(\cdot{}, 0)$ and
\begin{equation}
\kappa(P,\sigma_0^2)=\frac{\sigma_0^2}{\sigma_0^2+P+2\sigma_0\sqrt{P}+1}.
\end{equation}
The following lower bound thus holds on the total cost
\begin{equation*}
\bar{J}^{(\gamma)}(m,k^2,\sigma_0^2) \geq \inf_{P\geq 0} k^2P + \left( \left(\sqrt{\kappa(P,\sigma_0^2)} - \sqrt{P}\right)^+ \right)^2.
\end{equation*}
\end{theorem}
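This bound is easy to evaluate numerically. A sketch (the grid search is ours), at the point $k=0.01$, $\sigma_0=500$ from the introduction, where the bound nearly vanishes and is in particular weaker than the value $3.170\times 10^{-4}$ of the new bound quoted there:

```python
import math

def kappa(P, sigma0):
    return sigma0**2 / ((sigma0 + math.sqrt(P)) ** 2 + 1.0)

def old_lower_bound(k, sigma0):
    """inf_P k^2 P + ((sqrt(kappa) - sqrt(P))^+)^2, via a grid search over P."""
    def f(P):
        gap = max(math.sqrt(kappa(P, sigma0)) - math.sqrt(P), 0.0)
        return k**2 * P + gap * gap
    return min(f(0.001 * i) for i in range(1, 5_000))

lb = old_lower_bound(0.01, 500.0)
print(lb)  # about 1e-4: weaker than the new bound's 3.170e-4 at this point
```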
\begin{proof}
We refer the reader to~\cite{WitsenhausenJournal} for the full proof. We outline it here because these ideas are used in the derivation of the new lower bound in Theorem~\ref{thm:newbound}.
Using a triangle inequality argument, we show
\begin{eqnarray}
\sqrt{\frac{1}{m}\expectp{\m{X}_0,\m{Z}}{\|\m{X}_0-\whatmn{X}_1\|^2}}\leq \sqrt{\frac{1}{m}\expectp{\m{X}_0,\m{Z}}{\|\m{X}_0-\m{X}_1\|^2}}+\sqrt{\frac{1}{m}\expectp{\m{X}_0,\m{Z}}{\|\m{X}_1 - \whatmn{X}_1\|^2}}.
\label{eq:triangle1}
\end{eqnarray}
The first term on the RHS is $\sqrt{P}$. It therefore suffices to lower bound the term on the LHS to obtain a lower bound on $\expectp{\m{X}_0,\m{Z}}{\|\m{X}_1 - \whatmn{X}_1\|^2}$. To that end, we interpret $\whatmn{X}_1$ as an estimate for $\m{X}_0$, which is a problem of transmitting a source across a channel. For an iid Gaussian source to be transmitted across a memoryless power-constrained additive-noise Gaussian channel (with one channel use per source symbol), the optimal strategy that minimizes the mean-square error is merely scaling the source symbol so that the average power constraint is met~\cite{GoblickUncoded}. The estimation at the second controller is then simply the linear MMSE estimation of $\m{X}_0$, and the obtained MMSE is $\kappa(P,\sigma_0^2)$. The theorem now follows from~\eqref{eq:triangle1}.
\end{proof}
Observe that the lower bound expression is the same for all vector
lengths. In the following, large-deviation arguments~\cite{BlahutThesis,CsiszarKorner} (called sphere-packing style arguments for historical reasons) are extended following~\cite{PinskerNoFeedback,OurUpperBoundPaper,waterslide} to a joint source-channel setting where the distortion measure is unbounded. The obtained bounds are tighter than those in Theorem~\ref{thm:oldbound} and depend explicitly on the vector length $m$.
\begin{theorem}
\label{thm:newbound}
For $W(m,k^2,\sigma_0^2)$, if for a strategy $\gamma(\cdot{})$ the average power $\frac{1}{m}\expectp{\m{X}_0}{\|\m{U}_1\|^2}=P$, the following lower bound holds on the second stage cost for any choice of $\sigma_G^2\geq 1$ and $L>0$
\begin{equation*}
\bar{J}_2^{(\gamma)}(m,k^2,\sigma_0^2) \geq \eta(P,\sigma_0^2,\sigma_G^2,L).
\end{equation*}
where
\begin{eqnarray*}
\eta(P,\sigma_0^2,\sigma_G^2,L)=\frac{\sigma_G^m}{c_m(L)}\exp\left(-\frac{mL^2(\sigma_G^2-1)}{2}\right)\left( \left(\sqrt{\kappa_2(P,\sigma_0^2,\sigma_G^2,L)} - \sqrt{P}\right)^+ \right)^2,
\end{eqnarray*}
where $\kappa_2(P,\sigma_0^2,\sigma_G^2,L):=$
\begin{eqnarray*}
\frac{\sigma_0^2\sigma_G^2}{c_m^{\frac{2}{m}}(L)e^{1-d_m(L)}\left((\sigma_0+\sqrt{P})^2+d_m(L)\sigma_G^2\right)},
\end{eqnarray*}
$c_m(L):=\frac{1}{\Pr(\|\m{Z}\|^2\leq mL^2)}= \left(1-\psi(m,L\sqrt{m})\right)^{-1}$,
$d_m(L):=\frac{\Pr(\|\mk{Z}{m+2}\|^2\leq mL^2)}{\Pr(\|\m{Z}\|^2\leq mL^2)} =
\frac{1-\psi(m+2,L\sqrt{m})}{1-\psi(m,L\sqrt{m})}$, \\$0< d_m(L)<1$, and
$\psi(m,r)=\Pr(\|\m{Z}\|\geq r)$.
Thus the following lower bound holds on the total cost
\begin{equation}
\bar{J}_{\min}(m,k^2,\sigma_0^2) \geq \inf_{P\geq 0} k^2P +
\eta(P,\sigma_0^2,\sigma_G^2,L),
\end{equation}
for any choice of $\sigma_G^2\geq 1$ and $L>0$ (the choice can depend on $P$). Further, these bounds are at least as tight as those of Theorem~\ref{thm:oldbound} for all values of $k$ and $\sigma_0^2$.
\end{theorem}
\begin{proof}
From Theorem~\ref{thm:oldbound}, for a given $P$, a lower bound on the average second stage cost is $\left(\left( \sqrt{\kappa}-\sqrt{P} \right)^+\right)^2$. We derive
another lower bound that is equal to the
expression for $\eta(P,\sigma_0^2,\sigma_G^2,L)$. The high-level intuition behind this lower
bound is presented in Fig.~\ref{fig:rainbow2}.
Define $\mathcal{S}_L^G:=\{\m{z}:\|\m{z}\|^2\leq mL^2\sigma_G^2\}$
and use subscripts to denote which probability model is being used for
the second-stage observation noise: $Z$ denotes white Gaussian noise of
variance $1$, while $G$ denotes white Gaussian noise of variance
$\sigma_G^2\geq 1$.
\begin{eqnarray}
\nonumber\expectp{\m{X}_0,\m{Z}}{J_2^{(\gamma)}(\m{X}_0,\m{Z})}&= & \int_{\m{z}}\int_{\m{x}_0}J_2^{(\gamma)}(\m{x}_0,\m{z}) f_0(\m{x}_0) f_Z(\m{z}) d\m{x}_0 d\m{z}\\
\nonumber &\geq & \int_{\m{z}\in\mathcal{S}_L^G}\left(\int_{\m{x}_0}J_2^{(\gamma)}(\m{x}_0,\m{z}) f_0(\m{x}_0) d\m{x}_0\right) f_Z(\m{z}) d\m{z}\\
&= &\int_{\m{z}\in\mathcal{S}_L^G}\left(\int_{\m{x}_0}J_2^{(\gamma)}(\m{x}_0,\m{z}) f_0(\m{x}_0) d\m{x}_0\right)\frac{f_Z(\m{z})}{f_G(\m{z})}f_G(\m{z}) d\m{z}.
\label{eq:beforeratio}
\end{eqnarray}
The ratio of the two probability density functions is given by
\begin{eqnarray*}
\frac{f_Z(\m{z})}{f_G(\m{z})}=\frac{e^{-\frac{\|\m{z}\|^2}{2}}}{\left(\sqrt{2\pi}\right)^m}\frac{\left(\sqrt{2\pi\sigma_G^2}\right)^m}{e^{-\frac{\|\m{z}\|^2}{2\sigma_G^2}}}=\sigma_G^m e^{-\frac{\|\m{z}\|^2}{2}\left(1-\frac{1}{\sigma_G^2}\right)}.
\end{eqnarray*}
Observe that for $\m{z}\in\mathcal{S}_L^G$, $\|\m{z}\|^2\leq mL^2\sigma_G^2$. Using $\sigma_G^2\geq 1$, we obtain
\begin{equation}
\frac{f_Z(\m{z})}{f_G(\m{z})}
\geq \sigma_G^m
e^{-\frac{m L^2 \sigma_G^2}{2}\left(1-\frac{1}{\sigma_G^2}\right)}
= \sigma_G^m e^{-\frac{mL^2(\sigma_G^2-1)}{2}}.
\label{eq:afterratio}
\end{equation}
Using~\eqref{eq:beforeratio} and~\eqref{eq:afterratio},
\begin{eqnarray}
\nonumber\expectp{\m{X}_0,\m{Z}}{J_2^{(\gamma)}(\m{X}_0,\m{Z})} &\geq &\sigma_G^m e^{-\frac{mL^2(\sigma_G^2-1)}{2}} \int_{\m{z}\in\mathcal{S}_L^G}\left(\int_{\m{x}_0}J_2^{(\gamma)}(\m{x}_0,\m{z}) f_0(\m{x}_0) d\m{x}_0\right) f_G(\m{z}) d\m{z}\\
\nonumber&=&\sigma_G^m e^{-\frac{mL^2(\sigma_G^2-1)}{2}}\expectp{\m{X}_0,\m{Z}_G}{J_2^{(\gamma)}(\m{X}_0,\m{Z}_G)\indi{\m{Z}_G\in\mathcal{S}_L^G}}\\
&=&\sigma_G^m
e^{-\frac{mL^2(\sigma_G^2-1)}{2}}\expectp{\m{X}_0,\m{Z}_G}{J_2^{(\gamma)}(\m{X}_0,\m{Z}_G)|\m{Z}_G\in\mathcal{S}_L^G}\Pr(\m{Z}_G\in\mathcal{S}_L^G).
\label{eq:explb}
\end{eqnarray}
Analyzing the probability term in~\eqref{eq:explb},
\begin{eqnarray}
\nonumber\Pr(\m{Z}_G\in\mathcal{S}_L^G)&=& \Pr\left(\|\m{Z}_G\|^2\leq mL^2\sigma_G^2\right)= \Pr\left(\left(\frac{\|\m{Z}_G\|}{\sigma_G}\right)^2\leq mL^2\right)\\
&=& 1-\Pr\left(\left(\frac{\|\m{Z}_G\|}{\sigma_G}\right)^2> mL^2\right)
= 1-\psi(m,L\sqrt{m}) = \frac{1}{c_m(L)},
\label{eq:sphereprob}
\end{eqnarray}
because $\frac{\m{Z}_G}{\sigma_G}\sim\mathcal{N}(0,\mathbb{I}_m)$. From~\eqref{eq:explb} and~\eqref{eq:sphereprob},
\begin{eqnarray}
\nonumber \expectp{\m{X}_0,\m{Z}}{J_2^{(\gamma)}(\m{X}_0,\m{Z})}&\geq & \sigma_G^m
e^{-\frac{mL^2(\sigma_G^2-1)}{2}}\expectp{\m{X}_0,\m{Z}_G}{J_2^{(\gamma)}(\m{X}_0,\m{Z}_G)|\m{Z}_G\in\mathcal{S}_L^G}(1-\psi(m,L\sqrt{m}))\\
& = & \frac{\sigma_G^m e^{-\frac{mL^2(\sigma_G^2-1)}{2}}}{c_m(L)} \expectp{\m{X}_0,\m{Z}_G}{J_2^{(\gamma)}(\m{X}_0,\m{Z}_G)|\m{Z}_G\in\mathcal{S}_L^G}.
\label{eq:ep0z}
\end{eqnarray}
We now need the following lemma, which connects the new finite-length lower bound to the infinite-length lower bound
of~\cite{WitsenhausenJournal}.
\begin{lemma}
\label{lem:epg}
\begin{eqnarray*}
\expectp{\m{X}_0,\m{Z}_G}{J_2^{(\gamma)}(\m{X}_0,\m{Z}_G)|\m{Z}_G\in \mathcal{S}_L^G}
\geq \left(\left( \sqrt{\kappa_2 (P,\sigma_0^2,\sigma_G^2,L)} -\sqrt{P} \right)^+\right)^2,
\end{eqnarray*}
for any $L>0$.
\end{lemma}
\begin{proof}
See Appendix~\ref{app:ep0g}.
\end{proof}
The lower bound on the total average cost now follows from~\eqref{eq:ep0z} and Lemma~\ref{lem:epg}.
We now verify that $d_m(L)\in(0,1)$. That $d_m(L)>0$ is clear from the definition. $d_m(L)<1$ because $\{\mk{z}{m+2}:\| \mk{z}{m+2}\|^2\leq mL^2\} \subset \{\mk{z}{m+2}: \|\mk{z}{m}\|^2\leq mL^2\}$, \emph{i.e.}, a sphere sits inside a cylinder.
Finally, we verify that this new lower bound is at least as tight as the one in Theorem~\ref{thm:oldbound}. Choosing $\sigma_G^2=1$ in the expression for $\eta(P,\sigma_0^2,\sigma_G^2,L)$,
\begin{eqnarray*}
\sup_{L>0}\eta(P,\sigma_0^2,1,L)= \sup_{L>0}\frac{1}{c_m(L)}\left(\left( \sqrt{\kappa_2 (P,\sigma_0^2,1,L)} -\sqrt{P} \right)^+\right)^2.
\end{eqnarray*}
Now notice that $c_m(L)$ and $d_m(L)$ converge to $1$ as $L\rightarrow\infty$. Thus $\kappa_2(P,\sigma_0^2,1,L)\overset{L\rightarrow\infty}{\longrightarrow} \kappa(P,\sigma_0^2)$, and therefore $\sup_{L>0}\eta(P,\sigma_0^2,1,L)$ is lower bounded by $\left(\left( \sqrt{\kappa}-\sqrt{P} \right)^+\right)^2$, the lower bound in Theorem~\ref{thm:oldbound}.
\end{proof}
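The closing consistency argument can be checked numerically as well: for $m=1$, setting $\sigma_G^2=1$ and taking $L$ large should recover the bound of Theorem~\ref{thm:oldbound}. A sketch (the parameter point is our choice), using the closed-form chi tails $\psi(1,r)=\mathrm{erfc}(r/\sqrt{2})$ and $\psi(3,r)=\mathrm{erfc}(r/\sqrt{2})+r\sqrt{2/\pi}\,e^{-r^2/2}$:

```python
import math

def psi(m, r):
    if m == 1:
        return math.erfc(r / math.sqrt(2.0))
    if m == 3:
        return (math.erfc(r / math.sqrt(2.0))
                + r * math.sqrt(2.0 / math.pi) * math.exp(-r * r / 2.0))
    raise ValueError

def eta_m1(P, sigma0_sq, sigma_g_sq, L):
    """The bound eta of Theorem 3, specialized to m = 1."""
    c = 1.0 / (1.0 - psi(1, L))                # c_1(L)
    d = (1.0 - psi(3, L)) / (1.0 - psi(1, L))  # d_1(L)
    kappa2 = (sigma0_sq * sigma_g_sq
              / (c**2 * math.exp(1.0 - d)
                 * ((math.sqrt(sigma0_sq) + math.sqrt(P)) ** 2 + d * sigma_g_sq)))
    gap = max(math.sqrt(kappa2) - math.sqrt(P), 0.0)
    return (math.sqrt(sigma_g_sq) / c) * math.exp(-L * L * (sigma_g_sq - 1.0) / 2.0) * gap * gap

P, sigma0_sq = 0.04, 25.0  # illustrative point (our choice)
kap = sigma0_sq / ((math.sqrt(sigma0_sq) + math.sqrt(P)) ** 2 + 1.0)
old = max(math.sqrt(kap) - math.sqrt(P), 0.0) ** 2
new = eta_m1(P, sigma0_sq, 1.0, 8.0)  # sigma_G^2 = 1 and large L
print(new, old)  # essentially equal
```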
\section{Combinations of linear and lattice-based strategies attain within a constant factor of the optimal cost}
\label{sec:ratio}
\begin{theorem}[Constant-factor optimality]
The costs for $W(m,k^2,\sigma_0^2)$ are bounded as follows
\begin{eqnarray*}
\inf_{P\geq 0} \sup_{\sigma_G^2\geq 1,L>0} k^2P+\eta(P,\sigma_0^2,\sigma_G^2,L) \leq
\bar{J}_{\min}(m,k^2,\sigma_0^2)
\leq
\mu \left(\inf_{P\geq 0} \sup_{\sigma_G^2\geq 1,L>0} k^2P+\eta(P,\sigma_0^2,\sigma_G^2,L)\right),
\end{eqnarray*}
where $\mu=100\xi^2$, $\xi$ is the packing-covering ratio of any lattice in $\mathbb{R}^m$, and $\eta(\cdot)$ is as defined in Theorem~\ref{thm:newbound}. For any $m$, $\mu<1600$. Further, depending on the $(m,k^2,\sigma_0^2)$
values, the upper bound can be attained by lattice-based quantization
strategies or linear strategies. For $m=1$, a numerical calculation (MATLAB code available at~\cite{finiteWitsenahusenMatlabCode}) shows that $\mu<8$ (see Fig.~\ref{fig:scalar2}).
\end{theorem}
\begin{proof}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{ScalarMay4}
\includegraphics[width=9cm]{hcpMay4}
\end{center}
\caption{The ratio of the upper and the lower bounds for the scalar
Witsenhausen problem (top), and the 2-D Witsenhausen problem
(bottom, using hexagonal lattice of $\xi=\frac{2}{\sqrt{3}}$) for a
range of values of $k$ and $\sigma_0$. The ratio is bounded above by
$17$ for the scalar problem, and by $14.75$ for the 2-D problem.}
\label{fig:scalar}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{ScalarRatioMay6}
\end{center}
\caption{An exact calculation of the first and second stage costs yields an improved maximum ratio smaller than $8$ for the scalar Witsenhausen problem.}
\label{fig:scalar2}
\end{figure}
Let $P^*$ denote the optimizing power $P$ in the lower bound of Theorem~\ref{thm:newbound}. We show here that for any value of $P^*$, the ratio of the upper and the lower bounds is bounded.
Consider the two simple linear strategies
of zero-forcing ($\m{u}_1=-\m{x}_0$) and zero-input ($\m{u}_1=0$)
followed by LLSE estimation at \co{2}. It is easy to
see~\cite{WitsenhausenJournal} that the average cost attained using these two
strategies is $k^2\sigma_0^2$ and $\frac{\sigma_0^2}{\sigma_0^2+1}<1$
respectively. An upper bound is obtained using
the best amongst the two linear strategies and the lattice-based
quantization strategy.
\textit{Case 1}: $P^*\geq\frac{\sigma_0^2}{100}$. \\
The first stage cost is larger than $k^2\frac{\sigma_0^2}{100}$. Consider the upper bound of $k^2\sigma_0^2$ obtained by zero-forcing. The ratio of the upper bound and the lower bound is no larger than $100$.
\textit{Case 2}: $P^*<\frac{\sigma_0^2}{100}$ and $\sigma_0^2<16$. \\
Using the bound from Theorem~\ref{thm:oldbound} (which is a special case of the bound in Theorem~\ref{thm:newbound}),
\begin{eqnarray*}
\kappa &=& \frac{\sigma_0^2}{(\sigma_0+\sqrt{P^*})^2+1}
\overset{\left(P^*<\frac{\sigma_0^2}{100}\right)}{\geq} \frac{\sigma_0^2}{\sigma_0^2\left(1+\frac{1}{\sqrt{100}}\right)^2+1}\\
&\overset{(\sigma_0^2< 16)}{\geq} & \frac{\sigma_0^2}{16\left(1+\frac{1}{\sqrt{100}}\right)^2+1}=\frac{\sigma_0^2}{20.36 }\geq \frac{\sigma_0^2}{21}.
\end{eqnarray*}
Thus, for $\sigma_0^2<16$ and $P^*\leq \frac{\sigma_0^2}{100}$,
\begin{eqnarray*}
\bar{J}_{min}&\geq& \left((\sqrt{\kappa}-\sqrt{P^*})^+\right)^2\geq \sigma_0^2\left(\frac{1}{\sqrt{21}}-\frac{1}{\sqrt{100}}\right)^2\approx 0.014\sigma_0^2 \geq \frac{\sigma_0^2}{72}.
\end{eqnarray*}
Using the zero-input upper bound of $\frac{\sigma_0^2}{\sigma_0^2+1}$, the ratio of the upper and lower bounds is at most $\frac{72}{\sigma_0^2+1}\leq 72$.
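The constant-chasing in Case 2 can be spot-checked numerically. The following sketch (an illustration, not part of the original argument) verifies the two constants $20.36$ and $1/72$:

```python
import math

# Denominator bound used in Case 2: 16*(1 + 1/sqrt(100))^2 + 1 = 20.36 <= 21
denom = 16 * (1 + 1 / math.sqrt(100))**2 + 1

# Second-stage cost coefficient: (1/sqrt(21) - 1/sqrt(100))^2 >= 1/72
coeff = (1 / math.sqrt(21) - 1 / math.sqrt(100))**2
```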
\textit{Case 3}: $P^*\leq\frac{\sigma_0^2}{100}, \sigma_0^2\geq 16, P^*\leq \frac{1}{2}$.\\
In this case,
\begin{eqnarray*}
\kappa &=&\frac{\sigma_0^2}{(\sigma_0+\sqrt{P^*})^2+1}\overset{(P^*\leq \frac{1}{2})}{\geq} \frac{\sigma_0^2}{(\sigma_0+\sqrt{0.5})^2+1}\\
&\overset{(a)}{\geq}& \frac{16}{(\sqrt{16}+\sqrt{0.5})^2+1}\approx 0.6909 \geq 0.69,
\end{eqnarray*}
where $(a)$ uses $\sigma_0^2\geq 16$ and the observation that $\frac{x^2}{(x+b)^2+1}=\frac{1}{\left(1+\frac{b}{x}\right)^2+\frac{1}{x^2}}$ is an increasing function of $x$ for $x,b>0$. Thus,
\begin{eqnarray*}
\left((\sqrt{\kappa}-\sqrt{P^*})^+\right)^2\geq ((\sqrt{0.69}-\sqrt{0.5})^+)^2\approx 0.0153 \geq 0.015.
\end{eqnarray*}
Using the upper bound of $\frac{\sigma_0^2}{\sigma_0^2+1}<1$, the ratio of the upper and the lower bounds is smaller than $\frac{1}{0.015}<67$.
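The monotonicity observation and the constants $0.69$ and $0.015$ in Case 3 can also be confirmed directly; the sketch below (illustrative, not from the paper) checks them:

```python
import math

def f(x, b):
    # f(x) = x^2 / ((x + b)^2 + 1); claimed to be increasing in x for x, b > 0
    return x**2 / ((x + b)**2 + 1)

b = math.sqrt(0.5)
xs = [0.1 + 0.05 * i for i in range(200)]
increasing = all(f(x1, b) < f(x2, b) for x1, x2 in zip(xs, xs[1:]))

kappa_lb = f(math.sqrt(16), b)                   # ~0.6909, so kappa >= 0.69
cost_lb = (math.sqrt(0.69) - math.sqrt(0.5))**2  # ~0.0153, so cost >= 0.015
```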
\textit{Case 4}: $\sigma_0^2>16$, $\frac{1}{2}<P^*\leq\frac{\sigma_0^2}{100}$.\\
Using $L=2$ in the lower bound,
\begin{eqnarray*}
c_m(L)&=&\frac{1}{\Pr(\|\m{Z}\|^2\leq mL^2)}=\frac{1}{1-\Pr(\|\m{Z}\|^2 > mL^2)}\\
& \overset{\text{(Markov's ineq.)}}{\leq} & \frac{1}{1-\frac{m}{mL^2}} \overset{(L=2)}{=} \frac{4}{3}.
\end{eqnarray*}
Similarly,
\begin{eqnarray*}
d_m(2)& =& \frac{\Pr(\|\mk{Z}{m+2}\|^2\leq mL^2)}{\Pr(\|\m{Z}\|^2\leq mL^2)}\\
&\geq & \Pr(\|\mk{Z}{m+2}\|^2\leq mL^2) = 1-\Pr(\|\mk{Z}{m+2}\|^2> mL^2)\\
& \overset{\text{(Markov's ineq.)}}{\geq} & 1 - \frac{m+2}{mL^2} = 1 - \frac{1+\frac{2}{m}}{4}\overset{(m\geq 1)}{\geq} 1-\frac{3}{4}=\frac{1}{4}.
\end{eqnarray*}
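The Markov-inequality bounds $c_m(2)\leq\frac{4}{3}$ and $d_m(2)\geq\frac{1}{4}$ are quite loose. A Monte Carlo estimate (a sketch with the illustrative choice $m=4$; sample sizes and seeds are arbitrary) shows how much slack there is:

```python
import math
import random

def tail_mc(dof, thresh, n=50_000, seed=0):
    """Monte Carlo estimate of Pr(||Z||^2 > thresh) for Z ~ N(0, I_dof)."""
    rng = random.Random(seed)
    hits = sum(sum(rng.gauss(0, 1)**2 for _ in range(dof)) > thresh
               for _ in range(n))
    return hits / n

m, L = 4, 2
# c_m(2) = 1 / Pr(||Z_m||^2 <= m L^2): close to 1, well below 4/3
c_m = 1 / (1 - tail_mc(m, m * L**2))
# numerator of d_m(2), a lower bound on d_m(2): well above 1/4
d_m_lb = 1 - tail_mc(m + 2, m * L**2, seed=1)
```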
In the bound, we are free to use any $\sigma_G^2\geq 1$. Using
$\sigma_G^2=6P^*>1$,
\begin{eqnarray*}
\kappa_2 &=&\frac{\sigma_G^2\sigma_0^2}{\left((\sigma_0+\sqrt{P^*})^2+d_m(2)\sigma_G^2\right)c_m^{\frac{2}{m}}(2) e^{1-d_m(2)} }\\
&\overset{(a)}{\geq}&\frac{6P^*\sigma_0^2}{\left((\sigma_0+\frac{\sigma_0}{10})^2+\frac{6\sigma_0^2}{100}\right) \left(\frac{4}{3}\right)^{\frac{2}{m}} e^{\frac{3}{4}} }\overset{(m\geq 1)}{\geq} 1.255 P^*,
\end{eqnarray*}
where $(a)$ uses $\sigma_G^2=6P^*, P^*<\frac{\sigma_0^2}{100}, c_m(2)\leq \frac{4}{3}$ and $1>d_m(2)\geq \frac{1}{4}$. Thus,
\begin{equation}
\left((\sqrt{\kappa_2}-\sqrt{P^*})^+\right)^2 \geq
P^*(\sqrt{1.255}-1)^2 \geq \frac{P^*}{70}.
\end{equation}
Now, using the lower bound on the total cost from Theorem~\ref{thm:newbound}, and substituting $L=2$,
\begin{eqnarray}
\nonumber \bar{J}_{min}(m,k^2,\sigma_0^2) &\geq &
k^2P^* +
\frac{\sigma_G^m}{c_m(2)}
\exp\left(-\frac{mL^2(\sigma_G^2-1)}{2}\right) \left( \left(\sqrt{\kappa_2} - \sqrt{P^*}\right)^+ \right)^2\\
\nonumber &\overset{(\sigma_G^2=6P^*)}{\geq}& k^2P^* + \frac{(6P^*)^m}{c_m(2)} \exp\left( -\frac{4m(6P^*-1)}{2} \right)\;\frac{P^*}{70} \\
\nonumber &\overset{(a)}{\geq}& k^2P^* +
\frac{3^m}{\frac{4}{3}} e^{2m} e^{-12P^*m}\;\frac{1}{70\times 2}\\
\nonumber &\overset{(m\geq 1)}{\geq}& k^2P^* + \frac{3\times 3\times e^2}{4\times 70\times 2} e^{-12mP^*}\\
&> & k^2P^* + \frac{1}{9}e^{-12mP^*},
\label{eq:jminlower}
\end{eqnarray}
where $(a)$ uses $c_m(2) \leq \frac{4}{3}$ and $P^*\geq \frac{1}{2}$. We loosen the lattice-based upper bound from Theorem~\ref{thm:upperbound} and bring it into a form similar to~\eqref{eq:jminlower}. Here, $P$ is a part of the optimization:
\begin{eqnarray}
&&\bar{J}_{min}(m,k^2,\sigma_0^2)\nonumber\\
&\leq &\inf_{P>\xi^2}k^2P+\left(1+\sqrt{\frac{P}{\xi^2}}\right)^2e^{-\frac{mP}{2\xi^2}+\frac{m+2}{2}\left(1+\lon{\frac{P}{\xi^2}} \right)}\nonumber\\
&\leq &\inf_{P>\xi^2}k^2P +\frac{1}{9}e^{-\frac{0.5mP}{\xi^2}+\frac{m+2}{2}\left(1+\lon{\frac{P}{\xi^2}} \right) + 2\lon{ 1+\sqrt{\frac{P}{\xi^2} } } +\lon{9} }\nonumber\\
&\leq &\inf_{P>\xi^2}k^2P+\frac{1}{9}e^{-m\left(\frac{0.5P}{\xi^2}-\frac{m+2}{2m}\left(1+\lon{\frac{P}{\xi^2}} \right) - \frac{2}{m}\lon{ 1+\sqrt{\frac{P}{\xi^2}} } -\frac{\lon{9}}{m} \right)}\nonumber\\
\nonumber&=&\inf_{P>\xi^2}k^2P+\frac{1}{9}e^{-\frac{0.12 mP}{\xi^2}}\times e^{-m\left(\frac{0.38P}{\xi^2}-\frac{1+\frac{2}{m}}{2}\left(1+\lon{\frac{P}{\xi^2}}\right)-\frac{2}{m}\lon{1+\sqrt{\frac{P}{\xi^2}}} -\frac{\lon{9}}{m} \right) } \nonumber\\
\nonumber&\overset{(m\geq 1)}{\leq}&\inf_{P>\xi^2}k^2P+\frac{1}{9}e^{-\frac{0.12 mP}{\xi^2}} e^{-m\left(\frac{0.38P}{\xi^2}-\frac{3}{2}\left(1+\lon{\frac{P}{\xi^2}} \right)-2\lon{1+\sqrt{\frac{P}{\xi^2}}} -\lon{9} \right) } \nonumber \\
&\leq& \inf_{P\geq 34\xi^2} k^2P+\frac{1}{9}e^{-\frac{0.12 mP}{\xi^2}},
\label{eq:jminupper}
\end{eqnarray}
where the last inequality follows from the fact that
$\frac{0.38P}{\xi^2}>\frac{3}{2}\left(1+\lon{\frac{P}{\xi^2}} \right) +
2\lon{1+\sqrt{\frac{P}{\xi^2}}}+\lon{9}$ for $\frac{P}{\xi^2}>34$. This
can be checked easily by plotting it.\footnote{It can also be verified
symbolically by examining the expression $g(b) = 0.38b^2 -
\frac{3}{2}(1 + \ln b^2) - 2 \ln(1+b) - \lon{9}$, taking its derivative
$g'(b) = 0.76b - \frac{3}{b} - \frac{2}{1+b}$, and second
derivative $g''(b) = 0.76 + \frac{3}{b^2} + \frac{2}{(1+b)^2}
> 0$. Thus $g(\cdot{})$ is convex-$\cup$. Further, $g'(\sqrt{34})\approx 3.62>0$, and $g(\sqrt{34}) \approx 0.09$ and so $g(b) > 0$ whenever $b \geq \sqrt{34}$.}
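The footnote's claim can also be confirmed by direct evaluation. This sketch checks $g(\sqrt{34})\approx 0.09$, $g'(\sqrt{34})\approx 3.62$, and positivity of $g$ on a grid of $b\geq\sqrt{34}$ (the grid range is an arbitrary illustration):

```python
import math

def g(b):
    # g(b) = 0.38 b^2 - (3/2)(1 + ln b^2) - 2 ln(1 + b) - ln 9
    return (0.38 * b**2 - 1.5 * (1 + math.log(b**2))
            - 2 * math.log(1 + b) - math.log(9))

def g_prime(b):
    # g'(b) = 0.76 b - 3/b - 2/(1 + b)
    return 0.76 * b - 3 / b - 2 / (1 + b)

b0 = math.sqrt(34)
grid = [b0 + 0.01 * i for i in range(5000)]  # b in [sqrt(34), sqrt(34) + 50]
g_min = min(g(b) for b in grid)
```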
Using $P=100\xi^2P^{*}\geq 50\xi^2>34\xi^2$ (since
$P^{*}\geq\frac{1}{2}$) in~\eqref{eq:jminupper},
\begin{eqnarray}
\nonumber
\bar{J}_{min}(m,k^2,\sigma_0^2)&\leq& k^2 100\xi^2P^{*}+\frac{1}{9}e^{-m\frac{0.12\times 100\xi^2P^{*}}{\xi^2}}\\
&=& k^2 100\xi^2P^{*}+\frac{1}{9}e^{-12mP^{*}}.
\label{eq:upper2}
\end{eqnarray}
Using~\eqref{eq:jminlower} and~\eqref{eq:upper2}, the ratio of the
upper and the lower bounds is bounded for all $m$ since
\begin{equation}
\mu\leq \frac{ k^2 100\xi^2P^{*}+\frac{1}{9}e^{-12mP^{*}}}{
k^2P^{*}+\frac{1}{9}e^{-12mP^{*}}}\leq \frac{ k^2 100\xi^2P^{*}}{
k^2P^{*}}=100\xi^2.
\end{equation}
For $m=1$, $\xi=1$, and thus the proof gives $\mu\leq 100$. For $m$
large, $\xi\approx 2$~\cite{almosteverything}, and $\mu\lesssim
400$. For arbitrary $m$, using the recursive construction
in~\cite[Theorem 8.18]{Micciancio}, $\xi\leq 4$, and thus $\mu\leq
1600$ regardless of $m$.
\end{proof}
Though the proof above shows that the ratio is uniformly bounded by a constant, the argument is not particularly illuminating and the constant is large. However, since the underlying vector bound can itself be tightened (as shown in~\cite{ITW09Paper}), we do not attempt to streamline the proof here; what matters is that such a uniform constant exists.
A numerical evaluation of the upper and lower bounds (of Theorem~\ref{thm:upperbound} and~\ref{thm:newbound} respectively) shows that the ratio is smaller than $17$ for $m=1$ (see Fig.~\ref{fig:scalar}). A precise calculation of the cost of the quantization strategy improves the upper bound to yield a maximum ratio smaller than $8$ (see Fig.~\ref{fig:scalar2}).
A simple grid lattice has a packing-covering ratio $\xi=\sqrt{m}$. Therefore, while the grid lattice has the best possible packing-covering ratio of $1$ in the scalar case, it has a rather large packing-covering ratio of $\sqrt{2}\;(\approx 1.41)$ for $m=2$. On the other hand, a hexagonal lattice (for $m=2$) has an improved packing-covering ratio of $\frac{2}{\sqrt{3}}\approx 1.15$. In contrast with $m=1$, where the ratio of the upper and lower bounds of Theorems~\ref{thm:upperbound} and~\ref{thm:newbound} is approximately $17$, a hexagonal lattice yields a ratio smaller than $14.75$, despite having a larger packing-covering ratio. This is a consequence of the tightening of the sphere-packing lower bound (Theorem~\ref{thm:newbound}) as $m$ gets large\footnote{Indeed, in the limit $m\rightarrow\infty$, the ratio of the asymptotic average costs attained by a vector-quantization strategy and the vector lower bound of Theorem~\ref{thm:oldbound} is bounded by $4.45$~\cite{WitsenhausenJournal}.}.
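The packing-covering ratios quoted above follow from elementary geometry. The sketch below computes them for the grid lattice $\mathbb{Z}^m$ and for the hexagonal lattice normalized to minimum distance $1$ (an illustration only, not the recursive construction of~\cite{Micciancio}):

```python
import math

def grid_xi(m):
    # Z^m: packing radius 1/2 (half the minimum distance); covering radius
    # sqrt(m)/2 (the deep hole (1/2, ..., 1/2) is farthest from the lattice)
    return (math.sqrt(m) / 2) / (1 / 2)

# Hexagonal lattice with minimum distance 1: packing radius 1/2; the deep
# hole is the centroid of a fundamental triangle, at distance 1/sqrt(3)
hex_xi = (1 / math.sqrt(3)) / (1 / 2)
```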
\section{Discussion of Numerical Explorations and Conclusions}
\label{sec:conclusions}
Though lattice-based quantization strategies allow us to get within a constant
factor of the optimal cost for the vector Witsenhausen problem, they are
not optimal. This is known for the scalar case~\cite{LeeLauHo} and the
infinite-length case~\cite{WitsenhausenJournal}. It is shown
in~\cite{WitsenhausenJournal} that the ``slopey-quantization'' strategy of Lee, Lau and Ho~\cite{LeeLauHo}, which is believed to be very close to optimal in the scalar case, can be viewed as an instance of a linear scaling followed by a dirty-paper coding (DPC) strategy. Such DPC-based strategies are also the best known strategies in the asymptotic infinite-dimensional
case, requiring optimal power $P$ to attain $0$ asymptotic mean-square error in the estimation of $\m{x}_1$, and attaining costs within a factor of $1.3$ of the optimal~\cite{ITW09Paper} for all $(k,\sigma_0^2)$. This leads us to conjecture that a DPC-like strategy might be optimal for finite-vector lengths as well. In the following, we numerically explore the performance of DPC-like strategies.
\begin{figure}
\begin{center}
\includegraphics[width = 4 in]{line3}
\end{center}
\caption{Ratio of the achievable costs to the scalar lower bound along $k\sigma_0 =10^{-0.5}$ for various strategies. Quantization with MMSE-estimation at the second controller outperforms quantization with MLE, or even scaled MLE. For slopey-quantization with the heuristic DPC parameter, the parameter $\alpha$ in the DPC-based scheme is borrowed from the infinite-length analysis. The figure suggests that along this path ($k\sigma_0=\sqrt{10}$), the difference between optimal-DPC and heuristic DPC is not substantial. However, Fig.~\ref{fig:abvsabc} (b) shows that this is not true in general. }
\label{fig:line}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width = 6 in]{RatioDoublePlot}
\end{center}
\caption{(a) shows the ratio of cost attained by linear+quantization (with MMSE decoding) to DPC with parameter $\alpha$ obtained by brute-force optimization. DPC can do up to $15\%$ better than the optimal quantization strategy. Also the maximum is attained along $k\approx 0.6$ which is different from $k=0.2$ of the benchmark problem~\cite{LeeLauHo}. (b) shows the ratio of cost attained by linear+quantization to DPC with $\alpha$ borrowed from infinite-length optimization. Heuristic DPC does not outperform linear+quantization (with MMSE estimation) substantially. }
\label{fig:abvsabc}
\end{figure}
It is natural to ask how much there is to gain using a DPC-based strategy over a simple quantization strategy. Notice that the DPC-strategy gains not only from the slopey quantization, but also from the MMSE-estimation at the second controller. In Fig.~\ref{fig:line}, we eliminate the latter advantage by considering first a uniform quantization-based strategy with an appropriate scaling of the MLE so that it approximates the MMSE-estimation performance, and then the actual MMSE-estimation strategy for uniform quantization. Along the curve $k\sigma_0=\sqrt{10}$, there is significant gain in using this approximate-MMSE estimation over MLE, and further gain in using MMSE-estimation itself. This also shows that there is an interesting tradeoff between the complexity of the second controller and the system performance.
From Fig.~\ref{fig:line}, along the curve $k\sigma_0=\sqrt{10}$, the DPC-based strategy performs only negligibly better than a quantization-based strategy with MMSE estimation. Fig.~\ref{fig:abvsabc} (a) shows that this is not true in general. A DPC-based strategy can perform up to $15\%$ better than a simple quantization-based scheme depending on the problem parameters. Interestingly, the advantage of using a DPC-based strategy for the case of $k=0.2,\sigma_0=5$ (which is used as the benchmark case in many papers, e.g.~\cite{LeeLauHo,marden}) is quite small. The maximum gain of about $15\%$ is obtained at $k\approx 10^{-0.2}\approx 0.63$ and $\sigma_0=1$ (and indeed, for any $\sigma_0>1$). In the future, we suggest the community use the point $(0.63,1)$ as the benchmark case.
Given that there is an advantage in using a DPC-like strategy, an interesting question is whether the DPC parameter $\alpha$ that optimizes the DPC-based strategy's performance at infinite-lengths (in~\cite{WitsenhausenJournal}) gives good performance for the scalar case as well. Fig.~\ref{fig:abvsabc} (b) answers this question at least partially in the negative. This heuristic-DPC does only slightly better than a quantization strategy with MMSE estimation, whereas other values of $\alpha$ do significantly better.
Finally, we observe that while uniform bin-size quantization or DPC-based strategies are designed for atypical noise behavior, atypical behavior of the initial state is better accommodated by using nonuniform bin-sizes (such as those in~\cite{LeeLauHo,marden}). Table~\ref{tbl:yuchiho} compares the two. Clearly, the advantage of nonuniform slopey-quantization is small, but not negligible. It would be interesting to quantify the advantage of nonuniform bin-sizes for $(k,\sigma_0)=(0.63,1)$, a maximum-gain point for uniform bin-size slopey-quantization strategies.
\begin{table}[h!b!p!]
\caption{Costs attained for the benchmark case of $k=0.2$, $\sigma_0=5$.}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& linear+quantization & Slopey-quantization \\
\hline
Lee, Lau and Ho \cite{LeeLauHo} & 0.1713946 & 0.1673132 \\
Li, Marden and Shamma~\cite{marden} & --- & 0.1670790\\
This paper & 0.1715335 & 0.1673654 \\
\hline
\end{tabular}
\end{center}
\label{tbl:yuchiho}
\end{table}
There are plenty of open problems that arise naturally. Both the lower and the upper bounds have room for improvement. The lower bound can be improved by tightening the vector lower bound of~\cite{WitsenhausenJournal} (one such tightening is performed in~\cite{ITW09Paper}) and obtaining corresponding finite-length results using the sphere-packing tools developed here.
Tightening the upper bound can be performed by using DPC-based techniques over lattices. Further, an exact analysis of the required
first-stage power when using a lattice would yield an improvement (as
pointed out earlier, for $m=1$, $\frac{1}{m}k^2r_c^2$ overestimates
the required first-stage cost), especially for small $m$. Improved
lattice designs with better packing-covering ratios would also improve
the upper bound.
Perhaps a more significant set of open problems are the next steps in understanding more realistic versions of Witsenhausen's problem,
specifically those that include costs on all the inputs and all the states~\cite{Allerton09Paper}, with noisy state evolution and noisy observations at both controllers. The hope is that solutions to these problems can then be used as the basis for provably-good nonlinear controller synthesis for larger distributed systems. Further, tools developed for solving these problems might help address multiuser problems in information theory, in the spirit of~\cite{WuInterferenceControl,EliaPaper1}.
\section*{Acknowledgments}
We gratefully acknowledge the support of the National Science Foundation (CNS-403427, CNS-093240, CCF-0917212 and CCF-729122), Sumitomo Electric and Samsung. We thank Amin Gohari, Bobak Nazer and Anand Sarwate for helpful discussions, and Gireeja Ranade for suggesting improvements in the paper.
\appendices{}
\section{Proof of Lemma~\ref{lem:upperbound}}
\label{app:upperbound}
\begin{eqnarray}
\nonumber \expectp{\m{Z}}{\left(\|\m{Z}\|+r_p\right)^2\indi{\mathcal{E}_m}}&=&
\expectp{\m{Z}}{\|\m{Z}\|^2\indi{\mathcal{E}_m}}+r_p^2\Pr(\mathcal{E}_m)+2r_p\expectp{\m{Z}}{\left(\indi{\mathcal{E}_m}\right)
\left(\|\m{Z}\|\indi{\mathcal{E}_m}\right)}\\
\nonumber&\overset{(a)}{\leq} &
\expectp{\m{Z}}{\|\m{Z}\|^2\indi{\mathcal{E}_m}}+r_p^2\Pr(\mathcal{E}_m) +2r_p\sqrt{\expectp{\m{Z}}{\indi{\mathcal{E}_m}}}
\sqrt{\expectp{\m{Z}}{\|\m{Z}\|^2\indi{\mathcal{E}_m}}}\\
&=& \left( \sqrt{\expectp{\m{Z}}{\|\m{Z}\|^2\indi{\mathcal{E}_m}} } + r_p\sqrt{ \Pr(\mathcal{E}_m) } \right)^2 ,
\label{eq:zplusrp}
\end{eqnarray}
where $(a)$ uses the Cauchy-Schwarz inequality~\cite[Pg. 13]{durrett}.
We wish to express $\expectp{\m{Z}}{\|\m{Z}\|^2\indi{\mathcal{E}_m}}$ in
terms of $\psi(m,r_p):=\Pr(\|\m{Z}\|\geq r_p)=\int_{\|\m{z}\|\geq
r_p}\frac{e^{-\frac{\|\m{z}\|^2}{2}}}{\left(\sqrt{2\pi}\right)^m}d\m{z}$.
Denote by $\mathcal{A}_m(r):=\frac{2\pi^{\frac{m}{2}} r^{m-1}
}{\Gamma\left(\frac{m}{2}\right)}$ the surface area of a sphere of
radius $r$ in $\mathbb{R}^m$~\cite[Pg. 458]{Courant}, where
$\Gamma(\cdot{})$ is the Gamma-function satisfying
$\Gamma(m)=(m-1)\Gamma(m-1)$, $\Gamma(1)=1$, and
$\Gamma(\frac{1}{2})=\sqrt{\pi}$. Dividing the space $\mathbb{R}^m$
into shells of thickness $dr$ and radii $r$,
\begin{eqnarray}
\nonumber\expectp{\m{Z}}{\|\m{Z}\|^2\indi{\mathcal{E}_m}}&=&\int_{\|\m{z}\|\geq r_p}\|\m{z}\|^2\frac{e^{-\frac{\|\m{z}\|^2}{2}}}{\left(\sqrt{2\pi}\right)^m}d\m{z}= \int_{r\geq r_p}r^2\frac{e^{-\frac{r^2}{2}}}{\left(\sqrt{2\pi}\right)^m}\mathcal{A}_m(r)dr\\
\nonumber&=& \int_{r\geq r_p}r^2\frac{e^{-\frac{r^2}{2}}}{\left(\sqrt{2\pi}\right)^m}\frac{2\pi^{\frac{m}{2}} r^{m-1} }{\Gamma\left(\frac{m}{2}\right)}dr\\
&=& \int_{r\geq r_p} \frac{e^{-\frac{r^2}{2}}2\pi}{\left(\sqrt{2\pi}\right)^{m+2}} \frac{2 \pi^{\frac{m+2}{2}} r^{m+1} }{\pi\frac{2}{m}\Gamma\left(\frac{m+2}{2}\right)}dr= m\psi(m+2,r_p).
\label{eq:psinplus2}
\end{eqnarray}
Using~\eqref{eq:zplusrp},~\eqref{eq:psinplus2}, and $r_p=\sqrt{\frac{mP}{\xi^2}}$,
\begin{eqnarray*}
\nonumber\expectp{\m{Z}}{\left(\|\m{Z}\|+r_p\right)^2\indi{\mathcal{E}_m}}
\leq m\left(\sqrt{\psi(m+2,r_p)}+\sqrt{\frac{P}{\xi^2}}\sqrt{\psi(m,r_p)}\right)^2,
\end{eqnarray*}
which yields the first part of Lemma~\ref{lem:upperbound}. To obtain a
closed-form upper bound we consider $P>\xi^2$. It suffices to bound $\psi(\cdot{},\cdot{})$.
\begin{eqnarray*}
\psi(m,r_p)&=&\Pr(\|\m{Z}\|^2\geq r_p^2)= \Pr(\exp(\rho\sum_{i=1}^mZ_i^2)\geq \exp(\rho r_p^2))\\
&\overset{(a)}{\leq} & \expectp{\m{Z}}{\exp(\rho\sum_{i=1}^mZ_i^2)}e^{-\rho r_p^2}
=\expectp{Z_1}{\exp(\rho Z_1^2)}^me^{-\rho r_p^2}
\overset{(\text{for}\;0<\rho<0.5)}{=} \frac{1}{(1-2\rho)^{\frac{m}{2}}}e^{-\rho r_p^2},
\end{eqnarray*}
where $(a)$ follows from the Markov inequality, and the last equality follows from the fact that the moment generating function of the square of a standard Gaussian random variable (a $\chi_1^2$ random variable) is $\frac{1}{(1-2\rho)^{\frac{1}{2}}}$ for $\rho\in (0,0.5)$~\cite[Pg. 375]{ross}. Since this bound holds for any $\rho\in (0,0.5)$, we choose the minimizing $\rho^*=\frac{1}{2}\left(1-\frac{m}{r_p^2}\right)$. Since $r_p^2=\frac{mP}{\xi^2}$, $\rho^*$ is indeed in $(0,0.5)$ as long as $P>\xi^2$. Thus,
\begin{eqnarray*}
\psi(m,r_p) \leq \frac{1}{(1-2\rho^*)^{\frac{m}{2}}}e^{-\rho^* r_p^2}= \left(\frac{r_p^2}{m}\right)^{\frac{m}{2}} e^{-\frac{1}{2}\left( 1-\frac{m}{r_p^2} \right) r_p^2}= e^{-\frac{r_p^2}{2}+\frac{m}{2}+\frac{m}{2}\lon{\frac{r_p^2}{m}}}.
\end{eqnarray*}
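This Chernoff-style bound on $\psi(m,r_p)$ can be sanity-checked against a Monte Carlo estimate of the tail probability (a sketch with the illustrative values $m=4$ and $P/\xi^2=4$; sample size and seed are arbitrary):

```python
import math
import random

def psi_mc(m, rp2, n=100_000, seed=0):
    """Monte Carlo estimate of psi(m, r_p) = Pr(||Z||^2 >= r_p^2), Z ~ N(0, I_m)."""
    rng = random.Random(seed)
    hits = sum(sum(rng.gauss(0, 1)**2 for _ in range(m)) >= rp2
               for _ in range(n))
    return hits / n

m, rp2 = 4, 16.0  # e.g. P/xi^2 = 4 gives r_p^2 = mP/xi^2 = 16
# psi(m, r_p) <= exp(-r_p^2/2 + m/2 + (m/2) ln(r_p^2/m))
chernoff = math.exp(-rp2 / 2 + m / 2 + (m / 2) * math.log(rp2 / m))
estimate = psi_mc(m, rp2)
```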
Using the substitutions $r_c^2=mP$, $\xi=\frac{r_c}{r_p}$ and $r_p^2=\frac{mP}{\xi^2}$,
\begin{eqnarray}
\label{eq:psinrp}
\Pr(\mathcal{E}_m)=\psi(m,r_p)=\psi\left(m,\sqrt{\frac{mP}{\xi^2}}\right)\leq e^{-\frac{mP}{2\xi^2}+\frac{m}{2}+\frac{m}{2}\lon{\frac{P}{\xi^2}}}, \;\text{and}
\end{eqnarray}
\begin{eqnarray}
\label{eq:psin2rp}
\expectp{\m{Z}}{\|\m{Z}\|^2\indi{\mathcal{E}_m}}\leq m\psi\left(m+2,\sqrt{\frac{mP}{\xi^2}}\right)\leq me^{-\frac{mP}{2\xi^2}+\frac{m+2}{2}+\frac{m+2}{2}\lon{\frac{mP}{(m+2)\xi^2}}}.
\end{eqnarray}
From~\eqref{eq:zplusrp},~\eqref{eq:psinrp} and~\eqref{eq:psin2rp},
\begin{eqnarray*}
\expectp{\m{Z}}{\left(\|\m{Z}\|+r_p\right)^2\indi{\mathcal{E}_m}}&\leq &\bigg(\sqrt{m}e^{-\frac{mP}{4\xi^2}+\frac{m+2}{4}+\frac{m+2}{4}\lon{\frac{mP}{(m+2)\xi^2}}} + \sqrt{\frac{mP}{\xi^2}} e^{-\frac{mP}{4\xi^2}+\frac{m}{4}+\frac{m}{4}\lon{\frac{P}{\xi^2}}}\bigg)^2\\
&\overset{(\text{since}\;P>\xi^2)}{<} & \left(\sqrt{m}\left(1+\sqrt{\frac{P}{\xi^2}}\right)e^{-\frac{mP}{4\xi^2}+\frac{m+2}{4}+\frac{m+2}{4}\lon{\frac{P}{\xi^2}}} \right)^2\\
&= & m\left(1+\sqrt{\frac{P}{\xi^2}}\right)^2e^{-\frac{mP}{2\xi^2}+\frac{m+2}{2}+\frac{m+2}{2}\lon{\frac{P}{\xi^2}}}.
\end{eqnarray*}
\section{Proof of Lemma~\ref{lem:epg}}
\label{app:ep0g}
The following lemma is taken from~\cite{WitsenhausenJournal}.
\begin{lemma}
For any three random variables $A$, $B$ and $C$,
\begin{eqnarray*}
\expect{\|B-C\|^2}\geq \left(\left(\sqrt{\expect{\|A-C\|^2}}-\sqrt{\expect{\|A-B\|^2}}\right)^{+}\right)^2.
\end{eqnarray*}
\end{lemma}
\begin{proof}
See~\cite[Appendix II]{WitsenhausenJournal}.
\end{proof}
Choosing $A=\m{X}_0$, $B=\m{X}_1$ and $C=\whatmn{X}_1$,
\begin{eqnarray}
\nonumber && \expectp{\m{X}_0,\m{Z}_G}{J_2^{(\gamma)}(\m{X}_0,\m{Z}_G)|\m{Z}_G\in\mathcal{S}_L^G} =\frac{1}{m}\expectp{\m{X}_0,\m{Z}_G}{\|\m{X}_1-\whatmn{X}_1\|^2|\m{Z}_G\in\mathcal{S}_L^G}\\
\nonumber&\geq &\bigg(\bigg(\sqrt{\frac{1}{m}\expectp{\m{X}_0,\m{Z}_G}{\|\m{X}_0-\whatmn{X}_1\|^2|\m{Z}_G\in \mathcal{S}_L^G}} - \sqrt{\frac{1}{m}\expectp{\m{X}_0,\m{Z}_G}{\|\m{X}_0-\m{X}_1\|^2|\m{Z}_G\in \mathcal{S}_L^G}} \bigg)^+\bigg)^2\\
&= &\bigg(\bigg(\sqrt{\frac{1}{m}\expectp{\m{X}_0,\m{Z}_G}{\|\m{X}_0-\whatmn{X}_1\|^2|\m{Z}_G\in \mathcal{S}_L^G}}- \sqrt{P} \bigg)^+\bigg)^2,
\label{eq:sqrtd}
\end{eqnarray}
since $\m{X}_0-\m{X}_1=\m{U}_1$ is independent of $\m{Z}_G$ and
$\expect{\|\m{U}_1\|^2}= mP$. Define $\m{Y}_L:=\m{X}_1+\m{Z}_L$ to
be the output when the observation noise $\m{Z}_L$ is distributed as a truncated Gaussian distribution:
\begin{equation}
\label{eq:fz}
f_{Z_L}(\m{z}_L)=\left\{\begin{array}{ll}c_m(L)\frac{e^{-\frac{\|\m{z}_L\|^2}{2\sigma_G^2}}}{\left(\sqrt{2\pi\sigma_G^2}\right)^m}&\m{z}_L\in\mathcal{S}_L^G\\
0& \text{otherwise.}\end{array}\right.
\end{equation}
Let the estimate at the second controller on observing $\m{y}_L$ be
denoted by $\whatmn{X}_L$. Then, by the definition of conditional
expectations,
\begin{equation}
\label{eq:xl}
\expectp{\m{X}_0,\m{Z}_G}{\|\m{X}_0-\whatmn{X}_1\|^2|\m{Z}_G\in \mathcal{S}_L^G} = \expectp{\m{X}_0,\m{Z}_G}{\|\m{X}_0-\whatmn{X}_L\|^2}.
\end{equation}
To get a lower bound, we now allow the controllers to optimize
themselves with the additional knowledge that the observation noise $\m{z}$
must fall in $\mathcal{S}_L^G$. In order to prevent the first controller from ``cheating'' and allocating different powers to the two events (\emph{i.e.} $\m{z}$ falling or not falling in $\mathcal{S}_L^G$), we enforce the constraint that the power $P$ must not change with this additional knowledge. Since the controller's observation $\m{X}_0$ is independent of $\m{Z}$, this constraint is satisfied by the original controller (without the additional knowledge) as well, and hence the cost for the system with the additional knowledge is still a valid lower bound to that of the original system.
The rest of the proof uses ideas from channel coding and the rate-distortion theorem~\cite[Ch. 13]{CoverThomas} from information theory. We view the problem as one of implicit communication from the first controller to the second. Notice that for a given $\gamma(\cdot{})$, $\m{X}_1$ is a function of $\m{X}_0$, and $\m{Y}_L=\m{X}_1+\m{Z}_L$ is conditionally independent of $\m{X}_0$ given $\m{X}_1$ (since the noise $\m{Z}_L$ is additive and independent of $\m{X}_1$ and $\m{X}_0$). Further, $\whatmn{X}_L$ is a function of $\m{Y}_L$. Thus $\m{X}_0-\m{X}_1-\m{Y}_L-\whatmn{X}_L$ form a Markov chain. Using the
data-processing inequality~\cite[Pg. 33]{CoverThomas},
\begin{equation}
\label{eq:dpi}
I(\m{X}_0;\whatmn{X}_L)\leq I(\m{X}_1;\m{Y}_L),
\end{equation}
where $I(A;B)$ denotes the mutual information between two random variables $A$ and $B$ (see, for example,~\cite[Pg. 18, Pg. 231]{CoverThomas}). To estimate the distortion to which $\m{X}_0$ can be communicated across this truncated Gaussian channel (which, in turn, helps us lower bound the MMSE in estimating $\m{X}_1$), we need to upper bound the term on the RHS of~\eqref{eq:dpi}.
\begin{lemma}
\label{lem:capacity}
\begin{equation*}
\frac{1}{m}I(\m{X}_1;\m{Y}_L) \leq \frac{1}{2}\lo{\frac{e^{1-d_m(L)} (\bar{P}+d_m(L)\sigma_G^2)c_m^{\frac{2}{m}}(L)}{\sigma_G^2} }.
\end{equation*}
\end{lemma}
\begin{proof}
We first obtain an upper bound to the power of $\m{X}_1$ (this bound is the same as that used in~\cite{WitsenhausenJournal}):
\begin{eqnarray*}
\expectp{\m{X}_0}{\|\m{X}_1\|^2}&=&\expectp{\m{X}_0}{\|\m{X}_0+\m{U}_1\|^2}=\expectp{\m{X}_0}{\|\m{X}_0\|^2}+\expectp{\m{X}_0}{\|\m{U}_1\|^2}+2\expectp{\m{X}_0}{{\m{X}_0}^T\m{U}_1}\\
&\overset{(a)}{\leq} & \expectp{\m{X}_0}{\|\m{X}_0\|^2}+\expectp{\m{X}_0}{\|\m{U}_1\|^2}
+2\sqrt{\expectp{\m{X}_0}{\|\m{X}_0\|^2}}\sqrt{\expectp{\m{X}_0}{\| \m{U}_1\|^2}}\\
&\leq & m(\sigma_0+\sqrt{P})^2,
\end{eqnarray*}
where $(a)$ follows from the Cauchy-Schwarz inequality. We use the following definition of \textit{differential entropy} $h(A)$ of a continuous random variable $A$~\cite[Pg. 224]{CoverThomas}:
\begin{equation}
h(A) = -\int_S f_A(a) \lo{f_A(a)} da,
\end{equation}
where $f_A(a)$ is the pdf of $A$, and $S$ is the support set of $A$. Conditional differential entropy is defined similarly~\cite[Pg. 229]{CoverThomas}.
Let $\bar{P}:=(\sigma_0+\sqrt{P})^2$. Now, $\expect{Y_{L,i}^2} = \expect{X_{1,i}^2} + \expect{Z_{L,i}^2} $ (since $X_{1,i}$ is independent of $Z_{L,i}$ and by symmetry, $Z_{L,i}$ are zero mean random variables). Denote $\bar{P}_i=\expect{X_{1,i}^2}$ and $\sigma_{G,i}^2=\expect{Z_{L,i}^2}$. In the following, we derive an upper bound $C_{G,L}^{(m)}$ on $\frac{1}{m}I(\m{X}_1;\m{Y}_L)$.
\begin{eqnarray}
\nonumber C_{G,L}^{(m)}&:=&\sup_{p(\m{X}_1):\expect{\|\m{X}_1\|^2}\leq m\bar{P}}\frac{1}{m}I(\m{X}_1;\m{Y}_L)\\
\nonumber &\overset{(a)}{=}&\sup_{p(\m{X}_1):\expect{\|\m{X}_1\|^2}\leq m\bar{P}}\frac{1}{m}h(\m{Y}_L)-\frac{1}{m}h(\m{Y}_L|\m{X}_1)\\
\nonumber & \overset{}{=} & \sup_{p(\m{X}_1):\expect{\|\m{X}_1\|^2}\leq m\bar{P}}\frac{1}{m}h(\m{Y}_L)-\frac{1}{m}h(\m{X}_1+\m{Z}_L|\m{X}_1)\\
\nonumber & \overset{(b)}{=} & \sup_{p(\m{X}_1):\expect{\|\m{X}_1\|^2}\leq m\bar{P}}\frac{1}{m}h(\m{Y}_L)-\frac{1}{m}h(\m{Z}_L|\m{X}_1)\\
\nonumber & \overset{(c)}{=} & \sup_{p(\m{X}_1):\expect{\|\m{X}_1\|^2}\leq m\bar{P}}\frac{1}{m}h(\m{Y}_L)-\frac{1}{m}h(\m{Z}_L)\\
\nonumber &\overset{(d)}{\leq} &\sup_{p(\m{X}_1):\expect{\|\m{X}_1\|^2}\leq m\bar{P}}\frac{1}{m}\sum_{i=1}^mh(Y_{L,i})-\frac{1}{m}h(\m{Z}_L)\\
\nonumber &\overset{(e)}{\leq} &\sup_{\bar{P}_i:\sum_{i=1}^m\bar{P}_i \leq m\bar{P}} \frac{1}{m}\sum_{i=1}^m\frac{1}{2}\lo{2\pi e(\bar{P}_i+\sigma_{G,i}^2)}-\frac{1}{m}h(\m{Z}_L)\\
&\overset{(f)}{\leq} & \frac{1}{2}\lo{2\pi e (\bar{P}+d_m(L)\sigma_G^2)}-\frac{1}{m}h(\m{Z}_L).
\label{eq:cn}
\end{eqnarray}
Here, $(a)$ follows from the definition of mutual information~\cite[Pg. 231]{CoverThomas}, $(b)$ follows from the fact that translation does not change the differential entropy~\cite[Pg. 233]{CoverThomas}, $(c)$ uses independence of $\m{Z}_L$ and $\m{X}_1$, and $(d)$ uses the chain rule for differential entropy~\cite[Pg. 232]{CoverThomas} and the fact that conditioning reduces entropy~\cite[Pg. 232]{CoverThomas}. In $(e)$, we used the fact that Gaussian random variables maximize
differential entropy. The inequality $(f)$ follows from the concavity-$\cap$ of the $\log(\cdot{})$
function and an application of Jensen's inequality~\cite[Pg. 25]{CoverThomas}. We also use the fact that
$\frac{1}{m}\sum_{i=1}^m\sigma_{G,i}^2= d_m(L)\sigma_G^2$, which can be proven as follows:
\begin{eqnarray}
\nonumber\frac{1}{m}\expect{\sum_{i=1}^m Z_{L,i}^2 }&\overset{(\text{using}~\eqref{eq:fz})}{=}& \frac{\sigma_G^2}{m}\int_{\m{z}\in\mathcal{S}_L^G}\frac{\|\m{z}\|^2}{\sigma_G^2} c_m(L)\frac{\exp\left(-\frac{\|\m{z}_G\|^2}{2\sigma_G^2}\right)}{\left(\sqrt{2\pi\sigma_G^2}\right)^m}d\m{z}_G\\
\nonumber & =& \frac{c_m(L)\sigma_G^2}{m}\expect{\|\m{Z}_G\|^2\indi{\|\m{Z}_G\|\leq \sqrt{mL^2\sigma_G^2}}}\\
\nonumber&\overset{(\m{\widetilde{Z}}:=\frac{\m{Z}_G}{\sigma_G})}{=}&\frac{c_m(L)\sigma_G^2}{m}\expect{\|\m{\widetilde{Z}}\|^2\indi{\|\m{\widetilde{Z}}\|\leq \sqrt{mL^2}}}\\
\nonumber&=& \frac{c_m(L)\sigma_G^2}{m}\bigg(\expect{\|\m{\widetilde{Z}}\|^2}-\expect{\|\m{\widetilde{Z}}\|^2\indi{\|\m{\widetilde{Z}}\|> \sqrt{mL^2}}}\bigg)\\
\nonumber&\overset{(\text{using}~\eqref{eq:psinplus2})}{=}&\frac{c_m(L)\sigma_G^2}{m}\left(m-m\psi(m+2,\sqrt{mL^2})\right)\\
&=&c_m(L)\left(1-\psi(m+2,L\sqrt{m}) \right)\sigma_G^2 = d_m(L)\sigma_G^2.
\label{eq:expectzl}
\end{eqnarray}
We now compute $h(\m{Z}_L)$:
\begin{eqnarray}
\label{eq:hz}
\nonumber h(\m{Z}_L)&=&\int_{\m{z}\in \mathcal{S}_L^G}f_{Z_L}(\m{z})\lo{\frac{1}{f_{Z_L}(\m{z})}}d\m{z}=\int_{\m{z}\in \mathcal{S}_L^G}f_{Z_L}(\m{z})\lo{\frac{\left(\sqrt{2\pi \sigma_G^2}\right)^m}{c_m(L)e^{-\frac{\|\m{z}\|^2}{2\sigma_G^2}}}}d\m{z}\\
&=& -\lo{c_m(L)}+\frac{m}{2}\lo{2\pi\sigma_G^2}+\int_{\m{z}\in\mathcal{S}_L^G}c_m(L)f_{G}(\m{z})\frac{\|\m{z}\|^2}{2\sigma_G^2}\lo{e}d\m{z}.
\end{eqnarray}
Analyzing the last term of~\eqref{eq:hz},
\begin{eqnarray}
\nonumber \int_{\m{z}\in\mathcal{S}_L^G}c_m(L)f_{G}(\m{z})\frac{\|\m{z}\|^2}{2\sigma_G^2}\lo{e}d\m{z} &=&\frac{\lo{e}}{2\sigma_G^2} \int_{\m{z}\in\mathcal{S}_L^G}c_m(L)\frac{ e^{-\frac{\|\m{z}\|^2}{2\sigma_G^2}} } { \left(\sqrt{2\pi \sigma_G^2}\right)^m }\|\m{z}\|^2d\m{z}\\
\nonumber &=&\frac{\lo{e}}{2\sigma_G^2} \int_{\m{z}}f_{Z_L}(\m{z})\|\m{z}\|^2d\m{z}\\
\nonumber &\overset{(\text{using}~\eqref{eq:fz})}{=}&\frac{\lo{e}}{2\sigma_G^2}\expectp{G}{\|\m{Z}_L\|^2} = \frac{\lo{e}}{2\sigma_G^2} \expectp{G}{\sum_{i=1}^m Z_{L,i}^2 } \\
&\overset{(\text{using}~\eqref{eq:expectzl})}{=}& \frac{\lo{e}}{2\sigma_G^2}md_m(L)\sigma_G^2=\frac{m\lo{e^{d_m(L)}}}{2}.
\label{eq:lastterm}
\end{eqnarray}
The expression $C_{G,L}^{(m)}$ can now be upper bounded using~\eqref{eq:cn},~\eqref{eq:hz} and~\eqref{eq:lastterm} as follows.
\begin{eqnarray}
\nonumber C_{G,L}^{(m)}&\leq& \frac{1}{2}\lo{2\pi e (\bar{P}+d_m(L)\sigma_G^2)}+\frac{1}{m}\lo{c_m(L)} - \frac{1}{2}\lo{2\pi\sigma_G^2}-\frac{1}{2}\lo{e^{d_m(L)}}\\
\nonumber &=& \frac{1}{2}\lo{2\pi e (\bar{P}+d_m(L)\sigma_G^2)}+\frac{1}{2}\lo{c_m^{\frac{2}{m}}(L)} - \frac{1}{2}\lo{2\pi\sigma_G^2}-\frac{1}{2}\lo{e^{d_m(L)}}\\
&= & \frac{1}{2}\lo{\frac{2\pi e (\bar{P}+d_m(L)\sigma_G^2)c_m^{\frac{2}{m}}(L)}{2\pi \sigma_G^2 e^{d_m(L)}} }= \frac{1}{2}\lo{\frac{e^{1-d_m(L)} (\bar{P}+d_m(L)\sigma_G^2)c_m^{\frac{2}{m}}(L)}{\sigma_G^2} }.
\label{eq:capacitybd}
\end{eqnarray}
\end{proof}
Now, recall that the distortion-rate function $D_m(R)$ for squared-error distortion for source $\m{X}_0$ and reconstruction $\whatmn{X}_L$ is
\begin{equation}
D_m(R):=
\inf_{\scriptsize \begin{array}{c}
p(\whatmn{X}_L|\m{X}_0)\\
\frac{1}{m}I(\m{X}_0;\whatmn{X}_L)\leq R
\end{array}}
\frac{1}{m}\expectp{\m{X}_0,\m{Z}_G}{\|\m{X}_0-\whatmn{X}_L\|^2},
\end{equation}
which is the dual of the rate-distortion function~\cite[Pg. 341]{CoverThomas}.
Since $I(\m{X}_0;\whatmn{X}_L)\leq mC_{G,L}^{(m)}$, using the converse to
the rate distortion theorem~\cite[Pg. 349]{CoverThomas} and the upper
bound on the mutual information represented by $C_{G,L}^{(m)}$,
\begin{equation}
\label{eq:ratedist}
\frac{1}{m} \expectp{\m{X}_0,\m{Z}_G}{\|\m{X}_0-\whatmn{X}_L\|^2} \geq D_m(C_{G,L}^{(m)}).
\end{equation}
Since the Gaussian source is iid, $D_m(R)=D(R)$, where
$D(R)=\sigma_0^22^{-2R}$ is the distortion-rate function for a
Gaussian source of variance
$\sigma_0^2$~\cite[Pg. 346]{CoverThomas}. Thus,
using~\eqref{eq:sqrtd},~\eqref{eq:xl} and~\eqref{eq:ratedist},
\begin{eqnarray*}
\expectp{\m{X}_0,\m{Z}_G}{J_2^{(\gamma)}(\m{X}_0,\m{Z})|\m{Z}\in\mathcal{S}_L^G} \geq \left(\left(\sqrt{D(C_{G,L}^{(m)})} - \sqrt{P} \right)^+\right)^2.
\end{eqnarray*}
Substituting the bound on $C_{G,L}^{(m)}$ from~\eqref{eq:capacitybd},
\begin{eqnarray*}
D(C_{G,L}^{(m)})= \sigma_0^2 2^{-2C_{G,L}^{(m)}} \geq\frac{\sigma_0^2\sigma_G^2}{c_m^{\frac{2}{m}}(L)e^{1-d_m(L)} (\bar{P}+d_m(L)\sigma_G^2)}.
\end{eqnarray*}
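As a quick numerical sanity check of this last simplification (a sketch with placeholder values for $m$, $c_m(L)$, $d_m(L)$, $\bar{P}$, $\sigma_G^2$ and $\sigma_0^2$, which are not computed here), one can verify that evaluating $D(R)=\sigma_0^22^{-2R}$ at the capacity bound~\eqref{eq:capacitybd} reproduces the closed form above:

```python
import math

# Placeholder values for illustration only; the true c_m(L) and d_m(L)
# depend on m and L and are not computed here.
m, c, d = 4, 1.3, 0.9
P_bar, var_G, var_0 = 2.0, 1.5, 1.0

# Upper bound on C_{G,L}^{(m)} from the capacity bound (logarithms base 2).
C = 0.5 * math.log2(math.e ** (1 - d) * (P_bar + d * var_G) * c ** (2 / m) / var_G)

lhs = var_0 * 2 ** (-2 * C)        # D(R) = sigma_0^2 2^{-2R} evaluated at R = C
rhs = var_0 * var_G / (c ** (2 / m) * math.e ** (1 - d) * (P_bar + d * var_G))
print(abs(lhs - rhs) < 1e-12)      # True
```

The agreement is exact up to floating-point error, since $2^{-2\cdot\frac{1}{2}\log_2 x}=1/x$.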
Using~\eqref{eq:sqrtd}, this completes the proof of the lemma. Notice
that $c_m(L)\rightarrow 1$ and $d_m(L)\rightarrow 1$ for fixed $m$ as $L\rightarrow\infty$, as well as for fixed $L>1$ as $m\rightarrow\infty$. So the lower bound on $D(C_{G,L}^{(m)})$ approaches $\kappa$ of Theorem~\ref{thm:oldbound} in both of
these two limits.
\bibliographystyle{IEEEtran}
\section{Introduction}
String compactifications with background fluxes (see e.g. \cite{Grana:2005jc,Douglas:2006es,Blumenhagen:2006ci,Denef:2007pq} for reviews) provide a simple framework in which the stabilization of moduli fields can be discussed in a very controlled and natural way. A complete stabilization of all moduli may also require the inclusion of quantum corrections, as e.g. in \cite{Kachru:2003aw}, but there are also scenarios where the fluxes alone are sufficient for a tree-level stabilization of all closed string moduli \cite{DeWolfe:2005uu}.
From a cosmological point of view, it is especially interesting to understand moduli stabilization at positive potential energy, either in order to obtain local dS minima so as to describe the present accelerated cosmic expansion, or in order to stabilize possible runaway directions in inflationary potentials. A particularly well controlled situation would be one in which this could be achieved at a purely classical level, i.e., by the dimensional reduction of the standard two-derivative 10D supergravity action supplemented with the lowest order actions for brane type sources.
As has been known for a long time, however, there are powerful no-go theorems \cite{Gibbons:1984kp,deWit:1986xg,Maldacena:2000mw} that forbid such tree-level dS compactifications under a few simple assumptions, one of them being the absence of negative-tension objects such as orientifold planes. As orientifold planes are a common ingredient in phenomenologically interesting type II compactifications, it seems natural to explore the possibility of tree-level dS vacua or inflation models in type II orientifolds. It is the purpose of this note to give an overview of the most promising controlled models of this type. For simplicity, we do not consider D-branes and the associated open string moduli (although the analysis would be similar). Moreover, we take the O-planes to be smeared over their transverse directions \cite{Grimm:2004ua,DeWolfe:2005uu,Koerber:2008rx,Caviezel:2008ik}, assuming that the results approximate fully localized warped solutions \cite{Acharya:2006ne} consistent with the results of \cite{Douglas:2010rt}.
\section{No-go theorems in the volume-dilaton plane}
Constructions of dS vacua or inflation models from classical string compactifications are severely limited by a set of very simple ``no-go theorems''. These no-go theorems go beyond \cite{Gibbons:1984kp,deWit:1986xg,Maldacena:2000mw}, as they do allow for orientifold planes and generalize the theorems used in \cite{Hertzberg:2007wc} for IIA flux compactifications on Calabi-Yau spaces with O6-planes and the IIB setup of \cite{Giddings:2001yu}. They follow from the scaling behavior of the different scalar potential contributions with respect to two universal moduli fields that occur in any perturbative and geometric string compactification. These two fields are the volume modulus $\rho \equiv (\textrm{vol}_6)^{1/3}$ and an orthogonal modulus related to the dilaton: $\tau \equiv e^{-\phi} \sqrt{\textrm{vol}_6} = e^{-\phi} \rho^{3/2}$, where $\phi$ denotes the 10D dilaton, and $\textrm{vol}_6$ is the 6D volume in string frame. After going to the four-dimensional Einstein frame, one then finds the following scalings for the contributions to the 4D scalar potential coming from the $H$-flux, the RR-fluxes $F_p$, as well as from O$q$-planes and the six-dimensional Ricci scalar $R_6$:
\begin{equation}
V_H \sim \tau^{-2} \rho^{-3}, \quad V_{F_p} \sim \tau^{-4} \rho^{3-p}, \quad V_{Oq} \sim \tau^{-3} \rho^{\frac{q-6}{2}}, \quad V_{R_6} \sim \tau^{-2} \rho^{-1}. \label{Scalings}
\end{equation}
Note that $V_H, V_{F_p} \geq 0$ and $V_{Oq} \leq 0$ while $V_{R_6}$ can have either sign.
The most widely studied classes of compactifications are based on special holonomy manifolds such as $CY_3$'s, $T^2 \times K3$ or $T^6$, which are Ricci-flat, i.e. they have $V_{R_6}=0$. In order to find the minimal necessary ingredients for classical dS vacua in this simplified case, we act on $V=V_H + \sum_p V_{F_p} + \sum_q V_{Oq}$ \footnote{The possible values for $p$ and $q$ consistent with four-dimensional Lorentz invariance are $p\in\{0,2,4,6\}$, $q\in\{4,6,8\}$ in type IIA theory and $p\in\{1,3,5\}$, $q\in \{3,5,7,9\}$ in type IIB theory. To cancel the charges of the O$q$-planes in the Ricci-flat case we need $V_H \neq 0$ and $\sum_p V_{F_p}\neq 0$. For compactifications with $V_{R_6} \neq 0$ we need $\sum_p V_{F_p}\neq 0$.} with a differential operator $D:= -a \tau \partial_\tau - b \rho \partial_\rho$, where $a$ and $b$ denote some as yet arbitrary real constants. If one can show that there is a constant $c>0$ such that $D V \geq c V$, then dS vacua and, generically, slow-roll inflation are excluded. Indeed, a dS extremum requires $D V=0$ and $V>0$, which is inconsistent with $D V \geq c V >0$. Similarly, the slow-roll parameter $\epsilon =\frac{K^{i\bar{j}} \partial_i V \partial_{\bar{j}} V}{V^2} \geq \frac{c^2}{4a^2+3b^2}$ is then bounded below by a number that is normally of order one, so that slow-roll inflation with $\epsilon \ll 1$ is forbidden. Using (\ref{Scalings}), this means that, if we can find $a,b$ such that
\begin{eqnarray}
&&D V_H = (2a+3b) V_H, \quad D V_{F_p} = (4a+(p-3)b) V_{F_p}, \quad D V_{Oq} = \left( 3a + \frac{6-q}{2} b\right) V_{Oq}, \nonumber\\
&& \text{with} \quad (2a+3b) \geq c \geq \left( 3a + \frac{6-q}{2} b\right), \quad (4a+(p-3)b) \geq c \geq \left( 3a + \frac{6-q}{2} b\right), \quad \forall p,q, \nonumber
\end{eqnarray}
then we have a no-go theorem that forbids classical dS vacua and inflation. The two inequalities above have a solution if and only if $q+p-6\geq 0,\, \forall p,q$. This condition is for example satisfied for type IIA compactifications on a $CY_3$ with O6-planes and arbitrary RR-fluxes or, analogously, for the type IIB theory with O3-planes and $F_3$-flux \cite{Hertzberg:2007wc}. Conversely, avoiding this no-go theorem at the classical level would require compactifications with $H$-flux that, in type IIA, allow for O4-planes and $F_0$-flux or, in type IIB, allow for O3-planes and $F_1$-flux. However, the $F_0$-flux needs to be odd under the O4 orientifold projection and therefore normally has to vanish. Similarly, all one-forms are normally odd under the O3 orientifold projection, but the $F_1$-flux has to be even and should therefore also vanish in this constellation\footnote{It might in principle be possible to consider compactifications on toroidal orbifolds that have for example $F_1$-flux only in the bulk and O3-planes in the twisted sector. In this note we restrict ourselves to the bulk sector only.}.
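The search for suitable constants can also be carried out mechanically. The sketch below (our own illustration, not part of the original analysis) brute-forces integer directions $(a,b)$, with all scaling coefficients doubled so that the half-integer O$q$ terms stay integral, and reproduces the criterion $q+p-6\geq 0$ in the Ricci-flat case with $H$-flux:

```python
def nogo_exists(ps, qs, search=20):
    """Brute-force search for constants (a, b) establishing a no-go theorem:
    every positive contribution (H-flux and F_p-fluxes) must scale with a
    coefficient >= c and every O_q contribution with a coefficient <= c, for
    some c > 0.  All coefficients are doubled so the (6-q)/2 terms stay integer."""
    for a in range(-search, search + 1):
        for b in range(-search, search + 1):
            if (a, b) == (0, 0):
                continue
            pos = [4 * a + 6 * b]                         # 2*(2a+3b): H-flux
            pos += [8 * a + 2 * (p - 3) * b for p in ps]  # 2*(4a+(p-3)b): F_p
            neg = [6 * a + (6 - q) * b for q in qs]       # 2*(3a+(6-q)b/2): O_q
            if min(pos) > 0 and min(pos) >= max(neg):     # c = min(pos)/2 > 0 works
                return True
    return False

# IIA on a Calabi-Yau with O6-planes, H-flux and all RR fluxes: no-go holds
print(nogo_exists(ps=[0, 2, 4, 6], qs=[6]))   # True
# IIB with O3-planes, H- and F_3-flux: q+p-6 = 0, no-go still holds
print(nogo_exists(ps=[3], qs=[3]))            # True
# IIA with O4-planes, H- and F_0-flux: q+p-6 = -2 < 0, no no-go direction exists
print(nogo_exists(ps=[0], qs=[4]))            # False
```

For the IIA Calabi-Yau case with O6-planes the search succeeds, e.g., along $a=3$, $b=1$, where all flux coefficients are $\geq 9$ and the O6 coefficient equals $c=9$; the resulting bound $\epsilon\geq c^2/(4a^2+3b^2)=27/13$ matches the value $(3+q)^2/(3+q^2)$ for $q=6$ quoted in Table~\ref{table}.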
A possible way out of these difficulties might be to allow also for non Ricci-flat manifolds. This would contribute the additional term $V_{R_6} \sim -R_6 \sim \tau^{-2} \rho^{-1}$ to the scalar potential. It is easy to check that for positively curved manifolds ($V_{R_6} < 0$) the above conditions cannot be relaxed. Although $H$-flux is not necessary anymore to cancel the O$q$-plane tadpole, one still needs it to avoid a no-go theorem with $b=0$. For manifolds with integrated negative curvature, on the other hand, the condition for a no-go theorem becomes relaxed to $q+p-8\geq 0,\, \forall p,q$. The only exception is the case with O3-planes and $F_5$-flux, which saturates this inequality, but would require $c=0$ and therefore cannot be excluded based on this inequality. Table \ref{table} summarizes the no-go theorems against classical dS vacua and slow-roll inflation\footnote{In \cite{Danielsson:2009ff} a similar conclusion is reached for dS vacua with small cosmological constant using the ``abc''-method of \cite{Silverstein:2007ac}.}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Curvature & No-go, if & No no-go in IIA with & No no-go in IIB with \\ \hline \hline
$V_{R_6} \sim -R_6 \leq 0$ & \begin{tabular}{c} $q+p-6\geq 0,\, \forall p,q,$\\ $\epsilon \geq \frac{(3+q)^2}{3+q^2} \geq \frac{12}{7}$ \end{tabular} & O4-planes and $H$, $F_0$-flux & O3-planes and $H$, $F_1$-flux \\ \hline
$V_{R_6} \sim -R_6 > 0$ & \begin{tabular}{c}
$q+p-8\geq 0,\, \forall p,q,$ \\
(except $q=3,p=5$)\\ $\epsilon \geq \frac{(q-3)^2}{q^2-8q+19} \geq \frac{1}{3}$ \end{tabular} & \begin{tabular}{c}
O4-planes and $F_0$-flux \\
O4-planes and $F_2$-flux \\
O6-planes and $F_0$-flux
\end{tabular} &
\begin{tabular}{c}
O3-planes and $F_1$-flux \\
O3-planes and $F_3$-flux \\
O3-planes and $F_5$-flux \\
O5-planes and $F_1$-flux
\end{tabular} \\ \hline
\end{tabular}
\end{center}
\caption{The table summarizes the conditions that are needed in order to find a no-go theorem in the $(\rho,\tau)$-plane and the resulting lower bound on the slow-roll parameter $\epsilon$. The third and fourth column spell out the minimal ingredients necessary to evade such a no-go theorem.\label{table}}
\end{table}
As we have argued above, it is difficult to find explicit examples with O3-planes and $F_1$-flux or with O4-planes and $F_0$-flux. The same turns out to be true for O3-planes with non-vanishing curvature \cite{Caviezel:2009tu}. The prospects of stabilizing all moduli at tree-level in IIA compactifications with O4-planes are not clear so we will restrict ourselves in the rest of these notes to compactifications on manifolds with negative curvature and O6-planes in type IIA or O5-planes in type IIB. Moreover, we will focus on those compactifications that give an effective 4D, $\mathcal{N}=1$ supergravity action, which leads us to SU(3)-structure manifolds with O6-planes in IIA, and SU(2)-structure compactifications with O5- and O7-planes in type IIB string theory.\footnote{We need to compactify on an SU(2)-structure manifold in IIB, because the $F_1$-flux requires a 1-form. $\mathcal{N}=1$ supersymmetry then also requires O7-planes in addition to the O5-planes.}
\section{Type IIA}
The attempts to construct classical dS vacua in IIA compactifications on manifolds with negative curvature and O6-planes were initiated in \cite{Silverstein:2007ac}, where other types of sources, such as KK5-monopoles, were also used. A similar construction with only the ingredients of eq. \eqref{Scalings} was attempted in \cite{Haque:2008jz}, whereas in \cite{Danielsson:2009ff} the authors argued that the constructions of \cite{Silverstein:2007ac} and \cite{Haque:2008jz} cannot be lifted to full 10D solutions.
In this note, we review IIA compactifications on a special class of SU(3)-structure manifolds, namely coset spaces \cite{Koerber:2008rx,Caviezel:2008ik,Caviezel:2008tf} involving semisimple and Abelian groups, as well as twisted tori (solvmanifolds) \cite{Ihl:2007ah,Flauger:2008ad}. The underlying Lie group structure endows these spaces with a natural expansion basis (the left-invariant forms)
for the various higher-dimensional fields and fluxes, and one expects that the resulting 4D, $\mathcal{N}=1$ theory is a consistent truncation of the full 10D theory \cite{Cassani:2009ck}. Furthermore, in these compactifications it is possible to stabilize all moduli in AdS vacua \cite{Grimm:2004ua,DeWolfe:2005uu,Ihl:2007ah}. This means that the scalar potential generically depends on all moduli, which is a prerequisite for the construction of metastable dS vacua.
Whereas the previous analysis focused on the behavior of the potential in the volume-dilaton plane, it is clear that once the no-go theorems using these fields are circumvented, one must still make sure that there are no other steep directions of the scalar potential in directions outside the $(\rho,\tau)$-plane. For the coset spaces and twisted tori studied in \cite{Caviezel:2008tf,Flauger:2008ad}, the volume turns out to factorize further into a two-dimensional and a four-dimensional part: $\textrm{vol}_6 = \rho^3 = \rho_{(2)} \rho^2_{(4)}$. In such cases one can then study directions involving $\rho_{(2)}$ or $\rho_{(4)}$ and finds that, if for a given model
\begin{equation}
(-2 \tau\partial_\tau -\rho_{(4)} \partial_{\rho_{(4)}}) V_{R_6} \geq 6 V_{R_6},
\end{equation}
then the full scalar potential also satisfies $(-2 \tau\partial_\tau -\rho_{(4)} \partial_{\rho_{(4)}}) V \geq 6 V$, and one obtains the bound $\epsilon \geq2$. In \cite{Caviezel:2008tf} six out of seven coset spaces could be excluded by this refined no-go theorem. In \cite{Flauger:2008ad} many similar no-go theorems were discussed and used to exclude almost all concrete models of twisted tori.
The only spaces that could not be excluded in this manner are $SU(2) \times SU(2)$ with four O6-planes and a twisted version of $T^6/\mathbb{Z}_2 \times \mathbb{Z}_2$. These two spaces are closely related \cite{Aldazabal:2007sn}, and therefore it is not surprising that they have very similar properties. In particular, for both of these models it is possible to find (numerical) dS extrema \cite{Caviezel:2008tf,Flauger:2008ad}. Unfortunately, these dS extrema are unstable as one of the 14 real scalar fields turns out to be tachyonic with an $\eta$ parameter of order one. Interestingly, this tachyon is not the potential tachyon candidate identified for certain types of K\"ahler potentials in \cite{Covi:2008ea}. This can also be seen from the results in \cite{deCarlos:2009fq}, where a similar K\"ahler potential and a modified superpotential based on non-geometric fluxes lead to stable dS vacua (see also \cite{Dall'Agata:2009gv,Roest:2009tt,Dibitetto:2010rg}).
\section{Type IIB}
For type IIB compactifications we have seen that it is possible to evade simple no-go theorems in the $(\rho,\tau)$-plane if one includes O5-planes and $F_1$-flux. A concrete class of compactifications that allows for these ingredients and also preserves $\mathcal{N}=1$ supersymmetry in 4D was presented in \cite{Caviezel:2009tu} based on 6D SU(2)-structure spaces with O5- and O7-planes. As discussed there, these compactifications are formally T-dual to compactifications of type IIA on SU(3)-structure spaces with O6-planes; however, these T-duals are generically non-geometric and hence do not fall under the analysis of the previous section.
This particular class of IIB compactifications has the very interesting property that the tree-level scalar potential allows for fully stabilized supersymmetric AdS vacua with large volume and small string coupling \cite{Caviezel:2009tu}. This is very different from the no-scale property of classical type IIB compactifications on $CY_3$ manifolds along the lines of \cite{Giddings:2001yu}. It also shows that the scalar potential generically depends on all moduli.
For six-dimensional SU(2)-structure spaces the split of the volume $\textrm{vol}_6 = \rho^3 = \rho_{(2)} \rho^2_{(4)}$ into a two-dimensional and a four-dimensional part is very natural, and also the complex structure moduli naturally split into two classes. This allows one \cite{Caviezel:2009tu} to derive many no-go theorems and exclude most concrete examples of coset spaces and twisted tori with SU(2)-structure. The only space that was found to evade all the no-go theorems is $SU(2) \times SU(2)$ with an SU(2)-structure and O5- and O7-planes. Just as in the IIA analogue, we can find dS critical points, but again these have at least one tachyonic direction with a large $\eta$ parameter. It would be interesting to understand the geometrical meaning of this tachyon as well as the relation of the dS extrema found in \cite{Caviezel:2008tf,Flauger:2008ad,Caviezel:2009tu} to fully localized warped 10D solutions \cite{Acharya:2006ne,Douglas:2010rt}.
\begin{acknowledgement}
This work was supported by the German Research Foundation (DFG) within the Emmy Noether Program (Grant number ZA 279/1-2) and the Cluster of Excellence ``QUEST".
\end{acknowledgement}
\def\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}{\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}}
\begin{document}
\bigskip
\begin{center}
{\Large{\bf{B\"acklund Transformations as exact integrable time-discretizations for the trigonometric Gaudin model}}}
\end{center}
\bigskip
\begin{center}
{ {\bf Orlando Ragnisco, Federico Zullo}}
{Dipartimento di Fisica, Universit\`a di Roma Tre \\ Istituto Nazionale di
Fisica Nucleare, sezione di Roma Tre\\ Via Vasca Navale 84, 00146 Roma, Italy \\
~~E-mail: [email protected], [email protected]}
\end{center}
\medskip
\medskip
\begin{abstract}
\noindent
We construct a two-parameter family of B\"acklund transformations for the trigonometric classical
Gaudin magnet. The approach follows closely the one introduced by E.Sklyanin
and V.Kuznetsov (1998,1999) in a number of seminal papers, and takes
advantage of the intimate relation between the trigonometric and the rational
case. As in the paper by A.Hone, V.Kuznetsov and one of the authors (O.R.)
(2001) the B\"acklund transformations are presented as explicit symplectic
maps, starting from their Lax representation. The (expected) connection with
the \emph{xxz} Heisenberg chain is established and the rational (\emph{xxx}) case is recovered in
a suitable limit. It is shown how to obtain a ``physical'' transformation
mapping real variables into real variables. The interpolating Hamiltonian flow
is derived and some numerical iterations of the map are presented.
\end{abstract}
\bigskip\bigskip\bigskip\bigskip
\noindent
\noindent
KEYWORDS: B\"acklund Transformations, Integrable maps, Gaudin systems, Lax representation, \emph{r}-matrix.
\newpage
\section{Introduction}
B\"acklund transformations are nowadays a widespread and useful tool in the theory of nonlinear differential equations. The first historical evidence of their mathematical significance was given by Bianchi \cite{Bianchi} and B\"acklund \cite{Backlund} in their work on surfaces of constant curvature. A simple way to appreciate their importance is to regard them as a mechanism that endows a given nonlinear differential equation with a nonlinear superposition principle, yielding a set of solutions through a merely \emph{algebraic procedure} \cite{Rogers},\cite{Adler},\cite{Levi}. B\"acklund transformations are indeed parametric families of difference equations encoding the whole set of symmetries of a given integrable dynamical system. For finite-dimensional integrable systems the technique of B\"acklund transformations leads to the construction of integrable Poisson maps that discretize a family of continuous flows
\cite{SW},\cite{Ves},\cite{Sur2},\cite{S1},\cite{SK},\cite{KV}. Actually in
the last two decades numerous results have appeared in the field of exact
discretization of many-body integrable systems employing the B\"acklund
transformations tools
\cite{Rag_Sur},\cite{Sur2},\cite{Rag},\cite{KV},\cite{SK},\cite{Nij},\cite{S1}.
For the \emph{rational} Gaudin model such a discretization was obtained ten years ago in \cite{HKR}; afterwards, these results were used to construct an integrable discretization of classical dynamical systems (such as the Lagrange top) connected to the Gaudin model through In\"onu-Wigner contractions \cite{MPR},\cite{KPR},\cite{MPRS}.
\noindent
The aim of the present work is to construct B\"acklund
transformations for the Gaudin model in the partially anisotropic ($xxz$) case,
i.e. for the \emph{trigonometric} Gaudin model. We point out that partial results on this issue have already been
given in \cite{noi}.
\noindent
The paper is organized as follows.
\newline
In Section (\ref{sec1}) we review the main features of the trigonometric Gaudin model from the point of view of its
integrability structure. For the sake of completeness, in Section
(\ref{sec2}) we briefly recall the preliminary results on B\"acklund Transformations (BTs) for trigonometric Gaudin given in \cite{noi}. In Section (\ref{sec3}) the \emph{explicit form} of BTs
is given; it is shown that they are indeed a trigonometric
generalization of the rational ones (see \cite{HKR}), which can be recovered in a suitable (``small angle'') limit. The symplecticity of the
transformations is also discussed in the same Section and the proof allows us to
elucidate the (expected) link between the Darboux-dressing matrix and the
elementary Lax matrix for the $xxz$ Heisenberg magnet on the lattice. We end the
Section by mentioning an open question, namely the construction of an explicit generating function for
these B\"acklund transformations. In Section (\ref{sec4}) we will show how our
map can lead, with an appropriate choice of B\"acklund parameters, to physical
transformations, i.e. transformations from real variables to real
variables. In the last Section we show how a suitable continuous limit yields the interpolating Hamiltonian flow and finally present numerical examples of iteration of the map.
\section{Gaudin magnet in the trigonometric case} \label{sec1}
For a full account of the integrability structure of the classical and quantum Gaudin model we refer the reader to the fundamental contributions by Semenov-Tian-Shanski \cite{STS} and Babelon-Bernard-Talon \cite{BT}. In this section we briefly recall the main features of the trigonometric Gaudin magnet.
\newline
The Lax matrix of the model is given by the expression:
\begin{equation}\label{eq:lax}
L(\lambda) = \left( \begin{array}{cc} A(\lambda) & B(\lambda)\\C(\lambda)&-A(\lambda)\end{array}
\right)
\end{equation}
\begin{equation}
\label{ABC}
A(\lambda)=\sum_{j=1}^{N}\cot(\lambda-\lambda_{j})s^{3}_{j}, \qquad
B(\lambda)=\sum_{j=1}^{N}\frac{s^{-}_{j}}{\sin(\lambda-\lambda_{j})},\qquad C(\lambda)=\sum_{j=1}^{N}\frac{s^{+}_{j}}{\sin(\lambda-\lambda_{j})}.
\end{equation}
In (\ref{eq:lax}) and (\ref{ABC}) $\lambda \in \mathbb{C}$ is the spectral parameter, $\lambda_{j}$
are arbitrary real parameters of the model, while
$\big(s^{+}_{j},s^{-}_{j},s^{3}_{j}\big)$,\, $j=1, \ldots, N$, are the dynamical
variables of the system, obeying the $\oplus^{N} sl(2)$ algebra, i.e.
\begin{gather} \label{poisS}
\big\{s^{3}_{j},s^{\pm}_{k}\big\}=\mp i\delta_{jk}s^{\pm}_{k}, \qquad
\big\{s^{+}_{j},s^{-}_{k}\big\}=-2i\delta_{jk}s^{3}_{k}.
\end{gather}
By fixing the $N$ Casimirs $ \big(s_{j}^{3}\big)^{2}+s_{j}^{+}s_{j}^{-}\doteq s_{j}^{2}$ one obtains a symplectic manifold given by the
direct sum of the corresponding $N$ two-spheres.
\\
Reformulating the Poisson structure in terms of the $r$-matrix formalism amounts
to state that the Lax matrix satisfies the \emph{linear} $r$-matrix Poisson algebra (see again \cite{STS}, \cite{BT}) :
\begin{gather} \label{eq:pois}
\big\{ L(\lambda)\otimes \hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}, \hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}\otimes L(\mu)\big\}=\big[r_{t}(\lambda-\mu), L(\lambda)\otimes \hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$} + \hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}\otimes L(\mu) \big],
\end{gather}
where $r_{t}(\lambda)$ stands for the trigonometric $r$ matrix \cite{FT}:
\begin{gather}
r_{t}(\lambda) = \frac{i}{\sin(\lambda)}\left(\begin{array}{cccc} \cos(\lambda)& 0 & 0 &0 \\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & \cos(\lambda)
\end{array}\right),
\end{gather}
Equation (\ref{eq:pois}) entails the following Poisson brackets for the functions (\ref{ABC}):
\begin{gather}
\{A(\lambda),A(\mu)\}=\{B(\lambda),B(\mu)\}=\{C(\lambda),C(\mu)\}=0,\nonumber\\
\{A(\lambda),B(\mu)\}=i\frac{\cos(\lambda-\mu)B(\mu)-B(\lambda)}{\sin(\lambda-\mu)},\nonumber\\
\{A(\lambda),C(\mu)\}=i\frac{C(\lambda)-\cos(\lambda-\mu)C(\mu)}{\sin(\lambda-\mu)},\nonumber\\
\{B(\lambda),C(\mu)\}=i\frac{2(A(\mu)-A(\lambda))}{\sin(\lambda-\mu)}
\end{gather}
The determinant of the Lax matrix is the generating function of the integrals
of motion:
\begin{equation}\label{generfun}
-\textrm{det}(L)=A^{2}(\lambda)+B(\lambda)C(\lambda)=\sum_{i=1}^{N}\left(\frac{s_{i}^{2}}{\sin^{2}(\lambda-\lambda_{i})}+H_{i}\cot(\lambda-\lambda_{i})\right)-H_{0}^{2}
\end{equation}
where the $N$ Hamiltonians $H_{i}$ are of the form:
\begin{equation} \label{hams}
H_{i}=\sum_{k\neq i}^{N} \frac{2\cos(\lambda_{i}-\lambda_{k})s_{i}^{3}s_{k}^{3}+ s_{i}^{+}s_{k}^{-}+ s_{i}^{-}s_{k}^{+}}{\sin(\lambda_{i}-\lambda_{k})}
\end{equation}
Note that only $N-1$ among these Hamiltonians are independent, because of
$\sum_{i}H_{i}=0$. Another integral is given by $H_{0}$, the projection of the total spin on the $z$ axis:
\begin{equation}
H_{0}=\sum_{j=1}^{N}s_{j}^{3} \doteq J^{3}
\end{equation}
The Hamiltonians $H_{i}$ are in involution for the Poisson bracket
(\ref{poisS}):
\begin{equation}
\{H_{i},H_{j}\}=0 \quad i,j=0,\ldots ,N-1
\end{equation}
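Both the relation $\sum_{i}H_{i}=0$ and the expansion (\ref{generfun}) of the generating function are easy to confirm numerically. The sketch below (our own consistency check, with random real spin vectors, arbitrarily chosen $\lambda_j$, and $s_j^{\pm}=s_j^{1}\pm is_j^{2}$, so that $s_i^{+}s_k^{-}+s_i^{-}s_k^{+}=2(s_i^{1}s_k^{1}+s_i^{2}s_k^{2})$ is real) verifies both in double precision:

```python
import math
import random

random.seed(0)
N = 4
lam = [0.4, 1.0, 1.9, 2.7]                              # generic lambda_j
s = [tuple(random.gauss(0, 1) for _ in range(3)) for _ in range(N)]
sp = [complex(x, y) for x, y, _ in s]                   # s+ = s1 + i s2
sm = [complex(x, -y) for x, y, _ in s]                  # s- = s1 - i s2
s3 = [z for _, _, z in s]

def H(i):
    """Hamiltonian H_i of the trigonometric Gaudin model."""
    return sum((2 * math.cos(lam[i] - lam[k]) * s3[i] * s3[k]
                + (sp[i] * sm[k] + sm[i] * sp[k]).real)
               / math.sin(lam[i] - lam[k])
               for k in range(N) if k != i)

print(abs(sum(H(i) for i in range(N))) < 1e-10)         # True: sum_i H_i = 0

# Generating function: A^2 + B C = sum_j [s_j^2/sin^2 + H_j cot] - H_0^2
l = 0.123                                               # generic spectral parameter
A = sum(s3[j] / math.tan(l - lam[j]) for j in range(N))
B = sum(sm[j] / math.sin(l - lam[j]) for j in range(N))
C = sum(sp[j] / math.sin(l - lam[j]) for j in range(N))
H0 = sum(s3)
cas = [s3[j] ** 2 + (sp[j] * sm[j]).real for j in range(N)]   # Casimirs s_j^2
rhs = sum(cas[j] / math.sin(l - lam[j]) ** 2 + H(j) / math.tan(l - lam[j])
          for j in range(N)) - H0 ** 2
print(abs(A * A + (B * C).real - rhs) < 1e-9)           # True
```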
The corresponding Hamiltonian flows are then given by:
\begin{equation}
\frac{ds^{3}_{j}}{dt_{i}}=\{H_{i},s^{3}_{j}\} \qquad
\frac{ds^{\pm}_{j}}{dt_{i}}=\{H_{i},s^{\pm}_{j}\}
\end{equation}
In the $xxx$ model a remarkable Hamiltonian is found by taking
a linear combination of the integrals corresponding to (\ref{hams}) in the
rational case \cite{FM}. It describes a mean field spin-spin interaction:
$$
\mathcal{H}_{r}=\frac{1}{2}\sum_{i\neq j}^{N}\mathbf{s}_{i}\cdot\mathbf{s}_{j}
$$
Where the notation for the bold symbol $\mathbf{s}_{i}$ is
$\mathbf{s}_{i}=(s_{i}^{1},s_{i}^{2},s_{i}^{3})$ with
$s_{i}^{+}=s_{i}^{1}+is_{i}^{2}$ and $s_{i}^{-}=s_{i}^{1}-is_{i}^{2}$. The
natural trigonometric generalization of this Hamiltonian can be found by taking the linear combination of (\ref{hams}):
$$
\sum_{i=1}^{N}\frac{\sin(2\lambda_{i})}{4}H_{i}
$$
giving
\begin{equation}
\mathcal{H}_{t}=\frac{1}{2}\sum_{i\neq j}^{N}\cos(\lambda_{i}+\lambda_{j})\left(s_{i}^{1}s_{j}^{1}+s_{i}^{2}s_{j}^{2}+\cos(\lambda_{i}-\lambda_{j})s_{i}^{3}s_{j}^{3}\right)
\end{equation}
\section{A first approach to Darboux-dressing matrix}\label{sec2}
In this Section, for the sake of completeness, we recall the results already appeared in \cite{noi}. The leading observation is that by performing the ``uniformization'' mapping:
\begin{gather}
\lambda \to z\doteq e^{i\lambda} \label{uni}
\end{gather}
the $N$-sites trigonometric Lax matrix takes a rational form in $z$ that
corresponds to the $2N$-sites rational Lax matrix plus an additional reflection
symmetry (see also \cite{Hik}); in fact, by performing the substitution (\ref{uni}), the Lax matrix (\ref{eq:lax}) becomes:
\begin{gather}
\label{eq:ratiL}
L(z)=iJ^{3}\sigma_{3}+\sum_{j=1}^{N} \left(\frac{L_{1}^{j}}{z-z_{j}}-\sigma_3\frac{L_{1}^{j}}{z+z_{j}}\sigma_3\right),
\end{gather}
where $\sigma_{3}$ is the Pauli matrix $diag(1,-1)$ and the matrices $L_{1}^{j}$, \, $j=1, \ldots, N$, are given by:
\begin{gather*}
L_{1}^{j}=iz_{j}\left(\begin{array}{cc}s_{j}^{3} & s_{j}^{-}\vspace{1mm}\\
s_{j}^{+} & -s_{j}^{3}\end{array}\right)
\end{gather*}
So, equation (\ref{eq:ratiL}) entails the following involution on $L(z)$:
\begin{gather}
L (z) = \sigma_3 L (-z) \sigma_3\label{Symm}
\end{gather}
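The equivalence of (\ref{eq:ratiL}) with the trigonometric Lax matrix (\ref{eq:lax}) under $z=e^{i\lambda}$, together with the involution (\ref{Symm}), can be checked numerically. In the sketch below (our own illustration) the constant term is implemented as $iJ^{3}\sigma_{3}$, which is what matching the diagonal entries $\pm A(\lambda)$ of (\ref{eq:lax}) requires:

```python
import cmath
import random

random.seed(1)
N = 3
lam = [0.5, 1.2, 2.3]                           # generic lambda_j
zj = [cmath.exp(1j * l) for l in lam]           # z_j = e^{i lambda_j}
s = [tuple(random.gauss(0, 1) for _ in range(3)) for _ in range(N)]
sp = [complex(x, y) for x, y, _ in s]           # s+ = s1 + i s2
sm = [complex(x, -y) for x, y, _ in s]          # s- = s1 - i s2
s3 = [z for _, _, z in s]
J3 = sum(s3)

def L_trig(l):
    A = sum(s3[j] / cmath.tan(l - lam[j]) for j in range(N))
    B = sum(sm[j] / cmath.sin(l - lam[j]) for j in range(N))
    C = sum(sp[j] / cmath.sin(l - lam[j]) for j in range(N))
    return [[A, B], [C, -A]]

def L_rat(z):
    M = [[1j * J3, 0], [0, -1j * J3]]           # constant term read as i J^3 sigma_3
    for j in range(N):
        L1 = [[1j * zj[j] * s3[j], 1j * zj[j] * sm[j]],
              [1j * zj[j] * sp[j], -1j * zj[j] * s3[j]]]
        for r in (0, 1):
            for c in (0, 1):
                sgn = 1 if r == c else -1       # sigma_3 L1 sigma_3 flips off-diagonal signs
                M[r][c] += L1[r][c] / (z - zj[j]) - sgn * L1[r][c] / (z + zj[j])
    return M

l = 0.77
T, R = L_trig(l), L_rat(cmath.exp(1j * l))
print(max(abs(T[r][c] - R[r][c]) for r in (0, 1) for c in (0, 1)) < 1e-10)  # True

Rm = L_rat(-cmath.exp(1j * l))                  # involution L(z) = sigma_3 L(-z) sigma_3
print(max(abs(R[r][c] - (1 if r == c else -1) * Rm[r][c])
          for r in (0, 1) for c in (0, 1)) < 1e-10)                          # True
```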
Constructing a B\"acklund transformation for the Trigonometric Gaudin System (TGS) amounts to building up a Poisson map for the field variables of the model (\ref{ABC}) such that the integrals of motion (\ref{hams}) are preserved. At the level of Lax matrices, this transformation is usually sought as a similarity transformation between an {\it{old}}, or ``undressed'', Lax matrix $L$, and a {\it{new}}, or ``dressed'' one, say $\tilde{L}$:
\begin{equation}\label{eq:invpre}
L(z)\to D(z)L(z)D^{-1}(z)\equiv \tl{L}(z)
\end{equation}
But $L$ and $\tilde{L}$ have to enjoy the same reflection symmetry
(\ref{Symm}) too: to preserve this involution the Darboux dressing matrix $D$
has to share with $L$ the property (\ref{Symm}); the elementary dressing
matrix $D$ is then obtained by requiring the existence of only one pair of
opposite poles for $D$ in the complex plane of the spectral parameter. We will show in the next
Section that, thanks to this constraint, one recovers the form of the Lax matrix
for the elementary
$xxz$ Heisenberg spin chain: on the other hand, this is quite natural if one recalls that
for the rational Gaudin model the elementary Darboux-dressing matrix is given
by the Lax matrix for the elementary $xxx$ Heisenberg spin chain \cite{HKR},\cite{SK}. The previous observations lead
to the following Darboux matrix:
\begin{equation}\label{eq:Ddiz}
D(z)=D_{\infty}+\frac{D_{1}}{z-\xi}-\sg_{3}\frac{D_{1}}{z+\xi}\sg_{3}
\end{equation}
By taking the limit $z \to \infty$ in (\ref{eq:Ddiz}) it is readily seen
that $D_{\infty}$ has to be a diagonal matrix. In order to ensure that $L$ and
$\tilde{L}$ have the same rational structure in $z$, we rewrite equation (\ref{eq:invpre}) in the form:
\begin{gather}
\tilde{L} (z) D(z) = D(z) L(z)\label{BTnew}
\end{gather}
Now it is clear that both sides have the same residues at the poles $z=z_{j}$, $z=\xi$ (it is unnecessary to look at the poles at $z=-z_{j}$ and $z=-\xi$ because of the symmetry (\ref{Symm})), so that the following set of equations has to be satisfied:
\begin{gather}
\tilde L_{1}^{(j)} D(z_{j}) = D(z_{j}) L_{1}^{(j)}, \label{resj}
\\
\tilde L (\xi) D_{1} = D_{1} L(\xi). \label {resxi}
\end{gather}
In principle, equations (\ref{resj}), (\ref{resxi}) yield a Darboux matrix
depending \emph{both} on the old (untilded) variables and the new (tilded)
ones, implying in turn an implicit relationship between the same variables. To
get an explicit relationship one has to resort to the so-called spectrality
property \cite{SK},\cite{KV}. To this aim we need to force the determinant of
the Darboux matrix $D(z)$ to have, besides the pair of poles at $z=\pm \xi$, a
pair of opposite \emph{nondynamical} zeroes, say at $z=\pm \eta$, and to allow
the matrix $D_{1}$ to be proportional to a projector \cite{noi}. Again by symmetry it suffices to consider just one of these zeroes.
If $\eta$ is a zero of $\det D(z)$, then $D(\eta)$ is a rank-one matrix, possessing a one-dimensional kernel $|K(\eta)\rangle$; the equation (\ref{BTnew}):
\begin{gather}
\tilde{L} (\eta) D(\eta) = D(\eta) L(\eta)
\end{gather}
entails
\begin{gather}
D(\eta) L(\eta)|K(\eta)\rangle =0.
\end{gather}
This equation in turn allows one to infer that $|K(\eta)\rangle$ is an eigenvector of the Lax matrix $L(\eta)$:
\begin{gather}
\label{spectral1}
L(\eta)|K(\eta)\rangle = \mu (\eta) |K(\eta)\rangle.
\end{gather}
This relation gives a direct link between the parameters appearing in the
dressing matrix $D$ and the \emph{old} dynamical variables in $L$. Because of (\ref{resxi}) we have another one-dimensional kernel $|K(\xi)\rangle$ of $D_{1}$, such that:
\begin{gather}
\label{spectral2}
L(\xi)|K(\xi)\rangle = \mu (\xi) |K(\xi)\rangle.
\end{gather}
In \cite{noi} we have shown how the two spectrality conditions (\ref{spectral1}), (\ref{spectral2}) enable one to
write $D$ in terms of the old dynamical variables and of the two B\"acklund
parameters $\xi$ and $\eta$. The explicit expression of the Darboux dressing matrix is given by:
\begin{gather}
\label{Dexpl}
D(z)= \frac{\beta z}{z^{2}-\xi^{2}}\left( \begin{array}{cc} {\frac {z \left( p(\eta)\eta-p(\xi)\xi \right)
}{b}}+{\frac{ \left( p(\xi)\eta-p(\eta)\xi \right) \eta\xi}{b z}}&{\frac {{\xi}^{2}-{\eta}^{2}}{b}}\vspace{2mm}\\
{\frac {b p(\xi)p(\eta) \left(
{\xi}^{2}-{\eta}^{2} \right) }{\eta\xi}}&{\frac {b \left( p(\eta)\eta-p(\xi)\xi \right)
}{z}}+{\frac { b z\left( p(\xi)\eta-p(\eta)\xi \right)}{\eta\xi}}\end{array} \right).
\end{gather}
In this expression $\beta$ is a global multiplicative factor, inessential with respect to the form of the
BT, while $b$ is an
undetermined parameter that in Section \ref{sec3} we will fix in
order to recover the form of the Lax matrix for the discrete $xxz$ Heisenberg spin chain. The functions
$p(\eta)$ and $p(\xi)$ completely characterize the kernels of $D(\eta)$ and
$D_{1}$: in fact we have the following formulas \cite{noi}:
\begin{equation}
|K(\xi)\rangle=\left(\begin{array}{c}1\\p(\xi)\end{array}\right) \qquad |K(\eta)\rangle=\left(\begin{array}{c}1\\p(\eta)\end{array}\right)
\end{equation}
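These structural properties lend themselves to a direct symbolic check. The following SymPy sketch (an illustration added here, with $p(\xi)$, $p(\eta)$ treated as free symbols) verifies that $\det D(z)$ built from (\ref{Dexpl}) has the pair of nondynamical zeroes at $z=\pm\eta$ and the poles at $z=\pm\xi$, and that $(1,p(\eta))^{T}$ spans the kernel of $D(\eta)$:

```python
import sympy as sp

z, xi, eta, b, beta = sp.symbols('z xi eta b beta')
pxi, peta = sp.symbols('p_xi p_eta')   # p(xi), p(eta), treated as free symbols

pref = beta*z/(z**2 - xi**2)
D = pref*sp.Matrix([
    [z*(peta*eta - pxi*xi)/b + (pxi*eta - peta*xi)*eta*xi/(b*z),
     (xi**2 - eta**2)/b],
    [b*pxi*peta*(xi**2 - eta**2)/(eta*xi),
     b*(peta*eta - pxi*xi)/z + b*z*(pxi*eta - peta*xi)/(eta*xi)],
])

# det D(z) has zeroes at z = +/- eta and poles at z = +/- xi
detD = sp.cancel(sp.together(D.det()))
target = beta**2*(z**2 - eta**2)*(pxi*eta - peta*xi)*(peta*eta - pxi*xi) \
         / ((z**2 - xi**2)*eta*xi)
assert sp.simplify(detD - target) == 0

# (1, p(eta))^T spans the one-dimensional kernel of D(eta)
assert sp.simplify(D.subs(z, eta)*sp.Matrix([1, peta])) == sp.zeros(2, 1)
```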
As $|K(\xi)\rangle$ and $|K(\eta)\rangle$ are respectively eigenvectors of $L(\xi)$ and $L(\eta)$, $p(\xi)$ and $p(\eta)$
must satisfy:
\begin{gather}
p(\xi)=\frac{\mu(\xi)-A(\xi)}{B(\xi)}, \qquad p(\eta)=\frac{\mu(\eta)-A(\eta)}{B(\eta)}
\end{gather}
with $A(z)$, $B(z)$, $C(z)$ given by (\ref{ABC}) and
$\mu^{2}(z)=A^{2}(z)+B(z)C(z)$.
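These eigenvector relations hold identically for any matrix of the form $L=\left(\begin{smallmatrix}A&B\\C&-A\end{smallmatrix}\right)$ with $\mu^{2}=A^{2}+BC$; a minimal SymPy sketch (illustrative only, not part of the original derivation):

```python
import sympy as sp

A, B, C = sp.symbols('A B C')
mu = sp.sqrt(A**2 + B*C)           # eigenvalue branch, mu^2 = A^2 + B*C
L = sp.Matrix([[A, B], [C, -A]])   # generic sl(2) Lax matrix at a fixed spectral point
p = (mu - A) / B                   # kernel parameterization, as in the text
K = sp.Matrix([1, p])
# L|K> = mu |K> must hold identically
assert sp.simplify(L*K - mu*K) == sp.zeros(2, 1)
```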
\section{Explicit map and an equivalent approach to Darboux-dressing matrix} \label{sec3}
The matrix (\ref{Dexpl}) contains just one set of dynamical variables so that
the relation (\ref{eq:invpre}) gives now an explicit map between the variables
$\big(\tilde{s}^{+}_{j},\tilde{s}^{-}_{j},\tilde{s}^{3}_{j}\big)$ and
$\big(s^{+}_{j},s^{-}_{j},s^{3}_{j}\big)$. The map is easily found from (\ref{resj}); it reads:
\begin{subequations}
\begin{equation}\label{eq:s3t}\begin{split}
\tilde{s}^{3}_{k}\, =& \,{\frac {p(\xi)p(\eta) \left( {\xi}^{2}-{\eta}^{2} \right) \left(
\left( {z_{{k}}}^{2}-{\eta}^{2} \right) p(\xi)\xi- \left( {z_{{k}}}^{2}-{\xi}^{2}
\right) p(\eta)\eta \right) s^{-}_{k}z_{k}}{\Delta_{k}}}+\\
&{\frac { \left( {\xi}^{2}-{\eta}^{2} \right) \left( \left( {z_{{k}}}^{2}-{\xi}^{2} \right) p(\xi)\eta-p(\eta)\xi \left( {z_{{k}}}^{2}-{\eta}^{2} \right) \right)
\mbox{}{s}^{+}_{k}z_{k}}{\Delta_{k}}}+\\
&{\frac {s_{k}^{3}\Big[ p(\xi)p(\eta) \left( \left( {\eta}^{2}+{\xi}^{2}
\right) \left( {\xi}^{2}+{z_{{k}}}^{2} \right) \left( {\eta}^{2}+{z_{{k}}}^{2} \right) - 8{\eta}^{2}{\xi}^{2}{z_{{k}}}^{2}
\right)}{\Delta_{k}}}+\\
&{-\frac{ \eta\xi \left( {\xi}^{2}-{z_{{k}}}^{2} \right) \left( {\eta}^{2}-{z_{{k}}}^{2} \right) \big( {p(\xi)}^{2}+{p(\eta)}^{2} \big)\Big]}{{\Delta_{k}}}}
\end{split}\end{equation}
\begin{equation}\label{eq:spt}\begin{split}
\tilde{s}^{+}_{k}\, =& \,-{\frac {{b}^{2}{p(\xi)}^{2}{p(\eta)}^{2} \left( {\eta}^{2}-{\xi}^{2} \right) ^{2}s^{-}_{k}z_{k}^{2}}{\xi\eta\Delta_{k}}}+{\frac {{b}^{2} \left( \left( {z_{{k}}}^{2}-{\xi}^{2} \right) p(\xi)\eta-p(\eta)\xi \left( {z_{{k}}}^{2}-{\eta}^{2} \right) \right) ^{2}
s^{+}_{k}}{\eta\xi\Delta_{k}}}+\\
&\,{\frac {2{b}^{2}p(\xi)p(\eta) \left( {\xi}^{2}-{\eta}^{2} \right) \left( \left( {z_{{k}}}^{2}-{\xi}^{2} \right) p(\xi)\eta-p(\eta)\xi \left( {z_{{k}}}^{2}-{\eta}^{2} \right) \right)
\mbox{}s^{3}_{k}z_{k}}{\eta\xi\Delta_{k}}}
\end{split}\end{equation}
\begin{equation}\label{eq:smt}\begin{split}
\tilde{s}^{-}_{k}\, =& \,-{\frac { \left( {\eta}^{2}-{\xi}^{2} \right)
^{2}s^{+}_{k}z_{k}^2\xi\eta}{{b}^{2}\Delta_{k}}}+{\frac { \left( \left({z_{{k}}}^{2}-{\eta}^{2} \right) p(\xi)\xi- \left( {z_{{k}}}^{2}-{\xi}^{2} \right) p(\eta)\eta \right) ^{2}s^{-}_{k}\xi\eta}{{b}^{2}\Delta_{k}}}+\\
&{\frac {2 \left( {\xi}^{2}-{\eta}^{2} \right) \left( \left( {z_{{k}}}^{2}-{\eta}^{2} \right) p(\xi)\xi- \left( {z_{{k}}}^{2}-{\xi}^{2} \right) p(\eta)\eta \right) s^{3}_{k}z_{k}\xi\eta}{{b}^{2}\Delta_{k}}}
\end{split}\end{equation}\end{subequations}
where $\Delta_{k}$ is proportional to the determinant of $D(z_{k})$, i.e.
\begin{equation}
\Delta_{k}=(z^{2}_{k}-\xi^{2})(z_{k}^{2}-\eta^{2})(p(\xi)\eta-p(\eta)\xi)(p(\eta)\eta-p(\xi)\xi)
\end{equation}
Formulas (\ref{eq:s3t}), (\ref{eq:spt}), (\ref{eq:smt}) define a two-parameter B\"acklund transformation, the parameters being $\xi$ and $\eta$: as we will show in the next section, it is crucial to have a \emph{two}-parameter family of transformations when looking for a physical map from real variables to real variables.
As mentioned in the previous Section, we now show that indeed, by setting:
\begin{equation} \label{bsubst}
b= i\sqrt{\eta\xi}
\end{equation}
the expression (\ref{Dexpl}) of the dressing matrix goes into the expression of the elementary Lax matrix for the classical, partially anisotropic, Heisenberg spin chain on the lattice \cite{FT}.
\noindent
Obviously two matrices differing only by a global multiplicative factor give rise to the same similarity transformation. We therefore omit the factor $\frac{\beta z}{z^{2}-\xi^{2}}$ in (\ref{Dexpl}) and, taking into account (\ref{bsubst}), write for the diagonal part $D_{d}$ of (\ref{Dexpl}):
\begin{equation} \label{Dd}
D_{d} = \frac{i}{2}\Big((p(\xi)-p(\eta))(v - w)\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}+(p(\xi)+p(\eta))(v+w)\sigma_{3}\Big)
\end{equation}
where $v (\xi, \eta)$ and $w (\xi,\eta)$ are given by:
\begin{equation}
v (\xi,\eta)=\frac{z\xi}{\sqrt{\eta\xi}}-\frac{\eta \sqrt{\eta\xi}}{z} \qquad w(\xi,\eta)=\frac{\xi\sqrt{\eta\xi}}{z}-\frac{z\eta}{\sqrt{\eta\xi}} = -v (\eta,\xi)
\end{equation}
We substitute:
\begin{equation} \label{eq33}
\xi\to e^{i\zeta_{1}} \qquad \eta \to e^{i\zeta_{2}} \qquad z\to e^{i\lambda}
\end{equation}
and suitably redefine the B\"acklund parameters to clarify the
structure of the $D$ matrix:
\begin{equation}\label{lmu}
\lambda_{0}\doteq\frac{\zeta_{1}+\zeta_{2}}{2} \qquad \mu \doteq \frac{\zeta_{1}-\zeta_{2}}{2}
\end{equation}
With these definitions it is simple to find that
$v-w=4ie^{i\lambda_{0}}\sin(\lambda-\lambda_{0})\cos(\mu)$ and
$v+w=4ie^{i\lambda_{0}}\cos(\lambda-\lambda_{0})\sin(\mu)$. So, considering
equation (\ref{Dd}) jointly with the off-diagonal part of (\ref{Dexpl}), the dressing matrix can be written as:
\begin{equation} \label{eq:Dprov}\begin{split}
D(\lambda)=&\alpha\,\cos(\mu)
\Big[\sin(\lambda-\lambda_{0})\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}+\frac{p(\zeta_{1})+p(\zeta_{2})}{p(\zeta_{1})-p(\zeta_{2})}\tan(\mu)\cos(\lambda-\lambda_{0})\sigma_{3}+\\
&+\frac{2\sin(\mu)}{p(\zeta_{2})-p(\zeta_{1})}\left(\begin{array}{cc}0 & 1\vspace{1mm}\\
-p(\zeta_{1})p(\zeta_{2}) & 0\end{array}\right)\Big]
\end{split}\end{equation}
where $\alpha$ is the global factor
$2e^{i\lambda_{0}}(p(\zeta_{2})-p(\zeta_{1})) $. Observe that in formula
(\ref{eq:Dprov}), with some abuse of notation, $p(\zeta_{1})$
$\left(p(\zeta_{2})\right)$ stands of course for
$\left.p(\xi)\right|_{\xi=e^{i\zeta_{1}}}$
$\left(\left.p(\eta)\right|_{\eta=e^{i\zeta_{2}}}\right)$. \newline A last change of variables allows one to identify the dressing matrix with the elementary Lax matrix of the classical $xxz$ Heisenberg spin chain on the lattice, and furthermore to recover the form of the Darboux matrix for the \emph{rational} Gaudin model \cite{HKR,SK1} in the limit of \emph{small angles}. Namely, we introduce two new functions, $P$ and $Q$, by letting
\begin{equation}
p(\zeta_{1})=-Q \qquad p(\zeta_{2}) = \frac{2\sin(\mu)}{P}-Q.
\end{equation}
Then equation (\ref{eq:Dprov}) becomes:
\begin{equation} \label{eq:Darboux}
D(\lambda)=\alpha \left(\begin{array}{cc} \sin(\lambda-\lambda_{0}-\mu)+PQ\,\cos(\lambda-\lambda_{0}) &
P\,\cos(\mu)\\
Q\,\sin(2\mu)-PQ^{2}\cos(\mu) & \sin(\lambda-\lambda_{0}+\mu)-PQ\,\cos(\lambda-\lambda_{0})
\end{array}\right)
\end{equation}
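The rank-one property at $\lambda=\lambda_{0}\pm\mu$ can be checked directly on (\ref{eq:Darboux}): dropping the inessential factor $\alpha$, the determinant factorizes as $\sin(\lambda-\lambda_{0}-\mu)\sin(\lambda-\lambda_{0}+\mu)\,(1-2PQ\sin\mu+P^{2}Q^{2})$. A SymPy sketch, added here purely as an illustration:

```python
import sympy as sp

lam, lam0, mu, P, Q = sp.symbols('lam lam0 mu P Q')
d0 = lam - lam0
D = sp.Matrix([
    [sp.sin(d0 - mu) + P*Q*sp.cos(d0),   P*sp.cos(mu)],
    [Q*sp.sin(2*mu) - P*Q**2*sp.cos(mu), sp.sin(d0 + mu) - P*Q*sp.cos(d0)],
])  # alpha dropped: it only rescales det D by alpha**2

target = sp.sin(d0 - mu)*sp.sin(d0 + mu)*(1 - 2*P*Q*sp.sin(mu) + P**2*Q**2)
assert sp.simplify(sp.expand_trig(D.det() - target)) == 0
# hence D is rank one at lam = lam0 +/- mu
assert sp.simplify(D.det().subs(lam, lam0 + mu)) == 0
assert sp.simplify(D.det().subs(lam, lam0 - mu)) == 0
```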
Obviously we can now repeat the earlier argument about spectrality; indeed $D\big|_{\lambda=\lambda_{0}+\mu}$ and $D\big|_{\lambda=\lambda_{0}-\mu}$ are rank-one matrices. So if $\Omega_{+}$ and $\Omega_{-}$ are respectively the kernels of $D(\lambda_{0}+\mu)$ and $D(\lambda_{0}-\mu)$, one has again that
$\Omega_{+}$ and $\Omega_{-}$ are eigenvectors of $L(\lambda_{0}+\mu)$
and $L(\lambda_{0}-\mu)$ with eigenvalues $\gamma_{+}$ and $\gamma_{-}$ where
$$
\gamma_{\pm}=\gamma(\lambda)\Big|_{\lambda=\lambda_{0}\pm\mu}$$
\noindent
where, as in (\ref{generfun}), we have set
\begin{equation}\label{gammas}
\gamma^{2}(\lambda) \equiv A^{2}(\lambda)+B(\lambda)C(\lambda)=-\textrm{det}(L(\lambda))
\end{equation}
The two kernels are given by:
\begin{equation}
\Omega_{+}=\left(\begin{array}{c}1\\-Q\end{array}\right) \qquad \Omega_{-}=\left(\begin{array}{c}P\\2\,\sin(\mu)-PQ\end{array}\right)
\end{equation}
and the eigenvector relations yield the following expressions for $P$ and $Q$ in terms of the old variables only:
\begin{equation}\label{eq:P&Q}
Q=Q(\lambda_{0}+\mu)=\frac{A(\lambda)-\gm(\lambda)}{B(\lambda)}\Big|_{\lambda=\lambda_{0}+\mu} \qquad \frac{1}{P}=\frac{Q(\lambda_{0}+\mu)-Q(\lambda_{0}-\mu)}{2\,\sin(\mu)}
\end{equation}
The explicit map can be found by equating the residues at the poles
$\lambda=\lambda_{k}$ in (\ref{BTnew}), that is by the relation:
\begin{equation} \label{resk}
\tl{L}_{k}D_{k}=D_{k}L_{k}
\end{equation}
where
\begin{equation}
L_{k}=\left(\begin{array}{cc}s^{3}_{k} & s^{-}_{k}\\ s^{+}_{k} &
-s^{3}_{k}\end{array}\right),\qquad D_{k}=D(\lambda=\lambda_{k})
\end{equation}
or by performing the needed changes of variables in (\ref{eq:s3t}), (\ref{eq:spt}), (\ref{eq:smt}). In either case the map reads:
\begin{subequations}\begin{equation}
\begin{split} \label{eqs3}
\tilde{s}^{3}_{k}=&\frac{2\,\cos^{2}(\mu)-(\cos^{2}(\mu)+\cos^{2}(\delta^{k}_{0}))(1-2PQ\,\sin(\mu)+P^{2}Q^{2})}{\Delta_{k}}s^{3}_{k}+\\
&+\frac{P\,\cos(\mu)(\sin(\delta^{k}_{+})-PQ\,\cos(\delta^{k}_{0}))}{\Delta_{k}}s^{+}_{k}+\\
&-\frac{Q\,\cos(\mu)(2\,\sin(\mu)-PQ)(\sin(\delta^{k}_{-})+PQ\,\cos(\delta^{k}_{0}))}{\Delta_{k}}s^{-}_{k}
\end{split}\end{equation}
\begin{equation}\begin{split} \label{eqsp}
\tilde{s}^{+}_{k}&=\frac{(\sin(\delta_{+}^{k})-PQ\,\cos(\delta_{0}^{k}))^{2}}{\Delta_{k}}s^{+}_{k}-\frac{Q^{2}\cos^{2}(\mu)\,(2\,\sin(\mu)-PQ)^{2}}{\Delta_{k}}s^{-}_{k}+\\
&+\frac{2Q\,\cos(\mu)(2\,\sin(\mu)-PQ)(\sin(\delta_{+}^{k})-PQ\,\cos(\delta^{k}_{0}))}{\Delta_{k}}s^{3}_{k}
\end{split}\end{equation}
\begin{equation}\begin{split}\label{eqsm}
\tilde{s}_{k}^{-}=&\frac{(\sin(\delta_{-}^{k})+PQ\,\cos(\delta_{0}^{k}))^{2}}{\Delta_{k}}s_{k}^{-}-\frac{P^{2}\,\cos^{2}(\mu)}{\Delta_{k}}s_{k}^{+}+\\
&-\frac{2P\,\cos(\mu)(\sin(\delta^{k}_{-})+PQ\,\cos(\delta_{0}^{k}))}{\Delta_{k}}s_{k}^{3}
\end{split}\end{equation}\end{subequations}
where for typesetting brevity we have put:
\begin{equation}\left\{\begin{array}{ll}
\delta_{0}^{k}=\lambda_{k}-\lambda_{0}\\
\delta_{\pm}^{k}=\lambda_{k}-\lambda_{0}\pm \mu
\end{array}\right.
\end{equation}
and we have denoted by $\Delta_{k}$ the determinant of $D(\lambda_{k})$, that
is:
$$
\Delta_{k}:=\sin(\lambda_{k}-\lambda_{0}-\mu)\sin(\lambda_{k}-\lambda_{0}+\mu)(1-2PQ\,\sin(\mu)+P^{2}Q^{2})$$
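Since the map is nothing but the similarity transformation $\tl{L}_{k}=D(\lambda_{k})L_{k}D(\lambda_{k})^{-1}$ written out entrywise, it can be cross-checked numerically. The following NumPy sketch (illustrative only, with arbitrary sample values and $P$, $Q$ treated here as free parameters) compares the formulas above with the matrix conjugation and verifies that the Casimir $(s^{3}_{k})^{2}+s^{+}_{k}s^{-}_{k}$ is preserved:

```python
import numpy as np

lam_k, lam0, mu = 0.7, 0.1, 0.3     # sample real parameters
P, Q = 0.4, -1.2                    # treated here as free parameters
s3, spk, smk = 0.5, 1.1, -0.8       # sample residue L_k = [[s3, s-], [s+, -s3]]

d0 = lam_k - lam0
dp, dm = d0 + mu, d0 - mu
cmu, smu = np.cos(mu), np.sin(mu)
D = np.array([[np.sin(dm) + P*Q*np.cos(d0), P*cmu],
              [Q*np.sin(2*mu) - P*Q**2*cmu, np.sin(dp) - P*Q*np.cos(d0)]])
L = np.array([[s3, smk], [spk, -s3]])
Lt = D @ L @ np.linalg.inv(D)       # tilde{L}_k = D_k L_k D_k^{-1}

Delta = np.sin(dm)*np.sin(dp)*(1 - 2*P*Q*smu + (P*Q)**2)
s3t = ((2*cmu**2 - (cmu**2 + np.cos(d0)**2)*(1 - 2*P*Q*smu + (P*Q)**2))*s3
       + P*cmu*(np.sin(dp) - P*Q*np.cos(d0))*spk
       - Q*cmu*(2*smu - P*Q)*(np.sin(dm) + P*Q*np.cos(d0))*smk) / Delta
spt = ((np.sin(dp) - P*Q*np.cos(d0))**2*spk
       - (Q*cmu*(2*smu - P*Q))**2*smk
       + 2*Q*cmu*(2*smu - P*Q)*(np.sin(dp) - P*Q*np.cos(d0))*s3) / Delta
smt = ((np.sin(dm) + P*Q*np.cos(d0))**2*smk
       - (P*cmu)**2*spk
       - 2*P*cmu*(np.sin(dm) + P*Q*np.cos(d0))*s3) / Delta

assert np.allclose(Lt, [[s3t, smt], [spt, -s3t]])
# the spectrum of L_k, i.e. the Casimir (s^3)^2 + s^+ s^-, is unchanged
assert np.isclose(s3t**2 + spt*smt, s3**2 + spk*smk)
```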
At this point we can show that for ``small'' $\lambda_{0}$ and $\mu$
one obtains, at first order, the B\"acklund transformation for the rational
Gaudin model, found independently by Sklyanin \cite{SK1} and by Hone, Kuznetsov and Ragnisco \cite{HKR}, as the composition of two one-parameter B\"acklund transformations.
So let us take $\lambda_{0} \to h\lambda_{0}$, $\mu \to h\mu$ and $\lambda \to h\lambda$, where $h$ is the expansion parameter. One has:
$$
\cot(\lambda-\lambda_{k})=\frac{1}{h(\lambda-\lambda_{k})}+O(h)\qquad \frac{1}{\sin(\lambda-\lambda_{k})}=\frac{1}{h(\lambda-\lambda_{k})}+O(h),
$$
so that $Q = q^{r}+O(h^{2})$,
where the superscript $r$ stands for ``rational''. Thus $q^{r}$ coincides with
the variable $q$ that one finds in the rational case \cite{HKR}. For the variable $P$ one has:
$$
P = h(p^{r}+O(h^{2})) \qquad \textrm{where} \qquad p^{r}=\frac{2\mu}{q^{r}(\lambda_{0}+\mu)-q^{r}(\lambda_{0}-\mu)}
$$
Taking into account these expressions, it is straightforward to see that the
matrix (\ref{eq:Darboux}) has the expansion:
\begin{equation}
D(\lambda)=hD^{r}(\lambda)+O(h^{3})
\end{equation}
where
\begin{equation} D^{r}(\lambda)= \left(\begin{array}{cc} \lambda-\lambda_{0}-\mu +p^{r}q^{r} & p^{r}\\
q^{r}(2\mu-p^{r}q^{r}) & \lambda-\lambda_{0}+\mu-p^{r}q^{r} \end{array}\right).
\end{equation}
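This expansion can be verified symbolically; the SymPy sketch below (illustrative only) substitutes $\lambda\to h\lambda$, $\lambda_{0}\to h\lambda_{0}$, $\mu\to h\mu$, $Q\to q^{r}$, $P\to hp^{r}$ into (\ref{eq:Darboux}) (with $\alpha$ dropped) and checks that each entry of $D-hD^{r}$ is $O(h^{3})$:

```python
import sympy as sp

h, lam, lam0, mu, p, q = sp.symbols('h lam lam0 mu p q')
P, Q = h*p, q                       # P = h p^r + O(h^3), Q = q^r + O(h^2)
d0 = h*(lam - lam0)                 # all angles scaled by h
D = sp.Matrix([
    [sp.sin(d0 - h*mu) + P*Q*sp.cos(d0),     P*sp.cos(h*mu)],
    [Q*sp.sin(2*h*mu) - P*Q**2*sp.cos(h*mu), sp.sin(d0 + h*mu) - P*Q*sp.cos(d0)],
])
Dr = sp.Matrix([
    [lam - lam0 - mu + p*q, p],
    [q*(2*mu - p*q),        lam - lam0 + mu - p*q],
])
# D(lambda) = h D^r(lambda) + O(h^3), entry by entry
for i in range(2):
    for j in range(2):
        rem = sp.series(D[i, j] - h*Dr[i, j], h, 0, 3).removeO()
        assert sp.simplify(rem) == 0
```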
The limit of ``small angles'' in (\ref{eq:s3t}), (\ref{eq:spt}),
(\ref{eq:smt}) obviously leads to the rational map of \cite{HKR}.
\subsection{Symplecticity}
In this subsection we address the question of the symplecticity of our map; the correspondence with the rational B\"acklund transformation in the limit of ``small angles'' shows that the transformations are surely canonical in this limit. Indeed, as our map is explicit, we could check by brute-force calculations whether the Poisson structure (\ref{poisS}) is preserved by the tilded variables. However, we will follow a finer argument due to Sklyanin \cite{SK2}.
Suppose that $D(\lambda)$ obeys the \emph{quadratic} Poisson bracket, that is
\begin{equation}\label{eq:PoisD}
\{D^{1}(\lambda),D^{2}(\tau)\}=[r_t(\lambda-\tau),D^{1}(\lambda)\otimes D^{2}(\tau)]
\end{equation}
where as usual $D^{1}=D\otimes\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}$, $D^{2}=\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}\otimes D$. Consider the relation
\begin{equation}\label{eq:transf}
\tilde{L}(\lambda)\tilde{D}(\lambda-\lambda_{0})=D(\lambda-\lambda_{0})L(\lambda)
\end{equation}
in an extended phase space, where the entries of $D$ Poisson commute with those of $L$.
Note that in (\ref{eq:transf}) we have used tilded variables also for $D(\lambda)$ (on its l.h.s.) because (\ref{eq:transf}) is indeed the B\"acklund transformation in this extended phase space, whose coordinates are $(s^{3}_{j},s^{\pm}_{j},P,Q)$, so that we also have a $\tl{P}$ and a $\tl{Q}$. The key observation is that if both $L$ and $D$ have the
same Poisson structure, given by equation (\ref{eq:PoisD}), then this property
holds true for $LD$ and
$DL$ as well, because in this extended space the
entries of $D$ Poisson commute with the entries of $L$. This means that the
transformation (\ref{eq:transf}) defines a ``canonical''
transformation. Sklyanin showed \cite{SK2} that if one now restricts the variables to the constraint manifold $\tl{P}=P$ and $\tl{Q}=Q$, then symplecticity is
preserved; however this constraint leads to a dependence of $P$ and $Q$ on
the entries of $L$, that for consistency must be the same as the one given by the
equation (\ref{eq:transf}) on this constrained manifold. But there
(\ref{eq:transf}) is just given by the usual BT:
$$
\tilde{L}(\lambda)D(\lambda-\lambda_{0})=D(\lambda-\lambda_{0})L(\lambda)
$$
so that the map preserves the spectrum of $L(\lambda)$ and is canonical.
It remains to show that
(\ref{eq:PoisD}) is indeed fulfilled by our $D(\lambda)$.
Obviously $D(\lambda)$ cannot have this Poisson structure for an arbitrary Poisson
bracket between $P$ and $Q$. In the rational case the Darboux matrix has the Poisson structure imposed by the rational $r$-matrix provided $P$ and $Q$ are
canonically conjugated in the extended space \cite{SK2} (and this is why they
were called $P$ and $Q$); in the trigonometric case $P$ and $Q$ are no longer
canonically conjugated, but obviously one recovers this property at order $h$ in the ``small angle'' limit.
\noindent
First note that $D(\lambda)$ can be conveniently written as:
\begin{equation}\label{eq:Dlam}
D(\lambda) = \alpha\,\cos(\mu)\Big[\sin(\lambda)\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}+a\,\cos(\lambda)\sigma_{3}+\left(\begin{array}{cc}0 & b\vspace{1mm}\\
c & 0\end{array}\right)\Big]
\end{equation}
where the coefficients $a, b, c$ are given by:
\begin{equation}
a=\frac{PQ-\sin(\mu)}{\cos(\mu)}, \quad b=P, \quad c=2Q\,\sin(\mu)-PQ^{2}
\end{equation}
Inserting (\ref{eq:Dlam}) in (\ref{eq:PoisD}) we have the following constraints:
\begin{equation} \label{eq:pois1}
\{\alpha,\alpha a \}=0 \quad \Longrightarrow \quad \alpha=\alpha(PQ)
\end{equation}
\begin{equation}\label{eq:pois2}
\{\alpha,\alpha b \}=-\alpha^{2}ab \quad \Longrightarrow \quad
\{\alpha,P\}=\alpha P\frac{\sin(\mu)-PQ}{\cos(\mu)}
\end{equation}
\begin{equation}\label{eq:pois3}
\{\alpha,\alpha c \}=\alpha^{2}ac \quad \Longrightarrow \quad \{\alpha,Q\}=-\alpha Q\frac{\sin(\mu)-PQ}{\cos(\mu)}
\end{equation}
All remaining relations, namely
\begin{equation}\label{eq:pois456}
\{\alpha b,\alpha c\}=2\alpha^{2}a \quad \{\alpha a,\alpha b\}=\alpha^{2}b
\quad \{\alpha a,\alpha c\}=-\alpha^{2}c
\end{equation}
give the same constraint, i.e.:
\begin{equation}\label{eq:poispq}
\{Q,P\}=\frac{1+P^{2}Q^{2}-2PQ\sin(\mu)}{\cos(\mu)}
\end{equation}
This expression can be used to find, after a simple integration,
$$
\alpha(PQ)=\frac{k}{\sqrt{(1+P^{2}Q^{2}-2PQ\sin(\mu))}}
$$
so that the Darboux matrix (\ref{eq:Darboux}) is fixed (up to the constant
multiplicative factor $k$).
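All of the bracket relations above can be verified at once with SymPy, realizing $\{\cdot,\cdot\}$ on functions of $(Q,P)$ through (\ref{eq:poispq}); the sketch below (illustrative only) checks (\ref{eq:pois1})--(\ref{eq:pois456}) for $\alpha=k\,(1+P^{2}Q^{2}-2PQ\sin\mu)^{-1/2}$:

```python
import sympy as sp

P, Q, mu, k = sp.symbols('P Q mu k')
s, c = sp.sin(mu), sp.cos(mu)
u = 1 + P**2*Q**2 - 2*P*Q*s
alpha = k/sp.sqrt(u)
w = u/c                                  # {Q, P}, eq. (poispq)

def pb(f, g):
    """Poisson bracket induced on functions of (Q, P) by {Q, P} = w."""
    return (sp.diff(f, Q)*sp.diff(g, P) - sp.diff(f, P)*sp.diff(g, Q))*w

a = (P*Q - s)/c
b, cc = P, 2*Q*s - P*Q**2
checks = [
    pb(alpha, alpha*a),                     # eq. (pois1): vanishes
    pb(alpha, alpha*b) + alpha**2*a*b,      # eq. (pois2)
    pb(alpha, alpha*cc) - alpha**2*a*cc,    # eq. (pois3)
    pb(alpha*b, alpha*cc) - 2*alpha**2*a,   # eq. (pois456)
    pb(alpha*a, alpha*b) - alpha**2*b,
    pb(alpha*a, alpha*cc) + alpha**2*cc,
]
assert all(sp.simplify(expr) == 0 for expr in checks)
```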
As previously pointed out, the Darboux-dressing matrix (\ref{eq:Darboux}) turns out to be formally equivalent to the elementary Lax matrix for the classical $xxz$ Heisenberg spin chain on the lattice \cite{FT}. Moreover it also has the same (quadratic) Poisson bracket. This suggests that indeed $D(\lambda)$ can be recast in the form (see \cite{FT}):
\begin{equation}\label{eq:DdiFT}
D(\lambda)=\mathscr{S}_{0}\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}+\frac{i}{\sin(\lambda)}\big(\mathscr{S}_{1}\sigma_{1}+\mathscr{S}_{2}\sigma_{2}+\cos(\lambda)\mathscr{S}_{3}\sigma_{3}\big)
\end{equation}
where the $\sigma_{i}$ are the Pauli matrices and the variables
$\mathscr{S}_{i}$ satisfy the following Poisson brackets \cite{FT}:
\begin{equation}\label{eq:PoisDdiFT} \begin{array}{cc}
\{\mathscr{S}_{i},\mathscr{S}_{0}\}=J_{jk}\mathscr{S}_{j}\mathscr{S}_{k}\\
\{\mathscr{S}_{i},\mathscr{S}_{j}\}=-\mathscr{S}_{0}\mathscr{S}_{k} \end{array}
\end{equation}
where $(i,j,k)$ is a cyclic permutation of $(1,2,3)$ and $J_{jk}$ is
antisymmetric with $J_{12}=0, J_{13}=J_{23}=1$. Indeed it is
straightforward to show that the link between the two representations
(\ref{eq:Dlam}) and (\ref{eq:DdiFT}), up to the factor
$\cos(\mu)\sin(\lambda)$ that affects neither (\ref{BTnew}) nor the Poisson bracket (\ref{eq:PoisD}), is given by:
\begin{equation}
\alpha=\mathscr{S}_{0} \quad -\frac{i\alpha}{2}(b+c)=\mathscr{S}_{1} \quad
\frac{\alpha}{2}(b-c)=\mathscr{S}_{2} \quad -ia\alpha=\mathscr{S}_{3}
\end{equation}
and the Poisson brackets (\ref{eq:pois1}), (\ref{eq:pois2}), (\ref{eq:pois3}),
(\ref{eq:pois456}) correspond to those given in (\ref{eq:PoisDdiFT}).
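The stated correspondence can likewise be checked symbolically; the following sketch (illustrative only) verifies that the identification above turns the brackets (\ref{eq:pois1})--(\ref{eq:pois456}) into (\ref{eq:PoisDdiFT}) with $J_{12}=0$, $J_{13}=J_{23}=1$:

```python
import sympy as sp

P, Q, mu, k = sp.symbols('P Q mu k')
I = sp.I
s, c = sp.sin(mu), sp.cos(mu)
u = 1 + P**2*Q**2 - 2*P*Q*s
alpha = k/sp.sqrt(u)
w = u/c                                  # {Q, P}, eq. (poispq)

def pb(f, g):
    return (sp.diff(f, Q)*sp.diff(g, P) - sp.diff(f, P)*sp.diff(g, Q))*w

a = (P*Q - s)/c
b, cc = P, 2*Q*s - P*Q**2
S0, S1, S2, S3 = alpha, -I*alpha*(b + cc)/2, alpha*(b - cc)/2, -I*a*alpha

assert sp.simplify(pb(S1, S0) - S2*S3) == 0   # J_23 = 1
assert sp.simplify(pb(S2, S0) + S3*S1) == 0   # J_31 = -1
assert sp.simplify(pb(S3, S0)) == 0           # J_12 = 0
assert sp.simplify(pb(S1, S2) + S0*S3) == 0
assert sp.simplify(pb(S2, S3) + S0*S1) == 0
assert sp.simplify(pb(S3, S1) + S0*S2) == 0
```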
\newline
An open question regards the generating function of our BT. So far we have not been able to write it in closed form; in our opinion the question
is harder than in the rational case (where the generating function is
known from \cite{HKR}): in fact the rational map corresponding to
(\ref{eq:s3t}), (\ref{eq:spt}), (\ref{eq:smt}) can be written as the
composition of two simpler \emph{one}-parameter B\"acklund transformations,
and this entails the same property for the generating function; in the
trigonometric case a factorization of the B\"acklund transformation cannot
preserve the symmetry (\ref{Symm}): so probably one should look for symmetry-violating generating functions whose composition enables the symmetry to be restored.
\section{Physical B\"acklund transformations}\label{sec4}
The transformations we have found do not, in general, map real variables into
real variables. A sufficient condition ensuring this property is given by:
\begin{equation}
\zeta_{1}=\bar{\zeta}_{2}\label{reality}
\end{equation}
which amounts to requiring that $\lambda_{0}$ and $\mu$ in (\ref{eqs3}), (\ref{eqsp}),
(\ref{eqsm}) be, respectively, real and imaginary
numbers.
\newline
Indeed we claim that, if (\ref{reality}) holds, starting from a physical solution of the dynamical equations we
can find a new physical solution depending on two real parameters. Let us prove the
assertion. The B\"acklund transformations are obtained from (\ref{resk});
starting from a real solution means starting from a Hermitian $L_{k}$. Thus, if the transformed matrix $\tl{L}_{k}$ is to be Hermitian
too, the Darboux matrix has to be proportional to a unitary matrix. We will
show that this is the case by choosing $\zeta_{1}=\bar{\zeta}_{2}$ and
$\gamma(\zeta_{1})=-\bar{\gamma}(\zeta_{2})$ ($\gamma$ is the function defined
in (\ref{gammas})). Note that the condition on the $\gamma$'s specifies their
relative sign (the sheet on the Riemann surface), which is inessential for the
spectrality property. Hereafter we assume the parameter $\mu$, defined in
(\ref{lmu}), to be purely imaginary, $\mu=i\epsilon$, so that:
\begin{equation}\label{epsmu}
\zeta_{1}=\lambda_{0}+i\epsilon \qquad (\lambda_{0},\epsilon)\in\mathbb{R}^{2}
\end{equation}
The Darboux matrix at $\lambda=\lambda_{k}$ can be rewritten as:
\begin{equation}\label{eq:Dar1}
D_{k}=\left(\begin{array}{cc}
\sin(v_{k}-i\epsilon)+PQ\,\cos(v_{k}) & P\,\cosh(\epsilon)\\
Q\,\cosh(\epsilon)\,(2i\,\sinh(\epsilon)-PQ) & \sin(v_{k}+i\epsilon)-PQ\,\cos(v_{k})\end{array}\right)
\end{equation}
where $v_{k} \equiv \lambda_{k}-\lambda_{0}$ (we are assuming that the
parameters $\lambda_{k}$ of the model are real).
We recall that in (\ref{eq:Dar1}):
\begin{equation} \label{eq:peq}
Q=Q(\zeta_{1})=\frac{A(\zeta_{1})-\gm(\zeta_{1})}{B(\zeta_{1})}=-\frac{C(\zeta_{1})}{A(\zeta_{1})+\gamma(\zeta_{1})}; \qquad
P=\frac{2i\,\sinh(\epsilon)}{Q(\zeta_{1})-Q(\bar{\zeta_{1}})}.
\end{equation}
Furthermore it is a simple matter to show that
\begin{equation}\label{eq:ABC}
A(\zeta_{1})=\bar{A}(\bar{\zeta_{1}}); \quad B(\zeta_{1})=\bar{C}(\bar{\zeta_{1}}); \quad C(\zeta_{1})=\bar{B}(\bar{\zeta_{1}}).
\end{equation}
For the off-diagonal terms of $D_{k}D_{k}^{\dag}$ to vanish, the
following equation has to be fulfilled:
\begin{equation} \label{eq:firsteq}
P(\sin(v_{k}-i\epsilon)-\bar{P}\bar{Q}\,\cos(v_{k}))=\bar{Q}(2i\,\sinh(\epsilon)+\bar{P}\bar{Q})(\sin(v_{k}-i\epsilon)+PQ\,\cos(v_{k}))
\end{equation}
Using relations (\ref{eq:peq}) and rearranging the terms, the previous equation becomes:
\begin{equation}\label{eq:secondeq}\begin{split}
&(\frac{1}{\bar{Q}(\zeta_{1})}-\frac{1}{\bar{Q}(\bar{\zeta_{1}})})\cosh(\epsilon)\sin(v_{k})+i(\frac{1}{\bar{Q}(\zeta_{1})}+\frac{1}{\bar{Q}(\bar{\zeta_{1}})})\cos(v_{k})\sinh(\epsilon)=\\
&=(Q(\zeta_{1})-Q(\bar{\zeta_{1}}))\cosh(\epsilon)\sin(v_{k})+i\cos(v_{k})\sinh(\epsilon)(Q(\zeta_{1})-Q(\bar{\zeta_{1}}))
\end{split}\end{equation}
Note that the relations (\ref{eq:ABC}) give
$\gamma^{2}(\zeta_{1})=\overline{\gamma^{2}(\bar{\zeta_{1}})}$, implying that
$\gamma^{2}(\lambda)$ is a real function of its complex argument, consistently
with the expansion (\ref{generfun}). \newline The choice:
\begin{equation}\label{gamma12}
\gamma(\zeta_{1})=-\bar{\gamma}(\bar{\zeta_{1}})
\end{equation}
entails:
\begin{equation}
\bar{Q}(\zeta_{1})=-\frac{1}{Q(\bar{\zeta_{1}})}
\end{equation}
With this constraint, equation
(\ref{eq:secondeq}) is satisfied as well. Moreover (\ref{gamma12}) makes the diagonal
terms of $D_{k}D_{k}^{\dag}$ equal. This shows that, under the given
assumptions, $D_{k}$ is proportional to a unitary matrix.
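This unitarity argument is easy to probe numerically: picking an arbitrary complex value for $Q(\zeta_{1})$ and imposing the constraint $\bar{Q}(\zeta_{1})=-1/Q(\bar{\zeta}_{1})$, the matrix (\ref{eq:Dar1}) satisfies $D_{k}D_{k}^{\dag}=(\text{real scalar})\times\text{identity}$. A NumPy sketch with sample values (illustrative only):

```python
import numpy as np

eps, v = 0.35, 1.2                  # sample real epsilon and v_k
q = 0.7 + 0.4j                      # sample value of Q(zeta_1)
q2 = -1/np.conj(q)                  # Q(bar zeta_1), from bar{Q}(zeta_1) = -1/Q(bar zeta_1)
P = 2j*np.sinh(eps)/(q - q2)
Q = q

D = np.array([[np.sin(v - 1j*eps) + P*Q*np.cos(v), P*np.cosh(eps)],
              [Q*np.cosh(eps)*(2j*np.sinh(eps) - P*Q),
               np.sin(v + 1j*eps) - P*Q*np.cos(v)]])
G = D @ D.conj().T
# D_k D_k^dag is a real multiple of the identity, i.e. D_k is proportional to a unitary matrix
assert np.allclose(G, G[0, 0].real*np.eye(2))
assert abs(G[0, 0].imag) < 1e-12
```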
\section{Interpolating Hamiltonian flow} \label{sec5}
The B\"acklund transformation can be seen as a time discretization of a
one-parameter ($\lambda_{0}$) family of Hamiltonian flows, with the difference
$i(\bar{\zeta_{1}}-\zeta_{1})=2\epsilon$ playing the role of the time step. To clarify this point, let
us take the limit $\epsilon \to 0$.\\
We have:
\begin{equation}\label{eq:q0}
Q=\frac{A(\lambda_{0})-\gamma(\lambda_{0})}{B(\lambda_{0})}+O(\epsilon)\equiv Q_{0}+O(\epsilon)
\end{equation}
\begin{equation}\label{eq:p0}
P=-i\epsilon\frac{B(\lambda_{0})}{\gamma(\lambda_{0})}+O(\epsilon^{2})\equiv
i\epsilon P_{0}+O(\epsilon^{2})
\end{equation}
and for the dressing matrix we can write:
\begin{equation}\begin{split}
&D(\lambda)=k\,\sin(\lambda-\lambda_{0})\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}+\\
+&i\epsilon k\left(\begin{array}{cc}
\cos(\lambda-\lambda_{0})(P_{0}Q_{0}-1) & P_{0}\\
Q_{0}(2-P_{0}Q_{0})& \cos(\lambda-\lambda_{0})(1-P_{0}Q_{0})
\end{array}\right)+O(\epsilon^{2})\end{split}
\end{equation}
Reorganizing the terms with the help of $P_{0}$ and $Q_{0}$ given in the equations (\ref{eq:q0}) and (\ref{eq:p0}) we
arrive at the expression:
\begin{equation}\begin{split}
&D(\lambda)=k\,\sin(\lambda-\lambda_{0})\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}+\\
-&\frac{i\epsilon k}{\gamma(\lambda_{0})}\left(\begin{array}{cc}
A(\lambda_{0})\cos(\lambda-\lambda_{0}) & B(\lambda_{0})\\
C(\lambda_{0})& -A(\lambda_{0})\cos(\lambda-\lambda_{0})
\end{array}\right)+O(\epsilon^{2})\end{split}
\end{equation}
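The substitution leading from (\ref{eq:q0}), (\ref{eq:p0}) to this last expression is itself a one-line symbolic check (SymPy sketch, illustrative only):

```python
import sympy as sp

A, B, C, lam, lam0 = sp.symbols('A B C lam lam0')
gamma = sp.sqrt(A**2 + B*C)         # gamma(lam0), with A, B, C evaluated at lam0
Q0 = (A - gamma)/B
P0 = -B/gamma
co = sp.cos(lam - lam0)
M1 = sp.Matrix([[co*(P0*Q0 - 1), P0], [Q0*(2 - P0*Q0), co*(1 - P0*Q0)]])
M2 = (-1/gamma)*sp.Matrix([[A*co, B], [C, -A*co]])
# the O(eps) part of the dressing matrix indeed collapses to the A, B, C form
assert sp.simplify(M1 - M2) == sp.zeros(2, 2)
```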
It is now straightforward to show that in the limit $\epsilon \to 0$ the equation of the map
$\tl{L}D=DL$ turns into the Lax equation for a continuous flow:
\begin{equation}\label{eq:motion}
\dot{L}(\lambda)= [L(\lambda),M(\lambda,\lambda_{0})]
\end{equation}
where the time derivative is defined as:
\begin{equation}
\dot{L}=\lim_{\epsilon\rightarrow 0}\frac{\tilde{L}-L}{\epsilon}
\end{equation}
and the matrix $M(\lambda,\lambda_{0})$ has the form
\begin{equation}
\frac{i}{\gamma(\lambda_{0})}\left(\begin{array}{cc}
A(\lambda_{0})\cot(\lambda-\lambda_{0}) & \frac{B(\lambda_{0})}{\sin(\lambda-\lambda_{0})}\\
\frac{C(\lambda_{0})}{\sin(\lambda-\lambda_{0})}& -A(\lambda_{0})\cot(\lambda-\lambda_{0})
\end{array}\right)
\end{equation}
The system (\ref{eq:motion})
can be cast in Hamiltonian form:
\begin{equation}
\dot{L}(\lambda)=\{\mathcal{H}(\lambda_{0}),L(\lambda)\}
\end{equation}
with the Hamiltonian given by:
\begin{equation}\label{eq:Ham}
\mathcal{H}(\lambda_{0})=\gamma(\lambda_{0})=\sqrt{A^{2}(\lambda_{0})+B(\lambda_{0})C(\lambda_{0})}
\end{equation}
Quite remarkably, but not surprisingly, the Hamiltonian (\ref{eq:Ham})
characterizing the interpolating flow is (the square root of) the generating
function (\ref{generfun}) of the whole set of conserved quantities. By
choosing the parameter $\lambda_{0}$ to be equal to any of the poles ($\lambda_{i}$) of the
Lax matrix, the map leads to $N$ different maps $\{BT^{(i)}\}_{i=1..N}$, where
$BT^{(i)}$ discretizes the flow
corresponding to the Hamiltonian $H_{i}$, given by equation
(\ref{hams}). Any other integrable map for the trigonometric Gaudin model can be, in principle, written in terms of the
$N$ maps $\{BT^{(i)}\}_{i=1..N}$. \newline
More explicitly, by setting $\lambda_{0}=\delta +\lambda_{i}$ and taking
the limit $\delta \to 0$, the Hamiltonian (\ref{eq:Ham}) gives:
\begin{equation}
\gamma(\lambda_{0})=\frac{s_{i}}{\delta}+\frac{H_{i}}{2s_{i}}+O(\delta)
\end{equation}
and the equations of motion take the form:
\begin{equation}
\dot{L}(\lambda)=\frac{1}{2s_{i}}\{H_{i},L(\lambda)\}
\end{equation}
Accordingly, the interpolating flow encompasses all the commuting flows of
the system, so that the B\"acklund transformations turn out to be
\emph{exact time-discretizations} of this interpolating flow.
\subsection{Numerics}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{3.eps}
\caption{input parameters: $s_{1}^{+}=2+i$, $s_{1}^{-}=2-i$, $s_{1}^{3}=-2$,
$s_{2}^{+}=50+40i$, $s_{2}^{-}=50-40i$, $s_{2}^{3}=70$, $\lambda_{1}=\pi/110$,
$\lambda_{2}=7\pi/3$, $\lambda_{0}=0.1$, $\mu=-0.002i$}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{1.eps}
\caption{input parameters: $s_{1}^{+}=0.2+10i$, $s_{1}^{-}=0.2-10i$, $s_{1}^{3}=-1$,
$s_{2}^{+}=10-30i$, $s_{2}^{-}=10+30i$, $s_{2}^{3}=100$, $\lambda_{1}=\pi$,
$\lambda_{2}=7\pi/3$, $\lambda_{0}=0.1$, $\mu=-0.004i$}
\end{figure}
The figures report an example of iteration of the map (\ref{eqs3}),
(\ref{eqsp}), (\ref{eqsm}). For simplicity we take $N=2$. The computations
show the first $1500$ iterations; the plotted variables are the
physical ones $(s_{1}^{x},s_{1}^{y},s_{1}^{z})$. Only one of the two spins is
shown, namely the one labeled by the subscript ``1''. The figures were obtained with a Maple\texttrademark\ code.
\newpage
\section{Introduction}
Single Molecule Magnets (SMMs) have been the subject of intense
research activity since the first and most studied one,
Mn$_{12}$-ac, was reported \cite{Sessoli93}. These metal-organic
clusters are usually characterized by a large spin ground state S
and an easy-axis anisotropy which determines the Zero-Field
Splitting (ZFS) of the $S$ state sublevels. The resulting magnetic
bistability makes them interesting for magnetic storage applications
due to their potential to shrink the magnetic bit down to the size
of one single molecule. Until recently and despite the common
efforts of chemists and physicists to find suitable systems that
could retain the magnetization for a long time at non cryogenic
temperatures, Mn$_{12}$-ac was the system showing the `highest'
blocking temperature (3.5 K) and anisotropy barrier (74.4
K)\cite{Chackov06}. The relaxation time in the classical regime
follows the Arrhenius law: $\tau = \tau_0 \exp(U/k_BT)$ (Ref.
\onlinecite{Villain94}). According to this, there are two key points
that have to be considered for the realization of an ideal SMM.
First of all, the anisotropy barrier, given to a first approximation
by $U\sim |D|S^{^2}$ ($D$ is the axial anisotropy parameter), has to
be sufficiently high. This is to prevent the reversal of the
magnetization via a classical thermally activated multistep Orbach
process mediated by spin-phonon interactions. This can be achieved
by the simultaneous increase of $D$ and $S$, two variables that are
intrinsically linked together \cite{Waldmann07}. Secondly, the
pre-exponential factor $\tau_0$ in the Arrhenius law has to be
large. This factor is dominated by the time necessary to climb the
upper states in the energy level diagram, and is proportional to
$D^{-3}$ (Ref. \onlinecite{Villain94}). In addition to the classical
relaxation mechanism, the quantum tunneling of the magnetization
(QTM) that characterizes the spin dynamics of SMMs, has to be taken
into consideration and minimized for magnetic data storage application, since it provides a
shortcut for the relaxation of the magnetization.\\
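As a rough numerical illustration of how steeply the Arrhenius law acts (a hypothetical sketch: the barrier is the Mn$_{12}$-ac value quoted above, while the attempt time $\tau_{0}=10^{-8}$~s is an assumed typical order of magnitude, not a value from this work):

```python
import numpy as np

U_over_kB = 74.4      # anisotropy barrier of Mn12-ac in temperature units (K)
tau0 = 1e-8           # assumed attempt time in seconds (hypothetical, for illustration)

def tau(T):
    """Arrhenius relaxation time tau = tau0 * exp(U / (kB * T))."""
    return tau0*np.exp(U_over_kB/T)

# halving the temperature from 6 K to 3 K slows relaxation by ~5 orders of magnitude
ratio = tau(3.0)/tau(6.0)
assert 1e5 < ratio < 1e6
```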
Therefore, to engineer SMMs able to retain the magnetization for
long time it is crucial to control all the different mechanisms that
provide a relaxation path for the system. Recently we succeeded in
the synthesis of a new class of Mn$^{3+}$-based clusters that
contributed in raising the anisotropy barrier and has served as a
good model system to study the factors involved in the relaxation
mechanism \cite{Carretta08, Carretta09poly}. \\This class consists of
hexanuclear Mn$^{3+}$ clusters (from now on Mn$_6$) which, despite
the generally similar nuclear structure, display a rich variety of
spin ground states and anisotropy energy barriers
\cite{Milios07Ueff53, Milios07Ueff86, Milios07switch,
InglisDalton09, MiliosDalton08,Milios07magstruc}. The six Mn$^{3+}$
ions are arranged in two triangles, with dominant ferromagnetic (FM)
exchange interaction between the two triangles and FM or
antiferromagnetic (AFM) interactions within the two triangles. It
has been found that the nature of the intra-triangle exchange
interaction can be switched from AFM to FM by substituting the
organic ligands bridging the Mn$^{3+}$ ions, leading to a change of the
ground state from a low spin ($S=4$) to a high spin ($S=12$)
\cite{Milios07Ueff53}. Furthermore, deliberately targeted structural
distortions have been successfully used to tune the values of
the exchange interactions \cite{Milios07Ueff86}. The isotropic exchange
interactions, and consequently the overall anisotropy barrier
\cite{Carretta08}, are thus found to be very sensitive to the
structural details. This has also been demonstrated using an
alternative method for distorting the molecule, that is by applying
external hydrostatic pressure and correlating the structural changes
with the magnetic behavior \cite{Prescimone08}. It is therefore
quite important to determine the exchange interactions for different
structures to deduce magneto-structural correlations. This
information can then be used to engineer new clusters with
selectively modified molecular structures that match
the optimized conditions for the desired magnetic properties. \\
We have investigated three members of the family of Mn$_6$ clusters,
with chemical formulas
[Mn$_6$O$_2$(sao)$_6$(O$_2$CMe)$_2$(EtOH)$_4$]$\cdot$4EtOH
(\textbf{1}),
[Mn$_6$O$_2$(Et-sao)$_6$(O$_2$CPh)$_2$(EtOH)$_4$(H$_2$O)$_2$]$\cdot$2EtOH
(\textbf{2}) and
[Mn$_6$O$_2$(Et-sao)$_6$(O$_2$CPh(Me)$_2$)$_2$(EtOH)$_6$]
(\textbf{3}) \cite{Milios04,Milios07Ueff53,Milios07Ueff86}. All
molecules display very similar structures consisting of six
Mn$^{3+}$ ions ($s = 2$) arranged in two staggered triangular units
(see Fig. \ref{fig:Mn6_S4_S12}) related by an inversion centre.
\begin{figure}[htb!]
\includegraphics[width=8cm,angle=0]{figure1}
\caption{(color online) Core of molecules (\textbf{1}) (left) and (\textbf{3}) (right) showing at the bottom the difference in torsion angles ($\alpha_1$, $\alpha_2$ and $\alpha_3$). Color scheme: Mn, orange, O, red, N, blue. H and C ions are omitted for
clarity.} \label{fig:Mn6_S4_S12}
\end{figure}
The only major structural difference between the three clusters
resides in the steric effect of the organic ligands used in proximity to the transition
metal ions. However, despite having very similar structures, the
three molecules have very different magnetic properties. The
coupling between the magnetic ions occurs via superexchange pathways
involving oxygen and nitrogen ions and is found to be extremely
sensitive to intramolecular bond angles and distances. The
particular arrangement of the magnetic ions provides exchange
couplings lying in the cross-over region between AFM and FM. For
this reason, even small structural distortions have tremendous
impact on the magnetic properties of the system. For example, while
the coupling between the two triangles is ferromagnetic for all
molecules, the intra-triangular coupling changes from
antiferromagnetic in (\textbf{1}) to ferromagnetic in (\textbf{2})
and (\textbf{3}) due to a `twisting' of the oximate linkage. This results
in a `switching' of the total spin ground state from $S=4$
to $S=12$. Systematic synthesis and studies of various members of the
Mn$_6$ family have revealed that the nature of the coupling is
extremely sensitive to the intra-triangular Mn-O-N-Mn torsion angles
\cite{MiliosDalton08,InglisDalton09} (see Fig.
\ref{fig:Mn6_S4_S12}). There is a critical value for the torsion
angle of $30.85\pm0.45^{\circ}$, above which the pairwise exchange
interaction switches from antiferromagnetic to ferromagnetic, while
a further enhancement of the angle increases the strength of the FM
interaction. This effect has been interpreted in terms of the
particular arrangement of the manganese d$_{z^2}$ orbitals with
respect to the p-orbitals of the nitrogen and oxygen ions. A large
(small) Mn-O-N-Mn torsion angle results in a small (large) overlap
between the magnetic orbitals giving rise to ferromagnetic
(antiferromagnetic or weak ferromagnetic) superexchange interactions
\cite{Cremades09}.\\
Molecules (\textbf{2}) and (\textbf{3}) have the same spin ground
state $S=12$ but very different effective energy barriers
($U_{\text{eff}}\approx 53$ K for (\textbf{2}) and
$U_{\text{eff}}\approx 86.4$ K for (\textbf{3})). This difference
was found to be closely related
to the exchange interactions \cite{Carretta08}.\\
In order to understand this rich variety of behaviors, we performed
a detailed spectroscopic characterization of the three molecules
using inelastic neutron scattering (INS) and frequency domain
magnetic resonance (FDMR). FDMR is only sensitive to transitions
with a predominantly intra-multiplet character, according to the
selection rules $\Delta S=0, \Delta M_S=\pm1$. In contrast, in INS
both inter- and intra-multiplet transitions can be observed ($\Delta
S=0,\pm1, \Delta M_S=0,\pm1$). Thus, the combination of the two
techniques allows assignment of all observed excitations
\cite{Sieber05,Piligkos05}.
The determination of the model spin Hamiltonian parameters enabled
us to estimate theoretically the effective energy barrier for the low spin molecule
(\textbf{1}). Similarly to what we previously reported for the two high spin molecules (\textbf{2}) and (\textbf{3}), the results on (\textbf{1}) show how the presence of low-lying excited spin multiplets plays a crucial role in determining the relaxation of the magnetization.
In conventional systems, the effects of $S$-mixing can be effectively
modeled by the inclusion of fourth order zero-field splitting (ZFS)
parameters in the giant spin Hamiltonian \cite{Liviotti02}. Here we will show that
this Hamiltonian is completely inadequate for the description of the
spin state energy level structure.
\section{Experimental Methods}
Non-deuterated polycrystalline samples were synthesized according to
published methods \cite{Milios07Ueff53, Milios07Ueff86}.
FDMR spectra were recorded on a previously described quasi-optical
spectrometer,\cite{vanslageren03} which employs backward wave
oscillators as monochromatic coherent radiation sources and a Golay
cell as detector. Sample (\textbf{1}) proved to deteriorate rapidly
upon pressing and over time. Therefore, the FDMR measurements on
(\textbf{1}) were performed on loose microcrystalline material (348
mg) held between two quartz plates. In this unconventional
measurement, the detector signal was recorded as a function of
frequency at different temperatures. Extreme care had to be taken to
prevent the slightest positional changes of sample and equipment,
which change the standing wave pattern in the beam, precluding
normalization. The normalized transmission was calculated by
dividing the signal intensity at a given temperature by that at the
highest temperature (70 K). Samples (\textbf{2}) and (\textbf{3})
deteriorate to a lesser extent, and their FDMR spectra were recorded on
pellets made by pressing ca. 250 mg of the unground sample, together
with ca. 50 mg of n-eicosane (to improve pellet quality), into a
pellet. All spectra were simulated using previously described
software.\cite{kirchner07}
INS experiments were performed using the multi disc-chopper
time-of-flight spectrometers V3/NEAT at the Helmholtz-Zentrum Berlin
f\"{u}r Materialien und Energie (HZB, Berlin, Germany) and IN5 and
IN6 at the Institut Laue-Langevin (Grenoble, France). The samples
were inserted into hollow cylindrical aluminum containers and
mounted inside a standard orange cryostat to achieve a base
temperature of 2 K. A vanadium standard was used for the detector
normalization and empty can measurements were used for the
background subtraction.
\section{Theoretical Modeling and Experimental Results}
The experimental data have been modeled using both the giant spin
Hamiltonian (GSH), which considers the ZFS of the ground state
multiplet only, and the microscopic spin Hamiltonian, which treats
isotropic exchange and single-ion ZFS at the same level. Including
only ZFS terms, the giant spin Hamiltonian for a spin state $S$ reads:
\begin{eqnarray}
H_S = D_S\hat{S}_z^2 + E_S(\hat{S}_x^2-\hat{S}_y^2)+B_4^0\hat{O}_4^0,
\label{eq:GSH}
\end{eqnarray}
where $D_S$ and $E_S$ are the second order axial and transverse anisotropy parameters, respectively, and $B_4^0$ is the fourth order axial anisotropy parameter, with $\hat{O}_4^0$ the corresponding Stevens operator.
The microscopic spin Hamiltonian includes an isotropic exchange term for each pairwise interaction and single ion ZFS terms for each ion:
\begin{eqnarray}
H&=&\sum_{i < j}J_{i j}{\bf s}(i)\cdot {\bf s}(j) + \sum_{i} d_is_z^2(i) + \sum_{i}\biggl( 35 c_i s_z^4(i)\nonumber\\
&&+ c_i (25 -30 s(s+1)) s_z^2(i)\biggr) ,
\label{eq:H_micro}
\end{eqnarray}
where ${\bf s}(i)$ are spin operators of the $i^{\text{th}}$ Mn ion. The
first term is the isotropic exchange interaction, while the second
and third terms are the second and fourth order axial single-ion
zero-field splitting, respectively (the $z$ axis is assumed
perpendicular to the plane of the triangle).\\
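Note that the fourth order single-ion terms in Eq. \ref{eq:H_micro} (those proportional to $c_i$) are, up to a constant that only shifts each single-ion multiplet rigidly, the diagonal Stevens operator of Eq. \ref{eq:GSH} evaluated for the single-ion spin:
\begin{eqnarray}
35 s_z^4(i) + \bigl(25-30\,s(s+1)\bigr)s_z^2(i) = \hat{O}_4^0\bigl({\bf s}(i)\bigr) - 3s^2(s+1)^2 + 6s(s+1),
\end{eqnarray}
so that the coefficients $c_i$ play the role of single-ion $B_4^0$ parameters.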
The spin Hamiltonians have been numerically diagonalized by
exploiting the conservation of the $z$-component of the molecular
total spin and the exchange and anisotropy parameters have been
varied to obtain a best fit of the experimental data.
\subsection{Mn$_6$ (\textbf{1}) ($S=4$) $U_{\text{eff}}\approx$ 28 K}
Sample (\textbf{1}) was the first reported member of the Mn$_6$
family\cite{Milios04}. The building block of the molecule is the
[Mn$^{3+}_{3}$ O] triangular unit where Mn$_2$ pairs, bridged by the
NO oxime, form a -Mn-O-N-Mn- moiety (Fig. \ref{fig:Mn6_S4_poly}).
\begin{figure}[htb!]
\includegraphics[width=8cm,angle=0]{figure2}
\caption{(color online) Structure of the Mn$_{6}$ (\textbf{1})
molecular core. The Mn$^{3+}$ ions are located at the vertices of
two oxo-centered triangles. Ions Mn1, Mn2, Mn1' and Mn2' are in
octahedral geometry and ions Mn3 and Mn3' in square pyramidal
geometry, as highlighted in filled and striped orange (left figure).
Color scheme: Mn, orange, O, red, N, blue. H and C ions are omitted
for clarity.} \label{fig:Mn6_S4_poly}
\end{figure}
The Mn-O-N-Mn torsion angles within each triangle are
10.7$^{\circ}$, 16.48$^{\circ}$ and 22.8$^{\circ}$, giving rise to a
dominant antiferromagnetic exchange coupling \cite{MiliosDalton08}.
The two triangular units are coupled ferromagnetically, resulting in
a total spin ground state of $S=4$. Four out of the six metal ions
(Mn1, Mn2, Mn1', Mn2') are six-coordinate and in distorted
octahedral geometry (MnO$_5$N), with the Jahn-Teller axis almost
perpendicular to the plane of the triangle, while the two remaining
ions (Mn3, Mn3') are five-coordinate and in square pyramidal
geometry (see Fig. \ref{fig:Mn6_S4_poly}). The effective energy
barrier was determined from AC susceptibility measurements to be
$U_{\text{eff}}=28$ K, with $\tau_0=3.6\times10^{-8}$ s (Ref.
\onlinecite{Milios04}). From the effective energy barrier an
estimate of $D\approx -0.15$ meV was derived.\\\indent We performed
INS and FDMR measurements to characterize the ground multiplet and
to identify the position of the lowest-lying excited states from
which we determine the effective exchange interaction and the
zero-field splitting parameters.
\begin{figure}[htb!]
\includegraphics[width=8cm,angle=0]{figure3}
\caption{(Color online) (a) FDMR spectra of unpressed
polycrystalline powder of (\textbf{1}) recorded at various
temperatures. The intensity of the higher-frequency resonance line
decreases with temperature, while that of the lower-frequency lines
increases up to 30 K, beyond which it decreases again. Dotted lines
indicate resonance lines due to impurities. (b) Expanded view of the
low frequency part of the 30 K spectrum. (c) Experimental and fitted
spectrum at $T$ = 30 K using the GSH approximation. Note the logarithmic scale
in (a) and (c).}
\label{fig:FDMR_Mn6_S4}
\end{figure}
Figure \ref{fig:FDMR_Mn6_S4} shows the FDMR spectra recorded on 350
mg unpressed powder of (\textbf{1}). The most pronounced feature is
the resonance line at 1.803(7) meV, while much weaker features can
be observed at 1.328(1) meV and 1.07(1) meV. The intensity of the
higher-frequency line is strongest at the lowest temperature, proving
that the corresponding transition originates from the ground spin
multiplet. The lower-frequency lines have maximum intensity at
around 30 K. No further features were observed between 0.5 and 3
meV. The intense resonance line shows two shoulders to lower
energies, which are much stronger in pressed powder samples and also
increase with the age of the sample. This behavior is mirrored by
the development of a pronounced asymmetric lineshape in INS studies
on older samples. We attribute these shoulders to microcrystalline
particles that have suffered loss of lattice solvent, which leads to
small conformational changes that alter the ZFS and exchange
parameters. We discard the possibility of isomers with different
orientations of the Jahn-Teller distortion axes, as observed for
Mn$_{12}$\cite{Aubin01}, because we see no signature of different
isomers in the AC susceptibility. We also discount the possibility
of closely spaced transitions, as observed in the Fe$_{13}$ cluster
\cite{vanslageren06}, because the intratriangle exchange
interactions are not equal.\\\indent The higher frequency resonance
line is attributed to the transition from the
$\left|S=4,M_S=\pm4\right>$ to $\left|S=4,M_S=\pm3\right>$ states.
INS measurements proved necessary to unambiguously
identify the origin of the lower frequency transitions (see below). Assuming
that these transitions are transitions within the ground multiplet,
a fit of the giant spin Hamiltonian ZFS parameters (Eq.
\ref{eq:GSH}) to the observed resonance line energies yields
$D_{S=4}=-2.12\pm0.03$ cm$^{-1}$ ($-0.263 \pm0.004$ meV) and
$B_4^0=+(1.5\pm0.5)\times10^{-4}$ cm$^{-1}$ ($(1.9\pm0.6)
\times10^{-5}$ meV). This ground state $D_S$-value is much larger
than reported spectroscopically determined $D_S$-parameters for
other manganese SMMs, e.g. $D_{S=10} = -0.457$ cm$^{-1}$ for
Mn$_{12}$Ac \cite{Mirebeau99}, $D_{S=\frac{17}{2}} = -0.247$
cm$^{-1}$ for Mn$_{9}$ \cite{Piligkos05}, or $D_{S=6} = -1.16$
cm$^{-1}$ for Mn$_{3}$Zn$_{2}$ \cite{Feng08}. The main reason for
this large $D$-value is the fact that the projection coefficients
for the single ion ZFS onto the cluster ZFS are larger for spin
states with lower $S$ (Ref. \onlinecite{Bencini90}).
The determined $D_{S=4}=-2.12$ cm$^{-1}$ value for (\textbf{1}) is
in excellent agreement with that found from DFT calculations
($D=-2.15$ cm$^{-1}$) \cite{Ruiz08}. The expected energy barrier
toward relaxation of the magnetization calculated from the found
spin Hamiltonian parameters is $U_{\text{theor}}=|D|S^2 = 48.8$ K,
which is much larger than the experimentally found
$U_{\text{eff}}\approx$ 28 K, indicating that more complex
relaxation dynamics characterize this system, in analogy to what has
been found for the Mn$_6$ $S=12$ compounds \cite{Carretta08}. The
linewidth of the 1.33 meV line is slightly larger than that of the
1.80 meV line (48 $\mu$eV versus 41 $\mu$eV), which may indicate the
presence of more than one excitation. The simulated spectrum agrees
very well for the higher-frequency resonance line (note that the
intensity is not rescaled), while the lower-frequency line is much
weaker in the experiment than in the fit. This can be tentatively
attributed to the presence of low-lying excited states as observed
previously for Mn$_{12}$Ac \cite{vanslageren09}. To determine the
energy of excited spin states and identify the origin of the low
frequency resonances we resorted to INS, the technique of choice to
directly access inter-multiplet excitations.
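As a numerical cross-check (a sketch of our own, not part of the original fit procedure), the fitted GSH parameters of (\textbf{1}) reproduce both the main FDMR line and the quoted theoretical barrier; the diagonal matrix elements of $\hat{O}_4^0$ and the unit conversion factors are standard:

```python
# Consistency check for the S=4 GSH fit of compound (1): energy of the
# |4,±4> -> |4,±3> transition and the theoretical barrier |D|S^2.
D, B40 = -2.12, 1.5e-4        # fitted ZFS parameters, cm^-1
S = 4
CM1_TO_MEV, CM1_TO_K = 0.1239842, 1.438777

def O40(S, M):
    # Diagonal matrix element of the Stevens operator O_4^0
    x = S * (S + 1)
    return 35 * M**4 - (30 * x - 25) * M**2 + 3 * x**2 - 6 * x

def E(M):
    return D * M**2 + B40 * O40(S, M)

gap = (E(3) - E(4)) * CM1_TO_MEV     # meV; FDMR finds 1.803(7) meV
U_theor = abs(D) * S**2 * CM1_TO_K   # K;   the text quotes 48.8 K
print(round(gap, 3), round(U_theor, 1))
```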
\\\indent The INS experiments were performed on $\approx 4$ g of
non-deuterated polycrystalline powder of (\textbf{1}), which was
synthesized as described in Ref. \onlinecite{Milios04}. For our
measurements we used incident neutron wavelengths ranging from 3.0
\AA\ to 5.92 \AA\
with energy resolution between 50 $\mu$eV and 360 $\mu$eV.\\
Figure \ref{fig:Mn6_S4} (a) shows the INS spectra for an incident
wavelength of 4.6 \AA\ collected on NEAT (210 $\mu$eV full width at
half maximum (FWHM) resolution at the elastic peak). At $T=2$ K,
only the ground state is populated and therefore all excitations
arise from the ground state doublet $\left|S=4,M_S=\pm4\right>$. We
observed a strong transition at 1.77(2) meV, which we assign to the
intra-multiplet transition to the $\left|S=4,M_S=\pm3\right>$ level,
in agreement with FDMR results (see above). One further excitation
was observed at higher energy at 2.53(1) meV.\\\indent At $T=20$ K,
we detected additional excitations at 1.05(1) meV and 1.31(1) meV,
which must be due to transitions from excited states. All peaks in
the INS spectra show a very unusual asymmetric line-shape, which we
assign to lattice solvent loss (see above).
\begin{figure}[htb!]
\includegraphics[width=8cm,angle=0]{figure4}
\caption{(Color online) (a) INS spectra of (\textbf{1}) with an
incident wavelength of $\lambda=4.6$ \AA\ (NEAT) for $T=2$ K (blue
circles) and $T=20$ K (red squares). The continuous lines represent
the spectra calculated assuming a dimer model for the spin
Hamiltonian (Eq. \ref{eq:H_dimer}). (b) $Q$-dependence of the first intra- (green
circles) and inter-multiplet (black squares) transitions measured on
IN6 for $\lambda=4.1 $ \AA\, and $T$=2 K. Continuous lines represent
the calculated $Q$-dependence using the dimer spin Hamiltonian Eq. (\ref{eq:H_dimer})
(assuming a dimer distance of $R=5.17$ \AA, which corresponds to the distance between
the centre of the two triangles).} \label{fig:Mn6_S4}
\end{figure}
From the comparison of INS data with the FDMR results, we can deduce
that the excitation at 2.53 meV has a pure inter-multiplet origin,
being absent in the FDMR spectra (see Fig. \ref{fig:FDMR_Mn6_S4}).
This is also confirmed by the $Q$-dependence of the scattering
intensity of the observed excitations. Figure \ref{fig:Mn6_S4} (b)
shows this dependence for the $\left|S=4,
M_S=\pm4\right>\rightarrow\left|S=4,M_S=\pm3\right>$ and
$\left|S=4,M_S=\pm4\right>\rightarrow\left|S=3,M_S=\pm3\right>$
transitions. A characteristic oscillatory behavior has been observed
for the $Q$ dependence of the inter-multiplet INS transition (black
squares), which presents a maximum of intensity at a finite $Q$
value (that is related to the geometry of the molecule), and
decreasing intensity as $Q$ goes toward zero. This $Q$ dependence is
typical for magnetic clusters and reflects the multi-spin nature of
the spin states \cite{Furrer77, Waldmann03b}. By contrast, the intra-multiplet
excitation (green circles) has maximum intensity at $Q=0$, as expected for a transition with $\Delta S =0$,
and the intensity decreases with increasing $Q$, following the magnetic form
factor.
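This contrasting behavior follows the standard dimer interference factors $\left[1 \mp \sin(QR)/(QR)\right]$ for inter- and intra-multiplet transitions, respectively \cite{Furrer77}. A short sketch (our own illustration, neglecting the magnetic form factor and assuming the $R = 5.17$ \AA\ inter-triangle distance quoted in the caption of Fig. \ref{fig:Mn6_S4}):

```python
import numpy as np

# Dimer interference factors (magnetic form factor neglected): the
# inter-multiplet transition vanishes at Q = 0 and peaks at a finite Q
# set by the dimer distance R; the intra-multiplet one is maximal at Q = 0.
R = 5.17                               # Å, distance between triangle centers
Q = np.linspace(1e-4, 2.5, 25001)      # Å^-1
inter = 1 - np.sin(Q * R) / (Q * R)    # Delta S = ±1
intra = 1 + np.sin(Q * R) / (Q * R)    # Delta S = 0
Q_max = Q[np.argmax(inter)]            # first maximum: Q*R ≈ 4.493 (tan x = x)
print(round(Q_max, 3))
```

For $R = 5.17$ \AA\ the inter-multiplet intensity thus peaks near $Q \approx 0.87$ \AA$^{-1}$, consistent with the finite-$Q$ maximum seen in Fig. \ref{fig:Mn6_S4} (b).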
\\\indent The INS data directly reveal the presence of low lying
excited multiplets. Indeed, the difference in energy between the
lowest and the highest energy levels of the anisotropy split $S=4$
ground state is given, as a first approximation, by $|D|S^2 = 4.2$ meV.
The presence of an inter-multiplet excitation at only 2.53 meV
energy transfer, therefore below 4.2 meV, indicates that the first
excited $S$ multiplet lies within the energy interval of the
anisotropy split $S=4$ state. This suggests that the observed low
energy excitations may not be pure intra-multiplet transitions, but
instead originate both from the $S=4$ ground state and from the first
excited $S$ multiplet. Therefore the exact assignment of those
excitations requires a more accurate analysis beyond the GSH
approximation. Indeed, one fundamental requirement for the validity
of the GSH approximation, i.e. an isolated ground state well
separated from the excited states, is not fulfilled, and $S$ is
not a good quantum number to describe the ground state of the
molecule. To model the data it is thus necessary to use the full
microscopic spin Hamiltonian of Eq. \ref{eq:H_micro}. \\Given the
low symmetry of the triangular units in ({\bf 1}), the number of
free parameters in Eq. \ref{eq:H_micro} would be too large to obtain
unambiguous results, considering the low number of experimentally
observed excitations. Hence, we have chosen to describe the
low-energy physics of ({\bf 1}) by a simplified dimer model, an
approximation which has already previously been adopted for ({\bf
2}) and ({\bf 3}) (see Ref. \onlinecite{Bahr2008}). More
specifically, the two triangular units are described as two
ferromagnetically-coupled $S=2$ spins, which also experience an
effective uniaxial crystal-field potential:
\begin{eqnarray}
H_{\text{dimer}}=J ({\bf S}_A \cdot {\bf S}_B)+d
(S_{A,z}^2+S_{B,z}^2).
\label{eq:H_dimer}
\end{eqnarray}
The spin Hamiltonian has been diagonalized numerically and the $J$ and $d$
parameters have been varied to obtain a best fit of the experimental
data. The position of the peak at 1.77 meV does not depend on the
exchange interaction, therefore its position sets the value of the
axial anisotropy $d$ parameter. Given the $d$ parameter, a fit of
the position of the peak at 2.53 meV sets the isotropic exchange
parameter $J$.
\begin{figure}[htb!]
\includegraphics[width=8cm,angle=0]{figure5}
\caption{(Color online) Calculated energy level diagram for molecule (\textbf{1}).
The level scheme on the left side is calculated using the dimer
model spin Hamiltonian in Eq. \ref{eq:H_dimer}. The color maps $S_{\text{eff}}$, where
$\langle S^2\rangle := S_{\text{eff}}(S_{\text{eff}}+ 1)$. The black
dashed line corresponds to the observed value of $U_{\text{eff}}$ = 28 K. The black arrows
indicate transitions which contribute to the observed peak in the INS and FDMR spectra
at $E\approx 1.33$ meV (see text for details). The level diagram on the right has been calculated using the GSH
approximation (Eq. \ref{eq:GSH}).} \label{EnergyLevelS=4}
\end{figure}
The best fit of the experimental data is obtained with $J=-0.19$ meV
and $d=-0.59$ meV. The calculated energy level scheme is reported in
Fig. \ref{EnergyLevelS=4} (left), where the comparison with the
energy level diagram in the GSH approximation is also reported
(right). The value of $S_{\text{eff}}$ (where $\langle S^2\rangle :=
S_{\text{eff}}(S_{\text{eff}}+ 1)$) is labeled in color and shows
that the first $S=3$ excited state is completely nested within the
$S=4$ ground state. From Fig. \ref{EnergyLevelS=4} it is also clear
that the GSH model fails to account for the presence of low-energy
spin states outside the ground state $S=4$ multiplet.
Furthermore, the assignment of the observed excitations can be
misleading if only the GSH approximation is considered. For example,
using the GSH model, the observed peak at 1.33 meV can only be
attributed to a pure intra-multiplet excitation from
$\left|4,\pm3\right>$ to $\left|4,\pm2\right>$, whilst using Eq.
\ref{eq:H_dimer}, it is found to be a superposition of several
inter-multiplet and intra-multiplet transitions (indicated by arrows
in Fig. \ref{EnergyLevelS=4}). The GSH approximation fails to
describe the low energy level diagram of the molecule and
consequently fails to describe the relaxation of the magnetization.
Indeed, the presence of excited states nested within the ground
state multiplet has a significant effect on the relaxation dynamics,
as discussed in section \ref{Discussion}.
\subsection{Mn$_6$ (\textbf{2}) $U_{\text{eff}}\approx$ 53 K vs. Mn$_6$ (\textbf{3}) $U_{\text{eff}}\approx$ 86.4 K}
Introducing sterically more demanding oximate ligands results in a
twisting of the Mn-N-O-Mn torsion angle \cite{Milios07Ueff53}, which
causes switching of the intra-triangle exchange interactions from
antiferromagnetic to ferromagnetic, resulting in a large increase of
the spin of the ground state from $S=4$ to $S=12$. Here, we study two
((\textbf{2}) and (\textbf{3}), respectively) of the many published
derivatives of these $S=12$ Mn$_6$ clusters. Compound (\textbf{2}) has
undergone two structural changes compared to (\textbf{1}). First of
all, the distance between the phenolato oxygen and the two square
pyramidal Mn$^{3+}$ ions has decreased from $\approx 3.5$ \AA\ to
$\approx 2.5$ \AA, so that all Mn$^{3+}$ ions are now in
six-coordinate distorted octahedral geometry (see Fig.
\ref{fig:Mn6_S12_poly}). Secondly, the torsion angles of the
Mn$-$N$-$O$-$Mn moieties have increased strongly with respect to
those in (\textbf{1}), being 38.20$^\circ$, 39.9$^\circ$ and
31.26$^\circ$, compared to 10.7$^{\circ}$, 16.48$^{\circ}$ and 22.8$^{\circ}$ for
(\textbf{1}). In (\textbf{3}), the introduction of two methyl groups on
the carboxylate ligand has increased the non-planarity of the
Mn$-$N$-$O$-$Mn moieties further, giving torsion angles of
39.08$^\circ$, 43.07$^\circ$ and 34.88$^\circ$
\cite{Milios07Ueff86}. The result is that the weakest ferromagnetic
coupling is significantly stronger for (\textbf{3}) compared to (\textbf{2}).
Using a single $J$ model (i.e. assuming that the intra- and
inter-triangle exchange couplings are equal), Milios \textit{et al.}
fitted the DC susceptibility data for molecules
(\textbf{2}) and (\textbf{3}) and obtained: $J$(\textbf{2})$=-0.230$ meV and $J$(\textbf{3})$=-0.404$ meV, respectively \cite{Milios07magstruc,
InglisDalton09} (in our notation for the spin Hamiltonian).\\
\begin{figure}[htb!]
\includegraphics[width=8.5cm,angle=0]{figure6}
\caption{(Color online) Structure of the Mn$_{6}$ (\textbf{2})
molecular core. The Mn$^{3+}$ ions are located at the vertices of
two oxo-centered triangles. All Mn ions are in octahedral geometry
and the octahedra are highlighted in orange (left figure). Color
scheme: Mn, orange, O, red, N, light blue. H and C ions are omitted
for clarity. On the right, a schematic representation is given, together with the exchange coupling scheme
adopted for the spin Hamiltonian calculations.}
\label{fig:Mn6_S12_poly}
\end{figure}
In spite of the fact that both (\textbf{2}) and (\textbf{3}) have $S=12$ ground states and similar geometrical structures, radically different effective energy barriers towards the relaxation of the magnetization were observed, being $U_{\text{eff}}\approx 53$ K for (\textbf{2}) and $U_{\text{eff}}\approx 86.4$ K for (\textbf{3}). Here, we aim to understand this difference by an in-depth study of the energy level structure by means of FDMR and INS.\\
\begin{figure}[htb!]
\includegraphics[width=8cm]{figure7}
\caption{(Color online) (a) FDMR spectra recorded on a pressed
powder pellet of (\textbf{2}) at different temperatures. At the lowest
temperature, the highest-frequency line has the highest intensity. The
other transitions are indicated by vertical lines. (b) 10 K FDMR
spectrum (symbols) and best fit using the GSH (Eq. \ref{eq:GSH}), with $D =
-0.368$ cm$^{-1}$, and $B_4^0 = -4.0 \times 10^{-6}$ cm$^{-1}$.}
\label{FDMR53K}
\end{figure}
Figure \ref{FDMR53K} shows FDMR spectra recorded on a pressed
powder pellet of (\textbf{2}) at different temperatures. The baseline
shows a pronounced oscillation, which is due to Fabry-P\'erot-like
interference within the plane-parallel pellet \cite{kirchner07}. The
oscillation period and downward slope to higher frequencies are
determined by the thickness of the pellet and the complex dielectric
permittivity, which were determined to be $\epsilon'=3.01$ and
$\epsilon''=0.049$, values typical for molecular magnet samples. In
addition, five resonance lines are observed which we attribute to
resonance transitions within the $S=12$ multiplet. Thus, the highest
frequency line is assigned to the
$\left|12,\pm12\right>\rightarrow\left|12,\pm11\right>$
transition, and so on. The lines are much narrower (11 $\mu$eV FWHM)
than those observed for other SMMs, e.g. 23 $\mu$eV FWHM for
Mn$_{12}$Ac. The fit procedure showed that the lines are
inhomogeneously broadened and best described by Gaussian lineshapes. The small
linewidth indicates that distributions in ZFS parameters ($D$-strain) are
small in these samples. A fit of the GSH parameters (Eq. \ref{eq:GSH}) to the
observed resonance frequencies yields best-fit values of $D_{S=12}=-0.368$ cm$^{-1}$
($-0.0456$ meV) and $B_4^0=-4.0\times10^{-6}$ cm$^{-1}$
($-4.96\times10^{-7}$ meV). The theoretical
energy barrier calculated from these ZFS parameters is
$U_{\text{theor}}=76$ K, which is much larger than the
experimentally found $U_{\text{eff}}\approx$ 53 K, indicating that
the molecule can shortcut the barrier in some way. The ZFS values
are in themselves not remarkable, and close to those reported for
other manganese clusters with similar ground state spins, e.g.
$D_{S=10} = -0.457$ cm$^{-1}$ for Mn$_{12}$Ac \cite{Mirebeau99},
$D_{S=\frac{17}{2}} = -0.247$ cm$^{-1}$ for Mn$_{9}$
\cite{Piligkos05}. Interestingly, the fourth order axial ZFS is an
order of magnitude smaller than for Mn$_{12}$Ac. This type of ZFS is
currently accepted to parametrize the effects of mixing between spin
multiplets ($S$-mixing) \cite{CarrettaPRL04}, which would suggest
that $S$-mixing is limited, contrary to expectation.
However, the fit does not reproduce the resonance line positions and
intensities satisfactorily, which is in contrast to the situation
for other molecular nanomagnets that feature strong $S$-mixing, e.g.
Ni$_4$ \cite{Kirchner08, Sieber05}. Therefore, the investigated Mn$_6$ SMM represents an
example where the giant spin model cannot satisfactorily describe
FDMR spectra, and it will be shown below that this is due to a
complete breakdown of the giant spin model. It will also be shown
that the resonance line at 0.80 meV is due to a transition within
the $S = 11$ excited multiplet. However, removal of this resonance
line does not result in a better fit. The calculated line
intensities are much larger than those experimentally found,
especially for the highest-frequency lines. This we attribute to a
combination of parasitic radiation in the cryostat, and the presence
of many more states than taken into account by the giant spin model,
which decreases the relative population for any given state.
\\\indent
Similar FDMR results were obtained for (\textbf{3}) (Fig.
\ref{FDMR86K}), and six sharp resonance lines were observed.
A fit of the GSH parameters to the observed resonance line positions
yields the following values: $D = -0.362\pm0.001$ cm$^{-1}$ ($-0.0449$
meV), and $B_4^0 = -(6.0\pm0.4) \times 10^{-6}$ cm$^{-1}$
($-7.4\times 10^{-7}$ meV). The simulated spectrum matches the
experiment much more closely for (\textbf{3}), especially for the
high-frequency lines. Interestingly, the theoretical energy barrier
($U_{\text{theor}}=75$ K) is virtually the same as for (\textbf{2}),
but \emph{smaller} than the experimentally found energy barrier
($U_{\text{eff}}=86$ K). This unprecedented finding means that the
magnetization relaxation must involve states that do not belong to
the ground spin multiplet \cite{Carretta08}. Again, we turn to INS
to determine the positions of the excited spin multiplets, which
will allow full characterization of the system.
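The contrast between the two barrier estimates can be checked directly; a sketch (our own arithmetic) converting the fitted $D$ values to $U_{\text{theor}} = |D|S^2$ in kelvin, using the standard conversion 1 cm$^{-1} \approx 1.4388$ K:

```python
# U_theor = |D| S^2 for the S=12 compounds, from the fitted GSH D values
CM1_TO_K = 1.438777                    # 1 cm^-1 in kelvin (hc/k_B)
S = 12
D = {"2": -0.368, "3": -0.362}         # cm^-1, fitted above
U_eff = {"2": 53.0, "3": 86.4}         # K, measured
U_theor = {k: abs(v) * S**2 * CM1_TO_K for k, v in D.items()}
for k in ("2", "3"):
    print(k, round(U_theor[k]), U_eff[k])
```

The calculation gives $U_{\text{theor}} \approx 76$ K for (\textbf{2}) and $\approx 75$ K for (\textbf{3}), so the measured $U_{\text{eff}} = 86.4$ K of (\textbf{3}) indeed exceeds its ground-multiplet barrier.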
\begin{figure}[htb!]
\includegraphics[width=8cm]{figure8}
\caption{(Color online) (a) FDMR spectra of unpressed
polycrystalline powder of (\textbf{3}) recorded at various
temperatures. (b) Experimental and fitted spectrum at $T = 10$
K.} \label{FDMR86K}
\end{figure}
\begin{table}
\squeezetable \caption{INS and FDMR peak positions of the observed
excitations for (\textbf{2}) and (\textbf{3}) (in meV).}
\label{table:list}
\begin{ruledtabular}
\begin{tabular}{lll|lll}
(\textbf{2}) & INS & FDMR & (\textbf{3}) & INS & FDMR \\
\hline
& 4.9(2) & n.o.\footnote{not observed} & & 5.7(2) & n.o. \\
& 4.5(1) & n.o. & & 5.3(2) & n.o. \\
& 4.2(2) & n.o. & & 4.2(2) & n.o. \\
& 2.3(2) & n.o. & & 1.87(3) & n.o. \\
& 1.41(2) & n.o. & & 1.11(1) & 1.107(7) \\
& 1.24(7) & n.o. & & 0.99(1) & 0.993(6) \\
& 1.13(2) & 1.127(5) & & 0.88(2) & 0.883(6) \\
& 0.98(2) & 0.975(5) & & 0.77(1) & 0.772(7) \\
& 0.88(3) & 0.873(6) & & 0.66(1) & 0.657(7) \\
& 0.80(2) & 0.803(7) & & 0.55(2) & 0.551(10) \\
& 0.70(2) & 0.687(5) & & 0.48(1) & n.o. \\
& 0.57(4) & n.o. & & 0.45(1) & n.o. \\
& & & & 0.34(1) & n.o. \\
& & & & 0.31(1) & n.o. \\
& & & & 0.25(1) & n.o. \\
& & & & 0.21(3) & n.o. \\
\end{tabular}
\end{ruledtabular}
\end{table}
Figures (\ref{fig:Mn6_S12_L67_both}a) and (\ref{fig:Mn6_S12_L67_both}b) show
the high resolution INS experimental data for compounds (\textbf{2})
and (\textbf{3}), respectively, collected on IN5 with an incident
wavelength of 6.7 {\AA} (53 $\mu$eV FWHM resolution at the elastic peak).
\begin{figure}[htb!]
\includegraphics[width=6cm]{figure9}
\caption{(Color online) INS spectra collected on IN5 with incident
wavelength of 6.7 {\AA} at $T=2$ K (blue circles) and $T=12$ K (red
squares). (a) for sample (\textbf{2}) and (b) for sample
(\textbf{3}). The spectra calculated with the parameters listed
in Table \ref{table:couplings} are shown as continuous
lines.}\label{fig:Mn6_S12_L67_both}
\end{figure}
At the lowest temperature $T = 2$ K only the ground state is
populated and, due to the INS selection rules, only transitions with
$\Delta S = 0,\pm 1$ and $\Delta M = 0,\pm 1$ can be detected. The
lowest energy excitation can thus be easily attributed to the
intra-multiplet transition from the $\vert S=12,M_S=\pm12\rangle$
ground state to the $\vert S=12,M_S=\pm 11\rangle$ first excited
level. The position of this intra-multiplet excitation is found to
be at about the same energy in both compounds, i.e. $\sim 1.1$ meV,
indicating only small differences in the anisotropy of the two systems. In contrast, the first
inter-multiplet $S=12\rightarrow S=11$ excitation at about 1.41 meV
in compound (\textbf{2}) is not visible in the spectra at 6.7 {\AA}
of compound (\textbf{3}). This can be understood by looking at the data
at higher energy transfer, collected with an incident wavelength of
3.4 {\AA} (see Figs. (\ref{fig:Mn6_S12_L34_both}a) and
(\ref{fig:Mn6_S12_L34_both}b)). Indeed the first inter-multiplet
excitation is considerably raised in energy in compound (\textbf{3})
with respect to compound (\textbf{2}), from 1.41 meV to 1.87 meV.
This gives direct evidence of an increase of the isotropic
exchange parameters, while the anisotropic parameters are
approximately the same for both molecules. The INS spectra collected
at a base temperature of 2 K enabled us to directly access the
whole set of intra-multiplet and inter-multiplet transitions allowed
by the INS selection rules in both compounds. By raising the
temperature to 16 K the intensity of the magnetic peaks decreases,
thus confirming their magnetic origin. A total of five
inter-multiplet excitations for compound (\textbf{2}) toward
different $S=11$ excited states can be detected. For compound
(\textbf{3}) four inter-multiplet excitations have been observed.
All the magnetic excitations are marked in Fig.
\ref{fig:Mn6_S12_L34_both} with the corresponding transition
energies.
\begin{figure}[htb!]
\includegraphics[width=6cm]{figure10}
\caption{(Color online) INS spectra collected on IN5 with incident
wavelength of 3.4 {\AA} at $T=2$ K (blue circles) and 16 K (red squares). (a) for
sample (\textbf{2}) and (b) for sample (\textbf{3}). The observed transitions are labeled with the corresponding transition energies in meV.}\label{fig:Mn6_S12_L34_both}
\end{figure}
To complete our investigations of the transitions within the $S=12$
ground-state multiplet, we additionally performed high resolution
measurements of molecule (\textbf{3}) using IN5 with an incident
wavelength of 10.5 \AA\ (FWHM = 13
$\mu$eV at the elastic line) (see Fig.
\ref{fig:Mn6_S12_U86_85A_105A}). These measurements allowed us to
observe transitions originating from the top of the anisotropy
barrier.
\begin{figure}[htb!]
\includegraphics[width=8cm]{figure11}
\caption{(Color online) High resolution INS spectra of molecule
(\textbf{3}) collected on IN5 with incident wavelength 10.5 {\AA}
at 24 K. The
energy-gain spectrum is displayed. Continuous lines are the
calculations using the spin Hamiltonian of Eq. (\ref{eq:H_micro}).
}\label{fig:Mn6_S12_U86_85A_105A}
\end{figure}
A further confirmation of the correct assignment of the observed
excitations is provided by the study of their $Q$-dependence. As
revealed by Fig. \ref{fig:Mn6_U86_L5_qdep}, the intra-multiplet
transition ($\Delta S=0$) shows a distinctive $Q$-dependence, with a
pronounced intensity at low $Q$ that dies out quite rapidly following
the Mn$^{3+}$ form factor. In contrast, inter-multiplet excitations
show a flatter behavior, with considerably less intensity at
low $Q$.
\begin{figure}[htb!]
\includegraphics[width=6cm]{figure12}
\caption{(Color online) (a) Energy-wavevector colormap of sample
(\textbf{3}) collected on IN5 with incident wavelength of 5.0 \AA.
(b) $Q$-dependence of two transitions from the ground state. The
black squares correspond to the $\vert S=12,M_S=\pm12\rangle \rightarrow
\vert S=12,M_S=\pm11\rangle$ intra-multiplet transition and the green circles
display the $\vert S=12,M_S=\pm12\rangle \rightarrow
\vert S=11,M_S=\pm11\rangle$ inter-multiplet transition.}\label{fig:Mn6_U86_L5_qdep}
\end{figure}
The assignment of the observed excitations to intra-multiplet or
inter-multiplet transitions has been confirmed by comparison with
FDMR measurements performed on both compounds (see Fig.
\ref{FDMR53K} and Fig. \ref{FDMR86K}). The positions of the
intra-multiplet INS transitions are consistent with the FDMR
measurements performed on the same sample (see Table
\ref{table:list}). Due to the different selection rules of INS and
FDMR, we can conclude that all the peaks observed at T = 2 K above
1.2 meV energy transfer correspond to inter-multiplet transitions,
since they are absent in the FDMR spectra.\\The straightforward
assignment of the excitations observed at base temperature allows us to
draw some conclusions about the experimentally deduced energy-level
diagram. For both compounds, a rough estimate of the splitting of
the spin ground multiplet gives $|D|S^2 \simeq 6.5$ meV. This value is
comparable to the energy interval explored by the high energy
transfer INS data (Fig. \ref{fig:Mn6_S12_L34_both}), where most of
the inter-multiplet $S=12 \rightarrow S=11$ excitations have been
observed. This experimental observation leads to the conclusion that
also in (\textbf{2}) and (\textbf{3}) several excited states lie
within the anisotropy split ground state, with the consequent
breakdown of the GSH approximation. Due to the inadequacy of the GSH
for (\textbf{2}) and (\textbf{3}), the microscopic spin Hamiltonian
(Eq. \ref{eq:H_micro}) was used to model the data and extract the
exchange constants and anisotropies. The minimal set of free
parameters is given by three different exchange constants
$J_{11^\prime}\equiv J_1$, $J_{12}=J_{23}=J_{13}=J_{1^\prime
2^\prime}=J_{2^\prime 3^\prime}=J_{1^\prime 3^\prime}\equiv J_2$,
and $J_{13^\prime}=J_{1^\prime 3}\equiv J_3$ (Fig.
\ref{fig:Mn6_S12_poly}) and two sets of crystal-field (CF)
parameters $d_1=d_{1^\prime}$, $c_1=c_{1^\prime}$, and
$d_2=d_{2^\prime}$, $c_2=c_{2^\prime}$. Indeed, the ligand cages of
sites 1 and 3 are rather similar and we assumed the corresponding CF
parameters to be equal. Since the experimental information is
insufficient to fix the two small $c$ parameters independently, we
have chosen to constrain the ratio
$c_1/c_2$ to the ratio $d_1/d_2$.\\
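As a quick numerical consistency check on this estimate (taking $|D| = 0.0449$ meV from the FDMR fit of (\textbf{3}) and using the conversion $1~\text{meV} \approx 11.6$ K),
\begin{align*}
|D|S^2 = 0.0449~\text{meV} \times 12^2 \approx 6.5~\text{meV} \approx 75~\text{K},
\end{align*}
which coincides with the theoretical barrier $U_{\text{theor}} = 75$ K quoted earlier.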
The isotropic exchange and crystal field parameters deduced by the
simultaneous best fit of the experimental data are reported in Table
\ref{table:couplings}. Figure \ref{EnergyLevel} shows the calculated energy level diagram using the best fit procedure for Eq. \ref{eq:H_micro} (left) and the GSH model (right) for (\textbf{2}) and (\textbf{3}).
\begin{table}
\caption{Isotropic exchange and CF parameters for Eq. \ref{eq:H_micro} (in meV)
deduced by fitting INS and FDMR data for the two Mn$_6$ S=12
compounds.} \label{table:couplings}
\begin{ruledtabular}
\begin{tabular}{lcrrrrrr}
& $U_{\text{eff}}$ (K) & $J_1$ & $J_2$ & $J_3$ & $d_1$ & $d_2$ & $c_1$ \\
\hline
(\textbf{2}) & 53 &-0.61 & -0.31 & 0.07 & -0.23 & -0.97 & -0.0008 \\
(\textbf{3}) & 86.4 & -0.84 & -0.59 & 0.01 & -0.20 & -0.76 & -0.001 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[htb!]
\includegraphics[width=7cm,angle=0]{figure13}
\caption{(Color online) Calculated energy level diagram for molecule (\textbf{2})
(top) and molecule (\textbf{3}) (bottom). The level scheme on the
left side is calculated using the microscopic spin Hamiltonian in Eq. \ref{eq:H_micro},
while the level diagram on the right has been calculated in
the GSH approximation (Eq. \ref{eq:GSH}). The dashed lines correspond to the observed value of $U_{\text{eff}}$.}
\label{EnergyLevel}
\end{figure}
\section{Discussion}\label{Discussion}
The experimental data collected on the three variants of Mn$_6$
clusters provide direct evidence that a general feature for this
class of compounds is the nesting of excited multiplets within the
ground state multiplet. This is an unavoidable effect when the
isotropic exchange parameters have the same order of magnitude as
the single-ion anisotropy parameters, as is the case for
Mn$_6$. The nesting of spin states can be clarified by observing the
energy level diagrams for the three molecules presented in Fig.
\ref{EnergyLevelS=4} and Fig. \ref{EnergyLevel}. The diagram on the
left shows the energy levels calculated by a diagonalization of the
full spin Hamiltonian, while the energy level scheme on the right
hand side has been calculated considering the GSH approximation. It
is clear that the GSH does not account for any of the spin states
with $S$ different from $S_{\text{GS}}$ that lie within the split GS
energy level diagram. The above states represent a shortcut for the
relaxation of the magnetization and can promote resonant inter-multiplet
tunneling processes that manifest as additional steps in the magnetization
curve absent in the GS model\cite{Ramsey08, Bahr2008, Yang07, Soler03, Carretta09poly}.
The overall result is a lowering of the
effective anisotropy barrier with respect to an ideal molecule where
the spin ground state is well separated from the excited ones,
as was first demonstrated in Ref. \onlinecite{Carretta08}.\\
We have calculated the relaxation dynamics of molecule (\textbf{1})
following the same procedure adopted in Ref. \onlinecite{Carretta08}
for molecules (\textbf{2}) and (\textbf{3}). We applied a master-equation
formalism in which the magnetoelastic (ME) coupling is
modeled as in Ref. \onlinecite{CarrettaPRL06}, with the quadrupole
moments associated to each triangular unit isotropically coupled to
Debye acoustic phonons.
The transition rates $W_{st}$ between pairs of eigenlevels of the
dimer spin Hamiltonian (Eq. \ref{eq:H_dimer}) are given by:
\begin{equation}
W_{st} =
\gamma^2\Delta_{st}^3n(\Delta_{st})\!\!\!\!\sum_{A,B,q_1,q_2}\!\!\!\!\langle
s|O_{q_1,q_2}({\bf S}_A)|t\rangle \overline{\langle
s|O_{q_1,q_2}({\bf S}_B)|t\rangle}
\end{equation}
where $O_{q_1,q_2}({\bf S}_{A,B})$ are the components of the
Cartesian quadrupole tensor operator, $n(\Delta_{st}) = (e^{\hbar
\Delta_{st}/k_BT}-1)^{-1}$ and $\Delta_{st}=(E_s-E_t)/\hbar$. We
found that the resulting relaxation spectrum at low $T$ is
characterized by a single dominating relaxation time whose
$T$-dependence displays a nearly Arrhenius behavior $\tau = \tau_0
\exp(U/k_BT)$, as previously observed for molecules (\textbf{2}) and
(\textbf{3}) \cite{Carretta08}. The relaxation dynamics of $M$ is
indeed characterized by two well-separated time scales: fast processes
that determine the equilibrium within each well of the double-well
potential and a slow inter-well process that at low temperature
determines the equilibration of the populations of the two wells, and
thus sets the time scale for the reversal of the magnetization. As
can be observed from the energy level diagram of Fig.
\ref{EnergyLevelS=4} there are several levels that can be involved
in the inter-well relaxation process, giving rise to an overall
effective barrier $U_{\text{eff}}$ different from the simple energy
difference between the $M=0$ and $M=\pm 4$ states. The corresponding
calculated energy barrier $U_{\text{calc}}$= 32 K reproduces quite
well the experimental value, $U_{\text{eff}}$= 28 K. The lowering of
the barrier is therefore attributed to the presence of these extra
paths. Indeed, the calculations for artificially isolated $S = 4$
yield $U= 47$ K.\\
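The nearly Arrhenius behavior invoked above can be checked with a short numerical sketch. The relaxation times below are illustrative stand-ins (generated with $U = 28$ K and an assumed $\tau_0 = 10^{-8}$ s, not measured data); the sketch recovers $U_{\text{eff}}$ from a linear fit of $\ln\tau$ versus $1/T$:

```python
import numpy as np

def arrhenius_barrier(T, tau):
    """Fit ln(tau) = ln(tau0) + U/T (T and U both in kelvin, so k_B
    drops out); return the barrier U [K] and the attempt time tau0 [s]."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(tau)), 1)
    return slope, np.exp(intercept)

# Illustrative relaxation-time data obeying tau = tau0 * exp(U / T)
T = np.linspace(2.0, 6.0, 10)          # temperatures [K]
tau = 1e-8 * np.exp(28.0 / T)          # relaxation times [s]

U_fit, tau0_fit = arrhenius_barrier(T, tau)
print(U_fit, tau0_fit)
```

A linear fit of $\ln\tau$ versus $1/T$ is the standard way an effective barrier is extracted from ac-susceptibility or magnetization-decay data.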
It is also worth commenting on the $D$ value for
the ground state of each molecule. Whilst no large difference
between the local $d$ of the low (\textbf{1}) and high ((\textbf{2})
and (\textbf{3})) spin molecules is expected, the overall $D$ value,
as determined using the GSH approximation, is much higher for the
$S=4$ molecule ($D\approx-0.263$ meV) than for the high spin
molecules ($D\approx-0.045$ meV). However, this observation should
not be misinterpreted. The difference arises from the fact that $D$
depends on the projection of the single-ion anisotropies of the
individual magnetic ions onto the total spin quantum number $S$. In
the case where the $S$-mixing is negligible and the spin ground state
is a good quantum number, the $D$ parameter for a specific state $S$
can be written as a linear combination of the single-ion anisotropy
tensors (Ref. \onlinecite{Bencini90}):
\begin{equation}
{\bf D}=\sum_{i=1}^N a_i {\bf d}_i
\label{eq:D_sum}
\end{equation}
The projection coefficients $a_i$ of the single-ion anisotropy onto
spin states of different $S$ can differ significantly, giving rise to
considerably different $D$ values.
The ligand field study of various members of the Mn$_6$
family (Ref. \onlinecite{Piligkos08}) provides experimental evidence of
this. Recent theoretical
studies proposed that the intrinsic relationship between $S$ and $D$ causes a scaling of $U$ that goes approximately with $S^0$
(see Refs. \onlinecite{Waldmann07} and \onlinecite{Ruiz08}), raising the question of whether it is worth trying to increase
the value of the spin ground state to obtain a larger energy barrier. Indeed,
higher spin ground states would correspond to lower $D$
parameters, neutralizing the overall effect on the height of the
anisotropy barrier. In recent electron paramagnetic resonance studies, the authors instead proposed that the barrier scales roughly as $S^1$ \cite{Datta09}. In the specific case of Mn$_6$, because of the very large $S$-mixing, the
projection onto a well-defined spin state is no longer justified and it is not possible to associate the barrier $U$ with a definite $S$ value.
However, if we consider the effective anisotropy barrier
for artificially isolated $S\!=\!4$ and $S\!=\!12$ states (i.e. $U=47$ K for (\textbf{1}) and $U = 105$ K for (\textbf{2})),
we can confirm that the barrier does not go quadratically with $S$,
as one could naively deduce from the equation $U=|D|S^2$. Indeed,
$U_{S=12}/U_{S=4} = 105/47 \approx 2.2 \ll 12^2/4^2 = 9$. This confirms what has
been pointed out in Ref. \onlinecite{Waldmann07}, i.e. even though
the highest anisotropy barrier is obtained with the molecule with
the highest spin ground state, the increase of the total spin is not
as efficient as one would expect and alternative routes, like
increasing the single ion anisotropy, should be considered.
\section{Conclusion}
We have performed INS and FDMR measurements on three variants of
Mn$_6$ molecular nanomagnets, which have the same magnetic core and
differ by slight changes in the organic ligands. INS measurements
have provided unambiguous evidence of low-lying excited
states in all three molecules. The combination of the two
techniques enabled us to determine the spin Hamiltonian
parameters used for the analysis of the magnetic properties. The
nesting of excited states within the ground state multiplet strongly
influences the relaxation behavior and plays a crucial role in
lowering the effective energy barrier. The calculations of the
relaxation dynamics give results that are consistent with the
experimental values and show that the highest barrier is obtained
for ideal molecules with an isolated ground state. This observation
might be valid for a wider class of SMMs.
\begin{acknowledgments}
This work was partly supported by EU-RTN QUEMOLNA Contract No.
MRTN-CT-2003-504880, the German Science Foundation DFG, and EPSRC. This work utilized facilities
supported in part by the National Science Foundation under Agreement
No. DMR-0454672.
\end{acknowledgments}
\section{Introduction} \label{sec:intro} Many popular linear classifiers, such as logistic regression, boosting, or SVM, are trained by optimizing a margin-based risk function. For standard linear classifiers $\hat Y=\text{sign} \sum\theta_jX_j$ with $Y\in\{-1,+1\}$, and $X,\theta\in\R^d$ the margin is defined as the product
\begin{align}
Y f_{\theta}(X)\quad \text{where} \quad f_{\theta}(X) \defeq \sum_{j=1}^d \theta_jX_j. \label{eq:defF}
\end{align}
Training such classifiers involves choosing a particular value of $\theta$. This is done by minimizing the risk or expected loss
\begin{align} \label{eq:defR} R(\theta) &= \E_{p(X,Y)} L(Y,f_{\theta}(X))
\end{align}
with the three most popular loss functions
\begin{align} \label{eq:loss1}
L_1(Y,f_{\theta}(X)) &= \exp\left(-Y f_{\theta}(X)\right)\\
L_2(Y,f_{\theta}(X)) &= \log \left( 1+\exp\left(-Y f_{\theta}(X) \right)\right) \label{eq:loss2}\\
L_3(Y,f_{\theta}(X)) &= (1-Yf_{\theta}(X))_+. \label{eq:loss3}
\end{align}
being exponential loss $L_1$ (boosting), logloss $L_2$ (logistic regression) and hinge loss $L_3$ (SVM) respectively ($A_+$ above corresponds to $A$ if $A>0$ and 0 otherwise).
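For concreteness, the three losses are straightforward to transcribe (a direct, minimal rendering of $L_1$--$L_3$ above):

```python
import numpy as np

def exp_loss(y, f):                      # L1: exponential loss (boosting)
    return np.exp(-y * f)

def log_loss(y, f):                      # L2: logloss (logistic regression)
    return np.log1p(np.exp(-y * f))

def hinge_loss(y, f):                    # L3: hinge loss (SVM)
    return np.maximum(0.0, 1.0 - y * f)

# All three are decreasing functions of the margin y * f(x):
margin = np.array([-1.0, 0.0, 1.0, 2.0])
print(exp_loss(1, margin), log_loss(1, margin), hinge_loss(1, margin))
```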
Since the risk $R(\theta)$ depends on the unknown distribution $p$, it is usually replaced during training with its empirical counterpart \begin{align}
R_n(\theta) &= \frac{1}{n} \sum_{i=1}^n L(Y^{(i)},f_{\theta}(X^{(i)})) \label{eq:empiricalLoss}
\end{align}
based on a labeled training set
\begin{align} \label{eq:labeledData} (X^{(1)},Y^{(1)}),\ldots,(X^{(n)},Y^{(n)})\iid p
\end{align}
leading to the following estimator
\begin{align} \nonumber
\hat\theta_n &= \argmin_{\theta} R_n(\theta).
\end{align}
Note, however, that evaluating and minimizing $R_n$ requires labeled data \eqref{eq:labeledData}. While suitable in some cases, there are certainly situations in which labeled data is difficult or impossible to obtain.
In this paper we construct an estimator for $R(\theta)$ using only unlabeled data, that is using
\begin{align} \label{eq:unlabeledData} X^{(1)},\ldots,X^{(n)} \iid p
\end{align}
instead of \eqref{eq:labeledData}. Our estimator is based on the observation that when the data is high dimensional ($d\to\infty$) the quantities
\begin{align} \label{eq:condInProd}
f_{\theta}(X)|\{Y=y\},\quad y\in\{-1,+1\}
\end{align}
are often normally distributed. This phenomenon is supported by empirical evidence and may also be derived using non-iid central limit theorems. We then observe that the limit distributions of \eqref{eq:condInProd} may be estimated from unlabeled data \eqref{eq:unlabeledData} and that these distributions may be used to measure margin-based losses such as \eqref{eq:loss1}-\eqref{eq:loss3}.
We examine two novel unsupervised applications: (i) estimating margin-based losses in transfer learning and (ii) training margin-based classifiers. We investigate these applications theoretically and also provide empirical results on synthetic and real-world data. Our empirical evaluation shows the effectiveness of the proposed framework in risk estimation and classifier training without any labeled data.
The consequences of estimating $R(\theta)$ without labels are indeed profound. Label scarcity is a well known problem which has lead to the emergence of semisupervised learning: learning using a few labeled examples and many unlabeled ones. The techniques we develop lead to a new paradigm that goes beyond semisupervised learning in requiring no labels whatsoever.
\section{Unsupervised Risk Estimation} \label{sec:riskEstimation}
In this section we describe in detail the proposed estimation framework and discuss its theoretical properties. Specifically, we construct an estimator for $R(\theta)$ \eqref{eq:defR} using the unlabeled data \eqref{eq:unlabeledData} which we denote $\hat R_n(\theta\,;X^{(1)},\ldots,X^{(n)})$ or simply $\hat R_n(\theta)$ (to distinguish it from $R_n$ in \eqref{eq:empiricalLoss}).
Our estimation is based on two assumptions. The first assumption is that the label marginals $p(Y)$ are known and that $p(Y=1)\neq p(Y=-1)$. While this assumption may seem restrictive at first, there are many cases where it holds. Examples include medical diagnosis ($p(Y)$ is the well known marginal disease frequency), handwriting recognition or OCR ($p(Y)$ is the easily computable marginal frequencies of different letters in the English language), life expectancy prediction ($p(Y)$ is based on marginal life expectancy tables). In these and other examples $p(Y)$ is known with great accuracy even if labeled data is unavailable. Furthermore, this assumption may be replaced with a weaker form in which we know the ordering of the marginal distributions e.g., $p(Y=1)>p(Y=-1)$, but without knowing the specific values of the marginal distributions.
The second assumption is that the quantity $f_{\theta}(X)|Y$ follows a normal distribution. As $f_{\theta}(X)|Y$ is a linear combination of random variables, it is frequently normal when $X$ is high dimensional. From a theoretical perspective this assumption is motivated by the central limit theorem (CLT). The classical CLT states that $f_{\theta}(X)=\sum_{i=1}^d\theta_i X_i|Y$ is approximately normal for large $d$ if the data components $X_1,\ldots,X_d$ are iid given $Y$. A more general CLT states that $f_{\theta}(X)|Y$ is asymptotically normal if $X_1,\ldots,X_d|Y$ are independent (but not necessarily identically distributed). Even more general CLTs state that $f_{\theta}(X)|Y$ is asymptotically normal if $X_1,\ldots,X_d|Y$ are not independent but their dependency is limited in some way. We examine this issue in Section~\ref{sec:CLT} and also show that the normality assumption holds empirically for several standard datasets.
To derive the estimator we rewrite \eqref{eq:defR} by taking expectation with respect to $Y$ and $\alpha=f_{\theta}(X)$
\begin{align} \label{eq:risk2}
R(\theta) = \E_{p(f_{\theta}(X),Y)} L(Y,f_{\theta}(X))
= \sum_{y\in\{-1,+1\}} p(y) \int_{\R} p(f_{\theta}(X)=\alpha|y) L(y,\alpha) \, d\alpha.
\end{align}
Equation~\eqref{eq:risk2} involves three terms $L(y,\alpha)$, $p(y)$ and $p(f_{\theta}(X)=\alpha|y)$. The loss function $L$ is known and poses no difficulty. The second term $p(y)$ is assumed to be known (see discussion above). The third term is assumed to be normal
$f_{\theta}(X)\,|\,\{Y=y\} = \sum_i \theta_i X_i \,| \, \{Y=y\}\sim N(\mu_y,\sigma_y)$ with parameters $\mu_y,\sigma_y$, $y\in\{-1,1\}$ that are estimated by maximizing the likelihood of a Gaussian mixture model. These estimated parameters are used to construct the plug-in estimator $\hat R_n(\theta)$ as follows.
\newcommand*\widefbox[1]{\fbox{\hspace{1em}#1\hspace{1em}}}
\begin{empheq}[box=\widefbox]{align}
\label{eq:ll1}
\ell_{n}(\mu,\sigma)
&= \sum_{i=1}^n \log \sum_{y^{(i)}\in\{-1,+1\}} p(y^{(i)}) p_{\mu_y,\sigma_y}(f_{\theta}(X^{(i)})|y^{(i)}). \\ \label{eq:ll}
(\hat\mu^{(n)},\hat\sigma^{(n)})&=\argmax_{\mu,\sigma} \ell_{n}(\mu,\sigma)\\
\hat R_{n}(\theta) &= \sum_{y\in\{-1,+1\}} p(y) \int_{\R} p_{\hat\mu^{(n)}_y,\hat\sigma^{(n)}_y}(f_{\theta}(X)=\alpha|y) L(y,\alpha) \, d\alpha.
\label{eq:pluginEstimate}
\end{empheq}
We make the following observations.
\begin{enumerate}
\item Although we do not denote it explicitly, $\mu_y$ and $\sigma_y$ are functions of $\theta$.
\item The loglikelihood \eqref{eq:ll1} does not use labeled data (it marginalizes over the label $y^{(i)}$).
\item The parameters of the loglikelihood \eqref{eq:ll1} are $\mu=(\mu_{1},\mu_{-1})$ and $\sigma=(\sigma_{1},\sigma_{-1})$ rather than the parameter $\theta$ associated with the margin-based classifier. We treat the latter as a fixed constant at this point.
\item The estimation problem \eqref{eq:ll} is equivalent to the problem of maximum likelihood for means and variances of a Gaussian mixture model where the label marginals are assumed to be known. It is well known that in this case (barring the symmetric case of a uniform $p(y)$) the MLE converges to the true parameter values.
\item The estimator $\hat R_n$ \eqref{eq:pluginEstimate} is consistent in the limit of infinite unlabeled data
\[P\left(\lim_{n\to\infty} \hat R_{n}(\theta)=R(\theta)\right)=1.\]
\item The two risk estimators $\hat R_n(\theta)$ \eqref{eq:pluginEstimate} and $R_n(\theta)$ \eqref{eq:empiricalLoss} approximate the expected loss $R(\theta)$. The latter uses labeled samples and is typically more accurate than the former for a fixed $n$.
\item Under suitable conditions $\argmin_{\theta} \hat R_n(\theta)$ converges to the expected risk minimizer
\[ P\left(\,\lim_{n\to\infty} \,\argmin_{\theta\in\Theta} \,\hat R_{n}(\theta)\,=\,\argmin_{\theta\in\Theta}\, R(\theta)\,\right)\,=\,1.\]
This far reaching conclusion implies that in cases where $\argmin_{\theta} R(\theta)$ is the Bayes classifier (as is the case with exponential loss, log loss, and hinge loss) we can retrieve the optimal classifier without a single labeled data point.
\end{enumerate}
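The whole pipeline in \eqref{eq:ll1}--\eqref{eq:pluginEstimate} can be sketched in a few lines: an EM iteration that maximizes the unlabeled likelihood with the mixing weights held at the known $p(Y)$, followed by numerical integration of the plug-in risk. All numerical values below (class-conditional means $\pm 2$, $p(Y{=}1)=0.75$, logloss) are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def em_fixed_priors(alpha, p_pos, n_iter=300):
    """MLE of (mu_y, sigma_y) for a 1-D two-Gaussian mixture whose mixing
    weights are held at the known label marginals p(Y=1)=p_pos.
    Component 0 plays the role of Y=-1, component 1 of Y=+1."""
    w = np.array([1.0 - p_pos, p_pos])
    mu = np.array([alpha.min(), alpha.max()], dtype=float)
    sig = np.full(2, alpha.std())
    for _ in range(n_iter):
        dens = np.stack([w[k] * norm.pdf(alpha, mu[k], sig[k]) for k in (0, 1)])
        resp = dens / dens.sum(axis=0)            # E-step: responsibilities
        nk = resp.sum(axis=1)                     # M-step (weights stay fixed)
        mu = (resp * alpha).sum(axis=1) / nk
        sig = np.sqrt((resp * (alpha - mu[:, None]) ** 2).sum(axis=1) / nk)
    return mu, sig

def plugin_risk(mu, sig, p_pos, loss):
    """Plug-in risk: sum_y p(y) * integral p(alpha|y) L(y, alpha) d alpha."""
    grid = np.linspace(-25.0, 25.0, 5001)
    dg = grid[1] - grid[0]
    return sum(py * np.sum(norm.pdf(grid, mu[k], sig[k]) * loss(y, grid)) * dg
               for y, k, py in ((-1, 0, 1.0 - p_pos), (+1, 1, p_pos)))

# Illustrative synthetic setup: f_theta(X)|Y ~ N(2y, 1), p(Y=1)=0.75 known.
rng = np.random.default_rng(0)
p_pos = 0.75
y = rng.choice([-1, 1], size=5000, p=[1.0 - p_pos, p_pos])
alpha = rng.normal(2.0 * y, 1.0)

logloss = lambda yy, a: np.log1p(np.exp(-yy * a))
mu, sig = em_fixed_priors(alpha, p_pos)     # note: the labels y are never used
risk_hat = plugin_risk(mu, sig, p_pos, logloss)
print(mu, sig, risk_hat)
```

Because $p(Y{=}1)\neq p(Y{=}{-}1)$, the two components are identifiable and the EM step recovers the class-conditional means and variances from unlabeled projections alone.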
\subsection{Asymptotic Normality of $f_{\theta}(X)|Y$} \label{sec:CLT}
The quantity $f_{\theta}(X)|Y$ is essentially a sum of $d$ random variables which for large $d$ is likely to be normally distributed. One way to verify this is empirically, as we show in Figures~\ref{fig:CLT}-\ref{fig:CLT2} which contrast the histogram with a fitted normal pdf for text, digit images, and face images data. For these datasets the dimensionality $d$ is sufficiently high to provide a nearly normal $f_{\theta}(X)|Y$. For example, in the case of text documents ($X_i$ is the relative number of times word $i$ appeared in the document) $d$ corresponds to the vocabulary size which is typically a large number in the range $10^3-10^5$. Similarly, in the case of image classification ($X_i$ denotes the brightness of the $i$-pixel) the dimensionality is on the order of $10^2-10^4$.
Figures~\ref{fig:CLT}-\ref{fig:CLT2} show that in these cases of text and image data $f_{\theta}(X)|Y$ is approximately normal for both randomly drawn $\theta$ vectors (Figure~\ref{fig:CLT}) and for $\theta$ representing estimated classifiers (Figure~\ref{fig:CLT2}). The single caveat in this case is that normality may not hold when $\theta$ is sparse, as may happen for example for $l_1$ regularized models (last row of Figure~\ref{fig:CLT2}).
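The empirical check shown in the figures is easy to reproduce on synthetic data. Below we draw weakly dependent, markedly non-normal coordinates (an exponential moving-average construction, our own illustrative stand-in for text or image features), project them onto a random $\theta$, and test the projection for normality:

```python
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(1)
d, n = 1000, 2000
# Non-normal marginals (exponential) with short-range dependence:
# X_j = z_j + 0.5 z_{j+1} + 0.25 z_{j+2}, an m-dependent construction.
z = rng.exponential(1.0, size=(n, d + 2))
X = z[:, :d] + 0.5 * z[:, 1:d + 1] + 0.25 * z[:, 2:d + 2]

theta = rng.uniform(-0.5, 0.5, size=d)   # random theta, as in the histograms
f = X @ theta                            # f_theta(X) for one fixed class

stat, pval = normaltest(f)               # a small p-value would reject normality
print(pval)
```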
\begin{figure}
\centering
\begin{tabular}{ccc}
{\scriptsize RCV1 text data} & &
{\scriptsize face images} \\
\includegraphics[width=0.31\textwidth]{figure0001}& \includegraphics[width=0.31\textwidth]{figure0002}& \includegraphics[width=0.31\textwidth]{figure0003} \\
\includegraphics[width=0.31\textwidth]{figure0004}& \includegraphics[width=0.31\textwidth]{figure0005}& \includegraphics[width=0.31\textwidth]{figure0006} \\
\includegraphics[width=0.31\textwidth]{figure0007}& \includegraphics[width=0.31\textwidth]{figure0008}& \includegraphics[width=0.31\textwidth]{figure0009} \\
\includegraphics[width=0.31\textwidth]{figure0010}& \includegraphics[width=0.31\textwidth]{figure0011}& \includegraphics[width=0.31\textwidth]{figure0012} \\
&{\scriptsize MNIST handwritten digit images}&
\end{tabular}
\caption{\small Centered histograms of $f_{\theta}(X)|\{Y=1\}$ overlayed with the pdf of a fitted Gaussian for randomly drawn $\theta$ vectors ($\theta_i\sim U(-1/2,1/2)$). The columns represent datasets (RCV1 text data \citep{lewis04rcv}, MNIST digit images, and face images \citep{Pham-etal-2002}) and the rows represent multiple random draws. For uniformity we subtracted the empirical mean and divided by the empirical standard deviation. The twelve panels show that even in moderate dimensionality (RCV1: 1000 top words, MNIST digits: 784 pixels, face images: 400 pixels) the assumption that $f_{\theta}(X)|Y$ is normal holds often for randomly drawn $\theta$.}\label{fig:CLT}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{ccccc}&
{\scriptsize RCV1 text data} &
&
{\scriptsize face images} & \\ &
\includegraphics[width=0.27\textwidth]{figure0013}&\hspace{-0.2in}
\includegraphics[width=0.27\textwidth]{figure0014}&\hspace{-0.2in}
\includegraphics[width=0.27\textwidth]{figure0015}& \begin{sideways} \scriptsize \hspace{0.4in} Fisher's LDA\end{sideways}\\
\begin{sideways}\scriptsize \hspace{0.4in}log. regression\end{sideways}&
\includegraphics[width=0.27\textwidth]{figure0016}&\hspace{-0.2in}
\includegraphics[width=0.27\textwidth]{figure0017}&\hspace{-0.2in}
\includegraphics[width=0.27\textwidth]{figure0018}&\\
&
\includegraphics[width=0.27\textwidth]{figure0019}&\hspace{-0.2in}
\includegraphics[width=0.27\textwidth]{figure0020}&\hspace{-0.2in}
\includegraphics[width=0.27\textwidth]{figure0021}& \begin{sideways}\scriptsize log. regression ($l_2$ regularized) \end{sideways}\\
\begin{sideways}\scriptsize log. regression ($l_1$ regularized) \end{sideways}&
\includegraphics[width=0.27\textwidth]{figure0022}&\hspace{-0.2in}
\includegraphics[width=0.27\textwidth]{figure0023}&\hspace{-0.2in}
\includegraphics[width=0.27\textwidth]{figure0024}\\
&&{\scriptsize MNIST handwritten digit images}&&
\end{tabular}
\caption{\small Centered histograms of $f_{\theta}(X)|\{Y=1\}$ overlayed with the pdf of a fitted Gaussian for multiple $\theta$ vectors (four rows: Fisher's LDA, logistic regression, $l_2$ regularized logistic regression, and $l_1$ regularized logistic regression-all regularization parameters were selected by cross validation) and datasets (columns: RCV1 text data \citep{lewis04rcv}, MNIST digit images, and face images \citep{Pham-etal-2002}). For uniformity we subtracted the empirical mean and divided by the empirical standard deviation. The twelve panels show that even in moderate dimensionality (RCV1: 1000 top words, MNIST digits: 784 pixels, face images: 400 pixels) the assumption that $f_{\theta}(X)|Y$ is normal holds well for fitted $\theta$ values (except perhaps for $l_1$ regularization in the last row which promotes sparse $\theta$).}\label{fig:CLT2}
\end{figure}
From a theoretical standpoint normality may be argued using a central limit theorem. We examine below several progressively more general central limit theorems and discuss whether these theorems are likely to hold in practice for high dimensional data. The original central limit theorem states that $\sum_{i=1}^d Z_i$ is approximately normal for large $d$ if $Z_i$ are iid.
\begin{prop}[de-Moivre]
If $Z_i, i\in \mathbb{N}$ are iid with expectation $\mu$ and variance $\sigma^2$ and $\bar{Z}_d=d^{-1}\sum_{i=1}^d Z_i$ then we have the following convergence in distribution
\[ \sqrt{d}(\bar{Z}_d -\mu)/\sigma \tood N(0,1) \quad \text{as } d\to\infty.\]
\end{prop}
As a result, the quantity $\sum_{i=1}^d Z_i$ (which is a linear transformation of $\sqrt{d}(\bar{Z}_d -\mu)/\sigma$) is approximately normal for large $d$. This relatively restricted theorem is unlikely to hold in most practical cases as the data dimensions are often not iid.
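A minimal numerical illustration of the statement (with Bernoulli summands, chosen purely for convenience): the fraction of standardized sample means falling below $1$ should approach $\Phi(1)\approx 0.8413$ as $d$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.5, 0.5                     # Bernoulli(1/2): mean 1/2, std 1/2

probs = {}
for d in (5, 50, 500):
    Z = rng.integers(0, 2, size=(20000, d))          # 20000 iid samples of Z_1..Z_d
    T = np.sqrt(d) * (Z.mean(axis=1) - mu) / sigma   # sqrt(d)(Zbar_d - mu)/sigma
    probs[d] = float(np.mean(T <= 1.0))              # estimate of P(T <= 1)
print(probs)                                         # should approach Phi(1) ~ 0.8413
```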
A more general CLT does not require the summands $Z_i$ to be identically distributed.
\begin{prop}[Lindeberg]
For $Z_i, i\in \mathbb{N}$ independent with expectation $\mu_i$ and variance $\sigma^2_i$, and denoting $s_d^2=\sum_{i=1}^d \sigma_i^2$, we have the following convergence in distribution as $d\to\infty$
\[ s_d^{-1} \sum_{i=1}^d (Z_i - \mu_i) \tood N(0,1) \] if the following condition holds for every $\epsilon>0$
\begin{align}\lim_{d\to\infty} s_d^{-2} \sum_{i=1}^d \E (Z_i-\mu_i)^2 1_{\{|Z_i-\mu_i|>\epsilon s_d\}}=0.
\label{eq:LindbergCondition}\end{align}
\end{prop}
This CLT is more general as it only requires that the data dimensions be independent. The condition \eqref{eq:LindbergCondition} is relatively mild and specifies that the contribution of each $Z_i$ to the variance $s_d$ should not dominate it. Nevertheless, the Lindeberg CLT is still inapplicable for dependent data dimensions.
More general CLTs replace the condition that $Z_i, i\in\mathbb{N}$ be independent with the notion of $m(k)$-dependence.
\begin{defn}
The random variables $Z_i,i\in\mathbb{N}$ are said to be $m(k)$-dependent if whenever $s-r>m(k)$ the two sets $\{Z_1,\ldots,Z_r\}$, $\{Z_s,\ldots,Z_k\}$ are independent.
\end{defn}
An early CLT for $m(k)$-dependent RVs appears in \citep{Hoeffding1948}. Below is a slightly weakened version of the CLT in \citep{Berk1973}.
\begin{prop}[Berk] \label{prop:Berk} For each $k\in\mathbb{N}$ let $d(k)$ and $m(k)$ be increasing sequences and suppose that $Z_1^{(k)},\ldots,Z_{d(k)}^{(k)}$ is an $m(k)$-dependent sequence of random variables. If
\begin{enumerate}
\item $\E|Z_i^{(k)}|^2 \leq M$ for all $i$ and $k$
\item $\Var (Z_{i+1}^{(k)} +\ldots+ Z_j^{(k)})\leq (j-i)K$ for all $i,j,k$
\item $\lim_{k\to\infty} \Var (Z_{1}^{(k)} +\ldots+ Z_{d(k)}^{(k)})/d(k)$ exists and is non-zero
\item $\lim_{k\to\infty} m^2(k)/d(k)=0$
\end{enumerate}
then $\frac{\sum_{i=1}^{d(k)} Z_i^{(k)}}{\sqrt{d(k)}}$ is asymptotically normal as $k\to\infty$.
\end{prop}
Proposition~\ref{prop:Berk} states that under mild conditions the sum of $m(k)$-dependent RVs is asymptotically normal. If $m(k)$ is a constant, i.e., $m(k)=m$, $m(k)$-dependence implies that each $Z_i$ may depend only on its neighboring dimensions; in other words, dimensions that are far removed from each other are independent. The full power of Proposition~\ref{prop:Berk} is invoked when $m(k)$ grows with $k$, relaxing the independence restriction as the dimensionality grows. Intuitively, the dependency of the summands is not fixed to a certain order, but it cannot grow too rapidly.
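A moving average of iid variables is a standard concrete example of an $m$-dependent sequence; the following sketch (all constants are arbitrary illustrative choices) checks that its standardized sums look normal, as Proposition~\ref{prop:Berk} predicts:

```python
import math
import random

def standardized_mdep_sums(d, m, n_samples, seed=1):
    """Z_i = X_i + ... + X_{i+m} for iid Uniform(-1,1) X_j is an m-dependent
    sequence: Z_i and Z_j are independent whenever |i - j| > m."""
    rng = random.Random(seed)
    sums = []
    for _ in range(n_samples):
        x = [rng.uniform(-1.0, 1.0) for _ in range(d + m)]
        sums.append(sum(sum(x[i:i + m + 1]) for i in range(d)))
    # Standardize empirically and return the normalized sums.
    mean = sum(sums) / n_samples
    std = math.sqrt(sum((s - mean) ** 2 for s in sums) / n_samples)
    return [(s - mean) / std for s in sums]

vals = standardized_mdep_sums(d=400, m=5, n_samples=2000)
frac_within_one = sum(abs(v) <= 1 for v in vals) / len(vals)  # ~0.68 if normal
```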
A more realistic variation of $m(k)$-dependence, in which the dependency structure of the variables is specified by a dependency graph (rather than by proximity of dimensions), is advocated in a number of papers, including the following recent result by \citet{rinott1994}.
\begin{defn} \label{def:gd}
A graph $\mathcal{G} = \left(\mathcal{V},\mathcal{E}\right)$ indexing random variables is called a dependency graph if for any pair of disjoint subsets of $\mathcal{V}$, $A_1$ and $A_2$ such that no edge in $\mathcal{E}$ has one endpoint in $A_1$ and the other in $A_2$, we have independence between $\{Z_i: i \in A_1\}$ and $\{Z_i: i \in A_2\}$. The degree $d(v)$ of a vertex is the number of edges connected to it and the maximal degree is $\max_{v \in \mathcal{V}} d(v)$.
\end{defn}
\begin{prop}[Rinott] \label{lab:prop1}
Let $Z_1, \ldots , Z_n$ be random variables having a dependency graph whose maximal degree is strictly less than $D$, satisfying $|Z_i-\E Z_i| \leq B$ a.s., $\forall i$, $\E (\sum_{i=1}^{n} Z_i) = \lambda$ and $\Var (\sum_{i=1}^{n} Z_i) = \sigma^2 >0$. Then for any $w \in \mathbb{R}$,
\[
\left|P\left(\frac{\sum_{i=1}^{n}Z_i-\lambda}{\sigma} \leq w \right) -\Phi (w) \right | \leq \frac{1}{\sigma} \left(\frac{1}{\sqrt{2\pi}}DB + 16 \left(\frac{n}{\sigma^2}\right)^{1/2} D^{3/2}B^2 +10 \left(\frac{n}{\sigma^2}\right)D^2B^3 \right)
\]
\end{prop}
The above theorem states a stronger result than convergence in distribution to a Gaussian: it provides a uniform rate of convergence of the CDF. Such results are known in the literature as Berry--Esseen bounds. When $D$ and $B$ are bounded and $\Var (\sum_{i=1}^{n} Z_i)=O(n)$ it yields a CLT with an optimal convergence rate of $n^{-1/2}$.
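The $n^{-1/2}$ rate can be verified by direct substitution into the bound. In the sketch below the constants $D$, $B$ and the choice $\sigma^2=n$ (modeling $\Var(\sum Z_i)=O(n)$) are arbitrary illustrative values:

```python
import math

def rinott_rhs(n, D, B, sigma2):
    """Right-hand side of Rinott's bound as stated in the text."""
    sigma = math.sqrt(sigma2)
    return (1.0 / sigma) * (D * B / math.sqrt(2 * math.pi)
                            + 16.0 * math.sqrt(n / sigma2) * D ** 1.5 * B ** 2
                            + 10.0 * (n / sigma2) * D ** 2 * B ** 3)

# With D, B fixed and sigma^2 = n, quadrupling n halves the bound: rate n^{-1/2}.
b1 = rinott_rhs(n=10000, D=3, B=1, sigma2=10000)
b2 = rinott_rhs(n=40000, D=3, B=1, sigma2=40000)
```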
The question of whether the above CLTs apply in practice is a delicate one. For text one can argue that the appearance of a word depends on some words but is independent of others. Similarly, for images it is plausible that the brightness of a pixel is independent of pixels that are spatially far removed from it. In practice one needs to verify the normality assumption empirically, which is simple to do by comparing the empirical histogram of $f_{\theta}(X)$ with that of a fitted mixture of Gaussians. As the figures above indicate, this holds for text and image data for most values of $\theta$, assuming it is not sparse.
\subsection{Unsupervised Consistency of $\hat R_n(\theta)$} \label{sec:consistency}
We start by proving identifiability of the maximum likelihood estimator (MLE) for a mixture of two Gaussians with known ordering of mixture proportions. Invoking classical consistency results in conjunction with identifiability, we show consistency of the MLE for $(\mu,\sigma)$ parameterizing the distribution of $f_{\theta}(X)|Y$. Consistency of the estimator $\hat R_n(\theta)$ then follows.
\begin{defn}
A parametric family $\{p_{\alpha}:\alpha\in A\}$ is identifiable when $p_{\alpha}(x)= p_{\alpha'}(x), \forall x$ implies $\alpha=\alpha'$.
\end{defn}
\begin{prop} \label{prop:identifiability} Assuming known label marginals with $p(Y=1)\neq p(Y=-1)$ the Gaussian mixture family
\begin{align} \label{eq:GaussianMix}
p_{\mu,\sigma}(x) = p(y=1) N(x\,; \mu_1,\sigma_1^2) + p(y=-1) N(x\,; \mu_{-1},\sigma_{-1}^2)
\end{align}
is identifiable.
\end{prop}
\begin{proof}
It can be shown that the family of Gaussian mixture models with no a priori information about the label marginals is identifiable up to a permutation of the labels $y$ \citep{Teicher1963}. We proceed by assuming with no loss of generality that $p(y=1)>p(y=-1)$. The alternative case ${p(y=1)}<p(y=-1)$ may be handled in the same manner. Using the result of \citep{Teicher1963} we have that if $p_{\mu,\sigma}(x)=p_{\mu',\sigma'}(x)$ for all $x$, then $(p(y),\mu,\sigma)=(p(y),\mu',\sigma')$ up to a permutation of the labels. Since permuting the labels violates our assumption ${p(y=1)}> {p(y=-1)}$ we establish $(\mu,\sigma)=(\mu',\sigma')$, proving identifiability.
\end{proof}
The assumption that $p(y)$ is known is not entirely crucial. It may be relaxed by assuming that it is known whether $p(Y=1)>p(Y=-1)$ or ${p(Y=1)}<{p(Y=-1)}$. Proving Proposition~\ref{prop:identifiability} under this much weaker assumption follows identical lines.
\begin{prop}\label{prop:GaussianMixConsistency}
Under the assumptions of Proposition~\ref{prop:identifiability} the MLE estimates for $(\mu,\sigma)=(\mu_1,\mu_{-1},\sigma_1,\sigma_{-1})$
\begin{align}
(\hat\mu^{(n)},\hat\sigma^{(n)})&=\argmax_{\mu,\sigma} \ell_{n}(\mu,\sigma)\\
\ell_{n}(\mu,\sigma)
&= \sum_{i=1}^n \log \sum_{y^{(i)}\in\{-1,+1\}} p(y^{(i)}) p_{\mu_{y^{(i)}},\sigma_{y^{(i)}}}(f_{\theta}(X^{(i)})|y^{(i)}).
\end{align}
are consistent, i.e.,
$(\hat\mu^{(n)}_1,\hat\mu^{(n)}_{-1},\hat\sigma^{(n)}_1,\hat\sigma^{(n)}_{-1})$ converge as $n\to\infty$ to the true parameter values with probability 1.
\end{prop}
\begin{proof}
Denoting $p_{\eta}(z)=\sum_y p(y)p_{\mu_y,\sigma_y}(z|y)$ with $\eta=(\mu,\sigma)$, we note that $p_{\eta}$ is identifiable in $\eta$ (see Proposition~\ref{prop:identifiability}) and that the available samples $z^{(i)}=f_{\theta}(X^{(i)})$ are iid samples from $p_{\eta}(z)$. We therefore invoke standard statistical theory, which indicates that the MLE for an identifiable parametric model is strongly consistent \citep{Ferguson1996}.
\end{proof}
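Proposition~\ref{prop:GaussianMixConsistency} concerns the MLE itself; in practice this MLE is typically computed with EM, holding the known mixing proportions fixed. Below is a minimal self-contained sketch on synthetic data (the mixture parameters, sample size, and iteration count are arbitrary illustrative choices, not the authors' implementation):

```python
import math
import random

def em_fixed_weights(z, p1, iters=50):
    """EM for a two-component Gaussian mixture in which the mixing weights
    (p1, 1 - p1) are KNOWN; only (mu_1, mu_-1, sigma_1, sigma_-1) are fitted."""
    mu, sig, w = [min(z), max(z)], [1.0, 1.0], [p1, 1.0 - p1]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in z:
            d = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * sig[k] ** 2))
                 / (math.sqrt(2 * math.pi) * sig[k]) for k in range(2)]
            t = d[0] + d[1]
            resp.append((d[0] / t, d[1] / t))
        # M-step: update means/variances only; weights stay at the known values.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, z)) / nk
            sig[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, z)) / nk)
    return mu, sig

rng = random.Random(0)
# p(y=1) = 0.7 component at mean -2, p(y=-1) = 0.3 component at mean +2.
z = [rng.gauss(-2, 1) if rng.random() < 0.7 else rng.gauss(2, 1)
     for _ in range(4000)]
mu, sig = em_fixed_weights(z, p1=0.7)
```

Only the means and variances are updated in the M-step; the known $p(y)$ enters the responsibilities but is never re-estimated, matching the known-label-marginals assumption.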
\begin{prop} \label{prop:riskConsistency} Under the assumptions of Proposition~\ref{prop:identifiability} and assuming the loss $L$ is given by one of \eqref{eq:loss1}-\eqref{eq:loss3} with a normal $f_{\theta}(X)|Y\sim N(\mu_y,\sigma_y^2)$, the plug-in risk estimate
\begin{align} \label{eq:plugin}
\hat R_{n}(\theta) = \sum_{y\in\{-1,+1\}} p(y) & \int_{\R} p_{\hat\mu^{(n)}_y,\hat\sigma^{(n)}_y}(f_{\theta}(X)=\alpha|y) L(y,\alpha) \, d\alpha
\end{align}
is consistent, i.e., for all $\theta$, \[ P\left(\lim_n \hat R_n(\theta) = R(\theta)\right) = 1.\]
\end{prop}
\begin{proof}
The plug-in risk estimate $\hat R_n$ in \eqref{eq:plugin} is a continuous function (when $L$ is given by \eqref{eq:loss1}, \eqref{eq:loss2} or \eqref{eq:loss3}) of $\hat \mu_1^{(n)},\hat \mu_{-1}^{(n)},\hat \sigma_1^{(n)},\hat \sigma_{-1}^{(n)}$ (note that $\mu_y$ and $\sigma_y$ are functions of $\theta$), which we denote $\hat R_n (\theta) = h(\hat \mu_1^{(n)},\hat \mu_{-1}^{(n)},\hat \sigma_1^{(n)},\hat \sigma_{-1}^{(n)})$.
Using Proposition~\ref{prop:GaussianMixConsistency} we have that
\begin{align*}
\lim_{n\to\infty} (\hat \mu_1^{(n)},\hat \mu_{-1}^{(n)},\hat \sigma_1^{(n)},\hat \sigma_{-1}^{(n)}) = (\mu_1^{\text{true}}, \mu_{-1}^{\text{true}}, \sigma_1^{\text{true}}, \sigma_{-1}^{\text{true}})
\end{align*}
with probability 1. Since continuous functions preserve limits we have \[ \lim_{n\to\infty} h(\hat \mu_1^{(n)},\hat \mu_{-1}^{(n)},\hat \sigma_1^{(n)},\hat \sigma_{-1}^{(n)})= h(\mu^{\text{true}}_1,\mu^{\text{true}}_{-1},\sigma^{\text{true}}_1,\sigma^{\text{true}}_{-1})\] with probability 1 which implies convergence $\lim_{n\to\infty}\hat R_n(\theta) = R(\theta)$ with probability 1.
\end{proof}
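Given fitted parameters, the plug-in risk \eqref{eq:plugin} is a one-dimensional integral that can be evaluated with any quadrature rule. A minimal sketch for the log-loss case (the trapezoidal rule, integration range, and parameter values are arbitrary illustrative choices):

```python
import math

def normal_pdf(x, m, s):
    return math.exp(-(x - m) ** 2 / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)

def plugin_risk(p1, mu, sig, loss, lo=-20.0, hi=20.0, steps=4000):
    """hat R_n = sum_y p(y) * int p(f_theta(X)=alpha | y) L(y, alpha) dalpha,
    with class-conditional densities N(mu[y], sig[y]^2), by trapezoidal rule."""
    h = (hi - lo) / steps
    total = 0.0
    for y, py in ((1, p1), (-1, 1.0 - p1)):
        acc = 0.0
        for i in range(steps + 1):
            a = lo + i * h
            w = 0.5 if i in (0, steps) else 1.0
            acc += w * normal_pdf(a, mu[y], sig[y]) * loss(y, a)
        total += py * acc * h
    return total

logloss = lambda y, a: math.log1p(math.exp(-y * a))
mu_sep = {1: 3.0, -1: -3.0}   # well-separated class-conditional margins
mu_weak = {1: 1.0, -1: -1.0}  # weaker separation
sig = {1: 1.0, -1: 1.0}
r_sep = plugin_risk(0.5, mu_sep, sig, logloss)
r_weak = plugin_risk(0.5, mu_weak, sig, logloss)
```

As expected, the better-separated parameters yield a strictly smaller estimated risk.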
\subsection{Unsupervised Consistency of $\argmin \hat R_n(\theta)$}
The convergence above, $\hat R_n(\theta)\to R(\theta)$, is pointwise in $\theta$. If the convergence is uniform over $\theta\in\Theta$, we obtain consistency of $\argmin_{\theta} \hat R_n(\theta)$. This surprising result indicates that in some cases it is possible to retrieve the expected risk minimizer (and therefore the Bayes classifier in the case of the hinge loss, log-loss and exp-loss) using only unlabeled data. We show this uniform convergence using a modification of Wald's classical MLE consistency result \citep{Ferguson1996}.
Denoting
\begin{align*}
p_{\eta}(z)&=\sum_{y\in\{-1,+1\}} p(y) p_{\mu_y,\sigma_y}(f(X)=z|y), \quad \eta=(\mu_1,\mu_{-1},\sigma_1,\sigma_{-1})
\end{align*}
we first show that the MLE converges to the true parameter value $\hat \eta_n\to\eta_0$ uniformly. Uniform convergence of the risk estimator $\hat R_{n}(\theta)$ follows. Since changing $\theta\in\Theta$ results in a different $\eta\in E$ we can state the uniform convergence in $\theta\in\Theta$ or alternatively in $\eta\in E$.
\begin{prop} \label{prop:unifConvMLE}
Let $\theta$ take values in $\Theta$ for which $\eta\in E$ for some compact set $E$. Then assuming the conditions in Proposition~\ref{prop:riskConsistency} the convergence of the MLE to the true value $\hat \eta_n\to \eta_0$ is uniform in $\eta_0\in E$ (or alternatively $\theta\in\Theta$).
\end{prop}
\begin{proof}
We start by introducing the following notation
\begin{align*}
U(z,\eta,\eta_0)&=\log p_{\eta}(z)-\log p_{\eta_0}(z)\\
\alpha(\eta,\eta_0)&=E_{p_{\eta_0}} U(z,\eta,\eta_0)=-D(p_{\eta_0},p_{\eta}) \leq 0
\end{align*}
with the latter quantity being non-positive and 0 iff $\eta=\eta_0$ (due to Shannon's inequality and identifiability of $p_{\eta}$).
For $\rho>0$ we define the compact set $S_{\eta_0,\rho}=\{\eta\in E: \|\eta-\eta_0\|\geq \rho\}$. Since $\alpha(\eta,\eta_0)$ is continuous it achieves its maximum (with respect to $\eta$) on $S_{\eta_0,\rho}$ denoted by $\delta_{\rho}(\eta_0)=\max_{\eta\in S_{\eta_0,\rho}} \alpha(\eta,\eta_0)<0$ which is negative since $\alpha(\eta,\eta_0)=0$ iff $\eta=\eta_0$. Furthermore, note that $\delta_{\rho}(\eta_0)$ is itself continuous in $\eta_0\in E$ and since $E$ is compact it achieves its maximum
\[\delta=\max_{\eta_0\in E} \delta_{\rho}(\eta_0)=\max_{\eta_0\in E}\,\,\,\max_{\eta\in S_{\eta_0,\rho}}\,\,\, \alpha(\eta,\eta_0)<0\]
which is negative for the same reason.
Invoking the uniform strong law of large numbers \citep{Ferguson1996} we have $n^{-1}\sum_{i=1}^n U(z^{(i)},\eta,\eta_0)\to \alpha(\eta,\eta_0)$ uniformly over $(\eta,\eta_0)\in E^2$. Consequentially, there exists $N$ such that for $n>N$ (with probability 1)
\[\sup_{\eta_0\in E}\,\,\,\sup_{\eta\in S_{\eta_0,\rho}} \,\,\, \frac{1}{n} \sum_{i=1}^n U(z^{(i)},\eta,\eta_0)<\delta/2<0.\]
But since $n^{-1}\sum_{i=1}^n U(z^{(i)},\eta,\eta_0)\to 0$ for $\eta=\eta_0$ it follows that the MLE
\[\hat \eta_n = \argmax_{\eta\in E} \,\,\frac{1}{n}\sum_{i=1}^n U(z^{(i)},\eta,\eta_0)\]
is outside $S_{\eta_0,\rho}$ (for $n>N$ uniformly in $\eta_0\in E$) which implies $\|\hat\eta_n-\eta_0\|\leq \rho$. Since $\rho>0$ is arbitrary and $N$ does not depend on $\eta_0$ we have $\hat\eta_n\to\eta_0$ uniformly over $\eta_0\in E$.
\end{proof}
\begin{prop} \label{prop:unifConvRisk}
Assuming that $X,\Theta$ are bounded in addition to the assumptions of Proposition~\ref{prop:unifConvMLE} the convergence $\hat R_n(\theta)\to R(\theta)$ is uniform in $\theta\in\Theta$.
\end{prop}
\begin{proof}
Since $X,\Theta$ are bounded the margin value $f_{\theta}(X)$ is bounded with probability 1. As a result the loss function is bounded in absolute value by a constant $C$. We also note that a two-component Gaussian mixture model (with known mixing proportions) is Lipschitz continuous in its parameters
\begin{multline*}
\Bigg|\sum_{y\in\{-1,+1\}} p(y)p_{\hat\mu^{(n)}_y,\hat\sigma^{(n)}_y}(z) -\sum_{y\in\{-1,+1\}} p(y) p_{\mu^{true}_y,\sigma^{true}_y}(z) \Bigg| \\ \leq t(z) \, \cdot \, \Big|\Big|(\hat \mu_1^{(n)},\hat \mu_{-1}^{(n)},\hat \sigma_1^{(n)},\hat \sigma_{-1}^{(n)}) - (\mu_1^{\text{true}}, \mu_{-1}^{\text{true}}, \sigma_1^{\text{true}}, \sigma_{-1}^{\text{true}})\Big|\Big|
\end{multline*}
which may be verified by noting that the partial derivatives of
$p_{\eta}(z)=\sum_y p(y)p_{\mu_y,\sigma_y}(z|y)$
\begin{align*}
\frac{\partial p_{\eta}(z) } {\partial \hat\mu^{(n)}_1}&=
\frac{p(y=1) (z - \hat\mu^{(n)}_1)} {(2\pi)^{1/2} (\hat\sigma^{(n)}_1)^3}
e^{-\frac{ (z - \hat\mu^{(n)}_1)^2 } {2 (\hat\sigma^{(n)}_1)^2} }\\
\frac{\partial p_{\eta}(z)}{\partial \hat\mu^{(n)}_{-1}}&= \frac{p(y=-1) (z - \hat\mu^{(n)}_{-1})}{(2\pi)^{1/2} (\hat\sigma^{(n)}_{-1})^3} e^{-\frac{(z - \hat\mu^{(n)}_{-1})^2}{2 (\hat\sigma^{(n)}_{-1})^2}}\\
\frac{\partial p_{\eta}(z)}{\partial \hat\sigma^{(n)}_1}&= \frac{p(y=1)}{(2\pi)^{1/2}} \left(\frac{(z - \hat\mu^{(n)}_1)^2}{(\hat\sigma^{(n)}_1)^4} - \frac{1}{(\hat\sigma^{(n)}_1)^2}\right) e^{-\frac{(z - \hat\mu^{(n)}_1)^2}{2 (\hat\sigma^{(n)}_1)^2}}\\
\frac{\partial p_{\eta}(z)}{\partial \hat\sigma^{(n)}_{-1}}&= \frac{p(y=-1)}{(2\pi)^{1/2}} \left(\frac{(z - \hat\mu^{(n)}_{-1})^2}{(\hat\sigma^{(n)}_{-1})^4} - \frac{1}{(\hat\sigma^{(n)}_{-1})^2}\right) e^{-\frac{(z - \hat\mu^{(n)}_{-1})^2}{2 (\hat\sigma^{(n)}_{-1})^2}}
\end{align*}
are bounded for a compact $E$. These observations, together with Proposition~\ref{prop:unifConvMLE} lead to
\begin{align*}
|\hat R_n(\theta)-R(\theta)| &\leq \sum_{y\in\{-1,+1\}} p(y) \int \Big |p_{\hat\mu^{(n)}_y,\hat\sigma^{(n)}_y}(f_{\theta}(X)=\alpha)
-p_{\mu^{\text{true}}_y,\sigma^{\text{true}}_y}(f_{\theta}(X)=\alpha) \Big|\,
|L(y,\alpha)| d\alpha\\
& \leq C \int \Big|\sum_{y\in\{-1,+1\}} p(y)p_{\hat\mu^{(n)}_y,\hat\sigma^{(n)}_y}(\alpha) -\sum_{y\in\{-1,+1\}} p(y) p_{\mu^{\text{true}}_y,\sigma^{\text{true}}_y}(\alpha) \Big| \, d\alpha \\
& \leq C\, \| (\hat \mu_1^{(n)},\hat \mu_{-1}^{(n)},\hat \sigma_1^{(n)},\hat \sigma_{-1}^{(n)}) - (\mu_1^{\text{true}}, \mu_{-1}^{\text{true}}, \sigma_1^{\text{true}}, \sigma_{-1}^{\text{true}}) \| \int_a^b t(z)dz \\
&\leq C'\, \| (\hat \mu_1^{(n)},\hat \mu_{-1}^{(n)},\hat \sigma_1^{(n)},\hat \sigma_{-1}^{(n)}) - (\mu_1^{\text{true}}, \mu_{-1}^{\text{true}}, \sigma_1^{\text{true}}, \sigma_{-1}^{\text{true}}) \| \to 0 \\
\end{align*}
uniformly over $\theta\in\Theta$.
\end{proof}
\begin{prop}
Under the assumptions of Proposition~\ref{prop:unifConvRisk}
\[ P\left(\lim_{n\to\infty} \argmin_{\theta\in\Theta} \hat R_n(\theta)=\argmin_{\theta\in\Theta} R(\theta)\right)=1.\]
\end{prop}
\begin{proof}
We denote $t^*=\argmin R(\theta)$ and $t_n = \argmin \hat R_n(\theta)$. Fix $\epsilon>0$ and let $S=\{\theta: \|\theta-t^*\|\geq \epsilon \}$. Since $S$ is compact, $R$ achieves its minimum on it and $\epsilon'=\min_{\theta\in S} R(\theta)-R(t^*)>0$.
Since $\hat R_n(\theta)\to R(\theta)$ uniformly, there exists $N$ such that for all $n>N$ and all $\theta$, $|\hat R_n(\theta)-R(\theta)|<\epsilon'/2$. Consequently, for $n>N$ and $\theta\in S$ we have $\hat R_n(\theta)> R(\theta)-\epsilon'/2 \geq R(t^*)+\epsilon'/2 > \hat R_n(t^*)$, which implies $t_n\not\in S$, i.e., $\|t_n-t^*\|< \epsilon$. Since $\epsilon>0$ was arbitrary, $t_n\to t^*$ which concludes the proof.
\end{proof}
\subsection{Asymptotic Variance} \label{sec:asymVar}
In addition to consistency, it is useful to characterize the accuracy of our estimator $\hat R_n(\theta)$ as a function of $p(y),\mu,\sigma$. We do so by computing the asymptotic variance of the estimator which equals the inverse Fisher information
\begin{align*}
\sqrt{n} (\hat \eta^{\text{mle}}_n -\eta_0) \tood N(0,I^{-1}(\eta^{\text{true}}))
\end{align*}
and analyzing its dependency on the model parameters. We first derive the asymptotic variance of the MLE for a mixture of two Gaussians (we denote below $\eta=(\eta_1,\eta_{-1})$, $\eta_i=(\mu_i,\sigma_{i})$)
\begin{align}
p_{\eta}(z) &=p(Y=1) N(z; \mu_1,\sigma_1^2) + p(Y=-1) N(z; \mu_{-1},\sigma_{-1}^2) \\
&=p_1 p_{\eta_1}(z) + p_{-1} p_{\eta_{-1}}(z).
\end{align}
The elements of the $4 \times 4$ Fisher information matrix $I(\eta)$
\begin{align*}
I(\eta_i,\eta_j) = \E \left(\frac{\partial \log p_{\eta}(z) }{\partial \eta_i}\frac{\partial \log p_{\eta}(z)}{\partial \eta_j}\right)
\end{align*}
may be computed using the following derivatives
\begin{align*}
\frac{\partial \log p_{\eta}(z) }{\partial \mu_i} &= \frac{p_i}{\sigma_i} \left(\frac{z-\mu_i}{\sigma_i}\right) \frac{p_{\eta_i}(z)}{p_{\eta}(z)} \\
\frac{\partial \log p_{\eta}(z) }{\partial \sigma^2_i} &=\frac{p_i}{2\sigma^2_i} \left(\left(\frac{z-\mu_i}{\sigma_i}\right)^2-1\right)\frac{p_{\eta_i}(z)}{p_{\eta}(z)}
\end{align*}
for $i=1,-1$. Using derivations similar to the ones in \citep{behboodian1972} we obtain
\begin{align*}
I(\mu_i,\mu_j) &=\frac{p_ip_j}{\sigma_i\sigma_j} M_{11} \Big(p_{\eta_i}(z),p_{\eta_j}(z)\Big)\\
I(\mu_1,\sigma^2_i)&=\frac{p_1p_i}{2\sigma_1\sigma^2_i} \Big[M_{12} \Big(p_{\eta_1}(z),p_{\eta_i}(z)\Big)-M_{10} \Big(p_{\eta_1}(z),p_{\eta_i}(z)\Big)\Big] \\
I(\mu_{-1},\sigma^2_i)&=\frac{p_{-1}p_i}{2\sigma_{-1}\sigma^2_i} \Big[M_{21} \Big(p_{\eta_i}(z),p_{\eta_{-1}}(z)\Big)-M_{01} \Big(p_{\eta_i}(z),p_{\eta_{-1}}(z)\Big)\Big] \\
I(\sigma^2_i,\sigma^2_i)&=\frac{p^2_i}{4\sigma^4_i} \Big[M_{00} \Big(p_{\eta_i}(z),p_{\eta_i}(z)\Big)-2M_{11} \Big(p_{\eta_i}(z),p_{\eta_i}(z)\Big)+M_{22} \Big(p_{\eta_i}(z),p_{\eta_i}(z)\Big)\Big] \\
I(\sigma^2_1,\sigma^2_{-1})&=\frac{p_1p_{-1}}{4\sigma^2_1\sigma^2_{-1}} \Big[M_{00} \Big(p_{\eta_1}(z),p_{\eta_{-1}}(z)\Big)-M_{20} \Big(p_{\eta_1}(z),p_{\eta_{-1}}(z)\Big)\\&\qquad -M_{02} \Big(p_{\eta_1}(z),p_{\eta_{-1}}(z)\Big)+M_{22} \Big(p_{\eta_1}(z),p_{\eta_{-1}}(z)\Big)\Big]
\end{align*}
where
\begin{align*}
M_{m,n}\Big(p_{\eta_i}(z),p_{\eta_j}(z)\Big) &= \int_{-\infty}^{\infty} \left(\frac{z-\mu_i}{\sigma_i}\right)^m\left(\frac{z-\mu_j}{\sigma_j}\right)^n \frac{p_{\eta_i}(z)p_{\eta_j}(z)}{p_{\eta}(z)} \,dz.
\end{align*}
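The $M_{m,n}$ integrals generally lack closed forms but are easy to evaluate numerically. The sketch below uses the trapezoidal rule (integration range and grid size are arbitrary choices); as a sanity check, when the two components coincide the mixture density cancels and $M_{m,n}$ reduces to the $(m+n)$-th moment of $N(0,1)$:

```python
import math

def normal_pdf(z, mu, s):
    return math.exp(-(z - mu) ** 2 / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)

def M(m, n, ei, ej, e1, em1, p1, lo=-30.0, hi=30.0, steps=6000):
    """M_{m,n}(p_{eta_i}, p_{eta_j}) = int ((z-mu_i)/s_i)^m ((z-mu_j)/s_j)^n
    * p_{eta_i}(z) p_{eta_j}(z) / p_eta(z) dz, by the trapezoidal rule.
    Each eta is a (mu, sigma) pair; p_eta is the two-component mixture."""
    h = (hi - lo) / steps
    acc = 0.0
    for k in range(steps + 1):
        z = lo + k * h
        mix = p1 * normal_pdf(z, *e1) + (1 - p1) * normal_pdf(z, *em1)
        w = 0.5 if k in (0, steps) else 1.0
        acc += w * (((z - ei[0]) / ei[1]) ** m) * (((z - ej[0]) / ej[1]) ** n) \
               * normal_pdf(z, *ei) * normal_pdf(z, *ej) / mix
    return acc * h

# Sanity check: with identical components, M_{m,n} is a standard normal moment:
# M_{0,0} = 1, M_{1,1} = E[x^2] = 1, M_{2,2} = E[x^4] = 3.
e = (0.0, 1.0)
m00 = M(0, 0, e, e, e, e, 0.7)
m11 = M(1, 1, e, e, e, e, 0.7)
m22 = M(2, 2, e, e, e, e, 0.7)
```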
In some cases it is more instructive to consider the asymptotic variance of the risk estimator $\hat R_n(\theta)$ rather than that of the parameter estimate for $\eta=(\mu,\sigma)$. This could be computed using the delta method and the above Fisher information matrix
\begin{align*}
\sqrt{n} (\hat R_n(\theta) - R(\theta)) \tood N(0,\nabla h(\eta^{\text{true}})^T I^{-1}(\eta^{true})\nabla h(\eta^{\text{true}}))
\end{align*}
where $\nabla h$ is the gradient vector of the mapping $R(\theta)= h(\eta)$. For example, in the case of the exponential loss \eqref{eq:loss1} we get
\begin{align*}
h(\eta)&=p(Y=1)\sigma_1\sqrt{2} \exp {\Big(\frac{(\mu_1-1)^2}{2}-\frac{\mu^2_1}{2\sigma^2_1}\Big)}+p(Y=-1)\sigma_{-1}\sqrt{2} \exp {\Big(\frac{(\mu_{-1}-1)^2}{2}-\frac{\mu^2_{-1}}{2\sigma^2_{-1}}\Big)}\\
\frac{\partial h(\eta)}{\partial \mu_1}& = \frac{\sqrt{2}P(Y=1)(\mu_1(\sigma^2_1-1)-\sigma^2_1)}{\sigma_1} \exp \left(\frac{(\mu_1-1)^2}{2}-\frac{\mu^2_1}{2\sigma^2_1}\right) \\
\frac{\partial h(\eta)}{\partial \mu_{-1}}& = \frac{\sqrt{2}P(Y=-1)(\mu_{-1}(\sigma^2_{-1}-1)+\sigma^2_{-1})}{\sigma_{-1}} \exp \left(\frac{(\mu_{-1}+1)^2}{2}-\frac{\mu^2_{-1}}{2\sigma^2_{-1}}\right) \\
\frac{\partial h(\eta)}{\partial \sigma^2_1}& = \frac{P(Y=1) (\mu^2_1+\sigma^2_1)}{\sqrt{2\sigma_1}} \Big(\frac{(\mu_1-1)^2}{2}-\frac{\mu^2_1}{2\sigma^2_1}\Big) \\
\frac{\partial h(\eta)}{\partial \sigma^2_{-1}}& = \frac{P(Y=-1) (\mu^2_{-1}+\sigma^2_{-1})}{\sqrt{2\sigma_{-1}}} \Big(\frac{(\mu_{-1}+1)^2}{2}-\frac{\mu^2_{-1}}{2\sigma^2_{-1}}\Big).
\end{align*}
Figure~\ref{fig:fim} plots the asymptotic accuracy of $\hat R_n(\theta)$ for log-loss. The left panel shows that the accuracy of $\hat R_n$ increases with the imbalance of the marginal distribution $p(Y)$. The right panel shows that the accuracy of $\hat R_n$ increases with the difference between the means $|\mu_1-\mu_{-1}|$ and with the variance ratio $\sigma_1/\sigma_{-1}$.
\begin{figure}
\centering
\includegraphics[scale=0.4]{figure0025}
\includegraphics[scale=0.4]{figure0026}
\caption{Left panel: asymptotic accuracy (inverse of trace of asymptotic variance) of $\hat R_n(\theta)$ for logloss as a function of the imbalance of the class marginal $p(Y)$. The accuracy increases with the class imbalance as it is easier to separate the two mixture components. Right panel: asymptotic accuracy (inverse of trace of asymptotic variance) as a function of the difference between the means $|\mu_1-\mu_{-1}|$ and the variance ratio $\sigma_1/\sigma_{-1}$. See text for more information.}
\label{fig:fim}
\end{figure}
\subsection{Multiclass Classification}
Thus far, we have considered unsupervised risk estimation in binary classification. In this section we describe a multiclass extension based on standard extensions of the margin concept to multiclass classification. In this case the margin vector associated with the multiclass classifier
\begin{align}
\hat Y = \argmax_{k=1,\ldots,K} f_{\theta^k}(X), \qquad X,\theta^k \in\mathbb{R}^d
\end{align}
is $f_{\bm \theta}(X)=(f_{\theta^1}(X),\dots,f_{\theta^K}(X))$. Following our discussion of the binary case, $f_{\theta^k}(X)|Y$, $k=1,\ldots,K$ is assumed to be normally distributed with parameters that are estimated by maximizing the likelihood of a Gaussian mixture model. We thus have $K$ Gaussian mixture models, each one with $K$ mixture components. The estimated parameters are plugged in as before into the multiclass risk
\begin{align}
R(\bm \theta) &= E_{p(f_{\bm \theta}(X),Y)}L(Y,f_{\bm \theta}(X))
\end{align}
where $L$ is a multiclass margin based loss function such as
\begin{align}
L(Y,f_{\bm \theta}(X)) &= \sum_{k\neq Y} \log(1+\exp(f_{\theta^k}(X))) \label{eq:loglossMC} \\
L(Y,f_{\bm \theta}(X)) &= \sum_{k\neq Y}(1 + f_{\theta^k}(X))_{+}. \label{eq:hingelossMC}
\end{align}
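The prediction rule and the multiclass hinge loss \eqref{eq:hingelossMC} are straightforward to implement; a minimal sketch:

```python
def predict(margins):
    """hat Y = argmax_k f_{theta^k}(X) for a margin vector (f_1, ..., f_K)."""
    return max(range(len(margins)), key=lambda k: margins[k])

def multiclass_hinge(y, margins):
    """Multiclass hinge loss from the text: sum over k != y of (1 + f_k)_+."""
    return sum(max(0.0, 1.0 + f) for k, f in enumerate(margins) if k != y)

pred = predict([0.1, 2.0, -1.0])              # class 1 has the largest margin
loss = multiclass_hinge(1, [-2.0, 3.0, 0.5])  # (1 - 2)_+ + (1 + 0.5)_+ = 1.5
```

Note that the loss penalizes only the margins of the incorrect classes, pushing them below $-1$.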
Since the MLE for a Gaussian mixture model with $K$ components is consistent (assuming $p(Y)$ is known and all probabilities $p(Y=k), k=1,\ldots,K$ are distinct), the MLE estimators for $f_{\theta^k}(X)|Y=k'$ are consistent. Furthermore, if the loss $L$ is a continuous function of these parameters (as is the case for \eqref{eq:loglossMC}-\eqref{eq:hingelossMC}), the risk estimator $\hat R_n(\bm \theta)$ is consistent as well.
\section{Application 1: Estimating Risk in Transfer Learning}
We consider applying our estimation framework in two ways. The first application, which we describe in this section, is estimating margin-based risks in transfer learning where classifiers are trained on one domain but tested on a somewhat different domain. The transfer learning assumption that labeled data exists for the training domain but not for the test domain motivates the use of our unsupervised risk estimation. The second application, which we describe in the next section, is more ambitious. It is concerned with training classifiers without labeled data whatsoever.
In evaluating our framework we consider both synthetic and real-world data. In the synthetic experiments we generate high dimensional data from two uniform distributions $X|\{Y=1\}$ and $X|\{Y=-1\}$ with independent dimensions and prescribed $p(Y)$ and classification accuracy. This controlled setting allows us to examine the accuracy of the risk estimator as a function of $n$, $p(Y)$, and the classifier accuracy.
Figure~\ref{fig:accPyfig} shows that the relative error of $\hat R_n(\theta)$ (measured by
$|\hat R_n(\theta)-R_n(\theta)|/R_n(\theta)$) in estimating the logloss (left) and hinge loss (right) decreases with $n$, achieving accuracy greater than 99\% for $n>1000$. In accordance with the theoretical results in Section~\ref{sec:asymVar}, the figure shows that the estimation error decreases as the classifiers become more accurate and as $p(Y)$ becomes less uniform. We found these trends to hold in other experiments as well. In the case of the exponential loss, however, the estimator performed substantially worse (figure omitted). This is likely due to the exponential dependency of the loss on $Yf_{\theta}(X)$, which makes it very sensitive to outliers.
\begin{figure}
\centering
\raisebox{2.1ex}{\includegraphics[scale=0.35]{figure0027}}
\includegraphics[scale=0.4815]{figure0028}\\
\raisebox{2.1ex}{\includegraphics[scale=0.35]{figure0029}}
\includegraphics[scale=0.4815]{figure0030}\\
\caption{The relative accuracy of $\hat R_n$ (measured by
$|\hat R_n(\theta)-R_{n}(\theta)|/R_{n}(\theta)$)
as a function of $n$, classifier accuracy (acc) and the label marginal $p(Y)$ (left: logloss, right: hinge-loss). The estimation error nicely decreases with $n$ (approaching 1\% at $n=1000$ and decaying further). It also decreases with the accuracy of the classifier (top) and non-uniformity of $p(Y)$ (bottom) in accordance with the theory of Section~\ref{sec:asymVar}.}\label{fig:accPyfig}
\end{figure}
Figure~\ref{fig:asymm} shows the accuracy of logloss estimation for a real world transfer learning experiment based on the 20-newsgroup data. Following the experimental setup of \citep{dai07} we trained a classifier (logistic regression) on one 20 newsgroup classification problem and tested it on a related problem. Specifically, we used the hierarchical category structure to generate train and testing sets with different distributions (see Figure~\ref{fig:asymm} and \citep{dai07} for more detail). The unsupervised estimation of the logloss risk was very effective with relative accuracy greater than 96\% and absolute error less than 0.02.
\begin{figure}
\centering { \begin{tabular}{|l|l|l|l|l|l|} \hline
Data & $R_n$ & $|R_n-\hat R_n|$ & $|R_n-\hat R_n|/R_n$ & $n$ & $p(Y=1)$\\ \hline
sci vs. comp & 0.7088 & 0.0093 & 0.013 & 3590 & 0.8257\\ \hline
sci vs. rec & 0.641 & 0.0141 & 0.022 & 3958 & 0.7484\\ \hline
talk vs. rec & 0.5933 & 0.0159 & 0.026 & 3476 & 0.7126\\ \hline
talk vs. comp & 0.4678 & 0.0119 & 0.025 & 3459 & 0.7161 \\ \hline
talk vs. sci & 0.5442 & 0.0241 & 0.044 & 3464 & 0.7151\\ \hline
comp vs. rec & 0.4851 & 0.0049 & 0.010 & 4927 & 0.7972\\ \hline
\end{tabular}}
\caption{Error in estimating logloss for logistic regression classifiers trained on one 20-newsgroup classification task and tested on another. We followed the transfer learning setup described in \citep{dai07} which may be referred to for more detail. The train and testing sets contained samples from two top categories in the topic hierarchy but with different subcategory proportions. The first column indicates the top category classification task and the second indicates the empirical log-loss $R_n$ calculated using the true labels of the testing set \eqref{eq:empiricalLoss}. The third and fourth columns indicate the absolute and relative errors of $\hat R_n$. The fifth and sixth columns indicate the train set size and the label marginal distribution.}
\label{fig:asymm}
\end{figure}
\section{Application 2: Unsupervised Learning of
Classifiers} \label{sec:UnsuperTrain}
Our second application is a very ambitious one: training classifiers using unlabeled data by minimizing the unsupervised risk estimate $\hat \theta_n=\argmin \hat R_n(\theta)$. We evaluate the performance of the learned classifier $\hat \theta_n$ based on three quantities: (i) the unsupervised risk estimate $\hat R_n(\hat\theta_n)$, (ii) the supervised risk estimate $R_n(\hat\theta_n)$, and (iii) its classification error rate. We also compare the performance of $\hat \theta_n=\argmin \hat R_n(\theta)$ with that of its supervised analog $\argmin R_n(\theta)$.
We compute $\hat \theta_n=\argmin \hat R_n(\theta)$ using two algorithms (see Algorithms \ref{alg:gradDescent}-\ref{alg:gridSearch}) that start with an initial $\theta^{(0)}$ and iteratively construct a sequence of classifiers $\theta^{(1)},\ldots,\theta^{(T)}$ which steadily decrease $\hat R_n$. Algorithm~\ref{alg:gradDescent} adopts a gradient descent-based optimization. At each iteration $t$, it approximates the gradient vector $\nabla \hat R_n(\theta^{(t)})$ numerically using a finite difference approximation~\eqref{eq:central}. Algorithm~\ref{alg:gridSearch} performs a grid search along every dimension of $\theta^{(t)}$, setting $[\theta^{(t)}]_i$ to the grid value that minimizes $\hat R_n$. Although we focus on unsupervised training of logistic regression (minimizing the unsupervised logloss estimate), the same techniques may be generalized to train other margin-based classifiers such as SVM by minimizing the unsupervised hinge-loss estimate.
\begin{algorithm*}
\begin{algorithmic}
\caption{Unsupervised Gradient Descent}
\label{alg:gradDescent}
{
\STATE {\bfseries Input:} $X^{(1)},\ldots,X^{(n)}\in \mathbb{R}^d$, $p(Y)$, step size $\alpha$
\STATE Initialize $t=0$, $\theta^{(t)} =\theta^0\in \mathbb{R}^d$
\REPEAT
\STATE Compute $f_{\theta^{(t)}}(X^{(j)}) = \langle\theta^{(t)},X^{(j)}\rangle$ $\forall j=1,\dots,n$
\STATE Estimate $(\hat \mu_1,\hat\mu_{-1}, \hat\sigma_1,\hat\sigma_{-1})$ by maximizing \eqref{eq:ll}
\FOR{$i=1$ {\bfseries to} $d$}
\STATE Plug-in the estimates into~\eqref{eq:plugin} to approximate
\STATE
\begin{align}&\frac{\partial \hat R_n(\theta^{(t)})}{\partial \theta_i}=\frac{\hat R_n(\theta^{(t)} + h_i e_i) - \hat R_n(\theta^{(t)} - h_i e_i)}{2h_i}\nonumber \\
&(e_i \text{ is an all zero vector except for } [e_i]_i=1)
\label{eq:central}
\end{align}
\ENDFOR
\STATE Form $\nabla \hat R_n(\theta^{(t)})=(\frac{\partial \hat R_n(\theta^{(t)})}{\partial \theta^{(t)}_1}, \dots, \frac{\partial \hat R_n(\theta^{(t)})}{\partial\theta^{(t)}_d})$
\STATE Update $\theta^{(t+1)} = \theta^{(t)} - \alpha \nabla \hat R_n(\theta^{(t)})$, $t = t +1$ \UNTIL{convergence} \STATE {\bfseries Output:} linear classifier $\theta^{\text{final}} = \theta^{(t)}$ }
\end{algorithmic}
\end{algorithm*}
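The central-difference approximation \eqref{eq:central} does not depend on the form of $\hat R_n$; a minimal sketch with a toy objective standing in for the risk estimate:

```python
def numerical_gradient(f, theta, h=1e-5):
    """Central finite differences:
    dR/dtheta_i ~ (f(theta + h*e_i) - f(theta - h*e_i)) / (2h)."""
    grad = []
    for i in range(len(theta)):
        up = theta[:i] + [theta[i] + h] + theta[i + 1:]
        dn = theta[:i] + [theta[i] - h] + theta[i + 1:]
        grad.append((f(up) - f(dn)) / (2.0 * h))
    return grad

# Toy objective standing in for hat R_n; the true gradient at (2, 5) is (4, 3).
f = lambda t: t[0] ** 2 + 3.0 * t[1]
g = numerical_gradient(f, [2.0, 5.0])
```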
\begin{algorithm*}
\caption{Unsupervised Grid Search}
\label{alg:gridSearch}
\begin{algorithmic}
{ \STATE {\bfseries Input:} $X^{(1)},\ldots,X^{(n)}\in \mathbb{R}^d$, $p(Y)$, grid-size $\tau$
\STATE Initialize $\theta_i \sim \text{Uniform}(-2,2)$ for all $i$
\REPEAT
\FOR{$i=1$ {\bfseries to} $d$}
\STATE Construct a grid of $\tau$ points in the range $[\theta_i - 4 \tau,\theta_i + 4 \tau]$
\STATE Compute the risk estimate \eqref{eq:plugin} where all dimensions of $\theta^{(t)}$ are fixed except for $[\theta^{(t)}]_i$ which is evaluated at each grid point.
\STATE Set $[\theta^{(t+1)}]_i$ to the grid value that minimized \eqref{eq:plugin}
\ENDFOR
\UNTIL{convergence} \STATE {\bfseries Output:} linear classifier $\theta^{\text{final}} = \theta$ }
\end{algorithmic}
\end{algorithm*}
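Algorithm~\ref{alg:gridSearch} is likewise generic in the objective; the following sketch runs the coordinate-wise grid search on a toy quadratic standing in for $\hat R_n$ (grid size, step, and sweep count are arbitrary illustrative choices):

```python
def grid_search_minimize(risk, theta0, tau=9, step=0.5, sweeps=10):
    """Coordinate-wise grid search in the spirit of Algorithm 2: sweep the
    coordinates, setting each one to the best value on a grid of tau points
    centered at its current value."""
    theta = list(theta0)
    half = tau // 2
    for _ in range(sweeps):
        for i in range(len(theta)):
            grid = [theta[i] + step * (g - half) for g in range(tau)]
            theta[i] = min(grid, key=lambda v: risk(theta[:i] + [v] + theta[i + 1:]))
    return theta

# Toy stand-in for hat R_n with minimizer (1, -2, 3).
risk = lambda t: (t[0] - 1.0) ** 2 + (t[1] + 2.0) ** 2 + (t[2] - 3.0) ** 2
theta = grid_search_minimize(risk, [0.0, 0.0, 0.0])
```

Each sweep moves every coordinate at most $\mathrm{step}\cdot\lfloor\tau/2\rfloor$, so several sweeps may be needed before the search settles at the minimizer.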
Figures~\ref{fig:rcv1}-\ref{fig:mnist} display $\hat R_n(\hat\theta_n)$, $R_n(\hat\theta_n)$ and $\text{error-rate}(\hat\theta_n)$ on the train and test sets of two real-world datasets: RCV1 (text documents) and MNIST (handwritten digit images). In the case of RCV1 we discarded all but the most frequent $504$ words (after stop-word removal) and represented documents using their tfidf scores. We experimented on the binary classification task of distinguishing the top category (positive) from the next $4$ top categories (negative), which resulted in $p(y=1)=0.3$ and $n=199328$. $70\%$ of the data was chosen as an (unlabeled) train set and the rest was held out as a test set. In the case of the MNIST data, we normalized each of the $28\times 28=784$ pixels to have $0$ mean and unit variance. Our classification task was to distinguish images of the digit 1 (positive) from the digit 2 (negative), resulting in $14867$ samples and $p(Y=1)=0.53$. We randomly chose $70\%$ of the data as a train set and kept the rest as a test set.
\begin{figure} \centering
\includegraphics[scale=0.4]{figure0031} \includegraphics[scale=0.4]{rcv1gridLossTr}\\
\includegraphics[scale=0.4]{figure0032}
\includegraphics[scale=0.4]{figure0033} \\
\includegraphics[scale=0.4]{figure0034}
\includegraphics[scale=0.4]{figure0035}
\caption{Performance of unsupervised logistic regression classifier $\hat\theta_n$ computed using Algorithm~\ref{alg:gradDescent} (left) and Algorithm~\ref{alg:gridSearch} (right) on the RCV1 dataset. The top two rows show the decay of the two risk estimates $\hat R_n(\hat\theta_n)$, $R_n(\hat\theta_n)$ as a function of the algorithm iterations. The risk estimates of $\hat\theta_n$ were computed using the train set (top) and the test set (middle). The bottom row displays the decay of the test set error rate of $\hat\theta_n$ as a function of the algorithm iterations. The figure shows that the algorithm obtains a relatively accurate classifier (testing set error rate 0.1, and $\hat R_n$ decaying similarly to $R_n$) without the use of a single labeled example. For comparison, the test error rate for supervised logistic regression with the same $n$ is 0.07.}
\label{fig:rcv1}
\end{figure}
\begin{figure} \centering
\includegraphics[scale=0.4]{figure0036}
\includegraphics[scale=0.4]{figure0037}\\
\includegraphics[scale=0.4]{figure0038}
\includegraphics[scale=0.4]{figure0039}\\
\includegraphics[scale=0.4]{figure0040}
\includegraphics[scale=0.4]{figure0041}
\caption{Performance of unsupervised logistic regression classifier $\hat\theta_n$ computed using Algorithm~\ref{alg:gradDescent} (left) and Algorithm~\ref{alg:gridSearch} (right) on the MNIST dataset. The top two rows show the decay of the two risk estimates $\hat R_n(\hat\theta_n)$, $R_n(\hat\theta_n)$ as a function of the algorithm iterations. The risk estimates of $\hat\theta_n$ were computed using the train set (top) and the test set (middle). The bottom row displays the decay of the test set error rate of $\hat\theta_n$ as a function of the algorithm iterations. The figure shows that the algorithm obtains a relatively accurate classifier (testing set error rate 0.1, and $\hat R_n$ decaying similarly to $R_n$) without the use of a single labeled example. For comparison, the test error rate for supervised logistic regression with the same $n$ is 0.05.}
\label{fig:mnist}
\end{figure}
Figures~\ref{fig:rcv1}-\ref{fig:mnist} indicate that minimizing the unsupervised logloss estimate is quite effective in learning an accurate classifier without labels. Both the unsupervised and supervised risk estimates $\hat R_n(\hat\theta_n)$, $R_n(\hat\theta_n)$ decay nicely when computed over the train set as well as the test set. Also interesting is the decay of the error rate. For comparison purposes, supervised logistic regression with the same $n$ achieved only a slightly better test set error rate: 0.07 on RCV1 (instead of 0.1) and 0.05 on MNIST (instead of 0.1).
\subsection{Inaccurate Specification of $p(Y)$}
Our estimation framework assumes that the marginal $p(Y)$ is known. In some cases we may only have an inaccurate estimate of $p(Y)$. It is instructive to consider how the performance of the learned classifier degrades with the inaccuracy of the assumed $p(Y)$.
Figure~\ref{fig:misspecfig} displays the performance of the learned classifier for RCV1 data as a function of the assumed value of $p(Y=1)$ (correct value is $p(Y=1)=0.3$). We conclude that knowledge of $p(Y)$ is an important component in our framework but precise knowledge is not crucial. Small deviations of the assumed $p(Y)$ from the true $p(Y)$ result in a small degradation of logloss estimation quality and testing set error rate. Naturally, large deviation of the assumed $p(Y)$ from the true $p(Y)$ renders the framework ineffective.
\begin{figure}
\centering
\includegraphics[scale=0.4]{figure0042}
\caption{Performance of unsupervised classifier training on RCV1 data (top class vs. classes 2-5) for misspecified $p(Y)$. The performance of the estimated classifier (in terms of training set empirical logloss $R_n$~\eqref{eq:empiricalLoss} and test error rate measured using held-out labels) degrades as the deviation between the assumed and true $p(Y=1)$ grows (true $p(Y=1)=0.3$). The classifier performance is very good when the assumed $p(Y)$ is close to the truth and degrades gracefully when the assumed $p(Y)$ is not too far from the truth.}\label{fig:misspecfig}
\end{figure}
\section{Related Work}
Related problems have been addressed in \citep{Mann07} and \citep{Quad09}. The work in \citep{Mann07} performs transduction by enforcing constraints on the label proportions. However, their method requires labeled data. The work in \citep{Quad09} aims to estimate the labels of an unlabeled testing set using known label proportions of $n$ sets of unlabeled observations. The key difference between their approach and ours is that they require as many splits of the data as the number of classes and therefore require knowledge of the label proportions in each split. This is a much stronger assumption than knowing $p(Y)$. As noted previously (see the comment after Proposition~\ref{prop:identifiability}), our analysis is in fact valid when only the order of the label proportions is known, rather than their absolute values.
An important distinction between our work and the references above is that our work provides an estimate for the margin-based risk and therefore leads naturally to unsupervised versions of logistic regression and support vector machines. We also provide an asymptotic analysis showing convergence of the resulting classifier to the optimal classifier (the minimizer of \eqref{eq:defR}). Experimental results show that in practice the accuracy of the unsupervised classifier is of the same order as (though naturally slightly lower than) that of its supervised analog.
\section{Discussion}
In this paper we developed a novel framework for estimating margin-based risks using only unlabeled data. We showed that it performs well in practice on several different datasets. We derived a theoretical basis by casting the problem as maximum likelihood estimation in a Gaussian mixture model, followed by plug-in estimation.
Remarkably, the theory states that assuming normality of $f_{\theta}(X)$ and a known $p(Y)$, we are able to estimate the risk $R(\theta)$ without a single labeled example. That is, the risk estimate converges to the true risk as the amount of unlabeled data increases. Moreover, using uniform convergence arguments it is possible to show that the proposed training algorithm converges to the optimal classifier as $n\to\infty$ without any labeled data.
On a more philosophical level, our approach points at novel questions that go beyond supervised and semi-supervised learning. What benefit do labels provide over unsupervised training? Can our framework be extended to semi-supervised learning where a few labels do exist? Can it be extended to non-classification scenarios such as margin based regression or margin based structured prediction? When are the assumptions likely to hold and how can we make our framework even more resistant to deviations from them? These questions and others form new and exciting open research directions.
{
\bibliographystyle{plain}
}
\section{Introduction}
An important topic in modern wireless communications is the subject
of decentralized networks. By definition, a decentralized network of separate transmitter-receiver pairs
has no central controller to allocate the network resources among
the active users. As such, resource allocation must be performed
locally at each node. In general, users are not a priori aware of the number of active users or the channel gains\footnote{Throughout the paper, we assume the channel from each transmitter to each receiver is modeled by a static, non-frequency-selective gain.}. Also, users are not aware of each
other's codebooks implying multiuser detection is not possible, i.e., users treat each other as noise.
Multiuser interference is known to be the main factor limiting the
achievable rates in such networks particularly in the high Signal-to-Noise Ratio (SNR) or interference limited regime. Therefore, all users must follow
a distributed signaling scheme such that the destructive effect of
interference on each user is minimized, while the resources are
fairly shared among users.
Most distributed schemes reported in the literature rely on
either \textit{game-theoretic} approaches or \textit{cognitive
radios}. Cognitive radios \cite{2,haykin} have the ability to sense the
unoccupied portion of the available spectrum and use this
information in resource allocation. Although such smart radios avoid the use of a central controller,
they require sophisticated detection techniques for sensing the
spectrum holes and dynamic frequency assignment which add to the
overall system complexity \cite{8,9,10}.
Distributed strategies based on game-theoretic arguments have already attracted a great deal of attention. In \cite{1}, the authors introduce a non-cooperative game-theoretic framework to investigate the spectral efficiency issue when several users compete over an unlicensed band with no central controller. Reference \cite{12} offers a brief overview of game-theoretic dynamic spectrum sharing. Although these schemes help us understand the dynamics of distributed resource allocation, they usually suffer from implementation complexity and convergence issues, as they rely on iterative algorithms.
Spread spectrum communications is a natural setup for sharing the same bandwidth among several users. This area has attracted tremendous attention during the past decades in the context of centralized uplink/downlink multiuser systems. The appealing characteristics of spread spectrum systems have motivated researchers to utilize these schemes in networks without a fixed infrastructure, i.e., packet radio or ad-hoc networks \cite{15}. In direct sequence spread spectrum systems, the signal of each user is spread using a pseudo-random noise (PN) code. The challenge is that in a network without a central controller, if two users use the same spreading code, they will not be able to recover the data at the receiver side due to the high amount of interference. Distributed code assignment techniques are developed in \cite{38,39}. In \cite{38}, a distributed code assignment protocol is suggested using a greedy approximation algorithm and tools from graph theory. Another line of research is devoted to devising distributed schemes for the reverse link (uplink) of cellular systems. Distributed power assignment algorithms are proposed in \cite{41,42}. Reference \cite{43} proposes a distributed scheduling method called the token-bucket on-off scenario, utilized by autonomous mobile stations, and investigates its impact on the overall throughput of the reverse link. Furthermore, decentralized rate assignment in a multi-sector code division multiple access wireless network is discussed in \cite{44}.
Being a standard technique in spread spectrum communications and due
to its interference avoidance nature, Frequency Hopping is the
simplest spectrum sharing method to use in decentralized networks.
As different users typically have no prior information about the
codebooks of the other users, the most efficient method is avoiding
interference by choosing unused channels. As mentioned earlier,
searching the spectrum to find spectrum holes is not an easy task
due to the dynamic spectrum usage. As such, FH is a realization of a
transmission scheme without sensing, while avoiding the collisions
as much as possible. Frequency hopping is one of the standard
signaling schemes adopted in ad-hoc networks. In short
range scenarios, bluetooth systems \cite{19,20,21} are the most
popular examples of a wireless personal area network or WPAN. Using
FH over the unlicensed ISM band, a bluetooth system provides robust
communication to unpredictable sources of interference. A
modification of Frequency Hopping called Dynamic Frequency Hopping
(DFH), selects the hopping pattern based on interference
measurements in order to avoid dominant interferers. The performance
of a DFH scheme when applied to a cellular system is assessed in
\cite{22,23,24}.
Distributed rate assignment strategies have recently been adopted in the context of medium access control. It is well-known \cite{53} that the capacity region $\mathfrak{R}$ of a multiple access channel with $n$ users is a polytope with $n!$ corner points. Let each corner point of $\mathfrak{R}$ be an $n$-tuple whose elements are among the numbers $R_{1},\cdots,R_{L-1}$ and $R_{L}$. With no cooperation among the users, the authors in \cite{HO} propose that each user select a codebook of rate $R_{l}$ with probability $p_{l}\in(0,1)$ for $1\leq l\leq L$. Assuming the receiver is aware of the rate selection of all users, the average sum rate of the network is $\bar{R}=\sum_{l_{1},\cdots,l_{n}\in\{1,\cdots,L\}}p_{l_{1}}\cdots p_{l_{n}}(R_{l_{1}}+\cdots+R_{l_{n}})\mathbb{1}_{(R_{l_{1}},\cdots,R_{l_{n}})\in\mathfrak{R}}$ where $\mathbb{1}_{(R_{l_{1}},\cdots,R_{l_{n}})\in\mathfrak{R}}$ is $1$ if $(R_{l_{1}},\cdots,R_{l_{n}})\in\mathfrak{R}$ and $0$ otherwise. Finally, the numbers $p_{1},\cdots,p_{L-1}$ and $p_{L}$ are derived to maximize $\bar{R}$. Major differences between this scenario and a decentralized wireless network are
\textit{1)} The capacity region of a multiuser interference channel is unknown.
\textit{2)} In case transmitters have different choices to select the transmission rate, a certain receiver is not guaranteed to be aware of the transmission rate of interferers.
\textit{3)} Any user is already unaware of the gains of channels connecting the interferers' transmitters to its receiver. Also, any user is never capable of finding the amount of interference it imposes on other users.
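The averaged sum rate $\bar{R}$ of the rate-selection scheme in \cite{HO} is straightforward to evaluate numerically. The Python sketch below does so for a toy two-user Gaussian multiple access region; the region, rates, and probabilities are illustrative assumptions, not values from \cite{HO}:

```python
import itertools
import math

def avg_sum_rate(rates, probs, n, in_region):
    """Average sum rate when each of the n users independently picks
    rate R_l with probability p_l; a rate tuple contributes only if it
    lies in the capacity region (indicator in_region)."""
    total = 0.0
    for combo in itertools.product(range(len(rates)), repeat=n):
        p = math.prod(probs[l] for l in combo)      # p_{l_1} ... p_{l_n}
        s = sum(rates[l] for l in combo)            # R_{l_1} + ... + R_{l_n}
        if in_region([rates[l] for l in combo]):
            total += p * s
    return total

# toy two-user Gaussian MAC with unit powers and unit noise:
# R_i <= log2(2) = 1 and R_1 + R_2 <= log2(3)
region = lambda r: all(x <= 1.0 for x in r) and sum(r) <= math.log2(3.0)
rbar = avg_sum_rate(rates=[0.5, 1.0], probs=[0.4, 0.6], n=2, in_region=region)
```

Here the tuple $(1.0, 1.0)$ violates the sum-rate constraint and contributes nothing, so only three of the four rate tuples count toward $\bar{R}$.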
It is well-known that in the low SNR regime continuous transmission of $\mathrm{i.i.d.}$ Gaussian signals is optimal. However, as SNR increases, this scheme turns out to be quite inefficient. For instance, the achievable rate of each user eventually saturates, i.e., the achieved Sum Multiplexing Gain\footnote{The Sum Multiplexing Gain represents the scaling of the sum rate in terms of $\log\mathrm{SNR}$ as SNR tends to infinity.} (SMG) is equal to zero. Using the results in \cite{kami-1}, it is easy to see that by using a masking strategy where each user quits transmitting its Gaussian signals independently from transmission to transmission, a nonzero SMG of $\left(1-\frac{1}{n}\right)^{n-1}$ is attained in a decentralized network of $n$ users. This is an interesting result in the sense that if the number of active users tends to infinity, the achieved SMG settles on $\frac{1}{e}>0$.
In the present paper, we answer the following questions:
\textit{Question 1-} Is it possible to achieve an SMG larger than $\frac{1}{e}$ as the number of users becomes large?
We propose a distributed signaling scheme where each user spreads its Gaussian signal along a spreading code consisting of $\mathrm{i.i.d.}$ elements selected according to a globally known Probability Mass Function (PMF) over a finite alphabet $\mathscr{A}$. Thereafter, the resulting sequence is punctured independently from symbol to symbol with a certain probability, representing the masking operation. For example, assuming $\mathscr{A}=\{-1,1\}$, let the generated spreading code have length $10$ and be given by
\begin{equation}
\label{ }
\big(1,1,-1,-1,-1,1,-1,1,1,-1\big).
\end{equation}
Also, an $\mathrm{i.i.d.}$ sequence of $1$'s (representing $\mathsf{TRANSMIT}$) and $0$'s (representing $\mathsf{MASK}$) with length $10$ is generated as
\begin{equation}
\label{ }
\big(0,1,1,0,0,1,1,1,0,0\big).
\end{equation}
Finally, denoting the Gaussian signal to be transmitted by $\boldsymbol{x}$, the sequence
\begin{equation}
\label{ }
\big(0,\boldsymbol{x},-\boldsymbol{x},0,0,\boldsymbol{x},-\boldsymbol{x},\boldsymbol{x},0,0\big)
\end{equation}
is transmitted in $10$ consecutive transmission slots called a transmission frame. This process is repeated independently from one transmission frame to the next. We notice that since different users are not aware of each other's signals and spreading/masking sequences, the noise plus interference vector at the receiver side of any user is a mixed Gaussian random vector. We assume knowledge of the interference Probability Density Function (PDF) at the receiver side of each user. Using the proposed randomized spreading scheme, the number of active users and the gains of the channels conveying the interferers' signals can be found by inspecting the interference PDF and solving a set of linear equations.
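The spreading/masking construction described above can be sketched as follows; the alphabet, PMF, and masking probability $\varepsilon$ are illustrative choices:

```python
import numpy as np

def signature_code(K, alphabet, pmf, eps, rng):
    """One randomized signature: an i.i.d. spreading code drawn from the
    alphabet according to the PMF, punctured element-wise by i.i.d.
    Bernoulli(eps) masking variables (1 = TRANSMIT, 0 = MASK)."""
    spread = rng.choice(alphabet, size=K, p=pmf)   # spreading code
    mask = (rng.random(K) < eps).astype(int)       # masking code
    return spread * mask                           # element-wise product

rng = np.random.default_rng(0)
s = signature_code(K=10, alphabet=[-1, 1], pmf=[0.5, 0.5], eps=0.6, rng=rng)

# a complex Gaussian signal x is sent over the frame as x * s
x = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
frame = x * s
```

A fresh signature is drawn for every transmission frame, which is what makes the interference seen by other users mixed Gaussian.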
Assuming all users are \emph{frame-synchronous}, we derive achievable rates for the users in three steps:
\textit{Step 1-} Using the Singular Value Decomposition (SVD) of the signal space at the receiver side of any user, the interference vector is mapped onto the signal space and the complement space\footnote{In any Euclidean space $\mathscr{E}$ and a subspace $\mathscr{U}$ of $\mathscr{E}$, the complement space $\mathscr{U}^{\perp}$ of $\mathscr{U}$ is the set of elements in $\mathscr{E}$ that are perpendicular to every element of $\mathscr{U}$.} of the signal space.
\textit{Step 2-} A conditional version of entropy power inequality is used to derive a lower bound on the mutual information between the input and output of each user along any transmission frame. The conditioning is made over the contents of the interference vector mapped in the complement space of the signal space.
\textit{Step 3-} The resulting lower bound in the previous step highly depends on the differential entropy of mixed Gaussian random vectors. Since there is no closed formula on the differential entropy of a mixed Gaussian vector, a key Lemma is used to find computable bounds on this differential entropy. This leads us to the final formulation of the achievable rate.
In a decentralized network of $n$ users, we are able to show that by regulating the length of the transmission frame and the probabilistic structure of the spreading/masking sequences, the resulting lower bound scales like $\mathsf{SMG}(n)\log\mathrm{SNR}$ where $\lim_{n\to\infty}\mathsf{SMG}(n)=1$. This is exactly the SMG of a centralized orthogonal resource allocation scheme where multiuser interference is completely avoided.
Our focus is not particularly on the high SNR regime. In fact, the length of the transmission frame and the probabilistic parameters of the spreading/masking codes are sensitive to the choice of SNR. Our proposed achievable rate for any user in general depends on the gains of the channels conveying the interference. As mentioned earlier, each user is capable of finding the channel gains, however, if each user attempts to maximize its achievable rate over the length of the transmission frame and other code parameters, different users come up with different choices which results in inconsistency. To circumvent this difficulty, assuming the channel gains are realizations of $\mathrm{i.i.d.}$ continuous random variables, each user selects the code parameters such that the average of achievable rate per user over different realizations of the channel gains is maximized. This leads to a consistent and distributed method to design the best randomization algorithm in constructing the spreading/masking sequences.
An interesting observation is that even in the simplest scenario where the underlying alphabet is $\{-1,1\}$ and no masking is applied\footnote{This reminds us of direct sequence spread spectrum communications.}, the elements of the spreading codes are not equiprobable over $\{-1,1\}$. For example, our simulation results show that in a network of $n=4$ users at $\mathrm{SNR}=60\mathrm{dB}$, the elements of the spreading code must be selected to be $1$ with a probability of $0.09$ and $-1$ with a probability of $0.91$, or vice versa.
\textit{Question 2-} What is the highest achievable rate under the masking protocol? Can one do better than masking?
One may ask whether masking the transmitted signals independently from transmission slot to transmission slot is by itself sufficient, i.e., whether by selecting the PDF of the transmitted signals properly (possibly non-Gaussian), there is no need for spreading. Using an extremal inequality of Liu-Viswanath \cite{LV}, we are able to show that transmission of Gaussian signals together with spreading and masking yields higher achievable rates than the largest rate achievable by masking alone.
The rest of the paper is organized as follows. Section II offers the system model; there, we introduce the randomized spreading/masking codes and discuss how all users can consistently design their spreading/masking sequences. Section III presents the development of achievable rates based on the three steps mentioned earlier. System design is addressed in section IV, where we offer several design examples. Section V proves the superiority of combining spreading and masking over masking alone. Concluding remarks are given in section VI.
\textbf{Notation-} Throughout the paper, we denote random quantities in boldface, such as $\boldsymbol{x}$ and $\vec{\boldsymbol{y}}$. A realization of $\boldsymbol{x}$ is denoted by $x$. A circularly symmetric complex Gaussian random vector $\vec{\boldsymbol{x}}$ of length $m$ with zero mean and covariance matrix $C$ is denoted by $\mathcal{CN}(0,C)$. A Bernoulli random variable $\boldsymbol{x}\in\{0,1\}$ with $\Pr\{\boldsymbol{x}=1\}=a\in[0,1]$ is denoted by $\mathrm{Ber}(a)$. For a sequence $(a_{l})_{l=1}^{m}\triangleq(a_{1},\cdots,a_{m})$ and a set $\Xi=\{\xi_{1},\cdots,\xi_{m'}\}\subset\{1,\cdots,m\}$ where $\xi_{1}<\cdots<\xi_{m'}$, we define $(a_{l})_{l\in\Xi}\triangleq (a_{\xi_{1}},\cdots,a_{\xi_{m'}})$. We use $\mathrm{E}\{.\}$ for the expectation operator, $\mathrm{Pr}\{\mathcal{E}\}$ for the probability of an event $\mathcal{E}$, $\mathbb{1}_{\mathcal{E}}$ for the indicator function of an event $\mathcal{E}$ and $p_{\boldsymbol{x}}(.)$ for the PDF of a random variable $\boldsymbol{x}$. Also, $\mathrm{I}(\boldsymbol{x};\boldsymbol{y})$ denotes the mutual information between random variables $\boldsymbol{x}$ and $\boldsymbol{y}$, $\mathrm{h}(\boldsymbol{x})$ the differential entropy of a continuous random variable $\boldsymbol{x}$, and $\mathrm{H}(\boldsymbol{x})$ the entropy of a discrete random variable $\boldsymbol{x}$; the binary entropy function is denoted by $\mathscr{H}(x)\triangleq-x\log x-(1-x)\log(1-x)$ for $x\in[0,1]$. For any $x\in[0,1]$, $\bar{x}$ denotes $1-x$. The Dirac delta function is denoted by $\delta(.)$. For integers $m,n\in\mathbb{N}$, an $m\times n$ matrix in which all elements are $0$ or $1$ is denoted by $0_{m\times n}$ or $1_{m\times n}$, respectively. For sets $A$ and $B$, the set $A\backslash B$ denotes the set of elements in $A$ that are not in $B$. The cardinality of a set $A$ is denoted by $|A|$.
For any two vectors of the same size $\vec{x}$ and $\vec{y}$, the vector $\vec{x}\odot\vec{y}$ is the element-wise product of $\vec{x}$ and $\vec{y}$. For two functions $f(\gamma)$ and $g(\gamma)$ of a variable $\gamma>0$, we write $f\sim g$ if $\lim_{\gamma\to\infty}\frac{f}{\log\gamma}=\lim_{\gamma\to\infty}\frac{g}{\log\gamma}$ and $f\lesssim g$ if $\lim_{\gamma\to\infty}\frac{f}{\log\gamma}\leq \lim_{\gamma\to\infty}\frac{g}{\log\gamma}$. The notation $f\gtrsim g$ is defined similarly.
\section{System Model}
We consider a decentralized communication network of $n$ users\footnote{Each user consists of a separate transmitter-receiver pair.}. The static, non-frequency-selective gain of the channel from the $i^{th}$ transmitter to the $j^{th}$ receiver is denoted by $h_{i,j}$, which is in general a complex number. In a decentralized network, there is no communication or cooperation among different users. Since the network has no fixed infrastructure and there is no central controller to manage the network resources among users, resource allocation and rate assignment must be performed locally at every transmitter. A main feature of such networks is that the $i^{th}$ user is not a priori informed about the channel gains $(h_{j,i})_{j=1}^{n}$ of the links connecting different transmitters to the $i^{th}$ receiver. In fact, every receiver has access only to the interference PDF, and knowledge of the number of active users and the channel gains $(h_{j,i})_{j=1}^{n}$ can only be inferred by analyzing this PDF. Also, different users are not aware of each other's codebooks. As such, no multiuser detection is possible and users treat the interference as noise.
\subsection{Randomized Signature Codes}
In this part, we introduce a distributed signaling strategy using randomized spreading/masking. For positive integers $T$ and $K$, the codebook of the $i^{th}$ user consists of $2^{TR_{i}}$ codewords where a typical codeword $(\boldsymbol{x}_{i,t})_{t=1}^{T}$ is a sequence of $\mathrm{i.i.d.}$ circularly symmetric complex Gaussian random variables with zero mean and variance $\gamma$. The $i^{th}$ user transmits $(\boldsymbol{x}_{i,t})_{t=1}^{T}$ in $T$ \emph{transmission frames} where each transmission frame consists of $K$ \emph{transmission slots}. In a typical transmission frame, one of the signals in the codeword $(\boldsymbol{x}_{i,t})_{t=1}^{T}$ is transmitted. To transmit $\boldsymbol{x}_{i,t}$, the $i^{th}$ user randomly constructs two independent sequences called the spreading and the masking codes. The spreading code is a $K\times 1$ vector $\vec{\boldsymbol{\mathfrak{s}}}_{i,t}$ over an alphabet $\mathscr{A}\subset\mathbb{Z}\backslash\{0\}$ where the elements of $\vec{\boldsymbol{\mathfrak{s}}}_{i,t}$ are $\mathrm{i.i.d.}$ with a globally known PMF $(\mathsf{p}_{a})_{a\in\mathscr{A}}$. The masking code is a $K\times 1$ vector $\vec{\boldsymbol{\mathfrak{m}}}_{i,t}$ whose elements are independent $\mathrm{Ber}(\varepsilon)$ random variables for some $\varepsilon\in(0,1]$. Thereafter, the $i^{th}$ user transmits $\boldsymbol{x}_{i,t}\vec{\boldsymbol{\mathfrak{s}}}_{i,t}\odot\vec{\boldsymbol{\mathfrak{m}}}_{i,t}$ in the $t^{th}$ transmission frame. The vector
\begin{equation}\vec{\boldsymbol{s}}_{i,t}\triangleq\vec{\boldsymbol{\mathfrak{s}}}_{i,t}\odot\vec{\boldsymbol{\mathfrak{m}}}_{i,t}
\end{equation}
is called the randomized signature code of the $i^{th}$ user in the $t^{th}$ transmission frame. We remark that the spreading and masking codes of the $i^{th}$ user over different transmission frames are constructed independently. The alphabet $\mathscr{A}$ has the property that for any $a\in\mathscr{A}$, we have $-a\in\mathscr{A}$. The received vector at the receiver side of the $i^{th}$ user in a typical transmission frame is given by\footnote{We omit the frame index for notation simplicity.}
\begin{equation}
\label{e1}
\vec{\boldsymbol{y}}_{i}=\beta h_{i,i}\boldsymbol{x}_{i}\vec{\boldsymbol{s}}_{i}+\sum_{j\neq i}\beta h_{j,i}\boldsymbol{x}_{j}\vec{\boldsymbol{s}}_{j}+\vec{\boldsymbol{z}}_{i}
\end{equation}
where $\vec{\boldsymbol{z}}_{i}$ is a $\mathcal{CN}(0_{K\times 1},I_{K})$ random vector representing the ambient noise at the $i^{th}$ receiver. Also, $\beta$ is a normalization factor ensuring the average transmission power per symbol of the $i^{th}$ user is $\gamma$, i.e.,\begin{equation}
\label{e11}
\beta^{2}\mathrm{E}\left\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\right\}=1.
\end{equation}
In (\ref{e1}), we have made the assumption that all active users in the network are frame-synchronous meaning their transmission frames start and end at similar time instants. This is not necessarily a valid assumption in a decentralized network, however, this makes the presentation of the subject much easier. It is clear that the transmitted signals of each user along its transmission frames are correlated while signals transmitted in different transmission frames are independent. Hence, we assume any new active user is capable of detecting the correlated segments along the interference plus noise process, and therefore, synchronizing itself with former active users in the network. However, in case different users are not frame-synchronous and users are not aware of the asynchrony pattern, the communication channel of any user is not ergodic anymore and one must perform outage analysis.
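The received vector in \eqref{e1} can be simulated directly. The sketch below assumes the binary alphabet $\{-1,1\}$, for which $\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}=K\varepsilon$ and hence $\beta=1/\sqrt{K\varepsilon}$ by \eqref{e11}; all numerical values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, eps, gamma = 3, 8, 0.5, 10.0   # users, frame length, mask prob, power
H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

# normalization (e11): beta^2 E{||s||^2} = 1; for alphabet {-1, 1},
# E{||s||^2} = K * eps, so beta = 1 / sqrt(K * eps)
beta = 1.0 / np.sqrt(K * eps)

def signature():
    spread = rng.choice([-1, 1], size=K)
    mask = (rng.random(K) < eps).astype(int)
    return spread * mask

x = np.sqrt(gamma / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
S = np.stack([signature() for _ in range(n)], axis=1)    # K x n signatures
z = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

i = 0                               # receiver of user 0; H[j, i] plays h_{j,i}
y = (beta * H[i, i] * x[i] * S[:, i]
     + sum(beta * H[j, i] * x[j] * S[:, j] for j in range(n) if j != i)
     + z)
```

The summation term is exactly the interference $\vec{\boldsymbol{w}}_{i}$, and stacking the signatures column-wise reproduces the factored form $\boldsymbol{S}_{i}\Xi_{i}\vec{\boldsymbol{X}}_{i}$ given later in this section.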
Using joint typicality at the receiver side of the $i^{th}$ user, any data rate $R_{i}\leq \mathsf{C}_{i}$ is achievable where
\begin{equation}
\mathsf{C}_{i}\triangleq \frac{\mathrm{I}(\boldsymbol{x}_{i},\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})}{K}.
\end{equation}
The term $\mathrm{I}(\boldsymbol{x}_{i},\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})$ indicates that the $i^{th}$ user is also embedding information in the sequence of $\mathrm{i.i.d.}$ signature codes. In fact, one can assume the codeword of any user consists of two sequences, namely, the sequence of Gaussian signals and the sequence of randomized signature codes. Since the signature code of any user is not known to other users and, moreover, the signature codes change independently over different transmission frames, the noise plus interference at the receiver side of any user has a mixed Gaussian PDF. As a result, $\mathrm{I}(\boldsymbol{x}_{i},\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})$ admits no closed-form expression. Therefore, we need a tight lower bound on this quantity whose computation requires only data that can be inferred from the noise plus interference PDF at the receiver side of the $i^{th}$ user and fed back to its associated transmitter in order to regulate the transmission rate. Throughout the paper, the interference term at the receiver side of the $i^{th}$ user is denoted by $\vec{\boldsymbol{w}}_{i}$, i.e., $\vec{\boldsymbol{w}}_{i}=\sum_{j\neq i}\beta h_{j,i}\boldsymbol{x}_{j}\vec{\boldsymbol{s}}_{j}$. One can state $\vec{\boldsymbol{w}}_{i}$ as
\begin{equation}
\label{ }
\vec{\boldsymbol{w}}_{i}=\boldsymbol{S}_{i}\Xi_{i}\vec{\boldsymbol{X}}_{i}
\end{equation}
where
\begin{equation}
\label{ }
\boldsymbol{S}_{i}\triangleq\begin{pmatrix}
\vec{\boldsymbol{s}}_{1} & \cdots&\vec{\boldsymbol{s}}_{i-1}&\vec{\boldsymbol{s}}_{i+1}&\cdots&\vec{\boldsymbol{s}}_{n}
\end{pmatrix},
\end{equation}
\begin{equation}
\label{ }
\Xi_{i}\triangleq \mathrm{diag}(h_{1,i},\cdots,h_{i-1,i},h_{i+1,i},\cdots,h_{n,i})
\end{equation}
and
\begin{equation}
\label{ }
\vec{\boldsymbol{X}}_{i}=\begin{pmatrix}
\boldsymbol{x}_{1} & \cdots&\boldsymbol{x}_{i-1}&\boldsymbol{x}_{i+1}&\cdots&\boldsymbol{x}_{n}
\end{pmatrix}^{T}.
\end{equation}
\subsection{Considerations on the Channel Gains and the Number of Active Users }
In general, we assume that the $i^{th}$ receiver is aware of $h_{i,i}$, which can be obtained through a training sequence sent by the $i^{th}$ transmitter. Assuming the channel gains are realizations of $\mathrm{i.i.d.}$ random variables with a continuous PDF, the number of Gaussian components in the mixed Gaussian PDF of the interference in any transmission slot at the receiver side of the $i^{th}$ user is $\left(\frac{|\mathscr{A}|}{2}\right)^{n-1}$ if masking is not performed and $\left(\frac{|\mathscr{A}|}{2}+1\right)^{n-1}$ if masking and spreading are both applied. These levels are of the form $\sum_{j\neq i}a_{j}^{2}|h_{j,i}|^{2}\gamma$ where $a_{j}\in\mathscr{A}$. As such, as long as $|\mathscr{A}|\geq 3$, the number of active users can be obtained by finding the number of interference power levels. However, if $\mathscr{A}=\{-a,a\}$ for some $a\in\mathbb{N}$ and masking is not performed, the interference PDF in any transmission slot is Gaussian (the interference vector over any transmission frame is still mixed Gaussian) with power $a^{2}\gamma\sum_{j\neq i}|h_{j,i}|^{2}$. Therefore, the number of active users cannot be derived by investigating the interference PDF in one transmission slot. In this case, it can be verified that the joint PDF of any two transmission slots in a transmission frame is a mixed Gaussian PDF with $2^{n-1}$ Gaussian components. This yields a method to find $n$ in case $\mathscr{A}$ has only two elements.
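The component count is easy to verify numerically: for generic (continuously drawn) gains, all sums $\sum_{j\neq i}a_{j}^{2}|h_{j,i}|^{2}\gamma$ with $a_{j}\in\mathscr{A}\cup\{0\}$ are distinct with probability one. A small sketch, with an illustrative alphabet and randomly drawn gains:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, gamma = 4, 1.0
A = [-2, -1, 1, 2]                 # symmetric alphabet, |A| = 4
h2 = rng.random(n - 1)             # |h_{j,i}|^2 for the n - 1 interferers

# with masking each interferer contributes a^2 |h|^2 gamma with a in A or 0;
# only a^2 matters, leaving |A|/2 + 1 distinct squared values per interferer
squares = sorted({a * a for a in A} | {0})
levels = {sum(s * h * gamma for s, h in zip(combo, h2))
          for combo in itertools.product(squares, repeat=n - 1)}
num_components = len(levels)       # should equal (|A|/2 + 1)^(n - 1)
```

Dropping $\{0\}$ from `squares` recovers the no-masking count $\left(\frac{|\mathscr{A}|}{2}\right)^{n-1}$.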
By symmetry, the characterization of $\mathsf{C}_{i}$ demands knowledge of an arbitrary reordering of the sequence $(h_{j,i})_{j\neq i}$. In this paper, we derive a lower bound $\mathsf{C}_{i}^{(\mathrm{lb})}$ on $\mathsf{C}_{i}$ which is only a function of the magnitudes of the channel gains. Therefore, we only need to obtain an arbitrary reordering of $(|h_{j,i}|)_{j\neq i}$. Let $(\mathsf{h}^{(i)}_{1},\mathsf{h}^{(i)}_{2},\cdots,\mathsf{h}^{(i)}_{n-1})$ be a reordering of $(h_{j,i})_{j\neq i}$ based on magnitude, i.e., $|\mathsf{h}^{(i)}_{1}|<|\mathsf{h}^{(i)}_{2}|<\cdots<|\mathsf{h}^{(i)}_{n-1}|$. We consider the following cases:
\textit{Case 1-} If $|\mathscr{A}|\geq 4$, let $a$ and $b$ be the two largest elements in $\mathscr{A}$ such that $a>b$. Denoting the $n-1$ largest interference plus noise power levels on each transmission slot by $\pi_{1}<\cdots<\pi_{n-1}$, we have $\beta^{2}\gamma a^{2}\sum_{j=1}^{n-1}|\mathsf{h}_{j}^{(i)}|^{2}+1=\pi_{n-1}$ and $\beta^{2}\gamma a^{2}\sum_{\substack{j=1\\j\neq l}}^{n-1}|\mathsf{h}_{j}^{(i)}|^{2}+\beta^{2}\gamma b^{2}|\mathsf{h}_{l}^{(i)}|^{2}+1=\pi_{n-1-l}$ for $1\leq l\leq n-2$. These $n-1$ linear equations yield $(|\mathsf{h}_{j}^{(i)}|)_{j=1}^{n-1}$.
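The $n-1$ linear equations in Case 1 can be solved in closed form: subtracting $\pi_{n-1-l}$ from $\pi_{n-1}$ isolates $\beta^{2}\gamma(a^{2}-b^{2})|\mathsf{h}_{l}^{(i)}|^{2}$, and the sum equation then yields the last gain. A minimal sketch (hypothetical parameter values; `beta2` stands for $\beta^{2}$):

```python
import numpy as np

def recover_gains_case1(pi, beta2, gamma, a, b):
    """Recover (|h_1|^2,...,|h_{n-1}|^2), sorted increasingly, from the n-1
    largest interference-plus-noise power levels pi_1 < ... < pi_{n-1}:
      pi_{n-1}   = beta2*gamma*a^2*sum_j |h_j|^2 + 1,
      pi_{n-1-l} = pi_{n-1} - beta2*gamma*(a^2 - b^2)*|h_l|^2,  1 <= l <= n-2.
    """
    pi = np.sort(np.asarray(pi, dtype=float))
    total = (pi[-1] - 1.0) / (beta2 * gamma * a ** 2)        # sum of all |h_l|^2
    h2 = (pi[-1] - pi[-2::-1]) / (beta2 * gamma * (a ** 2 - b ** 2))
    return np.append(h2, total - h2.sum())                   # last gain from the sum

# hypothetical example with n - 1 = 3 interferers
beta2, gamma, a, b = 0.5, 10.0, 2.0, 1.0
h2_true = np.array([0.3, 0.7, 1.2])
pi_top = beta2 * gamma * a ** 2 * h2_true.sum() + 1.0
pi = [pi_top - beta2 * gamma * (a ** 2 - b ** 2) * h2_true[l] for l in (1, 0)] + [pi_top]
print(recover_gains_case1(pi, beta2, gamma, a, b))           # ~ [0.3, 0.7, 1.2]
```

The same routine covers Cases 2 and 3 after replacing the difference coefficient $\beta^{2}\gamma(a^{2}-b^{2})$ by $\beta^{2}\gamma$.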
\textit{Case 2-} Let masking be the only ingredient in constructing the signatures, i.e., spreading is not applied. Denoting the $n-1$ largest interference plus noise power levels on each transmission slot by $\pi_{1}<\cdots<\pi_{n-1}$, we have $\beta^{2}\gamma \sum_{j=1}^{n-1}|\mathsf{h}_{j}^{(i)}|^{2}+1=\pi_{n-1}$ and $\beta^{2}\gamma \sum_{\substack{j=1\\j\neq l}}^{n-1}|\mathsf{h}_{j}^{(i)}|^{2}+1=\pi_{n-1-l}$ for $1\leq l\leq n-2$. These $n-1$ linear equations yield $(|\mathsf{h}_{j}^{(i)}|)_{j=1}^{n-1}$.
\textit{Case 3-} Let $\mathscr{A}=\{-a,a\}$ for some $a\in\mathbb{R}^{+}$ and let masking be performed on top of spreading. Then, the same procedure as in Case 2 applies.
\textit{Case 4-} Let $\mathscr{A}=\{-a,a\}$ for some $a\in\mathbb{R}^{+}$ and suppose masking is not applied. The joint PDF of the interference plus noise on any two transmission slots inside a transmission frame is a bivariate mixed Gaussian PDF in which the Gaussian components have covariance matrices of the form
\begin{equation}
\label{ }
\begin{pmatrix}
\beta^{2}\gamma a^{2}\sum_{j=1}^{n-1}|\mathsf{h}_{j}^{(i)}|^{2}+1 & \beta^{2}\gamma a^{2}\sum_{j=1}^{n-1}c_{j}|\mathsf{h}_{j}^{(i)}|^{2} \\
\beta^{2}\gamma a^{2}\sum_{j=1}^{n-1}c_{j}|\mathsf{h}_{j}^{(i)}|^{2} & \beta^{2}\gamma a^{2}\sum_{j=1}^{n-1}|\mathsf{h}_{j}^{(i)}|^{2}+1\end{pmatrix}
\end{equation}
where $c_{j}\in\{-1,1\}$ for $1\leq j\leq n-1$. The $n-2$ largest elements among the off-diagonal elements of these matrices correspond to $\beta^{2}\gamma a^{2}\sum_{\substack{j=1\\j\neq l}}^{n-1}|\mathsf{h}_{j}^{(i)}|^{2}-\beta^{2}\gamma a^{2}|\mathsf{h}_{l}^{(i)}|^{2}$ for $1\leq l\leq n-2$. These elements together with the diagonal element $ \beta^{2}\gamma a^{2}\sum_{j=1}^{n-1}|\mathsf{h}_{j}^{(i)}|^{2}+1$ yield $(|\mathsf{h}_{j}^{(i)}|)_{j=1}^{n-1}$.
Therefore, we have shown that the $i^{th}$ user can find $n$ and a reordering of the sequence $(h_{j,i})_{j\neq i}$.
\subsection{A Global Tool to Design the Randomized Signature Codes}
An important issue in a decentralized network is to propose a globally known utility function to be optimized by all users without any cooperation. As mentioned earlier, the receivers can infer the number of active users in the network and the channel gains by inspecting the interference PDF. We consider a scenario where this information is fed back to the transmitters. There is no closed-form expression for $\mathsf{C}_{i}$; however, we are able to develop a lower bound $\mathsf{C}_{i}^{(\mathrm{lb})}$ on $\mathsf{C}_{i}$ which is tight enough to guarantee \begin{equation}\lim_{\gamma\to\infty}\frac{\mathsf{C}_{i}^{(\mathrm{lb})}}{\log\gamma}=\lim_{\gamma\to\infty}\frac{\mathsf{C}_{i}}{\log\gamma}.\end{equation}
In general, $\mathsf{C}_{i}^{(\mathrm{lb})}$ depends on $\vec{h}_{i}\triangleq(h_{j,i})_{j=1}^{n}$. As such, we denote it explicitly by $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})$. Assuming $(h_{j,i})_{j=1}^{n}$ are realizations of independent $\mathcal{CN}(0,1)$ random variables $(\boldsymbol{h}_{j,i})_{j=1}^{n}$, we propose that the $i^{th}$ user selects $K$, $(\mathsf{p}_{{a}})_{a\in\mathscr{A}}$ and $\varepsilon$ based on
\begin{equation}
\label{rule}
(\hat{K},(\hat{\mathsf{p}}_{a})_{a\in\mathscr{A}},\hat{\varepsilon})=\arg\sup_{K,(\mathsf{p}_{a})_{a\in\mathscr{A}},\varepsilon}\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}.
\end{equation}
After selecting $K$ and $(\mathsf{p}_{a})_{a\in\mathscr{A}}$ using (\ref{rule}), the $i^{th}$ user regulates its actual transmission rate at $R_{i}=\mathsf{C}_{i}^{\mathrm{(lb)}}(\vec{h}_{i})$ using the realization of $\vec{\boldsymbol{h}}_{i}=\vec{h}_{i}$.
\section{A Lower Bound on $\mathrm{I}(\boldsymbol{x}_{i},\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})$}
One can write $\mathrm{I}(\boldsymbol{x}_{i},\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})$ as
\begin{eqnarray}
\label{b1}
\mathrm{I}(\boldsymbol{x}_{i},\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})=\mathrm{I}(\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})+\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})\geq\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}).\end{eqnarray}
The term $\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})$ is the achievable rate of the $i^{th}$ user as if the receiver already knew the randomized signature code $\vec{\boldsymbol{s}}_{i}$; in general, the achievable rate can be larger than in the case where the signature matrices are revealed to the receiver side in advance. The extra term $\mathrm{I}(\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})$ is bounded from above by $\mathrm{H}(\vec{\boldsymbol{s}}_{i})$, which is not a function of SNR. Therefore,
\begin{equation}
\label{e14}
\lim_{\gamma\to\infty}\frac{\mathrm{I}(\boldsymbol{x}_{i},\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})}{\log\gamma}=\lim_{\gamma\to\infty}\frac{\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})}{\log\gamma}.\end{equation}
As such, we ignore the term\footnote{It can be verified that $\mathrm{I}(\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})=\sum_{\vec{s}\in \mathrm{supp}(\vec{\boldsymbol{s}}_{i})}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\} \mathrm{D}\left(p_{\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}}(.|\vec{s})\|p_{\vec{\boldsymbol{y}}_{i}}(.)\right)$. This enables us to compute $\mathrm{I}(\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})$ directly.} $\mathrm{I}(\vec{\boldsymbol{s}}_{i};\vec{\boldsymbol{y}}_{i})$ and focus on developing a tight lower bound on $\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})$.
To develop a lower bound on $\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})$, our major tools are linear processing of the channel output based on the Singular Value Decomposition (SVD) of the signature code $\vec{\boldsymbol{s}}_{i}$, a conditional version of the Entropy Power Inequality and a key upper bound on the differential entropy of a mixed Gaussian random vector.
We have
\begin{equation}
\label{ }
\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})=\sum_{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s}).\end{equation}
In the following, we find a lower bound on $\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s})$ for any $\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}$.
\textbf{Step 1-}
The matrix $\vec{s}\vec{s}^{\dagger}$ has two eigenvalues, namely zero and $\|\vec{s}\|_{2}^{2}$. The eigenvector corresponding to $\|\vec{s}\|_{2}^{2}$ is $\vec{s}$, and the eigenvectors corresponding to zero are $K-1$ orthonormal vectors denoted by $\vec{g}_{i,1},\cdots,\vec{g}_{i,K-1}$ which together with $\frac{\vec{s}}{\|\vec{s}\|_{2}}$ make an orthonormal basis for $\mathbb{R}^{K}$. Let us define
\begin{equation}
\label{ }
G_{i}(\vec{s})\triangleq\begin{pmatrix}
\vec{g}_{i,1}&\cdots&\vec{g}_{i,K-1}
\end{pmatrix},
\end{equation}
\begin{equation}
\label{ }
U_{i}(\vec{s})\triangleq\begin{pmatrix}
\frac{\vec{s}}{\|\vec{s}\|_{2}} &G_{i}(\vec{s})\end{pmatrix}
\end{equation}
and
\begin{equation}
\label{ }
\vec{d}\triangleq\begin{pmatrix}
\|\vec{s}\|_{2} \\
\vec{0}_{(K-1)\times 1}
\end{pmatrix}.
\end{equation}
Writing the SVD of $\vec{s}$,
\begin{equation}
\label{ }
\vec{s}=U_{i}(\vec{s})\vec{d}.
\end{equation}
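As a sanity check, the decomposition $\vec{s}=U_{i}(\vec{s})\vec{d}$ can be reproduced numerically; a sketch in NumPy, where the orthonormal complement $G_{i}(\vec{s})$ is obtained from a QR factorization (one valid choice among many):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4
s = rng.choice([-1.0, 1.0], size=K)            # one realization of the spreading code

# columns 2..K: an orthonormal basis of span(s)^perp, extracted via QR
Q, _ = np.linalg.qr(np.column_stack([s, rng.standard_normal((K, K - 1))]))
U = np.column_stack([s / np.linalg.norm(s), Q[:, 1:]])

d = np.zeros(K)
d[0] = np.linalg.norm(s)
print(np.allclose(U @ d, s))                   # True: s = U(s) d
print(np.allclose(U.T @ U, np.eye(K)))         # True: U(s) is orthogonal
```

Any orthonormal basis of $(\mathrm{span}(\vec{s}))^{\perp}$ works here; only the span of the last $K-1$ columns matters in the derivation.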
The $i^{th}$ receiver constructs the vector $U_{i}^{\dagger}(\vec{s})\vec{\boldsymbol{y}}_{i}\Big|_{\vec{\boldsymbol{s}}_{i}=\vec{s}}$ upon reception of $\vec{\boldsymbol{y}}_{i}$. We have
\begin{eqnarray}
U_{i}^{\dagger}(\vec{s})\vec{\boldsymbol{y}}_{i}\Big|_{\vec{\boldsymbol{s}}_{i}=\vec{s}}=\beta h_{i,i}\boldsymbol{x}_{i}\vec{d}+U_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right).\end{eqnarray}
We define
\begin{eqnarray}
\label{ }
\boldsymbol{\varphi}_{i}&\triangleq&\left[U_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)\right]_{1}\notag\\
&=&\frac{\vec{s}^{\dagger}\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)}{\|\vec{s}\|_{2}}\end{eqnarray}
\begin{equation}
\label{ }
\boldsymbol{\omega}_{i}\triangleq\left[U_{i}^{\dagger}(\vec{s})\vec{\boldsymbol{y}}_{i}\right]_{1}=\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}+\boldsymbol{\varphi}_{i}\end{equation}
and
\begin{eqnarray}
\vec{\boldsymbol{\vartheta}}_{i}&\triangleq&\left[U_{i}^{\dagger}(\vec{s})\vec{\boldsymbol{y}}_{i}\right]_{2}^{K}\notag\\
&=&\left[U_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)\right]_{2}^{K}\notag\\
&=&\left[\begin{pmatrix}
\frac{\vec{s}^{\dagger}}{\|\vec{s}\|_{2}} \\
G_{i}^{\dagger}(\vec{s})
\end{pmatrix}\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)\right]_{2}^{K}\notag\\
&=&G_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right).\end{eqnarray}
We have the following chain of equalities,
\begin{eqnarray}
\label{gh1}
\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s})&=&\mathrm{I}(\boldsymbol{x}_{i};U_{i}^{\dagger}(\vec{s})\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s})\notag\\
&=&\mathrm{I}(\boldsymbol{x}_{i};\boldsymbol{\omega}_{i},\vec{\boldsymbol{\vartheta}}_{i})\notag\\
&=&\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{\vartheta}}_{i})+\mathrm{I}(\boldsymbol{x}_{i};\boldsymbol{\omega}_{i}|\vec{\boldsymbol{\vartheta}}_{i})\notag\\
&\stackrel{(a)}{=}&\mathrm{I}(\boldsymbol{x}_{i};\boldsymbol{\omega}_{i}|\vec{\boldsymbol{\vartheta}}_{i})\end{eqnarray}
where $(a)$ is by the fact that $\boldsymbol{x}_{i}$ and $\vec{\boldsymbol{\vartheta}}_{i}$ are independent, i.e., $\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{\vartheta}}_{i})=0$.
\textbf{Step 2-} In this step, we use the following lemma, a conditional version of the Entropy Power Inequality, which we state without proof.
\begin{lem}
Let $\vec{\mathbf{\Theta}}_{1}$ and $\vec{\mathbf{\Theta}}_{2}$ be $t\times 1$ complex random vectors with densities and let $\mathbf{\Theta}_{3}$ be an arbitrary random quantity (scalar or vector). Also, assume that the conditional densities $p_{\vec{\mathbf{\Theta}}_{1}|\mathbf{\Theta}_{3}}(.|.)$ and $p_{\vec{\mathbf{\Theta}}_{2}|\mathbf{\Theta}_{3}}(.|.)$ exist. If $\vec{\mathbf{\Theta}}_{1}$ and $\vec{\mathbf{\Theta}}_{2}$ are conditionally independent given $\mathbf{\Theta}_{3}$, then
\begin{equation}
\label{ }
2^{\frac{1}{t}\mathrm{h}(\vec{\mathbf{\Theta}}_{1}+\vec{\mathbf{\Theta}}_{2}|\mathbf{\Theta}_{3})}\geq 2^{\frac{1}{t}\mathrm{h}(\vec{\mathbf{\Theta}}_{1}|\mathbf{\Theta}_{3})}+2^{\frac{1}{t}\mathrm{h}(\vec{\mathbf{\Theta}}_{2}|\mathbf{\Theta}_{3})}.\end{equation}
\end{lem}
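For $t=1$ and independent Gaussian terms the inequality of Lemma 1 holds with equality, which can be checked directly from $\mathrm{h}=\log(\pi e\sigma^{2})$; a sketch with hypothetical variances:

```python
import numpy as np

# differential entropy (bits) of a circularly-symmetric complex Gaussian scalar
h = lambda var: np.log2(np.pi * np.e * var)

v1, v2 = 2.0, 5.0                        # hypothetical variances of independent terms
lhs = 2.0 ** h(v1 + v2)                  # 2^{h(T1 + T2)}
rhs = 2.0 ** h(v1) + 2.0 ** h(v2)        # 2^{h(T1)} + 2^{h(T2)}
print(np.isclose(lhs, rhs))              # True: Gaussians attain the EPI with equality
```

For non-Gaussian terms the left-hand side strictly dominates, which is exactly the direction the derivation below exploits.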
We have
\begin{eqnarray}
\label{gh2}
\mathrm{I}(\boldsymbol{x}_{i};\boldsymbol{\omega}_{i}|\vec{\boldsymbol{\vartheta}}_{i})&=&\mathrm{h}(\boldsymbol{\omega}_{i}|\vec{\boldsymbol{\vartheta}}_{i})-\mathrm{h}(\boldsymbol{\omega}_{i}|\boldsymbol{x}_{i},\vec{\boldsymbol{\vartheta}}_{i})\notag\\
&=&\mathrm{h}(\boldsymbol{\omega}_{i}|\vec{\boldsymbol{\vartheta}}_{i})-\mathrm{h}\left(\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}+\boldsymbol{\varphi}_{i}\big|\boldsymbol{x}_{i},\vec{\boldsymbol{\vartheta}}_{i}\right)\notag\\
&\stackrel{}{=}&\mathrm{h}(\boldsymbol{\omega}_{i}|\vec{\boldsymbol{\vartheta}}_{i})-\mathrm{h}(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i}).\end{eqnarray}
On the other hand, we know that $\boldsymbol{\omega}_{i}=\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}+\boldsymbol{\varphi}_{i}$. Defining $\mathbf{\Theta}_{1}\triangleq\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}$ and $\mathbf{\Theta}_{2}\triangleq\boldsymbol{\varphi}_{i}$, it is clear that $\mathbf{\Theta}_{1}$ and $\mathbf{\Theta}_{2}$ are conditionally independent given $\mathbf{\Theta}_{3}\triangleq\vec{\boldsymbol{\vartheta}}_{i}$. As the conditional densities $p_{\mathbf{\Theta}_{1}|\mathbf{\Theta}_{3}}(.|.)$ and $p_{\mathbf{\Theta}_{2}|\mathbf{\Theta}_{3}}(.|.)$ exist, Lemma 1 (with $t=1$) yields
\begin{eqnarray}
\label{poe}
2^{\mathrm{h}\left(\boldsymbol{\omega}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)}&\geq& 2^{\mathrm{h}\left(\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)}+2^{\mathrm{h}\left(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)}\notag\\
&\stackrel{(a)}{=}& 2^{\mathrm{h}\big(\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}\big)}+2^{\mathrm{h}\left(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)}\end{eqnarray}
where $(a)$ is by the fact that $\boldsymbol{x}_{i}$ is independent of $\vec{\boldsymbol{\vartheta}}_{i}$.
Dividing both sides of (\ref{poe}) by $2^{\mathrm{h}(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i})}$ and taking the logarithm,
\begin{eqnarray}
\label{gh3}
&&\mathrm{h}\left(\boldsymbol{\omega}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)-\mathrm{h}\left(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)\notag\\
&\geq& \log\left(2^{\left(\mathrm{h}\big(\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}\big)-\mathrm{h}\left(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)\right)}+1\right).\notag\\\end{eqnarray}
By (\ref{gh1}), (\ref{gh2}) and (\ref{gh3}),
\begin{eqnarray}
\label{goosht}
\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s})&\geq& \log\left(2^{\left(\mathrm{h}\big(\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}\big)-\mathrm{h}\left(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)\right)}+1\right).\notag\\ \end{eqnarray}
\textbf{Step 3-}
We start by stating the following Lemma.
\begin{lem}
Let $\vec{\mathbf{\Theta}}$ be a $t\times 1$ mixed Gaussian random vector with the PDF
\begin{equation}
p_{\vec{\mathbf{\Theta}}}(\vec{\Theta})=\sum_{l=1}^{L}\frac{q_{l}}{\pi^{t}\det \Omega_{l}}\exp\left(-\vec{\Theta}^{\dagger}\Omega_{l}^{-1}\vec{\Theta}\right)
\end{equation}
where $q_{l}\geq 0$ for $1\leq l\leq L$ and $\sum_{l=1}^{L}q_{l}=1$.
Then,
\begin{equation}
\label{polm}
\sum_{l=1}^{L}q_{l}\log\big((\pi e)^{t}\det \Omega_{l}\big)\leq \mathrm{h}(\vec{\mathbf{\Theta}})\leq \sum_{l=1}^{L}q_{l}\log\big((\pi e)^{t}\det \Omega_{l}\big)+\mathrm{H}((q_{l})_{l=1}^{L}).
\end{equation}
\end{lem}
\begin{proof}
Let us define the random matrix $\mathbf{\Omega}\in\{\Omega_{l}: 1\leq l\leq L\}$ such that $\Pr\{\mathbf{\Omega}=\Omega_{l}\}=q_{l}$ and let $\vec{\mathbf{\Upsilon}}$ be a zero mean Gaussian vector with covariance matrix $I_{t}$. Then, one can easily see that $\vec{\mathbf{\Theta}}$ has the same distribution as $\sqrt{\mathbf{\Omega}}\vec{\mathbf{\Upsilon}}$, in which $\sqrt{\mathbf{\Omega}}$ is the conventional square root of a positive semi-definite matrix. Using the inequalities
\begin{eqnarray}
\label{ }
\mathrm{h}(\vec{\mathbf{\Theta}}|\mathbf{\Omega})\leq \mathrm{h}(\vec{\mathbf{\Theta}})&\leq&\mathrm{h}(\vec{\mathbf{\Theta}},\mathbf{\Omega})\notag\\
&=&\mathrm{h}(\vec{\mathbf{\Theta}}|\mathbf{\Omega})+\mathrm{H}(\mathbf{\Omega})\notag\\
&=&\mathrm{h}(\vec{\mathbf{\Theta}}|\mathbf{\Omega})+\mathrm{H}((q_{l})_{l=1}^{L})
\end{eqnarray}
and noting that $\mathrm{h}(\vec{\mathbf{\Theta}}|\mathbf{\Omega})=\sum_{l=1}^{L}q_{l}\log\left((\pi e)^{t}\det\Omega_{l}\right)$, the result is immediate.
\end{proof}
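Lemma 2 can be verified numerically for a scalar ($t=1$) complex mixed Gaussian by radial integration of $-\int p\log_{2}p$; a sketch with hypothetical weights and variances:

```python
import numpy as np

q = np.array([0.3, 0.7])                 # mixture weights q_l
om = np.array([1.0, 9.0])                # component variances (det Omega_l for t = 1)

# circular symmetry: h = -int_0^inf 2*pi*r * p(r) * log2 p(r) dr
r = np.linspace(1e-6, 40.0, 200000)
p = (q / (np.pi * om) * np.exp(-r[:, None] ** 2 / om)).sum(axis=1)
h = -np.sum(2.0 * np.pi * r * p * np.log2(p)) * (r[1] - r[0])

lo = np.sum(q * np.log2(np.pi * np.e * om))   # sum_l q_l log((pi e)^t det Omega_l)
hi = lo - np.sum(q * np.log2(q))              # ... + H((q_l))
print(lo < h < hi)                            # True
```

The gap between the bounds is at most $\mathrm{H}((q_{l})_{l=1}^{L})$, which is bounded and therefore immaterial to the SNR scaling arguments that follow.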
The vector $\vec{\boldsymbol{w}}_{i}$ has a mixed Gaussian distribution where the covariance matrices of its separate Gaussian components correspond to different realizations of the matrix $\beta^{2}\gamma\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}$. This together with Lemma 2 yields
\begin{equation}
\label{bw3}
\mathrm{h}(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i})\leq\mathrm{h}(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}|\boldsymbol{S}_{i})+\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)
\end{equation}
where we have used the fact that $\mathrm{H}\left(\beta^{2}\gamma\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)=\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)$.
One has\begin{eqnarray}
\label{pork}
\mathrm{h}(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i})&=&\mathrm{h}(\boldsymbol{\varphi}_{i},\vec{\boldsymbol{\vartheta}}_{i})-\mathrm{h}(\vec{\boldsymbol{\vartheta}}_{i})\notag\\
&=&\mathrm{h}\left(U_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)\right)-\mathrm{h}\left(G_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)\right)\notag\\
&\stackrel{(a)}{=}&\mathrm{h}\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)-\mathrm{h}\left(G_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)\right)\notag\\
&\stackrel{(b)}{\leq}&\mathrm{h}\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}|\boldsymbol{S}_{i}\right)+\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)\notag\\
&&-\mathrm{h}\left(G_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)\right)\notag\\
&\stackrel{(c)}{\leq}&\mathrm{h}\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}|\boldsymbol{S}_{i}\right)+\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)\notag\\
&&-\mathrm{h}\left(G_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)|\boldsymbol{S}_{i}\right)\end{eqnarray}
where $(a)$ follows by the fact that the matrix $U_{i}(\vec{s})$ is unitary, i.e., $\log|\det(U_{i}(\vec{s}))|=0$, $(b)$ is by (\ref{bw3}) and $(c)$ is a direct consequence of Lemma 2. Given $\boldsymbol{S}_{i}$, the vector $\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}$ is a complex Gaussian vector. Hence,
\begin{eqnarray}
\label{lj1}
\mathrm{h}\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}|\boldsymbol{S}_{i}\right)= K\log(\pi e)+\sum_{S\in\mathrm{supp}(\boldsymbol{S}_{i})}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(I_{K}+\beta^{2}\gamma S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}\right).\notag\\ \end{eqnarray}
By the same token,
\begin{eqnarray}
\label{lj2}
&&\mathrm{h}\left(G_{i}^{\dagger}(\vec{s})\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}\right)|\boldsymbol{S}_{i}\right)=(K-1)\log(\pi e)\notag\\&&+\sum_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})}}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(I_{K-1}+\beta^{2}\gamma G_{i}^{\dagger}(\vec{s})S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}G_{i}(\vec{s})\right).\notag\\ \end{eqnarray}
Using (\ref{lj1}) and (\ref{lj2}) in (\ref{pork}),
\begin{eqnarray}
&&\mathrm{h}(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i})\leq \log(\pi e)+\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)\notag\\
&&+\sum_{S\in\mathrm{supp}(\boldsymbol{S}_{i})}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(I_{K}+\beta^{2}\gamma S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}\right)\notag\\
&&-\sum_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})}}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(I_{K-1}+\beta^{2}\gamma G_{i}^{\dagger}(\vec{s})S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}G_{i}(\vec{s})\right).\end{eqnarray}
Moreover, $\mathrm{h}\big(\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}\big)=\log\left(\pi e\beta^{2}|h_{i,i}|^{2}\|\vec{s}\|_{2}^{2}\gamma\right)$. Hence, $\mathrm{h}\big(\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}\big)-\mathrm{h}\left(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)$ appearing in (\ref{goosht}) can be bounded from below as
\begin{eqnarray}
\label{bone}
&&\mathrm{h}\big(\beta h_{i,i}\|\vec{s}\|_{2}\boldsymbol{x}_{i}\big)-\mathrm{h}\left(\boldsymbol{\varphi}_{i}|\vec{\boldsymbol{\vartheta}}_{i}\right)\geq\log\left(\beta^{2}|h_{i,i}|^{2}\|\vec{s}\|_{2}^{2}\gamma\right)-\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)\notag\\
&&-\sum_{S\in\mathrm{supp}(\boldsymbol{S}_{i})}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(I_{K}+\beta^{2}\gamma S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}\right)\notag\\
&&+\sum_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})}}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(I_{K-1}+\beta^{2}\gamma G_{i}^{\dagger}(\vec{s})S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}G_{i}(\vec{s})\right).\end{eqnarray}
Substituting (\ref{bone}) in (\ref{goosht}),
\begin{eqnarray}
\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s})\geq\log\left(2^{-\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)}\varrho_{i}(\gamma;\vec{s})+1\right)\end{eqnarray}
where
\begin{eqnarray}
\label{booh}
\varrho_{i}(\gamma;\vec{s})\triangleq \frac{|h_{i,i}|^{2}\|\vec{s}\|_{2}^{2}\gamma}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}}\prod_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})}}\left(\frac{\det\left(I_{K-1}+\beta^{2}\gamma G_{i}^{\dagger}(\vec{s})S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}G_{i}(\vec{s})\right)}{\det\left(I_{K}+\beta^{2}\gamma S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}\right)}\right)^{\Pr\{\boldsymbol{S}_{i}=S\}}.\end{eqnarray}
Finally, we get the following lower bound on $\frac{\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})}{K}$ denoted by $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})$, i.e.,
\begin{equation}
\label{fuv}
\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})\triangleq\frac{1}{K}\sum_{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\log\left(2^{-\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)}\varrho_{i}(\gamma;\vec{s})+1\right).\end{equation}
An important observation is that if the $i^{th}$ user sets its transmission rate at $R_{i}=\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})$, then
\begin{equation}\lim_{\gamma\to\infty}\frac{R_{i}}{\log\gamma}=\frac{\Pr\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\}}{K}.\end{equation} To prove this, we need some preliminary results from linear algebra.
\textit{Definition 2}- Let $\mathscr{E}$ be a Euclidean space over $\mathbb{R}$ and $\mathscr{U}$ be a subspace of $\mathscr{E}$. We define
\begin{equation}
\label{ }
\mathscr{U}^{\perp}\triangleq\{v\in \mathscr{E}: v\perp u, \forall u\in\mathscr{U}\}.
\end{equation}
\begin{lem}
Let $\mathscr{E}$ be a Euclidean space over $\mathbb{R}$. If $\mathscr{U}$ is a subspace of $\mathscr{E}$, then for each $v\in\mathscr{E}$, there are unique elements $v_{1}\in\mathscr{U}$ and $v_{2}\in\mathscr{U}^{\perp}$ such that $v=v_{1}+v_{2}$.
\end{lem}
\textit{Definition 3}- In the setup of Lemma 3, $v_{1}$ is called the projection of $v$ in $\mathscr{U}$ and is denoted by $\mathrm{proj}(v;\mathscr{U})$. By the same token, $v_{2}=\mathrm{proj}(v;\mathscr{U}^{\perp})$.
\textit{Definition 4}- Let $\mathscr{E}$ be a Euclidean space over $\mathbb{R}$ and $\mathscr{U}_{1}$ and $\mathscr{U}_{2}$ be subspaces of $\mathscr{E}$. We define
\begin{equation}
\label{ }
\mathrm{proj}(\mathscr{U}_{1};\mathscr{U}_{2})\triangleq \mathrm{span}\{\mathrm{proj}(v;\mathscr{U}_{2}) : v\in\mathscr{U}_{1}\}.
\end{equation}
\begin{lem}
Let $\mathscr{E}$ be a Euclidean vector space over $\mathbb{R}$ and $\mathscr{U}_{1}$ and $\mathscr{U}_{2}$ be subspaces of $\mathscr{E}$. Then,
\begin{equation}
\label{ }
\mathrm{dim}(\mathscr{U}_{1}+\mathscr{U}_{2})=\mathrm{dim}(\mathscr{U}_{1})+\mathrm{dim}(\mathrm{proj}(\mathscr{U}_{2};\mathscr{U}_{1}^{\perp})).
\end{equation}
\end{lem}
\begin{lem}
Let $X$ be a $p\times q$ matrix such that $\mathrm{rank}(X)=q$. Then, for any $q\times r$ matrix $Y$, we have $\mathrm{rank}(XY)=\mathrm{rank}(Y)$.
\end{lem}
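The rank-preservation property above (multiplication by a full-column-rank matrix leaves the rank unchanged) can be spot-checked numerically; a sketch with hypothetical Gaussian matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, r = 6, 4, 5
X = rng.standard_normal((p, q))                                # rank q almost surely
Y = rng.standard_normal((q, 2)) @ rng.standard_normal((2, r))  # rank 2 by construction

print(np.linalg.matrix_rank(X) == q)                                  # True
print(np.linalg.matrix_rank(X @ Y) == np.linalg.matrix_rank(Y) == 2)  # True
```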
\begin{proposition}
Regulating its transmission rate at $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})$, the $i^{th}$ user achieves an SNR scaling of \begin{equation}
\label{}
\lim_{\gamma\to\infty}\frac{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})}{\log\gamma}=\frac{\Pr\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\}}{K}.\end{equation}
\end{proposition}
\begin{proof}
Using the fact that for any matrix $X$, $\mathrm{rank}(XX^{\dagger})=\mathrm{rank}(X)$, it is easy to see that for any $\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})$ and $S\in\mathrm{supp}(\boldsymbol{S}_{i})$, the term $\log\det\left(I_{K-1}+\beta^{2}\gamma G_{i}^{\dagger}(\vec{s})S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}G_{i}(\vec{s})\right)$ scales like $\mathrm{rank}(G_{i}^{\dagger}(\vec{s})S)\log\gamma$ and $\log\det\left(I_{K}+\beta^{2}\gamma S\Xi_{i}\Xi_{i}^{\dagger}S^{\dagger}\right)$ scales like $\mathrm{rank}(S)\log\gamma$. This yields
\begin{eqnarray}
\label{bghm}
\lim_{\gamma\to\infty}\frac{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})}{\log\gamma}&=&\sum_{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}
\notag\\&&+\sum_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})\\\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}}}\Pr\{\boldsymbol{S}_{i}=S\}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\mathrm{rank}(G_{i}^{\dagger}(\vec{s})S)\notag\\
&&-\sum_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})\\\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}}}\Pr\{\boldsymbol{S}_{i}=S\}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\mathrm{rank}(S)\notag\\
&=&\Pr\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}\notag\\&&+\sum_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})\\\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})}}\Pr\{\boldsymbol{S}_{i}=S\}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\mathrm{rank}(G_{i}^{\dagger}(\vec{s})S)\notag\\
&&-\sum_{S\in\mathrm{supp}(\boldsymbol{S}_{i})}\Pr\{\boldsymbol{S}_{i}=S\}\Pr\{\vec{\boldsymbol{s}}_{i}=0_{K\times 1}\}\mathrm{rank}(G_{i}^{\dagger}(0_{K\times 1})S)\notag\\&&-\mathrm{E}\{\mathrm{rank}(\boldsymbol{S}_{i})\}\Pr\{\vec{\boldsymbol{s}}_{i}\neq 0_{K\times 1}\}\notag\\
&\stackrel{(a)}{=}&\Pr\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}\notag\\&&+\mathrm{E}\left\{\mathrm{rank}(\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i})\right\}\notag\\
&&-\mathrm{E}\{\mathrm{rank}(\boldsymbol{S}_{i})\}\Pr\{\vec{\boldsymbol{s}}_{i}= 0_{K\times 1}\}\notag\\&&-\mathrm{E}\{\mathrm{rank}(\boldsymbol{S}_{i})\}\Pr\{\vec{\boldsymbol{s}}_{i}\neq 0_{K\times 1}\}\notag\\&=&\Pr\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}+\mathrm{E}\left\{\mathrm{rank}(\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i})-\mathrm{rank}(\boldsymbol{S}_{i})\right\}\end{eqnarray}
where $(a)$ is by the fact that $G_{i}(0_{K\times 1})=I_{K}$.
We show that
\begin{equation}
\label{ }
\mathbb{1}_{\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}}+\mathrm{rank}(\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i})=\mathrm{rank}\left([\boldsymbol{S}_i|\vec{\boldsymbol{s}}_i]\right)\end{equation}
holds almost surely.
Let us write
\begin{equation}
\label{kom}
\mathrm{rank}([\boldsymbol{S}_i|\vec{\boldsymbol{s}}_i])=\mathrm{dim}(\mathrm{span}(\vec{\boldsymbol{s}}_i)+\mathrm{csp}(\boldsymbol{S}_{i})).
\end{equation}
Applying Lemma 4 to the right-hand side of (\ref{kom}),
\begin{eqnarray}
\label{bzz6}
\mathrm{rank}([\boldsymbol{S}_i|\vec{\boldsymbol{s}}_i])&=&\mathrm{dim}(\mathrm{span}(\vec{\boldsymbol{s}}_i))+\mathrm{dim}(\mathrm{proj}(\mathrm{csp}(\boldsymbol{S}_{i});(\mathrm{span}(\vec{\boldsymbol{s}}_i))^{\perp}))\notag\\
&=&\mathbb{1}_{\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}}+\mathrm{dim}(\mathrm{proj}(\mathrm{csp}(\boldsymbol{S}_{i});(\mathrm{span}(\vec{\boldsymbol{s}}_i))^{\perp})).\end{eqnarray}
On the other hand, by the definition of $\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})$,
\begin{equation}
\label{bzz4}(\mathrm{span}(\vec{\boldsymbol{s}}_i))^{\perp}=\mathrm{csp}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})).\end{equation} It is easily seen that for any $1\leq k\leq n-1$, the $k^{th}$ column of the matrix $\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i}$ yields the proper linear combination of the columns of $\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})$ which constructs the projection of the $k^{th}$ column of $\boldsymbol{S}_{i}$ onto the space $\mathrm{csp}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i}))$, i.e.,
\begin{equation}
\label{bzz3}
\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})[\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i}]_{k}=\mathrm{proj}\left([\boldsymbol{S}_{i}]_{k};\mathrm{csp}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i}))\right).
\end{equation}
Therefore,
\begin{eqnarray}
\label{bzz2}
\mathrm{span}\left(\Big\{\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})[\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i}]_{k}\Big\}_{k=1}^{n-1}\right)&=&\mathrm{proj}\left(\mathrm{span}\left(\Big\{[\boldsymbol{S}_{i}]_{k}\Big\}_{k=1}^{n-1}\right);\mathrm{csp}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i}))\right)\notag\\
&=&\mathrm{proj}(\mathrm{csp}(\boldsymbol{S}_{i});\mathrm{csp}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i}))).\end{eqnarray}
However, \begin{equation}
\label{bzz1}
\mathrm{span}\left(\Big\{\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})[\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i}]_{k}\Big\}_{k=1}^{n-1}\right)=\mathrm{csp}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i}).\end{equation}
By (\ref{bzz1}) and (\ref{bzz2}),
\begin{equation}
\label{bzz5}
\mathrm{proj}(\mathrm{csp}(\boldsymbol{S}_{i});\mathrm{csp}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})))=\mathrm{csp}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i}).\end{equation}
Using (\ref{bzz5}) and (\ref{bzz4}) in (\ref{bzz6}),
\begin{eqnarray}
\mathrm{rank}([\boldsymbol{S}_{i}|\vec{\boldsymbol{s}}_{i}])&=&\mathbb{1}_{\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}}+\mathrm{dim}(\mathrm{csp}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i}))\notag\\
&=&\mathbb{1}_{\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}}+\mathrm{rank}(\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i})\notag\\
&\stackrel{(a)}{=}&\mathbb{1}_{\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}}+\mathrm{rank}(\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i})\end{eqnarray}
where $(a)$ follows by Lemma 5 as $\boldsymbol{G}_{i}(\vec{\boldsymbol{s}}_{i})$ has independent columns.
Taking expectations on both sides,
\begin{equation}
\label{ }
\mathrm{E}\left\{\mathrm{rank}([\boldsymbol{S}_{i}|\vec{\boldsymbol{s}}_{i}])\right\}=\Pr\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}+\mathrm{E}\left\{\mathrm{rank}(\boldsymbol{G}_{i}^{\dagger}(\vec{\boldsymbol{s}}_{i})\boldsymbol{S}_{i})\right\}.
\end{equation}
Using this in (\ref{bghm}),
\begin{eqnarray}
\lim_{\gamma\to\infty}\frac{\mathsf{C}_{i}^{(\mathrm{lb})}}{\log\gamma}&=&\mathrm{E}\left\{\mathrm{rank}([\boldsymbol{S}_{i}|\vec{\boldsymbol{s}}_{i}])-\mathrm{rank}(\boldsymbol{S}_{i})\right\}\notag\\
&=&\Pr\left\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\right\}.
\end{eqnarray}
This completes the proof.
\end{proof}
Finally, the following proposition proves that $\mathsf{C}_{i}$ and $\mathsf{C}_{i}^{(\mathrm{lb})}$ have the same SNR scaling.
\begin{proposition}
$\mathsf{C}_{i}^{(\mathrm{lb})}$ and $\mathsf{C}_{i}$ have the same SNR scaling.
\end{proposition}
\begin{proof}
See Appendix A.
\end{proof}
An important consequence of Proposition 1 is the following observation. Since all users utilize the same algorithm to construct their randomized signature codes, the achievable SMG is
\begin{equation}
\label{canada}
\mathsf{SMG}(n)=\frac{n\Pr\left\{\vec{\boldsymbol{s}}_{1}\notin\mathrm{csp}\left([\vec{\boldsymbol{s}}_{2}|\vec{\boldsymbol{s}}_{3}|\cdots|\vec{\boldsymbol{s}}_{n-1}|\vec{\boldsymbol{s}}_{n}]\right)\right\}}{K}.
\end{equation}
Computing $\Pr\left\{\vec{\boldsymbol{s}}_{1}\notin\mathrm{csp}\left([\vec{\boldsymbol{s}}_{2}|\vec{\boldsymbol{s}}_{3}|\cdots|\vec{\boldsymbol{s}}_{n-1}|\vec{\boldsymbol{s}}_{n}]\right)\right\}$ can be quite a tedious task, especially for $n\geq 3$. Let the underlying alphabet for constructing the spreading codes be $\{-1,1\}$. Here, we examine two particular RSCs by computing the achieved $\mathsf{SMG}(n)$ through simulations for the cases where masking is applied or ignored. In each case, we assume the elements of any randomized spreading code are selected independently and uniformly over $\{-1,1\}$, i.e., $\mathsf{p}_{1}=\mathsf{p}_{-1}=\frac{1}{2}$. In case masking is applied, we set $\varepsilon=\frac{1}{2}$. Taking $K=n$, the results are sketched in Fig. \ref{f2}. It is seen that
\textit{1-} By increasing $n$, the achieved $\mathsf{SMG}(n)$ approaches unity in both cases. This is the SMG of a frequency division scenario where interference is completely avoided.
\textit{2-} Masking improves the SMG.
\begin{figure}[h!b!t]
\centering
\includegraphics[scale=.7] {PND1.png}
\caption{Comparison of the achieved SMG with/without masking where users construct their spreading codes on $\{-1,1\}$ using a uniform PMF $\mathsf{p}_{1}=\mathsf{p}_{-1}=\frac{1}{2}$. It is assumed that $K=n$. In case masking is applied, we have $\varepsilon=\frac{1}{2}$.}
\label{f2}
\end{figure}
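The simulation underlying Fig. \ref{f2} can be reproduced with a short Monte Carlo sketch (the function names and trial count are ours); we estimate $\Pr\{\vec{\boldsymbol{s}}_{1}\notin\mathrm{csp}(\boldsymbol{S}_{1})\}$ by testing whether adjoining $\vec{\boldsymbol{s}}_{1}$ to $\boldsymbol{S}_{1}$ increases the rank:

```python
import random
from fractions import Fraction

def rank(rows):
    """Exact rank of a list-of-rows matrix via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def draw_code(K, eps):
    """One masked spreading code: each chip is 0 w.p. 1-eps, else uniform +/-1."""
    return [random.choice((-1, 1)) if random.random() < eps else 0
            for _ in range(K)]

def smg_estimate(n, K, eps, trials=20000):
    """Monte Carlo estimate of SMG(n) = n Pr{s_1 not in csp(S_1)} / K.
    Codes are stored as rows; rank is invariant under transposition."""
    hits = 0
    for _ in range(trials):
        s1 = draw_code(K, eps)
        S1 = [draw_code(K, eps) for _ in range(n - 1)]
        # s_1 lies outside csp(S_1) iff adjoining it increases the rank
        hits += rank(S1 + [s1]) > rank(S1)
    return n * hits / (trials * K)
```

For $n=K=2$ and $\varepsilon=\frac{1}{2}$, the estimate is consistent with the closed-form value $0.59375$ obtained later from (\ref{boro}).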
\textit{Example 1-} Let us consider an RSC scheme where $K=1$, i.e., no spreading is applied. In this case, for each $1\leq i\leq n$, the vector $\vec{\boldsymbol{s}}_{i}$ reduces to a scalar $\boldsymbol{s}_{i}\in\{0,1\}$ which is simply a $\mathrm{Ber}(\varepsilon)$ random variable for some $\varepsilon\in(0,1]$. Hence,
\begin{eqnarray}
\Pr\left\{\vec{\boldsymbol{s}}_{1}\notin\mathrm{csp}\left([\vec{\boldsymbol{s}}_{2}|\vec{\boldsymbol{s}}_{3}|\cdots|\vec{\boldsymbol{s}}_{n-1}|\vec{\boldsymbol{s}}_{n}]\right)\right\}&=&\Pr\left\{\boldsymbol{s}_{1}\notin\mathrm{span}\big(\left\{\boldsymbol{s}_{2},\boldsymbol{s}_{3},\cdots,\boldsymbol{s}_{n-1},\boldsymbol{s}_{n}\right\}\big)\right\}\notag\\
&=&\bar{\varepsilon}\Pr\left\{0\notin\mathrm{span}\big(\left\{\boldsymbol{s}_{2},\boldsymbol{s}_{3},\cdots,\boldsymbol{s}_{n-1},\boldsymbol{s}_{n}\right\}\big)\right\}\notag\\
&&+\varepsilon\Pr\left\{1\notin\mathrm{span}\big(\left\{\boldsymbol{s}_{2},\boldsymbol{s}_{3},\cdots,\boldsymbol{s}_{n-1},\boldsymbol{s}_{n}\right\}\big)\right\}\notag\\
&\stackrel{(a)}{=}&\varepsilon\Pr\left\{1\notin\mathrm{span}\big(\left\{\boldsymbol{s}_{2},\boldsymbol{s}_{3},\cdots,\boldsymbol{s}_{n-1},\boldsymbol{s}_{n}\right\}\big)\right\}\notag\\
&\stackrel{(b)}{=}&\varepsilon\Pr\left\{\boldsymbol{s}_{2}=\boldsymbol{s}_{3}=\cdots=\boldsymbol{s}_{n-1}=\boldsymbol{s}_{n}=0\right\}\notag\\
&=&\varepsilon(1-\varepsilon)^{n-1}
\end{eqnarray}
where $(a)$ is by the fact that $\Pr\left\{0\notin\mathrm{span}\big(\left\{\boldsymbol{s}_{2},\boldsymbol{s}_{3},\cdots,\boldsymbol{s}_{n-1},\boldsymbol{s}_{n}\right\}\big)\right\}=0$ and $(b)$ is by the fact that $1\notin\mathrm{span}\big(\left\{\boldsymbol{s}_{2},\boldsymbol{s}_{3},\cdots,\boldsymbol{s}_{n-1},\boldsymbol{s}_{n}\right\}\big)$ if and only if $\boldsymbol{s}_{i}=0$ for $2\leq i\leq n$. Maximizing $\varepsilon(1-\varepsilon)^{n-1}$ over $\varepsilon$ (the maximum occurs at $\varepsilon=\frac{1}{n}$), a Sum Multiplexing Gain of $\left(1-\frac{1}{n}\right)^{n-1}$ is achieved. As $n$ increases, the achieved $\mathsf{SMG}(n)$ decreases towards $\frac{1}{e}<1$. Comparing this to the results in Fig. \ref{f2}, spreading the signals ($K=n$ compared to $K=1$) can greatly improve the Sum Multiplexing Gain in the network. $\square$
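The optimization in Example 1 is one-dimensional and can be checked numerically (a minimal sketch; the grid resolution is ours):

```python
import math

def smg_no_spreading(n, eps):
    """SMG(n) for K = 1: n * Pr{s_1 nonzero, all others zero} = n*eps*(1-eps)^(n-1)."""
    return n * eps * (1 - eps) ** (n - 1)

def best_smg(n, grid=10000):
    """Maximize the SMG over the masking probability eps on a uniform grid."""
    return max(smg_no_spreading(n, k / grid) for k in range(1, grid + 1))
```

The maximum sits at $\varepsilon=\frac{1}{n}$, giving $\left(1-\frac{1}{n}\right)^{n-1}$, which decreases towards $\frac{1}{e}\approx 0.368$.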
\textit{Example 2-} Let $n=2$ and $\mathscr{A}=\{-1,1\}$. For $i\in\{1,2\}$, elements of $\vec{\boldsymbol{s}}_{i}$ are $\mathrm{i.i.d.}$ random variables taking the values $0$, $1$ and $-1$ with probabilities $\overline{\varepsilon}$, $\varepsilon\mathsf{p}_{1}$ and $\varepsilon\mathsf{p}_{-1}$ respectively. We have
\begin{eqnarray}
\Pr\{\vec{\boldsymbol{s}}_{1}\notin\mathrm{span}(\left\{\vec{\boldsymbol{s}}_{2}\right\})\}&=&1-\Pr\{\vec{\boldsymbol{s}}_{1}\in\mathrm{span}(\left\{\vec{\boldsymbol{s}}_{2}\right\})\}\notag\\
&=&1-\Pr\{\vec{\boldsymbol{s}}_{1}=0_{K\times 1}\}-\Pr\{\vec{\boldsymbol{s}}_{1}\neq0_{K\times 1},\vec{\boldsymbol{s}}_{1}=\pm\vec{\boldsymbol{s}}_{2}\}\notag\\
&=&1-\overline{\varepsilon}^{K}-\Pr\{\vec{\boldsymbol{s}}_{1}\neq0_{K\times 1},\vec{\boldsymbol{s}}_{1}=\vec{\boldsymbol{s}}_{2}\}-\Pr\{\vec{\boldsymbol{s}}_{1}\neq0_{K\times 1},\vec{\boldsymbol{s}}_{1}=-\vec{\boldsymbol{s}}_{2}\}.\end{eqnarray}
However,
\begin{eqnarray}
\Pr\{\vec{\boldsymbol{s}}_{1}\neq0_{K\times 1},\vec{\boldsymbol{s}}_{1}=\vec{\boldsymbol{s}}_{2}\}&=&\sum_{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{1})\backslash\{0_{K\times 1}\}}\left(\Pr\{\vec{\boldsymbol{s}}_{1}=\vec{s}\}\right)^{2}\notag\\
&=&\sum_{k=0}^{K-1}\sum_{l=0}^{K-k}{K\choose k}{K-k\choose l}\overline{\varepsilon}^{2k}(\varepsilon\mathsf{p}_{1})^{2l}(\varepsilon\mathsf{p}_{-1})^{2(K-k-l)}\notag\\
&=&\left(\overline{\varepsilon}^{2}+\varepsilon^{2}(\mathsf{p}_{1}^{2}+\mathsf{p}_{-1}^{2})\right)^{K}-\overline{\varepsilon}^{2K}.
\end{eqnarray}
Similarly,
\begin{equation}
\label{ }
\Pr\{\vec{\boldsymbol{s}}_{1}\neq0_{K\times 1},\vec{\boldsymbol{s}}_{1}=-\vec{\boldsymbol{s}}_{2}\}=\left(\overline{\varepsilon}^{2}+2\varepsilon^{2}\mathsf{p}_{1}\mathsf{p}_{-1}\right)^{K}-\overline{\varepsilon}^{2K}.
\end{equation}
Therefore,
\begin{eqnarray}
\label{ }
\mathsf{SMG}(2)&=&\frac{2\Pr\{\vec{\boldsymbol{s}}_{1}\notin\mathrm{span}(\left\{\vec{\boldsymbol{s}}_{2}\right\})\}}{K}\notag\\&=&\frac{2}{K}\left(1-\overline{\varepsilon}^{K}+2\overline{\varepsilon}^{2K}-\left(\overline{\varepsilon}^{2}+\varepsilon^{2}(\mathsf{p}_{1}^{2}+\mathsf{p}_{-1}^{2})\right)^{K}-\left(\overline{\varepsilon}^{2}+2\varepsilon^{2}\mathsf{p}_{1}\mathsf{p}_{-1}\right)^{K}\right).\end{eqnarray}
This expression is maximized at $\mathsf{p}_{1}=\mathsf{p}_{-1}=\frac{1}{2}$ uniformly for any $\varepsilon\in(0,1]$ and $K\geq 1$. Thus,
\begin{equation}
\label{boro}
\sup_{\mathsf{p}_{1},\mathsf{p}_{-1}}\mathsf{SMG}(2)=\frac{2\left(1-\overline{\varepsilon}^{K}+2\overline{\varepsilon}^{2K}-2\left(\overline{\varepsilon}^{2}+\frac{\varepsilon^{2}}{2}\right)^{K}\right)}{K}.
\end{equation}
This function is maximized at $K=2$ and $\varepsilon=0.756$ where an SMG of $\sup_{\varepsilon, K, \mathsf{p}_{1},\mathsf{p}_{-1}}\mathsf{SMG}(2)=0.7091$ is achieved. We notice that
\textit{1-} Although one's intuition expects $\varepsilon=\frac{1}{2}$ to be the best choice of the On-Off probability, the \emph{optimum} masking probability is not $\frac{1}{2}$.
\textit{2-} Compared to the Sum Multiplexing Gain of $\frac{1}{2}$ achieved in Example 1 without spreading, we see that spreading in fact increases the achieved SMG. $\square$
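The maximization in Example 2 can be reproduced numerically from (\ref{boro}) (a sketch; the grid resolution is ours):

```python
def smg2(K, eps):
    """SMG(2) from the closed form at p_1 = p_{-1} = 1/2."""
    eb = 1.0 - eps
    return 2.0 * (1 - eb**K + 2 * eb**(2 * K)
                  - 2 * (eb**2 + eps**2 / 2) ** K) / K

# joint grid search over the code length K and the masking probability eps
val, K_opt, eps_opt = max((smg2(K, k / 1000), K, k / 1000)
                          for K in range(1, 11) for k in range(1, 1001))
```

The search recovers the optimum $K=2$, $\varepsilon\approx0.756$ and the value $\approx0.7091$ stated above.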
\textit{Remark 1-} For any $m_{1}\times m_{2}$ matrix $A$ and any invertible $m_{1}\times m_{1}$ diagonal matrix $D$, we have $\mathrm{rank}(AA^{\dagger}D)=\mathrm{rank}(A)$. Using this, for any $\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}$ one can write
\begin{eqnarray}
\label{booh}
\varrho_{i}(\gamma;\vec{s})\triangleq \frac{|h_{i,i}|^{2}\|\vec{s}\|_{2}^{2}\gamma}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}}\prod_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})}}\frac{\prod_{l=1}^{\mathrm{rank}(G_{i}^{\dagger}(\vec{s})S)}\left(1+\frac{\gamma \lambda_{G_{i}^{\dagger}(\vec{s})S\Xi_{i}}^{(l)}}{\mathrm{E}\left\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\right\}}\right)^{\Pr\{\boldsymbol{S}_{i}=S\}}}{\prod_{l=1}^{\mathrm{rank(S)}}\left(1+\frac{\gamma \lambda_{S\Xi_{i}}^{(l)}}{\mathrm{E}\left\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\right\}}\right)^{\Pr\{\boldsymbol{S}_{i}=S\}}}.\end{eqnarray}
where we have substituted $\beta^{2}=\frac{1}{\mathrm{E}\left\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\right\}}$ and, by definition, $(\lambda_{A}^{(l)})_{l=1}^{\mathrm{rank}(A)}$ denote the nonzero eigenvalues of the matrix $AA^{\dagger}$.
For sufficiently large SNR values, one can write $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})$ given in (\ref{fuv}) as
\begin{eqnarray}
\label{fuvvv}
&&\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})\approx\frac{\Pr\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\}}{K}\log\gamma-\frac{\Pr\{\vec{\boldsymbol{s}}_{i}\neq 0_{K\times 1}\}\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)}{K}\notag\\
&&+\frac{1}{K}\sum_{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\log\left(\frac{|h_{i,i}|^{2}\|\vec{s}\|_{2}^{2}}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}} \prod_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})}}\frac{\prod_{l=1}^{\mathrm{rank}(G_{i}^{\dagger}(\vec{s})S)}\left(\frac{ \underline{\pi}_{i}\lambda_{G_{i}^{\dagger}(\vec{s})S\Xi_{i}}^{(l)}}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}}\right)^{\Pr\{\boldsymbol{S}_{i}=S\}}}{ \prod_{l=1}^{\mathrm{rank}(S)}\left(\frac{\overline{\pi}_{i}\lambda_{S\Xi_{i}}^{(l)}}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}}\right)^{\Pr\{\boldsymbol{S}_{i}=S\}}}\right).\notag\\\end{eqnarray}
There are three major factors playing a role in the formulation of $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})$ in the high SNR regime, namely, the \emph{Multiplexing Gain per user},
\begin{equation}
\label{MG}
\mathsf{MG}\triangleq\frac{\Pr\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\}}{K},\end{equation}
the \emph{Interference Entropy Factor},
\begin{equation}
\label{ }
\mathsf{IEF}\triangleq \frac{\Pr\{\vec{\boldsymbol{s}}_{i}\neq0_{K\times 1}\}\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)}{K}\end{equation}
and the \emph{Channel plus Signature Factor}
\begin{equation}
\label{ }
\mathsf{CSF}_{i}\triangleq \frac{1}{K}\sum_{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\log\left(\frac{|h_{i,i}|^{2}\|\vec{s}\|_{2}^{2}}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}} \prod_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})}}\frac{\prod_{l=1}^{\mathrm{rank}(G_{i}^{\dagger}(\vec{s})S)}\left(\frac{\lambda_{G_{i}^{\dagger}(\vec{s})S\Xi_{i}}^{(l)}}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}}\right)^{\Pr\{\boldsymbol{S}_{i}=S\}}}{ \prod_{l=1}^{\mathrm{rank}(S)}\left(\frac{\lambda_{S\Xi_{i}}^{(l)}}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}}\right)^{\Pr\{\boldsymbol{S}_{i}=S\}}}\right).\end{equation}
In fact,
\begin{equation}
\label{ }
\mathsf{C}_{i}^{(\mathrm{lb})}(|h_{i,i}|^{2},\underline{\pi}_{i},\overline{\pi}_{i})\approx\mathsf{MG}\log\gamma-\mathsf{IEF}+\mathsf{CSF}_{i}.
\end{equation}
In general, $\mathsf{MG}$ does not depend on the user index. Also, assuming the channel gains are realizations of $\mathrm{i.i.d.}$ continuous random variables, the entropy $\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)$ is not a function of $i\in\{1,2,\cdots,n\}$, i.e., $\mathsf{IEF}$ does not depend on the user index either. In this case, since with probability one distinct realizations of the tuple $(\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger})_{j\neq i}$ yield distinct sums, a simple argument shows that
\begin{equation}
\label{ }
\mathrm{H}\left(\sum_{j\neq i}|h_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)=(n-1)\mathrm{H}\left(\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger}\right).\end{equation}
The interplay between $\mathsf{MG}$, $\mathsf{IEF}$ and $\mathsf{CSF}_{i}$ determines the behavior of the achievable rate. This behavior highly depends on the randomized algorithm used to construct the signature codes. As we will see in the next section, a larger $\mathsf{MG}$ is usually achieved at the cost of a larger $\mathsf{IEF}$. It is clear that a larger $\mathsf{IEF}$ reduces the rate, especially in moderate ranges of SNR. However, due to the fact that $\mathsf{MG}$ has also increased, the rate is lifted up in the high SNR regime. These opposing effects identify a tradeoff between the rates in the moderate and high SNR regimes. $\square$
\section{System Design}
In this section, we assume the channel gains $(h_{i,j})_{i,j=1}^{n}$ are realizations of independent $\mathcal{CN}(0,1)$ random variables $(\boldsymbol{h}_{i,j})_{i,j=1}^{n}$ representing Rayleigh fading. In the previous section, we have developed a lower bound
\begin{equation}\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})=\frac{1}{K}\sum_{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\backslash\{0_{K\times 1}\}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\log\left(2^{-\mathrm{H}\left(\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}\vec{\boldsymbol{s}}_{j}\vec{\boldsymbol{s}}_{j}^{\dagger}\right)}\boldsymbol{\varrho}_{i}(\gamma;\vec{s})+1\right)\end{equation} where
\begin{equation}
\label{}
\boldsymbol{\varrho}_{i}(\gamma;\vec{s})\triangleq \frac{|\boldsymbol{h}_{i,i}|^{2}\|\vec{s}\|_{2}^{2}\gamma}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|_{2}^{2}\}}\prod_{\substack{S\in\mathrm{supp}(\boldsymbol{S}_{i})}}\left(\frac{\det\left(I_{K-1}+\beta^{2}\gamma G_{i}^{\dagger}(\vec{s})S\mathbf{\Xi}_{i}\mathbf{\Xi}_{i}^{\dagger}S^{\dagger}G_{i}(\vec{s})\right)}{\det\left(I_{K}+\beta^{2}\gamma S\mathbf{\Xi}_{i}\mathbf{\Xi}_{i}^{\dagger}S^{\dagger}\right)}\right)^{\Pr\{\boldsymbol{S}_{i}=S\}},
and
\begin{equation}
\label{ }
\mathbf{\Xi}_{i}=\mathrm{diag}\left(\boldsymbol{h}_{1,i},\cdots,\boldsymbol{h}_{i-1,i},\boldsymbol{h}_{i+1,i},\cdots,\boldsymbol{h}_{n,i}\right).
\end{equation}
The global design criterion is to choose $K$, $(\mathsf{p}_{a})_{a\in\mathscr{A}}$ and $\varepsilon$ based on
\begin{equation}
\label{ }
(\hat{K},(\hat{\mathsf{p}}_{a})_{a\in\mathscr{A}}, \hat{\varepsilon})=\arg\sup_{K,(\mathsf{p}_{a})_{a\in\mathscr{A}},\varepsilon}\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}.\end{equation}
\textit{Example 3-} Let us consider a network with $n=2$ users. For $i\in\{1,2\}$, we define
\begin{equation}
\label{ }
i'=\left\{\begin{array}{cc}
2 & i=1 \\
1& i=2
\end{array}\right..
\end{equation}
In this case, we have
\textit{1-} $\mathbf{\Xi}_{i}=\boldsymbol{h}_{i',i}$.
\textit{2-} Since $\boldsymbol{S}_{i}=\vec{\boldsymbol{s}}_{i'}$, for each $\vec{t}\in\mathrm{supp}(\boldsymbol{S}_{i})\backslash\{0_{K\times 1}\}$, we have $\mathrm{rank}(\boldsymbol{h}_{i',i}\vec{t})= 1$ and $\lambda_{\boldsymbol{h}_{i',i}\vec{t}}^{(1)}=|\boldsymbol{h}_{i',i}|^{2}\|\vec{t}\|_{2}^{2}$.
\textit{3-} For each $\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})$ and $\vec{t}\in\mathrm{supp}(\boldsymbol{S}_{i})$, we have $\mathrm{rank}(\boldsymbol{h}_{i',i}G_{i}^{\dagger}(\vec{s})\vec{t})\leq 1$. Indeed, $G_{i}^{\dagger}(\vec{s})\vec{t}\in\mathbb{R}$ and if $G_{i}^{\dagger}(\vec{s})\vec{t}\neq 0$, then $\lambda_{\boldsymbol{h}_{i',i}G_{i}^{\dagger}(\vec{s})\vec{t}}^{(1)}=|\boldsymbol{h}_{i',i}|^{2}|G_{i}^{\dagger}(\vec{s})\vec{t}|^{2}$.
Therefore, $\boldsymbol{\varrho}_{i}(\gamma;\vec{s})$ can be written as
\begin{equation}
\label{lol}
\boldsymbol{\varrho}_{i}(\gamma;\vec{s})= \frac{|\boldsymbol{h}_{i,i}|^{2}\|\vec{s}\|_{2}^{2}\gamma}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|^{2}_{2}\}}\prod_{\substack{\vec{t}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i'})}}\left(\frac{1+\frac{ |\boldsymbol{h}_{i',i}|^{2}|G_{i}^{\dagger}(\vec{s})\vec{t}|^{2}\gamma}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|^{2}_{2}\}}}{1+\frac{|\boldsymbol{h}_{i',i}|^{2}\|\vec{t}\|_{2}^{2}\gamma}{\mathrm{E}\{\|\vec{\boldsymbol{s}}_{i}\|^{2}_{2}\}}}\right)^{\Pr\{\vec{\boldsymbol{s}}_{i'}=\vec{t}\}}.\end{equation}
\textbf{Scheme A-} Let $K=2$ and $\mathscr{A}=\{-1,1\}$ with $\mathsf{p}_{1}=\nu$ and $\mathsf{p}_{-1}=\overline{\nu}$ for some $\nu\in(0,1]$. To simplify the expression for $\boldsymbol{\varrho}_{i}(\gamma;\vec{s})$ in (\ref{lol}), we make the following observations:
\textit{1-} If $\vec{s}$ has only one nonzero element, then
\begin{equation}
\label{ }
|G_{i}^{\dagger}(\vec{s})\vec{t}|^{2}=\left\{\begin{array}{cc}
0 & \textrm{$\vec{t}=0_{2\times 1}$ or $\vec{t}=\pm\vec{s}$} \\
1 & \mathrm{oth.}
\end{array}\right..
\end{equation}
\textit{2-} If $\vec{s}$ has no zero elements, then
\begin{equation}
\label{ }
|G_{i}^{\dagger}(\vec{s})\vec{t}|^{2}=\left\{\begin{array}{cc}
0 & \textrm{$\vec{t}=0_{2\times 1}$ or $\vec{t}=\pm\vec{s}$} \\
2 & \vec{t}^{T}\vec{s}=0, \vec{t}\ne0_{2\times 1}\\
\frac{1}{2} & \vec{t}^{T}\vec{s}\neq0\end{array}\right..
\end{equation}
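These case distinctions can be verified directly by taking $G_{i}(\vec{s})$ to be a unit vector spanning the orthogonal complement of $\vec{s}$ in $\mathbb{R}^{2}$; for $K=2$ one explicit choice (ours) is $G(\vec{s})=(s_{2},-s_{1})^{T}/\|\vec{s}\|_{2}$:

```python
def g_proj_sq(s, t):
    """|G^dagger(s) t|^2 with G(s) = (s_2, -s_1)^T / ||s||_2 for K = 2."""
    return (s[1] * t[0] - s[0] * t[1]) ** 2 / (s[0] ** 2 + s[1] ** 2)

# s with a single nonzero element: 0 for t = 0 or t = +/-s, else 1
single = {t: g_proj_sq((1, 0), t)
          for t in [(0, 0), (1, 0), (-1, 0), (0, 1), (1, 1), (1, -1)]}

# s with no zero element: 0 for t = 0 or +/-s; 2 if t^T s = 0 with t
# nonzero; 1/2 if t^T s != 0
full = {t: g_proj_sq((1, 1), t)
        for t in [(0, 0), (1, 1), (-1, -1), (1, -1), (-1, 1), (1, 0), (0, -1)]}
```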
As such, it is easy to see that
\textit{1-} If $\vec{s}$ has only one nonzero element, then
\begin{equation}
\label{ }
\boldsymbol{\varrho}_{i}(\gamma;\vec{s})=\frac{|\boldsymbol{h}_{i,i}|^{2}\gamma}{2\varepsilon\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{2\varepsilon}\right)^{\varepsilon\overline{\varepsilon}-\varepsilon^{2}}\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{\varepsilon}\right)^{\varepsilon^{2}}}.
\end{equation}
\textit{2-} If $\vec{s}\in\{(1,1)^{T},(-1,-1)^{T}\}$, then
\begin{equation}
\label{ }
\boldsymbol{\varrho}_{i}(\gamma;\vec{s})=\frac{|\boldsymbol{h}_{i,i}|^{2}\gamma\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{4\varepsilon}\right)^{2\varepsilon\overline{\varepsilon}}}{\varepsilon\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{2\varepsilon}\right)^{2\varepsilon\overline{\varepsilon}}\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{\varepsilon}\right)^{\varepsilon^{2}(\nu^{2}+\overline{\nu}^{2})}}.
\end{equation}
\textit{3-} If $\vec{s}\in\{(1,-1)^{T},(-1,1)^{T}\}$, then
\begin{equation}
\label{ }
\boldsymbol{\varrho}_{i}(\gamma;\vec{s})=\frac{|\boldsymbol{h}_{i,i}|^{2}\gamma\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{4\varepsilon}\right)^{2\varepsilon\overline{\varepsilon}}}{\varepsilon\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{2\varepsilon}\right)^{2\varepsilon\overline{\varepsilon}}\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{\varepsilon}\right)^{2\varepsilon^{2}\nu\overline{\nu}}}.
\end{equation}
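The three closed-form expressions above can be cross-checked against a brute-force evaluation of (\ref{lol}) over $\mathrm{supp}(\vec{\boldsymbol{s}}_{i'})$ (a sketch for $K=2$; the helper names and the test parameter values are ours):

```python
def g_proj_sq(s, t):
    """|G^dagger(s) t|^2 with G(s) = (s_2, -s_1)^T / ||s||_2 for K = 2."""
    return (s[1] * t[0] - s[0] * t[1]) ** 2 / (s[0] ** 2 + s[1] ** 2)

def rho_product(s, eps, nu, gamma, h_ii2, h_pi2):
    """Brute-force evaluation of rho_i(gamma; s) for K = 2, n = 2."""
    E = 2 * eps                                   # E{||s_i||_2^2} with K = 2
    pe = {0: 1 - eps, 1: eps * nu, -1: eps * (1 - nu)}
    out = h_ii2 * (s[0] ** 2 + s[1] ** 2) * gamma / E
    for t1 in (0, 1, -1):
        for t2 in (0, 1, -1):
            pr = pe[t1] * pe[t2]                  # Pr{s_{i'} = t}
            num = 1 + h_pi2 * g_proj_sq(s, (t1, t2)) * gamma / E
            den = 1 + h_pi2 * (t1 * t1 + t2 * t2) * gamma / E
            out *= (num / den) ** pr
    return out

def rho_closed_single(eps, gamma, h_ii2, h_pi2):
    """Closed form for s with a single nonzero element."""
    return (h_ii2 * gamma
            / (2 * eps
               * (1 + h_pi2 * gamma / (2 * eps)) ** (eps * (1 - eps) - eps ** 2)
               * (1 + h_pi2 * gamma / eps) ** (eps ** 2)))

def rho_closed_pp(eps, nu, gamma, h_ii2, h_pi2):
    """Closed form for s in {(1,1), (-1,-1)}."""
    return (h_ii2 * gamma * (1 + h_pi2 * gamma / (4 * eps)) ** (2 * eps * (1 - eps))
            / (eps * (1 + h_pi2 * gamma / (2 * eps)) ** (2 * eps * (1 - eps))
               * (1 + h_pi2 * gamma / eps) ** (eps ** 2 * (nu ** 2 + (1 - nu) ** 2))))
```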
Finally, it is shown in appendix A that
\begin{equation}
\label{ }
\mathrm{H}\left(\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger}\right)=2\mathscr{H}(\varepsilon)+\varepsilon^{2}\mathscr{H}(\nu^{2}+\overline{\nu}^{2}).\end{equation}
Simulation results indicate that $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})$ is maximized at $\nu=\frac{1}{2}$. Setting $\nu=\frac{1}{2}$,
\begin{eqnarray}
\label{lol34}
\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})&=&\varepsilon\overline{\varepsilon}\log\left(1+\frac{2^{-2\mathscr{H}(\varepsilon)-\varepsilon^{2}}|\boldsymbol{h}_{i,i}|^{2}\gamma}{2\varepsilon\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{2\varepsilon}\right)^{\varepsilon\overline{\varepsilon}-\varepsilon^{2}}\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{\varepsilon}\right)^{\varepsilon^{2}}}\right)\notag\\
&&+\frac{\varepsilon^{2}}{2}\log\left(1+\frac{2^{-2\mathscr{H}(\varepsilon)-\varepsilon^{2}}|\boldsymbol{h}_{i,i}|^{2}\gamma\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{4\varepsilon}\right)^{2\varepsilon\overline{\varepsilon}}}{\varepsilon\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{2\varepsilon}\right)^{2\varepsilon\overline{\varepsilon}}\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{\varepsilon}\right)^{\frac{\varepsilon^{2}}{2}}}\right). \notag\\
\end{eqnarray}
It is also evident that
\begin{equation}
\label{ }
\mathsf{MG}_{\textrm{scheme A}}=\frac{1}{2}-\frac{\overline{\varepsilon}^{2}}{2}-(\varepsilon\overline{\varepsilon})^{2}-\frac{\varepsilon^{4}}{4}
\end{equation}
and
\begin{equation}
\label{ }
\mathsf{IEF}_{\textrm{scheme A}}=\left(1-\overline{\varepsilon}^{2}\right)\left(\mathscr{H}(\varepsilon)+\frac{\varepsilon^{2}}{2}\right).
\end{equation}
\textbf{Scheme B-} Assuming no spreading is performed, let $K=1$. Noting the fact that $\mathrm{supp}(\vec{\boldsymbol{s}}_{i})=\mathrm{supp}(\vec{\boldsymbol{s}}_{i'})=\{0,1\}$ and $G_{i}(1)=0$,
\begin{equation}
\label{ }
\boldsymbol{\varrho}_{i}(\gamma;1)=\frac{|\boldsymbol{h}_{i,i}|^{2}\gamma}{\varepsilon\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{\varepsilon}\right)^{\varepsilon}}.
\end{equation}
It is easily seen that $\mathrm{H}(\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger})=\mathscr{H}(\varepsilon)$. Therefore,
\begin{equation}
\label{ }
\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})=\varepsilon\log\left(1+\frac{2^{-\mathscr{H}(\varepsilon)}|\boldsymbol{h}_{i,i}|^{2}\gamma}{\varepsilon\left(1+\frac{|\boldsymbol{h}_{i',i}|^{2}\gamma}{\varepsilon}\right)^{\varepsilon}}\right),
\end{equation}
\begin{equation}
\label{ }
\mathsf{MG}_{\textrm{scheme B}}=\varepsilon\overline{\varepsilon}
\end{equation}
and
\begin{equation}
\label{ }
\mathsf{IEF}_{\textrm{scheme B}}=\varepsilon\mathscr{H}(\varepsilon).
\end{equation}
Fig. \ref{f5} sketches $\sup_{\varepsilon}\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}$ for schemes A and B. It is seen that there is a tradeoff between the rates at medium and high SNR values. Fig. \ref{f8} demonstrates the best $\varepsilon$ chosen by the users. In both schemes, any user starts with $\varepsilon=1$ at $\gamma=5\mathrm{dB}$. Selecting $\varepsilon=1$ in scheme B leads to $\mathsf{IEF}_{\textrm{scheme B}}=0$. However, $\mathsf{MG}_{\textrm{scheme B}}$ is kept at zero as well. Therefore, as SNR increases, the average achievable rate starts to saturate, and hence users switch to $\varepsilon=0.45$ for $\gamma>20\mathrm{dB}$ to avoid saturation. In scheme A, $\varepsilon$ is set at $1$ for SNR values up to $35\mathrm{dB}$. This yields $\mathsf{IEF}_{\textrm{scheme A}}=\frac{1}{2}$, which can be considered a reason for the poor performance of scheme A in the range $\gamma<15\mathrm{dB}$ compared to scheme B. Since $\mathsf{MG}_{\textrm{scheme A}}\Big|_{\varepsilon=1}=0.25$ is larger than $\mathsf{MG}_{\textrm{scheme B}}\Big|_{\varepsilon=0.45}\approx0.247$, the average achievable rate per user eventually becomes larger in scheme A than in scheme B as SNR increases.
\begin{figure}[h!b!t]
\centering
\includegraphics[scale=.7] {PND1000.png}
\caption{Comparison between $\sup_{\varepsilon}\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}$ in schemes A and B for different SNR values.}
\label{f5}
\end{figure}
\begin{figure}[h!b!t]
\centering
\includegraphics[scale=.7] {PND10000.png}
\caption{The optimum value of $\varepsilon$ maximizing $\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}$ in schemes A and B for different SNR values.}
\label{f8}
\end{figure}
\begin{figure}[h!b!t]
\centering
\includegraphics[scale=.7] {MG.png}
\caption{Comparison between $\mathsf{MG}$ in schemes A and B in terms of $\varepsilon$.}
\label{f6}
\end{figure}
\begin{figure}[h!b!t]
\centering
\includegraphics[scale=.7] {IEP.png}
\caption{Comparison between $\mathsf{IEF}$ in schemes A and B in terms of $\varepsilon$.}
\label{f7}
\end{figure}
\textit{Example 4-} We consider a decentralized network of $n>2$ users and the following scenario:
the signature sequence of any user consists of a spreading code over the alphabet $\{-1,1\}$ where $\mathsf{p}_{1}=\nu$ and $\mathsf{p}_{-1}=\overline{\nu}$, i.e., masking is not applied. The purpose of this example is to show that, in contrast to Example 3, the optimum value of $\nu$ is not necessarily $\frac{1}{2}$.
Before proceeding, let us explain why the common intuition is to set $\nu=\frac{1}{2}$. It is well known that in an additive noise channel with a stationary noise process, as far as the correlation function\footnote{The correlation function of a zero-mean process $\boldsymbol{\mathsf{x}}[t]$ is the function $\mathrm{E}\{\boldsymbol{\mathsf{x}}[t]\boldsymbol{\mathsf{x}}^{\dagger}[t-\Delta t]\}$ for $\Delta t\in\mathbb{R}$. } is fixed, a stationary Gaussian noise process yields the least mutual information between the input and the output. We call this the \emph{Gaussian} bounding technique. Using this fact, one can obtain a lower bound on $\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})$ as
\begin{equation}
\label{ }
\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})\geq \log\frac{\det\mathrm{Cov}(\vec{\boldsymbol{y}}_{i})}{\det\mathrm{Cov}(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i})}.
\end{equation}
It is easy to see that
\begin{equation}
\label{ }
\mathrm{Cov}(\vec{\boldsymbol{y}}_{i})=I_{K}+\frac{\gamma}{K}\sum_{j=1}^{n}|\boldsymbol{h}_{j,i}|^{2}\left((1-(2\nu-1)^{2})I_{K}+(2\nu-1)^{2}1_{K\times K}\right)
\end{equation}
and
\begin{equation}
\label{ }
\mathrm{Cov}(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i})=I_{K}+\frac{\gamma}{K}\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}\left((1-(2\nu-1)^{2})I_{K}+(2\nu-1)^{2}1_{K\times K}\right).
\end{equation}
Therefore\footnote{Note that $1_{K\times K}=1_{K\times 1}1^{\mathrm{T}}_{K\times 1}$. Then, one can use the identity $\det(I_{m_{1}}+AB)=\det(I_{m_{2}}+BA)$ for any $m_{1}\times m_{2}$ and $m_{2}\times m_{1}$ matrices $A$ and $B$.},
\begin{equation}
\label{ }
\det\mathrm{Cov}(\vec{\boldsymbol{y}}_{i})=\left(1+\frac{(1-(2\nu-1)^{2})\gamma\sum_{j=1}^{n}|\boldsymbol{h}_{j,i}|^{2}}{K}\right)^{K}\left(1+\frac{(2\nu-1)^{2}\gamma\sum_{j=1}^{n}|\boldsymbol{h}_{j,i}|^{2}}{1+\frac{(1-(2\nu-1)^{2})\gamma\sum_{j=1}^{n}|\boldsymbol{h}_{j,i}|^{2}}{K}}\right)
\end{equation}
and
\begin{equation}
\label{ }
\det\mathrm{Cov}(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i})=\left(1+\frac{(1-(2\nu-1)^{2})\gamma\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}}{K}\right)^{K}\left(1+\frac{(2\nu-1)^{2}\gamma\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}}{1+\frac{(1-(2\nu-1)^{2})\gamma\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}}{K}}\right).
\end{equation}
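Both determinant formulas follow from the identity in the footnote; they can be checked numerically for the matrix $I_{K}+g\left((1-b)I_{K}+b1_{K\times K}\right)$, writing $g$ for $\frac{\gamma\sum_{j}|\boldsymbol{h}_{j,i}|^{2}}{K}$ and $b$ for $(2\nu-1)^{2}$ (the determinant routine and test values are ours):

```python
def det(mat):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    n, d = len(m), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(m[i][c]))
        if abs(m[p][c]) < 1e-12:
            return 0.0
        if p != c:
            m[c], m[p] = m[p], m[c]
            d = -d
        d *= m[c][c]
        for i in range(c + 1, n):
            f = m[i][c] / m[c][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[c])]
    return d

def cov_dets(K, g, b):
    """det(I_K + g((1-b) I_K + b 1_{KxK})) computed directly and in closed
    form: (1 + g(1-b))^K * (1 + g b K / (1 + g(1-b)))."""
    M = [[(1 + g * (1 - b) if i == j else 0) + g * b for j in range(K)]
         for i in range(K)]
    closed = (1 + g * (1 - b)) ** K * (1 + g * b * K / (1 + g * (1 - b)))
    return det(M), closed
```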
Finally, we come up with the following lower bound on $\frac{\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})}{K}$,
\begin{eqnarray}
\frac{\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})}{K}\geq\log\left(1+\frac{\frac{(1-(2\nu-1)^{2})\gamma|\boldsymbol{h}_{i,i}|^{2}}{K}}{1+\frac{(1-(2\nu-1)^{2})\gamma\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}}{K}}\right)+\frac{1}{K}\log\frac{1+\frac{(2\nu-1)^{2}\gamma\sum_{j=1}^{n}|\boldsymbol{h}_{j,i}|^{2}}{1+\frac{(1-(2\nu-1)^{2})\gamma\sum_{j=1}^{n}|\boldsymbol{h}_{j,i}|^{2}}{K}}}{1+\frac{(2\nu-1)^{2}\gamma\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}}{1+\frac{(1-(2\nu-1)^{2})\gamma\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}}{K}}}.
\end{eqnarray}
It is straightforward to see that this lower bound is maximized at $K=1$ and $\nu=\frac{1}{2}$ for any realization of the channel gains. Hence,
\begin{equation}
\label{ }
\sup_{\nu, K}\frac{\mathrm{I}(\boldsymbol{x}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})}{K}\geq\log\left(1+\frac{\gamma |\boldsymbol{h}_{i,i}|^{2}}{1+\gamma\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}}\right).\end{equation}
Although this lower bound suggests setting $K=1$, and in case $K>1$ it requires $\nu=\frac{1}{2}$, we demonstrate that taking $K>1$ and regulating at some $\nu\neq \frac{1}{2}$ yields achievable rates larger than the threshold
\begin{eqnarray}
\label{wind}
\tau_{n}&\triangleq& \sup_{\gamma}\mathrm{E}\left\{\log\left(1+\frac{\gamma |\boldsymbol{h}_{i,i}|^{2}}{1+\gamma\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}}\right)\right\}\notag\\
&=&\frac{1}{(n-2)!}\int_{\zeta\in\mathbb{R}}\int_{\eta\in\mathbb{R}}\eta^{n-2}\log\left(1+\frac{\zeta}{\eta}\right)e^{-\zeta-\eta}d\zeta d\eta.\end{eqnarray}
In fact, $\tau_{n}$ is the maximum average achievable rate by regulating the transmission rate of the $i^{th}$ user at $\log\left(1+\frac{\gamma |\boldsymbol{h}_{i,i}|^{2}}{1+\gamma\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}}\right)$. In (\ref{wind}), we have used the fact that $|\boldsymbol{h}_{i,i}|^{2}$ is an exponential random variable with parameter $1$ and $2\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}$ is a $\chi^{2}_{2(n-1)}$ random variable.
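The threshold $\tau_{n}$ in (\ref{wind}) can be estimated by Monte Carlo using exactly these distributional facts (a sketch assuming rates measured in bits; the sample size and seed are ours). Since $\sum_{j\neq i}|\boldsymbol{h}_{j,i}|^{2}$ is a sum of $n-1$ independent $\mathrm{Exp}(1)$ variables, it can be sampled directly:

```python
import math
import random

def tau(n, trials=200000, seed=7):
    """Monte Carlo estimate of tau_n = E{log2(1 + zeta/eta)} where
    zeta ~ Exp(1) and eta is a sum of n-1 i.i.d. Exp(1) terms
    (i.e., eta ~ Gamma(n-1, 1), matching 2*eta ~ chi^2 with 2(n-1) d.o.f.)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        zeta = rng.expovariate(1.0)
        eta = sum(rng.expovariate(1.0) for _ in range(n - 1))
        acc += math.log2(1.0 + zeta / eta)
    return acc / trials
```

For $n=4$ the estimate is $\approx 0.481$.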
Let $n=4$. In this case, $\tau_{4}=0.4809$. To compute $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})$, we notice that
\textit{1-} For any $\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})$, $\|\vec{s}\|_{2}^{2}=K$.
\textit{2-} In appendix B, it is shown that
\begin{equation}
\label{ }
\mathrm{H}\left(\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger}\right)=-\sum_{k=0}^{K-1}{K-1\choose k}\left(\nu^{k+1}\overline{\nu}^{K-1-k}+\nu^{K-1-k}\overline{\nu}^{k+1}\right)\log\left(\nu^{k+1}\overline{\nu}^{K-1-k}+\nu^{K-1-k}\overline{\nu}^{k+1}\right).\end{equation}
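A closed form of this kind can be sanity-checked by direct enumeration: since masking is not applied, $\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger}$ determines $\vec{\boldsymbol{s}}_{1}$ exactly up to a global sign, so the entropy is that of the sign classes $\{\vec{s},-\vec{s}\}$ (a minimal sketch; the canonicalization is ours):

```python
import math
from itertools import product

def entropy_outer(K, nu):
    """H(s s^T) for a length-K code with i.i.d. +/-1 entries, P{+1} = nu.
    s s^T is invariant exactly under s -> -s, so enumerate the sign
    classes {s, -s} and compute the entropy of their probabilities."""
    probs = {}
    for s in product((1, -1), repeat=K):
        canon = s if s[0] == 1 else tuple(-x for x in s)
        p = 1.0
        for x in s:
            p *= nu if x == 1 else 1.0 - nu
        probs[canon] = probs.get(canon, 0.0) + p
    return -sum(p * math.log2(p) for p in probs.values() if p > 0.0)
```

For instance, $K=1$ gives zero entropy (the outer product is deterministic) and $K=2$, $\nu=\frac{1}{2}$ gives one bit.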
In contrast to Example 3, computing $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})$ in closed form is a tedious task. As such, we calculate $\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}$ through simulations. Setting the SNR at $\gamma=60\mathrm{dB}$, Fig. \ref{f77} sketches $\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}$ in terms of $\mathsf{p}_{1}=\nu$ for different values of $K$. Contrary to intuition, the average achievable rate per user has a double-hump shape and is not maximized at $\nu=\frac{1}{2}$. It is seen that the best performance is obtained at $K=3$ and $\nu\in\{0.09,0.91\}$. $\square$
\begin{figure}[h!b!t]
\centering
\includegraphics[scale=.7] {key.png}
\caption{Sketch of $\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}$ in terms of $\mathsf{p}_{1}=\nu$ for different values of $K$ at $\gamma=60\mathrm{dB}$.}
\label{f77}
\end{figure}
\textit{Remark 2-} To gain some insight into why $\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}$ is double-humped in Example 4, one can study the multiplexing gain per user given in (\ref{MG}). Let us consider a network with $n\geq 4$ users\footnote{It can be shown that this phenomenon does not hold for $n=2,3$.} where the signature codes only consist of spreading over the alphabet $\mathscr{A}=\{-1,1\}$. In general, one can write
\begin{equation}
\label{ }
\Big\{\vec{\boldsymbol{s}}_{i}\in\mathrm{csp}(\boldsymbol{S}_{i})\Big\}=\Big\{\vec{\boldsymbol{s}}_{i}\in\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i})\Big\}\bigcup\Big\{\vec{\boldsymbol{s}}_{i}\in\mathrm{csp}(\boldsymbol{S}_{i})\backslash(\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i}))\Big\}.
\end{equation}
Therefore,
\begin{eqnarray}
\Pr\left\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\right\}&=&1-\Pr\left\{\vec{\boldsymbol{s}}_{i}\in\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i})\right\}\notag\\&&-\Pr\left\{\vec{\boldsymbol{s}}_{i}\in\mathrm{csp}(\boldsymbol{S}_{i})\backslash(\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i}))\right\}\notag\\
&=&\Pr\left\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i})\right\}-\Pr\left\{\vec{\boldsymbol{s}}_{i}\in\mathrm{csp}(\boldsymbol{S}_{i})\backslash(\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i}))\right\}.\notag\\\end{eqnarray}
The term $\Pr\left\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i})\right\}$ can be easily calculated as
\begin{equation}
\label{ }
\Pr\left\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i})\right\}=\sum_{k=0}^{K}{K\choose k}\nu^{k}\bar{\nu}^{K-k}\left(1-\nu^{k}\bar{\nu}^{K-k}-\bar{\nu}^{k}\nu^{K-k}\right)^{n-1}.
\end{equation}
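As a sanity check, this closed-form expression can be verified by exact brute force over all $2^{K}$ realizations of $\vec{\boldsymbol{s}}_{i}$; a minimal Python sketch (assuming, as above, i.i.d. entries over $\{-1,1\}$ with $\Pr\{+1\}=\nu$; the function names are ours):

```python
import itertools
from math import comb

def p_not_col(nu, K, n):
    # closed-form sum: condition on the number k of +1 entries in s_i;
    # an independent code equals s_i w.p. nu^k (1-nu)^(K-k) and equals
    # -s_i w.p. (1-nu)^k nu^(K-k)
    nb = 1 - nu
    return sum(comb(K, k) * nu**k * nb**(K - k)
               * (1 - nu**k * nb**(K - k) - nb**k * nu**(K - k))**(n - 1)
               for k in range(K + 1))

def p_not_col_brute(nu, K, n):
    # exact brute force: enumerate every realization s of s_i and require
    # each of the other n-1 codes to avoid both s and -s
    nb = 1 - nu
    total = 0.0
    for s in itertools.product((1, -1), repeat=K):
        k = s.count(1)
        p_s = nu**k * nb**(K - k)                          # Pr{s_i = s}
        p_hit = nu**k * nb**(K - k) + nb**k * nu**(K - k)  # Pr{s_j = +/-s}
        total += p_s * (1 - p_hit)**(n - 1)
    return total

assert abs(p_not_col(0.3, 4, 5) - p_not_col_brute(0.3, 4, 5)) < 1e-12
```

Grouping the brute-force sum by the number of $+1$ entries in $\vec{\boldsymbol{s}}_{i}$ recovers the binomial expression term by term.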
On the other hand, computation of the term $\Pr\left\{\vec{\boldsymbol{s}}_{i}\in\mathrm{csp}(\boldsymbol{S}_{i})\backslash(\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i}))\right\}$ is more involved. The key observation, however, is that both $\Pr\left\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i})\right\}$ and $\Pr\left\{\vec{\boldsymbol{s}}_{i}\in\mathrm{csp}(\boldsymbol{S}_{i})\backslash(\mathrm{col}(\boldsymbol{S}_{i})\cup\mathrm{col}(-\boldsymbol{S}_{i}))\right\}$ have a global maximum at $\nu=\frac{1}{2}$. Hence, their difference may be maximized at some $\nu\neq \frac{1}{2}$, and this is exactly what happens here. As an example, fig. \ref{f77c} sketches the multiplexing gain per user in terms of $\mathsf{p}_{1}$ in a network with $n=10$ users and spreading code length $K=6$.
\begin{figure}[h!b!t]
\centering
\includegraphics[scale=.7] {keyy.png}
\caption{Sketch of $\frac{\Pr\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\}}{K}$ in terms of $\mathsf{p}_{1}=\nu$ in a network of $n=10$ users with $K=6$.}
\label{f77c}
\end{figure}
\textit{Remark 3-} The expression for the SMG given in (\ref{canada}) does not depend on the spreading/masking strategy. In fact, one can consider a more general scheme where the $i^{th}$ user randomly selects its code $\vec{\boldsymbol{s}}_{i}$ out of a globally known set of codes $\mathfrak{C}\subset \mathbb{R}^{K}\backslash\{0_{K\times 1}\}$ based on a globally known PMF. In case $n=2$,
\begin{eqnarray}
\label{ }
\mathsf{SMG}(2)&=&\frac{2\Pr\{\vec{\boldsymbol{s}}_{1}\notin\mathrm{span}(\vec{\boldsymbol{s}}_{2})\}}{K}\notag\\
&=&\frac{2\Pr\left\{\textrm{$\vec{\boldsymbol{s}}_{1}$ and $\vec{\boldsymbol{s}}_{2}$ are not parallel in $\mathbb{R}^{K}$}\right\}}{K}.\end{eqnarray}
Taking $K=2$, let us assume that $\mathfrak{C}$ consists of $L$ vectors in $\mathbb{R}^{2}$, no two of which are parallel. Therefore,
\begin{equation}
\label{ }
\mathsf{SMG}(2)=1-\frac{1}{L}.
\end{equation}
Since $L$ can be arbitrarily large, the SMG of a two-user network can be made arbitrarily close to $1$. In this case, it is easy to see that
\begin{equation}
\label{ }
\mathsf{IEF}=\log L.
\end{equation}
If $n>2$, an arbitrarily large set of pairwise non-parallel vectors in some space $\mathbb{R}^{K}$ is not necessarily an appropriate collection. Let $\mathfrak{C}=\{\vec{\mathfrak{c}}_{1},\cdots,\vec{\mathfrak{c}}_{L}\}$ consist of $L$ vectors in $\mathbb{R}^{K}$. For each $1\leq l\leq L$ and $1\leq r\leq L-1$, we denote by $\omega_{l,r}$ the number of distinct subsets $\mathfrak{B}$ of $\mathfrak{C}$ of size $r$ such that $\vec{\mathfrak{c}}_{l}\notin\mathrm{span}(\mathfrak{B})$. We denote these subsets explicitly by $\mathfrak{B}_{l,r}(1),\cdots,\mathfrak{B}_{l,r}(\omega_{l,r})$. Assuming all users select their codes uniformly at random over $\mathfrak{C}$,
\begin{eqnarray}
\label{ }
\Pr\left\{\vec{\boldsymbol{s}}_{1}\notin\mathrm{csp}\left([\vec{\boldsymbol{s}}_{2}|\vec{\boldsymbol{s}}_{3}|\cdots|\vec{\boldsymbol{s}}_{n-1}|\vec{\boldsymbol{s}}_{n}]\right)\right\}=\frac{1}{L}\sum_{l=1}^{L}\Pr\left\{\vec{\mathfrak{c}}_{l}\notin\mathrm{csp}\left([\vec{\boldsymbol{s}}_{2}|\vec{\boldsymbol{s}}_{3}|\cdots|\vec{\boldsymbol{s}}_{n-1}|\vec{\boldsymbol{s}}_{n}]\right)\right\}.\end{eqnarray}
For each $1\leq l\leq L$,
\begin{eqnarray}
\Pr\left\{\vec{\mathfrak{c}}_{l}\notin\mathrm{csp}\left([\vec{\boldsymbol{s}}_{2}|\vec{\boldsymbol{s}}_{3}|\cdots|\vec{\boldsymbol{s}}_{n-1}|\vec{\boldsymbol{s}}_{n}]\right)\right\}=\sum_{r=1}^{L-1}\sum_{m=1}^{\omega_{l,r}}\Pr\left\{\forall\vec{\mathfrak{b}}\in\mathfrak{B}_{l,r}(m),\exists j\geq 2: \vec{\boldsymbol{s}}_{j}=\vec{\mathfrak{b}}\right\}.
\end{eqnarray}
It is easy to see that\footnote{Assuming the $1^{st},\cdots,(r-1)^{th}$ and $r^{th}$ elements of $\mathfrak{B}_{l,r}(m)$ are chosen by $t_{1},\cdots,t_{r-1}$ and $t_{r}$ users respectively, this can happen in $\sum_{\substack{t_{1}+\cdots+t_{r}=n-1\\t_{1},\cdots,t_{r}\geq 1}}\frac{(n-1)!}{t_{1}!\cdots t_{r}!}$ different ways.} \begin{equation}\Pr\left\{\forall\vec{\mathfrak{b}}\in\mathfrak{B}_{l,r}(m),\exists j\geq 2: \vec{\boldsymbol{s}}_{j}=\vec{\mathfrak{b}}\right\}=\frac{\sum_{\substack{t_{1}+\cdots+t_{r}=n-1\\t_{1},\cdots,t_{r}\geq 1}}\frac{(n-1)!}{t_{1}!\cdots t_{r}!}}{L^{n-1}}\end{equation} for any $1\leq l\leq L$ and $1\leq m\leq \omega_{l,r}$. Therefore,
\begin{eqnarray}
\Pr\left\{\vec{\boldsymbol{s}}_{1}\notin\mathrm{csp}\left([\vec{\boldsymbol{s}}_{2}|\vec{\boldsymbol{s}}_{3}|\cdots|\vec{\boldsymbol{s}}_{n-1}|\vec{\boldsymbol{s}}_{n}]\right)\right\}=\frac{1}{L^{n}}\sum_{l=1}^{L}\sum_{r=1}^{L-1}\omega_{l,r}\rho_{r,n}
\end{eqnarray}
where
\begin{equation}
\label{ }
\rho_{r,n}\triangleq \sum_{\substack{t_{1}+\cdots+t_{r}=n-1\\t_{1},\cdots,t_{r}\geq 1}}\frac{(n-1)!}{t_{1}!\cdots t_{r}!}.\end{equation}
Finally, the achieved SMG is
\begin{equation}
\label{bool}
\mathsf{SMG}(n)=\frac{n\sum_{l=1}^{L}\sum_{r=1}^{L-1}\omega_{l,r}\rho_{r,n}}{KL^{n }}.\end{equation}
We remark that there is no closed formula for $\rho_{r,n}$; however, one can use the recursion
\begin{equation}
\label{ }
r^{n-1}=\rho_{r,n}+\sum_{r'=1}^{r-1}{r\choose r'}\rho_{r-r',n}
\end{equation}
to compute this quantity. By (\ref{bool}), one can easily see that $\mathsf{SMG}(n)$ is maximized if $\omega_{l,r}$ is as large as possible for each $1\leq l\leq L$ and $1\leq r\leq L-1$. We know that $\omega_{l,r}\leq {L-1\choose r}$. This upper bound is achieved if $\mathfrak{C}$ consists of $L\leq K$ linearly independent vectors in $\mathbb{R}^{K}$. In this case,
\begin{equation}
\label{ }
\mathsf{SMG}(n)=\frac{n\sum_{r=1}^{L-1}{L-1\choose r}\rho_{r,n}}{KL^{n-1}}.
\end{equation}
It is not hard to see that $\sum_{r=1}^{L-1}{L-1\choose r}\rho_{r,n}=(L-1)^{n-1}$. Hence,
\begin{equation}
\label{ }
\mathsf{SMG}(n)=\frac{n}{K}\left(1-\frac{1}{L}\right)^{n-1}.
\end{equation}
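Both the recursion defining $\rho_{r,n}$ and the identity $\sum_{r=1}^{L-1}{L-1\choose r}\rho_{r,n}=(L-1)^{n-1}$ can be checked programmatically. A minimal sketch (using the standard inclusion-exclusion count of surjections as an independent reference, since $\rho_{r,n}$ counts the assignments of the $n-1$ other users onto $r$ fixed codes with every code used at least once):

```python
from math import comb

def rho(r, n):
    # rho_{r,n} via the recursion r^(n-1) = sum_{r'} C(r,r') rho_{r-r',n}
    if r == 1:
        return 1
    return r**(n - 1) - sum(comb(r, rp) * rho(r - rp, n)
                            for rp in range(1, r))

n, L = 5, 4
for r in range(1, L):
    # independent reference: inclusion-exclusion count of surjections
    # from n-1 users onto r codes
    surj = sum((-1)**j * comb(r, j) * (r - j)**(n - 1) for j in range(r + 1))
    assert rho(r, n) == surj

# the identity used to simplify SMG(n)
assert sum(comb(L - 1, r) * rho(r, n) for r in range(1, L)) == (L - 1)**(n - 1)

# hence SMG(n) = (n/K) (1 - 1/L)^(n-1) for L <= K independent codes
K = L
smg = n * sum(comb(L - 1, r) * rho(r, n) for r in range(1, L)) / (K * L**(n - 1))
assert abs(smg - (n / K) * (1 - 1 / L)**(n - 1)) < 1e-12
```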
To get the largest SMG, one may let $L=K$; the supremum over $K$ is then attained at $K=n$, yielding
\begin{eqnarray}
\label{ }
\sup_{K\geq 1}\mathsf{SMG}(n)=\left(1-\frac{1}{n}\right)^{n-1}
\end{eqnarray}
which is the result obtained in example 1 via masking without spreading.
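The choice $K=n$ behind this supremum can be confirmed numerically; a quick sketch:

```python
def smg_masked(n, K):
    # SMG(n) = (n/K) (1 - 1/K)^(n-1) when L = K independent codes are used
    return (n / K) * (1 - 1 / K)**(n - 1)

for n in range(2, 9):
    # the maximizing code length over a wide range of K is exactly K = n
    best_K = max(range(1, 10 * n), key=lambda K: smg_masked(n, K))
    assert best_K == n
    assert abs(smg_masked(n, n) - (1 - 1 / n)**(n - 1)) < 1e-12
```

Indeed, $\frac{d}{dK}\log\mathsf{SMG}(n)=\frac{n-K}{K(K-1)}$, which is positive for $K<n$ and negative for $K>n$.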
\section{Optimality Results}
We have already seen that applying masking on top of spreading can result in larger achievable rates by increasing the attained multiplexing gain. However, our results so far are based on the achievable rate $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})$, which is only a lower bound on the \emph{capacity} of the $i^{th}$ user. In deriving $\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{h}_{i})$, the PDF of the transmitted signals is taken to be complex Gaussian, which is not necessarily optimal. As such, we have made no optimality claims so far.
In this section, we question the optimality of masking without spreading. Specifically, we ask whether at any SNR level there is an optimal PDF such that generating the transmitted signals based on this PDF makes spreading unnecessary. For this purpose, we define the \emph{masking capacity} of a user as the largest rate achievable by this user assuming all users follow the masking strategy with no spreading applied. We also impose a \emph{fairness} condition, by which we mean that all users generate their signals using the same PDF. Fixing $\varepsilon\in(0,1]$, the masking capacity of the $i^{th}$ user is defined by
\begin{equation}
\label{ }
\mathscr{MC}_{i}(\varepsilon;\gamma,(h_{j,i})_{j=1}^{n})\triangleq\sup_{\substack{\boldsymbol{x}_{1},\cdots,\boldsymbol{x}_{n}\sim\mathrm{i.i.d}\\\mathrm{E}\{|\boldsymbol{x}_{1}|^{2}\}\leq \gamma}}\mathrm{I}(\boldsymbol{x}_{i},\boldsymbol{\mathfrak{m}}_{i};\boldsymbol{y}_{i})\end{equation}
where
\begin{equation}
\label{ }
\boldsymbol{y}_{i}=\varepsilon^{-\frac{1}{2}} h_{i,i}\boldsymbol{\mathfrak{m}}_{i}\boldsymbol{x}_{i}+\varepsilon^{-\frac{1}{2}}\sum_{j\neq i}\beta h_{j,i}\boldsymbol{\mathfrak{m}}_{j}\boldsymbol{x}_{j}+\boldsymbol{z}_{i}
\end{equation}
in which $\boldsymbol{\mathfrak{m}}_{i}$, the masking coefficient of the $i^{th}$ user, is a $\mathrm{Ber}(\varepsilon)$ random variable and $\boldsymbol{z}_{i}$ is the $\mathcal{CN}(0,1)$ ambient noise random variable. The parameter $\varepsilon$ is chosen to maximize a globally known utility function such as $\mathrm{E}\left\{\mathsf{C}_{i}^{(\mathrm{lb})}(\vec{\boldsymbol{h}}_{i})\right\}$, assuming $(h_{i,j})_{i,j=1}^{n}$ are realizations of $\mathrm{i.i.d.}$ random variables with a continuous PDF.
We focus on a decentralized network of $n=2$ users. We refer to the users as user \#1 and user \#2. According to the results in example 3 (scheme B), the decision rule to regulate $\varepsilon$ is
\begin{eqnarray}
\label{pel}
\hat{\varepsilon}=\arg\max_{\varepsilon\in(0,1]}\mathrm{E}\left\{\varepsilon\log\left(1+\frac{2^{-\mathscr{H}(\varepsilon)}|\boldsymbol{h}_{1,1}|^{2}\gamma}{\varepsilon\left(1+\frac{|\boldsymbol{h}_{2,1}|^{2}\gamma}{\varepsilon}\right)^{\varepsilon}}\right)\right\}.
\end{eqnarray}
The main result of the paper is the following.
\begin{thm}
There exist $\alpha_{1}\in [0,\frac{1}{2})$ and $\alpha_{2}\in(\frac{1}{2},1]$ such that for any $h_{1,1},h_{2,1}\in\mathbb{C}$, it is possible to achieve rates larger than $\mathscr{MC}_{1}(\hat{\varepsilon};\gamma,h_{1,1},h_{2,1})$ for sufficiently large values of $\gamma$ where $\hat{\varepsilon}\in(\alpha_{1},\alpha_{2})$ is given in (\ref{pel}).
\end{thm}
To prove Theorem 1, we need the following lemma.
\begin{lem}
Let $\mathbf{Z}_{1}$ and $\mathbf{Z}_{2}$ be circularly symmetric complex Gaussian random variables with variances $\sigma_{1}^{2}$ and $\sigma_{2}^{2}$ respectively, and let $\mathbf{X}$ be independent of $(\mathbf{Z}_{1},\mathbf{Z}_{2})$. Then, the solution to the optimization problem
\begin{equation}
\label{ }
\sup_{\mathbf{X}:\mathrm{E}\{|\mathbf{X}|^{2}\}\leq P}\mathrm{h}(\mathbf{X}+\mathbf{Z}_{1})-\xi\mathrm{h}(\mathbf{X}+\mathbf{Z}_{2})
\end{equation}
is a circularly symmetric complex Gaussian $\mathbf{X}$ for any $P>0$ and any $\xi\geq 1$. Also, if $\sigma_{1}^{2}\leq \sigma_{2}^{2}$, the same conclusion holds for any $\xi\in\mathbb{R}$. \end{lem}
\begin{proof}
This is a direct consequence of Theorem 1 in \cite{LV}. \end{proof}
Our strategy is to find an upper bound on $\mathscr{MC}_{1}(\varepsilon;\gamma,h_{1,1},h_{2,1})$ for arbitrary $\varepsilon\in(0,1]$ and to propose an achievable rate that surpasses this upper bound.
\subsection{Upper Bound on $\mathscr{MC}_{1}(\varepsilon;\gamma,h_{1,1},h_{2,1})$}
We proceed as follows. We have
\begin{eqnarray}
\label{ee1}
\mathrm{I}(\boldsymbol{x}_{1},\boldsymbol{\mathfrak{m}}_{1};\boldsymbol{y}_{1})&=&\mathrm{I}(\boldsymbol{x}_{1};\boldsymbol{y}_{1}|\boldsymbol{\mathfrak{m}}_{1})+\mathrm{I}(\boldsymbol{\mathfrak{m}}_{1};\boldsymbol{y}_{1})\notag\\
&\leq &\mathrm{I}(\boldsymbol{x}_{1};\boldsymbol{y}_{1}|\boldsymbol{\mathfrak{m}}_{1})+\mathrm{H}(\boldsymbol{\mathfrak{m}}_{1})\notag\\
&=&\varepsilon\,\mathrm{I}(\boldsymbol{x}_{1};\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{\mathfrak{m}}_{1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{\mathfrak{m}}_{2}\boldsymbol{x}_{2}+\boldsymbol{z}_{1}|\boldsymbol{\mathfrak{m}}_{1}=1)\notag\\
&&+\bar{\varepsilon}\,\,\mathrm{I}(\boldsymbol{x}_{1};\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{\mathfrak{m}}_{1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{\mathfrak{m}}_{2}\boldsymbol{x}_{2}+\boldsymbol{z}_{1}|\boldsymbol{\mathfrak{m}}_{1}=0)+\mathscr{H}(\varepsilon)\notag\\
&\stackrel{(a)}{=}&\varepsilon\mathrm{I}(\boldsymbol{x}_{1};\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{\mathfrak{m}}_{2}\boldsymbol{x}_{2}+\boldsymbol{z}_{1})+\mathscr{H}(\varepsilon)\notag\\
&\stackrel{(b)}{\leq}&\varepsilon\mathrm{I}(\boldsymbol{x}_{1};\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{\mathfrak{m}}_{2}\boldsymbol{x}_{2}+\boldsymbol{z}_{1}|\boldsymbol{\mathfrak{m}}_{2})+\mathscr{H}(\varepsilon)\notag\\
&=&\varepsilon\bar{\varepsilon}\,\mathrm{I}(\boldsymbol{x}_{1};\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\boldsymbol{z}_{1})+\varepsilon^{2} \mathrm{I}(\boldsymbol{x}_{1};\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{x}_{2}+\boldsymbol{z}_{1})+\mathscr{H}(\varepsilon)\notag\\
&=&\varepsilon\bar{\varepsilon}\Big(\mathrm{h}(\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\boldsymbol{z}_{1})-\mathrm{h}(\boldsymbol{z}_{1})\Big)\notag\\
&&+\varepsilon^{2}\Big(\mathrm{h}(\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{x}_{2}+\boldsymbol{z}_{1})-\mathrm{h}(\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{x}_{2}+\boldsymbol{z}_{1})\Big)+\mathscr{H}(\varepsilon)\notag\\
&\stackrel{(c)}{=}&\varepsilon\bar{\varepsilon}\,\left(\mathrm{h}(\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\boldsymbol{z}_{1})-\frac{\varepsilon}{\bar{\varepsilon}}\,\,\,\mathrm{h}(\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{x}_{1}+\boldsymbol{z}_{1})\right)\notag\\
&&+\varepsilon^{2} \mathrm{h}(\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{x}_{2}+\boldsymbol{z}_{1})-\varepsilon\bar{\varepsilon}\log(\pi e)+\mathscr{H}(\varepsilon)\notag\\
&\stackrel{(d)}{=}&\varepsilon\bar{\varepsilon}\,\left(\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}'_{1})-\frac{\varepsilon}{\bar{\varepsilon}}\,\,\,\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}''_{1})\right)
+\varepsilon^{2} \mathrm{h}(\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{x}_{2}+\boldsymbol{z}_{1})\notag\\
&&+\varepsilon\bar{\varepsilon}\log (\varepsilon^{-1}|h_{1,1}|^{2})-\varepsilon^{2}\log (\varepsilon^{-1}|h_{2,1}|^{2})-\varepsilon\bar{\varepsilon}\log(\pi e)+\mathscr{H}(\varepsilon)\end{eqnarray}
where $(a)$ follows by the fact that $\mathrm{I}(\boldsymbol{x}_{1};h_{1,1}\boldsymbol{\mathfrak{m}}_{1}\boldsymbol{x}_{1}+h_{2,1}\boldsymbol{\mathfrak{m}}_{2}\boldsymbol{x}_{2}+\boldsymbol{z}_{1}|\boldsymbol{\mathfrak{m}}_{1}=0)=0$, $(b)$ is by the fact that the mutual information between the input and output of the channel increases if a ``genie'' provides the receiver side of user \#1 with $\boldsymbol{\mathfrak{m}}_{2}$, $(c)$ follows by the fact that $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ are identically distributed and the fact that $\mathrm{h}(\boldsymbol{z}_{1})=\log(\pi e)$ and finally $(d)$ follows by the fact that for any complex random variable $\mathbf{X}$ and $a\in\mathbb{C}$, we have $\mathrm{h}(a\mathbf{X})=\mathrm{h}(\mathbf{X})+\log |a|^{2}$. Also, we have $\boldsymbol{z}'_{1}\sim\mathcal{CN}\left(0,\frac{\varepsilon}{|h_{1,1}|^{2}}\right)$ and $\boldsymbol{z}''_{1}\sim\mathcal{CN}\left(0,\frac{\varepsilon}{|h_{2,1}|^{2}}\right)$ in the last equality in (\ref{ee1}). Denoting the upper bound in (\ref{ee1}) by $\mathsf{UB}$,
\begin{eqnarray}
\mathscr{MC}_{1}(\varepsilon;\gamma,h_{1,1},h_{2,1})&\leq&\sup_{\substack{\boldsymbol{x}_{1},\boldsymbol{x}_{2}\sim\mathrm{i.i.d}\\\mathrm{E}\{|\boldsymbol{x}_{1}|^{2}\}\leq \gamma}}\mathsf{UB}\notag\\
&\stackrel{}{\leq}&\varepsilon\bar{\varepsilon}\sup_{\substack{\boldsymbol{x}_{1},\boldsymbol{x}_{2}\sim\mathrm{i.i.d}\\\mathrm{E}\{|\boldsymbol{x}_{1}|^{2}\}\leq \gamma}}\left(\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}'_{1})-\frac{\varepsilon}{\bar{\varepsilon}}\,\,\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}''_{1})\right)\notag\\&&+\varepsilon^{2}\sup_{\substack{\boldsymbol{x}_{1},\boldsymbol{x}_{2}\sim\mathrm{i.i.d}\\\mathrm{E}\{|\boldsymbol{x}_{1}|^{2}\}\leq \gamma}}\mathrm{h}(\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{x}_{2}+\boldsymbol{z}_{1})\notag\\
\label{koo}
&&+\varepsilon\bar{\varepsilon}\log (\varepsilon^{-1}|h_{1,1}|^{2})-\varepsilon^{2}\log (\varepsilon^{-1}|h_{2,1}|^{2})-\varepsilon\bar{\varepsilon}\log(\pi e).\end{eqnarray}
It is trivial that
\begin{eqnarray}
\label{th2}
\sup_{\substack{\boldsymbol{x}_{1},\boldsymbol{x}_{2}\sim\mathrm{i.i.d}\\\mathrm{E}\{|\boldsymbol{x}_{1}|^{2}\}\leq \gamma}}\mathrm{h}(\varepsilon^{-\frac{1}{2}}h_{1,1}\boldsymbol{x}_{1}+\varepsilon^{-\frac{1}{2}}h_{2,1}\boldsymbol{x}_{2}+\boldsymbol{z}_{1})=\log\left(\pi e\varepsilon^{-1}\left(|h_{1,1}|^{2}+|h_{2,1}|^{2}\right)\gamma+1\right)
\end{eqnarray}
which follows by the maximum entropy lemma \cite{53}.
Applying Lemma 1, if $\frac{\varepsilon}{\bar{\varepsilon}}\geq 1$ or $|h_{1,1}|>|h_{2,1}|$, or equivalently, $\varepsilon\geq \frac{1}{2}$ or $|h_{1,1}|>|h_{2,1}|$, the solution to the optimization $\max_{\substack{\boldsymbol{x}_{1},\boldsymbol{x}_{2}\sim\mathrm{i.i.d}\\\mathrm{E}\{|\boldsymbol{x}_{1}|^{2}\}\leq \gamma}}\left(\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}'_{1})-\frac{\varepsilon}{\bar{\varepsilon}}\,\,\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}''_{1})\right)$ is a complex Gaussian $\boldsymbol{x}_{1}$. We note that the power of the optimum Gaussian signal $\boldsymbol{x}_{1}$ is not necessarily $\gamma$. Let the optimum $\boldsymbol{x}_{1}$ be a $\mathcal{CN}(0,v)$ random variable. We distinguish the following cases.
\textit{Case 1-} If $\varepsilon\geq\frac{1}{2}$ and $\frac{|h_{1,1}|}{|h_{2,1}|}<\left(\frac{\varepsilon}{\bar{\varepsilon}}\right)^{\frac{1}{2}}$, then $v=0$.
\textit{Case 2-} If $\varepsilon>\frac{1}{2}$, $\frac{|h_{1,1}|}{|h_{2,1}|}>\left(\frac{\varepsilon}{\bar{\varepsilon}}\right)^{\frac{1}{2}}$ and $\gamma>\frac{\varepsilon^{2}\bar{\varepsilon}}{2\varepsilon-1}\left(\frac{1}{|h_{2,1}|^{2}}-\frac{\varepsilon}{\bar{\varepsilon}|h_{1,1}|^{2}}\right)$, then $v=\frac{\varepsilon\bar{\varepsilon}}{2\varepsilon-1}\left(\frac{1}{|h_{2,1}|^{2}}-\frac{\varepsilon}{\bar{\varepsilon}|h_{1,1}|^{2}}\right)$.
\textit{Case 3-} If $\varepsilon\leq\frac{1}{2}$ and $\frac{|h_{1,1}|}{|h_{2,1}|}>1$, then $v=\frac{\gamma}{\varepsilon}$.
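These cases can be spot-checked numerically. A sketch for Case 2, with example parameters of our own choosing that satisfy its conditions (a fine grid search over the variance $v$ of the Gaussian input, dropping additive constants that do not affect the argmax):

```python
import numpy as np

# hypothetical parameters: eps > 1/2 and |h11|/|h21| > sqrt(eps/(1-eps))
eps, h11, h21, gamma = 0.7, 2.0, 1.0, 10.0
s1 = eps / h11**2                 # variance of z'_1
s2 = eps / h21**2                 # variance of z''_1
xi = eps / (1 - eps)

# objective h(x1 + z') - xi h(x1 + z'') for x1 ~ CN(0, v), up to
# additive log(pi e) constants
v = np.linspace(0.0, gamma, 2_000_001)
f = np.log(v + s1) - xi * np.log(v + s2)
v_star = v[np.argmax(f)]

# closed form of Case 2
v_formula = eps * (1 - eps) / (2 * eps - 1) \
    * (1 / h21**2 - eps / ((1 - eps) * h11**2))
assert abs(v_star - v_formula) < 1e-3
```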
Verification of these cases is straightforward and is omitted here for the sake of brevity. Therefore, as long as $\varepsilon\geq \frac{1}{2}$, the term $\sup_{\substack{\boldsymbol{x}_{1},\boldsymbol{x}_{2}\sim\mathrm{i.i.d}\\\mathrm{E}\{|\boldsymbol{x}_{1}|^{2}\}\leq \gamma}}\left(\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}'_{1})-\frac{\varepsilon}{\bar{\varepsilon}}\,\,\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}''_{1})\right)$ saturates as $\gamma$ increases. Using this fact together with (\ref{koo}) and (\ref{th2}),
\begin{equation}
\label{s222}
\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)\lesssim \varepsilon^{2}\log\gamma
\end{equation}
as long as $\varepsilon\geq \frac{1}{2}$. On the other hand, if $\varepsilon<\frac{1}{2}$ and $|h_{1,1}|>|h_{2,1}|$,
\begin{equation}
\sup_{\substack{\boldsymbol{x}_{1},\boldsymbol{x}_{2}\sim\mathrm{i.i.d}\\\mathrm{E}\{|\boldsymbol{x}_{1}|^{2}\}\leq \gamma}}\left(\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}'_{1})-\frac{\varepsilon}{\bar{\varepsilon}}\,\,\mathrm{h}(\boldsymbol{x}_{1}+\boldsymbol{z}''_{1})\right)\sim \frac{\bar{\varepsilon}-\varepsilon}{\bar{\varepsilon}}\log\gamma.
\end{equation}
Using this together with (\ref{koo}) and (\ref{th2}),
\begin{equation}
\label{s444}
\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)\lesssim \varepsilon\bar{\varepsilon}\log\gamma
\end{equation}
as long as $\varepsilon<\frac{1}{2}$ and $|h_{1,1}|>|h_{2,1}|$. However, we can remove the condition $|h_{1,1}|>|h_{2,1}|$ by a simple argument. Let us fix $h_{2,1}$. It is clear that $\mathscr{MC}_{1}(\varepsilon; h,h_{2,1},\gamma)<\mathscr{MC}_{1}(\varepsilon; h',h_{2,1},\gamma)$ for $|h|<|h_{2,1}|<|h'|$. Since $\mathscr{MC}_{1}(\varepsilon; h',h_{2,1},\gamma)\lesssim \varepsilon\bar{\varepsilon}\log\gamma$, we get $\mathscr{MC}_{1}(\varepsilon;h,h_{2,1},\gamma)\lesssim \varepsilon\bar{\varepsilon}\log\gamma$. Hence, (\ref{s444}) holds for all $\varepsilon<\frac{1}{2}$ regardless of the values of $h_{1,1}$ and $h_{2,1}$.
To recap, we have shown that
\begin{equation}
\label{s555}
\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)\lesssim\left\{\begin{array}{cc}
\varepsilon^{2}\log\gamma & \varepsilon\geq \frac{1}{2}\\
\varepsilon\bar{\varepsilon}\log\gamma & \varepsilon<\frac{1}{2}
\end{array}\right.\end{equation}
We end this subsection with the following corollary.
\begin{coro}
If $\varepsilon\leq \frac{1}{2}$,
\begin{equation}
\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)\sim\varepsilon\bar{\varepsilon}\log\gamma.
\end{equation}
\end{coro}
\begin{proof}
By the results in example 3, $\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)\gtrsim\varepsilon\bar{\varepsilon}\log\gamma$ for every $\varepsilon\in(0,1)$. However, by (\ref{s555}), $\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)\lesssim\varepsilon\bar{\varepsilon}\log\gamma$ for all $\varepsilon\leq \frac{1}{2}$. This concludes the proof. \end{proof}
\subsection{Achieving Rates Larger Than $\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)$}
Applying spreading on top of masking, we show that there is a range of $\varepsilon$ such that it is possible to achieve rates larger than $\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)$ provided $\gamma$ is sufficiently large. To transmit its Gaussian signal $\boldsymbol{x}_{i}\sim\mathcal{CN}\left(0,\gamma\right)$, user \#$i$ spreads $\boldsymbol{x}_{i}$ along a $2\times 1$ random vector $\vec{\boldsymbol{\mathfrak{s}}}_{i}$ consisting of $\mathrm{i.i.d.}$ random variables taking values in a finite alphabet $\mathscr{A}$ with equal probability. Thereafter, this user applies the masking process by constructing the $2\times 1$ masking vector $\vec{\boldsymbol{\mathfrak{m}}}_{i}$ consisting of $\mathrm{i.i.d.}$ Bernoulli random variables taking the values $0$ and $1$ with probabilities $\bar{\varepsilon}$ and $\varepsilon$ respectively. We assume that $\vec{\boldsymbol{\mathfrak{s}}}_{i}$ and $\vec{\boldsymbol{\mathfrak{m}}}_{i}$ are known to both ends of user \#$i$. Finally, this user transmits $\beta\boldsymbol{x}_{i}\vec{\boldsymbol{\mathfrak{m}}}_{i}\odot\vec{\boldsymbol{\mathfrak{s}}}_{i}$ in two consecutive transmission slots where $\beta$ ensures that the total transmission power per symbol $\boldsymbol{x}_{i}$ is $\gamma$. Assuming both users are synchronous, the following vector is received at the receiver side of user \#1
\begin{eqnarray}
\label{nb}
\vec{\boldsymbol{y}}_{1}=\beta h_{1,1}\boldsymbol{x}_{1}\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}+\beta h_{2,1}\boldsymbol{x}_{2}\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2}+\vec{\boldsymbol{z}}_{1}
\end{eqnarray}
where $\vec{\boldsymbol{z}}_{1}$ is a vector of independent $\mathcal{CN}(0,1)$ random variables representing the ambient noise samples at the receiver side of user \#1. The achievable rate for this user is
\begin{equation}
\label{ }
R_{1}\triangleq\frac{\mathrm{I}\left(\boldsymbol{x}_{1};\vec{\boldsymbol{y}}_{1}|\vec{\boldsymbol{\mathfrak{s}}}_{1},\vec{\boldsymbol{\mathfrak{m}}}_{1}\right)}{2}.
\end{equation}
By our results in section III,
\begin{equation}
\label{ }
\mathrm{I}\left(\boldsymbol{x}_{1};\vec{\boldsymbol{y}}_{1}|\vec{\boldsymbol{\mathfrak{s}}}_{1},\vec{\boldsymbol{\mathfrak{m}}}_{1}\right)\sim\Pr\Big\{\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}\notin\mathrm{span}(\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2})\Big\}\log\gamma.
\end{equation}
Hence,
\begin{equation}
\label{e33}
R_{1}\sim\frac{1}{2}\Pr\Big\{\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}\notin\mathrm{span}(\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2})\Big\}\log\gamma.
\end{equation}
We are interested in values of $\varepsilon$ for which $R_{1}$ strictly exceeds $\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)$. By (\ref{s555}) and (\ref{e33}), it is sufficient to show that there is a range of $\varepsilon$ such that
\begin{equation}
\label{x2}
\frac{1}{2}\Pr\Big\{\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}\notin\mathrm{span}(\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2})\Big\}>\max\{\varepsilon^{2},\varepsilon\bar{\varepsilon}\}.
\end{equation}
Let $\mathscr{A}=\{-1,1\}$. In this case the elements of $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}$ and $\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2}$ are $\mathrm{i.i.d.}$ random variables taking the values $0$, $1$ and $-1$ with probabilities $\bar{\varepsilon}$, $\frac{\varepsilon}{2}$ and $\frac{\varepsilon }{2}$ respectively. The event $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}\in\mathrm{span}(\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2})$ occurs if and only if $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}=0_{2\times 1}$ or $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}\neq0_{2\times 1}$ while $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}=\pm \vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2}$. Then, one can easily see that $\Pr\Big\{\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}\notin\mathrm{span}(\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2})\Big\}=1-\bar{\varepsilon}^{2}-2(\varepsilon\bar{\varepsilon})^{2}-\frac{\varepsilon^{4}}{2}$. This can also be deduced from (\ref{boro}). Substituting this in (\ref{x2}) requires
\begin{equation}
1-\bar{\varepsilon}^{2}-2(\varepsilon\bar{\varepsilon})^{2}-\frac{\varepsilon^{4}}{2}>2\max\{\varepsilon^{2},\varepsilon\bar{\varepsilon}\}.
\end{equation}
This simplifies to $5\varepsilon^{2}-8\varepsilon+2<0$ for $\varepsilon<\frac{1}{2}$ and $5\varepsilon^{3}-8\varepsilon^{2}+10\varepsilon-4<0$ for $\varepsilon\geq \frac{1}{2}$. Solving these inequalities, we get $\varepsilon\in(0.3101,0.5653)$.
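Both the span probability and the interval endpoints can be verified mechanically; a minimal sketch (exact enumeration over all pairs of $2\times 1$ effective codes with entries in $\{0,\pm 1\}$, then numerical roots of the two polynomials; the helper names are ours):

```python
import itertools
import numpy as np

def in_span(v1, v2):
    # v1 in span(v2) over R^2: span(0) = {0}; otherwise membership is
    # equivalent to the determinant of [v1 v2] vanishing
    if v2 == (0, 0):
        return v1 == (0, 0)
    return v1[0] * v2[1] - v1[1] * v2[0] == 0

def p_not_in_span(eps, alphabet):
    # entries of the effective code: 0 w.p. 1-eps, each a in alphabet
    # w.p. eps/|alphabet|
    probs = {0: 1 - eps}
    for a in alphabet:
        probs[a] = eps / len(alphabet)
    return sum(probs[v1[0]] * probs[v1[1]] * probs[v2[0]] * probs[v2[1]]
               for v1 in itertools.product(probs, repeat=2)
               for v2 in itertools.product(probs, repeat=2)
               if not in_span(v1, v2))

eps = 0.37    # arbitrary test point
closed = 1 - (1 - eps)**2 - 2 * (eps * (1 - eps))**2 - eps**4 / 2
assert abs(p_not_in_span(eps, (-1, 1)) - closed) < 1e-12

lo = min(np.roots([5, -8, 2]).real)               # smaller root, ~0.3101
cub = np.roots([5, -8, 10, -4])
hi = float(cub[np.abs(cub.imag) < 1e-9].real[0])  # unique real root, ~0.5653
assert abs(lo - 0.3101) < 1e-3 and abs(hi - 0.5653) < 1e-3
```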
It is not hard to see that $\hat{\varepsilon}$ given in (\ref{pel}) is in the interval $(0.4,0.5)$ for all $\gamma>30\mathrm{dB}$. Setting $\alpha_{1}=0.3101$ and $\alpha_{2}=0.5653$, we see that $\hat{\varepsilon}\in(\alpha_{1},\alpha_{2})$ and $R_{1}$ is larger than $\mathscr{MC}_{1}(\hat{\varepsilon}; h_{1,1},h_{2,1},\gamma)$ for large values of $\gamma$. This completes the proof of Theorem 1.
Next, we demonstrate that increasing the size of the underlying alphabet can expand the range of $\varepsilon$ for which achieving a rate larger than $\mathscr{MC}_{1}(\varepsilon; h_{1,1},h_{2,1},\gamma)$ is possible.
\textit{Remark 3-} If $\mathscr{A}=\{-2,-1,1,2\}$, the elements of $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}$ and $\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2}$ are $\mathrm{i.i.d.}$ random variables taking the values $0$, $-2$, $-1$, $1$ and $2$ with probabilities $\bar{\varepsilon}$, $\frac{\varepsilon}{4}$, $\frac{\varepsilon }{4}$, $\frac{\varepsilon}{4}$ and $\frac{\varepsilon}{4}$ respectively. The event $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}\in\mathrm{span}(\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2})$ occurs if and only if $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}=0_{2\times 1}$ or $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}\neq0_{2\times 1}$ while $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}=\pm \vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2}$ or $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}=\pm2 \vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2}$ or $\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}=\pm\frac{1}{2} \vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2}$. We get $\Pr\Big\{\vec{\boldsymbol{\mathfrak{m}}}_{1}\odot\vec{\boldsymbol{\mathfrak{s}}}_{1}\notin\mathrm{span}(\vec{\boldsymbol{\mathfrak{m}}}_{2}\odot\vec{\boldsymbol{\mathfrak{s}}}_{2})\Big\}=1-\bar{\varepsilon}^{2}-2(\varepsilon\bar{\varepsilon})^{2}-\frac{3\varepsilon^{4}}{16}$.
Substituting this in (\ref{x2}) requires
\begin{equation}
1-\bar{\varepsilon}^{2}-2(\varepsilon\bar{\varepsilon})^{2}-\frac{3\varepsilon^{4}}{16}>2\max\{\varepsilon^{2},\varepsilon\bar{\varepsilon}\}.
\end{equation}
Hence, $35\varepsilon^{2}-64\varepsilon+16<0$ for $\varepsilon<\frac{1}{2}$ and $35\varepsilon^{3}-64\varepsilon^{2}+80\varepsilon-32<0$ for $\varepsilon\geq \frac{1}{2}$. Solving these inequalities, $\varepsilon\in(0.2988,0.5873)$. $\square$
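The same mechanical check works for the quaternary alphabet; a minimal sketch (exact enumeration over entries in $\{0,\pm 1,\pm 2\}$ confirms the probability expression, and the roots confirm that the window strictly contains the one obtained for $\mathscr{A}=\{-1,1\}$):

```python
import itertools
import numpy as np

def p_not_in_span(eps, alphabet):
    # exact enumeration over pairs of 2x1 effective codes; entries are
    # 0 w.p. 1-eps and each a in alphabet w.p. eps/|alphabet|
    probs = {0: 1 - eps}
    for a in alphabet:
        probs[a] = eps / len(alphabet)
    total = 0.0
    for v1 in itertools.product(probs, repeat=2):
        for v2 in itertools.product(probs, repeat=2):
            if v2 == (0, 0):
                in_span = v1 == (0, 0)       # span(0) = {0}
            else:
                in_span = v1[0] * v2[1] - v1[1] * v2[0] == 0
            if not in_span:
                total += (probs[v1[0]] * probs[v1[1]]
                          * probs[v2[0]] * probs[v2[1]])
    return total

eps = 0.4
closed = 1 - (1 - eps)**2 - 2 * (eps * (1 - eps))**2 - 3 * eps**4 / 16
assert abs(p_not_in_span(eps, (-2, -1, 1, 2)) - closed) < 1e-12

lo = min(np.roots([35, -64, 16]).real)            # ~0.2988
cub = np.roots([35, -64, 80, -32])
hi = float(cub[np.abs(cub.imag) < 1e-9].real[0])  # ~0.5873
assert abs(lo - 0.2988) < 1e-3 and abs(hi - 0.5873) < 1e-3
assert lo < 0.3101 and hi > 0.5653
```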
\section{Conclusion}
We proposed an approach to communication in decentralized wireless networks of separate transmitter-receiver pairs. A randomized signaling scheme was introduced in which each user locally spreads its Gaussian signal along a randomly generated spreading code comprised of a sequence of nonzero elements over a certain alphabet. Along with spreading, each transmitter also masks its output independently from transmission to transmission. Using a conditional version of the entropy power inequality and a key lemma on the differential entropy of mixed Gaussian random vectors, achievable rates were developed for the users. Assuming the channel gains are realizations of independent continuous random variables, each user finds the optimum parameters in constructing the randomized spreading and masking sequences by maximizing the average achievable rate per user. It was seen that as the number of users increases, the achievable Sum Multiplexing Gain of the network approaches that of a centralized orthogonal scheme where multiuser interference is completely avoided. It was observed that in general the elements of a spreading code are not equiprobable over the underlying alphabet; this particularly happens if the number of active users is greater than three. Finally, using the recently developed extremal inequality of Liu-Viswanath, we presented an optimality result showing that transmission of Gaussian signals via spreading and masking yields higher achievable rates than the maximum achievable rate attained by applying masking only.
\section*{Appendix A}
By Proposition 1,
\begin{equation}
\label{ }
\lim_{\gamma\to\infty}\frac{\mathsf{C}_{i}}{\log\gamma}\geq \frac{\Pr\left\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\right\}}{K}.\end{equation}
In this appendix, we prove that
\begin{equation}
\label{ }
\lim_{\gamma\to\infty}\frac{\mathsf{C}_{i}}{\log\gamma}\leq \frac{\Pr\left\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\right\}}{K}.
\end{equation}
By (\ref{e14}), it suffices to show that $\lim_{\gamma\to\infty}\frac{\mathrm{I}(\vec{\boldsymbol{x}}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})}{\log\gamma}\leq \Pr\left\{\vec{\boldsymbol{s}}_{i}\notin\mathrm{csp}(\boldsymbol{S}_{i})\right\}$. Let us consider the \emph{informed} $i^{th}$ user where the receiver is aware of $\vec{\boldsymbol{s}}_{i}$ and $\boldsymbol{S}_{i}$. The achievable rate of this virtual user is $\frac{\mathrm{I}(\vec{\boldsymbol{x}}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i},\boldsymbol{S}_{i})}{K}$. It is clear that $\mathrm{I}(\vec{\boldsymbol{x}}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})\leq \mathrm{I}(\vec{\boldsymbol{x}}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i},\boldsymbol{S}_{i})$. However,
\begin{eqnarray}
\label{bol12}
\mathrm{I}(\vec{\boldsymbol{x}}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i},\boldsymbol{S}_{i})&=&\sum_{\substack{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\\S\in\mathrm{range}(\boldsymbol{S}_{i})}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\Pr\{\boldsymbol{S}_{i}=S\}\mathrm{I}(\vec{\boldsymbol{x}}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s},\boldsymbol{S}_{i}=S)\notag\\
&\stackrel{(a)}{=}&\sum_{\substack{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\\S\in\mathrm{range}(\boldsymbol{S}_{i})}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\Pr\{\boldsymbol{S}_{i}=S\}\log\frac{\det\left(\mathrm{cov}\left(\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s},\boldsymbol{S}_{i}=S\right)\right)}{\det\left(\mathrm{cov}\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}|\boldsymbol{S}_{i}=S\right)\right)}\notag\\
&=&\sum_{\substack{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\\S\in\mathrm{range}(\boldsymbol{S}_{i})}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(\mathrm{cov}\left(\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s},\boldsymbol{S}_{i}=S \right)\right)\notag\\
&&-\sum_{S\in\mathrm{range}(\boldsymbol{S}_{i})}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(\mathrm{cov}\left(\vec{\boldsymbol{w}}_{i}+\vec{\boldsymbol{z}}_{i}|\boldsymbol{S}_{i}=S\right)\right)\end{eqnarray}
where $(a)$ follows by the fact that fixing $\boldsymbol{S}_{i}=S$ converts the channel of the $i^{th}$ informed user to an additive Gaussian channel. On the other hand,
\begin{eqnarray}
\label{bol11}
&&\sum_{\substack{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\\S\in\mathrm{range}(\boldsymbol{S}_{i})}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(\mathrm{cov}\left(\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i}=\vec{s},\boldsymbol{S}_{i}=S \right)\right)\notag\\
&=&\sum_{\substack{\vec{s}\in\mathrm{supp}(\vec{\boldsymbol{s}}_{i})\\S\in\mathrm{range}(\boldsymbol{S}_{i})}}\Pr\{\vec{\boldsymbol{s}}_{i}=\vec{s}\}\Pr\{\boldsymbol{S}_{i}=S\}\log\det\left(I_{K}+\beta^{2}\gamma|h_{i,i}|^{2}\vec{s}\vec{s}^{T}+\beta^{2}\gamma S\Xi_{i}\Xi_{i}^{T}S^{T}\right).\notag\\
\end{eqnarray}
Noting that $\log\det\left(I_{K}+\beta^{2}\gamma|h_{i,i}|^{2}\vec{s}\vec{s}^{T}+\beta^{2}\gamma S\Xi_{i}\Xi_{i}^{T}S^{T}\right)$ scales like $\mathrm{rank}\left(\left[\vec{s}\,\,\,S\right]\right)\log\gamma$, we conclude that the first term on the right hand side of (\ref{bol12}) scales like $\mathrm{E}\left\{\mathrm{rank}\left(\left[\vec{\boldsymbol{s}}_{i}\,\,\,\boldsymbol{S}_{i}\right]\right)\right\}\log\gamma$. By the same token, the second term on the right hand side of (\ref{bol12}) scales like $\mathrm{E}\left\{\mathrm{rank}(\boldsymbol{S}_{i})\right\}\log\gamma$. Therefore, $\mathrm{I}(\vec{\boldsymbol{x}}_{i};\vec{\boldsymbol{y}}_{i}|\vec{\boldsymbol{s}}_{i})$ is upper bounded by a quantity which scales like $\Big(\mathrm{E}\{\mathrm{rank}\left(\left[\vec{\boldsymbol{s}}_{i}\,\,\,\boldsymbol{S}_{i}\right]\right)\}-\mathrm{E}\{\mathrm{rank}(\boldsymbol{S}_{i})\}\Big)\log\gamma$. The result of the Proposition is immediate.
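As a quick numerical illustration of the scaling step above (our own sketch, not part of the paper's derivation), one can evaluate $\log\det(I_K+\gamma M)$ through the eigenvalues of a rank-deficient positive semidefinite matrix $M$: each nonzero eigenvalue contributes $\log\gamma+O(1)$, so the ratio to $\log\gamma$ tends to $\mathrm{rank}(M)$. The eigenvalues below are arbitrary stand-ins.

```python
import math

# eigenvalues of a hypothetical 4x4 PSD matrix of rank 3 (one zero eigenvalue)
eigs = [5.0, 2.0, 0.5, 0.0]
rank = sum(1 for lam in eigs if lam > 0)

for gamma in (1e4, 1e8, 1e12):
    # log det(I + gamma*M) equals the sum of log(1 + gamma*lambda) over eigenvalues
    logdet = sum(math.log(1 + gamma * lam) for lam in eigs)
    print(logdet / math.log(gamma))  # approaches rank = 3 as gamma grows
```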
\section*{Appendix B}
Let $\vec{\boldsymbol{s}}_{1}=\begin{pmatrix}
\boldsymbol{s}_{1,1} & \boldsymbol{s}_{1,2}
\end{pmatrix}^{T}$. Therefore,
\begin{eqnarray}
\label{lol1}
\mathrm{H}\left(\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger}\right)&=&\mathrm{H}\left(|\boldsymbol{s}_{1,1}|^{2},|\boldsymbol{s}_{1,2}|^{2},\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\right)\notag\\
&=&\mathrm{H}\left(|\boldsymbol{s}_{1,1}|^{2},|\boldsymbol{s}_{1,2}|^{2}\right)+\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|^{2},|\boldsymbol{s}_{1,2}|^{2}\right)\notag\\
&=&\mathrm{H}\left(|\boldsymbol{s}_{1,1}|^{2}\right)+\mathrm{H}\left(|\boldsymbol{s}_{1,2}|^{2}\right)+\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|^{2},|\boldsymbol{s}_{1,2}|^{2}\right)\notag\\
&=&\mathrm{H}\left(|\boldsymbol{s}_{1,1}|\right)+\mathrm{H}\left(|\boldsymbol{s}_{1,2}|\right)+\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|,|\boldsymbol{s}_{1,2}|\right)\notag\\
&=&2\mathscr{H}(\varepsilon)+\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|,|\boldsymbol{s}_{1,2}|\right).\end{eqnarray}
To compute $\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|,|\boldsymbol{s}_{1,2}|\right)$, we have
\begin{eqnarray}
\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|,|\boldsymbol{s}_{1,2}|\right)&=&\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=1\right)\Pr\{|\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=1\}\notag\\
&&+\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|=0,|\boldsymbol{s}_{1,2}|=1\right)\Pr\{|\boldsymbol{s}_{1,1}|=0,|\boldsymbol{s}_{1,2}|=1\}\notag\\
&&+\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=0\right)\Pr\{|\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=0\}\notag\\
&&+\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|=0,|\boldsymbol{s}_{1,2}|=0\right)\Pr\{|\boldsymbol{s}_{1,1}|=0,|\boldsymbol{s}_{1,2}|=0\}\notag\\
&\stackrel{(a)}{=}&\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=1\right)\Pr\{|\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=1\}\end{eqnarray}
where $(a)$ holds because, whenever $|\boldsymbol{s}_{1,1}|=0$ or $|\boldsymbol{s}_{1,2}|=0$, the product $\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}$ is deterministically zero, so the terms $\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|=0,|\boldsymbol{s}_{1,2}|=1\right)$, $\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=0\right)$ and $\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|=0,|\boldsymbol{s}_{1,2}|=0\right)$ are zero. On the other hand, it is easy to see that $\Pr\{\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}=1\big||\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=1\}=\nu^{2}+\overline{\nu}^{2}$. This implies $\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=1\right)=\mathscr{H}(\nu^{2}+\overline{\nu}^{2})$. Therefore,
\begin{eqnarray}
\label{lol2}
\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2}\big||\boldsymbol{s}_{1,1}|,|\boldsymbol{s}_{1,2}|\right)&=&\mathscr{H}(\nu^{2}+\overline{\nu}^{2})\Pr\{|\boldsymbol{s}_{1,1}|=1,|\boldsymbol{s}_{1,2}|=1\}\notag\\
&=&\varepsilon^{2}\mathscr{H}(\nu^{2}+\overline{\nu}^{2}).\end{eqnarray}
Using (\ref{lol1}) and (\ref{lol2}),
\begin{equation}
\mathrm{H}\left(\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger}\right)=2\mathscr{H}(\varepsilon)+\varepsilon^{2}\mathscr{H}(\nu^{2}+\overline{\nu}^{2}).
\end{equation}
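The identity above is easy to confirm numerically. The sketch below (with arbitrarily chosen values of $\varepsilon$ and $\nu$) enumerates the five possible outcomes of the triple $(|\boldsymbol{s}_{1,1}|,|\boldsymbol{s}_{1,2}|,\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2})$ and compares the resulting entropy with the closed form $2\mathscr{H}(\varepsilon)+\varepsilon^{2}\mathscr{H}(\nu^{2}+\overline{\nu}^{2})$; entropies are in bits, but any fixed base works.

```python
import math

def H(p):
    # binary entropy in bits, with the 0*log(0) = 0 convention
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

eps, nu = 0.6, 0.3          # hypothetical values of the masking/sign parameters
q = nu**2 + (1-nu)**2       # Pr{s11*s12 = 1 | both magnitudes equal 1}

# distribution of the triple (|s11|, |s12|, s11*s12)
probs = [(1-eps)**2,        # (0,0,0)
         eps*(1-eps),       # (0,1,0)
         eps*(1-eps),       # (1,0,0)
         eps**2 * q,        # (1,1,+1)
         eps**2 * (1-q)]    # (1,1,-1)

lhs = entropy(probs)                # H(s1 s1^dagger) computed directly
rhs = 2*H(eps) + eps**2 * H(q)      # closed form from the appendix
print(abs(lhs - rhs) < 1e-9)        # True
```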
\section*{Appendix C}
Let $\vec{\boldsymbol{s}}_{1}=\begin{pmatrix}
\boldsymbol{s}_{1,1}&\cdots&\boldsymbol{s}_{1,K}
\end{pmatrix}$. We have
\begin{eqnarray}
\label{nol}
\mathrm{H}(\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger})&=&\mathrm{H}((\boldsymbol{s}_{1,k}\boldsymbol{s}_{1,l})_{k,l=1}^{K})\notag\\
&\stackrel{(a)}{=}&\mathrm{H}\left(\big(\boldsymbol{s}_{1,k}\boldsymbol{s}_{1,l}\big)_{\substack{k,l=1\\k\neq l}}^{K}\right)\notag\\
&\stackrel{(b)}{=}&\mathrm{H}\left(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,2},\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,3},\cdots,\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,K}\right)\end{eqnarray}
where $(a)$ holds since $\boldsymbol{s}_{1,k}^{2}=1$ for every $1\leq k\leq K$, so the diagonal entries carry no information, and $(b)$ holds since for any two distinct indices $k,l\in\{2,\cdots,K\}$, the product $\boldsymbol{s}_{1,k}\boldsymbol{s}_{1,l}$ is determined by $\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,k}$ and $\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,l}$ through $(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,k})(\boldsymbol{s}_{1,1}\boldsymbol{s}_{1,l})=\boldsymbol{s}_{1,k}\boldsymbol{s}_{1,l}\boldsymbol{s}_{1,1}^{2}=\boldsymbol{s}_{1,k}\boldsymbol{s}_{1,l}$. Let us define
\begin{equation}
\widetilde{\vec{\boldsymbol{s}}}_{1}=\begin{pmatrix}
\boldsymbol{s}_{1,2} &\cdots&\boldsymbol{s}_{1,K}
\end{pmatrix}^{T}.
\end{equation}
By (\ref{nol}), $\mathrm{H}(\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger})=\mathrm{H}(\boldsymbol{s}_{1,1}\widetilde{\vec{\boldsymbol{s}}}_{1})$. Let $\mathcal{E}$ be the event where $\boldsymbol{s}_{1,1}=1$, while $k$ of the elements of $\widetilde{\vec{\boldsymbol{s}}}_{1}$, namely, $\boldsymbol{s}_{1,l_{1}},\cdots,\boldsymbol{s}_{1,l_{k}}$, are $1$ and the remaining $K-1-k$ elements are $-1$, for some $0\leq k\leq K-1$ and $2\leq l_{1}<l_{2}<\cdots<l_{k}\leq K$. Also, let $\mathcal{F}$ be the event where $\boldsymbol{s}_{1,1}=-1$, $\boldsymbol{s}_{1,l}=-1$ for $l\in\{l_{1},l_{2},\cdots,l_{k}\}$ and $\boldsymbol{s}_{1,l}=1$ for $l\notin\{l_{1},l_{2},\cdots,l_{k}\}$. It is clear that
\begin{equation}
\label{mol}
\boldsymbol{s}_{1,1}\widetilde{\vec{\boldsymbol{s}}}_{1}\mathbb{1}_{\mathcal{E}}=\boldsymbol{s}_{1,1}\widetilde{\vec{\boldsymbol{s}}}_{1}\mathbb{1}_{\mathcal{F}}.
\end{equation}
We know that $\Pr\{\mathcal{E}\}=\nu^{k+1}\overline{\nu}^{K-1-k}$ and $\Pr\{\mathcal{F}\}=\nu^{K-1-k}\overline{\nu}^{k+1}$. Hence, using (\ref{mol}),
\begin{equation}
\mathrm{H}(\vec{\boldsymbol{s}}_{1}\vec{\boldsymbol{s}}_{1}^{\dagger})=-\sum_{k=0}^{K-1}{K-1\choose k}\left(\nu^{k+1}\overline{\nu}^{K-1-k}+\nu^{K-1-k}\overline{\nu}^{k+1}\right)\log\left(\nu^{k+1}\overline{\nu}^{K-1-k}+\nu^{K-1-k}\overline{\nu}^{k+1}\right).
\end{equation}
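As a sanity check on this counting argument (our addition; the values of $K$ and $\nu$ below are arbitrary), one can compute the distribution of the product vector $\boldsymbol{s}_{1,1}\widetilde{\vec{\boldsymbol{s}}}_{1}$ twice: by brute force over all $2^{K}$ sign vectors, and via the events $\mathcal{E}$ and $\mathcal{F}$, which assign to the pattern with ones exactly on a given set of $k$ of the $K-1$ products the probability $\Pr\{\mathcal{E}\}+\Pr\{\mathcal{F}\}$. The two distributions, and hence the two entropies, agree.

```python
import math
from itertools import product, combinations

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

K, nu = 4, 0.3
nubar = 1 - nu

# brute force: distribution of (s11*s12, ..., s11*s1K) over all sign vectors
brute = {}
for s in product([1, -1], repeat=K):
    p = math.prod(nu if v == 1 else nubar for v in s)
    key = tuple(s[0] * v for v in s[1:])
    brute[key] = brute.get(key, 0.0) + p

# via the events E and F: the pattern with ones exactly on a set L of the
# K-1 products has probability Pr{E} + Pr{F}
paired = {}
for k in range(K):
    for L in combinations(range(K - 1), k):
        key = tuple(1 if i in L else -1 for i in range(K - 1))
        paired[key] = (nu**(k+1) * nubar**(K-1-k)      # event E
                       + nubar**(k+1) * nu**(K-1-k))   # event F

print(abs(entropy(brute) - entropy(paired)) < 1e-9)    # True
```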
\bibliographystyle{IEEEbib}
\section{Introduction}
Let $H\subseteq \mathbb{R}^n$ be a finite set and denote by
\begin{equation}
\mathrm{int.cone}(H):=\{\lambda_1x_1+\cdots+\lambda_kx_k\mid x_1,\ldots,x_k\in H, \lambda_1,\ldots,\lambda_k\in \mathbb{Z}_{\geq 0}\}
\end{equation}
the integer cone generated by $H$. The \emph{Carath\'eodory rank} of $H$, denoted $\mathrm{cr}(H)$, is the least integer $t$ such that every element in $\mathrm{int.cone}(H)$ is a nonnegative integer combination of at most $t$ elements from $H$.
The set $H$ is called a \emph{Hilbert base} if $\mathrm{int.cone}(H)=\mathrm{cone}(H)\cap \mathrm{lattice}(H)$, where $\mathrm{cone}(H)$ and $\mathrm{lattice}(H)$ are the convex cone and the lattice generated by $H$, respectively.
Cook et al.\cite{CookFonluptSchrijver} showed that when $H$ is a Hilbert base generating a pointed cone, the bound $\mathrm{cr}(H)\leq 2n-1$ holds. This bound was improved to $2n-2$ by Seb\H o \cite{Sebo}. In the same paper, Seb\H o conjectured that $\mathrm{cr}(H)\leq n$ holds for any Hilbert base generating a pointed cone. A counterexample to this conjecture was found by Bruns et al.\cite{Brunsetal}.
Here we consider the case that $H$ is the set of incidence vectors of the bases of a matroid on $n$ elements. In his paper on testing membership in matroid polyhedra, Cunningham \cite{Cunningham} first asked for an upper bound on the number of different bases needed in a representation of a vector as a nonnegative integer sum of bases. It follows from Edmonds' matroid partitioning theorem \cite{Edmonds} that the incidence vectors of matroid bases form a Hilbert base for the pointed cone they generate. Hence the upper bound of $2n-2$ applies. This bound was improved by de Pina and Soares \cite{dePina} to $n+r-1$, where $r$ is the rank of the matroid. Chaourar \cite{Chaourar} showed that an upper bound of $n$ holds for a certain minor closed class of matroids.
In this paper we show that the conjecture of Seb\H o holds for the bases of (poly)matroids. That is, the Carath\'eodory rank of the set of bases of a matroid is upper bounded by the cardinality of the ground set. More generally, we show that for an integer valued submodular function $f$, the Carath\'eodory rank of the set of bases of $f$ equals the maximum number of affinely independent bases of $f$.
\section{Preliminaries}
In this section we introduce the basic notions concerning submodular functions. For background and more details, we refer the reader to \cite{Fujishige,Schrijver}.
Let $E$ be a finite set and denote its power set by $\mathcal{P}(E)$. A function $f:\mathcal{P}(E)\to \mathbb{Z}$ is called \emph{submodular} if $f(\emptyset)=0$ and for any $A,B\subseteq E$ the inequality $f(A)+f(B)\geq f(A\cup B)+f(A\cap B)$ holds. The set
\begin{equation}
EP_f:=\{x\in \mathbb{R}^E\mid x(U)\leq f(U)\text{ for all $U\subseteq E$}\}
\end{equation}
is called the \emph{extended polymatroid} associated to $f$, and
\begin{equation}
B_f=\{x\in EP_f\mid x(E)=f(E)\}
\end{equation}
is called the \emph{base polytope} of $f$. Observe that $B_f$ is indeed a polytope, since for $x\in B_f$ and $e\in E$, the inequalities $f(E)-f(E-e)\leq x(e)\leq f(\{e\})$ hold, showing that $B_f$ is bounded.
A submodular function $f:\mathcal{P}(E)\to \mathbb{Z}$ is the rank function of a matroid $M$ on $E$ if and only if $f$ is nonnegative, nondecreasing and $f(U)\leq |U|$ for every set $U\subseteq E$. In that case, $B_f$ is the convex hull of the incidence vectors of the bases of $M$.
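For a concrete toy instance (our illustration, not from the paper), take the uniform matroid $U_{2,3}$, whose rank function is $f(U)=\min(|U|,2)$. A brute-force search over a small integer box recovers exactly the incidence vectors of the three bases as the integer points of $B_f$:

```python
from itertools import combinations, product

E = {0, 1, 2}
def f(U):
    # rank function of the uniform matroid U_{2,3}: f(U) = min(|U|, 2)
    return min(len(U), 2)

subsets = [set(c) for r in range(4) for c in combinations(E, r)]

def in_Bf(x):
    # x lies in B_f iff x(U) <= f(U) for all U and x(E) = f(E)
    return (sum(x) == f(E)
            and all(sum(x[e] for e in U) <= f(U) for U in subsets))

# search the box {0,1,2}^3; the constraints force 0 <= x(e) <= 1 anyway
points = sorted(x for x in product(range(3), repeat=3) if in_Bf(x))
print(points)  # the incidence vectors of the three bases of U_{2,3}
```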
Let $f:\mathcal{P}(E)\to \mathbb{Z}$ be submodular. We will construct new submodular functions from $f$. The \emph{dual} of $f$, denoted $f^*$, is defined by
\begin{eqnarray}
f^*(U):=f(E\setminus U)-f(E).
\end{eqnarray}
It is easy to check that $f^*$ is again submodular, that $(f^*)^*=f$ and that $B_{f^*}=-B_f$. For $a:E\to \mathbb{Z}$, the function $f+a$ given by $(f+a)(U):=f(U)+a(U)$ is submodular and $B_{f+a}=a+B_f$. The \emph{reduction of $f$ by $a$}, denoted $f|a$, is defined by
\begin{equation}
(f|a)(U):=\min_{T\subseteq U}(f(T)+a(U\setminus T)).
\end{equation}
It is not hard to check that $f|a$ is submodular and that $EP_{f|a}=\{x\in EP_f\mid x\leq a\}$. Hence we have that $B_{f|a}=\{x\in B_f\mid x\leq a\}$ when $B_f\cap\{x\mid x\leq a\}$ is nonempty. We will only need the following special case. Let $e_0\in E$ and $c\in \mathbb{Z}$ and define $a:E\to \mathbb{Z}$ by
\begin{equation}
a(e):=\begin{cases}c&\text{ if $e=e_0$,}\\f(\{e\})&\text{ if $e\neq e_0$.}\end{cases}
\end{equation}
Denote $f|(e_0,c):=f|a$. If $x(e_0)\leq c$ for some $x\in B_f$, we obtain
\begin{equation}
B_{f|(e_0,c)}=\{x\in B_f\mid x(e_0)\leq c\}.
\end{equation}
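These constructions are easy to experiment with. The sketch below (using the rank function of $U_{2,3}$ as a hypothetical $f$, with $e_0=0$ and $c=1$) evaluates $(f|a)(U)=\min_{T\subseteq U}(f(T)+a(U\setminus T))$ directly from the definition and verifies that the reduction is again submodular:

```python
from itertools import combinations

E = frozenset({0, 1, 2})
def f(U):
    return min(len(U), 2)          # rank function of U_{2,3}, a submodular f

a = {0: 1, 1: f({1}), 2: f({2})}   # the special case f|(e0, c) with e0 = 0, c = 1

def subsets(U):
    U = list(U)
    return [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

def f_red(U):
    # (f|a)(U) = min over T subset of U of f(T) + a(U \ T)
    return min(f(T) + sum(a[e] for e in U - T) for T in subsets(U))

submodular = all(f_red(A) + f_red(B) >= f_red(A | B) + f_red(A & B)
                 for A in subsets(E) for B in subsets(E))
print(f_red(frozenset()) == 0 and submodular)  # True
```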
Our main tool is Edmonds' \cite{Edmonds} polymatroid intersection theorem which we state for the base polytope.
\begin{theorem}\label{edmonds}
Let $f,f':\mathcal{P}(E)\to \mathbb{Z}$ be submodular. Then $B_f\cap B_{f'}$ is an integer polytope.
\end{theorem}
We will also use the following corollary (see \cite{Edmonds}).
\begin{theorem}\label{idp}
Let $f:\mathcal{P}(E)\to \mathbb{Z}$ be submodular. Let $k$ be a positive integer and let $x\in (kB_f)\cap \mathbb{Z}^E$. Then there exist $x_1,\ldots,x_k\in B_f\cap \mathbb{Z}^E$ such that $x=x_1+\cdots+x_k$.
\end{theorem}
\begin{proof}
By the above constructions, the polytope $x-(k-1)B_f$ is the base polytope of the submodular function $f'=x+(k-1)f^*$. Consider the polytope $P:=B_f\cap B_{f'}$. It is nonempty, since $\frac{1}{k}x\in P$, and it is an integer polytope by Theorem \ref{edmonds}. Let $x_k\in P$ be an integer point. Then $x-x_k$ is an integer point in $(k-1)B_f$ and we can apply induction.
\end{proof}
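Theorem \ref{idp} can be illustrated on the same toy matroid: the integer point $x=(2,2,2)$ lies in $3B_f$ for $U_{2,3}$, and a brute-force search (our sketch of the inductive argument, not the algorithm from the proof) writes it as a sum of three basis incidence vectors, keeping the partial remainder inside $(k-1)B_f$ at every step:

```python
from itertools import combinations

# bases of the uniform matroid U_{2,3}, as index pairs
bases = list(combinations(range(3), 2))

def decompose(x, k):
    # brute-force search for x = x_1 + ... + x_k with each x_i a basis vector
    if k == 0:
        return [] if all(v == 0 for v in x) else None
    for b in bases:
        y = list(x)
        for e in b:
            y[e] -= 1
        # stay inside (k-1)*B_f: every coordinate between 0 and k-1
        if min(y) >= 0 and max(y) <= k - 1:
            rest = decompose(tuple(y), k - 1)
            if rest is not None:
                return [b] + rest
    return None

print(decompose((2, 2, 2), 3))  # e.g. [(0, 1), (0, 2), (1, 2)]
```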
Important in our proof will be the fact that faces of the base polytope of a submodular function are themselves base polytopes as the following proposition shows.
\begin{proposition}\label{faces}
Let $f:\mathcal{P}(E)\to \mathbb{Z}$ be submodular and let $F\subseteq B_f$ be a face of dimension $|E|-t$. Then there exist a partition $E=E_1\cup\cdots\cup E_t$ and submodular functions $f_i:\mathcal{P}(E_i)\to \mathbb{Z}$ such that $F=B_{f_1}\oplus\cdots\oplus B_{f_t}$. In particular, $F$ is the base polytope of a submodular function.
\end{proposition}
A proof was given in \cite{Schrijver}, but for convenience of the reader, we will also give a proof here.
\begin{proof}
Let $\mathcal{T}\subseteq \mathcal{P}(E)$ correspond to the tight constraints on $F$:
$$
\mathcal{T}=\{U\subseteq E\mid x(U)=f(U) \text{ for all $x\in F$}\}.
$$
It follows from the submodularity of $f$ that $\mathcal{T}$ is closed under taking unions and intersections.
Observe that the characteristic vectors $\{\chi^A\mid A\in \mathcal{T}\}$ span a $t$-dimensional space $V$.
Let $\emptyset=A_0\subset A_1\subset\cdots\subset A_{t'}=E$ be a maximal chain of sets in $\mathcal{T}$. We claim that $t'=t$. Observe that the characteristic vectors $\chi^{A_1},\ldots, \chi^{A_{t'}}$ are linearly independent and span a $t'$-dimensional subspace $V'\subseteq V$. Hence $t'\leq t$.
To prove equality, suppose that there exists an $A\in \mathcal{T}$ such that $\chi^A\not\in V'$. Take such an $A$ that is inclusionwise maximal. Now let $i\geq 0$ be maximal such that $A_i\subseteq A$. Then $A_i\subseteq A_{i+1}\cap A\subsetneq A_{i+1}$. Hence by maximality of the chain, $A_{i+1}\cap A=A_i$. By maximality of $A$, we have $\chi^{A\cup A_{i+1}}\in V'$ and hence, $\chi^A=\chi^{A\cap A_{i+1}}+\chi^{A\cup A_{i+1}}-\chi^{A_{i+1}}\in V'$, contradicting the choice of $A$. This shows that $t'=t$.
Define $E_i=A_i\setminus A_{i-1}$ for $i=1,\ldots, t$. Define $f_i:\mathcal{P}(E_i)\to \mathbb{Z}$ by $f_i(U):=f(A_{i-1}\cup U)-f(A_{i-1})$ for all $U\subseteq E_i$. We will show that
\begin{equation}
F=B_{f_1}\oplus\cdots\oplus B_{f_t}.
\end{equation}
To see the inclusion `$\subseteq$', let $x=(x_1,\ldots,x_t)\in F$. Then $x(A_i)=f(A_i)$ holds for $i=0,\ldots,t$. Hence for any $i=1,\ldots,t$ and any $U\subseteq E_i$ we have
\begin{equation}
x_i(U)=x(A_{i-1}\cup U)-x(A_{i-1})\leq f(A_{i-1}\cup U)-f(A_{i-1})=f_i(U),
\end{equation}
and equality holds for $U=E_i$.
To see the converse inclusion `$\supseteq$', let $x=(x_1,\ldots,x_t)\in B_{f_1}\oplus\cdots\oplus B_{f_t}$. Clearly
\begin{equation}
x(A_k)=\sum_{i=1}^k x_i(E_i)=\sum_{i=1}^k (f(A_i)-f(A_{i-1}))=f(A_k),
\end{equation}
in particular $x(E)=f(E)$. To complete the proof, we have to show that $x(U)\leq f(U)$ holds for all $U\subseteq E$. Suppose for contradiction that $x(U)>f(U)$ for some $U$. Choose such a $U$ inclusionwise minimal. Now take $k$ minimal such that $U\subseteq A_k$. Then we have
\begin{eqnarray}
x(U\cup A_{k-1})&=&x(A_{k-1})+x_k(E_k\cap U)\nonumber\\
&\leq& f(A_{k-1})+f_k(E_k\cap U)=f(U\cup A_{k-1}).
\end{eqnarray}
Since $x(A_{k-1}\cap U)\leq f(A_{k-1}\cap U)$ by minimality of $U$, we have
\begin{eqnarray}
x(U)&=&x(A_{k-1}\cup U)+x(A_{k-1}\cap U)-x(A_{k-1})\nonumber\\
&\leq &f(A_{k-1}\cup U)+f(A_{k-1}\cap U)-f(A_{k-1})\leq f(U).
\end{eqnarray}
This contradicts the choice of $U$.
\end{proof}
\section{The main theorem}
In this section we prove our main theorem. For $B_f\subseteq \mathbb{R}^E$, denote $\mathrm{cr} (B_f):=\mathrm{cr} (B_f\cap \mathbb{Z}^E)$.
\begin{theorem}\label{main}
Let $f:\mathcal{P}(E)\to \mathbb{Z}$ be a submodular function. Then $\mathrm{cr} (B_f)=\dim B_f+1$.
\end{theorem}
We will need the following lemma.
\begin{lemma}\label{directsum}
Let $B_{f_1}, \ldots, B_{f_t}$ be base polytopes. Then $\mathrm{cr}(B_{f_1}\oplus\cdots\oplus B_{f_t})\leq \mathrm{cr}(B_{f_1})+\cdots+\mathrm{cr}(B_{f_t})-(t-1)$.
\end{lemma}
\begin{proof}
It suffices to show the lemma in the case $t=2$.
Let $k$ be a positive integer and let $w=(w_1,w_2)$ be an integer vector in $k\cdot(B_{f_1}\oplus B_{f_2})$. Let $w_1=\sum_{i=1}^r m_ix_i$ and $w_2=\sum_{i=1}^s n_iy_i$, where the $m_i,n_i$ are positive integers and the $x_i\in B_{f_1}$, $y_i\in B_{f_2}$ are integer vectors. Denote
\begin{eqnarray}
\{0,m_1,m_1+m_2,\ldots,m_1+\cdots+m_r\}&\cup&\nonumber\\
\{0,n_1,n_1+n_2,\ldots,n_1+\cdots+n_s\}&=&\{l_0,l_1,\ldots,l_q\},
\end{eqnarray}
where $0=l_0<l_1<\cdots<l_q=k$. Since $m_1+\cdots+m_r=n_1+\cdots+n_s=k$, we have $q\leq r+s-1$. For any $i=1,\ldots,q$, there exist unique $j,j'$ such that $m_1+\cdots+m_{j-1}<l_i\leq m_1+\cdots+m_j$ and $n_1+\cdots+n_{j'-1}<l_i\leq n_1+\cdots+n_{j'}$. Denote
$z_i:=(x_j,y_{j'})$. We now have the decomposition $w=\sum_{i=1}^q (l_i-l_{i-1})z_i$.
\end{proof}
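The merging step in this proof is effectively an algorithm: interleave the two decompositions at the breakpoints $l_0<l_1<\cdots<l_q$ and pair up the pieces. A small sketch (with made-up decompositions of total weight $k=5$, points written as tuples and the direct sum as concatenation) is:

```python
def merge(dec1, dec2):
    # dec = list of (multiplicity, point) pairs with equal total multiplicity k;
    # returns at most len(dec1) + len(dec2) - 1 pairs for the direct sum
    out, i, j = [], 0, 0
    a, b = dec1[0][0], dec2[0][0]   # multiplicity remaining in current pieces
    while True:
        t = min(a, b)
        out.append((t, dec1[i][1] + dec2[j][1]))  # concatenate the two points
        a, b = a - t, b - t
        if a == 0:
            i += 1
            if i == len(dec1):
                break
            a = dec1[i][0]
        if b == 0:
            j += 1
            if j == len(dec2):
                break
            b = dec2[j][0]
    return out

dec1 = [(2, (1, 0)), (3, (0, 1))]   # w1 = (2, 3), total weight k = 5
dec2 = [(1, (1, 1)), (4, (2, 0))]   # w2 = (9, 1), total weight k = 5
print(merge(dec1, dec2))
```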
We conclude this section with a proof of Theorem \ref{main}.
\begin{proof}[Proof of Theorem \ref{main}.]
The inequality $\mathrm{cr} (B_f)\geq \dim B_f+1$ is clear. We will prove the converse inequality by induction on $\dim B_f+|E|$, the case $|E|=1$ being clear. Let $E$ be a finite set, $|E|\geq 2$ and let $f:\mathcal{P}(E)\to \mathbb{Z}$ be submodular.
Let $k$ be a positive integer and let $w\in kB_f\cap \mathbb{Z}^E$. We have to prove that $w$ is the positive integer combination of at most $\dim B_f+1$ integer points in $B_f$. We may assume that
\begin{equation}
\dim B_f=|E|-1.
\end{equation}
Indeed, suppose that $\dim B_f=|E|-t$ for some $t\geq 2$. Then by Proposition \ref{faces}, there exist a partition $E=E_1\cup\cdots\cup E_t$ and submodular functions $f_i:\mathcal{P}(E_i)\to \mathbb{Z}$ such that $B_f=B_{f_1}\oplus\cdots\oplus B_{f_t}$. By induction, $\mathrm{cr} (B_{f_i})=\dim B_{f_i}+1$ for every $i$. Hence by Lemma \ref{directsum}
\begin{eqnarray}
\mathrm{cr} (B_f)&\leq&\mathrm{cr} (B_{f_1})+\cdots+\mathrm{cr} (B_{f_t})-(t-1)\nonumber\\
&=&\dim B_{f_1}+\cdots+\dim B_{f_t}+1=\dim B_f+1.
\end{eqnarray}
Fix an element $e\in E$. Write $w(e)=kq+r$ where $r,q$ are integers and $0\leq r\leq k-1$. Let $f'=f|(e,q+1)$.
By Theorem \ref{idp}, we can find integer vectors $y_1,\ldots, y_k\in B_{f'}$ such that $w=y_1+\cdots+y_k$. We may assume that $y_i(e)=q+1$ for $i=1,\ldots,r$. Indeed, if $y_i(e)\leq q$ would hold for at least $k-r+1$ values of $i$, then we would arrive at the contradiction $w(e)\leq (k-r+1)q+(r-1)(q+1)\leq kq+r-1<w(e)$.
Let $f'':=f|(e,q)$. Denote $w':=y_1+\cdots+y_r$. So we have decomposed $w$ into integer vectors
\begin{eqnarray}
w'&\in &rB_{f'}=B_{rf'}\nonumber\\
w-w'&=&y_{r+1}+\cdots+y_k\in (k-r)B_{f''}=B_{(k-r)f''}.
\end{eqnarray}
We may assume that $r\neq 0$, since otherwise $w\in kF$, where $F$ is the face $B_{f''}\cap \{x\mid x(e)=q, x(E)=f(E)\}$ of dimension $\dim F\leq |E|-2$ (since $|E|\geq 2$). Then by induction we could write $w$ as a nonnegative integer linear combination of at most $1+(\dim F)<\dim B_f+1$ integer vectors in $B_{f''}\subseteq B_f$.
Consider the intersection
\begin{equation}
P:=B_{rf'}\cap B_{w+(k-r)(f'')^*}.
\end{equation}
Observe that $P$ is nonempty, since it contains $w'$. Furthermore, by Theorem \ref{edmonds}, $P$ is an integer polytope. Hence taking an integer vertex $x'$ of $P$ and denoting $x'':=w-x'$, we have that $x'$ is an integer vector of $B_{rf'}$ and $x''$ is an integer vector of $B_{(k-r)f''}$.
Let $F'$ be the inclusionwise minimal face of $B_{rf'}$ containing $x'$ and let $F''$ be the inclusionwise minimal face of $B_{w+(k-r)(f'')^*}$ containing $x'$. Denote $H':=\mathrm{aff.hull}(F')$ and $H'':=\mathrm{aff.hull}(F'')$. Since $x'$ is a vertex of $P$, we have
\begin{equation}
H'\cap H''=\{x'\}.
\end{equation}
Indeed, every supporting hyperplane of $B_{rf'}$ containing $x'$ also contains $F'$ by minimality of $F'$, and hence contains $H'$. Similarly, every supporting hyperplane of $B_{w+(k-r)(f'')^{*}}$ containing $x'$ also contains $H''$. Since $\{x'\}$, being a vertex of $P$, is an intersection of supporting hyperplanes of the two polytopes, the claim follows.
Observe that both $F'$ and $F''$ are contained in the affine space
\begin{equation}
\{x\in\mathbb{R}^{E}\mid x(E)=rf(E),\ x(e)=r(q+1)\},
\end{equation}
which has dimension $|E|-2$ since $|E|\geq 2$. It follows that
\begin{eqnarray}
\dim F'+\dim F''&=&\dim H'+\dim H''\nonumber\\
&=&\dim(\mathrm{aff.hull}(H'\cup H''))+\dim(H'\cap H'')\nonumber\\
&\leq& |E|-2.
\end{eqnarray}
Since $F''$ is a face of $B_{w+(k-r)(f'')^*}$ containing $x'$, we have that $w-F''$ is a face of $B_{(k-r)f''}$ containing $x''$. By induction we see that
\begin{eqnarray}
\mathrm{cr} (F')+\mathrm{cr} (w-F'')&\leq& (\dim F'+1)+(\dim (w-F'')+1)\nonumber\\
&=&\dim F'+\dim F''+2\leq |E|.
\end{eqnarray}
This gives a decomposition of $w=x'+x''$ using at most $|E|=\dim B_f+1$ different bases of $f$, completing the proof.
\end{proof}
\section{motivation and preliminaries}
Let us start with some motivation. Let $E$ be a Banach lattice. It is known that there are several non-equivalent notions related to the Banach-Saks property on $E$ which are connected in many remarkable ways. In particular, order continuity and also reflexivity of $E$ can be characterized in terms of these notions (see \cite{GTX, Z2} for more information).
Although Banach lattices form the most important part of the category of all locally solid vector lattices, the general theory of locally solid vector lattices has practical applications in economics, risk measures, etc. (see \cite[Chapter 9]{AB1}). Furthermore, this category contains many more examples than Banach lattice theory, so we have a much wider range of spaces at our disposal. All of these reasons motivate us to investigate, in the category of all locally solid vector lattices, several known notions originally defined for Banach lattices; one principal such notion is the Banach-Saks property.
In this paper, we consider the Banach-Saks property (in four non-equivalent ways) in the category of all locally solid vector lattices as an extension of the corresponding properties in the category of all Banach lattices. Recently, several different notions of the Banach-Saks property in Banach lattices were considered in \cite{Z2}. Our aim in this note is to investigate and, where possible, extend the related results of that paper to the category of all locally solid vector lattices.
Now, we recall some preliminaries we need in the sequel.
A vector lattice $X$ is called {\em order complete} ( {\em $\sigma$-order complete}) if every non-empty bounded above subset ( bounded above countable subset) of $X$ has a supremum. A set $S\subseteq X$ is called a {\em solid} set if $x\in X$, $y\in S$ and $|x|\leq |y|$ imply that $x\in S$. Also, recall that a linear topology $\tau$ on a vector lattice $X$ is referred to as {\em locally solid} if it has a local basis at zero consisting of solid sets.
Suppose $X$ is a locally solid vector lattice. A net $(x_{\alpha})\subseteq X$ is said to be \emph{unbounded convergent} to $x\in X$ (in brief, $x_{\alpha}\xrightarrow{u\tau}x$) if for each $u\in X_{+}$, $|x_{\alpha}-x|\wedge u\xrightarrow{\tau}0$; when $\tau$ is a norm topology or the absolute weak topology ($|\sigma|(X,X')$, in notation), we write $x_{\alpha}\xrightarrow{un}x$ and $x_{\alpha}\xrightarrow{uaw}x$, respectively; for more details, see \cite{DOT,T,Z}. Observe that a locally solid vector lattice $(X,\tau)$ is said to have the {\em Levi property} if every $\tau$-bounded upward directed set in $X_{+}$ has a supremum. Moreover, recall that a locally solid vector lattice $(X,\tau)$ possesses the {\em Lebesgue property} if for every net $(u_{\alpha})$ in $X$, $u_{\alpha}\downarrow 0$ implies that $u_{\alpha}\xrightarrow{\tau}0$. Finally, observe that $(X,\tau)$ has the {\em pre-Lebesgue property} if $0\leq u_{\alpha}\uparrow\leq u$ implies that the net $(u_{\alpha})$ is $\tau$-Cauchy.
For undefined terminology and related notions, see \cite{AA,AB1,AB}. All locally solid vector lattices in this note are assumed to be Hausdorff.
\section{main result}
First, we present the main definition of the paper: The Banach-Saks property with a topological flavor; for recent progress in Banach lattices, see \cite{GTX,Z2}.
\begin{definition}
Suppose $(X,\tau)$ is a locally solid vector lattice. $X$ is said to have
\begin{itemize}
\item[\em (i)] {The Banach-Saks property ({\bf LSBSP}, for short) if every $\tau$-bounded sequence $(x_n)\subseteq X$ has a subsequence $(x_{n_k})$ whose sequence of Ces\`{a}ro means is $\tau$-convergent; that is, $(\frac{1}{n}\Sigma_{k=1}^{n}x_{n_k})$ is $\tau$-convergent.}
\item[\em (ii)]
{The strong unbounded Banach-Saks property ({\bf LSSUBSP}, in notation) if every $\tau$-bounded $u\tau$-null sequence $(x_n)\subseteq X$ has a subsequence $(x_{n_k})$ whose sequence of Ces\`{a}ro means is $\tau$-convergent.}
\item[\em (iii)]
{The unbounded Banach-Saks property ({\bf LSUBSP}, in notation) if every $\tau$-bounded $u|\sigma|(X,X')$-null sequence $(x_n)\subseteq X$ has a subsequence $(x_{n_k})$ whose sequence of Ces\`{a}ro means is $\tau$-convergent.}
\item[\em (iv)] {The disjoint Banach-Saks property ({\bf LSDBSP}, in brief) if every $\tau$-bounded disjoint sequence $(x_n)\subseteq X$ has a subsequence $(x_{n_k})$ whose sequence of Ces\`{a}ro means is $\tau$-convergent.}
\end{itemize}
\end{definition}
It is clear that {\bf LSBSP} implies {\bf LSSUBSP}, {\bf LSUBSP} and {\bf LSDBSP}.
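To fix ideas, the classical scalar example already shows the phenomenon behind these definitions: the bounded, non-convergent sequence $((-1)^n)$ has convergent Ces\`{a}ro means. A minimal sketch (our illustration, not from the paper):

```python
def cesaro_means(seq):
    # running Cesaro means (1/n) * sum_{k=1}^{n} x_k
    means, total = [], 0.0
    for n, x in enumerate(seq, start=1):
        total += x
        means.append(total / n)
    return means

xs = [(-1) ** n for n in range(1, 1001)]  # bounded but not convergent
print(cesaro_means(xs)[-1])               # 0.0, since the means converge to 0
```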
Now, we extend \cite[Theorem 3.2]{DOT} to the category of all Fr\'echet spaces. The proof is essentially the same; it suffices to replace the norm with the compatible translation-invariant metric. Recall that a metrizable locally solid vector lattice $X$ is a Fr\'echet space if it is complete.
\begin{lemma}\label{701}
Suppose $(X, \tau)$ is a Fr\'echet space and $(x_{\alpha})$ is a $u\tau$-null net in $X$. Then there exist an increasing sequence of indices $(\alpha_k)$ and a disjoint sequence $(d_k)$ such that $x_{\alpha_k}-d_k\xrightarrow{\tau} 0$.
\end{lemma}
Note that metrizability is essential in Lemma \ref{701} and cannot be removed. Consider $X=\ell_1$ with the absolute weak topology. Put $u_n=(\underbrace{0,\ldots,0}_{n-times},\underbrace{n,\ldots,n}_{n-times},0,\ldots)$. It is easy to see that $u_n\rightarrow0$ in the unbounded absolute weak topology ($uaw$-topology); for more details, see \cite{Z}. Now, suppose $(d_n)$ is any disjoint sequence in $X$. It is easy to see that the sequence $(u_n-d_n)$ has at least one component with value $n$. This means that $u_n-d_n\nrightarrow 0$ in the weak topology, and hence not in the absolute weak topology.
In this step, let us recall definition of the $AM$-property; for more details, see \cite{Z1}.
Suppose $(X,\tau)$ is a locally solid vector lattice. We say that $X$ has the $AM$-property provided that for every bounded set $B\subseteq X$, $B^{\vee}$ is also bounded with the same scalars; namely, given a zero neighborhood $V$ and any positive scalar $\alpha$ with $B\subseteq \alpha V$, we also have $B^{\vee}\subseteq \alpha V$. Note that by $B^{\vee}$, we mean the set of all finite suprema of elements of $B$. Observe that $B^{\vee}$ can be considered as an increasing net in $X$.
\begin{lemma}\label{5001}
Suppose $X$ is a locally solid vector lattice with the $AM$-property and $U$ is an arbitrary solid zero neighborhood in $X$. Then, for each $m\in \Bbb N$, $U\vee\ldots\vee U=U$, in which $U$ appears $m$ times.
\end{lemma}
\begin{proof}
Suppose $x\in U$; without loss of generality, we may assume that $x\geq 0$, so that $x=x\vee 0\vee\ldots\vee 0$. Therefore, $x\in U\vee\ldots\vee U$. For the other direction, suppose that $x_1,\ldots,x_m\in U$. Put $F=\{x_1,\ldots,x_m\}$; $F$ is bounded and $F\subseteq U$, so by the definition of the $AM$-property, $x_1\vee\ldots\vee x_m \in U$.
\end{proof}
\begin{lemma}\label{702}
Suppose $X$ is a locally solid vector lattice with the $AM$-property. Then $X$ possesses the {\bf LSDBSP}.
\end{lemma}
\begin{proof}
Suppose $(x_{n})$ is a $\tau$-bounded disjoint sequence in $X$ and $V$ is an arbitrary solid zero neighborhood in $X$. There exists a positive scalar $\gamma$ with $(x_n)\subseteq \gamma V$; since $V$ is solid, also $(|x_n|)\subseteq \gamma V$. Assume that $B$ is the set of all finite suprema of elements of $(|x_n|)$. By the $AM$-property, $B$ is also bounded and $B\subseteq \gamma V$. Since the $x_n$ are disjoint, $\left|\Sigma_{k=1}^{n}x_{n_k}\right|=\bigvee_{k=1}^{n}|x_{n_k}|$ for every subsequence $(x_{n_k})$ of $(x_n)$. Therefore, using solidity of $V$ once more,
\[\left|\frac{1}{n}\Sigma_{k=1}^{n}x_{n_k}\right|=\frac{1}{n}\bigvee_{k=1}^{n}|x_{n_k}|\in \frac{\gamma}{n}V\subseteq V,\]
for sufficiently large $n$, so that $\frac{1}{n}\Sigma_{k=1}^{n}x_{n_k}\in V$. This completes the proof.
\end{proof}
\begin{proposition}\label{103}
Suppose $X$ is a $\sigma$-order complete locally solid vector lattice. Then {\bf LSUBSP} in $X$ implies {\bf LSDBSP}.
\end{proposition}
\begin{proof}
First, assume that $X$ possesses the pre-Lebesgue property. Suppose $(x_n)$ is a $\tau$-bounded disjoint sequence in $X$. By \cite[Theorem 4.2]{T}, $x_{n}\xrightarrow{u\tau}0$ so that $x_{n}\xrightarrow{u|\sigma|(X,X')}0$. Therefore, there exists an increasing sequence $(n_k)$ of indices such that the sequence $(\frac{1}{n}\Sigma_{k=1}^{n}x_{n_k})$ is $\tau$-convergent. Now, suppose $X$ does not have the pre-Lebesgue property. By \cite[Theorem 3.28]{AB1}, $\ell_{\infty}$ is lattice embeddable in $X$, so that $X$ cannot have {\bf LSUBSP}. This completes the proof.
\end{proof}
For the converse, we have the following.
\begin{theorem}\label{703}
Suppose $(X,\tau)$ is a $\sigma$-order complete locally convex-solid vector lattice which is also a Fr\'{e}chet space. Then {\bf LSDBSP} implies {\bf LSUBSP} in $X$ if and only if $X$ does not contain a lattice copy of $\ell_{\infty}$.
\end{theorem}
\begin{proof}
The direct implication is trivial since $\ell_{\infty}$ does not have {\bf LSUBSP}. For the converse, assume that $(x_n)$ is a $\tau$-bounded $u|\sigma|(X,X')$-null sequence in $X$. This means that for each $u\in X_{+}$, $x_{n}\wedge u\rightarrow 0$ absolutely weakly.
By \cite[Theorem 6.17 and Theorem 3.28]{AB1}, $x_n\wedge u \xrightarrow{\tau}0$. By Lemma \ref{701}, there exist an increasing sequence $(n_k)$ of indices and a disjoint sequence $(d_k)$ in $X$ such that $x_{n_k}-d_k\xrightarrow{\tau}0$. By passing to a subsequence, we may assume that $\frac{1}{m}\sum_{k=1}^{m}d_k\rightarrow 0$. Now, the result follows from the following identity; observe that when a sequence in a topological vector space is null, then so is the sequence of its Ces\`{a}ro means.
\[\frac{1}{m}\sum_{k=1}^{m}x_{n_k}-\frac{1}{m}\sum_{k=1}^{m}d_k= \frac{1}{m}\sum_{k=1}^{m}(x_{n_k}-d_k)\rightarrow 0.\]
\end{proof}
It is clear that {\bf LSUBSP} implies {\bf LSSUBSP}. For the converse, we have the following characterization.
\begin{theorem}
Suppose $(X,\tau)$ is a $\sigma$-order complete locally convex-solid vector lattice. Then {\bf LSSUBSP} in $X$ implies {\bf LSUBSP} if and only if $X$ does not contain a lattice copy of $\ell_{\infty}$.
\end{theorem}
\begin{proof}
Note that $\ell_{\infty}$ possesses {\bf LSSUBSP} by \cite[Theorem 2.3]{KMT}, but it fails to have {\bf LSUBSP}; otherwise, it would have the weak Banach-Saks property, which is not possible.
Now, suppose $X$ does not contain a lattice copy of $\ell_{\infty}$. By \cite[Theorem 3.28]{AB1}, $X$ possesses the pre-Lebesgue property. Now, suppose $(x_n)\subseteq X$ is a bounded sequence which is $u|\sigma|(X,X')$-null. Therefore, by \cite[Theorem 6.17]{AB1}, $(x_n)$ is also $u\tau$-null. By the assumption, there exists a subsequence $(x_{n_k})$ of $(x_n)$ whose Ces\`{a}ro means converge, as claimed.
\end{proof}
\begin{proposition}
Suppose a topologically complete locally convex-solid vector lattice $(X,\tau)$ possesses {\bf LSBSP}. Then $(X,\tau)$ possesses the Lebesgue and Levi properties.
\end{proposition}
\begin{proof}
Suppose not. By \cite[Theorem 1]{W}, $X$ contains a lattice copy of $c_0$. This means that $X$ fails to have {\bf LSBSP}, a contradiction.
\end{proof}
\begin{proposition}\label{6000}
Suppose $(X,\tau)$ is an atomic locally solid vector lattice which possesses the Lebesgue and Levi properties. Then {\bf LSSUBSP} in $X$ implies {\bf LSBSP}.
\end{proposition}
\begin{proof}
Suppose $(x_n)$ is a $\tau$-bounded sequence in $X$. By \cite[Theorem 6]{DEM}, there exists a subsequence $(x_{n_k})$ of $(x_n)$ which is $u\tau$-convergent. By the assumption, there is a sequence $(n_{k_{k'}})$ of indices such that the sequence $(x_{n_{k_{k'}}})$ has convergent Ces\`{a}ro means. This completes the proof.
\end{proof}
\begin{corollary}
Suppose $(X,\tau)$ is an atomic locally solid vector lattice which possesses the Lebesgue and Levi properties. Then {\bf LSUBSP} in $X$ implies {\bf LSBSP}.
\end{corollary}
Observe that the atomicity assumption in Proposition \ref{6000} is necessary and cannot be omitted; consider $X=L^1[0,1]$ with the norm topology. It possesses the Lebesgue and Levi properties, but it is not atomic. Suppose $(f_n)$ is a bounded sequence in $X$ which is $un$-convergent to zero; by \cite[Corollary 4.2]{DOT}, it is convergent in measure. By \cite[Theorem 1.82]{AA}, there exists a subsequence $(f_{{n}_k})$ of $(f_{n})$ which is also pointwise convergent. By \cite[Corollary 1.86]{AA}, it is relatively uniformly convergent to zero. This implies that the Ces\`{a}ro means of this subsequence converge. So, $X$ possesses {\bf LSSUBSP}; nevertheless, it certainly fails to have {\bf LSBSP}.
\section{Introduction}
The Large Hadron Collider (LHC) has provided significant opportunities for searches of new physics beyond the standard model.
In the future extensions of the LHC, the sign of new physics may appear behind high $p_T$ jets originating from the massive gauge bosons, top quarks, or Higgs bosons.
Those boosted jets can be identified by examining jet substructures \cite{Butterworth:2008iy}, and recently there have been considerable efforts on using deep learning to tag them \cite{Almeida:2015jua,deOliveira:2015xxd,Louppe:2017ipp,Cheng:2017rdo,Qu:2019gqs}.
Jet classification relies on the substructure of jets from boosted massive particles \cite{Butterworth:2008iy,Dasgupta:2013ihk,Larkoski:2014wba, Krohn:2009th,Ellis:2009su,Ellis:2009me,Dreyer:2018tjj}.
The quantification of those features may be performed with jet shape variables, such as $n$-subjettiness \cite{Thaler:2010tr}, or energy correlation functions \cite{Larkoski:2013eya}.
In particular, these variables are often described by a set of $n$-point energy correlators \cite{Tkachov:1995kk,Komiske:2017aww}, which is a basis of jet substructure variables with infrared and collinear (IRC) safety conditions.
On the other hand, counting variables, such as the number of charged tracks \cite{Gallicchio:2011xq}, are yet another type of discriminative variable in jet tagging, but predicting them from QCD is subtle because they are not IRC safe.
Those IRC unsafe features are often empirically modeled in event simulations, and the predicted distributions often deviate sizably from the experimental data.
We therefore have to use them carefully, so that classification models are not biased toward a particular simulator.
Meanwhile, this feature engineering may be replaced with deep learning.
For example, convolution-based networks \cite{deOliveira:2015xxd,Qu:2019gqs} using (pixelated) particle distributions, and recurrent neural networks \cite{Louppe:2017ipp,Cheng:2017rdo} using predefined sequence of particles are known for good jet tagging performance \cite{Kasieczka:2019dbj}.
Those networks can represent a wide variety of functions, and they cover the high-dimensional phase space of inputs.
However, some phase space of the training sample may be underrepresented by a finite number of samples, and the jet taggers based on them require high-quality samples to get the best performance.
Because of that, it is often necessary to use dimensionality reductions, such as introducing bottlenecks in the middle of their architecture.
But those reduction techniques may not respect the physical constraints of the system, and explaining the outputs in domain-specific languages is less straightforward.
Intensive post-analysis is often required in order to get an insight from the trained networks.
In this regard, starting from physics-inspired inputs and network architectures \cite{Komiske:2018cqr,Chakraborty:2019imr,Chakraborty:2020yfc,Andreassen:2018apy,Andreassen:2019txo} has advantages over the general functional model trained on primitive inputs in controllability and interpretability.
For example, the energy flow network (EFN) \cite{Komiske:2018cqr,NIPS2017_f22e4747} and the relation network (RN) \cite{Chakraborty:2019imr,Chakraborty:2020yfc, raposo2017discovering,NIPS2017_7082} are known for their good tagging performance under the IRC-safe constraints \cite{Kasieczka:2019dbj,Chakraborty:2020yfc}.
If those constrained models cover all the relevant features for solving the given problem, the model will have equal performance compared to the general-purpose models \cite{Chakraborty:2019imr,Chakraborty:2020yfc}.
So far, the networks covering IRC safe variables are well studied,
but constrained models for IRC unsafe variables are not available yet.
We need architectures bridging between general models and IRC unsafe variables.
Although deep learning models that systematically cover those IRC unsafe variables are not available, there are several frameworks based on multiplicities in coarse-graining
\cite{Davighi:2017hok},
dilation and Minkowski functionals \cite{Chakraborty:2020yfc},
and Delaunay triangulation and its topology \cite{Li:2020jdb}.
In this paper, we thoroughly reintroduce the approach in \cite{Chakraborty:2020yfc} in terms of the mathematical morphology and integral geometry, build a constrained model for the IRC unsafe variables, and show its analytic representation in the large network width limit.
This paper is organized as follows.
In \sectionref{sec:mf}, we introduce the morphological analysis on jet images using Minkowski functionals (MF), which is a generalization of counting variables by using its abstract algebraic features.
We point out that the MFs can be represented by a chain of convolutions of the jet images with $2\times 2$ filters, and therefore a convolutional neural network (CNN) can utilize them.
\Sectionref{sec:irc_safe_net} reviews the two IRC-safe energy correlator-based networks
which may provide complementary information to the MFs.
In the case of jet image analysis, we show that the RN simplifies to a multilayer perceptron (MLP) taking a two-point energy correlation $S_2(R)$, which is
an energy-weighted count of pairs of jet constituents at a given angular scale.
On the other hand, the EFN is an MLP taking the jet image itself, where the jet image is an energy flow with a finite angular resolution.
In \sectionref{sec:combined_network}, we introduce a modular architecture combining the morphological analysis and RN (or EFN).
We simply combine outputs of each network using another MLP to get the final outputs.
We are going to compare the RN (or EFN) augmented with the morphological analysis against the baseline CNN.
\Sectionref{sec:benchmark_tagging} is devoted to comparing the jet tagging performance of the combined setup using RN or EFN with that of the CNN.
We consider two benchmark scenarios: tagging semi-visible jets \cite{Cohen:2015toa}, and top jets.
By using the semi-visible jet tagging example, we show that CNN can learn the distinctive feature of the MFs when the difference in the MF distributions between the signal and background is significant.
Besides, our combined architectures and CNN augmented by the MFs show better performance than baseline CNN.
This seemingly contradicts the observation that a CNN can represent the MFs.
These performance differences may originate from finite network size effects and regularization.
\Sectionref{sec:training_performance} discusses computational advantages of our constrained architecture compared to those of CNN.
We show that the constrained architecture has better generalization performance when the number of training samples is small.
We also point out that our setup is faster and memory-efficient because of lower computational complexity.
In short, the MFs can efficiently represent IRC-unsafe information about the jet constituents.
Existing event simulation tools such as \texttt{Pythia} \cite{Sjostrand:2014zea} and \texttt{Herwig} \cite{Bellm:2015jjp,Bahr:2008pv} predict different soft particle distributions.
Therefore, special care is needed to estimate the classification performance using simulated datasets.
\Sectionref{sec:ps_and_mf} shows the generator dependence of jet constituent distributions in terms of MFs and describes the connection of qualitative features to the shower algorithms.
\section{Generalization of Counting Variables in Jet Physics}
\label{sec:mf}
In order to generalize the counting variables, such as particle multiplicities, we need to
introduce a mathematical concept called \emph{valuation}.
The particle multiplicity, which is essentially the number of elements in a set, has the following characteristic property for the union and intersection of two sets of particles, $A$ and $B$,
\begin{equation}
n(A \cup B) = n(A) + n(B) - n(A \cap B).
\end{equation}
This abstract mathematical feature is called valuation in measure theory.
For example, the area of a region is a valuation.
It would be worth exploring the space of valuations to generalize the counting variables,
and Minkowski functionals and Hadwiger's theorem are the important tools for that.
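As a quick illustration (not part of the formal development), the cardinality of finite sets satisfies the valuation property above; a minimal Python check:

```python
# Illustration: the cardinality of finite sets is a valuation,
# n(A ∪ B) = n(A) + n(B) - n(A ∩ B).
def is_valuation_pair(A, B):
    return len(A | B) == len(A) + len(B) - len(A & B)

assert is_valuation_pair({1, 2, 3, 4}, {3, 4, 5})
assert is_valuation_pair(set(), {1, 2})
```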
\subsection{Minkowski Functionals and Hadwiger's theorem}
The MFs of the jet constituents are the key characteristics for analyzing the space of valuations of jet substructures.
Since we are going to analyze jet images on the pseudorapidity-polar coordinate plane, we will focus on discussing the MFs for two-dimensional Euclidean space $\mathbb{R}^2$.
We also denote the coordinate vector as $\vec{R}=(\eta,\phi)$.
For a closed and bounded set $S$ in $\mathbb{R}^2$, there are three MFs: the area $A$, the boundary length $L$, and the Euler characteristic $\chi$.
They can be expressed as the integral of local features of $S$ as follows,
\begin{equation}
\label{eqn:mfs:continuum}
A
=
\int_S d^2 \vec{R},
\quad
L
=
\int_{\partial S} d \vec{R},
\quad
\chi
=
\frac{1}{2\pi} \int_{\partial S} \kappa \, d \vec{R}
\end{equation}
where $\kappa$ is the curvature of the boundary $\partial S$.
The integral representation of the Euler characteristic is the Gauss-Bonnet theorem.
The MFs are useful measures because of their completeness.
Hadwiger's theorem \cite{hadwigeb1956integralsatze, https://doi.org/10.1112/S0025579300014625} states that these three functionals form a complete basis for the translation- and rotation-invariant valuations of convex bodies,
where a convex body is a closed and bounded convex set with non-empty interior.
Let $F$ be a function that satisfies the following properties,
\begin{itemize}
\item
\emph{Valuation}:
for any two convex bodies $B_i$ and $B_j$,
\begin{equation}
F(B_i \cup B_j) = F(B_i)+ F(B_j) - F(B_i \cap B_j).
\label{eq:association}
\end{equation}
\item
\emph{Invariance}: for any translation or rotation $g$, the measure $F$ is invariant, i.e., for any convex body $B$,
\begin{equation}
F(B) = F(gB).
\label{eq:tr}
\end{equation}
\item
\emph{Continuity}: for any convergent sequence of convex bodies, $B_i~\rightarrow~B$,
\begin{equation}
\lim_{i \rightarrow \infty} F(B_i) = F(B)
\end{equation}
\end{itemize}
Then for any $F$, there exist three constants $c_0$, $c_1$, and $c_2$ such that
\begin{equation}
F = \sum_{\nu=0,1,2} c_\nu \mathrm{MF}_\nu = c_0 \, A + c_1 \, L + c_2 \, \chi,
\end{equation}
where $\mathrm{MF}_\nu$ is $(A, L, \chi)$ for $\nu=(0,1,2)$, respectively.
\begin{figure}
\hspace*{-0.5em}
\subfloat[\label{fig:morphology:a}]{
\includegraphics[width=0.24\textwidth]{{Figures/image_darkqcd_jet/Event_002030_JetImage_morphology.regularized}.pdf}
}\hspace*{-1.5em}
\subfloat[\label{fig:morphology:b}]{
\includegraphics[width=0.24\textwidth]{{Figures/image_qcd_jet/Event_000203_JetImage_morphology.regularized}.pdf}
}
\caption{\label{fig:morphology}
Binary jet images of \subfigref{fig:morphology:a} a dark jet and \subfigref{fig:morphology:b} a QCD jet.
Black dots are the active pixels in $P^{(0)}$ without any filtering.
Dark gray, gray, blue, and light blue pixels are pixels in $P^{(i)}-P^{(i-1)}$ for $i=1, 2, 3, 4$, respectively.
Both of the binary images have $A^{(0)}=30$.
The dark jet model is described in \sectionref{sec:benchmark_tagging}.
}
\end{figure}
Hadwiger's theorem also holds in the geometry of the square lattice and pixelated image, but the context should be modified accordingly \cite{LEINSTER201281}.
The geometry of the square lattice has a different distance function, called the $L_1$ distance, which is the sum of the absolute values of the componentwise differences,
\begin{equation}
||\vec{R}_1 - \vec{R}_2||_1 = |\eta_1 - \eta_2| + |\phi_1 - \phi_2|.
\end{equation}
This distance is essentially identical to the length of the shortest path between two points on a square grid.
The set of points within unit $L_1$ distance from the origin differs from that in Euclidean geometry.
They form a square whose vertices are at $(0,1)$, $(0,-1)$, $(1,0)$, and $(-1, 0)$.
The statements of Hadwiger's theorem still hold under this geometry, but with two modifications.
First, the invariance under translation and rotation is replaced by the isometries of the $L_1$ space.
Second, convexity is replaced with $L_1$-convexity.
A set $B$ is called $L_1$-convex if and only if,
for any two points $\vec{R}_1$ and $\vec{R}_2$ in $B$,
there exists a path in $B$ connecting them whose components are monotonic along the path.
One clear example illustrating the difference between these two notions of convexity is an L-shaped region: it is not convex, but it is $L_1$-convex.
After these modifications, we may safely use the MFs for the pixelated image analysis.
\subsection{Morphological Analysis on Jet Images}
The morphological analysis on jet images is then performed on the filtered distribution of jet constituents projected on the pseudorapidity-polar coordinate $(\eta,\phi)$.
We consider superlevel sets of the jet image, $P^{(0)}$, i.e., the set of pixels whose energy deposit $p_{T}^{(i,j)}$ is higher than the threshold value $p_T$ \cite{PhysRevE.53.4794},
\begin{equation}
P^{(0)}[p_T] = \{(i,j) \, |\, p_{T}^{(i,j)} > p_T\},
\end{equation}
where $(i,j)$ is the integer coordinate of the given pixel.\footnote{The physical unit length of the grid is the hadronic calorimeter resolution $\Delta R = 0.1$ of our analysis.
The physical coordinates $(\eta, \phi)$ are obtained by multiplying $\Delta R$ to those integer coordinates.
}
The resulting binary images on a two-dimensional integer grid are used for the morphological analysis.
For the following discussion, we will omit the threshold argument $[p_T]$ unless it is required explicitly.
We then analyze the MFs of the images after dilation by a square called a structuring element to understand the geometric structure with the aid of mathematical morphology.
The dilation is useful for probing geometric features that are visible at the angular resolution of the size of the square.
For our pixelated image analysis, the structuring element $B^{(k)}$ is a square with side length $2 k+1$.
The dilated image $P^{(k)}$ is defined as follows.
\begin{eqnarray}
P^{(k)}
& = &
\{a + b \, \vert \, a\in P^{(0)}, b\in B^{(k)} \},
\\
B^{(k)}
& = &
\left\{(i,j) \, \vert \, i,j\in\{-k,-k+1,...,k-1,k\}\right\}.
\end{eqnarray}
Sample binary images are in \figref{fig:morphology}.
The binary image $P^{(k)}$ is analogous to a coarse-graining or smearing of the original binary image $P^{(0)}$.
We denote the three MFs of $P^{(k)}$ as $A^{(k)}$, $L^{(k)}$, and $\chi^{(k)}$.
In \cite{Chakraborty:2020yfc}, we have shown that the MFs $ A^{(0)}$ and $A^{(1)}$ improve the top jet vs.~QCD jet classification.
We also note that dilation by a square is good enough for retrieving the topology of an underlying smooth body from which the point cloud is sampled.
The topology of the dilated image is in general sensitive to the structuring element, especially when we are using a finite number of samples.
Still, the square is connected and sufficiently round, so that dilation by the square is a good topology estimation process without any glitches \cite{10.1145/1810959.1811015}.
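The superlevel-set and dilation steps above can be sketched in a few lines of Python; the grid size and threshold below are illustrative assumptions, not the values used in the analysis of this paper:

```python
import numpy as np

def superlevel_set(pt_image, pt_threshold):
    """Binary image P^(0): pixels whose energy deposit exceeds the threshold."""
    return (pt_image > pt_threshold).astype(int)

def dilate(binary, k):
    """Dilation P^(k) of a binary image by a (2k+1)x(2k+1) square element,
    implemented literally as the union of shifted copies {a + b}."""
    n_rows, n_cols = binary.shape
    padded = np.pad(binary, k)
    out = np.zeros_like(binary)
    for di in range(-k, k + 1):          # all shifts b in B^(k)
        for dj in range(-k, k + 1):
            out |= padded[k + di:k + di + n_rows, k + dj:k + dj + n_cols]
    return out
```

A library routine such as `scipy.ndimage.binary_dilation` would perform the same operation more efficiently; the explicit loop is kept here to mirror the set-theoretic definition.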
We can get some intuition for how the sequences of the MFs encode the geometric information of a given image by considering their limiting behavior.
For a scale $k$ much larger than the size of the image, $A^{(k)} \rightarrow (2k+1)^2$ because the details of the image are irrelevant to $P^{(k)}$.
In the other extreme, where $P^{(k)}$ consists of $N$ sufficiently isolated clusters, the asymptotic behavior changes to $A^{(k)} \rightarrow N (2k+1)^2$.
Therefore, the sequence $A^{(k)}$ is sensitive to the number of clusters of active pixels in the jet image.
The intermediate behavior of the MF sequences $(A^{(k)}$,~$L^{(k)}$,~$\chi^{(k)})$ contains more details about the pixel distributions.
When $P^{(k)}$ is a convex body, the MFs of $P^{(k)}$ and $P^{(k+1)}$ satisfy the following recurrence relations \cite{LEINSTER201281},
\footnote{The equations can be derived from Theorem 6.2 of \cite{LEINSTER201281}, with the $L_1$-intrinsic volumes $(V'_0,V'_1,V'_2) = (\chi, L/2, A)$ and the scale factor $\lambda=2$.}
\begin{eqnarray}
A^{(k+1)} & = & A^{(k+1)}_{\mathrm{ext}} \equiv A^{(k)} + L^{(k)} + 4 \chi^{(k)}, \\
L^{(k+1)} & = & L^{(k+1)}_{\mathrm{ext}} \equiv L^{(k)} + 8 \chi^{(k)}, \\
\chi^{(k+1)} & = & \chi^{(k+1)}_{\mathrm{ext}} \equiv \chi^{(k)}.
\label{eqn:recurrence}
\end{eqnarray}
The deviation from this relation signals that some change of the shape or topology occurs at the given angular scale.
For example, if a hole or dent is completely filled during the dilation, the above recurrence relation is violated.
Therefore, the full sequences of the MFs contain useful information about the geometry of the binary image in general.
This analysis is also a persistence analysis of geometric features of jet substructures, similar to \cite{Li:2020jdb}.
The recurrence relation also explains the asymptotic behavior of $A^{(k)}$.
Suppose that the recurrence relations of the MFs hold after the given scale $k_0$.
The solution for $A^{(k_0+k)}$ in terms of the MFs of $P^{(k_0)}$ is as follows.
\begin{equation}
A^{(k_0+k)}_{\mathrm{ext}} = A^{(k_0)} + k \, L^{(k_0)} + 4 k^2 \chi^{(k_0)}
\end{equation}
For $k \gg k_0$, the area $A^{(k_0+k)}$ is approximately $4 k^2 \chi^{(k_0)}$, and the Euler
characteristic $\chi^{(k_0)}$ can be interpreted as the number of clusters.
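As a sanity check of the recurrence relations and their closed-form solution, one can use the MFs of a dilated single pixel, for which $A^{(k)}=(2k+1)^2$, $L^{(k)}=4(2k+1)$, and $\chi^{(k)}=1$; a minimal Python verification:

```python
# Consistency check: for P^(0) a single pixel, P^(k) is a (2k+1)x(2k+1)
# square, so A^(k) = (2k+1)^2, L^(k) = 4(2k+1), chi^(k) = 1.
def mfs_single_pixel(k):
    return (2 * k + 1) ** 2, 4 * (2 * k + 1), 1

for k in range(6):
    A, L, chi = mfs_single_pixel(k)
    A1, L1, chi1 = mfs_single_pixel(k + 1)
    assert A1 == A + L + 4 * chi      # A^(k+1) = A^(k) + L^(k) + 4 chi^(k)
    assert L1 == L + 8 * chi          # L^(k+1) = L^(k) + 8 chi^(k)
    assert chi1 == chi                # chi^(k+1) = chi^(k)

A0, L0, chi0 = mfs_single_pixel(0)
for k in range(6):                    # A^(k0+k) = A^(k0) + k L^(k0) + 4 k^2 chi^(k0)
    assert mfs_single_pixel(k)[0] == A0 + k * L0 + 4 * k ** 2 * chi0
```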
We now compare the area $A^{(k)}$ with the extrapolated area $A_{\mathrm{ext}}^{(k)}$ from the MFs of $P^{(k-1)}$ in order to check whether the dilation preserves the geometric features.
The difference $\Delta A^{(k)}$ is a useful measure for checking the geometric persistence,
\begin{eqnarray}
\Delta A^{(k)}
& = &
A^{(k)}-A^{(k)}_{\mathrm{ext}}.
\end{eqnarray}
\Figref{fig:independence} shows 2D histograms of $(A^{(k)}_{\mathrm{ext}},$ $ \Delta A^{(k)})$ of the leading $p_T$ jets of QCD dijet events with
$p_{T,\mathbf{J}} \in [500,600]$~GeV.
\Figref{fig:independence:a} for $k=2$ shows that typical jets have substantial activity at short scales, so that the condition $\Delta A^{(k)} = 0$ is easily violated for small $k$.
A smeared image becomes more regular at large scales, so that many of the samples have $\Delta A^{(k)} = 0$, as shown in \figref{fig:independence:b} for $k=4$.
A similar behavior can be directly seen in the Euler characteristics.
For a small $k$, the jets occasionally have subclusters, i.e., the intrinsic topology $\chi^{(k)}$ varies considerably.
Therefore, the extrapolation $\chi_{\mathrm{ext}}^{(k)} = \chi^{(k-1)}$ is also quite different from $\chi^{(k)}$, as shown in
the 2D histogram of $(\chi^{(k-1)}, \chi^{(k)}-\chi^{(k-1)})$ in \figref{fig:independence:c}.
For a large $k$, since we are analyzing a single jet, we expect that most of the events have $\chi^{(k-1)} \simeq \chi^{(k)} \simeq 1$, as in \figref{fig:independence:d}.
Note that $\chi^{(k)}-\chi^{(k-1)}$ is positive for some events, indicating that there are holes at the scale $k-1$ and they are filled at the scale $k$.
\begin{figure}[tb]
\subfloat[\label{fig:independence:a}]{
\includegraphics[width=0.235\textwidth]{{Figures/pydijet_010_s2mf_1.0}.pdf}
}\hspace*{-0.7em}
\subfloat[\label{fig:independence:b}]{
\includegraphics[width=0.235\textwidth]{{Figures/pydijet_030_s2mf_1.0}.pdf}
}
\subfloat[\label{fig:independence:c}]{
\includegraphics[width=0.235\textwidth]{{Figures/pydijet_012_s2mf_1.0}.pdf}
}\hspace*{-0.7em}
\subfloat[\label{fig:independence:d}]{
\includegraphics[width=0.235\textwidth]{{Figures/pydijet_032_s2mf_1.0}.pdf}
}
\caption{
The correlation between the MFs at a given scale $k$ and its extrapolated values from the scale $k-1$.
The horizontal axis is the extrapolated value, and the vertical axis is the difference between the truth and extrapolated values.
The upper plots (a) and (b) are for the area $A^{(k)}$, and the lower plots (c) and (d) are for the Euler characteristic $\chi^{(k)}$.
The left plots (a) and (c) are for $k=2$, and the right plots (b) and (d) are for $k=4$.
For $k=4$, more samples have zero difference since the dilation smooths out detailed features of the jets, and their geometry and topology become more and more trivial.
}
\label{fig:independence}
\end{figure}
Note that MFs are aggregated features, and their statistical fluctuations are smaller than those of the primitive inputs.
For example, the number of active pixels $A^{(0)}$ has a relative fluctuation $\delta A^{(0)}/A^{(0)}\sim 1/\sqrt{A^{(0)}}$, while the pixel-by-pixel fluctuation is of order 1.
As a result, the training of RN with MFs is potentially more stable against the fluctuation of the energy deposit of pixels, while CNN is more susceptible to that.
Neural networks trained on these MFs thus have useful geometric measures for solving the given task.
The MFs do not use energy weighting in contrast to other energy-weighted IRC safe jet substructure observables, so that all the jet constituents are treated equally once they pass the $p_T$ threshold.
\subsection{Convolution Representation of Minkowski Functionals}
The MFs are defined as integrals of local features in the continuum limit, as in \eqref{eqn:mfs:continuum}, so they can be written as a sum of all the local contributions from finite-sized patches.
This leads to an interesting property of the MFs: they can be embedded in a CNN with finite-sized filters.
For example, the area of a two-dimensional region $S$ can be written as the following double integral of an indicator function $K$ of a square with side length $\ell$ centered at $(0,0)$.
\begin{eqnarray}
A & = &
\int_S d^2\vec{r} \int_{\mathbb{R}^2} d^2 \vec{r}_0 \, \frac{1}{\ell^2} K_{\ell}(\vec{r}-\vec{r}_0)
\\
K_{\ell}(x,y) & = &
\begin{cases}
1 & x, y \in \left[-\frac{\ell}{2} , \frac{\ell}{2}\right] \\
0 & \mathrm{otherwise}
\end{cases}
\end{eqnarray}
By swapping the order of integration, we obtain an expression in the form of a sum of local contributions from finite patches,
\begin{equation}
A = \int_{\mathbb{R}^2} d^2 \vec{r}_0 \left[ \int_S d^2\vec{r} \, \frac{1}{\ell^2} K_{\ell}(\vec{r}-\vec{r}_0) \right]
\end{equation}
To discretize and evaluate this integral for a binary image on a square grid, the following marching square algorithm \cite{Goring:2013qya} is fast and useful.
The marching square algorithm for a square lattice processes all the $2\times2$ subimages of a given binary image and collects their local features for calculating the MFs.
The local features are summarized in \tableref{Table:lookup}.
Note that we do not include the boundary of the $2\times2$ subimages in this calculation.
\begin{itemize}
\item
For the area $A$, a subimage contribution is 1/4 of the number of its active pixels because a pixel belongs to four subimages.
\item
For the boundary length $L$, the contribution is the local boundary length divided by 2, since every boundary segment belongs to two subimages.
\item
For the Euler characteristic $\chi$, we only need to count the number of outward corners, $N_\mathrm{out}$, and the number of inward corners, $N_\mathrm{in}$.
Since outward and inward corners have exterior angles $\pi/2$ and $-\pi/2$, respectively, the total curvature is simply proportional to the difference between $N_\mathrm{out}$ and $N_\mathrm{in}$.
The Euler characteristic is then as follows,
\begin{equation}
\chi
=
\frac{1}{2\pi} \left[\frac{\pi}{2} \left( N_\mathrm{out} - N_\mathrm{in} \right) \right]
=
\frac{1}{4} \left( N_\mathrm{out} - N_\mathrm{in} \right).
\end{equation}
Each corner is considered only once during the marching,
so the local contributions are $1/4$ for outward corners and $-1/4$ for inward corners.
Note that the Euler characteristic depends on the definition of the connectivity between two diagonally neighboring pixels.
We define pixels sharing the same vertex to be connected, and the corresponding subimages have two inward corners.
\end{itemize}
For example, $(A, L, \chi)$ of an isolated pixel is the sum of the look-up entries for $a = 1, 2, 4$, and $8$ in \tableref{Table:lookup}, which gives $(1, 4, 1)$.
This algorithm can be generalized for calculating MFs of images on other types of lattice, such as hexagonal pixels,
\footnote{
Note that the hexagonal grid is essentially identical to the plane $x+y+z=0$ in $\mathbb{R}^3$ with the $L_1$ distance \cite{413166, hexagon}.
The hexagonal pixels have a rounder shape and a larger symmetry group than the square pixels, but the corresponding integral geometry is not trivial because of the projection.
Nevertheless, Hadwiger's theorem still holds in $\mathbb{R}^3$, and the nontrivial $L_1$-intrinsic volumes $V_1'$ and $V_2'$ are proportional to the perimeter and area of the hexagonal pixels.
}
or for approximating MFs of raw images without pixelation \cite{Mantz_2008}.
\input{Figures/lookup_table}
Since there are only 16 unique configurations for the $2\times2$ subimages, we may use the look-up table $\mathbf{v}^{k}(a)$, where $a=0,\cdots,15$ and $k\in\{A,L,\chi\}$, in \tableref{Table:lookup} for parameterizing the local contribution.
The MFs are then the sums of look-up table values as follows,
\begin{equation}\label{eqn:filter}
(A^{(k)}, L^{(k)}, \chi^{(k)})
=
\sum_{i,j} \mathbf{v}\left( \sum_{n,m \in \{0,1\}} P_{(i+n)(j+m)}^{(k)} f_{nm} \right),
\end{equation}
where $f_{nm} = ((1,2),(4,8))$, and $P_{ij}^{(k)}$ is 1 or 0 if $(i,j)$-th pixel of $P^{(k)}$ is active or not, respectively.
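A minimal Python sketch of this marching-square computation is given below; instead of the explicit look-up table $\mathbf{v}(a)$, the boundary and corner rules described above are coded directly, with diagonally touching pixels treated as connected, following the convention in the text:

```python
import numpy as np

def minkowski_functionals(binary):
    """(A, L, chi) of a binary image via a 2x2 marching-square scan."""
    P = np.pad(np.asarray(binary, dtype=int), 1)   # zero-pad: no active pixel on the border
    A = int(P.sum())                               # area = number of active pixels
    # Boundary length: count inactive 4-neighbours of active pixels.
    L = sum(int(np.sum(P & (1 - np.roll(P, s, axis))))
            for axis in (0, 1) for s in (1, -1))
    # Euler characteristic from corner counts in all 2x2 windows (accumulate 4*chi).
    chi4 = 0
    n, m = P.shape
    for i in range(n - 1):
        for j in range(m - 1):
            w = P[i:i + 2, j:j + 2]
            s = int(w.sum())
            if s == 1:
                chi4 += 1                          # outward corner
            elif s == 3:
                chi4 -= 1                          # inward corner
            elif s == 2 and w[0, 0] == w[1, 1]:
                chi4 -= 2                          # diagonal pair: two inward corners
    return A, L, chi4 // 4
```

An isolated pixel gives $(1, 4, 1)$, as in the example above, and a $3\times3$ ring of pixels with a hole in the middle gives $(8, 16, 0)$.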
Note that all the steps for calculating MFs in this section can be written in terms of convolutions.
Let $p_T^{(i,j)}$ be the energy deposit of $(i,j)$-th pixel.
The calculation method of MFs discussed in this section can be summarized as follows.
\begin{eqnarray}
\nonumber
P^{(0)}_{ij}[p_T]
& = &
\theta( p_T^{(i,j)} - p_T)
\\*
\nonumber
P^{(k)}
& = &
\theta( P^{(0)} * B^{(k)} )
\\*
\label{eqn:mfs:conv}
(A^{(k)}, L^{(k)}, \chi^{(k)})
& = &
\mathbf{v}(P^{(k)} * f)
\end{eqnarray}
where all the binary images in the above equations are regarded as functions that give 1 for active pixels and 0 otherwise, and $*$ is the discrete convolution.
The stacked convolution layers can simulate this algorithm, i.e., $B^{(k)}$ and $f$ can be considered as the weights of convolution layers, and the functions $\theta$ and $\mathbf{v}$ can be modelled by $1\times1$ convolutions \cite{DBLP:journals/corr/LinCY13}.
Therefore, $A^{(k)}$, $L^{(k)}$, and $\chi^{(k)}$ are in principle covered by a CNN trained on jet images.
One subtle point is that this closed expression contains a step function, which has a point of discontinuity.
The CNN with a finite number of filters and smooth activation functions may have difficulty in accessing this variable set, since the network itself is a smooth function.
A similar situation may happen on the CNN with $L_2$ regularizers.
We will show an example that the tagging performance of the CNN is improved by adding MFs to the inputs.
\section{Energy Correlator based Neural Networks for Jet Substructure}
\label{sec:irc_safe_net}
The energy dependence of MFs in \eqref{eqn:mfs:conv} is nonlinear, while
many theory-motivated jet substructure variables typically have a multilinear energy dependence; these types of variables are called IRC safe energy correlators \cite{Tkachov:1995kk, Komiske:2017aww}.
Since the counting variables complement those variables,
we may use a neural network model representing the IRC-safe energy correlators and provide the MFs as additional inputs.
In this section, we briefly review two examples: the IRC-safe relation network \cite{Lim:2018toa,Chakraborty:2019imr,Chakraborty:2020yfc} and the energy flow network \cite{Komiske:2018cqr}.
\subsection{Relation Network}
The relation network (RN) is mainly designed for capturing the common properties of relational reasoning.
For example, if we use the momentum $p_i$ of the $i$-th constituent of the jet as a network input, we can build one of the simplest RN models with two scalar functions $f$ and $g$ as follows,
\begin{equation}
\label{eqn:rn:simplest}
f \left[ \sum_{i\in a, j\in b} g(p_i, p_j) \right],
\end{equation}
where $a$ and $b$ are labels for subsets of jet constituents.
If we impose the IRC-safe constraints \cite{Tkachov:1995kk, Komiske:2017aww}, the function $g$ must be bilinear in the constituent transverse momenta, with coefficients $\Phi_{ab}$ that depend only on the relative angular distance $R_{ij}$ between the jet constituents.
The following is then the basic form of the IRC-safe RN for the jet substructure,
\begin{equation}
\label{eqn:rn:irc_safe}
f \left[ \sum_{i\in a, j\in b} p_{T,i} p_{T,j} \Phi_{ab}(R_{ij}) \right].
\end{equation}
The summation in the above equation is a nested loop over the jet constituents.
Nevertheless, this part can be simplified to a single summation as we describe below.
We introduce the following two-point energy correlation $S_{2,ab}$ that accumulates energy correlations at a given angular scale $R$.
\begin{equation}
\label{eqn:s2}
S_{2, ab}(R)= \sum_{i\in a, j\in b} p_{T,i} p_{T,j} \delta(R-R_{ij}).
\end{equation}
By using $S_{2,ab}$, the nested summation in \eqref{eqn:rn:irc_safe} can be replaced by a single integral as follows,
\begin{equation}
\label{eqn:s2:rn}
\int dR \,S_{2,ab}(R) \Phi_{ab}(R).
\end{equation}
This model covers various jet substructure variables.
For example, the two-point energy correlation functions {EFP}$^n_2$ \cite{Larkoski:2013eya,Komiske:2017aww} can be written as linear combinations of the $S_2$ as follows,
\begin{equation}
{\rm EFP}^{n}_{2,ab} = \int^{\infty}_{0} dR \,
S_{2, ab}(R)\, R^{n}.
\end{equation}
Therefore, this network covers all information encoded in $\mathrm{EFP}^n_2$.
For the practical use of this RN with IRC-safe constraints, we discretize the integral in \eqref{eqn:s2:rn} by binning the integrand with bin size $\Delta R$.
The discrete version of $S_{2,ab}$ is then defined as follows.
\begin{equation}
S_{2,ab}^{(k)} = \int_{ k\Delta R}^{ (k+1)\Delta R } dR \, S_{2,ab}(R),
\end{equation}
where $k$ is the bin index.
The integral in \eqref{eqn:s2:rn} can be expressed as an inner product between $S_{2,ab}^{(k)}$ and a weight vector $\Phi_{ab}^{(k)}$,
\begin{equation}
\int dR \,S_{2,ab}(R) \Phi_{ab}(R) = \sum_k S_{2,ab}^{(k)} \Phi_{ab}^{(k)}.
\end{equation}
For our numerical study, we take a bin size $\Delta R=0.1$, which matches the hadronic calorimeter resolution.
The $S_2$'s are directly calculated from the HCAL and ECAL outputs.
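For concreteness, the binned correlations $S_{2,ab}^{(k)}$ can be computed directly from the constituent momenta. The following sketch (names are our own, and the wrap-around of the $\phi$ coordinate is ignored for brevity) histograms the pairwise $p_{T,i}p_{T,j}$ weights in $R_{ij}$:

```python
import numpy as np

def binned_s2(pt_a, pos_a, pt_b, pos_b, n_bins=15, delta_r=0.1):
    """Binned two-point energy correlation S_2,ab^(k) between constituent
    subsets a and b, with positions given in the (eta, phi) plane."""
    # pairwise angular distances R_ij (phi periodicity ignored in this sketch)
    R = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    w = np.outer(pt_a, pt_b)                       # p_T,i * p_T,j weights
    s2, _ = np.histogram(R, bins=n_bins, range=(0.0, n_bins * delta_r), weights=w)
    return s2
```

The inner product of the returned vector with a weight vector $\Phi_{ab}^{(k)}$ then approximates the integral in \eqref{eqn:s2:rn}.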
If we use an MLP to model the function $f$ of the RN in \eqref{eqn:rn:irc_safe}, we can embed $\Phi_{ab}^{(k)}$ into the first fully-connected layer.
The fully-connected layer that maps one input $\sum_k S_{2,ab}^{(k)} \Phi_{ab}^{(k)}$ to the latent dimension is equivalent to a fully connected layer that maps $S_{2,ab}^{(k)}$'s to the latent dimension, i.e.,
\begin{equation}
W_{l} \sum_k S_{2,ab}^{(k)} \Phi_{ab}^{(k)} = \sum_k W_{lk} S_{2,ab}^{(k)},
\quad
W_{lk} = W_{l} \Phi_{ab}^{(k)}.
\end{equation}
The RN is modelled by an MLP taking $S_{2,ab}^{(k)}$, and the first layer can be regarded as \emph{a trainable two-point energy correlation}.
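This absorption of $\Phi_{ab}^{(k)}$ into the first dense layer is a simple linear-algebra identity, which can be checked numerically; the following is a sketch with random values:

```python
import numpy as np

rng = np.random.default_rng(0)
S2 = rng.random(15)            # binned two-point correlation S_2^(k)
Phi = rng.random(15)           # angular weight vector Phi^(k)
W = rng.random(10)             # first dense-layer weights W_l

lhs = W * (S2 @ Phi)           # W_l * sum_k S2^(k) Phi^(k)
rhs = np.outer(W, Phi) @ S2    # effective weights W_lk = W_l Phi^(k)
assert np.allclose(lhs, rhs)   # the two parameterizations agree
```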
\subsection{Energy Flow Network}
The energy flow network (EFN) \cite{Komiske:2018cqr} is also a graph neural network based on energy correlators, but it uses only pointwise features.
This network is based on the deep set architecture \cite{NIPS2017_f22e4747}, i.e.,
\begin{equation}
f\left[\sum_{i\in a} g (p_i) \right].
\end{equation}
As discussed before, this pointwise feature $g(p_i)$ should be a linear function of the energy when the IRC-safe constraint is imposed, and we arrive at the following model of the EFN,
\begin{equation}
f\left[\sum_{i\in a} p_{T,i} \Phi (\vec{R}_i) \right].
\end{equation}
For the pixelated image analysis,
the $p_T$-weighted sum over the jet constituents is replaced by the energy-weighted sum over all pixels,
\begin{equation}
\sum_{i\in a} p_{T,i} \Phi (\vec{R}_i) \approx \sum_{i,j} P_{T}^{(ij)} \Phi_{ij},
\end{equation}
where $P_T^{(ij)}$ is the energy deposit of the $(i,j)$-th pixel, and $\Phi_{ij}$ is the corresponding angular weight.
When we replace $f$ with an MLP, the angular weights $\Phi_{ij}$ can be absorbed into the MLP.
The product between the weights $W_\ell$ of the first dense layer and $\Phi_{ij}$ can be considered as effective weights $W_{\ell ij}$ of an MLP taking the $P_{T}^{(ij)}$ as inputs, i.e., the dense layer can be rewritten as follows.
\begin{equation}
W_{\ell} \left[ \sum_{i,j} P_{T}^{(ij)} \Phi_{ij} \right] = \sum_{i,j} P_T^{(ij)} W_{\ell ij}, \quad W_{\ell ij} = W_{\ell} \Phi_{ij}.
\end{equation}
Therefore, an MLP for the pixelated image analysis models the EFN for the pixelated jet image.
Note that using standardized inputs does not change this conclusion since standardization is a linear transformation.
Consider the following transformation of the inputs and the dense-layer parameters,
\begin{eqnarray}
P^{(ij)}_T
& \rightarrow &
\frac{P^{(ij)}_T - \mu^{(ij)}}{\sigma^{(ij)}},
\\
W_{\ell ij}
& \rightarrow &
\sigma^{(ij)} W_{\ell ij},
\\
B_{\ell}
& \rightarrow &
\sum_{i,j} \mu^{(ij)} W_{\ell ij} + B_{\ell},
\end{eqnarray}
where $\mu^{(ij)}$ and $\sigma^{(ij)}$ are the mean and standard deviation of the inputs.\footnote{For the pixels which do not have energy variations, we assign $\sigma^{(ij)} = 1$.}
The first dense layer, $\sum_{i,j} P^{(ij)}_T W_{\ell ij} + B_{\ell}$, is invariant under this transformation, so we may safely use the MLP on the standardized image to model the EFN.
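The invariance can be verified numerically; the sketch below applies the transformation above to random toy images and weights (all shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((8, 10, 10))                 # toy batch of 10x10 "jet images"
W = rng.normal(size=(4, 10, 10))            # dense-layer weights W_{l ij}
B = rng.normal(size=4)                      # biases B_l
out = np.einsum('nij,lij->nl', P, W) + B    # first dense layer

mu, sigma = P.mean(axis=0), P.std(axis=0)
sigma[sigma == 0.0] = 1.0                   # convention for constant pixels
P_new = (P - mu) / sigma                    # standardized inputs
W_new = sigma * W                           # W -> sigma * W
B_new = B + np.einsum('ij,lij->l', mu, W)   # B -> B + sum_ij mu * W

out_new = np.einsum('nij,lij->nl', P_new, W_new) + B_new
assert np.allclose(out, out_new)            # layer output is unchanged
```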
\section{Combined Network Setup}
\label{sec:combined_network}
In this section, we describe the network that combines the morphological analysis and the RN or EFN.
\subsection{Network Inputs}
For the morphological analysis, we use the MFs up to $k=6$
and denote them as $x_{\mathrm{morph}}$,
\begin{equation}
x_{\mathrm{morph}} = \bigcup_{p_T\,\mathrm{threshold}}\{
A^{(k)}, L^{(k)}, \chi^{(k)} \, | \,
k=0,\cdots,6
\}.
\end{equation}
We use the following $p_T$ thresholds: the default threshold of the detector simulation\footnote{0.5 GeV for the electromagnetic calorimeters and 1.0 GeV for the hadronic calorimeters. This filtering is performed before the pixelation.}, and 2, 4, and 8 GeV.
For the IRC-safe relation network, we use the two-point energy correlations $S_{2,ab}$ of the following subsets of jet constituents.
\begin{itemize}
\item the trimmed jet $\mathbf{J}_{\mathrm{trim}}$ \cite{Krohn:2009th}, denoted by $h$,
\item the complement set of $\mathbf{J}_{\mathrm{trim}}$, denoted by $s$,
\item the leading $p_T$ subjet $\mathbf{J}_1$, denoted by $1$,
\item the complement set of $\mathbf{J}_1$, denoted by $c$.
\end{itemize}
Using these subsets is known to be effective in top tagging \cite{Chakraborty:2020yfc}.
We use the following sets of binned two-point correlations as inputs of the RN,
\begin{eqnarray}
\nonumber
x_{\trim}
& = &
\{ S^{(k)}_{2, hh}, S^{(k)}_{2, {\rm soft}} \equiv 2 S^{(k)}_{2, hs} + S^{(k)}_{2, ss}\, | \, k=0,\cdots,14 \},
\\
\nonumber
x_{\jet_1}
& = &
\{ S_{2,11}^{(k)} \, | \, k=0,1,2 \} \cup
\{ S_{2,1c}^{(k)} \, | \, k=0,\cdots,9 \}
\\
& &
\phantom{00} \cup
\{ S_{2,cc}^{(k)} \, | \, k=0,\cdots,14 \}.
\end{eqnarray}
In addition to those MFs and two-point energy correlations,
we provide the $p_T$ and mass of each jet, trimmed jet, and leading $p_T$ subjet as additional inputs carrying jet kinematic information, and we denote them as $x_{\mathrm{kin}}$.
\begin{equation}
x_{\mathrm{kin}}= \{ p_{T,\mathbf{J}}, m_{\mathbf{J}}, p_{T,\mathbf{J}_{\mathrm{trim}}}, m_{\mathbf{J}_{\mathrm{trim}}} , p_{T,\mathbf{J}_1}, m_{\mathbf{J}_1} \}.
\end{equation}
\subsection{Network Architecture}
We use the following setup to transform the given inputs to the desired outputs for the binary classification.
We first use MLPs to encode each of the primitive inputs $x_{\mathrm{morph}}$, $x_{\trim}$, and $x_{\jet_1}$ into latent spaces of dimension 5,
\begin{eqnarray}
h_{\mathrm{morph}}
&=&
\mathrm{MLP}_{\mathrm{morph}}(x_{\mathrm{morph}}, x_{\mathrm{kin}}),
\\
h_{\mathrm{trim}}
&=&
\mathrm{MLP}_{\mathrm{trim}}(x_{\mathrm{trim}}, x_{\mathrm{kin}}),
\\
h_{\mathbf{J}_1}
&=&
\mathrm{MLP}_{\mathbf{J}_1}(x_{\mathbf{J}_1}, x_{\mathrm{kin}}).
\end{eqnarray}
All the MLPs used in this section take the kinematic inputs $x_{\mathrm{kin}}$ as additional inputs.
Those latent space features are mapped into the classifier output $\hat{y}$ by another MLP,
\begin{equation}
\logit(\hat{y}) = \mathrm{MLP}_{\mathrm{out}} (h_{\mathrm{morph}}, h_{\mathrm{trim}}, h_{\mathbf{J}_1}, x_{\mathrm{kin}}),
\end{equation}
where $\logit(\hat{y})$ is the inverse of the standard logistic function, $\log(\hat{y}) - \log(1-\hat{y})$.
For analyses using only a subset of the inputs, we keep only the relevant latent space features.
We denote this setup as RN+MF, and the pure RN setup without morphological analysis as RN.
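A minimal forward-pass sketch of this combined architecture is given below, using plain NumPy with randomly initialized weights; the input dimensions and hidden widths are illustrative placeholders, not the values of our actual \texttt{Keras} implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    """Build a random-weight MLP (ReLU hidden layers) and return its forward pass."""
    Ws = [rng.normal(size=(m, n)) / np.sqrt(m) for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for Wh in Ws[:-1]:
            x = np.maximum(x @ Wh, 0.0)
        return x @ Ws[-1]
    return forward

# illustrative input dimensions for (x_morph, x_trim, x_jet1, x_kin)
d_morph, d_trim, d_jet1, d_kin = 84, 30, 28, 6
enc_morph = mlp([d_morph + d_kin, 64, 5])   # 5-dim latent h_morph
enc_trim  = mlp([d_trim + d_kin, 64, 5])    # 5-dim latent h_trim
enc_jet1  = mlp([d_jet1 + d_kin, 64, 5])    # 5-dim latent h_jet1
head      = mlp([3 * 5 + d_kin, 64, 1])     # final MLP -> logit(y_hat)

def classifier(x_morph, x_trim, x_jet1, x_kin):
    # every encoder also receives the kinematic inputs x_kin
    h = np.concatenate([
        enc_morph(np.concatenate([x_morph, x_kin])),
        enc_trim(np.concatenate([x_trim, x_kin])),
        enc_jet1(np.concatenate([x_jet1, x_kin])),
        x_kin,
    ])
    logit = head(h)[0]
    return 1.0 / (1.0 + np.exp(-logit))     # invert the logit: y_hat

y_hat = classifier(rng.random(d_morph), rng.random(d_trim),
                   rng.random(d_jet1), rng.random(d_kin))
```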
We will use this network for binary classifications, trained by minimizing the binary cross-entropy loss function.
\begin{equation}
\label{eqn:loss_ce}
\mathcal{L}_{\mathrm{CE}}
=
-\frac{1}{2} \E\left( \log \hat{y} \,|\, y=1 \right)
- \frac{1}{2} \E\left( \log (1-\hat{y}) \,|\, y=0 \right),
\end{equation}
where $y=1$ indicates the signal samples, and $y=0$ indicates the background samples.
The prior for each class is $1/2$.
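A direct transcription of this loss and of the logit map, assuming equal class priors as above (function names are ours), reads:

```python
import numpy as np

def logit(y_hat):
    """Inverse of the standard logistic function."""
    return np.log(y_hat) - np.log(1.0 - y_hat)

def bce_loss(y_hat, y):
    """Binary cross-entropy with equal (1/2) class priors."""
    y_hat, y = np.asarray(y_hat), np.asarray(y)
    sig = -np.mean(np.log(y_hat[y == 1]))        # -E[log y_hat | y = 1]
    bkg = -np.mean(np.log(1.0 - y_hat[y == 0]))  # -E[log(1 - y_hat) | y = 0]
    return 0.5 * sig + 0.5 * bkg
```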
All hidden-layer weights are $L_2$-regularized with a weight decay coefficient of 0.001.
The network is trained with the Adam optimizer \cite{ADAM} with default parameters, and we apply a temporal exponential moving average to the trainable parameters after discarding the first 50 epochs.
The ratio between training, validation, and test datasets is 9:1:10.
We stop training when the validation loss does not improve for 50 epochs.
We repeat this procedure for minibatch numbers of 20, 50, 100, and 200,
and choose the results with the largest validation AUC.
All of these setups are implemented using \texttt{Keras} \cite{chollet2015keras} with \texttt{TensorFlow} backend \cite{tensorflow2015-whitepaper}.
Finally, all inputs are standardized, and we also reweight events to make
the $p_T$ distribution flat so that the network does not learn from the $p_{T,\mathbf{J}}$ distribution itself.
We also remark that in the limit of large MLP width and small bin size for the $S_2$ and MFs, this network setup corresponds to the following smooth model,
\begin{eqnarray}
\nonumber
h_{\mathrm{MA}} & = & \Psi_{\mathrm{MA}} \left[ \int_0^\infty d p_T \int_0^\infty dR \, \mathrm{MF}_j(R; p_T) \Phi_j(R; p_T) ; x_{\mathrm{kin}} \right]
\\
\nonumber
h_{\mathrm{RN}} & = &
\Psi_{\mathrm{RN}} \left[ \sum_{a,b} \int_0^\infty d R\, S_{2,ab}(R) \Phi_{ab}(R) ; x_{\mathrm{kin}} \right]
\\
\hat{y}
& = &
\Psi_{\mathrm{out}} \left[ h_{\mathrm{MA}}, h_{\mathrm{RN}}; x_{\mathrm{kin}} \right],
\end{eqnarray}
where all the $\Phi$ and $\Psi$ are some scalar functions.
This expression can help in discussing the relationship between the morphological analysis and other networks that work on the momenta of jet constituents without pixelation, such as ParticleNet \cite{Qu:2019gqs}; however, such a discussion is beyond the scope of this paper.
\subsection{Convolutional Neural Network and Energy Flow Network}
We compare this RN+MF to the following CNN and EFN.
Our baseline CNN is trained on the preprocessed jet images, as described in \cite{Chakraborty:2020yfc}.
\begin{enumerate}
\item The jet constituents are reclustered by the $k_T$ algorithm \cite{Catani:1993hr,Ellis:1993tq} with a radius parameter of 0.2.
\item
Set the center of the $(\eta,\phi)$ coordinates to the leading $p_T$ subjet axis.
\item
Rotate the $(\eta,\phi)$ plane about the origin so that the subleading $p_T$ subjet lies on the positive $y$ axis.
\item
If the third leading $p_T$ subjet exists and has a negative $x$ value,
flip the $x$ axis so that the third subjet always lies on the right side of the image.
\item
Pixelate the jet constituents to get the jet image.
\end{enumerate}
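The steps above can be sketched as follows (our own minimal implementation; the wrap-around of $\phi$ and detector granularity effects are ignored, and subjet axes are assumed to be given as $(\eta,\phi)$ pairs):

```python
import numpy as np

def preprocess_and_pixelate(pt, eta, phi, j1, j2, j3=None, lim=1.5, nbins=30):
    """Center on leading subjet j1, rotate subleading subjet j2 onto the +y
    axis, flip so a third subjet j3 has x > 0, then pixelate."""
    pts = np.stack([eta - j1[0], phi - j1[1]], axis=-1)   # step 2: centering
    v = np.array(j2) - np.array(j1)
    t = np.arctan2(v[0], v[1])                            # step 3: rotation angle
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    pts = pts @ R.T
    if j3 is not None:                                    # step 4: parity flip
        p3 = R @ (np.array(j3) - np.array(j1))
        if p3[0] < 0:
            pts[:, 0] *= -1.0
    img, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],      # step 5: pixelation
                               bins=nbins, range=[[-lim, lim], [-lim, lim]],
                               weights=pt)
    return img
```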
The preprocessed jet image is a two-dimensional $p_T$-weighted histogram of the jet constituents on the range $[-1.5, 1.5] \times [-1.5, 1.5]$ with bin size $0.1\times 0.1$.
We denote the set of energy deposits for each pixel as follows,
\begin{equation}
x_\mathrm{image} = \{ P_T^{(ij)}\,|\, i,j=-15,\cdots,14 \}.
\end{equation}
The image input $x_\mathrm{image}$ is provided to networks after standardization.
In summary, the preprocessed images are aware of the location of the most energetic subjet and the relative positions of the two subleading $p_T$ subjets.
The CNN consists of six convolutional layers.
The filter size is $3\times 3$, and a pooling layer with pool size $2\times 2$ is inserted for every three convolutional layers.
The spatial dimensions are then flattened, and a $1\times1$ convolution maps the intermediate outputs to a latent space of dimension 10.\footnote{We have checked the classification performance of the CNNs with latent dimensions 5, 10, 20, and 100, and 10 was the best.}
These latent space features are then concatenated to the kinematic inputs $x_{\mathrm{kin}}$, and we use an MLP to transform them into the desired classifier output.
The training setups are the same as for RN+MF, but we scan over minibatch numbers 100, 200, and 500.
Although a CNN can represent the MFs in principle, we may explicitly provide them as inputs.
As discussed earlier, a CNN may have technical difficulty expressing the MFs during training because the MFs are not smooth functions of the jet image.
We additionally consider a CNN
whose MLP at the end receives $h_{\mathrm{morph}}$ as additional latent space inputs.
We denote this setup as CNN+MF.
We model the pointwise correlation of the EFN by an MLP {with three} hidden layers and 10 outputs.
The {first} hidden layer has 50 (200) outputs, while the others have 200 outputs.
The input image is concatenated with
$x_{\mathrm{kin}}$.
The outputs are then provided to another MLP that converts those inputs to the classifier, similar to that of the CNN.
\Tableref{tableii} lists the combinations of inputs we study in this paper and the training costs for the classification problems that are discussed in \sectionref{sec:training_performance}.
Some notable differences between the inputs to the CNNs and the RN+MFs are as follows.
The baseline CNN takes a large number of inputs since it takes the whole image.
However, the detector hits are sparsely distributed over the image: the central region carries most of the information, while the outer region of the jet image contains only sparse soft activity.
The CNN has to distill the useful information from this sparse dataset.
On the other hand, the RN takes only the basis of the two-point energy correlators.
The soft activities are collected into the bins of $S_2$, and the resulting number of inputs is only $\mathcal{O}(100)$.
The number of MF inputs is $3\times 7$ for each of the energy thresholds defining the binary images.
This is also a relatively small number compared to the dimension of the image inputs.
We also note that as $k$ increases, the change in geometry of the dilated image $P^{(k)}$ becomes more regular, and the MFs become increasingly dependent on their previous values in the sequence.
The cutoff for $k$ may be fine-tuned further, but we use seven values ($k=0,\cdots,6$), which effectively smooths out geometric features below the angular scale of 1.5.
The later terms in the sequence merely confirm the regularity of the dilation, and dropping some of them may not affect the performance significantly.
\begin{table}
\caption{
The number of inputs $N_{\mathrm{input}}$ and the number of trainable parameters $N_{\mathrm{param}}$ for each model.
The number of inputs includes dummy inputs since each $S_2$ is stored as a length-20 vector.
For the EFNs, the number of parameters in parentheses is for the reduced setup with 50 energy correlators, while the nominal setup has 200 energy correlators.
}
\begin{ruledtabular}
\begin{tabular}{llcc}
&
inputs &
$N_{\mathrm{input}}$ &
$N_{\mathrm{param}}$
\\
\hline
MF & $x_{\mathrm{morph}}$, $x_{\mathrm{kin}}$ & \phantom{0}90 & 102,407 \\
RN & $x_{\trim}$, $x_{\jet_1}$, $x_{\mathrm{kin}}$ & 106 & 149,212 \\
RN+MF & $x_{\mathrm{morph}}$, $x_{\trim}$, $x_{\jet_1}$, $x_{\mathrm{kin}}$ & 190 & 209,617 \\
CNN & $x_{\mathrm{image}}$, $x_{\mathrm{kin}}$ & 906 & 131,740 \\
CNN+MF & $x_{\mathrm{image}}$, $x_{\mathrm{morph}}$, $x_{\mathrm{kin}}$ & 990 & 228,235 \\
\multirow{2}{*}{EFN} & \multirow{2}{*}{$x_{\mathrm{image}}$, $x_{\mathrm{kin}}$} & \multirow{2}{*}{906} & 202,167 \\
& & & (141,762) \\
\multirow{2}{*}{EFN+MF} & \multirow{2}{*}{$x_{\mathrm{image}}$, $x_{\mathrm{morph}}$, $x_{\mathrm{kin}}$} & \multirow{2}{*}{990} & 408,417 \\
& & & (348,012) \\
\end{tabular}
\end{ruledtabular}\label{tableii}
\end{table}
\section{Jet Tagging Performance Comparison }
\label{sec:benchmark_tagging}
\subsection{Semi-visible Jet Tagging}
\label{sec:benchmark_tagging:semi_visible}
\begin{figure*}
\subfloat[\label{fig:ms:a}]{
\includegraphics[width=0.45\textwidth]{{Figures/hist_135}.pdf}
}
\subfloat[\label{fig:ms:b}]{
\includegraphics[width=0.45\textwidth]{{Figures/hist_cut_135}.pdf}
}
\caption{
\label{fig:ms}
Left: distributions of MFs:
$A^{(0)}$ (light color), $A^{(1)}$ (solid), $A^{(3)}$ (dashed), and $A^{(5)}$ (dotted) of dark jets (red)
and QCD jets (blue).
We select leading $p_T$ jets with $p_{T,\mathbf{J}} \in [ 150, 300]\,\mathrm{GeV}$,
and $m_{\mathbf{J}} \in [30,70]\,\mathrm{GeV}$.
Right: The distribution of MFs after rejecting 10\% signal events by the CNN.
1.5\% of QCD events remain after the selection.
}
\end{figure*}
\begin{figure}
\includegraphics[width=0.45\textwidth]{{Figures/ROC_pub.dark_qcd}.pdf}
\caption{\label{fig:roc:darkjet}
ROC curves of various classification models for dark jets vs.~QCD jets.
}
\end{figure}
\begin{table}
\caption{
\label{table:auc:dark_jet}
AUCs of various dark jet taggers.
The EFN models have 200 hidden features at the first dense layer.
We also show the training time $t_{\mathrm{train}}$ and the number of epochs at the end of the training, $N_{\mathrm{epoch}}$, for mini-batch numbers $N_{\mathrm{batch}}=20$ and $200$.
}
\begin{ruledtabular}
\begin{tabular}{lccc}
&
\multirow{2}{*}{AUC}
&
\multicolumn{2}{c}{$t_{\mathrm{train}} / N_{\mathrm{epoch}}$}
\\
& & $N_{\mathrm{batch}}=20$ & $N_{\mathrm{batch}}=200$ \\
\hline
MF & 0.9897 & \phantom{0}793 s / 564 epochs & \phantom{00}954 s / 363 epochs \\
\hline
RN & 0.9950 & \phantom{0}929 s / 434 epochs & \phantom{0}2468 s / 560 epochs\\
RN+MF & 0.9955 & 1128 s / 429 epochs & \phantom{0}2288 s / 556 epochs\\
CNN & 0.9953 & & 11401 s / 327 epochs \\
CNN+MF & 0.9956 & & 19610 s / 543 epochs \\
\hline
EFN & 0.9950 & 2222 s / 220 epochs & \phantom{0}2141 s / 163 epochs \\
EFN+MF & 0.9955 & 1988 s / 190 epochs & \phantom{0}2270 s / 172 epochs \\
\end{tabular}
\end{ruledtabular}\label{tableiii}
\end{table}
\begin{table*}
\caption{
\label{table:corr:dark_jet}
The correlation coefficients of the model output logits, $\mathrm{logit}(\hat{y})$, between the trained models for the dark jet samples.
The diagonal elements are the correlation coefficients between the outputs of the same network trained with different random number seeds.
}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
& MF & RN & RN+MF & CNN & CNN+MF & EFN & EFN+ MF \\
\hline
MF & 0.976 & 0.681 & 0.801 & 0.736 & 0.780 & 0.609 & 0.712\\
RN & & 0.942 & 0.868 & 0.745 & 0.732 & 0.705 & 0.723\\
RN+MF & & & 0.973 & 0.793 & 0.839 & 0.679 & 0.777\\
CNN & & & & 0.958 & 0.924 & 0.763 & 0.809\\
CNN+MF & & & & & 0.967 & 0.727 & 0.822 \\
EFN & & & & & & 0.902 & 0.873 \\
EFN+MF & & & & & & & 0.933 \\
\end{tabular}
\end{ruledtabular}\label{tableiV}
\end{table*}
As a working example for our network, we consider a toy Hidden Valley model \cite{Strassler:2006im,Carloni:2011kk} whose signature is a semi-visible jet \cite{Cohen:2015toa,Bernreuther:2020vhm}.
The hidden sector may include a fermion $q_v$ charged under the secluded gauge group and a massive leptophobic gauge boson $Z'$ that mediates the interaction between the SM particles and the hidden sector.
At the hadron collider, $q_v$ may be produced through the process $q\bar{q}\rightarrow Z' \rightarrow q_v \bar{q}_v$.
The secluded gauge interaction confines $q_v$ and $\bar{q}_v$ and forms pions $\pi_v$ and rho mesons $\rho_v$ after the hidden sector parton shower and hadronization.
We consider a scenario in which only the $\rho_v$ leaves visible signatures via the decay $\rho_v \rightarrow q\bar{q}$, while the other mesons are invisible to the detector.
The resulting semi-visible jet, which we call a dark jet, contains many color-singlet quark pairs fragmenting into hadrons and missing particles.
Therefore, the dark jets have different geometric and hard substructures compared to the QCD jets.
For the simulation of the dark jet, we use \texttt{Pythia 8} \cite{Sjostrand:2014zea} and its Hidden Valley model implementation \cite{Carloni:2011kk}.
The mass spectrum is assigned as follows: $m_{Z'}=1400 \, \mathrm{GeV}$, $m_{q_v} = 10\,\mathrm{GeV}$, and $m_{\pi_v} = m_{\rho_v} = 20 \,\mathrm{GeV}$.
The production ratio of $\pi_v$ to $\rho_v$ during hadronization is 1:3, as spin counting suggests.
The QCD jet samples are the leading $p_T$ jets of the process $pp\rightarrow 2j$, and they are generated using \texttt{MadGraph5 2.6.6} \cite{Alwall:2014hca} together with \texttt{Pythia 8}.
Detector effects are modeled by \texttt{Delphes 3.4.1} \cite{deFavereau:2013fsa} with the default ATLAS detector card.
The training and test samples are the leading $p_T$ jets with $p_{T,\mathbf{J}} \in [150,300]$~GeV and $m_{\mathbf{J}} \in [30,70]$~GeV.
The number of selected events is $6.0\times 10^5$ for the dark jet samples and $1.9\times10^6$ for the QCD jet samples.
\Figref{fig:ms:a} shows the $A^{(k)}$ distributions of dark jets and QCD jets.
The leftmost curves are the $A^{(0)}$ distributions, and they are close to each other.
On the other hand, the average of $A^{(i)}$ ($i>0$) of the QCD jets is much larger, and the $A^{(i)}$ distribution extends far beyond the endpoint of the dark jet $A^{(i)}$ distribution.
The RN+MF model can explicitly use this feature in the classification.
Given the apparent difference of $A^{(i)}$ distributions, the CNN is also capable of learning this phase space where only QCD jets exist.
The classifier reasoning appears in the dijet distribution in \figref{fig:ms:b}.
The distributions are shown after applying a mild CNN cut that retains 90\% of the signal dark jets.
The cut significantly suppresses the events beyond the endpoint of the dark jet distribution.
We show the receiver operator characteristic (ROC) curves of RN,\footnote{EFN results are explained in \sectionref{sec:benchmark_tagging:efn}.} and CNN with and without the MFs on \figref{fig:roc:darkjet}.
The corresponding areas under the ROC curves (AUC) are given in \tableref{table:auc:dark_jet}.
Both RN and CNN models reject more than 90\% of the QCD jets in the phase space of large MFs without losing any dark jet events, as illustrated in \figref{fig:ms}.
Even a simple classifier using only the MFs and kinematical variables rejects most of the QCD jet samples, as seen by the orange curve.
This shows that the MFs describe the boundary of the phase space of the dark jet events quite efficiently.
The model with MF consistently outperforms the one without MF, as can be seen in \tableref{table:auc:dark_jet}.
The AUC of RN+MF is slightly better than CNN, and the AUC of CNN+MF is the best among the CNN and RN models.
The ROC curves show some crossovers in the region of small dark jet tagging efficiency, below $\epsilon_{\mathrm{dark}}=0.6$, where the RN+MF rejection efficiency looks better than that of CNN+MF.
However, the rejection rate there is so high that only a relatively small sample of $\mathcal{O}(1000)$ events is available for the training; a slight difference in the rejection efficiency is therefore not statistically significant.
We can quantify the difference between the CNN and RN+MF models by calculating the correlation coefficient of the logit outputs $\mathrm{logit}(\hat{y}_{\mathrm{CNN}})$ and $\mathrm{logit}(\hat{y}_{\mathrm{RN+MF}})$ on the same test event set.
We list the values in \tableref{table:corr:dark_jet}.
Here $\hat{y}$ denotes the output of each model, and its logit is $\mathrm{logit}(\hat{y})=\log(\hat{y})-\log(1-\hat{y})$.
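This comparison amounts to a Pearson correlation computed in logit space; a small sketch (the function name is ours) is:

```python
import numpy as np

def logit_correlation(y_a, y_b):
    """Pearson correlation of two classifier outputs in logit space."""
    y_a, y_b = np.asarray(y_a), np.asarray(y_b)
    la = np.log(y_a) - np.log1p(-y_a)   # logit(y_a)
    lb = np.log(y_b) - np.log1p(-y_b)   # logit(y_b)
    return np.corrcoef(la, lb)[0, 1]
```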
The correlation coefficient $\rho$ between CNN and RN+MF is relatively small: $\rho=0.793$ for the dark jet dataset.
But once we give the MF information to the CNN model, the correlation improves to $\rho=0.839$ between CNN+MF and RN+MF, as listed in \tableref{table:corr:dark_jet}.
The improvement of correlation and classification performance indicates that the CNN is not fully utilizing those MFs unless explicitly provided as inputs.
The correlation coefficient between the network outputs trained with different random number seeds is significantly larger than the correlation between the different models.
This indicates that the difference between the network outputs is primarily due to the systematic difference in the network architectures.
\subsection{Top Jet Tagging}
\begin{figure}
\includegraphics[width=0.45\textwidth]{{Figures/ROC_pub.top_qcd}.pdf}
\caption{\label{fig:roc:top}
ROC curves of various classification models for top jets vs.~QCD jets.
}
\end{figure}
\begin{table}
\caption{
AUC of various top jet taggers.
The EFN models have 50 hidden features at the first dense layer.
We also show the training time $t_{\mathrm{train}}$ and the number of epochs at the end of the training, $N_{\mathrm{epoch}}$, for mini-batch numbers $N_{\mathrm{batch}}=20$ and $200$.
}
\begin{ruledtabular}
\begin{tabular}{lccc}
&
\multirow{2}{*}{AUC}
&
\multicolumn{2}{c}{$t_{\mathrm{train}} / N_{\mathrm{epoch}}$}
\\
& & $N_{\mathrm{batch}}=20$ & $N_{\mathrm{batch}}=200$ \\
\hline
MF & 0.9467 & 793 s / 564 epochs & \phantom{00}954 s / \phantom{0}363 epochs \\
\hline
RN & 0.9038 & 288 s / 186 epochs & \phantom{00}619 s / \phantom{0}214 epochs \\
RN+MF & 0.9552 & 418 s / 255 epochs & \phantom{0}1057 s / \phantom{0}288 epochs \\
CNN & 0.9529 & & 31020 s / 1483 epochs \\
CNN+MF & 0.9547 & & 12319 s / \phantom{0}530 epochs \\
\hline
EFN & 0.8900 & 535 s / 120 epochs & \phantom{00}723 s / \phantom{0}108 epochs \\
EFN+MF & 0.9521 & 725 s / 149 epochs & \phantom{00}813 s / \phantom{0}111 epochs \\
\end{tabular}
\end{ruledtabular}\label{tablev}
\end{table}
\begin{table*}
\caption{
The correlation coefficients of the model output logits between the trained models for the top jet samples.
Diagonal elements are the correlation coefficients between the outputs of the same network trained with different random number seeds.
}
\begin{ruledtabular}
\begin{tabular}{lccccccc}
& MF & RN & RN+MF & CNN & CNN+MF & EFN & EFN+MF\\
\hline
MF & 0.990 & 0.670 & 0.922 & 0.808 & 0.924 & 0.635 & 0.911 \\
RN & & 0.978 & 0.778 & 0.738 & 0.730 & 0.847 & 0.714\\
RN+MF & & & 0.986 & 0.847 & 0.941 & 0.711 & 0.931\\
CNN & & & & 0.933 & 0.866 & 0.739 & 0.849\\
CNN+MF & & & & & 0.979 & 0.723 & 0.945\\
EFN & & & & & & 0.913 & 0.727\\
EFN+MF & & & & & & & 0.960 \\
\end{tabular}
\end{ruledtabular}\label{tablevi}
\end{table*}
For the top jet study, we use the samples described in \cite{Chakraborty:2020yfc}.
We use the events with $p_{T,\mathbf{J}} \in [500,600]$~GeV and $m_{\mathbf{J}} \in [150,200]$~GeV.
The number of selected events is $9.5\times 10^5$ for top jets and $3.5\times 10^5$ for QCD jets.
The ratio between training, validation, and test samples and the training method are the same as in the dark jet case.
We show the ROC curves in \figref{fig:roc:top}.
The model MF, which uses only the MFs as inputs (without any IRC safe correlators), performs better than the RN model.
This indicates that the geometric and topological information is the primary information for the top jet classification.
As can be seen in \tableref{tablev}, the model using IRC safe variables together with the MFs is better than the one without MFs, as in the dark jet case.
The MFs enhance the performance of the RN much more than in the dark jet tagging case.
The CNN+MF shows a similar tagging performance to the RN+MF, but the baseline CNN does not.
As discussed earlier, the convolutional representation of the MFs involves a discontinuous step function.
However, such a step function is hard to model with convolutional layers with a finite number of filters and $L_2$ regularizers.
This CNN setup effectively penalizes functions with discontinuities because expressing them requires either large weights or a large number of filters with small weights.
The correlation coefficient $\rho$ of the output logits between trainings of the same model with different random number seeds is 0.986 for RN+MF.
On the other hand, the $\rho$ of CNN is 0.933.
The difference shows that the training of the CNN model suffers more from the local-minimum problem than that of RN+MF.
In gradient-based training methods, easily classifiable samples dominate the early phase of the training.
Different trainings may thus find different local minima that mainly describe the classification boundary for the dominant samples.
In such cases, confusing events are underrepresented, and the training results will have some variance.
This variance is larger for the more generic function model, and hence the CNNs have a smaller self-correlation coefficient than the RN+MFs.
The local minimum problem of the CNN can be relaxed by explicitly providing some components, such as the MFs.
Adding the MFs to the CNN inputs improves the situation, and CNN+MF has a self-correlation coefficient of 0.979.
Furthermore, the correlation between CNN+MF and RN+MF is 0.941, much higher than the correlation between CNN and RN+MF.
Namely, the two models are now quite correlated with each other.
To visualize the fine difference between the RN+MF and CNN, we compare the $(A^{(0)},A^{(2)})$ distribution of dijet samples, conditioned on the classifier outputs.
We select the dijet samples whose classifier outputs $\hat{y}_{\mathrm{CNN}}$ and $\hat{y}_{\mathrm{RN+MF}}$ are below the respective thresholds corresponding to a 70\% top jet selection efficiency.
By taking the ratio of the histograms of the MFs, we can visualize the difference in classification boundaries of RN+MF and CNN.
In \figref{fig:correlation}, we consider the ratio
\begin{equation}
\mathcal{I}=\frac{N(\mathrm{CNN})}{N(\textrm{RN+MF})+\epsilon},
\end{equation}
where $N$ is the density in a given bin of the histogram of the samples selected by the CNN or RN+MF, and $\epsilon=0.1$ is a regularization term to avoid division by zero.
\Figref{fig:correlation:a} shows the distribution of $\mathcal{I}$ in the $( A^{(0)}, A^{(2)})$ plane,
and \figref{fig:correlation:b} shows the same plot but for the MFs obtained from the pixels above the 8~GeV threshold, $( A^{(0)}[8~\mathrm{GeV}], A^{(2)}[8~\mathrm{GeV}])$.
Because the RN+MF model rejects more dijet events, the ratios tend to be bigger than 1 for most of the bins.
In the figure, the red bins represent ${\cal I}>1$, while the blue bins correspond to ${\cal I}<1$.
In \figref{fig:correlation:a}, the bins with large $A^{(0)}$ and small $A^{(2)}$ are red, indicating that the RN+MF improves the classification by rejecting more background samples in this region.
In \figref{fig:correlation:b}, the region with large $A^{(0)}$ and large $A^{(2)}$ tends to have larger values, but the red region is less prominent.
This may indicate that the CNN is utilizing the geometric features of the pixels with energy above 8~GeV, but the CNN may also have difficulty in fully utilizing the geometric information of soft energy deposits.
\begin{figure}
\hspace*{-1.2em}
\subfloat[\label{fig:correlation:a}]{
\includegraphics[height=0.225\textwidth]{{Figures/ratio_cnn_mf_area_ptcut_0002_0.7}.pdf}
}\hspace*{-4.0em}
\subfloat[\label{fig:correlation:b}]{
\includegraphics[height=0.225\textwidth]{{Figures/ratio_cnn_mf_area_ptcut_8082_0.7}.pdf}
}
\caption{\label{fig:correlation}
The density of dijet events selected by the CNN model
divided by that of the RN+MF model at the same signal efficiency,
$\epsilon_{\mathrm{top}}=0.7$. }
\end{figure}
\subsection{Comment on EFN and EFN+MF }
\label{sec:benchmark_tagging:efn}
In addition to CNN, we study the classification using EFN and EFN+MF models.
The EFN model uses the same jet images as inputs, but the model itself is constrained to be IRC safe.
Because of the constraint, the EFN cannot fully use the geometric information of the soft activities encoded in the MFs.
As a result, the classification performance of the EFN is worse than that of the networks taking MFs as inputs and the CNN, which implicitly covers the MFs.
Nevertheless, the EFN+MF performs nearly as well as the CNN+MF and RN+MF, and it covers sufficiently useful IRC safe information for both dark jet tagging and top jet tagging.
In the dark jet tagging, the IRC safe variables are the key information for the jet tagging, and EFN performs well in the classification as illustrated in \figref{fig:roc:darkjet}.
In addition, considering MFs as extra inputs improves the performance slightly.
At low signal efficiency, the EFN+MF model has the best performance among all models in \figref{fig:ms}.
As discussed already, due to the large background rejection in this region, the number of remaining samples is small, and we suspect that the difference is within the statistical fluctuations.
In the top tagging, the geometric and topological information is important.
The performance of the EFN alone is comparable to that of the RN, but it is significantly improved when MFs are added as extra inputs.
Our RN model uses the two-point correlation with the leading $p_T$ subjet and the two-point correlation after removing the leading subjet in order to capture the three-point correlation inside the top jet.
The inputs for the EFN are also sensitive to this topological three-prong structure of the top jet because we preprocess the jet images, and those three subjets always appear at particular points on the jet image.
The EFN+MF covers more geometric information than the EFN, and its performance is comparable to the CNN as a result.
However, the improved performance mostly comes from the MFs, and the EFN+MF performs nearly as well as the CNN+MF and RN+MF.
\section{Computational Advantages of Morphological Analysis and Relation Network}
\label{sec:training_performance}
\subsection{Overcoming a Small Dataset}
As discussed in the previous section, the RN+MF model has an advantage over the CNN model in training performance.
Models with broader coverage, such as CNN, are capable of modeling generic functions.
The price of this high expressive power is often high variance in the trained outputs and high sensitivity to statistical noise.
These errors may degrade the generalization performance of the network.
In this respect, using a simpler model helps to maintain the performance for some cases.
\begin{figure}
\includegraphics[width=0.234\textwidth]{{Figures/auc_nevent.top_qcd}.pdf}
\includegraphics[width=0.234\textwidth]{{Figures/auc_nevent.dark_qcd}.pdf}
\caption{\label{fig:auc_nevent}
The AUCs of RN+MF and CNN trained with a given number of training samples.
The $x$-axis $N_{\mathrm{event}}$ denotes the number of samples in each class.
The rightmost entries are the AUCs of the networks trained on the full training dataset.
Since the numbers of signal and background samples are not identical in this case, we put their average value on the $x$-axis.
}
\end{figure}
\Figref{fig:auc_nevent} shows the AUCs of RN+MF and CNN as functions of the number of training samples.
We can see from the figure that the AUC of RN+MF is significantly larger than that of CNN for a small dataset, although their gap decreases as the size of the training dataset increases.
For the top jet classification, RN+MF already achieves an AUC higher than 0.9 with 1000 training samples, and this AUC is at most 4\% smaller than the best AUC.
Meanwhile, CNN needs $\mathcal{O}[10,000]$ samples to achieve the same performance as RN+MF.
We find similar behavior of the AUC curves in the dark jet classification.
The curves for RN+MF and CNN meet at 4000 events, which is much smaller than the meeting point of the curves in the top jet tagging case.
This is because there are no dark jet samples at the tail of the MF distributions of QCD jets, as shown in \figref{fig:ms}.
The training of the CNN can easily find this difference with a small number of samples, so the curves meet much earlier.
Since the CNN model has performance comparable to RN+MF, we may consider optimizing the training procedure to improve the performance when the dataset is small.
For example, we may adjust learning dynamics by replacing the cross-entropy loss $\mathcal{L}_{\mathrm{CE}}$ with a focal loss $\mathcal{L}_{\mathrm{FL}}$ \cite{Lin_2017_ICCV},
\begin{eqnarray}
\nonumber
\mathcal{L}_{\mathrm{FL}}
& = &
-\frac{1}{2} \E \left( (1-\hat{y})^2 \log \hat{y} \,|\, y=1 \right)
\\ \label{eqn:loss_fl}
& &
-
\frac{1}{2} \E \left( (\hat{y})^2 \log (1-\hat{y}) \,|\, y=0 \right).
\end{eqnarray}
The results are shown in dotted lines in \figref{fig:auc_nevent}.
The focal loss penalizes the contribution from easily-classifiable examples by extra factors $(1-\hat{y})^2$ and $(\hat{y})^2$, and it helps training when the dataset is sparse.
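As an illustration, the class-balanced focal loss above can be sketched in a few lines (a NumPy sketch with our own names; the actual training uses a deep-learning framework):

```python
import numpy as np

def focal_loss(y, y_hat, eps=1e-7):
    """Class-balanced binary focal loss with gamma = 2: easy examples are
    down-weighted by (1 - y_hat)^2 for y = 1 and by y_hat^2 for y = 0."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # guard against log(0)
    pos = -0.5 * np.mean((1.0 - y_hat[y == 1]) ** 2 * np.log(y_hat[y == 1]))
    neg = -0.5 * np.mean(y_hat[y == 0] ** 2 * np.log(1.0 - y_hat[y == 0]))
    return pos + neg
```

For a confidently correct prediction, the extra quadratic factors suppress the loss far below the corresponding cross-entropy value, which is exactly the down-weighting of easy examples described above.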
The jet image dataset is sparse, so the improvement is visible in the low-statistics regime.
However, there is no improvement for RN+MF since the $\mathrm{MF}$ and $S_2$ distributions are mostly dense and smooth.
Note that training with the focal loss does not converge to the maximum likelihood estimation of the binary classifier, i.e., $\hat{y} \nrightarrow p(y=1|x)$ in the asymptotic limit.
Therefore, the performance is generally worse than that obtained with the cross-entropy loss when enough data is available.
\subsection{Less Computational Complexity and Training Time}
Another advantage of the RN+MF is its low computational complexity.
Networks with less computational complexity can be evaluated much faster and take less memory.
\Tableref{tableiii} and \tableref{tablev} show that the training time of RN+MF is about ten times shorter than that of CNN.
We also note that RN+MF takes about 300 MB GPU memory during the training with 200 mini-batches, while CNN takes about 6000 MB GPU memory in our setup.
We can estimate the computational complexity difference between CNN and RN+MF from the complexities of network evaluations and the input calculations.
Because input calculations can be cached, the network evaluation complexity is the dominant factor in the complexity during training.
The evaluation complexity is proportional to the number of multiplications since the networks mostly consist of tensor multiplications.
One of the most expensive layers of our CNN is a convolution layer with $3\times3$ filters mapping images with $30\times30$ pixels and 16 channels to the images of the same size.
This layer has the following number of multiplications,
\begin{equation}
(3 \times 3) \times (16 \times 16) \times (30 \times 30) = 2,073,600.
\end{equation}
Our CNN has two convolutional layers with this configuration, so these two layers alone use about $4,000,000$ multiplications.
Meanwhile, our RN+MF has only fully connected layers, and the most expensive one has 200 incoming features and 200 outgoing features.
This layer has $200 \times 200 = 40,000$ multiplications.
Each of the four MLPs in RN+MF consists of three dense layers.
The number of multiplications is then at most
\begin{equation}
3\times4\times40,000 = 480,000.
\end{equation}
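The two multiplication counts above can be reproduced in a few lines (a sketch; the layer shapes are those quoted in the text, and the function names are ours):

```python
def conv2d_mults(k, c_in, c_out, h, w):
    # multiplications in a k x k convolution that preserves the spatial size
    return k * k * c_in * c_out * h * w

def dense_mults(n_in, n_out):
    # multiplications in a fully connected layer
    return n_in * n_out

cnn_cost = 2 * conv2d_mults(3, 16, 16, 30, 30)  # two 3x3, 16->16 channel layers
rn_cost = 4 * 3 * dense_mults(200, 200)         # four MLPs, three 200->200 layers each
print(cnn_cost, rn_cost)                        # 4147200 480000
```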
The estimated computational complexity is about a factor of 10 smaller than that of the convolutional layers, which qualitatively explains the difference in training time.
It also explains the difference in GPU memory usage, since the backpropagation algorithm has to record all intermediate operations.
The more operations are involved, the more GPU memory is needed during training.
On the other hand, the complexity of input calculations only matters when the network inputs are not cached.
The computational complexity of evaluating the inputs of RN+MF is as follows.
The calculation of MFs has two convolutions with filter sizes $(2k+1) \times (2k+1)$ and $2 \times 2$ for the dilation and local feature identification, respectively.
These two convolutions require the following number of multiplications,
\begin{equation}
(2k+1) \times (2k+1) \times (30 \times 30) + (2\times 2) \times (30 \times 30),
\end{equation}
which is 4,500 for $k=0$ and 155,700 for $k=6$.
Note that the complexity of dilation, $(2k+1) \times (2k+1) \times (30 \times 30)$, can be further reduced by using optimized algorithms.
We may consider this number as the upper bound of the complexity.
The calculation complexity of the two-point correlation $S_{2,ab}$ is a function of the number of jet constituents, $N$.
The jet reclustering has $N \log N$ complexity \cite{Cacciari:2005hq}, and the two-point correlation calculation has $N^2$ complexity in general.
In the case of $N=50$, which is approximately the largest number of jet constituents in our sample according to \figref{fig:ms}, the total complexity is $ \approx 2,700$.
The second $N^2$ factor can be reduced to $N^2 / 2$ if $a$ and $b$ of $S_{2,ab}$ are the same.
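The input-cost estimates above can be sketched as follows (a Python sketch with our own names; $N=50$ and the $N \log N$ reclustering term are as assumed in the text):

```python
import math

def mf_input_mults(k, h=30, w=30):
    # dilation with a (2k+1) x (2k+1) filter plus a 2x2 local-feature convolution
    return (2 * k + 1) ** 2 * h * w + 2 * 2 * h * w

def s2_input_cost(n):
    # jet reclustering ~ N log N, two-point correlation ~ N^2
    return n * math.log2(n) + n * n

print(mf_input_mults(0), mf_input_mults(6))  # 4500 155700
print(round(s2_input_cost(50)))              # 2782, i.e. roughly 2,700
```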
These two input-evaluation complexities of RN+MF, 155,700 and 2,700, are still much smaller than the complexity of the two convolutional layers.
We conclude that the RN+MF setup is more computationally efficient than the CNN.
\section{Parton Shower Modeling and Minkowski Functionals}
\label{sec:ps_and_mf}
So far, we have been discussing jets generated by \texttt{PYTHIA8}{}, but the predicted jet substructure in general depends on the simulator because of the different parton shower schemes.
\texttt{PYTHIA8}{} adopts $p_T$-ordered showering \cite{Rasmussen:2015qgr,Corke:2010yf} while \texttt{HERWIG7}{} adopts angular-ordered showering.
The distributions of MFs with energy thresholds can capture the geometric differences between those two shower schemes, and the two simulated distributions may be different from each other.
We briefly check the difference in the $A^{(k)}[p_T]$ distributions and discuss its origin in terms of the shower scheme.
\begin{figure}[tb]
\hspace*{-1.0em}
\subfloat[\label{fig:pyhw:a}]{
\includegraphics[width=0.27\textwidth]{{Figures/pyhw_ratio_s2mf_mf_area_ptcut_0001_1.0}.pdf}
}
\hspace*{-3.4em}
\subfloat[\label{fig:pyhw:b}]{
\includegraphics[width=0.27\textwidth]{{Figures/pyhw_ratio_s2mf_mf_area_ptcut_0003_1.0}.pdf}
}
\hspace*{-1.0em}
\subfloat[\label{fig:pyhw:c}]{
\includegraphics[width=0.27\textwidth]{{Figures/pyhw_ratio_s2mf_mf_area_ptcut_8081_1.0}.pdf}
}
\hspace*{-3.4em}
\subfloat[\label{fig:pyhw:d}]{
\includegraphics[width=0.27\textwidth]{{Figures/pyhw_ratio_s2mf_mf_area_ptcut_8083_1.0}.pdf}
}
\caption{\label{fig:pyhw}
The asymmetry $\mathcal{D}$ of the $(A^{(0)}, A^{(k)})$ distributions simulated by \texttt{PYTHIA8}{} and \texttt{HERWIG7}{}.
Figures (a) and (c) show the asymmetry of $(A^{(0)},A^{(1)})$ distributions.
Figures (b) and (d) show the asymmetry of $(A^{(0)}, A^{(3)})$ distributions.
No $p_T$ filter is applied to (a) and (b), while $p_T> 8$~GeV filter is applied for (c) and (d).
}
\end{figure}
In \figref{fig:pyhw}, we show the following asymmetry ratio $\mathcal{D}$ of the distribution of two selected $A^{(k)}[p_T]$:
\begin{equation}
\mathcal{D}(i) = \frac{f_P(i) - f_H(i)}{f_P(i)+ f_H(i)}, \quad f_A(i) = \frac{N_A(i)}{\sum_i N_A(i)} \; \mathrm{for} \;A\in\{P,H\}
\end{equation}
where $N_P(i)$ and $N_H(i)$ are the number of \texttt{PYTHIA8}{} and \texttt{HERWIG7}{} events in the $i$-th bin, and $f_P(i)$ and $f_H(i)$ are its fraction with respect to the total number of events, respectively.
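This asymmetry can be evaluated binwise from the two histograms as follows (a NumPy sketch with names of our choosing; bins that are empty in both generators are set to zero to avoid $0/0$):

```python
import numpy as np

def asymmetry(n_pythia, n_herwig):
    """Binwise asymmetry D between normalized PYTHIA8 and HERWIG7 histograms."""
    f_p = n_pythia / n_pythia.sum()
    f_h = n_herwig / n_herwig.sum()
    denom = f_p + f_h
    # D = (f_P - f_H) / (f_P + f_H); D = +1 (-1) where only PYTHIA (HERWIG) populates the bin
    return np.divide(f_p - f_h, denom, out=np.zeros_like(denom), where=denom > 0)
```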
Here, the samples are the QCD jets of the top jet classification, with $p_{T,\mathbf{J}} \in [500,600]$~GeV and $m_{\mathbf{J}} \in [150,200]$~GeV.
In \figref{fig:pyhw:a} and \figref{fig:pyhw:b}, we show the asymmetry ratios of the $(A^{(0)}, A^{(1)})$ and $(A^{(0)}, A^{(3)})$ distributions without $p_T$ filters.
The darkest red bins have $\mathcal{D}=1$, meaning that no \texttt{HERWIG7}{} events are observed there.
The darkest blue bins correspond to $\mathcal{D}=-1$ and contain no \texttt{PYTHIA8}{} samples.
The dark red pixels tend to lie in the large $A^{(0)}$ region because \texttt{PYTHIA8}{} predicts higher $A^{(0)}$.
For the same $A^{(0)}$ value, \texttt{PYTHIA8}{} predicts smaller values of $A^{(1)}$ than \texttt{HERWIG7}{}.
This means the jet constituents are more clustered in \texttt{PYTHIA8}{}.
The trend is common to all $k>1$ (see \figref{fig:pyhw:b} for $k=3$).
The situation is different for $A^{(k)}$ with a $p_T$ filter.
As illustrated in \figref{fig:pyhw:c} and \figref{fig:pyhw:d},
the $A^{(k)}[8\;\mathrm{GeV}]$ of \texttt{PYTHIA8}{} tends to be higher than that of \texttt{HERWIG7}{} for a given $A^{(0)}[8\;\mathrm{GeV}]$.
This means that high-$p_T$ pixels are more sparsely distributed in \texttt{PYTHIA8}{}-generated samples.
Recall that \texttt{PYTHIA8}{} adopts a transverse-momentum-ordered evolution scheme.
A high $p_{\perp}$ radiation in \texttt{PYTHIA8}{} tends to be emitted at a larger angle.
For the case of \texttt{HERWIG7}{}, the first emission in the evolution is typically a large angle soft radiation.
The asymmetry $\mathcal{D}$ for $A^{(k)}[p_T]$ distributions is consistent with the expectation of the shower modeling.
A \texttt{HERWIG7}{} QCD jet emits soft particles at large angles, while a \texttt{PYTHIA8}{} QCD jet emits higher-$p_T$ objects at large angles.
For the best classification performance with less simulator bias in the application stage,
the distributions of the inputs, especially the MFs, have to be tuned carefully to the real experimental data.
The calibration of MF distributions will be helpful to reduce the simulator dependency in the prediction of more general models, such as the CNN, because the MFs are important features in the jet classifications, as shown in \sectionref{sec:benchmark_tagging}.
\section{Summary}
In this paper, we introduce a neural network covering the space of ``valuations'' of jet constituents.
The valuations introduced in this paper can be considered a generalization of the particle multiplicity, which is a useful variable in quark versus gluon jet tagging but is not IRC safe in general.
The space of IRC unsafe variables is less explored compared to that of IRC safe variables because of its theoretical difficulties.
Nevertheless, Hadwiger's theorem in integral geometry tells us some structure of the valuation space, which is of interest to this paper.
The dimension of the valuation space is finite, and its basis is called the Minkowski functionals (MFs).
In two-dimensional Euclidean space, the MFs are the Euler characteristic, perimeter, and area.
We utilized these geometric features to build a neural network covering the space of valuations, and the resulting network is a multilayer perceptron taking the MFs as inputs.
In the case of jet image analysis, we showed that the MFs of dilated jet images could be represented by a chain of convolutional layers.
Therefore, convolutional neural networks (CNN) can explicitly utilize this information.
In the semi-visible jet tagging example, the CNN finds the phase-space region of MFs where only the QCD background is found without difficulty.
However, the MFs are not smooth functions of jet images, and the CNN has difficulty accessing that information when $L_2$ regularization is involved.
By explicitly adding the MFs as inputs to the CNN, we showed that its classification performance is improved.
We further built a neural network architecture combining these valuations with IRC safe information.
In particular, we considered energy-correlator-based networks: the relation network and the energy flow network.
We combined the outputs of these IRC safe neural networks with the network covering the IRC unsafe MFs.
This combined setup has a comparable performance to the CNN.
The combined model is constrained compared to the CNN, but its classification performance is similar; moreover, it has computational advantages.
First, it has a smaller computational complexity than the CNN, so its evaluation is fast and less memory-demanding.
Second, a constrained model generally requires fewer training samples to reach its best performance.
This network is especially useful when data is expensive.
Since MFs can be embedded into the CNN, they could potentially serve as variables for interpreting the CNN.
Deep neural networks are a highly expressive model of a function, but their prediction is not explainable \cite{xie2020explainable, 10.1007/978-3-030-76657-3_1} in general.
If we are aware of potentially important features for modeling, we may distill those features \cite{hinton2015distilling,xie2020explainable} by using interpretable models built from them in order to gain insight.
It will also allow us to control the network predictions systematically by using domain-specific knowledge.
We built a network based on MFs, which have clear geometric interpretations, and this type of network combined with interpretable IRC-safe neural networks \cite{Komiske:2018cqr,Chakraborty:2019imr} can be an answer for that in jet tagging problems.
For example, the distributions of IRC unsafe variables, including the MFs, have to be appropriately tuned in order to reduce the simulation bias.
Tuning the distribution of the jet constituents themselves for that purpose is not trivial because parton shower simulations are approximations and do not fully cover the phase space of radiated particles.
The MF representation of the valuation space is significantly smaller in dimension
and includes important counting variables that also have geometric meanings.
Tuning the distribution of MFs by reweighting \cite{Chakraborty:2020yfc,Diefenbacher:2020rna} can be a more feasible method for controlling the systematic errors of modeling the space of IRC unsafe features.
Finally, although we limited our discussion to pixelated image analysis, it would also be interesting to develop a continuum version of this morphological analysis in order to compare it with graph convolutional neural networks \cite{Qu:2019gqs}.
We leave these interesting possibilities for future studies.
\begin{acknowledgements}
The authors thank Benjamin Nachman, David Shih, Iftah Galon, Kyoungchul Kong, Mengchao Zhang, Myeonghun Park, and Takeshi Tsuboi for useful discussions.
This work is supported by the Grant-in-Aid for Scientific Research on Scientific Research B (No.~16H03991, 17H02878) and Innovative Areas (16H06492);
World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. The work of SHL was also supported by the US Department of Energy under
grant DE-SC0010008.
\end{acknowledgements}
\usepackage[utf8]{inputenc}
\usepackage{amsmath,amssymb,amsfonts,mathrsfs}
\usepackage{graphicx}
\graphicspath{{./figures/}}
\input{extrapackages}
\input{macrosetup}
\renewcommand{\bibname}{References}
\renewcommand{\bibsection}{\subsubsection*{\bibname}}
\begin{document}
\twocolumn[
\aistatstitle{Scalable Gaussian Process Variational Autoencoders}
\aistatsauthor{ Metod Jazbec\footnotemark[1] \And Matthew Ashman \And Vincent Fortuin}
\aistatsaddress{ ETH Z\"urich \And University of Cambridge \And ETH Z\"urich}
\aistatsauthor{ Michael Pearce \And Stephan Mandt \And Gunnar R\"atsch }
\aistatsaddress{ University of Warwick \And University of California, Irvine \And ETH Z\"urich}
\runningauthor{Jazbec, Ashman, Fortuin, Pearce, Mandt, and R\"atsch}
]
\begin{abstract}
Conventional variational autoencoders fail in modeling correlations between data points due to their use of factorized priors.
Amortized Gaussian process inference through GP-VAEs has led to significant improvements in this regard, but is still inhibited by the intrinsic complexity of exact GP inference.
We improve the scalability of these methods through principled sparse inference approaches.
We propose a new scalable GP-VAE model that outperforms existing approaches in terms of runtime and memory footprint, is easy to implement, and allows for joint end-to-end optimization of all components.
\end{abstract}
\input{sections/10-intro}
\input{sections/20-related-work}
\input{sections/41-scalable_gp} %
\input{sections/50-experiments}
\section{Conclusion}
We have proposed a novel sparse inference method for GP-VAE models and have shown theoretically and empirically that it is more scalable than existing approaches, while achieving competitive performance. Our approach bridges the gap between sparse variational GP approximations and GP-VAE models, thus enabling the utilization of a large body of work in the sparse GP literature. As such, it represents an important step towards unlocking the possibility to perform amortized GP regression on large datasets with complex likelihoods (e.g., natural images).
Fruitful avenues for future work include considering even more recently proposed sparse GP approaches \citep{Cheng2017VariationalComplexity, evans2020quadruply} and comparing our proposed scalable GP-VAE solution against other families of deep generative models \citep{ mirza2014conditional, Eslami1204}. This would help identify real-world applications where GP-VAEs could be most impactful.
\subsubsection*{Acknowledgements}
M.J. acknowledges funding from the Public Scholarship and Development Fund of the Republic of Slovenia.
V.F. was supported by a PhD fellowship from the Swiss Data Science Center and by the grant \#2017-110 of the Strategic Focus Area ``Personalized Health
and Related Technologies (PHRT)'' of the ETH Domain.
M.J. and V.F. were also supported by ETH core funding (to G.R.).
S.M. is supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0021. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA). Furthermore, S.M. was supported by the National Science Foundation under Grants 1928718, 2003237 and 2007719, and by Qualcomm.
\newpage
\bibliographystyle{abbrvnat}
\section{Experimental details}
\label{sec:exp-details}
Here we report the details and parameter settings for our experiments to foster reproducibility.
\subsection{Moving ball experiment}
For the moving ball experiment described in Section \ref{sec:exp_ball}, we use the same neural networks architectures and training setting as in \cite{Pearce2019ThePixels}.
\begin{table}[h!]
\centering
\caption{Parameter settings for the moving ball experiment.}
\begin{tabular}{lc}
\toprule
Parameter & Value \\
\midrule
Nr. of feedforward layers in inference network & 2 \\
Nr. of feedforward layers in generative network & 2 \\
Width of a hidden feedforward layer & 500 \\
Dimensionality of latent space (L) & 2\\
Activation function & \emph{tanh} \\
Learning rate & 0.001 \\
Optimizer & Adam \\
Nr. of epochs & 25000 \\
Nr. of frames in each video (N) & 30 \\
Dimension of each frame & $32 \times 32$ \\
\bottomrule
\end{tabular}
\label{tab:ball_parameters}
\end{table}
A squared-exponential GP kernel with length scale $l=2$ was used. For the exact data generation procedure, we refer to \cite{Pearce2019ThePixels}. During training, 35 videos were generated in each epoch. The test MSE is reported on a held-out set of 350 videos. For the Adam optimizer \citep{Kingma2014Adam:Optimization}, the default TensorFlow parameters are used.
\subsection{MNIST experiment}
\label{sec:app_rot_MNIST}
For the rotated MNIST experiment described in Section \ref{sec:MNIST}, we used the same neural networks architectures as in \cite{Casale2018GaussianAutoencoders}: three convolutional layers followed by a fully connected layer in the inference network and vice-versa in the generative network.
\begin{table}[h!]
\centering
\caption{Neural networks architectures for the MNIST experiment.}
\begin{tabular}{lc}
\toprule
Parameter & Value \\
\midrule
Nr. of CNN layers in inference network & 3 \\
Nr. of CNN layers in generative network & 3 \\
Nr. of filters per CNN layer & 8 \\
Filter size & $3 \times 3$ \\
Nr. of feedforward layers in inference network & 1 \\
Nr. of feedforward layers in generative network & 1 \\
Activation function in CNN layers & ELU \\
Dimensionality of latent space (L) & 16 \\
\bottomrule
\end{tabular}
\label{tab:mnist_parameters}
\end{table}
The SVGP-VAE model is trained for 1000 epochs with a batch size of 256. The Adam optimizer \citep{Kingma2014Adam:Optimization} is used with its default parameters and a learning rate of 0.001. Moreover, the GECO algorithm \citep{Rezende2018TamingVAEs} was used for training our SVGP-VAE model in this experiment. The reconstruction parameter in GECO was set to $\kappa = 0.020$ in all reported experiments.
For the GP-VAE model from \cite{Casale2018GaussianAutoencoders}, we used the same training procedure as reported in \cite{Casale2018GaussianAutoencoders}. We have observed in our reimplementation that a joint optimization at the end does not improve performance. Hence, we report results for the regime where the VAE parameters are optimized for the first 100 epochs, followed by 100 epochs during which the GP parameters are optimized. Moreover, we could not get their proposed low-memory modified forward pass to work, so in our reimplementation the entire dataset is loaded into the memory at one point during the forward pass. Our reimplementation of the GP-VAE model from \cite{Casale2018GaussianAutoencoders} is publicly available at \url{https://github.com/ratschlab/SVGP-VAE}.
For both models, the GP kernel proposed in \cite{Casale2018GaussianAutoencoders} is used. For more details on the kernel, we refer to Appendix \ref{sec:app_low_rank}. Note that the auxiliary data $\textbf{X}$ is only partially observed in this experiment --- for both models we use a GP-LVM to learn the missing parts of $\textbf{X}$. For both models, we use Principal Component Analysis (PCA) to initialize the GP-LVM vectors, as it was observed to lead to a slight increase in performance. PCA is also used in SVGP-VAE to initialize the inducing points. For more details see Appendix \ref{app:PCA_init}.
\subsection{SPRITES experiment}
\label{sec:app_SPRITES}
For the SPRITES experiment described in Section \ref{sec:SPRITES}, we used similar neural networks architectures as for the rotated MNIST experiment. Details are provided in Table \ref{tab:SPRITES_parameters}.
\begin{table}[h!]
\centering
\caption{Neural networks architectures for the SPRITES experiment.}
\begin{tabular}{lc}
\toprule
Parameter & Value \\
\midrule
Nr. of CNN layers in inference network & 6 \\
Nr. of CNN layers in generative network & 6 \\
Nr. of filters per CNN layer & 16 \\
Filter size & $3 \times 3$ \\
Nr. of feedforward layers in inference network & 1 \\
Nr. of feedforward layers in generative network & 1 \\
Activation function in CNN layers & ELU \\
Dimensionality of latent space (L) & 64 \\
\bottomrule
\end{tabular}
\label{tab:SPRITES_parameters}
\end{table}
The SVGP-VAE model is trained for 50 epochs with a batch size of 500. The Adam optimizer \citep{Kingma2014Adam:Optimization} is used with its default parameters and a learning rate of 0.001. Moreover, the GECO algorithm \citep{Rezende2018TamingVAEs} was used for training our SVGP-VAE model in this experiment. The reconstruction parameter in GECO was set to $\kappa = 0.0075$.
The auxiliary data $\textbf{X}$ is fully unobserved in this experiment. Recall that in SPRITES, the auxiliary data has two parts $\textbf{X} = [\textbf{X}_{s}, \: \textbf{X}_{a}]$, with $\textbf{X}_s \in \mathbb{R}^{N \times p_1}$ containing information about the character \textbf{s}tyle and $\textbf{X}_a \in \mathbb{R}^{N \times p_2}$ containing information about the specific \textbf{a}ction/pose. Let $\textbf{x}_i = [\textbf{x}_{s,i} \: \textbf{x}_{a,i}]$ denote auxiliary data for the $i$-th image (corresponding to the $i$-th row of the $\textbf{X}$ matrix). A product kernel between two linear kernels is used:\footnote{$\delta_{ij} = 1$ if $i = j$ and $0$ else.}
\begin{align*}
k_{\theta}(\textbf{x}_i, \textbf{x}_j) = \frac{\textbf{x}_{s,i}^T \textbf{x}_{s,j}}{\norm{\textbf{x}_{s,i}}\norm{\textbf{x}_{s,j}}} \cdot \frac{\textbf{x}_{a,i}^T \textbf{x}_{a,j}}{\norm{\textbf{x}_{a,i}}\norm{\textbf{x}_{a,j}}} + \sigma^2 \cdot \delta_{ij} \; .
\end{align*}
The kernel normalization and the addition of the diagonal noise are used to improve the numerical stability of kernel matrices.
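For concreteness, a single kernel entry can be evaluated as follows (a NumPy sketch; the function name and the `same` flag marking the diagonal $i=j$ are our own conventions):

```python
import numpy as np

def sprites_kernel(xs_i, xa_i, xs_j, xa_j, sigma2=1e-3, same=False):
    """Product of normalized linear kernels over the style (xs) and action (xa)
    parts of the auxiliary data, plus diagonal noise sigma2 when i == j."""
    ks = xs_i @ xs_j / (np.linalg.norm(xs_i) * np.linalg.norm(xs_j))
    ka = xa_i @ xa_j / (np.linalg.norm(xa_i) * np.linalg.norm(xa_j))
    return ks * ka + (sigma2 if same else 0.0)
```

The normalization bounds each factor in $[-1, 1]$, so diagonal entries equal $1 + \sigma^2$, which keeps the kernel matrices well conditioned.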
To learn the action part of the auxiliary data $\textbf{X}_a$, we rely on a GP-LVM \citep{Lawrence2004GaussianData}, that is, we try to directly learn the matrix $\bm{A} \in \mathbb{R}^{72 \times p_2}$ consisting of GP-LVM vectors that each represent a specific action/pose. Since we want to extrapolate to new characters during the test phase\footnote{Note that an easier version of the SPRITES experiment would be to generate actions for characters already seen during the training phase. Such a conditional generation task would closely resemble the one from the face experiment in \cite{Casale2018GaussianAutoencoders}.}, the GP-LVM approach can not be used to learn the part of the auxiliary data that captures the character style information $\textbf{X}_s$. This would require rerunning the optimization at test time to obtain a corresponding GP-LVM vector for the new, previously unseen style. To get around this, we introduce the \emph{representation network} $r_{\zeta}: \mathbb{R}^K \to \mathbb{R}^{p_1}$, similar to what is done in \cite{AliEslami2018NeuralRendering}, with which we aim to amortize the learning of the unobserved parts of the auxiliary data. Specifically, the representation for the $i$-th character style is then
\begin{align*}
\mathbf{s}_i = f\big(r_{\zeta}(\yb_{1}), \: \dots, \: r_{\zeta}(\yb_{N_i})\big) \in \mathbb{R}^{p_1} \: ,
\end{align*}
where $\textbf{Y}_i = [\yb_1 \dots \yb_{N_i}]^T \in \mathbb{R}^{N_i \times K}$ represents all images of the $i$-th character, and $f$ is a chosen aggregation function (in our experiment we used the sum function). Instead of the GP-LVM vectors, the parameters of the representation network $\zeta$ are jointly optimized with the rest of the SVGP-VAE parameters. During training, we pass all 50 images (50 different actions) for each character through $r_{\zeta}$ to obtain the corresponding style representation. During the test phase, we first pass 36 actions through $r_{\zeta}$ and then use the resulting style representation vector to conditionally generate the remaining 36 actions. To help with the stability of training, we additionally pretrain the representation network on the classification task using the training data. Concretely, we train a classifier on top of the representations of the training data $r_{\zeta}(\yb_i), \: i=1, ..., N$. The (pretraining) label for each representation is a given character ID.\footnote{Recall that there are 1000 different characters in our training dataset, i.e., the pretraining task is a 1000-class classification problem.}
The details on the architecture of the representation network are provided in Table \ref{tab:SPRITES_reprnn} (it is essentially a downsized inference network).
\begin{table}[h!]
\centering
\caption{The architecture for the representation network $r_{\zeta}$ and some additional parameters in the SPRITES experiment.}
\begin{tabular}{lc}
\toprule
Parameter & Value \\
\midrule
Nr. of CNN layers & 3 \\
Nr. of filters per CNN layer & 16 \\
Filter size & $2 \times 2$ \\
Nr. of pooling layers & 1 \\
Activation function in CNN layers & ELU \\
Dimensionality of style representation ($p_1$) & 16 \\
Dimensionality of action GP-LVM vectors ($p_2$) & 8
\\
Nr. of epochs for pretraining of $r_{\zeta}$ & 400 \\
\bottomrule
\end{tabular}
\label{tab:SPRITES_reprnn}
\end{table}
\subsection{On the training of GP-VAE models (a practitioner's perspective)}
While working on implementations of different GP-VAE models, we have noticed that balancing the absolute magnitudes of the reconstruction and the KL-term is critical for achieving optimal results, even more so than in standard VAE models. In \cite{Fortuin2019GP-VAE:Imputation}, this was tackled by introducing a weighting $\beta$ parameter, whereas in \cite{Casale2018GaussianAutoencoders} a CV search on the noise parameter $\sigma_y^2$ of the likelihood $p_{\psi}(\yb_i | \zb_i)$ is performed. One downside of both solutions is that they introduce (yet) another training hyperparameter that needs to be manually tuned for every new dataset/model architecture considered.
To get around this, we instead used the GECO algorithm \citep{Rezende2018TamingVAEs} to train our SVGP-VAE in the rotated MNIST experiment. Compared to the original GECO algorithm in \cite{Rezende2018TamingVAEs}, where the maximization objective is the KL divergence between a standard Gaussian prior and the variational distribution, the GECO maximization objective in the SVGP-VAE is composed of a cross-entropy term $\mathbb{E}_{q_S}[\log\Tilde{q}_{\phi}(\cdot)]$ and a sparse GP ELBO $\cL_{H}(\cdot)$. We have observed that GECO greatly simplifies training of GP-VAE models as it eliminates the need to manually tune the different magnitudes of the ELBO terms. Based on this, we would make a general recommendation for GECO to be used for training such models.
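For reference, a single GECO-style multiplier step can be sketched as follows (a simplified sketch after \cite{Rezende2018TamingVAEs}; the hyperparameter values are illustrative, not those used in our experiments):

```python
import numpy as np

def geco_update(lam, C_ma, C_t, alpha=0.99, nu=1e-2):
    """One GECO-style Lagrange multiplier step.

    C_t: current constraint value (e.g., reconstruction error minus a
    tolerance kappa); C_ma: its running average; lam: the multiplier that
    re-weights the ELBO terms."""
    C_ma = alpha * C_ma + (1 - alpha) * C_t
    lam = lam * np.exp(nu * C_ma)  # grows while the constraint is violated
    return lam, C_ma
```

The multiplicative update removes the need to hand-tune a fixed weighting between the ELBO terms: the multiplier adapts until the reconstruction constraint is satisfied.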
\newpage
\section{Supporting derivations}
\label{sec:app_supp_deriv}
\subsection{Vanishing of the inference network in GP-VAE with ELBO from \cite{Hensman2013GaussianData}}
\label{sec:app_vanishing_SVGP-VAE_Hensman}
In this section we show that working with the sparse GP approach presented in \cite{Hensman2013GaussianData} leads to vanishing of the inference network parameters $\phi$ in the GP-VAE model from \cite{Pearce2019ThePixels}. Recall that the sparse GP posterior from \cite{Hensman2013GaussianData} for the $l$-th latent channel has the form
\begin{align*}
q_S^{l} (\zb^l_{1:N} | \cdot) = \mathcal{N}\big(\zb^l_{1:N}| \Kb_{Nm} \Kb_{mm}^{-1} \bm{\mu}^l, \: \Kb_{NN} - \Kb_{Nm}\Kb_{mm}^{-1}\Kb_{mN} + \Kb_{Nm}\Kb_{mm}^{-1}\textbf{A}^l \Kb_{mm}^{-1} \Kb_{mN}\big) \:
\end{align*}
with $\bm{\mu}^l \in \mathbb{R}^m, \textbf{A}^l \in \mathbb{R}^{m \times m}$ as free variational parameters, while the sparse GP ELBO for the $l$-th latent channel is given as
\begin{multline*}
\cL_{H}^{l}(\textbf{U}, \bm{\mu}^l, \textbf{A}^l, \phi, \theta) =
\sum_{i=1}^N \bigg\{\log\mathcal{N}\big(\Tilde{y}_{l,i} \: | \: \bm{k}_i^T \Kb_{mm}^{-1}\bm{\mu}^l, \: \Tilde{\sigma}^{2}_{l,i}\big) - \frac{1}{2 \Tilde{\sigma}^{2}_{l,i}} \big( \Tilde{k}_{ii} + Tr(\textbf{A}^l\: \Lambda_i)\big) \bigg\} \\ - KL\big(q^{l}_S(\textbf{f}_m | \cdot) \: || \: p_{\theta}(\textbf{f}_m | \cdot)\big)
\end{multline*}
with $q^{l}_S(\textbf{f}_m | \cdot) = \mathcal{N}(\textbf{f}_m | \bm{\mu^l}, \bm{A^l})$ and $p_{\theta}(\textbf{f}_m | \cdot) = \mathcal{N}(\textbf{f}_m | \bm{0}, \: \Kb_{mm})$.
$\bm{k}_i$ represents the $i$-th column of $\Kb_{mN}$, $\Lambda_i := \Kb_{mm}^{-1}\bm{k}_i \bm{k}_i^T\Kb_{mm}^{-1}$ and $\Tilde{k}_{ii}$ is the $i$-th diagonal element of $\Kb_{NN} - \Kb_{Nm} \Kb_{mm}^{-1}\Kb_{mN}$. As mentioned in Section \ref{sec:methods}, $\cL_H^{l}$ depends on the inference network parameters $\phi$ through the (amortized) $l$-th latent dataset $\tilde{\yb}_l=\mu^l_\phi(\textbf{Y}), \:{\tilde{\bm{\sigma}}}_l = \sigma^l_\phi(\textbf{Y})$.
Note that the full sparse GP posterior equals $q_S(\Zb) = \prod_{l=1}^L q_S^{l} (\zb^l_{1:N} | \cdot) $. Similarly, the full sparse GP ELBO is $\cL_H = \sum_{l=1}^L \cL_{H}^{l}(\textbf{U}, \bm{\mu}^l, \textbf{A}^l, \phi, \theta)$.
\vspace{10pt}
\begin{proposition}
For the $l$-th latent channel in the GP-VAE model with the bound from \cite{Hensman2013GaussianData}, the following relation holds:
\begin{align*}
\mathbb{E}_{q_S^l}\big[\log\Tilde{q}_{\phi}(\zb^{\,l}_{1:N} | \textbf{Y})\big] =
\sum_{i=1}^N \bigg\{\log\mathcal{N}\big(\Tilde{y}_{l,i} \: | \: \bm{k}_i^T \Kb_{mm}^{-1}\bm{\mu}^l, \: \Tilde{\sigma}^{2}_{l,i}\big) - \frac{1}{2 \Tilde{\sigma}^{2}_{l,i}} \big( \Tilde{k}_{ii} + Tr(\textbf{A}^l\: \Lambda_i)\big) \bigg\} \: .
\end{align*}
\end{proposition}
\begin{proof}\label{prop:hensman_vanish_1}
For notational convenience, define $\Tilde{D}_l := \text{diag}({\tilde{\bm{\sigma}}}_l^2)$, $\bm{B} := \Kb_{Nm}\Kb_{mm}^{-1}$, and $\Tilde{\Kb} := \Kb_{NN} - \Kb_{Nm}\Kb_{mm}^{-1}\Kb_{mN}$. Also recall that $ \Tilde{q}_{\phi}(\zb^{l}_{1:N} | \textbf{Y}) = \mathcal{N}\big( \zb^{l}_{1:N} | \tilde{\yb}_l, \: \Tilde{D}_l\big)$. Using the formula for the cross-entropy between two multivariate Gaussian distributions, we proceed as
\begin{align*}
\mathbb{E}_{q_S^l}\big[\log\Tilde{q}_{\phi}(\zb^{l}_{1:N} | \textbf{Y})\big] &= -\frac{N}{2}\log (2\pi) -\frac{1}{2} \log |\Tilde{D}_l|
- \frac{1}{2}\big(\tilde{\yb}_l - \bm{B}\bm{\mu}^l\big)^T \Tilde{D}_l^{-1}\big(\tilde{\yb}_l - \bm{B}\bm{\mu}^l\big) - \frac{1}{2} Tr\big(\Tilde{D}_l^{-1}(\Tilde{\Kb} + \bm{B}\bm{A}^l\bm{B}^T)\big) \\
&= \log \mathcal{N}\big(\tilde{\yb}_l | \bm{B} \bm{\mu^l}, \: \Tilde{D}_l \big) - \frac{1}{2} Tr(\Tilde{D}_l^{-1}\Tilde{\Kb}) - \frac{1}{2}Tr(\Tilde{D}_l^{-1}\bm{B}\bm{A}^l\bm{B}^T) \: .
\end{align*}
It remains to show that the last trace term equals $\sum_{i=1}^N \Tilde{\sigma}^{-2}_{l, i}Tr(\textbf{A}^l\: \Lambda_i)$, which follows from
\begin{align*}
Tr(\Tilde{D}_l^{-1}\bm{B}\bm{A}^l\bm{B}^T) = Tr(\bm{A}^l\bm{B}^T\Tilde{D}_l^{-1}\bm{B}) = Tr\big(\bm{A}^l\Kb_{mm}^{-1}\big(\sum_{i=1}^N \Tilde{\sigma}^{-2}_{l, i}\bm{k_i} \bm{k_i}^T\big)\Kb_{mm}^{-1}\big) = \sum_{i=1}^N \Tilde{\sigma}^{-2}_{l, i} Tr(\bm{A}^l\Lambda_i) \: .
\end{align*}
\end{proof}
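The trace identity used in the last step can be verified numerically with random matrices (a small self-contained check; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 8, 3
A0 = rng.standard_normal((m, m)); A = A0 @ A0.T               # A^l (PSD)
M0 = rng.standard_normal((m, m)); K_mm = M0 @ M0.T + np.eye(m)
K_Nm = rng.standard_normal((N, m))
sigma2 = rng.uniform(0.5, 2.0, N)                             # diag of D_l

# left-hand side: Tr(D^{-1} B A B^T) with B = K_Nm K_mm^{-1}
B = K_Nm @ np.linalg.inv(K_mm)
lhs = np.trace(np.diag(1.0 / sigma2) @ B @ A @ B.T)

# right-hand side: sum_i sigma_i^{-2} Tr(A Lambda_i),
# with Lambda_i = K_mm^{-1} k_i k_i^T K_mm^{-1}
K_inv = np.linalg.inv(K_mm)
rhs = sum(
    np.trace(A @ K_inv @ np.outer(K_Nm[i], K_Nm[i]) @ K_inv) / sigma2[i]
    for i in range(N)
)
assert np.isclose(lhs, rhs)
```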
\begin{proposition}
The GP-VAE ELBO with the bound from \cite{Hensman2013GaussianData} reduces to
\begin{align*}
\mathcal{L}_{PH}(\textbf{U}, \psi, \theta, \bm{\mu}^{1:L}, \textbf{A}^{1:L}) =
\sum_{i=1}^N \mathbb{E}_{q_S} \bigg[ \log p_{\psi}(\yb_i | \zb_i) \bigg] - \sum_{l=1}^L KL\big(q_S^{l}(\textbf{f}_m|\cdot) \: || \: p_\theta^l(\textbf{f}_m|\cdot)\big)
\end{align*}
\end{proposition}
\begin{proof}
Using the above proposition, we have
\begin{align*}
& \mathbb{E}_{q_S} \bigg[ \sum_{i=1}^N \log p_{\psi}(\yb_i | \zb_i) - \log \Tilde{q}_{\phi}(\zb_i | \yb_i)\bigg] + \sum_{l=1}^L \cL_{H}^{l} \\
&= \mathbb{E}_{q_S} \bigg[ \sum_{i=1}^N \log p_{\psi}(\yb_i | \zb_i)\bigg] - \mathbb{E}_{q_S} \bigg[ \sum_{l=1}^L\log \Tilde{q}_{\phi}(\zb^{l}_{1:N} | \textbf{Y})\bigg] + \sum_{l=1}^L \cL_{H}^{l} \\
&= \mathbb{E}_{q_S} \bigg[ \sum_{i=1}^N \log p_{\psi}(\yb_i | \zb_i)\bigg] - \sum_{l=1}^L\ \bigg(\mathbb{E}_{q_{S}^l} \big[ \log \Tilde{q}_{\phi}(\zb^{l}_{1:N} | \textbf{Y})\big] - \cL_{H}^{l}\bigg) \\
&= \mathbb{E}_{q_S} \bigg[ \sum_{i=1}^N \log p_{\psi}(\yb_i | \zb_i) \bigg] - \sum_{l=1}^L KL\big(q^{l}_S(\textbf{f}_m | \cdot) \: || \: p_{\theta}(\textbf{f}_m | \cdot)\big) \: .
\end{align*}
\end{proof}
Observe that in $\cL_{PH}(\cdot)$ all terms that include $\tilde{\yb}_l$ or ${\tilde{\bm{\sigma}}}_l$ cancel out; this ELBO is therefore independent of the inference network parameters $\phi$.
\subsection{Monte Carlo estimators in the SVGP-VAE}
\label{sec:app_MC_SVGP-VAE}
The idea behind the estimators used in $q_S$ in our SVGP-VAE is based on the work presented in \cite{evans2020quadruply}. The main insight is to rewrite the matrix operations as expectations with respect to the empirical distribution of the training data. Those expectations are then approximated with Monte Carlo estimators.
Recall that the (amortized) latent dataset for the $l$-th channel is denoted by $\{\textbf{X}, \tilde{\yb}_l, {\tilde{\bm{\sigma}}}_l\}$, with $\tilde{\yb}_l:=\mu^l_\phi(\textbf{Y})$ and ${\tilde{\bm{\sigma}}}_l := \sigma^l_\phi(\textbf{Y})$. For notational convenience, additionally denote $\Tilde{D}_l := \text{diag}({\tilde{\bm{\sigma}}}_l^2)$. First, observe that the matrix product $\Kb_{mN}\Tilde{D}_l^{-1}\Kb_{Nm}$ in $\bm{\Sigma}^l$ can be rewritten as a sum over data points $\sum_{i=1}^N B_i (\textbf{x}_i, \yb_i)$
with
\begin{align*}
B_i(\textbf{x}_i, \yb_i) := \frac{1}{\Tilde{\sigma}^2_{l, i}} \begin{bmatrix}
k_{\theta}(\textbf{u}_1, \textbf{x}_i)k_{\theta}(\textbf{u}_1, \textbf{x}_i) & \hdots & k_{\theta}(\textbf{u}_1, \textbf{x}_i)k_{\theta}(\textbf{u}_m, \textbf{x}_i) \\[0.6em]
\vdots & \ddots & \vdots\\[0.6em]
k_{\theta}(\textbf{u}_m, \textbf{x}_i)k_{\theta}(\textbf{u}_1, \textbf{x}_i) & \hdots & k_{\theta}(\textbf{u}_m, \textbf{x}_i)k_{\theta}(\textbf{u}_m, \textbf{x}_i)
\end{bmatrix} \: .
\end{align*}
Let $\Bar{b}$ represent a set of indices of data points in the current batch with size $b$.
Moreover, define $\Kb_{bm} \in \mathbb{R}^{b \times m}, \: \Tilde{D}_{l, b} \in \mathbb{R}^{b \times b}, \: \tilde{\yb}_b^l \in \mathbb{R}^b$ as the sub-sampled versions of $\Kb_{Nm} \in \mathbb{R}^{N \times m}, \: \Tilde{D}_{l} \in \mathbb{R}^{N \times N}$ and $\tilde{\yb}_l \in \mathbb{R}^{N}$, respectively, consisting only of data points in $\bar{b}$. An (unbiased) Monte Carlo estimator for $\bm{\Sigma}^l$ is then derived as follows
\begin{align*}
\bm{\Sigma}^l &= \Kb_{mm} + \Kb_{mN}\Tilde{D}_l^{-1}\Kb_{Nm} =
\Kb_{mm} + N\sum_{i=1}^N \frac{1}{N}B_i (\textbf{x}_i, \yb_i) =
\Kb_{mm} + N \cdot \mathbb{E}_{i \sim \{1, ..., N \} } \big[B_i(\textbf{x}_i, \yb_i)\big] \\ & \approx
\Kb_{mm} + \frac{N}{b}\sum_{i \in \Bar{b}} B_i(\textbf{x}_i, \yb_i) =
\Kb_{mm} + \frac{N}{b}\Kb_{mb} \Tilde{D}_{l, b}^{-1} \Kb_{bm}=: \bm{\Sigma}^l_b \: .
\end{align*}
Additionally, define $\bm{c}_l := \Kb_{mN} \Tilde{D}_l^{-1} \tilde{\yb}_l$ and proceed similarly as above
\begin{align*}
\bm{c}_l = \sum_{i=1}^N b_i(\textbf{x}_i, \yb_i) = N \cdot \mathbb{E}_{i\sim \{1, ..., N \}}[b_i(\textbf{x}_i, \yb_i)] \approx \frac{N}{b} \sum_{i \in \Bar{b}} b_i(\textbf{x}_i, \yb_i) = \frac{N}{b} \Kb_{mb} \Tilde{D}_{l, b}^{-1}\tilde{\yb}_b^l=: \bm{c}^l_b \: ,
\end{align*}
where
\begin{align*}
b_i(\textbf{x}_i, \yb_i) := \frac{\Tilde{y}_{l,i}}{\Tilde{\sigma}^2_{l,i}}
\begin{bmatrix}
k_{\theta}(\textbf{u}_1, \textbf{x}_i) \\[0.6em]
\vdots \\[0.6em]
k_{\theta}(\textbf{u}_m, \textbf{x}_i)
\end{bmatrix} \: .
\end{align*}
The estimators for $\bm{\mu}^l_T$ and $\bm{A}^l_T$ are then obtained using a plug-in approach,
\begin{align*}
\bm{\mu}^l_T = \Kb_{mm} (\bm{\Sigma}^{l})^{-1}\bm{c}_l \approx \Kb_{mm} (\bm{\Sigma}_b^{l})^{-1} \bm{c}^l_b =: \bm{\mu}_b^l \: ,\\
\bm{A}_T^l = \Kb_{mm} (\bm{\Sigma}^{l})^{-1} \Kb_{mm} \approx \Kb_{mm} (\bm{\Sigma}_b^{l})^{-1} \Kb_{mm} =: \bm{A}^l _b \: .
\end{align*}
Note that neither of the above estimators is unbiased, since both depend on the inverse $(\bm{\Sigma}_b^{l})^{-1}$. For the empirical investigation of the magnitude of the bias, see Appendix \ref{sec:app_bias}. However, $\bm{A}^l_b$ can be shown to be approximately (up to the first order Taylor approximation) unbiased.
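A minimal numpy sketch of these plug-in batch estimators for a single latent channel (the RBF kernel and all variable names are illustrative assumptions, not our actual implementation):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel matrix between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_batch_estimators(X_b, y_b, sigma2_b, U, N, jitter=1e-6):
    """Plug-in Monte Carlo estimators mu^l_b and A^l_b from one batch.

    X_b: (b, D) batch of auxiliary data, y_b: (b,) amortized latent means,
    sigma2_b: (b,) amortized latent variances, U: (m, D) inducing points,
    N: full dataset size."""
    b, m = X_b.shape[0], U.shape[0]
    K_mm = rbf(U, U) + jitter * np.eye(m)
    K_mb = rbf(U, X_b)
    scale = N / b
    # Sigma^l_b = K_mm + (N/b) K_mb D^{-1} K_bm
    Sigma_b = K_mm + scale * (K_mb / sigma2_b) @ K_mb.T
    # c^l_b = (N/b) K_mb D^{-1} y_b
    c_b = scale * (K_mb / sigma2_b) @ y_b
    Sigma_inv = np.linalg.inv(Sigma_b)
    mu_b = K_mm @ Sigma_inv @ c_b   # estimator of mu^l_T
    A_b = K_mm @ Sigma_inv @ K_mm   # estimator of A^l_T
    return mu_b, A_b
```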
\vspace{10pt}
\begin{proposition}For the estimator $\bm{A}^l_b$ in SVGP-VAE, it holds that
\begin{align*}
\mathbb{E}[\bm{A}^l_b] - \bm{A}_T^l \approx 0 \: .
\end{align*}
\end{proposition}
\begin{proof}
Note that expectation here is taken with respect to the empirical distribution of the training data, that is, $\mathbb{E}_{i \sim \{1,...,N\}}$. Using the definitions of $\bm{A}^l_b$ and $\bm{A}_T^l$, we get
\begin{align*}
\mathbb{E}[\bm{A}^l_b] - \bm{A}_T^l = \Kb_{mm} \big(\mathbb{E}\big[(\bm{\Sigma}^l_b)^{-1}\big] - (\bm{\Sigma}^l)^{-1}\big) \Kb_{mm} \: ,
\end{align*}
so it remains to show that $\mathbb{E}[(\bm{\Sigma}^l_b)^{-1}] - (\bm{\Sigma}^l)^{-1} \approx 0$. To this end, we exploit the positive definiteness of the kernel matrix $\Kb_{mm}$ and we approximate both inverse terms with the first order Taylor expansion:
\begin{align*}
(\bm{\Sigma}^l_b)^{-1} = \big(\Kb_{mm} + \frac{N}{b} \Kb_{mb} \Tilde{D}_{l, b}^{-1} \Kb_{bm}\big)^{-1} = \Kb_{mm}^{-\frac{1}{2}}\big(\textbf{I} + \frac{N}{b}\Kb_{mm}^{-\frac{1}{2}}\Kb_{mb} \Tilde{D}_{l, b}^{-1} \Kb_{bm}\Kb_{mm}^{-\frac{1}{2}}\big)^{-1} \Kb_{mm}^{-\frac{1}{2}} \\ \approx \Kb_{mm}^{-\frac{1}{2}}\big(\textbf{I} - \frac{N}{b}\Kb_{mm}^{-\frac{1}{2}}\Kb_{mb} \Tilde{D}_{l, b}^{-1} \Kb_{bm}\Kb_{mm}^{-\frac{1}{2}}\big) \Kb_{mm}^{-\frac{1}{2}} = \Kb_{mm}^{-1} - \frac{N}{b}\Kb_{mm}^{-1} \Kb_{mb} \Tilde{D}_{l, b}^{-1} \Kb_{bm}\Kb_{mm}^{-1} \: .
\end{align*}
Similarly, we have $(\bm{\Sigma}^l)^{-1} \approx \Kb_{mm}^{-1} - \Kb_{mm}^{-1}\Kb_{mN}\Tilde{D}_l^{-1}\Kb_{Nm}\Kb_{mm}^{-1}$ . Using this, we proceed as
\begin{align*}
\mathbb{E}[(\bm{\Sigma}^l_b)^{-1}] - (\bm{\Sigma}^l)^{-1} & \approx - \frac{N}{b}\Kb_{mm}^{-1} \mathbb{E} \big[ \Kb_{mb} \Tilde{D}_{l, b}^{-1} \Kb_{bm} \big]\Kb_{mm}^{-1} + \Kb_{mm}^{-1}\Kb_{mN}\Tilde{D}_l^{-1}\Kb_{Nm}\Kb_{mm}^{-1} \\
&= - \frac{N}{b}\Kb_{mm}^{-1} \mathbb{E} \bigg[ \sum_{i \in \Bar{b}} B_i(\textbf{x}_i, \yb_i) \bigg]\Kb_{mm}^{-1} + \Kb_{mm}^{-1}\bigg(\sum_{i=1}^N B_i (\textbf{x}_i, \yb_i)\bigg)\Kb_{mm}^{-1} \\ &= - N\Kb_{mm}^{-1} \mathbb{E} \big[ B_i(\textbf{x}_i, \yb_i) \big]\Kb_{mm}^{-1} + N \Kb_{mm}^{-1}\mathbb{E} \big[ B_i(\textbf{x}_i, \yb_i) \big]\Kb_{mm}^{-1} = 0
\end{align*}
\vspace{1em}
\end{proof}
Note that a similar proof technique unfortunately cannot be used to show that $\bm{\mu}^l_b$ is approximately unbiased for $\bm{\mu}^l_T$, due to the product of two plug-in estimators that both depend on the data in the same batch.
\subsection{Low-rank kernel matrix in \cite{Casale2018GaussianAutoencoders}}
\label{sec:app_low_rank}
In the following, we present an approach from \cite{Casale2018GaussianAutoencoders} to reduce the cubic GP complexity in their GP-VAE model. Note that the exact approach is not given in \cite{Casale2018GaussianAutoencoders} and the derivation shown here is our best attempt at recreating the results.
In \cite{Casale2018GaussianAutoencoders}, datasets composed of $P$ unique objects observed in $Q$ unique views are considered, for instance, images of faces captured from different angles. In total, this amounts to $N = P \cdot Q$ images. The auxiliary data consist of two sets of features $\textbf{X} = \big [\textbf{X}_o \: \textbf{X}_v \big]$, with $\textbf{X}_o \in \mathbb{R}^{N \times p_1}$ containing information about objects (e.g., drawing style of the digit or characteristics of the face) and $\textbf{X}_v \in \mathbb{R}^{N \times p_2}$ containing information about views (e.g., an angle or position in space). Let $\textbf{x}_i = [\textbf{x}_{o,i} \: \textbf{x}_{v,i}]$ denote auxiliary data for the $i$-th image (corresponding to the $i$-th row of the $\textbf{X}$ matrix). Additionally, denote by $\textbf{P} \in \mathbb{R}^{P \times p_1}$ and $\textbf{Q} \in \mathbb{R}^{Q \times p_2}$ matrices consisting of all unique object and view representations, respectively. A product kernel between a linear kernel for object information and a periodic kernel for view information is used:
\begin{align*}
k_{\theta}(\textbf{x}_i, \textbf{x}_j) = \sigma^2 \exp\bigg(-\frac{2\sin ^2 \big(\norm{\textbf{x}_{v,i} - \textbf{x}_{v,j}}\big)}{ l^2}\bigg) \cdot \textbf{x}_{o,i}^T \textbf{x}_{o,j} \:, \; \theta = \{\sigma^2, l\} \: .
\end{align*}
Exploiting the product and (partial) linear structure of the kernel and using properties of the Kronecker product, $\Kb_{NN}$ can be written in a low-rank form as
\begin{align*}
\Kb_{NN}(\textbf{X}, \textbf{X}) = \textbf{P} \textbf{P}^T \otimes \Kb(\textbf{Q}) = \textbf{P} \textbf{P}^T \otimes \textbf{L} \textbf{L}^T =
\big(\textbf{P} \otimes \textbf{L}\big) \big( \textbf{P}^T \otimes \textbf{L}^T\big) = \big(\textbf{P} \otimes \textbf{L}\big) \big( \textbf{P} \otimes \textbf{L}\big)^T =: \textbf{V} \textbf{V}^T,
\end{align*}
where $\Kb(\textbf{Q}) \in \mathbb{R}^{Q \times Q}$ is the kernel matrix of all unique view vectors under the periodic kernel, $\textbf{L}$ is its Cholesky factor, and $\textbf{V} \in \mathbb{R}^{N \times H}, \: H = Q \cdot p_{1},$ is the resulting low-rank factor ($H \ll N$ under the assumption that the number of unique views $Q$ is small). For the resulting low-rank-plus-diagonal matrices of the form $\textbf{V} \textbf{V}^T + \sigma^2 \textbf{I}_N$, the inverse and log-determinant can be computed in $O(NH^2)$ using a matrix inversion lemma \citep{henderson1981deriving} and a matrix determinant lemma \citep{harville1998matrix}, respectively.
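Both the Kronecker factorization and the low-rank matrix inversion can be checked numerically on toy dimensions (a hedged sketch; all sizes and kernel parameters are illustrative, and the diagonal noise term $\sigma^2\textbf{I}$ is our addition, since $\textbf{V}\textbf{V}^T$ alone is singular):

```python
import numpy as np

np.random.seed(0)
P_mat = np.random.randn(5, 3)            # P = 5 objects, p1 = 3 (linear kernel)
Q_feats = np.random.rand(4, 1)           # Q = 4 unique views
# periodic view kernel (sigma^2 = l = 1 for simplicity) plus tiny jitter
K_Q = np.exp(-2 * np.sin(np.abs(Q_feats - Q_feats.T)) ** 2) + 1e-10 * np.eye(4)
L = np.linalg.cholesky(K_Q)

K_NN = np.kron(P_mat @ P_mat.T, K_Q)     # product kernel, N = P * Q = 20
V = np.kron(P_mat, L)                    # low-rank factor, N x H with H = Q * p1
assert np.allclose(K_NN, V @ V.T)        # K_NN = V V^T

# matrix inversion lemma for the noisy kernel matrix sigma^2 I + V V^T:
# only an H x H system has to be solved instead of an N x N one
sigma2 = 0.1
N, H = V.shape
inner = np.linalg.inv(sigma2 * np.eye(H) + V.T @ V)
inv_fast = (np.eye(N) - V @ inner @ V.T) / sigma2
assert np.allclose(inv_fast, np.linalg.inv(sigma2 * np.eye(N) + K_NN))
```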
While the above approach elegantly reduces the GP complexity for a given dataset (for auxiliary data $\textbf{X}$ with a product structure), it is not readily extensible to other types of datasets (e.g., time series). In contrast, our SVGP-VAE makes no assumptions about either the data or the GP kernel used. It is therefore a more general way to scale GP-VAE models.
\subsection{Sparse GP-VAE based on \cite{Titsias2009VariationalProcesses}}
\label{sec:app_one_shot_SVGP-VAE}
Using the sparse GP posterior $q_S$ (Equation \ref{eq:svgp_approx_post}) and ELBO $\cL_T$ (Equation \ref{eq:ELBO_T}) from \cite{Titsias2009VariationalProcesses} gives rise to the following sparse GP-VAE ELBO:
\begin{align*}\label{eq:SVGP-VAE_Titsias_ELBO}
\cL_{PT}\big(\textbf{U}, \psi, \phi, \theta):= \sum_{l=1}^L \cL_{T}^{l}(\textbf{U}, \, \phi, \, \theta)
+\sum_{i=1}^N\mathbb{E}_{q_S} \bigg[ \log p_{\psi}(\yb_i | \zb_i) - %
\log \tilde{q}_\phi(\zb_i | \yb_i)\bigg] \: .
\end{align*}
In Section \ref{sec:latent_sparse_GP}, we have outlined how to obtain the above sparse ELBO from the GP-VAE ELBO proposed in \cite{Pearce2019ThePixels}. Alternatively, $\cL_{PT}$ can be derived in the standard way by directly considering the KL divergence between the sparse GP posterior and the (intractable) true posterior for the latent variables $\text{KL}\big(q_S(\Zb|\cdot)||p_{\psi, \theta}(\Zb|\textbf{Y}, \textbf{X})\big) $.
Following \cite{Titsias2009VariationalProcesses}, we consider the joint distribution of observed and \emph{augmented} latent variables $p_{\psi, \theta}(\Zb, \textbf{F}_m, \textbf{Y} | \textbf{X})$ where
$\textbf{F}_m := \big[ \textbf{f}^1, \dots, \textbf{f}^L \big], \; \textbf{f}^l := f^l(\textbf{U}) \in \mathbb{R}^{m}$.
The sparse GP posterior decomposes as $q_S(\Zb, \textbf{F}_m | \cdot) = p_{\theta}(\Zb | \textbf{F}_m) p_S(\textbf{F}_m)$,
where $p_S(\textbf{F}_m) := \prod_{l=1}^L \mathcal{N}(\textbf{f}^l_m | \bm{\mu}^l, \textbf{A}^l)$ is a free variational distribution and $p_{\theta}(\Zb | \textbf{F}_m)$ is a (standard) conditional GP prior. The problem of minimizing the KL divergence is then equivalently posed as a maximization of a lower bound of the model evidence as follows,
where in the first steps we introduce $\Tilde{q}_{\phi}(\Zb | \textbf{Y})$
and $q_S(\Zb, \textbf{F}_m | \cdot)$ and apply Jensen's inequality:
\begin{align*}
\log p(\textbf{Y} | \textbf{X})
& = \log \int p_{\psi, \theta}(\Zb, \textbf{F}_m, \textbf{Y}| \textbf{X})
\frac{q_S(\Zb, \textbf{F}_m | \cdot)}{q_S(\Zb, \textbf{F}_m | \cdot)}
\frac{\Tilde{q}_{\phi}(\Zb | \textbf{Y})}{\Tilde{q}_{\phi}(\Zb | \textbf{Y})}
d\Zb d\textbf{F}_m
\\
\\
&\ge \int q_S(\Zb, \textbf{F}_m | \cdot) \log
\frac{p_{\psi, \theta}(\Zb, \textbf{F}_m, \textbf{Y}| \textbf{X})}{q_S(\Zb, \textbf{F}_m | \cdot)}
\frac{\Tilde{q}_{\phi}(\Zb | \textbf{Y})}{\Tilde{q}_{\phi}(\Zb | \textbf{Y})}
d\Zb d\textbf{F}_m
\\ \\
&=
\int q_S(\Zb, \textbf{F}_m | \cdot) \log \frac{\Tilde{q}_{\phi}(\Zb | \textbf{Y})p_{ \psi}(\textbf{Y}| \Zb) p_{\theta}(\Zb | \textbf{F}_m )p_{\theta}(\textbf{F}_m | \textbf{X})}{\Tilde{q}_{\phi}(\Zb | \textbf{Y})p_{\theta}(\Zb | \textbf{F}_m) p_S(\textbf{F}_m)} d\Zb d\textbf{F}_m
\\ \\
&=\sum_{l=1}^L \int q_S(\zb^l, \textbf{f}_m^l | \cdot) \log \frac{\Tilde{q}_{\phi}(\zb^l | \textbf{Y}) p_{\theta}(\textbf{f}_m^l | \textbf{X})}{ p_S(\textbf{f}_m^l)} d\zb^l d\textbf{f}_m^l
+
\sum_{i=1}^N \int q_S(\zb_i | \cdot)\bigg( \log p_{ \psi}(\yb_i| \zb_i) - \log \Tilde{q}_{\phi}(\zb_i | \yb_i) \bigg)d\zb_i \\ \\[5pt]
&= \sum_{l=1}^L \mathcal{L}_{T}(\textbf{U}, \phi, \theta, \bm{\mu}^l, \textbf{A}^l) + \sum_{i=1}^N \mathbb{E}_{q_S}\big[ \log p_{ \psi}(\yb_i| \zb_i) - \log \Tilde{q}_{\phi}(\zb_i | \yb_i) \big]
\end{align*}
Recall the symmetry of the Gaussian distribution $\tilde{q}_\phi(\zb_i^l|\yb_i)=\mathcal{N}(\zb^l_i|\bm{\mu}^l(\yb_i), \sigma^l(\yb_i)) = \mathcal{N}(\bm{\mu}^l(\yb_i)|\zb^l_i, \sigma^l(\yb_i))$. Hence, %
the first term of the penultimate expression is a sum over sparse Gaussian processes, one for each latent channel, and each term is precisely Equation 8 of \citet{Titsias2009VariationalProcesses} for sparse Gaussian process regression.
Therefore we write $\mathcal{L}_T^l$ and let $\bm{\mu}^l = \bm{\mu}_T^l$ and $\textbf{A}^l = \textbf{A}^l_T$. For further derivation steps see \cite{Titsias2009VariationalProcesses}.
\newpage
\section{Additional experiments}
\label{sec:app_experiments_chapter}
\subsection{PCA initialization of GP-LVM vectors and inducing points}
\label{app:PCA_init}
In this section, we describe how Principal Component Analysis (PCA) is used to initialize the GP-LVM digit representations as well as the inducing points in the rotated MNIST experiment. Note that both the GP-VAE \citep{Casale2018GaussianAutoencoders} and the SVGP-VAE depend on GP-LVM vectors, with the SVGP-VAE additionally relying on inducing points.
To obtain a continuous representation for each digit instance, we start with the data matrix $\textbf{X} \in \mathbb{R}^{P \times K}$ consisting of the unrotated MNIST images. PCA is then performed on $\textbf{X}$, yielding a matrix $\bm{D} \in \mathbb{R}^{P \times M}$ whose rows $\bm{d}_i$ are used as initial values for the GP-LVM vectors. $M$ denotes the number of principal components kept.
For initialization of the inducing points, we sample $n$ GP-LVM vectors from the empirical distribution based on the PCA matrix $\bm{D}$ for each of the $Q$ angles. This results in a matrix $\textbf{U}_{init} \in \mathbb{R}^{m \times (1 + M)}$ with $m = n \cdot Q$ representing the number of inducing points. The exact procedure is given in Algorithm \ref{alg:PCA_init}. Results from the ablation study on the PCA initialization described here are presented in Table \ref{table:PCA_init_ablation}.
\begin{algorithm}
\caption{Initialization of inducing points in the SVGP-VAE\label{alg:PCA_init} (rotated MNIST experiment)}
\SetAlgoLined
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{PCA matrix $\bm{D}$, number of inducing points per angle $n$, set of angles $ \{\frac{2\pi k}{Q} \: | \: k=1,...,Q\}$}
\BlankLine
$\textbf{U}_{init} = [\:] $
\# sample $m=n \cdot Q$ points from the empirical distribution of each principal component
\For{$i=1,...,M$}{
$ \textbf{U}_{init} = \big[\textbf{U}_{init}, \; \emph{sample}\big(\bm{D}[: \: , i], \: nr\_samples=n\big)\big]$
}
\# add column with angle information
$\bm{a} = \big[\underbrace{2 \pi / Q, ..., 2 \pi / Q}_{n \times}, \: ... \: , \underbrace{2 \pi , ..., 2 \pi}_{n \times} \big]^T \in \mathbb{R}^{m} $
$ \textbf{U}_{init} = \big[\bm{a}, \; \textbf{U}_{init}\big]$
\Return{$\textbf{U}_{init}$}
\end{algorithm}
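A minimal numpy sketch of this initialization procedure (the SVD-based PCA and all variable names are our own; cf. Algorithm \ref{alg:PCA_init}):

```python
import numpy as np

def init_inducing_points(X, M, n, Q, seed=None):
    """PCA-based initialization of the m = n * Q inducing points.

    X: (P, K) matrix of unrotated MNIST images, M: nr. of principal
    components, n: inducing points per angle, Q: nr. of angles."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    # rows of D are the PCA digit representations (GP-LVM initial values)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    D = Xc @ Vt[:M].T                                    # (P, M)
    m = n * Q
    # sample each principal component from its empirical distribution
    U = np.column_stack([rng.choice(D[:, i], size=m) for i in range(M)])
    # prepend the angle column: n copies of each angle 2*pi*k/Q, k = 1..Q
    angles = np.repeat(2 * np.pi * np.arange(1, Q + 1) / Q, n)
    return np.column_stack([angles, U])                  # (m, 1 + M)
```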
\begin{table}[H]
\setlength{\tabcolsep}{10pt}
\centering
\begin{tabular}{l l l}
\toprule
& \textbf{PCA init} & \textbf{random init} \\
\midrule
\textbf{GP-VAE} \cite{Casale2018GaussianAutoencoders} & $0.0370 \pm 0.0012$ & $0.0374 \pm 0.0009$ \\[0.08cm]
\textbf{SVGP-VAE} & $0.0251 \pm 0.0005$ & $0.0272 \pm 0.0006$ \\[0.08cm]
\bottomrule
\end{tabular}
\caption[Rotated MNIST - PCA initialization of GP-LVM vectors and inducing points]{A comparison of different initialization regimes for GP-LVM vectors and inducing points in the rotated MNIST experiment. For random initialization, a Gaussian distribution with mean $0$ and standard deviation $1.5$ was used.}
\label{table:PCA_init_ablation}
\end{table}
\subsection{SVGP-VAE latent space visualization}
In Figure \ref{fig:SVGP-VAE_latents}, we depict two-dimensional t-SNE \citep{vanDerMaaten2008} embeddings of SVGP-VAE latent vectors ($L=16$). Visualized here are latent vectors for training data of the \emph{five-digit} version of the rotated MNIST dataset ($N=20250$). As expected, the model clusters images based on the digit identity. More interestingly, SVGP-VAE also seems to order images within each digit cluster with respect to angles. For example, looking at the cluster of the digit 3 (the blue cluster in the middle of the lower plot), we observe that embeddings of rotated images are ordered continuously from $0$ to $2\pi$ as we move in clockwise direction around the circular shape of the cluster.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/SVGPVAE_tsne.PNG}
\caption[Rotated MNIST - SVGP-VAE latent space visualization]{t-SNE embeddings of SVGP-VAE latent vectors on the training data for rotated MNIST. On the upper scatter plot, each image embedding is colored with respect to its associated angle. On the lower scatter plot, each image embedding is colored with respect to its associated digit. The t-SNE perplexity parameter was set to 50.}
\label{fig:SVGP-VAE_latents}
\end{figure}
\subsection{Rotated MNIST: generated images}
\label{sec:app_rot_MNIST_generations}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{figures/rot_MNIST_recons_cVAE_2.pdf}
\caption[Rotated MNIST - generated images 3]{Generated test images in the rotated MNIST experiment for all considered models.}
\label{fig:rot_MNIST_recons_main_3}
\end{figure}
\subsection{Bias analysis of MC estimators in SVGP-VAE}
\label{sec:app_bias}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/SVGPVAE_b_m.pdf}
\caption[Rotated MNIST - SVGP-VAE results for varying batch size and number of inducing point]{SVGP-VAE results on the rotated MNIST dataset (digit 3) for varying batch size (left) and number of inducing points (right). For the batch size experiment, $m$ was set to 32. For the inducing points experiment, $b$ was set to 256. For each configuration, a mean MSE together with a standard deviation based on 5 runs is shown.}
\label{fig:SVGP-VAE_b_M}
\end{figure}
Here we look at some additional experiments that were conducted to better understand the SVGP-VAE model. Depicted in Figure \ref{fig:SVGP-VAE_b_M} are the results when varying the batch size and the number of inducing points. We first notice that the SVGP-VAE performance improves as the batch size is increased. As pointed out in Section~\ref{sec:best_ELBOs}, this is a consequence of the Monte Carlo estimators from (\ref{eq:MC_estimators}) used in $q_S$, whose quality depends on the batch size. While the dependence on the batch size can be seen as one limitation of the model, it is encouraging that the model achieves good performance already for a reasonably small batch size (e.g., $b=128$). Moreover, the batch size parameter in the SVGP-VAE offers a simple and intuitive way to navigate the trade-off between performance and computational demands. If performance is the main concern, a larger batch size should be used. If, on the other hand, computational resources are limited, a smaller batch size can be utilized, resulting in a faster and less memory-demanding model.
Turning to the plot with the varying number of inducing points, we observe that the model achieves solid performance with as few as 16 inducing points on the rotated MNIST data. However, increasing the number of inducing points $m$ beyond a certain point starts to hurt performance. This can be partly attributed to numerical issues that arise during training --- the larger $m$, the more numerically unstable the inducing point kernel matrix $\Kb_{mm}$ becomes. Moreover, since the number of inducing points equals the dimension of the Monte Carlo estimators in (\ref{eq:MC_estimators}), increasing $m$ enlarges the space in which estimation takes place, potentially increasing the difficulty of the estimation problem.
To better understand the effect of the number of inducing points $m$ on the quality of estimation in our proposed MC estimators, we investigate here the trajectory of the bias throughout training. To this end, for each epoch $i$ an estimator $\bm{\mu}^{l}_{j,i}$ is calculated for each latent channel $l$ and for each batch $j$. Additionally, the true value $\bm{\mu}_{T,i}^l$ is obtained (based on the entire dataset) for every epoch and every latent channel using model weights from the end of the epoch. The bias for the $l$-th latent channel and $i$-th epoch is then computed as
\begin{align*}
\bm{b}_i^l := \frac{1}{B}\sum_{j=1}^B \bm{\mu}^{l}_{j,i} - \bm{\mu}_{T,i}^l \: ,
\end{align*}
where $B := \lceil{\frac{N}{b}}\rceil$ denotes the number of batches in a single epoch. Finally, for each epoch $i$, the $L1$ norms of the per-channel bias vectors are averaged across latent channels, $b_i = \frac{1}{L}\sum_{l=1}^L \norm{\bm{b}_i^l}_1$.
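The per-epoch bias computation above can be sketched as follows (array shapes are hypothetical):

```python
import numpy as np

def epoch_bias(mu_batch, mu_true):
    """mu_batch: (L, B, m) per-batch estimators mu^l_{j,i} for one epoch,
    mu_true: (L, m) exact values mu^l_{T,i}. Returns the L1 norms of the
    per-channel bias vectors b_i^l, averaged over the L latent channels."""
    b_l = mu_batch.mean(axis=1) - mu_true   # (L, m): bias vector per channel
    return np.abs(b_l).sum(axis=1).mean()   # mean_l || b_i^l ||_1
```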
Moving averages of the resulting bias trajectories are depicted in Figure \ref{fig:rot_MNIST_bias_SVGP-VAE_vary_m}. For comparison purposes, each trajectory is normalized by the number of inducing points used. Notice how for smaller $m$, the bias trajectories display the expected behavior and converge (or stay close) to 0. Conversely, for larger numbers of inducing points ($m=64$ and $m=96$), the bias is larger and does not decline as the training progresses. This suggests that the proposed estimation might get worse in larger dimensions.
However, despite seemingly deteriorating approximation in higher dimensions, it is also evident that the approximation does not completely break down --- the model still achieves a solid performance even for a larger number of inducing points. Nevertheless, we note that getting a better theoretical grasp of the quality of estimation or reparameterizing the SVGP-VAE ELBO in a way such that these estimators are no longer needed could be a fruitful area of future work.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/SVGPVAE_bias_mean_vector_b256_varying_m.pdf}
\caption{Bias trajectories in the SVGP-VAE model for a varying number of inducing points. For all runs, the batch size was set to 256.}
\label{fig:rot_MNIST_bias_SVGP-VAE_vary_m}
\end{figure}
\subsection{Deep sparse GP from \cite{Hensman2013GaussianData} for conditional generation}
\label{sec:app_Hensman_baseline}
In Section \ref{sec:latent_sparse_GP}, we demonstrate that the sparse GP approach from \cite{Hensman2013GaussianData} cannot be used in the GP-VAE framework as it does not lend itself to amortization. In Section \ref{sec:app_vanishing_SVGP-VAE_Hensman}, we then provide a detailed derivation of this phenomenon. Here, we leave out the amortization completely and directly consider the sparse GP from \citet{Hensman2013GaussianData}. To this end, we modify the ELBO in eq. (4) in \cite{Hensman2013GaussianData}. To model our high-dimensional data $\yb_i \in \mathbb{R}^K$, we utilize a deep likelihood parameterized by a neural network $\psi: \mathbb{R}^L \xrightarrow[]{} \mathbb{R}^K$ (instead of a simple Gaussian likelihood). Moreover, we replicate the GP regression $L$ times (across all latent channels), which yields the following objective function
\begin{align*}
\cL(\textbf{U}, \psi, \theta, \bm{\mu}^{1:L}, \textbf{A}^{1:L}, \sigma) = &\sum_{i=1}^N \bigg\{ \log\mathcal{N}\big(\yb_i \:|\: \psi( \bm{m}_i), \: \sigma^{2} \textbf{I} \big) - \frac{1}{2 \sigma^{2}} \sum_{l=1}^L\: (\Tilde{k}_{ii} + Tr(\textbf{A}^l\: \Lambda_i)) \bigg\} \: - \\& \sum_{l=1}^L \text{KL}\big(q_S^l(\textbf{f}_m|\cdot) \: || \: p_\theta(\textbf{f}_m|\cdot)\big)
\end{align*}
where $\bm{m}_i := [\bm{k}_i^T \Kb_{mm}^{-1}\bm{\mu}^1, \: ... \:, \bm{k}_i^T \Kb_{mm}^{-1}\bm{\mu}^L ]^T \in \mathbb{R}^L$. Also recall that $q^{l}_S(\textbf{f}_m | \cdot) = \mathcal{N}(\textbf{f}_m | \bm{\mu^l}, \bm{A^l})$, $p_{\theta}(\textbf{f}_m | \cdot) = \mathcal{N}(\textbf{f}_m | \bm{0}, \: \Kb_{mm})$, $\Lambda_i := \Kb_{mm}^{-1}\bm{k}_i \bm{k}_i^T\Kb_{mm}^{-1}$ and $\Tilde{k}_{ii}$ is the $i$-th diagonal element of $\Kb_{NN} - \Kb_{Nm} \Kb_{mm}^{-1}\Kb_{mN}$.
For a test point $\bm{x_{*}}$, we first obtain $\bm{m}_* = [\bm{k}_*^T \Kb_{mm}^{-1}\bm{\mu}^1, \: ... \:, \bm{k}_*^T \Kb_{mm}^{-1}\bm{\mu}^L ]^T, \: \bm{k_*} = [k_{\theta}(\textbf{x}_*, \textbf{u}_1), ..., k_{\theta}(\textbf{x}_*, \textbf{u}_m)]^T \in \mathbb{R}^m$, and then pass it through the network $\psi$ to generate $\yb_*$.
For comparison purposes, the same number of latent channels ($L=16$) and the same architecture for the network $\psi$ as in our SVGP-VAE is used. We train this baseline model for 2000 epochs using the Adam optimizer and a batch size of 256.
The strong performance (see Table \ref{table:rot_MNIST_main}) of this baseline provides interesting new insights into the role of amortization in GP-VAE models. For the task of conditional generation, where a single GP prior is placed over the entire dataset, the amortization is not necessary, and one can modify existing sparse GP approaches \citep{Hensman2013GaussianData} to achieve good results in a computationally efficient way. Note that this is not the case for tasks like learning interpretable low-dimensional embeddings \citep{Pearce2019ThePixels} or time-series imputation \citep{Fortuin2019GP-VAE:Imputation}. For such tasks, the inference network is needed in order to be able to quickly obtain predictions for new test points without rerunning the optimization.
More thorough investigation of this baseline, its interpretation, and its comparison to the existing work on deep Gaussian Processes \citep{damianou2013deep, wilson2016deep} is left for future work.
\section{Scalable SVGP-VAE}
\label{sec:methods}
This work's main contribution is the sparsification of the GP-VAE using the sparse GP approaches mentioned above. To this end, two separate variational approximation problems have to be solved jointly: an outer amortized inference procedure from the high-dimensional space to the latent space, and the inner sparse variational inference scheme on the GP. To motivate our proposed solution, we begin by pointing out the problems that arise when na\"ively combining the two objectives.
\subsection{Problem setting and notation}
In this work, we consider high-dimensional data
$\textbf{Y} = [\yb_1, \dots, \yb_N]^\top \in \mathbb{R}^{N \times K}$. Each data point has a corresponding low-dimensional auxiliary data entry, summarized as
$\textbf{X} = [\textbf{x}_1, \dots, \textbf{x}_N]^\top \in \mathcal{X}^{N}, \mathcal{X}\subseteq \mathbb{R}^D$.
For example, $\yb_i$ could be a video frame and $\textbf{x}_i$ the corresponding time stamp. Our goal is to train a model for (1) generating $\textbf{Y}$ conditioned on $\textbf{X}$ and (2) inferring interpretable
and disentangled low-dimensional representations.
To this end, we adopt a latent GP approach, summarized below. First, we need to model a prior distribution over the collection of latent variables $\Zb = [\zb_1,\dots,\zb_N]^T\in\mathbb{R}^{N\times L}$, each latent variable $\zb_i$ living in an $L$-dimensional latent space. To model their joint distribution, we
assume $L$ independent latent
functions $f^1,\dots,f^L \sim GP(0, \: k_{\theta})$
with kernel parameters
$\theta$ that result in $\Zb$ when being evaluated on $\textbf{X}$. More precisely, $\zb_i = [f^1(\textbf{x}_i),\dots,f^L(\textbf{x}_i)]$.
By construction, the $l^{th}$ latent channel of all latent variables $\zb^{l}_{1:N}\in\mathbb{R}^N$ (the $l^{th}$ column of $\Zb$) has a correlated Gaussian prior
with covariance $\Kb_{NN}=k_\theta(\textbf{X}, \textbf{X})$. %
Setting $\Kb_{NN}=I$ recovers the fully factorized prior commonly used
in standard VAEs.
As in regular VAEs, each $\zb_i\in\mathbb{R}^L$ is then ``decoded'' to parameterize the distribution over observations
$\yb_i = \mu_{\psi}(\zb_i) + \bm{\epsilon}_i$ where $\mu_\psi:\mathbb{R}^L\to\mathbb{R}^K$ is a network with parameters $\psi$ and $\bm{\epsilon}_i \sim \mathcal{N}(\bm{0}, \: \sigma_y^2 \: \textbf{I}_K)$.
Mathematically, the full generative model is given by
\begin{align*}
p_\theta(\Zb|\textbf{X}) &= \prod_{l=1}^L\mathcal{N}(\zb_{1:N}^{l}| 0, \textbf{K}_{NN}), \\
p_\psi(\textbf{Y}|\Zb) &= \prod_{i=1}^Np_\psi(\yb_i|\zb_i) = \prod_{i=1}^N\mathcal{N}(\yb_i| \mu_\psi(\zb_i), \sigma_y^2 \: \textbf{I}_K).
\end{align*}
The joint distribution is $p_{\psi,\theta}(\textbf{Y}, \Zb|\textbf{X}) = p_\psi(\textbf{Y}|\Zb)p_\theta(\Zb|\textbf{X})$.
The true posterior for the latent variables
$p_{\psi,\theta}(\Zb | \textbf{Y}, \textbf{X}) = p_{\psi,\theta}(\textbf{Y}, \Zb|\textbf{X}) / p_{\psi,\theta}(\textbf{Y}|\textbf{X})$
is intractable due to the denominator which requires integrating over $\Zb$. Hence, approximate
inference methods are required to infer the unobserved $\Zb$ given the observed $\textbf{X}$ and $\textbf{Y}$.
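The generative side of this model is easy to simulate directly. Below is a minimal NumPy sketch (not the authors' implementation): it draws $L$ independent GP channels over the auxiliary inputs and decodes them. The RBF kernel, the `tanh`-plus-linear stand-in for the decoder network $\mu_\psi$, and all sizes are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.2, variance=1.0):
    # k_theta(x, x') = variance * exp(-||x - x'||^2 / (2 * lengthscale^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sample_generative_model(X, L, decoder, sigma_y=0.1, jitter=1e-4, seed=0):
    # Draw z^l_{1:N} ~ N(0, K_NN) independently per channel, then decode:
    # y_i = mu_psi(z_i) + eps_i with eps_i ~ N(0, sigma_y^2 I).
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    K_NN = rbf_kernel(X, X) + jitter * np.eye(N)   # jitter for a stable Cholesky
    Z = np.linalg.cholesky(K_NN) @ rng.standard_normal((N, L))  # N x L latents
    mean = decoder(Z)
    return Z, mean + sigma_y * rng.standard_normal(mean.shape)

# Toy setup: time stamps as auxiliary data X; a random linear map with a tanh
# nonlinearity stands in for the decoder network mu_psi (all hypothetical).
X = np.linspace(0.0, 1.0, 30)[:, None]
W = np.random.default_rng(1).standard_normal((2, 5))
Z, Y = sample_generative_model(X, L=2, decoder=lambda Z: np.tanh(Z) @ W)
```

Because nearby time stamps are highly correlated under the kernel, the sampled rows of $\textbf{Y}$ vary smoothly over time, unlike samples from a standard VAE prior.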
\subsection{Amortized variational inference}
Amortization in the typical VAE architecture
uses a second (inference) network from the high-dimensional data $\yb_i$ to
the mean and variance
of a fully factorized Gaussian distribution over $\zb_i\in\mathbb{R}^L$ \citep{zhang2018advances}. We denote it as $\tilde q_\phi(\zb_i|\yb_i) = \mathcal{N}(\zb_i|\mu_\phi(\yb_i), \text{diag}(\sigma^2_\phi(\yb_i)))$
and it has network parameters $\phi$. In
\citet{Casale2018GaussianAutoencoders}, this Gaussian
distribution is used directly to approximate the posterior, $p_{\psi,\theta}(\Zb|\textbf{Y}, \textbf{X})\approx \prod_i\tilde{q}_\phi(\zb_i|\yb_i)$. %
While this approach mirrors classical VAE design, the approximate posterior for
a latent variable $\zb_i$ only depends on $\yb_i$ and ignores $\textbf{x}_i$. This is
in stark contrast to traditional Gaussian processes where
latent function values
$f(x)$ are informed by all $y$ values according to the similarity of
the corresponding $x$ values.
Building on this model, \citet{Pearce2019ThePixels} instead
proposed to use the inference network $\tilde q_\phi(\zb_i|\yb_i)$ to
replace only the intractable likelihood $p_\psi(\yb_i|\zb_i)$ in the posterior. By combining $\tilde q_{\phi}$ with tractable terms, the approximate posterior could be explicitly normalized as
\begin{align}\label{eq:GP_reg_view}
q(\Zb | \textbf{Y}, \textbf{X}, \phi, \theta) := \prod_{l=1}^L\frac{\prod_{i=1}^N\Tilde{q}_{\phi}(\zb^{l}_{i} | \yb_i) \, p_{\theta}(\zb^{l}_{1:N} | \textbf{X})}{Z^{l}_{\phi,\theta}(\textbf{Y}, \textbf{X})},
\end{align}
where the normalizing constant $Z^{l}_{\phi,\theta}(\textbf{Y}, \textbf{X})$ can be computed
analytically. Noting the symmetry of the Gaussian distribution,
$\mathcal{N}(z| \mu, \sigma) = \mathcal{N}(\mu |z, \sigma)$,
the approximate posterior for channel $l$ is mathematically equivalent to the (exact) GP posterior in the
traditional GP regression with inputs $\textbf{X}$
and outputs $\tilde{\yb}_l:=\mu^l_\phi(\textbf{Y})$ with heteroscedastic
noise ${\tilde{\bm{\sigma}}}_l := \sigma^l_\phi(\textbf{Y})$.
We therefore refer to each $\{\textbf{X}, \tilde{\yb}_l, {\tilde{\bm{\sigma}}}_l\}$ %
as the \emph{latent dataset} for the $l^{th}$ channel. Each normalizing constant of Equation~\ref{eq:GP_reg_view}
is also the GP marginal likelihood of the $l^{th}$ latent dataset.
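Since each channel's approximate posterior coincides with an exact GP regression posterior on its latent dataset, this view can be sketched directly. The following NumPy illustration is a hedged sketch, not the authors' code; the latent dataset (a noisy sine with constant $\tilde{\sigma}$) is invented for the example, and `log_Z` plays the role of the per-channel normalizing constant.

```python
import numpy as np

def latent_gp_posterior(K_NN, y_tilde, sigma_tilde):
    # Exact GP regression on one latent dataset {X, y~, sigma~}: returns the
    # posterior mean/covariance at the training inputs and log Z, the GP
    # marginal likelihood. The N x N factorization is the O(N^3) bottleneck.
    N = K_NN.shape[0]
    A = K_NN + np.diag(sigma_tilde ** 2)
    L = np.linalg.cholesky(A)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tilde))  # (K + S)^-1 y~
    mean = K_NN @ alpha
    cov = K_NN - K_NN @ np.linalg.solve(A, K_NN)
    log_Z = (-0.5 * y_tilde @ alpha
             - np.log(np.diag(L)).sum()          # = -0.5 * log det(K + S)
             - 0.5 * N * np.log(2.0 * np.pi))
    return mean, cov, log_Z

# Toy latent dataset (illustrative): a noisy sine with constant sigma~.
x = np.linspace(0.0, 4.0, 40)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.5 ** 2)
y_t, s_t = np.sin(x), np.full(40, 0.3)
mean, cov, log_Z = latent_gp_posterior(K, y_t, s_t)
```

Here `mean` and `cov` correspond to one channel of the posterior in Equation~\ref{eq:GP_reg_view}, and summing `log_Z` over channels gives the second term of the ELBO.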
The parameters $\{\psi, \phi, \theta\}$ are learnt by maximizing the
evidence lower bound (ELBO) in the \textbf{P}earce model,
\begin{align}\label{eq:PearceELBO}
\mathcal{L}_P(\psi,\phi, \theta) = \; \sum_{i=1}^N &\mathbb{E}_{q(\zb_i|\cdot)} \bigg[ \log p_{\psi}(\yb_i | \zb_i) - \log \tilde{q}_\phi(\zb_i | \yb_i)\bigg] \nonumber \\ &+ \sum_{l=1}^L \log Z_{\phi, \theta}^{l}(\textbf{Y}, \textbf{X}).
\end{align}
The first term is the difference between the true likelihood
and inference network approximate likelihood, while the second term is
the sum over GP marginal likelihoods of each latent dataset.
One subtle, yet important, characteristic of the variational approximation from \cite{Pearce2019ThePixels} is that it gives rise to the ELBO $ \mathcal{L}_P(\cdot)$ that contains the GP posterior. Note that this is in contrast to \cite{Casale2018GaussianAutoencoders} and \cite{Fortuin2019GP-VAE:Imputation}, where the GP prior is part of the ELBO. As we will show in Section \ref{sec:latent_sparse_GP}, the ELBO that contains the GP posterior naturally lends itself to ``sparsification'' through the use of sparse GP posterior approximations.
The computational challenges of $\mathcal{L}_P(\cdot)$ are twofold.
Firstly, for the latent GP regression, an inverse
and a log-determinant of the kernel matrix
$\Kb_{NN} \in \mathbb{R}^{N \times N}$ must be computed,
resulting in $\mathcal{O}(N^3)$ time complexity. Secondly, the
ELBO does not decompose as a sum over data points, so the
entire dataset $\{\textbf{X},\textbf{Y}\}$ is needed for one evaluation of $\mathcal{L}_P(\cdot)$.
Given the latent dataset, one might at first glance simply apply
sparse GP regression techniques in place of exact GP regression.
We next look at two widely used methods (\cite{Titsias2009VariationalProcesses} and \cite{Hensman2013GaussianData}) and highlight their drawbacks
for this task. We then propose a new hybrid approach solving these
issues.
\subsection{Latent Sparse GP Regression}
\label{sec:latent_sparse_GP}
To simplify the notation, we focus on a single channel and
suppress $l$, writing $\tilde{\yb}$, $\bm{\tilde{\sigma}}$,
$\log Z_{\phi, \theta}(\cdot)$, and $f$.
Given an (amortized latent) regression dataset $\textbf{X},\, \tilde{\yb},\, \bm{\tilde{\sigma}}$,
sparse Gaussian process methods assume that there exists a set of $m\ll N$
inducing points with inputs $\textbf{U}=[\textbf{u}_1,\dots,\textbf{u}_m]\in \mathcal{X}^m$ and outputs
$\textbf{f}_m:=f(\textbf{U})\sim\mathcal{N}(f(\textbf{U})|\: \bm{\mu}, \textbf{A})$ that summarize the regression dataset.
$\textbf{U},\, \bm{\mu},\, \textbf{A}$ are parameters to be learnt. Given a (test) set
of $r$ new inputs $\textbf{X}_r$, the sparse approximate (predictive) distribution
over outputs $\textbf{f}_r = f(\textbf{X}_r)$ is
\begin{multline}%
q_S(\textbf{f}_r|\textbf{X}_r, \textbf{U}, \bm{\mu}, \textbf{A}, \theta) =\\
\mathcal{N}\big( \textbf{f}_r |\Kb_{rm} \Kb_{mm}^{-1} \bm{\mu}, \: \Kb_{rr} - \Kb_{rm}\Kb_{mm}^{-1}\Kb_{mr} \\
\quad\quad\quad\quad\quad\quad\quad \quad \,\, +\Kb_{rm}\Kb_{mm}^{-1}\textbf{A} \Kb_{mm}^{-1} \Kb_{mr}\big), \label{eq:svgp_approx_post}
\end{multline}
where kernel matrices are $\Kb_{mm}=k_\theta(\textbf{U},\textbf{U})$, $\Kb_{rr}=k_\theta(\textbf{X}_r,\textbf{X}_r)$, and $\Kb_{mr}=\Kb_{rm}^{\top}=k_\theta(\textbf{U}, \textbf{X}_r)$.
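The predictive distribution in Equation~\ref{eq:svgp_approx_post} is a few lines of linear algebra. Below is a hedged NumPy sketch with an illustrative RBF kernel and toy inputs; a useful sanity check, exploited in the example, is that setting $\bm{\mu}=\bm{0}$ and $\textbf{A}=\Kb_{mm}$ collapses $q_S$ back to the GP prior.

```python
import numpy as np

def sparse_predictive(K_rm, K_rr, K_mm, mu, A, jitter=1e-8):
    # q_S(f_r): mean = K_rm K_mm^{-1} mu,
    # cov = K_rr - K_rm K_mm^{-1} K_mr + K_rm K_mm^{-1} A K_mm^{-1} K_mr.
    Kmm = K_mm + jitter * np.eye(K_mm.shape[0])
    P = np.linalg.solve(Kmm, K_rm.T)          # K_mm^{-1} K_mr, shape m x r
    mean = P.T @ mu
    cov = K_rr - K_rm @ P + P.T @ A @ P
    return mean, cov

# Illustrative 1-D setup with an RBF kernel and m = 5 inducing inputs.
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.5 ** 2)
x_u = np.linspace(0.0, 4.0, 5)                # inducing inputs U
x_r = np.linspace(0.5, 3.5, 7)                # "test" inputs X_r
mean, cov = sparse_predictive(k(x_r, x_u), k(x_r, x_r), k(x_u, x_u),
                              np.zeros(5), k(x_u, x_u))
```

With the prior settings above, `cov` recovers $\Kb_{rr}$ (up to jitter), confirming the three-term covariance is assembled correctly.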
By introducing inducing points, the
cost of learning the model is reduced from $\mathcal{O}(N^3)$
in $\log Z_{\phi, \theta}(\cdot)$ to $\mathcal{O}(Nm^2)$ in a
modified objective.
We next describe two of the most
popular ways to learn the variational parameters $\textbf{U},\, \bm{\mu},\, \textbf{A}$
that are based on a second \emph{inner} variational approximation
for the Gaussian process regression that lower bounds $\log Z_{\phi, \theta}(\cdot)$.
For this second inner variational inference, we aim to learn
a cheap $q_S(\cdot)$ (Equation~\ref{eq:svgp_approx_post}) that closely
approximates the expensive $q(\cdot)$ (Equation~\ref{eq:GP_reg_view}).
\textbf{\citet{Titsias2009VariationalProcesses}}. %
Let $\zb=\zb_{1:N}^l$, then
the parameters $\textbf{U},\, \bm{\mu},\, \textbf{A}$ may be learnt by
minimizing $\text{KL}\big(q_S(\zb\,|\cdot) \: || \: q(\zb \,|\cdot) \big)$,
or equivalently by maximizing a lower bound to the
marginal likelihood of the latent dataset $\log Z_{\phi, \theta}^{l}(\cdot)$.
Let $\bm{\Sigma} := \Kb_{mm} +\Kb_{mN}\text{diag}({\tilde{\bm{\sigma}}}^{-2})\Kb_{Nm} \:$,
then the optimal $\bm{\mu}$ and $\textbf{A}$ may be found analytically:
\begin{align} \label{eq:mu_star}
\bm{\mu}_T &= \Kb_{mm} \bm{\Sigma}^{-1} \Kb_{mN} \text{diag}({\tilde{\bm{\sigma}}}^{-2}) \tilde{\yb}, \\% \,\,
\textbf{A}_T &= \Kb_{mm}\bm{\Sigma}^{-1}\Kb_{mm}, \label{eq:A_star} %
\end{align}
where $\Kb_{mN}=k_\theta(\textbf{U}, \textbf{X})$. Plugging $\bm{\mu}_T$
and $\textbf{A}_T$ back into the appropriate evidence lower
bound yields the final lower bound for learning $\textbf{U}$ in the \textbf{T}itsias model
\begin{align}\label{eq:ELBO_T}
\cL_{T}(\textbf{U}, \phi, \theta) &= \\
\log \mathcal{N}\big(\tilde{\yb} | &\textbf{0}, \: \Kb_{Nm} \Kb_{mm}^{-1}\Kb_{mN} + \text{diag}({\tilde{\bm{\sigma}}}^2)\big) \nonumber\\
\quad -& \frac{1}{2}Tr\big(\text{diag}({\tilde{\bm{\sigma}}}^{-2})\:(\Kb_{NN} - \Kb_{Nm} \Kb_{mm}^{-1}\Kb_{mN})\big).\nonumber
\end{align}
Note that the bound is a function of $\tilde{\yb}$ and ${\tilde{\bm{\sigma}}}$,
which depend on the inference network with parameters $\phi$,
and of the kernel matrices, which depend on $\theta$; hence we make
these arguments explicit.
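The closed-form optimum of Equations~\ref{eq:mu_star} and \ref{eq:A_star} can be written compactly. The sketch below is illustrative NumPy, not the authors' code; as a sanity check it uses the degenerate choice $\textbf{U}=\textbf{X}$ (inducing inputs equal to all inputs), for which the sparse posterior mean provably collapses to the exact GP posterior mean.

```python
import numpy as np

def titsias_optimal_params(K_mm, K_mN, y_tilde, sigma_tilde):
    # Closed-form optimum of the inner variational problem:
    # Sigma = K_mm + K_mN diag(sigma~^-2) K_Nm,
    # mu_T  = K_mm Sigma^-1 K_mN diag(sigma~^-2) y~,
    # A_T   = K_mm Sigma^-1 K_mm.
    d_inv = 1.0 / sigma_tilde ** 2
    Sigma = K_mm + (K_mN * d_inv) @ K_mN.T
    mu_T = K_mm @ np.linalg.solve(Sigma, K_mN @ (d_inv * y_tilde))
    A_T = K_mm @ np.linalg.solve(Sigma, K_mm)
    return mu_T, A_T

# Degenerate check setup: U = X, so K_mm = K_mN = K (jittered for stability).
x = np.linspace(0.0, 3.0, 15)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.25 ** 2) + 1e-8 * np.eye(15)
y_t, s_t = np.cos(x), np.full(15, 0.5)
mu_T, A_T = titsias_optimal_params(K, K, y_t, s_t)
```

In this degenerate case the predictive mean at $\textbf{X}$ equals $\bm{\mu}_T$ itself, which matches $\Kb(\Kb + \text{diag}(\tilde{\bm{\sigma}}^2))^{-1}\tilde{\yb}$, the exact heteroscedastic GP regression mean.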
In the full GP-VAE ELBO $\mathcal{L}_P(\cdot)$, substituting $q_S(\cdot)$, $\mathcal{L}_T(\cdot)$
in place of $q(\cdot)$, $\log Z_{\phi,\theta}(\cdot)$
yields a sparse GP-VAE ELBO that can be readily used to reduce
computational complexity of existing GP-VAE methods for a generic
dataset and an arbitrary GP kernel function.\footnote{As an aside, this sparse GP-VAE ELBO may also be derived in the standard way
using $\text{KL}\big(q_S(\Zb|\cdot)||p_{\psi, \theta}(\Zb|\textbf{Y}, \textbf{X})\big)$, see Appendix B.4.} %
However, observe from
Equations~\ref{eq:mu_star}, \ref{eq:A_star} and \ref{eq:ELBO_T} that
the entire dataset $\{\textbf{X}, \textbf{Y}\}$ enters through $\Kb_{NN}$
and $\tilde{\yb}$, ${\tilde{\bm{\sigma}}}$ respectively. Therefore, this ELBO is not
amenable to mini-batching and has large memory requirements.
\textbf{\citet{Hensman2013GaussianData}}.
In order to make variational sparse GP regression amenable to mini-batching, \citet{Hensman2013GaussianData}
proposed an ELBO that lower bounds $\cL_{T}$ and, more importantly, decomposes as a sum of terms over data points.
Adopting our notation with explicit parameters, the
\textbf{H}ensman ELBO is given by
\begin{multline}
\label{eq:ELBO_H}
\cL_{H}(\textbf{U}, \bm{\mu}, \textbf{A}, \phi, \theta) = - \text{KL}\big(q_S(\textbf{f}_m|\cdot) \: || \: p_\theta(\textbf{f}_m|\cdot)\big) \\
+\sum_{i=1}^N \bigg\{ \log\mathcal{N}\big(\tilde{y}_i | \bm{k}_i \Kb_{mm}^{-1}\bm{\mu}, \: \tilde{\sigma}_i^{2} \big)
\: - \\ \frac{1}{2 \tilde{\sigma}_i^{2}} \: (\Tilde{k}_{ii} + Tr(\textbf{A}\: \Lambda_i)) \bigg\}.
\end{multline}
Above, $\bm{k}_i$ is the $i$-th row of $\Kb_{Nm}$,
$\Lambda_i = \Kb_{mm}^{-1}\bm{k}_i \bm{k}_i^\top\Kb_{mm}^{-1}$
and $\Tilde{k}_{ii}$ is the $i$-th diagonal element of the
matrix $\Kb_{NN} - \Kb_{Nm} \Kb_{mm}^{-1}\Kb_{mN}$. Due to
the decomposition over data points, the gradients $\nabla\cL_{H}(\cdot)$ in
stochastic or mini-batch gradient descent are unbiased and only the data in
the current batch are needed in memory for the gradient updates. Consequently,
with batch size $b$ the GP complexity is further reduced to $\mathcal{O}(bm^2 + m^3)$. Note that for $\bm{\mu} = \bm{\mu}_T, \textbf{A}= \textbf{A}_T$ and $b =N$, $\cL_H(\cdot)$ recovers $\cL_T(\cdot)$ \citep{Hensman2013GaussianData}.
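Because Equation~\ref{eq:ELBO_H} is a per-point sum plus a single KL term, it can be evaluated on any subset of the data. The following is a hedged NumPy sketch of the bound (toy kernel and data, single channel); in the example the variational distribution is set to the prior, so the KL term vanishes and the bound reduces to the per-point likelihood terms.

```python
import numpy as np

def kl_gauss(mu, A, K_mm):
    # KL( N(mu, A) || N(0, K_mm) ) between the variational and prior
    # distributions over the inducing outputs f_m.
    m = K_mm.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(K_mm, A))
                  + mu @ np.linalg.solve(K_mm, mu) - m
                  + np.linalg.slogdet(K_mm)[1] - np.linalg.slogdet(A)[1])

def hensman_elbo(K_mm, K_Nm, k_diag, y_tilde, sigma_tilde, mu, A, jitter=1e-8):
    # The bound decomposes as a sum over data points plus one KL term, which
    # is what makes unbiased mini-batch gradient estimates possible.
    Kmm = K_mm + jitter * np.eye(K_mm.shape[0])
    P = np.linalg.solve(Kmm, K_Nm.T)                   # K_mm^{-1} K_mN
    f_mean = P.T @ mu                                  # k_i K_mm^{-1} mu
    k_tilde = k_diag - np.einsum('ij,ji->i', K_Nm, P)  # diag(K_NN - Q_NN)
    tr_term = np.einsum('ji,jk,ki->i', P, A, P)        # Tr(A Lambda_i)
    s2 = sigma_tilde ** 2
    lik = (-0.5 * np.log(2.0 * np.pi * s2)
           - 0.5 * (y_tilde - f_mean) ** 2 / s2
           - 0.5 * (k_tilde + tr_term) / s2)
    return lik.sum() - kl_gauss(mu, A, Kmm)

# Illustrative setup: RBF kernel, 25 data points, m = 6 inducing inputs,
# variational distribution set to the prior (mu = 0, A = K_mm).
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.5 ** 2)
x_u, x = np.linspace(0.0, 3.0, 6), np.linspace(0.0, 3.0, 25)
y_t, s_t = np.sin(x), np.full(25, 0.5)
elbo = hensman_elbo(k(x_u, x_u), k(x, x_u), np.ones(25), y_t, s_t,
                    np.zeros(6), k(x_u, x_u))
```

Evaluating `lik` on a mini-batch and rescaling by $N/b$ gives exactly the unbiased stochastic estimate described above.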
While this method may seem to meet our requirements, it has fatal drawbacks.
Firstly, it is not amortized as $\bm{\mu}$ and $\textbf{A}$ are not
functions of the observed data $\{\textbf{X}, \, \textbf{Y}\}$ but instead need to be optimized once
for each dataset.
Secondly, as a consequence, in the full GP-VAE ELBO $\mathcal{L}_P(\cdot)$,
substituting $q_S(\cdot)$, $\mathcal{L}_H(\cdot)$ in place of $q(\cdot)$, $\log Z_{\phi, \theta}(\cdot)$
and simplifying yields the following expression
\begin{align}\label{eq:SVGP-VAE_Hensman_ELBO}
\mathcal{L}_{PH}&(\textbf{U}, \psi, \theta, \bm{\mu}^{1:L}, \textbf{A}^{1:L}) = \\
\sum_{i=1}^N \mathbb{E}_{q_S} &\bigg[ \log p_{\psi}(\yb_i | \zb_i) \bigg] - \sum_{l=1}^L KL\big(q_S^{l}(\textbf{f}_m|\cdot) \: || \: p_\theta^l(\textbf{f}_m|\cdot)\big) \nonumber
\end{align}
where $q_S^l(\textbf{f}_m|\cdot) = \mathcal{N}(\textbf{f}_m|\bm{\mu}^l, \textbf{A}^l)$.
Note that the ELBO above is not a function of the inference network
parameters $\phi$ (for the full derivation, we refer to Appendix B.1). %
The sparse approximate posterior is parameterized by $\textbf{U}, \bm{\mu}, \textbf{A}, \theta$ which
are all treated as free parameters to be optimized, that is, they are not %
functions of the latent dataset or the inference network.
Maximizing the full GP-VAE ELBO is then equivalent to minimizing the KL divergence from
the approximate to the true posterior, and neither of these depends upon the latent dataset or the inference network.
Therefore, using the Hensman sparse GP within an amortized GP-VAE model
causes the ELBO to be independent of the inference network parameters.
Hence, this method also cannot be used as-is to amortize the sparse GP-VAE with mini-batches.
\subsection{The best of both ELBOs}
\label{sec:best_ELBOs}
Recall our goal to make GP-VAE models amenable to large datasets. This requires avoiding the large memory requirements and
being able to amortize inference. To alleviate these problems, \cite{Casale2018GaussianAutoencoders}
propose to use a Taylor approximation of the GP prior term in their ELBO.
However, this significantly increases implementation complexity and
risks introducing errors by ignoring curvature.
We take a different approach utilising sparse GPs. We desire
a model that can scale to large datasets, like \citet{Hensman2013GaussianData},
while also being able to directly compute variational parameters from
the latent regression dataset, like \citet{Titsias2009VariationalProcesses}.
To this end, we take a mini-batch of the data, $\textbf{X}_b \subset \textbf{X}$, $\textbf{Y}_b\subset \textbf{Y}$,
and with the network $\tilde{q}_\phi(\cdot)$ create a mini-batch
of the latent dataset $\textbf{X}_b$, $\tilde{\yb}_b$, ${\tilde{\bm{\sigma}}}_b$. Following
\citet{Titsias2009VariationalProcesses}, with Equations~\ref{eq:mu_star}
and \ref{eq:A_star} for the optimal $\bm{\mu}_T$ and $\textbf{A}_T$, we
analytically compute stochastic estimates for each latent channel $l$ given by
\begin{align}\label{eq:MC_estimators}
\bm{\Sigma}_{b}^l &:= \Kb_{mm} + \frac{N}{b} \Kb_{mb} \, \text{diag}({\tilde{\bm{\sigma}}}^{-2}_b) \, \Kb_{bm} , \nonumber \\
\bm{\mu}_{b}^l &:= \frac{N}{b} \Kb_{mm}\left({\bm{\Sigma}^l_b}\right)^{-1} \Kb_{mb}\, \text{diag}({\tilde{\bm{\sigma}}}^{-2}_b)\,\tilde{\yb}^l_b, \nonumber\\
\textbf{A}_{b}^l &:= \Kb_{mm}\left({\bm{\Sigma}^l_b}\right)^{-1}\Kb_{mm},
\end{align}
where $\Kb_{mb}=k_\theta(\textbf{U}, \textbf{X}_b)\in \mathbb{R}^{m\times b}$.
For a full derivation of these estimators, see Appendix B.2. %
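The estimators of Equation~\ref{eq:MC_estimators} amount to the full-data formulas with the data-dependent terms rescaled by $N/b$. The NumPy sketch below is illustrative only; the mini-batch, the inducing inputs, and the sine-shaped stand-in for the encoder outputs are invented for the example.

```python
import numpy as np

def stochastic_titsias_params(K_mm, K_mb, y_b, sigma_b, N):
    # Mini-batch (size b) estimates of the optimal Titsias parameters;
    # the data-dependent terms are rescaled by N/b.
    b = y_b.shape[0]
    d_inv = 1.0 / sigma_b ** 2
    Sigma_b = K_mm + (N / b) * (K_mb * d_inv) @ K_mb.T
    mu_b = (N / b) * K_mm @ np.linalg.solve(Sigma_b, K_mb @ (d_inv * y_b))
    A_b = K_mm @ np.linalg.solve(Sigma_b, K_mm)
    return mu_b, A_b

# Illustrative mini-batch: b = 20 points from a hypothetical dataset of
# N = 200, with m = 5 inducing inputs and encoder outputs faked by a sine.
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.5 ** 2)
x_u = np.linspace(0.0, 3.0, 5)
x_b = np.linspace(0.0, 3.0, 20)
mu_b, A_b = stochastic_titsias_params(k(x_u, x_u), k(x_u, x_b),
                                      np.sin(x_b), np.full(20, 0.4), N=200)
```

For $b = N$ the rescaling factor is 1 and the full-data optimum is recovered; the resulting $\textbf{A}_b$ is always symmetric positive definite, as the approximate posterior covariance must be.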
All these estimators are consistent, i.e., they converge to
the true values as $b \rightarrow N$.
However, while $\bm{\Sigma}^l_b$ is an unbiased
estimator for $\bm{\Sigma}^l$, the same does not
hold for $\bm{\mu}^l_{b}$ and $\textbf{A}_{b}^l$. We
investigate the magnitude of the bias in
Appendix C.4, %
finding that it is generally
small in practice. We believe this result to be in line
with sparse Gaussian process approximations that assume
the whole dataset may be summarized by a set of inducing
points. Alternatively, this may be interpreted as
assuming that the dataset contains redundancy,
that is, that we have more than enough data to learn the latent function. In such a
case, (cheaply) learning an average of latent functions of
multiple mini-batches would closely approximate (expensively)
learning one latent function using the full dataset.
$\bm{\mu}_{b}^l$ and $\textbf{A}_{b}^l$ parameterize the approximate posterior
$q_S(\cdot)$ which is, therefore, a direct function of the data $\textbf{X}_b$, $\textbf{Y}_b$
and hence it is an amortized approximate posterior. Given a mini-batch
of data, one might be tempted to also compute $\mathcal{L}_T(\cdot)$
on the mini-batch latent dataset. However, such an $\mathcal{L}_T(\cdot)$
is a lower bound on $\log Z_{\phi, \theta}(\cdot)$ of the mini-batch latent
dataset only, not on that of the full latent dataset.
Instead, we use $\bm{\mu}_{b}^l$ and $\textbf{A}_{b}^l$ along with $\textbf{U}$ and $\theta$
to compute the GP evidence lower bound of
\citet{Hensman2013GaussianData} given in Equation \ref{eq:ELBO_H},
which is amenable to mini-batching and lower bounds the
marginal likelihood of the full latent dataset.
Finally, the evidence lower bound of our \textbf{S}parse (\textbf{V}ariational) \textbf{G}aussian \textbf{P}rocess
\textbf{V}ariational \textbf{A}uto\textbf{e}ncoder, for a single mini-batch $\textbf{X}_b, \textbf{Y}_b$, is thus
\begin{multline}\label{eq:SVGP-VAE_ELBO}
\cL_{\text{SVGP-VAE}}\big(\textbf{U}, \psi, \phi, \theta):= \\
\sum_{i=1}^b\mathbb{E}_{q_S} \bigg[ \log p_{\psi}(\yb_i | \zb_i) - %
\log \tilde{q}_\phi(\zb_i | \yb_i)\bigg] \\
+ \frac{b}{N}\sum_{l=1}^L \cL_{H}^{l}(\textbf{U}, \, \phi, \, \theta, \, \bm{\mu}_{b}^l, \,\, \textbf{A}_{b}^l),
\end{multline}
where each $\mathcal{L}^l_H(\cdot)$ is computed using the mini-batch
of the latent dataset $\textbf{X}_b$, $\tilde{\yb}_b^l$, $\tilde{\bm{\sigma}}_b^l$.
By naturally combining well known
approaches, we arrive at a sparse GP-VAE that is both amortized and
can be trained using mini-batches. The VAE parameters $\phi, \psi$, inducing points $\textbf{U}$, and the GP kernel $\theta$ can all be optimized jointly in an end-to-end fashion as we show in the next section.
Also note that during training, $\bm{\mu}_{b}^1,...,\bm{\mu}^L_b$ and
$\textbf{A}_{b}^1,...,\textbf{A}_{b}^L$ are computed from a mini-batch
$\textbf{X}_b$, $\textbf{Y}_b$. However at test time, given a new dataset,
all available data $\textbf{X}$, $\textbf{Y}$ may be used to compute the $\bm{\mu}^1,..,\bm{\mu}^L$
and $\textbf{A}^1,...,\textbf{A}^L$. The Gaussian process structure places
no theoretical restriction upon the number of observations
that are incorporated into the approximate posterior parameters;
any amount of data can be pooled simply through the kernel operations.
In contrast, neural networks typically assume fixed input and
output sizes and pooling data in a principled way requires much more
attention.
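This test-time pooling can be sketched end to end for one latent channel: compute the variational parameters from all available data, then form the predictive mean $\bm{m}_*$ at a new input, which would be passed to the decoder $\psi$. Everything below is an illustrative assumption (RBF kernel, sine-shaped fake encoder outputs, toy sizes), not the authors' pipeline.

```python
import numpy as np

k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.5 ** 2)

def pooled_posterior_params(K_mm, K_mN, y_tilde, sigma_tilde):
    # Full-data (b = N) variational parameters: at test time, every available
    # observation is pooled through the kernel, with no architectural change.
    d_inv = 1.0 / sigma_tilde ** 2
    Sigma = K_mm + (K_mN * d_inv) @ K_mN.T
    mu = K_mm @ np.linalg.solve(Sigma, K_mN @ (d_inv * y_tilde))
    A = K_mm @ np.linalg.solve(Sigma, K_mm)
    return mu, A

# One latent channel; the "encoder outputs" (y~, sigma~) are faked with a
# sine purely for illustration.
x = np.linspace(0.0, 4.0, 50)                 # all available auxiliary data
x_u = np.linspace(0.0, 4.0, 8)                # inducing inputs U
mu, A = pooled_posterior_params(k(x_u, x_u), k(x_u, x),
                                np.sin(x), np.full(50, 0.3))
x_star = np.array([1.7])                      # new test input
m_star = k(x_star, x_u) @ np.linalg.solve(k(x_u, x_u), mu)  # fed to psi
```

With 50 pooled observations and 8 inducing inputs, `m_star` closely tracks the underlying latent function at the unseen input, illustrating why pooling more data at test time is free under the GP structure.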
While we have treated the auxiliary data $\textbf{X}$ as observed throughout this section, our model can also be used when $\textbf{X}$ is not given (or is only partly observed). In such cases, we make use of the Gaussian Process Latent Variable Model (GP-LVM) introduced by \cite{Lawrence2004GaussianData} to learn the missing part of $\textbf{X}$, similar to what is done in \cite{Casale2018GaussianAutoencoders}. In SVGP-VAE, (missing parts of) $\textbf{X}$ can be learned jointly with the rest of the model parameters.
\section{Experiments}
\label{sec:experiments}
We compared our proposed model with existing approaches, measuring both performance and scalability on simple synthetic data as well as on large high-dimensional benchmark datasets. Implementation details can be found in Appendix A %
and additional experiments in Appendix C. %
The implementation of our model as well as our experiments are publicly available at \url{https://github.com/ratschlab/SVGP-VAE}.
\subsection{Synthetic moving ball data}
\label{sec:exp_ball}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/figure_moving_ball_MSE_vary_m_15_RMSE.pdf}
\caption{Performance of our SVGP-VAE models as a function of the number of inducing points. We see that as we increase the number of inducing points, the performance gracefully approaches the one of the exact GP-VAE baseline model.}
\label{fig:ball_scaling_curve}
\end{figure}
The moving ball data was utilized in \citet{Pearce2019ThePixels}.
It consists of black-and-white videos of a moving circle, where the 2D trajectory is sampled from a GP with radial basis function (RBF) kernel.
The goal is to reconstruct the correct underlying trajectory in the two-dimensional latent space from the frames in pixel space.
Since the videos are short (30 frames), full GP inference is still feasible in this setting, such that we can compare our sparse approach against the gold standard. Note that due to the small dataset size we do not perform mini-batching within each video here.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/figure_moving_ball_main_3.pdf}
\caption{Reconstructions of the latent trajectories for the moving ball data. Frames of each test video are overlaid and shaded by time in the first column. Ground truth trajectories are depicted in blue, while predicted trajectories are shown in orange. We can see that the standard VAE fails to model the trajectories faithfully, while the GP-VAE models (including our sparse approximation) match them closely. Note that $b = N$ in SVGP-VAE for this experiment. For SVGP-VAE, the number of inducing points was set to $m=15$.}
\label{fig:ball_trajectories}
\end{figure}
\paragraph{Scaling behavior.}
We see in Figure~\ref{fig:ball_scaling_curve} that as we increase the number of inducing points our method uses, its performance in terms of root mean squared error (RMSE) approaches the performance of the full GP baseline. It already reaches the baseline performance with 15 inducing points, which is half the number of data points in the trajectory and therefore four times cheaper computationally than the exact baseline.
The reconstructions of the trajectories also qualitatively agree with the baseline, as can be seen in Figure~\ref{fig:ball_trajectories}.
\paragraph{Optimization of kernel parameters.}
Another advantage of our proposed method over previous approaches is that it is agnostic to the kernel choice and even allows optimizing the kernel parameters (and thereby learning a better kernel) jointly during training. In \cite{Pearce2019ThePixels}, joint optimization of kernel parameters was not considered, while in \cite{Casale2018GaussianAutoencoders} a special training regime is employed in which VAE and GP parameters are optimized at different stages. Since the moving ball data is generated by a GP, we know the optimal length scale for the RBF kernel in this case, namely that of the generating process.
We optimized the length scale of our SVGP-VAE kernel and found that when using a sufficient number of inducing points, we indeed recover the true length scale almost perfectly (Fig.~\ref{fig:ball_lengthscales}).
Note that when too few inducing points are used, the \emph{effective} length scale of the observed process in the subspace spanned by these inducing points is indeed larger, since some of the variation in the data will be orthogonal to that subspace.
It is thus to be expected that our model would also choose a larger length scale to model the observations in this subspace.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/figure_moving_ball_GP_joint_m_15_20.pdf}
\caption{Optimized length scales of our SVGP-VAE model during training on the moving ball data. With sufficiently many inducing points, the model recovers the true length scale of the generating process.}
\label{fig:ball_lengthscales}
\end{figure}
\paragraph{Optimization of inducing points.}
When working with sparse Gaussian processes, the selection of inducing point locations can often be crucial for the quality of the approximation \citep{Titsias2009VariationalProcesses, fortuin2018scalable, jahnichen2018scalable, burt2019rates}.
In our model, we can optimize these inducing point locations jointly with the other components.
On the moving ball data, since the trajectories are generated from stationary GPs, the optimal inducing point locations should be roughly equally spaced along the time dimension.
When we adversarially initialize the inducing points in a small region of the time series, we see that the model pushes them apart over the course of training and converges to this optimal spacing (Fig.~\ref{fig:ball_inducing_points}).
Together with the previous experiment, these observations suggest that the model is able to choose close-to-optimal inducing points and kernel functions in a data-driven way during the normal training process.
\subsection{Conditional generation of rotated MNIST digits}\label{sec:MNIST}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/figure_moving_ball_ip_joint_slides.pdf}
\caption{Optimized inducing points of our SVGP-VAE model during training on the moving ball data for three different (suboptimal) initializations. We can see that the model correctly learns to spread the inducing points evenly over the time series, which should be expected as a stationary GP kernel is used in the data generating process.}
\label{fig:ball_inducing_points}
\end{figure}
To benchmark our model against existing scalable GP-VAE approaches, we follow the experimental setup from \citet{Casale2018GaussianAutoencoders} and use rotated MNIST digits \citep{lecun1998gradient} in a conditional generation task.
The task is to condition on a number of digits that have been rotated at different angles and to generate an image of one of these digits rotated at an unseen angle. In the original work, they consider 400 images of the digit 3, each rotated at multiple angles in $[0, 2\pi]$. Using identical architectures, kernel, and dataset ($N = 4050$), we report results
for both the GP-VAE of \citet{Casale2018GaussianAutoencoders} and our SVGP-VAE.
The full GP-VAE model from \citet{Pearce2019ThePixels} cannot be applied to this size of data, hence it is omitted.
As alternative baselines, we report results for a conditional VAE (CVAE) \citep{sohn2015learning} as well as for an extension of a sparse GP (SVIGP) approach from \cite{Hensman2013GaussianData}. We use the GECO algorithm \citep{Rezende2018TamingVAEs} to train our SVGP-VAE model, which greatly improves the stability of the training procedure.
\begin{table*}
\setlength{\tabcolsep}{10pt}
\centering
\caption{Results on the rotated MNIST digit 3 dataset. Reported here are mean values together with standard deviations based on 5 runs. We see that our proposed model performs comparably to the sparse GP baseline from \citet{Hensman2013GaussianData} and outperforms the VAE baselines while still being more scalable than the \citet{Casale2018GaussianAutoencoders} model.}
\begin{tabular}{l l l l}
\toprule
& \textbf{MSE} & \textbf{GP complexity} & \textbf{Time/epoch [s]}\\
\midrule
\textbf{CVAE} \citep{sohn2015learning} & $0.0796 \pm 0.0023$ & - & $0.39 \pm 0.01$ \\[0.08cm]
\textbf{GPPVAE} \citep{Casale2018GaussianAutoencoders} & $0.0370 \pm 0.0012$ & $\mathcal{O}(NH^2)$ & $19.10 \pm 0.66$\\[0.08cm]
\textbf{SVGP-VAE} (ours) & $0.0251 \pm 0.0005
$ & $\mathcal{O}(bm^2 + m^3)$ & $1.90 \pm 0.02$ \\[0.08cm]
\textbf{Deep SVIGP} \citep{Hensman2013GaussianData} & $0.0233 \pm 0.0014
$ & $\mathcal{O}(bm^2 + m^3)$ & $1.15 \pm 0.04$ \\[0.08cm]
\bottomrule
\end{tabular}
\label{table:rot_MNIST_main}
\end{table*}
\paragraph{Performance of conditional generation.}
We see in Table~\ref{table:rot_MNIST_main} that our proposed model outperforms the VAE baselines in terms of MSE, while still being computationally more efficient than the model from \cite{Casale2018GaussianAutoencoders} (in theory and practice).\footnote{Note that in their paper, \citet{Casale2018GaussianAutoencoders} report a performance of 0.028 on this task. However, their code for the MNIST experiment is not openly available and we could not reproduce this result with our reimplementation (which is also available at \url{https://github.com/ratschlab/SVGP-VAE}).}
This can also be seen visually in Figure~\ref{fig:mnist_images} as our model produces the most faithful generations.
For the SVGP-VAE, the number of inducing points was set to $m=32$ and the batch size was set to $b = 256$. For the GP-VAE \citep{Casale2018GaussianAutoencoders}, the low-rank matrix factor $H$ depends on the dimension of the linear kernel $M$ used in their model ($M = 8$ and $H= 128$).
Moreover, our SVGP-VAE model comes close in performance to the unamortized sparse GP model with deep likelihood from \cite{Hensman2013GaussianData}.
This shows that the amortization gap of our model is small \citep{cremer2018inference}.
Note that this baseline was not considered in the previous GP-VAE literature \citep{Casale2018GaussianAutoencoders}, even though for the task of conditional generation, where we try to learn a single GP over the entire dataset, amortization is not strictly needed.
However, in tasks where the inference has to be amortized across several GPs, this model cannot be used.
More details on this baseline are provided in Appendix \ref{sec:app_Hensman_baseline}.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/rot_MNIST_recons_cVAE_main_3.pdf}
\caption{Conditionally generated rotated MNIST images. The generations of our proposed model are qualitatively more faithful to the ground truth. For more examples see Appendix C.3.}
\label{fig:mnist_images}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/figure_MNIST_M_m.pdf}
\caption{Performance of our proposed model with different numbers of inducing points and the \citet{Casale2018GaussianAutoencoders} model with different kernel dimensionalities as a function of runtime. For the SVGP-VAE, we consider four different configurations of inducing points, while for the \citet{Casale2018GaussianAutoencoders} model, we use four different dimensions of the linear kernel: $m, M \in \{8, 16, 24, 32\}$. }
\label{fig:mnist_tradeoff}
\end{figure}
\paragraph{Tradeoff between runtime and performance.}
The performance of our sparse approximation can be increased by choosing a larger number of inducing points, at a quadratic cost in terms of runtime.
The \citet{Casale2018GaussianAutoencoders} model, while being more restricted in its kernel choice, offers a similar tradeoff between runtime and performance by choosing a different dimensionality for the low-rank linear kernel used in their latent space (see Appendix B.3). %
In Figure \ref{fig:mnist_tradeoff} we depict performance for both models when varying the number of inducing points and the dimension of the linear kernel, respectively. We observe that SVGP-VAE, besides being much faster, exhibits a steeper decline in the MSE as the model's capacity is increased.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/figure_MNIST_datasets.pdf}
\caption{Performance and runtime of our proposed model on differently sized subsets of the MNIST dataset, including the full set. We see that the performance stays roughly the same, regardless of dataset size, while the runtime grows linearly as expected. The size of each dataset equals $4050 \times \textrm{nr. of MNIST digits}$.}
\label{fig:mnist_scaling}
\end{figure}
\paragraph{Scaling to larger data.}
As mentioned above, \citet{Casale2018GaussianAutoencoders} restrict their experiment to a small subset of the MNIST dataset, and indeed we also did not manage to scale their model to the whole dataset on our hardware (11~GB GPU memory).
Our SVGP-VAE, however, is easily scalable to such dataset sizes.
We report its performance on larger subsets of MNIST (including the full dataset) in Figure~\ref{fig:mnist_scaling}.
We see that the performance of our proposed model does not deteriorate with increased dataset size, while the runtime grows linearly as expected.
All in all, we thus see that our model is more flexible than the previous GP-VAE approaches, scales to larger datasets, and achieves a better performance at lower computational cost.
\subsection{SPRITES experiment}
\label{sec:SPRITES}
We additionally assessed the performance of our model on the SPRITES dataset \citep{li2018disentangled}. It consists of images of cartoon characters in different actions/poses. Each character has a unique style (skin color, tops, pants, hairstyle). There are in total 1296 characters, each observed in 72 different poses. For training, we use 1000 characters and we randomly sample 50 poses for each ($N = 50,000$). Auxiliary data for each image frame consists of a character style and a specific pose. The task is to conditionally generate characters not seen during training in different poses.
For the pose part of the auxiliary data, we use a GP-LVM \citep{Lawrence2004GaussianData}, similar to what was done in the rotated MNIST experiment for the digit style. Using the GP-LVM also for the character style would not allow us to extrapolate to new character styles during the test phase. To overcome this, we introduce a \textit{representation network}, with which we learn the unobserved parts of the auxiliary data in an amortized way.
Our model easily scales to the size of the SPRITES dataset (time per training epoch: $51.8 \pm 0.8$ seconds). Moreover, on the test set of 296 characters, our SVGP-VAE achieves a solid performance of $0.0079 \pm 0.0009$ pixel-wise MSE. In Figure \ref{fig:SPRITES}, we depict some generations for two test characters. We observe that the model faithfully generates the pose information. However, it sometimes wrongly generates parts of the character style. We attribute this to the additional complexity of trying to amortize the learning of the auxiliary data. Extending our initial attempt of using the representation network for such purposes, together with more extensive benchmarking of our model performance, is left for future work. More details on the SPRITES experiment are provided in Appendix \ref{sec:app_SPRITES}.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/SPRITES.pdf}
\caption{Conditionally generated SPRITES images for characters not observed during training. Images in the respective upper row are the ground truths, while the images in the respective lower row are conditional generations using our model.}
\label{fig:SPRITES}
\end{figure}
\section{Related Work}
\paragraph{Sparse Gaussian processes.}
There has been a
long line of work on sparse Gaussian process approximations,
dating back to \cite{Snelson2005SparsePseudo-inputs}, \cite{Quinonero-Candela2005ARegression},
and others. Most of these sparse methods rely on a summarizing set of points referred to as \emph{inducing points} and mainly differ in the exact way of selecting those. Variational learning of inducing points was first considered in \cite{Titsias2009VariationalProcesses} and was shown to lead to significant performance gains. Instead of optimizing an approximate marginal GP likelihood as done in non-variational sparse models, a lower bound on the exact GP marginal likelihood is derived and used as a training objective. Another approach relevant for our work is the stochastic variational approach from \cite{Hensman2013GaussianData}, where the authors proposed a sparse model that can, in addition to reducing the GP complexity, also be trained in mini-batches, enabling the use of GP models on (extremely) large datasets.
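As a minimal illustration of the inducing-point idea underlying these methods, the following numpy sketch computes the Subset-of-Regressors (SoR) predictive mean, in which the only linear system solved is of size $m \times m$. The kernel, data and noise level are made up for illustration, and this is not the variational formulation of \cite{Titsias2009VariationalProcesses} or \cite{Hensman2013GaussianData}:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between the row vectors of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def sor_predict(X, y, Z, Xs, noise=0.1):
    """Subset-of-Regressors predictive mean: only an m x m system is solved,
    so the cost is O(N m^2 + m^3) instead of O(N^3)."""
    Kmm = rbf(Z, Z)                    # m x m inducing-point kernel
    Knm = rbf(X, Z)                    # N x m cross-kernel
    Ksm = rbf(Xs, Z)                   # test x m cross-kernel
    A = noise**2 * Kmm + Knm.T @ Knm   # m x m system matrix
    return Ksm @ np.linalg.solve(A, Knm.T @ y)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3, 3, 15)[:, None]    # m = 15 inducing inputs
mu = sor_predict(X, y, Z, np.array([[0.0]]))
```

With $m = 15$ inducing inputs, the dominant costs are the $N \times m$ cross-kernel and the $m \times m$ solve, i.e. $\mathcal{O}(Nm^2 + m^3)$ rather than $\mathcal{O}(N^3)$.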
\paragraph{Improving VAEs.}
Extending the expressiveness and representational power of VAEs can be roughly divided into two (orthogonal) approaches. The first one focuses on increasing the flexibility of the approximate posterior \citep{rezende2016variational, Kinga2019variational}, while the second one consists of imposing a richer prior distribution on the latent space. Various extensions to the standard Gaussian prior have been proposed, including a Gaussian mixture prior \citep{Dilokthanakul2016DeepUC, kopf2019mixture}, hierarchical structured priors \citep{johnson2016composing,deng2017factorized}, and a von Mises-Fisher distribution prior \citep{Davidson2018HypersphericalAuto-encoders}. GP-VAE models are part of this second group and, contrary to other work on extending VAE priors, aim to relax the \emph{iid} assumption between data points. Moreover, GP-VAEs are also related to approaches that aim to learn more structured and interpretable representations of the data by incorporating auxiliary information, such as time or viewpoints \citep{sohn2015learning, lin2018variational, johnson2016composing}.
\paragraph{Gaussian process VAEs.}
As mentioned above, the most related approaches to our work are the GP-VAE models of \citet{Casale2018GaussianAutoencoders} and \citet{Pearce2019ThePixels}.
However, neither of these is scalable for generic kernel choices and data types. The model from \cite{Pearce2019ThePixels} relies on exact GP inference, while \cite{Casale2018GaussianAutoencoders} exploit a (partially) linear structure of their GP kernel and use a Taylor approximation of the ELBO to get around computational challenges. Another GP-VAE model is proposed in \citet{Fortuin2019GP-VAE:Imputation} where it is used for multivariate time series imputation. Their model is indeed scalable (even in linear time complexity), but it works exclusively on time series data since it exploits the Markov assumption. Additionally, it does not support a joint optimization of GP parameters, but assumes a fixed GP kernel.
\section{Introduction}
Variational autoencoders (VAEs) are among the most widely used models in representation learning and generative modeling \citep{Kingma2014Auto-encodingBayes,kingma2019introduction, rezende2014stochastic}. As VAEs typically use factorized priors, they fall short when modeling correlations between different data points. However, more expressive priors that capture correlations enable useful applications. \citet{Casale2018GaussianAutoencoders}, for instance, showed that by modeling prior correlations between the data, one could generate a digit's rotated image based on rotations of the same digit at different angles.
Gaussian process VAEs (GP-VAEs) have been designed to overcome this shortcoming \citep{Casale2018GaussianAutoencoders}. These models introduce a Gaussian process (GP) prior over the latent variables that correlates the latent variables through a kernel function. While GP-VAEs have outperformed standard VAEs on many tasks \citep{Casale2018GaussianAutoencoders, Fortuin2019GP-VAE:Imputation, Pearce2019ThePixels}, combining the GPs and VAEs brings along fundamental computational challenges. On the one hand, neural networks reveal their full power in conjunction with large datasets, making mini-batching a practical necessity.
GPs, on the other hand, are traditionally restricted to medium-scale datasets due to their unfavorable scaling. In GP-VAEs, these contradictory demands must be reconciled, preferably by reducing the $\mathcal{O}(N^3)$ complexity of GP inference, where $N$ is the number of data points.
Despite recent attempts to improve the scalability of GP-VAE models by using specifically designed kernels and inference methods \citep{Casale2018GaussianAutoencoders, Fortuin2019GP-VAE:Imputation}, a generic way to scale these models, regardless of data type or kernel choice, has remained elusive. This limits current GP-VAE implementations to small-scale datasets. In this work, we introduce the first generically scalable method for training GP-VAEs based on inducing points. We thereby improve the computational complexity from $\mathcal{O}(N^3)$ to $\mathcal{O}(bm^2 + m^3)$, where $m$ is the number of inducing points and $b$ is the batch size.
We show that applying the well-known inducing point approaches \citep{Hensman2013GaussianData,Titsias2009VariationalProcesses} to GP-VAEs is a non-trivial task: existing sparse GP approaches cannot be used off-the-shelf within GP-VAE models, as they either necessitate having the entire dataset in memory or do not lend themselves to being amortized. To address this issue, we propose a simple hybrid sparse GP method that is amenable to both
mini-batching and amortization.
\footnotetext[1]{Contact: \href{mailto:[email protected]}{\texttt{[email protected]}}}
We make the following contributions:
\begin{itemize}
\item We propose the first scalable GP-VAE framework based on sparse GP inference (Sec.~\ref{sec:methods}). In contrast to existing methods, our model is agnostic to the kernel choice, makes no assumption on the structure of the data at hand and allows for joint optimization of all model components.
\item We provide theoretical motivations for the proposed method and introduce a hybrid sparse GP model that accommodates a crucial demand of GP-VAEs for simultaneous amortization and batching.
\item We show empirically that the proposed approximation scheme maintains a high accuracy while being much more scalable and efficient (Sec.~\ref{sec:experiments}). Importantly from a practitioner's point of view, our model is easy to implement as it requires no special modification of the training procedure.
\end{itemize}
\vfill
\end{document}
\section{Introduction}
\label{sc:introduction}
\PARstart{I}{mage} segmentation is a classical problem in computer vision aiming at distinguishing meaningful units in processed images. To this end, image pixels are grouped into regions that on many occasions are expected to correspond to the scene object projections. One step further identifies each unit as belonging to a particular class among a set of object classes to be recognized, giving rise to the Multi-Class Semantic Segmentation (MCSS) problem. From classical methods (e.g. region growing~\cite{Gonzalez2018}) to more robust methods (e.g. level-set~\cite{Wang2020} and graph-cut~\cite{Boykov2006}), various techniques have been proposed to achieve automatic image segmentation in a wide range of problems. Nevertheless, it has not been until recently that the performance of image segmentation algorithms has attained truly competitive levels, and this has been mostly thanks to the power of machine learning-based methodologies.
On the basis of the concept of Convolutional Neural Networks (CNN) proposed by LeCun and his collaborators (e.g. in the form of the well-known LeNet networks~\cite{LeCun1998}) and followed by the technological breakthrough that allowed training artificial neural structures with a number of parameters amounting to millions~\cite{Krizhevsky2012}, deep CNNs have demonstrated remarkable capabilities for problems as complex as image classification, multi-instance multi-object detection or multi-class semantic segmentation. And all this has been accomplished because of the ``learning the representation'' capacity of CNNs, embedded in the set of multi-scale feature maps defined in their architecture through non-linear activation functions and a number of convolutional filters that are automatically learnt during the training process by means of iterative back-propagation of prediction errors between the current and the expected output.
Regarding DCNN-based image segmentation, Guo et al.~\cite{Guo2018} distinguish among three categories of MCSS approaches in accordance with the methodology adopted while dealing with the input images (and correspondingly the required network structure): region-based semantic segmentation, semantic segmentation based on Fully Convolutional Networks (FCN) and Weakly-Supervised semantic segmentation (WSSS). While the former follows a \textit{segmentation using recognition} pipeline, which first detects free-form image regions, and next describes and classifies them, the second approach adopts a \textit{pixel-to-pixel map learning} strategy as key idea without resorting to the image region concept, and, lastly, WSSS methods focus on achieving a performance level similar to that of Fully-Supervised methods (FSSS) but with a weaker labelling of the training image set, i.e. less spatially-informative annotations than the pixel level, to simplify the generation of ground truth data. It is true that powerful interactive tools have been developed for annotating images at the pixel level, which, in particular, just require that the annotator draws a minimal polygon surrounding the targets (see e.g. the open annotation tool by MIT~\cite{Labelme2016}). However, it still takes a few minutes on average to label the target areas for every picture (e.g. around 10 minutes for MS COCO labellers~\cite{Lin2014, Chan2020}), which makes WSSS methods interesting by themselves and actually quite convenient in general.
In this work, we focus on this last class of methods and propose a novel WSSS strategy based on a new loss function combining several terms to counteract the simplicity of the annotations. The strategy is, in turn, evaluated through a benchmark comprising two industry-related application cases of a totally different nature. One of these cases involves the detection of instances of a number of different object classes in the context of a \textit{quality control} application, while the other stems from the \textit{visual inspection} domain and deals with the detection of irregularly-shaped sets of pixels corresponding to scene defective areas. The details about the two application cases can be found in Section~\ref{sc:scenarios}.
WSSS methods are characterized, among others, by the sort of weak annotation that is assumed. In this regard, Chan \textit{et al.}{} highlight in~\cite{Chan2020} several weak annotation methodologies, namely bounding boxes, scribbles, image points and image-level labels (see Fig.~\ref{fg:weak-annotations} for an illustration of all of them). In this work, we adopt a scribble-based methodology from which training masks are derived to propagate the category information from the labelled pixels to the unlabelled pixels during network training.
The main contributions of this work are summarized as follows:
\begin{itemize}
\item A new loss function $L$ comprising several partial cross entropy terms is developed to account for the vagueness of the annotations and the inherent noise of the training masks that are derived from them. This function includes a cluster centroids-based loss term, referred to as the Centroid Loss, which integrates a clustering process within the semantic segmentation approach.
\item Another term of $L$ is defined through a Mean Squared Error (MSE) loss that cooperates with the other partial cross-entropy losses to refine the segmentation results.
\item The Centroid Loss is embedded over a particular implementation of Attention U-Net~\cite{oktay2018attention}.
\item We assess the performance of the full approach on a benchmark comprising two industry-related applications connected with, respectively, quality control and visual inspection.
\end{itemize}
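To convey the intuition behind the Centroid Loss contribution listed above, the following numpy sketch computes the mean squared distance of scribble-labelled pixel features to their per-class centroids. This is an illustrative simplification under our own conventions (the function name and the $-1$ marker for unlabelled pixels are assumptions), not the exact formulation used in the paper:

```python
import numpy as np

def centroid_loss(feats, scribble, n_classes):
    """Sketch: mean squared distance of each scribble-labelled pixel feature
    to the centroid of its class; scribble == -1 marks unlabelled pixels.
    feats: (H, W, D) pixel features; scribble: (H, W) integer labels."""
    f = feats.reshape(-1, feats.shape[-1])
    s = scribble.reshape(-1)
    loss, count = 0.0, 0
    for c in range(n_classes):
        fc = f[s == c]                 # features of pixels scribbled as class c
        if len(fc) == 0:
            continue
        centroid = fc.mean(axis=0)     # class centroid from labelled pixels only
        loss += ((fc - centroid) ** 2).sum()
        count += len(fc)
    return loss / max(count, 1)
```

Minimizing such a term pulls the features of same-class scribble pixels together, which is the clustering behaviour the Centroid Loss is meant to inject into the segmentation network.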
\begin{figure}
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.25\columnwidth]{leaf_bbx.png} &
\includegraphics[width=0.25\columnwidth]{leaf_scr.png} &
\includegraphics[width=0.25\columnwidth]{leaf_pts.png} &
\includegraphics[width=0.25\columnwidth]{leaf_lab.png} \\
\footnotesize (a) & \footnotesize (b) & \footnotesize (c) & \footnotesize (d)
\end{tabular}
\caption{Examples of weak annotations, from more to less informative: (a) bounding boxes, (b) scribbles, (c) point-level labels, (d) image-level labels.}
\label{fg:weak-annotations}
\end{figure}
A preliminary version of this work can be found in \cite{yao2020centroid} as a work-in-progress paper.
The rest of the paper is organized as follows: Section~\ref{sc:scenarios} describes the two application scenarios we use as a benchmark of the semantic segmentation approach; Section~\ref{sc:related_work} reviews previous works on WSSS; Section~\ref{sc:methodology} describes the weakly-supervised methodology developed in this work; Section~\ref{sc:experiments} reports on the results of a number of experiments aiming at showing the performance of our approach from different points of view; and Section~\ref{sc:conclusions} concludes the paper and outlines future work.
\section{Application Scenarios}
\label{sc:scenarios}
In this work, we use the two following industry-related application cases as a benchmark of the WSSS strategy that is developed:
\begin{itemize}
\item In the first case, we deal with the detection of a number of control elements that the sterilization unit of a hospital places in boxes and bags containing surgical tools that surgeons and nurses have to be supplied with prior to starting surgery. These elements provide evidence that the tools have been properly submitted to the required cleaning processes, i.e. they have been placed long enough inside the autoclave at a certain temperature, which makes them change their appearance. Figure~\ref{fig:targets_qc} shows, from left to right and top to bottom, examples of six kinds of elements to be detected for this application: the label/bar code used to track a box/bag of tools, the yellowish seal, the three kinds of paper tape which show the black-, blue- and pink-striped appearance that can be observed in the figure, and an internal filter which is placed inside certain boxes and creates the white-dotted texture that can be noticed (instead of black-dotted when the filter is missing). All these elements, except the label, which is only for box/bag recording and tracking purposes, aim at corroborating the sterilization of the surgery tools contained in the box/bag. Finally, all of them may appear anywhere in the boxes/bags and in varying numbers, depending on the kind of box/bag.
\item In the second case, we deal with the detection of one of the most common defects that can affect steel surfaces, i.e. coating breakdown and/or corrosion (CBC) in any of its many different forms. This is of particular relevance where the integrity of steel-based structures is critical, such as in large-tonnage vessels. Early detection, through suitable maintenance programmes, prevents vessel structures from suffering major damage which can ultimately compromise their integrity and lead to accidents with potentially catastrophic consequences for the crew (and passengers), environmental pollution or damage and/or total loss of the ship, its equipment and its cargo. The inspection of those ship-board structures by humans is a time-consuming, expensive and commonly hazardous activity, which, altogether, suggests the introduction of defect detection tools to alleviate the total cost of an inspection. Figure~\ref{fig:targets_insp} shows images of metallic vessel surfaces affected by CBC.
\end{itemize}
The quality control problem involves the detection of man-made, regular objects in a large variety of situations, which leads to a large number of images to cover all cases, while the inspection problem requires the localization of image areas of irregular shape, which makes the labelling harder and more time-consuming (especially for inexperienced staff). In both cases, the use of a training methodology that reduces the cost of image annotation turns out to be relevant. As will be shown, our approach succeeds in both cases, despite the particular challenges and the differences between them, and the use of weak annotations does not prevent achieving a competitive performance level for these particular problems (see also~\cite{Chan2020} for a discussion on some of the challenges that WSSS methods typically have to face).
\begin{figure}[t]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{label.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{seal.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{black_paper_tape.png} \\
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{blue_paper_tape.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{pink_paper_tape.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{filter.png}
\end{tabular}
\caption{Targets to be detected in the quality-control application considered in this work: from left to right and from top to bottom, box/bag tracking label, yellowish seal, three kinds of paper tape (black-, blue-, and pink-striped) and white-dotted texture related to the presence of a whitish internal filter. All these elements, except the label, aim at evidencing the sterilization of the surgery tools contained in the box.}
\label{fig:targets_qc}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{cbc1_image004.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{cbc2_image026.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{cbc3_image021.png}
\end{tabular}
\caption{Targets to be detected in the visual inspection application considered in this work. The different images show examples of coating breakdown and corrosion affecting ship surfaces.}
\label{fig:targets_insp}
\end{figure}
\section{Related Work}
\label{sc:related_work}
Although fully-supervised segmentation approaches based on DCNNs can achieve an excellent performance, they require plenty of pixel-wise annotations, which turns out to be very costly in practice. In response to this fact, researchers have recently paid attention to the use of weak annotations for addressing MCSS problems. Among others, \cite{wei2017object,choe2019attention,kolesnikov2016seed,yang2020combinational} consider image-level labels, while~\cite{khoreva2017simple,papandreou2015weakly} assume the availability of bounding boxes-based annotations, and~\cite{boykov2001interactive,grady2006random,bai2009geodesic,tang2018regularized,kervadec2019constrained,lin2016scribblesup,tang2018normalized,xu2015learning} make use of scribbles.
In more detail, most WSSS methods involving the use of image-level labels are based on the so-called Class Activation Maps (CAMs)~\cite{wei2017object}, as obtained from a classification network. In~\cite{choe2019attention}, an Attention-based Dropout Layer (ADL) is developed to obtain the entire outline of the target from the CAMs. The ADL relies on a self-attention mechanism to process the feature maps. More precisely, this layer is intended for a double purpose: one task is to hide the most discriminating parts in the feature maps, which induces the model to learn also the less discriminative parts of the target, while the other task intends to highlight the informative region of the target for improving the recognition ability of the model. A three-term loss function is proposed in~\cite{kolesnikov2016seed} to seed, expand, and constrain the object regions progressively when the network is trained. Their loss function is based on three guiding principles: seed with weak localization cues, expand objects based on the information about which classes can occur in an image, and constrain the segmentation so as to coincide with object boundaries. In~\cite{yang2020combinational}, the focus is placed on how to obtain better CAMs. To this end, the work aims at solving the incorrect high responses in CAMs through a linear combination of higher-dimensional CAMs.
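The CAM construction these methods build on is, in its classic form, a sum of the last convolutional feature maps weighted by the classifier weights of the class of interest. The following numpy sketch shows this standard formulation (the cited works refine it in various ways, so this should be read as a generic illustration):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, cls):
    """Classic CAM: weight the last conv feature maps by the classifier
    weights of the chosen class and sum over channels.
    feature_maps: (K, H, W); fc_weights: (n_classes, K)."""
    cam = np.tensordot(fc_weights[cls], feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)           # keep positive class evidence only
    return cam / (cam.max() + 1e-8)      # normalize to [0, 1]
```

The resulting map highlights the image regions that drove the classification decision, which WSSS methods then use as coarse localization cues.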
Regarding the use of bounding boxes as weakly-supervised annotations for semantic segmentation, in~\cite{khoreva2017simple}, GraphCut and Holistically-nested Edge Detection (HED) algorithms are combined to refine the bounding boxes ground truth and make predictions, while the refined ground truth is used to train the network iteratively. Similarly, in~\cite{papandreou2015weakly}, the authors develop a WSSS model using bounding boxes-based annotations, where: firstly, a segmentation network based on the DeepLab-CRF model obtains a series of coarse segmentation results, and, secondly, a dense Conditional Random Field (CRF)-based step is used to facilitate the predictions and preserve object edges. In their work, they develop novel online Expectation-Maximization (EM) methods for DCNN training under the weakly-supervised setting. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive result from bounding boxes annotations.
Scribbles have been widely used in connection with interactive image segmentation, being recognized as one of the most user-friendly ways for interacting. These approaches require the user to provide annotations interactively. The topic has been explored through graph cuts~\cite{boykov2001interactive}, random walks~\cite{grady2006random}, and weighted geodesic distances~\cite{bai2009geodesic}. As an improvement, \cite{tang2018regularized} proposes two regularization terms based on, respectively, normalized cuts and CRF. In this work, there are no extra inference steps explicitly generating masks, and their two loss terms are trained jointly with a partial cross-entropy loss function. In another work~\cite{kervadec2019constrained}, the authors enforce high-order (global) inequality constraints on the network output to leverage unlabelled data, guiding the training process with domain-specific knowledge (e.g. to constrain the size of the target region). To this end, they incorporate a differentiable penalty in the loss function avoiding expensive Lagrangian dual iterates and proposal generation. In the paper, the authors show a segmentation performance that is comparable to full supervision on three separate tasks.
Aiming at using scribbles to annotate images, ScribbleSup~\cite{lin2016scribblesup} learns CNN parameters by means of a graphical model that jointly propagates information from sparse scribbles to unlabelled pixels based on spatial constraints, appearance, and semantic content. In~\cite{tang2018normalized}, the authors propose a new principled loss function to evaluate the output of the network discriminating between labelled and unlabelled pixels, to avoid the poorer training that can result from standard loss functions, e.g. cross entropy, because of the presence of potentially mislabelled pixels in masks derived from scribbles or seeds. Unlike prior work, the cross entropy part of their loss evaluates only seeds where labels are known while a normalized cut term softly accounts for consistency of all pixels. Finally, \cite{xu2015learning} proposes a unified approach involving max-margin clustering (MMC) to take any form of weak supervision, e.g. tags, bounding boxes, and/or partial labels (strokes, scribbles), to infer pixel-level semantic labels, as well as learn an appearance model for each semantic class. Their loss function penalizes errors in positive and negative examples differently, depending on whether the negative examples outnumber the positive ones.
In this paper, we focus on the use of scribbles as weak image annotations and propose a semantic segmentation approach that combines them with superpixels for propagating category information to obtain training masks, referred to as pseudo-masks because of the labelling mistakes they can contain. Further, we propose a specific loss function $L$ that makes use of those pseudo-masks, but at the same time intends to counteract their potential mistakes. To this end, $L$ consists of a partial cross-entropy term that uses the pseudo-masks, together with the Centroid Loss and a regularizing normalized MSE term that cooperate with the former to produce refined segmentation results through a joint training strategy. The Centroid Loss employs the labelling of pixels belonging to the scribbles to guide the training.
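The partial cross-entropy idea, i.e. evaluating the loss only at pixels for which a scribble- or pseudo-mask-derived label exists, can be sketched in numpy as follows. This is a generic sketch (with $-1$ assumed to mark unlabelled pixels) and may differ in weighting details from the exact terms of $L$:

```python
import numpy as np

def partial_cross_entropy(probs, labels):
    """Cross-entropy averaged over labelled pixels only (label -1 = unknown),
    as commonly done when training from scribbles or pseudo-masks.
    probs: (H, W, C) softmax outputs; labels: (H, W) ints, -1 where unlabelled."""
    mask = labels >= 0
    if not mask.any():
        return 0.0
    p = probs[mask, labels[mask]]        # predicted prob of the true class
    return float(-np.log(p + 1e-12).mean())
```

Unlabelled pixels contribute nothing to the gradient, so mistakes are only penalized where some (possibly noisy) supervision is available.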
\section{Methodology}
\label{sc:methodology}
Figure~\ref{fig:cluster_segmentation}(a) illustrates fully supervised semantic segmentation approaches based on DCNN, which, applying a pixel-wise training strategy, try to make network predictions resemble the full labelling as much as possible, thus achieving good segmentation performance levels in general. By design, this kind of approach ignores the fact that pixels of the same category tend to be similar to their adjacent pixels. This similarity can, however, be exploited when addressing the WSSS problem by propagating the known pixel categories towards unlabelled pixels. In this respect, several works reliant on pixel-similarity to train the WSSS network can be found in the literature: e.g. a dense CRF is used in~\cite{papandreou2015weakly}, the GraphCut approach is adopted in~\cite{zhao2018pseudo}, and superpixels are used in ScribbleSup~\cite{lin2016scribblesup}.
\begin{figure*}[t]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.3]{seg_meth_full_supervision.png} &
\includegraphics[scale=0.3]{seg_meth_scribble_cluster.png} \\
\footnotesize (a) & \footnotesize (b)
\end{tabular}
\caption{Illustration of (a) full supervision and (b) our weakly-supervised approach for semantic segmentation: (a) all pixels are labelled to make the prediction [bottom layer of the drawing] resemble the ground truth [top layer of the drawing] as much as possible after pixel-wise training; (b) to solve the WSSS problem, the category information from the incomplete ground truth, i.e. the weak annotations, is propagated towards the rest of pixels making use of pixel similarity and minimizing distances to class centroids derived from the weak annotations. }
\label{fig:cluster_segmentation}
\end{figure*}
Inspired by the aforementioned, in this work, we propose a semantic segmentation approach using scribble annotations and a specific loss function intended to compensate for missing labels and errors in the training masks. To this end, class centroids determined from pixels coinciding with the scribbles, whose labelling is actually the ground truth of the problem, are used in the loss function to guide the training of the network so as to obtain improved segmentation outcomes. The process is illustrated in Fig.~\ref{fig:cluster_segmentation}(b).
Furthermore, similarly to ScribbleSup~\cite{lin2016scribblesup}, we also combine superpixels and scribble annotations to propagate category information and generate pseudo-masks as segmentation proposals, thus making the network converge fast and achieve competitive performance. By way of example, Fig.~\ref{fig:weak_labels}(b) and (c) show, respectively, the scribble annotations and the superpixels-based segmentations obtained for two images of the two application cases considered. The corresponding pseudo-masks, containing more annotated pixels than the scribbles, are shown in Fig.~\ref{fig:weak_labels}(d). As can be observed, not all pixels of the pseudo-masks are correctly labelled, which may affect segmentation performance. It is because of this fact that we incorporate the Centroid Loss and a normalized MSE term into the full loss function. This is discussed in Section~\ref{sec:cen_loss}.
\begin{figure*}[t]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=35mm,height=35mm]{gk2_fp_exp28_0380_90_ROI_org.png}
&
\includegraphics[width=35mm,height=35mm]{gk2_fp_exp28_0380_90_ROI_spx.png}
&
\includegraphics[width=35mm,height=35mm]{gk2_fp_exp28_0380_90_ROI_all.png}
&
\includegraphics[width=35mm,height=35mm]{gk2_fp_exp28_0380_90_ROI_lab.png}
\\
\includegraphics[width=35mm,height=35mm]{image0494_org.png}
&
\includegraphics[width=35mm,height=35mm]{image0494_spx.png}
&
\includegraphics[width=35mm,height=35mm]{image0494_all.png}
&
\includegraphics[width=35mm,height=35mm]{image0494_lab.png} \\
\footnotesize (a) & \footnotesize (b) & \footnotesize (c) & \footnotesize (d)
\end{tabular}
\caption{Weak annotation and propagation example: (a) original images; (b) scribbles superimposed over the original image; (c) scribbles superimposed over the superpixels segmentation result; (d) resulting pseudo-masks. Regarding the scribble annotations: (1st row) red and green scribbles respectively denote corrosion and background; (2nd row) black, red and blue scribbles respectively denote background, tracking label and the internal filter texture. As for the pseudo-masks: (1st row) red, black and green pixels respectively denote corrosion, background and unlabelled pixels; (2nd row) red, blue, black and green pixels respectively denote the tracking label, the internal filter texture, the background and the unlabelled pixels.}
\label{fig:weak_labels}
\end{figure*}
The remaining methodological details are given throughout the rest of this section: we begin with how weak annotations are handled and how the pseudo-masks are obtained in Section~\ref{sec:pseudo_mask}, while the architecture of the network is described in Section~\ref{sec:net_arch} and the different loss terms are detailed and discussed in Sections~\ref{sec:pce_loss} (partial Cross-Entropy loss, $L_\text{pCE}$), \ref{sec:cen_loss} (Centroid Loss, $L_\text{cen}$) and~\ref{sec:full_loss} (normalized MSE-term, $L_\text{mse}$, and the full loss function $L$).
\subsection{Weak annotations and pseudo-masks generation}
\label{sec:pseudo_mask}
As already said, Fig.~\ref{fig:weak_labels}(b) shows two examples of scribble annotations, one for the visual inspection case (top) and the other for the quality control case (bottom). Because scribbles represent only a few pixels, the segmentation performance that the network can be expected to achieve from them alone will be far from satisfactory for either task. To enhance the network performance, we combine the scribbles with an oversegmentation of the image to generate pseudo-masks as segmentation proposals for training. For the oversegmentation, we make use of the Adaptive-SLIC (SLICO) algorithm~\cite{achanta2012slic}, requesting enough superpixels so as not to mix different classes in the same superpixel. Figure~\ref{fig:weak_labels}(c,top) shows an oversegmentation into 50 superpixels, while 80 are used for Fig.~\ref{fig:weak_labels}(c,bottom). Next, those pixels belonging to a superpixel that intersects with a scribble are labelled with the same class as the scribble, as shown in Fig.~\ref{fig:weak_labels}(d). In Fig.~\ref{fig:weak_labels}(d,top), the black pixels represent the background, the red pixels indicate corrosion, and the green pixels denote unlabelled pixels. In Fig.~\ref{fig:weak_labels}(d,bottom), black and green pixels denote the same as for the top mask, while the red pixels represent the tracking label and the blue pixels refer to the internal filter class.
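The label-propagation step just described can be sketched as follows. This is a minimal NumPy sketch with hypothetical names: it assumes the superpixel map comes from, e.g., scikit-image's \texttt{slic} with \texttt{slic\_zero=True} (the SLICO variant), and resolves the rare case of two different scribbles touching the same superpixel by majority vote, a detail the text leaves open.

```python
import numpy as np

def propagate_to_superpixels(segments, scribble_mask, unlabelled=255):
    """Label every pixel of a superpixel that intersects a scribble.

    segments: int array (H, W) of superpixel ids, e.g. from
        skimage.segmentation.slic(image, n_segments=..., slic_zero=True)
    scribble_mask: int array (H, W); class id on scribble pixels,
        `unlabelled` elsewhere.  Returns the pseudo-mask (same shape).
    """
    pseudo_mask = np.full(scribble_mask.shape, unlabelled, dtype=np.int64)
    for sp in np.unique(segments):
        inside = segments == sp
        labels = scribble_mask[inside & (scribble_mask != unlabelled)]
        if labels.size > 0:
            # majority vote in case two different scribbles touch the superpixel
            values, counts = np.unique(labels, return_counts=True)
            pseudo_mask[inside] = values[np.argmax(counts)]
    return pseudo_mask
```

Superpixels that intersect no scribble keep the `unlabelled` value and thus belong to $\Omega_U$.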
\subsection{Network Architecture}
\label{sec:net_arch}
In this work, we adopt U-Net~\cite{Ronneberger2015} as the base network architecture. As is well known, U-Net evolves from the fully convolutional neural network concept and consists of a contracting path followed by an expansive path. It was developed for biomedical image segmentation, though it has been shown to exhibit good performance in general for natural images, even for small training sets. Furthermore, we also embed Attention Gates (AG) in U-Net, similarly to Attention U-Net (AUN)~\cite{oktay2018attention}. These attention modules have been widely used in e.g. Natural Language Processing (NLP)~\cite{vaswani2017attention,clark2019does,serrano2019attention,jain2019attention}. Other works related to image segmentation~\cite{hu2018squeeze,jetley2018learn,oktay2018attention,sinha2020multi} have introduced them for enhanced performance. In our case, AGs are integrated into the decoding part of U-Net to improve its ability to segment small targets.
For completeness, we include in Fig.~\ref{fig:AG} a schematic of the operation of the AG used in this work, which implements (\ref{eq:AG}) as described below:
\begin{align}
(x_{i,c}^l)^\prime &= \alpha_i^l \, x_{i,c}^l \label{eq:AG} \\
\alpha_i^l &= \sigma_2(W_{\phi}^T(\sigma_1(W_x^T x_i^l + W_g^T g_i + b_g)) + b_{\phi}) \nonumber
\end{align}
where the feature-map $x_i^l \in \mathbb{R}^{F_l}$ is obtained at the output of layer $l$ for pixel $i$, $c$ denotes a channel in $x_{i,c}^l$, $F_l$ is the number of feature maps at that layer, the gating vector $g_i$ is used for each pixel $i$ to determine focus regions and is such that $g_i \in \mathbb{R}^{F_l}$ (after up-sampling the input from the lower layer), $W_g \in \mathbb{R}^{F_l \times 1}$, $W_x \in \mathbb{R}^{F_l \times 1}$, and $W_{\phi} \in \mathbb{R}^{1 \times 1}$ are linear mappings, while $b_g \in \mathbb{R}$ and $b_{\phi} \in \mathbb{R}$ denote bias terms, $\sigma_1$ and $\sigma_2$ respectively represent the ReLU and the sigmoid activation functions, $\alpha_i^l \in [0,1]$ are the resulting attention coefficients, and $\Phi_\text{att} = \{W_g, W_x, b_g; W_{\phi}, b_{\phi}\}$ is the set of parameters of the AG.
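A minimal PyTorch sketch of such an AG follows; the linear mappings of (\ref{eq:AG}) become $1\times 1$ convolutions applied to whole feature maps, and the bilinear up-sampling of $g$, as well as all names, are assumptions of this sketch rather than details prescribed by the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Additive AG: alpha = sigma2(W_phi^T sigma1(W_x^T x + W_g^T g + b_g) + b_phi)."""
    def __init__(self, F_l, F_g, F_int=1):
        super().__init__()
        self.W_x = nn.Conv2d(F_l, F_int, kernel_size=1, bias=False)
        self.W_g = nn.Conv2d(F_g, F_int, kernel_size=1, bias=True)   # carries b_g
        self.W_phi = nn.Conv2d(F_int, 1, kernel_size=1, bias=True)   # carries b_phi

    def forward(self, x, g):
        # up-sample the coarser gating signal so that g and x have compatible shapes
        g = F.interpolate(g, size=x.shape[2:], mode='bilinear', align_corners=False)
        alpha = torch.sigmoid(self.W_phi(torch.relu(self.W_x(x) + self.W_g(g))))
        return alpha * x   # (x_{i,c}^l)' = alpha_i^l * x_{i,c}^l, broadcast over channels
```

Since $\alpha_i^l \in (0,1)$, the gate can only attenuate activations, never amplify them.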
The attention coefficients $\alpha_i$ are intended to identify salient image regions and discard feature responses so as to preserve only the activations relevant to the specific task. In~\cite{hu2018squeeze}, the Squeeze-and-Excitation (SE) block obtains attention weights in channels for filter selection. In our approach, the AGs involved calculate attention weights at the spatial level.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.35]{attention_gate_new.png}
\caption{Schematic diagram of an Attention Gate (AG). $N$ is the size of the mini-batch.}
\label{fig:AG}
\end{figure*}
As shown in Fig.~\ref{fig:Unet_cluster}, AGs are fed by two input tensors, one from the encoder side of U-Net and the other from the decoder side, respectively $x$ and $g$ in Fig.~\ref{fig:AG}. With the AG approach, spatial regions are selected on the basis of both the activations $x$ and the contextual information provided by the gating signal $g$ which is collected from a coarser scale. The contextual information carried by the gating vector $g$ is hence used to highlight salient features that are passed through the skip connections. In our case, $g$ enters the AG after an up-sampling operation that makes $g$ and $x$ have compatible shapes (see Fig.~\ref{fig:AG}).
\begin{figure*}[t]
\centering
\includegraphics[scale=0.25]{attention_unet_cluster.png}
\caption{Block diagram of the Centroids AUN model. The size decreases gradually by a factor of 2 at each scale in the encoding part and increases by the same factor in the decoding part. In the latter, AGs are used to help the network focus on the areas of high-response in the feature maps. The \textit{Conv~Skip} block is the \textit{skip~connection} of ResNet \cite{he2016deep}. The sub-network of the lower part of the diagram is intended to predict class centroids. In the drawing, $C$ denotes the number of classes and $M$ is the dimension of the class centroids.}
\label{fig:Unet_cluster}
\end{figure*}
Apart from the particularities of the AG that we use, which have been described above, another difference with the original AUN is the sub-network that we attach to the main segmentation network, as can be observed from the network architecture that is shown in Fig.~\ref{fig:Unet_cluster}. This sub-network is intended to predict class centroids on the basis of the scribbles that are available in the image, with the aim of improving the training of the main network from the possibly noisy pseudo-masks, and hence achieve a higher level of segmentation performance. Consequently, during training: (1) our network handles two sorts of ground truth, namely scribble annotations $Y_\text{scr}$ to train the attached sub-network for proper centroid predictions, and the pseudo-masks $Y_\text{seg}$ for image segmentation; and (2) the augmented network yields two outputs, a set of centroids $P_\text{cen}$ and the segmentation of the image $P_\text{seg}$ (while during inference only the segmentation output $P_\text{seg}$ is relevant). Predicted cluster centroids are used to calculate the Centroid Loss term $L_\text{cen}$ (described in Section~\ref{sec:cen_loss}) of the full loss function $L$, which comprises two more terms (as described in Section~\ref{sec:full_loss}). Thanks to the design of $L$, the full network --i.e. the AUN for semantic segmentation and the sub-net for centroids prediction-- is trained through a joint training strategy following an end-to-end learning model. During training, the optimization of $L_\text{cen}$ induces updates in the main network weights via back-propagation that are intended to reach enhanced training and therefore produce better segmentations.
As can be observed, the centroids prediction sub-net is embedded into the intermediate part of the network, being fed by the last layer of the encoder side of our AUN. As shown in Fig.~\ref{fig:Unet_cluster}, this sub-net consists of three blocks, each of which comprises a fully connected layer, a batch-normalization layer, and a ReLU activation function. The shape of $P_\text{cen}$ is $C\times M$, where $C$ is the number of categories and $M$ denotes the dimension of the feature space where the class centroids are defined. In our approach, centroid features are defined from the softmax layer of the AUN, and hence comprise $C$ components, though we foresee combining them with $K$ additional features from the classes which are incorporated externally to the operation of the network, so that $M = C+K$. On the other hand, the shape of $P_\text{seg}$ is $C\times W\times H$, where $(H,W)$ is the size of the input image.
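A possible PyTorch sketch of this sub-network is given below; the hidden sizes, the flattening/pooling of the encoder output into a vector, and the final linear projection to $C\times M$ are assumptions of the sketch, not specifics stated in the text.

```python
import torch
import torch.nn as nn

class CentroidSubNet(nn.Module):
    """Centroid-prediction head: three FC blocks (Linear -> BatchNorm1d -> ReLU)
    followed by a projection to C*M values, reshaped to (C, M) centroids."""
    def __init__(self, in_features, C, M, hidden=(256, 128, 64)):
        super().__init__()
        dims = (in_features,) + tuple(hidden)
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out), nn.ReLU()]
        blocks.append(nn.Linear(dims[-1], C * M))
        self.net = nn.Sequential(*blocks)
        self.C, self.M = C, M

    def forward(self, z):
        # z: (N, in_features), e.g. pooled features from the last encoder layer
        return self.net(z).view(-1, self.C, self.M)
```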
\subsection{Partial Cross-Entropy Loss}
\label{sec:pce_loss}
Given a $C$-class problem and a training set $\Omega$, comprising a subset $\Omega_L$ of labelled pixels and a subset $\Omega_U$ of unlabelled pixels, the Partial Cross-Entropy Loss $L_\text{pCE}$, widely used for WSSS, computes the cross-entropy only for labelled pixels $p \in \Omega_L$, ignoring $p \in \Omega_U$:
\begin{equation}
L_\text{pCE} = \sum\limits_{c=1}^{C}\sum\limits_{p\in \Omega_{L}^{(1)}} -y_{g(p),c}~\log~y_{s(p),c}
\label{func:partial_cross-entropy}
\end{equation}
where $y_{g(p),c} \in \{0,1\}$ and $y_{s(p),c} \in [0,1]$ represent respectively the ground truth and the segmentation output. In our case, and for $L_\text{pCE}$, $\Omega_L^{(1)}$ is defined as the set of pixels labelled in the pseudo-masks (hence, pixels from superpixels not intersecting with any scribble belong to $\Omega_U$ and are not used by (\ref{func:partial_cross-entropy})). Hence, $y_{g(p),c}$ refers to the pseudo-masks, i.e. $Y_\text{seg}$, while $y_{s(p),c}$ is the prediction, i.e. $P_\text{seg}$, as supplied by the final softmax layer of the network.
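In NumPy terms, (\ref{func:partial_cross-entropy}) can be sketched as follows (hypothetical names; \texttt{unlabelled} marks the pixels of $\Omega_U$):

```python
import numpy as np

def partial_cross_entropy(y_pred, pseudo_mask, unlabelled=255, eps=1e-12):
    """Cross-entropy summed only over the labelled pixels Omega_L^(1).

    y_pred: (C, H, W) softmax probabilities; pseudo_mask: (H, W) int
    class ids, with `unlabelled` marking pixels outside the pseudo-mask.
    """
    labelled = pseudo_mask != unlabelled
    classes = pseudo_mask[labelled]                    # (N,) true class ids
    probs = y_pred[:, labelled]                        # (C, N)
    picked = probs[classes, np.arange(classes.size)]   # prob. of the true class
    return -np.sum(np.log(picked + eps))
```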
\subsection{Centroid Loss}
\label{sec:cen_loss}
As can easily be foreseen, when the network is trained using the pseudo-masks, the segmentation performance depends on how accurate the pseudo-masks are and hence on the quality of the superpixels, i.e. how well they adhere to object boundaries and avoid mixing classes. The Centroid Loss function is introduced in this section for the purpose of compensating for this dependence and improving the quality of the segmentation output.
In more detail, we define the Centroid Loss term $L_\text{cen}$ as another partial cross-entropy loss:
\begin{equation}
L_\text{cen} = \sum\limits_{c=1}^{C} \sum\limits_{p\in \Omega_{L}^{(2)}} -y_{g(p),c}^{*}~\log~y_{s(p),c}^{*}
\label{func:cen_loss}
\end{equation}
defining in this case:
\begin{itemize}
\item $\Omega_{L}^{(2)}$ as the set of pixels coinciding with the scribbles,
\item $y_{g(p),c}^{*}$ as the corresponding labelling, and
\end{itemize}
\begin{align}
y_{s(p),c}^{*} &= \frac{\exp(-d_{p,c})}{\sum\limits^{C}_{c^{'}=1} \exp(-d_{p,c^{'}})} \label{func:cen_loss_2} \\
d_{p,c} &=\frac{||f_p-\mu_c||_2^2}{\sum\limits^{C}_{c^{'}=1}||f_p-\mu_{c^{\prime}}||_2^2} \label{func:cen_loss_3}
\end{align}
where: (1) $f_p$ is the feature vector associated to pixel $p$ and (2) $\mu_c$ denotes the centroid predicted for class $c$, i.e. $\mu_c \in P_\text{cen}$. $f_p$ is built from the section of the softmax layer of the main network corresponding to pixel $p$, though $f_p$ can be extended with the incorporation of additional external features, as already mentioned. This link between $L_\text{pCE}$ and $L_\text{cen}$ through the softmax layer makes both terms decrease through the joint optimization, in the sense that for a reduction in $L_\text{cen}$ to take place, and hence in the full loss $L$, also $L_\text{pCE}$ has to decrease by better predicting the class of the pixels involved. The additional features that can be incorporated in $f_p$ try to introduce information from the classes, e.g. predominant colour, to guide even more the optimization.
In practice, this loss term \textit{pushes} pixel class predictions towards, ideally, a subset of the corners of the $C$-dimensional hypercube, in accordance with the scribbles, i.e. the available ground truth. Some similarity can certainly be established with the K-means algorithm. Briefly speaking, K-means iteratively calculates a set of centroids for the considered number of clusters/classes, and associates the samples to the closest cluster in feature space, thus minimizing the intra-class variance until convergence. Some DCNN-based clustering approaches reformulate K-means as a neural network optimizing the intra-class variance loss by means of a back-propagation-style scheme~\cite{Wen2016,Peng2019}. Differently from the latter, in this work, (\ref{func:cen_loss}) reformulates the unsupervised process of minimizing the distances from samples to centroids into a supervised process, since the clustering takes place around the true classes defined by the labelling of the scribbles $y_{g(p),c}^{*}$ and the extra information that may be incorporated.
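A NumPy sketch of (\ref{func:cen_loss})--(\ref{func:cen_loss_3}) over the scribble pixels follows (hypothetical names; \texttt{feats} stacks the feature vectors $f_p$ of those pixels):

```python
import numpy as np

def centroid_assignments(feats, mu):
    """Soft assignment y*_{s(p),c} of pixel features to class centroids.

    feats: (N, M) feature vectors f_p; mu: (C, M) predicted centroids.
    """
    sq = ((feats[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # ||f_p - mu_c||^2
    d = sq / sq.sum(axis=1, keepdims=True)                    # normalized distances
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)                   # softmax over -d_{p,c}

def centroid_loss(feats, mu, labels, eps=1e-12):
    """Partial cross-entropy between y* and the scribble labels y*_{g(p),c}."""
    y_star = centroid_assignments(feats, mu)
    return -np.sum(np.log(y_star[np.arange(len(labels)), labels] + eps))
```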
\subsection{Full Loss Function}
\label{sec:full_loss}
Since $L_\text{pCE}$ applies only to pixels labelled in the pseudo-mask and $L_\text{cen}$ is also restricted to a subset of image pixels, namely those coinciding with the scribbles, we add a third loss term in the form of a normalized MSE loss $L_\text{mse}$ that acts as a regularization term involving all pixels for which a class label must be predicted, $\Omega_{L}^{(3)}$, i.e. the full image. This term calculates the normalized distances between the segmentation result for every pixel and its corresponding centroid:
\begin{equation}
L_\text{mse} = \frac{\sum\limits_{p\in \Omega_{L}^{(3)}} d_{p,c(p)}}{|\Omega_{L}^{(3)}|}
\label{func:mse_reg}
\end{equation}
where $|\mathcal{A}|$ stands for the cardinality of set $\mathcal{A}$, and $d_{p,c(p)}$ is as defined by (\ref{func:cen_loss_3}), with $c(p)$ as the class prediction for pixel $p$ (and $\mu_{c(p)}$ the corresponding predicted centroid), taken from the softmax layer.
Finally, the complete loss function is given by:
\begin{equation}
L = L_\text{pCE} + \lambda_\text{cen} L_\text{cen} + \lambda_\text{mse} L_\text{mse}
\label{func:final_loss}
\end{equation}
where $\lambda_\text{cen}$ and $\lambda_\text{mse}$ are trade-off constants.
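Putting the three terms together, a self-contained NumPy sketch of (\ref{func:final_loss}) could look as follows (all names are hypothetical; \texttt{feats} holds the per-pixel feature vectors $f_p$ and \texttt{unlabelled} marks pixels outside the respective annotations):

```python
import numpy as np

def full_loss(y_pred, feats, mu, pseudo_mask, scribble_mask,
              lam_cen=1.0, lam_mse=1.0, unlabelled=255, eps=1e-12):
    """L = L_pCE + lam_cen * L_cen + lam_mse * L_mse.

    y_pred: (C, H, W) softmax probabilities; feats: (H, W, M) features;
    mu: (C, M) predicted class centroids.
    """
    # normalized distances d_{p,c} for every pixel
    sq = ((feats[..., None, :] - mu) ** 2).sum(-1)          # (H, W, C)
    d = sq / (sq.sum(-1, keepdims=True) + eps)
    # L_pCE over the pixels labelled in the pseudo-mask
    rows, cols = np.nonzero(pseudo_mask != unlabelled)
    l_pce = -np.log(y_pred[pseudo_mask[rows, cols], rows, cols] + eps).sum()
    # L_cen over the scribble pixels
    scr = scribble_mask != unlabelled
    e = np.exp(-d[scr])                                     # (N, C)
    y_star = e / e.sum(-1, keepdims=True)
    l_cen = -np.log(y_star[np.arange(len(y_star)), scribble_mask[scr]] + eps).sum()
    # L_mse: mean distance of every pixel to its predicted class centroid
    c_pred = y_pred.argmax(0)                               # (H, W)
    l_mse = np.take_along_axis(d, c_pred[..., None], -1).mean()
    return l_pce + lam_cen * l_cen + lam_mse * l_mse
```

In the actual model these quantities are tensors and the loss is minimized by back-propagation; the sketch only makes the arithmetic of the three terms explicit.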
\section{Experiments and Discussion}
\label{sc:experiments}
In this section, we report on the results obtained for the two application cases that constitute our benchmark. For a start, Section~\ref{sec:exp_setup} describes the experimental setup. Next, in Section~\ref{sec:dis_feature}, we discuss the feature space where the Centroid Loss is defined and its relationship with the weak annotations, while Section~\ref{sec:effect_cen} evaluates the effect on the segmentation performance of several combinations of the terms of the loss function $L$, and Section~\ref{sec:influence_weak} analyzes the impact of weak annotations and their propagation. Subsequently, our approach is compared against two previously proposed methods in Section~\ref{sec:performance_compare}. To finish, we address final tuning and show segmentation results, for qualitative evaluation purposes, for some images of both application cases in Section~\ref{sec:result_displays}.
\subsection{Experimental Setup}
\label{sec:exp_setup}
\subsubsection{Datasets}
The dataset from the quality control application case consists of a total of 484 images, two thirds of which are designated for training and the rest for testing. The dataset for the visual inspection application comprises 241 images, and the same splitting strategy is adopted for the training and test sets. Both datasets have in turn been augmented with rotated and scaled versions of the original images, together with random croppings, to increase the diversity of the training set. Finally, as already explained, the ground truth for both datasets comprises scribbles and pseudo-masks (generated in accordance with the process described in Section~\ref{sec:pseudo_mask}).
By way of illustration, Fig.~\ref{fig:pseudo_mask_exp} shows, for the two application cases, some examples of weak annotations with different settings as for the width of the scribbles and the number of superpixels used for generating the pseudo-masks.
\begin{figure}[t]
\centering
\begin{tabular}{@{\hspace{-3mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_scr2.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_scr5.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_scr10.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_scr20.png} \\[-1mm]
\footnotesize 2 & \footnotesize 5 & \footnotesize 10 & \footnotesize 20 \\
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_30_mask.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_50_mask.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_80_mask.png} \\[-1mm]
\footnotesize full mask & \footnotesize 30 & \footnotesize 50 & \footnotesize 80 \\
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_image0494.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_image0494_sup30.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_image0494_sup50.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_image0494_sup80.png} \\[-1mm]
\footnotesize full mask & \footnotesize 30 & \footnotesize 50 & \footnotesize 80
\end{tabular}
\caption{Examples of weak annotations and their propagation for the two application cases: (1st row) examples of scribble annotations of different widths, namely, from left to right, 2, 5, 10 and 20 pixels, for the visual inspection case; (2nd and 3rd rows) the leftmost image shows the fully supervised ground truth, while the remaining images are examples of pseudo-masks generated from 20-pixel scribbles and for different amounts of superpixels, namely 30, 50, and 80, for the two images of Fig.~\ref{fig:weak_labels} and, hence, for the visual inspection and the quality control application cases. (The colour code is the same as for Fig.~\ref{fig:weak_labels}.)}
\label{fig:pseudo_mask_exp}
\end{figure}
\subsubsection{Evaluation metrics}
For quantitative evaluation of our approach, we consider the following metrics:
\begin{itemize}
\item The mean Intersection Over Union (mIOU), which can be formally stated as follows: given $n_{ij}$ as the number of pixels of class $i$ that fall into class $j$, for a total of $C$ different classes, the mIOU is defined as (\ref{func:miou}), see e.g.~\cite{long2015fully},
\begin{equation}
\text{mIOU} = \frac{1}{C} \sum_i \frac{n_{ii}}{\sum_j n_{ij} + \sum_j n_{ji} - n_{ii}}\,.
\label{func:miou}
\end{equation}
\item The mean Recall and mean Precision are also calculated to evaluate the segmentation performance for all classes. True Positive (TP), False Positive (FP) and False Negative (FN) samples are determined from the segmentation results and the ground truth. Using a macro-averaging
approach~\cite{Zhang2014}, the mean Recall (mRec) and mean Precision (mPrec) are expressed as follows:
\begin{align}
\text{mRec} & = \frac{1}{C} \left( \sum_i \frac{TP_i}{TP_i + FN_i} \right) = \frac{1}{C} \left( \sum_i \frac{TP_i}{T_i} \right)
\label{func:rec_prec_1}
\\
\text{mPrec} & = \frac{1}{C} \left( \sum_i \frac{TP_i}{TP_i + FP_i} \right) = \frac{1}{C} \left( \sum_i \frac{TP_i}{P_i} \right)
\label{func:rec_prec_2}
\end{align}
where $TP_i$, $FP_i$ and $FN_i$ are, respectively, the true positives, false positives and false negatives for class $i$, and $T_i$ and $P_i$ are, respectively, the number of positives in the ground truth and the number of predicted positives, both for class $i$. From now on, to shorten the notation, when we refer to precision and recall, it must be understood that we are actually referring to mean precision and mean recall.
\item The F$_1$ score as the harmonic mean of precision and recall:
\begin{equation}
F_1 = \frac{2\cdot\text{mPrec}\cdot\text{mRec}}{\text{mPrec} + \text{mRec}}
\end{equation}
\end{itemize}
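The metrics above are straightforward to compute from a confusion matrix; a NumPy sketch follows (hypothetical names; rows index the ground-truth class $i$, columns the predicted class $j$):

```python
import numpy as np

def segmentation_metrics(conf):
    """mIOU, macro-averaged recall/precision and F1 from a C x C confusion matrix."""
    conf = conf.astype(float)
    tp = np.diag(conf)            # TP_i
    t = conf.sum(axis=1)          # T_i: positives in the ground truth
    p = conf.sum(axis=0)          # P_i: predicted positives
    miou = np.mean(tp / (t + p - tp))
    mrec = np.mean(tp / t)
    mprec = np.mean(tp / p)
    f1 = 2 * mprec * mrec / (mprec + mrec)
    return miou, mrec, mprec, f1
```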
In all experiments, we make use of fully supervised masks/ground truth for both datasets in order to be able to report accurate calculations about the segmentation performance. This ground truth has been manually generated only for this purpose; it has been used for training only when reporting the performance of the fully supervised approach, for comparison purposes between the fully- and weakly-supervised solutions.
To finish, in a number of experiments we also report on the quality of the pseudo-masks, so that the segmentation performance reported can be correctly valued. To this end, we calculate a weak mIOU (wmIOU) using (\ref{func:miou}) between the pseudo-mask and the fully-supervised mask involved.
\subsubsection{Implementation details and main settings}
All experiments have been conducted using the PyTorch framework running on a PC fitted with an NVIDIA GeForce RTX 2080 Ti GPU, a 2.9GHz 12-core CPU with 32 GB RAM, and 64-bit Ubuntu. The batch size is 8 for all experiments and the size of the input image is $320\times 320$ pixels, since this has turned out to be the best configuration for the aforementioned GPU.
As already mentioned, the AUN for semantic segmentation and the sub-net for centroid prediction are jointly trained following an end-to-end learning model. The network weights are initialized by means of the Kaiming method~\cite{He2015b}, and they are updated using a $10^{-4}$ learning rate for 200 epochs.
Best results have been obtained for the balance parameters $\lambda_\text{cen}$ and $\lambda_\text{mse}$ set to 1.
\subsubsection{Overall view of the experiments}
The experiments discussed in the following sections consider different configurations of the various elements involved in our semantic segmentation approach. These configurations, enumerated in Table~\ref{tab:exp_define}, involve:
\begin{itemize}
\item different widths of the scribble annotations used as ground truth, namely 2, 5, 10 and 20 pixels,
\item different amounts of superpixels for generating the pseudo-masks, namely 30, 50 and 80,
\item two ways of defining the feature space for the class centroids: from exclusively the softmax layer of AUN and combining those features with other features from the classes.
\end{itemize}
Notice that the first rows of Table~\ref{tab:exp_define} refer to experiments where the loss function used for training is just the partial cross-entropy, as described in (\ref{func:partial_cross-entropy}), and can therefore be taken as a lower baseline. The upper baseline corresponds to the configuration using full masks and the cross-entropy loss $L_\text{CE}$ for training, i.e. fully supervised semantic segmentation, which can also be found in Table~\ref{tab:exp_define} as the last row.
Apart from the aforementioned variations, we also analyse the effect of several combinations of the loss function terms, as described in (\ref{func:final_loss}), defining three groups of experiments: Group 1 (G1), which indicates that the network is trained by means of only $L_\text{pCE}$, and hence would also coincide with the lower baseline; Group 2 (G2), which denotes that the network is trained by means of the combination of $L_\text{pCE}$ and $L_\text{cen}$; and Group 3 (G3), for which the network is trained using the full loss function as described in (\ref{func:final_loss}).
Finally, we compare our segmentation approach with two other alternative approaches also aimed at solving the WSSS problem through a modified loss function. These loss functions are the Constrained-size Loss ($L_\text{size}$)~\cite{kervadec2019constrained} and the Seed, Expand, and Constrain (SEC) Loss ($L_\text{sec}$)~\cite{kolesnikov2016seed}:
\begin{align}
L_\text{size} &= L_\text{pCE} + \lambda_\text{size}L_{\mathcal{C}(V_S)}
\label{func:compare_exp_loss_1}
\\
L_\text{sec} &= L_\text{seed} + L_\text{expand} + L_\text{constrain}
\label{func:compare_exp_loss_2}
\end{align}
On the one hand, $\lambda_\text{size}$ for the $L_{\mathcal{C}(V_S)}$ term is set to $10^{-3}$. On the other hand, regarding $L_\text{sec}$, it consists of three terms, the seed loss $L_\text{seed}$, the expand loss $L_\text{expand}$, and the constrain loss $L_\text{constrain}$. In our case, we feed $L_\text{seed}$ from the scribble annotations, while, regarding $L_\text{expand}$ and $L_\text{constrain}$, we adopt the same configuration as in the original work.
\begin{table*}[t]
\centering
\caption{Labels for the different experiments performed, varying the width of scribbles, the number of superpixels employed for generating the pseudo-masks, and the terms involved in the loss function employed during training. SMX stands for \textit{softmax}.}
\label{tab:exp_define}
\begin{tabular}{m{1.8cm}|m{3cm}|m{1.5cm}|m{1.5cm}|m{1.6cm}|m{1.5cm}|m{2.5cm}}
\hline
\textbf{Configuration} & \textbf{Label} & \textbf{Scribbles width} & \textbf{Num. superpixels} & \textbf{Centroid features} & \textbf{Supervision} & \textbf{Loss function} \\
\hline
\multirow{7}{1.8cm}{lower baseline}
& E-SCR2 & 2 & - & - & \multirow{4}{*}{only scribbles} & \multirow{4}{*}{$L_\text{pCE}$} \\
& E-SCR5 & 5 & - & - & & \\
& E-SCR10 & 10 & - & - & & \\
& E-SCR20 & 20 & - & - & & \\
\cline{2-7}
& E-SCR20-SUP30 & 20 & 30 & - & \multirow{3}{*}{pseudo-masks} & \multirow{3}{*}{$L_\text{pCE}$} \\
& E-SCR20-SUP50 & 20 & 50 & - & & \\
& E-SCR20-SUP80 & 20 & 80 & - & & \\
\hline
& E-SCR2-N & 2 & - & SMX & \multirow{8}{*}{only scribbles} & \multirow{8}{*}{$L_\text{pCE} + L_\text{cen}\ [+ L_\text{mse}]$} \\
& E-SCR2-NRGB & 2 & - & SMX \& RGB & & \\
& E-SCR5-N & 5 & - & SMX & & \\
& E-SCR5-NRGB & 5 & - & SMX \& RGB & & \\
& E-SCR10-N & 10 & - & SMX & & \\
& E-SCR10-NRGB & 10 & - & SMX \& RGB & & \\
& E-SCR20-N & 20 & - & SMX & & \\
& E-SCR20-NRGB & 20 & - & SMX \& RGB & & \\
\cline{2-7}
& E-SCR20-SUP30-N & 20 & 30 & SMX & \multirow{6}{*}{pseudo-masks} & \multirow{6}{*}{$L_\text{pCE} + L_\text{cen}\ [+ L_\text{mse}]$} \\
& E-SCR20-SUP30-NRGB & 20 & 30 & SMX \& RGB & & \\
& E-SCR20-SUP50-N & 20 & 50 & SMX & & \\
& E-SCR20-SUP50-NRGB & 20 & 50 & SMX \& RGB & & \\
& E-SCR20-SUP80-N & 20 & 80 & SMX & & \\
& E-SCR20-SUP80-NRGB & 20 & 80 & SMX \& RGB & & \\
\hline
upper baseline
& E-FULL & - & - & - & full mask & $L_\text{CE}$ \\
\hline
\end{tabular}
\end{table*}
\subsection{About the Centroid loss feature space and the weak annotations}
\label{sec:dis_feature}
Given the relevance that colour features can have in image semantic segmentation performance~\cite{Liu2018}, the experiments reported in this section consider the incorporation of colour data from the classes into the calculation and minimization of the Centroid and the MSE loss functions, $L_\text{cen}$ and $L_\text{mse}$. More specifically, we adopt a simple strategy by making use of normalized RGB features\footnote{If $R_p = G_p = B_p = 0$, then $\text{nRGB}_p = (0,0,0)$.}:
\begin{equation}
\text{nRGB}_p = \frac{1}{R_p+G_p+B_p}\left(R_p, G_p, B_p\right)
\end{equation}
As mentioned in Section~\ref{sec:net_arch}, the shape of $P_\text{cen}$ is $C\times M$, where $M = C + K$, and $K$ is the number of additional class features that we incorporate into the network optimization problem. Therefore, in our case, $K = 3$. Of course, more sophisticated hand-crafted features could be incorporated into the process, though the idea of this experiment has been to make use of simple features.
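A NumPy sketch of this normalization, handling the all-zero case mentioned in the footnote (function name is hypothetical):

```python
import numpy as np

def nrgb_features(image):
    """Per-pixel normalized RGB; black pixels (R=G=B=0) map to (0, 0, 0).

    image: (H, W, 3) array.  These K = 3 features extend the C softmax
    components of f_p, giving centroid dimension M = C + K.
    """
    img = image.astype(float)
    s = img.sum(axis=-1, keepdims=True)
    return np.divide(img, s, out=np.zeros_like(img), where=s > 0)
```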
Tables~\ref{tab:scribbles_compare} and~\ref{tab:superpixles_compare} evaluate the performance of our approach for different combinations of loss terms, for the two centroid feature spaces outlined before, and also depending on the kind of weak annotation that is employed as ground truth and their main feature value, i.e. width for scribbles and number of superpixels for pseudo-masks. Besides, we consider two possibilities for producing the final labelling: from the output of the segmentation network, and from the clustering derived from the predicted class centroids, i.e. labelling each pixel with the class of the closest centroid; from now on, to simplify the discussion despite the language abuse, we will refer to the latter kind of output as that resulting from \textit{clustering}. Finally, Table~\ref{tab:scribbles_compare} only shows results for the visual inspection task because scribbles alone have been shown to be insufficient to obtain proper segmentations in the quality control case.
As can be observed in Table~\ref{tab:scribbles_compare}, the segmentation and clustering mIOU for experiments E-SCR*-NRGB is lower than the mIOU for experiments E-SCR*-N, with a large performance gap in a number of cases, which suggests that the RGB features actually do not contribute to improving segmentation performance (rather the opposite) when scribble annotations alone are used as supervision information for the visual inspection dataset.
As for Table~\ref{tab:superpixles_compare}, contrary to the results shown in Table~\ref{tab:scribbles_compare}, the performance observed for experiments E-SCR20-SUP*-NRGB turns out to be similar to that of experiments E-SCR20-SUP*-N. Additionally, the mIOU of some experiments using the integrated features, i.e. \textit{softmax} and colour, is even higher than that obtained using only the \textit{softmax} features (e.g. E-SCR20-SUP80-N/NRGB, sixth row of Table~\ref{tab:superpixles_compare}).
At a global level, both Tables~\ref{tab:scribbles_compare} and~\ref{tab:superpixles_compare} show that our approach requires a higher number of labelled pixels to achieve high segmentation performance when the integrated features are employed. In contrast, the use of \textit{softmax} features alone only requires the scribble annotations to produce good performance for the visual inspection task. Moreover, our approach using \textit{softmax} features achieves higher mIOU than using the integrated features in most of the experiments. As a consequence, only \textit{softmax} features are involved in the subsequent experiments.
\begin{sidewaystable}
\centering
\caption{Segmentation performance for different centroid feature spaces and different widths of the scribble annotations. \textit{*N} denotes that only the SMX (\textit{softmax}) features are used to compute $L_\text{cen}$ and $L_\text{mse}$, while \textit{*NR} denotes that the feature space for centroids prediction comprises both SMX and RGB features. \textit{Seg} denotes that the segmentation output comes directly from the segmentation network, while \textit{Clu} denotes that the segmentation output is obtained from clustering.}
\label{tab:scribbles_compare}
\begin{tabular}{ c|c||c||ccc||c|c|c||c|c}
\toprule
Task & Experiments & wmIOU & $L_\text{pCE}$ & $L_\text{cen}$ & $L_\text{mse}$ & mIOU (Seg) & mIOU (Seg,*N) & mIOU (Seg,*NR) & mIOU (Clu,*N) & mIOU (Clu,*NR) \\
\hline
\multirow{12}{1.5cm}{Visual Inspection}
& E-SCR2 & 0.2721 & $\checkmark$ & & & 0.3733 & - & - & - & - \\
& E-SCR5 & 0.2902 & $\checkmark$ & & & 0.4621 & - & - & - & - \\
& E-SCR10 & 0.3074 & $\checkmark$ & & & 0.4711 & - & - & - & - \\
& E-SCR20 & 0.3233 & $\checkmark$ & & & 0.5286 & - & - & - & - \\
\cline{2-11}
& E-SCR2-* & 0.2721 & $\checkmark$ & $\checkmark$ & & - & 0.6851 & 0.4729 & 0.6758 & 0.3889 \\
& E-SCR5-* & 0.2902 & $\checkmark$ & $\checkmark$ & & - & 0.6798 & 0.4989 & 0.6706 & 0.6020 \\
& E-SCR10-* & 0.3074 & $\checkmark$ & $\checkmark$ & & - & 0.6992 & 0.5130 & 0.6710 & 0.6267 \\
& E-SCR20-* & 0.3233 & $\checkmark$ & $\checkmark$ & & - & 0.6852 & 0.5562 & 0.6741 & 0.6164 \\
\cline{2-11}
& E-SCR2-* & 0.2721 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.6995 & 0.4724 & 0.6828 & 0.3274 \\
& E-SCR5-* & 0.2902 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7134 & 0.4772 & 0.7001 & 0.2982 \\
& E-SCR10-* & 0.3074 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7047 & 0.4796 & 0.6817 & 0.3130 \\
& E-SCR20-* & 0.3233 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.6904 & 0.5075 & 0.6894 & 0.6187 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
\begin{sidewaystable}
\centering
\caption{Segmentation performance for different centroid feature spaces and for different amounts of superpixels to generate the pseudo-masks. \textit{*N} denotes that only the SMX (\textit{softmax}) features are used to compute $L_\text{cen}$ and $L_\text{mse}$, while \textit{*NR} denotes that the feature space comprises both SMX and RGB features. \textit{Seg} denotes that the segmentation output comes directly from the segmentation network, while \textit{Clu} denotes that the segmentation output is obtained from clustering.}
\label{tab:superpixles_compare}
\begin{tabular}{ c|c||c||ccc||c|c|c||c|c }
\toprule
Task & Experiments & wmIOU & $L_\text{pCE}$ & $L_\text{cen}$ & $L_\text{mse}$ & mIOU (Seg) & mIOU (Seg,*N) & mIOU (Seg,*NR) & mIOU (Clu,*N) & mIOU (Clu,*NR) \\
\hline
\multirow{9}{1.5cm}{Visual Inspection}
& E-SCR20-SUP30 & 0.6272 & $\checkmark$ & & & 0.6613 & - & - & - & -\\
& E-SCR20-SUP50 & 0.6431 & $\checkmark$ & & & 0.7133 & - & - & - & -\\
& E-SCR20-SUP80 & 0.6311 & $\checkmark$ & & & 0.7017 & - & - & - & -\\
\cline{2-11}
& E-SCR20-SUP30-* & 0.6272 & $\checkmark$ & $\checkmark$ & & - & 0.6848 & 0.6847 & 0.7081 & 0.6859\\
& E-SCR20-SUP50-* & 0.6431 & $\checkmark$ & $\checkmark$ & & - & 0.7447 & 0.7368 & 0.7372 & 0.7136\\
& E-SCR20-SUP80-* & 0.6311 & $\checkmark$ & $\checkmark$ & & - & 0.7242 & 0.7355 & 0.7127 & 0.6761\\
\cline{2-11}
& E-SCR20-SUP30-* & 0.6272 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.6919 & 0.7071 & 0.6987 & 0.7076 \\
& E-SCR20-SUP50-* & 0.6431 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7542 & 0.7133 & 0.7491 & 0.7294 \\
& E-SCR20-SUP80-* & 0.6311 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7294 & 0.7246 & 0.7268 & 0.7118 \\
\midrule[.6pt]
\multirow{9}{1.5cm}{\raggedright Quality Control}
& E-SCR20-SUP30 & 0.4710 & $\checkmark$ & & & 0.5419 & - & - & - & -\\
& E-SCR20-SUP50 & 0.5133 & $\checkmark$ & & & 0.6483 & - & - & - & -\\
& E-SCR20-SUP80 & 0.5888 & $\checkmark$ & & & 0.7015 & - & - & - & -\\
\cline{2-11}
& E-SCR20-SUP30-* & 0.4710 & $\checkmark$ & $\checkmark$ & & - & 0.6882 & 0.6889 & 0.6142 & 0.6062\\
& E-SCR20-SUP50-* & 0.5133 & $\checkmark$ & $\checkmark$ & & - & 0.7236 & 0.7203 & 0.6644 & 0.6480\\
& E-SCR20-SUP80-* & 0.5888 & $\checkmark$ & $\checkmark$ & & - & 0.7594 & 0.7337 & 0.6768 & 0.6451\\
\cline{2-11}
& E-SCR20-SUP30-* & 0.4710 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7030 & 0.6237 & 0.5910 & 0.6077 \\
& E-SCR20-SUP50-* & 0.5133 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7291 & 0.7046 & 0.6605 & 0.6372 \\
& E-SCR20-SUP80-* & 0.5888 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7679 & 0.7409 & 0.6687 & 0.6780 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
\subsection{Effect of the loss function terms}
\label{sec:effect_cen}
This section considers the effect of $L_\text{cen}$ and $L_\text{mse}$ on the segmentation results by analysing the performance of the experiments in groups G1, G2 and G3. From Table~\ref{tab:scribbles_compare}, one can see that the mIOU of the experiments in G2 is significantly higher than that of the experiments in G1, with a maximum gap in mIOU between G1 and G2 of 0.3118 (E-SCR2 vs. E-SCR2-N). As for the segmentation performance of the G3 experiments, it is systematically above that of the G2 experiments for the same width of the scribble annotations when centroids are built only from the \textit{softmax} features. When the colour data is incorporated, segmentation performance decreases from G2 to G3.
Table~\ref{tab:superpixles_compare} also shows performance improvements from the G2 experiments, i.e. when $L_\text{cen}$ is incorporated into the loss function, over the performance observed for the experiments in G1, and, in turn, the segmentation results of the G3 experiments are superior to those of the G2 experiments; this is observed for both tasks. Therefore, the incorporation of the $L_\text{cen}$ and $L_\text{mse}$ terms into the loss function benefits performance, gradually increasing the mIOU of the resulting segmentations.
Regarding the segmentation computed from clustering, the mIOU of the experiments in G3 is also generally higher than that of the experiments in G2. In addition, it can be observed in Tables~\ref{tab:scribbles_compare} and~\ref{tab:superpixles_compare} that the mIOU from clustering in some G2 experiments is slightly higher than that of the corresponding G3 experiments (E-SCR20-SUP30-N on both tasks and E-SCR20-SUP80-N for the quality control task), while the mIOU from segmentation in G2 is lower than that of G3. In other words, it seems that $L_\text{mse}$, in some cases, makes the segmentation quality from clustering deteriorate.
Overall, the incorporation of $L_\text{cen}$ and $L_\text{mse}$ improves segmentation performance for both tasks, and labelling from segmentation turns out to be superior to that deriving from class centroids.
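To make the role of each term concrete, the following sketch assembles a three-term loss of the form discussed in this section. The exact forms of $L_\text{cen}$ and $L_\text{mse}$ used here, as well as the unit weights, are our assumptions for illustration only; they are not the authors' exact definitions.

```python
import numpy as np

def combined_loss(probs, feats, centroids, labels, mask,
                  w_cen=1.0, w_mse=1.0):
    """Illustrative three-term loss L = L_pCE + w_cen*L_cen + w_mse*L_mse.

    probs:     (N, C) softmax outputs.
    feats:     (N, M) per-pixel features used for the centroids.
    centroids: (C, M) predicted class centroids.
    labels:    (N,) weak labels; mask: (N,) True for annotated pixels.
    The weights w_cen and w_mse are illustrative assumptions.
    """
    # Partial cross-entropy: only the annotated pixels contribute.
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    l_pce = -np.log(p[mask]).mean()
    # Centroid term: pull annotated pixels toward their class centroid.
    d = feats[mask] - centroids[labels[mask]]
    l_cen = (d ** 2).sum(axis=1).mean()
    # MSE regulariser: agreement between the softmax output and a soft
    # nearest-centroid assignment (an assumption about L_mse's form).
    dist = ((feats[:, None, :] - centroids) ** 2).sum(-1)
    assign = np.exp(-dist)
    assign /= assign.sum(axis=1, keepdims=True)
    l_mse = ((probs - assign) ** 2).mean()
    return l_pce + w_cen * l_cen + w_mse * l_mse
```

Setting `w_cen` or `w_mse` to zero reproduces the G1/G2 ablations discussed above, with only the remaining terms driving training.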
\subsection{Impact of weak annotations and their propagation}
\label{sec:influence_weak}
In this section, we evaluate our approach under different weak annotations and their propagation, and discuss their impact on segmentation performance for both tasks. To this end, we plot in Fig.~\ref{fig:influence_weak_annotation} the mIOU (complementarily to Tables~\ref{tab:scribbles_compare} and~\ref{tab:superpixles_compare}), recall and precision values resulting from the supervision of different sorts of weak annotations for the two tasks. A first analysis of these plots reveals that the curves corresponding to the G3 experiments lie above those of the G1 and G2 groups for all the performance metrics considered.
Regarding the visual inspection task, Fig.~\ref{fig:influence_weak_annotation}(a) shows that the mIOU values for the G2 and G3 groups are above those for G1 (the lower baseline), whose curve follows a similar shape to that of the wmIOU values, while the curves for the G2 and G3 groups stay at a more or less constant level for the different sorts of weak annotations. As for the quality control task, the mIOU values are similar across all groups and close to the wmIOU values, as shown in Fig.~\ref{fig:influence_weak_annotation}(d). Globally, this behaviour clearly shows that the scribbles are enough for describing the classes in the case of the visual inspection task, which is a binary classification problem, while this is not true for the quality control task, a multi-class problem, which makes it necessary to resort to the pseudo-masks (G2 and G3 groups) to achieve higher performance. The fact that, for both tasks, the lower baseline (G1 group) always achieves lower mIOU values also corroborates the relevance of the Centroid Loss, although its ultimate contribution to the segmentation performance is also affected by the quality of the weak annotations involved, i.e. the pseudo-masks derived from scribbles and superpixels in the case of the G2 and G3 groups.
Additionally, observing the precision curves shown in Fig.~\ref{fig:influence_weak_annotation}(c) and (f), one can notice that the precision computed exclusively over the weak annotations shows a sharp decline when the weak annotations shift from scribbles to pseudo-masks. As can be noticed from the pseudo-masks shown in the second and third rows of Fig.~\ref{fig:pseudo_mask_exp}, when the number of superpixels is low, e.g. 30, the pseudo-masks contain a considerable number of incorrectly labelled pixels, significantly more than the scribble annotations, and this is the reason for the aforementioned decline. The recall curves, however, exhibit an upward trend, as can be observed in Fig.~\ref{fig:influence_weak_annotation}(b) and (e), because of the larger amount of information ultimately provided by the weak annotations. On the other hand, we can also notice that, in general, precision and recall values are higher for the G3 group than for the G2 group, and both curves are above those for the G1 group; this behaviour is replicated for the two tasks. Finally, the output from clustering does not clearly lead to better or worse performance than the alternative outcome from the segmentation network, although the recall metric suggests that clustering is less appropriate for the quality control task.
From a global perspective, all this suggests that segmentation quality benefits from the use of pseudo-masks, always overcoming the lower baseline based on the exclusive use of scribbles, despite the incorrectly labelled pixels contained in the pseudo-masks, provided that a proper loss function is adopted, e.g. the full loss expressed in (\ref{func:final_loss}), which in particular comprises the Centroid Loss.
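For reference, the per-class metrics reported in the plots and tables can be computed from a confusion matrix as sketched below. This is a generic numpy helper; the authors' exact averaging details may differ.

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Mean IOU, recall and precision over classes.

    pred, gt: integer label maps of the same shape.
    """
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)  # rows: gt, cols: pred
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp  # predicted as class c but actually not c
    fn = cm.sum(axis=1) - tp  # actually class c but predicted otherwise
    iou = tp / np.maximum(tp + fp + fn, 1.0)
    recall = tp / np.maximum(tp + fn, 1.0)
    precision = tp / np.maximum(tp + fp, 1.0)
    return iou.mean(), recall.mean(), precision.mean()
```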
\begin{sidewaysfigure*}
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{miou_ins.png}
&
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{rec_ins.png}
&
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{prec_ins.png} \\
\footnotesize (a) & \footnotesize (b) & \footnotesize (c) \\
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{miou_q.png}
&
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{rec_q.png}
&
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{prec_q.png} \\
\footnotesize (d) & \footnotesize (e) & \footnotesize (f)
\end{tabular}
\caption{Performance metrics for our approach under different sorts of weak annotations. The first row plots are for the visual inspection task, while those of the second row are for the quality control task. In both rows, from left to right, the three figures plot respectively the mIOU, the mean Recall, and the mean Precision. SUP30, SUP50 and SUP80 labels correspond to the use of 20 pixel-wide scribbles.}
\label{fig:influence_weak_annotation}
\end{sidewaysfigure*}
\subsection{Comparison with other loss functions}
\label{sec:performance_compare}
In Table~\ref{tab:previous_compare}, we compare the segmentation performance of our approach for the two tasks with that resulting from the use of the Constrained-size Loss $L_\text{size}$~\cite{kervadec2019constrained} and the SEC Loss $L_\text{sec}$~\cite{kolesnikov2016seed} for different variations of weak annotations. As for the visual inspection task, the network trained with $L_\text{sec}$ is clearly inferior to the one resulting from our loss function, and the same can be said for $L_\text{size}$, although in this case the performance gap is smaller, even negligible when the width of the scribbles is 20 pixels. When the pseudo-masks are involved, our approach is also better, though the difference with both $L_\text{size}$ and $L_\text{sec}$ is smaller. Regarding the quality control task, Table~\ref{tab:previous_compare} shows that our approach outperforms both by a significant margin, far larger than for the visual inspection task.
Summing up, we can conclude that the loss function proposed in (\ref{func:final_loss}) outperforms both the Constrained-size Loss $L_\text{size}$ and the SEC Loss $L_\text{sec}$ on the visual inspection and the quality control tasks.
\begin{table}
\centering
\caption{Comparison of different loss functions for both the visual inspection and the quality control tasks. mIOU values are provided. Best performance is highlighted in bold.}
\label{tab:previous_compare}
\begin{tabular}{@{\hspace{1mm}}l|c|c|c|c@{\hspace{1mm}}}
\toprule
Task & Weak Annotation & $L_\text{size}$~\cite{kervadec2019constrained} & $L_\text{sec}$~\cite{kolesnikov2016seed} & Ours \\
\hline
\multirow{9}{1.1cm}[3.2mm]{Visual Inspection}
& E-SCR2-N & 0.6098 & 0.4366 & \textbf{0.6995} \\
& E-SCR5-N & 0.6537 & 0.4372 & \textbf{0.7134} \\
& E-SCR10-N & 0.6754 & 0.5486 & \textbf{0.7047} \\
& E-SCR20-N & \textbf{0.6909} & 0.5624 & 0.6904 \\
\cline{2-5}
& E-SCR20-SUP30-N & \textbf{0.7068} & 0.6397 & 0.6919 \\
& E-SCR20-SUP50-N & 0.6769 & 0.7428 & \textbf{0.7542} \\
& E-SCR20-SUP80-N & 0.7107 & 0.6546 & \textbf{0.7294} \\
\midrule[.6pt]
\multirow{3}{1.1cm}[0mm]{Quality Control}
& E-SCR20-SUP30-N & 0.4724 & 0.5808 & \textbf{0.7030} \\
& E-SCR20-SUP50-N & 0.4985 & 0.6262 & \textbf{0.7291} \\
& E-SCR20-SUP80-N & 0.5051 & 0.6918 & \textbf{0.7679} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Final tuning and results}
\label{sec:result_displays}
As already highlighted in the previous sections, the network trained by means of the loss function described in (\ref{func:final_loss}), which in particular comprises the Centroid Loss, attains the best segmentation performance among the considered approaches and for the two tasks addressed in this work. In order to check whether segmentation performance can be further increased, in this section we incorporate a dense CRF as a post-processing stage applied to the outcome of the network. Table~\ref{tab:annotation_compare} collects metric values for the final performance attained by the proposed WSSS method as well as by the upper baseline method (E-FULL). To assess the influence of the CRF-based stage, in Table~\ref{tab:annotation_compare} we report mIOU, precision and recall values, together with the F$_1$ score, as well as the wmIOU values.
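To give an intuition of what the CRF stage does, the toy sketch below runs a mean-field update with a Potts compatibility restricted to a 4-neighbourhood colour kernel. The actual dense CRF couples all pixel pairs through Gaussian bilateral kernels (typically via the \texttt{pydensecrf} package); this simplified local version is an illustration only, with all parameters being our assumptions.

```python
import numpy as np

def meanfield_refine(probs, img, iters=3, compat=2.0, sigma=10.0):
    """Toy mean-field refinement of a softmax map.

    probs: (H, W, C) network output; img: (H, W, 3) colour image.
    Neighbours wrap around the border (np.roll) for brevity.
    """
    unary = -np.log(np.clip(probs, 1e-8, 1.0))
    Q = probs.copy()
    for _ in range(iters):
        msg = np.zeros_like(Q)
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nQ = np.roll(Q, (dy, dx), axis=(0, 1))
            nim = np.roll(img, (dy, dx), axis=(0, 1))
            # Appearance kernel: neighbours with similar colour agree more.
            k = np.exp(-((img - nim) ** 2).sum(-1) / (2 * sigma ** 2))
            msg += k[..., None] * nQ
        # Potts term: penalise a label by the neighbour mass of other labels.
        energy = unary + compat * (msg.sum(-1, keepdims=True) - msg)
        Q = np.exp(-energy)
        Q /= Q.sum(-1, keepdims=True)
    return Q.argmax(-1)
```

In this toy setting, an isolated low-confidence pixel surrounded by confident, similarly coloured neighbours is flipped to the majority label, which is the qualitative effect exploited by the dense CRF post-processing.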
Regarding the visual inspection task, Table~\ref{tab:annotation_compare} shows that case E-SCR20-SUP50-N leads to the best segmentation mIOU (0.7542). After dense CRF post-processing, the mIOU reaches a value of 0.7859, with a performance gap with respect to E-FULL of 0.0474. Case E-SCR20-SUP30-N attains the highest recall (0.7937), but the corresponding precision (0.7081) and F$_1$ score (0.7485) are not the highest, and its mIOU is also the second lowest (0.6919). This is because the segmentation result for E-SCR20-SUP30-N contains more incorrect predictions than that of E-SCR20-SUP50-N. Consequently, a configuration of 20-pixel scribbles and 50 superpixels for pseudo-mask generation leads to the best performance, with a slight increase thanks to the CRF post-processing stage. The outcome from clustering is not far in quality from those values but, as can be observed, it is not as good (the best mIOU and F$_1$ scores are, respectively, 0.7491 and 0.7250).
As for the quality control task, the E-SCR20-SUP80-N case reaches the highest mIOU (0.7679) among all cases, with the second best F$_1$ score (0.8350). For this task, the precision metric highlights a different case, E-SCR20-SUP50-N, as the best configuration, which also attains the largest F$_1$ score, though at a very short distance from the E-SCR20-SUP80-N case. After dense CRF post-processing, the final segmentation mIOU is 0.7707. The most adequate configuration seems to be, hence, 20-pixel scribbles and 80 superpixels for pseudo-mask generation. The gap in this case with regard to full supervision is 0.0897. Similarly to the visual inspection task, results from clustering are close in accuracy to the previous levels, but not better (for this task, the best mIOU and F$_1$ scores are, respectively, 0.6687 and 0.8086).
From a global perspective, the results obtained indicate that 20-pixel scribbles, together with a rather high number of superpixels, so that they adhere better to object boundaries, are the best options for both tasks. In comparison with the lower baseline (G1 group), the use of the full loss function, involving the Centroid Loss, clearly improves segmentation performance significantly, with only a slight decrease with respect to full supervision. Segmentation results derived from clustering are not better for any of the tasks considered.
Figure~\ref{fig:seg_results} shows examples of segmentation results for the visual inspection task. As can be observed, the segmentations resulting from our approach are very similar to those from the upper baseline (E-FULL). Moreover, as expected, results from clustering are basically correct, though they tend to incorrectly label pixels (false positives) around correct labellings (true positives). Similar conclusions can be drawn for the quality control task, whose results are shown in Fig.~\ref{fig:box_seg_clu_results}.
Summing up, the use of the Centroid Loss has made it possible to train a semantic segmentation network using a small number of labelled pixels. Though the performance of the approach is inferior to that of a fully supervised approach, the resulting gap for the two tasks considered has turned out to be rather small, given the challenges arising from the use of weak annotations.
\begin{sidewaystable}
\centering
\caption{Segmentation results for the full loss function (G3). \textit{Seg} denotes that the segmentation output comes directly from the segmentation network, while \textit{Clu} denotes that the segmentation output is obtained from clustering. *CRF refers to the performance (mIOU) after dense CRF post-processing. Best performance is highlighted in bold.}
\label{tab:annotation_compare}
\small
\begin{tabular}{c|c||c||c|c|c|c||c|c|c|c||c}
\toprule
Task & Experiments & wmIOU & mIOU (seg) & mRec (seg) & mPrec (seg) & F$_1$ (seg) & mIOU (clu) & mRec (clu) & mPrec (clu) & F$_1$ (clu) & *CRF (seg) \\
\hline
\multirow{9}{1.5cm}{Visual Inspection}
& E-SCR2-N & 0.2721 & 0.6995 & 0.6447 & 0.6452 & 0.6449 & 0.6828 & 0.7663 & 0.5803 & 0.6605 & 0.7068 \\
& E-SCR5-N & 0.2902 & 0.7134 & 0.6539 & 0.6542 & 0.6540 & 0.7001 & 0.7447 & 0.6015 & 0.6655 & 0.7212 \\
& E-SCR10-N & 0.3074 & 0.7047 & 0.6797 & 0.6332 & 0.6556 & 0.6817 & 0.7741 & 0.5772 & 0.6613 & 0.7241 \\
& E-SCR20-N & 0.3233 & 0.6904 & 0.6917 & 0.6081 & 0.6472 & 0.6894 & \textbf{0.7816} & 0.5507 & 0.6461 & 0.7172 \\
\cline{2-12}
& E-SCR20-SUP30-N & 0.6272 & 0.6919 & \textbf{0.7937} & 0.7081 & 0.7485 & 0.6987 & 0.7806 & 0.5946 & 0.6750 & 0.7489 \\
& E-SCR20-SUP50-N & 0.6431 & \textbf{0.7542} & 0.7543 & \textbf{0.7567} & \textbf{0.7555} & \textbf{0.7491} & 0.7725 & \textbf{0.6830} & \textbf{0.7250} & \textbf{0.7859} \\
& E-SCR20-SUP80-N & 0.6311 & 0.7294 & 0.7452 & 0.7397 & 0.7424 & 0.7268 & 0.7758 & 0.6200 & 0.6892 & 0.7693 \\
\cline{2-12}
& E-FULL & 1.0000 & 0.8333 & 0.8537 & 0.9119 & 0.8818 & - & - & - & - & 0.8218 \\
\hline
\multirow{4}{1.5cm}{Quality Control}
& E-SCR20-SUP30-N & 0.4710 & 0.7030 & 0.7937 & 0.7924 & 0.7930 & 0.5910 & 0.7600 & 0.6798 & 0.7177 & 0.7142 \\
& E-SCR20-SUP50-N & 0.5133 & 0.7291 & 0.8298 & \textbf{0.8439} & \textbf{0.8368} & 0.6605 & 0.7777 & 0.7332 & 0.7548 & 0.7143 \\
& E-SCR20-SUP80-N & 0.5888 & \textbf{0.7679} & \textbf{0.8303} & 0.8398 & 0.8350 & \textbf{0.6687} & \textbf{0.8630} & \textbf{0.7606} & \textbf{0.8086} & \textbf{0.7707} \\
\cline{2-12}
& E-FULL & 1.0000 & 0.8604 & 0.8058 & 0.8432 & 0.8241 & - & - & - & - & 0.8459 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
\begin{sidewaysfigure*}[htb]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=30mm,height=30mm]{org_gk2_fp_exp28_0480.png}
&
\includegraphics[width=30mm,height=30mm]{RGBLabels_gk2_fp_exp28_0480.png}
&
\includegraphics[width=30mm,height=30mm]{full_gk2_fp_exp28_0480_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_seg_gk2_fp_exp28_0480_show.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_seg_gk2_fp_exp28_0480_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_clu_gk2_fp_exp28_0480_cluster.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_clu_gk2_fp_exp28_0480_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_gk2_fp_exp33_1270.png}
&
\includegraphics[width=30mm,height=30mm]{RGBLabels_gk2_fp_exp33_1270.png}
&
\includegraphics[width=30mm,height=30mm]{full_gk2_fp_exp33_1270_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_seg_gk2_fp_exp33_1270_show.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_seg_gk2_fp_exp33_1270_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_clu_gk2_fp_exp33_1270_cluster.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_clu_gk2_fp_exp33_1270_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_image072_30.png}
&
\includegraphics[width=30mm,height=30mm]{RGBLabels_image072_30.png}
&
\includegraphics[width=30mm,height=30mm]{full_image072_30_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_seg_image072_30_show.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_seg_image072_30_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_clu_image072_30_cluster.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_clu_image072_30_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_image026.png}
&
\includegraphics[width=30mm,height=30mm]{RGBLabels_image026.png}
&
\includegraphics[width=30mm,height=30mm]{full_image026_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_seg_image026_show.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_seg_image026_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_clu_image026_cluster.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_clu_image026_cluster.png}
\\
\footnotesize original image &
\footnotesize full mask &
\footnotesize E-FULL &
\footnotesize E-SCR20-N (seg) &
\footnotesize E-SCR20-SUP50-N (seg) &
\footnotesize E-SCR20-N (clu) &
\footnotesize E-SCR20-SUP50-N (clu)
\end{tabular}
\caption{Examples of segmentation results for the visual inspection task: (1st column) original images, (2nd column) full mask, (3rd column) results of the fully supervised approach, (4th \& 5th columns) segmentation output for E-SCR20-N and E-SCR20-SUP50-N after dense CRF, (6th \& 7th columns) segmentation output from clustering for the same configurations.}
\label{fig:seg_results}
\end{sidewaysfigure*}
\begin{figure*}[htb]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}}
\includegraphics[width=30mm,height=30mm]{org_GOPR0886_0002.png}
&
\includegraphics[width=30mm,height=30mm]{gt_GOPR0886_0002.png}
&
\includegraphics[width=30mm,height=30mm]{full_GOPR0886_0002_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_GOPR0886_0002_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_GOPR0886_0002_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_org.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_gt.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_show_full.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_show_seg.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_show_clu.png}
\\
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_org.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_gt.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_show_full.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_show_seg.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_show_clu.png}
\\
\includegraphics[width=30mm,height=30mm]{org_GOPR1637_0553.png}
&
\includegraphics[width=30mm,height=30mm]{gt_GOPR1637_0553.png}
&
\includegraphics[width=30mm,height=30mm]{full_GOPR1637_0553_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_GOPR1637_0553_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_GOPR1637_0553_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_image0482.png}
&
\includegraphics[width=30mm,height=30mm]{gt_image0482.png}
&
\includegraphics[width=30mm,height=30mm]{full_image0482_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_image0482_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_image0482_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_image0536.png}
&
\includegraphics[width=30mm,height=30mm]{gt_image0536.png}
&
\includegraphics[width=30mm,height=30mm]{full_image0536_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_image0536_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_image0536_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_IMG_20180313_152303.png}
&
\includegraphics[width=30mm,height=30mm]{gt_IMG_20180313_152303.png}
&
\includegraphics[width=30mm,height=30mm]{full_IMG_20180313_152303_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_IMG_20180313_152303_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_IMG_20180313_152303_cluster.png}
\\
\footnotesize original image &
\footnotesize full mask &
\footnotesize E-FULL &
\footnotesize E-SCR20-SUP80-N (seg) &
\footnotesize E-SCR20-SUP80-N (clu)
\end{tabular}
\caption{Examples of segmentation results for the quality control task: (1st column) original images, (2nd column) full mask, (3rd column) results of the fully supervised approach, (4th column) segmentation output from E-SCR20-SUP80-N after dense CRF, (5th column) segmentation output from clustering for the same configuration.}
\label{fig:box_seg_clu_results}
\end{figure*}
\section{Conclusions and Future Work}
\label{sc:conclusions}
This paper describes a weakly-supervised segmentation approach based on Attention U-Net. The loss function comprises three terms, namely a partial cross-entropy term, the so-called Centroid Loss and a regularization term based on the mean squared error, all of which are jointly optimized in an end-to-end learning model. Two industry-related application cases and the corresponding datasets have been used as a benchmark for our approach. As reported in the experimental results section, our approach achieves competitive performance with regard to full supervision, with a reduced labelling cost for generating the necessary semantic segmentation ground truth. Under weak annotations of varying quality, our approach has been able to achieve good segmentation performance, counteracting the negative impact of the imperfect labellings employed.
The performance gap between our weakly-supervised approach and the corresponding fully-supervised approach has been shown to be rather small in terms of mIOU values. As for precision and recall, they are quite similar for the quality control task for both the weakly-supervised and the fully-supervised versions. A non-negligible difference is, however, observed for the visual inspection task, which suggests looking for alternatives even less sensitive to the imperfections of the ground truth derived from the weak annotations, aiming at closing the aforementioned gap. In this regard, future work will focus on other deep backbones for semantic segmentation, e.g. DeepLab.
\bibliographystyle{unsrt}
\section{Introduction}
\label{sc:introduction}
\PARstart{I}{mage} segmentation is a classical problem in computer vision aiming at distinguishing meaningful units in processed images. To this end, image pixels are grouped into regions that on many occasions are expected to correspond to the scene object projections. One step further identifies each unit as belonging to a particular class among a set of object classes to be recognized, giving rise to the Multi-Class Semantic Segmentation (MCSS) problem. From classical methods (e.g. region growing~\cite{Gonzalez2018}) to more robust methods (e.g. level-set~\cite{Wang2020} and graph-cut~\cite{Boykov2006}), various techniques have been proposed to achieve automatic image segmentation in a wide range of problems. Nevertheless, it has not been until recently that the performance of image segmentation algorithms has attained truly competitive levels, and this has been mostly thanks to the power of machine learning-based methodologies.
On the basis of the concept of Convolutional Neural Networks (CNN) proposed by LeCun and his collaborators (e.g. in the form of the well-known LeNet networks~\cite{LeCun1998}), and following the technological breakthrough that allowed training artificial neural structures with a number of parameters amounting to millions~\cite{Krizhevsky2012}, deep CNNs have demonstrated remarkable capabilities for problems as complex as image classification, multi-instance multi-object detection or multi-class semantic segmentation. All this has been accomplished thanks to the ``learning the representation'' capacity of CNNs, embedded in the set of multi-scale feature maps defined in their architecture through non-linear activation functions and a number of convolutional filters that are automatically learnt during the training process by means of iterative back-propagation of the prediction errors between the current and the expected output.
Regarding DCNN-based image segmentation, Guo et al.~\cite{Guo2018} distinguish three categories of MCSS approaches in accordance with the methodology adopted while dealing with the input images (and, correspondingly, the required network structure): region-based semantic segmentation, semantic segmentation based on Fully Convolutional Networks (FCN), and Weakly-Supervised Semantic Segmentation (WSSS). While the first follows a \textit{segmentation using recognition} pipeline, which first detects free-form image regions and next describes and classifies them, the second approach adopts a \textit{pixel-to-pixel map learning} strategy as its key idea, without resorting to the image region concept; lastly, WSSS methods focus on achieving a performance level similar to that of Fully-Supervised methods (FSSS) but with a weaker labelling of the training image set, i.e. less spatially-informative annotations than the pixel level, to simplify the generation of ground truth data. It is true that powerful interactive tools have been developed for annotating images at the pixel level, which, in particular, just require that the annotator draws a minimal polygon surrounding the targets (see e.g. the open annotation tool by the MIT~\cite{Labelme2016}). However, it still takes a few minutes on average to label the target areas of every picture (e.g. around 10 minutes on average for MS COCO labellers~\cite{Lin2014, Chan2020}), which makes WSSS methods interesting by themselves and actually quite convenient in general.
In this work, we focus on this last class of methods and propose a novel WSSS strategy based on a new loss function combining several terms to counteract the simplicity of the annotations. The strategy is, in turn, evaluated through a benchmark comprising two industry-related application cases of a totally different nature. One of these cases involves the detection of instances of a number of different object classes in the context of a \textit{quality control} application, while the other stems from the \textit{visual inspection} domain and deals with the detection of irregularly-shaped sets of pixels corresponding to scene defective areas. The details about the two application cases can be found in Section~\ref{sc:scenarios}.
WSSS methods are characterized, among others, by the sort of weak annotation that is assumed. In this regard, Chan \textit{et al.}{} highlight in~\cite{Chan2020} several weak annotation methodologies, namely bounding boxes, scribbles, image points and image-level labels (see Fig.~\ref{fg:weak-annotations} for an illustration of all of them). In this work, we adopt a scribble-based methodology from which training masks are derived to propagate the category information from the labelled pixels to the unlabelled pixels during network training.
The main contributions of this work are summarized as follows:
\begin{itemize}
\item A new loss function $L$ comprising several partial cross-entropy terms is developed to account for the vagueness of the annotations and the inherent noise of the training masks derived from them. This function includes a cluster centroid-based loss term, named the Centroid Loss, which integrates a clustering process within the semantic segmentation approach.
\item Another term of $L$ is defined through a Mean Squared Error (MSE) loss that cooperates with the other partial cross-entropy losses to refine the segmentation results.
\item The Centroid Loss is embedded over a particular implementation of Attention U-Net~\cite{oktay2018attention}.
\item We assess the performance of the full approach on a benchmark comprising two industry-related applications connected with, respectively, quality control and visual inspection.
\end{itemize}
\begin{figure}
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.25\columnwidth]{leaf_bbx.png} &
\includegraphics[width=0.25\columnwidth]{leaf_scr.png} &
\includegraphics[width=0.25\columnwidth]{leaf_pts.png} &
\includegraphics[width=0.25\columnwidth]{leaf_lab.png} \\
\footnotesize (a) & \footnotesize (b) & \footnotesize (c) & \footnotesize (d)
\end{tabular}
\caption{Examples of weak annotations, from more to less informative: (a) bounding boxes, (b) scribbles, (c) point-level labels, (d) image-level labels.}
\label{fg:weak-annotations}
\end{figure}
A preliminary version of this work can be found in~\cite{yao2020centroid} as a work-in-progress paper.
The rest of the paper is organized as follows: Section~\ref{sc:scenarios} describes the two application scenarios we use as a benchmark of the semantic segmentation approach; Section~\ref{sc:related_work} reviews previous works on WSSS; Section~\ref{sc:methodology} describes the weakly-supervised methodology developed in this work; Section~\ref{sc:experiments} reports on the results of a number of experiments aiming at showing the performance of our approach from different points of view; and Section~\ref{sc:conclusions} concludes the paper and outlines future work.
\section{Application Scenarios}
\label{sc:scenarios}
In this work, we use the two following industry-related application cases as a benchmark of the WSSS strategy that is developed:
\begin{itemize}
\item In the first case, we deal with the detection of a number of control elements that the sterilization unit of a hospital places in boxes and bags containing surgical tools with which surgeons and nurses have to be supplied prior to starting surgery. These elements provide evidence that the tools have been properly submitted to the required cleaning processes, i.e. they have been placed long enough inside the autoclave at a certain temperature, which makes them change their appearance. Figure~\ref{fig:targets_qc} shows, from left to right and top to bottom, examples of the six kinds of elements to be detected for this application: the label/bar code used to track a box/bag of tools, the yellowish seal, the three kinds of paper tape which show the black-, blue- and pink-striped appearance that can be observed in the figure, and an internal filter which is placed inside certain boxes and creates the white-dotted texture that can be noticed (instead of black-dotted when the filter is missing). All these elements, except the label, which is only for box/bag recording and tracking purposes, aim at corroborating the sterilization of the surgery tools contained in the box/bag. Finally, all of them may appear anywhere in the boxes/bags and in different numbers, depending on the kind of box/bag.
\item In the second case, we deal with the detection of one of the most common defects that can affect steel surfaces, i.e. coating breakdown and/or corrosion (CBC) in any of its many different forms. This is of particular relevance where the integrity of steel-based structures is critical, such as in large-tonnage vessels. An early detection, through suitable maintenance programmes, prevents vessel structures from suffering major damage which can ultimately compromise their integrity and lead to accidents with possibly catastrophic consequences for the crew (and passengers), environmental pollution, or damage and/or total loss of the ship, its equipment and its cargo. The inspection of those ship-board structures by humans is a time-consuming, expensive and commonly hazardous activity, all of which suggests the introduction of defect detection tools to alleviate the total cost of an inspection. Figure~\ref{fig:targets_insp} shows images of metallic vessel surfaces affected by CBC.
\end{itemize}
The quality control problem involves the detection of man-made, regular objects in a large variety of situations, which leads to an important number of images to cover all cases, while the inspection problem requires the localization of image areas of irregular shape, which makes the labelling harder and longer (especially for inexperienced staff). In both cases, the use of a training methodology that reduces the cost of image annotation turns out to be relevant. As will be shown, our approach succeeds in both cases, despite the particular challenges and the differences between them, and the use of weak annotations does not prevent achieving a competitive performance level for those particular problems (see also~\cite{Chan2020} for a discussion on some of the challenges that WSSS methods typically have to face).
\begin{figure}[t]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{label.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{seal.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{black_paper_tape.png} \\
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{blue_paper_tape.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{pink_paper_tape.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{filter.png}
\end{tabular}
\caption{Targets to be detected in the quality-control application considered in this work: from left to right and from top to bottom, box/bag tracking label, yellowish seal, three kinds of paper tape (black-, blue-, and pink-striped) and white-dotted texture related to the presence of a whitish internal filter. All these elements, except the label, aim at evidencing the sterilization of the surgery tools contained in the box.}
\label{fig:targets_qc}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{cbc1_image004.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{cbc2_image026.png}
&
\includegraphics[width=0.31\columnwidth,height=0.31\columnwidth]{cbc3_image021.png}
\end{tabular}
\caption{Targets to be detected in the visual inspection application considered in this work. The different images show examples of coating breakdown and corrosion affecting ship surfaces.}
\label{fig:targets_insp}
\end{figure}
\section{Related Work}
\label{sc:related_work}
Although fully-supervised segmentation approaches based on DCNNs can achieve an excellent performance, they require plenty of pixel-wise annotations, which turns out to be very costly in practice. In response to this fact, researchers have recently paid attention to the use of weak annotations for addressing MCSS problems. Among others, \cite{wei2017object,choe2019attention,kolesnikov2016seed,yang2020combinational} consider image-level labels, while~\cite{khoreva2017simple,papandreou2015weakly} assume the availability of bounding box-based annotations, and~\cite{boykov2001interactive,grady2006random,bai2009geodesic,tang2018regularized,kervadec2019constrained,lin2016scribblesup,tang2018normalized,xu2015learning} make use of scribbles.
In more detail, most WSSS methods involving the use of image-level labels are based on the so-called Class Activation Maps (CAMs)~\cite{wei2017object}, as obtained from a classification network. In~\cite{choe2019attention}, an Attention-based Dropout Layer (ADL) is developed to obtain the entire outline of the target from the CAMs. The ADL relies on a self-attention mechanism to process the feature maps. More precisely, this layer is intended for a double purpose: one task is to hide the most discriminating parts in the feature maps, which induces the model to also learn the less discriminative parts of the target, while the other task is to highlight the informative region of the target so as to improve the recognition ability of the model. A three-term loss function is proposed in~\cite{kolesnikov2016seed} to seed, expand, and constrain the object regions progressively as the network is trained. Their loss function is based on three guiding principles: seed with weak localization cues, expand objects based on the information about which classes can occur in an image, and constrain the segmentation so as to coincide with object boundaries. In~\cite{yang2020combinational}, the focus is placed on how to obtain better CAMs; to this end, the work aims at solving the incorrect high responses in CAMs through a linear combination of higher-dimensional CAMs.
Regarding the use of bounding boxes as weak annotations for semantic segmentation, in~\cite{khoreva2017simple}, the GraphCut and Holistically-nested Edge Detection (HED) algorithms are combined to refine the bounding-box ground truth and make predictions, while the refined ground truth is used to train the network iteratively. Similarly, in~\cite{papandreou2015weakly}, the authors develop a WSSS model using bounding box-based annotations, where, firstly, a segmentation network based on the DeepLab-CRF model obtains a series of coarse segmentation results, and, secondly, a dense Conditional Random Field (CRF)-based step is used to refine the predictions and preserve object edges. In their work, they develop novel online Expectation-Maximization (EM) methods for DCNN training under the weakly-supervised setting. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results from bounding-box annotations.
Scribbles have been widely used in connection with interactive image segmentation, being recognized as one of the most user-friendly ways for interacting. These approaches require the user to provide annotations interactively. The topic has been explored through graph cuts~\cite{boykov2001interactive}, random walks~\cite{grady2006random}, and weighted geodesic distances~\cite{bai2009geodesic}. As an improvement, \cite{tang2018regularized} proposes two regularization terms based on, respectively, normalized cuts and CRF. In this work, there are no extra inference steps explicitly generating masks, and their two loss terms are trained jointly with a partial cross-entropy loss function. In another work~\cite{kervadec2019constrained}, the authors enforce high-order (global) inequality constraints on the network output to leverage unlabelled data, guiding the training process with domain-specific knowledge (e.g. to constrain the size of the target region). To this end, they incorporate a differentiable penalty in the loss function avoiding expensive Lagrangian dual iterates and proposal generation. In the paper, the authors show a segmentation performance that is comparable to full supervision on three separate tasks.
Aiming at using scribbles to annotate images, ScribbleSup~\cite{lin2016scribblesup} learns CNN parameters by means of a graphical model that jointly propagates information from sparse scribbles to unlabelled pixels based on spatial constraints, appearance, and semantic content. In~\cite{tang2018normalized}, the authors propose a new principled loss function to evaluate the output of the network discriminating between labelled and unlabelled pixels, to avoid the poorer training that can result from standard loss functions, e.g. cross entropy, because of the presence of potentially mislabelled pixels in masks derived from scribbles or seeds. Unlike prior work, the cross-entropy part of their loss evaluates only seeds where labels are known, while a normalized cut term softly accounts for the consistency of all pixels. Finally, \cite{xu2015learning} proposes a unified approach involving max-margin clustering (MMC) to take any form of weak supervision, e.g. tags, bounding boxes, and/or partial labels (strokes, scribbles), infer pixel-level semantic labels, and learn an appearance model for each semantic class. Their loss function penalizes errors in positive and negative examples differently, depending on whether the number of negative examples exceeds that of positive examples or not.
In this paper, we focus on the use of scribbles as weak image annotations and propose a semantic segmentation approach that combines them with superpixels to propagate category information and obtain training masks, named pseudo-masks because of the labelling mistakes they can contain. Further, we propose a specific loss function $L$ that makes use of those pseudo-masks but, at the same time, intends to counteract their potential mistakes. To this end, $L$ consists of a partial cross-entropy term that uses the pseudo-masks, together with the Centroid Loss and a regularizing normalized MSE term, which cooperate with the former to produce refined segmentation results through a joint training strategy. The Centroid Loss employs the labelling of the pixels belonging to the scribbles to guide the training.
\section{Methodology}
\label{sc:methodology}
Figure~\ref{fig:cluster_segmentation}(a) illustrates fully supervised semantic segmentation approaches based on DCNN, which, applying a pixel-wise training strategy, try to make network predictions resemble the full labelling as much as possible, thus achieving good segmentation performance levels in general. By design, this kind of approach ignores the fact that pixels of the same category tend to be similar to their adjacent pixels. This similarity can, however, be exploited when addressing the WSSS problem by propagating the known pixel categories towards unlabelled pixels. In this respect, several works reliant on pixel-similarity to train the WSSS network can be found in the literature: e.g. a dense CRF is used in~\cite{papandreou2015weakly}, the GraphCut approach is adopted in~\cite{zhao2018pseudo}, and superpixels are used in ScribbleSup~\cite{lin2016scribblesup}.
\begin{figure*}[t]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.3]{seg_meth_full_supervision.png} &
\includegraphics[scale=0.3]{seg_meth_scribble_cluster.png} \\
\footnotesize (a) & \footnotesize (b)
\end{tabular}
\caption{Illustration of (a) full supervision and (b) our weakly-supervised approach for semantic segmentation: (a) all pixels are labelled to make the prediction [bottom layer of the drawing] resemble the ground truth [top layer of the drawing] as much as possible after pixel-wise training; (b) to solve the WSSS problem, the category information from the incomplete ground truth, i.e. the weak annotations, is propagated towards the rest of pixels making use of pixel similarity and minimizing distances to class centroids derived from the weak annotations. }
\label{fig:cluster_segmentation}
\end{figure*}
Inspired by the aforementioned, in this work, we propose a semantic segmentation approach using scribble annotations and a specific loss function intended to compensate for missing labels and errors in the training masks. To this end, class centroids determined from pixels coinciding with the scribbles, whose labelling is actually the ground truth of the problem, are used in the loss function to guide the training of the network so as to obtain improved segmentation outcomes. The process is illustrated in Fig.~\ref{fig:cluster_segmentation}(b).
Furthermore, similarly to ScribbleSup~\cite{lin2016scribblesup}, we also combine superpixels and scribble annotations to propagate category information and generate pseudo-masks as segmentation proposals, thus making the network converge fast and achieve competitive performance. By way of example, Fig.~\ref{fig:weak_labels}(b) and (c) show, respectively, the scribble annotations and the superpixels-based segmentations obtained for two images of the two application cases considered. The corresponding pseudo-masks, containing more annotated pixels than the scribbles, are shown in Fig.~\ref{fig:weak_labels}(d). As can be observed, not all pixels of the pseudo-masks are correctly labelled, which may affect segmentation performance. It is because of this fact that we incorporate the Centroid Loss and a normalized MSE term into the full loss function. This is discussed in Section~\ref{sec:cen_loss}.
\begin{figure*}[t]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=35mm,height=35mm]{gk2_fp_exp28_0380_90_ROI_org.png}
&
\includegraphics[width=35mm,height=35mm]{gk2_fp_exp28_0380_90_ROI_spx.png}
&
\includegraphics[width=35mm,height=35mm]{gk2_fp_exp28_0380_90_ROI_all.png}
&
\includegraphics[width=35mm,height=35mm]{gk2_fp_exp28_0380_90_ROI_lab.png}
\\
\includegraphics[width=35mm,height=35mm]{image0494_org.png}
&
\includegraphics[width=35mm,height=35mm]{image0494_spx.png}
&
\includegraphics[width=35mm,height=35mm]{image0494_all.png}
&
\includegraphics[width=35mm,height=35mm]{image0494_lab.png} \\
\footnotesize (a) & \footnotesize (b) & \footnotesize (c) & \footnotesize (d)
\end{tabular}
\caption{Weak annotation and propagation example: (a) original images; (b) scribbles superimposed over the original image; (c) scribbles superimposed over the superpixels segmentation result; (d) resulting pseudo-masks. Regarding the scribble annotations: (1st row) red and green scribbles respectively denote corrosion and background; (2nd row) black, red and blue scribbles respectively denote background, tracking label and the internal filter texture. As for the pseudo-masks: (1st row) red, black and green pixels respectively denote corrosion, background and unlabelled pixels; (2nd row) red, blue, black and green pixels respectively denote the tracking label, the internal filter texture, the background and the unlabelled pixels.}
\label{fig:weak_labels}
\end{figure*}
The remaining methodological details are given along the rest of this section: we begin with how weak annotations are handled and how the pseudo-masks are obtained in Section~\ref{sec:pseudo_mask}, while the architecture of the network is described in Section~\ref{sec:net_arch}, and the different loss terms are detailed and discussed in Sections~\ref{sec:pce_loss} (partial Cross-Entropy loss, $L_\text{pCE}$), \ref{sec:cen_loss} (Centroid Loss, $L_\text{cen}$) and~\ref{sec:full_loss} (normalized MSE term, $L_\text{mse}$, and the full loss function $L$).
\subsection{Weak annotations and pseudo-masks generation}
\label{sec:pseudo_mask}
As already said, Fig.~\ref{fig:weak_labels}(b) shows two examples of scribble annotations, one for the visual inspection case (top) and the other for the quality control case (bottom). Because scribbles comprise only a few pixels, the segmentation performance that the network can be expected to achieve will be far from satisfactory for any task considered. To enhance the network performance, we combine the scribbles with an oversegmentation of the image to generate pseudo-masks as segmentation proposals for training. For the oversegmentation, we make use of the Adaptive-SLIC (SLICO) algorithm~\cite{achanta2012slic}, requesting enough superpixels so as not to mix different classes in the same superpixel. Figure~\ref{fig:weak_labels}(c,top) shows an oversegmentation into 50 superpixels, while 80 are selected for Fig.~\ref{fig:weak_labels}(c,bottom). Next, those pixels belonging to a superpixel that intersects with a scribble are labelled with the same class as the scribble, as shown in Fig.~\ref{fig:weak_labels}(d). In Fig.~\ref{fig:weak_labels}(d,top), the black pixels represent the background, the red pixels indicate corrosion, and the green pixels denote unlabelled pixels. In Fig.~\ref{fig:weak_labels}(d,bottom), black and green pixels denote the same as for the top mask, while the red pixels represent the tracking label and the blue pixels refer to the internal filter class.
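For illustration, the propagation step just described can be sketched as follows. This is a minimal numpy sketch, not the actual implementation: the superpixel map is assumed to be precomputed (e.g. by SLICO), and the \texttt{UNLABELLED} sentinel and the majority vote used to resolve conflicting scribbles are conventions of ours.

```python
import numpy as np

UNLABELLED = -1  # sentinel for pixels without a class (our convention)

def propagate_scribbles(superpixels: np.ndarray, scribbles: np.ndarray) -> np.ndarray:
    """Build a pseudo-mask by propagating scribble labels to superpixels.

    superpixels: (H, W) int array; each pixel holds its superpixel id.
    scribbles:   (H, W) int array; class id on scribble pixels,
                 UNLABELLED elsewhere.
    Every pixel of a superpixel that intersects a scribble receives the
    scribble's class; all remaining pixels stay UNLABELLED.
    """
    pseudo = np.full_like(scribbles, UNLABELLED)
    for sp_id in np.unique(superpixels):
        member = superpixels == sp_id
        classes = scribbles[member]
        classes = classes[classes != UNLABELLED]
        if classes.size:  # superpixel touches at least one scribble
            vals, counts = np.unique(classes, return_counts=True)
            pseudo[member] = vals[np.argmax(counts)]  # majority vote
    return pseudo
```

If enough superpixels are requested so that each one is class-pure, the majority vote is trivial and the pseudo-mask errors reduce to superpixels that slightly overflow object boundaries.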
\subsection{Network Architecture}
\label{sec:net_arch}
In this work, we adopt U-Net~\cite{Ronneberger2015} as the base network architecture. As is well known, U-Net evolves from the fully convolutional neural network concept and consists of a contracting path followed by an expansive path. It was developed for biomedical image segmentation, though it has been shown to exhibit good performance for natural images in general, even for small training sets. Furthermore, we also embed Attention Gates (AG) in U-Net, similarly to Attention U-Net (AUN)~\cite{oktay2018attention}. These attention modules have been widely used in e.g. Natural Language Processing (NLP)~\cite{vaswani2017attention,clark2019does,serrano2019attention,jain2019attention}, and other works related with image segmentation~\cite{hu2018squeeze,jetley2018learn,oktay2018attention,sinha2020multi} have introduced them for enhanced performance. In our case, AGs are integrated into the decoding part of U-Net to improve its ability to segment small targets.
For completeness, we include in Fig.~\ref{fig:AG} a schematic about the operation of the AG that we make use of in this work, which, in our case, implements (\ref{eq:AG}) as described below:
\begin{align}
(x_{i,c}^l)^\prime &= \alpha_i^l \, x_{i,c}^l \label{eq:AG} \\
\alpha_i^l &= \sigma_2(W_{\phi}^T(\sigma_1(W_x^T x_i^l + W_g^T g_i + b_g)) + b_{\phi}) \nonumber
\end{align}
where the feature-map $x_i^l \in \mathbb{R}^{F_l}$ is obtained at the output of layer $l$ for pixel $i$, $c$ denotes a channel in $x_{i,c}^l$, $F_l$ is the number of feature maps at that layer, the gating vector $g_i$ is used for each pixel $i$ to determine focus regions and is such that $g_i \in \mathbb{R}^{F_l}$ (after up-sampling the input from the lower layer), $W_g \in \mathbb{R}^{F_l \times 1}$, $W_x \in \mathbb{R}^{F_l \times 1}$, and $W_{\phi} \in \mathbb{R}^{1 \times 1}$ are linear mappings, while $b_g \in \mathbb{R}$ and $b_{\phi} \in \mathbb{R}$ denote bias terms, $\sigma_1$ and $\sigma_2$ respectively represent the ReLU and the sigmoid activation functions, $\alpha_i^l \in [0,1]$ are the resulting attention coefficients, and $\Phi_\text{att} = \{W_g, W_x, b_g; W_{\phi}, b_{\phi}\}$ is the set of parameters of the AG.
The attention coefficients $\alpha_i$ are intended to identify salient image regions and discard feature responses so as to preserve only the activations relevant to the specific task. In~\cite{hu2018squeeze}, the Squeeze-and-Excitation (SE) block obtains attention weights in channels for filter selection. In our approach, the AGs involved calculate attention weights at the spatial level.
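To make the gating concrete, below is a framework-agnostic numpy sketch of (\ref{eq:AG}) for a batch of $N$ pixels. The weight shapes follow the text ($W_x, W_g \in \mathbb{R}^{F_l \times 1}$ and $W_\phi \in \mathbb{R}^{1 \times 1}$, so they reduce to a vector and two scalars), and all function and variable names are ours, not those of the actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, b_g, w_phi, b_phi):
    """Additive attention gate:
    alpha_i = sigma2(W_phi^T sigma1(W_x^T x_i + W_g^T g_i + b_g) + b_phi).

    x, g     : (N, F) arrays holding, per pixel, the skip-connection
               features x_i and the (already up-sampled) gating features g_i.
    W_x, W_g : (F,) weight vectors; b_g, w_phi, b_phi: scalars.
    Returns the gated features alpha_i * x_i and the coefficients alpha.
    """
    s = np.maximum(x @ W_x + g @ W_g + b_g, 0.0)  # sigma_1: ReLU, shape (N,)
    alpha = sigmoid(w_phi * s + b_phi)            # sigma_2: sigmoid, in [0, 1]
    return x * alpha[:, None], alpha
```

Since $\alpha_i^l \in [0,1]$, the gate can only attenuate the skip-connection activations, which is precisely how irrelevant feature responses are suppressed.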
\begin{figure*}
\centering
\includegraphics[scale=0.35]{attention_gate_new.png}
\caption{Schematic diagram of an Attention Gate (AG). $N$ is the size of the mini-batch.}
\label{fig:AG}
\end{figure*}
As shown in Fig.~\ref{fig:Unet_cluster}, AGs are fed by two input tensors, one from the encoder side of U-Net and the other from the decoder side, respectively $x$ and $g$ in Fig.~\ref{fig:AG}. With the AG approach, spatial regions are selected on the basis of both the activations $x$ and the contextual information provided by the gating signal $g$ which is collected from a coarser scale. The contextual information carried by the gating vector $g$ is hence used to highlight salient features that are passed through the skip connections. In our case, $g$ enters the AG after an up-sampling operation that makes $g$ and $x$ have compatible shapes (see Fig.~\ref{fig:AG}).
\begin{figure*}
\centering
\includegraphics[scale=0.25]{attention_unet_cluster.png}
\caption{Block diagram of the Centroids AUN model. The size decreases gradually by a factor of 2 at each scale in the encoding part and increases by the same factor in the decoding part. In the latter, AGs are used to help the network focus on the areas of high-response in the feature maps. The \textit{Conv~Skip} block is the \textit{skip~connection} of ResNet \cite{he2016deep}. The sub-network of the lower part of the diagram is intended to predict class centroids. In the drawing, $C$ denotes the number of classes and $M$ is the dimension of the class centroids.}
\label{fig:Unet_cluster}
\end{figure*}
Apart from the particularities of the AG that we use, which have been described above, another difference with the original AUN is the sub-network that we attach to the main segmentation network, as can be observed from the network architecture that is shown in Fig.~\ref{fig:Unet_cluster}. This sub-network is intended to predict class centroids on the basis of the scribbles that are available in the image, with the aim of improving the training of the main network from the possibly noisy pseudo-masks, and hence achieve a higher level of segmentation performance. Consequently, during training: (1) our network handles two sorts of ground truth, namely scribble annotations $Y_\text{scr}$ to train the attached sub-network for proper centroid predictions, and the pseudo-masks $Y_\text{seg}$ for image segmentation; and (2) the augmented network yields two outputs, a set of centroids $P_\text{cen}$ and the segmentation of the image $P_\text{seg}$ (while during inference only the segmentation output $P_\text{seg}$ is relevant). Predicted cluster centroids are used to calculate the Centroid Loss term $L_\text{cen}$ (described in Section~\ref{sec:cen_loss}) of the full loss function $L$, which comprises two more terms (as described in Section~\ref{sec:full_loss}). Thanks to the design of $L$, the full network --i.e. the AUN for semantic segmentation and the sub-net for centroids prediction-- is trained through a joint training strategy following an end-to-end learning model. During training, the optimization of $L_\text{cen}$ induces updates in the main network weights via back-propagation that are intended to reach enhanced training and therefore produce better segmentations.
As can be observed, the centroids prediction sub-net is embedded into the intermediate part of the network, being fed by the last layer of the encoder side of our AUN. As shown in Fig.~\ref{fig:Unet_cluster}, this sub-net consists of three blocks, each of which comprises a fully connected layer, a batch-normalization layer, and a ReLU activation function. The shape of $P_\text{cen}$ is $C\times M$, where $C$ is the number of categories and $M$ denotes the dimension of the feature space where the class centroids are defined. In our approach, centroid features are defined from the softmax layer of the AUN, and hence comprise $C$ components, though we foresee combining them with $K$ additional class features incorporated externally to the operation of the network, so that $M = C+K$. On the other hand, the shape of $P_\text{seg}$ is $C\times W\times H$, where $(H,W)$ is the size of the input image.
\subsection{Partial Cross-Entropy Loss}
\label{sec:pce_loss}
Given a $C$-class problem and a training set $\Omega$, comprising a subset $\Omega_L$ of labelled pixels and a subset $\Omega_U$ of unlabelled pixels, the Partial Cross-Entropy Loss $L_\text{pCE}$, widely used for WSSS, computes the cross-entropy only for labelled pixels $p \in \Omega_L$, ignoring $p \in \Omega_U$:
\begin{equation}
L_\text{pCE} = \sum\limits_{c=1}^{C}\sum\limits_{p\in \Omega_{L}^{(1)}} -y_{g(p),c}~\log~y_{s(p),c}
\label{func:partial_cross-entropy}
\end{equation}
where $y_{g(p),c} \in \{0,1\}$ and $y_{s(p),c} \in [0,1]$ represent respectively the ground truth and the segmentation output. In our case, and for $L_\text{pCE}$, $\Omega_L^{(1)}$ is defined as the set of pixels labelled in the pseudo-masks (hence, pixels from superpixels not intersecting with any scribble belong to $\Omega_U$ and are not used by (\ref{func:partial_cross-entropy})). Thus, $y_{g(p),c}$ refers to the pseudo-masks, i.e. $Y_\text{seg}$, while $y_{s(p),c}$ is the prediction, i.e. $P_\text{seg}$, as supplied by the final softmax layer of the network.
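A minimal numpy sketch of (\ref{func:partial_cross-entropy}) follows; it assumes, as a convention of ours, that unlabelled pixels are marked in the pseudo-mask with a sentinel value, and that the network output is already a softmax probability map.

```python
import numpy as np

def partial_cross_entropy(pseudo_mask, probs, unlabelled=-1, eps=1e-12):
    """Cross-entropy over labelled pixels only, ignoring Omega_U.

    pseudo_mask: (H, W) int class ids; `unlabelled` where unknown.
    probs:       (C, H, W) softmax output of the network.
    """
    labelled = pseudo_mask != unlabelled          # Omega_L^(1)
    classes = pseudo_mask[labelled]               # (P,) true classes
    p = probs[:, labelled]                        # (C, P)
    picked = p[classes, np.arange(classes.size)]  # prob of the true class
    return -np.sum(np.log(picked + eps))
```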
\subsection{Centroid Loss}
\label{sec:cen_loss}
As can be easily foreseen, when the network is trained using the pseudo-masks, the segmentation performance depends on how accurate the pseudo-masks are, and hence on the quality of the superpixels, i.e. how well they adhere to object boundaries and avoid mixing classes. The Centroid Loss function is introduced in this section for the purpose of compensating for this dependence and improving the quality of the segmentation output.
In more detail, we define the Centroid Loss term $L_\text{cen}$ as another partial cross-entropy loss:
\begin{equation}
L_\text{cen} = \sum\limits_{c=1}^{C} \sum\limits_{p\in \Omega_{L}^{(2)}} -y_{g(p),c}^{*}~\log~y_{s(p),c}^{*}
\label{func:cen_loss}
\end{equation}
defining in this case:
\begin{itemize}
\item $\Omega_{L}^{(2)}$ as the set of pixels coinciding with the scribbles,
\item $y_{g(p),c}^{*}$ as the corresponding labelling, and
\end{itemize}
\begin{align}
y_{s(p),c}^{*} &= \frac{\exp(-d_{p,c})}{\sum\limits^{C}_{c^{'}=1} \exp(-d_{p,c^{'}})} \label{func:cen_loss_2} \\
d_{p,c} &=\frac{||f_p-\mu_c||_2^2}{\sum\limits^{C}_{c^{'}=1}||f_p-\mu_{c^{\prime}}||_2^2} \label{func:cen_loss_3}
\end{align}
where: (1) $f_p$ is the feature vector associated to pixel $p$ and (2) $\mu_c$ denotes the centroid predicted for class $c$, i.e. $\mu_c \in P_\text{cen}$. $f_p$ is built from the section of the softmax layer of the main network corresponding to pixel $p$, though $f_p$ can be extended with additional external features, as already mentioned. This link between $L_\text{pCE}$ and $L_\text{cen}$ through the softmax layer makes both terms decrease through the joint optimization, in the sense that, for a reduction in $L_\text{cen}$, and hence in the full loss $L$, to take place, $L_\text{pCE}$ also has to decrease by better predicting the class of the pixels involved. The additional features that can be incorporated in $f_p$ are intended to introduce class information, e.g. predominant colour, to guide the optimization even further.
In practice, this loss term \textit{pushes} pixel class predictions towards, ideally, a subset of the corners of the $C$-dimensional hypercube, in accordance with the scribbles, i.e. the available ground truth. Some similarity can certainly be established with the K-means algorithm. Briefly speaking, K-means iteratively calculates a set of centroids for the considered number of clusters/classes and associates each sample to the closest cluster in feature space, thus minimizing the intra-class variance until convergence. Some DCNN-based clustering approaches reformulate K-means as a neural network optimizing the intra-class variance loss by means of a back-propagation-style scheme~\cite{Wen2016,Peng2019}. Unlike the latter, in this work, (\ref{func:cen_loss}) reformulates the unsupervised process of minimizing the distances from samples to centroids as a supervised process, since the clustering takes place around the true classes defined by the labelling of the scribbles $y_{g(p),c}^{*}$ and the extra information that may be incorporated.
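The computation of (\ref{func:cen_loss})--(\ref{func:cen_loss_3}) can be sketched in numpy as follows; the function name and the small $\epsilon$ guard against division by zero are ours.

```python
import numpy as np

def centroid_loss(features, labels, centroids, eps=1e-12):
    """Partial cross-entropy between scribble labels and the soft
    assignments derived from normalized pixel-to-centroid distances.

    features:  (P, M) feature vectors f_p of the P scribble pixels.
    labels:    (P,)   class ids y*_g from the scribbles.
    centroids: (C, M) predicted class centroids mu_c.
    """
    sq = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # ||f_p - mu_c||^2, (P, C)
    d = sq / (sq.sum(axis=1, keepdims=True) + eps)  # normalized distances d_{p,c}
    e = np.exp(-d)
    y_star = e / e.sum(axis=1, keepdims=True)       # soft assignment y*_{s(p),c}
    picked = y_star[np.arange(labels.size), labels]
    return -np.sum(np.log(picked + eps))
```

Minimizing this quantity pulls each scribble pixel's features towards the centroid of its true class, which is the supervised clustering behaviour described above.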
\subsection{Full Loss Function}
\label{sec:full_loss}
Since $L_\text{pCE}$ applies only to pixels labelled in the pseudo-mask and $L_\text{cen}$ is also restricted to a subset of image pixels, namely those coinciding with the scribbles, we add a third loss term in the form of a normalized MSE loss $L_\text{mse}$ that behaves as a regularization term involving all pixels for which a class label must be predicted, $\Omega_{L}^{(3)}$, i.e. the full image. This term calculates the normalized distances between the segmentation result for every pixel and its corresponding centroid:
\begin{equation}
L_\text{mse} = \frac{\sum\limits_{p\in \Omega_{L}^{(3)}} d_{p,c(p)}}{|\Omega_{L}^{(3)}|}
\label{func:mse_reg}
\end{equation}
where $|\mathcal{A}|$ stands for the cardinality of set $\mathcal{A}$, and $d_{p,c(p)}$ is as defined by (\ref{func:cen_loss_3}), with $c(p)$ as the class prediction for pixel $p$ (and $\mu_{c(p)}$ the corresponding predicted centroid), taken from the softmax layer.
Finally, the complete loss function is given by:
\begin{equation}
L = L_\text{pCE} + \lambda_\text{cen} L_\text{cen} + \lambda_\text{mse} L_\text{mse}
\label{func:final_loss}
\end{equation}
where $\lambda_\text{cen}$ and $\lambda_\text{mse}$ are trade-off constants.
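The $L_\text{mse}$ term of (\ref{func:mse_reg}) and the loss combination of (\ref{func:final_loss}) can be sketched as follows; the helper names are illustrative, not the paper's actual implementation, and $c(p)$ is assumed to be the argmax of the softmax part of $f_p$:

```python
import torch

def mse_regularizer(feats, centroids, eps=1e-8):
    """Sketch of L_mse: mean normalized distance d_{p,c(p)} between every
    pixel and the centroid of its predicted class c(p).
    feats: (P, M), the first C columns being the softmax scores."""
    sq = torch.cdist(feats, centroids) ** 2
    d = sq / (sq.sum(dim=1, keepdim=True) + eps)        # d_{p,c}
    pred = feats[:, :centroids.shape[0]].argmax(dim=1)  # c(p) from softmax
    return d.gather(1, pred.unsqueeze(1)).mean()

def full_loss(l_pce, l_cen, l_mse, lam_cen=1.0, lam_mse=1.0):
    """L = L_pCE + lambda_cen * L_cen + lambda_mse * L_mse; both trade-off
    constants are set to 1 in the best configuration reported later."""
    return l_pce + lam_cen * l_cen + lam_mse * l_mse
```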
\section{Experiments and Discussion}
\label{sc:experiments}
In this section, we report on the results obtained for the two application cases that constitute our benchmark. To start, Section~\ref{sec:exp_setup} describes the experimental setup. Next, in Section~\ref{sec:dis_feature}, we discuss the feature space in which the Centroid Loss is defined and its relationship with the weak annotations, while Section~\ref{sec:effect_cen} evaluates the effect on segmentation performance of several combinations of the terms of the loss function $L$, and Section~\ref{sec:influence_weak} analyzes the impact of weak annotations and their propagation. Subsequently, our approach is compared against two previously proposed methods in Section~\ref{sec:performance_compare}. To finish, we address final tuning and, for qualitative evaluation purposes, show segmentation results for some images of both application cases in Section~\ref{sec:result_displays}.
\subsection{Experimental Setup}
\label{sec:exp_setup}
\subsubsection{Datasets}
The dataset from the quality control application case consists of a total of 484 images, two thirds of which are designated for training and the rest for testing. The dataset for the visual inspection application comprises 241 images, and the same splitting strategy is adopted for the training and test sets. Both datasets have in turn been augmented with rotated and scaled versions of the original images, together with random crops, to increase the diversity of the training set. Finally, as already explained, the ground truth for both datasets comprises scribbles and pseudo-masks (generated in accordance with the process described in Section~\ref{sec:pseudo_mask}).
By way of illustration, Fig.~\ref{fig:pseudo_mask_exp} shows, for the two application cases, some examples of weak annotations with different settings as for the width of the scribbles and the number of superpixels used for generating the pseudo-masks.
\begin{figure}[t]
\centering
\begin{tabular}{@{\hspace{-3mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_scr2.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_scr5.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_scr10.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_scr20.png} \\[-1mm]
\footnotesize 2 & \footnotesize 5 & \footnotesize 10 & \footnotesize 20 \\
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_30_mask.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_50_mask.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_gk2_fp_exp28_0480_80_mask.png} \\[-1mm]
\footnotesize full mask & \footnotesize 30 & \footnotesize 50 & \footnotesize 80 \\
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_image0494.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_image0494_sup30.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_image0494_sup50.png}
&
\includegraphics[width=0.23\columnwidth,height=0.23\columnwidth]{pseudo_mask_image0494_sup80.png} \\[-1mm]
\footnotesize full mask & \footnotesize 30 & \footnotesize 50 & \footnotesize 80
\end{tabular}
\caption{Examples of weak annotations and their propagation for the two application cases: (1st row) examples of scribble annotations of different widths, namely, from left to right, 2, 5, 10 and 20 pixels, for the visual inspection case; (2nd and 3rd rows) the leftmost image shows the fully supervised ground truth, while the remaining images are examples of pseudo-masks generated from 20-pixel scribbles and for different amounts of superpixels, namely 30, 50, and 80, for the two images of Fig.~\ref{fig:weak_labels} and, hence, for the visual inspection and the quality control application cases. (The colour code is the same as for Fig.~\ref{fig:weak_labels}.)}
\label{fig:pseudo_mask_exp}
\end{figure}
\subsubsection{Evaluation metrics}
For quantitative evaluation of our approach, we consider the following metrics:
\begin{itemize}
\item The mean Intersection Over Union (mIOU), which can be formally stated as follows: given $n_{ij}$ as the number of pixels of class $i$ that are predicted as class $j$, for a total of $C$ different classes, the mIOU is defined as (\ref{func:miou}), see e.g.~\cite{long2015fully},
\begin{equation}
\text{mIOU} = \frac{1}{C} \sum_i \frac{n_{ii}}{\sum_j n_{ij} + \sum_j n_{ji} - n_{ii}}\,.
\label{func:miou}
\end{equation}
\item The mean Recall and mean Precision are also calculated to evaluate the segmentation performance for all classes. True Positive (TP), False Positive (FP) and False Negative (FN) samples are determined from the segmentation results and the ground truth. Using a macro-averaging
approach~\cite{Zhang2014}, the mean Recall (mRec) and mean Precision (mPrec) are expressed as follows:
\begin{align}
\text{mRec} & = \frac{1}{C} \left( \sum_i \frac{TP_i}{TP_i + FN_i} \right) = \frac{1}{C} \left( \sum_i \frac{TP_i}{T_i} \right)
\label{func:rec_prec_1}
\\
\text{mPrec} & = \frac{1}{C} \left( \sum_i \frac{TP_i}{TP_i + FP_i} \right) = \frac{1}{C} \left( \sum_i \frac{TP_i}{P_i} \right)
\label{func:rec_prec_2}
\end{align}
where $TP_i$, $FP_i$ and $FN_i$ are, respectively, the true positives, false positives and false negatives for class $i$, and $T_i$ and $P_i$ are, respectively, the number of positives in the ground truth and the number of predicted positives, both for class $i$. From now on, to shorten the notation, when we refer to precision and recall, it must be understood that we are actually referring to mean precision and mean recall.
\item The F$_1$ score as the harmonic mean of precision and recall:
\begin{equation}
F_1 = \frac{2\cdot\text{mPrec}\cdot\text{mRec}}{\text{mPrec} + \text{mRec}}
\end{equation}
\end{itemize}
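The four metrics above can all be computed from a single confusion matrix; the sketch below, with illustrative names, follows (\ref{func:miou})--(\ref{func:rec_prec_2}) directly:

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """mIOU, macro-averaged recall/precision and F1 score from flat
    per-pixel prediction and ground-truth label arrays."""
    # confusion matrix: n[i, j] = pixels of class i predicted as class j
    n = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(n, (gt, pred), 1)
    tp = np.diag(n).astype(float)   # TP_i
    t = n.sum(axis=1)               # T_i: positives in the ground truth
    p = n.sum(axis=0)               # P_i: predicted positives
    eps = 1e-12                     # guards against empty classes
    miou = np.mean(tp / (t + p - tp + eps))
    mrec = np.mean(tp / (t + eps))
    mprec = np.mean(tp / (p + eps))
    f1 = 2 * mprec * mrec / (mprec + mrec + eps)
    return miou, mrec, mprec, f1
```

Applying the same mIOU computation to a pseudo-mask and the corresponding fully supervised mask yields the wmIOU used below to quantify pseudo-mask quality.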
In all experiments, we make use of fully supervised masks/ground truth for both datasets in order to be able to report accurate figures on segmentation performance. This ground truth has been manually generated only for this purpose; it has been used for training only when reporting the performance of the fully supervised approach, for comparison between the fully and weakly supervised solutions.
To finish, in a number of experiments we also report on the quality of the pseudo-masks, so that the reported segmentation performance can be properly assessed. To this end, we calculate a weak mIOU (wmIOU) using (\ref{func:miou}) between the pseudo-mask and the fully supervised mask involved.
\subsubsection{Implementation details and main settings}
All experiments have been conducted using the PyTorch framework running on a PC fitted with an NVIDIA GeForce RTX 2080 Ti GPU, a 2.9 GHz 12-core CPU with 32 GB RAM, and 64-bit Ubuntu. The batch size is 8 for all experiments and the size of the input images is $320\times 320$ pixels, since this has turned out to be the best configuration for the aforementioned GPU.
As already mentioned, the AUN for semantic segmentation and the sub-net for centroid prediction are jointly trained following an end-to-end learning model. The network weights are initialized by means of the Kaiming method~\cite{He2015b}, and they are updated using a $10^{-4}$ learning rate for 200 epochs.
Best results have been obtained for the balance parameters $\lambda_\text{cen}$ and $\lambda_\text{mse}$ set to 1.
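A minimal sketch of the training configuration described above (Kaiming initialization, $10^{-4}$ learning rate) is given below; the tiny stand-in model and the choice of optimizer are assumptions for illustration, since the optimizer is not specified in the text:

```python
import torch
import torch.nn as nn

def init_weights(module):
    """Kaiming (He) initialization for convolutional and linear layers."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight, nonlinearity='relu')
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# hypothetical stand-in for the jointly trained AUN + centroid sub-net
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 4, 1))
model.apply(init_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer
```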
\subsubsection{Overall view of the experiments}
The experiments discussed in the next sections consider different configurations of the elements involved in our semantic segmentation approach. These configurations, which are enumerated in Table~\ref{tab:exp_define}, involve:
\begin{itemize}
\item different widths of the scribble annotations used as ground truth, namely 2, 5, 10 and 20 pixels,
\item different amounts of superpixels for generating the pseudo-masks, namely 30, 50 and 80,
\item two ways of defining the feature space for the class centroids: from exclusively the softmax layer of AUN and combining those features with other features from the classes.
\end{itemize}
Notice that the first rows of Table~\ref{tab:exp_define} refer to experiments where the loss function used for training is just the partial cross-entropy, as described in (\ref{func:partial_cross-entropy}), and can therefore be taken as the lower baseline method. The upper baseline corresponds to the configuration using full masks and the cross-entropy loss $L_\text{CE}$ for training, i.e. fully supervised semantic segmentation, which can also be found in Table~\ref{tab:exp_define} as the last row.
Apart from the aforementioned variations, we also analyse the effect of several combinations of the loss function terms, as described in (\ref{func:final_loss}), defining three groups of experiments: Group 1 (G1), which indicates that the network is trained by means of only $L_\text{pCE}$, and hence would also coincide with the lower baseline; Group 2 (G2), which denotes that the network is trained by means of the combination of $L_\text{pCE}$ and $L_\text{cen}$; and Group 3 (G3), for which the network is trained using the full loss function as described in (\ref{func:final_loss}).
Finally, we compare our segmentation approach with two other alternative approaches also aimed at solving the WSSS problem through a modified loss function. These loss functions are the Constrained-size Loss ($L_\text{size}$)~\cite{kervadec2019constrained} and the Seed, Expand, and Constrain (SEC) Loss ($L_\text{sec}$)~\cite{kolesnikov2016seed}:
\begin{align}
L_\text{size} &= L_\text{pCE} + \lambda_\text{size}L_{\mathcal{C}(V_S)}
\label{func:compare_exp_loss_1}
\\
L_\text{sec} &= L_\text{seed} + L_\text{expand} + L_\text{constrain}
\label{func:compare_exp_loss_2}
\end{align}
On the one hand, $\lambda_\text{size}$ for the $L_{\mathcal{C}(V_S)}$ term is set to $10^{-3}$. On the other hand, regarding $L_\text{sec}$, it consists of three terms, the seed loss $L_\text{seed}$, the expand loss $L_\text{expand}$, and the constrain loss $L_\text{constrain}$. In our case, we feed $L_\text{seed}$ from the scribble annotations, while, regarding $L_\text{expand}$ and $L_\text{constrain}$, we adopt the same configuration as in the original work.
\begin{table*}[t]
\centering
\caption{Labels for the different experiments performed, varying the width of scribbles, the number of superpixels employed for generating the pseudo-masks, and the terms involved in the loss function employed during training. SMX stands for \textit{softmax}.}
\label{tab:exp_define}
\begin{tabular}{m{1.8cm}|m{3cm}|m{1.5cm}|m{1.5cm}|m{1.6cm}|m{1.5cm}|m{2.5cm}}
\hline
\textbf{Configuration} & \textbf{Label} & \textbf{Scribbles width} & \textbf{Num. superpixels} & \textbf{Centroid features} & \textbf{Supervision} & \textbf{Loss function} \\
\hline
\multirow{7}{1.8cm}{lower baseline}
& E-SCR2 & 2 & - & - & \multirow{4}{*}{only scribbles} & \multirow{4}{*}{$L_\text{pCE}$} \\
& E-SCR5 & 5 & - & - & & \\
& E-SCR10 & 10 & - & - & & \\
& E-SCR20 & 20 & - & - & & \\
\cline{2-7}
& E-SCR20-SUP30 & 20 & 30 & - & \multirow{3}{*}{pseudo-masks} & \multirow{3}{*}{$L_\text{pCE}$} \\
& E-SCR20-SUP50 & 20 & 50 & - & & \\
& E-SCR20-SUP80 & 20 & 80 & - & & \\
\hline
& E-SCR2-N & 2 & - & SMX & \multirow{8}{*}{only scribbles} & \multirow{8}{*}{$L_\text{pCE} + L_\text{cen}\ [+ L_\text{mse}]$} \\
& E-SCR2-NRGB & 2 & - & SMX \& RGB & & \\
& E-SCR5-N & 5 & - & SMX & & \\
& E-SCR5-NRGB & 5 & - & SMX \& RGB & & \\
& E-SCR10-N & 10 & - & SMX & & \\
& E-SCR10-NRGB & 10 & - & SMX \& RGB & & \\
& E-SCR20-N & 20 & - & SMX & & \\
& E-SCR20-NRGB & 20 & - & SMX \& RGB & & \\
\cline{2-7}
& E-SCR20-SUP30-N & 20 & 30 & SMX & \multirow{6}{*}{pseudo-masks} & \multirow{6}{*}{$L_\text{pCE} + L_\text{cen}\ [+ L_\text{mse}]$} \\
& E-SCR20-SUP30-NRGB & 20 & 30 & SMX \& RGB & & \\
& E-SCR20-SUP50-N & 20 & 50 & SMX & & \\
& E-SCR20-SUP50-NRGB & 20 & 50 & SMX \& RGB & & \\
& E-SCR20-SUP80-N & 20 & 80 & SMX & & \\
& E-SCR20-SUP80-NRGB & 20 & 80 & SMX \& RGB & & \\
\hline
upper baseline
& E-FULL & - & - & - & full mask & $L_\text{CE}$ \\
\hline
\end{tabular}
\end{table*}
\subsection{About the Centroid loss feature space and the weak annotations}
\label{sec:dis_feature}
Given the relevance that colour features can have in image semantic segmentation performance~\cite{Liu2018}, the experiments reported in this section consider the incorporation of colour data from the classes into the calculation and minimization of the Centroid and MSE loss functions, $L_\text{cen}$ and $L_\text{mse}$. More specifically, we adopt a simple strategy by making use of normalized RGB features\footnote{If $R_p = G_p = B_p = 0$, then $\text{nRGB}_p = (0,0,0)$.}:
\begin{equation}
\text{nRGB}_p = \frac{1}{R_p+G_p+B_p}\left(R_p, G_p, B_p\right)
\end{equation}
As mentioned in Section~\ref{sec:net_arch}, the shape of $P_\text{cen}$ is $C\times M$, where $M = C + K$, and $K$ is the number of additional class features that we incorporate into the network optimization problem. Therefore, in our case, $K = 3$. Of course, more sophisticated hand-crafted features could be incorporated into the process, though the idea of this experiment has been to make use of simple features.
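A sketch of the normalized RGB computation, including the all-zero special case noted in the footnote:

```python
import numpy as np

def normalized_rgb(img):
    """nRGB_p = (R_p, G_p, B_p) / (R_p + G_p + B_p) for an (H, W, 3) image,
    mapping pixels with R = G = B = 0 to (0, 0, 0)."""
    s = img.sum(axis=-1, keepdims=True).astype(float)
    return np.divide(img, s, out=np.zeros_like(img, dtype=float),
                     where=s != 0)
```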
Tables~\ref{tab:scribbles_compare} and~\ref{tab:superpixles_compare} evaluate the performance of our approach for different combinations of loss terms, for the two centroid feature spaces outlined above, and also depending on the kind of weak annotation employed as ground truth and its main parameter, i.e. the width for scribbles and the number of superpixels for pseudo-masks. Besides, we consider two ways of producing the final labelling: from the output of the segmentation network, and from the clustering deriving from the predicted class centroids, i.e. labelling each pixel with the class of the closest centroid; from now on, to simplify the discussion despite the abuse of language, we will refer to the latter kind of output as that resulting from \textit{clustering}. Finally, Table~\ref{tab:scribbles_compare} only shows results for the visual inspection task because scribbles alone have been shown to be insufficient for obtaining proper segmentations in the quality control case.
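The \textit{clustering} output just described amounts to a nearest-centroid assignment in feature space, which can be sketched as follows (illustrative names):

```python
import torch

def labels_from_clustering(feats, centroids):
    """Label each pixel with the class of its closest predicted centroid.
    feats: (P, M); centroids: (C, M); returns (P,) class indices."""
    return torch.cdist(feats, centroids).argmin(dim=1)
```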
As can be observed in Table~\ref{tab:scribbles_compare}, the segmentation and clustering mIOU for the E-SCR*-NRGB experiments is lower than that for the E-SCR*-N experiments, with a large performance gap in a number of cases, which suggests that the RGB features actually do not contribute (rather the opposite) to improving segmentation performance when scribble annotations alone are used as supervision for the visual inspection dataset.
As for Table~\ref{tab:superpixles_compare}, contrary to the results shown in Table~\ref{tab:scribbles_compare}, the performance observed for the E-SCR20-SUP*-NRGB experiments turns out to be similar to that of the E-SCR20-SUP*-N experiments. Additionally, the mIOU of some experiments using the integrated features, i.e. \textit{softmax} and colour, is even higher than when only the \textit{softmax} features are used (e.g. E-SCR20-SUP80-N/NRGB, sixth row of Table~\ref{tab:superpixles_compare}).
At a global level, both Tables~\ref{tab:scribbles_compare} and~\ref{tab:superpixles_compare} show that our approach requires a larger number of labelled pixels to achieve high segmentation performance when the integrated features are employed. In contrast, using the \textit{softmax} features alone, the scribble annotations suffice to produce good performance for the visual inspection task. Moreover, our approach achieves higher mIOU with the \textit{softmax} features than with the integrated features in most of the experiments. As a consequence, only \textit{softmax} features are involved in the next experiments.
\begin{sidewaystable}
\centering
\caption{Segmentation performance for different centroid feature spaces and different widths of the scribble annotations. \textit{*N} denotes that only the SMX (\textit{softmax}) features are used to compute $L_\text{cen}$ and $L_\text{mse}$, while \textit{*NR} denotes that the feature space for centroids prediction comprises both SMX and RGB features. \textit{Seg} denotes that the segmentation output comes directly from the segmentation network, while \textit{Clu} denotes that the segmentation output is obtained from clustering.}
\label{tab:scribbles_compare}
\begin{tabular}{ c|c||c||ccc||c|c|c||c|c}
\toprule
Task & Experiments & wmIOU & $L_\text{pCE}$ & $L_\text{cen}$ & $L_\text{mse}$ & mIOU (Seg) & mIOU (Seg,*N) & mIOU (Seg,*NR) & mIOU (Clu,*N) & mIOU (Clu,*NR) \\
\hline
\multirow{12}{1.5cm}{Visual Inspection}
& E-SCR2 & 0.2721 & $\checkmark$ & & & 0.3733 & - & - & - & - \\
& E-SCR5 & 0.2902 & $\checkmark$ & & & 0.4621 & - & - & - & - \\
& E-SCR10 & 0.3074 & $\checkmark$ & & & 0.4711 & - & - & - & - \\
& E-SCR20 & 0.3233 & $\checkmark$ & & & 0.5286 & - & - & - & - \\
\cline{2-11}
& E-SCR2-* & 0.2721 & $\checkmark$ & $\checkmark$ & & - & 0.6851 & 0.4729 & 0.6758 & 0.3889 \\
& E-SCR5-* & 0.2902 & $\checkmark$ & $\checkmark$ & & - & 0.6798 & 0.4989 & 0.6706 & 0.6020 \\
& E-SCR10-* & 0.3074 & $\checkmark$ & $\checkmark$ & & - & 0.6992 & 0.5130 & 0.6710 & 0.6267 \\
& E-SCR20-* & 0.3233 & $\checkmark$ & $\checkmark$ & & - & 0.6852 & 0.5562 & 0.6741 & 0.6164 \\
\cline{2-11}
& E-SCR2-* & 0.2721 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.6995 & 0.4724 & 0.6828 & 0.3274 \\
& E-SCR5-* & 0.2902 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7134 & 0.4772 & 0.7001 & 0.2982 \\
& E-SCR10-* & 0.3074 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7047 & 0.4796 & 0.6817 & 0.3130 \\
& E-SCR20-* & 0.3233 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.6904 & 0.5075 & 0.6894 & 0.6187 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
\begin{sidewaystable}
\centering
\caption{Segmentation performance for different centroid feature spaces and for different amounts of superpixels to generate the pseudo-masks. \textit{*N} denotes that only the SMX (\textit{softmax}) features are used to compute $L_\text{cen}$ and $L_\text{mse}$, while \textit{*NR} denotes that the feature space comprises both SMX and RGB features. \textit{Seg} denotes that the segmentation output comes directly from the segmentation network, while \textit{Clu} denotes that the segmentation output is obtained from clustering.}
\label{tab:superpixles_compare}
\begin{tabular}{ c|c||c||ccc||c|c|c||c|c }
\toprule
Task & Experiments & wmIOU & $L_\text{pCE}$ & $L_\text{cen}$ & $L_\text{mse}$ & mIOU (Seg) & mIOU (Seg,*N) & mIOU (Seg,*NR) & mIOU (Clu,*N) & mIOU (Clu,*NR) \\
\hline
\multirow{9}{1.5cm}{Visual Inspection}
& E-SCR20-SUP30 & 0.6272 & $\checkmark$ & & & 0.6613 & - & - & - & -\\
& E-SCR20-SUP50 & 0.6431 & $\checkmark$ & & & 0.7133 & - & - & - & -\\
& E-SCR20-SUP80 & 0.6311 & $\checkmark$ & & & 0.7017 & - & - & - & -\\
\cline{2-11}
& E-SCR20-SUP30-* & 0.6272 & $\checkmark$ & $\checkmark$ & & - & 0.6848 & 0.6847 & 0.7081 & 0.6859\\
& E-SCR20-SUP50-* & 0.6431 & $\checkmark$ & $\checkmark$ & & - & 0.7447 & 0.7368 & 0.7372 & 0.7136\\
& E-SCR20-SUP80-* & 0.6311 & $\checkmark$ & $\checkmark$ & & - & 0.7242 & 0.7355 & 0.7127 & 0.6761\\
\cline{2-11}
& E-SCR20-SUP30-* & 0.6272 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.6919 & 0.7071 & 0.6987 & 0.7076 \\
& E-SCR20-SUP50-* & 0.6431 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7542 & 0.7133 & 0.7491 & 0.7294 \\
& E-SCR20-SUP80-* & 0.6311 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7294 & 0.7246 & 0.7268 & 0.7118 \\
\midrule[.6pt]
\multirow{9}{1.5cm}{\raggedright Quality Control}
& E-SCR20-SUP30 & 0.4710 & $\checkmark$ & & & 0.5419 & - & - & - & -\\
& E-SCR20-SUP50 & 0.5133 & $\checkmark$ & & & 0.6483 & - & - & - & -\\
& E-SCR20-SUP80 & 0.5888 & $\checkmark$ & & & 0.7015 & - & - & - & -\\
\cline{2-11}
& E-SCR20-SUP30-* & 0.4710 & $\checkmark$ & $\checkmark$ & & - & 0.6882 & 0.6889 & 0.6142 & 0.6062\\
& E-SCR20-SUP50-* & 0.5133 & $\checkmark$ & $\checkmark$ & & - & 0.7236 & 0.7203 & 0.6644 & 0.6480\\
& E-SCR20-SUP80-* & 0.5888 & $\checkmark$ & $\checkmark$ & & - & 0.7594 & 0.7337 & 0.6768 & 0.6451\\
\cline{2-11}
& E-SCR20-SUP30-* & 0.4710 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7030 & 0.6237 & 0.5910 & 0.6077 \\
& E-SCR20-SUP50-* & 0.5133 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7291 & 0.7046 & 0.6605 & 0.6372 \\
& E-SCR20-SUP80-* & 0.5888 & $\checkmark$ & $\checkmark$ & $\checkmark$ & - & 0.7679 & 0.7409 & 0.6687 & 0.6780 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
\subsection{Effect of the loss function terms}
\label{sec:effect_cen}
This section considers the effect of $L_\text{cen}$ and $L_\text{mse}$ on the segmentation results by analyzing the performance of the experiments in groups G1, G2 and G3. From Table~\ref{tab:scribbles_compare}, one can see that the mIOU of the experiments in G2 is significantly higher than that of the experiments in G1, the maximum gap in mIOU between G1 and G2 being 0.3118 (E-SCR2 and E-SCR2-N). As for the segmentation performance of the G3 experiments, it is systematically above that of the G2 experiments for the same scribble width when the centroids are built only from the \textit{softmax} features. When the colour data is incorporated, segmentation performance decreases from G2 to G3.
Table~\ref{tab:superpixles_compare} also shows performance improvements of the G2 experiments, i.e. when $L_\text{cen}$ is incorporated into the loss function, over the G1 experiments, and, in turn, segmentation results of the G3 experiments are superior to those of the G2 experiments, for both tasks. Therefore, the incorporation of the $L_\text{cen}$ and $L_\text{mse}$ terms into the loss function benefits performance, gradually increasing the mIOU of the resulting segmentations.
Regarding the segmentation computed from clustering, the mIOU of the experiments in G3 is also higher than that of the experiments in G2. In addition, it can be seen in Tables~\ref{tab:scribbles_compare} and~\ref{tab:superpixles_compare} that the mIOU from clustering in some G2 experiments is slightly higher than that of the corresponding G3 experiments (E-SCR20-SUP30-N on both tasks and E-SCR20-SUP80-N for the quality control task), while the mIOU from segmentation in G2 is lower than that of G3. In other words, it seems that $L_\text{mse}$, in some cases, makes the segmentation quality from clustering deteriorate.
Overall, the incorporation of $L_\text{cen}$ and $L_\text{mse}$ improves segmentation performance for both tasks, and labelling from segmentation turns out to be superior to that deriving from class centroids.
\subsection{Impact of weak annotations and their propagation}
\label{sec:influence_weak}
In this section, we evaluate our approach under different weak annotations and their propagation, and discuss their impact on segmentation performance for both tasks. To this end, we plot in Fig.~\ref{fig:influence_weak_annotation} the mIOU (complementing Tables~\ref{tab:scribbles_compare} and~\ref{tab:superpixles_compare}), recall and precision values resulting from the supervision of different sorts of weak annotations for the two tasks. A first analysis of these plots reveals that the curves corresponding to the G3 experiments are above those of the G1 and G2 groups for all the performance metrics considered.
Regarding the visual inspection task, Fig.~\ref{fig:influence_weak_annotation}(a) shows that the mIOU values for the G2 and G3 groups are above those for G1 (the lower baseline), whose curve follows a shape similar to that of the wmIOU values, while the values for the G2 and G3 groups remain at a more or less constant level for the different sorts of weak annotations. As for the quality control task, the mIOU values are similar among all groups and close to the wmIOU values, as shown in Fig.~\ref{fig:influence_weak_annotation}(d). Globally, this behaviour clearly shows that the scribbles are enough for describing the classes in the visual inspection task, which is a binary classification problem, while this is not true for the quality control task, a multi-class problem, which makes it necessary to resort to the pseudo-masks (G2 and G3 groups) to achieve higher performance. The fact that for both tasks the lower baseline (G1 group) always achieves lower mIOU values also corroborates the relevance of the Centroid Loss, although its ultimate contribution to segmentation performance is also affected by the quality of the weak annotations involved, i.e. the pseudo-masks derived from scribbles and superpixels in the case of the G2 and G3 groups.
Additionally, observing the precision curves shown in Fig.~\ref{fig:influence_weak_annotation}(c) and (f), one can notice that the precision for the weak annotations alone shows a sharp decline when the weak annotations shift from scribbles to pseudo-masks. As can be noticed from the pseudo-masks shown in the second and third rows of Fig.~\ref{fig:pseudo_mask_exp}, when the number of superpixels is low, e.g. 30, the pseudo-masks contain a substantial number of incorrectly labelled pixels, significantly more than the scribble annotations, and this is the reason for the aforementioned decline. The recall curves, however, exhibit an upward trend, as can be observed in Fig.~\ref{fig:influence_weak_annotation}(b) and (e), because of the larger amount of information ultimately provided by the weak annotations. On the other hand, we can also notice that, in general, precision and recall values are higher for the G3 group than for the G2 group, and both curves are above those for the G1 group; this behaviour is replicated for the two tasks. Finally, the output from clustering does not clearly lead to a different performance, better or worse, with respect to the alternative outcome from the segmentation network, although the recall metric suggests that clustering is less appropriate for the quality control task.
From a global perspective, all this suggests that segmentation quality benefits from the use of pseudo-masks, always overcoming the lower baseline based exclusively on scribbles, despite the incorrectly labelled pixels contained in the pseudo-masks, provided that a proper loss function is adopted, e.g. the full loss expressed in (\ref{func:final_loss}), which in particular comprises the Centroid Loss.
\begin{sidewaysfigure*}
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}c@{\hspace{0mm}}}
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{miou_ins.png}
&
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{rec_ins.png}
&
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{prec_ins.png} \\
\footnotesize (a) & \footnotesize (b) & \footnotesize (c) \\
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{miou_q.png}
&
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{rec_q.png}
&
\includegraphics[width=0.34\textwidth,height=0.30\textwidth]{prec_q.png} \\
\footnotesize (d) & \footnotesize (e) & \footnotesize (f)
\end{tabular}
\caption{Performance metrics for our approach under different sorts of weak annotations. The first row plots are for the visual inspection task, while those of the second row are for the quality control task. In both rows, from left to right, the three figures plot respectively the mIOU, the mean Recall, and the mean Precision. SUP30, SUP50 and SUP80 labels correspond to the use of 20 pixel-wide scribbles.}
\label{fig:influence_weak_annotation}
\end{sidewaysfigure*}
\subsection{Comparison with other loss functions}
\label{sec:performance_compare}
In Table~\ref{tab:previous_compare}, we compare the segmentation performance of our approach for the two tasks with that resulting from the use of the Constrained-size Loss $L_\text{size}$~\cite{kervadec2019constrained} and the SEC Loss $L_\text{sec}$~\cite{kolesnikov2016seed} for different variations of weak annotations. As for the visual inspection task, the network trained with $L_\text{sec}$ is clearly inferior to the one resulting from our loss function, and the same can be said for $L_\text{size}$, although in this case the performance gap is smaller, even negligible when the width of the scribbles is 20 pixels. When the pseudo-masks are involved, our approach is also better, though the difference with both $L_\text{size}$ and $L_\text{sec}$ is smaller. Regarding the quality control task, Table~\ref{tab:previous_compare} shows that our approach outperforms both by a significant margin, far larger than for the visual inspection task.
Summing up, we can conclude that the loss function proposed in (\ref{func:final_loss}) outperforms both the Constrained-size Loss $L_\text{size}$ and the SEC Loss $L_\text{sec}$ on the visual inspection and the quality control tasks.
\begin{table}
\centering
\caption{Comparison of different loss functions for both the visual inspection and the quality control tasks. mIOU values are provided. Best performance is highlighted in bold.}
\label{tab:previous_compare}
\begin{tabular}{@{\hspace{1mm}}l|c|c|c|c@{\hspace{1mm}}}
\toprule
Task & Weak Annotation & $L_\text{size}$~\cite{kervadec2019constrained} & $L_\text{sec}$~\cite{kolesnikov2016seed} & Ours \\
\hline
\multirow{9}{1.1cm}[3.2mm]{Visual Inspection}
& E-SCR2-N & 0.6098 & 0.4366 & \textbf{0.6995} \\
& E-SCR5-N & 0.6537 & 0.4372 & \textbf{0.7134} \\
& E-SCR10-N & 0.6754 & 0.5486 & \textbf{0.7047} \\
& E-SCR20-N & \textbf{0.6909} & 0.5624 & 0.6904 \\
\cline{2-5}
& E-SCR20-SUP30-N & \textbf{0.7068} & 0.6397 & 0.6919 \\
& E-SCR20-SUP50-N & 0.6769 & 0.7428 & \textbf{0.7542} \\
& E-SCR20-SUP80-N & 0.7107 & 0.6546 & \textbf{0.7294} \\
\midrule[.6pt]
\multirow{3}{1.1cm}[0mm]{Quality Control}
& E-SCR20-SUP30-N & 0.4724 & 0.5808 & \textbf{0.7030} \\
& E-SCR20-SUP50-N & 0.4985 & 0.6262 & \textbf{0.7291} \\
& E-SCR20-SUP80-N & 0.5051 & 0.6918 & \textbf{0.7679} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Final tuning and results}
\label{sec:result_displays}
As has already been highlighted in the previous sections, the network trained by means of the loss function described in (\ref{func:final_loss}), which in particular comprises the Centroid Loss, attains the best segmentation performance among the considered approaches for the two tasks addressed in this work. In order to check whether segmentation performance can be increased further, in this section we incorporate a dense CRF as a post-processing stage applied to the network output. Table~\ref{tab:annotation_compare} collects metric values for the final performance attained by the proposed WSSS method as well as by the upper baseline method (E-FULL). To assess the influence of the CRF-based stage, in Table~\ref{tab:annotation_compare} we report mIOU, precision and recall values, together with the F$_1$ score, as well as the wmIOU values.
Regarding the visual inspection task, Table~\ref{tab:annotation_compare} shows that case E-SCR20-SUP50-N leads to the best segmentation mIOU (0.7542). After dense CRF, the mIOU reaches 0.7859, leaving a performance gap of 0.0474 with respect to E-FULL. Case E-SCR20-SUP30-N attains the highest recall (0.7937), but the corresponding precision (0.7081) and F$_1$ score (0.7485) are not the highest, and its mIOU is the second lowest (0.6919). This is because the segmentation result for E-SCR20-SUP30-N contains more incorrect predictions than that of E-SCR20-SUP50-N. Consequently, a configuration of 20-pixel scribbles and 50 superpixels for pseudo-mask generation leads to the best performance, with a slight increase thanks to the CRF post-processing stage. The outcome from clustering is not far in quality from these values but, as can be observed, it is not as good (the best mIOU and F$_1$ scores are, respectively, 0.7491 and 0.7250).
As for the quality control task, the E-SCR20-SUP80-N case reaches the highest mIOU (0.7679) among all cases, with the second best F$_1$ score (0.8350). For this task, the precision metric highlights a different case, E-SCR20-SUP50-N, as the best configuration, which also attains the largest F$_1$ score, though by a very short margin over E-SCR20-SUP80-N. After dense CRF, the final segmentation mIOU is 0.7707. The most adequate configuration seems to be, hence, 20-pixel scribbles and 80 superpixels for pseudo-mask generation. The gap in this case with regard to full supervision is 0.0897. Similarly to the visual inspection task, results from clustering are close in accuracy to the previous levels, but not better (for this task, the best mIOU and F$_1$ scores are, respectively, 0.6687 and 0.8086).
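For reference, the class-averaged metrics quoted above can be reproduced from a confusion matrix in a few lines. The sketch below is not the authors' code; it illustrates one plausible computation in which F$_1$ is taken as the harmonic mean of the mean precision and mean recall, a convention consistent with the reported numbers (e.g. mRec 0.7543 and mPrec 0.7567 give F$_1 \approx 0.7555$).

```python
# Hypothetical sketch (not the paper's implementation): per-class IOU,
# mean IOU, mean precision, mean recall and F1 from flat label lists.

def confusion_matrix(gt, pred, n_classes):
    """n_classes x n_classes counts; rows = ground truth, cols = prediction."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for g, p in zip(gt, pred):
        cm[g][p] += 1
    return cm

def segmentation_metrics(gt, pred, n_classes):
    cm = confusion_matrix(gt, pred, n_classes)
    ious, precs, recs = [], [], []
    for c in range(n_classes):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n_classes)) - tp  # predicted c, truly other
        fn = sum(cm[c]) - tp                               # truly c, predicted other
        ious.append(tp / (tp + fp + fn) if tp + fp + fn else 0.0)
        precs.append(tp / (tp + fp) if tp + fp else 0.0)
        recs.append(tp / (tp + fn) if tp + fn else 0.0)
    miou = sum(ious) / n_classes
    mprec = sum(precs) / n_classes
    mrec = sum(recs) / n_classes
    # F1 as the harmonic mean of class-averaged precision and recall.
    f1 = 2 * mprec * mrec / (mprec + mrec) if mprec + mrec else 0.0
    return miou, mprec, mrec, f1
```

In practice the label lists would be the flattened ground-truth and predicted masks over the whole test set; averaging per class (rather than per pixel) keeps small defect classes from being dominated by the background.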
From a global perspective, the results obtained indicate that 20-pixel scribbles, together with a relatively high number of superpixels, so that these adhere better to object boundaries, are the best options for both tasks. In comparison with the lower baseline (G1 group), the use of the full loss function, involving the Centroid Loss, clearly improves segmentation performance significantly, falling only slightly below full supervision. Segmentation results derived from clustering are not better for any of the tasks considered.
Figure~\ref{fig:seg_results} shows examples of segmentation results for the visual inspection task. As can be observed, the segmentations resulting from our approach are very similar to those from the upper baseline (E-FULL). Moreover, as expected, results from clustering are basically correct, though they tend to mislabel pixels (false positives) around correct labellings (true positives). Similar conclusions can be drawn for the quality control task, whose results are shown in Fig.~\ref{fig:box_seg_clu_results}.
Summing up, the use of the Centroid Loss has made it possible to train a semantic segmentation network using a small number of labelled pixels. Though the performance of the approach is inferior to that of a fully supervised approach, the resulting gap for the two tasks considered has turned out to be rather small, given the challenges arising from the use of weak annotations.
\begin{sidewaystable}
\centering
\caption{Segmentation results for the full loss function (G3). \textit{Seg} denotes that the segmentation output comes directly from the segmentation network, while \textit{Clu} denotes that the segmentation output is obtained from clustering. *CRF refers to the performance (mIOU) after dense CRF post-processing. Best performance is highlighted in bold.}
\label{tab:annotation_compare}
\small
\begin{tabular}{c|c||c||c|c|c|c||c|c|c|c||c}
\toprule
Task & Experiments & wmIOU & mIOU (seg) & mRec (seg) & mPrec (seg) & F$_1$ (seg) & mIOU (clu) & mRec (clu) & mPrec (clu) & F$_1$ (clu) & *CRF (seg) \\
\hline
\multirow{9}{1.5cm}{Visual Inspection}
& E-SCR2-N & 0.2721 & 0.6995 & 0.6447 & 0.6452 & 0.6449 & 0.6828 & 0.7663 & 0.5803 & 0.6605 & 0.7068 \\
& E-SCR5-N & 0.2902 & 0.7134 & 0.6539 & 0.6542 & 0.6540 & 0.7001 & 0.7447 & 0.6015 & 0.6655 & 0.7212 \\
& E-SCR10-N & 0.3074 & 0.7047 & 0.6797 & 0.6332 & 0.6556 & 0.6817 & 0.7741 & 0.5772 & 0.6613 & 0.7241 \\
& E-SCR20-N & 0.3233 & 0.6904 & 0.6917 & 0.6081 & 0.6472 & 0.6894 & \textbf{0.7816} & 0.5507 & 0.6461 & 0.7172 \\
\cline{2-12}
& E-SCR20-SUP30-N & 0.6272 & 0.6919 & \textbf{0.7937} & 0.7081 & 0.7485 & 0.6987 & 0.7806 & 0.5946 & 0.6750 & 0.7489 \\
& E-SCR20-SUP50-N & 0.6431 & \textbf{0.7542} & 0.7543 & \textbf{0.7567} & \textbf{0.7555} & \textbf{0.7491} & 0.7725 & \textbf{0.6830} & \textbf{0.7250} & \textbf{0.7859} \\
& E-SCR20-SUP80-N & 0.6311 & 0.7294 & 0.7452 & 0.7397 & 0.7424 & 0.7268 & 0.7758 & 0.6200 & 0.6892 & 0.7693 \\
\cline{2-12}
& E-FULL & 1.0000 & 0.8333 & 0.8537 & 0.9119 & 0.8818 & - & - & - & - & 0.8218 \\
\hline
\multirow{4}{1.5cm}{Quality Control}
& E-SCR20-SUP30-N & 0.4710 & 0.7030 & 0.7937 & 0.7924 & 0.7930 & 0.5910 & 0.7600 & 0.6798 & 0.7177 & 0.7142 \\
& E-SCR20-SUP50-N & 0.5133 & 0.7291 & 0.8298 & \textbf{0.8439} & \textbf{0.8368} & 0.6605 & 0.7777 & 0.7332 & 0.7548 & 0.7143 \\
& E-SCR20-SUP80-N & 0.5888 & \textbf{0.7679} & \textbf{0.8303} & 0.8398 & 0.8350 & \textbf{0.6687} & \textbf{0.8630} & \textbf{0.7606} & \textbf{0.8086} & \textbf{0.7707} \\
\cline{2-12}
& E-FULL & 1.0000 & 0.8604 & 0.8058 & 0.8432 & 0.8241 & - & - & - & - & 0.8459 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
\begin{sidewaysfigure*}[htb]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\includegraphics[width=30mm,height=30mm]{org_gk2_fp_exp28_0480.png}
&
\includegraphics[width=30mm,height=30mm]{RGBLabels_gk2_fp_exp28_0480.png}
&
\includegraphics[width=30mm,height=30mm]{full_gk2_fp_exp28_0480_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_seg_gk2_fp_exp28_0480_show.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_seg_gk2_fp_exp28_0480_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_clu_gk2_fp_exp28_0480_cluster.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_clu_gk2_fp_exp28_0480_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_gk2_fp_exp33_1270.png}
&
\includegraphics[width=30mm,height=30mm]{RGBLabels_gk2_fp_exp33_1270.png}
&
\includegraphics[width=30mm,height=30mm]{full_gk2_fp_exp33_1270_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_seg_gk2_fp_exp33_1270_show.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_seg_gk2_fp_exp33_1270_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_clu_gk2_fp_exp33_1270_cluster.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_clu_gk2_fp_exp33_1270_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_image072_30.png}
&
\includegraphics[width=30mm,height=30mm]{RGBLabels_image072_30.png}
&
\includegraphics[width=30mm,height=30mm]{full_image072_30_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_seg_image072_30_show.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_seg_image072_30_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_clu_image072_30_cluster.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_clu_image072_30_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_image026.png}
&
\includegraphics[width=30mm,height=30mm]{RGBLabels_image026.png}
&
\includegraphics[width=30mm,height=30mm]{full_image026_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_seg_image026_show.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_seg_image026_show.png}
&
\includegraphics[width=30mm,height=30mm]{scr20_clu_image026_cluster.png}
&
\includegraphics[width=30mm,height=30mm]{sup80_clu_image026_cluster.png}
\\
\footnotesize original image &
\footnotesize full mask &
\footnotesize E-FULL &
\footnotesize E-SCR20-N (seg) &
\footnotesize E-SCR20-SUP50-N (seg) &
\footnotesize E-SCR20-N (clu) &
\footnotesize E-SCR20-SUP50-N (clu)
\end{tabular}
\caption{Examples of segmentation results for the visual inspection task: (1st column) original images, (2nd column) full mask, (3rd column) results of the fully supervised approach, (4th \& 5th columns) segmentation output for E-SCR20-N and E-SCR20-SUP50-N after dense CRF, (6th \& 7th columns) segmentation output from clustering for the same configurations.}
\label{fig:seg_results}
\end{sidewaysfigure*}
\begin{figure*}[htb]
\centering
\begin{tabular}{@{\hspace{0mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}}
\includegraphics[width=30mm,height=30mm]{org_GOPR0886_0002.png}
&
\includegraphics[width=30mm,height=30mm]{gt_GOPR0886_0002.png}
&
\includegraphics[width=30mm,height=30mm]{full_GOPR0886_0002_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_GOPR0886_0002_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_GOPR0886_0002_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_org.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_gt.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_show_full.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_show_seg.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0892_3039_show_clu.png}
\\
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_org.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_gt.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_show_full.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_show_seg.png}
&
\includegraphics[width=30mm,height=30mm]{GOPR0888_0019_show_clu.png}
\\
\includegraphics[width=30mm,height=30mm]{org_GOPR1637_0553.png}
&
\includegraphics[width=30mm,height=30mm]{gt_GOPR1637_0553.png}
&
\includegraphics[width=30mm,height=30mm]{full_GOPR1637_0553_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_GOPR1637_0553_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_GOPR1637_0553_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_image0482.png}
&
\includegraphics[width=30mm,height=30mm]{gt_image0482.png}
&
\includegraphics[width=30mm,height=30mm]{full_image0482_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_image0482_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_image0482_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_image0536.png}
&
\includegraphics[width=30mm,height=30mm]{gt_image0536.png}
&
\includegraphics[width=30mm,height=30mm]{full_image0536_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_image0536_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_image0536_cluster.png}
\\
\includegraphics[width=30mm,height=30mm]{org_IMG_20180313_152303.png}
&
\includegraphics[width=30mm,height=30mm]{gt_IMG_20180313_152303.png}
&
\includegraphics[width=30mm,height=30mm]{full_IMG_20180313_152303_show.png}
&
\includegraphics[width=30mm,height=30mm]{seg_IMG_20180313_152303_show.png}
&
\includegraphics[width=30mm,height=30mm]{clu_IMG_20180313_152303_cluster.png}
\\
\footnotesize original image &
\footnotesize full mask &
\footnotesize E-FULL &
\footnotesize E-SCR20-SUP80-N (seg) &
\footnotesize E-SCR20-SUP80-N (clu)
\end{tabular}
\caption{Examples of segmentation results for the quality control task: (1st column) original images, (2nd column) full mask, (3rd column) results of the fully supervised approach, (4th column) segmentation output from E-SCR20-SUP80-N after dense CRF, (5th column) segmentation output from clustering for the same configuration.}
\label{fig:box_seg_clu_results}
\end{figure*}
\section{Conclusions and Future Work}
\label{sc:conclusions}
This paper describes a weakly-supervised segmentation approach based on Attention U-Net. The loss function comprises three terms, namely a partial cross-entropy term, the so-called Centroid Loss, and a regularization term based on the mean squared error. All three are jointly optimized within an end-to-end learning model. Two industry-related application cases and the corresponding datasets have been used as benchmarks for our approach. As reported in the experimental results section, our approach achieves competitive performance with regard to full supervision, at a reduced labelling cost for generating the necessary semantic segmentation ground truth. Under weak annotations of varying quality, our approach has been able to achieve good segmentation performance, counteracting the negative impact of the imperfect labellings employed.
The performance gap between our weakly-supervised approach and the corresponding fully-supervised approach has proven to be rather small in terms of mIOU. As for precision and recall, they are quite similar for the quality control task for both the weakly-supervised and the fully-supervised versions. A non-negligible difference is, however, observed for the visual inspection task, which suggests looking for alternatives even less sensitive to the imperfections of the ground truth deriving from the weak annotations, aiming at closing the aforementioned gap. In this regard, future work will focus on other deep backbones for semantic segmentation, e.g. DeepLab.
\bibliographystyle{unsrt}
\section{\label{sec:introduction}Introduction}
Spintronics has proven its worth by causing a revolution in data storage \cite{1fert}. Nevertheless, the majority of spintronics devices continue to function via mobile electrons, which inherently dissipate power due to resistive losses. Propagating excitations of localized magnetic moments can also carry spin currents. However, in metals, spin excitations strongly attenuate due to considerable viscous damping. Only in magnetic insulators do spin currents propagate with significantly reduced dissipation, since there are no conduction electrons to dissipate heat. The term {\it spin insulatronics} encompasses efforts to transport spin information without electronic carrier transport. Additionally, the clear separation between spin dynamics and electron motion in hybrid systems of magnetic insulators and metals (semiconductors) can induce new states of matter. These new ways of utilizing the spin in magnetic insulators are of fundamental interest and pave the way for new devices. The aim is to facilitate a revolution in information and communication technologies by controlling electric signals through the deployment of antiferromagnetic insulators (AFIs) and ferromagnetic insulators (FIs). For spin insulatronics to succeed, spin signals in magnetic insulators must seamlessly integrate with conventional electronics; this is the only way that the manipulation of a charge signal through an insulator can become feasible and useful in devices.
Experimental and theoretical studies of spin dynamics in ferromagnetic and antiferromagnetic insulators have a long history \cite{Gurevich:CRC96}. The field, however, has experienced a rapid revival in recent years owing to fabrication advances in nanostructured hybrid structures of magnetic insulators and normal metals. It was reported that charge currents in metals can be converted into spin currents, which in turn pass across the metal-insulator interface with surprising efficiency \cite{Kajiwara:nat2010}. Inside the magnetic insulators, the exceptionally low spin memory loss facilitates new ways of long-range spin transport and manipulation. Spin transport and dynamics in magnetic insulator heterostructures are fundamentally different from their counterparts in metals and semiconductors. Significant progress in understanding the similarities and differences with respect to conventional spin transport is emerging.
Spin insulatronics offers several novelties. The small energy losses in insulators enable transport of spin information across distances of tens of microns \cite{2cornelissen,3lebrun}, much farther than in metals. Furthermore, magnetic insulators can transfer and receive spin information in ways unavailable to metals \cite{Heinrich:PRL2011,121jungfleisch,84xiao,80kapelrud,87hamadeh,85du,86demidov}. Since the transport is anisotropic, such devices can also be controlled by using an external field to change the magnetic configuration. In magnetic insulators, it is possible to realize magnon-based transistors \cite{116cornelissen}. One can also envision the electrical control of magnon-based majority gates \cite{Fischer:APL2017}. In other words, critical logical operations can be carried out by spin excitations without mobile electrons.
The reduced dissipation also facilitates quantum coherent phenomena. Magnons are bosons and can condense \cite{4demokritov}. Condensation occurs when a macroscopic fraction of the bosons occupies the lowest energy state and the condensate is phase-coherent. Normally, at equilibrium, condensation occurs below a critical temperature set by the density of the bosons. In driven systems, as in parametric pumping of magnons, condensation is an out-of-equilibrium phenomenon that sets in when the external perturbation is strong enough. One can achieve the state of spin superfluidity \cite{5halperin}, an entirely new route to mediate spin information \cite{95bender,136sonin,137sonin,138takei,142qaiuzadeh}. A boson condensate has a fixed phase. When the phase varies slowly along a spatial dimension, the system carries a supercurrent that is proportional to the gradient of the phase. In the case of magnons, the supercurrent is a pure spin current, and the system can exhibit spin superfluidity. Spin fluctuations such as magnons can also induce new properties in adjacent metals or semiconductors. An exciting possibility is magnon-induced superconductivity \cite{Kargarian:PRL2016,Gong:ScienceAdvances:2017,Rohling:PRB2018}. Magnons can replace the phonons that cause electron pairing in low-temperature superconductors. This replacement opens new ways of controlling unconventional forms of superconductivity. Another possibility is to use spin fluctuations to assist in the creation of exciton condensates \cite{Johansen:PRL2019}.
We will review the recent developments in spin insulatronics. While magnonics \cite{7kruglyak}, the exploration of spin waves in magnetic structures, is part of this emerging field \cite{8cumak}, we focus on the new developments and possibilities exclusively in insulators enabled by electrical and thermal drives and detection in neighboring metals \cite{Kajiwara:nat2010}. These new paths sometimes involve magnons in conventional ways, as in magnonics, and sometimes in new ways, as mediators of attraction between electrons. Beyond magnons, there are also other collective ways to convey spin information in insulators. The title of our review, spin insulatronics, reflects the broader scope of these developments and our focus on spins in insulating materials with ultra-low damping enabling new phenomena. We will consider the semiclassical regime of the generation, manipulation, and detection of low-energy coherent and incoherent spin waves, as well as the collective quantum domain. Furthermore, super spin insulatronics comprises magnon condensation and spin superfluidity, the dissipationless transport of spin, which in (anti)ferromagnetic insulators may occur at room temperature. It also encompasses magnon-induced superconductivity, where the focus is on the new properties of metals in contact with magnetic insulators.
\section{\label{sec:magneticinsulators}Magnetic Insulators}
Ferromagnets are often electrical conductors, since ferromagnetic exchange relies on electron delocalization. Conversely, magnetic insulators are usually governed by indirect antiferromagnetic superexchange. The coupling generates either pure antiferromagnets or, when the magnetizations of the different sublattices do not entirely compensate each other, ferrimagnets. Tuning (or doping) of the two (or more) sublattice structures and adjusting their magnetization from zero for antiferromagnets to rather large values on the order of 100 kA/m are sometimes possible. In magnetic insulators, the only way to carry a spin current is via the localized magnetic moments; the spin flow is the propagation of their local disturbance. In its simplest manifestation, the spin current propagates via spin waves (SWs), or their quanta, magnons. The characteristic frequencies range from GHz to THz, and the associated wavelengths range from $\mu$m to nm \cite{Gurevich:CRC96}. A key feature of magnetic materials is that the SW dispersion relation can be continuously tuned by an external magnetic field over a very wide range. Additionally, changing or controlling the material alters the magnetic anisotropy, which provides additional means for tuning the spin transport properties.
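To make the field-tunability explicit, the long-wavelength dispersion of exchange spin waves in a simple ferromagnet takes the standard textbook form (quoted here as an illustrative sketch, neglecting dipolar and anisotropy contributions):
\begin{equation}
\omega(k) \simeq \gamma\left(\mu_0 H_0 + \frac{2A}{M_s}\,k^2\right),
\end{equation}
where $\gamma$ is the gyromagnetic ratio, $A$ the exchange stiffness, $M_s$ the saturation magnetization, and $H_0$ the applied field, which rigidly shifts the entire magnon band.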
In ferrimagnets, high-frequency excitations (up to THz) exist naturally, not only at very short wavelengths, through excitation of the magnetic moments in the two sublattices. Antiferromagnets are ferrimagnets with no net magnetization and an associated absence of the low-energy dispersion branch. While all magnetic insulators are useful in spin insulatronics, AFIs are of particularly high interest. Their THz response is a real advantage and can facilitate ultrafast spintronics devices. They are also robust against an external magnetic field. Early theories of antiferromagnetic metals as active spintronic elements \cite{11nunez,12haney} inspired their validation as memory devices \cite{13wadley,14marrows}. Significantly for our focus, AFIs have been demonstrated to be good spin conductors \cite{3lebrun,15hahn,16wang,Hou:NPGAsia2019}. The first pieces of evidence were indirect: relatively short-range spin current propagation occurs in AFIs coupled to FIs \cite{15hahn,16wang}. The spin transport properties can be varied by orders of magnitude with external magnetic fields \cite{Qiu:NatMat2018}. Importantly, truly long-range spin transport in an AFI, hematite, has been reported more recently \cite{3lebrun}. AFIs can also be good sources of spin currents \cite{li:nature2020,vaidya:science2020}. Other possible materials for room-temperature operation are the archetypes NiO and CoO, and the magnetoelectric materials Cr\textsubscript{2}O\textsubscript{3} and BiFeO\textsubscript{3}.
Many insulating materials can function as spintronic elements. However, this variety is predominantly unexplored, as most published reports utilized yttrium iron garnet Y\textsubscript{3}Fe\textsubscript{5}O\textsubscript{12} (YIG). YIG is a ferrimagnetic insulator with the lowest known spin dissipation, as characterized by its exceptionally small Gilbert damping constant. Therefore, this material is optimal for propagating SWs. At low energies, the excitations in YIG resemble those of a ferromagnet. In ferromagnets, the excitation frequencies are set by the magnetic anisotropy, which is typically much smaller than the exchange energy. The ferromagnetic resonance energies are therefore in the GHz regime, in contrast to the THz excitations in antiferromagnets. The well-mastered growth of YIG thin films, either by liquid phase epitaxy \cite{17hahn,18dubs,19beaulieu}, pulsed laser deposition \cite{20sun,21kelly,22onbasli,hauser:scirep2016}, or sputtering \cite{wang:prb2013,chang:maglett2014}, has so far prevented other materials from being competitive. Surprisingly, spin propagation across microns in paramagnetic insulators with a larger spin conductivity than YIG has recently been reported \cite{23oyangi}. These results challenge the conventional view of spin transport, and we encourage a further elucidation of their fundamental origin.
Insulating ferrimagnets also potentially represent a fantastic playground for domain wall dynamics and the design of devices derived from the racetrack memory. Indeed, we can harvest two crucial exchange interactions to optimize the dynamical properties. Firstly, the antiferromagnetic (super-)exchange can play a role similar to its influence in antiferromagnets. Secondly, it is possible to induce a Dzyaloshinskii-Moriya interaction (DMI) through a well-chosen interface, mainly using the properties of direct contact with a heavy metal \cite{Avci:NatNan2019,Velez:NatCom2019,Lee:PRL2020}. Rotating the local moments from their equilibrium position generates several torques \cite{Ivanov:LTPhys2019}, including a longitudinal field torque derived from the DMI field, a damping torque proportional to the time derivative of the moments, and one coming from the anisotropy field. For ferrimagnets, there is an additional torque derived from the exchange coupling field. All these torques contribute to the domain wall (DW) motion, but their amplitudes can be very different. Generally, damping and anisotropy field torques are small. The DMI torque can be larger in systems with well-chosen interfaces, but the exchange coupling torque is by far the greatest. It thus provides the main driving mechanism for the DW. The DW dynamics can be modeled by rescaling the gyromagnetic ratio and damping constant \cite{Avci:NatNan2019}. This approach results in an equation of motion for the Néel vector describing the collective dynamics. Unlike in ferromagnets, DWs move very fast because saturation takes place at much higher current densities. This high current-driven mobility enabled by antiferromagnetic spin dynamics allows for DW velocities reaching 800 m/s for a current density around $1.2 \times 10^{12}$ A/m$^2$ \cite{Avci:NatNan2019}. These exciting properties are achieved (again) in garnets, particularly Tm$_3$Fe$_5$O$_{12}$ (TmIG), when in a bilayer with Pt.
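The rescaling invoked above can be made explicit. For a two-sublattice ferrimagnet with sublattice magnetizations $M_{1,2}$, gyromagnetic ratios $\gamma_{1,2}$ and Gilbert damping constants $\alpha_{1,2}$, the collective dynamics is governed, in the standard sketch, by the effective parameters
\begin{equation}
\gamma_{\mathrm{eff}}=\frac{M_1-M_2}{M_1/\gamma_1-M_2/\gamma_2},\qquad
\alpha_{\mathrm{eff}}=\frac{\alpha_1 M_1/\gamma_1+\alpha_2 M_2/\gamma_2}{M_1/\gamma_1-M_2/\gamma_2},
\end{equation}
both of which are strongly enhanced near the angular-momentum compensation point $M_1/\gamma_1=M_2/\gamma_2$, where the fast, antiferromagnet-like DW dynamics emerges.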
These properties are intimately linked to the interface-induced topology of the DWs. The presence of an interfacial Dzyaloshinskii–Moriya interaction in magnetic garnets offers a handle to stabilize chiral spin textures in centrosymmetric magnetic insulators. Interestingly, the domain walls of TmIG thin films grown on Gd$_3$Sc$_2$Ga$_3$O$_{12}$ exhibit left-handed Néel chirality, changing to an intermediate Néel–Bloch configuration upon Pt deposition \cite{Velez:NatCom2019}. The DMI seems, therefore, to emerge from both interfaces, but their exact balance is still unclear \cite{Avci:NatNan2019,Velez:NatCom2019,Xia:APL2020}. Still, recent measurements of the topological Hall effect in ultra-thin TmIG/metal systems reveal the crucial role played by the FI/heavy-metal interface \cite{Lee:PRL2020}.
Concepts related to topology in condensed matter physics have been developed intensely in the past fifteen years. In magnetic materials, topology often arises as a consequence of spin-orbit coupling (SOC) and the breaking of inversion symmetry, as typically exemplified by the Dzyaloshinskii-Moriya interaction. When this interaction is strong, it is sometimes able to curl the spins into hedgehog-type spin arrangements carrying a finite topological charge, the magnetic skyrmions. Their practical implementation in the field of spintronics is an important part of a new research area called spin-orbitronics. Magnetic skyrmions are emerging as a potential information carrier \cite{Bogdanov:JETP1989,Roessler:Nature2006}. Importantly, these topologically robust nanoscale spin textures can be manipulated with low current densities \cite{Schultz:NatPhys2012,Nagaosa:NatNano:2013,Woo:NatMat2016,Nayak:Nature2017,Caretta:NatNan2018}. So far, ferromagnetic skyrmions are the most reliable and stable topological quasi-particles (solitons) at room temperature in real-space condensed matter. They were originally discovered at low temperature and under a large field in bulk metallic non-centrosymmetric compounds \cite{Muhlbauer:Science2009}, but the community rapidly realized that they could also be generated in metallic multilayers with broken inversion symmetry \cite{Heinze:NatPhys2011,Moreau-Luchaire:NatNano:2016,Boulle:NatNano:2016}. These skyrmion systems, based on thin films or multilayers, offer more flexibility and room-temperature operation, raising hopes of using them as ultrasmall bits of magnetic information for mass data storage. As a first step towards a prototypal skyrmion-based device, current-induced displacement was demonstrated \cite{Fert:NatNano2013,Jiang:NatPhys2017}. However, these metallic systems present several drawbacks, including too large a power consumption.
Therefore, it is technologically appealing for low-power information processing to harness skyrmions in insulators, owing to their low damping and the absence of Ohmic losses. It is also of fundamental interest for studying magnon-skyrmion interactions \cite{Schutte:PRB2014}. Skyrmions have been observed in the bulk of one insulating material, the multiferroic Cu$_2$OSeO$_3$, albeit at low temperatures \cite{Seki:Science2012}. Very recent reports show that skyrmions can be generated at room temperature in TmIG/Pt bilayers \cite{Shao:NatEle2019}. This very new avenue for insulating skyrmion-based room-temperature systems will certainly offer opportunities for the next generation of energy-efficient and high-density spintronic devices.
\section{Spin Injection and Detection in Magnetic Insulators}
Insulators, in contrast to their metallic counterparts, have no (polarized) conduction electrons that can inject, detect or transport spin angular momentum. The absence of this simple link between charge and spin requires other types of couplings to the magnetic system. Spin insulatronics aims to deliver, control, and eventually measure electric signals associated with spins in insulators. Fig.\ \ref{fig:control} summarizes the five processes that one could use to interconvert spin information in insulators and another signal type: inductive, magnetooptical, magnetoelastic, magnetoelectric, and spin-transfer processes. All of these processes have been envisioned but, thus far, most reported measurements in spin insulatronics have used electric current-driven spin current injection and detection. Before focusing on this approach, we present alternative means, including strain, light, and electric and magnetic fields.
\begin{figure}[htbp]
\includegraphics[width=0.99\columnwidth]{control.png}
\caption{Different ways to control the spin in magnetic materials. Inductive coupling with microwaves, elastic vibrations, optical irradiations, and electric fields causing spin-orbit torques exert torques on the magnetic moments.}
\label{fig:control}
\end{figure}
\subsection{Probing Spins via Strain, Light, Electric and Magnetic fields}
The classical way to excite and detect SWs is inductive coupling. To excite SWs, a wire carrying an electric current is placed next to the magnetic insulator. In turn, the current induces a magnetic field via Amp\`ere's law. Similarly, to detect SWs, we can measure the electric currents induced by the stray fields of the SWs. However, the inductive coupling approach has shortcomings when applied at the microscopic scale exploited by spin. The spatial resolution of the nanopatterning technology used to fabricate a microwave antenna limits the smallest wavelength that can be excited and detected \cite{24yu}. In practice, the resolution limit is in the submicron range. Therefore, the inductive scheme only addresses SWs with energies that are orders of magnitude below those of thermal magnons. More importantly, the coupling is a volumetric effect. The wavelength mismatch between the microwaves and the SWs makes the process inherently inefficient in detecting SWs in thin films \cite{25collet}, implying either poor sensitivity or an inability to drive high-power excitation.
Applying strain is another method for controlling magnetic excitations. Magnetostriction is very common in magnetic materials \cite{26tremolet}. In antiferromagnets, strain can also couple to the Néel vector through linear magnetoelastic coupling \cite{27slack,28greenwald}. The ability to trigger collective excitations using magnetoelastic coupling \cite{29dreher} and to interconvert these excitations and elastic waves \cite{30kittel} through interdigitated piezoelectric transducers is an alternative way to detect angular momentum \cite{31holanda,32zhang}. Spin pumping in YIG using acoustic waves, providing an electrically tunable source of spin current \cite{33matthews}, albeit at rather low frequencies (MHz), has also been demonstrated. The use of magnon-polaron processes \cite{34kikkawa} enables investigations of high-frequency spin transport in ferromagnetic materials. Moreover, the development of high-frequency coherent shear acoustic waves \cite{35litvinenko} has opened opportunities for coupling magnons in antiferromagnets with acoustic phonons \cite{36watanbe,37ozhogin}.
Magneto-optical techniques suffer from weak coupling to the magnetic information \cite{38tabuchi,39jorzick}. Nevertheless, hybridization of whispering-gallery optical modes with Walker SW modes propagating in the equatorial plane of a YIG sphere was recently demonstrated \cite{40osada}. The development of ultrafast light sources has enabled triggering and detection of magnetic excitations in ferromagnets and antiferromagnets in the time domain. This control is achieved using ultrafast femtosecond lasers \cite{41reid} in a pump-probe fashion, through different interactions. Ultrafast shock wave generation \cite{42kim,43ruello} can generate magnons, either directly inside the material of interest or with the help of a thin layer of a transducer material \cite{44kovalenko}. Spin currents can also be injected through ultrafast demagnetization \cite{45choi} or, even more directly, via the inverse Faraday effect \cite{46vander,47mikhaylovskiy,48satoh}. Other photoinduced mechanisms are the ultrafast change of anisotropy used to trigger SWs in NiO \cite{49duong,50rubano} and the direct torque induced by the magnetic component of THz monocycle pulses \cite{51kampfrath}. Notably, many of the pump-probe studies have been performed on bulk insulators because optimal dynamical properties require the low damping obtained when electrical currents cannot flow. The detection of signals from the magnetic excitations often requires a significant sample volume. Thus, measurements carried out on thin films in the framework of spintronics and magnonics remain scarce. Moreover, precise control of the SW emission using these techniques is currently lacking.
In materials where magnetic and electric orders are intrinsically coupled, pure electric fields can enable electric control of magnetic properties. Such magnetoelectric phenomena have been at the center of research since the turn of the century \cite{52eerendtein}. These ``multiferroic'' compounds have rich physics and are potentially appealing for applications, especially when they are ferromagnetic and ferroelectric \cite{53hill}. A significant magnetoelectric coupling between the two order parameters allows for magnetic manipulation of the ferroelectricity or, conversely, electrical control of the magnetic order parameter \cite{54fiebig,55hur}. A common way of expressing this coupling is by introducing terms into the free energy that couple the polarization and the magnetization (or, more often, the antiferromagnetic Néel vector). The coupling can be linear or quadratic \cite{56pyatakov}. Most magnetic multiferroics are antiferromagnets, including the archetypal BiFeO$_3$ (BFO), the only multiferroic with both of its ordering temperatures well above 300~K \cite{57catalan}. Rotating the ferroelectric polarization in this compound results in a rotation of the sublattice Néel vector \cite{58zhao,59lebeungle}, a property used to design low-consumption memories \cite{60allibe}. Surprisingly, the dynamical properties of the magnetoelectric coupling and its utilization in the design of magnonic structures are underexplored. Conceptually, it has been shown that magnons in magnetoelectrics can be hybrid entities because of their coupling to the electric order \cite{61rovillain,62pimenov}. In principle, one can envision that electric fields could launch the resulting ``electromagnons'' to generate and control magnonic transport at the micron scale, a feature not yet demonstrated. In any case, the magnetoelectric effect could be very useful in the field of spin insulatronics, particularly for addressing and controlling antiferromagnets, for which magnetic fields are inoperative.
\subsection{Magneto-elastic Coupling}
Interest in magneto-elastic properties at high frequencies started at the end of the 1950s. The landmark is probably the seminal paper of Kittel \cite{30kittel}, who explained the benefits of coupling acoustic waves (AW) with spin waves (SW) in order to confer a tunable and non-reciprocal character on the former. Since then, the subject has accumulated a vast literature spanning a wide range of ferromagnetic materials, from metals \cite{Seavey:IEEE1963} to magnetic semiconductors \cite{Thevenard:PRB2014,Thevenard:PRB2016} and electrical insulators \cite{Fine:PR1954,Gibbons:JAP1957,Spencer:PRL1958}. From the start, ferrite garnets \cite{Lecraw:1965} have been the subject of intense focus because of their ultra-low internal friction, a feature beneficial not only to the magnetization dynamics but also to ultrasonic propagation. Benchmarking the latter, the sound attenuation in garnets (both YIG and GGG) turns out to be ten times lower than in quartz \cite{Lecraw:1965}. This remarkable property led to an intense research program in the 60s, in particular at Bell Labs. Spencer and LeCraw envisioned tunable delay lines relying on the gyromagnetic effect as front-end microwave analog filters for heterodyne detection of wireless signals. Their efforts were thorough: they went as far as putting a YIG sphere in magnetic levitation \cite{Lecraw:1965} in order to minimize contact losses of the sound waves and thereby probe the intrinsic limits of acoustic decay in these materials. Eventually, garnet delay lines never reached the market, principally because of the reduced piezoelectric coefficients of these materials, which imply insertion losses for phonon interconversion that are orders of magnitude higher than in their non-magnetic counterparts.
The subject of phononic currents inside magnetic insulators has gained renewed interest with the recent developments of spintronics \cite{Streib:PRL2018,an:PRB2020} and of quantum information technology. Current research focuses on two properties of the magneto-elastic coupling. The first is the ability to reach the strong coupling regime between magnons and phonons, which enables the coherent transfer of information \cite{Bienfait:Science2019}. The second is the ability of circularly polarized phonons to carry angular momentum \cite{30kittel,Garanin:PRB2015,Holanda:NatPhys2018,rueckriegel:PRB2020,rueckriegel:PRL2020}, allowing sound waves to carry spin information across distances that exceed previous benchmarks set by magnon diffusion \cite{2cornelissen,3lebrun} by orders of magnitude.
In ferromagnets, the hybridization between spin waves (magnons) and lattice vibrations (phonons) stems from the magnetic anisotropy and the strain dependence of the magnetocrystalline energy \cite{30kittel,Bommel:PRL1959,Damon:IEEE1965,Seavey:IEEE65}. This implies that the crystal growth orientation and the magnetic configuration determine the selection rules for the coupled eigenvectors. In the following, we shall concentrate on thin films magnetized along the normal direction ($z$-axis). In this case, the dominant coupling with the Kittel mode is achieved by coherent acoustic shear waves. The coupled equations of motion,
\begin{equation}
(1-i \alpha_m ) \omega m^+ = \gamma (H m^+ - D \partial_z^2 m^+ + B \partial_z u^+ - M_s h^+)
\label{eq:m+}
\end{equation}
and
\begin{equation}
-(1-2 i \alpha_a) \omega^2 \rho u^+ = C_{44} \partial_z^2 u^+ + \frac{B}{M_s} \partial_z m^+
\label{eq:u+}
\end{equation}
are governed by the vertical derivatives of the rotating fields. In Eqs.\ (\ref{eq:m+}) and (\ref{eq:u+}), $m^+$, $u^+$, and $h^+$ are respectively the circularly polarized oscillating parts of the magnetization, the lattice displacement, and the rf magnetic field. $H$ is the effective bias magnetic field, comprising the externally applied field, the anisotropy field, and the demagnetizing field; $D$ is the exchange constant; $B=(B_2+2B_1 )/3=7 \times 10^5$~J/m$^3$ is the effective coupling constant, with $B_1$ and $B_2$ being the magneto-elastic coupling constants for a cubic crystal; $\rho =5.1$~g/cm$^3$ is the mass density; $\alpha_m$ and $\alpha_a$ are respectively the magnetic and acoustic damping coefficients; and finally $C_{44}$ is the elastic constant, which governs the transverse sound velocity through the relation $v=\sqrt{C_{44}/\rho}$ (see chapter 12 of Ref.\ \cite{Gurevich:CRC96}). The set of coupled equations (\ref{eq:m+}) and (\ref{eq:u+}) is more easily handled by linearizing Eq.\ (\ref{eq:u+}) around the Kittel frequency $\omega_m$ and integrating the coupling over the total crystal thickness. The integration is done here for a phononic bilayer system consisting of a YIG thin film grown on top of a GGG substrate, a non-magnetic dielectric (see Fig.\ \ref{fig:magnonphononhybrid}c). We neglect any impedance mismatch between the two layers, so that for all practical purposes the sound properties are governed by the total crystal thickness. In our notation, $d$ and $s$ are respectively the thickness of the magnetic layer (red) and of the substrate (grey) in Fig.\ \ref{fig:magnonphononhybrid}c. Assuming that both waveforms $m^+$ and $u^+$ are plane waves, the coupled equations of motion simplify to the standard form
\begin{equation}
(\omega - \omega_m + i \alpha_m \omega) m^+ = \frac{\Omega}{2} u^+ + \kappa h^+
\label{eq:msimple}
\end{equation}
and
\begin{equation}
(\omega - \omega_a + i \alpha_a \omega) u^+ = \frac{\Omega}{2} m^+
\label{eq:usimple}
\end{equation}
where $\kappa$ is the inductive coupling to a microwave antenna and $\Omega$ is the magneto-elastic overlap integral between the AW and the SW
\begin{equation}
\Omega = \frac{B}{\sqrt{2}} \sqrt{\frac{\gamma}{\omega M_s \rho s d}} \left(1 - \cos{\omega \frac{d}{v}} \right) \, .
\label{eq:Omega}
\end{equation}
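As a numerical illustration (a sketch, not part of the original analysis), Eq.\ (\ref{eq:Omega}) can be evaluated directly. The saturation magnetization $M_s$ and gyromagnetic ratio $\gamma$ below are assumed typical YIG values, not quantities quoted in the text; all other inputs are taken from the paragraphs above.

```python
import math

def overlap_integral(f, B=7e5, rho=5.1e3, Ms=1.4e5,
                     gamma=1.76e11, d=200e-9, s=0.5e-3, v=3.53e3):
    """Magneto-elastic overlap Omega (rad/s) of Eq. (Omega).

    B [J/m^3], rho [kg/m^3], d, s [m] film/substrate thickness,
    v [m/s] transverse sound velocity (GGG value, impedance mismatch
    with YIG neglected as in the text). Ms [A/m] and gamma [rad/(s T)]
    are assumed typical YIG values, not figures from the text.
    """
    w = 2 * math.pi * f
    prefactor = (B / math.sqrt(2)) * math.sqrt(gamma / (w * Ms * rho * s * d))
    return prefactor * (1 - math.cos(w * d / v))

# Near the 5.5 GHz working point discussed below, the coupling comes
# out of order 1 MHz, comparable to the damping rates of both waveforms.
print(overlap_integral(5.5e9) / (2 * math.pi))
```

With these assumptions the coupling at 5.5~GHz is of order 1~MHz, consistent with the gap fitted in the experiment described below.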
\begin{figure}[htbp]
\includegraphics[width=0.99\columnwidth]{magnonphononhybrid.png}
\caption{
a) Schematic representation of the dispersion relations of acoustic waves (AW) and spin waves (SW). Magneto-elasticity leads to new hybrid quasi-particles, ``magnon polarons'', when the spin-wave and acoustic-wave dispersions anti-cross. Note that a linearly polarized field can always be decomposed into two circularly polarized fields $u^+$ and $u^-$ rotating in opposite directions. Since spin waves are circularly polarized vectors rotating around their equilibrium direction anti-clockwise (i.e., in the gyromagnetic direction), the coherent coupling confers a finite angular momentum on the acoustic waves. b) Frequency dependence of the overlap integral $\Omega$ defined by Eq.\ (\ref{eq:Omega}). c) Schematic illustration of coherent shear waves propagating in a perpendicularly magnetized thin film of thickness $d$ on a substrate of thickness $s$.
}
\label{fig:magnonphononhybrid}
\end{figure}
Although the absolute value of $B$ is usually quite small in garnets, the strong coupling regime remains accessible because the resulting coupling rate $\Omega$ is comparable to the damping rates of the two hybrid waveforms. To illustrate this point, we have traced in Fig.\ \ref{fig:magnonphononhybrid}b the frequency variation of the overlap integral $\Omega$ for a $d=200$~nm thick YIG film deposited on top of a $s=0.5$~mm GGG substrate. The oscillating behavior of $\Omega$ reflects the fact that the optimal coupling occurs when the film thickness reaches the half-wave condition. Consequently, there are sweet spots in the frequency spectrum at which the coupling $\Omega$ is maximal, and these occur at finite rf frequencies ($\Omega=0$ at $\omega=0$). In the case of our 200~nm thick YIG film, the first maximum occurs at about 6~GHz.
Reaching the strong coupling regime requires that the cooperativity, the dimensionless ratio of the coupling strength squared to the product of the relaxation rates of the two hybrid waveforms, $C= \Omega^2/(2 \eta_a \eta_m)$, exceeds unity. The signature of the strong coupling regime is the avoided crossing of energy levels shown in Fig.\ \ref{fig:magnonphononhybrid}a, which proves that a coherent hybridization between the two waveforms occurs, leading to the formation of new hybrid quasiparticles called ``magnon polarons''. Since, in our case, the excitation of the magnetic order conserves the axial symmetry, the magnetic excitation must be purely circularly polarized, as shown in Fig.\ \ref{fig:magnonphononhybrid}a. The magneto-elastic interaction (a conservative linear tensor) transfers this polarization faithfully to the elastic shear-wave deformation. Thus the circular polarization of the phonons is a consequence of the axially symmetric configuration and the associated conservation of angular momentum (Noether's theorem).
In the following, we illustrate experimentally that it is indeed possible to reach strong coupling between the Kittel mode and coherent acoustic shear waves by working at high frequency in the normal configuration \cite{an:PRB2020}. The crystal is an epitaxial YIG film of 200~nm thickness deposited on top of a 0.5~mm GGG substrate. Fig.\ \ref{fig:densityplotFMR} shows the FMR absorption of the Kittel mode around 5.56~GHz. These spectra are recorded in the perpendicular configuration, where the magnetic precession is precisely circular. A fine periodic structure appears in the absorption spectrum when one performs a very fine scan of both the external magnetic field and the frequency. A canted arrow labeled FMR indicates the Kittel relation linking the field and the frequency. The absorption signal is modulated with a 3.50~MHz periodicity by standing phonon modes that resonate across the whole crystal of thickness $s+d$. The quantization of their wavevector is governed by the transverse sound velocity inside GGG along (111), $v=3.53 \times 10^3$~m/s \cite{Ye:PRB1991}, via $v/(2d+2s) \approx 3.53$~MHz. At 5.5~GHz, the intersection between the transverse AW and SW dispersion relations occurs at $2 \pi/ \lambda_n = \omega/v \approx 10^5$~cm$^{-1}$, which corresponds to a phonon wavelength of about $\lambda_n \approx 700$~nm with index number $n \approx 1400$. The modulation is strong evidence for the high acoustic quality, which allows elastic waves to propagate coherently with a decay length exceeding twice the substrate thickness, i.e., 1~mm in this case.
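The orders of magnitude quoted in this paragraph can be cross-checked with a short script (a sketch; only the film and substrate thicknesses, the GGG shear velocity, and the 5.5~GHz working point from the text are used):

```python
import math

v = 3.53e3              # transverse sound velocity in GGG along (111), m/s
d, s = 200e-9, 0.5e-3   # YIG film / GGG substrate thickness, m
f = 5.5e9               # working frequency, Hz

# Standing shear waves across the whole crystal: resonances spaced by
# the round-trip condition v / (2 (d + s)).
mode_spacing = v / (2 * (d + s))     # ~3.5 MHz

# Wavevector and wavelength where the AW branch crosses the SW branch.
k = 2 * math.pi * f / v              # ~1e7 m^-1, i.e. ~1e5 cm^-1
wavelength = v / f                   # ~0.6-0.7 um
mode_index = f / mode_spacing        # of order 1.5e3

print(mode_spacing / 1e6, wavelength * 1e9, round(mode_index))
```

This recovers the few-MHz phonon comb, the $\sim 10^5$~cm$^{-1}$ crossing wavevector, and a mode index of order $10^3$ quoted above.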
\begin{figure}[htbp]
\includegraphics[width=0.99\columnwidth]{densityplotFMR.png}
\caption{
a) Density plot of the ferromagnetic absorption signal of a YIG(200~nm)|GGG(0.5~mm) epitaxial crystal, revealing the anti-crossing between the FMR mode (indicated by the blue arrow) and the coherent standing shear acoustic waves that resonate across the total crystal thickness. These acoustic resonances form a regular comb of absorption lines at $\omega_n=n \pi v/(d+s)$, where $n$ is the mode number, thus producing a periodic modulation at 3.50~MHz represented in the figure as horizontal dashed lines in orange and green. The right panels (b,c,d) show the intensity modulation for three different cuts (blue, magenta, and red) along the gyromagnetic ratio, i.e., parallel to the resonance condition. The solid lines in the four panels are fits by the coupled oscillator model (cf.\ Eqs.\ (\ref{eq:msimple}) and (\ref{eq:usimple})). Figure from Ref.\ \cite{an:PRB2020}.
}
\label{fig:densityplotFMR}
\end{figure}
We also show in Fig.\ \ref{fig:densityplotFMR} what happens to the line shapes when one records the modulation at detunings parallel to the FMR resonance as a function of field and frequency, as indicated by the blue, magenta, and red cuts. The amplitude of the main resonance (blue) in Fig.\ \ref{fig:densityplotFMR}b dips and the lines broaden at the phonon frequencies \cite{Ye:PRB1991}. The minima transform, via a dispersive-looking signal (magenta), into peaks (red) once sufficiently far from the Kittel resonance, as expected from the complex impedance of two detuned resonant circuits, illustrating a constant phase between $m^+$ and $u^+$ along these cuts. Such behavior is already the signature of a coherent coupling between the two waveforms.
The observed line shapes can be used to extract the lifetime parameters of both the acoustic and the magnetic waveforms. The 0.7~MHz full linewidth of the acoustic resonances in Fig.\ \ref{fig:densityplotFMR}d yields the AW relaxation rate $\eta_a/(2 \pi) =0.35$~MHz, taken as the half linewidth of the acoustic resonance. The SW relaxation rate $\eta_m$ follows from the broadening of the absorbed power at the Kittel condition, which contains a constant inhomogeneous contribution and a frequency-dependent viscous damping term $\eta_m= \alpha_m \omega_m$. When the linewidth is plotted as a function of frequency, the slope gives the homogeneous (Gilbert) contribution, corresponding to $\eta_m/(2 \pi) = 0.50$~MHz at 5.5~GHz.
A remarkable feature in Fig.\ \ref{fig:densityplotFMR}a is the clearly resolved avoided crossing of the SW and AW dispersion relations, which proves the strong coupling between the two oscillators. Fitting by hand the dispersions of two coupled oscillators through the data points (white lines), we extract a gap of $\Omega /(2 \pi)=1$~MHz and thus a large cooperativity $C \approx 3$, in agreement with the observation of the avoided crossing. This coupling value can then be injected into the set of coupled equations (\ref{eq:msimple}) and (\ref{eq:usimple}) to infer the expected behavior at the various detunings. The results are shown as solid lines in the three panels of Fig.\ \ref{fig:densityplotFMR}b-d. Such agreement between the data and the model establishes that efficient power transfer can be achieved between the magnon tank and the phonon tank.
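As a consistency check (a sketch using only the fitted numbers quoted above), the coupled-oscillator model of Eqs.\ (\ref{eq:msimple}) and (\ref{eq:usimple}) reproduces both the cooperativity and the size of the gap:

```python
import cmath

# Fitted relaxation rates and coupling from the experiment, in units
# of 2*pi*Hz (i.e., ordinary frequencies).
eta_a = 0.35e6       # acoustic half-linewidth
eta_m = 0.50e6       # magnetic relaxation rate at 5.5 GHz
Omega = 1.0e6        # magneto-elastic coupling (fitted gap)

# Cooperativity C = Omega^2 / (2 eta_a eta_m); C > 1 means strong coupling.
C = Omega**2 / (2 * eta_a * eta_m)

def eigenfrequencies(delta):
    """Complex eigenfrequencies of the 2x2 coupled-oscillator model
    for a magnon-phonon detuning delta (frequencies relative to the
    magnon mode, all in Hz)."""
    wm = 0.0 - 1j * eta_m
    wa = delta - 1j * eta_a
    mean, half = (wm + wa) / 2, (wm - wa) / 2
    root = cmath.sqrt(half**2 + (Omega / 2)**2)
    return mean + root, mean - root

# At zero detuning the real parts repel by ~Omega: the avoided crossing.
lam_p, lam_m = eigenfrequencies(0.0)
gap = abs(lam_p.real - lam_m.real)
print(C, gap)
```

The cooperativity comes out close to 3 and the minimum splitting close to 1~MHz, as quoted above; the small damping asymmetry barely reduces the gap below $\Omega$.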
These findings unambiguously show that magnets can act as sources and detectors of phononic angular momentum currents, and they suggest that these currents should be able to provide a coupling analogous to the dynamic coupling in metallic spin valves, but with an insulating spacer, over much larger distances, and in the ballistic/coherent rather than the diffuse/dissipative regime \cite{an:PRB2020,rueckriegel:PRL2020}. These findings might also have implications for non-local spin transport experiments \cite{2cornelissen,116cornelissen,139cornelissen}, in which phonons provide a parallel channel for the transport of angular momentum.
\subsection{Spin Hall Effect}
The recent discovery of spin-transfer and spin-orbit effects enables injection of an external spin current through the interface from an adjacent nonmagnetic layer \cite{63brataas}. This method provides direct electric control of spin transport and has overcome many limitations of previously established routes. We will, therefore, focus on this method. Electric currents passing through conductors can generate pure spin signals. In metals with a significant spin-orbit interaction, such as Pt and W, the spin Hall effect (SHE) converts a charge current into a transverse spin current \cite{64sinova}. The generated spin current is
\begin{equation}
j_{\alpha \beta}^{(s)}=\theta_{s H} \varepsilon_{\alpha \beta \gamma} j_{\gamma}^{(c)}
\end{equation}
where \( \theta _{sH} \) is the spin Hall angle, and \( \varepsilon _{\alpha \beta \gamma} \) is the Levi-Civita tensor. The charge current \( j_{\gamma}^{ \left( c \right) } \) flows along the \( \gamma \) direction. The spin current \( j_{\alpha \beta}^{ \left( s \right) } \) flows along the \( \beta \) direction and is polarized along the \( \alpha \) direction. The generated transverse spin current can, in turn, be injected into magnetic insulators. Detection of spin currents is also feasible via the reciprocal effect, the inverse spin Hall effect (ISHE), in which a spin current causes a secondary transverse charge current proportional to the primary spin current
\begin{equation}
j_{\alpha}^{(c)}=\theta_{s H} \varepsilon_{\alpha \beta \gamma} j_{\beta \gamma}^{(s)} \, .
\end{equation}
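The index bookkeeping of the two relations above can be made concrete with a few lines of code (a sketch; the value $\theta_{sH}=0.1$ is an assumed order of magnitude for Pt, not a figure from the text):

```python
def levi_civita(a, b, c):
    """Levi-Civita symbol for indices in {0, 1, 2} (x, y, z)."""
    return (a - b) * (b - c) * (c - a) // 2

theta_SH = 0.1   # assumed spin Hall angle, typical order of magnitude for Pt

def she(jc):
    """Spin current tensor j^s[alpha][beta] from a charge current vector."""
    return [[theta_SH * sum(levi_civita(a, b, g) * jc[g] for g in range(3))
             for b in range(3)] for a in range(3)]

def ishe(js):
    """Charge current vector from a spin current tensor (inverse SHE)."""
    return [theta_SH * sum(levi_civita(a, b, g) * js[b][g]
                           for b in range(3) for g in range(3))
            for a in range(3)]

# A charge current along x generates a spin current flowing along z,
# polarized along y - the geometry exploited in YIG/Pt bilayers.
jc = [1.0, 0.0, 0.0]
js = she(jc)
print(js[1][2])    # theta_SH * jc_x
print(ishe(js))    # the SHE -> ISHE round trip: 2 * theta_SH^2 * jc
```

The round trip SHE $\to$ ISHE returns a charge current proportional to $\theta_{sH}^2$, which is why effects relying on this double conversion, such as the spin Hall magnetoresistance, scale quadratically with the spin Hall angle.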
The magnitude of the spin Hall angle $\theta_{s H}$ is an important factor. Its deduced value differs in various experiments. Care should be taken in extracting values of the spin Hall angle from different experiments \cite{Nguyen:PRL2016,Zhu:PRL2019}.
Rashba-coupled interfaces \cite{65sanchez} or topological insulators, such as Bi\textsubscript{2}Se\textsubscript{3}, also facilitate an analogous spin-charge coupling \cite{66mellnik}. The spin Hall magnetoresistance, i.e., the dependence of the resistance of a metal on the magnetic configuration of an adjacent insulator, can be used to probe ferromagnetic \cite{17hahn,67nakayama} and antiferromagnetic arrangements \cite{68aqeel}.
\subsection{Spin-transfer, Spin-orbit torques, and Spin-pumping}
Spin angular momentum can flow from metals to magnetic insulators or in the opposite direction via the exchange coupling at the metal-insulator interfaces. At these connections, the energy depends on the relative orientations of the localized and the itinerant spins. A disturbance in either of the spin subsystems can therefore propagate from metals into insulators and vice versa. In the metal, spin-polarized transport or spin-orbit coupling together with charge transport can cause a spin imbalance, resulting in a spin-transfer torque (STT) or spin-orbit torque (SOT), respectively. With the spin-transfer torques in ferromagnets, a spin accumulation, resulting, for instance, from the spin Hall effect, is transferred as a torque on the magnetization \cite{63brataas,89berger,90slonczewski,Waintal:PRB2000,Brataas:PRL2000,Brataas:EPJ2001,91stiles},
\begin{equation}
\tau=a m \times\left(m \times \mu^{(s)}\right)
\end{equation}
where $a$ is a measure of the efficiency, proportional to the transverse spin (spin mixing) conductance \cite{Brataas:PRL2000,Brataas:EPJ2001}, $m$ is a unit vector along the magnetization direction, and \( \mu ^{ \left( s \right) } \) is the spin accumulation. The reciprocal effect also exists: a dynamical magnet pumps spin currents into adjacent conductors,
\begin{equation}
j_{i z}^{(s)}=b m \times \frac{\partial m}{\partial t} \, ,
\end{equation}
where the spin-pumping efficiency $b$ is related to the spin-transfer efficiency $a$ \cite{81tserkovnyak,83tserkovnyak}. While spin-pumping and spin-transfer torque are reciprocal effects, in insulators, the former effect is easier to measure. Analogous expressions exist for the spin-transfer torques in AFIs \cite{69cheng} where the unit vector along the magnetization direction can be replaced by the unit vector along the Néel vector.
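The geometric content of the torque expression above can be checked with elementary vector algebra (a sketch; the normalization $a=1$ is arbitrary):

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def stt(m, mu, a=1.0):
    """Damping-like spin-transfer torque a * m x (m x mu)."""
    return [a * c for c in cross(m, cross(m, mu))]

# Spin accumulation along y (as produced by the SHE for a charge
# current along x). The instantaneous torque is maximal when m is
# orthogonal to mu...
print(stt([0.0, 0.0, 1.0], [0.0, 1.0, 0.0]))   # points along -y

# ...and vanishes identically when m and mu are collinear.
print(stt([0.0, 1.0, 0.0], [0.0, 1.0, 0.0]))   # zero torque
```

For damping compensation, the relevant configuration has the equilibrium magnetization collinear with the spin accumulation, so that the torque acts on the small precessing component, a geometry that reappears in the YIG/Pt experiments discussed below.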
A broad range of experiments on a variety of systems have unambiguously established spin-pumping in insulating ferromagnets \cite{70sandweg,71sandweg,72vilela,73rezende,74azenvedo,75burrowes,76kurebayashi,77kurebayashi,78hahn,wang:prb2013}. Theoretically, spin-pumping is predicted to be as strong from FIs \cite{80kapelrud} and AFIs \cite{69cheng,Kamra:PRL2017,79johansen} as from ferromagnetic metals \cite{81tserkovnyak,83tserkovnyak,82brataas}. This essential mechanism should, therefore, be robust. Indeed, very recently, two independent groups provided direct demonstrations of spin-pumping from AFIs \cite{li:nature2020,vaidya:science2020}.
Spin-transfer and spin-orbit torques provide new avenues to alter the magnon energy distribution in insulators \cite{84xiao,85du,86demidov}. However, the generation of magnons in these ways lacks frequency selectivity, and can therefore lead to excitations in a broad frequency range \cite{87hamadeh,88demidov}. This lack of selectivity poses a challenge in identifying the SW modes that are propagating the spin information.
Although the selection rules of spin-transfer effects seem insensitive to the SW spatial pattern, the spin-transfer efficiency increases with decreasing magnon frequency \cite{89berger,90slonczewski,91stiles}. The energy transfer relies on a stimulated emission process \cite{89berger,90slonczewski}. By favoring the SW eigen-modes with the most substantial fluctuations, spin-transfer preferentially targets the modes with the lowest damping rates, i.e., the lowest-energy eigen-modes, since the relaxation rate is proportional to the energy \cite{88demidov,92haidar}. The situation is different for SWs excited by thermal heating, where the excitations predominantly consist of thermal magnons, whose number overwhelmingly exceeds that of the lowest-energy modes \cite{93xiao,94hoffman}.
Additional opportunities arise from the improved efficiency of the spin-transfer process. Injecting strong spin currents into small areas and bringing the magnetization dynamics into the deep nonlinear regime are possible. Insulating magnetic materials are particularly promising since they have exceptionally low damping. Indeed, the relevant quantity is the amount of external spin density injected relative to the linewidth. Using spin-transfer in insulators provides a unique opportunity to probe nonequilibrium states, where new collective behaviors are expected to emerge, such as Bose-Einstein condensation (BEC) at room temperature \cite{95bender,96bender}. Importantly, nonlinear processes are responsible for energy-dependent magnon-magnon interactions, which lead to threshold effects such as SW instabilities \cite{97suhl}. Such phenomena can be described as turbulence, best known as the mechanism behind the saturation of the Kittel mode or the rapid decay of coherent SWs into incoherent motion \cite{Lvov:94}. As a consequence, these effects alter the energy distribution of magnons inside the magnetic body. These processes must conserve both energy and angular momentum. Therefore, a critical parameter that controls magnon-magnon interactions is confinement. In fact, finite-size effects lift the degeneracy between modes, limiting the number of quasi-degenerate modes available at the first harmonics, which substantially increases the nonlinear threshold values. In closed geometries (e.g., nano-pillars), when confinement fully lifts the degeneracy between the SW modes, spin-transfer processes can generate large coherent GHz-frequency magnon dynamics. In this case, a single mode tends to dominate the dynamics beyond the critical spin current for damping compensation \cite{84xiao}:
\begin{equation}
J_s^*=\frac{1}{\gamma} \left(\frac{\partial \omega}{\partial H}\right) \frac{\alpha \omega M_s t_\mathrm{FI}}{\gamma}\, ,
\label{eq:Jsc}
\end{equation}
in which the single SW mode precesses at $\omega$ in the field $H$, and $\alpha$, $M_s$, and $t_\mathrm{FI}$ are respectively the damping, magnetization, and thickness of the FI layer. This mode selection enables control of the amplitude. The demonstrations of current-induced torques affecting the magnons in YIG \cite{87hamadeh,92haidar,99collet,100demidov} exploit this feature.
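A rough numerical sketch of Eq.\ (\ref{eq:Jsc}) is given below. None of the input values are taken from the experiments discussed here: the effective damping, magnetization, spin Hall angle, ideal interface transparency, and unit field-dispersion factor are all assumed round numbers for illustration only.

```python
import math

# Assumed round numbers (NOT from the text):
alpha = 1e-2          # effective Gilbert damping of a Pt-loaded thin YIG film
Ms    = 1.4e5         # A/m, YIG saturation magnetization
t_FI  = 20e-9         # m, YIG thickness
gamma = 1.76e11       # rad/(s T), gyromagnetic ratio
omega = 2 * math.pi * 6.33e9   # rad/s, working frequency
hbar, e = 1.0546e-34, 1.602e-19
theta_SH = 0.1        # assumed spin Hall angle of Pt

# Critical spin current density of Eq. (Jsc), taking the dimensionless
# field-dispersion factor (1/gamma)(dw/dH) ~ 1.
Js_crit = alpha * omega * Ms * t_FI / gamma        # J/m^2

# Corresponding charge current density in the Pt layer, assuming an
# ideal (fully transparent) interface: J_c = (2e/hbar) * Js / theta_SH.
Jc_crit = (2 * e / hbar) * Js_crit / theta_SH      # A/m^2
print(Js_crit, Jc_crit)
```

With these assumptions the threshold charge current density comes out in the $10^{11}$~A/m$^2$ range, the same order of magnitude as the threshold reported for the YIG/Pt microdisk experiment discussed next.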
\begin{figure}[htbp]
\includegraphics[width=0.99\columnwidth]{Fig-YIGPt-damping.png}
\caption{
a) Electrical control of the magnetic damping in a YIG/Pt microdisk (from Ref.\ \cite{87hamadeh}). Top: Schematics of the sample (a 5~$\mu$m diameter YIG(20~nm)/Pt(7~nm) disk) and measurement configuration. Bottom: Variation of the full linewidth measured at 6.33~GHz as a function of the dc current for out-of-plane field (black) and two opposite in-plane field (red and blue) configurations. The inset shows the detection of $V_\mathrm{ISHE}$ in the direction transverse to the in-plane swept magnetic field at $I_\mathrm{dc}=0$. b) Stimulated amplification of SWs by SOT (from Ref.\ \cite{evelt:apl2016}). Top: Schematics of the sample (a 1~$\mu$m wide YIG(20~nm)/Pt(8~nm) stripe waveguide) and measurement configuration. Middle: SW decay for different currents in the Pt. Bottom: Current dependence of the decay constant and of the propagation length. A nearly tenfold increase of the latter is observed just before the auto-oscillation threshold, marked by the vertical line.
}
\label{fig:yigpt-damping}
\end{figure}
To illustrate the effect of SOT from an adjacent heavy metal layer on the magnetization dynamics of FIs, we briefly discuss two seminal experiments on the control of magnetic damping in a YIG/Pt microdisk \cite{87hamadeh} (Fig.\ \ref{fig:yigpt-damping}a) and on the amplification of propagating SWs in a YIG/Pt waveguide \cite{evelt:apl2016} (Fig.\ \ref{fig:yigpt-damping}b).
In the experiment depicted in Fig.\ \ref{fig:yigpt-damping}a, a YIG(20~nm)/Pt(7~nm) bilayer is patterned into a microdisk of diameter 5~$\mu$m, to which gold electrodes are contacted for dc current injection. In order to maximize the effect of the spin-orbit torque, the sample is biased in the plane, with the magnetic field $H_0$ oriented transversely to the dc current $J_\mathrm{c}$ flowing in the Pt layer. This is indeed the most favorable configuration to modulate the damping, as the spins accumulated at the YIG/Pt interface due to the SHE in Pt are then collinear with the YIG magnetization (see also Fig.\ \ref{fig:sotseebeck}a and b).
The ferromagnetic resonance is excited in the YIG microdisk using the rf field $h_1$ produced by a microwave antenna patterned on top of the sample. The evolution of the in-plane (negative/positive bias) and out-of-plane standing spin-wave spectra of the YIG/Pt disk is monitored by a magnetic resonance force microscope as a function of the dc current injected into the 7~nm thick Pt layer \cite{87hamadeh}. The values of the full linewidth measured at 6.33~GHz in these geometries (blue/red and black symbols, respectively) are reported in Fig.\ \ref{fig:yigpt-damping}a as a function of current. The blue/red data points follow approximately a straight line, whose slope $\pm0.5$~Oe/mA reverses with the direction of the bias field, and whose intercept with the abscissa axis occurs at $I^*=\mp14$~mA, corresponding to a current density of about $4\times10^{11}$~A/m$^2$. The variation of the linewidth covers about a factor of five over the full range of current explored. In contrast, the linewidth measured in the out-of-plane geometry does not change with current. In fact, no net spin-transfer torque is exerted by the spin current on the precessing magnetization in this configuration.
The inset of Fig.\ \ref{fig:yigpt-damping}a displays the inverse spin Hall voltage $V_\mathrm{ISHE}$ measured at $I=0$~mA and $f=6.33$~GHz on the same sample, for the two opposite orientations of the in-plane bias field. This allows one to check that a spin current can be generated by spin pumping and transmitted from YIG to Pt, and which polarity of the current is required to compensate the damping. In this experiment, damping compensation occurs when the product of $V_\mathrm{ISHE}$ and $I$ is negative, consistent with a positive spin Hall angle in Pt.
The results of Fig.\ \ref{fig:yigpt-damping}a unambiguously demonstrate that SOT can be used to control the relaxation of a YIG/Pt hybrid device, and the current density extrapolated to reach zero linewidth is close to the one calculated using the expression for the critical current for damping compensation, Eq.\ (\ref{eq:Jsc}), with independently determined experimental parameters. By solely biasing similar YIG/Pt microdisks with a dc current (no microwave excitation applied), auto-oscillations of the YIG magnetization were also observed beyond the critical current \cite{99collet}. We stress here the important role of finite-size effects in reaching the critical current for full compensation of the damping. Similar experiments performed on extended YIG/Pt bilayers did not show any sizeable modulation of the damping \cite{17hahn,21kelly}, while, by monitoring the parametric threshold of SW generation, it was possible to demonstrate a SOT-induced variation of the damping of only up to 7.5\% \cite{lauer:apl2016}.
In the experiment depicted in Fig.\ \ref{fig:yigpt-damping}b, a YIG(20~nm)/Pt(8~nm) bilayer is patterned into a stripe waveguide of width 1~$\mu$m. The bias magnetic field is applied in the plane, transverse to the stripe direction, and an rf line patterned on top of the waveguide allows one to excite a SW mode propagating away from the antenna. The SW propagation characteristics are monitored as a function of the dc current injected into the Pt layer by a microfocused Brillouin light scattering setup \cite{evelt:apl2016}. It is observed that the propagation length of the inductively excited SWs can be increased by nearly a factor of 10 as the injected current is increased from 0 to 2.5~mA. Therefore, a highly efficient control of magnetic damping can be achieved in this device.
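The damping-compensation picture implies a simple divergence of the SW decay length as the threshold is approached. In a toy model (the zero-current decay length $L_0$ and threshold current $I_\mathrm{th}$ below are placeholder values, not fitted parameters), the effective damping decreases linearly with current, so the propagation length scales as $1/(1-I/I_\mathrm{th})$:

```python
def propagation_length(I, L0=4e-6, I_th=2.6e-3):
    """Toy model of SOT-assisted SW propagation: the effective damping
    alpha_eff = alpha0 * (1 - I/I_th) gives a decay length
    L0 / (1 - I/I_th). L0 and I_th are placeholder values."""
    return L0 / (1.0 - I / I_th)

# A tenfold increase of the propagation length is reached when the
# current compensates 90% of the natural damping.
print(propagation_length(0.9 * 2.6e-3) / propagation_length(0.0))
```

This explains why the large enhancement is observed only just below the auto-oscillation threshold, where most of the damping is compensated.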
Even if SOT allows a linear modulation of SWs in the sub-critical regime, going above the auto-oscillation threshold, which is marked by a vertical line in the bottom panel of Fig.\ \ref{fig:yigpt-damping}b, leads to a dramatic decrease of the SW amplitude due to nonlinear coupling with other magnon modes, whose amplitudes grow exponentially in this strongly nonequilibrium regime. The same generic phenomenon was observed with stationary SW modes excited by pure spin currents in YIG/Pt microdisks \cite{99collet,100demidov}. As mentioned earlier, these signatures of nonlinear saturation are reminiscent of the well-known physics of FMR in extended films, where short wavelength SW instabilities quickly develop as the excitation power is increased, preventing large cone angles of uniform precession from being achieved \cite{Lvov:94}. Interestingly, these nonlinear properties can be adjusted by the perpendicular anisotropy of the film. In this respect, the growth of ultra-thin films of Bi-doped YIG exhibiting out-of-plane magnetization and maintaining a high dynamical quality is extremely interesting \cite{soumah:natcom2018}. As a matter of fact, it was demonstrated that their integration in SOT devices could allow the generation of coherent propagating magnons \cite{evelt:pra2018}.
\section{Spin Transport and Manipulation in Magnetic Insulators}
Transport of spin information is possible in many materials, including metals \cite{101johnson}, semiconductors \cite{102lou} and two-dimensional materials such as graphene \cite{103tombros}. In these materials, the conduction electrons mediate the spin flow. However, disturbances in the localized spins can also propagate and carry spin information. Remarkably, spin angular momentum can be transported over distances as large as 40 microns in YIG \cite{2cornelissen} and as far as 80 microns in hematite \cite{3lebrun}. The magnon diffusion length is approximately ten microns at room temperature in YIG \cite{2cornelissen}. When metals are put in contact with magnetic insulators, the spin transport between them can influence the charge transport properties of the conductors.
\subsection{Spin Hall Magnetoresistance}
In magnetic conductors, currents induced by electric fields depend on the orientation of the localized spins. It is the spin-orbit coupling that causes this connection between the electron flow and the orientation of the localized spins. In isotropic materials, the manifestations of this behavior are the magnetoresistance and the anomalous Hall effect. When the current flows along the electric field, the longitudinal resistivity depends on the relative orientation between the current and the localized spins and reads
\begin{equation}
\rho_l = \rho_0 + (\hat{j} \cdot \hat{n})^2 \rho_{amr} \, ,
\label{eq:magnetoresistance}
\end{equation}
where $\hat{j}$ is a unit vector along the current direction and $\hat{n}$ is a unit vector along the direction that describes the preferred spin direction. In ferromagnets, $\hat{n}$ is parallel to the magnetization and in antiferromagnets, $\hat{n}$ is parallel to the staggered order parameter. An out-of-plane orientation of the localized spins can also cause a transverse current. The anomalous Hall effect determines that the transverse current perpendicular to the electric field is governed by a transverse resistivity
\begin{equation}
\rho_t = \rho_{ah} n_z \, ,
\label{eq:anomalousHall}
\end{equation}
where $n_z$ is the out-of-plane component of $\hat{n}$. The magnetoresistance and Hall effects can be used to detect the orientation of the localized spins \cite{67nakayama,68aqeel,Chen:PRB2013,Hou:PRL2017}.
In magnetic insulators, there are no charge currents. Naturally, there is then neither an associated magnetoresistance nor an anomalous Hall effect. However, as we have discussed, spins can propagate across metal-magnetic insulator interfaces when they are perpendicular to the spins in the magnetic insulators. Consequently, in layered metal-magnetic insulator systems, spin transport within the metallic regions will be affected by the transverse spins that flow into the magnetic insulators. The spin Hall effect generates spin currents that experience this additional interfacial transverse decay of spins. The reciprocal effect, the inverse spin Hall effect, generates a charge current from the spin current. There is, therefore, a second-order effect in the spin Hall angle on the charge transport properties. First, the spin Hall effect generates a transverse spin current from the primary charge current that, in turn, causes a change in the charge current via the inverse spin Hall effect. These mechanisms imply that the charge resistance will be affected by the spin transport properties and, hence, by the spin decay at the metal-magnetic insulator interface.
The combined result of the spin Hall effect, spin-transfer, and inverse spin Hall effect is that the transport properties in layered metal-magnetic insulators are no longer isotropic. The current depends on how the metal is attached to the magnetic insulator. Such a magnetoresistance in heterostructures of metals and insulators is dubbed a spin Hall magnetoresistance since it heavily relies on the spin Hall effect \cite{17hahn,67nakayama,Chen:PRB2013}.
In a layered metal-magnetic insulator structure, where the $x$ and $y$ axes are in-plane, and the $z$ axis is perpendicular to the metal-insulator interface, the anisotropy of the current is richer than the behavior of Eqs.\ \ref{eq:magnetoresistance} and \ref{eq:anomalousHall}. When the current flows along the $x$ direction, the longitudinal and transverse resistivities become \cite{67nakayama,68aqeel,Chen:PRB2013}
\begin{equation}
\rho_l = \rho_0 + \rho_1 (1-m_y^2) \, ,
\label{eq:smr}
\end{equation}
and
\begin{equation}
\rho_t = \rho_1 m_x m_y + \rho_2 m_z \, .
\label{eq:smrHall}
\end{equation}
The spin Hall magnetoresistance represented by the coefficient $\rho_1$ in Eq.\ \ref{eq:smr} is quadratic in the spin Hall angle since the underlying mechanism is the combination of the spin Hall effect and the inverse spin Hall effect \cite{Chen:PRB2013}. Furthermore, it depends on the dissipative part of the transverse conductance (the real part of the "mixing conductance" \cite{Brataas:PRL2000}) that describes the Slonczewski spin-transfer between the metal and the magnetic insulator, which influences the spin accumulation in the metal. There is also a more conventional anomalous Hall contribution to the transverse resistance of Eq.\ \ref{eq:smrHall}, represented by the coefficient $\rho_2$. This term depends on the reactive part of the transverse conductance (the imaginary part of the "mixing conductance" \cite{Brataas:PRL2000}) \cite{Chen:PRB2013} that governs the magnetic proximity effect in the normal metal.
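To make the angular dependence of Eq.\ \ref{eq:smr} explicit, parametrize an in-plane magnetization by the azimuthal angle $\alpha$ measured from the current direction, $\hat{m} = (\cos\alpha, \sin\alpha, 0)$; then
\begin{equation*}
\rho_l = \rho_0 + \rho_1 \left( 1 - \sin^2\alpha \right) = \rho_0 + \rho_1 \cos^2\alpha \, ,
\end{equation*}
so the longitudinal resistivity is maximal when the magnetization is parallel to the current ($\alpha = 0$) and minimal when it is perpendicular ($\alpha = \pi/2$), i.e., when the magnetization is aligned with the spin accumulation. This is the angular fingerprint used to identify the spin Hall magnetoresistance experimentally.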
It is also possible to understand the anisotropy of the longitudinal and transverse resistivities from symmetry arguments. In bilayers of metals and magnetic insulators, the reduced symmetry along the axis normal to the interface leads to a qualitative change of the properties of the resistivities. Experimentally, not only the qualitative behavior of the spin Hall magnetoresistance but also its magnitude agrees well with this expression \cite{67nakayama,Chen:PRB2013}. This agreement shows that the combination of the spin Hall effect, the spin-transfer, and the inverse spin Hall effect describes the essential physics well \cite{Chen:PRB2013}. The spin Hall magnetoresistance is an interfacial effect on a bulk transport property. It is, therefore, small and only relevant in thin metallic films. Nevertheless, it is essential because it is a new way to detect the spin order in both ferromagnets and antiferromagnets.
\subsection{Nonlocal Devices}
In nonlocal devices, the electrical current flowing in the metallic wire can affect the magnon population inside the magnetic insulator in two ways. Spin-transfer is the first mechanism and can be coherent or incoherent. Coherently, spin-polarized currents generated inside the metallic wire due to spin-orbit effects transfer to the adjacent magnetic insulator layer and produce a torque on the magnetization. Incoherently, at elevated temperatures, there can be a spin flow parallel to the magnetization carried by thermal magnons. Joule heating is the second mechanism. The ohmic dissipation in the metallic wire locally increases the temperature of the magnetic layer in thermal contact with it, and, correspondingly, there is a local increase in the number of thermal magnons.
To separate these two contributions, we rely on the symmetry of the signal with respect to the current or field polarity. We first concentrate on the current symmetries. In the case of SOTs, the electron trajectory inside the normal metal is deflected towards the interface depending on the spin polarization of the electron. For metals such as Pt, where the spin Hall angle \( \theta _{sh} \) is positive, the deflection follows the right-hand rule, as shown schematically in Fig.\ \ref{fig:sotseebeck} a) and b) for both positive and negative current. The result is an inversion of the polarity of the outward spin current when the electrical current direction is reversed. The net effect is an opposite change in the magnon population, as shown in Fig.\ \ref{fig:sotseebeck} a) and b). The nominal occupation (thermal population at the device temperature \( T \)) of a particular spin-wave mode with eigenvalue \( \omega_m \) is nonzero and is given by \( k_B T / (\hbar \omega_m) \). The corresponding amplitude is depicted as a gray cone in Fig.\ \ref{fig:sotseebeck}. If the spin is injected parallel to the magnetization direction, the cone angle will decrease (magnon annihilation, Fig.\ \ref{fig:sotseebeck} b), while if the spin direction is opposite to the magnetization direction, the cone angle will increase (magnon creation, Fig.\ \ref{fig:sotseebeck} a). Thus, an important signature of SOT is that the signal is odd with respect to the current polarity. In contrast, a change of the magnon population produced by thermal effects is even in current, since the origin is Joule heating, which is proportional to $I^2$. This distinction between even and odd symmetries with respect to the current polarity translates into a signal appearing in different harmonics when performing lock-in measurements \cite{2cornelissen}.
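As an order-of-magnitude check (a numerical sketch, not from the cited works; the 6.33~GHz mode frequency is borrowed from the FMR experiment quoted earlier), the Rayleigh-Jeans occupation $k_B T/(\hbar\omega_m)$ of a GHz magnon mode at room temperature evaluates to roughly a thousand quanta:

```python
import math

# Physical constants (SI units, CODATA values)
hbar = 1.054571817e-34  # reduced Planck constant, J s
kB = 1.380649e-23       # Boltzmann constant, J/K

def thermal_occupation(f_hz, T):
    """Rayleigh-Jeans occupation k_B T / (hbar * omega) of a magnon mode
    of frequency f_hz (Hz) at temperature T (K)."""
    omega = 2.0 * math.pi * f_hz
    return kB * T / (hbar * omega)

# A 6.33 GHz mode at 300 K carries on the order of 10^3 quanta, which is
# why the "nominal" thermal cone angle sketched in the figure is finite.
n_th = thermal_occupation(6.33e9, 300.0)
print(f"n_th ~ {n_th:.0f}")
```

The large occupation explains why even a modest relative change in the magnon population driven by SOT is detectable electrically.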
In the case of SOT, which is odd in current, the signal is captured by the first harmonic, while in the case of the spin Seebeck effect (SSE), which is even in current, the signal is captured by the second harmonic. Ref.\ \cite{Goennenwein:APL2015} also found the symmetry expected for a magnon spin-accumulation-driven process, confirming the results of Ref.\ \cite{2cornelissen}.
\begin{figure}[htbp]
\includegraphics[width=0.99\columnwidth]{sotseebeck.png}
\caption{Schematic illustration of spin-transfer processes between a magnetic insulator and an adjacent metallic layer where an electric current circulates. The first row shows the principle of SOT, where the electron trajectory inside the normal metal is deflected towards the interface depending on the spin polarization of the electron. The second row shows the principle of SSE, where the temperature rise associated with Joule heating changes the distribution of thermal magnons. Changes in the amplitude of the magnon mode is illustrated as a deviation of the cone angle from the nominal occupation (thermal population at the device temperature).}
\label{fig:sotseebeck}
\end{figure}
These symmetries concerning the current direction and the symmetries related to the magnetization direction have a direct correspondence. The magnetization is, in the case of YIG, determined by the external magnetic field orientation. However, we also need to take into account that the signal polarity is inverted when the magnetization underneath the detector reverses. Thus, the net effect is that the SOT signal is even, while the SSE signal becomes odd with respect to the field polarity. If one observes the azimuthal angular dependence, the even signal follows a $\cos^{2}$ behavior, while the odd signal follows a $\cos$ behavior. Since the torque is exerted on the transverse magnetization (the oscillating part of the magnetization), the effect is maximum when the saturation magnetization is parallel to the injected spin direction, or, in other words, perpendicular to the current flow in the Pt, as first demonstrated in Ref.\ \cite{2cornelissen}.
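The parity argument can be checked in one line: reversing the in-plane field corresponds to the azimuthal substitution $\varphi \to \varphi + \pi$, under which $\cos\varphi$ changes sign while $\cos^2\varphi$ does not (a minimal numerical verification, not part of the original experiments):

```python
import math

# Arbitrary sample azimuthal angles (rad) standing in for the field direction
phis = [0.3, 1.1, 2.4, 4.0]

for phi in phis:
    # cos is odd under a 180-degree field reversal ...
    assert math.isclose(math.cos(phi + math.pi), -math.cos(phi), abs_tol=1e-12)
    # ... while cos^2 is even under the same operation
    assert math.isclose(math.cos(phi + math.pi) ** 2, math.cos(phi) ** 2,
                        abs_tol=1e-12)

print("parity check passed")
```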
Importantly, in the nonlocal transport experiment, the voltage drop produced on the detector side can also have an electrical origin \cite{104thiery}. Among the electrical effects are ohmic loss, the ordinary Hall effect, the thermoelectric effect, and the thermal Hall effect. These electrical contributions can originate from thermally activated conduction inside the YIG layer caused by dopants or grain boundaries \cite{104thiery}, or from spurious conduction channels inside the substrate or the presence of a capping layer. These electrical and thermal effects must be disentangled from pure spin effects. To separate the pure spin contribution, Ref.\ \cite{2cornelissen} proposed solely considering the anisotropic part of the transport as one varies the orientation of the in-plane magnetization. Among these anisotropic contributions, as emphasized above, one should thoroughly distinguish the ones that are even with respect to the magnetic field or current polarity from those that are odd. In Ref.\ \cite{2cornelissen} the authors, using the 1st and 2nd harmonic output of a lock-in amplifier, relied on the current symmetry to extract the SOT and SSE. These contributions can also be extracted by directly measuring the voltages obtained under the two possible polarities of the externally applied magnetic field, and constructing respectively a signal sum ($\Sigma$) and a signal difference ($\Delta$) from these measurements \cite{105thiery}. This process is summarized in Fig.\ \ref{fig:measurements} a) and b), and the corresponding current evolution is shown in Fig.\ \ref{fig:measurements} c) and d) for both $\Sigma$ and $\Delta$ when measured on a 19~nm YIG film. Importantly, the part that is even with respect to the magnetic field polarity ($\Sigma$) is odd with respect to the current polarity, while the part that is odd with respect to the magnetic field polarity ($\Delta$) is even with respect to the current polarity.
These results are exactly the expected symmetries of SOT effect and SSE, as explained earlier.
\begin{figure}[htbp]
\includegraphics[width=0.99\columnwidth]{measurements.png}
\caption{a,b) Azimuthal dependence of the anisotropic voltage produced in a nonlocal device, which can be separated into two components: $\Sigma$, even in magnetic field polarity, and $\Delta$, odd in magnetic field polarity. c,d) Current dependence of respectively $\Sigma$ and $\Delta$ over a large span of current density, measured on a 19~nm thick YIG film. Panels a,b) are adapted from Ref.\ \cite{105thiery} and c,d) from Ref.\ \cite{104thiery}.}
\label{fig:measurements}
\end{figure}
The features in these nonlocal voltages also appear in the local voltage, which can be viewed as self-detection of the ISHE voltage on the injector side. In the case of the $\Sigma$-signal, the produced effect is the so-called spin Hall magnetoresistance \cite{17hahn,67nakayama}. As explained previously, the signal is maximum when the magnetization is perpendicular to the current flow inside the Pt and minimum when the magnetization is parallel to the current flow. Experimentally, however, this formulation seems at first glance counterintuitive since the total resistance appears to drop when the magnetization is perpendicular to the current flow. This result occurs because the drop in the voltage produced by SOT is negative for positive current (and vice versa) and thus appears as a 'negative' resistance effect (see Fig.\ \ref{fig:measurements}c). This drop is a direct consequence of the Hall origin of the $\Sigma$-signal. Contrary to ohmic loss, where the voltage drops along the current direction, for the ISHE, the Hall voltage increases along the current direction \cite{104thiery}.
In the same way, a local-SSE signal \cite{106schreir} should appear in the device synchronously with the nonlocal $\Delta$-signal mentioned previously. This local $\Delta$-signal is synchronous with the $\Delta$ observed for a very small gap, but with opposite sign, which simply means that the vertical thermal gradient is negative, i.e., that the YIG$\vert$Pt interface is hotter than the YIG$\vert$GGG interface. The current dependence of this local-SSE should be quadratic (as $I^{2}$, Joule heating) in current. Note that this behavior has exactly the same signature as the unidirectional magnetoresistance \cite{107avci}. In this respect, a resistance that is linear in current and changes sign when the current is reversed is equivalent to a voltage that varies as $I^{2}$, which is usually interpreted as a Joule heating signature.
The incoherent magnon transport in insulating materials such as YIG is dominated by thermal magnons, whose number overwhelmingly exceeds the number of other modes at finite temperature. This phenomenon is demonstrated in the SSE \cite{108uchida}, where a transverse voltage in a Pt electrode fabricated on a YIG layer develops as a result of thermally induced magnon spin transport. This process can also be reversed \cite{2cornelissen,109zhang} and can be used for electrically driven magnon spin injection. In the nonlocal geometry of Fig.\ \ref{fig:nonlocal}, a charge current through the Pt injector strip (left) generates an electron spin accumulation at the Pt/YIG interface via the spin Hall effect. Exchange processes across the interface result in a magnon spin accumulation and a non-zero magnon chemical potential. This potential drives magnon diffusion, and at the detector electrode (right), the resulting nonzero magnon chemical potential results in an electronic spin accumulation, which drives an electron spin current and is partially converted into an electrical voltage via the ISHE. By varying the spacing between the injector and detector electrodes, a typical magnon spin relaxation length of 10 microns at room temperature could be determined. This result was confirmed by simultaneously measuring the effects of electrical as well as thermal magnon injection. The SSE under the injector electrode gave rise to the latter effect. The SSE has also been used to control damping \cite{110jungfleisch} and even to generate auto-oscillations of magnetization \cite{111safranski}.
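A crude feel for these length scales can be obtained from a bare exponential-decay sketch with the quoted $\lambda \approx 10~\mu$m relaxation length (the full diffusion model used to analyze the experiments is more involved, so this is an illustrative assumption only):

```python
import math

lambda_m = 10e-6  # magnon spin relaxation length in YIG at room temperature, m

def attenuation(d):
    """Fraction of the injected magnon signal surviving after distance d (m)
    in a bare exponential-decay picture exp(-d / lambda)."""
    return math.exp(-d / lambda_m)

# At one relaxation length the signal drops to ~37%; at the 40-micron
# distances quoted for YIG it is down to a few percent but still measurable.
for d_um in (10, 20, 40):
    print(f"d = {d_um} um -> surviving fraction {attenuation(d_um * 1e-6):.3f}")
```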
\begin{figure}[htbp]
\includegraphics[width=0.99\columnwidth]{nonlocal.pdf}
\caption{Principle of electrically injected and detected nonlocal magnon transport and a magnon spin transistor. An AC current through the Pt injector strip generates an AC magnon accumulation via the spin Hall effect. These magnons diffuse to the Pt detector strip, generating a charge voltage via the ISHE. An intermediate gate electrode enables control of the magnon density and hence the magnon conductivity. In this way, the nonlocal signal can be modulated by a dc current through the gate electrode. The figure is from Ref.\ \cite{116cornelissen}.}
\label{fig:nonlocal}
\end{figure}
This and other experiments \cite{Ganzhorn:APL2016,113wu,114li} confirmed that in addition to driving a magnon spin current by a temperature gradient, the magnon chemical potential plays a crucial role in driving magnon currents in magnetic insulators.
The universality of this nonlocal technique has recently been shown in the study of thermal magnon transport in the most ubiquitous antiferromagnet, $\alpha$-Fe\textsubscript{2}O\textsubscript{3}, where typical magnon spin relaxation lengths of 10 microns were observed at temperatures of 200~K \cite{3lebrun}. An external magnetic field can control the flow of spin current across a platinum-hematite-platinum system by changing the antiferromagnetic resonance frequency. The spin flow is parallel to the Néel order. Magnon modes with frequencies of tens of GHz, or as large as 0.5~THz, can carry spin information over microns. Importantly, this result demonstrates the suitability of antiferromagnets for replacing currently used components. Antiferromagnets can operate very quickly and are robust against external perturbations. Speed is not detrimental to spin diffusion in these materials, which opens the possibility of developing faster devices. As we have discussed, AFIs are more prominent than FIs. These developments open the door towards exploring a wider class of materials with richer physics.
In addition to magnon spin injection and detection using the spin Hall and inverse spin Hall effect, it was discovered that ferromagnetic metals, such as Py, can also inject and detect magnon spins effectively in a magnetic insulator via the anomalous spin Hall effect and its inverse \cite{115das}. Unlike the SHE, which can only produce in-plane polarized spins, a charge current through a ferromagnet can also produce an anomalous spin Hall effect (ASHE), which generates a spin current perpendicular to the charge current and magnetization direction, but where the spins are oriented parallel to the magnetization; see Fig.\ \ref{fig:nonlocalsheferro}. Since the latter can be controlled by a magnetic field or by other means, this allows the injection and/or detection of magnon spins with an out-of-plane polarization component. The coexistence of these mechanisms for (magnon) spin injection/detection with ferromagnetic electrodes is discussed in Ref.\ \cite{Amin:PRB2019}.
\begin{figure}[htbp]
\includegraphics[width=0.99\columnwidth]{SHEASHE.png}
\caption{(Right panel) Efficient and polarization-controlled injection and detection of magnon spins via the anomalous spin Hall effect and its inverse. (Left panel) (a) Magnon spin injection and detection via the spin Hall/inverse spin Hall effect. At the metal/YIG interface the spins have an in-plane orientation. (b,c,d) The anomalous spin Hall effect in a ferromagnet produces spins which are oriented parallel to the magnetization. This allows magnetization control of the injected magnon spins. (from \cite{115das})}
\label{fig:nonlocalsheferro}
\end{figure}
Ref.\ \cite{116cornelissen} reported the realization of a spin transistor by inserting a third "gate" electrode in between the magnon injector and detector; see Fig.\ \ref{fig:nonlocal}. A dc current flows through the intermediate electrode. Depending on the current direction, the magnon density in the YIG below is either enhanced or reduced, leading to a modulation of the magnon conductivity and the nonlocal signal by several percent. The experiments were modelled with a finite element model, which included the modification of the YIG magnon spin conductance by the magnon injection of the control electrode. The magnon spin conductance was found to be typically of the order of the spin conductance of a poor metal like Pt. The modelling showed that at the maximum control current, the magnon chemical potential was around $10~\mu\mathrm{eV}$, which resulted in a modification of the magnon density of a few percent.
In a similar geometry, but by making the YIG thinner and using higher currents, Ref.\ \cite{Wimmer:PRL2019} realized full control of the magnetic damping via spin-orbit torques. When the current flowing through the intermediate electrode exceeds a critical value, the significantly reduced damping gives rise to a substantial increase of the magnon conductivity by almost two orders of magnitude. Ref.\ \cite{Wimmer:PRL2019} discusses three possible scenarios for the massive increase in the magnon conductivity: i) injection of magnons in a broad frequency spectrum compensates the damping without developing a coherent state, ii) compensation of the magnetic damping leads to coherent magnetic auto-oscillations, and iii) the magnon system forms a BEC with a macroscopic population of the ground state. In a control experiment, when microwaves coherently drive the magnetization, the spin conductivity decreases, exhibiting the opposite behavior. This measurement shows that it is the compensation of the magnetic damping that is responsible for the substantial enhancement in the spin conductivity rather than the coherent magnetization dynamics. The experiment opens new ways to actively control the magnon transport, from regimes where the spin resistance is significantly reduced or vanishes to larger and finite values.
Although the role of thermal magnons dominates the observed behavior at low current, the nonlocal $\Sigma$-conductivity can exhibit nonlinear behavior at large current density due to distortion of the magnon distribution produced by SOT. SOT is a process whose efficiency increases with decreasing magnon frequency. This process thus favors low-energy magnons present near the spectral bottom of the magnon manifold, the region of so-called magnetostatic waves. In the strongly out-of-equilibrium regime, these low-energy magnons can efficiently thermalize among themselves through the potent magnon-magnon interaction, whose strength increases with the magnon population. Describing the quasi-equilibrium state by a nonzero chemical potential \cite{4demokritov} and an effective temperature \cite{118serga} is insightful. This state forms the so-called subthermal magnons, a term coined in order to distinguish them from the thermal magnons, from which they are effectively decoupled: under intensive parametric pumping one can reach a state where the effective temperature of the subthermal magnons exceeds the real temperature characterizing the thermal magnons by a factor of 100.
For nonlocal spin transport devices, this modification in the magnon distribution implies a change in the spin transport properties. For very large SOT, the conductance becomes dominated by the propagation properties of the subthermal magnons, which leads to an increase of the $\Sigma$-signal at high current since these magnons have a longer characteristic propagation distance than thermal magnons. This distortion is visible as a gradual deviation of the spin conductance from a purely linear transport behavior at large $I$, as observed in Fig.\ \ref{fig:measurements} c). A crossover current (here approximately 1.4~mA) from the linear to the nonlinear regime \cite{105thiery} is observed. The deviations are subcritical spin fluctuations, and the crossover current occurs below the critical current for damping compensation of coherent modes. The slow rise, which is in contrast with the sudden surge of coherent magnons observed at the critical threshold in confined geometries \cite{119hamadeh}, should thus be considered as a signature of the phenomenon of damping compensation in open geometries.
Two interesting consequences arise from this phenomenon. The first is that these effects imprint nonlinear properties on these devices, which could be advantageously exploited to produce harmonic generation, mixing, or cut-off effects in insulatronics. A second consequence could emerge from the specific nature of the subthermal magnons. For example, while the transport of thermal magnons is difficult to control due to their relatively high energies, the subthermal magnons could be efficiently controlled by relatively weak magnetic fields.
\section{Future Perspectives}
Spin insulatronics is an emerging field. In magnetic insulators, the ultralow dissipation facilitates brand new exciting possibilities for the exploration of novel and rich physical phenomena. The development of new materials or material combinations will also empower future developments. Let us explore possible improvements in materials and how coherent spin dynamics can open novel avenues for spin transport.
\subsection{Materials and Interfaces}
The material properties of magnetic insulators and their interfaces are essential. We will first discuss the interface properties. The efficiency of the injection of spins into magnetic insulators from metals depends on the interfaces. The detection efficiency depends on the same parameters. Devices require injection and detection of spins. The performance is, therefore, quadratic in the interface spin transparency, which is an essential factor. Measurements on YIG/platinum systems and YIG/gold systems \cite{70sandweg,71sandweg,72vilela,73rezende,74azenvedo,75burrowes,Heinrich:PRL2011} have established that the efficiency substantially varies with the preparation technique even for the same material combinations \cite{121jungfleisch}. While many experiments show that there can be a robust coupling between FIs and metals, there has been less exploration of AFIs. Pt couples to hematite as strongly as to insulating ferrimagnets, such as YIG \cite{3lebrun}. For antiferromagnets, determining how the spins at the two sublattices couple, possibly in different and unusual ways, to the spins in the metals depending on the crystal structures and interface directions would also be of interest. Experimental demonstration of spin-pumping from antiferromagnets in direct contact with metals is desirable \cite{122johansen}. More precise insights into the electron-magnon coupling could enable a stronger and different control of the spin excitations in AFIs. Better spin injections, possibly the use of topological insulators in combination with magnetic insulators, could also increase the efficiency of devices.
In the bulk of magnetic insulators, reduced damping and anisotropy control are essential. Oxide materials offer a broad choice with numerous possible substitutions. Past efforts have mostly concentrated on spinels, minerals that crystallize in the cubic form. Some candidates among the spinel ferrites are NiZnAl-ferrite ($\alpha = 3\times 10^{-3}$ in thin film form) and the magnesium aluminum ferrite MgAl\textsubscript{0.5}Fe\textsubscript{1.5}O\textsubscript{4} ($\alpha = 1.5\times 10^{-3}$ in 10~nm thick films). Recent developments in pulsed laser deposition techniques give access to a new class of epitaxial thin films with improved dynamic properties. Illustrative examples are manganite materials such as LSMO \cite{123flovik}, with a reported low damping value. Among the oxide materials studied, apart from garnets, the other compounds that stand out are hexaferrites, of specific interest in the 1980s. Of particular interest are the strontium hexaferrites \cite{124song}, Ba-M hexaferrite \cite{124song}, and zinc lithium ferrite \cite{125song}, where the FMR linewidth can be as low as 30~MHz at 60~GHz in thin films, making them strong contenders for excellent spin conductors. Antiferromagnets comprise the majority of magnetically ordered materials. AFIs commonly occur among transition metal compounds, where the interaction between the magnetic atoms is indirect (superexchange), e.g., through oxygen ions as in hematite (Fe\textsubscript{2}O\textsubscript{3}), nickel oxide (NiO), cobalt oxide (CoO) or chromium oxide (Cr\textsubscript{2}O\textsubscript{3}). Fluorides, such as MnF\textsubscript{2}, are also good potential candidates. In hematite, long-range spin transport across 80 microns has been demonstrated \cite{3lebrun}. With a higher level of purity, we expect to see spin transport over microns in the broader class of AFIs.
We anticipate further demonstrations and investigations of long-range spin transport and manipulation in these wide ranges of materials.
\subsection{Condensation and Superfluidity}
Spin-torque oscillators generate sustained ac outputs from dc inputs \cite{126silva,127kim}. In ferromagnets, these oscillators utilize the spin-transfer or spin-orbit torque to evolve into a steady-state oscillation of the magnetization that in turn generates the output signal via magnetoresistance effects. The principle for realizing persistent oscillations in ferromagnets is as follows. The spin-transfer torque enhances or reduces the dissipation depending on the current direction. However, in dedicated geometries, the dependence on the precession cone angle can differ from that of the Gilbert damping. Therefore, for one current polarity, the spin-transfer torque compensates the Gilbert damping at an angle where steady-state oscillations occur. In FIs, the reduced dissipation rate reflected in the smaller Gilbert damping constant can facilitate spin-torque oscillators at lower applied currents. We can speculate that insulators might provide new ways to synchronize oscillators, thus producing a desired larger output signal. Spin-torque oscillators at much higher, i.e., THz, frequencies obtained by using antiferromagnets can be envisioned \cite{128cheng,129khymyn,130sulymenko,Shen:APL2019}. In antiferromagnets, it is the steady-state oscillation of the Néel order that generates the output signal. Typically, the antiferromagnetic resonance frequency is much higher than the resonance frequencies in ferromagnets. This feature enables the creation of THz electronics that can boost the field of spintronics and other branches of high-speed electronics. In AFIs, the small damping rate implies a reduced current to reach the auto-oscillation threshold, which might make the first experimental demonstration easier to carry out.
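For orientation (a standard textbook estimate, not taken from the cited works), the antiferromagnetic resonance frequency in an easy-axis antiferromagnet is set by the exchange enhancement of the anisotropy,
\begin{equation*}
\omega_{\mathrm{AFMR}} \simeq \gamma \sqrt{2 H_E H_A} \, ,
\end{equation*}
where $H_E$ is the exchange field and $H_A$ the anisotropy field. Since $H_E$ can be orders of magnitude larger than $H_A$, this geometric mean lies far above the ferromagnetic resonance scale set by $H_A$ and the applied field alone, which is why antiferromagnetic oscillators naturally reach the THz range.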
Super spin insulatronics is the ultimate quantum limit of magnon condensation and spin superfluidity. Traditionally, in magnetic insulators, spin phenomena are utilized via semiclassical SWs \cite{7kruglyak,Serga:JPD2010}. Beyond this regime, at a sufficient density, magnons condense into a single Bose quantum state. In ferromagnets, magnon BEC manifests itself by a phase-coherent precession of the magnetization. Let us consider a ferromagnet in which the magnetization is homogeneous in the ground state and the lowest energy excitations are also homogeneous. Semiclassically, a unit vector along the magnetization represents the magnetic state. At equilibrium, this vector points along an axis that we take to be the longitudinal \( z \) direction. Condensation is represented by a small deviation of the magnetization in the transverse directions,
\begin{equation}
m_{+}=m_{x}+i m_{y}=a \exp \left[ i (\omega t+\varphi) \right] \, ,
\end{equation}
where the real number \( a \) is an amplitude, \(\omega\) is the ferromagnetic resonance precession frequency, and \(\varphi\) is the phase of the condensate. We sketch the magnon condensation in Fig.\ \ref{fig:condensation}. The reduction of the unit vector of the magnetization along the longitudinal direction, \( \delta m_{z}=a^{2}/2 \), is proportional to the number of magnons. The condensation is manifested in a larger magnon population at the energy minimum of the magnon bands than that described by the Bose-Einstein distribution or, at high temperatures, the Rayleigh-Jeans distribution \cite{131ruckriegel}. In antiferromagnets, magnon condensation is similar to the phenomenon in ferromagnets, but it is the Néel field that undergoes phase-coherent precession rather than the magnetization.
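The excess population signalling condensation can be contrasted with the equilibrium distributions in a short numerical sketch. The mode energy, chemical potential, and temperature below are arbitrary illustrative values (in units of \( k_B T \)), chosen only to show that the Rayleigh-Jeans form is the high-temperature limit of the Bose-Einstein occupation:

```python
from math import exp

def bose_einstein(eps, mu, kT):
    """Equilibrium Bose-Einstein occupation of a magnon mode with energy eps."""
    return 1.0 / (exp((eps - mu) / kT) - 1.0)

def rayleigh_jeans(eps, mu, kT):
    """Classical (high-temperature) limit of the Bose-Einstein occupation."""
    return kT / (eps - mu)

# Illustrative values: mode energy well below kT, vanishing chemical potential.
eps, mu, kT = 0.05, 0.0, 1.0
n_be = bose_einstein(eps, mu, kT)   # ~ kT/eps - 1/2 for eps << kT
n_rj = rayleigh_jeans(eps, mu, kT)
```

A condensate shows up as an occupation of the lowest mode far above either of these equilibrium predictions.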
\begin{figure}[htbp]
\includegraphics[width=0.6\columnwidth]{condensation.pdf}
\caption{Semiclassical picture of magnon condensation. Coherent dynamics with a phase $\varphi$ exists. The transverse components of the magnetization precess with the ferromagnetic resonance frequency $\omega$.}
\label{fig:condensation}
\end{figure}
In thin ferromagnetic films, the dipole-dipole interaction dramatically changes the magnon dispersion. When the ground state magnetization is in-plane, the dispersion is anisotropic, and the energy minimum is at a finite wave-vector. The magnons will then condense around this energy minimum. Importantly, Ref.\ \cite{4demokritov} observed spectroscopically generated magnon condensates in a thin film of a ferrimagnetic insulator at room temperature. The first observation was that the occupation of the lowest-energy state was considerably higher, relative to the surrounding states, than expected from the Bose-Einstein distribution. Subsequent studies showed that the condensate is coherent \cite{Demidov:PRL2008}. In these experiments, parametric pumping by microwave fields parallel to the equilibrium magnetization created a large number of out-of-equilibrium magnons. Four-magnon interactions cause some of these magnons to relax their energies towards the minimum energy. When the pump is turned off, evaporative cooling channels some magnons to much higher energies as compared to the vicinity of the energy minimum and the condensate forms. The typical time-scale related to the magnon-magnon relaxation is shorter than the spin relaxation time associated with the spin-lattice interaction. The stability of the condensate requires that the interaction between the magnons is repulsive \cite{Dalfovo:RMP99,Pitaevskii:2003}. This criterion can seem at odds with the experimental results \cite{Tupitsyn:PRL2008} because established theories predict that the interaction between the condensed magnons is attractive \cite{Lvov:94,Gurevich:CRC96}. It was recently experimentally shown that the Bose-Einstein condensate is stable since the effective interaction between the magnons in spatially inhomogeneous systems is repulsive \cite{Borisenko:NatCom2020}.
A possible source of the repulsive interaction is the dipole field arising from the inhomogeneous condensate density \cite{Borisenko:NatCom2020}. Recently, a new way to realize Bose-Einstein condensation by rapid cooling was also demonstrated \cite{Schneider:NatNano2020}.
NMR-induced condensation and superfluidity in antiferromagnets have also been reported \cite{132bunkov,133bunkov}. Crucial for spin insulatronics, a hypothesis that we can electrically control magnon condensation via spin-transfer and spin-pumping both in ferromagnets \cite{95bender,134bender} and antiferromagnets \cite{135fjarbu} has been proposed. These measurements and theoretical suggestions imply that coherent quantum phenomena that utilize magnons could possibly be demonstrated in the future. Eventually, it might become possible to use these aspects in devices without the need for complicated cooling devices.
Superfluidity is a dissipationless flow governed by the gradient of the condensate phase. In the absence of spin-relaxation, an analogy exists between the spin dynamics in planar magnetic systems and the hydrodynamic behavior of ideal liquids \cite{5halperin}. These concepts have subsequently been extended and dubbed spin superfluidity \cite{136sonin,137sonin}. When magnons condense, the phase of the order parameter equals the phase of the corresponding semiclassical precession angle. The spin current is proportional to the spatial variation of the precession angle \(\varphi\):
\begin{equation}
j_{s}=A \nabla \varphi \, ,
\end{equation}
where \(A\) is a constant related to the spin-stiffness. Fig.\ \ref{fig:superfluidity} presents the physics of spin superfluidity.
\begin{figure}[htbp]
\includegraphics[width=0.99\columnwidth]{superfluidity.pdf}
\caption{Spin superfluidity. The phase of the condensate exhibits spatial variation. The dissipationless spin supercurrent is proportional to the gradient of the phase.}
\label{fig:superfluidity}
\end{figure}
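To make the relation \( j_{s}=A \nabla \varphi \) concrete, the following sketch evaluates the supercurrent from a discretized phase profile. The stiffness constant, channel length, and phase winding are hypothetical numbers chosen only for illustration:

```python
from math import pi

A = 1.0e-12        # assumed spin-stiffness-related constant (arbitrary units)
length = 1.0e-6    # assumed 1 micron channel
n = 101
dx = length / (n - 1)

# Linear phase profile with one full 2*pi winding across the channel.
phi = [2.0 * pi * i / (n - 1) for i in range(n)]

# Central-difference gradient of the condensate phase; j_s = A * dphi/dx.
j_s = [A * (phi[i + 1] - phi[i - 1]) / (2.0 * dx) for i in range(1, n - 1)]
# For a linear profile the dissipationless current is uniform along the channel.
```

The current depends only on the phase gradient, not on the local phase itself, which is the defining property of superflow.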
Electrical injection of superfluid spin transport is possible via spin-transfer and spin-pumping \cite{138takei}. In other words, the bulk superfluid spin current is supplemented with boundary conditions for the injection and detection of the spin currents. These boundary conditions depend on the contacts via the transverse (or ``mixing'') conductance. The physics is similar to the case of the conductance of normal metal-superconductor-normal metal systems that also depends on the contact interfaces between the normal metals and the superconductors. However, differences exist between the case of spin superfluidity and dissipationless charge transport. Although no free carriers exist in insulators, spins, nevertheless, interact with lattice vibrations, which causes dissipation. As a result, in a simple one-dimensional geometry, in a metal-magnetic insulator-metal system, the ratio between the emitted spin current and the spin accumulation bias becomes
\begin{equation}
\frac{j_{s}}{\mu_{s}}=\frac{1}{4 \pi} \frac{g_{L} g_{R}}{g_{L}+g_{R}+g_{\alpha}} \, ,
\end{equation}
where \(g_{L}\) is the transverse (``mixing'') conductance of the left metal-magnetic insulator contact, \( g_{R} \) is the transverse (``mixing'') conductance of the right magnetic insulator-metal contact, and \( g_{ \alpha } \) includes the effects of magnetization dissipation via Gilbert damping. When magnetization dissipation does not occur, \( g_{ \alpha } \) vanishes. In this case, the total resistance of the device is the sum of the resistances of the left and right metal-magnetic insulator contacts. This result implies that there is no resistance in the bulk of the magnetic insulator, and spin transport is dissipationless therein. \( g_{ \alpha } \) is proportional to the Gilbert damping coefficient and the length of the device; it quantifies the total amount of dissipation relative to the injection and detection efficiencies. A signature of spin supercurrent is, therefore, that it decays algebraically as a function of the length of the magnetic insulator \cite{138takei}. This phenomenon contrasts with the expectations of diffusive spin transport via magnons, which decays exponentially when the system exceeds the spin relaxation length \cite{2cornelissen,139cornelissen}.
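The algebraic length dependence can be made explicit in a short sketch of the formula above, taking \( g_{\alpha} \) to grow linearly with the insulator length; all conductance values are hypothetical:

```python
from math import pi

def emitted_ratio(g_L, g_R, g_alpha):
    """j_s / mu_s = (1/4pi) * g_L * g_R / (g_L + g_R + g_alpha)."""
    return g_L * g_R / (4.0 * pi * (g_L + g_R + g_alpha))

# g_alpha is proportional to the Gilbert damping and to the device length L,
# so the signal decays algebraically in L rather than exponentially.
g_L = g_R = 1.0
ratios = [emitted_ratio(g_L, g_R, g_alpha=0.1 * L) for L in (1, 10, 100)]
```

Once \( g_{\alpha} \) dominates the contact conductances, doubling the length roughly halves the signal, in contrast to the exponential suppression expected for diffusive magnon transport.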
However, in ferromagnets, the simple picture above for spin-injection and spin-detection, where the current is governed by superflow, is incomplete. In such setups, the ubiquitous dipole-dipole interaction dramatically affects spin transport and reduces the range to a few hundred nanometers even in ferromagnetic films as thin as five nanometers \cite{140brataas}. In antiferromagnets, long-range dipole-dipole interactions do not occur. Spin superfluidity is also possible in AFIs, but the first predictions for the range of the spin supercurrent were short, less than a micron \cite{141takei}. In biaxial antiferromagnets, the anisotropy hinders superfluidity by creating a substantial threshold that the current must overcome. Nevertheless, the application of a magnetic field removes this obstacle near the spin-flop transition of the antiferromagnet. Spin superfluidity can then persist across many microns \cite{142qaiuzadeh}.
Thus far, in our opinion, electrical control of magnon condensation and spin superfluidity has not yet been experimentally realized. A recent report claims signatures of a long-distance spin supercurrent through an antiferromagnet \cite{143yuan}. However, thermal transport is the basis of the results, crucially without detecting the expected accompanying spin-injected transport signal. As discussed above, other effects, for instance, spatially-extended temperature gradients can explain nonlocal signals.
There are reports of supercurrents in other geometries using thin ferromagnetic layers of YIG. Ref.\ \cite{Bozhko:NatPhys2016} first created a Bose-Einstein condensate of magnons by parametric pumping. Additionally, they heated the condensate in a spatially confined region and measured the spatiotemporal decay of the condensate after turning the pump off. They found that the condensate decays faster in the hot region when there is a thermal gradient in the system. The interpretation is that there is a current from the hot condensate to the cold condensate that speeds up the decay in the hot area. This additional decay channel represents an indirect observation of a spin supercurrent in a magnonic system.
More complicated ways of transferring spin information are possible in condensates. Second sound is a propagation of the properties of the condensate. Ref.\ \cite{Tiberkevich:SciRep2019} reports the excitation of coherent second sound. Ref.\ \cite{Bozhko:NatCom2019} reports the observation of Bogoliubov waves. Humps in the condensate density propagate many hundreds of micrometers out of heated areas.
\subsection{Magnon-induced Superconductivity}
In metals, superconductivity arises from an effective attractive electron-electron interaction. In low temperature superconductors, phonons mediate the attraction between the electrons via the electron-phonon interaction. The interaction is attractive for electrons with energies roughly within the Debye energy of the Fermi surface. This leads to a Cooper instability, the formation of Cooper pairs, and a phase transition from a normal state to a superconducting state. In the superconducting state, charge transport is dissipationless and the Meissner effect causes a perfect diamagnetic screening of external magnetic fields.
In high-temperature superconductors, the superconducting phase is typically close to the antiferromagnetic phase. It is then likely that the effective attraction between the electrons arises from or is related to the spin fluctuations \cite{Moriya2000ap}. However, a widely accepted theory of high-temperature superconductivity is yet to be established. The superconducting gap is a spin-singlet with a d-wave orbital symmetry. There are also materials that exhibit superconductivity with more exotic symmetries.
Similar to phonons, magnons are bosons. Magnons also interact with electrons via the electron-magnon interaction. In the systems of the present interest, magnetic insulators in contact with normal metals, the electron-magnon interaction is at the metal-insulator interface. It is then natural to ask if magnons can cause superconductivity in layered systems of magnetic insulators and conducting materials. The advantage of such systems is that the properties of the electrons, the magnons, and the electron-magnon interaction can be engineered by selecting different materials with desired bulk and interface characteristics. In this way, the magnons and the electrons can be independently tuned. This opens a new door towards exploring superconducting properties, possibly with unprecedented properties. The exploration of such systems is therefore desirable. As an added benefit, unraveling these mysteries could perhaps also give additional insight into the mechanisms behind high-temperature superconductors.
Kargarian et al.\ theoretically explored the possible pairing induced by magnons at the surface of topological insulators \cite{Kargarian:PRL2016}. They considered the surface of a doped 3D topological insulator in contact with a ferromagnetic insulator. In topological insulators, the electron states are helical and spin and momentum are locked. Because of the unusual electronic states, the pairing also becomes unconventional. Ref.\ \cite{Kargarian:PRL2016} finds that the effective interaction between the electrons is unusual: it is attractive between electrons near the Fermi surface with the same momentum. The resulting pairing is called Amperian pairing.
There are experimental observations of superconductivity that breaks time-reversal symmetry in epitaxial bilayer films of bismuth and nickel \cite{Gong:ScienceAdvances:2017}. The demonstration is the onset of the polar Kerr effect when the system becomes superconducting. The critical temperature is 4 K. The results may arise from magnetic fluctuations in nickel. The strong spin-orbit coupling and lack of inversion symmetry can induce an exotic pairing.
Independent of, and before these developments and observations were reported, but likely after they were initiated, one of the present authors started exploring the possibility of magnon-induced superconductivity in thin normal metal films in contact with ferromagnetic and antiferromagnetic insulators. Motivated by the successful demonstration of spin-pumping and spin-transfer torques in bilayers of ferromagnetic insulators and normal metals, the question was whether the same systems could also give rise to magnon-induced superconductivity. The combination of the well-explored features of the electron-magnon coupling across interfaces in such systems with the possibility of pairing therein gives a rich playground for exploring new superconducting states. We will now outline the simplest version of the theory of magnon-induced superconductivity in such systems, hoping to motivate the first experimental detection of such a fascinating and underexplored phenomenon.
Let us consider a two-dimensional metal in contact with ferromagnetic insulators. In the absence of interactions between the two subsystems, the electrons are spin-degenerate with a Hamiltonian
\begin{equation}
H_e = \int d {\bf q} \epsilon_q c^\dag_{{\bf q}s} c_{{\bf q}s} \, ,
\end{equation}
where $\epsilon_q$ is the electron eigenenergy, ${\bf q}$ is the momentum, $s$ is the spin, and $c_{{\bf q}s}$ annihilates an electron with momentum ${\bf q}$ and spin $s$. The Hamiltonian of the magnons in the left ferromagnetic insulator is
\begin{equation}
H_{ml} = \int d {\bf k} \hbar \omega_{\bf k} l^\dag_{{\bf k}} l_{{\bf k}} \, ,
\label{eq:Hlm}
\end{equation}
where $\hbar \omega_{\bf k}$ is the magnon energy and $l_{\bf k}$ annihilates a magnon with momentum ${\bf k}$. Similarly, the Hamiltonian of the magnons in the right ferromagnetic insulators is represented by the Hamiltonian $H_{rm}$ as in Eq.\ (\ref{eq:Hlm}) with $l_{\bf k} \rightarrow r_{\bf k}$, where $r_{\bf k}$ annihilates a magnon with momentum ${\bf k}$ in the right ferromagnetic insulator.
It is well established that localized spins in ferromagnetic insulators can interact with itinerant spins in adjacent conductors. The simplest effect of the electron-magnon interaction is to induce effective Zeeman fields in the normal metal. Such spin splittings can be detrimental to superconductivity when they are large enough. To simplify the discussions and avoid such complications, we assume that the left and right magnetic insulators are identical, but that the magnetizations are anti-parallel. The induced Zeeman fields from the left and right magnetic insulators then cancel each other. In this regime, the interaction is especially simple and determined by $H_{i}=H_{il}+H_{ir}$, where e.g. the interaction between the electrons and the magnons in the left ferromagnet is
\begin{equation}
H_{il} = \int d {\bf q} \int d {\bf k} V \left[ l_{\bf k} c_{{\bf q}+{\bf k}\downarrow}^{\dag} c_{{\bf q} \uparrow} + \mbox{h.c.} \right] \, ,
\label{eq:Heml}
\end{equation}
where $V$ is the momentum-representation of the interface exchange coupling strength between the electrons in the normal metal and the magnons in the left magnetic insulator. In terms of the real-space microscopic interface coupling between the conduction electrons and the localized spins $J_I$, $V=-2J_I \sqrt{s/2N}$, where $s$ is the spin quantum number of the localized spins and $N$ is the number of localized spins \cite{Rohling:PRB2018}. The expression describing the interaction between the electrons in the normal metal and the magnons in the right magnetic insulator is similar to Eq.\ (\ref{eq:Heml}). Taking both interfaces into account, to second order in the electron-magnon coupling strength $V$ and for electron pairs with opposite momenta, the effective electron-electron interaction becomes
\begin{equation}
H_{ee} = \sum_{{\bf k}{\bf q}} V_{{\bf k}{\bf q}} c_{{\bf k}\downarrow}^\dag c_{-{\bf k} \uparrow}^\dag c_{-{\bf q} \uparrow} c_{{\bf q}\downarrow} \, ,
\label{eq:Hee}
\end{equation}
where the strength
\begin{equation}
V_{{\bf k}{\bf q}} = 2 V^2 \frac{\hbar \omega_{{\bf k}+{\bf q}}}{(\hbar \omega_{{\bf k}+{\bf q}})^2-(\epsilon_{\bf k}-\epsilon_{\bf q})^2}
\end{equation}
is determined by the quasi-particle energies of the magnons and electrons, as well as the interface exchange coupling strength between the electrons and the magnons.
In the mean-field approximation, the superconducting properties are determined by the superconducting gap. The gap can be a spin singlet ($S=0$) or a spin triplet ($S=1$). When the gap is a spin singlet, it is an even function of the momentum. For spin triplet gaps, the gap is an odd function of momentum. In ferromagnetic insulator-metal-ferromagnetic insulator systems, the effective electron-electron interaction of Eq.\ (\ref{eq:Hee}) causes pairing between electrons with opposite spins. Consequently, for both spin singlet and spin triplet states, the total spin of the Cooper pair is zero, $S_z=0$.
We consider the gap at the critical temperature, $T=T_c$. For spin triplet pairing, the gap equation becomes \cite{Rohling:PRB2018,Fjaerbu:PRB2019,Erlandsen:PRB2019}
\begin{equation}
\Delta_{{\bf k}T} = - \sum_{{\bf q}} V_{{\bf k}{\bf q}O} \Delta_{{\bf q}T} \chi_q \, ,
\label{eq:gap}
\end{equation}
where $\chi_q=\tanh{(|\epsilon_{\bf q}|/2k_B T)}/(2 |\epsilon_{{\bf q}}|)$ and $V_{{\bf k}{\bf q}O}=( V_{{\bf k}{\bf q}} -V_{-{\bf k} {\bf q}})/2$ is odd (O) in the momentum ${\bf k}$. There is a similar equation for spin singlet pairing described by the gap $\Delta_{{\bf k}s}$, where $V_{{\bf k}{\bf q}O} \rightarrow V_{{\bf k}{\bf q}E}$ and $V_{{\bf k}{\bf q}E}=( V_{{\bf k}{\bf q}} +V_{-{\bf k} {\bf q}})/2$ is even (E) in the momentum ${\bf k}$.
In the gap equation (\ref{eq:gap}), at the Fermi surface, $\epsilon_{\bf k}=\epsilon_{\bf q}=\epsilon_F$, where $\epsilon_F$ is the Fermi energy. Both when the momenta ${\bf k}$ and ${\bf q}$ are parallel (${\bf k}-{\bf q}$ is small) and when they are anti-parallel (${\bf k}+{\bf q}$ is small), the even combination of the effective interaction, $V_{{\bf k}{\bf q}E}$, is positive and repulsive. The system will therefore not favor spin singlet superconductivity. Instead, spin triplet superconductivity is preferred. When the momenta ${\bf k}$ and ${\bf q}$ are parallel (${\bf k}-{\bf q}$ is small), the odd combination of the effective interaction, $V_{{\bf k}{\bf q}O}$, is negative and attractive. In contrast, when the momenta ${\bf k}$ and ${\bf q}$ are anti-parallel (${\bf k}+{\bf q}$ is small), $V_{{\bf k}{\bf q}O}$ is positive, and this is compensated by the fact that $\Delta_{-{\bf q}T}=-\Delta_{{\bf q}T}$. As is well established, in ferromagnets, magnons induce triplet superconductivity \cite{Karchev:EPL2015}.
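The sign structure described above can be checked in a toy one-dimensional model. The quadratic electron dispersion, the gapped linear magnon branch, and all numbers below are illustrative assumptions rather than material parameters:

```python
def V_eff(k, q, V=1.0, omega0=0.1, c=1.0):
    """Toy 1D evaluation of V_kq = 2 V^2 w_{k+q} / (w_{k+q}^2 - (eps_k - eps_q)^2),
    with eps_k = k^2 (even in k) and a gapped magnon branch w_p = omega0 + c*|p|."""
    w = omega0 + c * abs(k + q)
    eps_k, eps_q = k * k, q * q
    return 2.0 * V * V * w / (w * w - (eps_k - eps_q) ** 2)

k = q = 0.5  # parallel momenta on the Fermi surface (eps_k = eps_q)
V_even = 0.5 * (V_eff(k, q) + V_eff(-k, q))  # singlet channel: repulsive
V_odd = 0.5 * (V_eff(k, q) - V_eff(-k, q))   # triplet channel: attractive
```

The odd combination comes out negative because the magnon energy entering the term with ${\bf k} \rightarrow -{\bf k}$ sits near the bottom of the gapped branch, making that contribution large and positive.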
We can give estimates of the critical temperature based on the many studies of spin-transfer torques, spin-pumping, and induced exchange fields in ferromagnetic insulator-normal metal systems. For instance, in the case of the ferromagnetic insulator YIG, we can base the estimates of the interface exchange coupling $J_I$ on the measured spin-mixing conductance \cite{Heinrich:PRL2011,75burrowes}. For the ferromagnetic insulator EuO, we can use the measurements of an induced Zeeman field \cite{Tkaczyk:1988}. For YIG-Au-YIG trilayers, it is found that the critical temperatures range between 0.5 K and 10 K. Similarly, for EuO-Au-EuO trilayers, it is found that the critical temperature is in the range 0.01 K to 0.4 K. Computations of the critical temperature are extremely sensitive to the value of the interface exchange interaction. The uncertainty in the extraction of this crucial parameter from experiments should motivate direct experimental exploration of the possible intriguing behavior of magnon-induced superconductivity in layered systems.
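The quoted sensitivity can be appreciated from a generic weak-coupling BCS-type estimate, $T_c \approx 1.13\, \omega_c e^{-1/\lambda}$; the cutoff energy and dimensionless couplings below are illustrative assumptions, not fitted material parameters:

```python
from math import exp

def bcs_tc(omega_c_K, lam):
    """Weak-coupling BCS-like estimate T_c ~ 1.13 * omega_c * exp(-1/lambda),
    with the cutoff energy omega_c expressed in kelvin."""
    return 1.13 * omega_c_K * exp(-1.0 / lam)

# A 20% change in the dimensionless coupling roughly doubles T_c:
tc_a = bcs_tc(omega_c_K=100.0, lam=0.25)   # ~ 2.07 K
tc_b = bcs_tc(omega_c_K=100.0, lam=0.30)   # ~ 4.03 K
```

Since $\lambda$ scales with the square of the interface exchange coupling, modest uncertainties in $J_I$ translate into order-of-magnitude uncertainties in $T_c$.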
As in ferromagnets, magnons in antiferromagnetic insulators also couple strongly to conduction electrons in adjacent metals. This interfacial tie can also lead to magnon-induced superconductivity in systems consisting of metals and antiferromagnetic insulators. The strong exchange interaction causes the resonance frequencies to be much higher in antiferromagnets than in ferromagnets. In simple antiferromagnets, there are two sublattices with antiparallel magnetic moments at equilibrium. There are therefore two kinds of magnons. The ratio between the couplings of these magnons to the electrons depends on the microscopic properties at the interface. In the simplest case, at compensated interfaces, the spins at the two sublattices couple equally strongly to the itinerant electrons. At uncompensated interfaces, there is a stronger coupling to the localized spins in one sublattice than to the localized spins in the other sublattice.
In tight-binding models, when there is a perfect lattice matching between the conduction band sites in the normal metal and the sites of the localized spins in the antiferromagnet, Umklapp scattering is important \cite{69cheng,141takei}. In antiferromagnets, the electron-magnon coupling analogous to Eq.\ (\ref{eq:Heml}) then involves the Umklapp momentum. At half-filling, an Umklapp process takes a state at the Fermi surface to another state at the Fermi surface \cite{Fjaerbu:PRB2019}. This feature influences the pairing interaction. As a consequence, at half-filling, the pairing is a spin-singlet with d-wave orbital symmetry \cite{Fjaerbu:PRB2019}. The estimates are that the critical temperature in MnF$_2$-Au-MnF$_2$ is of the order of 1 K.
Away from half-filling, the strength of the effective electron-electron interaction is similar to the ferromagnetic case described by Eq.\ (\ref{eq:Hee}), but with a renormalized strength
\begin{equation}
V_{{\bf k}{\bf q}}^{(A)} = V_{{\bf k}{\bf q}} A({\bf k}+{\bf q},\Omega) \, ,
\label{eq:VkpAF}
\end{equation}
where the factor $A$ can be understood as arising from the constructive or destructive interference of squeezed magnons \cite{Erlandsen:PRB2019}. Similar to ferromagnets, the effective electron-electron interaction of Eq.\ (\ref{eq:VkpAF}) gives rise to spin-triplet pairing.
In quantum mechanics, the uncertainties of two canonically conjugate variables cannot vanish at the same time. Nevertheless, squeezed states exist where the uncertainty in one variable is reduced while the uncertainty in the other variable is enhanced. In antiferromagnets, one physical picture is that the magnons that diagonalize the spin Hamiltonian result from squeezing \cite{Kamra:PRB2019}. The elementary excitations are squeezed magnon states. Unlike the magnons in the initial basis, the squeezed magnon states contain an average spin on each sublattice that is much larger than the unit net spin \cite{Kamra:PRB2019}. Yet, at the same time, the compensation between the spins at the two sublattices implies that only one unit spin is excited. These features have important implications for the physics when antiferromagnets are coupled to normal metals.
At interfaces, when the conduction electrons couple to only one of the sublattices, the spins in the normal metal effectively couple to a large spin in the antiferromagnetic insulator. As a result, the factor $A$ in Eq.\ (\ref{eq:VkpAF}) is greatly enhanced because the squeezed magnon carries a large spin at the interface. Therefore, the electron-magnon interaction becomes much stronger \cite{Erlandsen:PRB2019} at uncompensated interfaces as compared to interfaces that couple in similar ways to both sublattices. This effect can significantly enhance the coupling between the electrons and the magnons. In turn, the estimates of the critical temperature then significantly exceed 1 K.
We encourage experimental explorations of the possibility of observing superconductivity in thin metallic films sandwiched between magnetic insulators. It might become possible to engineer such systems with controlled superconducting properties at measurable temperatures.
\section{Conclusions}
We present in this review recent developments in the utilization of spins in magnetic insulators to control electric signals. In magnetic insulators, the lack of mobile charge carriers often implies a considerably longer spin coherence time as compared to metallic systems. Both ferromagnetic and antiferromagnetic insulators are of interest. While the former materials are quite widely studied, the latter systems are less explored, have the advantage of faster response times, and feature intriguing quantum aspects. In the terahertz gap, there are no state-of-the-art technologies for generating and detecting radiation. Using antiferromagnetic insulators, in combination with metals, can enable the development of terahertz electronic devices.
We have seen that spins can propagate across micrometers in a wide range of magnetic insulators. This long spin coherence opens the possibility of exploring the fascinating phenomena of electrical control of magnon condensation and spin superfluidity. Such control enables the use of the magnon phase-coherence and will reveal details of the magnon-magnon and magnon-phonon interactions, also in confined systems.
At low temperatures, magnons might even mediate superconductivity in adjacent conductors. Such systems are of interest since the superconductivity might be unconventional. Furthermore, new ways of forming thin metallic systems or purely two-dimensional systems can open up new doors for controlling the electronic and magnonic properties, possibly with a larger critical temperature as a result.
\section{Acknowledgements}
A.\ B.\ would like to thank Eirik Løhaugen Fjærbu, Niklas Rohling, Akashdeep Kamra, Øyvind Johansen, Rembert Duine, Eirik Erlandsen, and Asle Sudbø for stimulating discussions and collaborations.
A.\ B.\ has received funding from the European Research Council via Advanced Grant No. 669442 "Insulatronics" and the Research Council of Norway Grant through its Center of Excellence funding scheme, Project No. 262633 "QuSpin".
\section{Declarations of Interest}
None.
\section*{Acknowledgements}
\vspace{0.5em}
\small
\noindent This paper is partially supported by AMAOS (Advanced Machine Learning for Automatic Omni-Channel Support), funded by Innovationsfonden, Denmark, and by DFG (German Research Foundation, project no. 407518790).
\input{acronimi-no-lista.tex}
\clearpage
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Experimental Evaluation}
\label{sec:experiments_new}
We evaluate our measures in two ways. Firstly, using the first $4$ ``regular'' constellations described in Section~\ref{sec:dataset}, we check that our measures behave as expected in these known cases; roughly speaking, we check that they tend to increase/decrease as expected. Secondly, using all the constellations described in Section~\ref{sec:dataset}, we check that our measures actually provide different viewpoints on replicability/reproducibility by conducting a correlation analysis. To this end, as usual, we compute Kendall's $\tau$ correlation\footnote{We choose Kendall's $\tau$ because, differently from Spearman's correlation coefficient, it can handle ties and it also has better statistical properties than Pearson's correlation coefficient~\cite{CrouxAndDehon2010}. We did not consider AP correlation~\cite{YilmazEtAl2008} since, as shown in~\cite{Ferro2017}, it ranks measures in the same way as Kendall's $\tau$.} among the rankings of runs produced by each of our measures. Whenever the correlation between two measures is very high, we can report just one measure, since the other will likely represent redundant information~\cite{WebberEtAl2008}; furthermore, as suggested by~\citet{Voorhees1998}, we consider two measures equivalent if their correlation is greater than $0.9$, and noticeably different if Kendall's $\tau$ is below $0.8$.
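As a small illustration of the correlation analysis, Kendall's $\tau$ can be computed from paired per-run scores as below. The score lists are hypothetical, not taken from our experiments, and the sketch implements the tie-free $\tau_a$ variant:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a over paired scores (minimal sketch, no tie correction)."""
    assert len(x) == len(y) and len(x) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (len(x) * (len(x) - 1) / 2)

# Hypothetical per-run scores under two different measures:
ap = [0.37, 0.36, 0.34, 0.31, 0.28]
ndcg = [0.64, 0.62, 0.60, 0.57, 0.54]
tau = kendall_tau(ap, ndcg)  # 1.0: both measures rank the runs identically
```

In practice, a tie-aware variant such as $\tau_b$ should be preferred when scores coincide.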
As effectiveness measures used with \ac{ARP}, \ac{RMSE} and \ac{ER}, we select \ac{AP} and \ac{nDCG} with cut-off $1000$, and P@10. Even though P@10 might be redundant~\cite{WebberEtAl2008}, we want to investigate whether it is easier to replicate/reproduce an experiment with a set-based measure. \ac{RBO} is computed with $\phi = 0.8$. Even though~\citet{WebberEtAl2010} instantiate \ac{RBO} with $\phi \geq 0.9$, inspired by the analysis for \ac{RBP} in~\citet{FerranteEtAl2015b-nf}, we select a lower $\phi$ to consider a less top-heavy measure, since for replicability we do not want to replicate just the top rank positions.
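For concreteness, a minimal truncated (lower-bound) \ac{RBO} can be sketched as follows. The implementation stops at the shared evaluation depth rather than extrapolating the residual, and the example rankings are hypothetical:

```python
def rbo_min(run_a, run_b, phi=0.8):
    """Truncated Rank-Biased Overlap:
    (1 - phi) * sum_{d=1..D} phi^(d-1) * |A_d & B_d| / d,
    where A_d, B_d are the top-d item sets of the two rankings."""
    depth = min(len(run_a), len(run_b))
    seen_a, seen_b = set(), set()
    score = 0.0
    for d in range(1, depth + 1):
        seen_a.add(run_a[d - 1])
        seen_b.add(run_b[d - 1])
        score += phi ** (d - 1) * len(seen_a & seen_b) / d
    return (1.0 - phi) * score

identical = rbo_min(list("abcde"), list("abcde"))  # 1 - 0.8^5
disjoint = rbo_min(list("abcde"), list("vwxyz"))   # 0.0
```

With $\phi = 0.8$ the first few ranks dominate less than with $\phi \geq 0.9$, which is exactly why we adopt the lower value for replicability.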
\subsection{Validation of Measures}
\label{subsec:performance_analysis}
\subsubsection*{Case Study: Replicability}
\begin{table*}[tb]
\caption{Replicability results for \texttt{WCrobust04}: \ac{ARP}, rank correlations, \ac{RMSE}, and $p$-values returned by the paired $t$-test.}
\label{tab:replicability_04}
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{@{}l|ccc|cc|ccc|ccc@{}}
\toprule
& \multicolumn{3}{c|}{\ac{ARP}} & \multicolumn{2}{c|}{Correlation} & \multicolumn{3}{c|}{RMSE} & \multicolumn{3}{c}{$p$-value} \\
run & P@10 & AP & nDCG & $\tau$ & \ac{RBO} & P@10 & AP & nDCG & P@10 & AP & nDCG \\
\midrule
\texttt{WCrobust04} & $0.6460$ & $0.3711$ & $0.6371$ & $1$ & $1$ & $0$ & $0$ & $0$ & $1$ & $1$ & $1$ \\
\midrule
\texttt{rpl\_wcr04\_tf\_1} & $0.6920$ & $0.3646$ & $0.6172$ & $0.0117$ & $0.5448$ & $0.2035$ & $0.0755$ & $0.0796$ & $0.110$ & $0.551$ & $0.077$ \\
\texttt{rpl\_wcr04\_tf\_2} & $0.6900$ & $0.3624$ & $0.6177$ & $0.0096$ & $0.5090$ & $0.2088$ & $0.0799$ & $0.0810$ & $0.137$ & $0.445$ & $0.090$ \\
\texttt{rpl\_wcr04\_tf\_3} & $0.6820$ & $0.3420$ & $0.6011$ & $0.0076$ & $0.4372$ & $0.2375$ & $0.1083$ & $0.0971$ & $0.288$ & $0.056$ & $0.007$ \\
\texttt{rpl\_wcr04\_tf\_4} & $0.6680$ & $0.3106$ & $0.5711$ & $0.0037$ & $0.3626$ & $0.2534$ & $0.1341$ & $0.1226$ & $0.544$ & $9E{-}04$ & $4E{-}05$ \\
\texttt{rpl\_wcr04\_tf\_5} & $0.6220$ & $0.2806$ & $0.5365$ & $0.0064$ & $0.2878$ & $0.2993$ & $0.1604$ & $0.1777$ & $0.575$ & $1E{-}05$ & $1E{-}05$ \\
\midrule
\texttt{rpl\_wcr04\_df\_1} & $0.6700$ & $0.3569$ & $0.6145$ & $0.0078$ & $0.5636$ & $0.2000$ & $0.0748$ & $0.0742$ & $0.401$ & $0.181$ & $0.029$ \\
\texttt{rpl\_wcr04\_df\_2} & $0.6560$ & $0.3425$ & $0.6039$ & $0.0073$ & $0.5455$ & $0.1772$ & $0.0779$ & $0.0802$ & $0.694$ & $0.008$ & $0.002$ \\
\texttt{rpl\_wcr04\_df\_3} & $0.6020$ & $0.3049$ & $0.5692$ & $0.0072$ & $0.5217$ & $0.1649$ & $0.1078$ & $0.1210$ & $0.058$ & $1E{-}06$ & $1E{-}05$ \\
\texttt{rpl\_wcr04\_df\_4} & $0.5220$ & $0.2519$ & $0.5058$ & $0.0048$ & $0.4467$ & $0.2098$ & $0.1695$ & $0.1987$ & $4E{-}06$ & $8E{-}09$ & $1E{-}07$ \\
\texttt{rpl\_wcr04\_df\_5} & $0.4480$ & $0.2121$ & $0.4512$ & $0.0019$ & $0.3532$ & $0.3102$ & $0.2053$ & $0.2572$ & $4E{-}07$ & $2E{-}11$ & $2E{-}09$ \\
\midrule
\texttt{rpl\_wcr04\_tol\_1} & $0.6700$ & $0.3479$ & $0.5992$ & $0.0033$ & $0.5504$ & $0.2010$ & $0.0783$ & $0.0928$ & $0.403$ & $0.035$ & $0.002$ \\
\texttt{rpl\_wcr04\_tol\_2} & $0.5680$ & $0.2877$ & $0.4901$ & $0.0061$ & $0.4568$ & $0.3216$ & $0.1868$ & $0.2931$ & $0.086$ & $0.001$ & $1E{-}04$ \\
\texttt{rpl\_wcr04\_tol\_3} & $0.3700$ & $0.1812$ & $0.3269$ & $0.0066$ & $0.2897$ & $0.4762$ & $0.2937$ & $0.4387$ & $8E{-}06$ & $2E{-}07$ & $6E{-}09$ \\
\texttt{rpl\_wcr04\_tol\_4} & $0.2180$ & $0.0903$ & $0.1728$ & $0.0066$ & $0.1621$ & $0.5488$ & $0.3512$ & $0.5382$ & $1E{-}11$ & $1E{-}12$ & $4E{-}16$ \\
\texttt{rpl\_wcr04\_tol\_5} & $0.0700$ & $0.0088$ & $0.0379$ & $0.0012$ & $0.0518$ & $0.6437$ & $0.4028$ & $0.6228$ & $8E{-}19$ & $3E{-}19$ & $2E{-}29$ \\
\midrule
\texttt{rpl\_wcr04\_C\_1} & $0.7020$ & $0.3671$ & $0.6191$ & $0.0039$ & $0.5847$ & $0.1744$ & $0.0631$ & $0.0640$ & $0.021$ & $0.656$ & $0.046$ \\
\texttt{rpl\_wcr04\_C\_2} & $0.6960$ & $0.3717$ & $0.6244$ & $0.0021$ & $0.5907$ & $0.1772$ & $0.0610$ & $0.0606$ & $0.044$ & $0.945$ & $0.142$ \\
\texttt{rpl\_wcr04\_C\_3} & $0.6840$ & $0.3532$ & $0.6093$ & $0.0096$ & $0.5607$ & $0.2168$ & $0.0833$ & $0.0850$ & $0.218$ & $0.130$ & $0.019$ \\
\texttt{rpl\_wcr04\_C\_4} & $0.6240$ & $0.3168$ & $0.5761$ & $0.0073$ & $0.4595$ & $0.2249$ & $0.1144$ & $0.1194$ & $0.494$ & $4E{-}04$ & $1E{-}04$ \\
\texttt{rpl\_wcr04\_C\_5} & $0.6140$ & $0.3085$ & $0.5689$ & $0.0068$ & $0.4483$ & $0.2315$ & $0.1192$ & $0.1248$ & $0.333$ & $7E{-}05$ & $3E{-}05$ \\
\bottomrule
\end{tabular}%
}
\end{table*}
\begin{figure}[tb]
\begin{subfigure}{.49\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/wcrobust04_tau_tf.pdf}
\caption{Kendall's $\tau$ \texttt{WCrobust04}.}
\label{fig:tau_cutoff_04}
\end{subfigure}%
\hfill
\centering
\begin{subfigure}{.49\linewidth}
\includegraphics[width=\textwidth]{figures/wcrobust04_rmse.pdf}
\caption{\ac{RMSE} \texttt{WCrobust04}.}
\label{fig:rmse_cutoff_04}
\end{subfigure}%
\caption{Kendall's $\tau$ and \ac{RMSE} with \ac{nDCG} computed at different cut-offs for \texttt{WCrobust04}.}
\label{fig:rmse_tau_cutoff}
\end{figure}
Table~\ref{tab:replicability_04} reports the retrieval performance for the baseline $b$-run \texttt{WCrobust04} and the replicability measures: Kendall's $\tau$, \ac{RBO}, \ac{RMSE}, and the $p$-values returned by the paired $t$-test. The corresponding table for \texttt{WCrobust0405} reports similar results and is included in an online appendix\footnote{\url{https://github.com/irgroup/sigir2020-measure-reproducibility/tree/master/appendix}}.
We report \ac{ER} in Table~\ref{tab:replicability_er} and plot \ac{ER} against \ac{DeltaRI} in Figure~\ref{fig:rpl_er_ri}; additional \ac{ER}-\ac{DeltaRI} plots are included in the online appendix.
In Table~\ref{tab:replicability_04}, the low values of Kendall's $\tau$ and \ac{RBO} highlight how hard it is to accurately replicate a run at the ranking level. Replicability runs achieve higher \ac{RBO} scores than Kendall's $\tau$, showing that \ac{RBO} is somewhat less strict.
\ac{RMSE} increases almost consistently as the difference between the \ac{ARP} scores of the original and replicated runs increases. In general, \ac{RMSE} values for P@10 are larger than those for \ac{AP} and \ac{nDCG}, since P@10 has naturally higher variance (it also considers a lower cut-off). For the constellations \texttt{rpl\_wcr04\_tf} and \texttt{rpl\_wcr04\_C}, \ac{RMSE} with P@10 increases even when the difference between \ac{ARP} scores decreases. As pointed out in Section~\ref{subsec:replicability}, this is because \ac{RMSE} penalizes large errors.
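As a reference, the per-topic \ac{RMSE} used here is simply the following (a sketch; the two score arrays are assumed to be aligned by topic):

```python
import math

def rmse(orig_scores, rpl_scores):
    """Root Mean Square Error between per-topic effectiveness scores
    of the original and the replicated run."""
    assert len(orig_scores) == len(rpl_scores)
    return math.sqrt(
        sum((o - r) ** 2 for o, r in zip(orig_scores, rpl_scores))
        / len(orig_scores)
    )
```

Because the per-topic errors are squared before averaging, a single badly replicated topic can dominate the score, which is why \ac{RMSE} and the average-based comparisons can disagree.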
On the other hand, \ac{RMSE} decreases almost consistently as the cut-off value increases, as shown in Figure~\ref{fig:rmse_cutoff_04}. As expected, if we consider the whole ranking, the replicability runs retrieve more relevant documents and thus achieve better \ac{RMSE} scores.
As a general observation, it is easier to replicate a run in terms of \ac{RMSE} than in terms of Kendall's $\tau$ or \ac{RBO}. This is further corroborated by the correlation results in Table~\ref{tab:replicability_correlation}, which show low correlation between \ac{RMSE} and Kendall's $\tau$. Therefore, even if the original and the replicated runs place documents with the same relevance labels in the same rank positions, those documents are not the same, as shown in Figure~\ref{fig:tau_cutoff_04}, where Kendall's $\tau$ is computed at different cut-offs. This does not affect system performance, but it might affect the user experience, which can be completely different.
For the paired $t$-test, as the difference in \ac{ARP} decreases, the $p$-value increases, showing that the runs are more similar. This is further validated by the high correlation between \ac{ARP} and $p$-values reported in Table~\ref{tab:replicability_correlation}. Recall that the numerator of the $t$-statistic essentially computes the difference in \ac{ARP} scores, which explains the consistency of these results.
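A sketch of the paired test over per-topic scores, using SciPy (the wrapper name is ours):

```python
from scipy.stats import ttest_rel

def replicability_p_value(orig_scores, rpl_scores):
    """Paired t-test on per-topic scores of the original and the
    replicated run; a high p-value means no evidence of a difference.
    The numerator of the t-statistic is the mean per-topic difference,
    i.e. Delta ARP."""
    _, p = ttest_rel(orig_scores, rpl_scores)
    return p
```

Pairing by topic is possible here because original and replicated runs are evaluated on the same collection and topic set.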
For \texttt{rpl\_wcr04\_tf} and \texttt{rpl\_wcr04\_C}, \ac{RMSE} and $p$-values are not consistent: \ac{RMSE} increases, meaning the error increases, but $p$-values also increase, meaning the runs are considered more similar. As mentioned above, this happens because \ac{RMSE} penalizes large per-topic errors, while the $t$-statistic is tightly related to \ac{ARP} scores.
\begin{table}[tb] \setlength{\tabcolsep}{3.5pt}
\caption{ER results for replicability and reproducibility: the $a$-run
is \texttt{WCrobust0405} on TREC Common Core 2017; the
$b$-run is \texttt{WCrobust04}, for replicability on TREC Common
Core 2017, for reproducibility on TREC Common Core 2018.}
\label{tab:replicability_er}
\begin{tabular}{@{}l|ccc||ccc@{}}
\toprule
&\multicolumn{3}{c||}{replicability}&\multicolumn{3}{c}{reproducibility}\\
run & P@10 & \ac{AP} & \ac{nDCG} & P@10 & \ac{AP} & \ac{nDCG} \\
\midrule
\texttt{rpl\_tf\_1} & $0.8077$ & $1.0330$ & $1.1724$ & $1.1923$ & $1.2724$ & $2.0299$ \\
\texttt{rpl\_tf\_2} & $0.7308$ & $1.0347$ & $1.1336$ & $0.9615$ & $1.3195$ & $2.2139$ \\
\texttt{rpl\_tf\_3} & $0.9038$ & $1.3503$ & $1.3751$ & $1.5000$ & $1.5616$ & $2.5365$ \\
\texttt{rpl\_tf\_4} & $0.6346$ & $1.4719$ & $1.5703$ & $1.4231$ & $1.9493$ & $2.9317$ \\
\texttt{rpl\_tf\_5} & $1.1346$ & $1.5955$ & $1.8221$ & $1.5385$ & $1.7010$ & $3.0569$ \\
\midrule
\texttt{rpl\_df\_1} & $0.9615$ & $0.9995$ & $1.1006$ & $0.4615$ & $0.7033$ & $0.9547$ \\
\texttt{rpl\_df\_2} & $1.0192$ & $0.9207$ & $1.0656$ & $0.4231$ & $0.4934$ & $0.6586$ \\
\texttt{rpl\_df\_3} & $1.0385$ & $0.8016$ & $1.0137$ & $0.1923$ & $0.5429$ & $1.0607$ \\
\texttt{rpl\_df\_4} & $0.9615$ & $0.5911$ & $0.8747$ & $0.3846$ & $0.5136$ & $0.8333$ \\
\texttt{rpl\_df\_5} & $0.8654$ & $0.3506$ & $0.6459$ & $0.3846$ & $0.4857$ & $0.7260$ \\
\midrule
\texttt{rpl\_tol\_1} & $1.0769$ & $1.2013$ & $1.3455$ & $0.5769$ & $0.6574$ & $0.8780$ \\
\texttt{rpl\_tol\_2} & $1.3269$ & $1.4946$ & $1.9290$ & $0.8077$ & $0.5194$ & $0.8577$ \\
\texttt{rpl\_tol\_3} & $1.8654$ & $2.1485$ & $2.8496$ & $2.0000$ & $1.4524$ & $2.9193$ \\
\texttt{rpl\_tol\_4} & $2.0962$ & $2.2425$ & $3.3213$ & $2.3846$ & $2.1242$ & $3.9092$ \\
\texttt{rpl\_tol\_5} & $1.2500$ & $1.0469$ & $1.8504$ & $0.2692$ & $0.1116$ & $0.5595$ \\
\midrule
\texttt{rpl\_C\_1} & $0.6346$ & $0.6300$ & $0.8901$ & $2.1538$ & $1.8877$ & $3.7777$ \\
\texttt{rpl\_C\_2} & $0.8077$ & $0.7361$ & $0.9240$ & $2.2308$ & $1.9644$ & $3.8621$ \\
\texttt{rpl\_C\_3} & $0.8654$ & $1.1195$ & $1.2092$ & $2.3846$ & $2.2743$ & $4.2783$ \\
\texttt{rpl\_C\_4} & $0.9231$ & $1.1642$ & $1.2911$ & $0.6538$ & $0.7316$ & $1.0403$ \\
\texttt{rpl\_C\_5} & $0.8846$ & $1.1214$ & $1.2542$ & $0.5769$ & $0.6915$ & $0.9741$ \\
\bottomrule
\end{tabular}
\end{table}
Table~\ref{tab:replicability_er} (left) reports \ac{ER} scores for replicability runs. \texttt{WCrobust04} is the baseline $b$-run, while \texttt{WCrobust0405} is the advanced $a$-run, both on TREC Common Core 2017. Recall that, for \ac{ER}, the closer the score to $1$, the more successful the replication.
\ac{ER} behaves as expected: when the quality of the replicated runs deteriorates, \ac{ER} scores tend to move further from $1$. As for \ac{RMSE}, we can observe that the extent of success for the replication experiments depends on the effectiveness measure. Thus, the best practice is to consider multiple effectiveness measures.
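The \ac{ER} computation can be sketched as follows, given per-topic score lists for the original and replicated $b$- and $a$-runs (the function name is ours):

```python
def effect_ratio(b_orig, a_orig, b_rpl, a_rpl):
    """Effect Ratio: mean per-topic improvement of the replicated a-run
    over the replicated b-run, divided by the same quantity for the
    original runs. ER = 1 means the effect is perfectly replicated."""
    mean_delta = lambda a, b: sum(x - y for x, y in zip(a, b)) / len(b)
    return mean_delta(a_rpl, b_rpl) / mean_delta(a_orig, b_orig)
```

An \ac{ER} below $1$ means the replication recovers only part of the original improvement; above $1$, it overshoots it.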
Note that, for the run constellations \texttt{rpl\_wcr04\_tf} and \texttt{rpl\_\-wcr04\_\-C}, there is no agreement on the best replication experiment when different effectiveness measures are considered. This trend is similar to the one observed with \ac{RMSE}, $p$-values, and the delta in \ac{ARP}. For example, for \ac{ER} with P@10, the best replicability runs are \texttt{rpl\_wcr04\_tf3} and \texttt{rpl\_}\texttt{wcr0405\_tf3}, but the \ac{ER} scores are not stable; for \ac{AP} and \ac{nDCG}, \ac{ER} values tend to move further from $1$ as the replicability runs deteriorate. Again, this is due to the high variance of P@10.
\begin{figure}[tb]
\centering
\begin{subfigure}{.49\linewidth}
\includegraphics[width=\textwidth]{figures/er_ri_tf.pdf}
\caption{\texttt{rpl\_tf} runs.}
\label{fig:er_ri_tf}
\end{subfigure}%
\hfill
\begin{subfigure}{.49\linewidth}
\includegraphics[width=\textwidth]{figures/er_ri_df.pdf}
\caption{\texttt{rpl\_df} runs.}
\label{fig:er_ri_df}
\end{subfigure}%
\caption{Replicability: \ac{ER} on the $x$-axis against \ac{DeltaRI} on the $y$-axis.}
\label{fig:rpl_er_ri}
\end{figure}
Figure~\ref{fig:rpl_er_ri} illustrates \ac{ER} scores against \ac{DeltaRI} for $2$ of the constellations in Table~\ref{tab:replicability_er}; the plots for the other constellations are included in the online appendix. Recall that, in Figure~\ref{fig:rpl_er_ri}, the closer a point is to the reference point $(1, 0)$, the better the replication experiment, both in terms of effect sizes and absolute differences.
The \ac{ER}-\ac{DeltaRI} plot can be used as a visual tool to guide researchers in exploring the ``space of replicability'' runs. For example, in Figure~\ref{fig:er_ri_tf}, for \ac{AP} and \ac{nDCG} the point $(1, 0)$ is approached from Region $4$, which is the preferred region, since it corresponds to a successful replication both in terms of effect sizes and relative improvements. Conversely, in Figure~\ref{fig:er_ri_df}, it is clear that for \ac{AP} the point $(1, 0)$ is approached from Region $1$, which corresponds to a somewhat successful replication in terms of effect sizes, but not in terms of relative improvements.
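A sketch of the second coordinate of these plots, under the assumption that \ac{DeltaRI} subtracts the replicated relative improvement from the original one (the function name is ours):

```python
def delta_ri(b_orig, a_orig, b_rpl, a_rpl):
    """Delta Relative Improvement: RI_orig - RI_rpl, where
    RI = (mean(a) - mean(b)) / mean(b). A value of 0 means the relative
    improvement of the a-run over the b-run is preserved."""
    mean = lambda xs: sum(xs) / len(xs)
    ri = lambda a, b: (mean(a) - mean(b)) / mean(b)
    return ri(a_orig, b_orig) - ri(a_rpl, b_rpl)
```

Each replication experiment then contributes one point $(\mathrm{ER}, \Delta\mathrm{RI})$ per effectiveness measure, and its sign pattern determines the region it falls into.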
\subsubsection*{Case Study: Reproducibility}
For reproducibility, Table~\ref{tab:reproducibility_p_value} reports \ac{ARP} and $p$-values in terms of P@10, \ac{AP}, and \ac{nDCG}, for the runs reproducing \texttt{WCrobust04} on TREC Common Core 2018. The corresponding table for \texttt{WCrobust0405} is included in the online appendix. Note that, in this case, we do not have the original run scores, so we cannot directly compare \ac{ARP} values. This is the main challenge when evaluating reproducibility runs.
From the $p$-values in Table~\ref{tab:reproducibility_p_value}, we can conclude that all the reproducibility runs are statistically significantly different from the original run, the highest $p$-value being just $0.005$. Therefore, it seems that none of the runs successfully reproduced the original run.
However, this is likely due to the two collections being too different, which in turn makes the score distributions different as well. Consequently, the $t$-test considers all the distributions significantly different. To validate this hypothesis, we carried out an unpaired $t$-test between pairs of replicability and reproducibility runs in the $4$ different constellations; each pair of runs is generated by the same system on two different collections. The $p$-values for this experiment are reported in the online appendix. Again, the majority of the runs are considered statistically different, except for a few cases in \texttt{rpl\_wcr04\_df} and \texttt{rpl\_wcr04\_tol}, which also exhibit higher $p$-values in Table~\ref{tab:reproducibility_p_value}. This shows that, depending on the collections, the unpaired $t$-test can fail to correctly detect reproduced runs.
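The unpaired variant can be sketched as follows; since the two collections have different topic sets (e.g.\ $50$ vs.\ $25$ topics), no pairing by topic is possible (the wrapper name is ours):

```python
from scipy.stats import ttest_ind

def reproducibility_p_value(rpl_scores, rpd_scores):
    """Unpaired t-test between the per-topic scores of a replicated run
    (old collection) and a reproduced run (new collection); the topic
    sets differ, so a paired test is not applicable."""
    _, p = ttest_ind(rpl_scores, rpd_scores)
    return p
```

When the two collections induce very different score distributions, this test tends to reject regardless of how faithful the reproduction is, which matches the behaviour discussed above.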
Table~\ref{tab:replicability_er} (right) reports \ac{ER} scores for reproducibility runs. At first sight, we can see that \ac{ER} scores are much lower (close to $0$) or much higher ($\gg 1$) than in the replicability case. If it is hard to perfectly replicate an experiment, it is even harder to perfectly reproduce it.
This is illustrated in the \ac{ER}-\ac{DeltaRI} plot in Figure~\ref{fig:rpd_er_ri}. In Figure~\ref{fig:rpd_er_ri_tf}, the majority of the points are far from the best reproduction $(1, 0)$, even if they lie in Region $4$. In Figure~\ref{fig:rpd_er_ri_df}, just one point is in the preferred Region $4$, while many points are in Region $2$, i.e.~a failure to reproduce both the effect size and the relative improvement.
\begin{figure}[tb]
\centering
\begin{subfigure}{.49\linewidth}
\includegraphics[width=\textwidth]{figures/rpd_er_ri_tf.pdf}
\caption{\texttt{rpd\_tf} runs.}
\label{fig:rpd_er_ri_tf}
\end{subfigure}%
\hfill
\begin{subfigure}{.49\linewidth}
\includegraphics[width=\textwidth]{figures/rpd_er_ri_df.pdf}
\caption{\texttt{rpd\_df} runs.}
\label{fig:rpd_er_ri_df}
\end{subfigure}%
\caption{Reproducibility: \ac{ER} on the $x$-axis against \ac{DeltaRI} on the $y$-axis.}
\label{fig:rpd_er_ri}
\end{figure}
\begin{table}[tb]
\caption{Reproducibility: \ac{ARP} and $p$-value (unpaired $t$-test), for \texttt{WCrobust04}. The original runs are on TREC Common Core 2017, and reproduced runs on TREC Common Core 2018.}
\label{tab:reproducibility_p_value}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}l|ccc|ccc@{}}
\toprule
& \multicolumn{3}{c|}{\ac{ARP}} & \multicolumn{3}{c}{$p$-value} \\
run & P@10 & \ac{AP} & \ac{nDCG} & P@10 & \ac{AP} & \ac{nDCG} \\
\midrule
\texttt{rpd\_tf\_1} & $0.3680$ & $0.1619$ & $0.3876$ & $7E{-}04$ & $6E{-}06$ & $6E{-}06$ \\
\texttt{rpd\_tf\_2} & $0.3760$ & $0.1628$ & $0.3793$ & $9E{-}04$ & $8E{-}06$ & $4E{-}06$ \\
\texttt{rpd\_tf\_3} & $0.3280$ & $0.1468$ & $0.3587$ & $8E{-}05$ & $1E{-}06$ & $8E{-}07$ \\
\texttt{rpd\_tf\_4} & $0.3040$ & $0.1180$ & $0.3225$ & $2E{-}05$ & $3E{-}08$ & $1E{-}08$ \\
\texttt{rpd\_tf\_5} & $0.2920$ & $0.1027$ & $0.2854$ & $1E{-}05$ & $6E{-}09$ & $4E{-}10$ \\
\midrule
\texttt{rpd\_df\_1} & $0.4240$ & $0.1895$ & $0.4543$ & $0.005$ & $8E{-}05$ & $3E{-}04$ \\
\texttt{rpd\_df\_2} & $0.4200$ & $0.1972$ & $0.4727$ & $0.003$ & $1E{-}04$ & $9E{-}04$ \\
\texttt{rpd\_df\_3} & $0.3880$ & $0.1757$ & $0.4304$ & $0.001$ & $2E{-}05$ & $8E{-}05$ \\
\texttt{rpd\_df\_4} & $0.3360$ & $0.1458$ & $0.4000$ & $7E{-}05$ & $8E{-}07$ & $6E{-}06$ \\
\texttt{rpd\_df\_5} & $0.2960$ & $0.1140$ & $0.3495$ & $9E{-}06$ & $1E{-}08$ & $1E{-}07$ \\
\midrule
\texttt{rpd\_tol\_1} & $0.4200$ & $0.1872$ & $0.4469$ & $0.005$ & $6E{-}05$ & $2E{-}04$ \\
\texttt{rpd\_tol\_2} & $0.3960$ & $0.1769$ & $0.4134$ & $0.002$ & $3E{-}05$ & $5E{-}05$ \\
\texttt{rpd\_tol\_3} & $0.2040$ & $0.0987$ & $0.2365$ & $7E{-}08$ & $8E{-}09$ & $1E{-}10$ \\
\texttt{rpd\_tol\_4} & $0.0720$ & $0.0183$ & $0.0572$ & $1E{-}12$ & $5E{-}14$ & $3E{-}22$ \\
\texttt{rpd\_tol\_5} & $0.0200$ & $0.0007$ & $0.0048$ & $5E{-}16$ & $1E{-}15$ & $3E{-}27$ \\
\midrule
\texttt{rpd\_C\_1} & $0.2600$ & $0.1228$ & $0.2786$ & $5E{-}06$ & $3E{-}07$ & $2E{-}08$ \\
\texttt{rpd\_C\_2} & $0.2600$ & $0.1216$ & $0.2790$ & $5E{-}06$ & $2E{-}07$ & $2E{-}08$ \\
\texttt{rpd\_C\_3} & $0.2360$ & $0.0969$ & $0.2507$ & $8E{-}07$ & $7E{-}09$ & $5E{-}10$ \\
\texttt{rpd\_C\_4} & $0.3600$ & $0.1609$ & $0.4095$ & $3E{-}04$ & $4E{-}06$ & $1E{-}05$ \\
\texttt{rpd\_C\_5} & $0.3520$ & $0.1565$ & $0.4026$ & $2E{-}04$ & $2E{-}06$ & $8E{-}06$ \\
\bottomrule
\end{tabular}%
}
\end{table}
\subsection{Correlation Analysis}
\label{subsec:correlation_analysis}
\subsubsection*{Replicability}
Note that for some measures, namely Kendall's $\tau$, \ac{RBO}, and the $p$-value, the higher the score, the better the replicated run; conversely, for \ac{RMSE} and Delta \ac{ARP} (the absolute difference in \ac{ARP}), the lower the score, the better the replicated run. Thus, before computing the correlation among measures, we ensure that all the measure scores are oriented consistently with respect to each other. In practice, we consider the opposite of $\tau$, \ac{RBO}, and the $p$-values, and for \ac{ER} we consider $|1 - ER|$, since the closer its score to $1$, the better the replicability performance.
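This orientation step, followed by the correlation computation, can be sketched as follows (measure keys and function names are ours):

```python
from scipy.stats import kendalltau

def orient(measure, scores):
    """Flip scores so that, for every measure, lower = better:
    negate tau, RBO and p-values; map ER to |1 - ER|; RMSE and
    Delta ARP are already 'lower is better'."""
    if measure in ("tau", "rbo", "p_value"):
        return [-s for s in scores]
    if measure == "er":
        return [abs(1 - s) for s in scores]
    return list(scores)  # rmse, delta_arp

def measure_correlation(m_a, scores_a, m_b, scores_b):
    """Kendall's tau between two measures over the same set of runs,
    after aligning their orientations."""
    corr, _ = kendalltau(orient(m_a, scores_a), orient(m_b, scores_b))
    return corr
```

Without this step, a measure where higher is better would spuriously anti-correlate with one where lower is better, even when both rank the runs identically.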
Table~\ref{tab:replicability_correlation} reports Kendall's $\tau$ correlation among replicability measures on the set of runs replicating \texttt{WCrobust04} (upper triangle, white background) and \texttt{WCrobust0405} (lower triangle, turquoise background). The correlation between \ac{ARP} and $\tau$ is low, below $0.29$, while it is higher for \ac{RBO}, up to $0.70$. This validates the findings of Section~\ref{subsec:performance_analysis}, showing that Kendall's $\tau$ takes a completely different perspective when evaluating replicability runs. Between $\tau$ and \ac{RBO}, \ac{RBO} correlates more with \ac{ARP} than
$\tau$, especially with respect to \ac{AP} and \ac{nDCG}. Moreover, $\tau$ and \ac{RBO} are weakly correlated with each other. This is because \ac{RBO} is top-heavy, like \ac{AP} and \ac{nDCG}, while Kendall's $\tau$ considers each rank position as equally important.
The correlation between \ac{ARP} and \ac{RMSE} is higher, especially when the same measure is used by both \ac{ARP} and \ac{RMSE}. Nevertheless, the correlation is always lower than $0.86$, showing that comparing the overall average is different from comparing performance scores topic by topic, as also shown for P@10 in Table~\ref{tab:replicability_04}. Furthermore, the correlation between \ac{RMSE} instantiated with \ac{AP} and with \ac{nDCG} is high, above $0.9$; this is due to \ac{AP} and \ac{nDCG} being highly correlated, as also shown by the correlation of \ac{ARP} with \ac{AP} and \ac{nDCG} (above $0.90$) and of the $p$-values with \ac{AP} and \ac{nDCG} (above $0.91$).
When using the same performance measure, \ac{ARP} and the $p$-value approaches are highly correlated, even if, from Table~\ref{tab:replicability_04},
several runs have small $p$-values and are statistically different. As mentioned in Section~\ref{subsec:performance_analysis}, the numerator of the $t$-statistic is Delta \ac{ARP}, and, likely due to low variance, Delta \ac{ARP} and $p$-values are tightly related.
As explained in Section~\ref{subsec:replicability}, \ac{ER} takes a different perspective when evaluating replicability runs. This is corroborated by the correlation results, which show that this measure has low correlation with \ac{ARP} and with any other evaluation approach. Indeed, replicating the overall improvement over a baseline does not mean that there is a perfect replication on each topic. Moreover, even the correlation among \ac{ER} instantiated with different measures is low, which means that a mean improvement over the baseline in terms of \ac{AP} does not necessarily correspond to a similar mean improvement for \ac{nDCG}.
\begin{table*}[tb]
\caption{Replicability: correlation among different measures for runs replicating \texttt{WCrobust04} (white background); and runs replicating \texttt{WCrobust0405} (turquoise background).}
\label{tab:replicability_correlation}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}l|ccc|cc|ccc|ccc|ccc@{}}
\toprule
& \multicolumn{3}{c|}{Delta \ac{ARP}} & \multicolumn{2}{c|}{Correlation} & \multicolumn{3}{c|}{\ac{RMSE}} & \multicolumn{3}{c|}{$p$-value} & \multicolumn{3}{c}{\ac{ER}} \\
& P@10 & \ac{AP} & \ac{nDCG} & $\tau$ & \ac{RBO} & P@10 & \ac{AP} & \ac{nDCG} & P@10 & \ac{AP} & \ac{nDCG} & P@10 & \ac{AP} & \ac{nDCG} \\
\midrule
$\Delta$arp\_P@10 & \cellcolor{lg}- & $0.4175$ & $0.3979$ & $0.2456$ & $0.3684$ & $0.3419$ & $0.4552$ & $0.4290$ & $0.9156$ & $0.3668$ & $0.3700$ & $0.2348$ & $0.1752$ & $0.0884$\\
$\Delta$arp\_\ac{AP} & \cellcolor{pt}$0.4535$ & \cellcolor{lg}- & $0.9118$ & $0.2718$ & $0.7045$ & $0.5209$ & $0.8514$ & $0.8090$ & $0.3855$ & $0.8841$ & $0.8596$ & $0.2145$ & $0.3012$ & $0.3731$\\
$\Delta$arp\_\ac{nDCG} & \cellcolor{pt}$0.4716$ & \cellcolor{pt}$0.9363$ & \cellcolor{lg}- & $0.2882$ & $0.6555$ & $0.5339$ & $0.8580$ & $0.8547$ & $0.3463$ & $0.8318$ & $0.8302$ & $0.2374$ & $0.3208$ & $0.4318$\\
\midrule
$\tau$ & \cellcolor{pt}$0.2620$ & \cellcolor{pt}$0.2865$ & \cellcolor{pt}$0.2620$ & \cellcolor{lg}- & $0.2180$ & $0.2788$ & $0.2702$ & $0.2898$ & $0.2434$ & $0.2376$ & $0.2457$ & $0.1834$ & $0.2718$ & $0.2098$\\
\ac{RBO} & \cellcolor{pt}$0.3946$ & \cellcolor{pt}$0.6637$ & \cellcolor{pt}$0.6457$ & \cellcolor{pt}$0.3584$ & \cellcolor{lg}- & $0.6026$ & $0.7616$ & $0.6898$ & $0.3201$ & $0.6376$ & $0.6490$ & $0.3307$ & $0.2049$ & $0.3029$\\
\midrule
\ac{RMSE}\_P@10 & \cellcolor{pt}$0.5420$ & \cellcolor{pt}$0.6713$ & \cellcolor{pt}$0.7089$ & \cellcolor{pt}$0.3213$ & \cellcolor{pt}$0.7433$ & \cellcolor{lg}- & $0.6239$ & $0.5944$ & $0.2544$ & $0.4080$ & $0.4129$ & $0.3452$ & $0.2706$ & $0.3753$\\
\ac{RMSE}\_\ac{AP} & \cellcolor{pt}$0.5076$ & \cellcolor{pt}$0.7747$ & \cellcolor{pt}$0.8188$ & \cellcolor{pt}$0.3224$ & \cellcolor{pt}$0.7910$ & \cellcolor{pt}$0.8136$ & \cellcolor{lg}- & $0.8988$ & $0.4034$ & $0.7355$ & $0.7273$ & $0.2734$ & $0.3453$ & $0.4171$\\
\ac{RMSE}\_\ac{nDCG} & \cellcolor{pt}$0.4666$ & \cellcolor{pt}$0.7616$ & \cellcolor{pt}$0.8188$ & \cellcolor{pt}$0.3094$ & \cellcolor{pt}$0.7682$ & \cellcolor{pt}$0.8054$ & \cellcolor{pt}$0.9184$ & \cellcolor{lg}- & $0.3806$ & $0.7127$ & $0.6849$ & $0.2767$ & $0.3649$ & $0.4498$\\
\midrule
p\_value\_P@10 & \cellcolor{pt}$0.8393$ & \cellcolor{pt}$0.3694$ & \cellcolor{pt}$0.3645$ & \cellcolor{pt}$0.2566$ & \cellcolor{pt}$0.2877$ & \cellcolor{pt}$0.3790$ & \cellcolor{pt}$0.3743$ & \cellcolor{pt}$0.3400$ & \cellcolor{lg}- & $0.3740$ & $0.3593$ & $0.2129$ & $0.1486$ & $0.0327$\\
p\_value\_\ac{AP} & \cellcolor{pt}$0.3913$ & \cellcolor{pt}$0.8498$ & \cellcolor{pt}$0.7927$ & \cellcolor{pt}$0.2506$ & \cellcolor{pt}$0.5657$ & \cellcolor{pt}$0.5470$ & \cellcolor{pt}$0.6245$ & \cellcolor{pt}$0.6180$ & \cellcolor{pt}$0.3564$ & \cellcolor{lg}- & $0.9135$ & $0.1736$ & $0.2343$ & $0.2898$\\
p\_value\_\ac{nDCG} & \cellcolor{pt}$0.3848$ & \cellcolor{pt}$0.8416$ & \cellcolor{pt}$0.7845$ & \cellcolor{pt}$0.2424$ & \cellcolor{pt}$0.5543$ & \cellcolor{pt}$0.5356$ & \cellcolor{pt}$0.6196$ & \cellcolor{pt}$0.6033$ & \cellcolor{pt}$0.3384$ & \cellcolor{pt}$0.9069$ & \cellcolor{lg}- & $0.2178$ & $0.2163$ & $0.3110$\\
\midrule
\ac{ER}\_P@10 & \cellcolor{pt}$0.0739$ & \cellcolor{pt}$0.2652$ & \cellcolor{pt}$0.2767$ & \cellcolor{pt}$0.2227$ & \cellcolor{pt}$0.3537$ & \cellcolor{pt}$0.3108$ & \cellcolor{pt}$0.3193$ & \cellcolor{pt}$0.3144$ & \cellcolor{pt}$0.0459$ & \cellcolor{pt}$0.1817$ & \cellcolor{pt}$0.1867$ & \cellcolor{lg}- & $0.2833$ & $0.1736$\\
\ac{ER}\_\ac{AP} & \cellcolor{pt}$0.3013$ & \cellcolor{pt}$0.2963$ & \cellcolor{pt}$0.3078$ & \cellcolor{pt}$0.1673$ & \cellcolor{pt}$0.2343$ & \cellcolor{pt}$0.3312$ & \cellcolor{pt}$0.3551$ & \cellcolor{pt}$0.3420$ & \cellcolor{pt}$0.2599$ & \cellcolor{pt}$0.1886$ & \cellcolor{pt}$0.1706$ & \cellcolor{pt}$0.2833$ & \cellcolor{lg}- & $0.3992$\\
\ac{ER}\_\ac{nDCG} & \cellcolor{pt}$0.2718$ & \cellcolor{pt}$0.2767$ & \cellcolor{pt}$0.3143$ & \cellcolor{pt}$0.1216$ & \cellcolor{pt}$0.2669$ & \cellcolor{pt}$0.3377$ & \cellcolor{pt}$0.3747$ & \cellcolor{pt}$0.3551$ & \cellcolor{pt}$0.1553$ & \cellcolor{pt}$0.1494$ & \cellcolor{pt}$0.1706$ & \cellcolor{pt}$0.1736$ & \cellcolor{pt}$0.3992$ & \cellcolor{lg}- \\
\bottomrule
\end{tabular}
}
\end{table*}
\subsubsection*{Reproducibility}
\begin{table}[tb]
\caption{Reproducibility: correlation among different measures for runs reproducing \texttt{WCrobust04} (white background); and runs reproducing \texttt{WCrobust0405} (turquoise background).}
\label{tab:reproducibility_correlation}
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{@{}l|ccc|ccc@{}}
\toprule
& \multicolumn{3}{c|}{$p$-value} & \multicolumn{3}{c}{\ac{ER}} \\
& P@10 & \ac{AP} & \ac{nDCG} & P@10 & \ac{AP} & \ac{nDCG} \\
\midrule
p\_value\_P@10 & \cellcolor{lg}- & $0.8545$ & $0.8446$ & $-0.2050$ & $-0.1153$ & $0.0025$\\
p\_value\_\ac{AP} & \cellcolor{pt}$0.8168$ & \cellcolor{lg}- & $0.8694$ & $-0.1743$ & $-0.1151$ & $-0.0335$\\
p\_value\_\ac{nDCG} & \cellcolor{pt}$0.8054$ & \cellcolor{pt}$0.9216$ & \cellcolor{lg}- & $-0.2350$ & $-0.2033$ & $-0.0857$\\
\midrule
\ac{ER}\_P@10 & \cellcolor{pt}$0.0939$ & \cellcolor{pt}$0.0674$ & \cellcolor{pt}$0.0756$ & \cellcolor{lg}- & $0.5651$ & $0.3091$\\
\ac{ER}\_\ac{AP} & \cellcolor{pt}$0.2232$ & \cellcolor{pt}$0.2082$ & \cellcolor{pt}$0.2473$ & \cellcolor{pt}$0.5886$ & \cellcolor{lg}- & $0.5298$\\
\ac{ER}\_\ac{nDCG} & \cellcolor{pt}$0.1006$ & \cellcolor{pt}$0.1167$ & \cellcolor{pt}$0.1559$ & \cellcolor{pt}$0.2220$ & \cellcolor{pt}$0.4318$ & \cellcolor{lg}- \\
\bottomrule
\end{tabular}
}
\end{table}
For reproducibility, we cannot compare against \ac{ARP}: since the original and reproduced runs are defined on different collections, it is meaningless to contrast average scores. Table~\ref{tab:reproducibility_correlation} reports the correlation among reproducibility runs for \texttt{WCrobust04} (upper triangle, white background) and for \texttt{WC\-ro\-bust\-04\-05} (lower triangle, turquoise background). Again, before computing the correlation among different measures, we ensured that the meaning of their scores is consistent across measures, i.e.~the lower the score, the better the reproduced results.
The correlation results for reproducibility show once more that \ac{ER} is weakly correlated with the $p$-value approaches; thus, these methods take two different evaluation perspectives. Furthermore, \ac{ER} has low correlation with itself when instantiated with different performance measures: even for reproducibility, two different performance measures do not exhibit the average improvement over baseline runs in a similar way.
Finally, all the $p$-value approaches are fairly correlated with each other, even more strongly than in the replicability case of Table~\ref{tab:replicability_correlation}. This is surprising, considering that all the reproducibility runs are statistically significantly different, as shown in Table~\ref{tab:reproducibility_p_value}. However, it represents a further signal that the unpaired $t$-test is not able to recognise successfully reproduced runs when the new collection and the original collection are too different, independently of the effectiveness measure.
\section{Conclusions and Future Work}
\label{sec:conclusions}
We addressed the core issue of determining to what extent a system-oriented \ac{IR} experiment has been replicated or reproduced. To this end, we analysed and compared several measures at different levels of granularity, and we developed the first reproducibility-oriented dataset. Due to the previous lack of such a dataset, these measures had never been validated before.
We found that replicability measures behave as expected and consistently; in particular, \ac{RBO} provides more meaningful comparisons than Kendall's $\tau$; \ac{RMSE} properly indicates whether we obtained a similar level of performance; finally, both \ac{ER}/\ac{DeltaRI} and the paired $t$-test successfully determine whether the same effects are replicated. On the other hand, quantifying reproducibility is more challenging and, while \ac{ER}/\ac{DeltaRI} are still able to provide sensible insights, the unpaired $t$-test seems to be too sensitive to the differences among the experimental collections.
As a suggestion to improve our community practices, it is important to always provide not only the source code but also the actual runs, so as to enable precise checking of replicability; fortunately, this is already happening when we operate within evaluation campaigns, which gather and make available the runs of their participants.
In future work,
we will explore more advanced statistical methods to quantify reproducibility in a reliable way. Moreover, we will investigate how replicability and reproducibility relate to user experience. For example, a run that is perfectly replicated in terms of \ac{RMSE}, but has low \ac{RBO}, presents different documents to a user, and this might greatly affect her/his experience. Therefore, we need to better understand which replicability/reproducibility level is needed so as not to impact (too much) the user experience.
\section{Dataset}
\label{sec:dataset}
To evaluate the measures in Section~\ref{sec:framework}, we need a re\-pro\-du\-ci\-bi\-li\-ty-oriented dataset and, to the best of our knowledge, this is the first attempt to construct such a dataset. The use case behind our dataset is that of a researcher who tries to replicate the methods described in a paper and who also tries to reproduce those results on a different collection; the researcher uses the presented measures as guidance to select the best replicated/reproduced run and to understand when a run can be considered successfully replicated or reproduced.
Therefore, to cover both replicability and reproducibility, the dataset should contain both a baseline and an advanced run. Furthermore, the dataset should contain runs of different ``quality levels'', roughly meant as being more or less ``close'' to the original run, to mimic the successive attempts of a researcher to get closer and closer to the original run.
We reimplement \texttt{WCrobust04} and \texttt{WCrobust0405}, two runs submitted by~\citet{DBLP:conf/trec/GrossmanC17} to the TREC 2017 Common Core track~\cite{AllanEtAl2017b}. \texttt{WCrobust04} and \texttt{WCrobust0405} rank documents by routing using profiles~\cite{robertson_callan}. In particular, \citeauthor{DBLP:conf/trec/GrossmanC17} extract relevance feedback from a \emph{training corpus}, train a logistic regression classifier with tfidf-features of documents relevant to a topic, and rank documents of a \emph{target corpus} by their probability of being relevant to the same topic. The baseline run and the advanced run differ by the training data used for the classifier -- one single corpus for \texttt{WCrobust04}, two corpora for \texttt{WCrobust0405}. We \emph{replicate} runs using The New York Times Corpus, our target corpus; we \emph{reproduce} runs using the Washington Post Corpus.
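A minimal sketch of this routing approach for a single topic, using scikit-learn (function name and toy structure are ours; the actual implementation, with all pre-processing steps, is in the repository):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def rank_by_routing(train_docs, train_labels, target_docs, k=1000):
    """Fit tfidf features on the training corpus, train a logistic
    regression classifier on the topic's relevance labels (1 = relevant),
    and rank target-corpus documents by their probability of relevance."""
    vec = TfidfVectorizer()
    clf = LogisticRegression()
    clf.fit(vec.fit_transform(train_docs), train_labels)
    probs = clf.predict_proba(vec.transform(target_docs))[:, 1]
    ranking = sorted(range(len(target_docs)), key=lambda i: -probs[i])
    return ranking[:k]  # indices of the top-k target documents
```

For \texttt{WCrobust0405}, the only change is that `train_docs` and `train_labels` come from two training corpora instead of one.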
It is a requirement that all test collections, i.e., those used for training as well as the target collection, share at least some of the same topics. Our replicated runs cover $50$ topics, whereas the reproduced runs cover $25$ topics.
Full details on the implementation can be found in~\cite{DBLP:conf/clef/BreuerS19} and in the public repository\footnote{\url{https://github.com/irgroup/sigir2020-measure-reproducibility}}~\cite{timo_breuer_2020_3856042}, which also contains the full dataset, consisting of $200$ runs.
To generate replicated and reproduced runs, we systematically change a set of parameters and derive $4$ \emph{constellations} consisting of $20$ runs each, for a total of $80$ runs ($40$ runs for replicability and $40$ runs for reproducibility)\footnote{An alternative to our approach could be to artificially alter one or more existing runs by swapping and/or changing retrieved documents or, even, to generate artificial runs fully from scratch. However, such artificial runs would have no connection with the principled way in which a researcher actually proceeds when trying to reproduce an experiment, and with her/his need for orientation during this process. As a result, an artificially constructed dataset would lack any clear use case behind it.}. We call them constellations because, by gradually changing the way in which training features are generated and the classifier is parameterized, we obtain sets of runs which are further and further away from the original run in a somewhat controlled way; in Section~\ref{subsec:performance_analysis}, we exploit this regularity to validate the behaviour of our measures. The $4$ constellations are:
\begin{itemize}[leftmargin=*]
\item \texttt{rpl\_wcr04\_tf}\footnote{This naming scheme refers to the replicated baseline run; the advanced and reproduced runs are denoted accordingly.}: These runs incrementally reduce the vocabulary size by limiting it with the help of a threshold. Only those tfidf-features with a term frequency above the specified threshold are considered.
\item \texttt{rpl\_wcr04\_df}: Alternatively, the vocabulary size can be reduced by the document frequency. In this case, only terms with a document frequency below a specified maximum are considered. This means common terms included in many documents are excluded.
\item \texttt{rpl\_wcr04\_tol}: Starting from a default parametrization of the classifier, we increase the tolerance of the stopping criterion. Thus, the training is more likely to end earlier at the cost of accuracy.
\item \texttt{rpl\_wcr04\_C}: Comparable to the previous constellation, we start from a default parametrization and vary the $\ell^{2}$-regulari\-zation strength towards poorer accuracy.
\end{itemize}
These constellations are examples of typical implementation details that might be considered as part of the principled way of a reproducibility study. If no information on the exact configuration is given, the researcher has to guess reasonable values for these parameters and thus to produce different runs.
Besides the above constellations, the dataset includes runs with several other configurations obtained by excluding pre-processing steps, varying the generation of the vocabulary, applying different tfidf-formulations, using n-grams with varying lengths, or implementing a support-vector machine as the classifier. This additional constellation, containing $120$ runs ($60$ runs for replicability and $60$ runs for reproducibility), consists of runs which vary in a sharper and less regular way. In Section~\ref{subsec:correlation_analysis}, we will exploit this constellation together with the previous ones to conduct a correlation analysis and understand how our proposed measures are related in a more general case.
\section{Proposed Measures}
\label{sec:framework}
We first introduce our notation. In all cases we assume that the original run $r$ is available. For replicability (\S~\ref{subsec:replicability}), both the original run $r$ and the replicated run $r'$ contain documents from the original collection $C$. For reproducibility (\S~\ref{subsec:reproducibility}), $r$ denotes the original run on the original collection $C$, while $r'$ denotes the reproduced run on the new collection $D$. Topics are denoted by $j \in \{1, \ldots , n_C\}$ in $C$ and $j \in \{1, \ldots , n_D\}$ in $D$, while rank positions are denoted by $i$.
$M$ is any \ac{IR} evaluation measure e.g.,~P@10, \acs{AP}, \acs{nDCG}, where the superscript $C$ or $D$ refers to the collection. $M^{C}(r)$ is the vector of length $n_C$ where each component, $M_j^{C}(r)$, is the score of the run $r$ with respect to the measure $M$ and topic $j$. $\overline{M^{C}(r)}$ is the average score computed across topics.
\subsection{Replicability}
\label{subsec:replicability}
We evaluate replicability at different levels: (i) we consider the actual \emph{ordering of documents} by using Kendall's $\tau$ and \acf{RBO}~\cite{WebberEtAl2010}; (ii) we compare the runs in terms of \emph{effectiveness} with \ac{RMSE}; (iii) we consider whether the \emph{overall effect} can be replicated with \acf{ER} and \acf{DeltaRI}; and (iv) we compute \emph{statistical comparisons} and consider the $p$-value of a paired t-test. While Kendall's $\tau$, \ac{RMSE} and \ac{ER} were originally proposed for CENTRE, the other approaches have never been used for replicability.
It is worth mentioning that these approaches are presented from the most specific to the most general. Kendall's $\tau$ and \ac{RBO} compare the runs at the document level, \ac{RMSE} accounts for the performance at the topic level, \ac{ER} and \ac{DeltaRI} focus on the overall performance by considering the average across topics, while the $t$-test can just inform us on the significant differences between the original and replicated runs. Moreover, perfect equality for Kendall's $\tau$ and \ac{RBO} implies perfect equality for \ac{RMSE}, \ac{ER}/\ac{DeltaRI} and the $t$-test, and perfect equality for \ac{RMSE} implies perfect equality for \ac{ER}/\ac{DeltaRI} and the $t$-test, while the converse is in general not true.
As a reference point, we consider the average score across topics of the original and replicated runs, called \acf{ARP}. Its delta represents the current ``naive'' approach to replicability, simply contrasting the average scores of the original and replicated runs.
\subsubsection*{Ordering of Documents}
Kendall's $\tau$ is computed as follows~\cite{Kendall1948}:
\begin{equation}
\begin{aligned}
\tau_j(r, r') & = \frac{P-Q}{\sqrt{\big(P + Q + U\big)\big(P + Q + V\big) }} \\
\bar{\tau}(r, r') & = \frac{1}{n_C} \sum_{j = 1}^{n_C} \tau_j (r, r')
\end{aligned}
\label{eq:tau}
\end{equation}
where $\tau_j(r, r')$ is Kendall's $\tau$ for the $j$-th topic, $P$ is the total number of concordant pairs (document pairs that are ranked in the same order in both vectors), $Q$ the total number of discordant pairs (document pairs that are ranked in opposite order in the two vectors), $U$ and $V$ are the number of ties, in $r$ and $r'$ respectively.
This definition of Kendall's $\tau$ was originally proposed for permutations of the same set of items; therefore, it is not directly applicable whenever two rankings do not contain the same set of documents. However, this assumption does not hold for real runs, which often return different sets of documents. Therefore, as done in CENTRE@CLEF~\cite{FerroEtAl2018e,FerroEtAl2019c}, we consider the correlation with respect to the union of the rankings. We refer to this method as \emph{Kendall's $\tau$ Union}. The underlying idea is to compare the relative orders of documents in the original and replicated rankings. For each topic, we consider the union of $r$ and $r'$, removing duplicate entries. Then we consider the rank positions of documents from the union in $r$ and $r'$, obtaining two lists of rank positions. Finally, we compute the correlation between these two lists of rank positions. Note that, whenever two rankings contain the same set of documents, Kendall's $\tau$ in Eq.~\eqref{eq:tau} and Kendall's $\tau$ Union are equivalent. To better understand how Kendall's $\tau$ Union is defined, consider two rankings $r = [d_1, d_2, d_3]$ and $r' = [d_1, d_2, d_4]$: the union of $r$ and $r'$ is $[d_1, d_2, d_3, d_4]$, the two lists of rank positions are $[1, 2, 3]$ and $[1, 2, 4]$, and the final Kendall's $\tau$ is equal to $1$. Similarly, consider $r = [d_1, d_2, d_3, d_4]$ and $r' = [d_2, d_5, d_3, d_6]$: the union of $r$ and $r'$ is $[d_1, d_2, d_3, d_4, d_5, d_6]$, the two lists of rank positions are $[1, 2, 3, 4]$ and $[2, 5, 3, 6]$, and the final Kendall's $\tau$ is equal to $2/3$.
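To make the procedure concrete, the following minimal Python sketch (an illustrative implementation, not the CENTRE code) computes Kendall's $\tau$ Union with the formula in Eq.~\eqref{eq:tau} and reproduces the two worked examples above:

```python
from math import sqrt

def kendall_tau(x, y):
    """Kendall's tau: (P - Q) / sqrt((P + Q + U) * (P + Q + V))."""
    P = Q = U = V = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue          # pair tied in both lists
            elif dx == 0:
                U += 1            # tie in the first list only
            elif dy == 0:
                V += 1            # tie in the second list only
            elif dx * dy > 0:
                P += 1            # concordant pair
            else:
                Q += 1            # discordant pair
    return (P - Q) / sqrt((P + Q + U) * (P + Q + V))

def kendalls_tau_union(r, r_prime):
    """Tau over the rank positions that each run's documents
    take in the duplicate-free union of the two rankings."""
    union = list(r) + [d for d in r_prime if d not in r]
    pos = {d: i + 1 for i, d in enumerate(union)}
    return kendall_tau([pos[d] for d in r], [pos[d] for d in r_prime])
```

For the two examples above, the function returns $1$ and $2/3$, respectively.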
We also consider Kendall's $\tau$ on the intersection of the rankings instead of the union. As reported in~\cite{SandersonSoboroff2007}, Kendall's $\tau$ can be very noisy with small rankings and should be considered together with the size of the overlap between the two rankings. However, this approach does not inform us on the rank positions of the common documents. Therefore, to seamlessly deal with rankings possibly containing different documents and to account for their rank positions, we propose to use \acf{RBO}~\cite{WebberEtAl2010}, which assumes $r$ and $r'$ to be infinite runs:
\begin{equation}
\begin{aligned}
\text{RBO}_j(r,r') &= (1 - \phi) \sum_{i=1}^{\infty}\phi^{i - 1} \cdot A_{i} \\
\overline{\text{RBO}}(r,r') & = \frac{1}{n_C} \sum_{j = 1}^{n_C} \text{RBO}_j (r, r')
\end{aligned}
\label{eq:rbo}
\end{equation}
where $\text{RBO}_j(r,r')$ is \ac{RBO} for the $j$-th topic; $\phi \in [0, 1]$ is a parameter to adjust the measure top-heaviness: the smaller $\phi$, the more top-weighted the measure; and $A_{i}$ is the proportion of overlap up to rank $i$, which is defined as the cardinality of the intersection between $r$ and $r'$ up to $i$ divided by $i$.
Therefore, \ac{RBO} accounts for the overlap of two rankings and discounts the overlap while moving towards the end of the ranking, since it is more likely for two rankings to have a greater overlap when many rank positions are considered.
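As an illustration, the sketch below evaluates Eq.~\eqref{eq:rbo} by truncating the infinite sum at the common ranking depth; this lower-bounds the full \ac{RBO} score, and \citet{WebberEtAl2010} discuss how to extrapolate the residual for finite rankings:

```python
def rbo(r, r_prime, phi=0.9):
    """Truncated RBO: (1 - phi) * sum_i phi^(i-1) * A_i, where A_i is
    the overlap proportion of the top-i prefixes of the two rankings."""
    depth = min(len(r), len(r_prime))
    seen_r, seen_rp = set(), set()
    score = 0.0
    for i in range(depth):
        seen_r.add(r[i])
        seen_rp.add(r_prime[i])
        a_i = len(seen_r & seen_rp) / (i + 1)   # overlap proportion A_{i+1}
        score += phi ** i * a_i
    return (1 - phi) * score
```

With $\phi = 0.9$, two identical rankings of depth $10$ score $1 - 0.9^{10} \approx 0.65$; the missing mass corresponds to the truncated, unseen tail of the rankings.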
\subsubsection*{Effectiveness}
As reported in CENTRE@CLEF~\cite{FerroEtAl2018e,FerroEtAl2019c}, we exploit \acf{RMSE}~\cite{KenneyKeeping1954} to measure how close the effectiveness scores of the replicated and original runs are:
\begin{equation}
\mathrm{RMSE}\left(M^C(r), M^C(r')\right) = \sqrt {\frac{1}{n_C} \sum_{j = 1}^{n_C} \big(M_{j}^C(r) - M_{j}^C(r') \big)^2}
\label{eq:rmse}
\end{equation}
\ac{RMSE} depends just on the evaluation measure and on the relevance label of each document, not on the actual documents retrieved by each run. Therefore, if two runs $r$ and $r'$ retrieve different documents, but with the same relevance labels, then \ac{RMSE} is not affected and returns a perfect replicability score equal to $0$; on the other hand, Kendall's $\tau$ and \ac{RBO} will be able to detect such differences.
Although \ac{RMSE} and the naive comparison of \ac{ARP} scores can be thought of as similar approaches, by taking the squares of the differences, \ac{RMSE} penalizes large errors more. This can lead to different results, as shown in Section~\ref{sec:experiments_new}.
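A minimal sketch (with hypothetical per-topic scores) contrasting \ac{RMSE} in Eq.~\eqref{eq:rmse} with the naive \ac{ARP} delta:

```python
from math import sqrt

def rmse(m_orig, m_repl):
    """Root Mean Square Error between per-topic scores of r and r'."""
    n = len(m_orig)
    return sqrt(sum((a - b) ** 2 for a, b in zip(m_orig, m_repl)) / n)

def arp_delta(m_orig, m_repl):
    """Naive comparison: difference of the average scores across topics."""
    return sum(m_orig) / len(m_orig) - sum(m_repl) / len(m_repl)
```

For original scores $[0.2, 0.8]$ and replicated scores $[0.8, 0.2]$, the \ac{ARP} delta is $0$ while \ac{RMSE} is $0.6$: swapping per-topic scores is invisible to \ac{ARP} but heavily penalized by \ac{RMSE}.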
\subsubsection*{Overall Effect}
In this case, we define a replication task from a different perspective, as proposed in CENTRE@NTCIR~\cite{SakaiEtAl2019b}.
Given a pair of runs, $a$ and $b$, such that the advanced $a$-run has been reported to outperform the baseline $b$-run on the collection $C$,
can another research group replicate the improvement of the advanced run over the baseline run on $C$?
With this perspective, the per-topic improvements in the original and replicated experiments
are:
\begin{equation}
\Delta M_{j}^{C} = M_{j}^{C}(a) - M_{j}^{C}(b) \ , \,\,\,\,\, \Delta' M_{j}^{C} = M_{j}^{C}(a') - M_{j}^{C}(b')
\label{eq:per-topic-improvements}
\end{equation}
where $a'$ and $b'$ are the replicated advanced and baseline runs respectively.
Note that even if the $a$-run outperforms the $b$-run on average, the opposite may be true for some topics: that is, per-topic improvements may be negative.
Since IR experiments are usually based on comparing mean effectiveness scores, \acf{ER}~\cite{SakaiEtAl2019b} focuses on the replicability of the overall effect as follows:
\begin{equation}
\text{ER}\left(\Delta' M^{C}, \Delta M^{C}\right) =
\frac{
\overline{\Delta' M^{C}}
}{
\overline{\Delta M^{C}}
}
=
\frac{
\frac{1}{n_{C}}\sum_{j=1}^{n_{C}} \Delta' M_{j}^{C}
}{
\frac{1}{n_{C}}\sum_{j=1}^{n_{C}} \Delta M_{j}^{C}
}
\end{equation}
where the denominator of \ac{ER} is the mean improvement in the original experiment,
while the numerator is the mean improvement in the replicated experiment.
Assuming that the standard deviation for the difference in terms of measure $M$ is common across experiments,
ER is equivalent to the ratio of \emph{effect sizes} (or \emph{standardised mean differences} for the \emph{paired data} case)~\cite{Sakai2018}:
hence the name.
$\text{ER} \leq 0$ means that the replicated $a$-run failed to outperform the replicated $b$-run: the replication is a complete failure. If $0 < \text{ER} <1$, the replication is somewhat successful, but the effect is smaller compared to the original experiment. If $\text{ER}=1$, the replication is perfect in the sense that the original effect has been recovered as is. If $\text{ER}>1$, the replication is successful, and the effect is actually larger compared to the original experiment.
Note that having the same mean delta scores, i.e.~$\text{ER} = 1$, does not imply that the per-topic replication is perfect. For example, consider two topics $i$ and $j$ and assume that the original delta scores are $\Delta M_{i}^{C} = 0.2$ and $\Delta M_{j}^{C} = 0.8$ while the replicated delta scores are $\Delta'M_{i}^{C} = 0.8$ and $\Delta'M_{j}^{C} = 0.2$. Then \ac{ER} for this experiment is equal to $1$. While this difference is captured by \ac{RMSE} or Kendall's $\tau$, which focus on a per-topic level, \ac{ER} considers instead whether the sample effect size (standardised mean difference) can be replicated or not.
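The computation itself is simple; the following sketch (with hypothetical per-topic improvement vectors) reproduces the two-topic example above:

```python
def effect_ratio(delta_repl, delta_orig):
    """ER: mean replicated per-topic improvement divided by the
    mean original per-topic improvement."""
    mean = lambda v: sum(v) / len(v)
    return mean(delta_repl) / mean(delta_orig)
```

For the swapped improvements above, \ac{ER} is exactly $1$ even though the per-topic replication is poor; a negative value would instead signal a complete failure.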
\ac{ER} focuses on the effect of the $a$-run over the $b$-run, isolating it from other factors, but we may also want to account for absolute scores that are similar to the original experiment. Therefore, we propose to complement \ac{ER} with \acf{DeltaRI} and to plot \ac{ER} against \ac{DeltaRI} to visually interpret the replicability of the effects. We define \ac{DeltaRI} as follows\footnote{In Equation~\eqref{eq:deltari} we assume that both $\overline{M^{C}(b)}$ and $\overline{M^{C}(b')}$ are $> 0$. If these two values are equal to $0$, it means that the run score is equal to $0$ on each topic. Therefore, we can simply remove that run from the evaluation, as it is done for topics which do not have any relevant document.}:
\begin{equation}
\text{RI} = \frac{\overline{M^{C}(a)} - \overline{M^{C}(b)}}{\overline{M^{C}(b)}}, \qquad \text{RI}' = \frac{\overline{M^{C}(a')} - \overline{M^{C}(b')}}{\overline{M^{C}(b')}}
\label{eq:deltari}
\end{equation}
where $\text{RI}$ and $\text{RI}'$ are the relative improvements for the original and replicated runs and $\overline{M^{C}(\cdot)}$ is the average score across topics. Now let \ac{DeltaRI} be
$\Delta\text{RI}(\text{RI}, \text{RI}') = \text{RI} - \text{RI}'$.
\ac{DeltaRI} ranges in $[-1, 1]$, $\Delta\text{RI} = 0$ means that the relative improvements are the same for the original and replicated runs; when $\Delta\text{RI} > 0$, the replicated relative improvement is smaller than the original relative improvement, and in case $\Delta\text{RI} < 0$, it is larger.
\ac{DeltaRI} can be used in combination with \ac{ER}, by plotting \ac{ER} ($x$-axis) against \ac{DeltaRI} ($y$-axis), as done in Figure~\ref{fig:rpl_er_ri}. If $\text{ER} = 1$ and $\Delta\text{RI} = 0$, both the effect and the relative improvements are replicated; therefore, the closer a point is to $(1, 0)$, the more successful the replication experiment.
We can now divide the \ac{ER}-\ac{DeltaRI} plane into $4$ regions, corresponding to the $4$ quadrants of the Cartesian plane:
\begin{itemize}[leftmargin=*]
\item Region $1$: \text{ER} $>0$ and \ac{DeltaRI} $>0$: the replication is somewhat successful in terms of effect sizes, but not in terms of absolute scores;
\item Region $2$: \text{ER} $<0$ and \ac{DeltaRI} $>0$: the replication is a failure both in terms of effect sizes and absolute scores;
\item Region $3$: \text{ER} $<0$ and \ac{DeltaRI} $<0$: the replication is a failure in terms of effect sizes, but not in terms of absolute scores;
\item Region $4$: \text{ER} $>0$ and \ac{DeltaRI} $<0$: the replication is somewhat successful both in terms of effect sizes and absolute scores.
\end{itemize}
Therefore, the preferred region is Region $4$, with the condition that the best replicability runs are close to $(1,0)$.
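The following sketch (illustrative; the score vectors are hypothetical) computes \ac{DeltaRI} as in Eq.~\eqref{eq:deltari} and maps an (\ac{ER}, \ac{DeltaRI}) pair to the regions above, resolving boundary points arbitrarily:

```python
def mean(v):
    return sum(v) / len(v)

def delta_ri(m_a, m_b, m_a_rep, m_b_rep):
    """DeltaRI = RI - RI'; the baseline means are assumed > 0."""
    ri = (mean(m_a) - mean(m_b)) / mean(m_b)
    ri_rep = (mean(m_a_rep) - mean(m_b_rep)) / mean(m_b_rep)
    return ri - ri_rep

def region(er, dri):
    """Quadrant of the ER-DeltaRI plane (Region 4 is the preferred one);
    points lying exactly on a boundary are assigned arbitrarily."""
    if er > 0:
        return 1 if dri > 0 else 4
    return 2 if dri > 0 else 3
```

For instance, a perfectly replicated pair of runs yields $\Delta\text{RI} = 0$, and a point such as $(\text{ER}, \Delta\text{RI}) = (1, -0.05)$ falls in the preferred Region $4$.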
\subsubsection*{Statistical Comparison}
We propose to compare the original and replicated runs in terms of their statistical difference: we run a two-tailed paired $t$-test between the scores of $r$ and $r'$ for each topic in $C$ with respect to an evaluation measure $M$. The $p$-value returned by the $t$-test informs on the extent to which $r$ is successfully replicated: the smaller the $p$-value, the stronger the evidence that $r$ and $r'$ are significantly different, thus $r'$ failed in replicating $r$.
Note that the $p$-value does not inform on the overall effect, i.e.~we may know that $r'$ failed to replicate $r$, but we cannot infer whether $r'$ performed better or worse than $r$.
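As a sketch, the paired $t$ statistic can be computed with the standard library alone; the two-tailed $p$-value then follows from the $t$ distribution with $n_C - 1$ degrees of freedom (in practice one would use an off-the-shelf routine such as SciPy's \texttt{ttest\_rel}, which returns the $p$-value directly):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(scores_r, scores_r_prime):
    """t statistic of the paired t-test on per-topic scores of r and r'."""
    diffs = [a - b for a, b in zip(scores_r, scores_r_prime)]
    # mean per-topic difference divided by its standard error
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
```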
\subsection{Reproducibility}
\label{subsec:reproducibility}
Unlike replicability, for reproducibility the original and reproduced runs are not obtained on the same collection (different documents and/or topic sets); thus, the original run cannot be used for direct comparison with the reproduced run. As a consequence, Kendall's $\tau$, \ac{RBO}, and \ac{RMSE} from Section~\ref{subsec:replicability} cannot be applied to the reproducibility task. Therefore, hereinafter we focus on: (i) reproducing the \emph{overall effect} with \ac{ER}; (ii) comparing the original and reproduced runs with \emph{statistical tests}.
\subsubsection*{Overall Effect}
CENTRE@NTCIR~\cite{SakaiEtAl2019b} defines \ac{ER} for reproducibility as follows: given a pair of runs, $a$-run and $b$-run, where the $a$-run has been reported to outperform the $b$-run on a test collection $C$, can another research group reproduce the improvement on a different test collection $D$?
The original per-topic improvements are the same as in Eq.~\eqref{eq:per-topic-improvements}, while the reproduced per-topic improvements are defined as in Eq.~\eqref{eq:per-topic-improvements} by replacing $C$ with $D$. Therefore, the resulting \acf{ER}~\cite{SakaiEtAl2019b} is defined as follows:
\begin{equation}
\text{ER}(\Delta' M^{D}, \Delta M^{C}) =
\frac{
\overline{\Delta' M^{D}}
}{
\overline{\Delta M^{C}}
}
=
\frac{
\frac{1}{n_{D}} \sum_{j=1}^{n_{D}} \Delta' M_{j}^{D}
}{
\frac{1}{n_{C}} \sum_{j=1}^{n_{C}} \Delta M_{j}^{C}
}
\label{eq:er_reproducibility}
\end{equation}
where $n_{D}$ is the number of topics in $D$. Assuming that the standard deviation of a measure $M$ is common across experiments, the above version of \ac{ER} is equivalent to the ratio of effect sizes (or standardised mean differences for the two-sample data case)~\cite{Sakai2018}; it can then be interpreted in a way similar to the \ac{ER} for replicability. Note that, since we consider the ratio of the mean improvements instead of the mean of the per-topic improvement ratios, Eq.~\eqref{eq:er_reproducibility} can also be applied when the number of topics in $C$ and $D$ differs.
Similarly to the replicability case, \ac{ER} can be complemented with \ac{DeltaRI}, whose definition is the same of Eq.~\eqref{eq:deltari}, but $\text{RI}'$ is computed over the new collection $D$, instead of the original collection $C$.
\ac{DeltaRI} has the same interpretation as in the replicability case, i.e.~it shows whether the relative improvement in the reproduced experiment is similar to that of the original experiment.
\subsubsection*{Statistical Comparison}
With a t-test, we can also handle the case in which the original and the reproduced experiments are based on different datasets. In this case, we need to perform a two-tailed unpaired t-test to account for the different subjects used in the comparison.
The unpaired t-test assumes equal variance, and this assumption is unlikely to hold when, e.g., there are two different sets of topics in the two datasets. However, the unpaired t-test is known to be robust to such violations, and \citet{Sakai2016b} has shown that Welch's t-test, which assumes unequal variance, may be less reliable when the sample sizes differ substantially and the larger sample has a substantially larger variance.
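For completeness, a sketch of the equal-variance (pooled) two-sample $t$ statistic used here, with $n_C + n_D - 2$ degrees of freedom (again, a library routine such as SciPy's \texttt{ttest\_ind} would normally be used to obtain the $p$-value):

```python
from math import sqrt
from statistics import mean, variance

def unpaired_t_statistic(x, y):
    """Two-sample t statistic with pooled variance (equal-variance test)."""
    n, m = len(x), len(y)
    # pooled sample variance of the two groups
    pooled = ((n - 1) * variance(x) + (m - 1) * variance(y)) / (n + m - 2)
    return (mean(x) - mean(y)) / sqrt(pooled * (1 / n + 1 / m))
```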
\section{Introduction}
\label{sec:introduction}
We are today facing the so-called \emph{reproducibility crisis}~\cite{Baker2016,OSC2015} across all areas of science, where researchers fail to reproduce and confirm previous experimental findings. This crisis obviously also involves the more recent computational and data-intensive sciences~\cite{zz-DagstuhlSeminar16041,NAP2019}, including hot areas such as artificial intelligence and machine learning~\cite{Gibney2020}. For example, \citet{Baker2016} reports that roughly 70\% of researchers in physics and engineering fail to reproduce someone else's experiments and roughly 50\% fail to reproduce even their own experiments.
\ac{IR} is not an exception, and researchers are paying more and more attention to what the reproducibility crisis may mean for the field, even more so with the rise of the new deep learning and neural approaches \cite{Crane2018,DacremaEtAl2019}.
In addition to all the well-known barriers to reproducibility~\cite{zz-DagstuhlSeminar16041}, a fundamental methodological question remains open: \emph{When we say that an experiment is reproduced, what exactly do we mean by it?} The current attitude is some sort of ``\emph{close enough}'': researchers put in all reasonable effort to understand how an approach was implemented and how an experiment was conducted and, after some (several) iterations, when they obtain performance scores which somehow resemble the original ones, they decide that an experimental result is reproduced. Unfortunately, \ac{IR} completely lacks any means to \emph{objectively measure} when reproduced is reproduced, and this severely hampers the possibility both to assess to what extent an experimental result has been reproduced and to sensibly compare among different alternatives for reproducing an experiment.
This severe methodological impediment is not limited to \ac{IR} but it has been recently brought up as a research challenge also in the 2019 report on ``Reproducibility and Replicability in Science'' by the US \citet[p.~62]{NAP2019}: ``The National Science Foundation should consider investing in research that explores the limits of computational reproducibility in instances in which bitwise reproducibility\footnote{``For computations, one may expect that the two results be identical (i.e., obtaining a bitwise identical numeric result). In most cases, this is a reasonable expectation, and the assessment of reproducibility is straightforward. However, there are legitimate reasons for reproduced results to differ while still being considered consistent''~\cite[p.~59]{NAP2019}. The latter is clearly the most common case in \ac{IR}.} is not reasonable in order to ensure that the \emph{meaning of consistent computational results} remains in step with the development of new computational hardware, tools, and methods''.
Another severe issue is that we lack any \emph{experimental collection} specifically focused on reproducibility and this prevents us from developing and comparing measures to assess the extent of achieved reproducibility.
In this paper, we tackle both these issues. Firstly, we consider different measures which allow for comparing experimental results at different levels, from most specific to most general: the ranked lists of retrieved documents; the actual scores of effectiveness measures; the observed effects and significant differences.
As can be noted, these measures progressively depart more and more from the ``bitwise reproducibility''~\cite{NAP2019}, which in the \ac{IR} case would mean producing exactly identical ranked lists of retrieved documents.
Secondly, starting from TREC data, we develop a reproducibility-oriented dataset and we use it to compare the presented measures.
The paper is organized as follows: Section~\ref{sec:related} discusses related work; Section~\ref{sec:framework} introduces the evaluation measures under investigation; Section~\ref{sec:dataset} describes how we created the reproducibility-oriented dataset; Section~\ref{sec:experiments_new} presents the experimental comparison of the evaluation measures; finally, Section~\ref{sec:conclusions} draws some conclusions and outlooks future work.
\section{Related Work}
\label{sec:related}
In defining what repeatability, replicability, reproducibility, and others of the so-called \emph{r-words} are~\cite{Plesser2018}, \citet{DeRoure2014} lists 21 r-words grouped in 6 categories, which range from scientific method to understanding and curation. In this paper, we broadly align with the definition of replicability and reproducibility currently adopted by the \ac{ACM}\footnote{\url{https://www.acm.org/publications/policies/artifact-review-badging}, April 2018.}:
\begin{itemize}[leftmargin=*]
\item \emph{Replicability (different team, same experimental setup)}: the measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, an independent group can obtain the same result using the author’s own artifacts;
\item \emph{Reproducibility (different team, different experimental setup)}: the measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, an independent group can obtain the same result using artifacts which they develop completely independently.
\end{itemize}
\subsection*{Reproducibility Efforts in IR}
There have been, and still are, several initiatives related to reproducibility in \ac{IR}. Since 2015, the ECIR conference has hosted a track dedicated to papers which reproduce existing studies, and all the major IR conferences ask for an assessment of the ease of reproducibility of a paper in their review forms. SIGIR has started a task force~\cite{FerroKelly2018} to define what reproducibility is in system-oriented and user-oriented \ac{IR} and how to implement the \ac{ACM} badging policy in this context. \citet{Fuhr2017,Fuhr2019} urged the community not to forget about reproducibility and discussed reproducibility and validity in the context of the CLEF evaluation campaigns. The recent ACM JDIQ special issue on reproducibility in \ac{IR}~\cite{FerroEtAl2018g,FerroEtAl2018h} provides an updated account of the state of the art in reproducibility research as far as evaluation campaigns, collections, tools, infrastructures, and analyses are concerned. The SIGIR 2015 RIGOR workshop~\cite{ArguelloEtAl2015} investigated reproducibility, inexplicability, and generalizability of results and held a reproducibility challenge for open source software~\cite{LinEtAl2016}. The SIGIR 2019 OSIRRC workshop~\cite{ClancyEtAl2019} conducted a replicability challenge based on Docker containers.
CENTRE\footnote{\url{https://www.centre-eval.org/}} is an effort across CLEF~\cite{FerroEtAl2018e,FerroEtAl2019c}, TREC~\cite{SoboroffEtAl2018b}, and NTCIR~\cite{SakaiEtAl2019b} to run a joint evaluation activity on reproducibility. One of the goals of CENTRE was to define measures to quantify to what extent experimental results were reproduced. However, the low participation in CENTRE prevented the development of an actual reproducibility-oriented dataset and hampered the possibility of developing and validating measures for reproducibility.
\subsection*{Measuring Reproducibility}
To measure reproducibility, CENTRE exploited: Kendall's $\tau$~\cite{Kendall1948}, to measure how close the original and replicated lists of documents are; \ac{RMSE}~\cite{KenneyKeeping1954}, to quantify how close the effectiveness scores of the original and replicated runs are; and the \acf{ER}~\cite{SakaiEtAl2019b}, to quantify how close the effects of the original and replicated/reproduced systems are.
We compare against and improve on previous work within CENTRE. Indeed, Kendall's $\tau$ cannot deal with rankings that do not contain the same elements; CENTRE overcomes this issue by considering the union of the original and replicated rankings and comparing with respect to it; we show that this is a somewhat pessimistic approach that penalizes the systems, and we propose to use \acf{RBO}~\cite{WebberEtAl2010}, since it is natively capable of dealing with rankings containing different elements. Furthermore, we complement \acf{ER} with the new \acf{DeltaRI} score, to better grasp replicability and reproducibility in terms of absolute scores and to provide a visual interpretation of the effects. Finally, we propose to test replicability and reproducibility with paired and unpaired t-tests~\cite{Student1908}, respectively, and to use $p$-values as an estimate of replicability and reproducibility success.
To the best of our knowledge, inspired by the somewhat unsuccessful experience of CENTRE, we are the first to systematically investigate measures for guiding replicability and reproducibility in \ac{IR}, backing this with the development of a reproducibility-oriented dataset. As previously observed, there is a compelling need for reproducibility measures in the computational and data-intensive sciences~\cite{NAP2019}, since the largest body of knowledge is focused on traditional lab experiments and metrology~\cite{NAP2016,ISO-5725-2:2019}, and we try here to start addressing that need in the case of \ac{IR}.
\section{Introduction}
\label{sec:1}
In recent years, the neural network (deep learning) technique has played an increasingly important role in applications of machine learning. Comprehensively understanding the mechanisms of neural network (NN) models and explaining their output results, however, still requires more basic research \citep{roscher_explainable_2020}. To understand the mechanisms of NN models, that is, the \textit{transparency} of deep learning, there are mainly three avenues: the training process \citep{du_gradient_2018}, generalizability \citep{liu_understanding_2020}, and loss or accuracy prediction \citep{arora_fine-grained_2019}.
In this study, we create a novel theory from scratch to approximately estimate the training accuracy for two-layer neural networks on random datasets. Its main idea is based on the regions of linearity represented by NN models \citep{pascanu_number_2014}, which derives from common insights of the Perceptron. Specifically, the studied subjects are:
\begin{itemize}
\item \textbf{Classifier model}: the two-layer fully-connected neural network (FCNN) with $d-L-1$ architecture, in which the input vectors ($\in R^d$) have length $d$, the hidden layer has $L$ neurons (with ReLU activation), and the output layer has one neuron with the Sigmoid function. This FCNN is for two-class classification, and its outputs are values in $[0,1]$.
\item \textbf{Dataset}: $N$ random (uniformly distributed) vectors in $R^d$ belonging to two classes with labels `0' and `1', and the number of samples for each class is the same.
\item \textbf{Metrics}: training accuracy.
\end{itemize}
The paradigm we use to study the FCNN is similar to that of research in physics. First, we find a simplified system to examine; second, we create a theory based on several hypotheses to predict or estimate results of the system. Third, for the most important step, we apply experiments to verify the proposed theory by comparing predicted results with real outcomes of the system. If the predictions are close to the real results, we could accept the theory or update it to make predictions/estimates more accurate. Otherwise, we abandon this theory and seek another one.
To the best of our knowledge, only a few studies have addressed the issue of training accuracy prediction/estimation for NN models. \citet{deng_peephole_2017} and \citet{istrate_tapas_2019} propose LSTM-based frameworks (\textit{Peephole} and \textit{TAP}) to predict an NN model’s performance before training the original model, but the frameworks still need training by input data before making predictions. The accuracy prediction method of \citet{unterthiner_predicting_2020} requires weights from the trained NN model. None of them, however, estimates training accuracy without using input data or trained models. Through our method, estimating the training accuracy of a two-layer FCNN on random datasets (two classes) requires only three arguments: the dimensionality of inputs ($d$), the number of inputs ($N$), and the number of neurons in the hidden layer ($L$).
This paper has two contributions: 1) to introduce a novel theory to understand the mechanisms of NN models and 2) by applying that theory, to estimate the training accuracy for two-layer FCNN on random datasets. This study may raise other questions and discover the starting point of a new way for future researchers to make progress in the understanding of deep learning. In the next section, we will describe the proposed theory to estimate the results of training accuracy.
\section{The hidden layer: space partitioning}
\label{sec:2}
In general, the output of the $k$-th neuron in the first hidden layer is:
\[
s_k(x)=\sigma(w_k\cdot x+b_k )
\]
where input $x\in R^d$, parameter $w_k$ is the input weight of the $k$-th neuron, and $b_k$ is its bias. $\sigma(\cdot)$ is the ReLU activation function:
\[
\sigma(x)=\max\{0,x\}
\]
The neuron can be considered as a hyperplane $w_k\cdot x+b_k=0$ that divides the input space $R^d$ into two partitions \citep{pascanu_number_2014}. If the input $x$ is in one (lower) partition or on the hyperplane, then $w_k\cdot x+b_k\leq 0$ and its output $s_k(x)=0$. If $x$ is in the other (upper) partition, its output is $s_k(x)>0$. Specifically, the distance from $x$ to the hyperplane is:
\[
d_k(x)=\frac{|w_k\cdot x+b_k|}{\|w_k\|}
\]
If $w_k\cdot x+b_k>0$,
\[
s_k(x)=\sigma(w_k\cdot x+b_k)=|w_k\cdot x+b_k|=d_k(x)\|w_k\|
\]
For a given input data point, the $L$ neurons assign it a unique code \{$s_1$, $s_2$, $\cdots$, $s_L$\}; some values in the code may be zero. The $L$ neurons divide the input space into many partitions; input data in the same partition have more similar codes because they share the same zero positions. Conversely, the codes of data in different partitions have different zero positions, so the differences (Hamming distances) between these codes are greater. The case where input data are separated into different partitions is therefore favorable for classification.
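The coding behavior described above can be illustrated directly (a sketch with random weights; all names here are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 2, 5
W = rng.standard_normal((L, d))   # w_k: input weights of the L neurons
b = rng.standard_normal(L)        # b_k: biases

X = rng.random((4, d))            # four input points
S = np.maximum(0.0, X @ W.T + b)  # codes {s_1, ..., s_L}, one row per point

# The pattern of zero positions identifies each point's partition;
# points with different patterns lie in different partitions.
zero_patterns = (S == 0)
```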
\subsection{Complete separation}
\label{sec:2.1}
We suppose $L$ neurons divide the input space into $S$ partitions and hypothesize that:
\begin{guess} \label{hyp:1}
For the best performance of classification, all $N$ input data points are separated into different partitions (named \textbf{complete separation}).
\end{guess}
Under this hypothesis, each partition contains at most one data point after space partitioning. Since the positions of data points and hyperplanes can be considered (uniformly distributed) random, the probability of complete separation ($P_c$) is:
\begin{equation} \label{eq:1}
P_c=\frac{{S\choose N}}{\frac{S^N}{N!}}=\frac{S!}{\left(S-N\right)!S^N}
\end{equation}
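Eq. (\ref{eq:1}) is the same quantity as the no-collision probability in the birthday problem, and can be computed stably as a running product rather than with factorials (a Python sketch; the function name is ours):

```python
def p_complete(S, N):
    # Eq. (1): probability that N points dropped uniformly at random
    # into S partitions all land in distinct partitions.
    p = 1.0
    for i in range(N):
        p *= (S - i) / S
    return p

# Birthday-problem check: 23 points into 365 partitions.
p = p_complete(365, 23)

# Pigeonhole: with more points than partitions the probability is zero.
impossible = p_complete(10, 11)
```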
By Stirling's approximation,
\[
P_c=\frac{S!}{\left(S-N\right)!S^N}\approx\frac{\sqrt{2\pi S}\left(\frac{S}{e}\right)^S}{\sqrt{2\pi\left(S-N\right)}\left(\frac{S-N}{e}\right)^{S-N}S^N}
\]
\begin{equation} \label{eq:2}
P_c=\left(\frac{1}{e}\right)^N\left(\frac{S}{S-N}\right)^{S-N+0.5}
\end{equation}
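Eq. (\ref{eq:2}) can be checked numerically against the exact product form of Eq. (\ref{eq:1}); computing in log space avoids overflow (a sketch; the function names are ours):

```python
import math

def p_complete_exact(S, N):
    # Eq. (1) as a running product.
    p = 1.0
    for i in range(N):
        p *= (S - i) / S
    return p

def p_complete_stirling(S, N):
    # Eq. (2): (1/e)^N * (S / (S - N))^(S - N + 0.5), in log space.
    return math.exp(-N + (S - N + 0.5) * math.log(S / (S - N)))

exact = p_complete_exact(1000, 30)
approx = p_complete_stirling(1000, 30)
```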
Let $S=bN^a$; for large $N\rightarrow\infty$, by Eq. (\ref{eq:2}), the limit of the complete-separation probability is:
\begin{equation} \label{eq:3}
\lim_{N\rightarrow\infty}P_c=\lim_{N\rightarrow\infty}\left(\frac{1}{e}\right)^N{\left(\frac{bN^a}{bN^a-N}\right)^{bN^a-N+0.5}}
\end{equation}
$S>0$ requires $b>0$; and for complete separation, $\forall N:\ S\geq N$ (Pigeonhole principle) requires $a\geq1$. By simplifying the limit in Eq. (\ref{eq:3}), we have\footnote{A derivation of this simplification is available in the \hyperref[sec:appx]{Appendix}.}:
\begin{equation} \label{eq:4}
\begin{array}{l p{1em} r r}
\displaystyle \lim_{N\rightarrow\infty}P_c=\lim_{N\rightarrow\infty}e^{-\frac{(a-1)N^{2-a}}{ab}} & & \mbox{when} & a>1 \\
& & &\\
\displaystyle \lim_{N\rightarrow\infty}P_c=0 & & \mbox{when} & a=1
\end{array}
\end{equation}
Eq. (\ref{eq:4}) shows that for large $N$, the probability of complete separation is \textbf{nearly zero} when $1\le a<2$, and \textbf{close to one} when $a>2$. Only for $a=2$ is the probability controlled by the coefficient $b$:
\begin{equation} \label{eq:5}
\begin{array}{l p{1em} r r}
\displaystyle \lim_{N\rightarrow\infty}P_c=e^{-\left(\frac{1}{2b}\right)}& & \mbox{when} & S=bN^2
\end{array}
\end{equation}
Although complete separation holds ($\displaystyle \lim_{N\rightarrow\infty}P_c=1$) for $a>2$, there is no need to incur the rapid (power-of-$N$) growth of $S$ with $a$ when increasing $b$ grows $S$ only linearly. Moreover, a high probability of complete separation does not require a large $b$: for example, when $a=2$ and $b=10$, $\displaystyle \lim_{N\rightarrow\infty}P_c\approx0.95$. Therefore, we let $S=bN^2$ throughout this study.
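The quoted value follows directly from Eq. (\ref{eq:5}), and the limit is already well approximated at moderate $N$ (a numerical sketch):

```python
import math

b = 10
limit_pc = math.exp(-1 / (2 * b))   # Eq. (5) with b = 10

# Finite-N check: the exact P_c of Eq. (1) with S = b * N^2
# is already close to the limit at N = 200.
N = 200
S = b * N ** 2
exact_pc = 1.0
for i in range(N):
    exact_pc *= (S - i) / S
```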
\subsection{Incomplete separation}
\label{sec:2.2}
Increasing $b$ in Eq. (\ref{eq:5}) improves the probability of complete separation. Alternatively, accepting a lower training accuracy allows an \textbf{incomplete separation}, in which some partitions contain more than one data point after space partitioning. We define the separation ratio $\gamma\ \left(0\le\gamma\le1\right)$ for $N$ input data, meaning that at least $\gamma N$ data points are completely separated (at least $\gamma N$ partitions contain only one data point). Following Eq. (\ref{eq:1}), the probability of such an incomplete separation ($P_{inc}$) is:
\begin{equation} \label{eq:6}
P_{inc}=\frac{\frac{{S \choose \gamma N}}{{N \choose \gamma N}}\frac{\left(S-\gamma N\right)^{\left(1-\gamma\right)N}}{\left(\left(1-\gamma\right)N\right)!}}{\frac{S^N}{N!}}=\frac{S!\left(S-\gamma N\right)^{\left(1-\gamma\right)N}}{\left(S-\gamma N\right)!S^N}
\end{equation}
When $\gamma=1$, $P_{inc}=P_c$, \textit{i.e.}, it reduces to complete separation; when $\gamma=0$, $P_{inc}=1$. Applying Stirling's approximation with $S=bN^2$ and $N\rightarrow\infty$, similarly to Eq. (\ref{eq:5}), we have:
\begin{equation} \label{eq:7}
\lim_{N\rightarrow\infty}P_{inc}=e^\frac{\gamma\left(\gamma-2\right)}{2b}
\end{equation}
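Two sanity checks on Eq. (\ref{eq:7}): at $\gamma=1$ it recovers the complete-separation limit of Eq. (\ref{eq:5}), and at $\gamma=0$ it is certain (a sketch; the function name is ours):

```python
import math

def p_inc_limit(gamma, b):
    # Eq. (7): limiting probability that at least gamma * N points
    # are completely separated, with S = b * N^2.
    return math.exp(gamma * (gamma - 2) / (2 * b))
```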
\subsection{Expectation of separation ratio}
\label{sec:2.3}
In fact, Eq. (\ref{eq:7}) gives the probability that \textbf{at least} $\gamma N$ data points (when $N$ is large enough) are completely separated, which implies:
\begin{multline*}
P_{inc}\left(x\geq\gamma\right)=e^\frac{\gamma\left(\gamma-2\right)}{2b}\Rightarrow \\ P_{inc}\left(x=\gamma\right)=\frac{dP_{inc}\left(x<\gamma\right)}{d\gamma}=\frac{d\left(1-P_{inc}\left(x\geq\gamma\right)\right)}{d\gamma}= \\
\frac{d\left(1-e^\frac{\gamma\left(\gamma-2\right)}{2b}\right)}{d\gamma}=\frac{1-\gamma}{b}e^\frac{\gamma\left(\gamma-2\right)}{2b}=P_{inc}\left(\gamma\right)
\end{multline*}
Note that the density $P_{inc}\left(\gamma\right)$ does not capture the probability mass of complete separation, because $P_{inc}\left(1\right)=0$. Hence, the point $\gamma=1$ is assigned the mass $P_c$, and the comprehensive probability for the separation ratio $\gamma$ is:
\begin{equation} \label{eq:8}
P(\gamma)=\left\{
\begin{array}{l p{1em} c}
P_c=e^{-\left(\frac{1}{2b}\right)}& & \gamma=1 \\
\frac{1-\gamma}{b}e^\frac{\gamma\left(\gamma-2\right)}{2b}& & 0\le\gamma<1
\end{array}\right.
\end{equation}
Since Eq. (\ref{eq:8}) defines a probability distribution, we can verify that it integrates to one:
\[
\int_{0}^{1}P\left(\gamma\right)d\gamma=P_c+\int_{0}^{1}{\frac{1-\gamma}{b}e^\frac{\gamma\left(\gamma-2\right)}{2b}d\gamma}=e^{-\left(\frac{1}{2b}\right)}+\left(1-e^{-\left(\frac{1}{2b}\right)}\right)=1
\]
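The same verification can be done numerically (a sketch; requires SciPy):

```python
import math
from scipy.integrate import quad

b = 10.0
Pc = math.exp(-1 / (2 * b))          # point mass at gamma = 1

def density(g):                      # density for 0 <= gamma < 1
    return (1 - g) / b * math.exp(g * (g - 2) / (2 * b))

mass, _ = quad(density, 0.0, 1.0)
total = Pc + mass                    # should equal 1
```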
We compute the expectation of the separation ratio $\gamma$:
\[
E\left[\gamma\right]=\int_{0}^{1}{\gamma\cdot P\left(\gamma\right)d\gamma}=1\cdot P_c+\int_{0}^{1}{\gamma\cdot\frac{1-\gamma}{b}e^\frac{\gamma\left(\gamma-2\right)}{2b}d\gamma}\Rightarrow
\]
\begin{equation} \label{eq:9}
E\left[\gamma\right]=\frac{\sqrt{2\pi b}}{2}\mbox{erfi}{\left(\frac{1}{\sqrt{2b}}\right)}e^{-\left(\frac{1}{2b}\right)}
\end{equation}
where $\mbox{erfi}(x)$ is the imaginary error function:
\[
\mbox{erfi}(x)=\frac{2}{\sqrt\pi}\sum_{n=0}^{\infty}\frac{x^{2n+1}}{n!\left(2n+1\right)}
\]
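Eq. (\ref{eq:9}) can be verified by comparing the closed form, via \texttt{scipy.special.erfi}, against direct numerical integration of $\gamma\, P(\gamma)$ (a sketch; the function names are ours):

```python
import math
from scipy.integrate import quad
from scipy.special import erfi

def e_gamma_analytic(b):
    # Eq. (9).
    return (0.5 * math.sqrt(2 * math.pi * b)
            * erfi(1 / math.sqrt(2 * b)) * math.exp(-1 / (2 * b)))

def e_gamma_numeric(b):
    # Direct E[gamma]: mass at gamma = 1 plus the continuous part.
    pc = math.exp(-1 / (2 * b))
    integrand = lambda g: g * (1 - g) / b * math.exp(g * (g - 2) / (2 * b))
    return pc + quad(integrand, 0.0, 1.0)[0]
```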
\subsection{Expectation of training accuracy}
\label{sec:2.4}
Based on the hypothesis, a high separation ratio helps to obtain a high training accuracy, but it is not sufficient, because the training accuracy also depends on the separating capacity of the second (output) layer. Nevertheless, we initially ignore this fact and strengthen Hypothesis \ref{hyp:1}:
\begin{guess} \label{hyp:2}
The separation ratio directly determines the training accuracy.
\end{guess}
Later, we will add \textbf{empirical corrections} to our theory to match real situations. We suppose that, under incomplete separation, all completely separated data points are predicted correctly, and that the remaining data points have a 50\% chance of being predicted correctly (equivalent to a random guess, since the number of samples in each class is the same). Specifically, if $\gamma N$ data points are completely separated, the training accuracy $\alpha$ (under our hypothesis) is:
\[
\alpha=\frac{\gamma N+0.5\left(1-\gamma\right)N}{N}=\frac{1+\gamma}{2}
\]
Taking the expectation of both sides, we have:
\begin{equation} \label{eq:10}
E\left[\alpha\right]=\frac{1+E\left[\gamma\right]}{2}
\end{equation}
Eq. (\ref{eq:10}) shows the expectation relationship between the separation ratio and training accuracy. After replacing $E\left[\gamma\right]$ in Eq. (\ref{eq:10}) with Eq. (\ref{eq:9}), we obtain the formula to compute the \textbf{expectation of training accuracy}:
\begin{equation} \label{eq:11}
\boxed{
E\left[\alpha\right]=\frac{1}{2}+\frac{\sqrt{2\pi b}}{4}\mbox{erfi} {\left(\frac{1}{\sqrt{2b}}\right)}e^{-\left(\frac{1}{2b}\right)}
}
\end{equation}
To compute the expectation of training accuracy by Eq. (\ref{eq:11}), we must calculate the value of $b$. The expectation of training accuracy is a \textbf{monotonically increasing function of $b$ on its domain $(0,\infty)$, with range $(0.5,1)$}. Because the coefficient $b$ governs the estimated training accuracy, we also call it the \textbf{ensemble index}. This leads to the following theorem:
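The monotonicity and range claims can be checked numerically (a sketch; requires SciPy, function name is ours):

```python
import math
from scipy.special import erfi

def expected_accuracy(b):
    # Eq. (11).
    return (0.5 + 0.25 * math.sqrt(2 * math.pi * b)
            * erfi(1 / math.sqrt(2 * b)) * math.exp(-1 / (2 * b)))

bs = [0.01, 0.1, 1.0, 10.0, 100.0]
accs = [expected_accuracy(b) for b in bs]   # increasing, all in (0.5, 1)
```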
\begin{theorem} \label{thm:1}
The expectation of training accuracy for a $d-L-1$ architecture FCNN is determined by Eq. (\ref{eq:11}) with the ensemble index $b$.
\end{theorem}
We suppose that, in the input space $R^d$, $L$ hyperplanes (neurons) divide the space into $S$ partitions. By the space partitioning theory \citep{winder_partitions_1966}, the maximum number of partitions is:
\begin{equation} \label{eq:12}
S=\sum_{i=0}^{d}{L \choose i}
\end{equation}
Since:
\[
\sum_{i=0}^{d}{L \choose i}=O\left(\frac{L^d}{d!}\right)
\]
We let:
\begin{equation} \label{eq:13}
S=\frac{L^d}{d!}
\end{equation}
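Eq. (\ref{eq:12}) and its approximation Eq. (\ref{eq:13}) can be compared directly (a sketch; the function names are ours):

```python
import math

def s_exact(L, d):
    # Eq. (12): maximum number of partitions from L hyperplanes in R^d.
    return sum(math.comb(L, i) for i in range(d + 1))

def s_approx(L, d):
    # Eq. (13).
    return L ** d / math.factorial(d)

ratio_2d = s_approx(100, 2) / s_exact(100, 2)   # close to 1 in 2-D
```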
\begin{SCfigure}[][h]
\caption{Maximum number of partitions in 2-D}
\label{fig:1}
\includegraphics[width=0.7\textwidth]{figs/fig1.pdf}
\end{SCfigure}
Fig. \ref{fig:1} shows that the partition numbers calculated from Eq. (\ref{eq:12}) and (\ref{eq:13}) are very close in 2-D. In high dimensions, Eq. (\ref{eq:13}) remains a relatively tight \textbf{upper bound} of Eq. (\ref{eq:12}). With our choice $S=bN^2$ from Eq. (\ref{eq:5}), we have:
\begin{equation} \label{eq:14}
b=\frac{L^d}{{d!N}^2}
\end{equation}
We have now introduced our main theory, which estimates the training accuracy of a $d-L-1$ structure FCNN on two classes of $N$ random (uniformly distributed) data points using Eq. (\ref{eq:14}) and (\ref{eq:11}). For example, consider a dataset of 200 random data points in $R^3$ from two classes (100 samples per class), used to train a $3-200-1$ FCNN. In this case,
\[
b=\frac{{200}^3}{{3!200}^2}\approx33.33
\]
Substituting $b=33.33$ into Eq. (\ref{eq:11}) yields $E\left[\alpha\right]\approx0.995$, \textit{i.e.}, the expectation of training accuracy for this case is about 99.5\%.
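This worked example can be reproduced end to end (a sketch; requires SciPy, function name is ours):

```python
import math
from scipy.special import erfi

def expected_accuracy(b):
    # Eq. (11).
    return (0.5 + 0.25 * math.sqrt(2 * math.pi * b)
            * erfi(1 / math.sqrt(2 * b)) * math.exp(-1 / (2 * b)))

d, N, L = 3, 200, 200
b = L ** d / (math.factorial(d) * N ** 2)   # Eq. (14): about 33.33
acc = expected_accuracy(b)                  # about 0.995
```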
\section{Empirical corrections}
\label{sec:3}
An empirical correction uses results from real experiments to update/correct the theoretical model proposed above. The correction is necessary because our hypothesis ignores the separating capacity of the second (output) layer, and because the maximum number of partitions is not attained in all situations; \textit{e.g.}, for a large $L$, the real partition number may be much smaller than $S$ in Eq. (\ref{eq:12}).
In experiments, we train a $d-L-1$ structure FCNN on $N$ random (uniformly distributed) data points in $[0,\ 1)^d$ from two classes with labels ‘0’ and ‘1’ (with an equal number of samples per class). The training process ends when the training accuracy converges (the loss change is smaller than ${10}^{-4}$ over 1000 epochs). For each $\left\{d,\ N,\ L\right\}$, the training is repeated several times from scratch, and the recorded training accuracy is the average.
\subsection{Two dimensions}
\label{sec:3.1}
In 2-D, by Eq. (\ref{eq:14}), we have:
\[
b=\frac{1}{2}\left(\frac{L}{N}\right)^2
\]
If $\frac{L}{N}=c$ is fixed, $b$ does not change with $N$. To test this counter-intuitive inference, we let $L=N=\left\{100,\ 200,\ 500,\ 800,\ 1000,\ 2000,\ 5000\right\}$. Since $\frac{L}{N}=1$, $b$ and $E[\alpha]$ are unchanged. However, Table \ref{tab:1} shows that the real training accuracies vary with $N$: the predicted training accuracy is close to the real training accuracy only at $N=200$, and the real training accuracy decreases as $N$ grows. Hence, our theory must be refined using empirical corrections.
\input{tables/tab1.tex}
The correction could be applied to either Eq. (\ref{eq:14}) or (\ref{eq:11}). We modify Eq. (\ref{eq:14}) because the range of function (\ref{eq:11}) is $(0.5, \ 1)$, which is an advantage for training accuracy estimation. In Table \ref{tab:1}, the real training accuracy decreases as $N$ increases; this suggests that the exponent of $N$ in Eq. (\ref{eq:14}) should be larger than that of $L$. Therefore, generalizing (\ref{eq:14}), we consider the following equation connecting the ensemble index $b$ with the parameters $d$, $N$, and $L$:
\begin{equation} \label{eq:15}
\boxed{
b=c_d\frac{L^{x_d}}{N^{y_d}}
}
\end{equation}
\begin{obs} \label{obs:1}
The ensemble index $b$ is computed by Eq. (\ref{eq:15}) with parameters $\left\{x_d,\ y_d,\ c_d\right\}$, where $x_d$ and $y_d$ are the exponents of $L$ and $N$, respectively, and $c_d$ is a constant. All three parameters vary with the dimensionality of inputs $d$.
\end{obs}
In 2-D, to determine $x_2,\ y_2,\ c_2$ in Eq. (\ref{eq:15}), we test 81 $\left\{N,\ L\right\}$ pairs, which are the combinations of $L,\ N\ \in$ \{100, 200, 500, 800, 1000, 2000, 5000, 10000, 20000\}. For each $\left\{N[i],\ L\left[i\right]\right\}$, we obtain a real training accuracy by experiment, and the corresponding ensemble index $b[i]$ is found by inverting Eq. (\ref{eq:11}). Finally, we determine $x_2,\ y_2,\ c_2$ by fitting $\frac{1}{b},\ N,\ L$ to Eq. (\ref{eq:15}); this yields the expression for the ensemble index in 2-D:
\begin{equation} \label{eq:16}
b=8.4531\frac{L^{0.0744}}{N^{0.6017}}
\end{equation}
The fitting process uses the Curve Fitting Tool (cftool) in MATLAB. Figure \ref{fig:2} shows the 81 $\left\{N,\ L\right\}$ points and the fitted curve; the R-squared value of the fit is about 99.8\%. We fit $\frac{1}{b}$ instead of $b$ to avoid $b=+\infty$ when the real accuracy is 1 (which is reachable); in that case, $\frac{1}{b}=0$. Conversely, $b=0$ only when the real accuracy is 0.5, which never appears in our experiments: any classifier more effective than a random guess yields accuracy $>0.5$ ($b>0$), so $\frac{1}{b}\neq+\infty$.
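The MATLAB fit can be reproduced in Python. Since the raw experimental accuracies are not listed here, the sketch below generates synthetic $(N, L, \frac{1}{b})$ samples from Eq. (\ref{eq:16}) itself and shows that a log-linear least-squares fit recovers the parameters (all names and sample values in the code are illustrative):

```python
import numpy as np

# Synthetic stand-in for the experimental (N, L, 1/b) samples.
Ns = np.array([100, 200, 500, 1000, 2000, 5000, 10000], dtype=float)
Ls = np.array([200, 100, 1000, 500, 5000, 2000, 10000], dtype=float)
x_true, y_true, c_true = 0.0744, 0.6017, 8.4531        # Eq. (16)
inv_b = Ns ** y_true / (c_true * Ls ** x_true)

# log(1/b) = -log(c) - x * log(L) + y * log(N): ordinary least squares.
A = np.column_stack([np.ones_like(Ns), -np.log(Ls), np.log(Ns)])
coef, *_ = np.linalg.lstsq(A, np.log(inv_b), rcond=None)
c_fit, x_fit, y_fit = np.exp(-coef[0]), coef[1], coef[2]
```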
\begin{SCfigure}[][h]
\caption{
Fitting curve of $\frac{1}{b}=f(N,L)$ in 2-D
}
\label{fig:2}
\includegraphics[width=0.72\textwidth]{figs/fig2.pdf}
\end{SCfigure}
Using Eq. (\ref{eq:16}) and (\ref{eq:11}), we estimate the training accuracy for selected values of $N$ and $L$ in 2-D. The results are shown in Table \ref{tab:2}. The differences between real and estimated training accuracies are small, except in the first row. For cases with higher real accuracy ($>0.86$, equivalently $b>1$ and $\frac{1}{b}<1$), the difference is larger, because such small $\frac{1}{b}$ values carry less weight than cases with $\frac{1}{b}>1$ during the fitting of Eq. (\ref{eq:16}).
\input{tables/tab2.tex}
\subsection{Three and more dimensions}
\label{sec:3.2}
We repeat the same process as in 2-D to determine the parameters $\left\{x_d,\ y_d,\ c_d\right\}$ of Observation \ref{obs:1} for input dimensionalities from 3 to 10. Results are shown in Table \ref{tab:3}; the R-squared values of the fits are high.
\input{tables/tab3.tex}
These results reaffirm the necessity of correcting our theory: compared to Eq. (\ref{eq:14}), the parameters $\left\{x_d,\ y_d,\ c_d\right\}$ are not $\left\{d,\ 2,\ \frac{1}{d!}\right\}$. However, the growth of $\frac{x_d}{y_d}$ is preserved. From Eq. (\ref{eq:14}),
\[
\frac{x_d}{y_d}=\frac{d}{2}
\]
so $\frac{x_d}{y_d}$ increases linearly with $d$. The empirical relationship between $d$ and $\frac{x_d}{y_d}$ (Figure \ref{fig:3}) shows the same tendency.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{figs/fig3.pdf}
\caption{
Plot of $d$ vs. $\frac{x_d}{y_d}$ from Table \ref{tab:3}. The blue dotted line is a linear fit to the points, showing the growth.
}
\label{fig:3}
\end{figure}
Table \ref{tab:3} indicates that $x_d$, $y_d$, and $c_d$ increase almost linearly with $d$. Thus, we apply linear fits to $d\mbox{-}x_d$, $d\mbox{-}y_d$, and $d\mbox{-}c_d$ to obtain:
\begin{equation} \label{eq:17}
\boxed{
\begin{array}{l p{1em} l}
x_d=0.0758\cdot d - 0.0349 & & (R^2 = 0.858)\\
y_d=0.0517\cdot d + 0.5268 & & (R^2 = 0.902)\\
c_d=9.4323\cdot d - 8.8558 & & (R^2 = 0.804)
\end{array}
}
\end{equation}
Eq. (\ref{eq:17}) supplements Observation \ref{obs:1}, which applies empirical corrections to Eq. (\ref{eq:14}) for determining the ensemble index $b$ of Theorem \ref{thm:1}. Combining the two statements, we can estimate the training accuracy of a two-layer FCNN on two-class random datasets using only three arguments, namely the dimensionality of inputs ($d$), the number of inputs ($N$), and the number of neurons in the hidden layer ($L$), without actual training.
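Putting Theorem \ref{thm:1}, Observation \ref{obs:1}, and Eq. (\ref{eq:17}) together gives the complete estimator (a sketch; requires SciPy, function name is ours):

```python
import math
from scipy.special import erfi

def estimate_accuracy(d, N, L):
    # Eq. (17): empirically fitted parameters.
    x_d = 0.0758 * d - 0.0349
    y_d = 0.0517 * d + 0.5268
    c_d = 9.4323 * d - 8.8558
    # Eq. (15): ensemble index, then Eq. (11): expected accuracy.
    b = c_d * L ** x_d / N ** y_d
    return (0.5 + 0.25 * math.sqrt(2 * math.pi * b)
            * erfi(1 / math.sqrt(2 * b)) * math.exp(-1 / (2 * b)))

acc = estimate_accuracy(3, 200, 200)
```

Consistent with Figure \ref{fig:4}, the estimate increases with the input dimensionality for fixed $N$ and $L$.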
\subsection{Testing}
\label{sec:3.3}
To verify our theorem and observation, we first estimate the training accuracy over a larger range of dimensions ($d=2$ to 24) in three situations:
\begin{itemize}
\item $N \gg L: N=10000,\ L=1000$
\item $N\cong L: N=L=10000$
\item $N\ll L: N=1000,\ L=10000$
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{figs/fig4.pdf}
\caption{
Comparison of real and estimated training accuracies. The y-axis is accuracy; the x-axis is the dimensionality of inputs ($d$).
}
\label{fig:4}
\end{figure}
The results are shown in Figure \ref{fig:4}. The maximum differences between real and estimated training accuracies are about 0.130 ($N\gg L$), 0.076 ($N\cong L$), and 0.104 ($N\ll L$). There may be two reasons why the differences are not small in some cases of $N\gg L$: 1) the fitting process does not include enough samples with $N\gg L$, so the corrections are imperfect; 2) the differences are greater for higher real accuracies, for the same reason as in the 2-D situation discussed above.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{figs/fig5.pdf}
\caption{
Evaluation of the estimated training accuracies. The y-axis is the estimated accuracy; the x-axis is the real accuracy; each dot is one case; the red line is $y=x$. R-squared $\approx0.955$.
}
\label{fig:5}
\end{figure}
In addition, we estimate the training accuracy for 40 random cases. For each case, $N,\ L\in [100,\ 20000]$ and $d\in [2,\ 24]$, but principally $d\in [2,\ 10]$, because in high dimensions almost all cases’ accuracies are close to 100\% (see Figure \ref{fig:4}). Figure \ref{fig:5} shows the results: each case is plotted with its real and estimated training accuracy. The overall R-squared value of about 0.955 indicates good estimation.
\section{Discussion}
\label{sec:4}
Clearly, the empirical corrections are required, because the purely theoretical results (estimating training accuracy by Eq. (\ref{eq:14}) and Theorem \ref{thm:1}) cannot match the real accuracies in 2-D (Table \ref{tab:1}). The corrections are not yet perfect, but the corrected estimation method reflects several characteristics of the real training accuracies. For example, the training accuracies of the $N\gg L$ cases are smaller than those of $N\ll L$, and for specific $N,\ L$, higher input dimensionality yields greater training accuracy. These characteristics are shown by the estimation curves in Figure \ref{fig:4}.
Because of the limited number of fitting and testing samples, the empirically corrected estimation is not very accurate in some cases, and the theorem, the observation, and the empirical corrections could all be improved in the future. The improvements could follow three directions: 1) using more data for fitting; 2) rethinking the space partitioning problem to revise Observation \ref{obs:1}, either by using an approximation formula different from Eq. (\ref{eq:13}) or by involving the probability of reaching the maximum number of partitions; 3) modifying Theorem \ref{thm:1} by reconsidering the necessity of complete separation. In real classification problems, complete separation of all data points is too strong a requirement; requiring only that no samples from different classes be assigned to the same partition would be more appropriate. We could also involve the capacity of a separating plane \citep{duda_pattern_2012}:
\[
f(N,d)=\left\{
\begin{array}{l p{1em} c}
1& & N\le d+1 \\
\displaystyle \frac{2}{2^N}\sum_{i=0}^{d}{N-1 \choose i} & & N>d+1
\end{array}\right.
\]
where $f(N,d)$ is the probability that there exists a hyperplane separating $N$ two-class points in $d$ dimensions. An ideal theory would precisely estimate the accuracy with no, or very limited, empirical corrections; such a theory would be a purely theoretical model derived from several hypotheses by mathematical derivation.
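This capacity function is straightforward to compute; a classical property (Cover's result, as presented by Duda et al.) is that it equals exactly $\frac{1}{2}$ at $N=2(d+1)$ (a sketch; the function name is ours):

```python
import math

def capacity(N, d):
    # Probability that a separating hyperplane exists for N
    # two-class points in general position in d dimensions.
    if N <= d + 1:
        return 1.0
    return 2 / 2 ** N * sum(math.comb(N - 1, i) for i in range(d + 1))

p_half = capacity(2 * (3 + 1), 3)   # N = 2(d + 1) gives exactly 0.5
```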
Our proposed estimation theory could extend to multi-layer neural networks. As discussed at the beginning of Section \ref{sec:2}, the $L$ neurons in the first hidden layer assign every input a unique code; the first hidden layer thus transforms the $N$ inputs from $d$-D to $L$-D. Usually $L>d$, and the higher dimensionality makes \textbf{data separation easier for successive layers}. Moreover, the effective $N$ decreases as data pass through layers if we consider \textbf{partition merging}: if a partition and its neighboring partitions contain data of the same class, they can be merged into one, because these data are already locally well classified. The decrease in effective inputs also makes data separation easier for successive layers.
Alternatively, the study of \citet{pascanu_number_2014} provides a calculation of the number of space partitions ($S$ regions) created by $k$ hidden layers:
\[
S=\left(\prod_{i=1}^{k-1}\left\lfloor\frac{n_i}{n_0}\right\rfloor\right)\sum_{i=0}^{n_0}{n_k \choose i}
\]
where $n_0$ is the size of the input layer and $n_i$ is the size of the $i$-th hidden layer. This reduces to Eq. (\ref{eq:12}) in the present case ($k=1$, $n_0=d$, and $n_1=L$). A theory for multi-layer neural networks could begin with the approaches above.
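This count can be implemented and checked against the single-layer case (a sketch; the function name is ours, with \texttt{hidden} listing the hidden layer sizes $n_1,\dots,n_k$):

```python
import math

def regions(n0, hidden):
    # Region count of Pascanu et al. for a rectifier network with
    # input size n0 and hidden layer sizes hidden = [n_1, ..., n_k].
    prod = 1
    for ni in hidden[:-1]:
        prod *= ni // n0
    return prod * sum(math.comb(hidden[-1], i) for i in range(n0 + 1))

# k = 1 reduces to Eq. (12) with n0 = d and n1 = L.
single = regions(2, [50])
eq12 = sum(math.comb(50, i) for i in range(3))
```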
In addition, the proposed theory could extend to data distributions other than the uniform distribution and/or to other types of neural networks by modifying the calculation of the separation probabilities, such as Eq. (\ref{eq:1}) and (\ref{eq:6}). This study leaves several aspects worth enhancing or extending and raises questions for future work.
\section{Conclusions}
\label{sec:5}
Our main contribution is a novel theory that estimates the training accuracy of a two-layer FCNN on two-class random datasets without using input data, trained models, or any training; it uses only three arguments: the dimensionality of inputs ($d$), the number of inputs ($N$), and the number of neurons in the hidden layer ($L$). To the best of our knowledge, no other study has proposed a method to estimate training accuracy in this way.
Our theory is based on the notion that hidden layers in neural networks perform space partitioning and on the hypothesis that the data separation ratio determines the training accuracy. Theorem \ref{thm:1} introduces a mapping between the training accuracy and the ensemble index. The ensemble index is better suited than accuracy for prediction because its domain is $(0,\ \infty)$, which is convenient for designing prediction models and fitting experimental data. Observation \ref{obs:1} provides a calculation of the ensemble index based on empirical corrections.
The theory has been verified against real training accuracies in our experiments, and it has the potential to extend to deeper neural network models. This study may raise further questions and suggests a starting point for a new way for researchers to make progress on the \textit{transparency} of deep learning.
\section*{Appendix: Simplification from Eq. (\ref{eq:3}) to Eq. (\ref{eq:4})}
\label{sec:appx}
\addcontentsline{toc}{section}{Appendix}
In the paper, Eq. (\ref{eq:3}):
\[\lim_{N \to +\infty}P_c=\lim_{N \to + \infty}\left(\frac{1}{e}\right)^N{\left(\frac{bN^a}{bN^a-N}\right)^{bN^a-N+0.5}}\]
Ignoring the constant 0.5 (negligible compared to $N$),
\[\lim_{N \to +\infty}P_c=\lim_{N\to+\infty}\left(\frac{1}{e}\right)^N{\left(\frac{bN^a}{bN^a-N}\right)^{bN^a-N}}\]
Using the identity $x=e^{\ln{x}}$,
\begin{multline*}
\lim_{N \to +\infty}P_c=\lim_{N \to +\infty}e^{\ln{\left[\left(\frac{1}{e}\right)^N{\left(\frac{bN^a}{bN^a-N}\right)^{bN^a-N}}\right]}} \\
=\lim_{N \to +\infty}e^{
\underbrace{-N+\left(bN^a-N\right)\ln{\left(\frac{bN^a}{bN^a-N}\right)}}_{(\mathcal{A})}}
\end{multline*}
Let $t=\frac{1}{N}\to +0$,
\[\left(\mathcal{A}\right)=-\frac{1}{t}+\left(\frac{b}{t^a}-\frac{1}{t}\right)\ln{\left(\frac{\frac{b}{t^a}}{\frac{b}{t^a}-\frac{1}{t}}\right)}=\frac{\left(b-t^{a-1}\right)\ln\left(\frac{b}{b-t^{a-1}}\right)-t^{a-1}}{t^a}\]
[i] If $a=1$,
\[(\mathcal{A})=\frac{\overbrace{\left(b-1\right)\ln\left(\frac{b}{b-1}\right)}^{(\mathcal{B})}-1}{t}\]
\[\left(\mathcal{B}\right)=\ln\left(\frac{b}{b-1}\right)^{\left(b-1\right)}
\]
In $R$, it is easy to show that, for $b>1$,
\[
1<\left(\frac{b}{b-1}\right)^{\left(b-1\right)}<e\]
Then,
\[0<\left(\mathcal{B}\right)<1
\]
\[\lim_{t\to+0}\left(\mathcal{A}\right)=\lim_{t\to+0}\frac{\left(\mathcal{B}\right)-1}{t}=-\infty
\]
Therefore,
\[\lim_{N\to+\infty}P_c=e^{-\infty}=0 \tag{Eq. (\ref{eq:4}) in paper, when $a=1$}
\]
[ii] For $a>1$, by applying L'H\^{o}pital's rule several times:
\begin{multline*}
\displaystyle \lim_{t\to+0}\left(\mathcal{A}\right)=\lim_{t\to+0}\frac{\left(b-t^{a-1}\right)\ln\left(\frac{b}{b-t^{a-1}}\right)-t^{a-1}}{t^a} \\
\displaystyle \myeq\lim_{t\to+0}\frac{(1-a)t^{a-2}\ln\left(\frac{b}{b-t^{a-1}}\right)}{at^{a-1}}=\lim_{t\to+0}\frac{(1-a)\ln\left(\frac{b}{b-t^{a-1}}\right)}{at} \\
\displaystyle \myeq\lim_{t\to+0}\frac{\left(a-1\right)^2}{a\left(t-bt^{2-a}\right)}=\lim_{t\to+0}\frac{\frac{\left(a-1\right)^2}{t}}{a\left(1-bt^{1-a}\right)} \\
\displaystyle \myeq\lim_{t\to+0}\frac{\frac{\left(a-1\right)^2}{-t^2}}{\left(a-1\right)abt^{-a}}=\lim_{t\to+0}-\frac{\left(a-1\right)}{abt^{2-a}}
\end{multline*}
Substitute $N=\frac{1}{t}$,
\[
\lim_{N\to+\infty}P_c=\lim_{t\to+0}e^{\left(\mathcal{A}\right)}=\lim_{N\to+\infty}e^{-\frac{\left(a-1\right)N^{2-a}}{ab}} \tag{Eq. (\ref{eq:4}) in paper, when $a>1$}
\]
When $1<a<2$ (and $a=1$, shown before in [i]),
\[\lim_{N\to+\infty}P_c=e^{-\frac{+\infty}{ab}}=0\]
When $a>2$,
\[\lim_{N\to+\infty}P_c=e^{-\frac{0}{ab}}=1\]
For $a=2$,
\[\lim_{N\to+\infty}P_c=e^{-\left(\frac{1}{2b}\right)} \tag{Eq. (\ref{eq:5}) in paper}
\]
$\blacksquare$
\section{Introduction}
\label{sec:intro}
With over 5 million annual diagnoses in the USA alone~\cite{FF2020}, skin cancer is the most common form of cancer. Melanoma, the deadliest form of skin cancer, representing only a small fraction of all skin cancer diagnoses, accounts for over 75\% of all skin cancer related deaths and is estimated to be responsible for 6,850 fatalities in the USA alone during 2020~\cite{CancerStatistics2020}. However, studies have shown that early detection of skin cancers can lead to five-year survival rate estimates of approximately 99\%~\cite{CancerStatistics2020}, necessitating early diagnosis and treatment. Computer-aided diagnosis and clinical decision support systems for skin cancer detection are reaching human expert levels~\cite{esteva2017dermatologist,brinker2019deep}, and a crucial step for skin lesion diagnosis is the delineation of the skin lesion boundary to separate the affected region from the healthy skin, known as lesion segmentation. Recent advances in machine and deep learning have resulted in significant improvements in automated skin lesion diagnosis, but it remains a largely unsolved task because of complications arising from the large variety in the presentation of these lesions, primarily in shape, color, and contrast.
Medical images often suffer from the data imbalance problem, where some classes occupy larger regions in the image than others. In the case of skin lesion images, this is frequently observed when the lesion is just a small fraction of the image with healthy skin occupying the majority of the image (for example, see the first two and the last rows in Figure~\ref{fig:results}). Unless accounted for while training a deep learning-based segmentation model, such an imbalance can lead to the model converging towards a local minimum of the loss function, yielding sub-optimal segmentation results biased towards the healthy skin~\cite{salehi2017tversky}.
Cross-entropy based loss values are often a poor reflection of segmentation quality on validation sets, and therefore overlap-metric based loss functions are preferred~\cite{berman2018lovasz}.
Variations of the Dice loss~\cite{milletari2016v}, a popular overlap-based loss function modeled using the Sørensen-Dice index, have been proposed to account for class imbalance in medical image segmentation tasks~\cite{salehi2017tversky,sudre2017generalised,abraham2019novel}. Similarly, some works have proposed using a combination of a distance-based loss (e.g. cross-entropy loss) and an overlap-based loss (e.g., Dice loss) to address the data imbalance issue~\cite{wong20183d,taghanaki2019combo}. For a detailed survey on segmentation loss functions, we direct the interested readers to Taghanaki et al.~\cite{taghanaki2020deep}. The Dice loss, however, does not include a penalty for misclassifying the false negative pixels~\cite{zhang2020kappa}, affecting the accuracy of background segmentation. We therefore propose a novel loss function based on the Matthews correlation coefficient (MCC)~\cite{matthews1975comparison}, a metric indicating the correlation between predicted and ground truth labels. MCC is an informative metric even when dealing with skewed distributions~\cite{maqc2010microarray} and has been shown to be an optimal metric when designing classifiers for imbalanced classes~\cite{boughorbel2017optimal}. Motivated by these meritorious properties of MCC, in this work, we present a MCC-based loss function that operates on soft probabilistic labels obtained from a deep neural network based segmentation model, making it differentiable with respect to the predictions and the model parameters. We evaluate this loss function by training and evaluating skin lesion segmentation models on three clinical and dermoscopic skin image datasets from different sources, and compare the performance to models trained using the popular Dice loss function.
\section{Method}
\label{sec:method}
Consider the binary segmentation task where each pixel in an image is labeled as either foreground or background. Figure~\ref{fig:overlap} shows a skin lesion along with the corresponding ground truth and predicted segmentation masks, denoted by $Y = \{y_i\}_{i=1}^N$ and $\hat{Y} = \{\hat{y}_i\}_{i=1}^N$ respectively.
Consider the two popular overlap metric based loss functions: intersection-over-union (IoU, also known as Jaccard) loss and Dice loss. They are modeled using the Jaccard index and the Dice similarity coefficient (DSC), respectively, which are defined as:
\begin{equation}
\mathrm{Jaccard} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP} + \mathrm{FN}},
\end{equation}
\begin{equation}
\mathrm{DSC} = \frac{\mathrm{2TP}}{\mathrm{2TP} + \mathrm{FP} + \mathrm{FN}}.
\end{equation}
\noindent where true positive (TP), false positive (FP), and false negative (FN) predictions are entries from the confusion matrix. Notice that neither of these metrics penalizes misclassifications of the true negative (TN) pixels, making it difficult to optimize for accurate background prediction. We instead propose a loss based on the Matthews correlation coefficient (MCC). The MCC for a pair of binary classification predictions is defined as:
\begin{equation}
\mathrm{MCC} = \frac{(\mathrm{TP} \cdot \mathrm{TN}) - (\mathrm{FP} \cdot \mathrm{FN})}{\sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+\mathrm{FP})(\mathrm{TN}+\mathrm{FN})}} \label{eqn:mcc_metric}
\end{equation}
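As a concrete numerical illustration (the values below are ours, chosen only for illustration and not taken from any experiment), the MCC of a binary confusion matrix can be evaluated directly from its four entries:

```python
import math

def mcc(tp: float, tn: float, fp: float, fn: float) -> float:
    """Matthews correlation coefficient from confusion-matrix entries."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den  # the degenerate den == 0 case is not handled here

# A balanced toy example: 40 correct and 10 incorrect predictions per class.
print(mcc(tp=40, tn=40, fp=10, fn=10))  # -> 0.6
```

A perfect prediction (FP = FN = 0) gives MCC = 1, and a completely inverted one (TP = TN = 0) gives MCC = $-1$, consistent with the range discussed below.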
\begin{figure}[!tbp]
\begin{subfigure}[t]{0.47\columnwidth}
\includegraphics[width=\columnwidth]{Figures/Lesion.png}
\caption{A skin lesion image with overlaid masks.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\columnwidth}
\includegraphics[width=\columnwidth]{Figures/Overlap.png}
\caption{Overlap of the predicted and the ground truth masks.}
\end{subfigure}
\caption{A skin lesion image with the predicted (yellow) and ground truth (black) skin lesion segmentation masks: true positive (TP), true negative (TN), false positive (FP), and false negative (FN) predictions are denoted by green, grey, red, and blue respectively.}
\label{fig:overlap}
\end{figure}
MCC values range from $-1$ to $1$, with $-1$ and $1$ indicating a completely disjoint and a perfect prediction respectively. An MCC-based loss function, $\mathcal{L}_{\mathrm{MCC}}$, can be defined as:
\begin{equation}
\mathcal{L}_{\mathrm{MCC}} = 1 - \mathrm{MCC} \label{eqn:mcc_loss}
\end{equation}
For a differentiable loss function defined on pixelwise probabilistic predictions from the segmentation network, we define:
\begin{equation*}
\mathrm{TP} = \sum^N_i \hat y_i y_i \ ; \ \mathrm{TN} = \sum^N_i (1-\hat y_i) (1-y_i);
\end{equation*}
\begin{equation}
\mathrm{FP} = \sum^N_i \hat y_i (1-y_i) \ ; \ \mathrm{FN} = \sum^N_i (1-\hat y_i) y_i, \numberthis \label{eqn:conf_entries}
\end{equation}
\noindent where $\hat y_i$ and $y_i$ denote the prediction and the ground truth for the $i^{\mathrm{th}}$ pixel in the image. Dropping the summation limits for readability and plugging values from Eqn.~\ref{eqn:conf_entries} into Eqn.~\ref{eqn:mcc_metric}, we have:
\begin{equation}
\mathcal{L}_{\mathrm{MCC}} = 1 - \frac{\sum \hat y_i y_i - \frac{\sum \hat y_i \sum y_i}{N}}{f(\hat y_i, y_i)},
\end{equation}
\begin{align}
f(\hat y_i, y_i) = \sqrt{
\begin{aligned}
& \sum \hat y_i \sum y_i - \frac{\sum \hat y_i (\sum y_i)^2}{N} \\ & - \frac{(\sum \hat y_i)^2 \sum y_i}{N} + \left(\frac{\sum \hat y_i \sum y_i}{N}\right)^2
\end{aligned}
}\, .
\end{align}
The gradient of this formulation computed with respect to the $i^{\mathrm{th}}$ pixel in the predicted segmentation is:
\begin{equation}
\frac{\partial \mathcal{L}_{\mathrm{MCC}}}{\partial \hat y_i} = \frac{1}{2} \frac{g(\hat y_i, y_i)}{\left(f(\hat y_i, y_i)\right)^{3}} - \frac{y_i - \frac{\sum y_i}{N}}{f(\hat y_i, y_i)},
\end{equation}
\begin{align}
\begin{aligned}
g(\hat y_i, y_i) &= \left(\sum \hat y_i y_i - \frac{\sum \hat y_i \sum y_i}{N}\right) \cdot \left(\sum y_i -\right. \\ & \left.\frac{(\sum y_i)^2}{N}\right. \left.- 2\frac{\sum \hat y_i \sum y_i}{N} + 2\frac{\sum \hat y_i (\sum y_i)^2}{N^2} \right).
\end{aligned}
\end{align}
Finally, we optimize the deep segmentation model $f(\cdot)$ using error backpropagation as:
\begin{equation}
\Theta^* = \argminA_\Theta \mathcal{L}_{\mathrm{MCC}}\left(f(X,\Theta), Y\right),
\end{equation}
\noindent where $\hat{Y} = f(X, \Theta)$ denotes the segmentation for input image $X$ predicted by the model parameterized by $\Theta$.
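The construction above can be sketched end-to-end in a few lines. The following framework-agnostic pure-Python sketch (ours; in practice one would implement this on tensors with an autodiff library, and typically add a small $\epsilon$ to the denominator for numerical stability) evaluates $\mathcal{L}_{\mathrm{MCC}}$ from soft predictions via the confusion entries of Eqn.~(\ref{eqn:conf_entries}):

```python
import math

def soft_mcc_loss(y_hat, y):
    """MCC-based loss on soft predictions y_hat in [0, 1] vs. binary labels y."""
    tp = sum(p * t for p, t in zip(y_hat, y))
    tn = sum((1 - p) * (1 - t) for p, t in zip(y_hat, y))
    fp = sum(p * (1 - t) for p, t in zip(y_hat, y))
    fn = sum((1 - p) * t for p, t in zip(y_hat, y))
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # In practice a small epsilon is added to den to avoid division by zero.
    return 1.0 - num / den

y = [1, 1, 0, 0]
print(soft_mcc_loss([1.0, 1.0, 0.0, 0.0], y))  # perfect prediction  -> 0.0
print(soft_mcc_loss([0.0, 0.0, 1.0, 1.0], y))  # inverted prediction -> 2.0
```

The loss correctly spans $[0, 2]$, attaining $0$ for a perfect prediction and $2$ for a completely disjoint one.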
\section{DATASETS AND EXPERIMENTAL DETAILS}
\label{sec:experiments}
Given that the goal of this work is to demonstrate the efficacy of using an MCC-based loss to optimize deep convolutional neural networks for segmentation, we use U-Net~\cite{ronneberger2015u} as the baseline segmentation network. The U-Net architecture consists of symmetric encoder-decoder networks with skip connections carrying feature maps from corresponding layers in the encoder to the decoder, thus smoothing the loss landscape~\cite{li2018visualizing} and tackling the problem of gradient vanishing~\cite{taghanaki2020deep}.
We evaluate the efficacy of optimizing segmentation networks using the MCC-based loss on three clinical and dermoscopic skin lesion image datasets, namely the ISIC ISBI 2017 dataset, the DermoFit Image Library, and the PH2 dataset. The ISIC ISBI 2017 dataset~\cite{codella2018skin} contains skin lesion images and the corresponding lesion segmentation annotations for three diagnosis labels: benign nevi, melanoma, and seborrheic keratosis. The dataset is partitioned into training, validation, and testing splits with 2000, 150, and 600 image-mask pairs respectively. The DermoFit dataset~\cite{ballerini2013color} and the PH2 dataset~\cite{mendoncca2013ph} contain 1300 and 200 image-mask pairs belonging to 10 and 3 diagnosis classes, respectively. We randomly partition the DermoFit and the PH2 datasets into training, validation, and testing splits in the ratio of $60:10:30$.
For each dataset, we train two U-Net based segmentation models, one trained with the Dice loss ($\mathcal{L}_{\mathrm{Dice}}$) and another with the MCC loss ($\mathcal{L}_{\mathrm{MCC}}$), and compare their performance. All the images and the ground truth segmentation masks are resampled using nearest neighbor interpolation to $128 \times 128$ resolution using Python's SciPy library. All networks are trained using mini-batch stochastic gradient descent with a batch size of 40 (the largest batch size that could fit in the GPU memory) and a learning rate of $10^{-3}$. While training, we use on-the-fly data augmentation with random horizontal and vertical flips and rotation in the range $[-45^{\circ}, 45^{\circ}]$. All models are implemented using the PyTorch framework. For evaluating the segmentation performance, we report the metrics used by the ISIC challenge, namely, pixelwise accuracy, Dice similarity coefficient, Jaccard index, sensitivity, and specificity, and use the Wilcoxon two-sided signed-rank test for statistical significance.
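For reference, all five reported metrics follow directly from the binary confusion matrix. A minimal pure-Python sketch (toy masks are ours, for illustration only) mirrors those definitions on flattened binary masks:

```python
def segmentation_metrics(pred, gt):
    """ISIC-style evaluation metrics from flattened binary masks."""
    tp = sum(p and t for p, t in zip(pred, gt))
    tn = sum((not p) and (not t) for p, t in zip(pred, gt))
    fp = sum(p and (not t) for p, t in zip(pred, gt))
    fn = sum((not p) and t for p, t in zip(pred, gt))
    return {
        "accuracy":    (tp + tn) / len(gt),
        "dice":        2 * tp / (2 * tp + fp + fn),
        "jaccard":     tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy 5-pixel example: TP=2, TN=1, FP=1, FN=1.
m = segmentation_metrics(pred=[1, 1, 0, 1, 0], gt=[1, 1, 1, 0, 0])
```

For this toy mask pair the sketch yields accuracy $0.6$, Dice $2/3$, Jaccard $0.5$, sensitivity $2/3$, and specificity $0.5$.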
\section{RESULTS AND DISCUSSION}
\label{sec:results}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{Figures/Results_Qualitative_NEW.pdf}
\caption{Qualitative skin lesion segmentation results for the three datasets with (a) the original image, (b) ground truth mask, and lesion segmentation predictions with models trained with (c) $\mathcal{L}_{\mathrm{Dice}}$ and (d) $\mathcal{L}_{\mathrm{MCC}}$. The first two rows contain images from the ISIC 2017 dataset, the next two from the DermoFit dataset, and the last two from the PH2 dataset.}
\label{fig:results}
\end{figure}
\begin{table*}[ht]
\centering
\caption{Quantitative results for segmentation models trained with the two loss functions evaluated on the test partitions of the ISIC 2017 (600 images), DermoFit (390 images), and PH2 (60 images) datasets (mean $\pm$ standard error). \textbf{***} and \textbf{*} denote statistical significance of the Jaccard index at $p<0.001$ and $p<0.05$ respectively.}
\label{tab:results}
\resizebox{\textwidth}{!}{%
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|cc|cc|cc}
\hline
\textbf{Dataset} & \multicolumn{2}{c|}{\textbf{ISIC 2017\textsuperscript{***}}} & \multicolumn{2}{c|}{\textbf{DermoFit\textsuperscript{***}}} & \multicolumn{2}{c}{\textbf{PH2\textsuperscript{*}}} \\ \hline
\textbf{Loss Function} & $\mathcal{L}_{\mathrm{Dice}}$ & $\mathcal{L}_{\mathrm{MCC}}$ & $\mathcal{L}_{\mathrm{Dice}}$ & $\mathcal{L}_{\mathrm{MCC}}$ & $\mathcal{L}_{\mathrm{Dice}}$ & $\mathcal{L}_{\mathrm{MCC}}$ \\ \hline
Dice & $0.7781 \pm 0.0086$ & \bm{$0.8384 \pm 0.0070$} & $0.8437 \pm 0.0043$ & \bm{$0.8709 \pm 0.0030$} & $0.8888 \pm 0.0027$ & \bm{$0.8937 \pm 0.0020$} \\
Jaccard & $0.6758 \pm 0.0095$ & \bm{$0.7518 \pm 0.0084$} & $0.7418 \pm 0.0056$ & \bm{$0.7779 \pm 0.0041$} & $0.8051 \pm 0.0038$ & \bm{$0.8112 \pm 0.0032$} \\
Accuracy & $0.9029 \pm 0.0053$ & \bm{$0.9217 \pm 0.0046$} & $0.9024 \pm 0.0063$ & \bm{$0.9137 \pm 0.0023$} & $0.9219 \pm 0.0032$ & \bm{$0.9300 \pm 0.0022$} \\
Sensitivity & $0.7470 \pm 0.0091$ & \bm{$0.8130 \pm 0.0080$} & $0.8080 \pm 0.0063$ & \bm{$0.8799 \pm 0.0040$} & $0.9132 \pm 0.0027$ & \bm{$0.9155 \pm 0.0029$} \\
Specificity & $0.9683 \pm 0.0031$ & \bm{$0.9710 \pm 0.0029$} & \bm{$0.9533 \pm 0.0024$} & $0.9300 \pm 0.0030$ & $0.8852 \pm 0.0074$ & \bm{$0.9075 \pm 0.0053$} \\ \hline
\end{tabular}
}
}
\end{table*}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{Figures/Results_KDE.pdf}
\caption{Kernel density estimate plots for all the metrics from models trained using the Dice ($\mathcal{L}_{\mathrm{Dice}}$) and the MCC ($\mathcal{L}_{\mathrm{MCC}}$) losses evaluated on the three datasets.}
\label{fig:KDE}
\end{figure*}
To compare the performance of the models trained using the two losses, we present both qualitative and quantitative results for all the three datasets. Table~\ref{tab:results} reports the five evaluation metrics for the two models on all the datasets. We see that models trained with $\mathcal{L}_{\mathrm{MCC}}$ outperform those trained with $\mathcal{L}_{\mathrm{Dice}}$ on all metrics for all the datasets (except the specificity on DermoFit), with improvements in both sensitivity and specificity values.
Even for the DermoFit dataset, we observe that the model trained with $\mathcal{L}_{\mathrm{MCC}}$ achieves a better trade-off between sensitivity and specificity (0.8799 and 0.9300 obtained using $\mathcal{L}_{\mathrm{MCC}}$ versus 0.8080 and 0.9533 obtained using $\mathcal{L}_{\mathrm{Dice}}$).
The models trained with $\mathcal{L}_{\mathrm{MCC}}$ improve the mean Jaccard index by 11.25\%, 4.87\%, and 0.76\% on ISIC 2017, DermoFit, and PH2 datasets, respectively.
Additionally, the performance on the ISIC 2017 dataset is within 1\% of the Jaccard index achieved by the top 3 entries on the challenge leaderboard
even with a vanilla U-Net architecture and without using any post-processing, external data, or an ensemble of prediction models.
Next, to demonstrate the improvement in the segmentation prediction, we plot kernel density estimates of all the metrics for the three datasets in Figure~\ref{fig:KDE} using the Epanechnikov kernel to estimate the respective probability density functions. The plots have been clipped to the observed values for the corresponding metrics. We observe higher peaks (i.e., higher densities) at higher values for models trained using $\mathcal{L}_{\mathrm{MCC}}$. The improvements in Jaccard index for ISIC 2017 and DermoFit are statistically significant at $p<0.001$, and for PH2 at $p<0.05$, possibly explained by the small sample size (60 test images).
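A fixed-bandwidth Epanechnikov kernel density estimator of the kind used for Figure~\ref{fig:KDE} can be sketched in a few lines of pure Python (the sample scores and the bandwidth below are ours, chosen only for illustration):

```python
def epanechnikov_kde(samples, x, bandwidth=0.05):
    """Kernel density estimate at point x with the Epanechnikov kernel."""
    def kernel(u):
        return 0.75 * (1 - u * u) if abs(u) <= 1 else 0.0
    n, h = len(samples), bandwidth
    return sum(kernel((x - s) / h) for s in samples) / (n * h)

# Density of a few Jaccard-like scores, evaluated on a grid over [0.60, 0.90].
scores = [0.70, 0.72, 0.75, 0.78, 0.80, 0.81, 0.83]
grid = [i / 200 for i in range(120, 181)]  # step 0.005
density = [epanechnikov_kde(scores, x) for x in grid]
```

Since the Epanechnikov kernel integrates to one, the estimated density is non-negative and integrates to approximately one over its support.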
Figure~\ref{fig:results} presents 6 images sampled from the test partitions of the three datasets as well as the corresponding ground truth segmentation masks and the predicted segmentation masks using the two models. The images capture a wide variety in the appearance of the lesions, in terms of the size and the shape of the lesion, the lesion contrast with respect to the surrounding healthy skin, and the presence of artifacts such as markers and dark corners. We observe that the models trained with $\mathcal{L}_{\mathrm{MCC}}$ produce more accurate outputs, with considerably fewer false positive and false negative predictions.
\section{CONCLUSION}
\label{sec:conclusion}
We proposed a novel differentiable loss function for binary segmentation based on the Matthews correlation coefficient that, unlike the IoU and Dice losses, has the desirable property of considering all the entries of a confusion matrix, including true negative predictions. Evaluations on three skin lesion image datasets demonstrate the superiority of using this loss function over the Dice loss for training deep semantic segmentation models, with more accurate delineations of the lesion boundary and fewer false positive and negative predictions. Interestingly, we observed in our experiments that a model trained using the Dice loss yielded an inferior Dice coefficient upon evaluation as compared to a model trained using the MCC-based loss, which is similar to the observations of Zhang et al.~\cite{zhang2020kappa} and therefore warrants further investigation.
Other future directions include generalizing this loss function to $K$ classes using entries from a $K \times K$ confusion matrix and evaluating this loss function on other medical imaging modalities.
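One candidate for such a $K$-class generalization (an assumption on our part, not a result of this paper) is the multiclass MCC, i.e., the $R_K$ statistic computed from the $K \times K$ confusion matrix, as implemented for instance in scikit-learn's \texttt{matthews\_corrcoef}. A pure-Python sketch:

```python
import math

def multiclass_mcc(y_true, y_pred, num_classes):
    """Multiclass MCC (R_K statistic) from two label sequences."""
    s = len(y_true)                                    # total samples
    c = sum(t == p for t, p in zip(y_true, y_pred))    # correct predictions
    t = [y_true.count(k) for k in range(num_classes)]  # true counts per class
    p = [y_pred.count(k) for k in range(num_classes)]  # predicted counts
    num = c * s - sum(tk * pk for tk, pk in zip(t, p))
    den = math.sqrt((s * s - sum(pk * pk for pk in p))
                    * (s * s - sum(tk * tk for tk in t)))
    return num / den

labels = [0, 1, 2, 0, 1, 2]
print(multiclass_mcc(labels, labels, 3))  # perfect prediction -> 1.0
```

For $K=2$ this reduces to the binary MCC of Eqn.~(\ref{eqn:mcc_metric}), so a soft version of it would be a natural drop-in extension of $\mathcal{L}_{\mathrm{MCC}}$.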
\section{Compliance with Ethical Standards}
\label{sec:ethics}
This research study was conducted retrospectively using human subject data made available in open access by the International Skin Imaging Collaboration: Melanoma Project for the ISIC 2017 dataset~\cite{codella2018skin} and the ADDI (Automatic computer-based Diagnosis system for Dermoscopy Images) Project for the PH2 dataset~\cite{mendoncca2013ph}, and through an academic license from the University of Edinburgh for the DermoFit dataset~\cite{ballerini2013color}. Ethical approval was not required as confirmed by the respective licenses attached with the data.
\section{Acknowledgments}
\label{sec:acknowledgments}
Funding for this work was provided by the Natural Sciences and Engineering Research Council of Canada
(NSERC RGPIN-06752) and Canadian Institutes of Health Research (CIHR OQI-137993). The authors are grateful to the NVIDIA Corporation for donating Titan X GPUs used in this research.
\bibliographystyle{IEEEbib}
\section{Introduction}
The vacuum structure of quantum electrodynamics (QED) is modified by the presence of electromagnetic fields, leading to non-trivial polarization of the vacuum. In a seminal paper by Heisenberg and Euler \cite{Heisenberg:1935qt} as well as in several papers published in the same era \cite{sauter1931behavior, Euler:1935zz, Weisskopf:1996bu}, the authors addressed the vacuum polarization effects and showed their physical consequences such as pair production from the vacuum by an electric field, later called the Schwinger mechanism \cite{Schwinger:1951nm}. The vacuum polarization by electromagnetic fields also affects the propagation of probe particles, in particular photons. Recall that, in the absence of electromagnetic fields, an on-shell real photon travels at the speed of light in vacuum without modification of the refractive index or conversion to a di-lepton, even when the vacuum polarization effect is included. On the contrary, Toll showed that the vacuum polarization effects in the presence of electromagnetic fields give rise to a non-trivial refractive index that can deviate from unity in a polarization-dependent manner \cite{Toll:1952rq}. This phenomenon is called the {\it vacuum birefringence}, named after a similar optical property of materials. In turn, the unitarity, or the optical theorem, implies the existence of an imaginary part in the refractive index, meaning that a single real photon can be converted to a di-lepton in electromagnetic fields. According to the polarization dependence, this photon attenuation phenomenon may be called the {\it vacuum dichroism}. There are a number of studies on the complex-valued refractive index (see, e.g., Refs.~\cite{Adler:1971wn, Tsai:1974fa, Tsai:1975iz, Shabad:1975ik, Melrose:1976dr, Urrutia:1977xb, Heyl:1997hr, Baier:2007dw, Shabad:2010hx, Hattori:2012je, Hattori:2012ny}).
Experimental detection of the vacuum birefringence/dichroism,
induced by the vacuum polarization with electromagnetic fields, has been quite challenging.
Indeed, the vacuum birefringence/dichroism is strongly suppressed, appearing only at the fourth and higher orders in the QED coupling constant as represented by box diagrams.
Nevertheless, the QED coupling constant appears in a multiplicative form with the electromagnetic field strength, and thus this naive power counting is invalidated if the electromagnetic field strength compensates the suppression by the coupling constant. Namely, strong electromagnetic fields are required for successful detection of the vacuum birefringence/dichroism.
After more than a half century since publication of the classic papers, there has been remarkable experimental progress to produce strong electromagnetic fields and/or to detect their signatures. The intensity of high-power lasers has been increasing continuously and rapidly since the invention of the chirped pulse amplification \cite{DiPiazza:2011tq, Zhang:2020lxl, Ejlli:2020yhk}. Also, as proposed in classic papers \cite{Fermi, Weizsacker, Williams, Breit:1934zz}, highly accelerated nuclei can be used as a source of strong electromagnetic fields. Implementation of such an idea has been realized recently with the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). In particular, much attention has been paid to ultra-peripheral collision events \cite{Adams:2004rz, Aaboud:2017bwk, Sirunyan:2018fhl, Aad:2019ock, Adam:2019mby} where the two colliding nuclei are distant enough from each other so that QED processes dominate over quantum chromodynamics (QCD). Besides, strong magnetic fields are thought to be realized in astronomical systems such as neutron stars/magnetars \cite{Kouveliotou:2003tb, Harding:2006qn, Staubert:2018xgw, bignami2003magnetic} and the primordial Universe \cite{Grasso:2000wj, Giovannini:2003yn, Kandus:2010nw}. There are several future programs planned around the world to detect strong-magnetic-field effects in the astronomical systems through, e.g., observation of X-ray and gamma-ray photons.
Motivated by those developments, it is timely to enrich the theoretical foundation of the vacuum polarization effects on photons by tractable analytic methods. In particular, we in this paper focus on the di-lepton production by a single photon in the presence of a strong magnetic field. We emphasize that the differential cross section for the di-lepton production computed in this paper provides more information than the aforementioned imaginary part of the refractive index, which corresponds to the integrated cross section. Our results will open a new avenue to study the energy and momentum distributions of di-leptons as signatures of the vacuum dichroism and implicitly of the vacuum birefringence, since the vacuum birefringence and dichroism are two sides of the same coin. Besides, the differential di-lepton measurement is more feasible than a photon-polarization measurement in the gamma regime. We focus on effects of a magnetic field and do not consider those of an electric field. In general, a magnetic field is more stable and can have larger spacetime extension than an electric field, since a (constant) magnetic field does not exert work on charged particles and is not Debye screened by charge distributions. In addition, in actual physics systems such as ultra-peripheral heavy-ion collisions and the magnetosphere of neutron stars/magnetars, it is plausible to assume that the strong electromagnetic fields are dominated by a magnetic-field component.
The crucial point of the calculation in our problem is to include the interaction between fermions and a strong magnetic field to all orders in the QED coupling constant. As mentioned above, this treatment is necessary when the magnetic-field strength is large enough to break down the naive perturbation theory. Indeed, the di-lepton production from a single real photon is not allowed in the naive perturbation theory, since the energy-momentum conservation and the on-shell conditions for a photon and fermions are not compatible with each other. One could obtain a finite di-lepton production rate from a single on-shell photon only after including a non-perturbative modification of the fermion dispersion relation by a strong magnetic field. We accomplish such a computation for on-shell as well as off-shell photons by the use of the Ritus-basis formalism, which is constructed with the exact wave function of a fermion in a magnetic field. Accordingly, the di-lepton spectrum is naturally specified by the Landau levels and a still continuous momentum along the magnetic field. This may be regarded as an extension of previous works~\cite{Hattori:2012je, Hattori:2012ny} in which one of the present authors showed the Landau-level structure appearing in the complex refractive index. Within a constant configuration of a magnetic field, we provide the most general analytic form of the photon--to--di-lepton conversion rate, or the lepton tensor in a magnetic field, with all-order Landau levels (see also Refs.~\cite{Sadooghi:2016jyf, Ghosh:2018xhh, Wang:2020dsr, Ghosh:2020xwp} for the Landau levels appearing in photon/di-lepton production from finite-temperature plasma).
This paper is organized as follows. In Sec.~\ref{sec-2}, we first briefly review the Ritus-basis formalism in a self-contained way. Based on this formalism, we show an analytic form of the lepton tensor in a magnetic field and its square in Sec.~\ref{sec-3} and inspect basic properties of the di-lepton production rate in Sec.~\ref{sec--4} with some numerical plots. Section~\ref{summary} is devoted to summary and outlook. In appendices, we provide the wave function of a charged particle in a magnetic field as an ingredient of the Ritus-basis formalism in Appendix~\ref{app-a} and rigorous consistency checks of the computation such as the gauge invariance in Appendix~\ref{app-b}.
Throughout the paper, we take the direction of a constant external magnetic field in the $z$-direction. Accordingly, we decompose the metric $g^{\mu\nu} \equiv {\rm diag}(+1,-1,-1,-1)$ and four-vectors $v^\mu$ into the longitudinal and transverse parts, respectively, as $ g^{\mu\nu}_\parallel \equiv {\rm diag}(1,0,0,-1) $ and $ g^{\mu\nu}_\perp \equiv {\rm diag}(0,-1,-1,0) $, and $v^{\mu}_\parallel \equiv g^{\mu\nu}_\parallel v_\nu $ and $ v^{\mu}_\perp \equiv g^{\mu\nu}_\perp v_\nu $.
\section{Preliminaries: Ritus-basis formalism for a Dirac fermion} \label{sec-2}
To provide a self-contained construction, we first review the Ritus-basis formalism~\cite{Ritus:1972ky, Ritus:1978cj}\footnote{This part is based on a forthcoming review article \cite{HIO}.}. In the case of a constant external magnetic field, the energy spectrum of charged fermions is subject to the Landau quantization and the Zeeman shift. The resultant energy level has two-fold spin degeneracy except for the unique ground state. The Ritus basis is, then, introduced as a superposition of (projection operators for) the two degenerate spin states. An advantage of the Ritus basis is that it maps the Dirac equation in an external field into a free Dirac equation, which is easier to handle.
We start with the Dirac equation in an external magnetic field $A_{\rm ext}^\mu$,
\begin{align}
\left( i \slashed D _{\rm ext} - m \right) \psi = 0 \, , \label{eq:Dirac}
\end{align}
where the covariant derivative $D^\mu_{\rm ext}$ is given by
\begin{align}
D^\mu_{\rm ext} \equiv \partial ^\mu + i q A_{\rm ext}^\mu \, . \label{eq:covariantD-QED}
\end{align}
The electric charge $q$ takes a positive (negative) value for positively (negatively) charged fermions. Since we only have a constant magnetic field, the longitudinal components of the gauge potential $[A_{\rm ext}]_\parallel$ must be constant in spacetime. Without loss of generality, one may set
\begin{align}
0 = A_{\rm ext}^0 = A_{\rm ext}^3\, .
\end{align}
Whereas there is still a residual gauge freedom in the transverse components $[A_{\rm ext}]_\perp$, we first discuss gauge-independent properties until we fix the residual gauge in Eq.~(\ref{eq:Landau-gauge}).
To proceed, we discuss the energy level of a charged fermion in the presence of a constant magnetic field. To this end, it is convenient to rewrite the Dirac equation (\ref{eq:Dirac}) into a Klein-Gordon type equation, by using an identity $\gamma^\mu \gamma^\nu = \frac{1}{2} [\gamma^\mu, \gamma^\nu] + \frac{1}{2} \{ \gamma^\mu, \gamma^\nu \}$, as
\begin{align}
\left( D_{\rm ext}^2 + m^2 + \frac{q}{2} F_{\rm ext}^{\mu\nu} \sigma_{\mu\nu}\right) \psi = 0\, , \label{eq:K-GandZeeman}
\end{align}
where $\sigma_{\mu\nu} \equiv \frac{i}{2}[\gamma_\mu, \gamma_\nu]$ is a spin operator and $F_{\rm ext}^{\mu\nu} \equiv [ D^{\mu}_{\rm ext}, D^{\nu}_{\rm ext} ]/iq = \partial^\mu A_{\rm ext}^{\nu}-\partial^\nu A_{\rm ext}^{\mu}$ is the field strength. For a constant magnetic field pointing in the $z$-direction, only $(1,2)$- and $(2,1)$-components of the field strength $F_{\rm ext}^{\mu\nu}$ are nonvanishing, i.e., we have a nonvanishing commutation relation only for the transverse components of the covariant derivative
\begin{align}
[ i D_{\rm ext}^1 , iD_{\rm ext}^2] = - i q F_{\rm ext}^{12} \equiv iqB\, ,
\end{align}
with $B$ denoting the nonvanishing $ 3 $-component of the magnetic field. Those transverse components of the covariant derivative may be regarded as a pair of canonical variables in nonrelativistic quantum mechanics. Motivated by this analogy, we introduce ``creation and annihilation operators,'' denoted by $\hat{a}$ and $\hat{a}^\dagger$, respectively, as
\begin{align}\label{eq:aadagger}
\hat a \equiv \frac{1}{ \sqrt{2 |qB|} } ( i D_{\rm ext}^1 - {\rm sgn} (qB) D_{\rm ext}^2 )\ ,\quad
\hat a^\dagger \equiv \frac{1}{ \sqrt{2 |qB|} } ( i D_{\rm ext}^1 + {\rm sgn} (qB) D_{\rm ext}^2 )\, ,
\end{align}
which satisfy the following ``canonical commutation relation'':
\begin{align}
1 = [\hat{a}, \hat{a}^\dagger]\ ,\quad 0 = [\hat{a}, \hat{a}] = [\hat{a}^\dagger, \hat{a}^\dagger]\, . \label{eq:com}
\end{align}
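As a quick numerical sanity check of the algebra (ours, not part of the derivation), the relations (\ref{eq:com}) can be realized on a truncated Fock basis, where the number operator $\hat a^\dagger \hat a$ indeed has the eigenvalues $0, 1, 2, \ldots$; the commutator deviates from unity only in the last, truncation-affected entry:

```python
import math

def ladder_ops(dim):
    """Truncated matrix representation with a|n> = sqrt(n)|n-1>."""
    a = [[math.sqrt(col) if col == row + 1 else 0.0 for col in range(dim)]
         for row in range(dim)]
    adag = [[a[c][r] for c in range(dim)] for r in range(dim)]  # transpose
    return a, adag

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

a, adag = ladder_ops(6)
number_op = matmul(adag, a)
print([round(number_op[n][n], 10) for n in range(6)])  # -> [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

On this truncated space, $[\hat a, \hat a^\dagger]$ equals the identity on all but the highest retained level, as expected for any finite-dimensional truncation.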
Using $\hat{a}$ and $\hat{a}^\dagger$, one may reexpress the Klein-Gordon operator as
\begin{align}
D_{\rm ext}^2 + m^2 = \partial_t^2 - \partial_z^2 + \left(2 \hat a^\dagger \hat a +1\right) \vert qB \vert + m^2 \, .
\end{align}
This expression gives the relativistic energy spectrum of the Landau level for a charged scalar particle. For a charged fermion, which obeys Eq.~(\ref{eq:K-GandZeeman}), we have another term $\frac{q}{2} F_{\rm ext}^{\mu\nu} \sigma_{\mu\nu}$ in addition to the Klein-Gordon operator. This term is responsible for the Zeeman effect. To confirm this point, we introduce spin projection operators along the magnetic field\footnote{These operators have useful properties: $ \prj_\pm^\dagger = \prj_\pm$, $ \prj_+ + \prj_- = 1 $, $ \prj_\pm \prj_\pm = \prj_\pm $, and $\prj_\pm \prj_\mp =0 $. Also, $\prj_\pm \gamma^\mu \prj_\pm = \gamma_\parallel^\mu \prj_\pm $ and $\prj_\pm \gamma^\mu \prj_\mp = \gamma_\perp^\mu \prj_\mp $. We will use those properties below. }
\begin{align}
\prj _\pm
\equiv \frac{1}{2} \left(1 \pm \sigma^{12} {\rm sgn} (qB) \right)
= \frac{1}{2} \left(1 \pm i \gamma^1 \gamma^2 {\rm sgn} (qB) \right)\, .
\end{align}
Using $ \frac{q}{2} F_{\rm ext}^{\mu\nu} \sigma_{\mu\nu} = \vert qB \vert ( - \prj_+ + \prj_-) $, one can reexpress the Klein-Gordon equation (\ref{eq:K-GandZeeman}) as
\begin{align}
\left[\, \partial_t^2 - \partial_z^2 + \left(2 \hat a^\dagger \hat a + 1 - 2s \right) \vert qB \vert + m^2 \, \right]
\psi_{ s} = 0 \, ,\label{eq:rel_EoM}
\end{align}
with $s=\pm 1/2$ being the eigenvalue of the spin operator along the magnetic field, i.e.,
$2s \psi _{ s} = {\rm sgn} (qB) \sigma^{12} \psi_{ s}$ (the factor $2$ accounts for the Land\'{e} $g$-factor). Namely, $s=+1/2$ and $-1/2$ ($s=-1/2$ and $+1/2$) if the spin direction is parallel and anti-parallel, respectively, with respect to the magnetic-field direction for positively (negatively) charged fermions. Since the operators $\hat{a}$ and $\hat{a}^\dagger$ satisfy the commutation relation (\ref{eq:com}), one understands that the energy level is given by
\begin{align}
\epsilon_n \equiv \sqrt{ m^2 + 2 n \vert qB \vert + p_z^2 } \, .\label{eq:fermion-rela}
\end{align}
The non-negative integer $n=0,1,2,\cdots \in {\mathbb N}$ is the resultant quantum number after the sum of the Landau level and the Zeeman shift. Therefore, the energy level has two-fold spin degeneracy ($s=\pm 1/2$) except for the unique ground state ($s=+1/2$), which is often called the lowest Landau level (after the Zeeman shift is included). $p_z$ is the longitudinal momentum, which is conserved because the longitudinal motion is not affected by a magnetic field. Accordingly, one can factorize the eigenfunction $\psi_{s, n }$ (such that $i\partial_t\psi_{s, n } = \pm \epsilon_n \psi_{s, n}$) as
\begin{align}
\psi_{s, n} \propto e^{-ip_\parallel \cdot x} \phi_{n} (x_\perp)\, , \label{eq:wave-functions}
\end{align}
where the transverse wave function $\phi_{n}$ depends on the transverse coordinates $x_\perp$ but not on the longitudinal coordinate $ x_\parallel $. One can construct $\phi_{n}$ as eigenstates of the ``number operator'' as $\phi_n=\langle x | n\rangle$ with $\hat a^\dagger \hat a |n\rangle =n |n\rangle$ and $ \hat a |0\rangle=0 $. For the moment, we do not need an explicit form of $\phi_{n}$, which can be obtained only after fixing the gauge, and just discuss gauge-invariant properties of $\phi_{n}$. Precisely speaking, there exists another good quantum number $\qn$ [e.g., $\qn=p_y$ in the Landau gauge (\ref{eq:Landau-gauge}) which we adopt later]. The existence of this additional good quantum number is anticipated, since there is a constant of motion in the classical picture of the cyclotron motion, that is, the center coordinate of the cyclotron motion. In quantum theory, $ \qn$ corresponds to a label of the center coordinate, and each Landau level is degenerate with respect to $ \qn $ since a shift of the center coordinate should not affect the energy level in a constant magnetic field. For notational simplicity, we label the eigenfunction $\psi_{s,n}$ only with $n$ and suppress the additional label $\qn$ unless needed. While $\qn$ is a gauge-dependent quantity, the Landau level constructed with the ``creation/annihilation operator'' is a gauge-invariant concept as is clear from the construction of the operators (\ref{eq:aadagger}).
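As a numerical illustration of the spectrum (\ref{eq:fermion-rela}) (the unit choices $m = 1$ and $|qB| = m^2$, i.e., a critical-strength field, are ours and serve only to show the level spacing):

```python
import math

def landau_energy(n, pz=0.0, m=1.0, qB=1.0):
    """Fermion Landau-level energy eps_n = sqrt(m^2 + 2n|qB| + pz^2)."""
    return math.sqrt(m * m + 2.0 * n * abs(qB) + pz * pz)

# At pz = 0 the squared energies are evenly spaced: m^2, m^2+2|qB|, ...
levels = [landau_energy(n) for n in range(4)]
# The lowest Landau level (n = 0) sits exactly at eps_0 = m for pz = 0.
```

The monotonic growth of $\epsilon_n$ with $n$ and with $p_z$ is the all-order magnetic-field effect that the naive perturbation theory misses.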
Next, we examine the Dirac spinor structure of $\psi_{s,n}$ and introduce the Ritus basis. In Eq.~(\ref{eq:rel_EoM}), we have seen that the two spin eigenstates such that $2s\psi_{s,n} = {\rm sgn} (qB)\sigma^{12}\psi_{s,n}$ ($s=\pm1/2$) provide appropriate bases for solutions of the Dirac equation. It is, therefore, convenient to introduce a basis for a superposition of the two degenerate spin eigenstates,
\begin{align}
\Ritus_{n} (x_\perp) \equiv \phi_{n} (x_\perp) \prj_+ + \phi_{n-1} (x_\perp) \prj_- \, , \label{eq:Ritus}
\end{align}
where $ \phi_{-1} \equiv 0 $ is understood. This is the so-called {\it Ritus basis}, which was introduced first for computation of the fermion self-energy in external fields \cite{Ritus:1972ky, Ritus:1978cj}. Using the Ritus basis, one may write
\begin{align}
\psi_{s,n} (x) = \left\{ \begin{array}{ll} \displaystyle e^{ - i p_\parallel \cdot x} \Ritus_{n} (x_\perp) \, u \vspace*{2mm} &\ {\rm for\ a\ positive\mathchar`-energy\ solution} \\ e^{ + i p_\parallel \cdot x} \Ritus_{n} (x_\perp) \, v \vspace*{2mm} &\ {\rm for\ a\ negative\mathchar`-energy\ solution} \end{array} \right. \, , \label{eq:Dirac-Ritus}
\end{align}
where $u$ and $v$ are four-component spinors. By noticing
\begin{align}
\left[ i \slashed D _{\rm ext} -m \right] \psi_{s,n}
&= \left[ ( i \slashed \partial _\parallel -m) - \sqrt{2|qB|} \ \gamma^1\, \left(\hat a \prj_+ + \hat a^\dagger \prj_- \right) \right] \psi_{s,n} \nonumber\\
&= \left\{ \begin{array}{l} \displaystyle e^{ - i p_\parallel \cdot x} \Ritus_{n} (x_\perp) \left( \slashed p _\parallel - \sqrt{2n |qB| }\ \gamma^1-m \right) u \vspace*{2mm}\\ \displaystyle e^{ + i p_\parallel \cdot x} \Ritus_{n} (x_\perp) \left( -\slashed p _\parallel - \sqrt{2n |qB| }\ \gamma^1-m \right) v \end{array} \right. \, , \label{Dirac_Ritus}
\end{align}
one finds that the ansatz (\ref{eq:Dirac-Ritus}) satisfies the Dirac equation (\ref{eq:Dirac}) if the spinors $u$ and $v$ satisfy the ``free" Dirac equations
\begin{subequations} \label{eq:free}
\begin{align}
0 &= ( {\slashed p}_n -m ) u(p_n)\ , \label{eq:free_u} \\
0 &= ( \bar{\slashed p}_n +m ) v(\bar{p}_n) \, , \label{eq:free_v}
\end{align}
\end{subequations}
where
\begin{align}
p^\mu_n \equiv ( \epsilon_n, \sqrt{2n |qB| },0,p_z) \, ,\quad
\bar p^\mu_{ n} \equiv ( \epsilon_n , - \sqrt{2n |qB| },0,p_z).
\end{align}
Each ``free'' Dirac equation has two solutions $u=u_\k, v=v_\k$,
corresponding to two spin degrees of freedom that we label with $\k = 1,2$.
We normalize the solutions $u_\k$ and $v_\k$ in such a way that they satisfy the following completeness relation
\begin{align}\label{eq:spin-sum}
\sum_{\k=1,2} u_\k (p_n) \bar u_\k (p_n) = ( \slashed p_n + m)
\, , \quad
\sum_{\k=1,2} v_\k (\bar p_n) \bar{v}_\k (\bar p_n) = ( \bar{\slashed p}_n - m)
\, .
\end{align}
The choice of $\k$ is arbitrary, and one could choose $\k$ different from
the spin label $s=\pm 1/2$ defined with respect to the magnetic field direction [see Eq.~(\ref{eq:rel_EoM})].
In general, $\k$ is a superposition of $s=\pm1/2$.
This is understood by explicitly writing down the normalized solutions
(although we do not use the explicit forms in our study).
In the chiral representation, they read \cite{Peskin:1995ev}
\begin{eqnarray}
\label{eq:freesol}
u_\k (p_n ) = ( \sqrt{ p_n \cdot \sigma} \, \xi_\k, \sqrt{ p_n \cdot \bar \sigma} \, \xi_\k)
\, , \quad
v_\k (\bar p_n ) = ( \sqrt{ \bar p_n \cdot \sigma} \, \eta_\k, - \sqrt{ \bar p_n \cdot \bar \sigma} \, \eta_\k)
\, ,
\end{eqnarray}
where $ \sigma^\mu = (1, \sigma^i) $ and $ \bar \sigma^\mu = (1, -\sigma^i) $.
The free Dirac equation (\ref {eq:free}) is satisfied with any two-component spinors $\xi_\k$ and $\eta_\k$,
resulting in the aforementioned arbitrariness in the choice of $ \k$.
One of the most convenient choices is to take the eigenvectors of $\sigma^3$ as $ \xi_1 = \eta_1 = (1,0) $ and $ \xi_2 = \eta_2 = (0,1) $. Note that, in this case, $u_\k$ and $v_\k$ are in general not eigenstates of $\sigma^{12} = {\rm diag}(\sigma^3, \sigma^3)$ because of the nonvanishing $p_n^1, \, \bar p_n^1$ in the higher Landau levels $ n\geq 1 $.
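As an aside for readers who wish to verify the spinor algebra, the completeness relation (\ref{eq:spin-sum}) and the ``free'' Dirac equation can be checked numerically in the chiral representation. The following sketch is not part of the derivation; all parameter values are arbitrary test inputs:

```python
import numpy as np
from scipy.linalg import sqrtm

# Chiral-representation gamma matrices from sigma^mu = (1, sigma^i), bar-sigma^mu = (1, -sigma^i)
s0, sx = np.eye(2), np.array([[0., 1.], [1., 0.]])
sy, sz = np.array([[0., -1j], [1j, 0.]]), np.array([[1., 0.], [0., -1.]])
sigma, sigbar, Z = [s0, sx, sy, sz], [s0, -sx, -sy, -sz], np.zeros((2, 2))
gamma = [np.block([[Z, sigma[mu]], [sigbar[mu], Z]]) for mu in range(4)]

m, qB, n, pz = 1.0, 2.0, 3, 0.7                # arbitrary test inputs
px = np.sqrt(2 * n * abs(qB))                  # transverse entry of p_n^mu
eps = np.sqrt(m**2 + 2 * n * abs(qB) + pz**2)  # epsilon_n
p_sig = eps * s0 - px * sx - pz * sz           # p . sigma
p_sigbar = eps * s0 + px * sx + pz * sz        # p . bar-sigma

# u_kappa = (sqrt(p.sigma) xi, sqrt(p.bar-sigma) xi), xi the sigma^3 eigenvectors
u = [np.concatenate([sqrtm(p_sig) @ xi, sqrtm(p_sigbar) @ xi])
     for xi in (np.array([1., 0.]), np.array([0., 1.]))]

pslash = eps * gamma[0] - px * gamma[1] - pz * gamma[3]
spin_sum = sum(np.outer(uk, uk.conj() @ gamma[0]) for uk in u)  # sum_k u_k ubar_k

print(np.allclose(spin_sum, pslash + m * np.eye(4)))      # completeness relation
print(all(np.allclose(pslash @ uk, m * uk) for uk in u))  # free Dirac equation
```

The same check goes through for $v_\k$ with $\bar p_n$ and the sign of the mass term flipped.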
We perform canonical quantization of the Dirac field operator $\psi$ to define creation and annihilation operators. We expand $\psi$ in terms of the solution of the Dirac equation (\ref{eq:Dirac-Ritus}) as
\begin{subequations}
\label{eq:Ritus-mode}
\begin{align}
\psi(x) &= \sum_{\k=1,2} \sum_{n=0}^{\infty} \int \frac{d p_z dp_y}{(2\pi)^2 \sqrt{2 \epsilon_n}}
\Ritus_{n,p_y} (x_\perp) \left[ \,
a_{p_n,p_y,\k} e^{ - i p_\parallel \cdot x} u_\k (p_n)
+ b^{\dagger}_{\bar{p}_n,p_y,\k} e^{ i p_\parallel \cdot x} v_\k(\bar p_n) \,\right]
,
\\
\bar{\psi}(x) &=\sum_{\k=1,2} \sum_{n=0}^{\infty} \int \frac{d p_z dp_y}{(2\pi)^2 \sqrt{2 \epsilon_n}}
\left[ \,
b_{\bar{p}_n,p_y,\k} e^{-i p_\parallel \cdot x} \bar v_\k (\bar{p}_n)
+ a^{\dagger}_{p_n,p_y,\k} e^{ i p_\parallel \cdot x} \bar u_\k (p_n) \,\right]
\Ritus_{n,p_y}^\dagger(x_\perp)
.
\end{align}
\end{subequations}
Here and hereafter, we assume the Landau gauge,
\begin{align}
A^\mu_{\rm ext} (x) = ( 0, 0, Bx, 0)\, . \label{eq:Landau-gauge}
\end{align}
In this gauge, it is clear that one of the canonical transverse momenta, $p_y$, is conserved and specifies the Landau degeneracy. $p_y$ is a gauge-dependent quantity and hence not an observable, while the energy $\epsilon_n$ (or the Landau level $n$) and the longitudinal momentum $p_z$ are gauge-independent and observable quantities\footnote{While one could define a (gauge-invariant) {\it kinetic} transverse momentum, its expectation value in the Landau-level eigenstates vanishes due to the closed cyclotron orbit. }. We chose the Landau gauge just for simplicity; the mode expansion (\ref{eq:Ritus-mode}) as well as the calculations below can be carried out in a similar way in other gauges. Importantly, we explicitly prove that our final physical results are gauge-independent; see Appendix~\ref{app-b2}. Next, we impose the canonical anticommutation relations on the Dirac field operator $\psi$ (see Appendix~\ref{sec:commutation} for details) and normalize the transverse wave function $\phi_{n,p_y}$, which fixes the normalization of the Ritus basis $\Ritus_{n,p_y}$, as
\begin{align}
\int d^2x_\perp \phi^*_{n,p_y}(x_\perp) \phi_{n',p_y'}(x_\perp) = 2\pi \delta(p_y-p'_y) \delta_{n,n'} \, .
\end{align}
Then, $a_{p_n,p_y,\k}$ and $b_{p_n,p_y,\k}$ are quantized as
\begin{align}
\{ a_{p_n,p_y,\k}, a^{\dagger}_{p_{n'},p_y',\k'} \}
= \{ b_{\bar{p}_n,p_y,\k}, b^{\dagger}_{\bar{p}'_{n'},p_y',\k'} \}
= (2\pi)^2 \delta(p_y-p'_y) \delta( p_z - p'_z) \delta_{n,n'} \delta_{\k,\k'} \, , \label{eq:commutation}
\end{align}
while all other anticommutators vanish. The operators $a_{p_n,p_y,\k}$ and $b_{p_n,p_y,\k}$ can now be interpreted as annihilation operators of a fermion and an anti-fermion state, respectively, specified by the energy $\epsilon_n$, the longitudinal momentum $p_z$, and the spin label $\k$. Note again that $\k$ labels the spin basis for the free Dirac spinors $u_\k$ and $v_\k$, which can be chosen arbitrarily [see the discussion below Eq.~(\ref{eq:spin-sum})] and is projected onto the physical spin states $s=\pm1/2$ by $\prj_\pm$ in the mode expansion (\ref{eq:Ritus-mode}). After the quantization, one can identify the vacuum state $| 0 \rangle$ as
\begin{align}
0 = a_{p_n,p_y,\k} |0\rangle = b_{\bar{p}_n,p_y,\k} |0\rangle \quad {\rm for\ any\ } p_n,\bar{p}_n, p_y, \k \, ,
\end{align}
and construct multi-particle states as
\begin{align}\label{eq:multi}
&| f_{p_{n,1}, p_{y,1} \k_1} f_{p_{n,2}, p_{y,2} \k_2}
\cdots \bar{f}_{\bar{p}'_{n',1}, p'_{y,1} \k'_1} \bar{f}_{\bar{p}'_{n',2}, p'_{y,2} \k'_2} \cdots \;\rangle \\
&\equiv (\sqrt{2 \epsilon_{n,1}} a^{\dagger}_{p_{n,1}, p_{y,1} \k_1})(\sqrt{2 \epsilon_{n,2}} a^{\dagger}_{p_{n,2}, p_{y,2} \k_2}) \cdots
(\sqrt{2 \epsilon'_{n',1}} b^{\dagger}_{\bar{p}'_{n',1}, p'_{y,1} \k'_1})(\sqrt{2 \epsilon'_{n',2}} b^{\dagger}_{\bar{p}'_{n',2}, p'_{y,2} \k'_2}) | 0 \rangle
\nonumber\, ,
\end{align}
where $ f $ $(\bar f) $ is a fermion (anti-fermion) state carrying the quantum numbers $p_n,p_y,\k$ ($\bar{p}'_{n'},p'_y,\k'$). We normalized the multi-particle states as
\begin{align}
&\| \, | f_{p_{n,1}, p_{y,1} \k_1} f_{p_{n,2}, p_{y,2} \k_2} \cdots \bar{f}_{\bar{p}'_{n',1}, p'_{y,1} \k'_1} \bar{f}_{\bar{p}'_{n',2}, p'_{y,2} \k'_2} \cdots \;\rangle\, \|^2 \nonumber\\
&= \left[ \prod_{f \; \rm states} (2\epsilon_n)(2\pi)^2 \delta^{(2)}(0)\right] \left[ \prod_{ \bar f \; \rm states} (2\epsilon^{\prime}_{n'})(2\pi)^2 \delta^{(2)}(0)\right] \, ,
\end{align}
where the products on the right-hand side are taken over the multi-fermion and anti-fermion states
and we use an abbreviation $\epsilon'_{n} \equiv \epsilon_{n} (p_z')= \sqrt{ m^2 + 2n|qB| + (p'_z)^2} $.
\section{Lepton tensor with all-order Landau levels} \label{sec-3}
\begin{figure}
\begin{center}
\includegraphics[width=0.55\hsize]{fig1}
\end{center}
\vspace*{-10mm}
\caption{The photon--to--di-lepton vertex in a magnetic field (\ref{eq:matrixelement}). The wave functions of the produced fermion and anti-fermion are non-perturbatively dressed by the magnetic field.
}
\label{fig:diagram}
\end{figure}
We analytically evaluate the photon--to--di-lepton vertex in a strong constant magnetic field. Since the magnetic field is assumed to be strong, we treat the interaction with the magnetic field non-perturbatively, which can be conveniently achieved by the Ritus-basis formalism reviewed in the preceding section. On the other hand, we treat the interaction with a dynamical photon $A$ perturbatively. At the leading order in $A$, one can write down the amplitude explicitly as (see Fig.~\ref{fig:diagram})
\begin{align}
\varepsilon_\mu q {\mathcal M}^\mu
&\equiv \langle f_{p_n,p_y,\k}, \bar{f}_{\bar p_{n^\prime}^\prime,p_y',\k'} | \, q \int d^4 x \, \bar \psi (x) \slashed A(x) \psi(x) \, | \gamma_k \rangle \nonumber \\
&= \varepsilon_\mu q \int d^4 x \, e^{ - ik\cdot x + i (p_\parallel + p'_\parallel)\cdot x } \bar u_\k(p_n)
\Ritus_{n,p_y}^\dagger(x_\perp) \gamma^\mu \Ritus_{n',p_y'} (x_\perp) v_{\k'}(\bar p'_{n'}) \, , \label{eq:matrixelement}
\end{align}
where the one-photon state $| \gamma_k \rangle$ is normalized as
\begin{align}
\| \, |\gamma_k \rangle \, \|^2 = 2k^0 (2\pi)^3 \delta^{(3)}(0) \label{eq:gammanorm}
\end{align}
and $ \varepsilon_\mu $ is a polarization vector of the initial dynamical photon (which is not necessarily on-shell).
The fermion field is expanded with the Ritus basis (\ref{eq:Ritus-mode})
and hence is dressed by the magnetic field non-perturbatively.
On the other hand, the mode expansion for the dynamical photon field $A$ is the usual Fourier decomposition because $A$ is charge neutral and still in a momentum eigenstate. As a consequence, the three-point vertex between the fermion current and the dynamical photon field is given by a convolution of two Ritus bases and a plane wave. Inserting the explicit form of the Ritus basis (\ref{eq:Ritus}) into the amplitude (\ref{eq:matrixelement}), we find
\begin{align}
\label{eq:Ritus-form}
{\mathcal M}^\mu
&= (2\pi)^2 \delta^{(2)}( k_\parallel - p_\parallel - p_\parallel') \\
&\quad \times \bar u_\k(p_n) \big[ \gamma_\parallel^\mu \left( \prj_+ \Gamma_{n,n'} + \prj_- \Gamma_{n-1,n'-1} \right) + \gamma_\perp^\mu \left( \prj_+ \Gamma_{n-1,n'} + \prj_- \Gamma_{n,n'-1} \right) \big] v_{\k'}(\bar p_{n'}^\prime) \, , \nonumber
\end{align}
where the scalar form factor $\Gamma_{n, n'}$ is defined
as an overlap between the two fermion wave functions in the transverse plane:
\begin{align}
\Gamma_{n, n'}(p,p';k) \equiv \int \!\! d^2 x_\perp \, e^{ i (k_x x + k_y y)} \phi^\ast _{n,p_y} (x_\perp) \phi _{n',p_y'} (x_\perp) \label{eq:Gam0}\, .
\end{align}
We will discuss properties of $\Gamma_{n, n'}$ in detail in Sec.~\ref{sec:3.3}. In Eq.~(\ref{eq:Ritus-form}), the first term coupled to $\gamma_\parallel$ gives the amplitude for spin-zero lepton pair production. This observation is based on the facts that the same spin projection operators act on the spinors $u$ and $v$ (cf. $ \gamma_\parallel^\mu \prj_\pm =\prj_\pm \gamma_\parallel^\mu \prj_\pm $) and that the anti-fermion spinor $ \prj_\pm v $ corresponds to the spin direction opposite to that of the fermion spinor $ \prj_\pm u $ \cite{Peskin:1995ev}. In the same manner, one understands that the second term coupled to $\gamma_\perp$ is responsible for spin-one lepton pair production according to the property $ \gamma_\perp^\mu \prj_\pm =\prj_\mp \gamma_\perp^\mu \prj_\pm $.
\subsection{Scalar form factor $\Gam_{n,n'}$ in the Landau gauge}
We explicitly evaluate the scalar form factor $\Gamma_{n,n'}$ (\ref{eq:Gam0}) in the Landau gauge (\ref{eq:Landau-gauge}).
While we choose a particular gauge, we keep track of the gauge dependence carefully
and confirm that the gauge invariance is finally restored in the squared amplitude (see the next subsection).
Moreover, we show that our amplitude (\ref{eq:FF-Landau}), and thus the squared amplitude,
satisfies the Ward identity $0=k_\mu {\mathcal M}^\mu$ for each pair of Landau levels,
which is the manifestation of the gauge invariance for the dynamical photon field; see Appendix~\ref{app-b1}.
In the Landau gauge (\ref{eq:Landau-gauge}), one can explicitly write down the wave function $\phi_n$ obtained as eigenstates of the number operator [see the discussion below Eq.~(\ref{eq:wave-functions}) and Appendix~\ref{sec:wf_Landau} for the derivation]:
\begin{align}
\phi_{n,p_y} (x_\perp) = e^{i p_y y }\, i^n \sqrt{ \frac{1}{ 2^n n! \pi^{\frac{1}{2}} \ell} } e^{- \frac{\xi^2}{2} } H_n (\xi) \, , \label{eq:WF_Landau}
\end{align}
where $H_n$ is the Hermite polynomial $H_n(z) \equiv ( -1)^n e^{ +z^2 } \partial_z^n e^{ -z^2 }$
and $\xi \equiv (x-x_{\rm c})/\ell$ with $x_{\rm c}\equiv p_y/qB$ and $\ell \equiv 1/\sqrt{|qB|}$
representing the center and the typical radius of the cyclotron motion (called the magnetic length), respectively.
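As a numerical sanity check (not part of the derivation), the orthonormality of the $x$-dependent part of the wave function (\ref{eq:WF_Landau}) at fixed $p_y$ can be confirmed by Gauss--Hermite quadrature; the value of $qB$ below is an arbitrary test input:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermgauss
from scipy.special import eval_hermite

qB = 1.5                   # arbitrary test value of the field strength
ell = 1.0 / sqrt(abs(qB))  # magnetic length l = 1/sqrt(|qB|)

# Gauss-Hermite quadrature: sum_i w_i f(xi_i) = int e^{-xi^2} f(xi) dxi (exact for polynomials)
xi, w = hermgauss(60)

def overlap(n, m):
    """int dx phi*_{n,p_y} phi_{m,p_y} at fixed p_y; the Gaussians of the two wave
    functions are absorbed into the quadrature weight e^{-xi^2}."""
    cn = np.conj(1j**n) * sqrt(1.0 / (2.0**n * factorial(n) * sqrt(pi) * ell))
    cm = 1j**m * sqrt(1.0 / (2.0**m * factorial(m) * sqrt(pi) * ell))
    return ell * cn * cm * np.sum(w * eval_hermite(n, xi) * eval_hermite(m, xi))

gram = np.array([[overlap(n, m) for m in range(6)] for n in range(6)])
print(np.allclose(gram, np.eye(6)))  # orthonormal up to the 2*pi*delta(p_y-p_y') factor
```

The $y$-integration is omitted since, at equal $p_y$, the plane-wave factors cancel and only produce the $2\pi\delta(p_y-p_y')$ normalization.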
By substituting this expression (\ref{eq:WF_Landau}) into Eq.~(\ref{eq:Gam0}), one obtains\footnote{Note that one may extract the positive-power dependence on $ |\bar {\bm k}_\perp| $ by using $L_m^{-\ell} (\rho) = \frac{(m-\ell)!}{m!} (-\rho)^\ell L_{m-\ell}^{\ell} (\rho)$ as
\begin{align}
{\Gam}_{n, n'}
&= e^{ i \frac{k_x (p_y + p^\prime_y)}{2 qB} } \, e^{- \frac{1}{2} |\bar {\bm k}_\perp|^2} (-1)^{ n' - \min( n,n') } \sqrt{ n! / n^\prime!} ^{ \, {\rm sgn} (\Delta n)} e^{ - i \; {\rm sgn} (qB) \Delta n \, \theta_\perp} |\bar {\bm k}_\perp|^{|\Delta n|} L_{\min( n,n')}^{|\Delta n|} ( | \bar {\bm k}_\perp| ^2 )\, . \label{eq:FF-Landau-2}
\end{align}
}
\footnote{The square of the scalar form factor $|{{\Gam}}_{n,n'}|^2$ has precisely the same form as $C_\ell^m(\eta) $ defined in Eq.~(38) of Ref.~\cite{Hattori:2012je} if one identifies the variables as $ \eta = |\bar {\bm k}_\perp|^2 $, $ m = - \Delta n $ and $ \ell=n' $; namely, $| {\Gam}_{n , n'} |^2 = C_n'^{- \Delta n} (\eta) $. }
\begin{align}
&\Gamma_{n ,n^\prime}(p,p';k) \nonumber\\
&= 2 \pi \delta(k_y - p_y + p'_y) \times e^{ i \frac{k_x(p_y + p^\prime _y)}{2 qB} } \, e^{- \frac{1}{2} |\bar {\bm k}_\perp|^2} (-1)^{ \Delta n} \sqrt{ \frac{n!}{ n^\prime!}} \, e^{ - i \; {\rm sgn} (qB) \Delta n \, \theta_\perp} |\bar {\bm k}_\perp|^{\Delta n} L_{ n }^{ \Delta n } ( | \bar {\bm k}_\perp| ^2 ) \nonumber\\
&\equiv 2 \pi \delta(k_y - p_y + p'_y) \times {\Gam}_{n, n^\prime}(p,p';k) \, ,
\label{eq:FF-Landau}
\end{align}
where the scalar form factor $ {\Gam}_{n, n^\prime}(p,p';k) $ is defined for the Landau gauge
after factorizing the delta function, and $ \Gam_{n, n^\prime} = 0$ for $n<0$ or $n'<0$ is understood.
We also defined
\begin{align}
\Delta n \equiv n^\prime - n, \quad
|\bar {\bm k}_\perp|^2 \equiv \frac{ | {\bm k}_\perp|^2}{ 2| qB|} ,\quad
\theta_\perp \equiv \arg(k_x + i k_y) ,
\end{align}
where $\theta_\perp$ is the azimuthal angle of the photon momentum $ {\bm k} $.
$L_n^k$ is the associated Laguerre polynomial, $L_n^k(z) \equiv \frac{e^{+z} z^{-k}}{n!}\, \partial_z^n \left( e^{ -z} z^{n+k} \right)$. While systems exposed to a constant magnetic field should maintain the gauge symmetry and the translational and rotational symmetries in the transverse plane, the gauge configuration $ A^\mu_{\rm ext} $ partially or completely hides those symmetries. Indeed, the scalar form factor ${\Gam}_{n, n'}$ itself is not gauge invariant due to the exponential phase factor $\exp\left[ i k_x(p_y + p^\prime _y)/2 qB \right]$ (the so-called Schwinger phase), because $p_y$ is a gauge-dependent label in the Landau gauge. The rotational symmetry also appears to be broken by the dependence on the azimuthal angle $ \theta_\perp $. Nevertheless, the Schwinger phase does not depend on $n$ and thus cancels out when the amplitude is squared. Similarly, one can show that the rotational symmetry is restored after squaring the amplitude; see Appendix~\ref{app-b2}.
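As an independent cross-check (a numerical sketch outside the text), the modulus squared of the closed form (\ref{eq:FF-Landau}) can be compared against a direct numerical integration of the overlap (\ref{eq:Gam0}) in the Landau gauge. All parameter values are arbitrary test inputs, and the $i^n$ phases drop out of the modulus:

```python
import numpy as np
from math import factorial, pi, sqrt
from scipy.integrate import quad
from scipy.special import eval_hermite, eval_genlaguerre

qB, kx, ky = 1.3, 0.9, -0.6            # arbitrary field strength and photon momentum
ell = 1.0 / sqrt(abs(qB))              # magnetic length
py, py_p = 0.4, 0.4 - ky               # enforce k_y = p_y - p_y'
eta = (kx**2 + ky**2) / (2 * abs(qB))  # |k_perp|^2 / (2|qB|)

def phi(n, x, py_):
    """x-dependent part of phi_{n,p_y}; the i^n phase is dropped (modulus only)."""
    xi = (x - py_ / qB) / ell
    c = sqrt(1.0 / (2.0**n * factorial(n) * sqrt(pi) * ell))
    return c * np.exp(-xi**2 / 2) * eval_hermite(n, xi)

def gamma2_direct(n, np_):
    """|Gamma_{n,n'}|^2 from direct numerical integration of the overlap."""
    re = quad(lambda x: np.cos(kx * x) * phi(n, x, py) * phi(np_, x, py_p), -12, 12, limit=200)[0]
    im = quad(lambda x: np.sin(kx * x) * phi(n, x, py) * phi(np_, x, py_p), -12, 12, limit=200)[0]
    return re**2 + im**2

def gamma2_closed(n, np_):
    """|Gamma_{n,n'}|^2 from the closed Laguerre form (modulus squared)."""
    dn, mn = np_ - n, min(n, np_)
    ratio = factorial(n) / factorial(np_) if dn > 0 else factorial(np_) / factorial(n)
    return eta**abs(dn) * np.exp(-eta) * ratio * eval_genlaguerre(mn, abs(dn), eta)**2

pairs = [(0, 0), (0, 1), (2, 1), (3, 3), (1, 4)]
print(all(np.isclose(gamma2_direct(n, np_), gamma2_closed(n, np_)) for n, np_ in pairs))
```

The comparison at the level of the modulus squared avoids any dependence on the Schwinger phase and on the phase conventions of the wave functions.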
\subsection{Analytic form of the lepton tensor}
\label{sec:general}
We now evaluate the lepton tensor in a magnetic field, $L^{\mu\nu}_{n,n'}$, which is defined via the squared amplitude as
\begin{align}
\sum_{\k,\k'} \frac{ \left| \varepsilon_\mu q {\mathcal M}^\mu \right|^2}{2k^0 (2\pi)^3 \delta^{(3)}(0)}
\equiv \frac{T}{L_x} \frac{q^2}{2k^0}(2\pi)^3 \delta^{(2)}( k_\parallel - p_\parallel - p'_\parallel) \delta(k_y - p_y + p'_y) [\, \varepsilon_\mu \varepsilon^\ast_\nu L^{\mu\nu} _{n,n'} \, ] \, , \label{eq:sq}
\end{align}
where $T \equiv 2\pi \delta(p^0=0)$ and $ L_x $ are the whole time interval and the system length in the $x$ direction, respectively. The factor of $1/[2k^0(2\pi)^3 \delta^{(3)}(0)]$ is inserted so as to cancel the irrelevant normalization factor coming from the one-photon state (\ref{eq:gammanorm}). For notational brevity, we rewrite the amplitude ${\mathcal M}^\mu$ as
\begin{align}
{\mathcal M}^\mu
&= (2\pi)^3 \delta^{(2)}( k_\parallel - p_\parallel - p_\parallel') \delta( k_y - p_y + p'_y) \times \bar u_\k(p_n) [ \gamma_\parallel^\mu \H_0 + \gamma_\perp^\mu \H_1 ] v_{\k'}(\bar p_{n'}^\prime) \, ,
\end{align}
with
\begin{align}
\H_0 \equiv \prj_+ {\Gam}_{n, n'} + \prj_- {\Gam}_{n-1,n'-1} \ , \quad
\H_1 \equiv \prj_+ {\Gam}_{n-1,n'} + \prj_- {\Gam}_{n,n'-1} \, .
\end{align}
Note that $\H_0$ and $\H_1$ control the amplitudes for spin-zero and spin-one lepton pair productions, respectively, as we remarked below Eq.~(\ref{eq:Gam0}). Then, after the spin summation $\sum_{\k,\k'}$, one can express $L^{\mu\nu}_{n,n'}$ as
\begin{align}
L^{\mu\nu}_{n,n'}
&= {\rm tr} \left[ \, (\slashed p_n - m) ( \gamma_\parallel^\mu \H_0 + \gamma_\perp^\mu \H_1 ) (\bar {\slashed p}'_{n'} + m) ( \H^\dagger_0 \gamma_\parallel^\nu + \H^\dagger_1\gamma_\perp^\nu ) \, \right] \nonumber\\
&= T_1 - 2 |qB| \sqrt{ n n'} T_2 - \sqrt{ 2 n |qB|} T_3 + \sqrt{ 2 n' |q B|} T_4\, , \label{eq:leptontensor1}
\end{align}
where
\begin{subequations}
\label{eq:traces}
\begin{align}
T_1 &\equiv {\rm tr} \left[ \, (\slashed p_\parallel - m) ( \gamma_\parallel^\mu \H_0 + \gamma_\perp^\mu \H_1 ) ({\slashed p}'_\parallel+ m) ( \H^\dagger_0 \gamma_\parallel^\nu + \H^\dagger_1 \gamma_\perp^\nu ) \, \right]\, , \\
T_2 &\equiv {\rm tr} \left[ \, \gamma^1 ( \gamma_\parallel^\mu \H_0 + \gamma_\perp^\mu \H_1 )\gamma^1 ( \H^\dagger_0 \gamma_\parallel^\nu + \H^\dagger_1 \gamma_\perp^\nu ) \, \right]\, , \\
T_3 &\equiv {\rm tr} \left[ \, \gamma^1( \gamma_\parallel^\mu \H_0 + \gamma_\perp^\mu \H_1 ) ( {\slashed p}'_\parallel+ m) ( \H^\dagger_0 \gamma_\parallel^\nu + \H^\dagger_1 \gamma_\perp^\nu ) \, \right]\, , \\
T_4 &\equiv {\rm tr} \left[ \, (\slashed p_\parallel - m) ( \gamma_\parallel^\mu \H_0 +\gamma_\perp^\mu \H_1 ) \gamma^1 ( \H^\dagger_0 \gamma_\parallel^\nu + \H^\dagger_1 \gamma_\perp^\nu ) \, \right]\, .
\end{align}
\end{subequations}
The gauge-dependent Schwinger phase drops out upon squaring, and thus the gauge and translational invariances are explicitly restored in Eq.~(\ref{eq:leptontensor1}).
The rotational symmetry has also been restored here, although it may not be obvious at a glance;
see Appendix~\ref{app-b2} for an explicit demonstration.
Before proceeding, we introduce several notations in order to simplify the traces (\ref{eq:traces}) in a physically transparent way. We first introduce photon's circular polarization vectors with respect to the direction of the magnetic field,
\begin{align}
\varepsilon^\mu_\pm \equiv -( g^{\mu1} \pm i\, {\rm sgn} (qB) g^{\mu2} )/\sqrt{2} = ( 0, 1 , \pm i\, {\rm sgn} (qB) , 0)/\sqrt{2} \, , \label{eq:pol}
\end{align}
which are ortho-normalized as $ \varepsilon^\mu_\pm \varepsilon^*_{\mp\mu} = 0 $ and $ \varepsilon^\mu_\pm \varepsilon^*_{\pm\mu} = -1 $ and satisfy $\varepsilon^{\mu*}_\pm=\varepsilon^\mu_\mp$. We inserted the sign function in the definition (\ref{eq:pol}) because the direction of the fermion's spin changes depending on $ {\rm sgn} (qB)$. These polarizations couple to di-lepton states carrying total spin $s+s'= \pm 1 $, as we see below. Next, we introduce helicity-projection operators for circularly polarized photons
\begin{align}
\Q_\pm^{\mu\nu}
\equiv - \varepsilon_\pm^\mu \varepsilon_\pm^{\nu\ast}
= ( g_\perp^{\mu\nu} \pm i\, {\rm sgn} (qB) \varepsilon_\perp^{\mu\nu} )/2 \, ,
\end{align}
where $\varepsilon_\perp^{\mu\nu} \equiv g^{\mu1} g^{\nu2} -g^{\mu2} g^{\nu1}$. Those operators have eigenvectors $ \varepsilon^\mu_\pm $ which satisfy $\Q_\pm^{\mu\nu} \varepsilon_{\pm \nu} = \varepsilon_\pm^\mu $ and $ \Q_\pm^{\mu\nu} \varepsilon_{\mp \nu} =0$. Finally, we introduce scalar and longitudinal photon polarization vectors
\begin{align}
\varepsilon_0^\mu \equiv (1,0,0,0) \ ,\quad
\varepsilon_\parallel^\mu \equiv (0,0,0,1)\, ,
\end{align}
respectively, which are ortho-normalized as $\varepsilon_0^\mu\varepsilon^*_{\parallel\mu}=0$, $\varepsilon_0^\mu\varepsilon^*_{0\mu}=+1$, and $\varepsilon_\parallel^\mu\varepsilon^*_{\parallel\mu}=-1$. Clearly, these vectors are orthogonal to the circular polarization vectors, i.e., $0=\varepsilon_\pm^\mu \varepsilon_{0\mu}=\varepsilon_\pm^\mu \varepsilon_{\parallel\mu}$, and do not couple to the helicity-projection operators, i.e., $0=\Q_\pm^{\mu\nu} \varepsilon_{0\mu}=\Q_\pm^{\mu\nu} \varepsilon_{\parallel\mu}$. The scalar and longitudinal photons couple to the $s+s'=0$ channel of di-leptons, as we see below. Note that the four polarization vectors $\varepsilon_{0,\pm, \parallel}$ satisfy the following completeness relation
\begin{align}
g^{\mu\nu} = \varepsilon^\mu_{0}\varepsilon^{\nu*}_{0} - \varepsilon^\mu_{+}\varepsilon^{\nu*}_{+} -\varepsilon^\mu_{-}\varepsilon^{\nu*}_{-} -\varepsilon^\mu_{\parallel}\varepsilon^{\nu*}_{\parallel}\, , \label{eq:completeness}
\end{align}
and form a complete basis for the photon polarization vector $\varepsilon^\mu$.
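The completeness relation (\ref{eq:completeness}) is straightforwardly verified numerically; a minimal sketch (either sign of $qB$ works):

```python
import numpy as np

sgn = 1.0                             # sgn(qB); flipping the sign gives the same result
g = np.diag([1.0, -1.0, -1.0, -1.0])  # metric tensor g^{mu nu}

eps0 = np.array([1, 0, 0, 0], dtype=complex)                 # scalar polarization
epsp = np.array([0, 1,  1j * sgn, 0]) / np.sqrt(2)           # circular (+)
epsm = np.array([0, 1, -1j * sgn, 0]) / np.sqrt(2)           # circular (-)
epsl = np.array([0, 0, 0, 1], dtype=complex)                 # longitudinal polarization

total = (np.outer(eps0, eps0.conj()) - np.outer(epsp, epsp.conj())
         - np.outer(epsm, epsm.conj()) - np.outer(epsl, epsl.conj()))
print(np.allclose(total, g))  # completeness relation reproduces the metric
```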
Now, we are ready to simplify the traces in Eq.~(\ref{eq:traces}). For $T_1$, a straightforward calculation yields
\begin{align}
T_1
&= {\rm tr} \left[ \, (\slashed p_\parallel - m) \gamma_\parallel^\mu ({\slashed p}'_\parallel+ m) \gamma_\parallel^\nu | \H_0 |^2\, \right] + {\rm tr} \left[ \, (\slashed p_\parallel - m) \gamma_\perp^\mu ({\slashed p}'_\parallel+ m) | \H_1|^2 \gamma_\perp^\nu \, \right] \\
&= ( | {\Gam}_{n , n'} |^2 + |{\Gam}_{n-1 , n'-1 } |^2 ) L_\parallel^{\mu\nu} - 4 (p_\parallel \cdot p'_\parallel + m^2) \left( \, | {\Gam}_{n-1 , n'} |^2 \Q_+^{\mu\nu} + |{\Gam}_{n , n'-1 } |^2 \Q_-^{\mu\nu} \, \right) \nonumber \, .
\end{align}
Here, we introduced the lepton tensor in the $(1+1)$-dimensional form
\begin{align}
L_\parallel^{\mu\nu} = 2 \left[ \, p_\parallel^\mu p_\parallel^{\prime \nu} + p_\parallel^\nu p_\parallel^{\prime \mu} - (p_\parallel \cdot p'_\parallel + m^2) g_\parallel^{\mu\nu} \, \right] \, ,
\end{align}
which couples only to the scalar and longitudinal photon polarizations, i.e., $0 \neq L^{\mu\nu}_{\parallel} \varepsilon_{0\mu}, L^{\mu\nu}_{\parallel} \varepsilon_{\parallel\mu}$ and $0 = L^{\mu\nu}_{\parallel} \varepsilon_{\pm\mu}$. Remember that the terms proportional to $\H_0 $ and $\H_1$ originate from spin-zero and spin-one di-lepton configurations. Therefore, the first term is responsible for spin-zero di-lepton production and is coupled to the photon mode longitudinal to the direction of the magnetic field. On the other hand, the last two terms describe spin-one di-lepton production and are coupled to circularly polarized photons. Note that, among all the terms, only the term proportional to $ | {\Gam}_{0, 0}|^2 $ survives in the lowest Landau level approximation. Next, we evaluate the remaining terms $T_2, T_3$, and $T_4$:
\begin{align}
T_2
&= {\rm tr} \left[ \, \gamma^1 \gamma_\parallel^\mu (\H_0 \gamma^1 \H^\dagger_0 ) \gamma_\parallel^\nu \, \right] + {\rm tr} \left[ \, \gamma^1 \gamma_\perp^\mu ( \H_1 \gamma^1 \H^\dagger_1) \gamma_\perp^\nu \, \right] \nonumber\\
&= 4 \left[ \ {\rm Re}[ {\Gam}_{n , n'} {\Gam}_{n-1 , n'-1} ^\ast ] g_\parallel^{\mu\nu} + {\Gam}_{n-1, n'} {\Gam}_{n, n'-1}^\ast \varepsilon_+^\mu \varepsilon_-^{\nu \ast} + [ {\Gam}_{n-1, n'} {\Gam}_{n, n'-1}^\ast ]^\ast \varepsilon_-^\mu \varepsilon_+^{\nu \ast} \ \right]\, .
\end{align}
The $ \gamma^1 $'s sandwiched between $\H_0, \H_0^\dagger$ and $\H_1, \H_1^\dagger$ induce a spin flip, unlike in $T_1$. Therefore, the first term in the last line describes the interference between the two spin-zero di-lepton amplitudes, and the other terms that between the two spin-one amplitudes; in each case the interfering amplitudes consist of fermion pairs with the same energy level but distinct spin combinations due to the spin flip (recall the spin degeneracy in each energy level). In the remaining two traces $T_3$ and $T_4$, the mass terms do not survive, since the corresponding traces cannot contain an even number of $ \gamma_\parallel^\mu $ and $ \gamma_\perp^\mu $ matrices simultaneously. Therefore,
\begin{subequations}
\begin{align}
T_3
&= {\rm tr} \left[ \, \gamma^1 \gamma_\parallel^\mu {\slashed p}'_\parallel \H_0 \H^\dagger_1 \gamma_\perp^\nu \, \right] + {\rm tr} \left[ \, \gamma^1 \gamma_\perp^\mu {\slashed p}'_\parallel \H_1 \H^\dagger_0 \gamma_\parallel^\nu
\, \right] \nonumber \\
&= - 2 \sqrt{2} \big[ \ {\Gam}_{n,n'} {\Gam}_{n-1,n'} ^\ast p_\parallel^{\prime \mu} \varepsilon_-^\nu + {\Gam}_{n,n'} ^\ast
{\Gam}_{n-1,n'} \varepsilon_+^\mu p_\parallel^{\prime \nu} \nonumber \\
&\quad + {\Gam}_{n-1,n'-1} {\Gam}_{n,n'-1} ^\ast p_\parallel^{\prime \mu} \varepsilon_+^\nu +{\Gam}_{n-1,n'-1} ^\ast {\Gam}_{n,n'-1} \varepsilon_-^\mu p_\parallel^{\prime \nu} \ \big]
\, , \\
T_4
&= {\rm tr} \left[ \, \slashed p_\parallel \gamma_\parallel^\mu \H_0 \gamma^1 \H^\dagger_1 \gamma_\perp^\nu \, \right] + {\rm tr} \left[ \, \slashed p_\parallel \gamma_\perp^\mu \H_1 \gamma^1 \H^\dagger_0 \gamma_\parallel^\nu \, \right] \nonumber\\
&= - 2 \sqrt{2} \big[ \ {\Gam}_{n,n'} {\Gam}_{n,n'-1} ^\ast p_\parallel^\mu \varepsilon_+^\nu + {\Gam}_{n,n'}^\ast {\Gam}_{n,n'-1} \varepsilon_-^\mu p_\parallel^\nu \nonumber \\
&\quad + {\Gam}_{n-1,n'-1} {\Gam}_{n-1,n'}^\ast p_\parallel^\mu \varepsilon_-^\nu +{\Gam}_{n-1,n'-1}^\ast {\Gam}_{n-1,n'} \varepsilon_+^\mu p_\parallel^\nu\ \big] \, .
\end{align}
\end{subequations}
As is clear from the appearance of $ \H_0 $ coupled to $ \H_1 $ and vice versa, all of these terms describe interferences between the spin-zero and spin-one di-lepton amplitudes, which have distinct fermion spin contents.
Getting all the above contributions together, we arrive at the lepton tensor in a magnetic field:
\begin{align}
L^{\mu\nu}_{n,n'}
&= ( | {\Gam}_{n , n'} |^2 + | {\Gam}_{n-1 , n'-1 } |^2 ) L_\parallel^{\mu\nu} - 4 (p_\parallel \cdot p'_\parallel + m^2) \left( \, | {\Gam}_{n-1 , n'} |^2 \Q_+^{\mu\nu} + | {\Gam}_{n , n'-1 } |^2 \Q_-^{\mu\nu} \, \right) \nonumber \\
&\quad - 4 |qB| \sqrt{nn'} \;{\rm Re}\left[ \ 2 {\Gam}_{n , n'} {\Gam}_{n-1 , n'-1} ^\ast g_\parallel^{\mu\nu} + {\Gam}_{n-1, n'} {\Gam}_{n, n'-1}^\ast \varepsilon_+^\mu \varepsilon_+^{\nu } \ \right] \nonumber \\
&\quad + 8 \sqrt{ n|qB|} \;{\rm Re} \left[ \ \left( {\Gam}_{n,n'}^\ast {\Gam}_{n-1,n'} + {\Gam}_{n-1,n'-1} {\Gam}_{n,n'-1} ^\ast \right) p_\parallel^{\prime \mu} \varepsilon_+^\nu \ \right] \nonumber \\
&\quad - 8 \sqrt{ n' |qB|} \;{\rm Re} \left[ \ \left( {\Gam}_{n,n'} {\Gam}_{n,n'-1} ^\ast + {\Gam}_{n-1,n'-1}^\ast {\Gam}_{n-1,n'} \right) p_\parallel^\mu \varepsilon_+^\nu \ \right] \label{eq:L}\, .
\end{align}
The general form of the lepton tensor~(\ref{eq:L}), together with the analytic expression for the scalar form factor (\ref{eq:FF-Landau}), is one of the main results of the present paper. We will further inspect its basic behaviors and apply it to compute the di-lepton yields. To verify the correctness of the expression (\ref{eq:L}), we provide some consistency checks in Appendix~\ref{app-b2}. We have confirmed the following three points: (i) $ |\epsilon_\mu {\mathcal M}^\mu_{n,n'}|^2 \propto \varepsilon_\mu \varepsilon_\nu^\ast L^{\mu\nu}_{n,n'} $ is a real-valued quantity; (ii) $ L^{\mu\nu}_{n,n'} $ satisfies the Ward identity, i.e., $k_\mu L^{\mu\nu}_{n,n'} = k_\nu L^{\mu\nu}_{n,n'}=0 $; (iii) The rotational symmetry in the transverse plane is restored in $ \varepsilon_\mu \varepsilon_\nu^\ast L^{\mu\nu}_{n,n'} $ for an arbitrary photon polarization $ \varepsilon_\mu $ in spite of the use of the Landau gauge, which apparently breaks the rotational symmetry.
\subsection{Polarization-projected lepton tensors} \label{sec:3.3}
In the squared matrix element (\ref{eq:sq}),
the lepton tensor (\ref{eq:L}) appears in contraction with the photon polarization vectors $\varepsilon^\mu, \varepsilon^{\ast}_{\nu}$, and here we discuss the basic behaviors of the contracted lepton tensors. Their magnitudes are controlled by the square of the scalar form factor $ | {\Gam}_{n, n'} |^2 $.
For each photon polarization $\varepsilon^\mu = \varepsilon^\mu_{0,\pm,\parallel}$, we find
\begin{align}
\varepsilon_{\mu}^0 \varepsilon^{0\ast}_{\nu} L_{n,n'}^{\mu\nu}
&= 2 ( \epsilon_n \epsilon'_{n'} + p_z p'_z -m^2 ) ( | {\Gam}_{n , n'} |^2 + | {\Gam}_{n-1 , n'-1 } |^2 ) \nonumber\\
&\quad -4 |q B| \left[ - |\bar{\bm k}_\perp| ^2 |{\Gam}_{n-1,n'}|^2 + n | {\Gam}_{n, n'} |^2 + n' | {\Gam}_{n-1,n'-1}|^2 \right] \, , \nonumber\\
\varepsilon_{\mu}^+ \varepsilon^{+\ast}_{\nu} L_{n,n'}^{\mu\nu}
&= 4 ( \epsilon_n \epsilon'_{n'} - p_z p'_z + m^2 ) | {\Gam}_{n , n'-1 } |^2 \, ,\nonumber\\
\varepsilon_{\mu}^- \varepsilon^{-\ast}_{\nu} L_{n,n'}^{\mu\nu}
&= 4 ( \epsilon_n \epsilon'_{n'} - p_z p'_z + m^2 ) | {\Gam}_{n-1 , n'} |^2 \, , \nonumber\\
\varepsilon_{\mu}^\parallel \varepsilon^{\parallel\ast}_{\nu} L_{n,n'}^{\mu\nu}
&= 2 ( \epsilon_n \epsilon'_{n'} + p_z p'_z + m^2 ) ( | {\Gam}_{n , n'} |^2 + |{\Gam}_{n-1 , n'-1 } |^2 ) \nonumber\\
&\quad+4 |q B| \left[ - | \bar{\bm k}_\perp| ^2 |{\Gam}_{n-1,n'}|^2 + n | {\Gam}_{n, n'} |^2 + n' | {\Gam}_{n-1,n'-1}|^2 \right] \, , \label{eq:projected-L}
\end{align}
where we used an identity [which follows from Eq.~(\ref{eq:identities})]:
\begin{align}
\label{eq:GG}
2 \sqrt{n n'} \, {\Gam}_{n, n'} {\Gam}^\ast_{n-1,n'-1}
= - | \bar{\bm k}_\perp| ^2 |{\Gam}_{n-1,n'}|^2 + n | {\Gam}_{n, n'} |^2 + n' | {\Gam}_{n-1,n'-1}|^2\, .
\end{align}
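The identity (\ref{eq:GG}) can be verified numerically over a range of Landau levels. The sketch below (not part of the derivation) uses the phase-stripped, real form factor, which suffices since the common Schwinger and azimuthal phases cancel in every product; $\eta \equiv |\bar{\bm k}_\perp|^2$ is an arbitrary test value:

```python
import numpy as np
from math import factorial, exp, sqrt
from scipy.special import eval_genlaguerre

def G(n, np_, eta):
    """Phase-stripped (real) scalar form factor; zero for negative Landau indices."""
    if n < 0 or np_ < 0:
        return 0.0
    dn, mn = np_ - n, min(n, np_)
    ratio = factorial(n) / factorial(np_) if dn > 0 else factorial(np_) / factorial(n)
    return ((-1)**(np_ - mn) * sqrt(ratio) * eta**(abs(dn) / 2)
            * exp(-eta / 2) * eval_genlaguerre(mn, abs(dn), eta))

eta = 0.8  # |k_perp|^2 / (2|qB|), arbitrary test value
ok = []
for n in range(1, 6):
    for np_ in range(1, 6):
        lhs = 2 * sqrt(n * np_) * G(n, np_, eta) * G(n - 1, np_ - 1, eta)
        rhs = (-eta * G(n - 1, np_, eta)**2 + n * G(n, np_, eta)**2
               + np_ * G(n - 1, np_ - 1, eta)**2)
        ok.append(np.isclose(lhs, rhs))
print(all(ok))
```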
Note that the two circularly ($\pm$) polarized photons give distinct contracted lepton tensors (and thus distinct di-lepton production numbers) because $ | {\Gam}_{n , n'-1 } |^2 \neq | {\Gam}_{n-1 , n'} |^2 $ unless $ n=n' $ [cf. identities (\ref{eq:identities})]. Note also that the contracted lepton tensors depend only on the photon transverse momentum $ |{\bm k}_\perp| $ and do not depend explicitly on the longitudinal variables $k_0$ and $k_z$. One may understand, therefore, that the basic behaviors of the di-lepton production are determined by the transverse variables of the system rather than the longitudinal ones. This might look reasonable in the sense that magnetic fields never affect the longitudinal motion. Nevertheless, the longitudinal variables do affect the production through the kinematics, i.e., the delta functions in front of the contracted lepton tensor in the squared matrix element (\ref{eq:sq}). We postpone the discussion of the effects of the longitudinal variables and the kinematics to Sec.~\ref{sec4.2}.
To get a qualitative understanding of the contracted lepton tensors, we discuss basic behaviors of the square of the scalar form factor $| {\Gam}_{n, n'} |^2 $, whose explicit expression is given by
\begin{align}
\label{eq:GG2}
| {\Gam}_{n, n'} |^2
= |\bar {\bm k}_\perp|^{2|\Delta n|} e^{- |\bar {\bm k}_\perp|^2} \left( \frac{n!}{n'!} \right)^{ {\rm sgn} (\Delta n)} \left| L^{|\Delta n|}_{\min(n,n')} ( |\bar {\bm k}_\perp|^2) \right|^2 \ .
\end{align}
This form factor $ |{\Gam}_{n, n'}|^2 $ behaves differently depending on the strength of the magnetic field relative to the typical resolution scale set by the transverse photon momentum, i.e., on $ |\bar {\bm k}_\perp|^2 = |{\bm k}_\perp|^2/(2|qB|) $. As a function of $|\bar {\bm k}_\perp|$, $ |{\Gam}_{n, n'}|^2 $ exhibits peaks originating from the oscillation of the associated Laguerre polynomial, which is reminiscent of the transverse-momentum conservation [recall that the definition of $ {\Gam}_{n, n'} $ as the overlap of wave functions (\ref{eq:Gam0}) would yield a delta function in the absence of a magnetic field]. The peak structure deviates from a delta function due to the fermion's dressed wave functions in a magnetic field. Further basic properties are summarized as follows.
When $|\bar {\bm k}_\perp| \lesssim 1$ (i.e., $|{\bm k}_\perp|$ is small and/or $|qB|$ is large relative to each other), $|{\Gam}_{n, n'} |^2$ is suppressed by the power factor $|\bar {\bm k}_\perp|^{2|\Delta n|}$. The suppression becomes larger for a larger $ |\Delta n| $, and $ | {\Gam}_{n, n'} |^2 $ eventually vanishes in the limit $|\bar {\bm k}_\perp| \to 0$, unless $\Delta n = 0$. Indeed, one can show that
\begin{align}
\lim_{|\bar {\bm k}_\perp|\to0 } | {\Gam}_{n, n'} |^2
= \delta_{n,n'} \Big[ \, L_{\min(n,n')} ( 0 ) \, \Big]^2
= \delta_{n,n'} \, , \label{eq:G(k->0)}
\end{align}
where $L^{0}_{n} ( 0 ) = 1$. This behavior is anticipated from the definition (\ref{eq:Gam0}), which reduces to the orthonormality relation for the transverse wave function $\phi_{n,p_y}$ in the limit $ |\bar {\bm k}_\perp| \to 0 $. Intuitively, this property can be understood as a remnant of the transverse-momentum conservation mentioned above. A dynamical photon with $|{\bm k}_\perp| \neq 0$ ($|{\bm k}_\perp| = 0$) produces a di-lepton carrying a nonzero (vanishing) total transverse momentum, and hence the produced fermion and anti-fermion have distinct (the same) magnitudes of transverse momentum. This implies that a larger transverse-momentum difference between the produced fermions requires a larger photon transverse momentum. Therefore, the production of di-leptons with a large $|\Delta n|$ is suppressed at small $|\bar{\bm k}_\perp|$.
In the opposite regime $|\bar {\bm k}_\perp| \gtrsim 1$, $|{\Gam}_{n, n'} |^2$ is suppressed exponentially by the factor $e^{- |\bar {\bm k}_\perp|^2}$. This suppression originates from the exponentially small overlap between the fermion and anti-fermion wave functions in the transverse plane, which are squeezed around the center of the cyclotron motion within the length scale $\sim 1/\sqrt{|qB|}$ (i.e., the Landau quantization). To be specific, let us take the Landau gauge as an example. In this gauge, the fermion momenta, and thus the photon momentum, are related to the center coordinates of the cyclotron motion as $ k_y = p_y - p_y' = qB ( x_c - x_c')$ [see Eq.~(\ref{eq:WF_Landau})]. Thus, the peaks of the fermions' transverse wave functions recede from each other when the photon momentum $ k_y $ becomes large. The other component $k_x $ appears in Eq.~(\ref{eq:Gam0}) as the Fourier mode of the overlap between the fermions' transverse wave functions. The Fourier power spectrum is exponentially suppressed when the resolution scale $k_x $ is much larger than the characteristic momentum scale of the fermions' transverse wave functions, given by the inverse of the cyclotron radius $ \sim \sqrt{|qB|} $. Thus, we obtain the factor $e^{- |\bar {\bm k}_\perp|^2}$.
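As a numerical cross-check of these limiting behaviors, Eq.~(\ref{eq:GG2}) can be evaluated directly. The Python sketch below is our own illustration (not part of the original derivation): it assumes SciPy's associated Laguerre polynomials and rewrites the factorial ratio $(n!/n'!)^{{\rm sgn}(\Delta n)}$ in the equivalent symmetric form $\min(n,n')!/\max(n,n')!$, reproducing both the $\delta_{n,n'}$ limit at $|\bar{\bm k}_\perp|\to 0$ and the exponential suppression at large $|\bar{\bm k}_\perp|$.

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln

def gamma_sq(n, nprime, kbar2):
    """Squared scalar form factor |Gamma_{n,n'}|^2 of Eq. (GG2).

    kbar2 = |k_perp|^2 / |2qB| is the dimensionless transverse photon
    momentum squared. The factor (n!/n'!)^{sgn(Delta n)} is rewritten as
    min(n,n')!/max(n,n')!, evaluated in log space for numerical stability.
    """
    lo, hi = min(n, nprime), max(n, nprime)
    dn = hi - lo                                  # |Delta n|
    prefac = np.exp(gammaln(lo + 1) - gammaln(hi + 1))
    return prefac * kbar2**dn * np.exp(-kbar2) * eval_genlaguerre(lo, dn, kbar2)**2

# |Gamma_{n,n'}|^2 -> delta_{n,n'} as kbar2 -> 0 ...
print(gamma_sq(2, 2, 0.0))   # 1.0
print(gamma_sq(2, 4, 0.0))   # 0.0
# ... and is exponentially suppressed at large kbar2
print(gamma_sq(1, 3, 30.0))
```

The symmetric rewriting also makes the property $|{\Gam}_{n,n'}|^2 = |{\Gam}_{n',n}|^2$, used later for the polarization identities, manifest at the level of the code.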
\section{Di-lepton production by a single photon} \label{sec--4}
We provide more detailed discussions of the di-lepton production by a single photon with fixed momentum and polarization in a magnetic field. The di-lepton spectrum is given by the squared amplitude (\ref{eq:sq}) with the lepton tensor obtained in Eq.~(\ref{eq:L}). We demonstrate that the di-lepton spectrum becomes anisotropic with respect to the magnetic-field direction and exhibits discrete spike structures due to the kinematics of the Landau quantization, and that the production of a lowest-Landau-level fermion or anti-fermion is strictly prohibited, depending on the photon polarization and/or the fermion mass, due to the conservation of spin or chirality. Note that in realistic situations such as ultra-peripheral heavy-ion collisions, one should consider a photon source or distribution and convolute it with the di-lepton production rate to make predictions, which will be discussed in a forthcoming publication.
Before proceeding, we recall that the transverse momentum $ p^{(\prime)}_y $ is a {\it canonical} momentum and is a gauge-dependent quantity in the Landau gauge (\ref{eq:Landau-gauge}). Also, the transverse components of the {\it kinetic} momentum are not conserved, as one can imagine from the classical cyclotron motion. These facts mean that the individual components of the transverse fermion momenta $p^{(\prime)}_x$ and $p^{(\prime)}_y$ are neither good quantum numbers nor measurable. Within the current setup of the problem with a constant magnetic field, one can only measure the norm of the kinetic transverse momentum, assuming that the magnetic field is adiabatically damped out in the asymptotic future.
\subsection{Spikes in the longitudinal-momentum distribution} \label{sec4.2}
We first discuss how the Landau quantization manifests itself in the kinematics (more specifically, the energy-momentum conservation) of the photon--to--di-lepton conversion process. We show that the fermion's and anti-fermion's longitudinal momenta can only take discrete values because of the kinematics, resulting in spike structures in the distribution.
The kinematical constraints are incorporated in the delta function in the squared amplitude (\ref{eq:sq}). Because of the delta function, the di-lepton production occurs only when
\begin{align}
k_0 = \epsilon_n + \epsilon'_{n'}
\label{eq:energy}
\end{align}
is satisfied, which is nothing but the energy conservation. Noting the longitudinal momentum conservation $p'_z=k_z-p_z$, one can explicitly solve the condition (\ref{eq:energy}) and find that the kinematically allowed $p_z$ for given $ k_0,k_z $ reads
\begin{align}
p_z
= \frac{ k_z ( k_\parallel^2 + m_n^2- m_{n'}^2 ) \pm k^0 \sqrt{ \big( \, k_\parallel^2 - (m_n^2 + m_{n'}^2) \, \big)^2 - 4 m_n^2 m_{n'}^2 } }{2k_\parallel^2}
\equiv p_{n,n'}^\pm \ , \label{eq:discpz}
\end{align}
where
\begin{align}
m_n \equiv \epsilon_n(p_z=0) =\sqrt{m^2 + 2 n|qB| } .
\end{align}
It is evident that only two discrete values, $p_z=p_{n,n'}^+$ and $p_{n,n'}^-$, are allowed (accordingly, $p'_z = k_z-p_{n,n'}^\pm \equiv p_{n,n'}^{\prime \pm}$), once the photon momenta $ k_0,k_z $ and the Landau levels $ n,n' $ are specified. In other words, while magnetic fields do not directly quantize the fermions' longitudinal momenta, the energy-momentum conservation and the Landau quantization force the longitudinal momenta to take discrete values. For $p_{n,n'}^\pm$ to be real-valued, the argument of the square root must be non-negative. This condition sets a threshold energy for the incident photon as\footnote{This non-negativity condition itself admits another region $ k_\parallel^2 \leq ( m_n - m_{n'})^2 $, which is, however, not compatible with the energy conservation (\ref{eq:energy}). The energy conservation tells us that $ k_\parallel^2 = ( \sqrt{p_z^2 + 2 n \vert qB \vert + m^2} + \sqrt{(k_z-p_z)^2 + 2 n' \vert qB \vert + m^2} )^2 \geq ( m_n + m_{n'})^2 $, where we evaluated the boost-invariant quantity $ k_\parallel^2$ in the Lorentz frame such that $ k_z=0 $. }
\begin{align}
k_\parallel^2 \geq ( m_n + m_{n'})^2 \, . \label{eq:thres}
\end{align}
The right-hand side is the smallest possible invariant mass of the di-lepton. As the system is boost invariant along the magnetic field, $k_\parallel^2$ on the left-hand side is boost invariant and gives the minimum photon energy for the di-lepton production in the Lorentz frame in which $k_z=0 $. Note that the two on-shell momenta $p_{n,n'}^\pm$ take the same limiting value at the threshold energy (\ref{eq:thres}),
\begin{align}
p_{n,n'}^\pm \to k_z \frac{m_n}{ m_n + m_{n'} } \, . \label{eq:limpz}
\end{align}
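The discrete momenta (\ref{eq:discpz}) and the threshold condition (\ref{eq:thres}) are straightforward to evaluate numerically. The short Python sketch below is our own illustration (the function names are ours): it solves the kinematics and verifies both the momentum conservation $p_{n,n'}^+ + p_{n,n'}^- = k_z$ and the energy conservation (\ref{eq:energy}) on-shell.

```python
import numpy as np

def eps(n, pz, m, qB):
    """Landau-level dispersion: eps_n(pz) = sqrt(m^2 + pz^2 + 2 n |qB|)."""
    return np.sqrt(m**2 + pz**2 + 2*n*abs(qB))

def pz_pm(n, nprime, k0, kz, m, qB):
    """On-shell fermion longitudinal momenta p_{n,n'}^± of Eq. (discpz).

    Returns None below the threshold k_par^2 < (m_n + m_n')^2 of Eq. (thres).
    """
    kpar2 = k0**2 - kz**2
    mn, mnp = eps(n, 0.0, m, qB), eps(nprime, 0.0, m, qB)   # m_n = eps_n(pz=0)
    if kpar2 < (mn + mnp)**2:
        return None
    root = k0*np.sqrt((kpar2 - (mn**2 + mnp**2))**2 - 4*mn**2*mnp**2)
    base = kz*(kpar2 + mn**2 - mnp**2)
    return (base + root)/(2*kpar2), (base - root)/(2*kpar2)

# Example: n = n' = 0, m = 1, |qB| = 1, photon (k0, kz) = (3, 0)
pp, pm = pz_pm(0, 0, 3.0, 0.0, 1.0, 1.0)
print(pp, pm)  # +-sqrt(5)/2; only two discrete values, with pp + pm = kz = 0
```

Both roots satisfy $\epsilon_n(p_z) + \epsilon'_{n'}(k_z - p_z) = k_0$, which is how the two spikes per Landau-level pair arise.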
\begin{figure}
\begin{center}
\mbox{\small For $ (k_z/m, |qB|/m^2) = (0, 1)$}\\
\includegraphics[width=0.49\hsize]{fig2-11}
\includegraphics[width=0.49\hsize]{fig2-12}
\\
\mbox{\small For $ (k_z/m, |qB|/m^2) = (0, 3)$}\\
\includegraphics[width=0.49\hsize]{fig2-21}
\includegraphics[width=0.49\hsize]{fig2-22}
\\
\mbox{\small For $ (k_z/m, |qB|/m^2) = (3, 3)$}\\
\includegraphics[width=0.49\hsize]{fig2-31}
\includegraphics[width=0.49\hsize]{fig2-32}
\end{center}
\caption{
The longitudinal momentum $p_{n,n'}^\pm$ (left) and zenith angle $\phi_{n,n'}^\pm$ (right) that the di-lepton is allowed to take when converted from a photon carrying energy $ k_0 $ and momentum $ k_z $, with parameter sets $ (k_z/m, |qB|/m^2) = (0, 1)$ [first row], $ (0, 3)$ [second row], and $ (3, 3)$ [third row]. Only the first ten pairs of Landau levels are shown in ascending order with respect to the threshold energy (\ref{eq:thres}), from red (which is always the lowest Landau level pair $n=n'=0$) to blue. }
\label{fig:pz-phi}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\hsize]{fig3-1} \hspace*{5mm}
\includegraphics[width=0.45\hsize]{fig3-2}
\end{center}
\vspace{-7mm}
\caption{A schematic picture of the on-shell fermion momentum $p_{n,n'}^\pm$ and the corresponding zenith angle $\phi^\pm_{n,n'}$ for a vanishing photon longitudinal momentum $k_z=0$ (left) and the boost effect of a nonzero $ k_z > 0 $ (right). Shown is the case $ n=n' $, where the two cones have the same size at $ k_z=0$.}
\label{fig:boost}
\end{figure}
We now discuss the discretized on-shell longitudinal momentum $p_{n,n'}^\pm$ of the fermion in more detail. The left column of Fig.~\ref{fig:pz-phi} shows $p_{n,n'}^\pm$ as a function of the photon energy normalized by the fermion mass, $ k_0 /m$. The photon momentum $ k_z $ and the magnetic field strength $ |qB| $ are normalized in the same manner and are fixed in each plot as $ (k_z/m, |qB|/m^2) = (0, 1)$ [first row], $ (0, 3)$ [second row], and $ (3, 3)$ [third row]. Each colored curve corresponds to a pair of Landau levels $ (n,n') $. The solid and dotted curves show $ p_{n,n'}^+ $ and $ p_{n,n'}^- $, respectively. The threshold energy (\ref{eq:thres}) is the point where $p_{n,n'}^+=p_{n,n'}^-$ holds [cf. Eq.~(\ref{eq:limpz})], i.e., where the solid and dotted curves merge. We find the following:
\begin{itemize}
\item The di-lepton spectrum converted from a photon carrying a fixed energy $ k_0/m $ is given as a superposition of the $p_{n,n'}^\pm $ allowed for each pair $ (n,n') $. The spectrum is given by the intersections between the curves and a vertical line at $ k_0/m = {\rm const.}$, and looks like a set of spikes with vanishing width.
Namely, the produced di-leptons exhibit a discrete spike spectrum in the longitudinal direction. As we vary other continuous parameters and/or consider a convolution with some photon source, the spikes may acquire a finite width. An incident photon carrying a larger energy can be converted to di-leptons in higher Landau levels. These are the common features of the three panels.
\item When $ k_z =0$ (first and second rows), the plots are symmetric under reflection with respect to the horizontal axis because $ p_{n,n'}^+ + p_{n,n'}^- = k_z = 0$ due to the momentum conservation. When $ k_z > 0 $ (third row), the spectrum is shifted in the positive $ p_z $ direction. This is a boost effect of the finite $ k_z $ with respect to the center-of-momentum frame of the di-lepton, where $ k_z = p_z + p_z' =0$ (see Fig.~\ref{fig:boost}). Also, there are two-fold degeneracies $p_{n,n'}^\pm = p_{n',n}^\pm$ at $k_z=0$ because of the reflection symmetry, which are resolved for nonzero $ k_z $.
\item Since the spacings of the energy levels in the Landau quantization (the Landau-level spacings) increase with $ |qB| $, we find larger spacings between adjacent $p_{n,n'}^{\pm}$'s in the second and third rows ($|qB|/m^2=3$) than in the first row ($|qB|/m^2=1$). In particular, in a very strong magnetic field such that $|qB|>[(\sqrt{|k_\parallel^2|}-m)^2-m^2]/2$, only the lowest Landau level $n=n'=0$ can contribute to the di-lepton production, and there are only two discrete spikes in the longitudinal $p_z$-distribution at $p_z = p_{0,0}^{\pm}=[k_z \pm {\rm sgn} (k^2_\parallel) k^0\sqrt{1-4m^2/k_\parallel^2}]/2$. Note that the lowest Landau level energy $\epsilon_0$ is independent of $qB$, and $p_{0,0}^{\pm}$ stays in the low-energy regime even in the (infinitely) strong-field limit. For weak magnetic fields, the spacings between the $p_{n,n'}^{\pm}$'s get smaller, i.e., many Landau levels can contribute to the di-lepton production, which smears out the spike structures.
\end{itemize}
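The lowest-Landau-level momenta quoted in the last item can be cross-checked against the general solution (\ref{eq:discpz}). The sketch below is our own check: setting $n=n'=0$ in the general formula reproduces the closed form $p_{0,0}^\pm=[k_z \pm k^0\sqrt{1-4m^2/k_\parallel^2}]/2$ above threshold, where $ {\rm sgn}(k_\parallel^2)=+1$.

```python
import numpy as np

def pz_pm(n, nprime, k0, kz, m, qB):
    """General on-shell momenta p_{n,n'}^± of Eq. (discpz), (+ first, - second)."""
    kpar2 = k0**2 - kz**2
    mn  = np.sqrt(m**2 + 2*n*abs(qB))
    mnp = np.sqrt(m**2 + 2*nprime*abs(qB))
    root = k0*np.sqrt((kpar2 - (mn**2 + mnp**2))**2 - 4*mn**2*mnp**2)
    base = kz*(kpar2 + mn**2 - mnp**2)
    return (base + root)/(2*kpar2), (base - root)/(2*kpar2)

m, qB, k0, kz = 1.0, 3.0, 5.0, 2.0        # above threshold: k_par^2 = 21 > 4 m^2
kpar2 = k0**2 - kz**2
closed = tuple((kz + s*k0*np.sqrt(1 - 4*m**2/kpar2))/2 for s in (+1.0, -1.0))
print(pz_pm(0, 0, k0, kz, m, qB))
print(closed)  # the two lines agree
```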
\subsection{Zenith-angle distribution} \label{sec:4.2}
One can predict the zenith-angle distribution measured from the direction of the magnetic field (see Fig.~\ref{fig:boost}):
\begin{align}
\phi_{n,n'}^\pm (k_\parallel) \equiv
\tan^{-1} \frac{ \sqrt{ \epsilon_n^2 - (p_{n,n'}^\pm)^2 -m^2 } }{p_{n,n'}^\pm} \ , \quad
\phi_{n,n'}^{\prime\pm} (k_\parallel) \equiv
\tan^{-1} \frac{ \sqrt{ \epsilon_{n'}^{\prime2} - (p_{n,n'}^{\prime\pm})^2 -m^2 }}{p_{n,n'}^{\prime\pm}} \ ,\label{eq:phil}
\end{align}
for the fermion and anti-fermion, respectively. The numerator $\sqrt{ \epsilon^{(\prime) \, 2}_{n^{(\prime)}} - (p_{n,n'}^{(\prime)\pm})^2 -m^2} = \sqrt{2n^{(\prime)}|qB|}$ corresponds to the magnitude of the transverse momentum under the Landau quantization. The range of the angle is defined as $ 0 \leq \phi_{n,n'}^{(\prime)\pm} \leq \pi $. Both $\phi_{n,n'} ^\pm$ and $\phi_{n,n'}^{\prime\pm} $ are discrete quantities for a given photon momentum because of the discretization of $\epsilon^{(\prime)}$ and $p_z^{(\prime)}$, and each of them takes two values corresponding to $ p_{n,n'}^{(\prime)\pm}$.
Note that $ \phi_{n,n'}^{\prime \pm}$ is obtained from $ \phi_{n,n'}^{\pm}$ by exchanging the labels $n \leftrightarrow n'$ and $+ \leftrightarrow -$, because $ p^{\prime\pm}_{n,n'} = p_{n',n}^\mp $ under those exchanges as a manifestation of the CP symmetry. Thus, it is sufficient to focus on $ \phi_{n,n'}^{\pm}$ in the following. We remark again that, within the current setup of the problem with a constant magnetic field, one cannot predict the azimuthal-angle distribution, which would be equivalent to predicting the ${\bm k}_\perp$-distribution (as opposed to the $|{\bm k}_\perp|$-distribution). To obtain this information, one needs to know when the produced fermions are released from the cyclotron motion, implying that one has to go beyond the constant magnetic field and solve the dynamics with a time-dependent magnetic field damped out in time. This is a dynamical and process-dependent issue beyond the scope of the present work.
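The discrete zenith angles (\ref{eq:phil}) follow directly from the on-shell momenta. A minimal sketch (ours; it takes a given on-shell $p_z$ as input) uses the fact that the numerator of Eq.~(\ref{eq:phil}) equals $\sqrt{2n|qB|}$ and that $\tan^{-1}$ with the range $[0,\pi]$ is realized by \texttt{arctan2}:

```python
import numpy as np

def zenith(n, pz, qB):
    """Zenith angle phi of Eq. (phil), measured from the B direction.

    The transverse momentum magnitude of a Landau-level-n fermion is
    sqrt(2 n |qB|); arctan2 implements tan^{-1} with range 0 <= phi <= pi.
    """
    return np.arctan2(np.sqrt(2*n*abs(qB)), pz)

# Lowest Landau level: motion strictly along B, so phi = 0 or pi
print(zenith(0,  1.3, 1.0))  # 0.0
print(zenith(0, -1.3, 1.0))  # pi
# For kz = 0 one has p^+ = -p^-, hence phi^+ + phi^- = pi
print(zenith(2, 0.7, 1.0) + zenith(2, -0.7, 1.0))  # pi
```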
The right column of Fig.~\ref{fig:pz-phi} shows the zenith-angle distribution $\phi_{n,n'}^\pm$. The plotting style is the same as in the left column for $p_{n,n'}^\pm$, so one can read off the correspondence between $p_{n,n'}^\pm$ and $\phi_{n,n'}^\pm$ in the left and right columns. We find the following:
\begin{itemize}
\item In the lowest Landau level $n=0$, the produced fermion moves precisely along the magnetic-field direction, and thus $\phi _{0,n'}^{\pm}= 0$ and $\pi$ for ${\rm sgn}\, p _{0,n'}^{\pm}>0$ and $<0$, respectively. On the other hand, fermions in higher Landau levels are emitted with a finite transverse momentum, and hence $ 0 < \phi_{n,n'}^\pm (k_\parallel) < \pi $. Note that there are discontinuous jumps in the lowest Landau level $n=0$ for $k_z \neq 0$ in the third row. This behavior originates from the change of the sign of $ p_{n,n'}^- $ ($ p_{n,n'}^+$) for $k_z>0$ ($k_z<0$) due to the boost effect; see the left column. Such jumps, however, do not occur for massless fermions in the lowest Landau level, as the Lorentz boost cannot change the sign of their momenta.
\item The zenith-angle distribution is limited to a few discrete directions when the photon energy is small, where the fermions can occupy only a few low-lying Landau levels due to the threshold condition (\ref{eq:thres}). As we increase the photon energy, more and higher Landau levels start contributing to the production. This results in a smearing of the spike structures in the zenith-angle distribution, with narrower spacings.
\item Comparing the panels in the first ($|qB|/m^2=1$) and second ($|qB|/m^2=3$) rows, we see that the major effects of strong magnetic fields are two-fold: (i) a shift of $ \phi_{n,n'}^{\pm}$ to higher photon energies, and (ii) a squeezing of $ \phi_{n,n'}^{\pm} \to \pi/2 $. Namely, (i) the photon threshold energy (\ref{eq:thres}) is lifted up, except for the lowest Landau level, when $ |qB|$ is increased. Therefore, the contributions from higher Landau levels are shifted to higher photon energies (rightward), and we find only the lowest-Landau-level contribution in the strong-field limit $|qB| \gg k_\parallel^2$. (ii) Under a stronger magnetic field, a larger portion of the photon energy needs to be converted to the fermion transverse energy $ \sqrt{2 n|qB| } $. Only the remaining portion can be converted to the longitudinal momentum $ p_z $, and thus the magnitude of $p_z$ is reduced. Therefore, the angle $\phi_{n,n'}^\pm$ for each pair of Landau levels approaches $ \pi/2 $ as we increase $ |qB| $. This tendency is seen as a squeezing of the bunch of curves toward the center at $ \pi/2 $.
\item When $ k_z=0 $ (the first and second rows) and thus $p_{n,n'}^+ = -p_{n,n'}^-$,
we find that $ \phi_{n,n'}^+ + \phi_{n,n'}^- = \pi $ holds. This is a consequence of the reflection symmetry with respect to the transverse plane, or equivalently a flip of the magnetic-field direction (cf. Fig.~\ref{fig:boost}). When $ k_z > 0$ (the third row), the reflection symmetry is broken and the curves in the plot become asymmetric with respect to the horizontal axis. This originates from the positive shift of $ p_{n,n'}^\pm $ in the left column and is a consequence of the Lorentz boost of the cones in the longitudinal direction (see Fig.~\ref{fig:boost}), i.e., one of the cones shrinks while the other expands.
\end{itemize}
We emphasize that the kinematics in the Landau quantization is essential in the above observations. Thus, the results obtained in the last two subsections \ref{sec4.2} and \ref{sec:4.2} are insensitive to the size of the transverse photon momentum $ |{\bm k}_\perp| $, as the kinematics is determined by the longitudinal variables $k_0$ and $k_z$ only. The di-lepton production acquires $ |{\bm k}_\perp| $-dependence only via the scalar form factor ${\Gam}_{n,n'}$, as we have discussed in Sec.~\ref{sec:3.3} and will further demonstrate below.
\subsection{Inclusive photon--to--di-lepton conversion rate} \label{sec:conversion}
Having explained the basic behaviors of the contracted lepton tensors in Sec.~\ref{sec:3.3} and the kinematics of the di-lepton production in Secs.~\ref{sec4.2} and \ref{sec:4.2}, we now investigate the di-lepton production in more detail through the inclusive photon--to--di-lepton conversion rate $\rate$. It is obtained by integrating the squared amplitude (\ref{eq:sq}) as
\begin{align}
\rate_{\lambda} (k)
&\equiv \sum_{n,n'} \sum_{s,s'} \int \frac{dp_z dp_y}{(2\pi)^2(2\epsilon_n)} \int \frac{dp'_z dp'_y}{(2\pi)^2(2\epsilon_{n'})} \frac{ |\varepsilon_\mu q {\mathcal M}^\mu|^2}{2k^0(2\pi)^3\delta^{(3)}(0)} \nonumber\\
&= \sum_{n,n'}\left[ q^2 T \frac{|qB|}{2\pi} \sum_{ p_z = p_{n,n'}^\pm} \frac{ \Theta( \, k_\parallel^2 - ( m_n + m_{n'})^2 \,) }{ 8 k^0 |p_z\epsilon'_{n'} - (k_z - p_z) \epsilon_n | } \left. \ [ \varepsilon_\mu^\lambda \varepsilon^{\lambda\ast}_\nu L^{\mu\nu}_{n,n'} ] \right|_{p'_z=k_z-p_z} \right] \nonumber\\
&\equiv \sum_{n,n'} \rate_{\lambda}^{n,n'} (k) \, , \label{eq:num}
\end{align}
where we used $\int dp_y = |qB| L_x$ and introduced $ \lambda =0,\pm,\parallel$ and $\Theta$, representing the four photon polarization modes and the step function, respectively. We also used
\begin{align}
\delta ( \epsilon_n + \epsilon'_{n'} - k^0 )
&= \sum_{\alpha = \pm} \delta(p_z-p_{n,n'}^\alpha)
\left| \frac{\epsilon_n \epsilon'_{n'}}{p_z\epsilon'_{n'} - (k_z - p_z) \epsilon_n } \right|
\Theta( \, k_\parallel^2 - ( m_n + m_{n'})^2 \, )
\label{eq:delta-function}
\ ,
\end{align}
which accounts for the kinematics discussed in the previous subsections. As explained in Sec.~\ref{sec-3}, the coupling between the lepton tensor (\ref{eq:L}) and an incident photon depends on the photon polarization mode via the tensor structures such as $L_\parallel^{\mu\nu}$ and $\Q_\pm^{\mu\nu}$. Plugging Eq.~(\ref{eq:projected-L}) into Eq.~(\ref{eq:num}), we obtain the polarization-projected conversion rates
\begin{subequations}\label{eq:N-lambda}
\begin{align}
\rate_{0}^{n,n'}
&\!= q^2 T \frac{|qB|}{2\pi} \!\!\!\sum_{ p_z = p_{n,n'}^\pm}\!\!\! \frac{ \Theta( \, k_\parallel^2 - ( m_n + m_{n'})^2 \,) }{ 4 k^0|p_z\epsilon'_{n' }- (k_z - p_z) \epsilon_n | } \Big[ ( \epsilon_n \epsilon'_{n' } + p_z p'_z - m^2 ) ( | {\Gam}_{n , n'} |^2 + |{\Gam}_{n-1 , n'-1 } |^2 ) \nonumber \\
&\quad\quad\quad\quad\quad\quad\quad\quad - 2 |q B| \left( - |\bar {\bm k}_\perp| ^2 |{\Gam}_{n-1,n'}|^2 + n | {\Gam}_{n, n'} |^2 + n' | {\Gam}_{n-1,n'-1}|^2 \right) \Big]_{p'_z=k_z-p_z} \, , \label{eq:N0} \\
\rate_{+}^{n,n'}
&\!= q^2 T \frac{|qB|}{2\pi} \!\!\!\sum_{ p_z = p_{n,n'}^\pm}\!\!\! \frac{\Theta( \, k_\parallel^2 - ( m_n + m_{n'})^2 \,) }{4 k^0|p_z\epsilon'_{n' }- (k_z - p_z) \epsilon_n | } \Big[ 2 ( \epsilon_n \epsilon'_{n' } - p_z p'_z + m^2 ) | {\Gam}_{n , n'-1 } |^2\Big]_{p'_z=k_z-p_z} \, , \label{eq:Np} \\
\rate_{-}^{n,n'}
&\!= q^2 T \frac{|qB|}{2\pi} \!\!\!\sum_{ p_z = p_{n,n'}^\pm} \!\!\! \frac{\Theta( \, k_\parallel^2 -( m_n + m_{n'})^2 \,) }{4 k^0|p_z\epsilon'_{n' }- (k_z - p_z) \epsilon_n |} \Big[ 2 ( \epsilon_n \epsilon'_{n' } - p_z p'_z + m^2 ) | {\Gam}_{n-1 , n' } |^2\Big]_{p'_z=k_z-p_z} \, , \label{eq:Nm} \\
\rate_{\parallel}^{n,n'}
&\!= q^2 T \frac{|qB|}{2\pi} \!\!\!\sum_{ p_z = p_{n,n'}^\pm} \!\!\! \frac{ \Theta( \, k_\parallel^2 - ( m_n + m_{n'})^2 \,) }{4 k^0 |p_z\epsilon'_{n' }- (k_z - p_z) \epsilon_n |} \Big[ ( \epsilon_n \epsilon'_{n' } + p_z p'_z + m^2 ) ( | {\Gam}_{n , n'} |^2 + |{\Gam}_{n-1 , n'-1 } |^2 ) \nonumber \\
&\quad\quad\quad\quad\quad\quad\quad\quad + 2 |q B| \left( - |\bar {\bm k}_\perp| ^2 |{\Gam}_{n-1,n'}|^2 + n | {\Gam}_{n, n'} |^2 + n' | {\Gam}_{n-1,n'-1}|^2 \right) \Big]_{p'_z=k_z-p_z} \, . \label{eq:Npara}
\end{align}
\end{subequations}
Note that $ \rate_{+}^{n,n'} = \rate_{-}^{n',n} $ (but $ \rate_{+}^{n,n'} \neq \rate_{-}^{n,n'} $ in general) and that $ \rate_{+} = \rate_{-} $ after the summation over $ n, \, n' $. This is a natural manifestation of the fact that the lepton tensor does not contain any parity-breaking effect. At the algebraic level, one can show those identities by using the facts that $ |{\Gam}_{n , n'-1 } |^2 = | {\Gam}_{n'-1 , n} |^2 $ [cf. Eq.~(\ref{eq:FF-Landau-2})] and that the denominator $ |p_z\epsilon'_{n' }- (k_z - p_z) \epsilon_n | $ as well as the other parts is invariant with respect to simultaneous interchanges between $ n , n' $ and between $ p^\pm _{n,n'} , p^\mp _{n,n'} $ (cf. $ p^\pm _{n',n} = k_z - p^\mp_{n,n'}$).
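The interchange property $ p^{\prime\pm}_{n,n'} = p^{\mp}_{n',n} $ underlying these identities can be verified numerically from Eq.~(\ref{eq:discpz}). The sketch below is our own check (not from the original text): it confirms that the anti-fermion momenta $k_z - p^{\pm}_{n,n'}$ coincide with $p^{\mp}_{n',n}$ for a generic parameter set above threshold.

```python
import numpy as np

def pz_pm(n, nprime, k0, kz, m, qB):
    """On-shell momenta p_{n,n'}^± of Eq. (discpz) (+ root first, - root second)."""
    kpar2 = k0**2 - kz**2
    mn  = np.sqrt(m**2 + 2*n*abs(qB))
    mnp = np.sqrt(m**2 + 2*nprime*abs(qB))
    root = k0*np.sqrt((kpar2 - (mn**2 + mnp**2))**2 - 4*mn**2*mnp**2)
    base = kz*(kpar2 + mn**2 - mnp**2)
    return (base + root)/(2*kpar2), (base - root)/(2*kpar2)

k0, kz, m, qB = 8.0, 2.0, 1.0, 1.0
pp_13, pm_13 = pz_pm(1, 3, k0, kz, m, qB)   # p^+_{1,3}, p^-_{1,3}
pp_31, pm_31 = pz_pm(3, 1, k0, kz, m, qB)   # p^+_{3,1}, p^-_{3,1}
# anti-fermion momenta p'^± = kz - p^± coincide with p^∓ under n <-> n'
print(kz - pp_13 - pm_31)  # ~ 0
print(kz - pm_13 - pp_31)  # ~ 0
```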
In the following, we examine dependences of the conversion rates $ \rate_\lambda $ on the physical parameters such as the photon momentum and the magnetic field strength. We normalize all the dimensionful parameters by the fermion mass assuming that $m\neq 0$, except in Sec.~\ref{sec434} where we discuss the massless limit ($m\to 0$).
\subsubsection{Photon-energy dependence}\label{sec4.3.1}
We show the photon-energy dependence of the conversion rates $ \rate_\lambda $ in Fig.~\ref{fig:Npm(k0)} for different sets of photon longitudinal momentum $ k_z /m$ and magnetic field strength $|qB|/m^2$. The photon transverse momentum is fixed at $ |{\bm k}_\perp|/m =1 $; the dependence on $ |{\bm k}_\perp| $ will be discussed in Sec.~\ref{sec4.3.2}. In each plot, the colored curves show the contributions from each pair of Landau levels $ \rate_\lambda^{n,n'}$ (the lowest Landau level pair $n=n'=0$ is in red, and the color changes to blue as we go to higher Landau level pairs), while the black curve shows the total contribution summed over the Landau levels, $ \rate_\lambda $. Note that we plotted $\rate_\pm$ in a single plot because $\rate_{+}^{n,n'} = \rate_{-}^{n', n}$ and $\rate_+ = \rate_-$, as we remarked below Eq.~(\ref{eq:N-lambda}).
The most important message of Fig.~\ref{fig:Npm(k0)} is that there are an infinite number of thresholds for the di-lepton production, at which the conversion rates exhibit resonant behaviors, i.e., spike structures as a function of $k_0$. This is essentially the same as the cyclotron resonances in quantum mechanics under a weak magnetic field, and the presence of such thresholds is a direct manifestation of the Landau quantization. The locations of the thresholds are given by Eq.~(\ref{eq:thres}) and are indicated by the colored dots on the horizontal axes in the plots. One observes that the interval between two adjacent thresholds, which is nothing but the Landau-level spacing, increases with the magnetic field strength, as is evident from the comparison between the first row ($|qB|/m^2=1$) and the second and third rows ($|qB|/m^2=3$) of Fig.~\ref{fig:Npm(k0)}. Also, comparing the second ($k_z/m=0$) and third ($k_z/m=3$) rows of Fig.~\ref{fig:Npm(k0)}, one notices that the locations of the thresholds shift to higher photon energies for a nonvanishing photon longitudinal momentum $ k_z \neq 0$. This is because a nonzero $ k_z $ requires a nonzero di-lepton longitudinal momentum by the momentum conservation, which costs additional energy for the di-lepton production.
\begin{figure}
\begin{center}
\mbox{\small For $ (k_z/m, |qB|/m^2) = (0, 1)$}\\
\includegraphics[width=0.34\hsize]{fig4-11} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig4-12} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig4-13}
\\
\mbox{\small For $ (k_z/m, |qB|/m^2) = (0, 3)$}\\
\includegraphics[width=0.34\hsize]{fig4-21} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig4-22} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig4-23}
\\
\mbox{\small For $ (k_z/m, |qB|/m^2) = (3, 3)$}\\
\includegraphics[width=0.34\hsize]{fig4-31} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig4-32} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig4-33}
\end{center}
\caption{Photon-energy dependences of the conversion rates: $ \rate_0$ (left), $ \rate_\pm$ (middle), and $ \rate_\parallel$ (right). The photon transverse momentum is fixed at $|{\bm k}_\perp|/m=1 $, and the other parameters are at $ (k_z/m, |qB|/m^2) = (0, 1)$ [first row], $(0, 3)$ [second row], and $ (3, 3)$ [third row]. The colored dots on the horizontal axes indicate the threshold energies (\ref{eq:thres}) in ascending order from red, representing the lowest threshold for the lowest Landau level pair $n=n'=0$, to blue for higher Landau level pairs. The colored lines originating from each dot show the contributions from each Landau level pair $ \rate_\lambda^{n,n'} $. The black lines show the total contribution summed over the Landau levels $ \rate_\lambda$.
}
\label{fig:Npm(k0)}
\end{figure}
Next, we look more closely into the resonant behaviors at the thresholds and analytically show that the heights of the spikes are divergent if $m\neq 0$. As the boost-invariant photon energy $ k_\parallel^2 $ approaches each threshold (\ref{eq:thres}) from above, $k_\parallel^2 \to ( m_n + m_{n'})^2 + 0^+$, the on-shell fermion longitudinal momentum (\ref{eq:discpz}) behaves as
\begin{align}
\label{eq:pz-expansion}
p_{n,n'}^\pm
= k_z \frac{ m_n }{ m_n + m_{n'} } \pm k_0 \frac{ \sqrt{ m_n m_{n'}}}{ ( m_n + m_{n'})^2 } \delta k_\parallel + {\mathcal O}( \delta k_\parallel^2) \, ,
\end{align}
where $\delta k_\parallel \to 0^+$ is the deviation from the threshold,
\begin{align}
\delta k_\parallel \equiv \sqrt{ k_\parallel^2 - ( m_n + m_{n'})^2 } \, .
\end{align}
Then, the common factor in the denominators of the conversion rates (\ref{eq:N-lambda}) goes to zero as
\begin{align}
\label{eq:denom-expand}
|p_z\epsilon'_{n'} - (k_z - p_z) \epsilon_n |
= \sqrt{ m_n m_{n'}} \delta k_\parallel + {\mathcal O}( \delta k_\parallel ^3) \to 0^+ \, .
\end{align}
Therefore, the conversion rates diverge as $ \sim ({\rm numerator})/\delta k_\parallel \to \infty $ when the photon energy approaches each threshold, unless the numerator is ${\mathcal O}(\delta k_\parallel)$. The numerator is always ${\mathcal O}(1)$ for $m\neq 0$ but can be ${\mathcal O}(\delta k_\parallel)$ for $m=0$ because of the linear dispersion of the lowest Landau level, as we will discuss in Sec.~\ref{sec434}. This divergent behavior is seen as the spike structures in Fig.~\ref{fig:Npm(k0)}, and the inverse square-root dependence $ (\sim 1/\delta k_\parallel) $ is a typical threshold behavior in (1+1) dimensions.
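The $1/\delta k_\parallel$ scaling can be checked numerically. The sketch below is our own illustration: it evaluates the denominator factor $|p_z \epsilon'_{n'} - (k_z - p_z)\epsilon_n|$ slightly above threshold and compares it with the leading term $\sqrt{m_n m_{n'}}\,\delta k_\parallel$ of Eq.~(\ref{eq:denom-expand}).

```python
import numpy as np

def denom_vs_leading(n, nprime, m, qB, dk, kz=0.0):
    """Compare |pz eps' - (kz - pz) eps| with sqrt(m_n m_n') * dk at
    k_par^2 = (m_n + m_n')^2 + dk^2, i.e., delta k_par = dk (Eq. denom-expand)."""
    mn  = np.sqrt(m**2 + 2*n*abs(qB))
    mnp = np.sqrt(m**2 + 2*nprime*abs(qB))
    kpar2 = (mn + mnp)**2 + dk**2
    k0 = np.sqrt(kpar2 + kz**2)
    root = k0*np.sqrt((kpar2 - (mn**2 + mnp**2))**2 - 4*mn**2*mnp**2)
    pz = (kz*(kpar2 + mn**2 - mnp**2) + root)/(2*kpar2)       # p^+ branch
    eps  = np.sqrt(m**2 + pz**2 + 2*n*abs(qB))
    epsp = np.sqrt(m**2 + (kz - pz)**2 + 2*nprime*abs(qB))
    return abs(pz*epsp - (kz - pz)*eps), np.sqrt(mn*mnp)*dk

exact, leading = denom_vs_leading(1, 2, 1.0, 1.0, dk=1e-3)
print(exact / leading)  # -> 1 as dk -> 0, with corrections of O(dk^2)
```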
We remark that the di-lepton production with a lowest-Landau-level fermion and/or anti-fermion (i.e., $n=0$ or $n'=0$) is prohibited for particular photon polarization modes, and the corresponding conversion rates vanish even above the threshold\footnote{Note that we here concentrate on the massive case ($m \neq 0$). There are further prohibitions in the massless limit $m\to 0$ for a chirality reason, which we will show in Sec.~\ref{sec434}. }. Namely, circularly polarized photons with $ \lambda=+ $ ($ \lambda=- $) do not couple to an $n'=0$ anti-fermion (an $n=0$ fermion), and $\rate_+^{n,0}=\rate_-^{0,n'}=0$ for any $n,n'$. Physically, this is because those photon polarization modes carry nonzero spin components $ \pm1 $ along the magnetic field, whereas the di-lepton states with $n=0$ or $n'=0$ carry either spin zero or $\mp1$ [recall the discussions below Eq.~(\ref{eq:pol})]. On the other hand, photons with $ \lambda=0,\parallel $ can couple to di-leptons with $n=0$ or $n'=0$, since those photons can have the same spin state as the di-leptons. Yet the case of $\rate_0^{0,0}$ is somewhat exceptional in that it vanishes at $k_z=0$. Indeed, when $ n=n'=0 $, a factor in the numerator of $ \rate_0^{0,0} $ can be evaluated as $ \epsilon_n \epsilon'_{n' } + p_z p'_z - m^2 = k_z^2/2 +{\mathcal O} (\delta k_\parallel^2)$, and the other numerator factors are ${\mathcal O}(1)$. Taking into account the denominator factor (\ref{eq:denom-expand}), we find $\rate_0^{0,0} \propto k_z^2/( \delta k_\parallel\sqrt{m_n m_{n'}} )$, which vanishes at $k_z= 0$.
\subsubsection{Transverse-momentum dependence} \label{sec4.3.2}
\begin{figure}
\begin{center}
\includegraphics[width=0.49\hsize]{fig5-11}
\includegraphics[width=0.49\hsize]{fig5-12}
\includegraphics[width=0.49\hsize]{fig5-21}
\includegraphics[width=0.49\hsize]{fig5-22}
\end{center}
\caption{Photon transverse momentum $|{\bm k}_\perp|$-dependence of the conversion rates $\rate_\lambda^{n,n'}$. The Landau level of the fermion is fixed at $ n=1 $, while that of the anti-fermion runs over $n'=0,1,2,3,4,5$. The other parameters are fixed at $|qB|/m^2 =1$, $k_0/m=10$, and $k_z/m=0$.
}
\label{fig:Npm}
\end{figure}
We now discuss quantitatively the $|{\bm k}_\perp|$-dependence of the conversion rates (\ref{eq:N-lambda}), which is determined by the scalar form factor $|{\Gam}_{n,n'}|^2$ (i.e., the overlap between the transverse wave functions of the produced fermion and anti-fermion); see Sec.~\ref{sec:3.3} for analytical discussions. Figure~\ref{fig:Npm} demonstrates that the conversion rate for each pair of Landau levels $ \rate_{\lambda}^{n,n'} $ is suppressed at large $ |{\bm k}_\perp| $ and exhibits a peaked structure, which is a remnant of the transverse momentum conservation modified by the magnetic field. The peak location is determined by the Landau levels appearing as indices of the Laguerre polynomials in $|{\Gam}_{n,n'}|^2$. We took $n=1 $ and $n'=0, 1,2,3,4,5 $ for demonstration. Among those cases, one finds that $ \rate_+^{1,0}=0 $ identically because of the spin configurations, as we remarked in Sec.~\ref{sec4.3.1}. One also finds that the conversion rates vanish in the limit $|{\bm k}_\perp| \to 0$ except for $\rate_{+}^{1,2}$, $\rate_{-}^{1,0}$, and $\rate_\parallel^{1,1}$. This is a consequence of the property discussed around Eq.~(\ref{eq:G(k->0)}); in general, only $\rate_0^{n,n}$, $\rate_+^{n,n+1}$, $\rate_-^{n+1,n}$, and $\rate_\parallel^{n,n}$ can be nonvanishing in the limit $|{\bm k}_\perp| \to 0$. Note that the indices of the form factor $ {\Gam}_{n,n'}$ appearing in the conversion rates (\ref{eq:N-lambda}) are not necessarily $(n,n')$ but are shifted in some terms, because the di-lepton spin configurations differ depending on the photon polarization. When one further takes the limit $k_z\to0$, $\rate_0^{n,n}$, and thus $ \rate_0^{1,1} $ in Fig.~\ref{fig:Npm}, vanishes because $\rate_0^{n,n} \propto \epsilon_n \epsilon'_{n} + p_z p'_z - m^2 -2n|qB| = {\mathcal O}(k_z^2)$, while the other three stay finite.
\begin{figure}
\begin{center}
\mbox{\small For $|qB|/m^2 = 1$}\\
\includegraphics[width=0.34\hsize]{fig6-11} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig6-12} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig6-13}
\\
\mbox{\small For $|qB|/m^2 = 3$}\\
\includegraphics[width=0.34\hsize]{fig6-21} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig6-22} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig6-23}
\end{center}
\caption{The photon transverse momentum $|k_\perp|$-dependences of the conversion rates summed over the Landau levels $ \rate_\lambda$ at $k_z/m=0$. The lines with different colors distinguish the photon energy $k_0/m=3,5,7$, and $9$. The magnetic field strengths are taken as $|qB|/m^2 =1 $ (top) and $=3$ (bottom).
}
\label{fig:Npm-summed}
\end{figure}
The Landau-level summed conversion rates $\rate_{\lambda}$, shown in Fig.~\ref{fig:Npm-summed}, exhibit an oscillating behavior resulting from the superposition of the peaks in each Landau-level contribution $\rate_{\lambda}^{n,n'}$. The structure of the oscillation changes as the parameters vary, i.e., the oscillation becomes (i) finer for larger photon energies $k_0$ and (ii) more moderate for stronger magnetic fields. Those changes are attributed to the number of Landau levels that can be excited with a given photon energy. Namely, a larger photon energy can excite higher Landau levels, so that the oscillation acquires a finer structure with contributions of higher modes. Also, the number of contributing Landau levels decreases as we increase $|qB|/m^2 $, with which the Landau-level spacing increases. In particular, only the lowest-lying pair of the Landau levels, $(n,n')=(1,0)$ or $(0,1)$, can contribute when the photon energy is small and the magnetic field is strong (e.g., the red line in the bottom right panel for $k_0/m =3$ and $|qB|/m^2=3 $). Note that the conversion rates $\rate_{\lambda}$ fall off at large $ |{\bm k}_\perp|/m $, and they fall off more slowly for larger $k_0$. This is because a larger $k_0$ can excite higher Landau levels having large $|\Delta n|$, which are favorably produced with a large $|{\bm k}_\perp|$ as a remnant of the transverse momentum conservation.
\subsubsection{Magnetic-field dependence}
\label{sec:B-dep}
We discuss dependences on the magnetic field strength. As shown in Fig.~\ref{fig:8}, the conversion rates $D_\lambda$ have spike structures with respect to the magnetic field strength $|qB|$ as well, and there exists an upper limit $|qB|_{\rm max}$ for each pair of Landau levels above which the production is prohibited. Those behaviors are determined by the threshold condition (\ref{eq:thres}), which tells us that fewer Landau levels can contribute to the production as the magnetic field strength $|qB|$, and accordingly the Landau-level spacing, is increased. Namely, when $|qB|$ is increased from a certain value with a fixed photon energy $k_0$ (or equivalently $k_\parallel^2$), a peak appears whenever a pair of higher Landau levels stops contributing to the di-lepton production at $|qB|=|qB|_{\rm max}$. In the end (i.e., in the strong-field limit), only the lowest-lying pair with $n=n'=0$ can satisfy the threshold condition (\ref{eq:thres}), which is independent of $|qB|$ for this pair and is satisfied as long as $k_\parallel^2 \geq 4m^2$. In other words, there is no upper limit $|qB|_{\rm max}$ for $n=n'=0$. For the other pairs of Landau levels, one can find the upper limit $|qB|_{\rm max}$ by solving the threshold condition~(\ref{eq:thres}) in terms of $|qB|$ as
\begin{align}
|qB|_{\rm max}
= \left\{ \begin{array}{ll}
\displaystyle \frac{ k_\parallel^2-4m^2}{8n} & \ \ {\rm for}\ n=n'\neq 0 \\[10pt]
\displaystyle \frac{(n+n') k_\parallel^2 - 2\sqrt{(n-n')^2m^2 k_\parallel^2 + nn' k_\parallel^4 }}{2(n-n')^2} &\ \ {\rm for}\ n\neq n'
\end{array} \right.\, , \label{eq:upper}
\end{align}
where $ k_\parallel^4 = (k_\parallel^2)^2 $ and $ k_\parallel^2 > 4m^2$ are understood. $|qB|_{\rm max}$ is an increasing function of $k_0$ or $k_\parallel^2$, meaning that more energetic photons can excite more energetic di-leptons under stronger $|qB|$. Note that the positivity of the upper limits $ |qB|_{\rm max}$ (\ref{eq:upper}) is guaranteed as long as $ k_\parallel^2 > 4m^2$.
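As a direct numerical illustration of Eq.~(\ref{eq:upper}), the upper limit can be evaluated for a few low-lying pairs (natural units with $m=1$; the specific parameter values are our own choices):

```python
from math import sqrt

def qB_max(n, nprime, kpar2, m=1.0):
    """Upper limit |qB|_max from Eq. (upper); requires kpar2 > 4 m^2.
    Returns None for n = n' = 0, where no upper limit exists."""
    if n == nprime == 0:
        return None
    if n == nprime:
        return (kpar2 - 4.0 * m**2) / (8.0 * n)
    dn2 = (n - nprime) ** 2
    return ((n + nprime) * kpar2
            - 2.0 * sqrt(dn2 * m**2 * kpar2 + n * nprime * kpar2**2)) / (2.0 * dn2)

# For kpar2 = 12 m^2: the (1,1) pair stops contributing at |qB| = m^2,
# while the (1,0) pair survives up to |qB| = (6 - 2*sqrt(3)) m^2 ~ 2.54 m^2
print(qB_max(1, 1, 12.0), qB_max(1, 0, 12.0))
```

One can also verify numerically that higher pairs (e.g. $n=n'=2$) have smaller $|qB|_{\rm max}$ than lower pairs, and that the limit stays positive whenever $k_\parallel^2 > 4m^2$, as stated in the text.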
The summed conversion rates $D_\lambda$, as well as the contributions from each pair of the Landau levels $D_\lambda^{n,n'}$, increase roughly linearly with $|qB|$. This is because the phase-space volume in a constant magnetic field is proportional to the magnetic field strength as $\int d^2p_\perp \propto \frac{|qB|}{2\pi} \sum_n$, where the factor $|qB|/2\pi$ is the so-called Landau degeneracy factor and is a manifestation of the Landau quantization.
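This counting can be checked with elementary arithmetic: the Landau degeneracy $|qB|/2\pi$ per level, multiplied by the number of levels below a transverse cutoff, reproduces the continuum phase-space density. The sketch below uses arbitrary illustrative values of our own choosing.

```python
from math import pi

qB, cutoff2 = 0.5, 1000.0   # field strength and p_perp^2 cutoff (arbitrary units)

# Continuum counting: integral of d^2p_perp/(2 pi)^2 over p_perp^2 < cutoff2
continuum = cutoff2 / (4.0 * pi)

# Landau counting: degeneracy |qB|/(2 pi) per level, levels with 2 n |qB| < cutoff2
n_levels = cutoff2 / (2.0 * qB)
landau = qB / (2.0 * pi) * n_levels

# The two densities agree; with a fixed set of contributing Landau-level pairs,
# each pair's contribution therefore scales linearly in |qB|.
print(continuum, landau)
```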
\begin{figure}
\begin{center}
\includegraphics[width=0.34\hsize]{fig7-11} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig7-12} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig7-13}
\end{center}
\caption{The magnetic-field dependence of the conversion rates $ \rate_0$ (left), $ \rate_\pm$ (middle), and $ \rate_\parallel$ (right). The parameters are fixed as $k_0/m=3$, $|{\bm k}_\perp|/m=1 $ and $k_z/m=0$. The colored dots on the horizontal axes indicate the upper limits for the magnetic field strength $|qB|_{\rm max}$ (\ref{eq:upper}) in descending order from red (which is always for the lowest Landau level pair $n=n'=0$) to blue. The colored lines originating from each dot show the contributions from each Landau level pair $ \rate_\lambda^{n,n'} $, and the black lines the total value summed over the Landau levels $ \rate_\lambda$.
}
\label{fig:8}
\end{figure}
\subsubsection{Massless limit} \label{sec434}
\begin{figure}
\begin{center}
\includegraphics[width=0.34\hsize]{fig8-11} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig8-12} \hspace*{-4mm}
\includegraphics[width=0.34\hsize]{fig8-13}
\end{center}
\caption{The conversion rates $ \rate_0$ (left), $ \rate_\pm$ (middle), and $ \rate_\parallel$ (right) in the massless limit $m = 0$, plotted against the photon energy $k_0/|qB|^{1/2}$. The other parameters are fixed at $|{\bm k}_\perp|/|qB|^{1/2}=1 $ and $k_z/|qB|^{1/2}=0$. As in Fig.~\ref{fig:Npm(k0)}, the colored dots on the horizontal axes indicate the threshold energies (\ref{eq:thres}) in ascending order from red to blue. The colored lines originating from each dot show the contributions from each Landau level pair $ \rate_\lambda^{n,n'} $, and the black lines the total value summed over the Landau levels $ \rate_\lambda$.
}
\label{fig:9}
\end{figure}
So far, we have used the fermion mass parameter $m$ just to make the other dimensionful quantities (such as the photon energy $k_0$ and the magnetic field strength $|qB|$) dimensionless. Such a treatment makes sense only when $m\neq 0$, and we here discuss the massless limit ($m \to 0$) separately. While the basic features of the di-lepton production are unchanged in the massless limit, we highlight several differences as demonstrated in Fig.~\ref{fig:9}.
In the massless limit, the di-lepton production in the lowest Landau level pair $n=n'=0$ is strictly prohibited,
regardless of the photon polarization $ \lambda $.
The physical reason behind this prohibition is the absence of chirality mixing in a strictly massless theory. Namely, fermions and anti-fermions in the lowest Landau level belong to different chirality eigenstates in a massless theory, and are not directly coupled with each other. One can understand this statement from the spin polarization and the kinematics. Spin in the lowest Landau level is polarized due to the Zeeman effect, and thus the spins of fermions and anti-fermions are polarized in opposite directions. In addition, if di-lepton production occurred, the produced fermion and anti-fermion would recede from each other back-to-back in the center-of-momentum frame of the di-lepton,\footnote{
One can safely take the center-of-momentum frame $k_z=0$, since the incident photon
must be time-like in the $(1+1)$-dimensional sense $k_\parallel^2>0$ to produce di-leptons because of the threshold condition Eq.~(\ref{eq:thres}).
}
indicating that the fermion and anti-fermion would have the same helicity. Indeed, a fermion and an anti-fermion carrying the {\it same} helicity are created/annihilated by Weyl spinors in {\it different} chirality eigenstates. Therefore, the di-lepton production in the lowest Landau level cannot occur, unless there is mixing between the right and left chirality eigenstates via a finite mass term. In the case of vector theories, di-fermion production in this massless channel would also contradict the chiral symmetry, i.e., the conservation of the axial charge (at the classical level). A similar prohibition mechanism is known as ``helicity suppression'' in the leptonic decay of charged pions \cite{Donoghue:1992dd, Zyla:2020zbs}.
At the algebraic level, this statement can be recognized as follows:
There are three scalar form factors, ${{\Gam}}_{n,n'}$, ${{\Gam}}_{n-1,n'}$, and ${{\Gam}}_{n-1,n'-1}$,
in the conversion rates (\ref{eq:N-lambda}), among which only ${\Gam}_{n,n'}$ can be nonvanishing for $n=n'=0$.
However, the coefficients in front of $|{{\Gam}}_{0,0}|^2$ identically vanish
in the massless limit, i.e., $ \epsilon_0 \epsilon'_0 + p_z p'_z \pm m^2 = {\mathcal O}(m^2)$.
Thus, $D^{0,0}_{\lambda} = 0$ holds for any photon polarization $\lambda$.
This limiting behavior coincides with the above prohibition mechanism in a strictly massless theory
(in spite of the fact that the naive massless limit
does not reproduce the correct dispersion relation of the lowest Landau level
in a massless theory $ \epsilon_0 =\pm p_z $ with signs for the right and left chirality).
Another distinct feature in the massless limit is that the resonances at the thresholds take finite values, unlike the $m \neq 0$ case that we discussed in Sec.~\ref{sec4.3.1}. Recall the discussion below Eq.~(\ref{eq:denom-expand}), in which we claimed that even if the common denominator factor in the conversion rates (\ref{eq:N-lambda}) vanishes, $|p_z \epsilon'_{n'}-(k_z-p_z)\epsilon_n| = {\mathcal O}(\delta k_\parallel)$, the rates are not necessarily divergent if the other factors in the numerator also vanish. This is actually the case in the massless limit $m\to 0$ when either the fermion or the anti-fermion is in the lowest Landau level (i.e., $n$ or $n'=0$), as shown in Fig.~\ref{fig:9}. Indeed, the common numerator factors $\epsilon_n \epsilon'_{n'} + p_z p'_z \pm m^2$ become ${\mathcal O}(\delta k_\parallel)$ because of the linear dispersion relation of the lowest Landau level in the massless limit $\epsilon_0 = |p_{0,n'}^\pm| \to {\mathcal O}(\delta k_\parallel)$ and $\epsilon'_0 = |p_{n,0}^{\prime\pm}| \to {\mathcal O}(\delta k_\parallel)$ at the thresholds.
\section{Summary and outlook} \label{summary}
\subsection{Summary}
We have studied the di-lepton production rate from a single photon under a constant strong magnetic field. In Sec.~\ref{sec-3}, we have analytically evaluated the squared matrix element for the photon--to--di-lepton conversion vertex by the use of the Ritus-basis formalism (reviewed in Sec.~\ref{sec-2}), in which the mode expansion is organized with the eigenstates of the Dirac operator. This means that the Dirac operator is diagonalized without any perturbative expansion with respect to the interaction with the strong magnetic field. Therefore, we have taken into account the interactions between the di-lepton and the strong magnetic field non-perturbatively, i.e., to all orders in the QED coupling constant (see Fig.~\ref{fig:diagram}). This treatment is necessary when the magnetic field is so strong that its strength compensates for the smallness of the coupling constant. On the other hand, we have included the coupling of the incident dynamical photon to the di-lepton at leading order in the coupling constant and regarded mutual interactions between the fermion and anti-fermion as higher-order corrections. These perturbative treatments are justified as long as the QED coupling constant is small, as in the usual perturbation theory.
The squared matrix element (\ref{eq:sq}) is given as a product of the lepton tensor $L^{\mu\nu}_{n,n'}$ and delta functions, accounting for the kinematics of the production process (i.e., the energy and momentum conservation). We have established the analytical expression of the lepton tensor (\ref{eq:L}) together with the scalar form factor (\ref{eq:FF-Landau}) to all orders in the Landau-level summation. Those analytical expressions enable us to write down an explicit formula for the conversion rates of a single photon into a di-lepton $\rate$ [e.g., Eq.~(\ref{eq:N-lambda}) for the polarization-projected ones]. Notably, we have confirmed in Appendix~\ref{app-b} that the obtained lepton tensor, and thus the conversion rates, possess rotational invariance with respect to the direction of the magnetic field and gauge invariance with respect to both the incident dynamical photon and the external magnetic field, although our calculation has been carried out in the Landau gauge, which superficially breaks those symmetries in intermediate steps of the calculation. These rigorous consistency checks support the validity of our results.
In Sec.~\ref{sec--4}, we have discussed quantitative aspects of the di-lepton production. First, we have discussed how the kinematics of the production process affects the di-lepton spectrum in the final state. We have shown that not only the transverse momentum of the produced fermions $|{\bm p}_\perp| \to \sqrt{2n|qB|}$ but also the longitudinal one $p_z \to p^\pm_{n,n'}$ is discretized, because of the energy conservation. As a result, the di-lepton spectrum has spike structures in the longitudinal-momentum distribution as well as in the (Landau-quantized) transverse-momentum distribution. We have also discussed the di-lepton spectrum in terms of the zenith angle $\phi$, which is defined as the emission angle of the fermions measured from the magnetic-field direction and thus inherits the spike structures.
Finally, we have investigated the inclusive conversion rates $D$ of a single photon, carrying a fixed polarization and momentum, into di-leptons. The conversion rates exhibit spikes located at the threshold photon energy specified by each pair of the Landau levels. In the case of a massive fermion ($m \neq 0$), the height of the spikes is infinite for any pair of Landau levels $(n,n')$; this is a typical threshold behavior in $(1+1)$ dimensions. On the other hand, in the massless case, the height is finite when either member of a pair is in the lowest Landau level ($n=0$ or $n'=0$). In particular, the di-lepton production is strictly prohibited for massless fermions when both members of a pair are in the lowest Landau level, which is an analogue of the so-called helicity suppression. We have confirmed all these fundamental behaviors with analytic expressions.
\subsection{Outlook}
Having established the fundamental formulas with clear physical interpretations, one can proceed to investigate phenomenological consequences in, e.g., relativistic heavy-ion collisions, neutron stars/magnetars, and high-intensity lasers. We emphasize that our di-lepton production rate predicts not only the photon attenuation rate captured by a complex-valued refractive index but also the entire di-lepton spectrum within a constant magnetic field, with complete resolution of the photon-polarization dependences. This differential information is necessary, for example, to consider a cascade (or avalanche) process in which the photon--to--di-lepton conversion and its reciprocal reaction occur successively together with other magnetic-field-induced processes such as the cyclotron radiation. Our results pave the way toward tracking energy and momentum distributions all the way through a cascade process induced by a strong magnetic field. More specifically, if one uses, for example, a kinetic equation to describe the cascade process, our results provide a collision kernel. Such a cascade process is not only interesting in its own right as characteristic dynamics in a strong magnetic field but also important for understanding actual physics observables, as we mention briefly below.
One of the most interesting applications of our results is relativistic heavy-ion collisions. Relativistic heavy-ion collisions induce the strongest magnetic field in the present universe, which is of the order of $|qB| = {\mathcal O}(1\;{\rm GeV}^2)$ just after a collision of the two ions. The magnetic-field strength depends on the collision geometry and becomes larger for smaller impact parameters $b\searrow$, larger collision energies $\sqrt{s} \nearrow$ and larger atomic numbers of the nuclei $Z \nearrow$ (see, e.g., a review article \cite{Hattori:2016emy} and references therein). In particular, ultra-peripheral events provide clean probes of electromagnetic processes without contamination from QCD processes and/or medium effects (see Refs.~\cite{Zha:2018tlq, Klein:2018fmp, Li:2019yzy, Li:2019sin, Xiao:2020ddm, Klein:2020jom} for recent theoretical proposals and Refs.~\cite{Adams:2004rz, Aaboud:2017bwk, Sirunyan:2018fhl, Aad:2019ock, Adam:2019mby} for significant progress in the recent measurements with RHIC and the LHC). Among others, one can study the longitudinal momentum and/or the zenith-angle $\phi$ distribution of di-leptons with respect to those parameters. Also, the fermion-mass dependence of the di-lepton production rate may be an interesting signature of the magnetic-field effects, in analogy with the helicity suppression that explains the dominance of the muon channel over the electron channel in charged-pion decay modes \cite{Donoghue:1992dd, Zyla:2020zbs}. In relativistic heavy-ion collisions, electrons can be regarded as massless particles compared to the typical energy scales of the problem, whereas the muon mass is comparable in magnitude to typical QCD scales such as the pion mass. Therefore, the ``helicity suppression'' of the electron pair production in the lowest Landau levels could give rise to significant modifications in the low-energy di-lepton spectra.
In analogy with the pion decay modes, muons are more abundantly produced than electrons between the lowest and second-lowest energy thresholds, which are set by the muon mass and the magnetic-field strength. We will report quantitative estimates of those effects in a forthcoming paper.
Another interesting application is neutron stars. Neutron stars, in particular the so-called magnetars, may have stable strong magnetic fields close to or beyond the critical field strength of QED $eB_{\rm cr} \equiv m_e^2$ in their magnetospheres with $m_e $ being the electron mass \cite{Harding:2006qn,Harding:2013ij, Enoto:2019vcg}. Our results imply a strong polarization dependence in photons emitted from the stars with strong magnetic fields $eB \gtrsim eB_{\rm cr}$, where only the parallel polarization mode ($\lambda = \parallel$) can produce di-leptons, while the strong polarization dependence may be smeared out in the stars with weaker magnetic fields $eB \lesssim eB_{\rm cr}$. This could serve as a complementary method to estimate strengths and/or distributions of magnetic fields, which have been commonly estimated via the so-called P-$ \dot {\rm P} $ diagram from observation \cite{Harding:2006qn,Harding:2013ij, Enoto:2019vcg}. Also, our results indicate that strong magnetic-field effects such as the threshold effects become more prominent near the low-lying Landau-level thresholds than the higher thresholds.
To obtain a quantitative understanding of the above expectations, however, it is important to take into account the aforementioned cascade process induced by a strong magnetic field, which may amplify the strong-magnetic-field effects. We leave this for future work.
Finally, we discuss implications for laser physics. Thanks to recent developments in lasers (e.g., the chirped pulse amplification technique \cite{DiPiazza:2011tq, Zhang:2020lxl, Ejlli:2020yhk}), the available electromagnetic field strength is rapidly rising and may reach the critical value $eB_{\rm cr}$ in the future. One of the possible experimental setups to test our predictions is to combine an intense laser with an electron accelerator, by which energetic photons are supplied via Compton backscattering (cf. Ref.~\cite{Bragin:2017yau}). At present, the available magnetic field strength is limited to $eB \lesssim 10^{-3} \times eB_{\rm cr}$ \cite{Yanovsky:08}. In this weak-field regime, the vacuum dichroism may be controlled solely by the quantum non-linearity parameter $\chi \equiv \sqrt{|k^\mu F_{\mu\nu}|^2}/m^3$ rather than the field strength $eB$ (or the Lorentz invariants ${\mathcal F}\equiv F_{\mu\nu} F^{\mu\nu}/2m^4 = ({\bm B}^2 - {\bm E}^2)/m^4$ and ${\mathcal G}\equiv F_{\mu\nu} \tilde{F}^{\mu\nu}/4m^4 = {\bm E}\cdot{\bm B}/m^4$) and may be described essentially within the locally constant crossed field approximation, in which $\chi \neq 0, {\mathcal F} = {\mathcal G}= 0$ \cite{Ritus:thesis}.
As the magnetic field strength increases,
the pair of the Lorentz invariants approaches a different class such that ${\mathcal F} \gtrsim 1, {\mathcal G} =0$.
In this class, such an approximate treatment breaks down and our calculation becomes more appropriate. One would then observe strong-magnetic-field effects such as the discrete spectra and squeezed zenith-angle distributions of di-leptons. In general, the cascade process would take place, giving rise to modifications of our lowest-order prediction. Such modifications become important for laser fields with a sufficiently large spatial extension, while they would be suppressed for fields with a small spatial extension comparable to or smaller than the typical mean free path for radiative processes under strong magnetic fields.
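The two field classes mentioned above can be distinguished numerically through the invariants; the sketch below evaluates ${\mathcal F}$ and ${\mathcal G}$ for illustrative field configurations of our own choosing, in units where $m=1$.

```python
import numpy as np

def invariants(E, B, m=1.0):
    """Lorentz invariants F = (B^2 - E^2)/m^4 and G = (E.B)/m^4."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    return (B @ B - E @ E) / m**4, (E @ B) / m**4

# Crossed field (plane wave): |E| = |B|, E perpendicular to B  =>  F = G = 0,
# even though the quantum non-linearity parameter chi can be nonzero
F_cr, G_cr = invariants([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])

# Pure magnetic field: F = B^2/m^4 > 0, G = 0 -- the regime treated in this work
F_B, G_B = invariants([0.0, 0.0, 0.0], [0.0, 0.0, 2.0])
print(F_cr, G_cr, F_B, G_B)   # 0.0 0.0 4.0 0.0
```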
\section*{Acknowledgments}
The authors thank Xu-Guang~Huang for his contributions in the early stage of this work and useful discussions. K.~H. thanks Kazunori~Itakura for discussions, which were useful to improve Sections~\ref{sec-2} and \ref{sec-3}, and also Yoshimasa~Hidaka for useful discussions. S.~Y. is grateful for hospitality at Yukawa Institute for Theoretical Physics, Kyoto University where a part of the initial ideas was discussed. K.~H. and H.~T. benefited from the international molecule-type workshops at Yukawa Institute for Theoretical Physics (YITP) ``Quantum kinetic theories in magnetic and vortical fields (YITP-T-19-06)'' and ``Potential Toolkit to Attack Nonperturbative Aspects of QFT -- Resurgence and related topics -- (YITP-T-20-03)'', where they had fruitful discussions. K.~H. is supported in part by JSPS KAKENHI under grant No.~JP20K03948. S.~Y. is supported by the research startup funding at South China Normal University, National Natural Science Foundation in China (NSFC) under grant No.~11950410495, and Guangdong Natural Science Foundation under No.~2020A1515010794.
\section{Introduction}
Comets are remnants from the Solar protoplanetary disc
\citep{Willacy2015}. Having formed beyond the ice line, orbiting the Sun
on average at distances beyond the asteroid belt, and having experienced
less processing, they are thought to represent the most pristine matter of
the Solar System.
The first detection of phosphorus in a comet came over 30 years ago
from the report by \cite{Kissel1987} with a
line at m/z=31 in the PUMA mass spectrometer. This was from the summed spectra of cometary dust collected during the flyby of Comet 1P/Halley by
the Vega 1 mission in 1986. The interpretation was that no molecule could have
survived the impact during dust capture at a velocity of $>$ 70 kms$^{-1}$, and thus this line could only be attributed to atomic
phosphorus (Kissel, private communication). It is unknown in what kind of parent mineral this
phosphorus was contained.
The second detection of phosphorus has been reported in dust particles collected by the NASA Stardust spacecraft during the flyby of comet 81P/Wild 2 in 2004, and returned to Earth in 2006 \citep{Flynn2006,Joswiak2012}. It was further
analyzed by \cite{Rotundi2014}, where phosphorus was detected in a
single cometary particle and associated with the presence of calcium. They concluded that
phosphorus was most likely contained within an apatite particle
\citep{Rotundi2014}. However, studies of the nanoscale mineralogy of Wild 2 also suggested phosphide minerals as phosphorus carriers \citep{Joswiak2012}.
Another detection of phosphorus and fluorine came from the ROSINA DFMS
instrument on board Rosetta \citep{Altwegg2016,Dhooghe2017,Rivilla2020}. In these cases, they detected
elemental phosphorus, PO and CF in the gas phase of 67P/Churyumov-Gerasimenko (67P/C-G).
We report here the detection of phosphorus and fluorine in mass spectra measured from solid
dust particles of 67P/C-G.
Previous studies have already reported detection of C, H, N and O in the dust particles of comet 67P/C-G \citep{Fray2016,Fray2017,Bardyn2017,Paquette2018,Isnard2019}. It is also estimated that the organic component of the dust particles is made of high molecular weight material \citep{Fray2016}, that represents about 45\% in mass of the dust particles \citep{Bardyn2017}.
Sulfur has also been clearly detected, showing a strong signal in the mass spectra
\citep{Paquette2017,Bardyn2017}.
In the process of forming life, water-soluble reactive phosphorus
compounds were required to convert nucleotide precursors by
phosphorylation to active nucleotides.
Reduced phosphorus minerals, such as schreibersite (Fe,Ni)$_3$P
\citep{Pasek2017b}, could have been available on early Earth both from
meteoritic, and very hot volcanic sources
\citep{Pasek2017a,Britvin2015,Turner2018}. However, unlike the
other elements required for life (CHNOS), gaseous forms of phosphorus
were unlikely to have been present as major species in the early
Earth atmosphere, and thus phosphorus was required to be in solid and soluble
form \citep{Pasek2017b,Pasek2019}.
\section{Methods}
The COmetary Secondary Ion Mass Analyser (COSIMA), designed in the late 1990s and launched on-board Rosetta in 2004, is a Time-of-Flight
Secondary Ion Mass Spectrometer (TOF-SIMS) with a mass resolution of about
$m/{\Delta m}$=1400 at m/z=100 on board the Rosetta spacecraft, which
accompanied the comet 67P/Churyumov-Gerasimenko from August 2014
to September 2016 \citep{Kissel2007,Glassmeier2007,Hilchenbach2016}.
During this time period COSIMA collected particles that originate from 67P/C-G
\citep{Langevin2016,Merouane2017}, at low impact
velocity ($< 10$ kms$^{-1}$) \citep{Rotundi2015} on silver and gold substrates. The grand total number of dust
particle fragments collected by COSIMA is more than 35,000 from an
estimated 1200-1600 original particles \citep{Merouane2017}, which
were fractured upon impact or in the subsequent collisions inside the
instrument.
A beam of primary $^{115}$In$^+$ ions accelerated to 8 keV impacts the sample and releases secondary ions from the top surface of the particle or substrate \citep{Kissel2007}. The m/z of these secondary ions is measured by the time-of-flight spectrometer. The temperature inside the COSIMA instrument is about 283 K \citep{Bardyn2017}. We suppose that the interior of the instrument is in equilibrium with the outside gas pressure of about 5$\times$10$^{-11}$ mbar, as measured by the COPS instrument \citep{Hoang2017}; the pressure varies with heliocentric distance, latitude and location, but this value is correct to an order of magnitude. This is practically a vacuum, so the volatiles on the surface of the particles are lost between collection and measurement. The particles were stored between a few days and up to a year and a half before measurement, giving ample time for volatiles to escape. The instrument has two modes,
positive and negative, sensitive to positive and negative ions,
respectively.
Mass spectra were obtained from particles collected on 21 substrates, but due to limited time and resources, only about 250 particles have been analysed by TOF-SIMS. Most of the particles were given a name, to ease discussion about specific particles, and very small particles were numbered.
The instrument has a known contaminant, polydimethylsiloxane (PDMS), with significant peaks at m/z=73.05 in positive mode and m/z=74.99 in negative mode. These correspond to the ion fragments Si(CH$_3$)$_3^+$ and CH$_3$SiO$_2^-$, respectively.
For the fitting of the spectra we have used a Levenberg–Marquardt
fit, fitting up to four peaks at each integer mass \citep{Stenzel2017}. The method
does not use a fixed peak list, but searches for the
combination that best fits the overall peak shape.
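The fitting strategy can be sketched as follows. This is our own toy reconstruction with synthetic data and Gaussian peak shapes, not the actual COSIMA fitting code, which uses its own instrument-specific peak model.

```python
import numpy as np
from scipy.optimize import curve_fit   # Levenberg-Marquardt via method="lm"

def peaks(x, *p):
    """Sum of Gaussian peaks; parameters grouped as (amplitude, centre, width)."""
    y = np.zeros_like(x)
    for A, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        y += A * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return y

# Synthetic doublet near integer mass 31, mimicking overlapping P+ and CF+ lines
x = np.linspace(30.90, 31.10, 400)
y = peaks(x, 100.0, 30.9738, 0.008, 60.0, 30.9984, 0.008)
y += np.random.default_rng(0).normal(0.0, 0.5, x.size)

p0 = (80.0, 30.97, 0.01, 80.0, 31.00, 0.01)    # starting guess, one triple per peak
popt, pcov = curve_fit(peaks, x, y, p0=p0, method="lm")
```

The length of `p0` fixes how many peaks are fitted at the integer mass, and the optimizer adjusts all peak parameters simultaneously rather than matching against a fixed peak list.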
\section{The detection of phosphorus and fluorine in particles collected by COSIMA}
For the purpose of this study, summed spectra from 24 selected particles were analysed
and compared to nearby background sets. A background
set is a reference location on a substrate close to a particle, where there is
no visible cometary matter. Summing spectra allows elements with lower yield to be detected. For example, P$^+$ is particularly challenging for TOF-SIMS, as it yields signals over an order of magnitude lower than Fe$^+$ \citep{Stephan2001}.
The main focus in this study was to find various ionic species of
phosphorus present in TOF-SIMS spectra. We look for PO$_X^-$ in
negative spectra, ionic P
in negative and positive spectra and any other phosphorus associated compounds.
PH$_3^+$ (phosphine) and PH$_4^+$ (protonated phosphine) were absent
from any analysed individual and summed positive spectra, which was
expected due to their volatile nature. This is the
same result as obtained by the ROSINA instrument in the gas phase, where PH$_3^+$ was not reliably detected \citep{Altwegg2016}. They
also did not find any indication of a parent mineral for phosphorus, but attributed the signal to the PO molecule \citep{Rivilla2020}.
We noticed that phosphorus (P), mono-isotopic at m/z=30.97, is
detected when we sum a large number of spectra. However, the signal is
too weak to be seen spectrum by spectrum. To aid detection we summed
the positive spectra acquired from a given particle. Out of the 24 tested
particle sets, we found a significant contribution of phosphorus in
comparison to a local background in four particles: Uli, Vihtori,
Günter and Fred (see supplementary materials for more details). Fred
and Uli (shown in Figure \ref{PositiveP}), as well as Vihtori and Günter (see supplementary material), show clear cometary signals for CF$^+$ (m/z=30.9984) as well as P$^+$. This marks the first detection of both CF$^+$ and P$^+$ in solid cometary dust. The detection of CF$^+$ originating from the dust particles complements the previous detection of F$^+$ presented in \cite{Dhooghe2017}. We searched for the signal of PO$_2^-$ and PO$_3^-$ in the cometary particles, but as the background spectra present a quite high signal of PO$_2^-$ and PO$_3^-$, no clear contribution of the cometary particles to these ions could be found.
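Separating P$^+$ from CF$^+$ at the same integer mass is close to the limit of the instrument's capability, as a quick check shows (monoisotopic masses from standard atomic-mass tables; the electron mass is neglected):

```python
m_P  = 30.973762              # 31P, in u
m_CF = 12.000000 + 18.998403  # 12C + 19F, in u

delta = m_CF - m_P            # ~0.0246 u between the doublet components
required = m_P / delta        # resolving power m/delta_m needed to split the pair
print(round(delta, 4), round(required))  # 0.0246 1257
```

Since the required resolving power of about 1260 is below COSIMA's $m/\Delta m \approx 1400$, the doublet is (marginally) separable, consistent with the need for summed spectra and careful peak fitting.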
\begin{figure*}
\includegraphics[scale=0.75]{Fig_1-crop.pdf}
\caption{Summed positive spectra (black line) for particle Fred (top left, 17 spectra) and Uli (bottom left, 10 spectra), and their comparative background sets (black line in the middle column, 5 and 2 spectra, respectively). The plots show the individual fits (red, cyan and purple) of multiple ions and the overall fit (orange line). The positions of the m/z of P$^+$, CF$^+$, CH$_3$O$^+$ and CNH$_5^+$ are shown in all panels to guide the eye. The right column shows the subtraction between the spectra on the particles and the normalized respective background spectra, where red shows the sum of the selected spectra taken on the particle and black shows the sum of the selected spectra taken on the target (next to the particle and at the same date), which has been normalised to the intensity of the PDMS fragment at m/z = 73.05. The spectra taken on the particles present a shift toward the left compared to the spectra acquired on the target (contamination). Thus at m/z = 31, the cometary contribution is located on the left side of the peak (the cometary contribution should have a negative mass defect at m/z = 31), which is an argument in favour of a contribution of the cometary particles to the signal attributed to P$^+$ and CF$^+$. The red and cyan individual fits (left column) are attributed to P$^+$ and CF$^+$, respectively. The errors on the position of these fits are less than one TOF channel, which is of the same order as the difference between the positions of these fits and the exact masses of P$^+$ and CF$^+$.}\label{PositiveP}
\end{figure*}
\section{Comparison to reference samples}
Using our reference COSIMA instrument at the Max-Planck-Institut für
Sonnensystemforschung (MPS), Göttingen, Germany, we measured and
analysed two reference samples, fluorapatite and schreibersite. Both
of these belong to mineral families known to be found in meteorites
\citep{Hazen2013}. The first reference sample, fluorapatite,
Ca$_5$(PO$_4$)$_3$F, contains oxidized phosphorus. Apatite was chosen for its
terrestrial availability and its presence in meteorites. It was
commercially purchased and sourced from Cerro de Mercado,
Durango, Mexico. The apatite shows a clear signature of Ca$^+$,
P$^\pm$, PO$_2^-$ and PO$_3^-$. We do not see calcium in significant
amounts in these cometary dust samples, so the phosphorus likely cannot
be explained by apatite-like minerals. See Table \ref{P-Comparison}
for a comparison of the calcium and iron yields of the cometary
particles with those of the reference samples. For all the particles on which P has been detected, the Ca$^+$/P$^+$ ionic ratio is much smaller than on apatite (Table \ref{P-Comparison}). Thus, we can rule out apatite as the source of the phosphorus.
\begin{table*}
\caption{Ionic ratios of $^{40}$Ca$^+$/P$^+$ and $^{56}$Fe$^+$/P$^+$. In all cases, both ratios are much lower on the cometary particles than on the reference samples of apatite and schreibersite, which allows us to rule out, at a significant level, the presence of apatite, and possibly of schreibersite, in the cometary particles. Thus, the main carrier of phosphorus remains unknown. The errors are calculated from the Poisson error for the fitted lines, and are equivalent to 1-$\sigma$ errors.}\label{P-Comparison}
\begin{center}
\begin{tabular}[htbp]{lcc}
\hline
Substrate \& particle name & $^{40}$Ca$^+$/P$^+$ & $^{56}$Fe$^+$/P$^+$\\
\hline
1CF/Uli & 8.57 $\pm$ 1.93 & 38.17 $\pm$ 7.46 \\
2CF/Vihtori & 1.20 $\pm$ 0.15 & 7.75 $\pm$ 0.70 \\
2CF/Fred & 2.95 $\pm$ 0.53 & 20.24 $\pm$ 2.83 \\
1D2/Günter & 0.74 $\pm$ 0.09 & 6.17 $\pm$ 0.49 \\
\hline
Apatite & 1332.5 $\pm$ 218.1 & N/A \\
Schreibersite & N/A & 428.4 $\pm$ 15.0 \\
\hline
\end{tabular}
\end{center}
\end{table*}
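The quoted uncertainties can be reproduced by standard first-order error propagation. As a hedged sketch (assuming the fitted peak areas $N_1$ and $N_2$ are independent and Poisson-distributed, so that $\sigma_{N_i}=\sqrt{N_i}$; the values in Table \ref{P-Comparison} come from the actual line fits), the $1$-$\sigma$ error of an ionic ratio $R=N_1/N_2$ is:

```latex
% Sketch only: first-order propagation for a ratio of two
% independent Poisson-distributed fitted counts N_1 and N_2.
\begin{equation*}
R=\frac{N_1}{N_2},\qquad
\sigma_R
  = R\,\sqrt{\left(\frac{\sigma_{N_1}}{N_1}\right)^{2}
           + \left(\frac{\sigma_{N_2}}{N_2}\right)^{2}}
  = R\,\sqrt{\frac{1}{N_1}+\frac{1}{N_2}}.
\end{equation*}
```

This approximation is only reliable for sufficiently large counts.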
The second reference sample was from the Fe-Ni-P phosphide group: schreibersite, (Fe,Ni)$_3$P, with reduced phosphorus. The schreibersite sample, of unknown (meteoritic, terrestrial or otherwise) origin, was obtained from the Mineral Sciences at the Smithsonian Museum of Natural History. It shows a clear signature of P and Fe. Our cometary samples show both elements, but with a much lower Fe to P ratio (Table \ref{P-Comparison}), so we cannot confirm that the source of the phosphorus is schreibersite. This would be expected if the phosphorus-containing area is smaller than the spot size of the COSIMA primary ion beam (35 $\times$ 50 $\mu$m$^2$).
Our conclusion is that the measured phosphorus here is not from
apatite. The ion ratios for schreibersite also do not fit our findings. The phosphorus must come from another source, such as elemental phosphorus or some other non-calcium-bearing mineral, although, as mentioned above, it is probably not a phosphate, because we could not find a clear cometary contribution of PO$_2^-$ and PO$_3^-$. This also means that the fluorine does not originate from fluorapatite.
\section{Discussion}
One of the challenges in understanding origin-of-life processes is the lack of soluble
phosphorus-containing molecules in terrestrial environments
\citep{Yamagata1991}.
It has been experimentally shown that soluble P, HCN and H$_2$S can
serve as a suitable feedstock for the prebiotic synthesis of
nucleotides, amino acids and phosphoglycerine backbones
\citep{Patel2015,Stairs2017}.
These reactions could be driven most efficiently by highly reduced
phosphorus, e.g. different mineral phosphides, such as those
belonging to the iron-nickel-phosphide group, known to occur mostly in
meteoritic materials
\citep{Gull2015,Pasek2005,Bryant2006,Herschy2018} or possibly elemental phosphorus. The phosphite anion
(PO$_3^{3-}$ or, given the conditions, HPO$_3^{2-}$) is a soluble and
highly reactive molecule, and is readily formed e.g. by the
hydrolysis of schreibersite \citep{Gull2015}.
So far, the various organic materials and feedstocks relevant to the
origin of life have been suggested to derive from either
meteoritic or geochemical sources
(\citealt{Kurosawa2013,Bada2013,Patel2015,Britvin2015}, and references
therein). However, the detection of all the life promoting compounds,
i.e.\ various CH compounds \citep{Fray2016,Isnard2019}, N \citep{Fray2017}, O
\citep{Paquette2018}, S \citep{Paquette2017}, and here, the solid forms
of P, in comet 67P/C-G, means that we have detected, in solid form, many ingredients regarded as important in current theories about the origin of life. It is conceivable that
early cometary impacts onto planetary surfaces were less energetic
than the impacts of heavy stony meteorites \citep{Morbidelli2015}, thus preserving
the prebiotic molecules in a more intact condition.
\cite{Clark1988,Clark2018} suggested primeval procreative comet ponds
as the possible environment for the origin of life, while \cite{Chatterjee2016} suggested that hydrothermal impact craters
in icy environments could create another suitable cradle for life. In both cases phosphorus could be delivered by comets.
The results here indicate that the elements for life can originate from solid cometary matter, which is rich in
volatiles and could seed the required elements. More importantly, however, the compounds must be reactive and soluble, no matter how they are delivered. The solubility of the detected cometary phosphorus from 67P/C-G is not clear, but we can conclude that its carrier is not apatite, which is a common mineral source of phosphorus in meteorites. Additionally, other phosphate minerals are unlikely, because we could not find a clear cometary contribution of PO$_2^-$ and PO$_3^-$.
The presence of all the CHNOPS elements gives a strong premise for a future cometary sample-return mission. Such a mission could confirm the presence of all these compounds, their possible mineral sources and the possible solubility of the matter. It would also allow for a comprehensive analysis of the relative amounts of the CHNOPS elements.
\section*{Acknowledgements}
COSIMA was built by a consortium led by the
Max-Planck-Institut für Extraterrestrische Physik, Garching, Germany, in collaboration with Laboratoire de Physique et Chimie
de l’Environnement, Orléans, France, Institut d’Astrophysique
Spatiale, CNRS/INSU and Université Paris Sud, Orsay, France,
the Finnish Meteorological Institute, Helsinki, Finland, Universität Wuppertal, Wuppertal, Germany, von Hoerner und Sulger
GmbH, Schwetzingen, Germany, Universität der Bundeswehr, Neubiberg, Germany, Institut für Physik, Forschungszentrum Seibersdorf, Seibersdorf, Austria, and Institut für Weltraumforschung,
Österreichische Akademie der Wissenschaften, Graz, Austria, and
is led by the Max-Planck-Institut für Sonnensystemforschung,
Göttingen, Germany. The support of the national funding agencies
of Germany (DLR), France (CNES), Austria and Finland and the
ESA Technical Directorate is gratefully acknowledged. We thank
the Rosetta Science Ground Segment at ESAC, the Rosetta Mission
Operations Centre at ESOC and the Rosetta Project at ESTEC for
their outstanding work enabling the science return of the Rosetta
Mission.\\
H. J. Lehto, E. Gardner and K. Lehto acknowledge the support of the
Academy of Finland (grant number 277375).\\
We acknowledge the Mineral Sciences at Smithsonian Museum of Natural
History for providing the schreibersite.\\
\section*{Data Availability}
The data underlying this article are available in the Planetary
Science Archive of ESA https://www.cosmos.esa.int/web/psa/psa-introduction, and in the
Planetary Data System archive of NASA https://pds.nasa.gov/.
\bibliographystyle{mnras}
\section{Introduction}
Relative hyperbolicity was first introduced by Gromov \cite{G87} to study various algebraic and geometric examples, such as fundamental groups of finite volume hyperbolic manifolds, small cancellation quotients of free products, etc. There are several equivalent definitions of relatively hyperbolic groups proposed by different authors, among which we will use the one due to C. Dru\c{t}u, D. Osin and M. Sapir. It is based on the notion of a tree-graded structure on a metric space introduced in \cite{DS05}.
\begin{Definition}[tree-graded spaces]
Let ${X}$ be a complete geodesic metric space and let $\mathcal{P}$ be a collection of closed geodesic subsets (called pieces). We say ${X}$ is \textit{tree-graded with respect to} $\mathcal{P}$ if the following two properties are satisfied:
\begin{itemize}
\item[$(T_1)$] Every two different pieces have at most one common point.
\item[$(T_2)$] Every simple geodesic triangle in ${X}$ is contained in one piece.
\end{itemize}
\end{Definition}
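As a degenerate but instructive illustration (our addition, not part of \cite{DS05}): any real tree $T$ is tree-graded with respect to the collection of its one-point subsets, since singletons are closed geodesic subsets and both axioms hold trivially:

```latex
% Degenerate example: a real tree $T$ with pieces the singletons.
\[
  \mathcal{P}=\bigl\{\{x\} : x\in T\bigr\}:\qquad
  (T_1)\ \{x\}\cap\{y\}=\emptyset \ \text{for } x\neq y,\qquad
  (T_2)\ \text{every simple geodesic triangle in } T \text{ is a point.}
\]
```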
A metric space $X$ is called \textit{asymptotically tree-graded} with respect to a collection of subsets $\mathcal{A}$ if every asymptotic cone of $X$ is tree-graded with respect to the collection of limit sets of sequences in $\mathcal{A}$. See Section \ref{subsection:AsymptoticCones} for precise definitions.
In \cite{DS05}, a characterization of relative hyperbolicity for finitely generated groups is presented, which will be taken as the definition of relatively hyperbolic groups in the present paper. Namely, a group $G$ generated by $S$ is hyperbolic relative to some subgroups $H_1,\cdots,H_m$ if and only if the Cayley graph $\mathscr{G}(G,S)$ is asymptotically tree-graded with respect to the collection of all left cosets of $H_1,\cdots,H_m$. Subgroups $H_1,\cdots,H_m$ are usually called the \textit{peripheral subgroups}.
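A standard example, recalled here for illustration (a well-known instance rather than a statement from \cite{DS05}): a free product of finitely generated groups is hyperbolic relative to its free factors, and its Cayley graph is asymptotically tree-graded with respect to the left cosets of the factors:

```latex
% Standard example: free products are relatively hyperbolic.
\[
  G = A \ast B
  \quad\Longrightarrow\quad
  G \ \text{is hyperbolic relative to}\ \{A,B\},
\]
\[
  \mathscr{G}(G,S) \ \text{is asymptotically tree-graded w.r.t.}\
  \{\, gA,\ gB : g\in G \,\}.
\]
```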
Throughout the paper, all groups under consideration are assumed to be finitely generated.
The above characterization is further generalized in Dru\c{t}u \cite{D09} by allowing peripheral subsets to be considered instead of peripheral subgroups, which gives a truly metric version of relative hyperbolicity.
\begin{Theorem}\cite[Proposition 5.1 and its proof]{D09} \label{thm:TreeGradeImplyRelHyper}
Let $G$ be a group generated by a finite set $S$. Assume that $\mathscr{G}(G,S)$ is asymptotically tree-graded with respect to a collection $\mathcal{B}$ of subsets and that $G$ permutes the subsets in $\mathcal{B}$. Then the following hold:
\begin{itemize}
\item there exist $B_1,\cdots,B_k\in\mathcal{B}$, such that $\mathcal{B}=\{gB_i:g\in G,1\leq i\leq k\}$;
\item $G$ is hyperbolic relative to some subgroups $H_1,\cdots,H_m$, such that every $H_i$ is the stabilizer subgroup of some $B_{j_i}$ for $1\leq j_i\leq k$ and the Hausdorff distance between $H_i$ and $B_{j_i}$ is bounded.
\end{itemize}
\end{Theorem}
If the assumption that ``$G$ permutes the subsets in $\mathcal{B}$'' is removed, the conclusion that $G$ is relatively hyperbolic still holds; this is a deeper result of Dru\c{t}u \cite[Theorem 1.6]{D09}.
The goal of the present paper is to characterize relatively hyperbolic groups in the class of graphical small cancellation groups introduced by Gromov \cite{G03}. This is a generalization of classical small cancellation groups, which serves as a powerful tool for constructing groups with prescribed subgraphs embedded in their Cayley graphs.
Given a directed graph $\Gamma$ labelled by a finite set $S$ of letters, let $G(\Gamma)$ be the group presented by $\langle S| ~\text{labels of simple cycles in}~ \Gamma\rangle$. A \textit{piece} is a labelled path that embeds into $\Gamma$ in two different ways up to label-preserving automorphisms of $\Gamma$. The $Gr'(\frac{1}{6})$ condition imposed on the graph is a generalized graphical version of the $C'(\frac{1}{6})$ condition: it requires that the length of each piece be less than $\frac{1}{6}$ of the length of any simple cycle containing it. See Section \ref{subsection:GraphicalSmallCancellation} for precise definitions.
A fundamental fact of graphical small cancellation theory is that under the graphical $Gr'(\frac{1}{6})$ condition, every component of $\Gamma$ admits a convex embedding into the Cayley graph of $G(\Gamma)$ (see Lemma \ref{lemma:EmbedConvex}). The theory provides a rich source of finitely generated groups with exotic or extreme properties: Gromov monster groups with an infinite expander family of graphs in their Cayley graphs \cite{G03,AD08}, counterexamples to the Baum-Connes conjecture with coefficients \cite{HLS02}, etc.
Recently, Gruber and Sisto \cite{GS18} showed that non-elementary graphical $Gr(7)$-labelled, in particular $Gr'(\frac{1}{6})$-labelled, groups are acylindrically hyperbolic (a weaker condition than relative hyperbolicity). It is indicated in \cite{GS18} that the Cayley graph of such a group presentation is weakly hyperbolic relative to the embedded components of the defining graph. This suggests an analogy with relatively hyperbolic spaces, with the embedded components of the defining graph playing the role of the peripheral regions. The analogy is also apparent in the work of Arzhantseva-Cashen-Gruber-Hume \cite{ACGH19}. Informally, typical results there say that geodesics which have bounded penetration into embedded components are strongly contracting, hence behave like hyperbolic geodesics. However, the first example of a strongly contracting element that is unstable under a change of generating set is constructed there as well. This contrasts with the fact that in relatively hyperbolic groups every hyperbolic element is strongly contracting for any generating set. Therefore, it is an interesting question to understand which $Gr'(\frac{1}{6})$-labelled groups are relatively hyperbolic.
The main result of this paper is the following characterization of the asymptotically tree-graded structure of $Gr'(\frac{1}{6})$-labelled groups with respect to the embedded components.
\begin{Theorem}\label{thm:RelativeHyperbolicity}
Let $\Gamma$ be a $Gr'(\frac{1}{6})$-labelled graph labelled by $S$. Then the Cayley graph $\mathscr{G}(G(\Gamma),S)$ is asymptotically tree-graded with respect to the collection $\mathcal{A}$ of all embedded components of $\Gamma$ if and only if all pieces of $\Gamma$ have uniformly bounded length.
\end{Theorem}
For a $Gr'(\frac{1}{6})$-labelled graph $\Gamma$, $G(\Gamma)$ permutes embedded components in $\mathcal{A}$. If $A_i\in \mathcal{A}$ is an embedded component in $\mathscr{G}(G(\Gamma),S)$ corresponding to a component $\Gamma_i$ of $\Gamma$, then the stabilizer group $\mathrm{Stab}(A_i)$ is isomorphic to the label-preserving automorphism group of component $\Gamma_i$ by Lemma \ref{lemma:Aut=Stab}. Therefore by Theorem \ref{thm:TreeGradeImplyRelHyper}, we obtain the following corollary.
\begin{Corollary}\label{cor:RelativeHyperbolicity}
Let $\Gamma$ be a $Gr'(\frac{1}{6})$-labelled graph labelled by $S$. If the pieces of $\Gamma$ have uniformly bounded length, then $\Gamma$ has only finitely many components up to label-preserving automorphism and $G(\Gamma)$ is hyperbolic relative to some subgroups $H_1,\cdots,H_m$, where $H_i$ are the label-preserving automorphism groups of these components.
\end{Corollary}
(Relative) hyperbolicity has been considered by other authors in the literature. For instance, A. Pankrat'ev \cite{P99} and M. Steenbock \cite[Theorem 1]{S15} studied the hyperbolicity of (graphical) small cancellation quotients of free products of hyperbolic groups. In his thesis, D. Gruber \cite[Theorem 2.9]{dG15Thesis} generalized their results to establish the relative hyperbolicity of free products of groups subject to finite labelled graphs satisfying the graphical small cancellation condition.
\vspace{.5em}
\noindent\textbf{Outline of the proof.}
Assume that the pieces of $\Gamma$ are uniformly bounded.
Let $\mathcal{A}$ be the collection of all embedded components of $\Gamma$. We want to show that $\mathscr{G}(G(\Gamma),S)$ is asymptotically tree-graded with respect to $\mathcal{A}$. Combining \cite[Lemma 4.5]{DS05} and \cite[Corollary 4.19]{D09}, it is sufficient to verify that $\mathcal{A}$ satisfies the following list of geometric properties:
\begin{Lemma}\cite{DS05,D09}\label{lemma:tree-graded}
Let $(Y,d_Y)$ be a geodesic metric space and let $\mathcal{B}$ be a collection of subsets of $Y$. The metric space $Y$ is
asymptotically tree-graded with respect to $\mathcal{B}$ if and only if the following properties are satisfied:
\begin{itemize}
\item[($\Lambda_1$)] finite radius tubular neighborhoods of distinct elements in $\mathcal{B}$ are either disjoint or intersect in sets of uniformly bounded diameter.
\item[($\Lambda_2$)] a geodesic with endpoints at distance at most one third of its length from a set $B$ in $\mathcal{B}$ intersects a uniformly bounded radius tubular neighborhood of $B$.
\item[($\Omega_3$)] for any asymptotic cone $\mathrm{Con}_{\omega}(X; (e_n), (l_n))$, any simple triangle whose edges are limit geodesics is contained in a subset from $\mathcal{B}_\omega$.
\end{itemize}
\end{Lemma}
Note that the set $\mathcal{B}_\omega$ is the $\omega$-limit of $\mathcal{B}$, and a limit geodesic is one that can be represented as an $\omega$-limit of a sequence of geodesics.
First of all, we follow the approach of \cite{ACGH19} by using Strebel's classification of combinatorial geodesic triangles to show that $\mathcal{A}$ is a strongly contracting system. Thus, the properties $(\Lambda_1)$ and $(\Lambda_2)$ of $\mathcal{A}$ follow from the strongly contracting property.
To verify property $(\Omega_3)$, we take a detour by first showing property $(\Omega_2)$: for any asymptotic cone $\mathrm{Con}_{\omega}(X; (e_n), (l_n))$, any simple bigon whose edges are limit geodesics is contained in a subset from $\mathcal{B}_\omega$. The reason why we do not verify property $(\Omega_3)$ directly is that little is known about the classification of combinatorial hexagons. Note that any simple bigon whose edges are limit geodesics can be represented as an $\omega$-limit of a sequence of `fat' simple geodesic quadrangles, following the strategy in the proof of \cite[Proposition 4.14]{D09}. However, the classification of `special' combinatorial quadrangles provided by Arzhantseva-Cashen-Gruber-Hume \cite{ACGH19} shows that there is no `fat' special combinatorial quadrangle. This is of great help in establishing property $(\Omega_2)$. Further analysis is then carried out to obtain property $(\Omega_3)$.
The ``only if'' part of the theorem follows from property ($\Lambda_1$) and the fact that pieces of $\Gamma$ can always be realized as paths contained in the intersection of two different embedded components.
\vspace{.5em}
\noindent\textbf{The structure of this paper.}
Section \ref{section:Preliminaries} sets up the notation used in the present paper and contains preliminaries on graphical small cancellation groups, asymptotic cones, and tree-graded metric spaces. The proof of Theorem \ref{thm:RelativeHyperbolicity} takes up the whole of Section \ref{section:proof}, where we verify in order the desired properties listed in Lemma \ref{lemma:tree-graded}. Some examples are also presented at the end to illustrate the applications of the main theorem.
\vspace{.5em}
\noindent\textbf{Acknowledgement.}
The author wishes to thank Prof. Wen-yuan Yang for suggesting the problem, many useful comments and helpful conversations. The author also would like to thank Prof. G. N. Arzhantseva for many useful suggestions on the writing, and Dr. A. Genevois for bringing the references \cite{S15} and \cite{dG15Thesis} to the author's attention and for useful suggestions.
\section{Preliminaries} \label{section:Preliminaries}
Let $A$ be a subset in a metric space $(X,d)$. We denote by $N_\delta(A)$ (resp. $\overline{N}_\delta(A)$) the set $\{x\in X | d(x,A)<\delta\}$
(resp. $\{x\in X | d(x,A)\leq\delta\}$), which we call the (resp. closed) $\delta$-tubular neighborhood of $A$.
We denote by $\mathrm{diam}(A)$ the diameter of $A$. Given a point $x\in X$ and a subset $Y\subseteq X$, let $\pi_Y(x)$ be the set of points $y$ in $Y$ such that $d(x,y)=d(x,Y)$. The \textit{projection} of $A$ to $Y$ is the set $\pi_Y(A):=\bigcup_{a\in A}\pi_Y(a)$.
Every path $\gamma$ in $X$ under consideration is assumed to be rectifiable with arc-length parametrization $[0, |\gamma|]\to \gamma$, where $|\gamma|$ denotes the length of $\gamma$. Denote by $\gamma_-,\gamma_+$ the initial and terminal points of $\gamma$ respectively; if a path carries a subscript, as in $\gamma_i$, we write $\gamma_{i-},\gamma_{i+}$ for simplicity. We denote by $\overline{\gamma}$ the inverse of $\gamma$, i.e. $\overline{\gamma}$ has parametrization $\overline{\gamma}(t)=\gamma(|\gamma|-t)$. For any two parameters $a<b\in [0, |\gamma|]$, we denote by $[\gamma(a),\gamma(b)]_\gamma:=\gamma([a,b])$ the closed subpath of $\gamma$ between $\gamma(a)$ and $\gamma(b)$. The symbols $(\gamma(a),\gamma(b))_\gamma:=\gamma((a,b))$, $[\gamma(a),\gamma(b))_\gamma:=\gamma([a,b))$ and $(\gamma(a),\gamma(b])_\gamma:=\gamma((a,b])$
are defined analogously. For any $x,y\in X$, we denote by $[x,y]$ a choice of geodesic in $X$ from $x$ to $y$.
\subsection{Graphical small cancellation} \label{subsection:GraphicalSmallCancellation}
As a generalization of classical small cancellation theory, the main benefit of graphical small cancellation theory is that it allows one to embed a family of graphs into the Cayley graph of a group.
It was introduced by Gromov \cite{G03}, and then was modified by various authors \cite{O06,AD08,dG15}.
We will state the graphical small cancellation conditions following \cite{dG15}; before that, we describe the group defined by a labelled graph, a notion which first appeared in \cite{RS87}.
Let $\Gamma$ be a possibly infinite, and possibly non-connected, directed graph. A \textit{labelling} of $\Gamma$ by a set $S$ is a map assigning to each edge an element of $S$. The label of an edge path is the word in $M(S)$ (the free monoid on $S\sqcup S^{-1}$) read along the path. If an edge is traversed against its orientation, then we read the formal inverse letter of its label; otherwise we just read the labelled letter.
The labelling is \textit{reduced} if at every vertex $v$, any two oriented edges both originating from $v$ or both terminating at $v$ have distinct labels.
Let $R$ be the set of words in $F(S)$ (the free group on $S$) read on simple cycles in $\Gamma$. Reducibility implies that the words in $R$ are cyclically reduced. The \textit{group $G(\Gamma)$ defined by $\Gamma$} is given by the presentation $\langle S|R\rangle$.
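As a minimal illustration (a toy example, not taken from the references above): if $\Gamma$ is a single directed cycle of length $n$ whose edges are all labelled by $a$ and oriented coherently, then the labelling is reduced, the only simple cycles read $a^{\pm n}$ up to cyclic shifts, and hence:

```latex
% Toy example: a coherently oriented n-cycle, all edges labelled a.
\[
  G(\Gamma)=\langle\, a \mid a^{n} \,\rangle \;\cong\; \mathbb{Z}/n\mathbb{Z}.
\]
```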
\begin{Definition}(Piece \cite[Definition 1.5]{dG15Thesis}) \label{def:piece}
Let $\Gamma, \alpha$ be reduced $S$-labelled graphs. Two label-preserving maps of labelled graphs $\psi_1,\psi_2: \alpha\rightarrow\Gamma$ are called \textit{essentially distinct} if there is no label-preserving automorphism $\eta$ of $\Gamma$ with $\psi_2=\eta\circ\psi_1$.
A \textit{piece} of $\Gamma$ is a labelled path $\alpha$ (treated as a labelled graph) that admits two essentially distinct label-preserving maps $\phi_1,\phi_2:\alpha\rightarrow\Gamma$.
\end{Definition}
\begin{Definition}[$Gr'(\frac{1}{6})$ and $C'(\frac{1}{6})$ conditions] \label{def:GraphicalSmallCancellation}
A reduced $S$-labelled graph $\Gamma$ is $Gr'(\frac{1}{6})$-labelled if any piece $\alpha$ contained in a simple cycle $\mathcal{O}$ of $\Gamma$ satisfies $|\alpha|<\frac{1}{6}|\mathcal{O}|$.
A presentation $\langle S|R\rangle$ satisfies the classical $C'(\frac{1}{6})$ condition if the graph, constructed as disjoint union of cycle graphs labelled by the words in $R$, is $Gr'(\frac{1}{6})$-labelled.
\end{Definition}
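As a classical illustration of the $C'(\frac{1}{6})$ condition (a standard fact, though not spelled out in the sources above): in the genus-$2$ surface group presentation $\langle a,b,c,d \mid [a,b][c,d]\rangle$, every piece is a single letter, while the relator has length $8$, so for every piece $p$ contained in the relator cycle $\mathcal{O}$:

```latex
% Classical example: the genus-2 surface group presentation
% <a,b,c,d | [a,b][c,d]> satisfies C'(1/6), since each piece is
% a single letter and the relator has length 8.
\[
  \frac{|p|}{|\mathcal{O}|} \;=\; \frac{1}{8} \;<\; \frac{1}{6}.
\]
```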
Let $Y$ be a geodesic metric space. A subset $A\subseteq Y$ is \textit{convex} if every geodesic segment with endpoints in $A$ is contained in $A$.
\begin{Lemma}\cite[Lemma 2.15]{GS18} \label{lemma:EmbedConvex}
Let $\Gamma_0$ be a component of a $Gr'(\frac{1}{6})$-labelled graph $\Gamma$. For any choice of base vertices $x\in \mathscr{G}(G(\Gamma),S)$, $y\in \Gamma_0$, there is a unique label-preserving map $f:\Gamma_0\rightarrow \mathscr{G}(G(\Gamma),S)$ with $f(y)=x$. Moreover, such $f$ is an isometric embedding and its image in $\mathscr{G}(G(\Gamma),S)$ is convex.
\end{Lemma}
\begin{Definition}(Embedded component \cite[Definition 2.5]{ACGH19})
The image of the label-preserving map given by Lemma \ref{lemma:EmbedConvex} is called an \textit{embedded component} of $\Gamma$.
\end{Definition}
For each component $\Gamma_i$ of $\Gamma$, choose a basepoint $y_i\in \Gamma_i$.
Let $\Gamma_i'$ be the embedded component of $\Gamma$ which is the image of the label-preserving map $f_i:\Gamma_i\rightarrow \mathscr{G}(G(\Gamma),S)$ with $f_i(y_i)=1_{G(\Gamma)}$ (the identity element of $G(\Gamma)$).
Then an arbitrary embedded component of $\Gamma$ in $\mathscr{G}(G(\Gamma),S)$ is always a $G(\Gamma)$-translate of some $\Gamma_i'$.
The following observation is a consequence of the definition of pieces in $\Gamma$.
\begin{Lemma} \label{lemma:PieceIntersection}
A labelled path $p$ is a piece of $\Gamma$ if and only if $p$ can embed into the intersection of two different embedded components.
\end{Lemma}
The following result is probably well-known. However, the author cannot find an exact reference in the literature, so a proof is provided here for completeness.
\begin{Lemma} \label{lemma:Aut=Stab}
Let $\Gamma_0$ be a component of a $Gr'(\frac{1}{6})$-labelled graph $\Gamma$. For any label-preserving embedding $f:\Gamma_0\hookrightarrow\mathscr{G}(G(\Gamma),S)$ provided by Lemma \ref{lemma:EmbedConvex}, the label-preserving automorphism group $\mathrm{Aut}(\Gamma_0)$ of the component $\Gamma_0$ is isomorphic to the stabilizer group $\mathrm{Stab}(f(\Gamma_0))$ in $G(\Gamma)$ of the embedded component $f(\Gamma_0)$.
\end{Lemma}
\begin{proof}
For any two different label-preserving maps $f_1,f_2:\Gamma_0\rightarrow\mathscr{G}(G(\Gamma),S)$ provided by Lemma \ref{lemma:EmbedConvex}, the stabilizer groups $\mathrm{Stab}(f_1(\Gamma_0)),\mathrm{Stab}(f_2(\Gamma_0))$ are isomorphic. Thus, without loss of generality, we may assume that $1_{G(\Gamma)}\in f(\Gamma_0)$. Let $x_0$ be the vertex of $\Gamma_0$ such that $f(x_0)=1_{G(\Gamma)}$.
It is obvious that $f:\Gamma_0\rightarrow f(\Gamma_0)$ is a label-preserving isomorphism. This implies a natural monomorphism:
\begin{align*}
\Phi: \mathrm{Stab}(f(\Gamma_0)) &\hookrightarrow \mathrm{Aut}(\Gamma_0) \\
g &\mapsto \Phi(g)= f^{-1}gf.
\end{align*}
Observe first that for any $h\in \mathrm{Aut}(\Gamma_0)$, $h$ is determined by $h(x_0)$.
Since $\Gamma$ is reduced, every two edges originating from (resp. terminating at) $h(x_0)$ have different labels. Hence an edge originating from $h(x_0)$ and labelled by $s\in S$ must be the image under $h$ of the unique edge originating from $x_0$ and labelled also by $s$. The same holds for edges terminating at $h(x_0)$. Since $\Gamma_0$ is connected, the observation follows by induction.
We now prove that $\Phi$ is surjective. If $g_h=f(h(x_0))$ is the corresponding vertex in the embedded component $f(\Gamma_0)$ in $\mathscr{G}(G(\Gamma),S)$, then left multiplication by $g_h$ translates the edges adjacent to $1_{G(\Gamma)}$ in $f(\Gamma_0)$ to the ones adjacent to the vertex $g_h$, which must be contained in $f(\Gamma_0)$ as well.
By induction again, we know that $g_h$ stabilizes $f(\Gamma_0)$, and thus $g_h\in\mathrm{Stab}(f(\Gamma_0))$. It is easy to see that $\Phi(g_h)=h$, so $\Phi$ is an isomorphism.
\end{proof}
\subsection{Combinatorial polygons}
Diagrams are among the main tools of small cancellation theory. In this subsection, we introduce several relevant notions for future discussions.
A (singular disk) \textit{diagram} is a finite, contractible, $2$-dimensional CW-complex embedded in $\mathbb{R}^2$. It is $S$-\textit{labelled} if its $1$-skeleton, when treated as a graph, is endowed with a labelling by $S$. It is a diagram \textit{over} the presentation $\langle S|R\rangle$ if it is $S$-labelled and the word read on the boundary cycle of each $2$-cell belongs to $R$.
A diagram is \textit{simple} if it is homeomorphic to a disk. We call $1$-cells \textit{edges} and $2$-cells \textit{faces}.
If $D$ is a diagram over $\langle S|R\rangle$, then for any choice of base vertices $x\in \mathscr{G}(G(\Gamma),S)$ and $y\in D$, there exists a unique label-preserving map $g$ from the $1$-skeleton of $D$ to $\mathscr{G}(G(\Gamma),S)$ with $g(y)=x$. The map $g$ need not be an immersion in general. In the following, we sometimes identify a diagram with its image in $\mathscr{G}(G(\Gamma),S)$ to simplify notation; it is easy to distinguish them from the context.
An \textit{arc} in a diagram $D$ is an embedded edge path whose interior vertices have valence $2$ and whose initial and terminal vertices have valence different from $2$. An arc is \textit{exterior} if it is contained in the boundary of $D$; otherwise it is \textit{interior}. A face of a diagram $D$ is called \textit{interior} if its boundary arcs are all interior arcs; otherwise it is called a \textit{boundary face}. A \textit{$(3,7)$-diagram} is a diagram in which every interior vertex has valence at least $3$ and the boundary of every interior face consists of at least $7$ arcs.
If $\Pi$ is a face in a diagram, its \textit{interior degree} $i(\Pi)$ is the number of interior arcs in its boundary $\partial\Pi$, and its \textit{exterior degree} $e(\Pi)$ is the number of exterior arcs in $\partial\Pi$.
\begin{Definition}(combinatorial polygon \cite[Definition 2.11]{GS18})
A \textit{combinatorial $n$-gon} $(D,(\gamma_i))$ is a $(3,7)$-diagram $D$
with a decomposition of $\partial D$ into $n$ reduced subpaths (called \textit{sides}) $\partial D=\gamma_0\gamma_1\cdots\gamma_{n-1}$, such that every boundary face $\Pi$ with $e(\Pi)=1$ for which the exterior arc in $\partial\Pi$ is contained in one of the $\gamma_i$ satisfies $i(\Pi)\geq 4$.
\end{Definition}
A valence $2$ vertex that belongs to two sides is called a \textit{distinguished vertex}.
A face that contains a distinguished vertex is called a \textit{distinguished face}.
We adopt the convention in \cite{ACGH19} that the ordering of the sides of a combinatorial $n$-gon is considered up to cyclic permutation, with subscripts modulo $n$. By convention, $2$-gons, $3$-gons, and $4$-gons will be called bigons, triangles, and quadrangles respectively.
Now we return to the setting of $Gr'(\frac{1}{6})$-labelled graph $\Gamma$.
The following lemma is useful for diagrams over graphical small cancellation presentations.
\begin{Lemma} \cite[Lemma 2.13]{dG15} \label{Lemma:diagram}
Let $\Gamma$ be a $Gr'(\frac{1}{6})$-labelled graph. If $w\in F(S)$ satisfies $w=1_{G(\Gamma)}$ in $G(\Gamma)$, then there exists a diagram $D$ over $\langle S|R\rangle$ such that $\partial D$ is labelled by $w$ and every interior arc of $D$ is a piece.
\end{Lemma}
A geodesic $n$-gon $P$ in $\mathscr{G}(G(\Gamma),S)$ is a closed path that is a concatenation of $n$ geodesic segments $\gamma_0',\cdots,\gamma_{n-1}'$, which are called \textit{sides} of $P$.
The numbers appearing in the definition of combinatorial polygons are motivated by the following:
\begin{Lemma}\cite[Proposition 3.5]{ACGH19} \label{Lemma:CombPolygon}
Let $P=\gamma_0'\cdots\gamma_{n-1}'$ be a geodesic $n$-gon in $\mathscr{G}(G(\Gamma),S)$. Then there exists a diagram $D$ over $\langle S|R\rangle$ whose interior arcs are pieces such that:
\begin{itemize}
\item $\partial D=\gamma_0\cdots\gamma_{n-1}$ and the word read on $\gamma_i$ is the same as the one read on $\gamma_i'$ for $0\leq i\leq n-1$,
\item $D$ is a combinatorial $n$-gon after forgetting interior vertices of valence $2$.
\end{itemize}
\end{Lemma}
\subsubsection{Combinatorial bigons and triangles}
In the remainder of this subsection, our discussions about diagrams are always combinatorial, not necessarily over presentations.
\begin{Theorem}(Strebel's classification, \cite[Theorem 43]{S90}). \label{thm:ClassificationForTriangle}
Let $D$ be a simple diagram.
\begin{itemize}
\item If $D$ is a combinatorial bigon, then $D$ has the form $\mathrm{I}_1$ as depicted in Figure \ref{figure:classification-for-triangle}.
\item If $D$ is a combinatorial triangle\footnote{In our definition of combinatorial polygon, the form $\mathrm{III}_2$ of Strebel's classification cannot appear.},
then $D$ has one of the forms $\mathrm{I}_2$, $\mathrm{I}_3$, $\mathrm{II}$, $\mathrm{III}$, $\mathrm{IV}$, or $\mathrm{V}$ as depicted in Figure \ref{figure:classification-for-triangle}.
\end{itemize}
\end{Theorem}
\vspace{-1.7em}
\setlength{\unitlength}{3.5cm}
\begin{picture}(1,1)
\qbezier(0, 0.4)(0.5,0.9)(1, 0.4)
\qbezier(0, 0.4)(0.5,-0.1)(1, 0.4)
\put(0.35,0.63){\line(0,-1){0.46}}
\put(0.65,0.63){\line(0,-1){0.46}}
\put(0.45,0.4){$\cdots$}
\put(0,0.7){$\mathrm{I}_1$}
\put(1.2,0){\line(1,0){1}}
\put(1.2,0){\line(2,3){0.5}}
\put(2.2,0){\line(-2,3){0.5}}
\put(1.5,0.45){\line(1,0){0.4}}
\put(1.4,0.3){\line(1,0){0.6}}
\put(1.685,0.34){$\vdots$}
\put(1.2,0.7){$\mathrm{I}_2$}
\put(2.4,0){\line(1,0){1}}
\put(2.4,0){\line(2,3){0.5}}
\put(3.4,0){\line(-2,3){0.5}}
\put(2.6,0){\line(-2,3){0.1}}
\put(2.8,0){\line(-2,3){0.2}}
\put(2.54,0.1){$\cdots$}
\put(3,0){\line(2,3){0.2}}
\put(3.2,0){\line(2,3){0.1}}
\put(3.13,0.1){$\cdots$}
\put(2.4,0.7){$\mathrm{I}_3$}
\end{picture}
\vspace{-1.3em}
\setlength{\unitlength}{3.3cm}
\begin{picture}(1,1) \label{figure:classification-for-triangle}
\put(0,0){\line(1,0){1}}
\put(0,0){\line(2,3){0.5}}
\put(1,0){\line(-2,3){0.5}}
\put(0.2,0){\line(-2,3){0.1}}
\put(0.4,0){\line(-2,3){0.2}}
\put(0.14,0.1){$\cdots$}
\put(0.6,0){\line(2,3){0.2}}
\put(0.8,0){\line(2,3){0.1}}
\put(0.73,0.1){$\cdots$}
\put(0.3,0.45){\line(1,0){0.4}}
\put(0.4,0.6){\line(1,0){0.2}}
\put(0.485,0.49){$\vdots$}
\put(0,0.7){$\mathrm{II}$}
\put(1.2,0){\line(1,0){1}}
\put(1.2,0){\line(2,3){0.5}}
\put(2.2,0){\line(-2,3){0.5}}
\put(1.5,0){\line(-2,3){0.15}}
\put(1.7,0){\line(-2,3){0.25}}
\put(1.43,0.1){$\cdots$}
\put(1.7,0){\line(2,3){0.25}}
\put(1.9,0){\line(2,3){0.15}}
\put(1.84,0.1){$\cdots$}
\put(1.5,0.45){\line(1,0){0.4}}
\put(1.6,0.6){\line(1,0){0.2}}
\put(1.685,0.49){$\vdots$}
\put(1.2,0.7){$\mathrm{III}$}
\put(2.4,0){\line(1,0){1}}
\put(2.4,0){\line(2,3){0.5}}
\put(3.4,0){\line(-2,3){0.5}}
\put(2.6,0){\line(-2,3){0.1}}
\put(2.8,0){\line(-2,3){0.2}}
\put(2.54,0.1){$\cdots$}
\put(3,0){\line(2,3){0.2}}
\put(3.2,0){\line(2,3){0.1}}
\put(3.13,0.1){$\cdots$}
\put(2.7,0.45){\line(1,0){0.4}}
\put(2.8,0.6){\line(1,0){0.2}}
\put(2.885,0.49){$\vdots$}
\put(2.9,0.225){\line(0,1){0.225}}
\put(2.9,0.225){\line(2,-1){0.185}}
\put(2.9,0.225){\line(-2,-1){0.185}}
\put(2.4,0.7){$\mathrm{IV}$}
\put(3.6,0){\line(1,0){1}}
\put(3.6,0){\line(2,3){0.5}}
\put(4.6,0){\line(-2,3){0.5}}
\put(3.8,0){\line(-2,3){0.1}}
\put(4,0){\line(-2,3){0.2}}
\put(3.74,0.1){$\cdots$}
\put(4.2,0){\line(2,3){0.2}}
\put(4.4,0){\line(2,3){0.1}}
\put(4.33,0.1){$\cdots$}
\put(3.9,0.45){\line(1,0){0.4}}
\put(4,0.6){\line(1,0){0.2}}
\put(4.085,0.49){$\vdots$}
\put(4.1,0.225){\line(0,-1){0.225}}
\put(4.1,0.225){\line(2,1){0.26}}
\put(4.1,0.225){\line(-2,1){0.26}}
\put(3.6,0.7){$\mathrm{V}$}
\put(0.3,-0.2){F\textsc{igure} 2.2.1.~ Strebel's classification of combinatorial bigons and triangles.}
\end{picture}
\vspace{2em}
\subsubsection{Special combinatorial quadrangles}
In this subsection we introduce some notions from \cite{ACGH19}.
A combinatorial $n$-gon $(D,(\gamma_i))$ is \textit{degenerate} if there exists $i$ such that $$(D, \gamma_0, \cdots,\gamma_i\gamma_{i+1}, \cdots, \gamma_{n-1})$$ is a combinatorial $(n-1)$-gon. In this case the terminal vertex of $\gamma_i$ is called a \textit{degenerate vertex}.
\begin{Definition}\cite[Definition 3.12]{ACGH19}
A combinatorial $n$-gon $D$ is \textit{reducible} if it admits a vertex or face reduction as defined below, otherwise it is \textit{irreducible}.
\end{Definition}
Let $(D,(\gamma_i))$ be a combinatorial $n$-gon. We denote by $\Pi^\circ$ (resp. $e^\circ$) the interior of a face $\Pi$ (resp. an edge $e$).
\noindent\textbf{Vertex reduction.}
If $v\in D$ is a cut vertex such that $v$ is contained in exactly two boundary faces, then the closures $D', D''$ of the two components of $D\setminus v$ are subcomplexes of $D$ in a natural way. The \textit{vertex reduction of $(D,(\gamma_i))$ at $v$} gives two combinatorial polygons $D',D''$ whose sides are either the $\gamma_i$ or subsegments of $\gamma_i$ cut out by $v$.
\noindent\textbf{Face reduction.}
Suppose $\Pi\subseteq D$ is a separating boundary face with two boundary arcs $e,e'\subseteq \partial\Pi$ such that $D\setminus(\Pi^\circ\cup e^\circ\cup (e')^\circ)$ has exactly two components, each containing a distinguished vertex. The \textit{face reduction of $(D,(\gamma_i))$ at $(\Pi,e,e')$} produces two combinatorial polygons, obtained by collapsing to a vertex a simple path connecting $e$ and $e'$ whose interior is contained in $\Pi^\circ$, and then performing vertex reduction at the resulting vertex.
The inverse operation of vertex (resp. face) reduction is called \textit{vertex (resp. face) combination}. For precise definitions, see \cite{ACGH19}.
\vspace{.5em}
It is easy to see that the definition of reducibility in \cite{ACGH19} is equivalent to the one given here.
A simple combinatorial $n$-gon, for $n>2$, is \textit{special} if it is non-degenerate, irreducible, and every non-distinguished vertex has valence $3$.
The classification of special combinatorial quadrangles provided by G. Arzhantseva, C. Cashen, D. Gruber, and D. Hume gives us the following useful observation.
\begin{Lemma} \label{lemma:SpecialCombGeodQuadrangles}
Let $D$ be a simple, non-degenerate, irreducible combinatorial quadrangle. Then there exist two opposite sides which can be connected by a path consisting of at most $6$ interior arcs.
\end{Lemma}
\begin{proof}
The blow-up operations in the proof of \cite[Proposition 3.20]{ACGH19} produce a special combinatorial quadrangle $D'$ associated with $D$ which has the following properties:
\begin{itemize}
\item the diagram $D'$ is the image of $D$ under a composition of finitely many vertex blow-up maps,
\item the boundary of $D'$ is the same as that of $D$, i.e., the blow-up maps in the composition restrict to the identity on $\partial D$,
\item compared with $D$, $D'$ has only additional interior edges.
\end{itemize}
By the classification of special combinatorial quadrangles
\cite[Theorem 3.18]{ACGH19}, we know that there exist two opposite sides of $D'$ and a path $p'$ connecting them which is a concatenation of at most $6$ interior arcs.
Then there exists a path $p\subset D$ whose image under the composition of blow-up maps above is $p'$;
in other words, $p'$ is obtained from $p$ via a series of vertex blow-up operations. Therefore, $p$ connects the two opposite sides of $D$ corresponding to the two opposite sides of $D'$ connected by $p'$, and $p$ consists of no more interior arcs than $p'$. Hence $p$ is a concatenation of at most $6$ interior arcs of $D$.
\end{proof}
\subsection{Asymptotic cones of a metric space} \label{subsection:AsymptoticCones}
Asymptotic cones were first used, in essence, by Gromov in \cite{G81} and then formally introduced by van den Dries and Wilkie in \cite{dDW84}.
Recall that a \textit{non-principal ultrafilter} is a finitely additive measure $\omega$ on the set of all subsets of $\mathbb{N}$ (or, more generally, of a countable set) such that every subset has measure either $0$ or $1$ and all finite subsets have measure $0$. Throughout the paper all ultrafilters are non-principal.
Let $(A_n)$ and $(B_n)$ be two sequences of objects and let $\mathcal{R}$ be a relation between $A_n$ and $B_n$ for every $n$. We write $A_n\mathcal{R}_\omega B_n$ if $A_n\mathcal{R} B_n$ $\omega$-almost surely, that is,
\[\omega(\{n\in\mathbb{N}\mid A_n\mathcal{R} B_n\})=1.\]
This defines, for example, the relations $\in_\omega,=_\omega,<_\omega,\subseteq_\omega$.
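As a simple illustration of how $\omega$ resolves oscillation (a standard fact about non-principal ultrafilters, recorded here only as an example): since $\omega$ is finitely additive and $\{0,1\}$-valued, every bounded sequence of real numbers $(x_n)$ has a unique $\omega$-limit $\lim_\omega x_n$. For instance, if $x_n=(-1)^n$, then exactly one of the sets of even and odd indices has $\omega$-measure $1$, so either $x_n=_\omega 1$ or $x_n=_\omega -1$, and accordingly
\[\lim_\omega x_n=1 \quad\text{or}\quad \lim_\omega x_n=-1.\]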
Given a space $X$, the \textit{ultrapower} $X^\omega$ of $X$ is the quotient $X^{\mathbb{N}}/\approx$, where $(x_n)\approx(y_n)$ if $x_n=_\omega y_n$.
Let $(X,d)$ be a metric space, $\omega$ an ultrafilter over $\mathbb{N}$, $(e_n)$ an element in $X^{\omega}$, and $(l_n)$ a sequence of numbers with $\lim_\omega l_n=+\infty$.
Define
\[X_e^\omega=\{(x_n)\in X^{\mathbb{N}}| \lim_\omega\frac{d(x_n,e_n)}{l_n}<+\infty \}.\]
Define an equivalence relation $\sim$ on $X_e^\omega$:
\[(x_n)\sim(y_n)\Leftrightarrow \lim_\omega \frac{d(x_n,y_n)}{l_n}=0.\]
\begin{Definition}
The quotient $X_e^\omega/\sim$, denoted by $\mathrm{Con}_{\omega}(X; (e_n),(l_n))$, is called the \textit{asymptotic cone} of $X$ with respect to the ultrafilter $\omega$, the scaling constants $(l_n)$ and the observation point $(e_n)$. It admits a natural metric $d_\omega$ defined as follows:
\[d_\omega((x_n),(y_n))=\lim_\omega\frac{d(x_n,y_n)}{l_n}.\]
\end{Definition}
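Two classical examples may help to fix the definition (they are standard facts and are not used in the sequel): every asymptotic cone of $\mathbb{R}^k$ is isometric to $\mathbb{R}^k$, since each rescaled space $(\mathbb{R}^k,\frac{1}{l_n}d)$ is isometric to $(\mathbb{R}^k,d)$; and every asymptotic cone of a $\delta$-hyperbolic geodesic space is an $\mathbb{R}$-tree, since after rescaling all geodesic triangles are $\frac{\delta}{l_n}$-thin and
\[\lim_\omega \frac{\delta}{l_n}=0.\]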
For a sequence $(A_n)$ of subsets in $X$, its $\omega$-\textit{limit} in $\mathrm{Con}_{\omega}(X; (e_n),(l_n))$ is defined by
\[\lim_\omega A_n=\{\lim_\omega a_n|a_n\in_\omega A_n\}.\]
Obviously, if $\lim_\omega\frac{d(e_n,A_n)}{l_n}=+\infty$, then $\lim_\omega A_n$ is empty. It is easy to verify that every limit set $\lim_\omega A_n$ is closed.
Let $(\alpha_n)$ be a sequence of geodesics whose lengths are of order $O(l_n)$. Then the $\omega$-limit $\lim_\omega\alpha_n$ in $\mathrm{Con}_{\omega}(X; (e_n),(l_n))$ is either empty or a geodesic. If $\lim_\omega \alpha_n$ is a geodesic in $\mathrm{Con}_{\omega}(X; (e_n),(l_n))$, then we call it a \textit{limit geodesic}. Therefore, any asymptotic cone of a geodesic metric space is also a geodesic metric space.
Not every geodesic in $\mathrm{Con}_{\omega}(X; (e_n),(l_n))$ is a limit geodesic, not even when $X$ is the Cayley graph of a finitely generated group (see \cite{D09} for a counterexample).
\subsection{Tree-graded metric spaces}
Let $(X,d)$ be a geodesic metric space and let $\mathcal{A}=\{A_\lambda|\lambda\in \Lambda\}$ be a collection of subsets in $X$. In every asymptotic cone $\mathrm{Con}_\omega(X;(e_n),(l_n))$, we consider the collection $\mathcal{A}_\omega$ of limit subsets
\[\{\lim_\omega A_{\lambda_n}\mid(\lambda_n)\in \Lambda^\omega \text{ such that } \lim_\omega\frac{d(e_n,A_{\lambda_n})}{l_n}<+\infty\}.\]
\begin{Definition} \cite{DS05}
The metric space $X$ is \textit{asymptotically tree-graded with respect to} $\mathcal{A}$ if every asymptotic cone $\mathrm{Con}_\omega(X;(e_n),(l_n))$ of $X$ is tree-graded with respect to $\mathcal{A}_\omega$.
\end{Definition}
The notion of an asymptotically tree-graded metric space is equivalent to a list of geometric conditions (see \cite[Theorem 4.1 and Remark 4.2 (3)]{DS05}). Now we introduce some related geometric notions and results.
\begin{Definition}[fat polygons] \label{def:FatPolygons}
Let $n\ge 1$ be an integer. A geodesic $n$-gon $P$ is \textit{$\theta$-fat} if the distance between any two non-adjacent edges is greater than $\theta$.
\end{Definition}
Our notion of ``fat polygons" is weaker than the notion in \cite{D09} (cf. \cite[Lemma 4.6]{D09}). However, the weak version suffices for our purpose.
The method of \cite[Proposition 4.14]{D09} can also be applied to bigons; therefore:
\begin{Lemma}\label{lemma:FatQuadrangles}
Let $\theta>0$. In any asymptotic cone $\mathrm{Con}_{\omega}(X; (e_n),(l_n))$ of a geodesic metric space $(X,d)$,
every simple non-trivial bigon with edges limit geodesics is the limit set $\lim_\omega Q_n$ of a sequence $(Q_n)$ of simple geodesic quadrangles that are $\theta$-fat $\omega$-almost surely.
\end{Lemma}
Let $\alpha_1$ and $\alpha_2$ be two paths. A bigon \textit{formed by} $\alpha_1$ and $\alpha_2$ is a union of a subpath of $\alpha_1$ with a subpath of $\alpha_2$ such that the two subpaths have common endpoints.
\begin{Lemma}\cite[Lemma 3.8]{D09} \label{lemma:OneImplyTheOther}
Let $Y$ be a metric space and let $\mathcal{B}$ be a collection of closed subsets of $Y$ which satisfies property $(T_1)$.
Let $\alpha$ and $\beta$ be two paths with common endpoints such that any non-trivial simple bigon formed by $\alpha$ and $\beta$ is contained in a subset in $\mathcal{B}$.
If $\alpha$ is contained in $B\in \mathcal{B}$, then $\beta$ is also contained in $B$.
\end{Lemma}
\begin{Lemma}\cite[Proposition 3.9]{D09} \label{lemma:FourGeodesics}
Let $Y$ be a metric space and let $\mathcal{B}$ be a collection of closed subsets of $Y$ which satisfies property $(T_1)$.
Let $\alpha_1$ and $\beta_1$ (resp. $\alpha_2$ and $\beta_2$) be two paths with common endpoints $u,v$ (resp. $v,w$). Assume that:
\begin{enumerate}
\item $\alpha_1\cap\alpha_2=\{v\}$, $\beta_1\cap\beta_2$ contains a point $a\neq v$;
\item all non-trivial simple bigons formed either by $\beta_1$ and $\beta_2$, or by $\alpha_1$ and $\beta_1$, or by $\alpha_2$ and $\beta_2$ are contained in a subset in $\mathcal{B}$.
\end{enumerate}
Then the bigon formed by $\beta_1$ and $\beta_2$ with endpoints $a$ and $v$ is contained in a subset in $\mathcal{B}$.
\end{Lemma}
\section{The proof of the Theorem} \label{section:proof}
In this section, $\Gamma$ is always a $Gr'(\frac{1}{6})$-labelled graph labelled by a finite set $S$, whose pieces have length less than $M$. Let $X:=\mathscr{G}(G(\Gamma),S)$ be the Cayley graph of $(G(\Gamma),S)$, equipped with the length metric $d$ obtained by assigning each edge length $1$.
In this setting, we want to show that $\mathscr{G}(G(\Gamma),S)$ is asymptotically tree-graded with respect to the collection of all embedded components of $\Gamma$ (denoted by $\mathcal{A}$).
\subsection{Contracting property of embedded components}
\begin{Definition}[Contracting subset]
Let $(Y,d_Y)$ be a metric space.
A subset $A\subseteq Y$ is called strongly $(\kappa,C)$-\textit{contracting} if for any geodesic $\gamma$ disjoint from $\overline{N}_C(A)$, we have $\mathrm{diam}(\pi_A(\gamma))\leq \kappa$. A collection of strongly $(\kappa,C)$-contracting subsets is referred to as a strongly $(\kappa,C)$-\textit{contracting system}.
\end{Definition}
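As a motivating example (standard, and not needed below): in an $\mathbb{R}$-tree, every closed subtree $A$ is strongly $(0,0)$-contracting, since any geodesic $\gamma$ disjoint from $A$ projects to a single point, namely the point of $A$ closest to $\gamma$; thus
\[\mathrm{diam}(\pi_A(\gamma))=0.\]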
\begin{Example} \label{Example:LeftCosetsInRelHyp}
In the Cayley graph of a relatively hyperbolic group, the collection of all left cosets of peripheral subgroups is a strongly contracting system by \cite[Proposition 8.2.4]{GP15}.
\end{Example}
Now we show that $\mathcal{A}$ is a strongly contracting system.
\begin{Proposition} \label{proposition:contraction}
Let $\Gamma$ be a $Gr'(\frac{1}{6})$-labelled graph whose pieces are uniformly bounded by $M$. Then the collection of all embedded components of $\Gamma$ in $\mathscr{G}(G(\Gamma),S)$ is a strongly $(2M,0)$-contracting system.
\end{Proposition}
The method in the proof of \cite[Theorem 4.1]{ACGH19} for geodesics can also be applied to embedded components. However, in the case of an embedded component, the proof is much easier.
\begin{proof}
Let $\Gamma_0$ be an embedded component, and let $\alpha$ be a geodesic disjoint from $\Gamma_0$. Let $x',y'\in \pi_{\Gamma_0}(\alpha)$ be points realizing $\mathrm{diam}(\pi_{\Gamma_0}(\alpha))$; we wish to bound $d(x',y')$ by $2M$.
If $x'=y'$ we are done. Otherwise, choose $x,y\in\alpha$ such that $x'\in\pi_{\Gamma_0}(x)$ and $y'\in\pi_{\Gamma_0}(y)$. Let $\alpha'$ be the subpath of $\alpha$ or $\overline{\alpha}$ (the inverse of $\alpha$) from $x$ to $y$. Choose a path $p$ from $x'$ to $y'$ in $\Gamma_0$, and choose geodesics $\beta_1=[x,x']$ and $\beta_2=[y,y']$; then $\beta_1\cap\Gamma_0=x'$ and $\beta_2\cap\Gamma_0=y'$ since $x',y'$ are closest-point projections. Choose a diagram $D$ over $\langle S|R\rangle$ as in Lemma \ref{Lemma:diagram} filling the quadrangle $\alpha'\beta_2\overline{p}\overline{\beta_1}$. Assume that we have chosen $p$ and $D$ so that $D$ has the minimal number of edges among all possible choices of $p$. To simplify the notation, the sides of $D$ are also denoted by $\alpha',\beta_2,\overline{p},\overline{\beta_1}$.
First assume that $\beta_1$ and $\beta_2$ do not intersect. Then $\alpha'$ can be chosen in such a way that both $\alpha'\cap\beta_1$ and $\alpha'\cap\beta_2$ consist of a single point. Therefore $\alpha'\beta_2\overline{p}\overline{\beta_1}=\partial D$ is a simple cycle, and $D$ is a simple combinatorial quadrangle.
Let $a$ be an arc of $D$ contained in $p$ and let $\Pi$ be a face of $D$ containing $a$. Then $a$ has a lift to $\Gamma$ via first being a subpath of $p\subseteq \Gamma_0$ and then returning to a component of $\Gamma$. Another lift of $a$ to $\Gamma$ arises from viewing $a$ as a subpath of $\partial\Pi$, whose label is contained in $R$, so that $\partial\Pi$ has an embedding into $\Gamma$. If these two lifts coincide (up to label-preserving automorphisms of $\Gamma$), then we can remove the edges of $a$ from $D$ and obtain a path $p'$ in $\Gamma_0$ by replacing the subpath $a$ of $p$ with the interior edges of $\Pi$, which contradicts the minimality. Hence, the two lifts are essentially distinct, and $a$ is a piece. If $p_-$ (resp. $p_+$) is a distinguished vertex, $\Pi$ is the face of $D$ containing the initial (resp. terminal) edge of $p$, and $a$ is the initial (resp. terminal) subpath of $p$ contained in $\partial\Pi\cap p$, then the above two lifts are always essentially distinct since $\beta_1\cap\Gamma_0=x'$ (resp. $\beta_2\cap\Gamma_0=y'$). Therefore, $p$ is a concatenation of pieces.
We construct a new diagram $D'$ by attaching a new face $\Pi'$ to $D$, identifying a proper subpath $p'$ of $\partial\Pi'$ with the side $p$ of $D$.
This operation is purely combinatorial: the boundary of $\Pi'$ is not labelled. We claim that $D'$ is a combinatorial triangle with a distinguished vertex in $\partial\Pi'\setminus p'$. Any interior face of $D'$ must be contained in $D$, and its boundary is a concatenation of pieces in $\Gamma$ by the previous paragraph, so its interior degree is at least $7$ by the $Gr'(\frac{1}{6})$ condition. Any boundary face $\Pi$ of $D'$ with $e(\Pi)=1$ whose exterior arc $b\subseteq\partial\Pi$ is contained in one side of $D'$ must also be contained in $D$, and $b$ is contained in one of $\alpha',\overline{\beta_1},\beta_2$. So $b$ is a geodesic in $X$ and the interior arcs of $\Pi$ are pieces; thus the sum of the lengths of those interior arcs is at least half the length of $\partial\Pi$, and therefore $i(\Pi)\geq4$ by the $Gr'(\frac{1}{6})$ condition again.
By examining all the forms listed in Theorem \ref{thm:ClassificationForTriangle} for $D'$, we conclude that $p$ consists of at most $2$ pieces, so $d(x',y')$ is bounded by $2M$.
Now suppose that $\beta_1$ and $\beta_2$ intersect. Let $D''$ be the maximal simple subdiagram of $D$ containing $p$ (such a $D''$ exists since $p$ is disjoint from $\alpha$ and $p$ intersects $\beta_1,\beta_2$ only in their endpoints).
Then $D''$ is a simple combinatorial triangle. The same argument as above shows that $p$ is a piece (here the corresponding $D'$ can only have form $\mathrm{I}_1$ in Theorem \ref{thm:ClassificationForTriangle}), so $d(x',y')$ is bounded by $2M$ as well.
\end{proof}
We call a simple cycle of $\mathscr{G}(G(\Gamma),S)$ contained in an embedded component an \textit{embedded simple cycle}. We know from the proof of \cite[Theorem 4.1]{ACGH19} that a geodesic is strongly contracting if and only if its intersections with all embedded simple cycles are uniformly bounded.
Similarly, an embedded component is strongly contracting provided that its intersections with all other embedded components are uniformly bounded. Furthermore, an embedded component is strongly contracting provided that all pieces are uniformly bounded, because the paths contained in the intersection of two different embedded components are pieces (Lemma \ref{lemma:PieceIntersection}).
Since our proof depends heavily on the contracting property of the embedded components, the condition that all pieces of the defining graph have uniformly bounded length does not seem improvable by the method of this paper.
\begin{Lemma} \label{lemma:ASatisfyAlpha1}
$\mathcal{A}$ satisfies property $(\Lambda_1)$.
\end{Lemma}
\begin{proof}
Let $\Gamma_1, \Gamma_2\in \mathcal{A}$ be two different embedded components and let $\delta\geq0$. For any two points $x_0,y_0\in N_\delta(\Gamma_1)\cap N_\delta(\Gamma_2)$, there exist $x_1,y_1\in \Gamma_1$ such that $d(x_1,x_0)\leq\delta$ and $d(y_1,y_0)\leq\delta$. By Lemma \ref{lemma:EmbedConvex}, the embedded components of $\Gamma$ are convex in $\mathscr{G}(G(\Gamma),S)$; in particular, $\Gamma_1$ is convex. Thus $\alpha=[x_1,y_1]\subseteq \Gamma_1$.
If $\alpha$ is disjoint from $\Gamma_2$, then $\mathrm{diam}(\pi_{\Gamma_2}(\alpha))\leq 2M$ by Proposition \ref{proposition:contraction}. Let $a\in \pi_{\Gamma_2}(x_1)$ and $b\in\pi_{\Gamma_2}(y_1)$; then $d(a,b)\leq \mathrm{diam}(\pi_{\Gamma_2}(\alpha))\leq 2M$. Moreover,
$$d(x_1,a)\leq d(x_1,x_0)+d(x_0,\Gamma_2)\leq 2\delta,~d(y_1,b)\leq d(y_1,y_0)+d(y_0,\Gamma_2)\leq2\delta.$$
Therefore, $d(x_1,y_1)\leq d(x_1,a)+d(a,b)+d(b,y_1)\leq 2M+4\delta$, so $d(x_0,y_0)\leq d(x_0,x_1)+d(x_1,y_1)+d(y_1,y_0)\leq 2M+6\delta$.
Now assume that the intersection $\alpha\cap \Gamma_2$ is nonempty.
Let $z$ be the starting point of $\alpha\cap \Gamma_2$ if $x_1\notin \Gamma_2$, i.e., $z$ is the point in $\alpha\cap \Gamma_2$ such that $[x_1,z)_{\alpha}\cap \Gamma_2=\emptyset$.
Let $w$ be the ending point of $\alpha\cap \Gamma_2$ if $y_1\notin \Gamma_2$, i.e., $w$ is the point in $\alpha\cap \Gamma_2$ such that $(w,y_1]_{\alpha}\cap \Gamma_2=\emptyset$.
With $\alpha$ replaced by either $[x_1,z]_{\alpha}$ or $[w,y_1]_{\alpha}$, the discussion in the previous paragraph still holds, so we have $d(x_1,z)\leq 2M+4\delta, d(w, y_1)\leq 2M+4\delta$.
Since $z,w\in \Gamma_1\cap \Gamma_2$, and $\Gamma_1,\Gamma_2$ are convex, we know that $[z,w]$ is contained in both $\Gamma_1$ and $\Gamma_2$. Then by Lemma \ref{lemma:PieceIntersection}, $[z,w]$ is a single piece, and thus $d(z,w)\leq M$. Therefore, $d(x_1,y_1)=d(x_1,z)+d(z,w)+d(w,y_1)\leq 5M+8\delta$, so $d(x_0,y_0)\leq 5M+10\delta$.
Since $x_0,y_0$ were taken arbitrarily, we obtain that the diameter of the intersection $N_\delta(\Gamma_1)\cap N_\delta(\Gamma_2)$ is at most $5M+10\delta$.
\end{proof}
By Proposition \ref{proposition:contraction} and Lemma \ref{lemma:ASatisfyAlpha1}, $\mathcal{A}$ is a strongly contracting system with the \textit{bounded intersection property} considered in \cite{Y14}. The collection in Example \ref{Example:LeftCosetsInRelHyp} is such a system as well.
\begin{Lemma} \label{lemma:ASatisfyAlpha2}
$\mathcal{A}$ satisfies property $(\Lambda_2)$.
\end{Lemma}
\begin{proof}
Let $\alpha$ be a geodesic whose endpoints are at distance at most $\frac{1}{3}|\alpha|$ from an embedded component $\Gamma_0\in\mathcal{A}$, where $|\alpha|$ is the length of $\alpha$.
If the intersection $\alpha\cap \Gamma_0$ is nonempty, we are done.
Now assume that $\alpha\cap \Gamma_0=\emptyset$. Then $\mathrm{diam}(\pi_{\Gamma_0}(\alpha))\leq 2M$ by Proposition \ref{proposition:contraction}. Let $a'\in \pi_{\Gamma_0}(\alpha_-)$ and $b'\in \pi_{\Gamma_0}(\alpha_+)$; then $d(a',b')\leq \mathrm{diam}(\pi_{\Gamma_0}(\alpha))\leq 2M$, and $d(\alpha_-,a')\leq \frac{1}{3}|\alpha|,d(\alpha_+,b')\leq\frac{1}{3}|\alpha|$. Therefore, $$|\alpha|=d(\alpha_-,\alpha_+)\leq d(\alpha_-,a')+d(a',b')+d(b',\alpha_+)\leq \frac{2}{3}|\alpha|+2M,$$
so $|\alpha|\leq 6M$, and thus $d(\alpha_-,a')\leq \frac{1}{3}|\alpha|\leq 2M$. So we always have that $\alpha\cap N_{2M+1}(\Gamma_0)\neq\emptyset$.
We have proved that for any set $\Gamma_0$ in $\mathcal{A}$, a geodesic with endpoints at distance at most one third of its length from $\Gamma_0$ intersects the $(2M+1)$-tubular neighborhood of $\Gamma_0$. So $\mathcal{A}$ satisfies property $(\Lambda_2)$.
\end{proof}
By \cite[Lemma 4.5]{DS05}, Properties $(\Lambda_1)$ and $(\Lambda_2)$ together imply property $(T_1)$.
\begin{Corollary} \label{corollary:ASatisfiesT1}
In every asymptotic cone $\mathrm{Con}_\omega(X;(e_n),(l_n))$ of $X=\mathscr{G}(G(\Gamma),S)$, the collection $\mathcal{A}$ satisfies property $(T_1)$.
\end{Corollary}
\subsection{$\mathcal{A}_\omega$ satisfies $(\Omega_3)$}
In this subsection, we shall omit mentioning the requirement ``$\omega$-almost surely" in most statements. However, we should keep in mind that our discussions always take place under this assumption.
Since there are only finitely many forms of simple combinatorial triangles, we know from Theorem \ref{thm:ClassificationForTriangle} that:
\begin{Lemma} \label{lemma:OneShapeAlmostSurely}
Let $(\Delta_n)$ be a sequence of simple combinatorial triangles. Then $\Delta_n$ has one of the forms $\mathrm{I}_1$, $\mathrm{I}_2$, $\mathrm{I}_3$, $\mathrm{II}$, $\mathrm{III}$, $\mathrm{IV}$, or $\mathrm{V}$ $\omega$-almost surely.
\end{Lemma}
Let $(Y,d_Y)$ be a geodesic metric space and let $\mathcal{B}$ be a collection of subsets of $Y$.
Recall that property $(\Omega_2)$ refers to:
\begin{itemize}
\item[$(\Omega_2)$] For any asymptotic cone $\mathrm{Con}_{\omega}(Y; (e_n),(l_n))$,
any simple non-trivial bigon with edges limit geodesics is contained in a subset from $\mathcal{B}_\omega$.
\end{itemize}
\begin{Lemma}
$\mathcal{A}_{\omega}$ satisfies property $(\Omega_2)$.
\end{Lemma}
\begin{proof}
Given an asymptotic cone $\mathrm{Con}_{\omega}(X; (e_n), (l_n))$ of $X$, let $P$ be a non-trivial simple geodesic bigon in $\mathrm{Con}_{\omega}(X; (e_n), (l_n))$ whose edges $\alpha,\beta$ are limit geodesics.
Take $\theta>6M$. Then by Lemma \ref{lemma:FatQuadrangles}, $P$ is the limit of a sequence of $\omega$-almost surely $\theta$-fat simple geodesic quadrangles
\[Q_n=\alpha_n[\alpha_{n+},\beta_{n+}]\overline{\beta_n}[\beta_{n-},\alpha_{n-}]\]
such that $\alpha=\lim_\omega\alpha_n,\beta=\lim_\omega\beta_n$. Denote $\gamma_n=[\alpha_{n-},\beta_{n-}], \delta_n=[\alpha_{n+},\beta_{n+}]$.
We have that $d(\alpha_{n-},\beta_{n-})=|\gamma_n|$ and $d(\alpha_{n+},\beta_{n+})=|\delta_n|$ are of order $o(l_n)$ $\omega$-almost surely.
For each $n$, let $D_n$ be a diagram over $\langle S|R\rangle$ as in Lemma \ref{Lemma:CombPolygon} filling the quadrangle $Q_n$.
\vspace{1em}
\noindent\textbf{Case 1.} $(D_n)$ are non-degenerate, irreducible $\omega$-almost surely.
Since $D_n$ is simple, Lemma \ref{lemma:SpecialCombGeodQuadrangles} tells us that
there is a path $p_n\subseteq D_n$ either between $\alpha_n$ and $\beta_n$ or between $\gamma_n$ and $\delta_n$ which is a concatenation of at most $6$ interior arcs of $D_n$. By our choice of diagram $D_n$, the interior arcs of $D_n$ are pieces, thus the length of $p_n$ satisfies $|p_n| \leq6M$. If $p_n$ is between $\gamma_n$ and $\delta_n$ $\omega$-almost surely,
then $k_n=d(\gamma_n,\delta_n)\leq_\omega 6M$. Thus,
\[d(\alpha_{n-},\alpha_{n+})\leq_\omega d(\alpha_{n-},\beta_{n-})+k_n+d(\alpha_{n+},\beta_{n+})=o(l_n),\]
which contradicts $d(\alpha_{n-},\alpha_{n+})=O(l_n)$.
If instead $p_n$ is between $\alpha_n$ and $\beta_n$ $\omega$-almost surely, this contradicts the fact that $Q_n$ is $\theta$-fat with $\theta>6M$.
Therefore, this case cannot happen.
\vspace{1em}
\noindent\textbf{Case 2.} $(D_n)$ are non-degenerate, reducible $\omega$-almost surely.
Since $D_n$ is simple, $D_n$ is not vertex reducible. Now assume that there is a face reduction: suppose that $\Pi_n$ is a face with arcs $e_n,e_n'\subseteq \partial\Pi_n$ such that $D_n$ admits a face reduction at $(\Pi_n,e_n,e_n')$. Since $D_n$ is non-degenerate, there are four distinguished faces.
\noindent\textbf{Subcase 2.1.}
Suppose $e_n,e_n'$ are edges of two opposite sides $\omega$-almost surely. First, assume $e_n\subset\alpha_n$ and $e_n'\subset\beta_n$.
Choose $x_n\in e_n$ and $x_n'\in e_n'$, and denote $x=\lim_\omega x_n$, $x'=\lim_\omega x_n'$.
Face reduction at $(\Pi_n,e_n,e_n')$ yields two combinatorial triangles $\Delta_n,\Delta_n'$ with $\gamma_n\subset\Delta_n$ and $\delta_n\subset\Delta_n'$.
By Lemma \ref{lemma:OneShapeAlmostSurely}, each of $\Delta_n$ and $\Delta_n'$ has a single form $\omega$-almost surely.
If $\Delta_n$ is of form $\mathrm{II},\mathrm{III},\mathrm{IV},$ or $\mathrm{V}$, then there is a path from $\alpha_n$ to $\beta_n$ which is a concatenation of at most $4$ interior edges of $D_n$, contradicting the choice that $Q_n$ is $\theta$-fat.
For the same reason, $\Delta_n'$ cannot be of form $\mathrm{II},\mathrm{III},\mathrm{IV},$ or $\mathrm{V}$ $\omega$-almost surely.
If $\Delta_n$ is of form $\mathrm{I}_1,\mathrm{I}_2$ or $\mathrm{I}_3$, then $[\alpha_-,x]_\alpha\cup[\beta_-,x']_\beta$ is contained in a set $A\in\mathcal{A}_\omega$.
Similarly, we obtain that $[x,\alpha_+]_\alpha\cup[x',\beta_+]_\beta$ is contained in some set $A'\in\mathcal{A}_\omega$.
If $x=x'$, then either $x=\alpha_-$ or $x=\alpha_+$ since $P$ is simple, and thus $P=\alpha\cup\beta$ is contained in one subset from $\mathcal{A}_\omega$. Otherwise $x$ and $x'$ are two different points of $A\cap A'$; then by Corollary \ref{corollary:ASatisfiesT1}, $\mathcal{A}_\omega$ satisfies property $(T_1)$, which implies $A=A'$.
Next, assume $e_n\subset\gamma_n$ and $e_n'\subset\delta_n$. Face reduction at $(\Pi_n,e_n,e_n')$ yields two combinatorial triangles $\Delta_n'',\Delta_n'''$ with $\alpha_n\subset\Delta_n''$ and $\beta_n\subset\Delta_n'''$.
If $\Delta_n''$ is of form $\mathrm{II},\mathrm{III},\mathrm{IV},$ or $\mathrm{V}$, then there is a path from $\gamma_n$ to $\delta_n$ which is a concatenation of at most $4$ interior edges of $D_n$, and the same argument as in Case 1 leads to a contradiction. Similarly, $\Delta_n'''$ cannot be of form $\mathrm{II},\mathrm{III},\mathrm{IV},$ or $\mathrm{V}$ $\omega$-almost surely either.
If $\Delta_n''$ is of form $\mathrm{I}_1,\mathrm{I}_2$ or $\mathrm{I}_3$, then $\alpha$ is contained in a set $A''\in\mathcal{A}_\omega$.
Similarly, we obtain that $\beta$ is contained in some set $A'''\in\mathcal{A}_\omega$.
Since $\alpha_-=\beta_-$ and $\alpha_+=\beta_+$ are two different points of $A''\cap A'''$, we have $A''=A'''$ by Corollary \ref{corollary:ASatisfiesT1} again.
\noindent\textbf{Subcase 2.2.}
Suppose $e_n,e_n'$ are edges of two adjacent sides. Without loss of generality, we assume $e_n\subseteq \alpha_n$ and $e_n'\subseteq \gamma_n$. Face reduction of $D_n$ at $(\Pi_n,e_n,e_n')$ yields a bigon and a quadrangle. In particular, there exists a single piece connecting $\alpha_n$ and $\gamma_n$.
However, for any such piece $p_n\subseteq D_n$ with $p_{n-}\in \alpha_n$ and $p_{n+}\in\gamma_n$, the $\omega$-limit of the triangle
\[[\alpha_{n-},p_{n-}]_{\alpha_n}\cup p_n\cup[p_{n+},\alpha_{n-}]_{\overline{\gamma_n}}\]
is a single point $\alpha_-$.
Let $a_n$ be the innermost piece between $\alpha_n$ and $\gamma_n$ (corresponding to $\alpha_{n-}$),
i.e., there is no single piece between $[a_{n-},\alpha_{n+}]_{\alpha_n}$ and $[a_{n+},\gamma_{n+}]_{\gamma_n}$. Symmetrically, let $a_n',b_n,b_n'$ be the innermost pieces corresponding to $\alpha_{n+},\beta_{n-},\beta_{n+}$, respectively, if they exist. Let $D_n''$ be the subdiagram of $D_n$ with sides
\[[a_{n-},a_{n-}']_{\alpha_n},a_n'[a_{n+}',b_{n-}']_{\delta_n}b_n',[b_{n+}',b_{n+}]_{\overline{\beta_n}},
\overline{b_n}[b_{n-},a_{n+}]_{\overline{\gamma_n}}\overline{a_n}.\]
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{subdiagram.pdf}
\caption{Subdiagram}\label{figure:subdiagram}
\vspace{-0.5cm}
\end{figure}
The new diagram $D_n''$ is still a combinatorial quadrangle, and $D_n''$ admits no face reduction of the type considered in the present subcase.
We can thus replace $D_n$ by $D_n''$ and proceed with the discussion either in Subcase 2.1 or in Case 1. Therefore the result also holds in this subcase.
\vspace{1em}
\noindent\textbf{Case 3.} $(D_n)$ are degenerate $\omega$-almost surely. Then $(D_n)$ are simple combinatorial triangles $\omega$-almost surely.
If $D_n$ is of form $\mathrm{II},\mathrm{III},\mathrm{IV},$ or $\mathrm{V}$, then there is a path from $\gamma_n$ to $\delta_n$ which is a concatenation of at most $4$ interior edges of $D_n$. The same argument as in Case 1 then tells us this cannot happen.
If $D_n$ is of form $\mathrm{I}_1,\mathrm{I}_2$ or $\mathrm{I}_3$, then the same argument as in Case 2 implies that $P=\alpha\cup\beta$ is contained in one set from $\mathcal{A}_\omega$.
\end{proof}
\begin{Lemma}\label{lemma:EndpointPiece}
Let $Y$ be a geodesic metric space and let $\mathcal{B}$ be a collection of convex subsets of $Y$ which satisfies property $(\Omega_2)$. Then for any asymptotic cone $\mathrm{Con}_\omega(Y; (e_n), (l_n))$ of $Y$, any limit geodesic $\alpha$ with endpoints contained in a set $B\in\mathcal{B}_\omega$ is contained in $B$.
\end{Lemma}
\begin{proof}
Set $B=\lim_\omega B_n$. Then there exist $a_n,b_n\in B_n$ such that $\alpha_-=\lim_{\omega}a_n,\alpha_+=\lim_{\omega}b_n$. Since $B_n$ is convex, the geodesic $[a_n,b_n]\subseteq B_n$, and thus the limit geodesic $\beta=\lim_{\omega}[a_n,b_n]\subseteq B$. Moreover, the edges of the simple bigon $P$ formed by $\alpha$ and $\beta$ are limit geodesics; therefore, such a bigon $P$ is contained in a subset from $\mathcal{B}_\omega$ since $\mathcal{B}$ satisfies property $(\Omega_2)$.
Then by Lemma \ref{lemma:OneImplyTheOther}, we have $\alpha\subseteq B$.
\end{proof}
\begin{Lemma}\label{lemma:SimpleTriangle}
Assume that the simple triangle $\Delta$ is contained in the $\omega$-limit of a sequence of simple triangles $T_n=\alpha_n\beta_n\gamma_n$, i.e., $\Delta\subseteq\lim_\omega T_n$. Assume further that the intersection of the three edges $\lim_\omega\alpha_n$, $\lim_\omega\beta_n$, $\lim_\omega\gamma_n$ of $\lim_\omega T_n$ is empty. Then $\Delta$ is contained in a set $A\in \mathcal{A}_\omega$.
\end{Lemma}
\begin{proof}
Let $(D_n)$ be diagrams over $\langle S|R\rangle$ as in Lemma \ref{Lemma:CombPolygon} filling the triangles $(T_n)$ $\omega$-almost surely.
By Lemma \ref{lemma:OneShapeAlmostSurely}, $(D_n)$ have one form $\omega$-almost surely.
It is obvious that the $\omega$-limit of a sequence of single pieces is either a single point or empty. If $(D_n)$ are of form $\mathrm{III},\mathrm{IV}$, or $\mathrm{V}$ $\omega$-almost surely, then the three edges of $\lim_\omega T_n$ always have a common point, contradicting the hypothesis.
Therefore, $(D_n)$ are of form $\mathrm{I}_1,\mathrm{I}_2,\mathrm{I}_3$ or $\mathrm{II}$ $\omega$-almost surely. Since $\Delta$ is simple, it must be contained in the $\omega$-limit of a sequence of boundaries of single faces $(\Pi_n)$ of $(D_n)$. Each $\partial\Pi_n$ is contained in an embedded component $\Gamma_n$, thus $\Delta$ is contained in the set $\lim_\omega\Gamma_n\in \mathcal{A}_\omega$.
\end{proof}
\begin{Lemma} \label{lemma:ASatisfyPi3}
$\mathcal{A}$ satisfies property $(\Omega_3)$: for any asymptotic cone $\mathrm{Con}_{\omega}(Y; (e_n),(l_n))$,
any simple non-trivial triangle whose edges are limit geodesics is contained in a subset from $\mathcal{A}_\omega$.
\end{Lemma}
\begin{proof}
Given an asymptotic cone $\mathrm{Con}_{\omega}(X; (e_n), (l_n))$ of $X$, let $\Delta$ be a non-trivial simple geodesic triangle in $\mathrm{Con}_{\omega}(X; (e_n), (l_n))$ whose edges $\alpha_1,\beta_1,\gamma_1$ are limit geodesics. Therefore, $\alpha_1,\beta_1,\gamma_1$ can be presented as
\[\gamma_1=\lim_\omega[a_n,b_n'],\alpha_1=\lim_\omega[b_n,c_n'],\beta_1=\lim_\omega[c_n,a_n'],\]
and $d(a_n,a_n'),d(b_n,b_n')$ and $d(c_n,c_n')$ are of order $o(l_n)$ $\omega$-almost surely.
We consider the $\omega$-limit of triangles $T_n=[a_n,b_n][b_n,c_n][c_n,a_n]$. Set $a=\lim_\omega a_n$, $b=\lim_\omega b_n$, $c=\lim_\omega c_n$.
And set
\[\gamma_2=\lim_\omega[a_n,b_n], \beta_2=\lim_\omega[c_n,a_n], \alpha_2=\lim_\omega[b_n,c_n].\]
Let $x$ be the furthest point in the intersection of $\gamma_2$ and $\overline{\beta_2}$ along $\gamma_2$, i.e., $x\in \gamma_2\cap\overline{\beta_2}$ and $(x,b]_{\gamma_2}\cap\overline{\beta_2}=\emptyset$.
Similarly, let $y$ (resp. $z$) be the furthest point in the intersection of $\alpha_2$ and $\overline{\gamma_2}$ (resp. $\beta_2$ and $\overline{\alpha_2}$) along $\alpha_2$ (resp. $\beta_2$).
If $x\neq a$, then $[a,x]_{\gamma_2}\cup[a,x]_{\overline{\beta_2}}$ is contained in a subset $A_1\in\mathcal{A}_\omega$ by Lemma \ref{lemma:FourGeodesics}. Symmetrically, if $y\neq b$ (resp. $z\neq c$), then $[b,y]_{\alpha_2}\cup[b,y]_{\overline{\gamma_2}}$ (resp. $[c,z]_{\beta_2}\cup[c,z]_{\overline{\alpha_2}}$) is contained in a subset $A_2\in\mathcal{A}_\omega$ (resp. $A_3\in\mathcal{A}_\omega$).
Consider the following $6$ numbers:
\begin{align*}
& d_\omega([a,x]_{\gamma_2},\alpha_2), d_\omega([a,x]_{\overline{\beta_2}},\alpha_2), d_\omega([b,y]_{\alpha_2},\beta_2), \\
& d_\omega([b,y]_{\overline{\gamma_2}},\beta_2), d_\omega([c,z]_{\beta_2},\gamma_2), d_\omega([c,z]_{\overline{\alpha_2}},\gamma_2).
\end{align*}
\vspace{1em}
\noindent\textbf{Case 1.}\,\,
None of the $6$ numbers equals $0$.
Then the positions of $x,y,z$ must satisfy $y\in (x,b]_{\gamma_2}$, $z\in (x,c]_{\overline{\beta_2}}$ and $z\in (y,c]_{\alpha_2}$ (see the left part of Figure \ref{figure:case1}). Moreover, the triangle $\Delta'=[x,y]_{\gamma_2}[y,z]_{\alpha_2}[z,x]_{\beta_2}$ is simple and the intersection $[x,y]_{\gamma_2}\cap[y,z]_{\alpha_2}\cap[z,x]_{\beta_2}$ is empty.
We can choose $[x_n,y_n']\subseteq [a_n,b_n]$, $[y_n,z_n']\subseteq[b_n,c_n]$, $[z_n,x_n']\subseteq[c_n,a_n]$ $\omega$-almost surely such that $[x_n,y_n'], [y_n,z_n'], [z_n,x_n']$ are pairwise disjoint and
\[\lim_\omega[x_n,y_n']=[x,y]_{\gamma_2}, \lim_\omega[y_n,z_n']=[y,z]_{\alpha_2}, \lim_\omega[z_n,x_n']=[z,x]_{\beta_2}.\]
Let $p_n$ (resp. $q_n,r_n$) be the furthest point in the intersection of $[a_n,b_n]$ and $\overline{[c_n,a_n]}$ (resp. $[b_n,c_n]$ and $\overline{[a_n,b_n]}$, $[c_n,a_n]$ and $\overline{[b_n,c_n]}$) along $[a_n,b_n]$ (resp. $[b_n,c_n]$, $[c_n,a_n]$).
Since $d_\omega([a,x]_{\gamma_2},\alpha_2)>0$, we have that $[p_n,x_n]\subseteq[a_n,b_n]$ is disjoint from $[b_n,c_n]$. For the same reason, we know that the triangle $[p_n,q_n][q_n,r_n][r_n,p_n]\subseteq T_n$ is simple (see the right part of Figure \ref{figure:case1}).
Therefore, we have proved that $\Delta'$ is contained in the $\omega$-limit of the sequence of simple triangles $[p_n,q_n][q_n,r_n][r_n,p_n]$. Then by Lemma \ref{lemma:SimpleTriangle}, $\Delta'$ is contained in a subset $A_0\in\mathcal{A}_\omega$.
If $x\neq a$, we have $x\notin\gamma_1$ or $x\notin\beta_1$ since $\Delta=\alpha_1\beta_1\gamma_1$ is simple.
Without loss of generality, we may assume that $x\notin\gamma_1$. By \cite[Lemma 3.7]{D09}, $x$ is in the interior of some simple bigon formed by $\gamma_1$ and $\gamma_2$; property $(\Omega_2)$ then tells us that this bigon is contained in a subset $A_1'\in\mathcal{A}_\omega$. Since $A_1'\cap A_0$ and $A_1'\cap A_1$ contain non-trivial sub-arcs of $\gamma_2$, property $(T_1)$ (Corollary \ref{corollary:ASatisfiesT1}) implies that $A_0=A_1'=A_1$.
Symmetrically, if $y\neq b$ (resp. $z\neq c$), we have that $A_0=A_2$ (resp. $A_0=A_3$). Therefore, we have that $A_0=A_1=A_2=A_3$. So $\Delta'$ is contained in $A_0$, and thus $\Delta$ is contained in $A_0$ as well by Lemma \ref{lemma:OneImplyTheOther}.
If $x=a$, $y=b$ or $z=c$, we can just replace the corresponding $A_i$ by $A_0$. The result still holds.
\begin{figure}[H]
\captionsetup[subfigure]{labelformat=empty}
\centering\hfill%
\begin{subfigure}[h]{.45\textwidth}\centering
\includegraphics[width=0.9\textwidth]{position.pdf}
\end{subfigure}\hfill%
\begin{subfigure}[h]{.55\textwidth}\centering
\includegraphics[width=0.95\textwidth]{simple.pdf}
\end{subfigure}\hfill
\caption{Case 1.}
\label{figure:case1}
\end{figure}
\vspace{1em}
\noindent\textbf{Case 2.}\,\,One of the $6$ numbers equals $0$.
Without loss of generality, we set $d_\omega([a,x]_{\gamma_2},\alpha_2)=0$.
Now let $y''$ be the furthest point in the intersection of $\overline{\gamma_2}$ and $\alpha_2$ along $\overline{\gamma_2}$, then $y''\in[a,x]_{\gamma_2}$. If $y''=b$, then $\gamma_2\subseteq A_1$, otherwise $[b,y'']_{\overline{\gamma_2}}\cup[b,y'']_{\alpha_2}$ is contained in a subset $A_2'\in\mathcal{A}_\omega$ by Lemma \ref{lemma:FourGeodesics}.
Therefore $\gamma_2\subseteq A_1\cup A_2'$ and $x\in A_1\cap A_2'$. Since $\Delta$ is simple, we have that $x\notin\gamma_1$ or $x\notin\beta_1$. The same argument as in the previous case then tells us that $A_1=A_2'$. Thus we always have that $\gamma_2\subseteq A_1$. By Lemma \ref{lemma:EndpointPiece}, we know that $\gamma_1\subseteq A_1$ as well.
If $c\in A_1$, then by Lemma \ref{lemma:EndpointPiece} again, we know $\alpha_1,\beta_1\subseteq A_1$. Thus $\Delta$ is contained in $A_1$.
Next we assume that $c\notin A_1$. Let $[c,\widetilde{a})_{\beta_1}$ be the maximal segment of $\beta_1$ lying outside of $A_1$, i.e., $\widetilde{a}\in A_1$ and $[c,\widetilde{a})_{\beta_1}\cap A_1=\emptyset$ (since $A_1$ is closed, such an $\widetilde{a}$ exists). Similarly, let $[c,\widetilde{b})_{\overline{\alpha_1}}$ be the maximal segment of $\overline{\alpha_1}$ lying outside of $A_1$.
By our definition of $\mathcal{A}_\omega$, there exists a sequence of embedded components $(\Gamma_n)$ such that $A_1=\lim_{\omega}\Gamma_n$.
We can choose $\widetilde{a}_n\in[c_n,a_n']$ and $\widetilde{b}_n\in[b_n,c_n']$ $\omega$-almost surely, such that $\lim_\omega \widetilde{a}_n=\widetilde{a}$, $\lim_\omega\widetilde{b}_n=\widetilde{b}$, $[c_n,\widetilde{a}_n]\subseteq [c_n,a_n']$ and $[\widetilde{b}_n,c_n']\subseteq [b_n,c_n']$ are disjoint from $N_M(\Gamma_n)$.
Since $c\notin A_1$ and $\lim_\omega c_n=c=\lim_\omega c_n'$, we have that $[c_n',c_n]$ is disjoint from $N_M(\Gamma_n)$.
Since $\lim_\omega \widetilde{a}_n=\widetilde{a}$ and $\widetilde{a}\in A_1$, we obtain that $d(\widetilde{a}_n,\Gamma_n)=o(l_n)$.
We also have that $d(\widetilde{b}_n,\Gamma_n)=o(l_n)$ via the same reason.
By Proposition \ref{proposition:contraction}, $\Gamma_n$ is $(2M,0)$-contracting, so
\[\mathrm{diam}(\pi_{\Gamma_n}([\widetilde{b}_n,c_n']))\leq M, \mathrm{diam}(\pi_{\Gamma_n}([c_n',c_n]))\leq M, \mathrm{diam}(\pi_{\Gamma_n}([c_n,\widetilde{a}_n]))\leq M.\]
Thus, we have that
\begin{align*}
d(\widetilde{b}_n,\widetilde{a}_n)\leq& d(\widetilde{b}_n,\Gamma_n)+ \mathrm{diam}(\pi_{\Gamma_n}([\widetilde{b}_n,c_n']))+\mathrm{diam}(\pi_{\Gamma_n}([c_n',c_n])) \\
& + \mathrm{diam}(\pi_{\Gamma_n}([c_n,\widetilde{a}_n]))+d(\widetilde{a}_n,\Gamma_n) \\
\leq& o(l_n)+6M+o(l_n)=o(l_n).
\end{align*}
But $\widetilde{a}\neq\widetilde{b}$ implies that $d(\widetilde{b}_n,\widetilde{a}_n)=\Omega(l_n)$. This is a contradiction. So our assumption $c\notin A_1$ is not true.
Hence, $\Delta$ is always contained in a subset from $\mathcal{A}_\omega$. We have completed the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:RelativeHyperbolicity}]
Assume that all pieces of $\Gamma$ have uniformly bounded length. According to Lemma \ref{lemma:tree-graded}, the properties ($\Lambda_1$), ($\Lambda_2$) and ($\Omega_3$) are verified by Lemmas \ref{lemma:ASatisfyAlpha1}, \ref{lemma:ASatisfyAlpha2} and \ref{lemma:ASatisfyPi3} respectively, so $\mathscr{G}(G(\Gamma),S)$ is asymptotically tree-graded with respect to the collection $\mathcal{A}$ of all embedded components of $\Gamma$. For the converse direction, if $\mathscr{G}(G(\Gamma),S)$ is asymptotically tree-graded with respect to $\mathcal{A}$, then the ``only if'' part of Lemma \ref{lemma:tree-graded} tells us that $\mathcal{A}$ satisfies property ($\Lambda_1$); in particular, the intersection of two different components is uniformly bounded. Lemma \ref{lemma:PieceIntersection} thus implies that the pieces of $\Gamma$ are uniformly bounded.
\end{proof}
\begin{Examples}
\vspace{.5em}
\noindent\textbf{1.}
Let $G_1, G_2$ be two groups generated by $S_1, S_2$ respectively. Let $\Gamma_i$ be the Cayley graph of $(G_i,S_i)$ for $i=1,2$ and let $\Gamma=\Gamma_1\sqcup \Gamma_2$. Then $\Gamma$ has no pieces of positive length. Obviously, $G(\Gamma)=G_1\ast G_2$ is hyperbolic relative to $G_1,G_2$.
\vspace{.5em}
\noindent\textbf{2.}
Let $S=\{a,b,x,y,z,w\}$. And let
\[G_1=\langle x,y|[x,y]\rangle, G_2=\langle z,w|[z,w]\rangle, X_1=\mathscr{G}(G_1,\{x,y\}), X_2=\mathscr{G}(G_2,\{z,w\}).\]
Let $\Gamma_1$ be the graph obtained from the Cayley graph $X_1$ as follows: for each $g\in G_1$, attach a simple path $p$ labelled by $a^2b^3$ to the pair of vertices $(g,gx^{15}y^{15})$ in $X_1$.
Similarly, let $\Gamma_2$ be the graph constructed from $X_2$ so that each pair of vertices $(g, gz^{15}w^{15})$ is connected by a simple path labelled by $a^2b^2$.
Denote $\Gamma=\Gamma_1\sqcup \Gamma_2$. Then the pieces of $\Gamma$ are those subpaths of the simple path labelled either by $a^2b^2a^2$
or by its inverse $a^{-2}b^{-2}a^{-2}$. Thus, $\Gamma$ is $Gr'(\frac{1}{6})$-labelled. Then for the basepoints $1_{G_1}\in \Gamma_1$, $1_{G_2}\in \Gamma_2$ and $1_{G}\in \mathscr{G}(G(\Gamma),S)$, the image of the label-preserving map $(\Gamma_i,1_{G_i})\rightarrow (\mathscr{G}(G(\Gamma),S),1_G)$ gives embedded components $\Gamma_i'$ in $\mathscr{G}(G(\Gamma),S)$ by Lemma \ref{lemma:EmbedConvex}. Furthermore the stabilizer of $\Gamma_1'$ is the subgroup of $G(\Gamma)$ generated by $\{x,y,a^2b^3\}$, which is $G_1$ since $a^2b^3=x^{15}y^{15}$. Similarly, the stabilizer of $\Gamma_2'$ is $G_2$.
Thus by Theorem \ref{thm:RelativeHyperbolicity}, $G(\Gamma)$ is hyperbolic relative to $G_1,G_2$.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{graphofgroup.pdf}
\caption{The graph of groups}\label{figure:graphofgroup}
\vspace{-0.5cm}
\end{figure}
In fact, $G(\Gamma)$ can be presented as the fundamental group of a graph of groups with vertex groups $\{G_1,G_2,F(a,b)\}$ and edge groups
$\{\langle a^2b^3=x^{15}y^{15}\rangle\cong\mathbb{Z},\langle a^2b^2=z^{15}w^{15}\rangle\cong\mathbb{Z}\}$, where $F(a,b)$ is the free group on $\{a,b\}$ (see Figure \ref{figure:graphofgroup}).
\end{Examples}
\section{Introduction}
Generative adversarial networks (GANs)~\cite{goodfellow2014generative}, a class of deep generative models, are designed to approximate the probability distribution of massive amounts of data. They have been demonstrated to be powerful for various tasks including image generation~\cite{radford2016unsupervised}, video generation~\cite{vondrick2016generating}, and natural language processing~\cite{lin2017adversarial}, due to their ability to handle complex density functions. GANs are notoriously difficult to train since they are non-convex non-concave min-max problems. Gradient descent-type algorithms, such as Adagrad~\cite{duchi2011adaptive}, Adadelta~\cite{zeiler2012adadelta}, RMSprop~\cite{tieleman2012lecture} and Adam~\cite{kingma2014adam}, are usually used for training generative adversarial networks. However, it has recently been demonstrated that gradient descent-type algorithms are mainly designed for minimization problems and may fail to converge when dealing with min-max problems~\cite{mertikopoulos2019optimistic}.
In addition, the learning rates of gradient-type algorithms are hard to tune when they are applied to train GANs, due to the absence of universal evaluation criteria such as a monotonically decreasing loss. Another critical problem in training GANs is that they usually contain a large number of parameters and thus may take days or even weeks to train.
Over the past years, much effort has been devoted to solving the above problems. Some recent works show that the Optimistic Mirror Descent (OMD) algorithm has superior performance on min-max problems~\cite{mertikopoulos2019optimistic, daskalakis2018training, gidel2019a}: it introduces an extra-gradient step to overcome the divergence issue of gradient descent-type algorithms on min-max problems. Recently, Daskalakis et al.~\cite{daskalakis2018training} used OMD for training GANs on a single machine. In addition, a few works have developed distributed extragradient descent-type algorithms to accelerate the training process of GANs~\cite{jin2016scale, lian2017can, shen2018towards, tang2018d}. Recently, Liu et al.~\cite{liu2019decentralized} proposed distributed extragradient methods to train GANs with a decentralized network topology, but their method suffers from a large parameter-exchange overhead since they did not compress the transmitted gradients.
In this paper, we propose a novel distributed training method for GANs, named {Distributed Quantized Generative Adversarial Networks (DQGAN)}. The new method accelerates the training of GANs in a distributed environment, and a quantization technique is used to reduce the communication cost. To the best of our knowledge, this is the first distributed training method for GANs with quantization. The main contributions of this paper are summarized as three-fold:
\begin{itemize}
\item We propose a {distributed optimization algorithm for training GANs}. Our main novelty is that the communicated gradients are quantized, so that our algorithm has a smaller communication overhead and thus further improves the training speed. Through the error-compensation operation we design, we resolve the convergence problem caused by quantized gradients.
\item We prove that the new method converges to a first-order stationary point with a non-asymptotic convergence rate under some common assumptions. Our analysis indicates that the proposed method achieves a linear speedup in the parameter-server model.
\item We conduct experiments to demonstrate the effectiveness and efficiency of the proposed method on both synthetic and real datasets. The experimental results verify the convergence of the new method and show that it improves the speedup of distributed training while achieving performance comparable with state-of-the-art methods.
\end{itemize}
The rest of the paper is organized as follows. Related works and preliminaries are summarized in Section~\ref{sec:notation_pre}. The detailed {DQGAN} and its convergence rate are described in Section~\ref{sec:method}. The experimental results on both synthetic and real datasets are presented in Section~\ref{sec:experiments}. Conclusions are given in Section~\ref{sec:conclusions}.
\section{Notations and Related Work}
\label{sec:notation_pre}
In this section, we summarize the notations and definitions used
in this paper, and give a brief review of related work.
\subsection{Generative Adversarial Networks}
\label{GANS}
Generative adversarial networks (GANs) consist of two components: a discriminator ($D$) that distinguishes between real data and generated data, and a generator ($G$) that generates data to fool the discriminator. We define $p_{r}$ as the real data distribution and $p_{n}$ as the noise distribution of the generator $G$. The objective of a GAN is to make the generated data distribution of the generator approximate the real data distribution $p_{r}$, which is formulated as a joint loss function for $D$ and $G$~\cite{goodfellow2014generative}
\begin{equation}
\label{minmax_gan}
\min_{\theta\in\Theta} \max_{\phi\in\Phi} \mathcal{L} \left( \theta, \phi \right),
\end{equation}
where $\mathcal{L} \left( \theta, \phi \right)$ is defined as follows
\begin{equation}
\label{loss_gan}
\begin{aligned}
\mathcal{L} \left( \theta, \phi \right) \overset{\text{def}}{=}
\mathbb{E}_{\mathbf{x} \sim p_r} \left[\log D_{\phi}\left(\mathbf{x}\right) \right]
+ \mathbb{E}_{\mathbf{z} \sim p_n} \left[ \log \left( 1-D_{\phi} ( G_{\theta}(\mathbf{z}) ) \right) \right] .
\end{aligned}
\end{equation}
$D_{\phi}(\mathbf{x})$ denotes the discriminator, which outputs the probability of its input being a real sample. $\theta$ denotes the parameters of the generator $G$ and $\phi$ the parameters of the discriminator $D$.
However, (\ref{minmax_gan}) may suffer from a saturation problem at the early learning stage, which leads to vanishing gradients for the generator and an inability to train in a stable manner~\cite{goodfellow2014generative,arjovsky2017wasserstein}. We therefore use the loss of WGAN~\cite{goodfellow2014generative,arjovsky2017wasserstein}
\begin{equation}
\label{loss_gan1}
\begin{aligned}
\mathcal{L} \left( \theta, \phi \right) \overset{\text{def}}{=}
& \mathbb{E}_{\mathbf{x} \sim p_r} \left[D_{\phi}\left(\mathbf{x}\right)\right] - \mathbb{E}_{\mathbf{z} \sim p_n} \left[ D_{\phi} ( G_{\theta}(\mathbf{z}) ) \right].
\end{aligned}
\end{equation}
Then training GAN turns into finding the following Nash equilibrium
\begin{equation}
\label{G3}
\theta^{*} \in \mathop{\arg\min}_{\theta\in\Theta} \mathcal{L}_G \left( \theta, \phi^{*} \right),
\end{equation}
\begin{equation}
\label{D3}
\phi^{*} \in \mathop{\arg\min}_{\phi\in\Phi} \mathcal{L}_D \left( \theta^{*}, \phi \right),
\end{equation}
where
\begin{equation}
\label{nonzerosumG}
\mathcal{L}_G \left( \theta, \phi \right) \overset{\text{def}}{=}
-\mathbb{E}_{\mathbf{z} \sim p_{n}} \left[ D_{\phi} (G_{\theta}(\mathbf{z}) ) \right],
\end{equation}
\begin{equation}
\label{nonzerosumD}
\begin{aligned}
\mathcal{L}_D \left( \theta, \phi \right) \overset{\text{def}}{=}
- \mathbb{E}_{\mathbf{x} \sim p_{r}} \left[ D_{\phi}\left(\mathbf{x}\right) \right] + \mathbb{E}_{\mathbf{z} \sim p_{n}} \left[ D_{\phi} (G_{\theta}(\mathbf{z})) \right].
\end{aligned}
\end{equation}
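As a concrete illustration, both losses above reduce to mini-batch averages of discriminator scores. The following numpy sketch (the toy score vectors are made up for the example) estimates $\mathcal{L}_G$ and $\mathcal{L}_D$ from discriminator outputs on a real batch and a generated batch:

```python
import numpy as np

def wgan_losses(d_real, d_fake):
    """WGAN-style losses: L_G = -E[D(G(z))], L_D = -E[D(x)] + E[D(G(z))],
    estimated by mini-batch averages of discriminator scores."""
    loss_g = -np.mean(d_fake)
    loss_d = -np.mean(d_real) + np.mean(d_fake)
    return loss_g, loss_d

# toy scores: the discriminator rates real samples higher than fakes
d_real = np.array([0.9, 0.8, 1.0])   # D(x) on a real batch
d_fake = np.array([0.1, 0.2, 0.0])   # D(G(z)) on a generated batch
loss_g, loss_d = wgan_losses(d_real, d_fake)
```

Note that minimizing $\mathcal{L}_D$ pushes the scores on real samples up and the scores on fakes down, while minimizing $\mathcal{L}_G$ pushes the scores on fakes up.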
Gidel et al.~\cite{gidel2019a} define a \emph{stationary point} as a couple $(\theta^{*},\phi^{*})$ such that the directional derivatives of both
$\mathcal{L}_G \left( \theta, \phi^{*} \right)$ and $\mathcal{L}_D \left( \theta^{*}, \phi \right)$ are non-negative, i.e.,
\begin{equation}
\label{G5}
\nabla_{\theta} \mathcal{L}_G \left( \theta^{*}, \phi^{*} \right)^{T} \left( \theta-\theta^{*} \right) \geq 0,~\forall(\theta,\phi)\in\Theta\times\Phi;
\end{equation}
\begin{equation}
\label{D5}
\nabla_{\phi} \mathcal{L}_D \left( \theta^{*}, \phi^{*} \right)^{T} \left( \phi-\phi^{*} \right) \geq 0,~\forall(\theta,\phi)\in\Theta\times\Phi,
\end{equation}
which can be compactly formulated as
\begin{equation}
\label{VI}
\begin{aligned}
F \left( \mathbf{w}^{*} \right)^{T} & \left( \mathbf{w} - \mathbf{w}^{*} \right) \geq 0, ~ \forall \mathbf{w} \in \Omega,
\end{aligned}
\end{equation}
where $\mathbf{w} \overset{\text{def}}{=} \left[\theta,~\phi\right]^{T}$, $\mathbf{w}^{*} \overset{\text{def}}{=} \left[\theta^{*},~\phi^{*}\right]^{T}$, $\Omega\overset{\text{def}}{=}\Theta\times\Phi$ and $ F \left( \mathbf{w} \right) \overset{\text{def}}{=} \left[ \nabla_{\theta} \mathcal{L}_G \left( \theta, \phi \right),~\nabla_{\phi} \mathcal{L}_D \left( \theta, \phi \right) \right]^{T}$.
Much work has been done on GANs. For example, some works focus on loss design, such as WGAN~\cite{arjovsky2017wasserstein}, SN-GAN~\cite{miyato2018spectral} and LS-GAN~\cite{qi2019loss}, while others focus on network architecture design, such as CGAN~\cite{mirza2014conditional}, DCGAN~\cite{radford2016unsupervised} and SAGAN~\cite{zhang2018self}. However, only a few works address the distributed training of GANs~\cite{liu2019decentralized}, which is our focus.
\subsection{Optimistic Mirror Descent}
\label{OMD}
The update scheme of the basic gradient method is given by
\begin{equation}
\label{gradient}
\mathbf{w}_{t+1} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_t) \right],
\end{equation}
where $F(\mathbf{w}_t)$ is the gradient at $\mathbf{w}_t$, $P_{w} \left[\cdot\right]$ is the projection onto the constraint set $w$ (if constraints are present) and $\eta_t$ is the step-size. The gradient descent algorithm is mainly designed for minimization problems, and it was proved to converge linearly under the strong monotonicity assumption on the operator $F(\mathbf{w}_t)$~\cite{chen1997Convergence}. However, the basic gradient descent algorithm may produce a sequence that drifts away and cycles without converging when dealing with certain min-max problems~\cite{arjovsky2017wasserstein}, such as the bilinear objective in~\cite{mertikopoulos2019optimistic}.
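This divergence is easy to reproduce on the toy bilinear problem $\min_x \max_y \, xy$ (our own illustration, not from the cited works): one simultaneous gradient step multiplies the distance to the equilibrium $(0,0)$ by $\sqrt{1+\eta_t^2} > 1$, so the iterates spiral outward.

```python
import math

def sim_gd_step(x, y, eta):
    # simultaneous gradient descent on x and ascent on y for f(x, y) = x*y
    return x - eta * y, y + eta * x

x, y = 1.0, 1.0
r0 = math.hypot(x, y)          # initial distance to the equilibrium (0, 0)
for _ in range(100):
    x, y = sim_gd_step(x, y, 0.1)
r100 = math.hypot(x, y)        # each step scales the radius by sqrt(1 + 0.1**2)
```

After 100 steps the radius has grown by the factor $(1+\eta^2)^{50}\approx 1.64$, and it keeps growing without bound.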
To solve the above problem, a practical approach is to compute the average of multiple iterates, which converges at a rate of $O(\frac{1}{\sqrt{t}})$~\cite{nedic2009subgradient}. Recently, the \emph{extragradient} method~\cite{nesterov2007dual} has been used for min-max problems due to its superior convergence rate of $O(\frac{1}{t})$. The idea of the extragradient method can be traced back to Korpelevich~\cite{korpelevich1976extragradient} and Nemirovski~\cite{nemirovski2004prox}. Its basic idea is to compute a lookahead gradient to guide the following step. The iterates of the extragradient method are given by
\begin{equation}
\label{extragradient_11}
\mathbf{w}_{t+\frac{1}{2}} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_t) \right],
\end{equation}
\begin{equation}
\label{extragradient_12}
\mathbf{w}_{t+1} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_{t+\frac{1}{2}}) \right].
\end{equation}
However, the above iterates require computing the gradient at both $\mathbf{w}_t$ and $\mathbf{w}_{t+\frac{1}{2}}$ in each iteration. Chiang et al.~\cite{chiang2012online} suggested using the following iterates
\begin{equation}
\label{extragradient_21}
\mathbf{w}_{t+\frac{1}{2}} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_{t-\frac{1}{2}}) \right],
\end{equation}
\begin{equation}
\label{extragradient_22}
\mathbf{w}_{t+1} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_{t+\frac{1}{2}}) \right],
\end{equation}
in which we only need to compute the gradient at $\mathbf{w}_{t+\frac{1}{2}}$ and can reuse the gradient $F(\mathbf{w}_{t-\frac{1}{2}})$ computed in the previous iteration.
Considering the unconstrained problem without projection, (\ref{extragradient_21}) and (\ref{extragradient_22}) reduce to
\begin{equation}
\label{extragradient_31}
\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_t - \eta_t F(\mathbf{w}_{t-\frac{1}{2}}),
\end{equation}
\begin{equation}
\label{extragradient_32}
\mathbf{w}_{t+1} = \mathbf{w}_t - \eta_t F(\mathbf{w}_{t+\frac{1}{2}}),
\end{equation}
and it is easy to see that this update is equivalent to the following one-line update as in~\cite{daskalakis2018training}
\begin{equation}
\label{omd}
\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_{t-\frac{1}{2}} - 2\eta_t F(\mathbf{w}_{t-\frac{1}{2}})+\eta_t F(\mathbf{w}_{t-\frac{3}{2}}).
\end{equation}
The optimistic mirror descent algorithm is shown in Algorithm~\ref{alg:OMD1}. To compute $\mathbf{w}_{t+1}$, it first generates an intermediate state $\mathbf{w}_{t+\frac{1}{2}}$ from $\mathbf{w}_t$ and the gradient $F(\mathbf{w}_{t-\frac{1}{2}})$ computed in the previous iteration, and then computes $\mathbf{w}_{t+1}$ from $\mathbf{w}_t$ and the gradient at $\mathbf{w}_{t+\frac{1}{2}}$. At the end of the iteration, $F(\mathbf{w}_{t+\frac{1}{2}})$ is stored for the next iteration. The optimistic mirror descent algorithm was used for online convex optimization in~\cite{chiang2012online} and for general online learning by Rakhlin and Sridharan~\cite{rakhlin2013optimization}. Mertikopoulos et al.~\cite{mertikopoulos2019optimistic} used the optimistic mirror descent algorithm for training generative adversarial networks on a single machine.
\begin{algorithm}[!h]
\caption{Optimistic Mirror Descent Algorithm}
\label{alg:OMD1}
\begin{algorithmic}[1]
\REQUIRE {step-size sequence $\eta_t>0$}
\FOR {$t = 0, 1, \ldots, T-1$}
\STATE {retrieve $F\left(\mathbf{w}_{t-\frac{1}{2}}\right)$}
\STATE {set $\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_t - \eta_t F(\mathbf{w}_{t-\frac{1}{2}}) $}
\STATE {compute gradient $F\left(\mathbf{w}_{t+\frac{1}{2}}\right)$ at $\mathbf{w}_{t+\frac{1}{2}}$}
\STATE {set $\mathbf{w}_{t+1} = P_{w} \left[\mathbf{w}_t - \eta_t F(\mathbf{w}_{t+\frac{1}{2}}) \right]$}
\STATE {store $F\left(\mathbf{w}_{t+\frac{1}{2}}\right)$}
\ENDFOR
\STATE {Return $\mathbf{w}_{T}$}
\end{algorithmic}
\end{algorithm}
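On the bilinear toy problem $\min_x \max_y \, xy$, the one-line update (\ref{omd}) stabilizes the iterates instead of letting them spiral outward. A minimal unconstrained sketch (our own illustration; $F(x,y)=(y,-x)$ is the operator of this game):

```python
import math

def F(x, y):
    # monotone operator of the bilinear game min_x max_y x*y
    return y, -x

def omd(x, y, eta, steps):
    # w_{t+1/2} = w_{t-1/2} - 2*eta*F(w_{t-1/2}) + eta*F(w_{t-3/2}), Eq. (omd)
    fx_prev, fy_prev = F(x, y)   # first step degenerates to a plain gradient step
    for _ in range(steps):
        fx, fy = F(x, y)
        x = x - 2 * eta * fx + eta * fx_prev
        y = y - 2 * eta * fy + eta * fy_prev
        fx_prev, fy_prev = fx, fy
    return x, y

x, y = omd(1.0, 1.0, 0.1, 2000)
dist = math.hypot(x, y)          # distance to the equilibrium (0, 0)
```

With $\eta=0.1$ the iterates contract toward $(0,0)$ geometrically, in contrast with the divergence of plain simultaneous gradient steps on the same problem.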
However, all of these works focus on the single-machine setting; we will propose a training algorithm for the distributed setting.
\subsection{Distributed Training}
Distributed centralized network and distributed decentralized network are two kinds of topologies used in distributed training.
In a distributed centralized network topology, each worker node can obtain information from all other worker nodes. There are two common models for this topology, i.e., the parameter-server model~\cite{li2014communication} and the AllReduce model~\cite{rabenseifner2004optimization, wang2019blink}. The difference between them is that each worker in the AllReduce model directly sends its gradient information to the other workers without a central server.
In a distributed decentralized network topology, the computing nodes are organized into a graph, and the topology does not guarantee that each worker node can obtain information from all other worker nodes: worker nodes can only communicate with their neighbors. In this case, the network connections can be fixed or dynamically changing. Some parallel algorithms are designed for fixed topologies~\cite{jin2016scale, lian2017can, shen2018towards, tang2018d}. On the other hand, the network topology may change when a network or power failure interrupts connections; some methods have been proposed for this kind of dynamic network topology~\cite{nedic2014distributed, nedic2017achieving}.
Recently, Liu et al.~\cite{liu2019decentralized} proposed to train GANs with a decentralized network topology, but their method suffers from a large parameter-exchange overhead since the transmitted gradients are not compressed. In this paper, we mainly focus on the distributed training of GANs with a distributed centralized network topology.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{distplot.pdf}
\caption{Distributed training of GAN in Algorithm~\ref{alg:DQOSG} in parameter-server model.}
\label{fig:framework}
\end{figure}
\subsection{Quantization}
Quantization is a promising technique to reduce the neural network model size by reducing the number of bits used for the parameters. Early studies on quantization focused on CNNs and RNNs. For example, Courbariaux et al.~\cite{courbariaux2016binarized} proposed to use a single sign function with a scaling factor to binarize the weights and activations in neural networks, Rastegari et al.~\cite{rastegari2016xnornet} formulated quantization as an optimization problem in order to quantize CNNs into binary neural networks, and Zhou et al.~\cite{zhou2016dorefanet} proposed to quantize the weights, activations and gradients. In distributed training, the expensive communication cost can be reduced by quantizing the transmitted gradients.
Denote the quantization function by $Q(\mathbf{v})$, where $\mathbf{v}\in\mathbb{R}^{d}$.
Generally speaking, existing gradient quantization techniques in distributed training can be divided into two categories, i.e., \textbf{biased quantization} ($\mathbb{E}[Q(\mathbf{v})]\neq \mathbf{v}$ for some $\mathbf{v}$) and \textbf{unbiased quantization} ($\mathbb{E}[Q(\mathbf{v})]= \mathbf{v}$ for all $\mathbf{v}$).
\textbf{Biased gradient quantization}: The sign function is a commonly used biased quantization method~\cite{bernstein2018signsgd,seide20141,strom2015scalable}. Stich et al.~\cite{stich2018sparsified} proposed a top-$k$ quantization method which retains only the $k$ largest-magnitude elements of a vector and sets the others to zero.
\textbf{Unbiased gradient quantization}: Such methods usually use stochastically quantized gradients~\cite{wen2017terngrad,alistarh2017qsgd,wangni2018gradient}. For example, Alistarh et al.~\cite{alistarh2017qsgd} proposed a compression operator which can be formulated as
\begin{equation}
\label{qsgd}
Q(v_i) = sign(v_i) \cdot \|\mathbf{v}\|_{2} \cdot \xi_i(\mathbf{v},s),
\end{equation}
where $\xi_i(\mathbf{v},s)$ is defined as follows
\begin{equation}
\xi_i(\mathbf{v},s)=
\left\{
\begin{aligned}
&\frac{l}{s} &\text{with prob.} ~ 1-\left(\frac{|v_i|}{\|\mathbf{v}\|_2}\cdot s - l\right); \\
&\frac{l+1}{s} &\text{otherwise}.
\end{aligned},
\right.
\end{equation}
where $s$ is the number of quantization levels and $0 \leq l < s$ is an integer such that $|v_i|/\|\mathbf{v}\|_{2}\in[l/s,(l+1)/s]$. Hou et al.~\cite{hou2018analysis} replaced $\|\mathbf{v}\|_{2}$ in the above method with $\|\mathbf{v}\|_{\infty}$.
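A sketch of this stochastic quantizer follows (our own implementation of Eq.~(\ref{qsgd}); we round $|v_i|/\|\mathbf{v}\|_2$ up to $(l+1)/s$ with probability $|v_i|/\|\mathbf{v}\|_2 \cdot s - l$, which is equivalent to the stated probabilities and makes the operator unbiased):

```python
import numpy as np

def qsgd_quantize(v, s, rng):
    """Stochastic s-level quantizer: Q(v_i) = sign(v_i) * ||v||_2 * xi_i(v, s),
    where xi_i rounds |v_i|/||v||_2 to an adjacent multiple of 1/s at random."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    ratio = np.abs(v) / norm * s        # lies in [0, s]
    l = np.floor(ratio)                 # lower quantization level
    prob_up = ratio - l                 # P[round up to (l+1)/s]
    xi = (l + (rng.random(v.shape) < prob_up)) / s
    return np.sign(v) * norm * xi

rng = np.random.default_rng(0)
v = np.array([0.3, -0.4, 1.2])
# empirical check of unbiasedness: the average of many draws approaches v
avg = np.mean([qsgd_quantize(v, 4, rng) for _ in range(20000)], axis=0)
```

Averaging many independent quantizations recovers the original vector, consistent with $\mathbb{E}[Q(\mathbf{v})]=\mathbf{v}$.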
\section{The Proposed Method}
\label{sec:method}
\subsection{{DQGAN} Algorithm}
We consider an extension of the algorithm to the \emph{stochastic} setting, i.e., we no longer have access to the exact gradient but only to an unbiased stochastic estimate of it.
Suppose we have $M$ machines.
On machine $m$ at the $t$-th iteration, we sample a mini-batch $\xi_{t}^{(m)} = \left( \xi_{t,1}^{(m)}, \xi_{t,2}^{(m)}, \cdots, \xi_{t,B}^{(m)} \right)$, where $B$ is the mini-batch size.
In all $M$ machines, we use the same mini-batch size $B$.
We write $F(\mathbf{w}_{t}^{(m)};\xi_{t,b}^{(m)})$ ~ ($1\leq b \leq B$) and $F(\mathbf{w}_{t}^{(m)})$ for the \emph{stochastic gradient} and the \emph{gradient}, respectively.
On machine $m$ at the $t$-th iteration, we define the \emph{mini-batch gradient} as $F(\mathbf{w}_{t}^{(m)};\xi_{t}^{(m)}) = \frac{1}{B} \sum_{b=1}^{B} F(\mathbf{w}_{t}^{(m)};\xi_{t,b}^{(m)})$.
To reduce the size of the transmitted gradients in distributed training, we introduce a quantization function $Q(\cdot)$ to compress the gradients. In this paper, we consider a general $\delta$-approximate compressor for our method
\begin{defn}
A quantization operator $Q$ is said to be a $\delta$-approximate compressor for $\delta \in (0,1]$ if
\begin{equation}
\| Q\left(F(\mathbf{w})\right)-F(\mathbf{w}) \|^{2} \leq (1-\delta) \|F(\mathbf{w})\|^{2} \quad \text{for all} ~ \mathbf{w} \in \Omega.
\end{equation}
\label{definQ}
\end{defn}
\vskip -0.2in
Before sending its gradient to the central server, each worker quantizes it, which gives our algorithm a small communication overhead.
In addition, our algorithm is applicable to any gradient compression method that is a $\delta$-approximate compressor.
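The property in Definition~\ref{definQ} is easy to check empirically. The sketch below (our own check, assuming NumPy and a top-$k$ compressor, for which Theorem~\ref{deltaOne} gives $\delta=k/d$) verifies the bound on random vectors:

```python
import numpy as np

def topk(v, k):
    # Keep the k largest-magnitude entries of v, zero the rest.
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

rng = np.random.default_rng(0)
d, k = 50, 10
delta = k / d
worst = 0.0
for _ in range(1000):
    v = rng.standard_normal(d)
    ratio = np.linalg.norm(topk(v, k) - v) ** 2 / np.linalg.norm(v) ** 2
    worst = max(worst, ratio)
print(worst <= 1 - delta)  # the delta-approximate bound holds on every draw
```

The bound here is in fact deterministic: the discarded $d-k$ entries are the smallest in magnitude, so they carry at most a $(d-k)/d$ fraction of the squared norm.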
In general, the quantized gradient loses some accuracy, which can prevent the algorithm from converging. In this paper, we design an error compensation operation to address this problem: the error made by the compression operator is incorporated into the next step to compensate the gradients. Recently, Stich et al.~\cite{stich2018sparsified} analyzed error-feedback in the strongly convex case, and Karimireddy et al.~\cite{karimireddy2019error} further extended the convergence results to the non-convex and weakly convex cases.
These error-feedback operations were designed for minimization problems. Here, in order to solve a min-max problem, the error-feedback operation must be designed more carefully: in each iteration, we compensate the error caused by gradient quantization twice.
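A minimal sketch of a single compensate-compress-carry step (the names are ours; `compress` stands for any $\delta$-approximate compressor):

```python
import numpy as np

def feedback_step(grad, error, eta, compress):
    # Fold the residual from the previous compression into the current update,
    # transmit the compressed result, and carry the new residual forward.
    p = eta * grad + error      # update plus accumulated compression error
    p_hat = compress(p)         # what is actually sent over the network
    return p_hat, p - p_hat     # transmitted update, residual for next step
```

By construction `p_hat + new_error == p`, so no information is permanently discarded by compression; it is only delayed to a later iteration.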
The proposed method, named {Distributed Quantized Generative Adversarial Networks (DQGAN)}, is summarized in Algorithm~\ref{alg:DQOSG}. The procedure is illustrated with the parameter server model~\cite{li2014communication} in Figure~\ref{fig:framework}.
There are $M$ workers participating in the model training.
In this method, $\mathbf{w}_{0}$ is first initialized and pushed to all workers, and the local variable $\mathbf{e}^{(m)}_{0}$ is set to $0$ for $m\in[1,M]$.
In each iteration, the $m$-th worker first updates the state $\mathbf{w}_{t-1}$ to obtain the intermediate state $\mathbf{w}_{t-\frac{1}{2}}^{(m)}$. It then computes the gradient $F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)})$, adds the error compensation $\mathbf{e}^{(m)}_{t-1}$ to obtain $\mathbf{p}_{t}^{(m)}$, quantizes it to $\hat{\mathbf{p}}_{t}^{(m)}$, and transmits the quantized vector to the server. The error $\mathbf{e}_{t}^{(m)}$ is computed as the difference between $\mathbf{p}_{t}^{(m)}$ and $\hat{\mathbf{p}}_{t}^{(m)}$. The server averages all quantized gradients and pushes the average back to the workers, which use it to update $\mathbf{w}_{t-1}$ to the new parameter $\mathbf{w}_{t}$.
\begin{algorithm}[!h]
\caption{The Algorithm of {DQGAN}}
\begin{algorithmic}[1]
\REQUIRE {step-size $\eta>0$; quantization function $Q\left(\cdot\right)$, a $\delta$-approximate compressor.}
\STATE Initialize $\mathbf{w}_0$ and push it to all workers, set $\mathbf{w}^{(m)}_{-\frac{1}{2}} = \mathbf{w}_0$ and $\mathbf{e}^{(m)}_0=0$ for $1 \leq m \leq M$.
\FOR {$t = 1, 2, \ldots, T$}
\STATE {\textbf{on} worker m : $( m \in \{1, 2, \cdots, M\} )$}
\STATE {\quad set $\mathbf{w}_{t-\frac{1}{2}}^{(m)} = \mathbf{w}_{t-1} - \left[ \eta F(\mathbf{w}_{t-\frac{3}{2}}^{(m)};\xi_{t-1}^{(m)}) + \mathbf{e}^{(m)}_{t-1} \right]$}
\STATE {\quad compute gradient $F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)})$}
\STATE {\quad set $\mathbf{p}_{t}^{(m)} = \eta F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)}) + \mathbf{e}^{(m)}_{t-1} $}
\STATE {\quad set $\hat{\mathbf{p}}_{t}^{(m)} = Q\left( \mathbf{p}_{t}^{(m)} \right)$} and push it to the server
\STATE {\quad set $\mathbf{e}_{t}^{(m)} = \mathbf{p}_{t}^{(m)} - \hat{\mathbf{p}}_{t}^{(m)}$}
\STATE {\textbf{on} server:}
\STATE {\quad \textbf{pull} $\hat{\mathbf{p}}_{t}^{(m)}$ \textbf{from} each worker}
\STATE {\quad set $\hat{\mathbf{q}}_{t} = \frac{1}{M} \left[ \sum_{m=1}^{M} \hat{\mathbf{p}}_{t}^{(m)} \right]$}
\STATE {\quad \textbf{push} $\hat{\mathbf{q}}_{t}$ \textbf{to} each worker}
\STATE {\textbf{on} worker m : $( m \in \{1, 2, \cdots, M\} )$}
\STATE {\quad set $\mathbf{w}_{t} = \mathbf{w}_{t-1} - \hat{\mathbf{q}}_{t}$}
\ENDFOR
\STATE {Return $\mathbf{w}_{T}$}
\end{algorithmic}
\label{alg:DQOSG}
\end{algorithm}
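To make the data flow of Algorithm~\ref{alg:DQOSG} concrete, here is a single-process simulation on a toy strongly monotone operator $F(\mathbf{w})=\mathbf{w}-\mathbf{w}^{*}$ (everything below, including the function names, the toy operator, and the deterministic gradients, is our own illustrative choice, not the paper's experimental setup):

```python
import numpy as np

def dqgan_toy(F, M, T, eta, compress, w0):
    # Simulate Algorithm 1 with M workers sharing a deterministic operator F.
    w = w0.copy()
    w_half_prev = [w0.copy() for _ in range(M)]   # plays the role of w_{-1/2}^{(m)}
    e = [np.zeros_like(w0) for _ in range(M)]
    for _ in range(T):
        p_hat, w_half = [], []
        for m in range(M):
            wh = w - (eta * F(w_half_prev[m]) + e[m])   # extrapolation step
            p = eta * F(wh) + e[m]                      # compensated update
            ph = compress(p)
            e[m] = p - ph                               # residual carried forward
            p_hat.append(ph)
            w_half.append(wh)
        w = w - np.mean(p_hat, axis=0)                  # server averages and pushes
        w_half_prev = w_half
    return w

target = np.array([1.0, 2.0])
w = dqgan_toy(lambda w: w - target, M=4, T=200, eta=0.1,
              compress=lambda x: x, w0=np.zeros(2))
print(w)  # converges to the solution [1, 2]
```

With the identity compressor the errors $\mathbf{e}^{(m)}_{t}$ stay zero and the loop reduces to plain distributed optimistic gradient descent; with a lossy compressor the residuals are reinjected exactly as in lines 4 and 6 of Algorithm~\ref{alg:DQOSG}.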
\subsection{Coding Strategy}
Quantization plays an important role in our method. We introduced the general $\delta$-approximate compressor in order to cover a variety of quantization methods. In this subsection, we prove that several commonly-used quantization methods are $\delta$-approximate compressors.
According to the definition of the $k$-contraction operator~\cite{stich2018sparsified}, we can verify the following theorem
\begin{thm}
The $k$-contraction operator~\cite{stich2018sparsified} is a $\delta$-approximate compressor with $\delta=\frac{k}{d}$, where $d$ is the dimension of the input and $1\leq k\leq d$ is an integer parameter.
\label{deltaOne}
\end{thm}
Moreover, we can prove the following theorem
\begin{thm}
The quantization methods in~\cite{alistarh2017qsgd} and~\cite{hou2018analysis} are $\delta$-approximate compressors.
\label{deltaTwo}
\end{thm}
Therefore, a variety of quantization methods can be used for our method.
\subsection{Convergence Analysis}
Throughout the paper, we make the following assumption
\begin{ass}[Lipschitz Continuous]
\begin{enumerate}
\item $F$ is $L$-Lipschitz continuous, i.e.
$\| F(\mathbf{w}_{1}) - F(\mathbf{w}_{2}) \| \leq L \| \mathbf{w}_{1} - \mathbf{w}_{2} \|$ for $\forall \mathbf{w}_{1}, \mathbf{w}_{2}$ .
\item $\| F(\mathbf{w}) \| \leq G$ for $\forall \mathbf{w}$ .
\end{enumerate}
\label{assum1}
\end{ass}
\begin{ass}[Unbiased and Bounded Variance]
For $\forall \mathbf{w}_{t-\frac{1}{2}}^{(m)}$, we have $\mathbb{E}\left[F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t,b}^{(m)})\right] = F(\mathbf{w}_{t-\frac{1}{2}}^{(m)})$ and $\mathbb{E}\|F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t,b}^{(m)}) - F(\mathbf{w}_{t-\frac{1}{2}}^{(m)})\|^2 \leq \sigma^2$ , where $1 \leq b \leq B$.
\label{assum2}
\end{ass}
\begin{ass}[Pseudomonotonicity]
The operator $F$ is pseudomonotone, i.e.,
$$\left\langle F(\mathbf{w}_{2}), \mathbf{w}_{1} - \mathbf{w}_{2} \right\rangle \geq 0 \Rightarrow \left\langle F(\mathbf{w}_{1}) , \mathbf{w}_{1} - \mathbf{w}_{2} \right\rangle \geq 0 \quad \text{for} \quad \forall \mathbf{w}_{1},\mathbf{w}_{2}$$
\label{assum3}
\end{ass}
Now, we give a key lemma, which shows that the residual errors maintained in Algorithm~\ref{alg:DQOSG} do not accumulate too much.
\begin{lem}
At any iteration $t$ of Algorithm~\ref{alg:DQOSG}, the expected squared norm of the averaged error $\frac{1}{M}\sum_{m=1}^{M} \mathbf{e}_{t}^{(m)}$ is bounded:
\begin{equation}
\mathbb{E} \left[ \| \frac{1}{M} \sum_{m=1}^{M} \mathbf{e}^{(m)}_{t} \|^2 \right]
\leq \frac{8\eta^2(1-\delta)(G^{2}+\frac{\sigma^2}{B})}{\delta^2}
\end{equation}
If $\delta = 1$, then $\|\mathbf{e}_{t}^{(m)}\|=0$ for $1 \leq m \leq M$, and the error vanishes as expected.
\label{errorBounded}
\end{lem}
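The lemma can be sanity-checked numerically. The sketch below is our own single-worker check with $M=1$, $B=1$, $\sigma=0$, $G=1$ and the top-$k$ compressor, so the bound reduces to $8\eta^{2}(1-\delta)G^{2}/\delta^{2}$; it tracks the residual norm over many feedback steps:

```python
import numpy as np

def topk(v, k):
    # Keep the k largest-magnitude entries of v, zero the rest.
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

rng = np.random.default_rng(0)
d, k, eta = 20, 5, 0.1
delta = k / d
bound = 8 * eta**2 * (1 - delta) / delta**2   # lemma's bound with G = 1, sigma = 0
e = np.zeros(d)
worst = 0.0
for _ in range(2000):
    g = rng.standard_normal(d)
    g /= np.linalg.norm(g)        # enforce ||g|| <= G = 1
    p = eta * g + e               # compensated update
    e = p - topk(p, k)            # residual after compression
    worst = max(worst, np.linalg.norm(e) ** 2)
print(worst <= bound)  # residual error stays within the lemma's bound
```

The residual contracts geometrically ($\|\mathbf{e}_{t}\|\leq\sqrt{1-\delta}\,(\eta G+\|\mathbf{e}_{t-1}\|)$), which is why it settles well below the stated bound rather than growing with $t$.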
Finally, based on the above assumptions and lemma, we are in a position to state the convergence rate of Algorithm~\ref{alg:DQOSG}.
\begin{thm}
By choosing $\eta \leq \min \{ \frac{1}{\sqrt{BM}}, \frac{1}{6\sqrt{2}L} \}$ in Algorithm~\ref{alg:DQOSG}, we have
\begin{equation}
\begin{aligned}
\frac{1}{T} \sum_{t=1}^{T} \mathbb{E} & \left[ \| \frac{1}{M}\sum_{m=1}^{M} F\left(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)}\right) \|^2 \right]
\leq \frac{4 \| \tilde{\mathbf{w}}_{0} - \mathbf{w}^* \|^2}{\eta^{2} T} + \frac{1728~ L^2 \sigma^2}{B^2M^2} \\
& \quad + \frac{3456~ L^2 G^2 (M-1)}{BM^2} + \frac{9216~ L^2 (1-\delta)(G^{2}+\frac{\sigma^2}{B})(M-1)}{\delta^2 BM^2} + \frac{48~ \sigma^2}{BM}
\end{aligned}
\label{eq:converg}
\end{equation}
\label{converge}
\end{thm}
Theorem~\ref{converge} establishes non-asymptotic convergence and a linear speedup in theory.
Suppose we want to find an $\epsilon$-first-order stationary point, i.e.,
\begin{equation}
\mathbb{E} \left[ \| \frac{1}{M}\sum_{m=1}^{M} F\left(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)}\right) \|^2 \right] \leq \epsilon^{2}
\end{equation}
By taking $M=\mathcal{O}(\epsilon^{-2})$ and $T=\mathcal{O}(\epsilon^{-8})$,
we can guarantee that Algorithm~\ref{alg:DQOSG} reaches an $\epsilon$-first-order stationary point.
\section{Experiments}
\label{sec:experiments}
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{Cifar10_is_fid.pdf}}
\caption{The Inception Score and Fr{\'e}chet Inception Distance values of three methods on the CIFAR10 dataset.}
\label{fig:results_cifa10}
\vspace{-0.4cm}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{CelebA_is_fid.pdf}}
\caption{The Inception Score and Fr{\'e}chet Inception Distance values of three methods on the CelebA dataset.}
\label{fig:results_celeba}
\vspace{-0.4cm}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=0.8\columnwidth]{speedup.pdf}}
\caption{The speedup of our method on the CIFAR10 and CelebA datasets (left and right respectively).}
\label{fig:speedup}
\vspace{-0.4cm}
\end{figure}
In this section, we present the experimental results on
real-life datasets. We used PyTorch~\cite{paszke2019pytorch} as the underlying deep learning framework
and Nvidia NCCL~\cite{nccl} as the communication mechanism.
We used the following two real-life benchmark datasets for the remaining experiments.
The \textbf{CIFAR-10} dataset~\cite{krizhevsky2009learning} contains $60{,}000$ $32\times32$ images in 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck.
The \textbf{CelebA} dataset~\cite{liu2015faceattributes} is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations.
We compared our method with the
{Centralized Parallel Optimistic Adam (CPOAdam)} which is our method without quantization and error-feedback,
and the Centralized Parallel Optimistic Adam with Gradients Quantization (CPOAdam-GQ)
for training the generative adversarial networks with the loss in (\ref{loss_gan1}) and
the deep convolutional generative adversarial network (DCGAN) architecture~\cite{radford2016unsupervised}.
We implemented full-precision (float32) baseline models and set the number of bits for our method to $8$.
We used the compressor in~\cite{hou2018analysis} for our method.
Learning rates and hyperparameters in these methods were chosen by an inspection of grid search results
so as to enable a fair comparison of these methods.
We use the Inception Score~\cite{salimans2016improved} and the Fr{\'e}chet Inception Distance~\cite{dowson1982frechet}
to evaluate all methods; a higher Inception Score and a lower Fr{\'e}chet Inception Distance indicate better results.
The Inception Score and Fr{\'e}chet Inception Distance values of the three methods on the CIFAR10 and CelebA datasets
are shown in Figures~\ref{fig:results_cifa10} and \ref{fig:results_celeba}.
The results on both datasets show that CPOAdam achieves higher Inception Scores and
lower Fr{\'e}chet Inception Distances throughout training.
We also notice that our method, using gradients at $1/4$ of full precision, produces results comparable
to those of CPOAdam with full-precision gradients:
at most a $0.6$ decrease in Inception Score and
at most a $30$ increase in Fr{\'e}chet Inception Distance on the CIFAR10 dataset,
and at most a $0.5$ decrease in Inception Score and
at most a $40$ increase in Fr{\'e}chet Inception Distance on the CelebA dataset.
Finally, we show the speedup results of our method on the CIFAR10 and CelebA datasets in Figure~\ref{fig:speedup},
which shows that our method improves the speedup of training GANs, and
the improvement becomes more significant as the data size increases.
With 32 workers, our method with $8$-bit gradients
achieves a significant improvement over CPOAdam
on both the CIFAR10 and CelebA datasets.
This matches our expectations, since transmitting $8$-bit gradients requires far less data than transmitting $32$-bit full-precision gradients.
\section{Conclusion}
\label{sec:conclusions}
In this paper, we have proposed a distributed optimization algorithm for training GANs.
The new method reduces the communication cost via gradient compression,
and the error-feedback operations we designed compensate for the bias caused
by the compression operation and ensure non-asymptotic convergence.
We have theoretically proved the non-asymptotic convergence of the new method,
with the introduction of a general $\delta$-approximate compressor.
Moreover, we have proved that the new method has a linear speedup in theory.
The experimental results show that our method produces results comparable to those of the distributed OMD without quantization,
with only slight performance degradation.
Although our new method has a linear speedup in theory,
the cost of gradient synchronization affects its performance in practice.
Introducing asynchronous gradient communication would break the synchronization barrier and
improve the efficiency of our method in real applications. We leave this for future work.
\section{Introduction}
Let $G$ be a connected reductive group defined over a non-archimedean
local field $F$. Fix a minimal $F$-parabolic subgroup $B=TU$ of
$G$ with unipotent radical $U$ and whose Levi factor $T$ contains
a maximal $F$-split torus of $G$. Let $\psi$ be a nondegenerate
character of $U(F)$ and consider the induced representation $c\text{-ind}_{U(F)}^{G(F)}\psi$
realized by functions whose support is compact mod $U(F)$. It is
a result of Bushnell and Henniart \cite{BH03} that the Bernstein
components of $c\text{-ind}_{U(F)}^{G(F)}\psi$ are finitely generated.
Now let $\lambda$ be a character of $T(F)$. The pair $(T,\lambda)$
determines a Bernstein block $\mathcal{R}^{[T,\lambda]_{G}}(G(F))$
in the category of smooth representations $\mathcal{R}(G(F))$ of
$G(F)$. Bushnell-Kutzko types are known to exist for Bernstein blocks
under suitable residue characteristic hypotheses \cite{KY17, Fin}.
Let $(K,\rho)$ be a $[T,\lambda]_{G}$-type and let $\mathcal{H}(G,\rho)$
be the associated Hecke algebra. We show in Theorem \ref{thm:cyc}
that the $\rho$-isotypical component $(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}$
is a cyclic $\mathcal{H}(G,\rho)$-module.
Now assume that $T$ is split and $\psi$ is a non-degenerate character of $U(F)$ of generic depth zero (see \S \ref{sec:statement}). If $\lambda\neq1$, then assume further
that the center of $G$ is connected. In that case, $\mathcal{H}(G,\rho)$
is an Iwahori-Hecke algebra. It contains a finite subalgebra $\mathcal{H}_{W,\lambda}$.
The algebra $\mathcal{H}_{W,\lambda}$ has a one dimensional representation
$\text{sgn}$. We show in Theorem \ref{thm:main} that the $\mathcal{H}(G,\rho)$-module
$(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}$ is isomorphic to $\mathcal{H}(G,\rho)\underset{\mathcal{H}_{W,\lambda}}{\otimes}\text{sgn}$.
For positive depth $\lambda$, Theorem \ref{thm:main} assumes that
$F$ has characteristic $0$ and its residue characteristic is not
too small.
Theorems \ref{thm:cyc} and \ref{thm:main} generalize the main result
of Chan and Savin in \cite{CS18} who treat the case $\lambda=1$
for $T$ split, i.e., unramified principal series blocks of split
groups. Our proofs benefit from the ideas in \cite{CS18}; however,
they are quite different. The existence of a generator in $(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}$
is concluded by specializing quite general results in \cite{BH03, BK98}.
For Theorem \ref{thm:main}, instead of computing the effect of intertwiners
on the generator as in \cite{CS18}, we make a reduction to depth-zero
case and then to a finite group analogue of the question. There it
holds by a result of Reeder \cite{Re02}*{\S 7.2}.
\section{Notations}
Throughout, $F$ denotes a non-archimedean local field. Let $\mathcal{O}$
denote the ring of integers of $F$. We denote by $q$, the cardinality
of the residue field $\mathbb{F}_{q}$ of $F$ and by $p$ the characteristic
of $\mathbb{F}_{q}$.
\section{\label{sec:roche}Preliminaries}
We use this section to recall some basic theory and also fix notation.
\subsection{Bernstein decomposition}
Let $\mathcal{R}(G(F))$ denote the category of smooth complex representations
of $G(F)$. The \textit{Bernstein decomposition} gives a direct product
decomposition of $\mathcal{R}(G(F))$ into indecomposable subcategories:
\[
\mathcal{R}(G(F))=\prod_{\mathfrak{s}\in\mathfrak{B}(G)}\mathcal{R}^{\mathfrak{s}}(G(F)).
\]
Here $\mathfrak{B}(G)$ is the set of \textit{inertial equivalence
classes, }i.e., equivalence classes $[L,\sigma]_{G}$ of cuspidal
pairs $(L,\sigma)$, where $L$ is an $F$-Levi, $\sigma$ is a supercuspidal
of $L(F)$ and where the equivalence is given by conjugation by $G(F)$
and twisting by unramified characters of $L(F)$. The block $\mathcal{R}^{[L,\sigma]_{G}}(G(F))$
consists of those representations $\pi$ for which each irreducible
constituent of $\pi$ appears in the parabolic induction of some supercuspidal
representation in the equivalence class $[L,\sigma]_{G}$.
\subsection{Hecke algebra \cite{BK98}}
Let $(\tau,V)$ be an irreducible representation of a compact open
subgroup $J$ of $G(F)$. The Hecke algebra $\mathcal{H}(G,\tau)$
is the space of compactly supported functions $f:G(F)\rightarrow\mathrm{End}_{\mathbb{C}}(V^{\vee})$
satisfying,
\[
f(j_{1}gj_{2})=\tau^{\vee}(j_{1})f(g)\tau^{\vee}(j_{2})\text{ }\text{for all }j_{1,}j_{2}\in J\text{ }\text{and }g\in G(F).
\]
Here $(\tau^{\vee},V^{\vee})$ denotes the dual of $\tau$. The standard
convolution operation gives $\mathcal{H}(G,\tau)$ the structure of
an associative $\mathbb{C}$-algebra with identity.
Let $\mathcal{R}_{\tau}(G(F))$ denote the subcategory of $\mathcal{R}(G(F))$
whose objects are the representations $(\pi,\mathcal{V})$ of $G(F)$
generated by the $\tau$-isotypic subspace $\mathcal{V}^{\tau}$ of
$\mathcal{V}$. There is a functor
\[
\mathbf{M}_{\tau}:\mathcal{R}_{\tau}(G(F))\rightarrow\mathcal{H}(G,\tau)\text{-Mod},
\]
given by
\[
\pi\mapsto\mathrm{Hom}_{J}(\tau,\pi).
\]
Here $\mathcal{H}(G,\tau)\text{-Mod}$ denotes the category of unital
left modules over $\mathcal{H}(G,\tau)$.
For $\mathfrak{s}\in\mathfrak{B}(G)$, the pair $(J,\tau)$ is an
$\mathfrak{s}$-type if $\mathcal{R}_{\tau}(G(F))=\mathcal{R}^{\mathfrak{s}}(G(F))$.
In that case, the functor $\mathbf{M}_{\tau}$ gives an equivalence
of categories.
\subsection{G-cover \cite{BK98}}
Let $(K_{M,}\rho_{M})$ be a $[M,\sigma]_{M}$-type. Let $(K,\rho)$
be a pair consisting of a compact open subgroup $K$ or $G(F)$ and
an irreducible representation $\rho$ of $K$. Suppose that for any
opposite pair of $F$-parabolic subgroups $P=MN$ and $\bar{P}=M\bar{N}$
with Levi factor $M$ and unipotent radicals $N$ and $\bar{N}$ respectively,
the pair $(K,\rho)$ satisfies the following properties:
\begin{enumerate}
\item[(1)]$K$ decomposes with respect to $(N,M,\bar{N})$, i.e.,
\[
K=(K\cap N)\ldotp(K\cap M)\ldotp(K\cap\bar{N}).
\]
\item[(2)]$\rho|K_{M}=\rho_{M}$ and $K\cap N$, $K\cap\bar{N}\subset\mathrm{ker}(\rho)$.
\item[(3)]For any smooth representation $(\pi,V)$ of $G(F),$ the
natural projection $V$ to the Jacquet module $V_{N}$ induces an
injection on $V^{\rho}$.
\end{enumerate}
The pair $(K,\rho)$ is then called the $G$-cover of $(K_{M},\rho_{M})$.
See \cite{Bl}*{Theorem 1} for this reformulation of the original
definition of $G$-cover due to Bushnell and Kutzko \cite{BK98}*{\S 8.1}
(see also \cite{KY17}*{\S 4.2}). If $(K,\rho)$ is a $G$-cover of
$(K_{M},\rho_{M})$, then it is an $[M,\sigma]_{G}$-type.
Suppose $(K,\rho)$ is a $G$-cover of $(K_{M},\rho_{M})$, then for
any $F$-parabolic subgroup $P'=MN'$ with Levi factor $M$ and unipotent
radical $N'$, there is a $\mathbb{C}$-algebra embedding \cite{BK98}*{\S 8.3}
\begin{equation}
t_{P'}:\mathcal{H}(M,\rho_{M})\rightarrow\mathcal{H}(G,\rho),\label{eq:HMG}
\end{equation}
with the property that for any smooth representation $\Upsilon$ of
$G(F)$,
\begin{equation}
\mathbf{M}_{\rho_{M}}(\Upsilon_{N'})\cong t_{P'}^{*}(\mathbf{M}_{\rho}(\Upsilon)).\label{eq:VMG}
\end{equation}
Here $t_{P'}^{*}:\mathcal{H}(G,\rho)\text{-Mod}\rightarrow\mathcal{H}(M,\rho_{M})\text{-Mod}$
is the functor induced by $t_{P'}$.
Kim and Yu \cite{KY17}, using Kim's work \cite{Kim07}, showed that
Yu's construction of supercuspidals \cite{Yu01} can be used to produce
$G$-covers of $[M,\sigma]_{M}$-types for all $[M,\sigma]_{G}\in\mathfrak{B}(G)$,
assuming $F$ has characteristic $0$ and the residue characteristic
$p$ of $F$ is suitably large. Recently Fintzen \cite{Fin}, using
an approach different from Kim, has shown the construction of types
for all Bernstein blocks without any restriction on the characteristic
of $F$ and assuming only that $p$ does not divide the order of the
Weyl group of $G$.
\section{\label{sec:GG}Gelfand-Graev spaces}
Let $G$ be a connected reductive group defined over $F$. Fix a maximal
$F$-split torus $S$ in $G$ and let $T$ denote its centralizer.
Then $T$ is the Levi factor of a minimal $F$-parabolic subgroup
$B$ of $G$. We denote the unipotent radical of $B$ by $U$. A smooth
character
\[
\psi:U(F)\rightarrow\mathbb{C}^{\times}
\]
is called non-degenerate if its stabilizer in $S(F)$ lies in the
center $Z$ of $G$.
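For illustration (a standard example, not required for what follows): take $G=\mathrm{GL}_{n}$, $B$ the upper-triangular Borel with unipotent radical $U$, and fix a nontrivial additive character $\psi_{0}$ of $F$; then

```latex
% Standard non-degenerate character on the unipotent radical of the
% upper-triangular Borel of GL_n; psi_0 is a nontrivial additive character of F.
\[
\psi(u)=\psi_{0}\left(u_{1,2}+u_{2,3}+\cdots+u_{n-1,n}\right),
\qquad u=(u_{i,j})\in U(F),
\]
```

is non-degenerate: an element $t=\mathrm{diag}(t_{1},\ldots,t_{n})$ of the diagonal torus stabilizes $\psi$ exactly when $t_{1}=\cdots=t_{n}$, i.e., when $t$ is central.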
The Gelfand-Graev representation $c\text{-ind}_{U(F)}^{G(F)}\psi$
of $G(F)$ is realized on the space of functions $f:G(F)\rightarrow\mathbb{C}$
that are smooth under right translation, compactly supported modulo $U(F)$,
and satisfy:
\[
f(ug)=\psi(u)f(g),\text{ }\forall u\in U(F),g\in G(F).
\]
Let $M$ be a $(B,T)$-standard $F$-Levi subgroup of an $F$-parabolic
$P=MN$ of $G$, i.e., $M$ contains $T$ and $P$ contains $B$.
Then $B\cap M$ is a minimal parabolic subgroup of $M$ with unipotent
radical $U_{M}:=U\cap M$. Also, $\psi_{M}:=\psi|_{U_{M}(F)}$ is a non-degenerate
character of $U_{M}(F)$. As before, denote by $\bar{P}=M\bar{N}$,
the opposite parabolic subgroup. We also have an isomorphism of $M(F)$
representations \cite{BH03}*{\S 2.2, Theorem}
\begin{equation}
c\text{-ind}_{U_{M}(F)}^{M(F)}\psi_{M}\cong(c\text{-ind}_{U(F)}^{G(F)}\psi)_{\bar{N}}.\label{eq:cindU}
\end{equation}
Now let $\sigma$ be a character of $T(F)$. Let $(K_{T},\rho_{T})$
be a $[T,\sigma]_{T}$-type and let $(K,\rho)$ denote its $G$-cover.
We assume that the residue characteristic $p$ does not divide the
order of the Weyl group of $G$, so that $(K,\rho)$ exists by \cite{Fin}.
Let $\bar{B}=T\bar{U}$ denote the opposite Borel. View $\mathcal{H}(T,\rho_{T})$
as a subalgebra of $\mathcal{H}(G,\rho)$ via the embedding:
\[
t_{\bar{B}}:\mathcal{H}(T,\rho_{T})\rightarrow\mathcal{H}(G,\rho),
\]
of Equation (\ref{eq:HMG}).
\begin{thm}
\label{thm:cyc}There is an isomorphism
\[
(c\emph{-ind}_{U(F)}^{G(F)}\psi)^{\rho}\cong\mathcal{H}(T,\rho_{T})
\]
of $\mathcal{H}(T,\rho_{T})$-modules. Consequently, $(c\emph{-ind}_{U(F)}^{G(F)}\psi)^{\rho}$
is a cyclic $\mathcal{H}(G,\rho)$-module.
\end{thm}
\begin{proof}
Putting $M=T$ in Equation (\ref{eq:cindU}) and observing that in
this case, $U_{M}=1$, we get an isomorphism of $T(F)$ representations
\begin{eqnarray*}
(c\text{-ind}_{U(F)}^{G(F)}\psi)_{\bar{U}} & \cong & c\text{-ind}_{1}^{T(F)}\mathbb{C}\\
& \cong & C_{c}^{\infty}(T(F)).
\end{eqnarray*}
Consequently,
\begin{eqnarray*}
(c\text{-ind}_{U(F)}^{G(F)}\psi)_{\bar{U}}^{\rho_{T}} & \cong & C_{c}^{\infty}(T(F))^{\rho_{T}}\\
& \cong & \mathcal{H}(T,\rho_{T})
\end{eqnarray*}
as $\mathcal{H}(T,\rho_{T})$-modules. Now by Equation (\ref{eq:VMG}),
\[
(c\text{-ind}_{U(F)}^{G(F)}\psi)_{\bar{U}}^{\rho_{T}}\cong(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}
\]
as $\mathcal{H}(T,\rho_{T})$-modules. The result follows.
\end{proof}
\section{\label{sec:PSC}Principal series component}
\subsection{Some results of Roche }
In this subsection, we summarize some results of Roche in \cite{Ro98}.
Let the notations be as in Section \ref{sec:GG}. We assume further
that $S=T$, so that $B=TU$ is now an $F$-Borel subgroup of $G$
containing the maximal $F$-split torus $T$. The pair $(B,T)$ determines
a based root datum $\Psi=(X,\Phi,\Pi,X^{\vee},\Phi^{\vee},\Pi^{\vee})$.
Here $X$ (resp. $X^{\vee}$) is the character (resp. co-character)
lattice of $T$ and $\Pi$ (resp. $\Pi^{\vee}$) is a basis (resp.
dual basis) for the set of roots $\Phi=\Phi(G,T)$ (resp. $\Phi^{\vee}$)
of $T$ in $G$.
For the results of this subsection, we assume that $F$ has characteristic
$0$ and the residue characteristic $p$ of $F$ satisfies the following
hypothesis.
\begin{hyp}\label{hyp}
If $\Phi$ is irreducible, $p$ is restricted as follows:
\begin{itemize}
\item for type $A_{n}$: $p>n+1$;
\item for types $B_{n},C_{n},D_{n}$: $p\neq2$;
\item for type $F_{4}$: $p\neq2,3$;
\item for types $G_{2},E_{6}$: $p\neq2,3,5$;
\item for types $E_{7},E_{8}$: $p\neq2,3,5,7$.
\end{itemize}
If $\Phi$ is not irreducible, then $p$ avoids the primes excluded
for each of its irreducible factors.
\end{hyp}
We let $\mathcal{T}=T(\mathcal{O})$ denote the maximal compact subgroup
of $T(F)$, let $N_{G}(T)$ denote the normalizer of $T$ in $G$, and let $W=W(G,T)=N_{G}(T)/T=N_{G}(T)(F)/T(F)$
denote the Weyl group of $G$.
Let $\chi^{\#}$ be a character of $T(F)$ and put $\chi=\chi^{\#}|T(F)_{0}$,
where $T(F)_{0}$ denotes the maximal compact subgroup of $T(F)$.
Then $(T(F)_{0},\chi)$ is a $[T,\chi^{\#}]_{T}$-type.
Let $N_{G}(T)(F)_{\chi}$ (resp. $N_{G}(T)(\mathcal{O})_{\chi}$,
resp. $W_{\chi}$) denote the subgroup of $N_{G}(T)(F)$ (resp. $N_{G}(T)(\mathcal{O})$,
resp. $W$) which fixes $\chi$. The group $N_{G}(T)(F)_{\chi}$ contains
$T(F)$ and we have $W_{\chi}=N_{G}(T)(F)_{\chi}/T(F)$. Denote by
$\mathcal{W}=\mathcal{W}(G,T)=N_{G}(T)(F)/\mathcal{T}$,
the Iwahori-Weyl group of $G$. There is an identification $N_{G}(T)(F)=X^{\vee}\rtimes N_{G}(T)(\mathcal{O})$
given by the choice of a uniformizer of $F$. Since $N_{G}(T)(\mathcal{O})/\mathcal{T}=N_{G}(T)/T$,
this identification also gives an identification $\mathcal{W}=X^{\vee}\rtimes W$.
Let $\mathcal{W}_{\chi}=X^{\vee}\rtimes W_{\chi}$ be the subgroup
of $\mathcal{W}$ which fixes $\chi$.
Let
\begin{eqnarray*}
\Phi^{\prime} & = & \{\alpha\in\Phi\mid\chi\circ\alpha^{\vee}|_{\mathcal{O}^{\times}}=1\}.
\end{eqnarray*}
Then $\Phi^{\prime}$ is a sub-root system of $\Phi$. Let $s_{\alpha}$
denote the reflection on the space $\mathcal{A}=X^{\vee}\otimes_{\mathbb{Z}}\mathbb{R}$
associated to a root $\alpha\in\Phi$ and write $W^{\prime}=\langle s_{\alpha}\mid\alpha\in\Phi^{\prime}\rangle$
to be the associated Weyl group. Let $\Phi^{+}$ (resp. $\Phi^{-}$)
be the system of positive (resp. negative) roots determined by the
choice of the Borel $B$ and let $\Phi^{\prime+}=\Phi^{+}\cap\Phi^{\prime}$.
Then $\Phi^{\prime+}$ is a positive system in $\Phi^{\prime}$. Put
\[
C_{\chi}=\{w\in W_{\chi}\mid w(\Phi^{\prime+})=\Phi^{\prime+}\}.
\]
Then we have,
\[
W_{\chi}=W^{\prime}\rtimes C_{\chi}.
\]
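To make these groups concrete, consider a standard example (ours, for illustration): $G=\mathrm{GL}_{2}$, $T$ the diagonal torus, and $\chi=\chi_{1}\otimes\chi_{2}$ a character of $T(F)_{0}=\mathcal{O}^{\times}\times\mathcal{O}^{\times}$. The unique positive coroot is $\alpha^{\vee}(t)=\mathrm{diag}(t,t^{-1})$, so

```latex
% GL_2 example: chi = chi_1 (x) chi_2 on O^x  x  O^x.
\[
\chi\circ\alpha^{\vee}(t)=\chi_{1}(t)\chi_{2}(t)^{-1},\qquad
\Phi^{\prime}=\begin{cases}
\Phi & \text{if }\chi_{1}=\chi_{2},\\
\emptyset & \text{otherwise},
\end{cases}
\qquad
W_{\chi}=\begin{cases}
W & \text{if }\chi_{1}=\chi_{2},\\
1 & \text{otherwise}.
\end{cases}
\]
```

In both cases $W_{\chi}=W^{\prime}$ and $C_{\chi}=1$, consistent with $G$ having connected center.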
The character $\chi$ extends to a $W_{\chi}$-invariant character
$\tilde{\chi}$ of $N_{G}(T)(\mathcal{O})_{\chi}$. By abuse of notation, denote again by $\tilde{\chi}$
the character of $N_{G}(T)(F)_{\chi}$ extending $\tilde{\chi}$ trivially
on $X^{\vee}$.
Roche's construction produces a $[T,\chi^{\#}]_{G}$-type $(K,\rho)$.
The pair $(K,\rho)$ depends on the choice of $B,T,\chi$ but not
on the extension $\chi^{\#}$ of $\chi$. Denote by $I_{G(F)}(\rho)$,
the set of elements in $G(F)$ which intertwine $\rho$. Equivalently,
$g\in I_{G(F)}(\rho)$ iff the double coset $KgK$ supports a non-zero
function in $\mathcal{H}(G,\rho)$. We have an equality
\begin{equation}
I_{G(F)}(\rho)=K\mathcal{W_{\chi}}K.\label{eq:IGrho}
\end{equation}
For an element $w\in\mathcal{W}_{\chi}$, choose any representative
$n_{w}$ of $w$ in $N_{G}(T)(F)_{\chi}$ and let $f_{\tilde{\chi},w}$
be the unique element of the Hecke algebra $\mathcal{H}(G,\rho)$
supported on $Kn_{w}K$ and taking the value $q^{-\ell(w)/2}\tilde{\chi}(n_{w})^{-1}$ at $n_{w}$.
Here $\ell$ is the length function on the affine Weyl group $\mathcal{W}$.
The functions $f_{\tilde{\chi},w}$ for $w\in\mathcal{W}_{\chi}$
form a basis for the $\mathbb{C}$-vector space $\mathcal{H}(G,\rho)$.
Denote by $\mathcal{H}_{W,\chi}$ the subalgebra of $\mathcal{H}(G,\rho)$
generated by $\{f_{\tilde{\chi},w}\mid w\in W'\}$. Also, identify
$\mathcal{H}(T,\chi)$ as a subalgebra of $\mathcal{H}(G,\rho)$ using
the embedding $t_{B}$. When $G$ has connected center, $C_{\chi}=1$ assuming
Hypothesis \ref{hyp}. In that case, $\mathcal{H}_{W,\chi}$
and $\mathcal{H}(T,\chi)$ together
generate the full Hecke algebra $\mathcal{H}(G,\rho)$.
\subsection{Statement of Theorem}\label{sec:statement}
We continue to assume that $G$ is split.
Extend the triple $(G,B,T)$ to a Chevalley-Steinberg pinning of $G$.
This determines a hyperspecial point $x$ in the Bruhat-Tits building
which gives $G$ the structure of a Chevalley group. With this identification,
$(G,B,T)$ are defined over $\mathcal{O}$ and the hyperspecial subgroup
$G(F)_{x,0}$ at $x$ is $G(\mathcal{O})$. Let $G(F)_{x,0+}$ denote the pro-unipotent radical of $G(F)_{x,0}$. Then $G(F)_{x,0}/G(F)_{x,0+}\cong G(\mathbb{F}_{q})$. We say that a non-degenerate character $\psi$
of $U(F)$ is of \emph{generic depth zero} at $x$ if $\psi|_{U(F)\cap G(F)_{x,0}}$ factors through a
generic character $\underline{\psi}$
of $U(\mathbb{F}_{q})=(U(F)\cap G(F)_{x,0})/(U(F)\cap G(F)_{x,0+})$ (see \cite{DeRe10}*{\S 1} for
a more general definition). Note that if $G$ has connected center, then all non-degenerate
characters of $U(F)$ form a single orbit under $T(F)$.
Let $\mathrm{sgn}$ denote the one dimensional representation of $\mathcal{H}_{W,\chi}$
in which $f_{\tilde{\chi},w}$ acts by the scalar $(-1)^{\ell'(w)}$.
Here $\ell'$ denotes the length function on $W'$.
\begin{thm}
\label{thm:main}
Let $\psi$ be a non-degenerate character of $U(F)$ of generic depth zero at $x$. If $\chi\neq1$, then assume that the center of $G$ is connected. If $\chi$ has positive depth, then assume further that
$F$ has characteristic $0$ and the residue characteristic satisfies
Hypothesis \ref{hyp}. Then the $\mathcal{H}(G,\rho)$-module $(c\emph{-ind}_{U(F)}^{G(F)}\psi)^{\rho}$
is isomorphic to $\mathcal{H}(G,\rho)\underset{\mathcal{H}_{W,\chi}}{\otimes}\mathrm{sgn}$.
\end{thm}
\section{Proof of Theorem \ref{thm:main}}
We retain the notations introduced in Sections \ref{sec:GG} and \ref{sec:PSC}.
\subsection{Reduction to depth-zero}
It follows from the proof of \cite{Ro98}*{Theorem 4.15} (see
also loc. cit., page 385, 2nd last paragraph), that there exists a
standard $F$-Levi subgroup $M$ of $G$ which is the Levi factor
of a standard parabolic $P=MN$ of $G$ and which is minimal with
the property that
\begin{equation}
I_{G(F)}(\rho)\subset KM(F)K.\label{eq:IGrho_sub}
\end{equation}
Put $(K_{M},\rho_{M})=(K\cap M,\rho|M(F))$. From \cite{BK98}*{Theorem 7.2(ii)},
it follows that $(K,\rho)$ satisfies the requirements \cite{BK98}*{\S 8.1}
of being a $G$-cover of $(K_{M},\rho_{M})$. It also follows from \cite{BK98}*{Theorem 7.2(ii)}
that there is a support preserving Hecke algebra isomorphism
\begin{equation}
\Psi^{M}:\mathcal{H}(M,\rho_{M})\overset{\simeq}{\rightarrow}\mathcal{H}(G,\rho).\label{eq:H-isom}
\end{equation}
By Equation (\ref{eq:cindU}), we have an isomorphism of $\mathcal{H}(M,\rho_{M})$-modules
\begin{equation}
(c\text{-ind}_{U_{M}(F)}^{M(F)}\psi_{M})^{\rho_{M}}\cong((c\text{-ind}_{U(F)}^{G(F)}\psi)_{\bar{N}(F)})^{\rho_{M}}.\label{eq:c-ind1}
\end{equation}
Similarly, by Equation (\ref{eq:VMG}), we have a $\Psi^{M}$-equivariant
isomorphism
\begin{equation}
((c\text{-ind}_{U(F)}^{G(F)}\psi)_{\bar{N}(F)})^{\rho_{M}}\cong(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}.\label{eq:c-ind2}
\end{equation}
Combining Equations (\ref{eq:c-ind1}) and (\ref{eq:c-ind2}), we
get a $\Psi^{M}$-equivariant isomorphism
\[
(c\text{-ind}_{U_{M}(F)}^{M(F)}\psi_{M})^{\rho_{M}}\cong(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}.
\]
It is also shown in the proof of \cite{Ro98}*{Theorem 4.15} that
for such an $M$, there is a character $\chi_{1}$ of $M(F)$ such
that $\chi\chi_{1}$ viewed as a character of $T(F)_{0}$ is depth-zero.
We then have an isomorphism
\begin{equation}
\Psi_{\chi_{1}}:f\in\mathcal{H}(M,\rho_{M})\overset{\simeq}{\mapsto}f\chi_{1}\in\mathcal{H}(M,\rho_{M}\chi_{1}).
\end{equation}
This gives a $\Psi_{\chi_{1}}$-equivariant isomorphism
\[
(c\text{-ind}_{U_{M}(F)}^{M(F)}\psi_{M})^{\rho_{M}}\cong(c\text{-ind}_{U_{M}(F)}^{M(F)}\psi_{M})^{\rho_{M}\chi_{1}}.
\]
We thus have a $\Psi^{M}\circ\Psi_{\chi_{1}}^{-1}$-equivariant isomorphism
\begin{equation}
(c\text{-ind}_{U_{M}(F)}^{M(F)}\psi_{M})^{\rho_{M}\chi_{1}}\cong(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}.\label{eq:vec}
\end{equation}
By Equations (\ref{eq:IGrho}) and (\ref{eq:IGrho_sub}), it follows
that $\Psi^{M}$ restricts to an algebra isomorphism
\begin{equation}
\mathcal{H}_{W(G,T),\chi}\overset{\simeq}{\rightarrow}\mathcal{H}_{W(M,T),\chi}.
\end{equation}
From the proof of \cite{Ro98}*{Theorem 4.15}, $W(M,T)_{\chi}=W(M,T)_{\chi\chi_{1}}$
and therefore $\Psi_{\chi_{1}}$ restricts to an isomorphism
\[
\mathcal{H}_{W(M,T),\chi}\cong\mathcal{H}_{W(M,T),\chi\chi_{1}}.
\]
Thus $\Psi^{M}\circ\Psi_{\chi_{1}}^{-1}$ restricts to an isomorphism
\begin{equation}
\mathcal{H}_{W(M,T),\chi\chi_{1}}\cong\mathcal{H}_{W(G,T),\chi}.\label{eq:subalg}
\end{equation}
Note that if $G$ has connected center, then so does $M$ (see the proof
of \cite{Car85}*{Proposition 8.1.4}, for instance). Thus,
from Equations (\ref{eq:vec}) and (\ref{eq:subalg}), it follows
that to prove Theorem \ref{thm:main}, we can and do assume without
loss of generality that $\chi$ has depth zero.
\begin{rem}
For a much more general statement of the isomorphism $\Psi^{M}\circ\Psi_{\chi_{1}}^{-1}$,
see \cite{AdMi}*{\S 8}.
\end{rem}
\subsection{Proof in depth-zero case}
The results of this section require no restriction on the characteristic
or the residue characteristic of $F$.
\begin{comment}
Extend the triple $(G,B,T)$ to a Chevalley-Steinberg pinning of $G$.
This determines a hyperspecial point $x$ in the Bruhat-Tits building
which gives $G$ the structure of a Chevalley group. With this identification,
$(G,B,T)$ are defined over $\mathcal{O}$ and the hyperspecial subgroup
$G(F)_{x,0}$ at $x$ is $G(\mathcal{O})$.
\end{comment}
Let $I$ be the Iwahori
subgroup of $G$ which is in good position with respect to $(\bar{B},T)$
(note here that we are taking the opposite Borel) and let $I_{0+}$ denote
its pro-unipotent radical. Put $T(F)_{0+}=I_{0+}\cap T(F)_{0}$. Then
$I/I_{0+}\cong T(F)_{0}/T(F)_{0+}$. Since $\chi$ is depth-zero,
it factors through $T(F)_{0}/T(F)_{0+}$ and consequently lifts to
a character of $I$ which we denote by $\rho$. The pair $(I,\rho)$
is then a $G$-cover of $(T(F)_{0},\chi)$ \cite{Hai}.
\begin{comment}
Let $G(F)_{x,0+}$ denote the pro-unipotent radical of $G(F)_{x,0}$.
Then\\
$G(F)_{x,0}/G(F)_{x,0+}\cong G(\mathbb{F}_{q})$. By conjugating
with $T(F)$ if required, we can assume that $\psi$ is of generic-depth
zero at $x$, i.e., $\psi|U(F)\cap G(F)_{x,0}$ factors through a
generic character of $\text{\ensuremath{\underbar{\ensuremath{\psi}}}}$
of $U(\mathbb{F}_{q})=U(F)\cap G(F)_{x,0}/U(F)\cap G(F)_{x,0+}$.
\end{comment}
Define $\phi:G(F)\rightarrow\mathbb{C}$ to be the function supported
on $U(F).(I\cap\bar{B})$ such that $\phi(ui)=\psi(u)\chi(i)$ for
$u\in U(F)$ and $i\in I\cap\bar{B}$. There is an isomorphism of
$G(\mathbb{F}_{q})$-spaces
\[
(c\text{-ind}_{U(F)}^{G(F)}\psi)^{G(F)_{x,0+}}\cong\text{ind}_{U(\mathbb{F}_{q})}^{G(\mathbb{F}_{q})}\text{\ensuremath{\underbar{\ensuremath{\psi}}}}.
\]
Under this isomorphism, $\phi$ maps to a function $\text{\ensuremath{\underbar{\ensuremath{\phi}}}}:G(\mathbb{F}_{q})\rightarrow\mathbb{C}$
which is supported on $U(\mathbb{F}_{q}).\bar{B}(\mathbb{F}_{q})$
and such that $\text{\ensuremath{\underbar{\ensuremath{\phi}}}}(ub)=\text{\ensuremath{\underbar{\ensuremath{\psi}}}}(u)\chi(b)$
for $u\in U(\mathbb{F}_{q})$ and $b\in\bar{B}(\mathbb{F}_{q})$.
Now $(\text{ind}_{U(\mathbb{F}_{q})}^{G(\mathbb{F}_{q})}\text{\ensuremath{\underbar{\ensuremath{\psi}}}})^{\chi}$
is an irreducible $\mathcal{H}_{W,\chi}$-module isomorphic to the
$\chi$-isotypical component of the irreducible $\text{\ensuremath{\underbar{\ensuremath{\psi}}}}$-generic
constituent of $\text{ind}_{\bar{B}(\mathbb{F}_{q})}^{G(\mathbb{\mathbb{F}}_{q})}\chi$.
Observe that $\text{\ensuremath{\underbar{\ensuremath{\phi}}}}\in(\text{ind}_{U(\mathbb{F}_{q})}^{G(\mathbb{F}_{q})}\text{\ensuremath{\underbar{\ensuremath{\psi}}}})^{\chi}$.
If $\chi$ is trivial then $(\text{ind}_{U(\mathbb{F}_{q})}^{G(\mathbb{F}_{q})}\text{\ensuremath{\underbar{\ensuremath{\psi}}}})^{\chi}$
corresponds to the Steinberg constituent of $\text{ind}_{\bar{B}(\mathbb{F}_{q})}^{G(\mathbb{\mathbb{F}}_{q})}\chi$.
If $G$ has connected center, then it is shown in \cite{Re02}*{\S 7.2, 2nd last paragraph}
that $(\text{ind}_{U(\mathbb{F}_{q})}^{G(\mathbb{F}_{q})}\text{\ensuremath{\underbar{\ensuremath{\psi}}}})^{\chi}\cong\mathrm{sgn}$.
Thus in either case, the $1$-dimensional space spanned by $\text{\ensuremath{\underbar{\ensuremath{\phi}}}}$
affords the $\mathrm{sgn}$ representation of $\mathcal{H}_{W,\chi}$.
Consequently, the $1$-dimensional space spanned by $\text{\ensuremath{\phi}}$
affords the $\mathrm{sgn}$ representation of $\mathcal{H}_{W,\chi}$.
It is readily checked that $\phi$ maps to $1$ under the isomorphism
of Theorem \ref{thm:cyc}. It follows that $\phi$ is a generator
of the cyclic $\mathcal{H}(G,\rho)$-module $(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}$.
We have by Frobenius reciprocity:
\[
\mathrm{Hom}_{\mathcal{H}_{W,\chi}}(\mathrm{sgn},(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho})\cong\mathrm{Hom}_{\mathcal{H}(G,\rho)}(\mathcal{H}(G,\rho)\underset{\mathcal{H}_{W,\chi}}{\otimes}\mathrm{sgn},(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}).
\]
This isomorphism sends the homomorphism $1\mapsto\phi$ to the homomorphism $1\otimes1\mapsto\phi$.
Theorem \ref{thm:main} now follows from the fact that $\mathcal{H}(G,\rho)\underset{\mathcal{H}_{W,\chi}}{\otimes}\mathrm{sgn}$
and $(c\text{-ind}_{U(F)}^{G(F)}\psi)^{\rho}$ are free $\mathcal{H}(T,\chi)$-modules
generated by $1\otimes1$ and $\phi$ respectively.
\section{Acknowledgments}
The first named author benefitted from discussions with Dipendra Prasad.
The authors are thankful to the anonymous referee for pointing out a mistake in an earlier draft of this article.
\begin{bibdiv}
\begin{biblist}
\bib{AdMi}{article}{title={Regular Bernstein blocks}, author={Jeffrey D. Adler and Manish Mishra},
journal={J. Reine Angew. Math.},
status={to appear},
eprint={arXiv:1909.09966} }
\bib{Bl}{article}{AUTHOR = {Blondel, Corinne}, TITLE = {Crit\`ere d'injectivit\'{e} pour l'application de {J}acquet}, JOURNAL = {C. R. Acad. Sci. Paris S\'{e}r. I Math.}, FJOURNAL = {Comptes Rendus de l'Acad\'{e}mie des Sciences. S\'{e}rie I. Math\'{e}matique}, VOLUME = {325}, YEAR = {1997}, NUMBER = {11}, PAGES = {1149--1152}, ISSN = {0764-4442}, MRCLASS = {22E50}, MRNUMBER = {1490115}, MRREVIEWER = {David Manderscheid}, DOI = {10.1016/S0764-4442(97)83544-6}, URL = {https://doi.org/10.1016/S0764-4442(97)83544-6}, }
\bib{BH03}{article}{AUTHOR = {Bushnell, Colin J.}, author={Henniart, Guy}, TITLE = {Generalized {W}hittaker models and the {B}ernstein center}, JOURNAL = {Amer. J. Math.}, FJOURNAL = {American Journal of Mathematics}, VOLUME = {125}, YEAR = {2003}, NUMBER = {3}, PAGES = {513--547}, ISSN = {0002-9327}, MRCLASS = {22E50 (11F70)}, MRNUMBER = {1981032}, URL = {http://muse.jhu.edu/journals/american_journal_of_mathematics/v125/125.3bushnell.pdf}, }
\bib{BK98}{article}{AUTHOR = {Bushnell, Colin J.}, author= {Kutzko, Philip C.}, TITLE = {Smooth representations of reductive {$p$}-adic groups: structure theory via types}, JOURNAL = {Proc. London Math. Soc. (3)}, FJOURNAL = {Proceedings of the London Mathematical Society. Third Series}, VOLUME = {77}, YEAR = {1998}, NUMBER = {3}, PAGES = {582--634}, ISSN = {0024-6115}, MRCLASS = {22E50 (22E35)}, MRNUMBER = {1643417}, MRREVIEWER = {David Goldberg}, DOI = {10.1112/S0024611598000574}, URL = {https://doi.org/10.1112/S0024611598000574}, }
\bib{Car85}{book}{AUTHOR = {Carter, Roger W.}, TITLE = {Finite groups of {L}ie type}, SERIES = {Pure and Applied Mathematics (New York)}, NOTE = {Conjugacy classes and complex characters, A Wiley-Interscience Publication}, PUBLISHER = {John Wiley \& Sons, Inc., New York}, YEAR = {1985}, PAGES = {xii+544}, ISBN = {0-471-90554-2}, MRCLASS = {20G40 (20-02 20C15)}, MRNUMBER = {794307}, MRREVIEWER = {David B. Surowski}, }
\bib{CS18}{article}{title={Iwahori component of the Gelfand--Graev representation},
author={Chan, Kei Yuen},
author={Savin, Gordan},
journal={Mathematische Zeitschrift}, volume={288}, number={1-2}, pages={125--133}, year={2018}, publisher={Springer} }
\bib{DeRe10}{article}{ AUTHOR = {DeBacker, Stephen}, author={Reeder, Mark},
TITLE = {On some generic very cuspidal representations},
JOURNAL = {Compos. Math.},
FJOURNAL = {Compositio Mathematica},
VOLUME = {146},
YEAR = {2010},
NUMBER = {4},
PAGES = {1029--1055},
ISSN = {0010-437X},
MRCLASS = {20G05 (20G25 22E50)},
MRNUMBER = {2660683},
MRREVIEWER = {Dubravka Ban},
DOI = {10.1112/S0010437X10004653},
URL = {https://doi.org/10.1112/S0010437X10004653},
}
\bib{Fin}{article} {
AUTHOR = {Fintzen, Jessica},
TITLE = {Types for tame {$p$}-adic groups},
JOURNAL = {Ann. of Math. (2)},
FJOURNAL = {Annals of Mathematics. Second Series},
VOLUME = {193},
YEAR = {2021},
NUMBER = {1},
PAGES = {303--346},
ISSN = {0003-486X},
MRCLASS = {22E50},
MRNUMBER = {4199732},
DOI = {10.4007/annals.2021.193.1.4},
URL = {https://doi.org/10.4007/annals.2021.193.1.4},
}
\bib{Hai}{article}{title={On Hecke algebra isomorphisms and types for depth-zero principal series}, author={Haines, Thomas J.}, journal={expository note available at www.math.umd.edu/tjh}}
\bib{Kim07}{article}{AUTHOR = {Kim, Ju-Lee}, TITLE = {Supercuspidal representations: an exhaustion theorem}, JOURNAL = {J. Amer. Math. Soc.}, FJOURNAL = {Journal of the American Mathematical Society}, VOLUME = {20}, YEAR = {2007}, NUMBER = {2}, PAGES = {273--320}, ISSN = {0894-0347}, MRCLASS = {22E50 (20G25 22E35)}, MRNUMBER = {2276772}, MRREVIEWER = {U. K. Anandavardhanan}, DOI = {10.1090/S0894-0347-06-00544-3}, URL = {https://doi.org/10.1090/S0894-0347-06-00544-3}, }
\bib{KY17}{incollection}{AUTHOR = {Kim, Ju-Lee}, author={Yu, Jiu-Kang}, TITLE = {Construction of tame types}, BOOKTITLE = {Representation theory, number theory, and invariant theory}, SERIES = {Progr. Math.}, VOLUME = {323}, PAGES = {337--357}, PUBLISHER = {Birkh\"{a}user/Springer, Cham}, YEAR = {2017}, MRCLASS = {22E50}, MRNUMBER = {3753917}, MRREVIEWER = {Alan Roche}, }
\bib{Re02}{article}{AUTHOR = {Reeder, Mark}, TITLE = {Isogenies of {H}ecke algebras and a {L}anglands correspondence for ramified principal series representations}, JOURNAL = {Represent. Theory}, FJOURNAL = {Representation Theory. An Electronic Journal of the American Mathematical Society}, VOLUME = {6}, YEAR = {2002}, PAGES = {101--126}, MRCLASS = {22E50 (20C08)}, MRNUMBER = {1915088}, MRREVIEWER = {A. Raghuram}, DOI = {10.1090/S1088-4165-02-00167-X}, URL = {https://doi.org/10.1090/S1088-4165-02-00167-X}, }
\bib{Ro98}{article}{AUTHOR = {Roche, Alan}, TITLE = {Types and {H}ecke algebras for principal series representations of split reductive {$p$}-adic groups}, JOURNAL = {Ann. Sci. \'{E}cole Norm. Sup. (4)}, FJOURNAL = {Annales Scientifiques de l'\'{E}cole Normale Sup\'{e}rieure. Quatri\`eme S\'{e}rie}, VOLUME = {31}, YEAR = {1998}, NUMBER = {3}, PAGES = {361--413}, ISSN = {0012-9593}, MRCLASS = {22E50}, MRNUMBER = {1621409}, MRREVIEWER = {Bertrand Lemaire}, DOI = {10.1016/S0012-9593(98)80139-0}, URL = {https://doi.org/10.1016/S0012-9593(98)80139-0}, }
\bib{Yu01}{article}{AUTHOR = {Yu, Jiu-Kang}, TITLE = {Construction of tame supercuspidal representations}, JOURNAL = {J. Amer. Math. Soc.}, FJOURNAL = {Journal of the American Mathematical Society}, VOLUME = {14}, YEAR = {2001}, NUMBER = {3}, PAGES = {579--622}, ISSN = {0894-0347}, MRCLASS = {22E50}, MRNUMBER = {1824988}, MRREVIEWER = {Bertrand Lemaire}, DOI = {10.1090/S0894-0347-01-00363-0}, URL = {https://doi.org/10.1090/S0894-0347-01-00363-0}, }
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
In mobile communications, the total throughput can be significantly enhanced by simultaneously transmitting/receiving as many data streams as possible with a large number of transmit/receive antennas. Therefore, multiple-input multiple-output (MIMO) technology may dramatically improve a system's spectral and energy efficiency. In a practical uplink multiuser large MIMO system, the numbers of transmit and receive antennas can be comparable \cite{rusek2013scaling}. In this scenario, low-complexity sub-optimal detectors, such as linear zero forcing (ZF), minimum-mean-square-error (MMSE), and successive interference cancellation (SIC) receivers, cannot achieve full diversity \cite{vardhan2008low}. In contrast, the maximum likelihood (ML) detector performs optimally, but its computational cost increases exponentially with the number of transmit antennas, which is prohibitive in large MIMO systems. Therefore, low-complexity near-optimal detection is an important challenge in optimizing large MIMO systems \cite{rusek2013scaling, chockalingam2014large, mandloi2017low}.
\subsection{Related Works}
The sequential sphere decoder (SD), which offers near-optimal performance with respect to (w.r.t.) the optimal ML detector at reduced complexity, has been well optimized for small- and moderate-size MIMO systems. Among its variants, the Schnorr--Euchner SD (SE-SD) \cite{schnorr1994lattice, agrell2002closest} achieves the same performance as the conventional Fincke--Pohst SD (FP-SD) \cite{fincke1985improved, hassibi2005sphere} with reduced complexity. However, its complexity remains very high in large MIMO systems \cite{nguyen2019qr}. To address the problems of the sequential SD, the $K$-best SD (KSD) \cite{guo2006algorithm} was proposed to achieve fixed and reduced complexity. However, this algorithm suffers from performance degradation and does not guarantee complexity reduction at high signal-to-noise ratios (SNRs).
The aforementioned challenges of the sequential SD and KSD make them infeasible for large MIMO systems. However, the increasing application of deep learning (DL) in wireless communication creates room for further optimization of SD schemes. Particularly, the initial works on DL-aided SD in \cite{askri2019dnn} and \cite{mohammadkarimi2018deep} attempt to improve SD by employing a deep neural network (DNN) to learn the initial radius. Whereas a single radius (SR) is used in \cite{askri2019dnn}, multiple radii (MR) are employed in \cite{mohammadkarimi2018deep}. In this study, to distinguish them, we refer to the former as the SR-DL-SD scheme and to the latter as the MR-DL-SD scheme. Furthermore, as an improvement of \cite{askri2019dnn} and \cite{mohammadkarimi2018deep}, Weon et al. \cite{weon2020learning} propose a learning-aided deep path-prediction scheme for sphere decoding (DPP-SD) in large MIMO systems. Specifically, the minimum radius for each sub-tree is learned by a DNN, resulting in more significant complexity reduction w.r.t. the prior SR-DL-SD and MR-DL-SD schemes. {The application of DL to symbol detection in MIMO systems is not limited to the aforementioned DL-aided SD schemes. For example, various DL models have been proposed to directly estimate the transmitted signal vector \cite{nguyen2019deep,samuel2019learning,he2018model,khani2019adaptive,gao2018sparsely,samuel2017deep,wei2020learned, takabe2019trainable, li2020deep, sun2019learning, he2020model}. In general, these schemes have been shown to perform better than traditional linear detectors, such as ZF and MMSE, with low complexity. We further discuss these schemes in Section III-A.}
In all three DL-aided SD schemes mentioned above, the common idea is to predict radii for the sequential SD. This approach has some limitations in the offline learning phase, as well as the online application. First, in the DNN training phase in \cite{askri2019dnn, mohammadkarimi2018deep,weon2020learning}, conventional SD needs to be performed first to generate training labels, i.e., the radius. Consequently, time and computational complexity requirements are high to train these DNNs. Although the training phase can be performed offline, these time and resource requirements make such schemes less efficient. Second, although the radius plays an important role in the search efficiency of conventional FP-SD, it becomes less significant in SE-SD \cite{hassibi2005sphere}. Therefore, using the predicted radius becomes less efficient in SE-SD, especially for high SNRs, for which a relatively reliable radius can be computed using the conventional formula \cite{hassibi2005sphere}. Moreover, in the KSD, the breadth-first search does not require a radius, which implies that the learning objectives in \cite{askri2019dnn, mohammadkarimi2018deep,weon2020learning} are inapplicable to the KSD scheme.
\subsection{Contributions}
In this study, we propose the fast DL-aided SD (FDL-SD) and fast DL-aided KSD (FDL-KSD) algorithms, which can overcome the limitations of the existing DL-aided SD schemes via a novel application of DL to SD. Specifically, we use a DNN to generate a highly reliable initial candidate for the search in SD, rather than generating the radius as in the existing DL-aided SD schemes \cite{askri2019dnn, mohammadkarimi2018deep,weon2020learning}. Furthermore, the output of the DNN facilitates a candidate/layer-ordering scheme and an early rejection scheme to significantly reduce the complexity. We note that the sequential SD and KSD have their own advantages and disadvantages. Specifically, the former guarantees near-optimal performance at the price of high computational complexity. By contrast, the KSD with reduced complexity can have performance degradation. In particular, the performance--complexity tradeoff of these schemes significantly depends on design parameters, which are the radius in the sequential SD and the number of surviving paths, i.e., $K$, in KSD. {In this work, we propose leveraging the fast-convergence sparsely connected detection network (FS-Net), a DNN architecture that was introduced in \cite{nguyen2019deep}, to further optimize their performance--complexity tradeoff}, and at the same time, to mitigate the dependence on the radius and $K$. Our specific contributions can be summarized as follows:
\begin{itemize}
\item For the application of DL to the SD scheme, rather than predicting the radius, we propose applying the FS-Net to generate a reliable initial solution that accelerates the search. Unlike other architectures that use DNNs for learning the radius \cite{askri2019dnn, mohammadkarimi2018deep, weon2020learning}, the FS-Net can be trained easily without performing conventional SD; this considerably reduces the time and computational resources required for the training phase.
\item We propose the FDL-SD scheme, which achieves significant complexity reduction while fully preserving the performance of the conventional SD. Specifically, we exploit the output of the FS-Net to facilitate the search in SD based on the following ideas:
\textit{(i)} First, the output of the FS-Net, which is the approximate of the transmitted signal vector, is employed to determine the search order in the SD scheme. In particular, the candidates are ordered such that those closer to the FS-Net's output are tested first. This approach enhances the chance that the optimal solution is found early and accelerates the shrinking of the sphere, resulting in complexity reduction of the proposed FDL-SD scheme.
\textit{(ii)} Second, we propose a layer-ordering scheme. Specifically, we found that the sequential tree-like search in SD can be considered as the process of exploring and correcting incorrectly detected symbols. This implies that the errors at the lower layers can be explored and corrected sooner. Motivated by this, we propose ordering the layers of candidates so that errors are more likely to occur at low layers. This order is determined based on the FS-Net's output.
\item In the proposed FDL-KSD scheme, the FS-Net's output is also leveraged to optimize the search process of the KSD. In this scheme, we employ the cost metric of the FS-Net-based solution as a threshold to reject unpromising candidates early. Furthermore, the layer ordering in \textit{(ii)} is used to reduce the chance that the optimal solution is rejected early. This results in not only performance improvement, but also complexity reduction w.r.t. the conventional KSD.
\item Our extensive simulation results show that the FDL-SD scheme achieves a remarkable complexity reduction without any performance loss w.r.t. the conventional SD. In particular, the complexity reduction attained by our proposed FDL-SD scheme is significantly greater than those acquired by the existing DL-aided SD schemes. Furthermore, the proposed FDL-KSD scheme exhibits a considerable improvement in the performance--complexity tradeoff w.r.t. the conventional KSD.
\end{itemize}
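To make idea \textit{(i)} concrete, the per-layer candidate ordering can be sketched as follows. This is a minimal illustration, not the full FDL-SD implementation; the real-valued alphabet and the scalar estimate standing in for one coordinate of the FS-Net output are hypothetical choices.

```python
def ordered_alphabet(s_hat_m, alphabet):
    """Schnorr-Euchner-style per-layer ordering: constellation points
    closest to the initial (e.g., FS-Net) estimate are examined first,
    so the sphere is likely to shrink after only a few tested paths."""
    return sorted(alphabet, key=lambda a: abs(a - s_hat_m))

# Real-valued 16-QAM alphabet; 0.8 plays the role of one coordinate
# of a (hypothetical) FS-Net output.
order = ordered_alphabet(0.8, [-3, -1, 1, 3])
print(order)  # [1, -1, 3, -3]
```

With a reliable initial estimate, the first branch examined at each layer is very likely the correct one, which is precisely what accelerates the shrinking of the sphere.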
We note that the aforementioned applications \textit{(i)} and \textit{(ii)} of DL to SD/KSD are not limited by the use of the FS-Net. They can also operate with an initial solution obtained by other linear or DL-based detectors. However, we found that FS-Net can be highly efficient for generating an initial solution for the proposed FDL-SD/KSD schemes, thus yielding significant complexity reduction. Specifically, the more reliable the initial solution, the greater the complexity reduction gain that can be achieved by the FDL-SD/KSD. However, it is worth noting that the computational complexity required to generate the initial solution must be included in the overall complexity of the FDL-SD/KSD. Therefore, the initial solution should be generated by a detector with a superior performance--complexity tradeoff, such as FS-Net. We further discuss this issue in Section III-A.
In general, the integration of the FS-Net with the SD/KSD in this work and that with the tabu search (TS) scheme in \cite{nguyen2019deep} are common in enabling a favorable initialization of the search. However, they are based on different motivations and ideas, and also have different efficiencies. Specifically, in the TS, the search starts from a candidate and moves over its neighbors successively to find a near-optimal solution. Because a very large number of moves are required to ensure that a near-optimal solution is attained in massive MIMO systems, it is necessary to start from a reliable point and terminate the search early. In \cite{nguyen2019deep}, we proposed using the FS-Net output as the initial point for moving among neighbors, and based on its quality, the search can be terminated early to reduce the complexity. In this sense, the search can end before the optimal solution is reached, causing performance loss for DL-TS. In contrast, the proposed FDL-SD/KSD schemes find exactly the same solution as their conventional counterparts, but much faster. To this end, candidate/layer ordering is proposed to accelerate the shrinking of the hypersphere to reach the solution as soon as possible; the search process is not terminated early. As a result, the FDL-SD fully preserves the performance, whereas the FDL-KSD provides improved performance w.r.t. the conventional SD/KSD.
\textit{Paper structure}: The rest of the paper is organized as follows: Section II presents the system model. In Section III, the existing DNNs for MIMO detection are reviewed, and the FS-Net's architecture and operation are described. The proposed FDL-SD and FDL-KSD schemes are presented in Sections IV and V, respectively. In Section VI, the simulation results and numerical discussions are presented. Finally, conclusions are drawn in Section VII.
\textit{Notations}: Throughout this paper, scalars, vectors, and matrices are denoted by lowercase, bold-face lowercase, and bold-face uppercase letters, respectively. The $i$th element of a vector $\textbf{\textit{a}}$ is denoted by $a_i$, and the $(i,j)$th element of a matrix $\textbf{\textit{A}}$ is denoted by $a_{i,j}$. $(\cdot)^T$ denotes the transpose of a matrix. Furthermore, $\abs{\cdot}$ and $\norm{\cdot}$ represent the absolute value of a scalar and the Frobenius norm of a matrix, respectively; $\sim$ means $\textit{distributed as}$.
\section{{System Model and MIMO Detection}}
\subsection{{System Model}}
Consider the uplink of a multiuser MIMO system, where the base station is equipped with $N_r$ receive antennas, and the total number of transmit antennas among all users is $N_t$, $N_r \geq N_t$. The received signal vector $\tilde{\textbf{\textit{y}}}$ is given by
\eqn{
\label{complex SM}
\tilde{\textbf{\textit{y}}} = \tilde{\mH} \tilde{\textbf{\textit{s}}} + \tilde{\textbf{\textit{n}}},
}
where $\tilde{\textbf{\textit{s}}} = [\tilde{s}_1, \tilde{s}_2, \ldots, \tilde{s}_{N_t}]^T$ is the vector of transmitted symbols with $\mean{\abs{\tilde{s}_i}^2} = \smt$. The transmitted symbols $\tilde{s}_i, i = 1, 2, \ldots, N_t,$ are drawn independently from a complex constellation $\tilde{\setA}$ of $\tilde{Q}$ points. The set of all possible transmitted vectors forms an $N_t$-dimensional complex constellation $\tilde{\setA}^{N_t}$ consisting of $\tilde{Q}^{N_t}$ vectors, i.e., $\tilde{\textbf{\textit{s}}} \in \tilde{\setA}^{N_t}$. In \eqref{complex SM}, $\tilde{\textbf{\textit{n}}}$ is a vector of independent and identically distributed (i.i.d.) additive white Gaussian noise (AWGN) samples, i.e., $\tilde{n}_i \sim \mathcal{CN}(0,\smn)$. Furthermore, $\tilde{\mH}$ denotes an $N_r \times N_t$ channel matrix consisting of entries $\tilde{h}_{i,j}$, where $\tilde{h}_{i,j}$ represents the complex channel gain between the $j$th transmit antenna and the $i$th receive antenna.
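As an illustration, the complex model \eqref{complex SM} can be simulated directly. The antenna counts, unit-energy QPSK alphabet, and noise level below are arbitrary toy choices, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_t, N_r = 4, 8                         # illustrative antenna counts
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # E|s|^2 = 1

s = rng.choice(qpsk, size=N_t)          # i.i.d. symbols drawn from the alphabet
H = (rng.standard_normal((N_r, N_t))
     + 1j * rng.standard_normal((N_r, N_t))) / np.sqrt(2)  # Rayleigh fading
sigma_n = 0.1
n = sigma_n * (rng.standard_normal(N_r)
               + 1j * rng.standard_normal(N_r)) / np.sqrt(2)

y = H @ s + n                           # received vector, as in Eq. (1)
```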
Let $\textbf{\textit{s}}, \textbf{\textit{y}}, \textbf{\textit{n}},$ and $\mH$ denote the $(M \times 1)$ equivalent real transmitted signal vector, the $(N \times 1)$ equivalent real received signal vector, the $(N \times 1)$ equivalent real AWGN noise vector, and the $(N \times M)$ equivalent real channel matrix, respectively, with $M = 2N_t, N = 2N_r$, where
\eq{
\textbf{\textit{s}} =
\begin{bmatrix}
\re{\tilde{\textbf{\textit{s}}}}\\
\im {\tilde{\textbf{\textit{s}}}}
\end{bmatrix}, \hspace{0.1cm}
\textbf{\textit{y}} =
\begin{bmatrix}
\re{\tilde{\textbf{\textit{y}}}}\\
\im {\tilde{\textbf{\textit{y}}}}
\end{bmatrix}, \hspace{0.1cm}
\textbf{\textit{n}} =
\begin{bmatrix}
\re{\tilde{\textbf{\textit{n}}}}\\
\im {\tilde{\textbf{\textit{n}}}}
\end{bmatrix},
}
and
\begin{align*}
\mH =
\begin{bmatrix}
\re {\tilde{\mH}} &-\im {\tilde{\mH}}\\
\im {\tilde{\mH}} &\re {\tilde{\mH}}
\end{bmatrix}.
\end{align*}
Here, $\re {\cdot}$ and $\im {\cdot}$ denote the real and imaginary parts of a complex vector or matrix, respectively. In practical communication systems, high-order QAM such as 16-QAM and 64-QAM is more widely employed than high-order PSK modulation schemes. Therefore, we assume that 16-QAM and 64-QAM schemes are employed for high-order modulations, whereas QPSK is considered for low-order modulation. Then, the complex signal model \eqref{complex SM} can be converted to an equivalent real-signal model
\eqn{
\textbf{\textit{y}} = \mH \textbf{\textit{s}} + \textbf{\textit{n}}. \label{real SM}
}
The set of all possible real-valued transmitted vectors forms an $M$-dimensional constellation $\setA^M$ consisting of $Q^M$ vectors, i.e., $\textbf{\textit{s}} \in \setA^M$, where $\setA$ is the real-valued symbol set. In this study, we use the equivalent real-valued signal model in \eqref{real SM} because it can be employed readily for both SD algorithms and DNNs.
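The complex-to-real conversion above can be written compactly; the following sketch checks that the stacked real model \eqref{real SM} reproduces the complex one (a noiseless instance is used so the check is deterministic):

```python
import numpy as np

def complex_to_real(y_c, H_c):
    """Stack real/imaginary parts: y = [Re y; Im y] and
    H = [[Re H, -Im H], [Im H, Re H]], so that y = H s holds
    with s = [Re s; Im s]."""
    y = np.concatenate([y_c.real, y_c.imag])
    H = np.block([[H_c.real, -H_c.imag],
                  [H_c.imag,  H_c.real]])
    return y, H

rng = np.random.default_rng(1)
H_c = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
s_c = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=2)
y_c = H_c @ s_c                               # noiseless complex model

y, H = complex_to_real(y_c, H_c)
s = np.concatenate([s_c.real, s_c.imag])
assert np.allclose(H @ s, y)                  # real model matches the complex one
```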
\subsection{{Detection in MIMO Systems}}
\subsubsection{{Conventional optimal solution}}
The ML solution can be written as
\eqn {
\hat{\textbf{\textit{s}}}_{ML} = \arg \min_{\textbf{\textit{s}} \in \setA^{M}} \norm {\textbf{\textit{y}} - \mH \textbf{\textit{s}}}^2. \label{ML_solution}
}
The computational complexity of ML detection in \eqref{ML_solution} is exponential with $M$ \cite{nguyen2019qr, nguyen2019deep}, which results in extremely high complexity for large MIMO systems, where $M$ is very large.
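For completeness, the exhaustive ML search of \eqref{ML_solution} can be sketched as below; the $Q^{M}$ enumeration is exactly what makes it infeasible for large $M$. Toy sizes and a real-valued binary alphabet are assumed purely for illustration.

```python
import numpy as np
from itertools import product

def ml_detect(y, H, alphabet):
    """Brute-force ML: enumerate all Q^M candidates and keep the one
    minimizing ||y - H s||^2. Exponential in M -- toy sizes only."""
    best, best_cost = None, np.inf
    for cand in product(alphabet, repeat=H.shape[1]):
        s = np.asarray(cand, dtype=float)
        cost = float(np.sum((y - H @ s) ** 2))
        if cost < best_cost:
            best, best_cost = s, cost
    return best

rng = np.random.default_rng(2)
A = (-1.0, 1.0)                         # real-valued alphabet per dimension
H = rng.standard_normal((6, 4))
s_true = rng.choice(A, size=4)
y = H @ s_true                          # noiseless, so the ML search recovers s_true
assert np.array_equal(ml_detect(y, H, A), s_true)
```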
\subsubsection{{DNN-based solution}}
Consider a DNN of $L$ layers with input vector $\textbf{\textit{x}}^{[1]}$, including the information contained in $\textbf{\textit{y}}$ and $\mH$, and the output vector $\hat{\textbf{\textit{s}}}^{[L]}$. Let $\hat{\textbf{\textit{s}}} = \mathcal{Q} \left( \hat{\textbf{\textit{s}}}^{[L]} \right)$,
where $\mathcal{Q} (\cdot)$ is the element-wise quantization operator that quantizes $ \hat{s}^{[L]}_m \in \mathbb{R}$ to $\hat{s}_m \in \setA, m=1, \ldots,M$. The DNN can be trained so that $\hat{\textbf{\textit{s}}}$ is an approximate of the transmitted signal vector $\textbf{\textit{s}}$. In the DNN, serial nonlinear transformations are performed to map $\textbf{\textit{x}}^{[1]}$ to $\hat{\textbf{\textit{s}}}^{[L]}$ as follows:
\begin{align*}
\hat{\textbf{\textit{s}}}^{[L]} = f^{[L]} \left( \ldots \left(f^{[1]} \left(\textbf{\textit{x}}^{[1]}; \textbf{\textit{P}}^{[1]} \right); \ldots \right); \textbf{\textit{P}}^{[L]} \right), \nbthis \label{DNN_1}
\end{align*}
where
\begin{align*}
f^{[l]} \left(\textbf{\textit{x}}^{[l]}; \textbf{\textit{P}}^{[l]} \right) = \sigma^{[l]} \left( \textbf{\textit{W}}^{[l]} \textbf{\textit{x}}^{[l]} + \textbf{\textit{b}}^{[l]} \right) \nbthis \label{f^{[l]}}
\end{align*}
represents the nonlinear transformation in the $l$th layer with the input vector $\textbf{\textit{x}}^{[l]}$, activation function $\sigma^{[l]}$, and $\textbf{\textit{P}}^{[l]} = \left\{\textbf{\textit{W}}^{[l]}, \textbf{\textit{b}}^{[l]}\right\}$ consisting of the weighting matrix $\textbf{\textit{W}}^{[l]}$ and bias vector $\textbf{\textit{b}}^{[l]}$ {whose size depends on the structure of the input vector $\textbf{\textit{x}}^{[l]}$}.
The computational complexity of a DNN depends on its depth and the number of neurons in each layer, which are both determined by the size of the input vector. {These are usually optimized and selected by simulations; however, in general, a larger input vector requires a deeper DNN and/or more neurons in each layer.} In a DNN for large MIMO detection, the input is a high-dimensional vector because it contains the information of a large-size channel matrix and the received signal vector. As a result, large-size weight matrices and bias vectors, i.e., $\textbf{\textit{W}}^{[l]}$ and $\textbf{\textit{b}}^{[l]}$, are required in \eqref{f^{[l]}}. Furthermore, in large MIMO systems, many hidden layers and neurons are required for the DNN to extract meaningful features and patterns from the large amount of input data and provide high accuracy. Therefore, the computational complexity of the detection network typically becomes very high in large MIMO systems.
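Equations \eqref{DNN_1} and \eqref{f^{[l]}} amount to the following forward pass. This is a generic sketch with arbitrary toy layer widths and activations, not the FS-Net architecture.

```python
import numpy as np

def forward(x, params, activations):
    """Serial transforms of Eq. (5): x^{[l+1]} = sigma^{[l]}(W^{[l]} x^{[l]} + b^{[l]})."""
    for (W, b), sigma in zip(params, activations):
        x = sigma(W @ x + b)
    return x

rng = np.random.default_rng(3)
widths = [16, 32, 32, 8]                      # toy layer sizes
params = [(0.1 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(widths[:-1], widths[1:])]
relu = lambda v: np.maximum(v, 0.0)
activations = [relu, relu, np.tanh]           # tanh keeps the output in [-1, 1]

s_hat = forward(rng.standard_normal(widths[0]), params, activations)
```

The output `s_hat` would then be element-wise quantized to the nearest constellation points, as described after Eq. (4).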
\section{DNNs for MIMO Detection and the FS-Net}
In this section, we review the state-of-the-art DNN architectures for MIMO detection and explain why FS-Net is chosen for incorporation with the proposed FDL-SD and FDL-KSD schemes. Then, the network architecture and operation of the FS-Net are briefly introduced.
\subsection{DNNs for MIMO Detection}
A number of DNNs have been designed for symbol detection in large MIMO systems \cite{samuel2019learning, nguyen2019deep, he2018model, khani2019adaptive, gao2018sparsely, samuel2017deep, wei2020learned, takabe2019trainable, li2020deep, sun2019learning}. Specifically, Samuel et al. in \cite{samuel2019learning} and \cite{samuel2017deep} introduced the first DNN-based detector, called the detection network (DetNet). However, DetNet performs poorly for large MIMO systems with $M \approx N$; it also has a complicated network architecture with high computational complexity. To overcome these challenges, the sparsely connected detection network (ScNet) \cite{gao2018sparsely} and FS-Net \cite{nguyen2019deep} were proposed. They simplify the network architecture and improve the loss function of DetNet, which leads to significant performance improvement and complexity reduction. Furthermore, in \cite{wei2020learned}, a learned conjugate gradient descent network (LcgNet) was proposed. DetNet, ScNet, FS-Net, and LcgNet are similar in the sense that they are all obtained by unfolding the iterative gradient descent method. A trainable projected gradient-detector (TPG-detector) was proposed in \cite{takabe2019trainable} to improve the convergence of the projected gradient method. Recently, DL-aided detectors based on iterative search, namely the DL-based likelihood ascent search (DPLAS) and the learning-to-learn iterative search algorithm (LISA), were proposed in \cite{li2020deep} and \cite{sun2019learning}, respectively. By unfolding the orthogonal approximate message passing (OAMP) algorithm \cite{ma2017orthogonal}, He et al. introduced the OAMP-Net \cite{he2018model,he2020model} for symbol detection over both i.i.d. Gaussian and small-size correlated channels. Furthermore, Khani et al. in \cite{khani2019adaptive} focused on realistic channels and proposed the MMNet, which significantly outperforms the OAMP-Net with the same or lower computational complexity.
The main application of DL to SD in this work is to generate a highly reliable candidate $\hat{\textbf{\textit{s}}}$ that is an approximation of the transmitted signal vector $\textbf{\textit{s}}$. This can be achieved by any of the aforementioned DNNs, i.e., DetNet, ScNet, FS-Net, OAMP-Net, MMNet, and LcgNet. In this work, we choose FS-Net because of its low complexity and reliable BER performance. Specifically, among the discussed DNNs, the iterative schemes, i.e., the OAMP-Net and MMNet, require the highest computational complexity because matrix pseudo-inversion is performed in each layer to conduct the linear MMSE estimation \cite{he2018model}, and/or to compute the standard deviation of the Gaussian noise at the denoiser inputs in each layer \cite{khani2019adaptive}. Meanwhile, DetNet employs a dense connection architecture with high-dimensional input vectors in every layer \cite{samuel2019learning,samuel2017deep}, which causes an extremely high computational load. In contrast, FS-Net offers a superior performance--complexity tradeoff: its network architecture is optimized to be very sparse, whereas its loss function is optimized for fast convergence. We also note that the FS-Net can output a reliable solution with only element-wise matrix multiplications, and no matrix inversion is required. Furthermore, the simulation results in \cite{nguyen2019deep} show that FS-Net achieves better performance with lower complexity compared to DetNet, ScNet, and Twin-DNN. Therefore, it is chosen for incorporation with the SD schemes in this work.
\subsection{Network Architecture and Operation of the FS-Net}
\begin{figure*}[t]
\centering
\subfigure[FS-Net architecture]
{
\includegraphics[scale=0.98]{FSNet}
\label{fig_dscnet}
}
\subfigure[$\psi_t(x)$ with $t = 0.5$]
{
\includegraphics[scale=0.55]{fig_phi}
\label{fig_phi}
}
\caption{Illustration of the FS-Net architecture and $\psi_t(x)$ in FS-Net for QPSK, 16-QAM, and 64-QAM.}
\end{figure*}
\begin{algorithm}[t]
\caption{FS-Net scheme for MIMO detection}
\label{al_fsnet}
\begin{algorithmic}[1]
\REQUIRE $\mH, \textbf{\textit{y}}$.
\ENSURE $\hat{\textbf{\textit{s}}}$.
\STATE Compute $\mH^T \mH$ and $\mH^T \textbf{\textit{y}}$.
\STATE {$\hat{\textbf{\textit{s}}}^{[0]} = \textbf{0}$}
\FOR {$l = 1 \rightarrow L$}
\STATE $\vz^{[l]} = \mH^T \mH \hat{\textbf{\textit{s}}}^{[l-1]} - \mH^T \textbf{\textit{y}}$
\STATE {$\hat{\textbf{\textit{s}}}^{[l]} = \psi_t \left(\textbf{\textit{w}}_1^{[l]} \odot \hat{\textbf{\textit{s}}}^{[l-1]} + \textbf{\textit{b}}_1^{[l]}\right) + \psi_t \left(\textbf{\textit{w}}_2^{[l]} \odot \vz^{[l]} + \textbf{\textit{b}}_2^{[l]}\right)$}
\ENDFOR
\STATE {$\hat{\textbf{\textit{s}}} = \mathcal{Q} \left( \hat{\textbf{\textit{s}}}^{[L]} \right)$}
\end{algorithmic}
\end{algorithm}
In the FS-Net, $\hat{\textbf{\textit{s}}}$ is updated over $L$ layers of the DNN by mimicking a projected gradient descent-like ML optimization as follows \cite{samuel2019learning, samuel2017deep}:
\begin{align*}
\hat{\textbf{\textit{s}}}^{[l+1]} &= \Pi \left[ \textbf{\textit{s}} - \delta^{[l]} \frac{\partial \norm {\textbf{\textit{y}} - \mH \textbf{\textit{s}}}^2}{\partial \textbf{\textit{s}}} \right]_{\textbf{\textit{s}} = \hat{\textbf{\textit{s}}}^{[l]}} \\
&= \Pi \left[ \hat{\textbf{\textit{s}}}^{[l]} - \delta^{[l]} (\mH^T \mH \hat{\textbf{\textit{s}}}^{[l]} - \mH^T \textbf{\textit{y}}) \right], \nbthis \label{fsnet_0}
\end{align*}
where $\Pi[\cdot]$ denotes a nonlinear projection operator and $\delta^{[l]}$ is a step size.
The network architecture of the FS-Net is illustrated in Fig. \ref{fig_dscnet}. The operation of the FS-Net is summarized in Algorithm \ref{al_fsnet}, where in step 5, $\{\textbf{\textit{w}}_1^{[l]}, \textbf{\textit{b}}_1^{[l]}, \textbf{\textit{w}}_2^{[l]}, \textbf{\textit{b}}_2^{[l]} \} \in \mathbb{R}^{M \times 1}$ represents the weights and biases of the FS-Net in the $l$th layer, and $\odot$ denotes the element-wise multiplication of two vectors. Furthermore, $\psi_t(\cdot)$ is used to guarantee that the amplitudes of the elements of $\hat{\textbf{\textit{s}}}^{[l]}$ are within the range of the corresponding modulation size, such as $[-1,1]$ for QPSK, $[-3,3]$ for 16-QAM, and $[-7,7]$ for 64-QAM, as illustrated in Fig. \ref{fig_phi}. In particular,
\begin{align*}
\psi_t(x) = -q + \frac{1}{\abs{t}} \sum_{i \in \boldsymbol{\Omega}}[\sigma(x + i + t) - \sigma(x + i- t)]
\end{align*}
with $q=1$, $\boldsymbol{\Omega} = \{0\}$ for QPSK, $q=3$, $\boldsymbol{\Omega} = \{-2,0,2\}$ for 16-QAM, $q=7$, $\boldsymbol{\Omega} = \{-6,-4,-2,0,2,4,6\}$ for 64-QAM, and $\sigma(\cdot)$ is the rectified linear unit (ReLU) activation function. The final detected symbol vector is given as $\hat{\textbf{\textit{s}}} = \mathcal{Q} \left( \hat{\textbf{\textit{s}}}^{[L]} \right)$ in step 7 of Algorithm \ref{al_fsnet}, where $\mathcal{Q} (\cdot)$ represents an element-wise quantization function. During the training of the FS-Net, the following loss function is used \cite{nguyen2019deep}:
\begin{align*}
\mathcal{L} (\textbf{\textit{s}}, \hat{\textbf{\textit{s}}}) = \sum_{l=1}^{L} \log (l) \left[ \norm{\textbf{\textit{s}} - \hat{\textbf{\textit{s}}}^{[l]}}^2 + \xi r (\hat{\textbf{\textit{s}}}^{[l]}, \textbf{\textit{s}}) \right], \nbthis \label{loss_dscnet}
\end{align*}
where $\hat{\textbf{\textit{s}}}^{[l]}$ and $\textbf{\textit{s}}$ are the output of the $l$th layer and the desired transmitted signal vector, respectively, and $r (\hat{\textbf{\textit{s}}}^{[l]}, \textbf{\textit{s}}) = 1 - (|\textbf{\textit{s}}^T \hat{\textbf{\textit{s}}}^{[l]}|) / (\norm{\textbf{\textit{s}}} \lVert \hat{\textbf{\textit{s}}}^{[l]}\rVert) $.
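The piecewise-linear activation $\psi_t(\cdot)$ defined above can be implemented directly from its ReLU decomposition (a minimal NumPy sketch; $t=0.5$ follows Fig. \ref{fig_phi}):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def psi_t(x, q, Omega, t=0.5):
    # psi_t(x) = -q + (1/|t|) * sum_{i in Omega} [ReLU(x + i + t) - ReLU(x + i - t)]
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, -float(q))
    for i in Omega:
        out += (relu(x + i + t) - relu(x + i - t)) / abs(t)
    return out
```

For QPSK ($q=1$, $\boldsymbol{\Omega}=\{0\}$) the output saturates at $\pm 1$, and for 16-QAM ($q=3$, $\boldsymbol{\Omega}=\{-2,0,2\}$) at $\pm 3$, matching the curves in Fig. \ref{fig_phi}.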
In the online application phase, the overall complexity of the FS-Net can be given as
\begin{align*}
\mathcal{C}_{\text{FS-Net}} = M(2N-1) + M^2(2M-1) + L(2N^2 + 5N ), \nbthis \label{comp_fsnet}
\end{align*}
where the first, second, and last terms are the total numbers of additions and multiplications required for the computations of $\mH^T \textbf{\textit{y}}$, $\mH^T \mH$, and the processing in the $L$ layers of the FS-Net, respectively.
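The count in \eqref{comp_fsnet} is straightforward to evaluate for a given configuration (a sketch that simply tabulates the three terms of the formula):

```python
def fsnet_flops(M, N, L):
    # Eq. (comp_fsnet): operations for H^T y, H^T H, and the L layer updates
    return M * (2 * N - 1) + M ** 2 * (2 * M - 1) + L * (2 * N ** 2 + 5 * N)
```

For example, a $16 \times 16$ real-valued system with $L = 10$ layers gives $496 + 7936 + 5920 = 14352$ operations.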
\section{Proposed FDL-SD Scheme}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.55]{tree_model}
\caption{Illustration of the SD scheme with the lattice and tree-like model.}
\label{fig_tree_model}
\end{figure*}
We first briefly review the main ideas of SD based on the description in \cite{hassibi2005sphere}, whose notation we also adopt for ease of presentation. Similar to ML detection, SD attempts to find the optimal lattice point that is closest to $\textbf{\textit{y}}$, but its search is limited to the points inside a sphere of radius $d$, i.e.,
\eqn {
\hat{\textbf{\textit{s}}}_{SD} = \arg \min_{\textbf{\textit{x}} \in \mathcal{S} \subset \setA^{M}} \norm {\textbf{\textit{y}} - \mH \textbf{\textit{x}}}^2, \label{SD_solution}
}
where $\mathcal{S}$ is a hypersphere specified by the center $\textbf{\textit{y}}$ and radius $d$, {and $\textbf{\textit{x}}$ represents the hypothesized transmitted signal vector that lies inside $\mathcal{S}$}. Each time a valid lattice point, i.e., a point lying inside $\mathcal{S}$, is found, the search is further restricted, or equivalently, the sphere shrinks, by decreasing the radius, as illustrated in Fig. \ref{fig_tree_model}(a). In this way, when only one point remains in the sphere, that point becomes the final solution $\hat{\textbf{\textit{s}}}_{SD}$. The ingenuity of SD lies in how the lattice points inside the sphere are identified, which is discussed below.
A lattice point $\mH \textbf{\textit{x}}$ lies inside a sphere of radius $d$ if and only if $\textbf{\textit{x}}$ fulfills condition $(\mathcal{C}): \norm{\textbf{\textit{y}} - \mH \textbf{\textit{x}}}^2 \leq d^2$. In SD, the QR decomposition of the channel matrix is useful in breaking $(\mathcal{C})$ into the necessary conditions for each element of $\textbf{\textit{x}}$. Let $\mH = \textbf{\textit{Q}}
\begin{bmatrix}
\textbf{\textit{R}} \\
\mathbf{0}_{(N-M) \times M}
\end{bmatrix}$,
where $\textbf{\textit{Q}} = \left[ \textbf{\textit{Q}}_1 \hspace{0.2cm} \textbf{\textit{Q}}_2 \right] $ is an $N \times N$ unitary matrix, having the first $M$ and last $N-M$ orthonormal columns in $\textbf{\textit{Q}}_1$ and $\textbf{\textit{Q}}_2$, respectively. $\textbf{\textit{R}}$ is an $M \times M$ upper triangular matrix, and $\mathbf{0}_{(N-M) \times M}$ represents a matrix of size $(N-M) \times M$ containing all zeros. Applying QR decomposition, $(\mathcal{C})$ can be rewritten as $\norm{\vz - \textbf{\textit{R}} \textbf{\textit{x}}}^2 \leq d_M^2$, where $\vz = \textbf{\textit{Q}}_1^T \textbf{\textit{y}}$ and $d_M^2 = d^2 - \norm{\textbf{\textit{Q}}_2^T \textbf{\textit{y}}}^2$. Owing to the upper-triangular structure of $\textbf{\textit{R}}$, we have
\begin{align*}
(\mathcal{C}): \sum_{m=1}^{M} \left(z_m - \sum_{i=m}^{M} r_{m,i} x_i \right) ^2 \leq d_M^2.
\end{align*}
Consequently, the necessary conditions for the elements of $\textbf{\textit{x}}$ to fulfill $(\mathcal{C})$ can be expressed as
\begin{align*}
LB_m \leq x_m \leq UB_m, m=1,2,\ldots,M, \nbthis \label{cond_cand}
\end{align*}
where $x_m$ represents the $m$th element of $\textbf{\textit{x}}$. In \eqref{cond_cand}, $LB_m$ and $UB_m$ respectively denote the lower and upper bounds of $x_m$. Without loss of generality, we assume that the entries on the main diagonal of $\textbf{\textit{R}}$ are positive, i.e., $r_{m,m} > 0, m=1,\ldots,M$. Then, $LB_m$ and $UB_m$ can be given as
\begin{align*}
LB_m = \left\lceil \frac{z_{m|m+1} - d_{m}}{r_{m,m}} \right\rceil, \hspace{0.2cm}
UB_m = \left\lfloor \frac{z_{m|m+1} + d_{m}}{r_{m,m}} \right\rfloor, \nbthis \label{bound_m}
\end{align*}
where $\lceil \cdot \rceil$ and $\lfloor \cdot \rfloor$ round a value to its nearest larger and smaller symbols in alphabet $\mathcal{A}$, respectively, and
\begin{align*}
z_{m|m+1} =
\begin{cases}
z_M, & m=M \\
z_m - \sum_{i=m+1}^{M} r_{m,i} x_i, & m<M
\end{cases}
\nbthis \label{z_m}
\end{align*}
means adjusting $z_m$ based on the chosen symbols of $\textbf{\textit{x}}$, i.e., $\{x_{m+1}, x_{m+2}, \ldots, x_M\}$. Furthermore, $d_m$ in \eqref{bound_m} is given by
\begin{align*}
d_m^2 =
\begin{cases}
d_M^2, & m=M \\
d_{m+1}^2 - \sigma_{m+1}^2, & m<M
\end{cases},\nbthis \label{d_m}
\end{align*}
where
\begin{align*}
\sigma_{m+1}^2 = \left(z_{m+1|m+2} - r_{m+1,m+1} x_{m+1}\right)^2. \nbthis \label{sigma_m}
\end{align*}
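A compact sketch of the admissible set in \eqref{cond_cand}--\eqref{bound_m} for one layer (assuming a real-valued PAM alphabet and $r_{m,m} > 0$; the rounding operators $\lceil \cdot \rceil$ and $\lfloor \cdot \rfloor$ are realized here by filtering the alphabet):

```python
def admissible_symbols(z_cond, d_m, r_mm, alphabet):
    # Symbols x_m in A satisfying LB_m <= x_m <= UB_m, with
    # LB_m = (z_{m|m+1} - d_m) / r_{m,m},  UB_m = (z_{m|m+1} + d_m) / r_{m,m}
    lb = (z_cond - d_m) / r_mm
    ub = (z_cond + d_m) / r_mm
    return [a for a in alphabet if lb <= a <= ub]
```

For the real 16-QAM alphabet $\{-3,-1,1,3\}$, `admissible_symbols(0.4, 2.5, 1.0, [-3, -1, 1, 3])` keeps only $\{-1, 1\}$, since the interval is $[-2.1, 2.9]$.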
The tree-like model is useful to illustrate the candidate exploration and examination in SD schemes. It maps all possible candidates $\textbf{\textit{x}} \in \setA^{M}$ to a tree with $M$ layers, each associated with an element of $\textbf{\textit{x}}$. Layer $m$ of the tree has nodes representing the possibilities for $x_m$. A candidate is examined by extending a path, starting from the root, over nodes in the layers. When the lowest layer is reached, a complete path represents a candidate $\textbf{\textit{x}}$. As an example, a tree-like model for a MIMO system with $M=4$ and QPSK is illustrated in Fig. \ref{fig_tree_model}(b), which has four layers, corresponding to the $M=4$ elements of a candidate, and $\abs{\mathcal{A}}^M = 16$ complete paths, representing 16 candidates for the solution, where $\abs{\mathcal{A}}=2$ for QPSK signals. Based on the tree-like model, the sequential SD explores candidates using a depth-first search strategy. Specifically, the algorithm explores the nodes associated with the symbols satisfying \eqref{cond_cand} from the highest layer to the lowest. Once the lowest layer of a candidate $\textbf{\textit{x}}$ is reached, a valid lattice point is found, and the radius is reduced to $\phi(\textbf{\textit{x}})$ $\left(< d_M^2\right)$, where $\phi(\textbf{\textit{x}}) = \norm{\vz - \textbf{\textit{R}} \textbf{\textit{x}}}^2 = d_M^2 - d_1^2 + (z_1 - r_{1,1} x_1)^2$ is the ML metric of $\textbf{\textit{x}}$ \cite{hassibi2005sphere}, as shown in Fig. \ref{fig_tree_model}(a). Based on this search procedure, we find that SD can be optimized by ordering the examined candidates and layers in conjunction with the output of the FS-Net, as presented in the following subsections.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.55]{tree_error_correction}
\caption{Illustration of the effect of the erroneous layer on the search efficiency of SD. The dashed arrow illustrates the process of exploring and correcting the erroneous symbol.}
\label{fig_tree_SESD}
\end{figure*}
\subsection{Candidate Ordering}
The complexity of the SD scheme significantly depends on the number of lattice points that lie inside the sphere, or equivalently, the number of candidates that need to be examined. The sphere shrinks after a valid lattice point is found. Therefore, it is best to start the search by examining an optimal or near-optimal point. In the best case, if the algorithm starts with the optimal ML solution, i.e., $\textbf{\textit{x}} = \hat{\textbf{\textit{s}}}_{ML}$, the radius can decrease rapidly to $\phi(\hat{\textbf{\textit{s}}}_{ML})$. As a result, no more lattice points lie inside the newly shrunken sphere, and the solution is concluded to be $\hat{\textbf{\textit{s}}}_{SD} = \hat{\textbf{\textit{s}}}_{ML}$. However, finding the optimal point requires high computational complexity, which is as challenging as performing the SD scheme itself. Furthermore, the simple linear ZF/MMSE or SIC detector cannot guarantee a highly reliable solution in practical multiuser large MIMO systems. Therefore, we propose using a DNN to find a reliable candidate for initializing the search in SD. For the reasons explained in Section III-A, the FS-Net is employed for this purpose.
In the proposed FDL-SD scheme, the search starts by examination of $\hat{\textbf{\textit{s}}}$ obtained in step 7 of Algorithm \ref{al_fsnet}. Furthermore, as $\hat{\textbf{\textit{s}}}^{[L]}$ is the output of the FS-Net, it is natural to perform the search in an order such that a candidate closer to $\hat{\textbf{\textit{s}}}^{[L]}$ is examined earlier. To this end, the symbols satisfying \eqref{cond_cand} in layer $m$ are ordered by increasing distance from $\hat{s}_m^{[L]}$, $m=1,2,\ldots,M$. Specifically, in layer $m$, the symbols are examined in the order
\begin{align*}
\mathcal{O}^{\text{FDL}}_m = \{ \hat{s}_m^{(1)}, \hat{s}_m^{(2)}, \ldots \}, \nbthis \label{order_FS}
\end{align*}
where $\hat{s}_m^{(i)} \in \mathcal{A} \text{ and } LB_m \leq \hat{s}_m^{(i)} \leq UB_m$, with $\hat{s}_m^{(i)}$ representing the $i$th-closest symbol to $\hat{s}_m^{[L]}$. Furthermore, $LB_m$ and $UB_m$ are given in \eqref{bound_m}. By using $\{\mathcal{O}^{\text{FDL}}_1, \mathcal{O}^{\text{FDL}}_2, \ldots, \mathcal{O}^{\text{FDL}}_M\}$, the first candidate examined in the FDL-SD scheme is $\hat{\textbf{\textit{s}}}$. It can be seen that in this scheme, the initial sphere is predetermined by the radius $\phi(\hat{\textbf{\textit{s}}})$. However, in the case that $\hat{\textbf{\textit{s}}}$ is unreliable, the sphere with radius $\phi(\hat{\textbf{\textit{s}}})$ can be large and inefficient for the search. Therefore, the initial radius is set to $d^2 = \min \{\alpha N_r \sigma_n^2, \phi(\hat{\textbf{\textit{s}}})\}$, where $\alpha N_r \sigma_n^2$ is the conventional radius \cite{hassibi2005sphere}.
It is worth noting that the proposed candidate ordering in the FDL-SD scheme is different from that in the SE-SD scheme. Specifically, the SE-SD scheme starts its search near the center of the sphere first, then moves outward to the surface of the sphere \cite{chan2002new}. Therefore, the order $\mathcal{O}^{\text{SE}}_m$ is employed in the SE-SD scheme\cite{chan2002new, vikalo2005sphere}, which is given by
\begin{align*}
\mathcal{O}^{\text{SE}}_m = \{ \bar{s}_m^{(1)}, \bar{s}_m^{(2)}, \ldots \}, \nbthis \label{order_SE}
\end{align*}
where $\bar{s}_m^{(i)} \in \mathcal{A}$ and $LB_m \leq \bar{s}_m^{(i)} \leq UB_m$. Here, $\bar{s}_m^{(i)}$ is the $i$th closest symbol to $\bar{s}_m = \frac{z_{m|m+1}}{r_{m,m}}$ and $LB_m$ and $UB_m$ are given in \eqref{bound_m}. Our simulation results show that the proposed FDL-SD scheme with order $\mathcal{O}^{\text{FDL}}_m$ results in considerable complexity reduction, which is much more significant than that provided by the SE-SD with order $\mathcal{O}^{\text{SE}}_m$.
\subsection{Layer Ordering}
In the SD scheme, once a candidate $\textbf{\textit{x}}$ is examined, the next step is to search for a better solution in the shrunken sphere. This is equivalent to the process of correcting sub-optimally detected symbols in $\textbf{\textit{x}}$, and the faster a wrong symbol is corrected, the earlier the optimal solution is found, which results in lower complexity of the SD scheme. Furthermore, knowledge of the positions of erroneous symbols can significantly affect the efficiency of error correction. Therefore, in this subsection, we focus on optimizing the layers of erroneous symbols, which will be referred to as \textit{erroneous layers} from now on.
Similar to the conventional SD, in the proposed FDL-SD, the search repeatedly moves downward and upward over layers to explore candidates. Given that $\hat{\textbf{\textit{s}}}$ is examined first and the order $\mathcal{O}^{\text{FDL}}_m$ in \eqref{order_FS} is used in each layer, the candidates are examined in the following order:
\begin{align*}
&{\begin{bmatrix} \hat{s}_M^{(1)} \\ \vdots \\ \hat{s}_m^{(1)} \\ \vdots \\ \hat{s}_1^{(1)} \end{bmatrix}}
\rightarrow \cdots
{\begin{bmatrix} \hat{s}_M^{(1)} \\ \vdots \\ \hat{s}_m^{(1)} \\ \vdots \\ \hat{s}_1^{\left(\abs{\mathcal{O}^{\text{FDL}}_1}\right)} \end{bmatrix}} \rightarrow \ldots
\rightarrow
{\begin{bmatrix} \hat{s}_M^{(1)} \\ \vdots \\ \hat{s}_m^{(2)} \\ \vdots \end{bmatrix}}\\
&\quad \rightarrow \cdots
{\begin{bmatrix} \hat{s}_M^{(1)} \\ \vdots \\ \hat{s}_m^{\left(\abs{\mathcal{O}^{\text{FDL}}_m}\right)} \\ \vdots \end{bmatrix}}\rightarrow \cdots
{\begin{bmatrix} \hat{s}_M^{(2)} \\ \vdots\end{bmatrix}}
\rightarrow \cdots
{\begin{bmatrix} \hat{s}_M^{\left(\abs{\mathcal{O}^{\text{FDL}}_M}\right)} \\ \vdots \end{bmatrix}}, \nbthis \label{layer_M}
\end{align*}
where the layers of the candidates are reversed to reflect the tree-model-based search strategy in SD, and $\abs{\mathcal{O}^{\text{FDL}}_m}$ denotes the cardinality of $\mathcal{O}^{\text{FDL}}_m$. It is observed that, starting from $\hat{\textbf{\textit{s}}} = \left[ \hat{s}_1^{(1)}, \ldots, \hat{s}_M^{(1)} \right]^T$, all the candidate symbols for layer $1$ in $\mathcal{O}^{\text{FDL}}_1$ are examined first; the search then moves upward to the higher layers and performs the same examination process, as shown in \eqref{layer_M}. From the candidate examination order above, we make the following remark.
\begin{remark}
If the erroneous layer is $\tilde{m}$, i.e., $\hat{s}_{\tilde{m}} \neq s_{\tilde{m}}$, then $\hat{s}_{\tilde{m}}$ can be corrected by replacing it with one of the other symbols in $\mathcal{O}^{\text{FDL}}_{\tilde{m}}$. In particular, the lower the erroneous layer is, the faster the corresponding erroneous symbol is corrected. An equivalent interpretation using the tree-like model is that, if there is an erroneous node, the closer it is to the leaf nodes, the faster it is corrected. For example, if $\tilde{m}=1$, the erroneous symbol $\hat{s}_1$ is represented by a leaf node, which can be corrected after examining at most $\abs{\mathcal{O}^{\text{FDL}}_1}-1$ other leaf nodes. In contrast, if $\tilde{m}=M$, the erroneous symbol $\hat{s}_M$ is represented by a node closest to the root, and it is only corrected after examining the entire sub-tree having $\hat{s}_M$ as the root. Consequently, much higher computational complexity is required than that needed to examine only the leaf nodes, as in the case $\tilde{m}=1$.
\end{remark}
According to Remark 1, the optimal solution can be found earlier if the errors occur at low layers. This is illustrated in Fig. \ref{fig_tree_SESD} for a MIMO system with $M=4$ and QPSK signaling. We assume that the optimal solution is $\textbf{\textit{s}} = [-1,1,-1,1]^T$ and that there is only a single erroneous symbol in $\hat{\textbf{\textit{s}}}$. In Fig. \ref{fig_tree_SESD}(a), $\hat{\textbf{\textit{s}}} = [-1,1,-1,-1]^T$, and the error occurs at the lowest layer, i.e., $\tilde{m}=1$ with $\hat{s}_1 \neq s_1$. It is observed that only one node needs to be examined to reach the optimal solution. In contrast, in Fig. \ref{fig_tree_SESD}(b), we assume $\hat{\textbf{\textit{s}}} = [1,1,-1,1]^T$ and $\hat{s}_4 \neq s_4$, i.e., $\tilde{m}=M$. In this case, the path associated with $\hat{\textbf{\textit{s}}}$ is in a totally different sub-tree from that associated with $\textbf{\textit{s}}$. As a result, a large number of nodes must be explored to correct $\hat{s}_4$ and find the optimal solution. The example in Fig. \ref{fig_tree_SESD} clearly shows that the search efficiency of SD significantly depends on the erroneous layer. Motivated by this, we propose a layer-ordering scheme such that errors are more likely to occur at low layers.
In this scheme, the accuracy of the symbols in $\hat{\textbf{\textit{s}}}$ is evaluated. For this purpose, we propose exploiting the difference between $\hat{\textbf{\textit{s}}}^{[L]}$ and $\hat{\textbf{\textit{s}}}$, which are the output of the last layer and the final solution of the FS-Net, respectively. We recall that $\hat{\textbf{\textit{s}}}^{[L]} \in \mathbb{R}^{M \times 1}$ can contain elements both inside and outside alphabet $\setA$, as observed from step 5 in Algorithm \ref{al_fsnet} and Fig. \ref{fig_phi}. In contrast, $\hat{\textbf{\textit{s}}} = \mathcal{Q} \left( \hat{\textbf{\textit{s}}}^{[L]} \right) \in \setA^M$. Let $e_m$ denote the distance between $\hat{s}_m$ and $\hat{s}_m^{[L]}$, i.e., $e_m = \abs{ \hat{s}_m^{[L]} - \hat{s}_m }$.
For QAM signals, the distance between two neighboring real symbols is two. Furthermore, from Fig. \ref{fig_phi}, $\hat{s}_m^{[L]} \in [-1, 1]$ for QPSK, $\hat{s}_m^{[L]} \in [-3, 3]$ for 16-QAM, and $\hat{s}_m^{[L]} \in [-7, 7]$ for 64-QAM. Therefore, $0 \leq e_m \leq 1, m=1,2,\ldots,M$. It is observed that if $e_m \approx 0$, there is a high probability that the $m$th symbol in $\hat{\textbf{\textit{s}}}$ is correctly approximated by the FS-Net, i.e., $s_m = \hat{s}_m$. In contrast, if $e_m \approx 1$, there is a high probability that $\hat{s}_m$ is an erroneous estimate, i.e., $s_m \neq \hat{s}_m$. Therefore, by examining the elements of $\ve = [ e_1, e_2, \ldots, e_M ]$, we can determine the layers with high probabilities of errors.
Based on $\hat{\textbf{\textit{s}}}^{[L]}$ and $\hat{\textbf{\textit{s}}}$, $\ve$ is computed, and the layers are sorted in decreasing order of the elements of $\ve$ to increase the likelihood that the errors occur at the low layers. In other words, we rearrange the layers such that the $i$th lowest layer in the tree is associated with the $i$th largest element of $\ve$. We note that ordering the layers is equivalent to ordering the elements of $\textbf{\textit{x}}$, which requires the corresponding column ordering of $\mH$. Therefore, in the proposed layer-ordering scheme, the channel columns are also ordered in decreasing order of the corresponding elements of $\ve$.
\textit{Example 1:} Consider a MIMO system with $M=8$, QPSK, and $\hat{\textbf{\textit{s}}}^{[L]} = [0.1, -0.65, -0.2, 0.85,$ $0.25, 0.9, -0.3, 1]^T$, $\hat{\textbf{\textit{s}}} = [1,-1,-1,1,1,1,-1,1]^T$. Then, $\ve = \abs{ \hat{\textbf{\textit{s}}}^{[L]} - \hat{\textbf{\textit{s}}} } = [0.9, 0.35, 0.8, 0.15,$ $ 0.75, 0.1, 0.7, 0]^T$, which implies that the layer order should be $\{1,3,5,7,2,4,6,8\}$. Consequently, the channel columns should be ordered as $\underline{\mH} = \left[ \vh_1, \vh_3, \vh_5, \vh_7, \vh_2, \vh_4, \vh_6, \vh_8 \right]$, where $\vh_m$ is the $m$th column of $\mH$.
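Example 1 can be verified in a few lines (a NumPy sketch of the ordering rule; a stable `argsort` reproduces the stated layer order):

```python
import numpy as np

s_L   = np.array([0.1, -0.65, -0.2, 0.85, 0.25, 0.9, -0.3, 1.0])  # FS-Net output
s_hat = np.array([1., -1., -1., 1., 1., 1., -1., 1.])              # Q(s_L) for QPSK
e = np.abs(s_L - s_hat)
layer_order = np.argsort(-e, kind="stable") + 1  # 1-based: largest e_m first
# The channel columns are permuted accordingly: H_bar = H[:, layer_order - 1]
```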
The layer ordering allows the errors to be corrected earlier, which further accelerates the shrinking of the sphere in the SD scheme. As a result, the final solution can be found with reduced complexity compared to the case when layer ordering is not applied. Our simulation results show that a significant complexity reduction is attained owing to the layer ordering, especially at low SNRs. In particular, the complexity of the FDL-SD scheme with layer ordering is almost constant w.r.t. the SNR, unlike the conventional FP-SD and SE-SD schemes.
\subsection{FDL-SD Algorithm}
\begin{algorithm}[t]
\caption{FDL-SD algorithm}
\label{al_FDL_SD}
\begin{algorithmic}[1]
\REQUIRE $\mH, \textbf{\textit{y}}$.
\ENSURE $\hat{\textbf{\textit{s}}}_{SD}$.
\STATE {Find $\hat{\textbf{\textit{s}}}$ and $\hat{\textbf{\textit{s}}}^{[L]}$ based on Algorithm \ref{al_fsnet}.}
\STATE {Obtain $\ve = \left[e_1, e_2, \ldots, e_M\right]^T$, where $e_m = \abs{ \hat{s}_m - \hat{s}_m^{[L]} }$.}
\STATE Order the channel columns in decreasing order of the elements of $\ve$ to obtain $\underline{\mH}$.
\STATE Perform QR decomposition of $\underline{\mH}$ to obtain $\textbf{\textit{Q}}_1, \textbf{\textit{Q}}_2, \textbf{\textit{R}}$.
\STATE Set $m=M$, $\vz = \textbf{\textit{Q}}_1^T \textbf{\textit{y}}$, $d^2 = \min \{\alpha N_r \sigma_n^2, \phi(\hat{\textbf{\textit{s}}})\}$, $d_M^2 = d^2 - \norm{\textbf{\textit{Q}}_2^T \textbf{\textit{y}}}^2$.
\STATE Compute $z_{m|m+1}$, $d_m^2$, $LB_m$, and $UB_m$ based on \eqref{bound_m}--\eqref{sigma_m}.
\STATE Obtain $\mathcal{O}^{\text{FDL}}_m$ based on \eqref{order_FS}.
\IF {$\mathcal{O}^{\text{FDL}}_m$ is empty}
\STATE $m \leftarrow m+1$
\ELSE
\STATE Set $x_m$ to the first element in $\mathcal{O}^{\text{FDL}}_m$.
\ENDIF
\IF {$m=1$}
\STATE $\phi(\textbf{\textit{x}}) = d_M^2 - d_1^2 + (z_1 - r_{1,1} x_1)^2$
\IF {$\phi(\textbf{\textit{x}}) \leq d_M^2$}
\STATE Update $d_M^2 = \phi(\textbf{\textit{x}})$ and $\hat{\textbf{\textit{s}}}_{SD} = \textbf{\textit{x}}$.
\STATE Remove the first element of $\mathcal{O}^{\text{FDL}}_m$ and go to step 8.
\ENDIF
\ELSE
\STATE Set $m \leftarrow m-1$ and go to step 6.
\ENDIF
\end{algorithmic}
\end{algorithm}
The FDL-SD algorithm is summarized in Algorithm \ref{al_FDL_SD}. In step 1, the FS-Net is employed to obtain $\hat{\textbf{\textit{s}}}$ and $\hat{\textbf{\textit{s}}}^{[L]}$, which are then used in steps 2 and 3 for layer ordering {and in step 5 to predetermine the radius}. In the remaining steps, the well-known search procedure of SD \cite{hassibi2005sphere} is conducted to obtain $\hat{\textbf{\textit{s}}}_{SD}$. Note that in step 7, all the symbols belonging to the interval $[LB_m, UB_m]$ are ordered by increasing distance from $\hat{s}_m^{[L]}$, as given in \eqref{order_FS}. Performing this operation for every layer allows the candidates to be examined in order of increasing distance to the FS-Net's solution $\hat{\textbf{\textit{s}}}$.
Compared to the existing DL-aided SD schemes in \cite{askri2019dnn, mohammadkarimi2018deep,weon2020learning}, the proposed FDL-SD algorithm is advantageous in the following aspects:
\begin{itemize}
\item The application of DL in this scheme is to generate a highly reliable candidate $\hat{\textbf{\textit{s}}}$. We note that in this employment, the DNN, i.e., the FS-Net, can be trained without performing the conventional SD scheme, as will be further discussed in Section VI. In contrast, in \cite{askri2019dnn, mohammadkarimi2018deep,weon2020learning}, DL is applied to predict the radius, and its training labels are obtained by performing the conventional SD scheme. This requires considerable time and computational resources. For example, to train the DNN in \cite{weon2020learning} for a $16 \times 16$ MIMO system with QPSK, 100,000 samples are used, requiring performing the conventional SD 100,000 times to collect the same number of desired radii for training, whereas that number required in \cite{mohammadkarimi2018deep} for a $10 \times 10$ MIMO system with 16-QAM is 360,000. This computational burden in the training phase of the existing DL-aided SD schemes is non-negligible, even for offline processing.
\item The proposed scheme does not require optimizing the initial radius, in contrast to \cite{askri2019dnn, mohammadkarimi2018deep,weon2020learning}, because the initial sphere is predetermined based on $\hat{\textbf{\textit{s}}}$, {as shown in step 5 of Algorithm \ref{al_FDL_SD}}. Note that in the conventional SD, if the radius is initialized to a small value, it is possible that there will be no point inside the sphere. In this case, the search needs to restart with a larger radius, resulting in redundant complexity. In contrast, in the FDL-SD scheme, starting with $\hat{\textbf{\textit{s}}}$ guarantees that there is always at least one point inside the sphere, which is nothing but $\hat{\textbf{\textit{s}}}$. Furthermore, because $\hat{\textbf{\textit{s}}}$ has high accuracy, the number of points inside the sphere is typically small.
\item In the proposed FDL-SD, the search efficiency is improved, thus providing a significant complexity reduction. Nevertheless, the ordering schemes in the FDL-SD affect neither the radius nor the termination of the search, in contrast to \cite{askri2019dnn, mohammadkarimi2018deep,weon2020learning}. Therefore, the BER performance of the conventional SD is fully preserved in the proposed FDL-SD scheme. We further justify this with the simulation results in Section VI.
\end{itemize}
\section{Proposed FDL-KSD Scheme}
It is intuitive from the tree-like model shown in Fig. \ref{fig_tree_model}(b) that there are $\abs{\mathcal{A}}^M$ complete paths representing all the possible candidates for the optimal solution, where $\abs{\mathcal{A}}=2$ and $M=4$ for the example in Fig. \ref{fig_tree_model}(b). In a large MIMO system with a high-order modulation scheme, i.e., when $M$ and $\abs{\mathcal{A}}$ are large, the number of paths becomes very large. Therefore, in the KSD, instead of examining all the available paths, only the $K$ best paths are selected in each layer for further extension to the lower layer, while the others are pruned early to reduce complexity. For the selection of the $K$ best paths, each path is evaluated based on its metric. Specifically, in layer $m$, if the $k$th path extends to a node $x_i$, its metric is given by
\begin{align*}
\phi_m^{(k,i)} = \phi_{m+1}^{(k)} + \left(z_m - \sum_{j=m}^{M} r_{m,j} x_j \right) ^2, \nbthis \label{path_metric}
\end{align*}
with $\phi_{M+1}^{(k)} = 0, \forall k$. Then, only a subset of $K$ paths with the smallest metrics $\{\phi_{m}^{(1)}, \ldots, \phi_{m}^{(K)}\}$ are selected for further extension. In the lowest layer, the best path with the smallest metric is concluded to be the final solution. In this study, to further optimize the KSD scheme in terms of both complexity and performance, we propose the FDL-KSD scheme with early rejection and layer ordering, which is presented in the following subsection.
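The per-layer selection just described can be sketched in plain Python as follows (0-indexed layers and illustrative names, not the authors' code): every surviving path is extended by each alphabet symbol, its metric is accumulated as in \eqref{path_metric}, and only the $K$ smallest-metric extensions are kept.

```python
import heapq

def ksd_layer_step(paths, z_m, R_row, alphabet, m, M, K):
    """One KSD layer (0-indexed layer m, counting down from M-1).
    `paths` is a list of (metric, symbols) pairs, where `symbols`
    holds the already-fixed symbols for layers m+1, ..., M-1.
    Returns the K extensions with the smallest accumulated metrics."""
    candidates = []
    for metric, tail in paths:
        for x in alphabet:
            sym = [x] + tail  # symbol chosen for layer m goes in front
            # residual of the path metric: z_m - sum_j r_{m,j} x_j
            residual = z_m - sum(R_row[j] * sym[j - m] for j in range(m, M))
            candidates.append((metric + residual * residual, sym))
    return heapq.nsmallest(K, candidates)
```

Running this from the top layer down reproduces the conventional KSD; the FDL-KSD additionally prunes the returned list against the FS-Net-based radius.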
\subsection{Basic ideas: Early Rejection and Layer Ordering}
\textit{Early rejection:} The idea of early rejection is that, given the output $\hat{\textbf{\textit{s}}}$ of the FS-Net, a candidate that is worse than $\hat{\textbf{\textit{s}}}$ cannot be the optimal solution and can thus be rejected early from the examination process. This results in complexity reduction without any performance loss. To apply this idea to the KSD, among the $K$ chosen paths in each layer of the KSD scheme, the paths with metrics larger than $d^2 = \min \{\alpha N_r \sigma_n^2, \phi(\hat{\textbf{\textit{s}}})\}$ are pruned early because their corresponding candidates are worse than $\hat{\textbf{\textit{s}}}$ or lie outside the sphere. It is possible that all $K$ paths are pruned in a layer if all of them are worse than $\hat{\textbf{\textit{s}}}$. In this case, there is no path for further extension. Hence, the examination process is terminated early, and $\hat{\textbf{\textit{s}}}$ is concluded to be the final solution. In this early rejection approach, the paths with metrics larger than $\phi(\hat{\textbf{\textit{s}}})$ are pruned; thus, the final solution is the better of the solution attained by the conventional KSD and the FS-Net-based solution. Therefore, besides providing complexity reduction, this scheme also attains a performance improvement w.r.t. the conventional KSD.
\textit{Layer ordering:} One potential problem of the KSD is that the optimal solution can be rejected before the lowest layer is reached, causing its performance loss w.r.t. the sequential SD. An approach to mitigate the unexpected early rejection of the optimal solution is to apply the layer-ordering scheme proposed in Section IV-B. Specifically, it is observed from \eqref{path_metric} that if the elements of a candidate $\textbf{\textit{x}}$ are ordered such that the ones in higher layers are more reliable than those in lower layers, then the best path is more likely to have small metrics at high layers. As a result, the chance that it is early pruned is reduced. Therefore, we propose applying the layer-ordering scheme proposed in Section IV-B to the FDL-KSD scheme for performance improvement.
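The reliability-based permutation behind the layer ordering can be sketched as follows (assumed names, plain Python; not the authors' code): the residual $e_m = \abs{\hat{s}_m - \hat{s}_m^{[L]}}$ of the last FS-Net iteration serves as an unreliability measure, and the channel columns are sorted in decreasing order of $e_m$ so that the most reliable symbol ends up in the highest layer, which the tree search fixes first.

```python
def order_layers(H, s_hat, s_hat_L):
    """Sort channel columns in decreasing order of e_m = |s_hat_m - s_hat_L_m|
    (steps 2--3 of the FDL-KSD algorithm).  The last column (highest layer,
    searched first) then carries the most reliable symbol."""
    e = [abs(a - b) for a, b in zip(s_hat, s_hat_L)]
    perm = sorted(range(len(e)), key=lambda m: e[m], reverse=True)
    H_ordered = [[row[j] for j in perm] for row in H]  # H stored as rows
    return H_ordered, perm
```

The permutation is undone on the detected symbol vector after the search.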
\subsection{FDL-KSD Algorithm}
\begin{algorithm}[t]
\caption{FDL-KSD algorithm}
\label{al_FDL_KSD}
\begin{algorithmic}[1]
\REQUIRE $\mH, \textbf{\textit{y}}, K$.
\ENSURE $\hat{\textbf{\textit{s}}}_{KSD}$.
\STATE {Find $\hat{\textbf{\textit{s}}}$ and $\hat{\textbf{\textit{s}}}^{[L]}$ based on Algorithm \ref{al_fsnet}.}
\STATE {Obtain $\ve = \left[e_1, e_2, \ldots, e_M\right]^T$, where $e_m = \abs{ \hat{s}_m - \hat{s}_m^{[L]} }$.}
\STATE Order the channel columns in decreasing order of the elements of $\ve$ to obtain $\underline{\mH}$.
\STATE Perform QR decomposition of $\underline{\mH}$ to obtain $\textbf{\textit{Q}}_1, \textbf{\textit{Q}}_2, \textbf{\textit{R}}$.
\STATE {$\vz = \textbf{\textit{Q}}_1^T \textbf{\textit{y}}$, $d^2 = \min \{\alpha N_r \sigma_n^2, \phi(\hat{\textbf{\textit{s}}})\}$}
\FOR {$m = M \rightarrow 1$}
\STATE Determine the $K$ best paths associated with the $K$ smallest metrics $\{\phi_{m}^{(1)}, \ldots, \phi_{m}^{(K)}\}$.
\STATE Prune the paths that have metrics larger than $d^2$ early.
\IF {all paths have been pruned}
\STATE Terminate the search early.
\ENDIF
\STATE Save the survival paths for further extension.
\ENDFOR
\IF {there is no survival path}
\STATE Set $\hat{\textbf{\textit{s}}}_{KSD} = \hat{\textbf{\textit{s}}}$.
\ELSE
\STATE Set $\hat{\textbf{\textit{s}}}_{KSD}$ to the survival candidate corresponding to the path with the smallest metric.
\ENDIF
\end{algorithmic}
\end{algorithm}
The proposed FDL-KSD scheme is summarized in Algorithm \ref{al_FDL_KSD}. In steps 1--3, layer ordering is performed. The $K$ best paths are selected in step 7, and the subset of them with metrics larger than $d^2 = \min \{\alpha N_r \sigma_n^2, \phi(\hat{\textbf{\textit{s}}})\}$ is pruned early in step 8. In the case where all the paths are pruned, the path examination and extension process is terminated early in step 10, and the FS-Net-based solution $\hat{\textbf{\textit{s}}}$ is concluded to be the final solution, as shown in step 15. In contrast, if early termination does not occur, the search continues until the lowest layer is reached, at which point the final solution $\hat{\textbf{\textit{s}}}_{KSD}$ is set to the best candidate among the surviving ones, as in step 17.
We further discuss the properties of $K$, i.e., the number of survival paths in the proposed FDL-KSD scheme. It can be seen that in the proposed scheme, the number of actual survival paths is dynamic, whereas it is fixed to $K$ in the conventional KSD scheme. Letting $K_m$ be the number of survival paths in the $m$th layer, we have $K_m \leq K, \forall m$, which is clear from step 8 of Algorithm \ref{al_FDL_KSD}. Furthermore, it is observed in \eqref{path_metric} that the path metrics are non-decreasing as the search proceeds from layer $M$ down to layer $1$. As a result, more paths have metrics exceeding $d^2$ as $m$ decreases. Consequently, the number of survival paths becomes smaller as the search goes downward to lower layers, i.e., $K_M \geq K_{M-1} \geq \ldots \geq K_1$.
These properties make the design of the FDL-KSD scheme much easier than that of the conventional KSD scheme. We first note one challenge in the conventional KSD, which is to choose the optimal value for $K$. Specifically, if a large $K$ is set, many candidates are examined, resulting in high complexity. In this case, the complexity reduction of KSD w.r.t. the conventional SD is not guaranteed. In contrast, a small $K$ leads to significant performance loss because there is a high probability that the optimal path is pruned before the lowest layer is reached. It is possible to use dynamic $K$, i.e., to set different values of $K$ for different layers. However, optimizing multiple values of $\{K_1, K_2, \ldots, K_M\}$ becomes problematic, as $M$ is large in large MIMO systems. In the proposed FDL-KSD scheme, $K_m$ is already dynamic. Furthermore, because $K_m$ is adjusted in step 8 of Algorithm \ref{al_FDL_KSD}, we only need to set $K$ to a sufficiently large value to guarantee near-optimal performance, and unpromising paths are automatically rejected by the FDL-KSD scheme.
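The dynamic behavior of $K_m$ can be illustrated with the following sketch (assumed names, not the authors' code): each layer applies the $K$-best selection and then drops the paths whose metrics already exceed $d^2$, so the per-layer survival count adapts automatically and never exceeds $K$; when every path is pruned, the FS-Net solution is returned.

```python
import heapq

def fdl_ksd_sketch(R, z, alphabet, K, d_sq, s_hat):
    """FDL-KSD search with early rejection (0-indexed layers, m = M-1..0).
    Returns (best_candidate, survivors_per_layer); falls back to the
    FS-Net solution s_hat when every path is pruned."""
    M = len(z)
    paths = [(0.0, [])]
    survivors = []
    for m in range(M - 1, -1, -1):
        cands = []
        for metric, tail in paths:
            for x in alphabet:
                sym = [x] + tail
                r = z[m] - sum(R[m][j] * sym[j - m] for j in range(m, M))
                cands.append((metric + r * r, sym))
        paths = heapq.nsmallest(K, cands)           # K-best selection
        paths = [p for p in paths if p[0] <= d_sq]  # early rejection
        survivors.append(len(paths))                # this is K_m
        if not paths:
            return s_hat, survivors                 # early termination
    return min(paths)[1], survivors
```

In this toy run, the survivor counts shrink layer by layer even though a single $K$ was set, which is exactly the automatic-pruning property discussed above.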
\section{Simulation Results}
\label{sec_sim_result}
In this section, we numerically evaluate the BER performance and computational complexities of the proposed FDL-SD and FDL-KSD schemes. The computational complexity of an algorithm is calculated as the total number of additions and multiplications required for online signal detection. Because the training phase of a DL model can be performed offline, the computational complexity in this phase is ignored. In our simulations, each channel coefficient is assumed to be an i.i.d. zero-mean complex Gaussian random variable with a variance of $1/2$ per dimension. SNR is defined as the ratio of the average transmit power to the noise power, i.e., SNR $=N_t\smt / \smn$.
We consider the following schemes for comparison:
\begin{itemize}
\item Conventional SD schemes: FP-SD, SE-SD, and KSD.
\item Existing DL-aided detection schemes: MR-DL-SD \cite{mohammadkarimi2018deep}, DPP-SD \cite{weon2020learning}, and FS-Net \cite{nguyen2019deep}.
\item SD with the ordered SIC (OSIC)-based initial solution (OSIC-SD).
\item TS algorithm aided by DL (DL-TS) \cite{nguyen2019deep}.
\item OAMP detection method \cite{he2018model}.
\end{itemize}
More specifically, we present the BER performance and complexity reduction attained by the proposed FDL-SD w.r.t. the conventional FP-SD and SE-SD, which is shown to be much more significant than that achieved by the existing MR-DL-SD and DPP-SD schemes. Furthermore, we also show the BER performance of the FS-Net to demonstrate the gains of incorporating FS-Net with SD in the proposed FDL-SD and FDL-KSD schemes. Similar observations are made when comparing the FDL-KSD with the conventional KSD and the FS-Net. To justify the efficiency of using the FS-Net-based initial solution, we demonstrate the performance and complexity of the OSIC-SD. Furthermore, we compare the FDL-SD and FDL-KSD to the OAMP \cite{he2018model} scheme. For the OAMP scheme, we have performed simulations to select the number of iterations, denoted by $T$, to ensure convergence. Based on this, we set $T = 4$ for systems with QPSK and $T=8$ for systems with 16-QAM and 64-QAM. For the proposed FDL-SD scheme, we also show the BER performance and computational complexity when only candidate ordering is applied, which allows us to compare the efficiency of the order $\mathcal{O}^{\text{FDL}}_m$ in \eqref{order_FS} proposed for the FDL-SD scheme and $\mathcal{O}^{\text{SE}}_m$ in \eqref{order_SE} employed in the conventional SE-SD scheme. Finally, we compare the proposed FDL-SD to the DL-TS \cite{nguyen2019deep} in terms of both the BER performance and complexity to show that, although these two schemes both leverage the FS-Net, the former is more efficient than the latter in both aspects.
\begin{table*}[t]
\renewcommand{\arraystretch}{1.4}
\caption{Architectures and complexities of the DNNs used in the MR-DL-SD, DPP-SD, and the proposed FDL-SD schemes for a $16 \times 16$ MIMO system with QPSK.}
\label{tab_FCDNN}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
DNNs & \makecell{No. of \\input nodes} & \makecell{No. of hidden nodes \\$\times$ No. of hidden layers} & \makecell{No. of \\output nodes} & \makecell{Complexity \\ (operations)} \\
\hline
\hline
FC-DNN in the MR-DL-SD & $544$ & $128 \times 1$ & $4$ & 70276 \\
\hline
FC-DNN in the DPP-SD & $34$ & $40 \times 1$ & $4$ & 1564 \\
\hline
\makecell{FS-Net in the FDL-SD, FDL-KSD, and DL-TS} & $64$ & $64 \times 10$ & $32$ & $88608$ \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.55]{ber_SD}
\caption{BER performance of the proposed FDL-SD scheme compared to those of the conventional FP-SD, SE-SD, MR-DL-SD, DPP-SD, DL-TS, OSIC-SD, {and OAMP} for $16 \times 16$ MIMO with QPSK and $L=10$, $24 \times 24$ MIMO with QPSK and $L = 12$, {and $16 \times 16$ MIMO with 64-QAM and $L = 15$}.}
\label{fig_ber_SD}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.52]{comp_SD}
\caption{Complexity of the proposed FDL-SD scheme compared to those of the conventional FP-SD, SE-SD, MR-DL-SD, DPP-SD, DL-TS, OSIC-SD, {and OAMP} for $16 \times 16$ MIMO with QPSK and $L=10$, $24 \times 24$ MIMO with QPSK and $L = 12$, {and $16 \times 16$ MIMO with 64-QAM and $L = 15$}.}
\label{fig_comp_SD}
\end{figure*}
\subsection{Training DNNs}
The hardware and software used for implementing and training the DNNs are as follows. The FS-Net is implemented in Python with the TensorFlow library \cite{abadi2016tensorflow}. In contrast, the FC-DNNs in the MR-DL-SD and DPP-SD are implemented using the DL Toolbox of MATLAB 2019a, as done in \cite{mohammadkarimi2018deep} and \cite{weon2020learning}. All the considered DNNs, i.e., the FS-Net and FC-DNNs, are trained by the Adam optimizer \cite{kingma2014adam,rumelhart1988learning, bottou2010large} with decaying and starting learning rates of $0.97$ and $0.001$, respectively. The FC-DNNs used for the MR-DL-SD and DPP-SD are trained with 100,000 samples, as in \cite{weon2020learning}. In contrast, we train the FS-Net for 10,000 epochs with a batch size of 2,000 samples, as in \cite{nguyen2019deep}. For each sample, $\textbf{\textit{s}}, \mH$, and $\textbf{\textit{y}}$ are independently generated from \eqref{real SM}.
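For reference, the stated starting rate ($0.001$) and decaying rate ($0.97$) correspond to an exponential schedule of the following form (a sketch; the per-epoch decay granularity is our assumption, as the exact schedule is implementation-dependent):

```python
def learning_rate(epoch, start=0.001, decay=0.97):
    """Assumed per-epoch exponential decay: lr_t = start * decay**t."""
    return start * decay ** epoch
```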
As discussed in Section I, one of the significant differences between the proposed and existing DL-aided SD schemes lies in the training phase. In the existing DL-aided SD schemes, including the SR-DL-SD, MR-DL-SD, and DPP-SD, the FC-DNNs are trained with the following loss function
\begin{align*}
\mathcal{L} \left(\textbf{\textit{d}}^{(i)}, \hat{\textbf{\textit{d}}}^{(i)} \right) = \frac{1}{D} \sum_{i=1}^{D} \norm{\textbf{\textit{d}}^{(i)} - \hat{\textbf{\textit{d}}}^{(i)}}^2,
\end{align*}
where $D$ is the number of training data samples, and $\textbf{\textit{d}}^{(i)}$ and $\hat{\textbf{\textit{d}}}^{(i)}$ are the label and output vectors of the DNNs, which represent the radii associated with the $i$th data sample \cite{mohammadkarimi2018deep,weon2020learning}. The training labels, i.e., $\{\textbf{\textit{d}}^{(1)}, \ldots, \textbf{\textit{d}}^{(D)}\}$, are obtained by performing the conventional SD scheme $D$ times, where $D = \{\text{100,000}, \hspace{0.05cm} \text{360,000}\}$ for the DPP-SD and MR-DL-SD schemes, respectively \cite{mohammadkarimi2018deep, weon2020learning}. It is well known that the conventional SD is computationally prohibitive for large MIMO systems. Therefore, substantial computational resources and time are required to collect the large training data sets used in the existing DL-aided SD schemes.
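A minimal sketch of this loss (not the authors' code; plain Python lists stand in for the label and output vectors):

```python
def radius_mse_loss(d_labels, d_outputs):
    """L = (1/D) * sum_i ||d_i - d_hat_i||^2 over D training samples."""
    D = len(d_labels)
    return sum(sum((a - b) ** 2 for a, b in zip(d_i, dh_i))
               for d_i, dh_i in zip(d_labels, d_outputs)) / D
```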
In contrast, in the proposed application of DL to SD, the FS-Net is employed to generate $\hat{\textbf{\textit{s}}}$. The FS-Net is trained with the loss function \eqref{loss_dscnet} \cite{nguyen2019deep}. As the training labels $\textbf{\textit{s}}$ of the FS-Net are generated randomly, the conventional SD does not need to be performed to generate the training labels as done in the existing DL-aided SD schemes.
\subsection{BER Performance and Computational Complexity of the Proposed FDL-SD Algorithm}
We first note that the structures and complexities of the DNNs employed in the compared schemes are different, as illustrated in Table \ref{tab_FCDNN} for a $16 \times 16$ MIMO system with QPSK. The DNNs used in the MR-DL-SD and DPP-SD schemes have well-known fully-connected architectures, whose complexity can be calculated based on the network connections. In contrast, the complexity of the FS-Net is computed based on \eqref{comp_fsnet}. In our simulation results, the overall complexity of each considered scheme is computed as the sum of the complexity required in the DNNs, presented in Table \ref{tab_FCDNN}, and that required to perform {QR decomposition and the search process in the} algorithms themselves. The simulation parameters for the MR-DL-SD, DPP-SD, and DL-TS schemes are listed in Table \ref{tabl_sim_params}, which are set based on the corresponding prior works. It is observed that an advantage of the proposed FDL-SD scheme is that it does not require optimizing any design parameters, as done in the DL-aided detection algorithms in Table \ref{tabl_sim_params}.
\begin{table*}[t]
\renewcommand{\arraystretch}{1.4}
\caption{Simulation parameters for the MR-DL-SD, DPP-SD, and DL-TS schemes, where $\lambda_1, \lambda_2$ are the optimized design parameters of the DPP-SD scheme, $\mathcal{I}$ is the maximum number of search iterations, $\mu$ is an optimized design parameter, and $\epsilon$ is the cutoff factor for early termination in the DL-TS scheme.}
\label{tabl_sim_params}
\centering
\begin{tabular}{|c|c|}
\hline
Schemes & Parameters \\
\hline
\hline
MR-DL-SD & Number of predicted radii: 4 \\
\hline
DPP-SD & $\lambda_1 = \{ 1.1, 1.2 \ldots, 1.6\}$ for SNR $= \{2,4,\ldots, 12\}$, respectively, $\lambda_2 = \lambda_1 + 0.1$ \\
\hline
DL-TS & $\mathcal{I} = \{ 400, 700 \}, \mu = \{7, 8\}$ for $16 \times 16$ MIMO and $24 \times 24$ MIMO systems, respectively, and $\epsilon = 0.4$ \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.55]{comp_SD_ratio}
\caption{Complexity ratio of the proposed FDL-SD scheme compared to those of the conventional FP-SD, SE-SD, MR-DL-SD, DPP-SD, DL-TS, OSIC-SD, {and OAMP} for $16 \times 16$ MIMO with QPSK and $L=10$, $24 \times 24$ MIMO with QPSK and $L = 12$, {and $16 \times 16$ MIMO with 64-QAM and $L = 15$}.}
\label{fig_comp_ratio_SD}
\end{figure*}
\begin{figure}[t]
\centering
\subfigure[$16 \times 16$ MIMO with QPSK]
{
\includegraphics[scale=0.55]{conv_16QPSK}
\label{conv_16QPSK} }
\subfigure[$24 \times 24$ MIMO with QPSK]
{
\includegraphics[scale=0.55]{conv_24QPSK}
\label{conv_24QPSK}
}
\caption{{Convergence of the proposed FDL-SD scheme compared to those of the conventional FP-SD, SE-SD, and DPP-SD.}}
\label{fig_conv}
\end{figure}
{In Fig. \ref{fig_ber_SD}, we show the BER performance of the schemes listed earlier for $16 \times 16$ and $24 \times 24$ MIMO systems, both with QPSK, and $L=\{10,12\}$, respectively, and a $16 \times 16$ MIMO system with 64-QAM and $L=15$. We note that the results for the MR-DL-SD and DPP-SD schemes are not presented in Figs.\ \ref{fig_ber_SD}(b) and (c) because it takes an extremely long time to collect the desired radii to train the corresponding DNNs. It is seen from Fig. \ref{fig_ber_SD} that, except for the FS-Net and OAMP schemes, the compared schemes, including the FDL-SD, MR-DL-SD, DPP-SD, DL-TS, FP-SD, and SE-SD, have approximately the same BER performance, which is near-optimal. In particular, the proposed FDL-SD schemes completely preserve the performance of the conventional FP-SD and SE-SD because the incorporation with the FS-Net solution does not affect the final solution. The OAMP scheme performs far worse than the considered SD schemes, which agrees with the observations in \cite{he2018model}. However, in a larger system with a higher modulation order, it achieves better performance than FS-Net, as observed in Figs.\ \ref{fig_ber_SD}(b) and (c).}
In Fig. \ref{fig_comp_SD}, we compare the proposed FDL-SD scheme to the conventional FP-SD, SE-SD, MR-DL-SD, DPP-SD, {OSIC-SD}, DL-TS, {and OAMP} schemes in terms of computational complexity. To ensure that the compared schemes have approximately the same BER performance, the simulation parameters in Fig. \ref{fig_comp_SD} are assumed to be the same as those in Fig. \ref{fig_ber_SD}. In Fig. \ref{fig_comp_SD}, the complexity reduction gains of the considered schemes are difficult to compare at high SNRs. Therefore, we show their complexity ratios w.r.t. the complexity of the conventional FP-SD in Fig. \ref{fig_comp_ratio_SD}. In other words, the complexity of all schemes {is} normalized by that of the FP-SD. From Figs. \ref{fig_comp_SD} and \ref{fig_comp_ratio_SD}, the following observations are noted:
\begin{itemize}
\item It is clear from Fig. \ref{fig_comp_SD} that the complexities of the conventional FP-SD, SE-SD, {OSIC-SD}, and the existing DL-aided SD schemes, including the MR-DL-SD and DPP-SD, significantly depend on SNRs. In contrast, that of the proposed FDL-SD scheme is relatively stable with SNRs.
\item In Fig. \ref{fig_comp_ratio_SD}, among the improved SD schemes, the proposed FDL-SD achieves the most significant complexity reduction w.r.t. the conventional FP-SD scheme. Specifically, in the $16 \times 16$ MIMO system, for SNR $\leq 6$ dB, the complexity reduction ratios of the proposed FDL-SD are higher than $90\%$, while those of the MR-DL-SD and DPP-SD are only around $60\%$. At SNR $=12$ dB, the DL-aided SD schemes, including MR-DL-SD, DPP-SD, and FDL-SD, have approximately the same complexity. In the $24 \times 24$ MIMO system, the complexity reduction ratio of the FDL-SD with both candidate and layer ordering is $95\% - 98\%$, which is much higher than $55\% - 80\%$ for the DPP-SD scheme.
\item Furthermore, by comparing the complexity ratios of the FDL-SD scheme in Fig. \ref{fig_comp_ratio_SD}, it can be observed that this scheme achieves more significant complexity reduction in a larger MIMO system. Specifically, in the $16 \times 16$ MIMO system, its complexity reduction ratio w.r.t. the conventional FP-SD is only around $50\% - 97\%$. In contrast, that in the $24 \times 24$ MIMO system with QPSK is $95\% - 98\%$. The reason for this improvement is that, in large MIMO systems, the complexity of the SD algorithm significantly dominates that of the FS-Net and becomes almost the same as the overall complexity. Therefore, the complexity required in the FS-Net has almost no effect on the complexity of the FDL-SD scheme. This observation demonstrates that the proposed FDL-SD scheme is suitable for large MIMO systems.
\item Notably, the proposed FDL-SD has considerably lower complexity than the DL-TS scheme although the conventional SD requires higher complexity than the TS detector \cite{nguyen2019qr, nguyen2019groupwise}. {This confirms that in the considered scenarios, the application of DL makes SD a more computationally efficient detection scheme than the TS.} Furthermore, although the OSIC-SD scheme attains complexity reduction at low and moderate SNRs, this is not guaranteed at high SNRs. This is because high complexity is required to obtain the OSIC solution, whereas the complexity of performing SD at high SNRs is relatively low; thus, the complexity reduction achieved by using the OSIC solution cannot compensate for the complexity increase of the OSIC-SD scheme at high SNRs.
\item Compared to the conventional FP-SD, the SE-SD and the proposed FDL-SD scheme with candidate ordering only are similar in the sense that symbols are ordered in each layer based on $\mathcal{O}^{\text{FDL}}_m$ and $\mathcal{O}^{\text{SE}}_m$, respectively. However, it can be clearly seen in Fig. \ref{fig_comp_ratio_SD} that the order $\mathcal{O}^{\text{FDL}}_m$ obtained based on the FS-Net's output is considerably better than the $\mathcal{O}^{\text{SE}}_m$ used in the conventional SE-SD. Specifically, in the $24 \times 24$ MIMO system, the proposed FDL-SD with candidate ordering based on $\mathcal{O}^{\text{FDL}}_m$ achieves $80\% - 95\%$ complexity reduction w.r.t. the conventional FP-SD, while that achieved by the SE-SD with $\mathcal{O}^{\text{SE}}_m$ is only around $50\%$, as seen in Fig. \ref{fig_comp_ratio_SD}.
\item {The FS-Net performs worse for higher-order modulations, such as 64-QAM. However, the complexity reduction ratio of the FDL-SD compared to that of the FP-SD is still significant, $70\%$--$99\%$ in Fig. \ref{fig_comp_ratio_SD}(c). It can be observed from Figs.\ \ref{fig_comp_SD} and \ref{fig_comp_ratio_SD} that the OAMP scheme has relatively low complexity. In particular, at low SNRs, its complexity is much lower than that of most of the compared schemes, except for the proposed FDL-SD. However, it is noted that this does not guarantee near-optimal performance, as shown in Fig.\ \ref{fig_ber_SD}.}
\end{itemize}
{To explain the complexity reduction of the proposed FDL-SD scheme, we further investigate its convergence compared to those of the FP-SD, SE-SD, and DPP-SD schemes in Fig. \ref{fig_conv} for $16 \times 16$ and $24 \times 24$ MIMO systems with QPSK. Based on the description of the SD scheme in Section IV, the convergence of the SD schemes can be evaluated by the number of visited nodes, each associated with an examined candidate symbol. Specifically, the number of visited nodes is equal to the number of iterations that the SD schemes perform until convergence. For example, in the proposed FDL-SD algorithm, this iterative process is conducted during steps 6--21 of Algorithm \ref{al_FDL_SD}, which is similar to the conventional SD schemes \cite{hassibi2005sphere}. This iterative search process terminates when the sphere stops shrinking, i.e., when the radius stops decreasing and reaches convergence. In Fig. \ref{fig_conv}, we show the convergences of the considered schemes for $2000$ iterations/visited nodes. It is observed that the FDL-SD scheme reaches convergence much earlier than the other compared schemes.} In summary, it is clear from Figs. \ref{fig_ber_SD}--\ref{fig_conv} that the proposed FDL-SD scheme has no performance loss w.r.t. the conventional SD, whereas it attains the most significant complexity reduction among the compared schemes.
\subsection{BER Performance and Computational Complexity of the Proposed FDL-KSD Scheme}
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{ber_KSD}
\caption{BER performance of the proposed FDL-KSD scheme compared to those of the conventional KSD {and OAMP schemes} for a $(32 \times 32)$ MIMO system with $L = 15$ and QPSK, a $(16 \times 16)$ MIMO system with $L = 30$ and 16-QAM, and a {$(16 \times 16)$ MIMO system with $L = 15$ and 64-QAM}, and $K=256$.}
\label{fig_ber_KSD}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.6]{comp_KSD}
\caption{Computational complexity of the proposed FDL-KSD scheme compared to those of the conventional KSD {and the OAMP schemes} for a $(32 \times 32)$ MIMO system with $L = 15$ and QPSK, a $(16 \times 16)$ MIMO system with $L = 30$ and 16-QAM, and a {$(16 \times 16)$ MIMO system with $L = 15$ and 64-QAM}, and $K=256$.}
\label{fig_comp_KSD}
\end{figure*}
In Figs. \ref{fig_ber_KSD} and \ref{fig_comp_KSD}, we show the BER performance and complexity of the proposed FDL-KSD and the conventional KSD schemes for a $32 \times 32$ MIMO system with QPSK, a $16 \times 16$ MIMO system with 16-QAM, and a {$16 \times 16$ MIMO system with 64-QAM}. We note that no existing work in the literature considers the application of DL to the KSD. Therefore, we only compare the performance and complexity of the proposed FDL-KSD to those of the conventional KSD {and OAMP schemes}. Furthermore, we also show the performance and complexity of the proposed FDL-KSD with early rejection only, to demonstrate that the early rejection scheme attains not only a performance improvement but also a complexity reduction.
\begin{figure}[t]
\centering
\subfigure[$32 \times 32$ MIMO, $L = 15$, SNR $=12$ dB with QPSK]
{
\includegraphics[scale=0.52]{tradeoff_KSD_QPSK}
\label{tradeoff_KSD_QPSK}
}
\subfigure[$16 \times 16$ MIMO, $L = 30$, SNR $=20$ dB with 16-QAM]
{
\includegraphics[scale=0.52]{tradeoff_KSD_QAM}
\label{tradeoff_KSD_QAM}
}
\caption{Tradeoff between BER performance and computational complexity of the proposed FDL-KSD scheme compared to that of the conventional KSD for $K=\{32, 64, 128, 256\}$.}
\label{tradeoff_KSD}
\end{figure}
In Fig. \ref{fig_comp_KSD}, it is observed that unlike the conventional KSD scheme, whose complexity is fixed across SNRs, the complexity of the proposed FDL-KSD scheme decreases significantly with the SNR. Specifically, the complexity of the FDL-KSD scheme in the considered systems is reduced by approximately half as the SNR increases from low to high. Moreover, it is clear that at moderate and high SNRs, the proposed FDL-KSD scheme achieves better performance with considerably lower complexity than the conventional KSD scheme. In particular, early rejection alone achieves improved performance and reduced complexity w.r.t. the conventional KSD. The additional application of layer ordering results in a further performance improvement of the FDL-KSD algorithm, as seen in Fig. \ref{fig_ber_KSD}. Specifically, in the considered systems, a performance improvement of $0.5$ dB in SNR is achieved. At the same time, complexity reductions of $38.3\%$, $50.2\%$, {and $40\%$ w.r.t. the conventional KSD are attained at SNR $=\{12, 20, 28\}$ dB in Figs. \ref{fig_comp_KSD}(a), \ref{fig_comp_KSD}(b), and \ref{fig_comp_KSD}(c), respectively.} We note that the proposed FDL-KSD has higher or comparable complexity w.r.t. the conventional KSD at low SNRs because the complexity required for the FS-Net is included. {In particular, in Fig. \ref{fig_comp_KSD}(c), the OAMP scheme has a relatively low complexity. However, its performance is not near-optimal, whereas that of the SD- and KSD-based alternatives is.}
\begin{figure}[t]
\centering
\subfigure[BER performance]
{
\includegraphics[scale=0.55]{ber_onering}
\label{fig_ber_onering}
}
\subfigure[Computational complexity]
{
\includegraphics[scale=0.55]{comp_onering}
\label{fig_comp_onering}
}
\caption{{BER performance and complexity of the proposed schemes in one-ring correlated channels.}}
\label{fig_onering}
\end{figure}
In Fig. \ref{tradeoff_KSD}, we show the improvement in the performance--complexity tradeoff of the proposed FDL-KSD scheme w.r.t. the conventional KSD scheme for a $32 \times 32$ MIMO system with QPSK and SNR $=12$ dB and a $16 \times 16$ MIMO system with 16-QAM and SNR $=20$ dB. Various values for $K$ are considered, including $K=\{32, 64, 128, 256\}$. We make the following observations:
\begin{itemize}
\item First, it is clear that the proposed FDL-KSD scheme not only achieves better BER performance but also requires much lower complexity than the conventional KSD scheme. For example, to attain a BER of $1.96 \times 10^{-4}$ in the $32 \times 32$ MIMO system with QPSK, the conventional KSD scheme requires $K=256$, whereas only $K<64$ is sufficient for the proposed FDL-KSD scheme with early rejection only, corresponding to a complexity reduction ratio of $52.5 \%$, and only $K=32$ is required for the FDL-KSD scheme with both early rejection and layer ordering, resulting in $55.6 \%$ complexity reduction.
\item Second, the complexity reduction is more significant as $K$ increases. This is because in the proposed FDL-KSD scheme, the number of actual survival nodes is not $K$, but $K_m \leq K, \forall m$, as discussed in Section V-B.
\item Moreover, the performance--complexity tradeoff of the conventional KSD scheme significantly depends on $K$, as discussed in Section V-B. In Fig. \ref{tradeoff_KSD}, its BER performance can be improved dramatically as $K$ increases, which, however, causes considerably high complexity. In contrast, the performance--complexity tradeoff of the proposed FDL-KSD scheme is comparatively stable with $K$. For example, its BER performance in the $32 \times 32$ MIMO system with QPSK is approximately the same for $K=128$ and $K=256$, and its complexity increases relatively slowly as $K$ increases. In contrast, in the $16 \times 16$ MIMO system with 16-QAM, $K=64$ is sufficient to achieve a BER of $4.68 \times 10^{-5}$, and a further increase of $K$ to $\{128, 256\}$ does not result in performance improvement. In this case, we can conclude that $K=64$ is optimal for the FDL-KSD scheme.
\end{itemize}
\subsection{{Performance and Complexity of FDL-SD/KSD for Highly Correlated Channels}}
{In Fig. \ref{fig_onering}, we show the BER performance and computational complexity of the proposed FDL-SD/KSD compared to those of the conventional FP-SD, SE-SD, OSIC-SD, KSD, FS-Net, and OAMP schemes under a highly correlated channel. Specifically, a $10 \times 16$ MIMO system with QPSK and the one-ring channel model \cite{he2020model} is assumed, which is also used to generate the training data of the FS-Net. Furthermore, we assume $L=10$, $T=2$, and $K=32$ for the number of layers in FS-Net, number of iterations in OAMP, and number of surviving nodes in KSD, respectively. It is observed that the performance of the considered schemes degrades significantly w.r.t. the case of i.i.d. Rayleigh channels. However, it is worth noting that in highly correlated channels, the FS-Net performs similarly to the KSD/SD, in contrast with the DetNet, OAMP-Net, and LcgNet in \cite{samuel2019learning, he2020model, wei2020learned}. As a result, the FDL-SD and FDL-KSD still achieve complexity reduction without any performance loss, similar to the observations for the i.i.d. Rayleigh channel, which further justifies that the FS-Net is a highly efficient scheme to generate initial solutions for the proposed FDL-SD/KSD schemes.}
\section{Conclusion}
In this paper, we have presented a novel application of DL to both the conventional SD and KSD, resulting in the FDL-SD and FDL-KSD schemes, respectively. The main idea is to leverage the FS-Net to generate a highly reliable initial solution with low complexity. The initial solution determined by the FS-Net is exploited for candidate and layer ordering in the FDL-SD scheme, and for early rejection and layer ordering in the FDL-KSD scheme. Unlike the existing DL-aided SD schemes, the proposed application of DL to the SD schemes does not require performing the conventional SD schemes to generate the training data. Therefore, the employed DNN, i.e., FS-Net, can be trained with significantly less time and computational resources than those required in existing works. Our simulation results justify the performance and complexity-reduction gains of the proposed schemes. Specifically, the FDL-SD scheme achieves remarkable complexity reduction, which exceeds $90\%$, without any performance loss. Moreover, the proposed FDL-KSD scheme attains a dramatically improved performance--complexity tradeoff. {We note that the proposed applications of DL to SD/KSD are not limited to the use of the FS-Net, and a scheme with a superior performance--complexity tradeoff can potentially provide higher complexity reduction gains, which motivates further developments of improved DL models for signal detection.}
\bibliographystyle{IEEEtran}
\section{Appendix}
\input ./sections/appendix.tex
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{splncs04}
\section{Acknowledgements}
This work is partially supported by the Japan Society for the
Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (S)
No. 17H06099, (A) No. 18H04093, (C) No. 18K11314 and Early-Career Scientists
No. 19K20269.
\section{Conclusion and Future Work}
In this paper, we proposed a new notion of location privacy on road networks, GG-I, based on differential privacy. GG-I provides a guarantee of the indistinguishability of a true location on road networks. We revealed that GG-I is a relaxed version of Geo-I, which is defined on the Euclidean plane. Our experiments showed that this relaxation allows a mechanism to output more useful locations with the same privacy level for LBSs that function over road networks. By introducing the notions of empirical privacy gain AE and utility loss Q$^{loss}$ in addition to indistinguishability $\epsilon$, we formalized the objective function and proposed an algorithm to find an approximate solution. We showed that this algorithm has an acceptable execution time and that even an approximate solution results in improved performance.
We represented a road network as an undirected graph; this means that our solution ignores road directionality even though one-way roads exist, which may degrade its utility.
In this paper, the target being protected is a location, but if additional information (such as which hospital the user is in) also needs to be protected, our proposed method does not work well: the hospital could be distinguished.
This problem can be solved by introducing another metric space that represents the targets to protect instead of the road network graph. Moreover, we need to consider the fact that multiple perturbations of correlated data, such as trajectory data, may degrade the level of protection even if the mechanism satisfies GG-I as in the case of Geo-I and differential privacy. This topic has been intensely studied, and we believe that the results can be applied to GG-I.
\section{Experiments with Real-world Data}
\label{sec:experiments}
In this section, we show that GEM outperforms the baseline mechanism PLMG, which is the mechanism satisfying Geo-I, in terms of the tradeoff between utility and privacy on road networks of real-world maps.
\subsection{Comparison of GEM with PLMG}
\label{subsec:exp_gem_plmg}
We evaluate the tradeoff of GEM based on the optimized range, comparing it with PLMG.
We use two kinds of maps~(Fig.~\ref{fig:maps}), each covering a range of \SI{1000}{m} from the center, where points represent nodes. We assume that users are located at each node with the same probability. We use the output range of GEM obtained by Algorithm~\ref{alg:opt} according to this prior distribution.
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize, height=\hsize]{img/sub_tokyo.png}
\end{minipage}
\hfill
\begin{minipage}{0.5\hsize}
\centering\includegraphics[height=\hsize]{img/sub_akita.png}
\end{minipage}
\caption{On the left is a map of Tokyo, while the right shows a map of Akita.{\label{fig:maps}}}
\end{figure}
We compare Q$^{loss}$ of GEM with that of PLMG with respect to the same AE. Here, we assume an adversary who performs an optimal inference attack with knowledge of the user, that is, the uniform distribution over the nodes. Fig.~\ref{fig:map_ae_sql} shows that GEM outperforms PLMG on both maps w.r.t. the tradeoff between utility and privacy. Since GEM relaxes the definition of Geo-I and tightly captures the privacy of locations on road networks, GEM can output more useful locations than PLMG. Its performance advantage is greater on the Akita map because the difference between the Euclidean distance and the shortest-path distance is larger on that map than on the Tokyo map.
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/tokyo_ae_sql.png}
\end{minipage}
\hfill
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/akita_ae_sql.png}
\end{minipage}
\caption{Q$^{loss}$ comparison between PLMG and GEM on maps of Tokyo (left) and Akita (right) with respect to AE.{\label{fig:map_ae_sql}}}
\end{figure}
\subsection{Evaluation of the Effectiveness of Optimization}
Fig.~\ref{fig:map_ae_sql} shows that the optimization works well. Here, we assume some prior knowledge and show the effectiveness of the optimization.
\subsubsection{Scenario}
\label{subsubsec:scenario}
First, we show that the approximate solution for the proposed objective function effectively improves the tradeoff between utility and privacy. We use the following real-world scenario: a bus rider who uses LBSs. In other words, the user has a higher probability of being located near a bus stop. We create a prior distribution following this scenario by using a real-world dataset, Kyoto Open Data\footnote{https://data.city.kyoto.lg.jp/28}, which includes the number of people who enter and exit buses at each bus stop per day. Fig.~\ref{fig:kyoto_bus_stop} shows the data, and Fig.~\ref{fig:kyoto_bus_prior} shows the prior distribution made by distributing node probability based on the shortest distance from each node to a bus stop and the number of people who enter and exit buses at that stop. We assume that a user who follows this prior distribution uses an LBS with GEM and that an adversary knows the prior distribution.
In this setting, we run Algorithm \ref{alg:opt} and obtain an approximate solution. Fig. \ref{fig:kyoto_opt} shows an example of an approximate solution. We can see that the nodes around the places with higher prior probability remain.
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/kyoto_bus_stop.png}
\caption{Each point represents a bus stop, and the y-axis represents the number of people who enter and exit buses at that stop.}
\label{fig:kyoto_bus_stop}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/kyoto_bus_prior.png}
\caption{Prior distribution created from Kyoto Open Data.}
\label{fig:kyoto_bus_prior}
\end{minipage}
\end{figure}
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\centering\includegraphics[height=0.8\hsize]{img/kyoto_eps0011.png}
\caption{The solution for the objective function against the adversary with $\epsilon=0.01$.}
\label{fig:kyoto_opt}
\end{minipage}
\hfill
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/kyoto_epsilon_mc.png}
\caption{PC\ with respect to $\epsilon$.}
\label{fig:kyoto_epsilon_mc}
\end{minipage}
\end{figure}
\subsubsection{Evaluation of Optimized Range}
First, we evaluate the PC of GEM with an optimized output range under the same $\epsilon$ as shown in Fig.~\ref{fig:kyoto_epsilon_mc}.
The result shows that a user can effectively perturb their true location for any realistic value of $\epsilon$ by using the optimized range. When the value of $\epsilon$ is small, the distribution of GEM spreads gently. In this case, the output of the mechanism does not contain useful information; thus, the adversary must rely on his/her prior knowledge, which results in a worse PC for the baseline.
However, as these results show, by optimizing the output range according to the prior knowledge of the adversary, we can prevent this type of privacy leak.
\section{Geo-graph-indistinguishability}
In this section, we propose a new definition of location privacy on road networks, called Geo-Graph-Indistinguishability (GG-I). We first formally define GG-I. Then, we clarify the relationship between Geo-I and GG-I. In the following subsections, we describe the reason why GG-I restricts the output range and characteristics that GG-I inherits from $d_\mathcal{X}$-privacy~\cite{broad_dp}.
\subsection{Definition}
We assume that a graph $G=(V,E)$ representing a road network is given.
First, we introduce the privacy loss of a location on a road network as follows.
\begin{definition}[privacy loss of a location on a road network]
Given a mechanism $M:V\to\mathcal{Z}$ and $v,v^\prime\in V$, privacy loss of a location on a road network is as follows:
\begin{equation}
\nonumber
L(M,v,v^\prime)=\sup_{S\subseteq\mathcal{Z}}\left|\log\frac{\Pr(M(v)\in S)}{\Pr(M(v^\prime)\in S)}\right|
\end{equation}
\end{definition}
Intuitively, privacy loss measures how different the output distributions of a mechanism are for two inputs $v$ and $v^\prime$. If the privacy loss value $L(M,v,v^\prime)$ is small, an adversary who sees an output $M(v)$ cannot determine whether the true location is $v$ or $v^\prime$, which is the basic notion of differential privacy described in~Section~\ref{subsec:differentialprivacy}. In the same way that differential privacy guarantees the indistinguishability of a record in a database, our notion guarantees the indistinguishability of a location.
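For a finite mechanism given as a row-stochastic matrix, this privacy loss can be computed directly. The sketch below is our own illustration (the matrix values are made up); it relies on the fact that for discrete distributions the supremum over output sets $S$ is attained on a singleton output.

```python
import math

def privacy_loss(P, v, vp):
    # L(M, v, v') for a discrete mechanism with row-stochastic matrix P,
    # where P[v][z] = Pr(M(v) = z) and every entry is positive.
    # For discrete distributions, the supremum over output sets S is
    # attained on a singleton {z}, so scanning single outputs suffices.
    return max(abs(math.log(P[v][z] / P[vp][z])) for z in range(len(P[v])))

# Toy mechanism over two input locations and three outputs
P = [[0.7, 0.2, 0.1],
     [0.4, 0.4, 0.2]]
print(round(privacy_loss(P, 0, 1), 3))  # 0.693
```

The worst-case ratio here comes from outputs 1 and 2, both with ratio $2$, giving $\log 2 \approx 0.693$.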
Given $\epsilon \in \mathbb{R^+}$, we define $\epsilon$-geo-graph-indistinguishability as follows.
\begin{definition}{($\epsilon$-geo-graph-indistinguishability)}
Mechanism $M:V\to V$ satisfies $\epsilon$-GG-I iff $\forall v,v^\prime\in V$,
\begin{equation}
\nonumber
L(M,v,v^\prime)\leq \epsilon d_s(v,v^\prime)
\end{equation}
where $d_s$ is the shortest path length between two vertices.
\end{definition}
Intuitively, $\epsilon$-GG-I constrains any two outputs of a mechanism to be similar when the two inputs are similar, that is, when they represent close vertices. In other words, the two output distributions are guaranteed to be similar, and the degree of similarity is $\epsilon d_s(v,v^\prime)$. From this property, an adversary who obtains an output of the mechanism cannot distinguish the true input $v$ from another vertex $v^\prime$ beyond the level $\epsilon d_s(v,v^\prime)$. In particular, a vertex close to the true vertex cannot be distinguished. Moreover, $\epsilon$-GG-I constrains the output range to the vertices of the graph because an output located off the road network may cause empirical privacy leaks. This constraint prevents that kind of privacy leak. We provide additional explanation of this concept in Section~\ref{subsec:range}.
The definition can be also formulated as follows:
\begin{equation}
\nonumber
\forall v,v^\prime\in V, \forall W\subseteq V,\frac{\Pr(M(v)\in W)}{\Pr(M(v^\prime)\in W)} \leq \mathrm{e}^{\epsilon d_s(v,v^\prime)}
\end{equation}
This formulation implies that GG-I is an instance of $d_\mathcal{X}$-privacy~\cite{broad_dp} proposed by Chatzikokolakis et al., as are Geo-I and differential privacy. Chatzikokolakis et al. showed that any instance of $d_\mathcal{X}$-privacy guarantees strong privacy properties, as shown in Section~\ref{subsec:chara}.
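The definition can be checked numerically on a small road graph. The sketch below is our own toy setup (not the GEM evaluated later in the paper): it computes shortest-path distances and verifies the GG-I inequality for a graph analogue of the exponential mechanism with score $-\epsilon d_s/2$, a standard construction for $d_\mathcal{X}$-privacy.

```python
import math

def shortest_paths(n, edges):
    # Floyd-Warshall all-pairs shortest paths on an undirected weighted graph
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def graph_exponential_mechanism(d, eps):
    # P(z | v) proportional to exp(-eps/2 * d_s(v, z)); one row per input vertex
    n = len(d)
    P = []
    for v in range(n):
        w = [math.exp(-eps / 2.0 * d[v][z]) for z in range(n)]
        s = sum(w)
        P.append([x / s for x in w])
    return P

def satisfies_ggi(P, d, eps, tol=1e-9):
    # Check |log P(z|v) - log P(z|v')| <= eps * d_s(v, v') for all v, v', z
    n = len(P)
    for v in range(n):
        for vp in range(n):
            for z in range(n):
                if abs(math.log(P[v][z]) - math.log(P[vp][z])) > eps * d[v][vp] + tol:
                    return False
    return True

# Path graph 0-1-2-3 with 100 m edges
d = shortest_paths(4, [(0, 1, 100), (1, 2, 100), (2, 3, 100)])
P = graph_exponential_mechanism(d, eps=0.01)
print(satisfies_ggi(P, d, eps=0.01))  # True
```

The same matrix fails the check for a stricter budget (e.g., `satisfies_ggi(P, d, eps=0.001)` is `False`), which confirms the test is not vacuous.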
\subsection{Relationship between Geo-I and GG-I}
\label{subsec:analyze_relationship}
Geo-I~\cite{geo-i} defines location privacy on the Euclidean plane (see Section~\ref{subsec:geo-i} for details). Here, we explain the relationship between Geo-I and GG-I. To show the relationship, we introduce the following lemma.
\begin{lemma}[Post-processing theorem of Geo-I.]
\label{lemma:postprocess}
If a mechanism $M:\mathcal{X}\to\mathcal{Z}$ satisfies $\epsilon$-Geo-I, a post-processed mechanism $f\circ M$ also satisfies $\epsilon$-Geo-I for any function $f:\mathcal{Z}\to\mathcal{Z}^\prime$.
\end{lemma}
We refer readers to the appendix for the proof. Intuitively, this means that Geo-I does not degrade even if the output is mapped by any function. Moreover, if a mechanism $M:\mathcal{X}\to\mathcal{Z}$ satisfies $\epsilon$-Geo-I, the following inequality holds for any two vertices $v,v^\prime\in V$ from Inequality~(\ref{equ:shortest-path}).
\begin{equation}
\nonumber
\begin{split}
\sup_{S\subseteq{\mathcal{Z}}}\left|\log\frac{\Pr(M(v)\in S)}{\Pr(M(v^\prime)\in S)}\right|\leq &\epsilon d_e(v,v^\prime)\\\leq
&\epsilon d_s(v,v^\prime)
\end{split}
\end{equation}
From this inequality and Lemma~\ref{lemma:postprocess}, we can derive the following theorem.
\begin{theorem}
If a mechanism $M$ satisfies $\epsilon$-Geo-I, $f\circ M$ satisfies $\epsilon$-GG-I, where $f:\mathcal{Z}\to V$ is any mapping function to a vertex of the graph.
\end{theorem}
This means that a mechanism that satisfies $\epsilon$-Geo-I can always be converted into a mechanism that satisfies $\epsilon$-GG-I by post-processing.
We note that the reverse is not always true. That is, GG-I is a relaxed version of Geo-I through the use of the metric $d_s$, allowing for us to create a mechanism that outputs a useful location. We refer to Section~\ref{subsec:privacy_geoi_geogi} for details.
For example, the planar Laplace mechanism (PLM)~(Section~\ref{subsubsec:plm}) satisfies $\epsilon$-Geo-I. Because outputs of PLM include locations that are not on the road network, PLM may cause empirical privacy leaks, as described in the next section; accordingly, PLM does not satisfy $\epsilon$-GG-I.
$f\circ PLM$ satisfies $\epsilon$-GG-I and prevents these privacy leaks if $f$ is a mapping function to a vertex of the graph. For utility, we can use a mapping function that maps to the nearest vertex; we call this mechanism the Planar Laplace Mechanism on a Graph (PLMG).
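A minimal sketch of PLMG (our own illustration; the vertex layout is an assumption): planar Laplace noise can be drawn in polar form, since its radial distance follows a Gamma(2, $1/\epsilon$) distribution, and the post-processing $f$ snaps the noisy point to the nearest vertex.

```python
import math, random

def planar_laplace(x, y, eps, rng=random):
    # Planar Laplace noise with density (eps^2 / (2*pi)) * exp(-eps * d_e)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    # Radial distance ~ Gamma(shape=2, scale=1/eps): sum of two exponentials
    r = rng.expovariate(eps) + rng.expovariate(eps)
    return x + r * math.cos(theta), y + r * math.sin(theta)

def snap_to_graph(point, vertices):
    # Post-processing f: map the noisy point to the nearest vertex
    px, py = point
    return min(vertices, key=lambda v: math.hypot(v[0] - px, v[1] - py))

def plmg(x, y, eps, vertices, rng=random):
    # PLMG = nearest-vertex post-processing of the planar Laplace mechanism
    return snap_to_graph(planar_laplace(x, y, eps, rng), vertices)

vertices = [(i * 100.0, 0.0) for i in range(11)]  # a 1 km straight road
out = plmg(500.0, 0.0, eps=0.01, vertices=vertices)
print(out in vertices)  # True
```

By construction the reported location is always a graph vertex, which is exactly the output-range restriction that $\epsilon$-GG-I imposes.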
\subsection{Output Range from a Privacy Perspective}
\label{subsec:range}
There are two reasons why $\epsilon$-GG-I restricts output range to vertices of the graph.
First, LBSs that operate over road networks expect to receive a location on a road network as described in Section~\ref{subsec:problem}.
Second, because road networks are public information, outputting a location outside the road network may cause empirical privacy leaks. We empirically show that an adversary who knows the road network can perform a more accurate attack than can one who does not know the road network; a post-processed mechanism protects privacy from this type of attack.
To show this, we evaluate the empirical privacy gain AE of two kinds of mechanisms PLM and PLMG against the two kinds of adversaries.
For simplicity, we use a simple synthetic map illustrated in Fig.~\ref{fig:syn_map}. This map consists of 1,600 squares, each with a side length of \SI{100}{m}; that is, the area dimensions are \SI{4000}{m} $\times$ \SI{4000}{m}, and each lattice point has a coordinate. The centerline represents a road where a user is able to be located, and the other areas represent locations where a user must not be, such as the sea.
In this map, we evaluate the empirical privacy gain AE of the two mechanisms against two kinds of adversaries with the same utility loss Q$^{loss}$. We use Euclidean distance as the metric of AE and Q$^{loss}$, denoted by AE$_e$ and Q$^{loss}_e$, respectively.
\begin{figure}[t]
\begin{minipage}{0.4\hsize}
\centering\includegraphics[width=\hsize]{img/synthegraph.png}
\caption{A synthetic map. The red line represents a road, and a user is located inside the black frame.}
\label{fig:syn_map}
\end{minipage}
\hfill
\begin{minipage}{0.6\hsize}
\centering\includegraphics[width=\hsize]{img/ge.png}
\caption{AE of each mechanism with respect to Q$^{loss}$ with the Euclidean distance, that is, AE$_e$ and Q$^{loss}_e$. PLM$^*$ represents the AE of PLM against an adversary who does not know the road network.}
\label{fig:output_outside}
\end{minipage}
\end{figure}
Fig.~\ref{fig:output_outside} shows the results.
PLM represents the empirical privacy gain AE against an adversary who knows the road network, while PLM$^*$ represents the AE against an adversary who does not know the road network. Comparing PLM with PLM$^*$, the adversary can more accurately infer the true location by considering the road network.
The AE of PLMG is higher than the AE of PLM and almost identical to the AE of PLM$^*$.
By restricting the output to locations on the road network, the adversary cannot improve the inference of the true location because no additional information exists. In other words, post-processing to a location on road networks strengthens the empirical privacy level against an adversary who knows the road network.
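The inference attack behind AE can be sketched as follows (our own minimal formalization, assuming discrete locations, a known prior, and the mechanism given as a matrix): for each observed output, the adversary guesses the location minimizing the posterior-expected distance, and AE is the resulting expected error.

```python
def optimal_attack_ae(prior, P, dist):
    # AE under the optimal inference attack: for each observed output z the
    # adversary guesses the location v_hat minimizing the posterior-expected
    # distance to the true location; AE is the overall expected distance.
    n = len(prior)
    m = len(P[0])
    ae = 0.0
    for z in range(m):
        joint = [prior[v] * P[v][z] for v in range(n)]  # prior(v) * Pr(z | v)
        if sum(joint) == 0.0:
            continue
        best = min(range(n),
                   key=lambda vh: sum(joint[v] * dist[vh][v] for v in range(n)))
        ae += sum(joint[v] * dist[best][v] for v in range(n))
    return ae

# Three locations on a line, 100 m apart, with a uniform prior
dist = [[0, 100, 200], [100, 0, 100], [200, 100, 0]]
prior = [1 / 3] * 3
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # no perturbation
uniform = [[1 / 3] * 3 for _ in range(3)]      # maximal perturbation
print(optimal_attack_ae(prior, identity, dist))  # 0.0
print(round(optimal_attack_ae(prior, uniform, dist), 1))  # 66.7
```

As expected, an unperturbed mechanism gives the adversary zero error (AE $= 0$), while a fully uniform mechanism forces the adversary back to the prior, yielding the prior's best-guess error.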
\subsection{Characteristics}
\label{subsec:chara}
GG-I\ is an instance of $d_\mathcal{X}$-privacy~\cite{broad_dp}, which is a generalization of differential privacy with the following two characteristics that show strong privacy protection.
\subsubsection{Hiding function}
The first characteristic uses the concept of a hiding function $\phi:V\to V$, which hides a secret location by mapping it to another location.
For any hiding function and a secret location $v\in V$, when an attacker who has a prior distribution that includes information about the user's location obtains each output $o = M(v)$ and $o^\prime = M(\phi(v))$ of a mechanism that satisfies $\epsilon$-GG-I, the following inequality holds for each posterior distribution:
\begin{equation}
\nonumber
\left|\log\frac{\Pr(v|o)}{\Pr(v|o^\prime)}\right|\leq 2\epsilon \sup_{v\in V}d_s(v,\phi(v))
\end{equation}
This inequality guarantees that the adversary's conclusions are the same (up to $2\epsilon \sup_{v\in V}d_s(v,\phi(v))$) regardless of whether $\phi$ has been applied to the secret location.
\subsubsection{Informed attacker}
\label{subsubsec:informed}
The other characteristic can be shown by the ratio of a prior distribution and posterior distribution, which is derived by obtaining an output of the mechanism. By measuring this value, we can determine how much the adversary has learned about the secret. We assume that an adversary (informed attacker) knows that the secret location is in $N\subseteq{V}$. When the adversary obtains an output of the mechanism, the following inequality holds for the ratio of his prior distribution $\pi_{|N}(v)=\pi(v|N)$ and its posterior distribution $p_{|N}(v|o)=p(v|o,N)$:
\begin{equation}
\nonumber
\log\frac{\pi_{|N}(v)}{p_{|N}(v|o)}\leq \epsilon \max_{v,v^\prime\in N}d_s(v,v^\prime)
\end{equation}
Intuitively, this means that the more the adversary knows about the actual location, the less he will be able to learn about the location from an output of the mechanism.\par
\section{Road-Network-Indistinguishability}
In this section, we propose a new definition of location privacy called road-network-indistinguishability (GG-I), which tightly considers location privacy on road networks.
First, we formally define the privacy loss of locations on road networks caused by an output of a mechanism. Based on this notion, we define the privacy of a mechanism on road networks. Then, we derive the definition of $\epsilon$-GG-I.
Second, we describe the characteristics of GG-I\ which are inherited from $d_\mathcal{X}$-privacy~\cite{broad_dp}, which is a generalization of differential privacy.
Finally, we analyze the relationship between geo-indistinguishability and GG-I.
\subsection{Privacy Loss on Road Networks}
Here, we propose a formulation of privacy loss caused by a perturbation mechanism over a road network based on the concept of differential privacy~\cite{dwork2011differential}.
The idea is that privacy loss is regarded as the degree of the distinguishability of a true location from other locations when seeing the output of a mechanism. Then, we define privacy loss on a road network as follows.
\begin{definition}[location privacy loss on a road network]
\label{def:privacyloss}
Location privacy loss on a road network of $v\in\mathcal{V}$ with respect to $v^\prime\in\mathcal{V}$ caused by $M:\mathcal{V}\to\mathcal{Z}$ is:
\begin{equation}
L(M,v,v^\prime)=\sup_{S\subseteq{\mathcal{Z}}}\left|\log\frac{\Pr(M(v)\in S)}{\Pr(M(v^\prime)\in S)}\right|
\end{equation}
\end{definition}
Intuitively, this means that when a user at location $v$ uses mechanism $M$, an adversary can distinguish the true location $v$ from another location $v^\prime$ only up to the order of $L$.
The difference between our privacy loss and that of differential privacy~(Equation~\ref{equ:privacyloss}) lies in the input domain and the constraint on neighboring datasets. In differential privacy, the domain is a set of datasets, and neighboring datasets are defined as datasets that differ in only a single record, because a single record of an individual is the target to protect and the degree of its distinguishability is regarded as the privacy. In our setting, the domain is a set of locations on road networks, and any two locations correspond to neighboring datasets, because a location is the target to protect and the degree of its distinguishability is regarded as the privacy.
\subsection{Privacy Definition on Road Networks}
Here, we define the privacy of mechanism $M$.
As described above, privacy loss represents the degree of the distinguishability of two locations. When privacy loss caused by a mechanism is bounded by some value $l$ for any two locations, it can be said that $l$ represents the privacy protection level of the mechanism.
In other words, even if a user at location $v$ outputs a perturbed location via the mechanism, an adversary cannot distinguish the true location $v$ from any other location $v^\prime$ to the order of $l$. This idea is formally represented as follows.
\begin{equation}
\label{equ:privacylevel}
\forall v,v^\prime\in\mathcal{V}, L(M,v,v^\prime)\leq l
\end{equation}
where $l\in\mathbb{R}^+$. Let $\mathcal{V}$ be a set of locations on road networks and $\mathcal{Z}$ be a set of outputs of a mechanism. Intuitively, if a user at some location uses the mechanism and outputs $z$, the true location is indistinguishable from all other locations on road networks to the extent of the value of $l$. If $l=0.1$, the ratio of the prior probability of $v$ to the posterior probability of $v$ (i.e., $\Pr(v|z)$) is at most $\mathrm{e}^{0.1}\fallingdotseq 1.1$, so an adversary can barely raise the probability that the user is at any single location, which means the true location is indistinguishable from all other locations on road networks.
Although a mechanism that satisfies Inequality~\ref{equ:privacylevel} with a small value of $l$ guarantees strong privacy protection due to the indistinguishability between any two locations, this definition restricts the mechanism too much for it to produce useful outputs.
We therefore relax the notion of neighboring locations so that it covers only locations close to each other, within distance $r$, and we provide indistinguishability only for such neighboring locations. Formally, we define $r$-neighboring locations as two locations such that $d_s(v,v^\prime)\leq r$, and given $l,r \in \mathbb{R^+}$, we define $(l,r)$-road network privacy\ as follows.
\begin{definition}[$(l,r)$-road network privacy]
\label{def:lonroad}
A mechanism $M:\mathcal{V}\to\mathcal{Z}$ satisfies $(l,r)$-road network privacy\ iff for any two $r$-neighboring locations $v,v^\prime\in\mathcal{V}$:
\begin{equation}
L(M,v,v^\prime)\leq l
\end{equation}
\end{definition}
$(l,r)$-road network privacy\ guarantees that a true location $v$ is indistinguishable from other locations $v^\prime$ such that $d_s(v,v^\prime)\leq r$. This definition is motivated by the fact that making two distant locations indistinguishable incurs a large utility loss.
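This definition can be checked directly on a finite mechanism; the sketch below is our own illustration with made-up numbers. The key point is that the privacy-loss bound is only enforced for pairs within shortest-path distance $r$.

```python
import math

def satisfies_lr_privacy(P, d, l, r, tol=1e-9):
    # (l, r)-road network privacy: |log Pr(z|v) - log Pr(z|v')| <= l must
    # hold whenever d_s(v, v') <= r; pairs farther than r are unconstrained.
    n = len(P)
    for v in range(n):
        for vp in range(n):
            if d[v][vp] > r:
                continue
            for z in range(len(P[v])):
                if abs(math.log(P[v][z] / P[vp][z])) > l + tol:
                    return False
    return True

# Three locations on a path with unit edge lengths
d = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
P = [[0.6, 0.3, 0.1],
     [0.3, 0.4, 0.3],
     [0.1, 0.3, 0.6]]
print(satisfies_lr_privacy(P, d, l=1.1, r=1))  # True
print(satisfies_lr_privacy(P, d, l=1.1, r=2))  # False
```

With $r=1$ only adjacent pairs are constrained and the bound $l=1.1$ holds; enlarging the neighborhood to $r=2$ brings the distant pair $(0,2)$ into scope, whose loss $\log 6 \approx 1.79$ exceeds $l$, illustrating why demanding indistinguishability between distant locations is costly.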
\subsection{Road-Network-Indistinguishability}
So far, we have defined privacy loss and the privacy of a mechanism, $(l,r)$-road network privacy.
Here, by using these notions, we propose a formal definition of location privacy on road networks, $\epsilon$-road-network-indistinguishability (GG-I).
$(l,r)$-road network privacy\ depends on two parameters $l$ and $r$.
Intuitively, when $r$ is small, $l$ should be small because the true location needs to be protected within a small range of locations.
Conversely, when $r$ is large, some distinguishability of the true location (i.e., a somewhat large $l$) is acceptable because indistinguishability over a large range of locations provides stronger privacy protection than over a small range.
From this idea, $\epsilon$-GG-I\ can be formulated as follows:
\begin{definition}[$\epsilon$-geo-graph-indistinguishability]
A mechanism $M:\mathcal{V}\to\mathcal{W}$ satisfies $\epsilon$-geo-graph-indistinguishability\ iff $\forall v,v^\prime\in\mathcal{V}$:
\begin{equation}
L(M,v,v^\prime)\leq\epsilon d_s(v,v^\prime)
\end{equation}
where $\mathcal{W}\subseteq{\mathcal{V}}$.
\end{definition}
In other words, $\epsilon$-GG-I\ guarantees the indistinguishability of the order of $\epsilon r$ between two locations such that the distance is less than $r$.
This means that for any $r\in\mathbb{R}$, $M$ satisfies $(\epsilon r, r)$-road network privacy. Intuitively, this definition allows a mechanism to linearly weaken the indistinguishability of two locations according to their distance in order to output a useful location, so the farther apart two locations are, the more distinguishable they may be. We note that $\epsilon$-GG-I\ requires that the output range $\mathcal{W}$ be a subset of $\mathcal{V}$ for the reasons described in Section~\ref{subsec:range}. Therefore, even if a mechanism satisfies $(\epsilon r, r)$-road network privacy for any $r$, it does not satisfy $\epsilon$-GG-I\ when its outputs include a location that is not on the road network (i.e., $v\notin \mathcal{V}$).
However, any mechanism that satisfies $(\epsilon r, r)$-road network privacy for any $r$ can be made to satisfy $\epsilon$-GG-I\ by a post-processing function $f:\mathcal{Z}\to\mathcal{V}$. This is because if $M$ satisfies $(l,r)$-road network privacy, any post-processed mechanism $f\circ M$ also satisfies $(l,r)$-road network privacy, as shown in Theorem~\ref{theo:postprocess}.
\begin{theorem}[Post-processing theorem of $(l,r)$-road network privacy.]
\label{theo:postprocess}
If a mechanism $M:\mathcal{V}\to\mathcal{Z}$ satisfies $(l,r)$-road network privacy, a post-processed mechanism $f\circ M$ also satisfies $(l,r)$-road network privacy\ for any function $f:\mathcal{Z}\to\mathcal{Z}^\prime$.
\end{theorem}
We refer the reader to the appendix for the proof. We take Planar Laplace Mechanism~\cite{geo-i} (PLM) as an example.
The distribution of PLM\ is as follows:
\begin{equation}
\Pr(PLM_\epsilon(x)=o) = \frac{\epsilon^2}{2\pi}\mathrm{e}^{-\epsilon d_e(x,o)}
\end{equation}
where $o\in\mathbb{R}^2$.
PLM\ satisfies $(\epsilon r, r)$-road network privacy for any $r$ but does not satisfy $\epsilon$-GG-I\ because its output range is not restricted to the road network. By applying a post-processing function $f:\mathcal{Z}\to\mathcal{V}$ (e.g., mapping to the nearest location on the road network), the post-processed PLM, which we call the Planar Laplace Mechanism on a Graph (PLMG), satisfies $\epsilon$-GG-I.
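The post-processing theorem can be illustrated numerically with a deterministic $f$ that merges outputs (toy numbers, our own sketch): the worst-case privacy loss of the post-processed mechanism never exceeds that of the original.

```python
import math

def max_privacy_loss(P):
    # sup over inputs v, v' and outputs z of |log P(z|v) - log P(z|v')|,
    # treating 0/0 as no constraint and x/0 as infinite loss
    worst = 0.0
    for v in range(len(P)):
        for vp in range(len(P)):
            for z in range(len(P[v])):
                a, b = P[v][z], P[vp][z]
                if a == 0.0 and b == 0.0:
                    continue
                if a == 0.0 or b == 0.0:
                    return float("inf")
                worst = max(worst, abs(math.log(a / b)))
    return worst

def postprocess(P, f, m_out):
    # Deterministic post-processing: Q(z'|v) = sum of P(z|v) over z with f(z) = z'
    Q = [[0.0] * m_out for _ in P]
    for v, row in enumerate(P):
        for z, p in enumerate(row):
            Q[v][f(z)] += p
    return Q

P = [[0.7, 0.2, 0.1],
     [0.5, 0.3, 0.2]]
Q = postprocess(P, f=lambda z: min(z, 1), m_out=2)  # merge outputs 1 and 2
print(max_privacy_loss(Q) <= max_privacy_loss(P))  # True
```

Merging outputs averages the likelihood ratios, so the worst-case ratio can only shrink; this is the data-processing behavior that Theorem~\ref{theo:postprocess} states for $(l,r)$-road network privacy.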
\subsection{Output Range from Privacy Perspective}
\label{subsec:range}
There are three reasons for $\epsilon$-GG-I\ to restrict output range to $\mathcal{W}\subseteq\mathcal{V}$.
First, LBSs over road networks expect to receive a location on road networks as described in Section~\ref{subsec:problem}.
Second, $(l,r)$-road network privacy\ is not only immune to post-processing $f:\mathcal{Z}\to\mathcal{V}$ (Theorem~\ref{theo:postprocess}) but may even be enhanced by it. For example, if $f$ is a function that always maps to one fixed location on the road network, the resulting mechanism satisfies $0$-GG-I. This means that post-processing can only strengthen the indistinguishability, never weaken it. Therefore, to rigorously measure the indistinguishability of locations on road networks, the indistinguishability of a post-processed mechanism is more appropriate than that of a mechanism that has not been post-processed.
Third, since road networks are public information, outputting a location outside the road network may cause empirical privacy leaks, and post-processing to a location on the road network prevents them. We empirically show that, when a mechanism outputs locations off the road network, an adversary who knows the road network can attack with higher accuracy than one who does not, and that a post-processed mechanism protects privacy from this attack.
To show this, we evaluate the empirical privacy gain AE~(Equation~\ref{equ:privacygain}) of the two mechanisms PLM\ and PLMG\ against the two kinds of adversaries. We use the function $f$ that maps to the nearest location on the road network as the post-processing of PLMG.
For simplicity, we use a simple synthetic map illustrated in Fig.~\ref{fig:syn_map}. This map consists of 1,600 squares, each with a side length of \SI{100}{m}; that is, the area dimensions are \SI{4000}{m} $\times$ \SI{4000}{m}, and each lattice point has a coordinate. The center line represents a road where a user can be located, and the other areas represent locations where a user must not be, such as the sea.
In this map, we evaluate the empirical privacy gain AE\ of the two mechanisms against two kinds of adversaries with the same utility loss SQL. We use the Euclidean distance as the metric of AE\ and SQL, denoted by AE$_e$ and SQL$_e$, respectively.
\begin{figure}[t]
\begin{minipage}{0.4\hsize}
\centering\includegraphics[width=\hsize]{img/synthegraph.png}
\caption{A synthetic map. The red line represents a road, and a user is located inside the black frame.}
\label{fig:syn_map}
\end{minipage}
\hfill
\begin{minipage}{0.6\hsize}
\centering\includegraphics[width=\hsize]{img/ge.eps}
\caption{Privacy gain of each mechanism with respect to utility loss in the Euclidean distance, that is, AE$_e$ and SQL$_e$. PLM$^*$ represents the privacy gain of PLM\ against an adversary who does not know road networks.}
\label{fig:output_outside}
\end{minipage}
\end{figure}
Fig.~\ref{fig:output_outside} shows the results.
PLM\ represents the empirical privacy gain AE\ against an adversary who considers the road network, while PLM$^*$ represents that against an adversary who does not know the road network. Comparing PLM\ with PLM$^*$, the adversary can more accurately infer the true location by considering road networks.
The AE\ of PLMG\ is higher than that of PLM\ and almost the same as that of PLM$^*$.
By restricting the output to locations on the road network, the adversary cannot improve the inference of the true location because no additional information exists. In other words, post-processing to a location on the road network strengthens the empirical privacy level against an adversary who considers road networks.
In summary, a mechanism is required to output locations on road networks to use LBSs over a road network, and the degree of indistinguishability provided by a post-processed mechanism is at least as high as that of a mechanism whose output range includes locations outside the road network. Moreover, from the empirical privacy point of view, a post-processed mechanism does not suffer the empirical privacy leaks caused by ignoring road networks. Therefore, $\epsilon$-GG-I\ restricts the output range to locations on a road network.
\subsection{Characteristics}
GG-I\ is an instance of $d_\mathcal{X}$-privacy~\cite{broad_dp}, which is a generalization of differential privacy, and has the following two characteristics, which demonstrate strong privacy protection.
\subsubsection{Hiding function}
The first characterization uses the concept of a hiding function $\phi:\mathcal{V}\to\mathcal{V}$. For any hiding function and any secret location $v\in\mathcal{V}$, when an attacker who has a prior distribution expressing information about the user's location obtains the outputs $o = M(v)$ and $o^\prime = M(\phi(v))$ of a mechanism that satisfies $\epsilon$-geo-graph-indistinguishability, the following inequality holds for the corresponding posterior distributions:
\begin{equation}
\label{hiding-function}
\left|\log\frac{\Pr(v|o)}{\Pr(v|o^\prime)}\right|\leq 2\epsilon d_s(\phi)
\end{equation}
Let $d_s(\phi)=\sup_{v\in \mathcal{V}}d_s(v,\phi(v))$ be the maximum distance between an actual location and its hidden version. This inequality guarantees that the adversary's conclusions are the same (up to $2\epsilon d_s(\phi)$) regardless of whether $\phi$ has been applied to the secret location.
\subsubsection{Informed attacker}
\label{subsubsec:informed}
The other characterization is shown by the ratio of a prior distribution to the posterior distribution derived from an output of the mechanism. By measuring this ratio, we can determine how much the adversary has learned about the secret. We assume that an adversary (an informed attacker) knows that the secret location is in $N\subseteq{\mathcal{V}}$. When the adversary obtains an output $o$ of the mechanism, the following inequality holds for the ratio of his prior distribution $\pi_{|N}(v)=\pi(v|N)$ and the posterior distribution $p_{|N}(v|o)=p(v|o,N)$:
\begin{equation}
\label{informedattacker}
\log\frac{\pi_{|N}(v)}{p_{|N}(v|o)}\leq \epsilon d_s(N)
\end{equation}
Here, $d_s(N)=\max_{v,v^\prime\in N}d_s(v,v^\prime)$ denotes the maximum distance between locations in $N$. This inequality guarantees that when $d_s(N)$ is small, the adversary's prior and posterior distributions are similar. In other words, the more the adversary already knows about the actual location, the less he can learn about it from an output of the mechanism.\par
\subsection{Analyzing the Relationship between Geo-indistinguishability and Road-network-indistinguishability}
\label{subsec:analyze_relationship}
GG-I\ is defined on the metric space where $d_s$ is defined, which restricts the domain to road networks. Here, we consider location privacy in the more general case without this constraint, where the domain is the Euclidean plane $\mathcal{X}\subseteq{\mathbb{R}^2}$ equipped with the Euclidean distance $d_e$. In this case, the corresponding definition is $\epsilon$-geo-indistinguishability~\cite{geo-i}.
\begin{definition}[$\epsilon$-geo-indistinguishability]
A mechanism $M:\mathcal{X}\to\mathcal{Z}$ satisfies $\epsilon$-geo-indistinguishability iff $\forall{x,x^\prime}\in\mathcal{X}$:
\begin{equation}
\sup_{S\in{\mathcal{Z}}}|\log\frac{\Pr(M(x)\in S)}{\Pr(M(x^\prime)\in S)}|\leq\epsilon d_e(x,x^\prime)
\end{equation}
\end{definition}
Moreover, if a mechanism $M:\mathcal{X}\to\mathcal{Z}$ satisfies $\epsilon$-geo-indistinguishability, the following inequality holds for any two locations on the road network $v,v^\prime\in\mathcal{V}$, by Inequality~(\ref{equ:shortest-path}).
\begin{equation}
\label{equ:geo_road}
\begin{split}
\sup_{S\in{\mathcal{Z}}}|\log\frac{\Pr(M(v)\in S)}{\Pr(M(v^\prime)\in S)}|=L(M,v,v^\prime)\leq &\epsilon d_e(v,v^\prime)\\\leq
&\epsilon d_s(v,v^\prime)
\end{split}
\end{equation}
This yields the following theorem.
\begin{theorem}
If a mechanism $M$ satisfies $\epsilon$-geo-indistinguishability, then $M$ satisfies ($\epsilon r$, $r$)-road network privacy for any $r$, and the post-processed mechanism $f\circ{M}$ satisfies $\epsilon$-GG-I, where $f:\mathcal{Z}\to\mathcal{V}$ is any function mapping outputs to locations on the road network.
\end{theorem}
This follows from Inequality~(\ref{equ:geo_road}) and the post-processing theorem (Theorem~\ref{theo:postprocess}). It means that a mechanism satisfying $\epsilon$-geo-indistinguishability\ can always be converted into a mechanism satisfying $\epsilon$-GG-I\ by post-processing.
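To illustrate the post-processing route, the following is a minimal, stdlib-only Python sketch (our own illustration, not the paper's implementation): it samples the planar Laplace mechanism of Geo-I and then applies a mapping $f$ that snaps the output to the nearest road-network vertex. We rely on the fact that the radial distance of the planar Laplace follows a $\mathrm{Gamma}(2, 1/\epsilon)$ distribution, which Python's \texttt{random.gammavariate} provides; the encoding of vertices as a coordinate dictionary is our own assumption.

```python
import math
import random

def planar_laplace(x, y, epsilon):
    """Sample from the planar Laplace (Geo-I) mechanism centred at (x, y).

    The angle is uniform and the radial distance is Gamma(2, 1/epsilon),
    which is equivalent to a planar density proportional to exp(-epsilon*r).
    """
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = random.gammavariate(2.0, 1.0 / epsilon)
    return x + r * math.cos(theta), y + r * math.sin(theta)

def snap_to_network(z, vertices):
    """Post-processing f: map a planar output to the nearest road vertex.

    vertices is a dict {vertex_id: (x, y)} (a hypothetical encoding).
    """
    zx, zy = z
    return min(vertices,
               key=lambda v: math.hypot(vertices[v][0] - zx,
                                        vertices[v][1] - zy))
```

By the theorem above, if the planar mechanism satisfies $\epsilon$-geo-indistinguishability, its composition with \texttt{snap\_to\_network} satisfies $\epsilon$-GG-I.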
We note that the reverse is not always true; that is, GG-I\ is a relaxed version of geo-indistinguishability\ that uses the metric $d_s$ so that a mechanism can output useful locations for LBSs over road networks.
For example, although PLMG\ satisfies both $\epsilon$-geo-indistinguishability\ and $\epsilon$-GG-I, the graph-exponential mechanism (GEM), which we propose in the next section, satisfies only GG-I.
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{I}{n} recent years, the spread of smartphones and GPS improvements have led to a growing use of location-based services (LBSs). While such services provide enormous benefits for individuals and society, their exposure of users' locations raises privacy issues. From location information, it is easy to obtain sensitive personal information, such as information pertaining to home and family. In response, many methods have been proposed over the past decade to protect location privacy. These methods involve three main approaches: perturbation, cloaking, and anonymization. Most of these privacy protection methods are based on the Euclidean plane rather than on road networks; however, many LBSs such as UBER\footnote{https://marketplace.uber.com/matching} and Waze\footnote{https://www.waze.com/ja/} are based on road networks to capitalize on their structure~\cite{cho2005efficient,kolahdouzan2004voronoi,papadias2003query}, resulting in utility loss and privacy leakage. Some prior works have revealed this fact~\cite{wang2009privacy, hossain2011h, duckham2005formal} and proposed road-network-aware methods based on cloaking and anonymization. However, cloaking and anonymization have a weakness: if an adversary has peripheral knowledge about the true location, such as the range of a user's location, no privacy protection is guaranteed (for details, we refer to Section~\ref{sec:related}). In this paper, based on differential privacy~\cite{dwork2011differential}, we consider a perturbation method that does not have such a weakness.
First, we review perturbation methods and differential privacy~\cite{dwork2011differential}, which are the bases of our work; then, we describe the details of our work.
Perturbation methods modify a true location to another location by adding random noise~\cite{geo-i,shokri-strategy} using a mechanism. Shokri et al.~\cite{shokri-quantify} defined location privacy introduced by a mechanism, and they constructed a mechanism that maximizes location privacy. This concept of location privacy assumes an adversary with some knowledge; this approach cannot guarantee privacy against other adversaries.
Differential privacy~\cite{dwork2011differential} has received attention as a rigorous privacy notion that guarantees privacy protection against any adversary. Andr\'{e}s et al.~\cite{geo-i} defined a formal notion of location privacy called geo-indistinguishability (Geo-I) by extending differential privacy. A mechanism that achieves Geo-I guarantees that a true location is, to some extent, indistinguishable from other locations to any adversary.
However, because Geo-I is based on the Euclidean plane, it does not tightly capture the privacy of locations on road networks, which results in a loose tradeoff between utility and privacy. In other words, Geo-I protects more than is necessary for users on road networks: it assumes only that the given data is a location, which causes a loose tradeoff between utility and privacy for LBSs over road networks.
We assume that a user is located on a road network, which we model as a graph. Following this assumption, we propose a new privacy definition, called $\epsilon$-geo-graph-indistinguishability (GG-I), based on the notion of differential privacy. Additionally, we propose the graph-exponential mechanism (GEM), which satisfies GG-I.
Although GEM outputs a vertex of a graph that represents a road network, the output range (i.e., a set of vertices) is adjustable, which suggests that an optimal output range exists. Next, we introduce Shokri's notions~\cite{shokri-quantify} of privacy and utility, which we call adversarial error (AE) and quality loss (Q$^{loss}$), and analyze the relationship between the output range and these notions. Moreover, we formalize the optimization problem of searching for the range that is optimal with respect to AE and Q$^{loss}$. However, the number of candidate output ranges is $2^{|V|}$, where $|V|$ denotes the number of vertices, which makes it difficult to solve the optimization problem in acceptable time. Consequently, we propose a greedy algorithm that finds an approximate solution in an acceptable amount of time.
Because our definition tightly captures location privacy on road networks, it results in a better tradeoff between utility and privacy. To demonstrate this, we compare GEM with the baseline mechanism proposed in \cite{geo-i}.
In our experiments on two real-world maps, GEM outperforms the baseline w.r.t.\ the tradeoff between utility and privacy. Moreover, we obtain the prior distribution of a user from a real-world dataset and show that the privacy protection level of a user who follows this prior distribution can be effectively improved by the optimization.
In summary, our contributions are as follows:
\begin{itemize}
\item We propose a privacy definition for locations on road networks, called $\epsilon$-geo-graph-indistinguishability (GG-I).
\item We propose a graph-exponential mechanism (GEM) that satisfies GG-I.
\item We analyze the performance of GEM and formalize optimization problems to improve utility and privacy protection.
\item We experimentally show that our proposed mechanism outperforms the mechanism proposed in~\cite{geo-i} w.r.t. the tradeoff between utility and privacy and provide an optimization technique that effectively improves it.
\end{itemize}
\begin{comment}
Our contributions are threefold.
\begin{itemize}
\item We propose a privacy definition on road networks, geo-graph-indistinguishability, and privacy gain and utility loss of its mechanism.
\item We make a mechanism satisfying road-network-indistinguishability.
\item We empirically show that our method outperforms a geo-indistinguishability method.
\item We propose a criteria that indicates the mechanism performance for a given graph and a given prior distribution and the way to improve the mechanism w.r.t this indicator.
\end{itemize}
\end{comment}
\section{A Mechanism to Achieve Geo-Graph-Indistinguishability}
Here, we assume that a graph $G=(V,E)$ representing a road network is given. First, we propose a mechanism that satisfies GG-I, which we call the graph-exponential mechanism (GEM).
Second, we explain the implementation of GEM.
Third, we describe an advantage and an issue of GEM\ caused by its not satisfying Geo-I.
\subsection{Graph-Exponential Mechanism}
PLMG (Section~\ref{subsec:analyze_relationship}) satisfies GG-I, but it does not take advantage of the structure of road networks to output useful locations. Here, we propose a mechanism that considers this structure so that it can output more useful locations. Given a parameter $\epsilon\in\mathbb{R}^+$ and a set of outputs $\mathcal{W}\subseteq{V}$, $GEM_\epsilon$ is defined as follows.
\begin{definition}
$GEM_\epsilon$ takes $v\in V$ as an input and outputs $o\in{\mathcal{W}}$ with the following probability.
\begin{equation}
\label{equ:gem}
\Pr(GEM_\epsilon(v)=o) = \alpha(v)\mathrm{e}^{-\frac{\epsilon}{2}d_s(v,o)}
\end{equation}
where $\alpha$ is a normalization factor $\alpha(v)=(\sum_{o\in \mathcal{W}}\mathrm{e}^{-\frac{\epsilon}{2} d_s(v,o)})^{-1}$.
\end{definition}
This mechanism employs the idea of the exponential mechanism~\cite{mcsherry2007mechanism}, one of the general mechanisms for differential privacy.
Because this mechanism capitalizes on the road network structure through the metric $d_s$, it can achieve higher utility for LBSs over road networks than PLMG can, as shown in Section~\ref{sec:experiments}.
\begin{theorem}
GEM$_\epsilon$ satisfies $\epsilon$-GG-I.
\end{theorem}
We refer readers to the appendix for the proof.
\subsection{Computational complexity of GEM}
Since we assume that LBS providers are untrusted and there is no trusted server, a user needs to create the distribution and sample the perturbed location according to the distribution locally. Here, we explore a method to accomplish this and the issues that can be caused by the number of vertices.
GEM consists of three phases: (i) obtain the shortest path lengths from the user's location to all vertices; (ii) compute the distribution according to Equation~(\ref{equ:gem}); and (iii) sample a point from the distribution. We show the pseudocode of GEM in Algorithm~\ref{alg:gem}.
\begin{algorithm}
\caption{Graph-exponential mechanism.}
\label{alg:gem}
\begin{algorithmic}
\newlength\myindent
\setlength\myindent{2em}
\newcommand\bindent{%
\begingroup
\setlength{\itemindent}{\myindent}
\addtolength{\algorithmicindent}{\myindent}
}
\newcommand\eindent{\endgroup}
\REQUIRE {Privacy parameter $\epsilon$, true location $v$, graph $G=(V,E)$, output range $\mathcal{W}\subseteq{V}$.}
\ENSURE {Perturbed location $w$.}
\STATE \textbf{(i)} $d_s(v,\cdot) \Leftarrow Dijkstra(G=(V,E), v)$
\STATE \textbf{(ii)} Compute the distribution:
\bindent
\FOR{$w$ in $\mathcal{W}$}
\STATE $\Pr(GEM(v)=w) \Leftarrow \alpha(v)\mathrm{e}^{-\epsilon d_s(v,w)/2}$
\ENDFOR
\eindent
\STATE \textbf{(iii)} $w \sim \Pr(GEM(v)=w)$
\STATE return $w$
\end{algorithmic}
\end{algorithm}
We next analyze the computational complexity of each phase.
For phase (i), GEM\ computes the shortest path lengths from $v$ to the other nodes. The computational complexity of this operation is $O(|E|+|V|\log |V|)$ using a Fibonacci heap, where $|V|$ is the number of nodes and $|E|$ is the number of edges.
This level of computational complexity is not problematic; moreover, fast shortest-path algorithms for road networks with large numbers of vertices have been studied, and we refer the reader to~\cite{akiba2014fast}, which may be applicable to our algorithm. Phase (ii) poses no computational problem because its complexity is $O(|V|)$. In phase (iii), when the number of vertices is much larger than expected, we may not be able to sample vertices efficiently from the distribution. This problem has also been studied as consistent weighted sampling (CWS); we refer the reader to~\cite{consistent-weighted-sampling,wu_improved_2017}. We believe that these studies can be applied to our algorithm so that it remains tractable even when the number of vertices is somewhat large.
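As a concrete illustration of the three phases, the following stdlib-only Python sketch (our own, not the paper's implementation; a binary heap stands in for the Fibonacci heap, and the adjacency-dictionary graph encoding is an assumption) implements the steps of Algorithm~\ref{alg:gem}:

```python
import heapq
import math
import random

def dijkstra(adj, src):
    """Phase (i): shortest-path lengths from src over a weighted
    adjacency dict {u: {v: weight}}, via binary-heap Dijkstra."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def gem(adj, v, epsilon, output_range=None):
    """Phases (ii)-(iii): sample o in W with probability proportional
    to exp(-epsilon/2 * d_s(v, o))."""
    ds = dijkstra(adj, v)
    W = list(output_range) if output_range is not None else list(adj)
    weights = [math.exp(-0.5 * epsilon * ds[o]) for o in W]
    return random.choices(W, weights=weights, k=1)[0]
```

Since \texttt{random.choices} normalizes the weights internally, the normalization factor $\alpha(v)$ never needs to be computed explicitly.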
\subsection{Privacy with Respect to Euclidean distance}
\label{subsec:privacy_geoi_geogi}
As described in Section~\ref{subsec:analyze_relationship}, PLMG satisfies $\epsilon$-Geo-I and $\epsilon$-GG-I, but GEM satisfies only $\epsilon$-GG-I.
This is because GG-I is a relaxed definition of Geo-I that allows a mechanism to output more useful perturbed locations. Therefore, GEM shows better utility, as demonstrated in the experiments of Section~\ref{sec:experiments}.
It is worth investigating whether this relaxation weakens the privacy protection guarantees. In short, GG-I provides no privacy guarantee with respect to Euclidean distance; thus, if a user protects location privacy with a mechanism satisfying GG-I, an adversary may be able to distinguish the user's location from other locations even when those locations are close to it in Euclidean distance. In what follows, we demonstrate this fact using the notion of true probability (TP). The probability that an adversary can identify a user's location is
\begin{equation}
\begin{split}
\nonumber
&TP(\pi_u,M,h) \\ &= \sum_{v,\hat{v}\in{\mathcal{V}},o\in{\mathcal{W}}}\pi_u(v)\Pr(M(v)=o)\Pr(h(o)=\hat{v})\delta(v,\hat{v})
\end{split}
\end{equation}
where $\delta(v,\hat{v})$ is a function that returns $1$ if $v=\hat{v}$ holds and $0$ otherwise. TP is the expected probability with which an adversary can remap a perturbed location to the true location.
We consider a set of graphs, each with only two vertices. The Euclidean distance between the vertices is the same for all the graphs, but the weight of the edge between them differs from graph to graph (Fig.~\ref{fig:graph_tp}). We assume that the prior of the user's location is uniform over the two vertices of each graph, and we compute the TP of PLMG and GEM. Fig.~\ref{fig:ggi_weak_img} shows how TP changes as the weight (that is, the shortest path length) changes. Owing to the Euclidean-distance guarantee of Geo-I, the TP of PLMG does not change even when the shortest path length changes; however, since GG-I has no guarantee with respect to Euclidean distance, the TP of GEM grows significantly, which means that the adversary can discover the user's true location.\par
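This effect can be checked numerically. For a two-vertex graph under a uniform prior with MAP remapping $h(o)=o$, the TP of GEM reduces to the closed form $1/(1+\mathrm{e}^{-\epsilon d_s/2})$; the following sketch (our own derivation under these assumptions, not the paper's code) shows that TP approaches $1$ as the shortest path length grows while the Euclidean distance stays fixed:

```python
import math

def gem_tp_two_vertices(epsilon, d_s):
    """TP of GEM on a two-vertex graph: uniform prior, output range
    equal to both vertices, MAP remapping h(o) = o.

    Pr(GEM(v) = v) = 1 / (1 + exp(-epsilon * d_s / 2)), and with a
    uniform prior the MAP attacker's success probability equals it."""
    return 1.0 / (1.0 + math.exp(-0.5 * epsilon * d_s))
```

At $d_s=0$ the two vertices are perfectly indistinguishable ($TP=0.5$), while for large $\epsilon d_s$ the output concentrates on the true vertex and TP tends to $1$.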
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/tp.png}
\caption{Each graph has a different shortest path length with the same Euclidean distance.}
\label{fig:graph_tp}
\end{minipage}
\hfill
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/tp_graph.png}
\caption{TP\ according to GEM\ and PLMG.}
\label{fig:ggi_weak_img}
\end{minipage}
\end{figure}
A mechanism satisfying $\epsilon$-GG-I can achieve better utility than a mechanism satisfying Geo-I by guaranteeing privacy protection in terms of the shortest distance on road networks instead of the Euclidean distance. This comes from our interpretation of privacy: in this paper, we assume that privacy can be interpreted in terms of the shortest distance on road networks. Therefore, GG-I may not be suitable when privacy needs to be interpreted in terms of Euclidean distance, e.g., for weather services, where a wide range of locations needs to be protected.
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/change_synthetic.png}
\caption{Points represent graph nodes, which we use as the input and output of mechanisms. There are edges between neighboring nodes. The side length of each square is \SI{1000}{m}.}
\label{fig:square_graphs}
\end{minipage}
\hfill
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/lattice_sql_eps0001.png}
\caption{Utility loss when changing the number of nodes with $\epsilon=0.01$.}
\label{fig:lattice_sql}
\end{minipage}
\end{figure}
\subsection{Utility Comparison with PLMG}
Both GEM$_\epsilon$ and PLMG$_\epsilon$ satisfy $\epsilon$-GG-I, which means that both guarantee the same indistinguishability. However, the outputs of GEM and PLMG are drawn from different distributions: a discrete distribution and a post-processed continuous distribution, respectively. Here, we explore the difference in utility that this causes. To exclude differences caused by the adopted metrics, we use synthetic graphs (blue points in Fig.~\ref{fig:square_graphs}) in which the shortest path length and the Euclidean distance between any two nodes are identical; that is, graphs that form a straight line on the Euclidean plane. We prepare several graphs by changing the number of nodes while fixing the total length of the graph. Fig.~\ref{fig:lattice_sql} shows the utility loss (i.e., Q$^{loss}$) of GEM and PLMG with $\epsilon=0.01$ for each graph. As shown, the Q$^{loss}$ of GEM increases as the number of nodes increases, while the Q$^{loss}$ of PLMG decreases; the same trend holds for other values of $\epsilon$. PLMG is post-processed by mapping to the nearest node, so when few nodes exist near the output of PLM, PLMG cannot output a useful location because the mapped location may be distant from the input. Conversely, GEM cannot output a useful location efficiently when there are many nodes, because GEM must distribute probability mass to distant nodes. As mentioned in Section~\ref{subsec:problem}, road networks are generally discretized as graphs, so GEM can be considered an appropriate mechanism on road networks. We also show the effectiveness of GEM compared with PLMG in terms of utility on real-world road networks; we refer to Section~\ref{subsec:exp_gem_plmg} for details.
\section{Analyzing the Performance of GEM and Optimizing Range}
GEM requires its output to lie on a road network but imposes no other restriction on the output range. This suggests that there exists an output range that is optimal with respect to privacy and utility.
In this section, we first apply Q$^{loss}$ and AE to the setting of locations on road networks. Then, we propose the performance criterion (PC), which represents the tradeoff between privacy and utility.
Next, we formalize an optimization problem for the PC.
Finally, we propose a greedy algorithm to solve the optimization problem in an acceptable amount of time.
\subsection{Performance of a Mechanism on a Road Network}
While the $\epsilon$ of GG-I indicates the degree of indistinguishability between the real and perturbed locations, it does not indicate the performance of a mechanism w.r.t.\ its utility for a given user or its empirical privacy against a given adversary.
Therefore, we introduce the two notions Q$^{loss}_s$ and AE$_{s}$ by applying Q$^{loss}$ and AE (Section~\ref{subsubsec:aeandsql}) to the setting of road networks. We provide their definitions below.
\begin{equation}
\nonumber
Q^{loss}_s(\pi_u,M) = Q^{loss}(\pi_u,M, d_s)
\end{equation}
\begin{equation}
\nonumber
AE_{s}(\pi_a,M,h) = AE(\pi_a, M, h, d_s)
\end{equation}
Intuitively, Q$^{loss}_s$ is the expected distance on road networks between the true locations and perturbed locations, while AE$_s$ is the expected distance on road networks between the true locations and the locations inferred by an adversary.
In the following, we let Q$^{loss}$ and AE denote Q$^{loss}_s$ and AE$_s$, respectively.
We note that, as opposed to $\epsilon$, AE changes according to the assumed adversary (i.e., the specific attack method and prior distribution).
However, because AE increases as Q$^{loss}$ increases (e.g., a mechanism that outputs distant locations will have a high AE but also a high Q$^{loss}$), using AE alone as a performance criterion for a mechanism is not appropriate.
Therefore, we define a new criterion to measure the performance of a mechanism against an assumed adversary, which we call the performance criterion (PC).
\begin{equation}
\nonumber
PC=AE/Q^{loss}
\end{equation}
Intuitively, against an assumed adversary, PC represents the size of AE with respect to the Q$^{loss}$. In other words, PC measures the utility/privacy tradeoff.
For example, if an adversary with an optimal attack~\cite{shokri-strategy} cannot infer the true location at all (i.e., the adversary infers the pseudolocation as the true location), the mechanism can be considered as having the highest performance ($PC=1$). Conversely, the mechanism performs worst ($PC=0$) if the adversary can always infer the true location.
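As an illustration, PC can be computed directly from its definition for any finite mechanism. The following sketch (our own, not from the paper; it assumes the output range equals the vertex set and uses the Bayesian posterior as the inference function $h$) evaluates Q$^{loss}$, AE, and their ratio:

```python
def performance_criterion(prior, mech, dist):
    """PC = AE / Q^loss for a finite mechanism.

    prior: {v: probability}; mech: {v: {o: Pr(M(v)=o)}} with the output
    range assumed equal to the vertex set; dist: {v: {o: d_s(v, o)}}.
    The attacker's inference h is the Bayesian posterior p(v_hat | o)."""
    V = list(prior)
    qloss = 0.0
    ae = 0.0
    for o in V:
        joint = {v: prior[v] * mech[v].get(o, 0.0) for v in V}
        z = sum(joint.values())
        if z == 0.0:
            continue  # output o is never produced
        post = {v: joint[v] / z for v in V}  # attacker's posterior given o
        for v in V:
            qloss += joint[v] * dist[v][o]
            ae += joint[v] * sum(post[vh] * dist[v][vh] for vh in V)
    return ae / qloss
```

Note that with this soft posterior inference PC can exceed $1$; the bound $PC\leq 1$ in the discussion above refers to an adversary applying the optimal point-estimate attack.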
\subsection{Objective Functions}
Here, we propose an objective function to find the output range of GEM that is optimal with respect to performance. We assume that the prior distribution of a user is given and that the adversary knows it; an example of such a setting is shown in Section~\ref{subsubsec:scenario}. If the prior distribution is not given, we can use the uniform distribution to represent a general user.
Then, we can compute AE and Q$^{loss}$ by assuming an inference function (we refer to Section~\ref{subsubsec:userandadv} for details).
We use the posterior distribution given the pseudolocation $o$ as the inference function $h$.
Then, given an output range $\mathcal{W}$, the PC of GEM with the output range $\mathcal{W}$ is formulated as follows:
\begin{equation}
\nonumber
\frac{\sum_{v,\hat{v}\in V,o\in\mathcal{W}}\pi_u(v)\Pr(GEM_\mathcal{W}(v)=o)p(\hat{v}|o)d_s(v,\hat{v})}{\sum_{v\in V,o\in\mathcal{W}}\pi_u(v)\Pr(GEM_\mathcal{W}(v)=o)d_s(v,o)}
\end{equation}
where GEM$_\mathcal{W}$ denotes GEM with the output range $\mathcal{W}$.
Then, the objective function against the adversary can be formulated as follows.
\begin{maxi}|l|
{\mathcal{W}\subseteq{V}}{PC_{\mathcal{W}}}{}{}
\nonumber
\end{maxi}
where PC$_\mathcal{W}$ is the PC of GEM$_\mathcal{W}$.
Here, GEM with the optimized output range shows the best tradeoff against the adversary, but it can fail to be useful (i.e., have a large Q$^{loss}$) because Q$^{loss}$ is unconstrained; consequently, we add the following constraint on Q$^{loss}$.
\begin{maxi}|l|
{\mathcal{W}\subseteq{V}}{PC_\mathcal{W}}{}{}
\addConstraint{Q^{loss}_{\mathcal{W}} \leq \theta}
\nonumber
\end{maxi}
where Q$^{loss}_\mathcal{W}$ is the Q$^{loss}$ of GEM$_\mathcal{W}$.
The optimized GEM shows the best tradeoff among GEM instances whose output ranges achieve a Q$^{loss}$ of at most $\theta$. We set $\theta = Q^{loss}_{\mathcal{W}_0}$ so that the optimization does not degrade utility.
\subsection{Algorithm to Find an Approximate Solution}
\label{subsec:tactic}
Because the number of combinations for the output range is $2^{|V|}$, we cannot evaluate all of them to find the optimal solution of the optimization problem in an acceptable amount of time; therefore, we propose a greedy algorithm that finds an approximate solution instead. The pseudocode for this algorithm is listed in Algorithm~\ref{alg:opt}. The constraint function $c$ returns a value indicating whether the constraint holds.
\begin{algorithm}
\caption{Finding a local solution.}
\label{alg:opt}
\begin{algorithmic}[1]
\REQUIRE {Privacy parameter $\epsilon$, graph $G=(V,E)$ objective function $f$, constraint function $c$, initial output range $\mathcal{W}_0$.}
\ENSURE {Output range $\mathcal{W}$.}
\STATE $\mathcal{W} \Leftarrow \mathcal{W}_0$
\WHILE{True}
\STATE $\mathcal{W}_{prev} \Leftarrow \mathcal{W}$
\STATE $obj \Leftarrow f(GEM_{\mathcal{W}})$
\FOR{$v$ in $\mathcal{W}_{prev}$}
\STATE $\mathcal{W}^\prime \Leftarrow \mathcal{W}\setminus \{v\}$
\STATE $obj^\prime \Leftarrow f(GEM_{\mathcal{W}^\prime})$
\STATE $cons \Leftarrow c(GEM_{\mathcal{W}^\prime})$
\IF{$obj^\prime - obj < 0$ and $cons$}
\STATE $\mathcal{W} \Leftarrow \mathcal{W}^\prime$
\STATE $obj \Leftarrow obj^\prime$
\ENDIF
\ENDFOR
\IF{$\mathcal{W}_{prev} = \mathcal{W}$}
\STATE break
\ENDIF
\ENDWHILE
\STATE return $\mathcal{W}$
\end{algorithmic}
\end{algorithm}
First, we start with a given initial output range $\mathcal{W}_0$. Next, we evaluate the objective function for the output range with one node removed, and we remove that node if the objective function improves and the constraint still holds. We repeat this procedure until the objective function converges, which requires $O(|\mathcal{W}_0|^2)$ evaluations in the worst case when the computational complexity of the objective function is $O(1)$.
As a rule of thumb, the main loop (line 2 of Algorithm \ref{alg:opt}) likely completes in only a small number of iterations.
However, the computational complexity of PC is $O(|V|^2|\mathcal{W}_0|)$, so the overall computational complexity is $O(|V|^2|\mathcal{W}_0|^3)$.
Therefore, when $|\mathcal{W}_0|$ is large, this computational complexity is not acceptable.
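The greedy procedure can be sketched in Python as follows (our own variant, not the paper's code: each pass removes the single vertex whose removal most improves the objective; \texttt{objective} is a function to be minimized, e.g., $-$PC or Q$^{loss}$, and \texttt{constraint} encodes the Q$^{loss}$ bound):

```python
def greedy_minimize(objective, constraint, W0, min_size=1):
    """Greedy local search over output ranges: repeatedly remove the
    single vertex whose removal most decreases the objective while the
    constraint stays satisfied; stop when no removal improves it."""
    W = set(W0)
    best = objective(W)
    while True:
        candidate = None
        for v in W:
            Wp = W - {v}
            if len(Wp) < min_size or not constraint(Wp):
                continue
            val = objective(Wp)
            if candidate is None or val < candidate[0]:
                candidate = (val, Wp)
        if candidate is None or candidate[0] >= best:
            return W  # no improving removal remains
        best, W = candidate
```

For the constrained PC problem one would pass, e.g., \texttt{objective=lambda W: -pc(W)} and \texttt{constraint=lambda W: qloss(W) <= theta}, where \texttt{pc} and \texttt{qloss} are hypothetical evaluators of PC and Q$^{loss}$ for GEM$_\mathcal{W}$.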
In the following, we propose a way of providing $\mathcal{W}_0$.
\subsubsection{Initialization of $\mathcal{W}$}
PC increases when Q$^{loss}$ decreases, so we propose to first optimize the output range with respect to Q$^{loss}$, which can be computed with low computational complexity.
The optimization problem is as follows:
\begin{mini}|l|
{\mathcal{W}\subseteq{V}}{Q^{loss}_{\mathcal{W}}}{}{}
\nonumber
\end{mini}
Q$^{loss}_{\mathcal{W}\setminus\{v\}}$ can be computed from Q$^{loss}_{\mathcal{W}}$ in $O(|V|)$ time.
Therefore, we can obtain an approximate solution to this optimization problem using Algorithm~\ref{alg:opt} with the initial output range $V$ in $O(|V|^3)$ time in the worst case. As described above, the main loop typically completes within a small number of iterations, so the algorithm usually runs in $O(|V|^2)$ time, which is acceptable even when $|V|$ is somewhat large.
We use this output range as the initial output range of Algorithm~\ref{alg:opt}.
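The $O(|V|)$ update of Q$^{loss}$ under a single-vertex removal can be sketched as follows (our own illustration of the idea, with assumed names): for each input $v$, cache the unnormalized numerator and denominator of GEM's expected distance, then subtract the removed vertex's contribution from both.

```python
import math

def qloss_sums(prior, ds, epsilon, W):
    """Per-input numerator/denominator of Q^loss for GEM with range W.

    ds[v][o] is the shortest-path distance; the total is
    Q^loss = sum_v prior[v] * num[v] / den[v]."""
    num, den = {}, {}
    for v in prior:
        num[v] = sum(math.exp(-0.5 * epsilon * ds[v][o]) * ds[v][o] for o in W)
        den[v] = sum(math.exp(-0.5 * epsilon * ds[v][o]) for o in W)
    return num, den

def qloss_after_removal(prior, ds, epsilon, num, den, u):
    """Q^loss of GEM over W \\ {u}, in O(|V|) from the cached sums for W."""
    total = 0.0
    for v in prior:
        w = math.exp(-0.5 * epsilon * ds[v][u])
        total += prior[v] * (num[v] - w * ds[v][u]) / (den[v] - w)
    return total
```

The cached sums are updated in place whenever a removal is accepted, so each pass of the greedy search touches every candidate vertex only once.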
\subsection{Optimization Examples}
\begin{figure}[t]
\begin{minipage}{0.49\hsize}
\centering\includegraphics[width=\hsize]{img/time.png}
\caption{The relationship between the number of nodes and time required for the optimization.}
\label{fig:time}
\end{minipage}
\begin{minipage}{0.49\hsize}
\centering\includegraphics[width=\hsize]{img/opt_PC.png}
\caption{The relationship between PC and the number of nodes.}
\label{fig:PC_nodes}
\end{minipage}
\end{figure}
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/prior_unbr_synth.png}
\caption{Synthetic map whose side length is \SI{1500}{m}. Axis represents the prior probability.}
\label{fig:synth_unbr_map}
\end{minipage}
\hfill
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=0.9\hsize]{img/synth_opt_sql.png}
\caption{The example of the solution of the output range.}
\label{fig:opt_example}
\end{minipage}
\end{figure}
Here, we show examples of the optimization using synthetic maps. First, we explore the relationship between the number of nodes and the time required for the optimization (including the initialization of $\mathcal{W}$). We use several lattices with different numbers of nodes (Fig.~\ref{fig:square_graphs}). As the computational environment, we use Python 3.7 on Ubuntu 15.10 with one core of an Intel Core i7 6770K CPU and 64 GB of memory.
The results are shown in Fig.~\ref{fig:time} and Fig.~\ref{fig:PC_nodes}: even when the number of nodes is large (e.g., $>5000$), the algorithm completes in under one minute, and the PC is improved by the optimization.
This time is acceptable because the algorithm can be executed in advance, before any perturbation is needed.
For reference, the two graphs in Fig.~\ref{fig:maps}, each covering a range of \SI{1000}{m} from the center, contain $1,155$ and $168$ nodes, respectively. Even when a graph is quite large, by separating it into small subgraphs such as those in Fig.~\ref{fig:maps}, we can execute the algorithm in an acceptable time. Our implementation for the optimization is publicly available\footnote{https://github.com/tkgsn/GG-I}.
Next, we execute the algorithm on the synthetic map in Fig.~\ref{fig:synth_unbr_map} under an assumed prior distribution. We assume that there are four places where the prior probability is high, as shown in Fig.~\ref{fig:synth_unbr_map}; a user who follows this prior uses GEM with $\epsilon=0.01$, and the adversary knows the prior distribution. In this case, when $\mathcal{W}$ consists of all nodes, Q$^{loss}$ is \SI{328}{m} and PC is $0.9$.
A solution found by Algorithm~\ref{alg:opt} is shown in Fig.~\ref{fig:opt_example}.
By restricting the output to the places where the prior probability is high, a lower utility loss ($Q^{loss} =\SI{290}{m}$) and a better tradeoff ($PC=0.98$) are achieved.
The adversary's best inference is the pseudolocation itself, which means that the mechanism has effectively perturbed the true location.
\section{Analyzing the Performance of GEM and Optimizing Range}
GEM requires output $z$ to be on a road network but require nothing else for the output range. This means that an optimal output range exists for privacy and utility.
In this section, first we apply SQL\ and AE\ to a location setting on road networks. Then, we propose the performance criteria (PC) which represents the tradeoff between the privacy and the utility.
Next, we formalize an optimization problem for the PC.
Finally, we propose a greedy algorithm to solve the optimization problem in an acceptable amount of time.
\subsection{Performance of a Mechanism on a Road Network}
While the $\epsilon$ of GG-I indicates the degree of indistinguishability between a real and perturbed location, it does not indicate the performance of a mechanism w.r.t its utility for some user and empirical privacy against some adversary.
Therefore, we introduce the two notions SQL$_s$\ and AE$_s$ by applying SQL\ and AE\ (Section~\ref{subsubsec:aeandsql}) to the setting of road networks. We provide their definitions below.
\begin{equation}
\label{equ:utilityonroad}
SQL_s(\pi_u,M) = SQL(\pi_u,M, d_s)
\end{equation}
\begin{equation}
\label{equ:privacygainonroad}
AE_s(\pi_a,M,h) = AE(\pi_a, M, h, d_s)
\end{equation}
Intuitively, SQL$_s$\ is the expected distance on road networks between the true locations and perturbed locations, while AE$_s$\ is the expected distance on road networks between the true locations and the locations inferred by an adversary.
In the following, we let SQL and AE denote SQL$_s$\ and AE$_s$.
We note that, as opposed to $\epsilon$, AE\ changes according to the assumed adversary (i.e., the specific attack method and prior distribution).
However, because AE increases as SQL increases (e.g., a mechanism that outputs a distant location results in a high AE but also a high SQL), AE alone is not an appropriate performance criterion for a mechanism.
Therefore, we define a new criterion that measures the performance of a mechanism against an assumed adversary, which we call the performance criterion (PC).
\begin{equation}
PC=AE/SQL
\end{equation}
Intuitively, against an assumed adversary, PC represents the magnitude of AE relative to SQL. In other words, PC measures the utility/privacy tradeoff.
For example, if an adversary with an optimal attack~\cite{shokri-strategy} cannot infer the true location at all (i.e., the adversary infers the pseudolocation itself as the true location), the mechanism can be considered to have the highest performance ($PC=1$). Conversely, the mechanism performs worst ($PC=0$) if the adversary can always infer the true location.
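The quantities above can be illustrated with a short, self-contained sketch. The three-node line graph, the prior, the mechanism table, and the identity inference below are all hypothetical numbers chosen for illustration, not the paper's data:

```python
# Toy illustration of PC = AE / SQL on a discrete location set. The 3-node
# line graph, prior, mechanism table, and identity inference are hypothetical.
nodes = [0, 1, 2]
d_s = lambda u, v: abs(u - v)          # shortest-path distance on a line graph
prior = {0: 0.5, 1: 0.3, 2: 0.2}       # assumed prior pi_u = pi_a
# M[v][z]: probability that the mechanism perturbs true node v into output z
M = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.4, 2: 0.3}, 2: {1: 0.2, 2: 0.8}}
# h[z][v_hat]: adversary's inference distribution; here it returns z itself
h = {z: {z: 1.0} for z in nodes}

sql = sum(prior[v] * p * d_s(v, z) for v in nodes for z, p in M[v].items())
ae = sum(prior[v] * p * q * d_s(v, vh)
         for v in nodes for z, p in M[v].items() for vh, q in h[z].items())
pc = ae / sql
print(sql, ae, pc)  # with the identity inference AE == SQL, so PC == 1.0
```

With the optimal attack in place of the identity inference, PC would in general drop below one.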
\subsection{Objective Functions}
\subsubsection{An Objective Function for Utility}
When prior knowledge is not given, we cannot compute SQL\ and AE. However, by assuming some prior distribution for the user, SQL\ can still be computed. The objective function for utility can then be formalized as follows.
\begin{mini}|l|
{\mathcal{W}\subseteq{V}}{SQL_{\mathcal{W}}}{}{}
\end{mini}
If the user does not follow the assumed distribution, the solution may not be optimal. However, SQL\ for the uniform distribution, for example, represents the average utility loss over all inputs (i.e., the average of $\rm{\hat{SQL}}$), so the mechanism optimized for the uniform distribution will not yield poor utility for any particular input. This optimization is therefore worthwhile even when prior knowledge is not given.
\subsubsection{An Objective Function for Empirical Privacy}
\label{subsubsec:obj_fn_for_empirical_privacy}
Here, we consider a setting where prior knowledge is given; an example of such a setting is described in Section~\ref{subsubsec:scenario}. In this setting, we can compute AE\ by assuming some inference function (see Section~\ref{subsubsec:userandadv} for details). We use the posterior distribution given the perturbed location $o$ as the inference function $h$ because this inference is generic.
The posterior distribution follows from Bayes' theorem:
\begin{equation}
\label{equ:post}
h(v) = p(v|o)=\frac{p(v)p(o|v)}{p(o)}
\end{equation}
\begin{equation}
p(o)=\sum_{v\in{\mathcal{W}}}\pi_a(v)\Pr(GEM_\mathcal{W}(v)=o)
\end{equation}
\begin{equation}
p(v)p(o|v)=\pi_a(v)\Pr(GEM_\mathcal{W}(v)=o)
\end{equation}
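As a sanity check of the posterior in Equation~\ref{equ:post}, the following sketch computes $p(v\mid o)$ from a hypothetical prior and a hypothetical perturbation table (the numbers are illustrative and are not GEM's actual output probabilities):

```python
# Posterior inference p(v | o) via Bayes' theorem. The prior and the
# perturbation table Pr(GEM_W(v) = o) below are hypothetical numbers.
prior_a = {0: 0.5, 1: 0.3, 2: 0.2}
gem = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.4, 2: 0.3}, 2: {1: 0.2, 2: 0.8}}

def posterior(o):
    joint = {v: prior_a[v] * gem[v].get(o, 0.0) for v in prior_a}  # p(v)p(o|v)
    p_o = sum(joint.values())                                      # p(o)
    return {v: p / p_o for v, p in joint.items()}

post = posterior(1)
print(post)  # the posterior masses sum to 1
```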
Using the posterior distribution as the inference function, PC$_\mathcal{W}$ can be formulated as follows.
\begin{equation}
\frac{\sum_{v,\hat{v}\in\mathcal{V},o\in\mathcal{W}}\pi_u(v)\Pr(GEM_\mathcal{W}(v)=o)p(\hat{v}|o)d_s(v,\hat{v})}{\sum_{v\in\mathcal{V},o\in\mathcal{W}}\pi_u(v)\Pr(GEM_\mathcal{W}(v)=o)d_s(v,o)}
\end{equation}
Then, the objective function against the adversary can be formulated as follows.
\begin{maxi}|l|
{\mathcal{W}\subseteq{V}}{PC_{\mathcal{W}}}{}{}
\end{maxi}
GEM\ with the optimized output range achieves the best utility-privacy tradeoff against the adversary, but it may not be useful in practice (i.e., it may incur a large SQL) because SQL\ is left unconstrained; we therefore add a constraint on SQL.
\begin{maxi}|l|
{\mathcal{W}\subseteq{V}}{PC_\mathcal{W}}{}{}
\addConstraint{SQL_{\mathcal{W}} \leq \theta}
\end{maxi}
The resulting GEM\ achieves the best utility-privacy tradeoff among all instances of GEM\ whose output ranges yield an SQL\ of at most $\theta$.
\subsection{Algorithm to Find an Approximate Solution}
\label{subsec:tactic}
Since the number of candidate output ranges is $2^{|V|}$, it is difficult to find optimal solutions to these optimization problems in acceptable time, so we use a simple greedy algorithm to find approximate solutions. We show pseudocode for this algorithm in Algorithm~\ref{alg:opt}. The constraint function $c$ returns whether the constraint holds.
\begin{algorithm}
\caption{Finding a local solution.}
\label{alg:opt}
\begin{algorithmic}[1]
\REQUIRE {Privacy parameter $\epsilon$, graph $G=(V,E)$, objective function $f$, constraint function $c$.}
\ENSURE {Output range $\mathcal{W}$.}
\STATE $\mathcal{W} \Leftarrow V$
\WHILE{True}
\STATE $\mathcal{W}_0 \Leftarrow \mathcal{W}$
\STATE $obj \Leftarrow f(GEM_{\mathcal{W}_0})$
\FOR{$v$ in $V$}
\STATE $\mathcal{W}^\prime \Leftarrow \mathcal{W}\setminus \{v\}$
\STATE $obj^\prime \Leftarrow f(GEM_{\mathcal{W}^\prime})$
\STATE $cons \Leftarrow c(GEM_{\mathcal{W}^\prime})$
\IF{$obj^\prime - obj < 0$ and cons}
\STATE $\mathcal{W} \Leftarrow \mathcal{W}^\prime$
\STATE $obj \Leftarrow obj^\prime$
\ENDIF
\ENDFOR
\IF{$\mathcal{W}_0 = \mathcal{W}$}
\STATE break
\ENDIF
\ENDWHILE
\STATE return $\mathcal{W}$
\end{algorithmic}
\end{algorithm}
First, we start with an output range that includes all nodes (i.e., $V$). Next, we compute the value of the objective function for the output range with one node removed, and we remove that node if the objective function improves and the constraint still holds. We repeat this procedure until the objective function converges. The computational complexity is $O(|V|^2)$ when the objective function can be evaluated in $O(1)$. In practice, the main loop (line 2 of Algorithm~\ref{alg:opt}) typically finishes within a small number of iterations.
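The greedy procedure of Algorithm~\ref{alg:opt} can be sketched compactly as follows. The objective used here (distance to a hypothetical target set) is a stand-in for $SQL_\mathcal{W}$ or $-PC_\mathcal{W}$ (minimizing $-PC$ maximizes PC), chosen only so the sketch is self-contained:

```python
# Greedy node removal: start from the full node set and drop any node whose
# removal improves the objective while keeping the constraint satisfied.
def greedy_output_range(V, f, c):
    W = set(V)                           # start from the full node set V
    while True:
        W0, obj = set(W), f(W)
        for v in list(V):
            if v not in W:
                continue
            W_prime = W - {v}
            if W_prime and c(W_prime) and f(W_prime) < obj:
                W, obj = W_prime, f(W_prime)   # keep the improving removal
        if W0 == W:                      # converged: no removal improved f
            return W

V = range(5)
target = {1, 3}                          # hypothetical ideal output range
f = lambda W: len(W ^ target)            # symmetric difference to the target
W = greedy_output_range(V, f, lambda W: True)
print(sorted(W))                         # [1, 3]
```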
However, the computational complexities of SQL\ and PC\ are $O(|V|^2)$ and $O(|V|^3)$, so the overall computational complexities are $O(|V|^4)$ and $O(|V|^5)$. In the following, we approximate the objective functions so that the computational complexities are acceptable.
\subsubsection{Approximation of SQL}
We approximate SQL\ using an approximation of $\rm{\hat{SQL}}$\ as follows.
\begin{equation}
\label{equ:sql_approx}
\begin{split}
SQL_{\mathcal{W}} &= \sum_{v\in V}\pi_u(v)\hat{SQL}_\mathcal{W}(v)\\ &\approx \sum_{v\in V}\pi_u(v)\sum_{v^\prime\in A_k(v)} \Pr(GEM_{A_k}(v) = v^\prime)d_s(v,v^\prime)
\end{split}
\end{equation}
where $A_k(v)$ is the set of the top-$k$ nearest nodes in $\mathcal{W}$ to $v$. GEM\ rarely outputs a location distant from the input, so we assume that the effect of distant outputs on SQL\ is negligible and use only the $k$ nearest nodes when computing SQL.
In this paper, we set $k=50$.
This computational complexity is $O(k|V|)$.
Then, the computational complexity of Algorithm \ref{alg:opt} is $O(k|V|^3)$.
We note that if nodes are uniformly distributed, the removal of one node $v_r$ affects $\hat{SQL}_\mathcal{W}$ at only $k$ nodes, so the computational complexity of SQL$_{\mathcal{W}\setminus\{v_r\}}$ is $O(k^2)$. We show an example for the case $k=10$ in Fig.~\ref{fig:opt}.
In this case, the computational complexity of Algorithm \ref{alg:opt} is $O(k^2|V|^2)$.
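The top-$k$ restriction behind Equation~\ref{equ:sql_approx} can be sketched as follows. The line-graph distances and the exponential weights are illustrative stand-ins; the exact form of GEM's output distribution is not reproduced here:

```python
import heapq, math

# For each input v, restrict the mechanism to the k nearest candidate outputs
# A_k(v), renormalize, and take the expected distance. The exponential
# weights mimic a GEM-style distribution but are not the paper's exact form.
def A_k(v, W, dist, k):
    return heapq.nsmallest(k, W, key=lambda w: dist(v, w))

def approx_sql_term(v, W, dist, k, eps=0.5):
    cand = A_k(v, W, dist, k)
    weights = [math.exp(-eps * dist(v, w)) for w in cand]
    Z = sum(weights)                      # renormalize over A_k(v) only
    return sum(wt / Z * dist(v, w) for w, wt in zip(cand, weights))

dist = lambda u, v: abs(u - v)            # line-graph shortest-path distance
W = list(range(100))                      # candidate output nodes
print(approx_sql_term(50, W, dist, k=5))
```

Only $k$ candidates are touched per input, which is what reduces the per-input cost from $O(|V|)$ to $O(k)$.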
\subsubsection{Approximation of PC}
We approximate PC\ using the approximation of SQL\ (Equation~\ref{equ:sql_approx}) together with an analogous approximation of AE:
\begin{equation}
\label{equ:ae_approx}
\begin{split}
&AE \approx\\ &\sum_{v\in\mathcal{W}}\pi_u(v)\sum_{\hat{v},v^\prime\in A_k(v)}\Pr(GEM_{A_k(v)}(v)=v^\prime)\Pr(\hat{v}|v^\prime)d_s(\hat{v},v)
\end{split}
\end{equation}
The posterior distribution (Equation~\ref{equ:post}) can be computed in $O(k^2)$ when GEM$_{A_k(v)}$ is used. The computational complexity of AE\ is then $O(k^2|V|)$, so the computational complexity of Algorithm~\ref{alg:opt} is $O(k^2|V|^3)$.
Since PC\ is computed from AE\ and SQL, the computational complexity of PC\ is $O(k^2|V|)$.
As in the case of SQL, if nodes are uniformly distributed, AE\ can be computed in $O(k^3)$.
In that case, the computational complexity of Algorithm~\ref{alg:opt} is $O(k^3|V|^2)$.
\subsection{Examples of Optimizations}
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/prior_unbr_synth.png}
\caption{Synthetic map whose side length is \SI{1500}{m}. Axis represents the prior probability.}
\label{fig:synth_unbr_map}
\end{minipage}
\hfill
\begin{minipage}{0.5\hsize}
\begin{minipage}{\hsize}
\centering\includegraphics[width=0.5\hsize,height=0.5\hsize]{img/synth_opt_sql.png}
\end{minipage}
\begin{minipage}{\hsize}
\centering\includegraphics[width=0.5\hsize,height=0.5\hsize]{img/synth_opt_mc.png}
\end{minipage}
\caption{Each solution for SQL\ (top) and PC\ (bottom). The blue points represent the output nodes.}
\label{fig:solutions_synth}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/time.eps}
\caption{The relationship between the number of nodes and time required for the optimization.}
\label{fig:time}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering\includegraphics[width=\hsize]{img/opt.png}
\caption{An example of affected nodes by a removed node when $k=10$.}
\label{fig:opt}
\end{minipage}
\end{figure}
Here, we show examples of the optimization on the synthetic map. First, we explore the relationship between the number of nodes and the time required for the optimization, using several lattices with different numbers of nodes (Fig.~\ref{fig:square_graphs}). The computational environment is Python 3.7 on Ubuntu 15.10 with an Intel Core i7 6770K CPU and 64 GB of memory.
We show the results in Fig.~\ref{fig:time}.
We can see that even when the number of nodes is large (e.g., $>10000$), the algorithm finishes within one hour.
This is acceptable because the algorithm can be run in advance, before any perturbation takes place.
We note that the optimization w.r.t. SQL\ takes more time than that w.r.t. PC, despite the larger computational complexity of PC. This is because, empirically, the main loop (line 2 of Algorithm~\ref{alg:opt}) finishes in fewer iterations when optimizing PC. As examples of graph sizes, the graphs in Fig.~\ref{fig:maps}, whose ranges extend \SI{1000}{m} from the center, have $1155$ and $168$ nodes. Even for a very large graph, by partitioning it into small subgraphs such as those in Fig.~\ref{fig:maps}, we can run the algorithm in acceptable time.
Next, we run the algorithm on the synthetic map (Fig.~\ref{fig:synth_unbr_map}) under an assumed prior distribution. We assume four places with high prior probability, as shown in Fig.~\ref{fig:synth_unbr_map}; a user who follows this prior uses GEM\ with $\epsilon=0.01$, and the adversary knows this prior distribution. In this case, SQL\ is \SI{328}{m} and PC\ is $0.9$.
First, the solution for utility (i.e., objective function (23)) is shown in the upper panel of Fig.~\ref{fig:solutions_synth}. By restricting the output to the places where the prior probability is high, a lower utility loss ($SQL =\SI{290}{m}$) is achieved. Next, the solution for PC\ (i.e., objective function (28)) is shown in the lower panel of Fig.~\ref{fig:solutions_synth}. The output range is restricted to the places with high prior probability even more strongly than in the solution for utility. This incurs some utility loss ($SQL=\SI{301}{m}$) but achieves a higher PC\ ($PC =1.0$) than the solution for SQL, where $PC = 0.98$. The adversary can do no better than to take the pseudolocation as the true location, which means that the mechanism effectively perturbs the true location.
\section{Preliminaries and Problem Setting}
In this section, we first review the formulations for a perturbation mechanism, empirical privacy gain and utility loss. Next, we describe the concept of differential privacy~\cite{dwork2011differential}, which is the basis of our proposed privacy notion. Finally, we explain a setting where we define privacy.
\subsection{Perturbation Mechanism on the Euclidean Plane}
Here, we explain the formulations for a perturbation mechanism, empirical privacy gain and utility loss~\cite{shokri-strategy}.
\subsubsection{User and Adversary}
\label{subsubsec:userandadv}
Shokri et al.~\cite{shokri-strategy} assumed that user $u$ is located at location $x\in{\mathbb{R}^2}$ according to a prior distribution $\pi_u(x)$.
LBSs are used by people who want to protect their location privacy while still receiving high-quality services. The user adopts a perturbation mechanism $M:\mathbb{R}^2\to\mathcal{Z}$ that sends a pseudolocation $M(x)=z\in\mathcal{Z}$ instead of his/her true location $x$, where $\mathcal{Z}\subseteq\mathbb{R}^2$. Assume that an adversary $a$ has some knowledge about the user's location, represented as a prior distribution $\pi_a(x)$, and tries to infer the user's true location from the observed pseudolocation $z$. In this paper, we assume that the adversary has unbounded computational power and precise prior knowledge, i.e., $\pi_a(x)=\pi_u(x)$. Although this assumption is advantageous for the adversary, protection against such an adversary confers a strong privacy guarantee.
\subsubsection{Empirical Privacy Gain and Utility Loss}
\label{subsubsec:aeandsql}
The empirical privacy gain obtained by mechanism $M$, which we call the adversarial error (AE), is defined as follows.
\begin{equation}
\label{equ:privacygain}
\begin{split}
\nonumber
&AE(\pi_a,M,h,d_q) =\\ &\sum_{\hat{x},x,z}\pi_a(x)\Pr(M(x)=z)\Pr(h(z)=\hat{x})d_q(\hat{x},x)
\end{split}
\end{equation}
\noindent
where $d_q$ is a distance over $\mathbb{R}^2$ and $h$ is a probability distribution over $\mathbb{R}^2$ that represents the inference of the adversary about the user's location. Thus, intuitively, AE\ represents the expected distance between the user's true location $x$ and the location $\hat{x}$ inferred by the adversary.
Next, we explain the model of an adversary, that is, how an adversary constructs a mechanism $h$, which is called an optimal inference attack~\cite{shokri-strategy}. An adversary who obtains a user's perturbed location $z$ tries to infer the user's true location through an optimal inference attack. In this type of attack, the adversary solves the following mathematical optimization problem to obtain the optimal probability distribution and constructs the optimal inference mechanism $h$.
Then, by applying this mechanism to the input $z$, the adversary can estimate the user's true location.
\begin{mini}|l|
{h}{AE(\pi_a,M,h,d_q)}{}{}
\addConstraint{\sum_{\hat{x}}\Pr(h(z)=\hat{x})}{=1}{, \forall z}
\addConstraint{\Pr(h(z)=\hat{x})}{\geq 0}{, \forall z,\hat{x}}
\nonumber
\end{mini}
For example, if an adversary knows the road network, the domain of his/her prior $\pi_a$ consists of locations on that road network, and $d_q$ is the shortest distance on the road network.
In this setting, the problem is a linear programming problem because each $\Pr(h(z)=\hat{x})$ is a variable and the other terms are constants; thus, the objective function and the constraints are linear. We solve this problem with the CBC (COIN-OR branch and cut)\footnote{https://projects.coin-or.org/Cbc} solver via the Python PuLP library.
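Because the constraints couple only the entries of $h(z)$ for each fixed $z$, the LP decouples across outputs, and a linear objective over a probability simplex attains its optimum at a vertex; a deterministic attack that picks, for each $z$, the $\hat{x}$ minimizing the posterior-weighted distance is therefore optimal. A stdlib sketch with hypothetical numbers (the paper solves the same problem with PuLP/CBC):

```python
# Deterministic optimal inference attack: for each observed z, choose the
# x_hat minimizing sum_x pi(x) Pr(M(x)=z) d(x_hat, x). Numbers are toy values.
locations = [0, 1, 2]
d = lambda a, b: abs(a - b)
prior = {0: 0.5, 1: 0.3, 2: 0.2}
M = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.4, 2: 0.3}, 2: {1: 0.2, 2: 0.8}}

def optimal_attack(z):
    # cost(x_hat) is proportional to the posterior expected distance given z
    cost = lambda xh: sum(prior[x] * M[x].get(z, 0.0) * d(xh, x)
                          for x in locations)
    return min(locations, key=cost)

h = {z: optimal_attack(z) for z in locations}
print(h)  # the adversary's best deterministic guess for each observed output
```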
The utility loss caused by mechanism $M$, called quality loss (Q$^{loss}$), is defined as follows:
\begin{equation}\label{SQL_def}
\nonumber
Q^{loss}(\pi_u,M,d_q) = \sum_{x,x^\prime}\pi_u(x) \Pr(M(x)=x^\prime)d_q(x,x^\prime)
\end{equation}
Q$^{loss}$ denotes the expected distance between the user's true location $x$ and the pseudolocation $x^\prime$.
\subsection{Differential Privacy}
\label{subsec:differentialprivacy}
Differential privacy~\cite{dwork2011differential} is a mathematical definition of the privacy properties of individuals in a statistical dataset. Differential privacy has become a standard privacy definition and is widely accepted as the foundation of a mechanism that provides strong privacy protection. $d\in\mathcal{D}$ denotes a record belonging to an individual and dataset $X$ is a set of $n$ records. When neighboring datasets are defined as two datasets which differ by only a single record, then $\epsilon$-differential privacy is defined as follows.
\begin{definition}[$\epsilon$-differential privacy]
Given algorithm $M:\mathcal{D}\to \mathcal{S}$ and the neighboring datasets $X, X^\prime \in \mathcal{D}$, the privacy loss is defined as follows.
\begin{equation}
\nonumber
L_d(M,X,X^\prime)=\sup_{S\subseteq{\mathcal{S}}}\left|\log\frac{\Pr(M(X)\in S)}{\Pr(M(X^\prime)\in S)}\right|
\end{equation}
Then, mechanism $M$ satisfies $\epsilon$-differential privacy iff $L_d(M,X,X^\prime)\leq{\epsilon}$ for any neighboring datasets $X,X^\prime$.
\end{definition}
$\epsilon$-differential privacy guarantees that the outputs of mechanism $M$ are similar (i.e., privacy loss is bounded up to $\epsilon$) when the inputs are neighboring. In other words, from the output of algorithm $M$, it is difficult to infer what a single record is due to the definition of the neighboring datasets. In this study, we apply differential privacy to a setting of a location on a road network.
\subsection{Geo-indistinguishability}
\label{subsec:geo-i}
Here, we describe the definition of geo-indistinguishability (Geo-I)~\cite{geo-i}. Let $\mathcal{X}$ be a set of locations. Intuitively, a mechanism $M$ that achieves Geo-I guarantees that $M(x)$ and $M(x^\prime)$ are similar to a certain degree for any two locations $x,x^\prime\in\mathcal{X}$. This means that even if an adversary obtains an output from this mechanism, a true location will be indistinguishable from other locations to a certain degree. When $\mathcal{X}\subseteq{\mathbb{R}^2}$, $\epsilon$-Geo-I is defined as follows~\cite{geo-i}.
\begin{definition}[$\epsilon$-geo-indistinguishability~\cite{geo-i}]
Let $\mathcal{Z}$ be a set of query outputs.
A mechanism $M:\mathcal{X}\to\mathcal{Z}$ satisfies $\epsilon$-Geo-I iff $\forall{x,x^\prime}\in\mathcal{X}$:
\begin{equation}
\nonumber
\sup_{S\subseteq{\mathcal{Z}}}\left|\log\frac{\Pr(M(x)\in S)}{\Pr(M(x^\prime)\in S)}\right|\leq\epsilon d_e(x,x^\prime)
\end{equation}
where $d_e$ is the Euclidean distance.
\end{definition}
\subsubsection{Mechanism satisfying $\epsilon$-Geo-I}
\label{subsubsec:plm}
The authors of~\cite{geo-i} introduced a mechanism called the planar Laplace mechanism (PLM) to achieve $\epsilon$-Geo-I. The probability distribution generated by PLM is called the planar Laplace distribution and---as its name suggests---is derived from a two-dimensional version of the Laplace distribution as follows:
\begin{equation}
\nonumber
\Pr(PLM_\epsilon(x)=z) = \frac{\epsilon^2}{2\pi}\mathrm{e}^{-\epsilon d_e(x,z)}
\end{equation}
where $x,z\in\mathcal{X}$.
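For reference, the planar Laplace distribution can be sampled by drawing a uniform angle and a radius whose CDF is $C(r) = 1 - (1+\epsilon r)\mathrm{e}^{-\epsilon r}$. The sketch below inverts the radius CDF by bisection; this is a simplification, as common implementations invert $C$ analytically via the Lambert $W$ function instead:

```python
import math, random

# Sample from the planar Laplace distribution centered at x: uniform angle,
# radius with CDF C(r) = 1 - (1 + eps*r) * exp(-eps*r), inverted by bisection.
def sample_plm(x, eps, rng):
    theta = rng.uniform(0.0, 2.0 * math.pi)
    u = rng.random()
    cdf = lambda r: 1.0 - (1.0 + eps * r) * math.exp(-eps * r)
    lo, hi = 0.0, 1.0
    while cdf(hi) < u:                 # bracket the quantile
        hi *= 2.0
    for _ in range(80):                # bisection to invert the CDF
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if cdf(mid) < u else (lo, mid)
    r = (lo + hi) / 2.0
    return (x[0] + r * math.cos(theta), x[1] + r * math.sin(theta))

rng = random.Random(0)
z = sample_plm((0.0, 0.0), eps=0.01, rng=rng)
print(z)
```

The radius has mean $2/\epsilon$, so small $\epsilon$ yields large perturbations, matching the intuition that stronger privacy costs utility.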
\subsection{Problem Statement}
\label{subsec:problem}
We consider a perturbation mechanism that improves the tradeoff between utility and privacy by taking advantage of road networks. We assume that the LBSs operate on road networks (e.g., Uber), that users are located on road networks, and that LBS providers expect to receive a location on a road network.
We model a road network as an undirected weighted graph $G=(V,E)$ and locations on the road network as the vertices $V$ that are on the Euclidean plane $\mathbb{R}^2$. Each edge in $E$ represents a road segment and the weight of the edge is the length of the road segment.
Then, the distance is the shortest path length $d_s$ between two nodes.
Here, the following inequality holds for any two vertices $v,v^\prime\in V$.
\begin{equation}
\label{equ:shortest-path}
d_e(v,v^\prime)\leq d_s(v,v^\prime)
\end{equation}
where $d_e$ is the Euclidean distance.
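Inequality~\ref{equ:shortest-path} holds because each edge weight (a road-segment length) is at least the straight-line distance between its endpoints. A small check on a hypothetical unit-square road network, with Dijkstra's algorithm computing $d_s$:

```python
import heapq, math

# A 4-node square road network: positions on the plane and weighted adjacency.
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
adj = {0: {1: 1.0, 3: 1.0}, 1: {0: 1.0, 2: 1.0},
       2: {1: 1.0, 3: 1.0}, 3: {0: 1.0, 2: 1.0}}

def d_s(src, dst):
    dist, heap = {src: 0.0}, [(0.0, src)]      # Dijkstra on the road graph
    while heap:
        c, u = heapq.heappop(heap)
        if u == dst:
            return c
        if c > dist.get(u, math.inf):
            continue                           # stale heap entry
        for v, w in adj[u].items():
            if c + w < dist.get(v, math.inf):
                dist[v] = c + w
                heapq.heappush(heap, (c + w, v))
    return math.inf

d_e = lambda a, b: math.hypot(pos[a][0] - pos[b][0], pos[a][1] - pos[b][1])
print(d_s(0, 2), d_e(0, 2))  # 2.0 vs sqrt(2): d_e <= d_s holds
```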
We assume that a user is located at a location on a road network $v\in V$, sends the location once to receive service from an untrusted LBS, and that an adversary knows that the user is on the road network. The user needs to protect his/her privacy on his/her own device using a perturbation mechanism $M:V\to\mathcal{W}$ where $\mathcal{W}\subseteq V$. This is the same setting as that of local differential privacy~\cite{kasiviswanathan2011can}.
The goals of this paper are to formally define privacy for locations on road networks and, by taking road networks into account, to achieve a better tradeoff between privacy and utility than the existing method~\cite{geo-i} based on the Euclidean plane.
The main notations used in this paper are summarized in Table \ref{notations}.
\begin{table*}[t]
\centering
\begin{tabular}{cl}
\hline
Symbol & \multicolumn{1}{c}{Meaning} \\
\hline \hline
$u,a$ & A user and an adversary.\\
$\mathbb{R}$ & Set of real numbers.\\
$\mathcal{Z}$ & Set of outputs.\\
$G=(V,E)$ & Weighted undirected graph that represents a road network.\\
$V$ & Set of vertices.\\
$E$ & Set of edges. A weight is the distance on the road segment connecting two vertices.\\
$\mathcal{W}\subseteq V$ & Set of vertices of outputs.\\
$v,v^\prime,\hat{v}$ & On a road network, a true vertex, a perturbed vertex and an inferred vertex.\\
$x,x^\prime,\hat{x}$ & On the Euclidean plane, a true location, a perturbed location and an inferred location.\\
$\pi_u(x)$ & The probability that user $u$ is at location $x$.\\
$\pi_a(x)$ & Adversary $a$'s knowledge about user's location that represents the probability of being at location $x$.\\
$M$ & A mechanism. Given a location, $M$ outputs a perturbed location.\\
$d_e(x,x^\prime)$ & The Euclidean distance between $x$ and $x^\prime$.\\
$d_s(v,v^\prime)$ & The shortest distance between $v$ and $v^\prime$ on a road network.\\
$h$ & Inference function that represents inference of an adversary.\\
$f$ & Post-processing function.\\
\hline
\vspace{3pt}
\end{tabular}
\caption{Summary of notation.}
\label{notations}
\end{table*}
\section{Related Works}
\label{sec:related}
\subsection{Cloaking}
Cloaking methods~\cite{duckham2005formal} obscure a true location by outputting an area instead of the true location. These methods are based on $k$-anonymity~\cite{spa-k-ano}, which guarantees that at least $k$ users are in the same area, preventing an attacker from inferring which user is querying the service provider. This privacy definition is practical, but there are some concerns~\cite{machanavajjhala2006diversity} regarding the rigorousness of the privacy guarantee because $k$-anonymity does not guarantee privacy against an adversary with side knowledge. If the adversary has peripheral knowledge regarding a user's location, such as the range of the user's location, the obscured location can violate privacy. By considering the side knowledge of an adversary~\cite{xue2009location}, privacy against that particular adversary can be guaranteed, but in general, protecting privacy against a single type of adversary is insufficient. Additionally, introducing a cloaking method incurs additional costs for the service provider because the user sends an area rather than a location.
\subsection{Anonymization}
Anonymization methods~\cite{gedik2005location} separate a user's identifier from that user's location by assigning a pseudonym. Because tracking a single user pseudonym can leak privacy, the user must change the pseudonym periodically. Beresford et al.~\cite{mix-zone-orig} proposed a way to change pseudonyms using a place called mix zones. However, anonymization does not guarantee privacy because an adversary can sometimes identify a user by linking other information.
\subsection{Location Privacy on Road Networks}
To the best of our knowledge, this is the first study to propose a perturbation method with the differential privacy approach over road networks. However, several studies have explored location privacy on road networks.\par
Tyagi et al.~\cite{tyagilocation} studied location privacy over road networks for VANET users and showed that no comprehensive privacy-preserving techniques or frameworks cover all privacy requirements or issues while still maintaining a desired location privacy level.\par
Wang et al.~\cite{wang2009privacy} and Wen et al.~\cite{Wen2018amethod} proposed a method of privacy protection for users who wish to receive location-based services while traveling over road networks. The authors used $k$-anonymity as the protection method and took advantage of the road network constraints.\par
A series of key features distinguish our solution from these studies: a) we use the differential privacy approach; consequently, our solution guarantees privacy protection against any attacker to some extent and b) we assume that no trusted server exists. We highlight these two points as advantages of our proposed method.
\subsection{State-of-the-Art Privacy Models}
Since Geo-I~\cite{geo-i} was published, many related applications have been proposed. To et al.~\cite{spatial-crowd} developed an online framework for a privacy-preserving spatial crowdsourcing service using Geo-I. Tong et al.~\cite{tong2017jointly} proposed a framework for a privacy-preserving ridesharing service based on Geo-I and the differential privacy approach. It may be possible to improve these applications by using GG-I instead of Geo-I. Additionally, Bordenabe et al.~\cite{opt-geo-i} proposed an optimized mechanism that satisfied Geo-I, and it may be possible to apply this method to GEM.\par
According to~\cite{geo-i}, using a mechanism satisfying Geo-I multiple times causes privacy degradation due to correlations in the data; this same scenario also applies to GG-I. This issue remains a difficult and intensely investigated problem in the field of differential privacy. Two kinds of approaches have been applied in attempts to solve this problem. The first is to develop a mechanism for multiple perturbations that satisfies existing notions, such as differential privacy and Geo-I~\cite{compo-theo,predic-dp}. Kairouz et al.~\cite{compo-theo} studied the composition theorem and proposed a mechanism that upgrades the privacy guarantee. Chatzikokolakis et al.~\cite{predic-dp} proposed a method of controlling privacy using Geo-I when the locations are correlated. The second approach is to propose a new privacy notion for correlated data~\cite{protec-locas,cao2018priste}. Xiao et al.~\cite{protec-locas} proposed $\delta$-location set privacy to protect each location in a trajectory when a moving user sends locations. Cao et al.~\cite{cao2018priste} proposed PriSTE, a framework for protecting spatiotemporal event privacy. We believe that these methods can also be applied to our work.
\section{Introduction}
In this paper, we aim to estimate a sparse vector ${x}^* \in \mathbb{R}^{n}$ from its noisy nonlinear measurement ${y} \in \mathbb{R}^m$:
\begin{equation}
\label{nonlinear system}
y = f(Ax^*) + \varepsilon,
\end{equation}
where $A \in \mathbb{R}^{m \times n}$, $m \ll n$, $\varepsilon \in \mathbb{R}^m$ is the exogenous noise, and $f(\cdot)$ is an element-wise nonlinear function.
Since directly minimizing the $\ell_0$-norm to promote sparsity is NP-hard~\cite{blumensath2008iterative}, sparsity is typically achieved by minimizing the least-squares error augmented with an $\ell_{1}$-regularizer instead.
For solving sparse nonlinear regression problems, the SpaRSA (sparse reconstruction by separable approximation) method~\cite{wright2009sparse}
minimizes the upper bound of the $\ell_{1}$-regularized objective iteratively
by using a simple diagonal Hessian approximation,
which results in an iterative shrinkage thresholding algorithm.
The fast iterative soft thresholding algorithm (FISTA)~\cite{beck2009fast}
accelerates the convergence of iterations by using a very specific linear combination of the previous two outputs as the input of the next iteration.
The fixed point continuation algorithm (FPCA)~\cite{hale2008fixed}
lowers the shrinkage threshold value in a continuation strategy,
which makes the iterative shrinkage thresholding algorithm converge faster.
The iterative soft thresholding with a line search algorithm (STELA)~\cite{yang2018parallel}
uses a line search scheme to calculate the step size for updating the input of the next iteration.
Despite the fact that the $\ell_1$-regularized objective is nonconvex in general due to the nonlinearity of $f(\cdot)$,
\cite{yang2016sparse} proved that under mild conditions, with high probability the stationary points of the SpaRSA method are close to the global optimal solution.
Over the last decade, the community has made massive efforts in developing deep unfolding methods to solve sparse regression problems efficiently in a special case where $f(\cdot)$ is the identity function, in which~\eqref{nonlinear system} is reduced to the well known sparse coding model.
An early attempt named Learned Iterative Shrinkage Thresholding Algorithm (LISTA)~\cite{gregor2010learning} proposed to unfold the Iterative Shrinkage-Thresholding Algorithm (ISTA) as a deep neural network with learnable weights, which requires one to two orders of magnitude fewer iterations than ISTA to obtain similar estimation results.
There is also another learning-based variant of ISTA called ISTA-Net \cite{zhang2018ista},
as well as several improved versions of LISTA, such as
TISTA \cite{ito2019trainable},
Step-LISTA \cite{ablin2019learning}, LISTA-AT \cite{kim2020element},
and GLISTA \cite{wusparse}.
Although the deep unfolding technique is promising \cite{hershey2014deep},
there is no learning-based approach that can deal with nonlinear cases due to the complexity caused by nonlinearity.
As a more generalized case of~\eqref{nonlinear system}, the sparse recovery problem over nonlinear dictionaries also gains some attention~\cite{chamon2019sparse,chamon2020functional}.
However, those deep unfolding methods are not directly applicable to nonlinear dictionaries due to the different formulation.
In this paper, we aim to exploit the idea of unfolding classical iterative algorithms as deep neural networks with learnable weights to solve the sparse nonlinear regression problem.
To the best of our knowledge,
our proposed Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA) is the first deep sparse learning network for the sparse nonlinear regression problem.
We provide theoretical analysis to show that under mild conditions, there exists a set of parameters for the deep neural network that could ensure NLISTA converges at a linear rate to the ground truth solution.
Experimental results on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art sparse nonlinear regression algorithms.
\section{Algorithm Description}
The iterative step of the SpaRSA method~\cite{yang2016sparse,wright2009sparse}
can be formulated as:
\begin{equation}
\label{SpaRSA}
x^{(t+1)} = \eta(x^{(t)} - \frac{1}{\alpha^{(t)}} \nabla L(x^{(t)}), \frac{\lambda}{\alpha^{(t)}} )
\end{equation}
where $t$ represents the $t$-th iteration,
$\eta(u, a) := \mathrm{sign}(u)\max\{|u|-a, 0\}$ is the soft thresholding function,
$L(x) := \frac{1}{2}\|y-f(Ax)\|_{2}^{2}$ is the least square loss function,
$\lambda$ is a scalar representing the $\ell_{1}$-regularization parameter
and $\alpha^{(t)}$ is a constant larger than the largest eigenvalue of $\nabla^2 L(x^{(t)})$.
$\nabla$ represents the gradient and $\nabla^2$ represents the Hessian matrix.
Based on the relationship between $\nabla L(x^{(t)})$ and $\nabla f(Ax^{(t)})$,
\begin{equation}
\label{grad_L and grad_f}
\nabla L(x^{(t)}) = A^\mathrm{T} \nabla f(Ax^{(t)})(f(Ax^{(t)}) - y),
\end{equation}
we convert (\ref{SpaRSA}) to:
\begin{equation}
\label{SpaRSA_f}
x^{(t+1)} = \eta(x^{(t)} + \frac{1}{\alpha^{(t)}} A^\mathrm{T} \nabla f(Ax^{(t)})(y - f(Ax^{(t)})), \frac{\lambda}{\alpha^{(t)}} )
\end{equation}
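A minimal, self-contained rendering of the update in Equation~\ref{SpaRSA_f}: a hypothetical $2\times 3$ system with $f=\tanh$ applied element-wise, $\varepsilon=0$, and illustrative values of $\alpha^{(t)}$ and $\lambda$ (here $\nabla f(Ax)$ acts as the element-wise derivative multiplying the residual):

```python
import math

# SpaRSA-style iterations for y = f(Ax*) with f = tanh; the 2x3 system,
# step size alpha, and lambda are illustrative choices, not tuned values.
def soft_threshold(u, a):
    return [math.copysign(max(abs(v) - a, 0.0), v) for v in u]

A = [[1.0, 0.0, 0.5], [0.0, 1.0, -0.5]]
x_true = [1.0, -0.5, 0.0]                              # sparse ground truth
y = [math.tanh(sum(a * b for a, b in zip(row, x_true))) for row in A]
f, df = math.tanh, lambda t: 1.0 - math.tanh(t) ** 2   # tanh and derivative

def sparsa_step(x, alpha, lam):
    Ax = [sum(a * b for a, b in zip(row, x)) for row in A]
    r = [df(u) * (yi - f(u)) for u, yi in zip(Ax, y)]  # grad f(Ax) * residual
    grad = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]
    return soft_threshold([xj + gj / alpha for xj, gj in zip(x, grad)],
                          lam / alpha)

x = [0.0, 0.0, 0.0]
for _ in range(300):
    x = sparsa_step(x, alpha=3.0, lam=0.01)
print(x)  # a sparse-regularized estimate that nearly fits y
```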
Furthermore, we propose the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA),
whose iterative step can be formulated as:
\begin{equation}
\label{NLISTA}
x^{(t+1)} = \eta(x^{(t)} + \beta^{(t)} {W^{(t)}}^\mathrm{T} \gamma^{(t)} \nabla f(Ax^{(t)})(y - f(Ax^{(t)})), \theta^{(t)})
\end{equation}
where $W^{(t)} \in \mathbb{R}^{m \times n}$, $\beta^{(t)} \in \mathbb{R}$ and $\theta^{(t)} \in \mathbb{R}$
are free parameters to train,
and $\gamma^{(t)} \in \mathbb{R}$ is defined as:
\begin{equation}
\label{gamma}
\gamma^{(t)} = \left\{
\begin{array}{ll}
1, & \|\nabla f(Ax^{(t)})(y - f(Ax^{(t)})) \|_{2} \le 1\\
\|\nabla f(Ax^{(t)})(y - f(Ax^{(t)})) \|_{2}^{-1}, & \text{otherwise.}\\
\end{array}
\right.
\end{equation}
The network architecture of NLISTA is illustrated in Fig.~\ref{fig:NLISTA};
it retains the recurrent neural network structure.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{NLISTA_new.png}
\caption{Network architecture of NLISTA with $T = 2$.}
\label{fig:NLISTA}
\end{figure}
The factor $\gamma^{(t)}$, which is not trainable, bounds the product of the gradient term $\nabla f(Ax^{(t)})$
and the residual term $y - f(Ax^{(t)})$ when that product becomes too large.
The term $(W^{(t)})^\mathrm{T} \gamma^{(t)} \nabla f(Ax^{(t)})(y - f(Ax^{(t)}))$
represents the update direction,
which should be close to zero when the $\ell_{2}$-norm of the residual term is small enough.
Accordingly, the product of the gradient term and the residual term is not normalized
when its norm is at most one, in which case we let $\gamma^{(t)}$ equal one.
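The safeguard (\ref{gamma}) thus amounts to clipping the $\ell_2$-norm of the product; a minimal sketch (the function name is ours):

```python
import numpy as np

def clip_update(grad_f, residual):
    # gamma rescales the element-wise product grad_f * residual to unit
    # l2-norm whenever its norm exceeds 1, and leaves it unchanged otherwise.
    v = grad_f * residual
    norm = np.linalg.norm(v)
    return v if norm <= 1.0 else v / norm
```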
The matrix $W^{(t)}$ can be viewed as adjusting the update direction based on the training data.
The scalar $\beta^{(t)}$ can be treated as the step size,
playing a role similar to $\frac{1}{\alpha^{(t)}}$ in the SpaRSA method.
Since the SpaRSA method may converge too slowly if $\alpha^{(t)}$ is too large
and may fail to converge to the globally optimal solution if $\alpha^{(t)}$ is too small,
the step size is made trainable in NLISTA given its strong influence on the convergence behavior.
As the regularization parameter, $\lambda$ has a significant impact on the sparsity degree of the results.
Due to the effect of the shrinkage thresholding function,
the output of the SpaRSA method will be sparser when $\lambda$ is set larger.
However, it is hard to choose a regularization parameter that yields a sparsity level
consistent with the data.
In NLISTA, we make the threshold value $\frac{\lambda}{\alpha^{(t)}}$ trainable and denote it by $\theta^{(t)}$,
so that the regularization parameter, and hence the sparsity of the outputs,
is determined by the data instead of human intervention.
As a result, a more suitable threshold value is chosen at each iteration.
Experimental results in Fig.~\ref{fig:cosx} illustrate that
NLISTA performs much better than existing state-of-the-art algorithms:
it not only converges faster but also attains a much smaller recovery error.
\section{Convergence Analysis}
In this section, we analyze the convergence property of NLISTA.
We first state the following two assumptions on samples and dictionary matrices before presenting the main theorem.
\begin{assume}
\label{assume:x & epi}
The sparse vector $x^*$ and the exogenous noise $\varepsilon$ belong to the following sets:
\begin{equation}
\label{eq:x_assume}
\Omega_{x}(c_x,s) \triangleq \Big\{ x^* \in \mathbb{R}^{n} \Big| \|x^\ast \|_{\infty} \leq c_x, \|x^\ast\|_0 \leq s \Big\},
\end{equation}
\begin{equation}
\Omega_{\varepsilon}(\sigma) \triangleq \Big\{ \varepsilon \in \mathbb{R}^{m} \Big| \| \varepsilon \|_1 \le \sigma \Big\},
\end{equation}
where $c_x > 0$, $s>0$ and $\sigma > 0$ are constants.
\end{assume}
Thus the magnitude of the entries of $x^*$, the number of its non-zero elements,
and the $\ell_{1}$-norm of $\varepsilon$ are all bounded,
which are common conditions.
\begin{assume}
\label{assume:A}
The dictionary matrix $A$ belongs to the following set:
\begin{equation}
\label{eq:A_assume}
\begin{array}{ll}
\Omega_{A} \triangleq \Big\{ A \in \mathbb{R}^{m \times n} \Big|
& A_i^\mathrm{T} A_i = 1, \mathop{\rm max}\limits_{i \neq j} | A_i^\mathrm{T} A_j | < 1,\\
& i,j= 1,2,\cdots,n \Big\},
\end{array}
\end{equation}
where $A_i$ denotes the $i$-th column of $A$.
\end{assume}
Therefore, the dictionary matrix $A$ is required to have normalized columns and bounded pairwise column correlation.
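Assumption \ref{assume:A} can be checked numerically via the Gram matrix $A^\mathrm{T}A$; a small sketch (the function name and tolerance are ours):

```python
import numpy as np

def in_omega_A(A, tol=1e-9):
    # Checks the two conditions of the assumption: unit-norm columns and
    # pairwise column correlations strictly below 1 in absolute value.
    G = A.T @ A
    if not np.allclose(np.diag(G), 1.0, atol=tol):
        return False
    off_diag = G - np.diag(np.diag(G))
    return bool(np.max(np.abs(off_diag)) < 1.0 - tol)
```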
Then, we introduce the following lemmas in preparation for the main theorem.
\begin{lemma}
\label{lemma: mean value theorem}
\emph{(Lagrange mean value theorem.)}
\quad
If $f(\cdot)$ is continuously differentiable in $[-c_x,c_x]$, then for any $t$
there exists $\xi^{(t)} \in \mathbb{R}^{m}$ such that
\begin{equation}
f(Ax^*) - f(Ax^{(t)}) = \nabla f(\xi^{(t)})(Ax^* - Ax^{(t)}).
\label{lemma_1}
\end{equation}
\end{lemma}
Since the nonlinear function $f(\cdot)$ is element-wise,
the Lagrange mean value theorem holds as long as $f(\cdot)$ is continuously differentiable in $[-c_x,c_x]$.
For simplicity, hereinafter all $\xi^{(t)}$ used in equations satisfy (\ref{lemma_1}).
\begin{lemma}
\label{lemma: W}
If $f(\cdot)$ is continuously differentiable in $[-c_x,c_x]$,
the gradient of $f(\cdot)$ is nonzero for any $x \in [-c_x,c_x]$,
and Assumptions \ref{assume:x & epi} and \ref{assume:A} hold,
then $\Omega_W^{(t)}$ is not an empty set,
where
\begin{equation}
\begin{aligned}
&\Omega_W^{(t)} \triangleq \Big\{ W \in \mathbb{R}^{m \times n} \Big|
\beta^{(t)} \gamma^{(t)} W_i^\mathrm{T} \nabla f(Ax^{(t)}) \nabla f(\xi^{(t)}) A_i = 1,
\\
&\mathop{\rm max}\limits_{i \neq j}
| \beta^{(t)} \gamma^{(t)} W_i^\mathrm{T} \nabla f(Ax^{(t)}) \nabla f(\xi^{(t)}) A_j | < 1,
i,j= 1,\cdots,n \Big\}
\end{aligned}
\end{equation}
\end{lemma}
The proof of Lemma \ref{lemma: W} can be found in the supplementary material.
Lemma \ref{lemma: W} describes a specific matrix set which is critical for the following theorem.
\begin{lemma}
\label{lemma: no false positive}
If $f(\cdot)$ is continuously differentiable in $[-c_x,c_x]$,
$x^{(0)} = 0$, $\{x^{(t)}\}_{t=1}^{\infty}$ are generated by (\ref{NLISTA}),
Assumption \ref{assume:x & epi} holds,
and $\theta^{(t)} \ge \mu_1^{(t)} \|x^* - x^{(t)} \|_1 + \mu_2^{(t)} \sigma$ ,
then
\begin{equation}
x_i^{(t)} = 0, \quad \forall i \notin S, \quad \forall t \ge 0,
\end{equation}
where
\begin{equation}
\begin{array}{ll}
\mu_1^{(t)} = \mathop{\rm max}\limits_{i \neq j} | \beta^{(t)} \gamma^{(t)} W_i^\mathrm{T} \nabla f(Ax^{(t)}) \nabla f(\xi^{(t)}) A_j | ,
\\
\mu_2^{(t)} = \mathop{\rm max}\limits_{i} \| \beta^{(t)} \gamma^{(t)} W_i^\mathrm{T} \nabla f(Ax^{(t)}) \|_1 ,
i,j= 1,2,\cdots,n,
\end{array}
\end{equation}
and $S \triangleq \Big\{ i \Big| x_i^* \neq 0 \Big\}$ is the support set of $x^*$.
\end{lemma}
The proof of Lemma \ref{lemma: no false positive} can be found in the supplementary material.
Lemma \ref{lemma: no false positive} captures a simple fact:
coordinates of each iterate outside the support remain zero as long as the shrinkage threshold is large enough.
The constants $\mu_1^{(t)}$ and $\mu_2^{(t)}$ defined in Lemma \ref{lemma: no false positive}
are used in the following theorem and are critical to its proof.
Comparing their definitions with the matrix-set constraints in Lemma \ref{lemma: W},
the constants can be viewed as a measure of the quality of the learned matrix $W$,
and are expected to be as small as possible.
We are now ready to state the following main theorem on the convergence property of NLISTA.
\begin{theorem}
\label{theorem:1}
If $f(\cdot)$ is continuously differentiable in $[-c_x,c_x]$,
$x^{(0)}=0$,
Assumptions \ref{assume:x & epi} and \ref{assume:A} hold
and $\{x^{(t)}\}_{t=1}^{\infty}$ are generated by (\ref{NLISTA}),
then there exists a set of parameters $\{W^{(t)}, \theta^{(t)}\}_{t=0}^{\infty}$ where
$W^{(t)} \in \Omega_W^{(t)}$ and
$\theta^{(t)} \ge \mu_1^{(t)} \|x^* - x^{(t)} \|_1 + \mu_2^{(t)} \sigma$ for any $t \ge 0$,
such that
\begin{equation}
\label{eq:linear_conv}
\|x^{(t)}-x^\ast\|_2 \leq s c_x q^{t} + 2s c_{\varepsilon}\sigma,\quad \forall t = 0,1,\cdots,
\end{equation}
where $q$ and $c_{\varepsilon}$ are constants that depend on
$\{\mu_1^{(t)}\}_{t=0}^{T}$, $\{\mu_2^{(t)}\}_{t=0}^{T}$ and $s$.
$q \in (0,1)$ if $s$ is sufficiently small, and $c_{\varepsilon} > 0$.
The definitions are omitted due to space limitations and can be found in the arXiv version of the paper.
Note that in $q^{t}$, $t$ is an exponent rather than an iteration index.
\end{theorem}
If $\sigma=0$, (\ref{eq:linear_conv}) reduces to
\begin{equation}
\label{eq:linear_noiseless}
\|x^{(t)}-x^\ast\|_2 \leq s c_x q^t, \quad \forall t = 0,1,\cdots.
\end{equation}
The proof of Theorem \ref{theorem:1} can be found in the supplementary material.
Theorem \ref{theorem:1} shows that in the noiseless case,
there exist parameters for which the upper bound on the NLISTA estimation error converges to zero
at a $q$-linear rate as the number of layers goes to infinity.
As a consequence, the NLISTA estimation error itself also converges to zero quickly,
which is validated by Fig.~\ref{fig:cosx_noiseless}.
Theorem \ref{theorem:1} also shows that
the presence of noise increases the upper bound on the NLISTA estimation error,
while the convergence speed of NLISTA under noisy conditions remains linear,
as illustrated in Fig.~\ref{fig:cosx_SNR30}.
Moreover, since $q$ depends on the upper bound of $|\nabla f(\cdot)|$,
Theorem \ref{theorem:1} implies that the error bound converges more slowly when the upper bound of $|\nabla f(\cdot)|$ is larger.
Hence NLISTA is expected to perform better for nonlinear functions $f(\cdot)$
with a smaller supremum of $|\nabla f(\cdot)|$,
which is validated by Table \ref{table: gradient}.
\section{Experiments}
\begin{figure*}[t]
\centering
\vspace{-1em}
\begin{tabular}{ccc}
\hspace{-1.5em}
\subfigure[Noiseless Case: $\textrm{SNR}$=$\infty$]{
\includegraphics[width=0.33\linewidth]{cosx_noiseless.png}
\label{fig:cosx_noiseless}
}
&
\hspace{-1.5em}
\subfigure[Noisy Case: $\textrm{SNR}$=30dB]{
\includegraphics[width=0.33\linewidth]{cosx_SNR30.png}
\label{fig:cosx_SNR30}
}
&
\hspace{-1.5em}
\subfigure[Performance with ill-conditioned matrix]{
\includegraphics[width=0.33\linewidth]{cosx_cond50.png}
\label{fig:cosx_cond50}
}
\end{tabular}
\caption{Validation of Theorem \ref{theorem:1} and comparison among algorithms with different settings.}
\label{fig:cosx}
\end{figure*}
To verify the effectiveness of NLISTA and validate our theorem,
we conduct the following experiments,
where the experimental settings and network training strategies mostly follow prior works
\cite{yang2016sparse}\cite{chen2018theoretical}\cite{liu2018alista}.
Specifically, we set $m = 250$ and $n = 500$,
so the dimensions of $x^*$, $y$ and $A$ are $500 \times 1$, $250 \times 1$ and $250 \times 500$, respectively.
The elements of $A$ are sampled from a Gaussian distribution with variance $\frac{1}{m}$,
and each column of $A$ is normalized to unit $\ell_{2}$-norm,
which ensures $A \in \Omega_A$.
The matrix $A$ is fixed in each setting where different algorithms are compared.
Each element of $x^*$ is non-zero with probability 0.1 (a Bernoulli distribution),
and the non-zero elements of $x^*$ are sampled from the standard Gaussian distribution.
The nonlinear function is set as $f(x) = 2x + \cos(x)$,
the same as in \cite{yang2016sparse}\cite{yang2018parallel}.
Hence the nonlinear function is continuously differentiable and nonconvex,
and its gradient is always nonzero.
The vector $y$ is generated according to (\ref{nonlinear system}), where the noise $\varepsilon$ follows a Gaussian distribution
whose variance is set according to the signal-to-noise ratio (SNR), taken to be infinite by default.
These setups ensure that there exist constants $c_x$, $s$ and $\sigma$
such that every pair $(x^*,\varepsilon)$ satisfies Assumption \ref{assume:x & epi}.
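The data-generation procedure above can be sketched as follows (noiseless case; a sketch based on our reading of the setup, with function and variable names of our choosing):

```python
import numpy as np

def make_problem(m=250, n=500, p_nonzero=0.1, seed=0):
    # Gaussian dictionary with unit-norm columns, Bernoulli-Gaussian
    # sparse vector, and y = f(A x*) with f(x) = 2x + cos(x).
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))
    A /= np.linalg.norm(A, axis=0, keepdims=True)
    x_star = rng.standard_normal(n) * (rng.random(n) < p_nonzero)
    z = A @ x_star
    y = 2.0 * z + np.cos(z)
    return A, x_star, y
```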
We randomly synthesize $x^*$, $\varepsilon$ and $y$ on the fly for training and validation.
The training batch size is 64.
The test set $\{ x^* \}$ contains 1000 samples generated as described above,
which is fixed for all tests in our simulations.
All the compared networks have 16 layers and are trained layer by layer in a progressive manner,
following \cite{chen2018theoretical}\cite{liu2018alista}.
For NLISTA, the first eleven layers are fixed while the last five layers are trained.
Learnable parameters are not shared among different layers of the networks.
The training loss function is $\mathbb{E}\left\|x^{(t)}-x^{*}\right\|_{2}^{2}$.
The optimizer is Adam \cite{kingma2014adam}.
The learning rate is initialized to 0.001,
and is decreased to 0.0001 and finally to 0.00002
whenever the validation loss has not decreased for 4000 iterations.
To evaluate the recovery performance, we use the normalized mean square error (NMSE) in dB:
\begin{equation}
\label{NMSE}
\operatorname{NMSE}\left(x^{(t)}, x^{*}\right) =
10 \log _{10}\left(\frac{\mathbb{E}\left\|x^{(t)}-x^{*}\right\|_{2}^{2}}{\mathbb{E}\left\|x^{*}\right\|_{2}^{2}}\right),
\end{equation}
where $x^{(t)}$ is the output of the $t$-th iteration and $x^*$ is the ground truth.
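The metric can be computed as follows (a sketch; the empirical mean over the test set replaces the expectation):

```python
import numpy as np

def nmse_db(x_hat, x_star):
    # NMSE in dB; inputs of shape (n,) or (n, batch), with the
    # empirical mean over the batch replacing the expectation.
    num = np.mean(np.sum((x_hat - x_star) ** 2, axis=0))
    den = np.mean(np.sum(x_star ** 2, axis=0))
    return 10.0 * np.log10(num / den)
```

For instance, the all-zero estimate always scores 0 dB, and an estimate with 10\% relative error scores $-20$ dB.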
The baseline algorithms are SpaRSA\cite{yang2016sparse}, FISTA\cite{beck2009fast}, FPCA~\cite{hale2008fixed},
STELA\cite{yang2018parallel} and LISTA\cite{gregor2010learning}.
The regularization parameter of each iterative algorithm is tuned individually in each experiment.
Detailed descriptions of the baseline algorithms and the corresponding parameters can be found in the supplementary material.
Other learned algorithms such as LAMP\cite{borgerding2017amp}, LISTA-cpss\cite{chen2018theoretical} and ALISTA\cite{liu2018alista}
are not compared with NLISTA because we find that they are not able to deal with nonlinear cases.
The experimental results under the noiseless condition
are reported in Fig.~\ref{fig:cosx_noiseless},
where NLISTA outperforms other algorithms significantly.
Moreover, the results support Theorem~\ref{theorem:1}: there exists a set of parameters for NLISTA
under which the upper bound on the recovery error converges to zero at a linear rate.
The experimental results under the noisy condition
are reported in Fig.~\ref{fig:cosx_SNR30},
which demonstrate the robustness of NLISTA in noisy cases
and its improvement over the other algorithms.
Comparing the results of NLISTA in Fig.~\ref{fig:cosx_SNR30} with those in Fig.~\ref{fig:cosx_noiseless},
the final recovery error converges exponentially to zero in the noiseless case
and converges to a stationary level related to the noise level in the noisy case,
which validates the discussion of the influence of noise after Theorem~\ref{theorem:1}.
To demonstrate the robustness of NLISTA to ill-conditioned matrices,
we set the condition number of the matrix $A$ to 50.
In Figure \ref{fig:cosx_cond50}, the results show that
NLISTA still outperforms other algorithms significantly with the ill-conditioned matrices.
In order to explore the influence of nonlinear functions,
we compare $f(x) = 10x+\cos(2x)$, $f(x) = 10x+\cos(3x)$ and $f(x) = 10x+\cos(4x)$,
whose main difference is the supremum of $|\nabla f(x)|$,
while all their gradients are nonzero for any $x$.
In Table \ref{table: gradient}, the results show that
the recovery error of NLISTA converges faster when the supremum of $|\nabla f(x)|$ is smaller,
which supports the discussion about the upper bound of $|\nabla f(x)|$ after Theorem \ref{theorem:1}.
The same trend holds for the other algorithms,
which reveals the impact of nonlinear functions on algorithm performance for sparse nonlinear regression problems.
The experimental results of the other algorithms are omitted due to space limitations
and can be found in the arXiv version of the paper.
NLISTA always performs best among all algorithms.
\begin{threeparttable}[t]
\centering
\caption{Comparison among different gradient supremum.}
\label{table: gradient}
\begin{tabular}{p{3cm}<{\centering}p{3cm}<{\centering}
p{1.5cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}
p{1cm}<{\centering}p{1cm}<{\centering}p{1.5cm}<{\centering}}
\toprule
$f(x)$ & $\sup(|\nabla f(x)|)$\tnote{1} & SpaRSA & FISTA & FPCA & STELA & LISTA & NLISTA \\
\midrule
$10x+\cos(2x)$ & 12 & -14.0 & -17.4 & -14.2 & -13.5 & -19.7 & \textbf{-35.7} \\
$10x+\cos(3x)$ & 13 & -13.2 & -16.5 & -13.4 & -12.7 & -16.8 & \textbf{-32.2} \\
$10x+\cos(4x)$ & 14 & -12.4 & -15.3 & -12.5 & -11.8 & -15.7 & \textbf{-28.4} \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[1]$\sup(|\nabla f(x)|)$ denotes the supremum of $|\nabla f(x)|$.
\end{tablenotes}
\end{threeparttable}
\section{Conclusion}
\vspace{-0.1em}
In this article, we unfolded the SpaRSA method to solve the sparse nonlinear regression problem
and proposed a new algorithm called NLISTA, whose performance is better than that of existing state-of-the-art algorithms.
Moreover, we proved theoretically that there exists a set of parameters enabling NLISTA to converge linearly.
The experimental results support our theorem and analysis and show that such parameters can be learned.
In future work, we plan to address the case where the nonlinear function $f(\cdot)$ is not element-wise,
in which the gradient of $f(\cdot)$ is more complicated.
\newpage
\bibliographystyle{IEEEbib}
\section{Introduction}
\paragraph{Our results.}
In this paper we present a randomized $O(k^2)$-competitive online algorithm
for width $k$ layered graph traversal. The problem, whose history is discussed
in detail below, is an online version of the conventional shortest path problem
(see~\cite{Sch12}), and of Bellman's dynamic programming paradigm. It is
therefore a very general framework for decision-making under uncertainty of
the future, and it includes as special cases other celebrated models of online
computing, such as metrical task systems. The following 1989 words of
Papadimitriou and Yannakakis~\cite{PY89} still resonate today: ``the techniques
developed [for layered graph traversal] will add to the scarce rigorous methodological
arsenal of Artificial Intelligence.'' Our new upper bound on the competitive
ratio nearly matches the lower bound of $\Omega(k^2/\log^{1+\epsilon} k)$, for all
$\epsilon > 0$, given by Ramesh~\cite{Ram93}. It improves substantially over the previously
known $O(k^{13})$ upper bound from the same paper.
\paragraph{Problem definition.}
In layered graph traversal, a searcher attempts to find a short path from a starting node $a$
to a target node $b$ in an undirected graph $G$ with non-negative integer edge weights
$w:E(G)\rightarrow{\mathbb{N}}$. It is assumed that the vertices of $G$ are partitioned into layers
such that edges exist only between vertices of consecutive layers. The searcher knows initially
only an upper bound $k$ on the number of nodes in a layer (this is called the {\em width} of $G$).
The search begins at $a$, which is the only node in the first layer of the graph (indexed layer $0$).
We may assume that all the nodes of $G$ are reachable from $a$. When the searcher first reaches
a node in layer $i$, the next layer $i+1$, the edges between layer $i$ and layer $i+1$, and the weights
of these edges, are revealed to the searcher.\footnote{So, in particular, when the search starts at $a$,
the searcher knows all the nodes of layer $1$, all the edges connecting $a$ to a node in layer $1$,
and all the weights of such edges.} At this point, the searcher has to move forward from the current position
(a node in layer $i$) to a new position (a node in layer $i+1$). This can be done along a shortest path
through the layers revealed so far.\footnote{Notice that this is not necessarily the shortest path in $G$
between the nodes, because such a path may have to go through future layers that have not been
revealed so far. In fact, some nodes in layer $i+1$ might not even be reachable at this point.}
The target node $b$ occupies the last layer of the graph; the number of layers is unknown
to the searcher. The target gets revealed when the preceding layer is reached. The goal of
the searcher is to traverse a path of total weight as close as possible to the shortest path
connecting $a$ and $b$.\footnote{Note that the traversed path is not required to be simple
or level-monotone.} The searcher is said to be $C$-competitive for some $C=C(k)$ iff for every
width $k$ input $(G,w)$, the searcher's path weight is at most $C\cdot w_G(a,b)$, where
$w_G(a,b)$ denotes the distance under the weight function $w$ between $a$ and $b$. The
competitive ratio of the problem is the best $C$ for which there is a $C$-competitive searcher.
Evidently, rational weights can be converted to integer weights if we know (or even update online)
a common denominator, and irrational weights can be approximated to within any desired accuracy
using rational weights. Moreover, any bipartite graph is layered, and any graph can be converted
into a bipartite graph by subdividing edges. Thus, the layered structure of the input graph is intended
primarily to parametrize the way in which the graph is revealed over time to the online algorithm.
Also notice that if the width $k$ is not known, it can be estimated, for instance by doubling the guess
each time it is refuted. Therefore, the assumptions made in the definition of the problem are not
overly restrictive.
\paragraph{Motivation and past work.}
The problem has its origins in a paper of Baeza-Yates et al.~\cite{BCR88}, and possibly
earlier in game theory~\cite{Gal80}. In~\cite{BCR88}, motivated by applications in robotics,
the special case of a graph consisting of $k$ disjoint paths is proposed, under the name
{\em the lost cow problem}. The paper gives a deterministic $9$-competitive algorithm
for $k=2$, and more generally a deterministic $2\frac{k^k}{(k-1)^{k-1}}+1\approx 2ek+1$
competitive algorithm for arbitrary $k$. A year later, Papadimitriou and Yannakakis
introduced the problem of layered graph traversal~\cite{PY89}. Their paper shows that
for $k=2$ the upper bound of $9$ still holds, and that the results of~\cite{BCR88} are
optimal for disjoint paths. Unfortunately, the upper bound of $9$ in~\cite{PY89} is the
trivial consequence of the observation that for $k=2$, the general case reduces to the disjoint
paths case. This is not true for general $k$. Indeed, Fiat et al.~\cite{FFKRRV91} give a $2^{k-2}$
lower bound, and an $O(9^k)$ upper bound on the deterministic competitive ratio in the general
case. The upper bound was improved by Ramesh~\cite{Ram93} to $O(k^3 2^k)$, and
further by Burley~\cite{Bur96} to $O(k 2^k)$. Thus, currently the deterministic case is nearly resolved, asymptotically: the competitive ratio lies in $\Omega(2^k)\cap O(k2^k)$.
Investigation of the randomized competitive ratio was initiated in the aforementioned~\cite{FFKRRV91}
that gives a $\frac{k}{2}$ lower bound for the general case, and asymptotically tight $\Theta(\log k)$
upper and lower bounds for the disjoint paths case. In the randomized case, the searcher's strategy
is a distribution over moves, and it is $C$-competitive iff for every width $k$ input $(G,w)$, the expected
weight of the searcher's path is at most $C\cdot w_G(a,b)$.
It is a ubiquitous phenomenon of online computing that randomization improves the competitive
ratio immensely, often guaranteeing exponential asymptotic improvement (as happens in the
disjoint paths case of layered graph traversal). To understand why this might happen, one can
view the problem as a game between the designer of $G$ and the searcher in $G$. The game
alternates between the designer introducing a new layer
and the searcher moving to a node in the new layer.
The designer is oblivious
to the searcher's moves. Randomization obscures the predictability of the searcher's moves, and
thus weakens the power of the designer.\footnote{This can be formalized through an appropriate definition
of the designer's information sets.} Following the results in~\cite{FFKRRV91} and
the recurring phenomenon of exponential improvement, a natural conjecture would have been that
the randomized competitive ratio of layered graph traversal is $\Theta(k)$. However, this natural
conjecture was rather quickly refuted in the aforementioned~\cite{Ram93} that surprisingly improves
the lower bound to $\Omega(k^2/\log^{1+\epsilon} k)$, which holds for all $\epsilon > 0$. Moreover, the
same paper also gives an upper bound of $O(k^{13})$. Thus, even though for the general case of
layered graph traversal the randomized competitive ratio cannot be logarithmic in the deterministic
competitive ratio, it is polylogarithmic in that ratio. The results of~\cite{Ram93} on randomized layered
graph traversal have not since been improved prior to our current paper.
Computing the optimal offline solution, a shortest path from source to target in a weighted layered
graph, is a simple example and also a generic framework for dynamic programming~\cite{Bel57}.
The online version has applications to the design and analysis of hybrid algorithms. In particular, the
disjoint paths case has applications in derandomizing online algorithms~\cite{FRRS94}, in the
design of divide-and-conquer online algorithms~\cite{FRR90,ABM93}, and in the design of advice
and learning augmented online algorithms~\cite{LV21,ACEPS20,BCKPV22}.
In this context, Kao et al.~\cite{KRT93} resolve exactly the randomized competitive ratio of width $2$
layered graph traversal: it is roughly $4.59112$, precisely the solution for $x$ in the equation
$\ln(x-1) = \frac{x}{x-1}$; see also~\cite{CL91}. For more in this vein, see also Kao et al.~\cite{KMSY94}.
Moreover, layered graph traversal is a very general model of online computing. For example, many online
problems can be represented as chasing finite subsets of points in a metric space. This problem, introduced
by Chrobak and Larmore~\cite{CL91,CL93} under the name {\em metrical service systems}, is {\bf equivalent}
to layered graph traversal~\cite{FFKRRV91}. The width $k$ of the layered graph instance corresponds to the
maximum cardinality of any request of the metrical service systems instance. (See Section~\ref{sec: applications}.)
Metrical service systems are a
special case of metrical task systems, introduced by Borodin et al.~\cite{BLS87}. Width $k$ layered graph
traversal includes as a special case metrical task systems in $k$-point metric spaces.\footnote{Indeed, this
implies that while metrical service systems on $k$-point metrics are a special case of metrical task systems
on $k$-point metrics, also metrical task systems on $k$-point metrics are a special case of metrical service
systems using $k$-point requests.}
There is a tight bound of $2k-1$ on the deterministic competitive ratio of metrical task systems in any
$k$-point metric~\cite{BLS87}, and the randomized competitive ratio lies between an $\Omega(\log k/\log\log k)$
lower bound (Bartal et al.~\cite{BBM01,BLMN03}) and an $O(\log^2 k)$ upper bound (Bubeck et al.~\cite{BCLL19}).
Thus, width $k$ layered graph traversal is strictly a more general problem than $k$-point metrical task systems.
Another closely related problem is the $k$-taxi problem, whose best known lower bound for deterministic algorithms
is obtained via a reduction from layered graph traversal~\cite{CK19}.
\paragraph{Our methods.}
Our techniques are based on the method of online mirror descent with entropic regularization
that was pioneered by Bubeck et al.~\cite{BCLLM18,BCLL19} in the context of the $k$-server
problem and metrical task systems, and further explored in this context in a number of recent
papers~\cite{CL19,EL21,BC21}.
It is known that layered graph traversal is equivalent to its special case where the input graph is a tree~\cite{FFKRRV91}. Based on this reduction, we reduce width $k$ layered graph traversal to a problem that we name
the (depth $k$) \emph{evolving tree game}. In this game, one player, representing the algorithm,
occupies a (non-root) leaf in an evolving rooted, edge weighted, bounded
degree tree of depth $\le k$. Its opponent, the adversary, is allowed to change the metric
and topology of the tree using the following repertoire of operations: ($i$) increase the
weight of an edge incident to a leaf; ($ii$) delete a leaf and the incident edge, and smooth
the tree at the parent if its degree is now $2$;\footnote{Smoothing is the reverse operation
of subdividing. In other words, smoothing is merging the two edges incident to a degree
$2$ node. We maintain w.l.o.g. the invariant that the tree
has no degree $2$ node.} ($iii$) create two (or more) new leaves and
connect them with weight $0$ edges to an existing leaf whose combinatorial depth is
strictly smaller than $k$. The algorithm may move from leaf to leaf at any time, incurring \emph{movement cost} equal to the weight of the path between the leaves. If the algorithm occupies a leaf at
the endpoint of a growing weight edge, it pays the increase in weight. We call this
the {\em service cost} of the algorithm. If the algorithm occupies a leaf that is being deleted,
it must move to a different leaf prior to the execution of the topology change. If the algorithm
occupies a leaf that is being converted into an internal node (because new leaves are appended
to it), the algorithm must move to a leaf after the execution of the topology change. At the end
of the game, the total (movement + service) cost of the algorithm is compared against the adversary's cost, which is
the weight of the lightest root-to-leaf path.\footnote{In fact, our algorithm can handle also the
operation of reducing the weight of an edge, under the assumption that this operation incurs
on both players a cost equal to the reduction in weight, if performed at their location.}
Mirror descent is used to generate a fractional online solution to the evolving tree game.
The algorithm maintains a probability distribution on the leaves. A fractional solution can
be converted easily on-the-fly into a randomized algorithm. As in~\cite{BCLLM18,BCLL19}
the analysis of our fractional algorithm for the dynamic tree game is based on
a potential function that combines (in our case, a modification of) the Bregman divergence
associated with the entropic regularizer with a weighted depth potential. The Bregman
divergence is used to bound the algorithm's {\em service cost} against the adversary's cost. The
weighted depth potential is used to bound the algorithm's {\em movement cost} against
its own service cost.
However, in our setting, in contrast to~\cite{BCLLM18,BCLL19}, the metric on
the set of leaves, and even the topology of the underlying tree, change dynamically. This
poses a few new challenges to the approach. In particular, the potential function that works
for metrical task systems is not invariant under the topology changes that are needed here.
We resolve this problem by working with revised edge weights that slightly over-estimate the true edge weights. When a topology change would lead to an increase of the potential function (by reducing the combinatorial depth of some vertices), we prevent such an increase by downscaling the affected revised edge weights appropriately.
Even so, the extra cost incurred by the perturbation
of entropy, which is required to handle distributions close to the boundary, cannot be handled
in the same manner as in \cite{BCLLM18,BCLL19}. This issue is fixed by modifying both the Bregman
divergence and the control function of the mirror descent dynamic. The latter damps down the
movement of the algorithm when it incurs service cost at a rate close to $0$.
In the competitive ratio of $O(k^2)$, one factor $k$ comes from the maximal depth $k$ of the tree. The other factor $k$ is due to the fact that the perturbation
that we require is exponentially small in $k$. We note that implementing the mirror descent
approach in evolving trees is a major challenge to the design and analysis of online algorithms
for online problems in metric spaces (e.g., the $k$-server problem, see~\cite{Lee18}, where an
approach based on mirror descent in an evolving tree is also studied). Our ideas
may prove applicable to other problems.
\paragraph{Organization.}
The rest of this paper is organized as follows. In Section~\ref{sec: evolving tree} we define and
analyze the evolving tree game. In Section~\ref{sec: motivation} we motivate the evolving tree
algorithm and analysis. In Section~\ref{sec: applications} we discuss the application to
layered graph traversal/small set chasing.
\section{The Evolving Tree Game}\label{sec: evolving tree}
For a rooted edge-weighted tree $T=(V,E)$, we denote by $r$ its root, by $V^0:= V\setminus\{r\}$ the set of
non-root vertices and by $\mathcal{L}\subseteq V^0$ the set of leaves. For $u\in V^0$, we denote by $p_u$ the parent
of $u$ and by $w_u$ the length of the edge connecting $u$ to $p_u$.
The evolving tree game is a two person continuous time game between an adversary and an algorithm.
The adversary grows a rooted edge-weighted tree $T=(V,E)$ of bounded degree. Without loss of generality,
we enforce that the root $r$ always has degree $1$, and we denote its single child by $c_r$. Initially
$V = \{r,c_r\}$, and the two nodes are connected by a zero-weight edge. The root $r$ will be fixed throughout
the game, but the identity of its child $c_r$ may change as the game progresses. The game has continuous
steps and discrete steps.
\begin{itemize}
\item {\bf Continuous step:} The adversary picks a leaf $\ell$ and increases the weight $w_\ell$ of the
edge incident on $\ell$ at a fixed rate of $w'_\ell = 1$ for a finite time interval.
\item {\bf Discrete step:} There are two types of discrete steps:
\begin{itemize}
\item {\bf Delete step:} The adversary chooses a leaf $\ell\in\mathcal{L}$, $\ell\ne c_r$,
and deletes $\ell$ and its incident edge from $T$. If the parent $p_\ell$ of $\ell$
remains with a single child $c$, the adversary smooths $T$ at $p_\ell$ as follows:
it merges the two edges $\{c,p_\ell\}$ and $\{p_\ell,p_{p_\ell}\}$ into a single edge
$\{c,p_{p_\ell}\}$, removing the vertex $p_\ell$, and assigns $w_c\gets w_c + w_{p_\ell}$.
\item {\bf Fork step:} The adversary generates two or more new
nodes and connects all of them to an existing leaf $\ell\in\mathcal{L}$ with edges of weight $0$.
Notice that this removes $\ell$ from $\mathcal{L}$ and adds the new nodes to $\mathcal{L}$.
\end{itemize}
\end{itemize}
The continuous and discrete steps may be interleaved arbitrarily by the adversary.
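As an illustration, the adversary's operations can be sketched in code. The following is a minimal Python sketch; the class, method, and vertex names are our own illustration, not part of the formal game.

```python
# Hypothetical sketch of the adversary's tree operations: each non-root
# vertex stores its parent and the weight w of its parent edge.

class EvolvingTree:
    def __init__(self):
        # Initially V = {r, c_r}, joined by a zero-weight edge.
        self.parent = {"c_r": "r"}
        self.weight = {"c_r": 0.0}

    def children(self, u):
        return [v for v, p in self.parent.items() if p == u]

    def leaves(self):
        return [v for v in self.parent if not self.children(v)]

    def grow(self, leaf, amount):
        # Continuous step: raise w_leaf at unit rate for `amount` time.
        self.weight[leaf] += amount

    def fork(self, leaf, new_names):
        # Fork step: attach q >= 2 new leaves with zero-weight edges.
        assert len(new_names) >= 2 and leaf in self.leaves()
        for v in new_names:
            self.parent[v] = leaf
            self.weight[v] = 0.0

    def delete(self, leaf):
        # Delete step: remove the leaf; smooth its parent if it is left
        # with a single child, merging the two incident edge weights.
        assert leaf in self.leaves() and leaf != "c_r"
        p = self.parent.pop(leaf)
        self.weight.pop(leaf)
        kids = self.children(p)
        if p != "r" and len(kids) == 1:
            (c,) = kids
            self.parent[c] = self.parent[p]
            self.weight[c] += self.weight[p]
            del self.parent[p], self.weight[p]

t = EvolvingTree()
t.fork("c_r", ["a", "b"])
t.grow("a", 3.0)
t.fork("a", ["a1", "a2"])
t.grow("a1", 2.0)
t.delete("a2")   # smooths "a" away: "a1" now hangs off "c_r" with weight 5
```

Note how `delete` performs the smoothing of a degree-$2$ parent by merging the two incident edge weights, exactly as in the delete step above.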
A pure strategy of the algorithm maps the timeline to a leaf of $T$ that exists at that time. Thus, the start of
the game (time $0$ of step $1$) is
mapped to $c_r$, and at all times the algorithm occupies a leaf and may move from leaf to leaf. If the algorithm
occupies a leaf $\ell$ continuously while $w_\ell$ grows, it pays the increase in weight (we call this the \emph{service cost}). If the algorithm moves
from a leaf $\ell_1$ to a leaf $\ell_2$, it pays the total weight of the path in $T$ between $\ell_1$ and $\ell_2$ at
the time of the move (we call this the \emph{movement cost}). A mixed strategy/randomized algorithm is, as usual, a probability distribution over pure
strategies. A fractional strategy maps the timeline to probability distributions over the existing leaves. Writing
$x_u$ for the total probability of the leaves in the subtree of $u$, this means that a fractional strategy maintains
at all times a point in the changing polytope
\begin{equation}\label{eq: polytope}
K(T):=\left\{x\in\mathbb{R}_+^{V^0}\colon \sum_{v \colon p_v = u} x_v = x_u\,\forall u\in V\setminus\mathcal{L}\right\},
\end{equation}
where we view $x_r:=1$ as a constant.
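For intuition, a point of $K(T)$ is exactly a distribution over leaves lifted to subtree masses. The following Python sketch (the tree and the numbers are our own illustration) performs this lifting and checks the flow constraints.

```python
# Sketch (our own helper, not from the paper): lift a probability
# distribution over leaves to the point x in K(T) with x_u = total
# mass of the leaves below u, and check the flow constraints.

def subtree_mass(parent, leaf_mass):
    """parent: {child: parent} for non-root vertices; leaf_mass: {leaf: prob}."""
    x = dict(leaf_mass)
    # Accumulate each leaf's mass on every ancestor edge.
    for leaf, m in leaf_mass.items():
        u = parent[leaf]
        while u in parent:          # stop at the root r
            x[u] = x.get(u, 0.0) + m
            u = parent[u]
    return x

parent = {"c_r": "r", "a": "c_r", "b": "c_r", "a1": "a", "a2": "a"}
x = subtree_mass(parent, {"a1": 0.25, "a2": 0.25, "b": 0.5})
# Flow constraint at every internal vertex: children sum to the parent.
assert abs(x["a"] - (x["a1"] + x["a2"])) < 1e-12
assert abs(x["c_r"] - (x["a"] + x["b"])) < 1e-12
assert x["c_r"] == 1.0   # x_{c_r} = x_r = 1
```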
Notice that the tree $T$, the weight function $w$, the point $x\in K(T)$, and the derived parameters are all
functions of the adversary's step and the time $t$ within a continuous step. Thus, we shall use henceforth
the following notation: $T(j,0)$, $w(j,0)$, $x(j,0)$, etc. to denote the values of these parameters at the start of
step $j$ of the adversary. If step $j$ is a continuous step of duration $\tau$, then for $t\in [0,\tau]$, we use
$T(j,t)$, $w(j,t)$, $x(j,t)$, etc. to denote the values of these parameters at time $t$ since the start of step $j$.
If it is not required to mention the parameters $(j,t)$ for clarity, we omit them from our notation.
We require that for a fixed continuous step $j$, the function $x(j,\cdot)\colon[0,\tau]\to K(T)$ is absolutely
continuous, and hence differentiable almost everywhere. Notice that the polytope $K(T)$ is fixed during a
continuous step, so this requirement is well-defined. We denote by $x'$ the derivative of $x$ with respect
to $t$, and similarly we denote by $w'$ the derivative of $w$ with respect to $t$. The cost of the algorithm
during the continuous step $j$ is
$$
\int_0^\tau \sum_{v\in V^0} \left(w'_v(j,t) x_v(j,t) + w_v(j,t) \left|x'_v(j,t)\right|\right) dt.
$$
Notice that the first summand (the \emph{service cost}) is non-zero only at the single leaf $\ell$ for
which $w_\ell(j,t)$ is growing.
In a discrete step $j$, the topology of the tree and thus the polytope $K(T)$ changes. The old tree is $T(j,0)$
and the new tree is $T(j+1,0)$. In a delete step, when a leaf $\ell$ is deleted, the algorithm first has to move
from its old position $x(j,0)\in K(T(j,0))$ to some position $x\in K(T(j,0))$ with $x_\ell=0$. The cost of moving
from $x(j,0)$ to $x$ is given by
$$
\sum_{v\in V^0} w_v(j,0) \left|x_v(j,0) - x_v\right|.
$$
The new state $x(j+1,0)$ is the projection of $x$ onto the new polytope $K(T(j+1,0))$, where the $\ell$-coordinate
and possibly (if smoothing happens) the $p_\ell$-coordinate are removed.
In a fork step, the algorithm chooses as its new position any point in $K(T(j+1,0))$ whose projection onto $K(T(j,0))$
is the old position of the algorithm. No cost is incurred here (since the new leaves are appended at distance $0$).
The following lemma is analogous to Lemma~\ref{lm: fractional to mixed}, and its proof is very
similar. We omit it here; in fact, we do not need this claim to prove the main result of the
paper.
\begin{lemma}\label{lm: DTG frac to mix}
For every fractional strategy of the algorithm there is a mixed strategy incurring the same cost in expectation.
\end{lemma}
Our main result in this section is the following theorem.
\begin{theorem}\label{thm: main}
For every $k\in{\mathbb{N}}$ and for every $\epsilon > 0$ there exists a fractional strategy of the algorithm
with the following performance guarantee. For every pure strategy of the adversary that grows
trees of depth at most $k$, the cost $C$ of the algorithm satisfies
$$
C\le O(k^2\log d_{\max})\cdot(\opt + \epsilon),
$$
where $\opt$ is the minimum distance in the final tree from the root to a leaf, and $d_{\max}$
is the maximum degree of a node at any point during the game.
\end{theorem}
Notice that for every strategy of the adversary, there exists a pure strategy that pays
exactly $\opt$ service cost and zero movement cost. We will refer to this strategy as the {\em optimal play}. The algorithm
cannot in general choose the optimal play because the evolution of the tree only gets revealed
step-by-step.
\subsection{Additional notation}
Let $j$ be any step, and let $t$ be a time in that step (so, if $j$ is a discrete step then $t=0$, and
if $j$ is a continuous step of duration $\tau$ then $t\in [0,\tau]$).
For a vertex $u\in V(j,t)$ we denote by $h_u(j,t)$ its combinatorial depth, i.e., the number of edges on the
path from $r$ to $u$ in $T(j,t)$. Instead of the actual edge weights $w_u(j,t)$, our algorithm will be based
on revised edge weights defined as
\begin{align*}
\tilde w_u(j,t) := \frac{2k-1}{2k-h_u(j,t)}\left(w_u(j,t) + \epsilon 2^{-j_u} \right),
\end{align*}
where $j_u\in\mathbb N$ is the step number when $u$ was created (or $0$ if $u$ existed in the initial tree).
The purpose of the term
$\epsilon 2^{-j_u}$ is to ensure that $\tilde w_u(j,t)$ is strictly positive.
For $u\in V^0(j,t)$, we also define a shift parameter by induction on $h_u(j,t)$, as follows. For
$u=c_r(j,t)$, $\delta_u(j,t) = 1$. For other $u$, $\delta_u(j,t) = \delta_{p_u}(j,t) / (d_{p_u}(j,t)-1)$, where
$p_u = p_u(j,t)$, and $d_{p_u}(j,t)$ is the degree of $p_u$ in $T(j,t)$ (i.e., $d_{p_u}(j,t)-1$ is the
number of children of $p_u$ in $T(j,t)$; note that every non-leaf node in $V^0(j,t)$ has degree at least $3$).
Observe that by definition $\delta(j,t)\in K(T(j,t))$. As mentioned earlier, we often omit the parameters
$(j,t)$ from our notation, unless they are required for clarity.
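The revised weights and shift parameters can be computed by a straightforward traversal. The following hedged Python sketch is our own illustration; the tree, $\epsilon$, the birth steps $j_u$, and $k$ are arbitrary choices.

```python
# Sketch computing the revised weights and shift parameters of this
# subsection (epsilon, step numbers j_u, and the tree are illustrative).

def depths(parent):
    h = {}
    def depth(u):
        if u == "r":
            return 0
        if u not in h:
            h[u] = depth(parent[u]) + 1
        return h[u]
    for u in parent:
        depth(u)
    return h

def revised_weights(parent, w, birth_step, k, eps):
    h = depths(parent)
    return {u: (2*k - 1) / (2*k - h[u]) * (w[u] + eps * 2.0**(-birth_step[u]))
            for u in parent}

def shifts(parent):
    children = {}
    for v, p in parent.items():
        children.setdefault(p, []).append(v)
    delta = {"c_r": 1.0}
    stack = ["c_r"]
    while stack:
        u = stack.pop()
        for v in children.get(u, []):
            # delta_v = delta_u / (number of children of u)
            delta[v] = delta[u] / len(children[u])
            stack.append(v)
    return delta

parent = {"c_r": "r", "a": "c_r", "b": "c_r", "a1": "a", "a2": "a"}
w = {"c_r": 0.0, "a": 3.0, "b": 1.0, "a1": 2.0, "a2": 0.0}
birth = {"c_r": 0, "a": 1, "b": 1, "a1": 3, "a2": 3}
tw = revised_weights(parent, w, birth, k=3, eps=0.01)
assert all(tw[u] > 0 for u in tw)                     # strictly positive
assert all(tw[u] <= 2 * (w[u] + 0.01) for u in tw)    # factor lies in [1, 2]
d = shifts(parent)
assert abs(d["a"] + d["b"] - d["c_r"]) < 1e-12        # delta lies in K(T)
assert abs(d["a1"] + d["a2"] - d["a"]) < 1e-12
```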
\subsection{The algorithm}\label{sec:algo}
We consider four distinct types of steps: continuous steps, fork steps, deadend steps, and merge steps.
A delete step is implemented by executing a deadend step, followed, if needed, by a merge step. It is
convenient to examine the two operations required to implement a delete step separately.
\paragraph{Continuous step.}
In a continuous step, the weight $w_\ell$ of some leaf $\ell\in\mathcal{L}$ grows continuously (and thus $\tilde w_\ell'> 0$).
In this case, for $u\in V^0$ we update the fractional mass in the subtree below $u$ at rate
\begin{equation}\label{eq: dynamic}
x_u' = -\,\, \frac{2 x_u}{\tilde w_u}\tilde w_u' + \frac{x_u+\delta_u}{\tilde w_u}\left(\lambda_{p_u} - \lambda_{u}\right),
\end{equation}
where $\lambda_u=0$ for $u\in\mathcal{L}$ and $\lambda_u\ge 0$ for $u\in V\setminus\mathcal{L}$ are chosen such that $x$
remains in the polytope $K(T)$.
We will show in Section~\ref{sec:MDExistence} that such $\lambda$ exists (as a function of time).\footnote{The
dynamic of $x$ corresponds to running mirror descent with regularizer
$\Phi_t(z) = \sum_{u\in V} \tilde w_u(z_u+\delta_u)\log(z_u+\delta_u)$, using
the growth rate $\tilde w'$ of the approximate weights as cost function, and scaling the rate of movement by a factor
$\frac{2x_{\ell}}{x_{\ell}+\delta_{\ell}}$ when $\ell$ is the leaf whose edge grows. See Section~\ref{sec: motivation}.}
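To make the dynamic concrete, consider the special case of a star: $r$, its child $c_r$, and $m$ leaves below $c_r$, with $x_{c_r}=1$ fixed. Then $\lambda_{c_r}$ is the only relevant multiplier ($\lambda_\ell=0$ on leaves), and it is determined in closed form by $\sum_\ell x'_\ell = 0$. The following Euler discretization is purely illustrative, with step size and parameters of our own choosing; it is not part of the algorithm.

```python
# Illustrative Euler discretization of the dynamic on a star:
# root r -- c_r -- m leaves, so the only Lagrange multiplier is
# lambda_{c_r}, fixed in closed form by requiring sum of x'_leaf = 0.
# Note lambda >= 0, so mass only drains from the growing leaf.

def euler_step(x, tw, delta, grow, tw_rate, dt):
    lam = (2 * x[grow] * tw_rate / tw[grow]) / \
          sum((x[l] + delta[l]) / tw[l] for l in x)
    x_new = {}
    for l in x:
        rate = (x[l] + delta[l]) * lam / tw[l]
        if l == grow:
            rate -= 2 * x[l] * tw_rate / tw[l]
        x_new[l] = x[l] + dt * rate
    tw[grow] += dt * tw_rate        # the growing leaf's revised weight
    return x_new

m = 3
x = {l: 1.0 / m for l in range(m)}
tw = {l: 1.0 for l in range(m)}          # revised weights of the leaf edges
delta = {l: 1.0 / m for l in range(m)}   # delta_l = delta_{c_r} / m
for _ in range(2000):
    x = euler_step(x, tw, delta, grow=0, tw_rate=1.0, dt=0.01)
```

Mass drains from the growing leaf to its siblings while $\sum_\ell x_\ell = 1$ and $x\ge 0$ are preserved (up to discretization error).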
\paragraph{Fork step.}
In a fork step, new leaves $\ell_1,\ell_2,\dots,\ell_q$ (for some $q\ge 2$) are spawned as children of a leaf $u$
(so that $u$ is no longer a leaf). They are ``born'' with $w_{\ell_1}=w_{\ell_2}=\cdots = w_{\ell_q}=0$ and
$x_{\ell_1}=x_{\ell_2}=\cdots =x_{\ell_q}=x_u/q$.
\paragraph{Deadend step.}
In a deadend step, we delete a leaf $\ell\ne c_r$. To achieve this, we first compute the limit of a continuous step where
the weight $\tilde w_\ell$ grows to infinity, ensuring that the mass $x_\ell$ tends to $0$. This, of course, changes the
mass at other nodes, and we update $x$ to be the limit of this process. Then, we remove the leaf $\ell$ along with
the edge $\{\ell,p_\ell\}$ from the tree. Notice that this changes the degree $d_{p_\ell}$. Therefore, it triggers a discontinuous
change in the shift parameter $\delta_u$ for every vertex $u$ that is a descendant of $p_\ell$.
\paragraph{Merge step.}
A merge step immediately follows a deadend step if, after the removal of the edge $\{\ell,p_\ell\}$, the vertex $v=p_\ell$
has only one child $c$ left. Notice that $v\ne r$. We merge the two edges $\{c,v\}$ of weight $w_c$ and $\{v, p_v\}$
of weight $w_v$ into a single edge $\{c,p_v\}$ of weight $w_c+w_v$. The two edges that were merged and the vertex
$v$ are removed from $T$. This decrements by $1$ the combinatorial depth $h_u$ of every vertex $u$ in the subtree
rooted at $c$. Thus, it triggers a discontinuous change in the revised weight $\tilde w_u$, for every vertex $u$ in this
subtree.
\subsection{Competitive analysis}
The analysis of the algorithm is based on a potential function argument. Let $y\in K(T)$ denote the state of the
optimal play. Note that as the optimal play is a pure strategy, the vector $y$ is simply the indicator
function for the nodes on some root-to-leaf path. We define a potential function $P = P_{k,T,w}(x,y)$,
where $x\in K(T)$, and we prove that the algorithm's cost plus the change in $P$ is at most $O(k^2\log d_{\max})$ times
the optimal cost, where $d_{\max}$ is the maximum degree of a node of $T$. This, along with the fact that
$P$ is bounded, implies $O(k^2\log d_{\max})$ competitiveness. In Section~\ref{sec: motivation} we motivate the construction
of the potential function $P$. Here, we simply define it as follows:
\begin{equation}\label{eq: potential}
P := 2\sum_{u\in V^0}\tilde w_u \left(4k y_u \log \frac{1+\delta_u}{x_u+\delta_u} + (2k-h_u)x_u\right).
\end{equation}
We now consider the cost and potential change for each of the different steps separately.
\subsubsection{Continuous step}\label{sec:growth}
\paragraph{Bounding the algorithm's cost.}
Let $\ell$ be the leaf whose weight $w_\ell$ is growing, and recall that $c_r$ is the current neighbor of the root $r$. By
definition of the game, the algorithm pays two types of cost. Firstly, it pays for the mass $x_\ell$ at the leaf $\ell$ moving
away from the root at rate $w_\ell'$. Secondly, it pays for moving mass from $\ell$ to other leaves. Notice that $x_{c_r} = 1$
stays fixed. Let $C = C(j,t)$ denote the total cost that the algorithm accumulates in
the current step $j$, up to the current time $t$.
\begin{lemma}\label{lm: growth cost}
The rate at which $C$ increases with $t$ is
$C' \le 3\tilde w_\ell'x_\ell + 2\sum_{u\in V^0} (x_u+\delta_u)\lambda_{u}$.
\end{lemma}
\begin{proof}
We have
\begin{eqnarray}
C' = w_\ell'x_\ell\, + \sum_{u\in V^0\setminus \{c_r\}} w_u |x_u'|
&\le& \tilde w_\ell'x_\ell\, + \sum_{u\in V^0\setminus \{c_r\}} \tilde w_u |x_u'|\label{eq: growth w to tilde w}\\
& = & \tilde w_\ell'x_\ell\, + \sum_{u\in V^0\setminus \{c_r\}} \left|-2x_u \tilde w_u' +
(x_u+\delta_u)(\lambda_{p_u} - \lambda_u)\right|\label{eq: growth movement}\\
&\le& 3\tilde w_\ell'x_\ell\, + \sum_{u\in V^0\setminus \{c_r\}} (x_u+\delta_u)(\lambda_{p_u} + \lambda_u)\label{eq: growth triangle} \\
&\le& 3\tilde w_\ell'x_\ell + 2\sum_{u\in V^0} (x_u+\delta_u)\lambda_{u}.\label{eq:growthCost}
\end{eqnarray}
Inequality~\eqref{eq: growth w to tilde w} uses the fact that $w_u\le \tilde w_u$ and $w_\ell'\le \tilde w_\ell'$.
Equation~\eqref{eq: growth movement} uses the definition of the dynamic in Equation~\eqref{eq: dynamic}.
Inequality~\eqref{eq: growth triangle} uses the triangle inequality.
Finally, Inequality~\eqref{eq:growthCost} uses the fact that $x,\delta\in K(T)$, so
$\sum_{v \colon p_v = u} (x_v+\delta_v) = x_u+\delta_u$ for all $u\in V\setminus\mathcal{L}$.
\end{proof}
\paragraph{Change of potential.}
We decompose the potential function as $P= 4kD - 2\Psi$, where
\begin{align*}
D := \sum_{u\in V^0}\tilde w_u\left(2y_u\log\frac{1+\delta_u}{x_u+\delta_u} \,\,\,+\,\,\, x_u\right)
\end{align*}
and
\begin{align*}
\Psi:= \sum_{u\in V^0} h_u \tilde w_u x_u.
\end{align*}
We first analyze the rate of change of $\Psi$.
\begin{lemma}\label{lm: Psi change}
The rate at which $\Psi$ changes satisfies:
$\Psi' \ge -k \tilde w_\ell' x_\ell + \sum_{u\in V^0} \lambda_u (x_u+\delta_u)$.
\end{lemma}
\begin{proof}
We have
\begin{eqnarray}
\Psi' & = & h_\ell \tilde w_\ell' x_\ell + \sum_{u\in V^0\setminus\{c_r\}} h_u \tilde w_u x_u'\nonumber\\
& = & h_\ell \tilde w_\ell' x_\ell+\sum_{u\in V^0\setminus\{c_r\}} h_u \left(-2x_u \tilde w_u' + (x_u+\delta_u)(\lambda_{p_u}-\lambda_{u})\right)\nonumber\\
& = & - h_\ell \tilde w_\ell' x_\ell + \sum_{u\in V^0\setminus\{c_r\}} h_u (x_u+\delta_u)(\lambda_{p_u}-\lambda_{u})\nonumber\\
&\ge& - k \tilde w_\ell' x_\ell +
\sum_{u\in V^0} \lambda_u \left((h_u+1)\sum_{v\colon p_v=u} \!\! (x_v+\delta_v) \quad-\quad h_u (x_u+\delta_u)\right)\label{eq: psi ineq}\\
& = & - k \tilde w_\ell' x_\ell + \sum_{u\in V^0} \lambda_u (x_u+\delta_u).\label{eq:growthWdepth}
\end{eqnarray}
Here, Inequality~\eqref{eq: psi ineq} uses the fact that $h_\ell\le k$ and, for $u=p_v$, $h_v=h_u+1$.
Equation~\eqref{eq:growthWdepth} uses the previously noted fact that, as $x,\delta\in K(T)$, then for all $u\notin \mathcal{L}$,
$\sum_{v \colon p_v = u} (x_v+\delta_v) = x_u+\delta_u$ (and if $u\in\mathcal{L}$, then $\lambda_u=0$).
\end{proof}
Next, we analyze the rate of change of $D$.
\begin{lemma}\label{lm: D change}
The rate at which $D$ changes satisfies:
$D' \le -\tilde w_\ell'x_\ell + 2(2+k\log d_{\max}) y_\ell\tilde w_\ell'$.
\end{lemma}
\begin{proof}
We have
\begin{eqnarray}
D'
& = & \tilde w_\ell' \left(2y_\ell\log\frac{1+\delta_\ell}{x_\ell+\delta_\ell}+x_\ell\right) +
\sum_{u\in V^0\setminus\{c_r\}} \tilde w_ux_u'\left(\frac{-2y_u}{x_u+\delta_u} + 1\right)\nonumber\\
& = & \tilde w_\ell' \left(2y_\ell\log\frac{1+\delta_\ell}{x_\ell+\delta_\ell}+x_\ell\right) +
\sum_{u\in V^0\setminus\{c_r\}} \left(-2x_u \tilde w_u' +
(x_u+\delta_u)(\lambda_{p_u} - \lambda_u)\right)\left(\frac{-2y_u}{x_u+\delta_u} + 1\right)\nonumber\\
& = & -\tilde w_\ell'x_\ell + 2y_\ell\tilde w_\ell' \left(\log\frac{1+\delta_\ell}{x_\ell+ \delta_\ell} + \frac{2 x_\ell}{x_\ell+\delta_\ell}\right) +
\sum_{u\in V^0\setminus \{c_r\}}(\lambda_{p_u}-\lambda_u)(-2y_u+x_u+\delta_u)\nonumber\\
&\le& -\tilde w_\ell'x_\ell + 2 y_\ell\tilde w_\ell'(2+k\log d_{\max}) +
\sum_{u\in V^0} \lambda_u \left(2y_u-x_u-\delta_u - \sum_{v\colon p_v=u}(2y_v-x_v-\delta_v)\right)\label{eq: D ineq}\\
& = & -\tilde w_\ell'x_\ell + 2 y_\ell\tilde w_\ell'(2+k\log d_{\max}).\label{eq:growthBregman}
\end{eqnarray}
Inequality~\eqref{eq: D ineq} uses the fact that $\delta_\ell\ge (d_{\max})^{1-h_\ell}\ge (d_{\max})^{1-k}$ and
$2y_{c_r}-x_{c_r}-\delta_{c_r}=0$. Equation~\eqref{eq:growthBregman} uses the fact that $x,y,\delta\in K(T)$, hence
for $u\in V\setminus\mathcal{L}$, $2y_u-x_u-\delta_u = \sum_{v\colon p_v=u} (2y_v-x_v-\delta_v)$ (and for
$u\in\mathcal{L}$, $\lambda_u=0$).
\end{proof}
We obtain the following lemma, which bounds the algorithm's cost and change in potential against the service cost of the optimal play.
\begin{lemma}\label{lm: P change}
For every $k\ge 2$, it holds that $C' + P' \le O(k^2\log d_{\max}) w_\ell' y_\ell$.
\end{lemma}
\begin{proof}
Combine Equations~\eqref{eq:growthCost},~\eqref{eq:growthWdepth}, and~\eqref{eq:growthBregman},
and recall that $P=4kD-2\Psi$. We get
\begin{equation}\label{eq:growthMainBound}
C' +P' \le (2k+3-4k)\tilde w_\ell' x_\ell + 8k(2+k\log d_{\max})\tilde w_\ell' y_\ell \le O(k^2\log d_{\max}) w_\ell' y_\ell,
\end{equation}
where in the last inequality we use $\tilde w_\ell' < 2w_\ell'$.
\end{proof}
\subsubsection{Fork step}
Fork steps may increase the value of the potential function $P$, because the new edges
have revised weight $> 0$. The following lemma bounds this increase.
\begin{lemma}\label{lm: fork cost}
The total increase in $P$ due to all fork steps is at most $\epsilon \cdot O(k^2\log d_{\max})$.
\end{lemma}
\begin{proof}
Consider a fork step that attaches new leaves $\ell_1,\dots,\ell_q$ to a leaf $u$.
The new leaves are born with revised edge weights
$\frac{2k-1}{2k-h_u-1}\epsilon 2^{-j}\le \epsilon 2^{-j+1}$, where $j$ is the current step number. Since $\sum_{i=1}^q y_{\ell_i}=y_u\le 1$ and $\sum_{i=1}^q x_{\ell_i}=x_u\le 1$, the change $\Delta P$ in $P$ satisfies
\begin{eqnarray*}
\Delta P
&\le& \epsilon 2^{-j+2}\cdot\left(4k\log\frac{1+\delta_u/q}{\delta_u/q} + 2k-h_{u}-1\right) \\
&\le& \epsilon 2^{-j+2}\cdot (2k+4k^2\log d_{\max}),
\end{eqnarray*}
where the last inequality follows from $\delta_u/q\ge (d_{\max})^{1-k}$.
As the step number $j$ is different in all fork steps, the total cost of all fork steps is
at most $\epsilon\cdot(2k+4k^2\log d_{\max})\sum_{j=1}^{\infty} 2^{-j+2} = \epsilon\cdot O(k^2\log d_{\max})$.
\end{proof}
\subsubsection{Deadend step}
Recall that when a leaf $\ell$ is deleted, we first compute the limit of a continuous step as the weight
$\tilde w_\ell$ grows to infinity. Let $\bar x$ be the mass distribution that the algorithm converges to
when $\tilde w_\ell$ approaches infinity.
\begin{lemma}\label{lm: limit 0}
The limit $\bar x$ satisfies $\bar x_{\ell}=0$. Hence, $\bar x$ with the $\ell$-coordinate removed is a
valid mass distribution in the new polytope $K(T)$. Also, a deadend step decreases $P$ by at least
the cost the algorithm incurs to move to $\bar x$.
\end{lemma}
\begin{proof}
Note that $y_\ell=0$ for the ``dying'' leaf $\ell$. Thus, by Lemma~\ref{lm: P change}, the cost of the algorithm during
the growth of $\tilde w_\ell$ is bounded by the decrease of $P$ during that time. Clearly, $P$ can only decrease by
a finite amount (as it remains non-negative), and thus the algorithm's cost is finite. But this means that the
mass at $\ell$ must tend to $0$, since otherwise the service cost would be infinite. Moreover, notice that the growth
of $\tilde w_\ell$ is just a simulation: the algorithm does not pay the service cost, only the cost of moving from its
state $x$ at the start of the simulation to the limit state $\bar x$. However, this movement cost is at most the total
cost to the algorithm during the simulation, and $P$ decreases by at least the total cost. Finally, at $\bar x$, the term
in $P$ for $\ell$ equals $0$, so removing it does not increase $P$. Also, for every vertex $u$ in a subtree rooted at a
sibling of $\ell$ the term $\delta_u$ increases (as the degree $d_{p_\ell}$ decreases by $1$). However, this too cannot increase
$P$ (as $x_u\le 1$).
\end{proof}
\subsubsection{Merge step}
\begin{lemma}\label{lm: merge}
A merge step does not increase $P$.
\end{lemma}
\begin{proof}
Let $j$ be the step number in which the merge happens.
Substituting the expression for the revised weights, the potential $P$ can be written as
\begin{align*}
P&= 2\sum_{u\in V^0}\left(w_u + \epsilon 2^{-j_u} \right) \left(\frac{2k-1}{2k-h_u}4k y_u \log \frac{1+\delta_u}{x_u+\delta_u} + (2k-1)x_u\right).
\end{align*}
Consider the two edges $\{c,v\}$ and $\{v, p_v\}$ that are to be merged, where $v=p_c(j,0)$. Firstly,
for each vertex $u$ in the subtree of $c$ (including $c$ itself), its depth $h_u$ decreases by $1$.
This cannot increase $P$. Notice also that as $d_v = 2$, we have $\delta_c = \delta_v$ and the merge
does not change any $\delta_u$. The new value $h_c(j+1,0)$ equals the old value $h_v(j,0)$. Note
also that $y_c=y_v$ and $x_c=x_v$ because $c$ is the only child of $v$. Thus, merging the two
edges of lengths $w_c$ and $w_v$ into a single edge of length $w_c+w_v$, and removing vertex
$v$, only leads to a further decrease in $P$, resulting from the disappearance of the $\epsilon 2^{-j_v}$ term.
\end{proof}
\subsubsection{Putting it together}
\begin{proofof}{Theorem~\ref{thm: main}}
By Lemmas~\ref{lm: P change},~\ref{lm: fork cost},~\ref{lm: limit 0} and~\ref{lm: merge},
$$
C\le O(k^2\log d_{\max}) \opt + P_0 - P_f + \epsilon\cdot O(k^2\log d_{\max}),
$$
where $P_0$ and $P_f$ are the initial and final value of $P$, respectively. Now, observe that
$P_0 = \epsilon\cdot O(k)$ and $P_f\ge 0$.
\end{proofof}
\section{Derivation of Algorithm and Potential Function}\label{sec: motivation}
We now describe how we derived the algorithm and potential function from the last section, and justify the existence of $\lambda$.
\subsection{Online mirror descent}
Our algorithm is based on the online mirror descent framework of \cite{BCLLM18,BCLL19}. In general, an algorithm in this framework is specified by a convex body $K\subset \mathbb{R}^n$, a suitable strongly convex function $\Phi\colon K\to \mathbb{R}$ (called \emph{regularizer}) and a map $f\colon [0,\infty)\times K\to \mathbb{R}^n$ (called \emph{control function}). The algorithm corresponding to $K$, $\Phi$ and $f$ is the (usually unique) solution $x\colon[0,\infty)\to K$ to the following differential inclusion:
\begin{align}
\nabla^2\Phi(x(t))\cdot x'(t) \in f(t,x(t)) - N_K(x(t)),\label{eq:MD}
\end{align}
where $\nabla^2\Phi(x)$ denotes the Hessian of $\Phi$ at $x$ and
\[N_K(x):=\{\mu\in\mathbb{R}^n\colon \langle \mu, y-x\rangle \le 0, \,\forall y\in K\}\]
is the normal cone of $K$ at $x(t)$. Intuitively, \eqref{eq:MD} means that $x$ tries to move in direction $f(t,x(t))$; the normal cone term $N_K(x(t))$ ensures that $x(t)\in K$ can be maintained, and multiplication by the positive definite matrix $\nabla^2\Phi(x(t))$ corresponds to a distortion of the direction in which $x$ moves.
A benefit of the online mirror descent framework is that there exists a default potential function for its analysis, namely the Bregman divergence associated to $\Phi$, defined as
\begin{align*}
D_\Phi(y\| x):=\Phi(y)-\Phi(x)+\langle \nabla \Phi(x), x-y\rangle
\end{align*}
for $x,y\in K$. Plugging in $x=x(t)$, the change of the Bregman divergence as a function of time is
\begin{align}
\frac{d}{dt}D_\Phi(y\| x(t))
&= \langle \nabla^2\Phi(x(t))\cdot x'(t), x(t)-y\rangle\label{eq:chainRule}\\
&= \langle f(t,x(t))-\mu(t), x(t)-y\rangle\qquad\qquad\text{for some $\mu(t)\in N_K(x(t))$}\label{eq:plugMD}\\
&\le \langle f(t,x(t)), x(t)-y\rangle,\label{eq:BregmanBound}
\end{align}
where \eqref{eq:chainRule} follows from the definition of $D_\Phi$ and the chain rule, \eqref{eq:plugMD} follows from~\eqref{eq:MD}, and \eqref{eq:BregmanBound} follows from the definition of $N_K(x(t))$.
\subsection{Charging service cost for evolving trees}
In the evolving tree game, we have $K=K(T)$. For a continuous step, it would seem natural to choose $f(t,x(t))=-w'(t)$, so that \eqref{eq:BregmanBound} implies that the online service cost $\langle w'(t), x(t)\rangle$ plus the change in the potential $D_\Phi(y\| x(t))$ is at most the offline service cost $\langle w'(t), y\rangle$. For the regularizer $\Phi$ (which should be chosen in a way that also allows us to bound the movement cost later), the choice analogous to \cite{BCLL19} would be
\begin{align*}
\Phi(x):= \sum_{u\in V^0} w_u(x_u+\delta_u)\log(x_u+\delta_u).
\end{align*}
However, since $\Phi$ (and thus $D_\Phi$) depends on $w$, the evolution of $w$ leads to an additional change of $D_\Phi$, which the bound~\eqref{eq:BregmanBound} does not account for as it assumes the regularizer $\Phi$ to be fixed. To determine this additional change, first observe that by a simple calculation
\begin{align*}
D_\Phi(y\|x) = \sum_{u\in V^0} w_u\left[(y_u+\delta_u)\log \frac{y_u+\delta_u}{x_u+\delta_u} + x_u-y_u\right].
\end{align*}
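Indeed, $\nabla\Phi(x)_u = w_u(\log(x_u+\delta_u)+1)$, and the $x$-independent terms cancel. The identity can also be checked numerically; the following sanity-check sketch (with arbitrary numbers of our own, ignoring the polytope constraints, which play no role in the identity) does so.

```python
# Numerical sanity check (ours) of the closed form above: the Bregman
# divergence of Phi(x) = sum_u w_u (x_u+delta_u) log(x_u+delta_u) equals
# sum_u w_u [ (y_u+delta_u) log((y_u+delta_u)/(x_u+delta_u)) + x_u - y_u ].
import math

w     = [2.0, 0.5, 1.5]
delta = [0.2, 0.3, 0.5]
x     = [0.1, 0.6, 0.3]
y     = [0.4, 0.4, 0.2]

def Phi(z):
    return sum(w[i] * (z[i] + delta[i]) * math.log(z[i] + delta[i])
               for i in range(3))

def gradPhi(z):
    return [w[i] * (math.log(z[i] + delta[i]) + 1) for i in range(3)]

bregman = Phi(y) - Phi(x) + sum(gradPhi(x)[i] * (x[i] - y[i]) for i in range(3))
closed  = sum(w[i] * ((y[i] + delta[i])
                      * math.log((y[i] + delta[i]) / (x[i] + delta[i]))
                      + x[i] - y[i]) for i in range(3))
assert abs(bregman - closed) < 1e-12
```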
When $w_\ell$ increases at rate $1$, this potential increases at rate $(y_\ell+\delta_\ell)\log \frac{y_\ell+\delta_\ell}{x_\ell+\delta_\ell} + x_\ell-y_\ell$. The good news is that the part $y_\ell\log \frac{y_\ell+\delta_\ell}{x_\ell+\delta_\ell}\le y_\ell \cdot O(\log\frac{1}{\delta_\ell})\le O(k) y_\ell$ can be charged to the offline service cost, which increases at rate $y_\ell$. The term $-y_\ell$ also does no harm as it is non-positive. The term $x_\ell$ might seem to be a slight worry, because it is equal to the online service cost, which is precisely the quantity that the change in potential is supposed to cancel. It means that effectively we have to cancel two times the online service cost, which can be achieved by accelerating the movement of the algorithm by a factor $2$ (by multiplying the control function by a factor $2$). The main worry is the remaining term $\delta_\ell\log \frac{y_\ell+\delta_\ell}{x_\ell+\delta_\ell}$, which does not seem controllable. We would therefore prefer to have a potential that has the $\delta_u$ terms only inside but not outside the $\log$.
Removing this term (and, for simplicity, omitting the $y_u$ at the end of the potential, which does not play any important role), our desired potential would then be a sum of the following two terms $L(t)$ and $M(t)$:
\begin{align*}
L(t) &:= \sum_{u\in V^0} w_u(t) y_u\log \frac{y_u+\delta_u}{x_u(t)+\delta_u}\\
M(t) &:= \sum_{u\in V^0} w_u(t) x_u(t)
\end{align*}
Let us study again why these terms are useful as part of the classical Bregman divergence potential by calculating their change. Dropping $t$ from the notation, and using that $\nabla^2\Phi(x)$ is the diagonal matrix with entries $\frac{w_u}{x_u+\delta_u}$, we have
\begin{align*}
L' &= \langle w',y\rangle O(k) - \langle y, \nabla^2\Phi(x)\cdot x'\rangle \\
&= \langle w',y\rangle O(k) - \langle y, f-\mu\rangle
\end{align*}
and
\begin{align*}
M' &= \langle w', x\rangle + \langle w, x'\rangle\\
&= \langle w', x\rangle + \langle x+\delta, \nabla^2\Phi(x) \cdot x'\rangle\\
&= \langle w', x\rangle + \langle x+\delta, f-\mu\rangle
\end{align*}
for some $\mu\in N_K(x)$.
For a convex body $K$ of the form $K=\{x\in\mathbb{R}^n\colon Ax\le b\}$ where $A\in\mathbb{R}^{m\times n}$, $b\in \mathbb{R}^m$, the normal cone is given by
\begin{align*}
N_K(x) = \{A^T\lambda\mid \lambda\in\mathbb{R}_+^m, \langle \lambda, Ax-b\rangle=0\}.
\end{align*}
The entries of $\lambda$ are called \emph{Lagrange multipliers}. In our case, we will have $x_u>0$ for all $u\in V^0$, so the Lagrange multipliers corresponding to the constraints $x_u\ge 0$ will be zero. Hence, the only tight constraints are the equality constraints, and the normal cone at \emph{any} such $x$ is given by
\begin{align}
N_K(x) = \{(\lambda_{u}-\lambda_{p(u)})_{u\in V^0}\mid \lambda\in\mathbb{R}^{V}, \lambda_u=0\,\forall u\in\mathcal{L}\}.\label{eq:normalCone}
\end{align}
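One way to see \eqref{eq:normalCone}: for $x$ in the relative interior of $K(T)$, the normal cone is the orthogonal complement of the affine hull of $K(T)$, and the inner product of $(\lambda_u-\lambda_{p(u)})_u$ with any direction $y-x$ between points of $K(T)$ telescopes to $0$ along root-to-leaf paths. A small numerical check (the tree and the numbers are our own illustration):

```python
# Check (illustrative): for mu_u = lambda_u - lambda_{p(u)} with
# lambda = 0 on leaves, and any x, y in K(T), <mu, y - x> = 0.
parent = {"c_r": "r", "a": "c_r", "b": "c_r", "a1": "a", "a2": "a"}
# lambda is arbitrary on internal vertices, 0 on the leaves b, a1, a2.
lam = {"r": 0.7, "c_r": -1.3, "a": 2.0, "b": 0.0, "a1": 0.0, "a2": 0.0}
mu = {u: lam[u] - lam[parent[u]] for u in parent}

# Two points of K(T): children sum to the parent, x_{c_r} = 1.
x = {"c_r": 1.0, "a": 0.5, "b": 0.5, "a1": 0.2, "a2": 0.3}
y = {"c_r": 1.0, "a": 0.9, "b": 0.1, "a1": 0.9, "a2": 0.0}
inner = sum(mu[u] * (y[u] - x[u]) for u in parent)
assert abs(inner) < 1e-12
```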
Since $\delta\in K$ and $\delta_u>0$ for all $u$, we thus have $N_K(x)=N_K(\delta)$. Hence, we can cancel the $\mu$ terms in $L'$ and $M'$ by taking the potential $D=2L+M$, so that
\begin{align*}
D'=2L' + M' &= \langle w',y\rangle O(k) + \langle w', x\rangle + \langle x+\delta - 2y, f-\mu\rangle\\
&\le \langle w',y\rangle O(k) + \langle w', x\rangle + \langle x+\delta - 2y, f\rangle,
\end{align*}
where the inequality uses that $\mu\in N_K(x)$ and $\mu\in N_K(\delta)$. Recalling that $w_u'=\1_{u=\ell}$, and choosing $f=-2w'$ as the control function, we get
\begin{align} \label{eq:justforbelow}
D' &\le y_\ell O(k) - x_\ell,
\end{align}
i.e., the potential charges the online service cost to $O(k)$ times the offline service cost. Indeed, the potential function $D$ used in Section~\ref{sec:growth} is given by $D=2L+M$, up to the replacement of $w$ by $\tilde w$. Moreover, we note that \eqref{eq:justforbelow} remains true with the control function $f=-\frac{2x_\ell}{x_\ell+\delta_\ell}w'$, which will be helpful for the movement as we discuss next.
\subsection{Bounding movement via damped control and revised weights}
Besides the service cost, we also need to bound the movement cost. By \eqref{eq:MD} and \eqref{eq:normalCone} and since $\nabla^2\Phi(x)$ is the diagonal matrix with entries $\frac{w_u}{x_u+\delta_u}$, the movement of the algorithm satisfies
\begin{align}
w_u x_u'&=(x_u+\delta_u)(f_u + \lambda_{p(u)} - \lambda_{u})\nonumber\\
&= -2x_u w_u' + (x_u+\delta_u)(\lambda_{p(u)} - \lambda_{u}),\label{eq:boundMovementMotivation}
\end{align}
where the last equation uses $f=-\frac{2x_\ell}{x_\ell+\delta_\ell}w'$. Up to the discrepancy between $w$ and $\tilde w$, this is precisely Equation~\eqref{eq: dynamic}. Here, damping the control function $f$ by the factor $\frac{x_\ell}{x_\ell+\delta_\ell}$ is crucial: otherwise, there would be additional movement of the form $\delta_\ell w_\ell'$. Although a similar $\delta$-induced term in the movement also exists in~\cite{BCLL19}, the argument there to control such a term relies on $w$ being fixed and would therefore fail in our case. Scaling by $\frac{x_\ell}{x_\ell+\delta_\ell}$ prevents such movement from occurring in the first place.
To bound the movement cost,~\cite{BCLL19} employs a \emph{weighted depth potential} defined as
\begin{align*}
\Psi = \sum_{u\in V^0}h_u w_u x_u.
\end{align*}
Our calculation in Lemma~\ref{lm: Psi change} suggests that we can use the same $\Psi$ here, choosing the overall potential function as $P= 4kD - 2\Psi$. But now the problem is that the combinatorial depths $h_u$ can change during merge steps, which would lead to an increase of the overall potential $P$. To counteract this, we use the revised weights $\tilde w_u$: the scaling by $\frac{2k-1}{2k-h_u}$ in their definition means that $\tilde w_u$ slightly decreases whenever $h_u$ decreases, and overall this ensures that the potential $P$ does not increase in a merge step. Since $\frac{2k-1}{2k-h_u}\in[1,2]$, such scaling loses only a constant factor in the competitive ratio. The additional term $\epsilon 2^{-j_u}$ in the definition of the revised weights only serves to ensure that $\tilde w_u>0$, so that $\Phi$ is strongly convex as required by the mirror descent framework.
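The depth-independence that makes this work can be seen directly: in the potential, the coefficient of $x_u$ is $2\tilde w_u(2k-h_u) = 2(2k-1)(w_u+\epsilon 2^{-j_u})$, which does not depend on $h_u$. A quick check with illustrative numbers:

```python
# The x-coefficient tilde_w_u * (2k - h_u) in the potential equals
# (2k - 1) * (w_u + eps * 2**(-j_u)), independent of the depth h_u,
# so merge steps cannot increase it. (Numbers are illustrative.)
k, w, eps_term = 5, 1.7, 0.01            # eps_term stands for eps * 2**(-j_u)
coeffs = set()
for h in range(1, k + 1):
    factor = (2*k - 1) / (2*k - h)
    assert 1.0 <= factor <= 2.0          # scaling loses only a constant factor
    tilde_w = factor * (w + eps_term)
    coeffs.add(round(tilde_w * (2*k - h), 12))
assert len(coeffs) == 1                  # same coefficient at every depth
```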
\subsection{Existence of the mirror descent path for time-varying $\Phi_t$}\label{sec:MDExistence}
To justify the existence of our algorithm, we need the following theorem, which generalizes~\cite[Theorem 2.2]{BCLLM18} to the setting where a fixed regularizer $\Phi$ is replaced by a time-varying regularizer $\Phi_t$.
\begin{theorem}\label{thm: existence of x}
Let $K\subset \mathbb{R}^n$ be compact and convex, $f\colon[0,\infty)\times K\to \mathbb{R}^n$ continuous, $\Phi_t\colon K\to\mathbb{R}$ strongly convex for $t\ge 0$ and such that $(x,t)\mapsto \nabla^2\Phi_t(x)^{-1}$ is continuous. Then for any $x_0\in K$ there is an absolutely continuous solution $x\colon[0,\infty)\to K$ satisfying
\begin{align*}
\nabla^2\Phi_t(x(t))\cdot x'(t)&\in f(t,x(t))-N_K(x(t))\\
x(0)&=x_0.
\end{align*}
If $(x,t)\mapsto\nabla^2\Phi_t(x)^{-1}$ is Lipschitz and $f$ locally Lipschitz, then the solution is unique.
\end{theorem}
\begin{proof}
It suffices to consider a finite time interval $[0,\tau]$. It was shown in \cite[Theorem~5.7]{BCLLM18} that for $C\subset\mathbb{R}^n$ compact and convex, $H\colon C\to\{A\in\mathbb{R}^{n\times n}\mid A\succ 0\}$ continuous, $g\colon[0,\tau]\times C\to\mathbb{R}^n$ continuous and $y_0\in C$, there is an absolutely continuous solution $y\colon[0,\tau]\to C$ satisfying
\begin{align*}
y'(t)&\in H(y(t))\cdot(g(t,y(t))-N_C(y(t)))\\
y(0)&=y_0.
\end{align*}
We choose $C=[-1,\tau+1]\times K$,
\begin{align*}
H(t,x)= \begin{bmatrix}
1& 0 \\
0 & \nabla^2\Phi_t(x)^{-1}
\end{bmatrix},
\end{align*}
$g(t,(s,x))=(1,f(t,x))$ and $y_0=(0,x_0)$. Decomposing the solution as $y(t)=(s(t),x(t))$ for $s(t)\in [-1,\tau+1]$ and $x(t)\in K$, and noting that for $s(t)\in[0,\tau]$ we have $N_C(y(t))= \{0\}\times N_K(x(t))$, we get
\begin{align*}
s(t)&=t\\
x'(t)&\in \nabla^2\Phi_t(x(t))^{-1}\cdot(f(t,x(t))-N_K(x(t)))\\
x(0)&=x_0.
\end{align*}
By \cite[Theorem 5.9]{BCLLM18} the solution is unique provided $H$ is Lipschitz and $g$ locally Lipschitz, which is satisfied under the additional assumptions of the theorem.
\end{proof}
For every continuous step, $\Phi_t=\sum_{u\in V^0}\tilde w_u(t)(x_u+\delta_u)\log (x_u+\delta_u)$ and $f(t,x)=-\frac{2x_\ell(t)}{x_\ell(t)+\delta_\ell} w'(t)$ satisfy the assumptions of the theorem. By the calculation in~\eqref{eq:boundMovementMotivation} (with $w$ replaced by $\tilde w$), the corresponding well-defined algorithm is the one from equation \eqref{eq: dynamic}. Note that Lagrange multipliers for the constraints $x_u\ge 0$ are indeed not needed (see below).
\paragraph{Sign of Lagrange multipliers.} We stipulated in Section~\ref{sec:algo} that $\lambda_u\ge 0$ for $u\in V\setminus\mathcal{L}$, and we do not have any Lagrange multipliers for the constraints $x_u\ge 0$. To see this, it suffices to show that $\lambda_u\ge 0$ for $u\in V\setminus\mathcal{L}$ in the case that the constraints $x_u\ge 0$ are removed from $K$: If this is true, then \eqref{eq: dynamic} shows for any leaf $u\in\mathcal{L}$ that $x_u'<0$ is possible only if $x_u>0$ (since $\lambda_u=0$ when $u$ is a leaf, recalling~\eqref{eq:normalCone}). Hence, $x_u\ge 0$ holds automatically for any leaf $u$, and thus also for internal vertices $u$ due to the constraints of the polytope. Consequently, we do not need Lagrange multipliers for constraints $x_u\ge 0$.
The proof that $\lambda_u\ge 0$ for $u\in V\setminus\mathcal{L}$ is completely analogous to~\cite[Lemma~12]{BC21}. As an alternative proof, one can also see this by replacing in $K$ the constraints $\sum_{v \colon p(v) = u} x_v = x_u$ by $\sum_{v \colon p(v) = u} x_v \ge x_u$, which directly gives $\lambda_u\ge 0$ in~\eqref{eq:normalCone}; this still yields a feasible solution (with the constraints satisfied with equality) by arguments completely analogous to~\cite[Lemma~3.2]{BCLL19}.
\section{Reductions and Applications}\label{sec: applications}
In this section we show that layered graph traversal and small set chasing (a.k.a.
metrical service systems) reduce to the evolving tree game. The reductions imply
the following new bounds on the competitive ratio for these problems, the main
result of this section.
\begin{theorem}\label{thm: main layered}
There are randomized $O(k^2)$-competitive online algorithms for traversing
width $k$ layered graphs, as well as for chasing sets of cardinality $k$
in any metric space.
\end{theorem}
\subsection{Layered graph traversal}
Recall the definition of the problem from the introduction. We now introduce some useful
notation. Let $V_0 = \{a\}, V_1, V_2, \dots, V_n = \{b\}$ denote the layers of the
input graph $G$, in consecutive order. Let $E_1,E_2,\dots,E_n$ be the partition
of the edge set of $G$, where for every $i=1,2,\dots,n$, every edge $e\in E_i$
has one endpoint in $V_{i-1}$ and one endpoint in $V_i$. Also recall that
$w: E\rightarrow{\mathbb{N}}$ is the weight function on the edges, and
$k = \max\{|V_i|\colon i=0,1,2,\dots,n\}$ is the {\em width} of $G$. The
input $G$ is revealed gradually to the searcher. Let
$G_i = (V_0\cup V_1\cup\cdots\cup V_i, E_1\cup E_2\cup\cdots\cup E_i)$ denote the
subgraph that is revealed up to and including step $i$. The searcher, currently at a vertex
$v_{i-1}\in V_{i-1}$, chooses a path in $G_i$ from $v_{i-1}$ to a vertex $v_i\in V_i$. Let
$w_{G_i}(v_{i-1},v_i)$ denote the total weight of a shortest path from $v_{i-1}$ to $v_i$
in $G_i$. (Clearly, the searcher has no good reason to choose a longer path.) Formally,
a pure strategy (a.k.a. deterministic algorithm) of the searcher is a function that maps,
for all $i=1,2,\dots$, a layered graph $G_i$ (given including its partition into a sequence
of layers) to a vertex in $V_i$ (i.e., the searcher's next move). A mixed strategy (a.k.a.
randomized algorithm) of the searcher is a probability distribution over such functions.
\subsubsection{Fractional strategies}
Given a mixed strategy $S$ of the searcher, we can define a sequence $P_0,P_1,P_2,\dots$,
where $P_i$ is a probability distribution over $V_i$. For every $v\in V_i$, $P_i(v)$ indicates the
probability that the searcher's mixed strategy $S$ chooses to move to $v$ (i.e., $v_i = v$). A
fractional strategy of the searcher is a function that maps, for all $i=1,2,\dots$, a layered graph
$G_i$ to a probability distribution $P_i$ over $V_i$. For a fractional strategy choosing probability
distributions $P_0,P_1,P_2,\dots,P_n$, we define its cost as follows. For $i=1,2,\dots,n$,
let $\tau_i$ be a probability distribution over $V_{i-1}\times V_i$, with marginals $P_{i-1}$ on
$V_{i-1}$ and $P_i$ on $V_i$, that minimizes
$$
w_{G_i,\tau_i}(P_{i-1},P_i) = \sum_{u\in V_{i-1}}\sum_{v\in V_i} w_{G_i}(u,v) \tau_i(u,v).
$$
The cost of the strategy is then defined as $\sum_{i=1}^n w_{G_i,\tau_i}(P_{i-1},P_i)$.
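The minimizing coupling $\tau_i$ is the optimal solution of a small transportation problem between $P_{i-1}$ and $P_i$ with costs $w_{G_i}(u,v)$. As an illustration (not part of the paper's construction), the following Python sketch solves the $2\times 2$ case exactly: the transportation polytope is then a segment parametrized by $\tau_{11}$, and the linear cost attains its minimum at an endpoint.

```python
def min_coupling_cost_2x2(p, q, c):
    """Minimum of sum_{u,v} c[u][v] * tau[u][v] over couplings tau with row
    marginals p = (p1, p2) and column marginals q = (q1, q2).
    Parametrize tau[0][0] = t; the remaining entries are then forced:
    tau[0][1] = p[0]-t, tau[1][0] = q[0]-t, tau[1][1] = 1-p[0]-q[0]+t.
    Feasibility confines t to an interval, and the cost is linear in t,
    so the minimum is attained at one of the two endpoints."""
    lo = max(0.0, p[0] + q[0] - 1.0)
    hi = min(p[0], q[0])
    def cost(t):
        tau = [[t, p[0] - t], [q[0] - t, 1.0 - p[0] - q[0] + t]]
        return sum(c[u][v] * tau[u][v] for u in range(2) for v in range(2))
    return min(cost(lo), cost(hi))

# Example: move mass from (0.7, 0.3) to (0.4, 0.6) with unit cross-distances;
# the optimal coupling keeps as much mass in place as possible.
opt = min_coupling_cost_2x2((0.7, 0.3), (0.4, 0.6), [[0, 1], [1, 0]])
```

For larger supports, the same linear program would be handed to a generic LP or min-cost-flow solver.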
The following lemma can be deduced through the reduction to small set chasing discussed
later, the fact that small set chasing is a special case of metrical task systems, and a similar
known result for metrical task systems. Here we give a straightforward direct proof.
\begin{lemma}\label{lm: fractional to mixed}
For every fractional strategy of the searcher there is a mixed strategy incurring the same cost.
\end{lemma}
\begin{proof}
Fix any fractional strategy of the searcher, and suppose that the searcher plays $P_0,P_1,P_2,\dots,P_n$
against a strategy $G_n$ of the designer. That is, the designer chooses the number of rounds $n$, and plays
in round $i$ the last layer of $G_i = (\{a\}\cup V_1\cup V_2\cup\cdots\cup V_i,E_1\cup E_2\cup\cdots\cup E_i)$.
The searcher responds with $P_i$, which is a function of $G_i$. Notice that when the designer reveals
$G_i$, the searcher can compute $\tau_i$, because that requires only the distance functions $w_{G_i}$
and the marginal probability distributions $P_{i-1}$ and $P_i$. We construct a mixed strategy of the searcher
inductively as follows. It is sufficient to define, for every round $i$, the conditional probability distribution on
the searcher's next move $v_i\in V_i$, given any possible play so far. Initially, at the start of round $1$, the
searcher is deterministically at $a$. Suppose that the searcher reached a vertex $v_{i-1}\in V_{i-1}$. Then,
we set $\Pr[v_i = v\in V_i\mid v_{i-1}] = \frac{\tau_i(v_{i-1},v)}{P_{i-1}(v_{i-1})}$. Notice that the searcher can
move from $v_{i-1}$ to $v_i$ along a path in $G_i$ of length $w_{G_i}(v_{i-1},v_i)$.
We now analyze the cost of the mixed strategy thus defined. We prove by induction over the
number of rounds that in round $i$, for every pair of vertices $u\in V_{i-1}$ and $v\in V_i$,
the probability that the searcher's chosen pure strategy (which is a random variable) reaches $v$
is $P_i(v)$ and the probability that this strategy moves from $u$ to $v$ is $\tau_i(u,v)$
(the latter assertion is required to hold for $i > 0$). The base case is $i=0$, which is trivial,
as the searcher's initial position is $a$, $P_0(a) = 1$, and the statement about $\tau$ is vacuous.
So, assume that the statement is true for $i-1$. By the definition of the strategy, in round $i$,
for every $v\in V_i$,
$$
\Pr[v_i = v] = \sum_{u\in V_{i-1}} \Pr[v_{i-1} = u] \cdot \Pr[v_i = v\mid v_{i-1} = u] =
\sum_{u\in V_{i-1}} P_{i-1}(u)\cdot \frac{\tau_i(u,v)}{P_{i-1}(u)} = P_i(v),
$$
where the penultimate equality uses the induction hypothesis, and the final equality uses
the condition on the marginals of $\tau_i$ at $V_i$.
Similarly,
\begin{eqnarray*}
& & \Pr[\hbox{the searcher moves from } u \hbox{ to } v] \\
& = & \Pr[v_{i-1} = u]\cdot \Pr[\hbox{the searcher moves from } u \hbox{ to } v\mid v_{i-1} = u] \\
& = & P_{i-1}(u)\cdot \frac{\tau_i(u,v)}{P_{i-1}(u)} \\
& = & \tau_i(u,v).
\end{eqnarray*}
Thus, by linearity of expectation, the searcher's expected total cost is
$$
\sum_{i=1}^n \sum_{u\in V_{i-1}} \sum_{v\in V_i} \tau_i(u,v)\cdot w_{G_i}(u,v),
$$
and this is by definition equal to the cost of the searcher's fractional strategy.
\end{proof}
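The conditional rule used in the proof can be checked mechanically: sampling $v_i$ with probability $\tau_i(v_{i-1},v)/P_{i-1}(v_{i-1})$ reproduces both the marginal $P_i$ and the edge probabilities $\tau_i$. A hedged Python sketch (our own illustration; distributions and couplings are dictionaries over hypothetical vertex labels):

```python
def induced_marginal_and_edges(P_prev, tau):
    """Given the previous marginal P_prev[u] and a coupling tau[(u, v)], apply
    the rule Pr[v_i = v | v_{i-1} = u] = tau[(u, v)] / P_prev[u] and return the
    induced marginal over v together with the induced edge probabilities."""
    P_next, edges = {}, {}
    for (u, v), t in tau.items():
        if P_prev[u] == 0:
            continue  # u is never reached, so the conditional rule is irrelevant there
        pr_edge = P_prev[u] * (t / P_prev[u])  # = tau[(u, v)], as in the proof
        edges[(u, v)] = edges.get((u, v), 0.0) + pr_edge
        P_next[v] = P_next.get(v, 0.0) + pr_edge
    return P_next, edges

P_prev = {"u1": 0.25, "u2": 0.75}
tau = {("u1", "v1"): 0.25, ("u2", "v1"): 0.35, ("u2", "v2"): 0.40}
P_next, edges = induced_marginal_and_edges(P_prev, tau)
```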
\subsubsection{Layered trees}
We now discuss special cases of layered graph traversal whose solution
implies a solution to the general case. We begin with a definition.
\begin{definition}
A rooted layered tree is an acyclic layered graph, where every vertex $v\ne a$ has exactly one
neighbor in the preceding layer. We say that $a$ is the root of such a tree.
\end{definition}
\begin{theorem}[{Fiat et al.~\cite[Section 2]{FFKRRV91}}]\label{thm: layered trees suffice}
Suppose that the designer is restricted to play a width $k$ rooted layered tree with edge weights
in $\{0,1\}$, and suppose that there is a $C$-competitive (pure or mixed) strategy of the searcher
for this restricted game. Then, there is a $C$-competitive (pure or mixed, respectively) strategy
of the searcher for the general case, where the designer can play any width $k$ layered graph
with non-negative integer edge weights.
\end{theorem}
A width $k$ rooted layered tree is binary iff every vertex has at most two neighbors in the following
layer. (Thus, the degree of each node is at most $3$.)
\begin{corollary}\label{cor: layered binary trees suffice}
The conclusion of Theorem~\ref{thm: layered trees suffice} holds if there is a $C$-competitive strategy
of the searcher for the game restricted to the designer using width $k$ rooted layered binary trees with
edge weights in $\{0,1\}$. Moreover, the conclusion holds if in addition we require that between two
adjacent layers there is at most one edge of weight $1$.
\end{corollary}
\begin{proof}
Suppose that the designer plays an arbitrary width $k$ layered tree. The searcher converts the tree
on-the-fly into a width $k$ layered binary tree, uses the strategy for binary trees, and maps the moves
back to the input tree. The conversion is done as follows. Between every two layers that the designer
generates, the searcher simulates $\lceil \log_2 k \rceil-1$ additional layers. If a vertex $u\in V_{i-1}$
has $m\le k$ neighbors $v_1,v_2,\dots,v_m\in V_i$, the searcher places in the simulated layers between
$V_{i-1}$ and $V_i$ a layered binary tree rooted at $u$ with $v_1,v_2,\dots,v_m$ as its leaves. Notice
that this can be done simultaneously for all such vertices in $V_{i-1}$ without violating the width constraint
in the simulated layers. The lengths of the new edges are all $0$, except for the edges touching the leaves.
For $j=1,2,\dots,m$, the edge touching $v_j$ inherits the length $w(\{u,v_j\})$ of the original edge $\{u,v_j\}$.
Clearly, any path traversed in the simulated tree corresponds to a path traversed in the input tree that
has the same cost---simply delete from the path in the simulated tree the vertices in the simulated layers;
the edges leading to them all have weight $0$. The additional requirement is easily satisfied by now making
the following change. Between every two layers $i-1$ and $i$ of the rooted layered binary tree insert $k-1$ simulated
layers. Replace the $j$-th edge between layers $i-1$ and $i$ (edges are indexed arbitrarily) by a length $k$ path. If
the original edge has weight $0$, all the edges in the path have weight $0$. If the original edge has weight
$1$, then all the edges in the path have weight $0$, except for the $j$-th edge that has weight $1$.
\end{proof}
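The binarization in the proof above can be sketched in code. The helper below (our illustration, not code from~\cite{FFKRRV91}) replaces a vertex $u$ with $m$ children by a binary tree whose internal edges have weight $0$ and whose leaf edges inherit the original weights, so every root-to-leaf path weight is preserved while the number of crossed layer gaps is at most $\lceil \log_2 m\rceil$.

```python
import math

def binarize(children):
    """children: the list of (label, weight) edges from a vertex u to its m
    children. Returns the binary subtree replacing them: ('leaf', label, w)
    stands for an original child whose incoming edge keeps weight w, and
    ('node', left, right) is a simulated vertex with a 0-weight incoming edge."""
    if len(children) == 1:
        label, w = children[0]
        return ('leaf', label, w)
    mid = len(children) // 2
    return ('node', binarize(children[:mid]), binarize(children[mid:]))

def leaf_path_weights(tree):
    """Total weight of the path from u to each leaf (internal edges weigh 0,
    so each path weight equals the leaf edge's inherited weight)."""
    if tree[0] == 'leaf':
        return {tree[1]: tree[2]}
    out = leaf_path_weights(tree[1])
    out.update(leaf_path_weights(tree[2]))
    return out

def edge_depth(tree):
    """Edges on the longest path from the subtree's incoming edge to a leaf."""
    if tree[0] == 'leaf':
        return 1
    return 1 + max(edge_depth(tree[1]), edge_depth(tree[2]))

children = [('v1', 0), ('v2', 1), ('v3', 0), ('v4', 1), ('v5', 0)]
tree = binarize(children)
paths = leaf_path_weights(tree)
# Edges from u down to the deepest child (the top 'node' plays the role of u
# itself); the simulated layers in between number depth - 1.
depth = edge_depth(tree) - 1
```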
\subsection{Small set chasing}
This two-person game is defined with respect to an underlying metric space ${\cal M} = (X,\mathrm{dist})$.
The game alternates between the adversary and the algorithm. The latter starts at an arbitrary point
$x_0\in X$. The adversary decides on the number of rounds $n$ that the game will be played (this
choice is unknown to the algorithm until after round $n$). In round $i$ of the game, the adversary
chooses a finite set $X_i\subset X$. The algorithm must then move to a point $x_i\in X_i$. The game
is parametrized by an upper bound $k$ on $\max_{i=1}^n |X_i|$. The algorithm pays
$\sum_{i=1}^n \mathrm{dist}(x_{i-1},x_i)$ and the adversary pays
$$
\min\left\{\sum_{i=1}^n \mathrm{dist}(y_{i-1},y_i):\ y_0=x_0\wedge y_1\in X_1\wedge\cdots\wedge y_n\in X_n\right\}.
$$
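The adversary's cost above is an offline shortest path through the request sets, computable by a layer-by-layer dynamic program. A minimal Python sketch, assuming the metric is given as a function:

```python
def offline_opt(x0, requests, dist):
    """Minimum of sum_i dist(y_{i-1}, y_i) over y_0 = x0 and y_i in
    requests[i-1], computed by dynamic programming over the rounds:
    cost[x] is the cheapest way to end round i at the point x."""
    cost = {x0: 0.0}
    for X in requests:
        cost = {x: min(c + dist(y, x) for y, c in cost.items()) for x in X}
    return min(cost.values())

# Example on the real line with the usual metric.
line = lambda a, b: abs(a - b)
opt = offline_opt(0, [{-1, 3}, {5, -2}, {0}], line)
```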
\begin{theorem}[{Fiat et al.~\cite[Theorem 18]{FFKRRV91}}]
For every $k\in{\mathbb{N}}$ and for every $C = C(k)$, there exists a pure (mixed, respectively) $C$-competitive
online algorithm for cardinality $k$ set chasing in every metric space with integral distances iff there exists
a pure (mixed, respectively) $C$-competitive online algorithm for width $k$ layered graph traversal.
\end{theorem}
\subsection{Reduction to evolving trees}
The main result of this section, Theorem~\ref{thm: main layered}, is implied by the following reduction.
\begin{lemma}\label{lm: LGT to DTG reduction}
Let $k\in{\mathbb{N}}$, let $C = C(k)$, and let $\epsilon > 0$. Suppose that there exists a (pure, mixed, fractional,
respectively) $C$-competitive strategy for the evolving tree game on binary trees of maximum depth
$k$ that always pays a cost of at most $C\cdot (\opt + \epsilon)$. Then, there exists a (pure, mixed, fractional,
respectively) $C$-competitive strategy for traversing width $k$ layered graphs with minimum non-zero
edge weight at least $\epsilon$.\footnote{In particular, if the edge weights are integers, one can take $\epsilon = 1$.}
\end{lemma}
\begin{proof}
Consider at first fractional strategies.
By Lemma~\ref{lm: fractional to mixed} and Corollary~\ref{cor: layered binary trees suffice}, we can
restrict our attention to designing fractional strategies on width $k$ rooted layered binary trees with
edge weights in $\{0,1\}$. Now, suppose that we have a fractional $C$-competitive strategy for the
depth $k$ evolving tree game. We use it to construct a fractional $C$-competitive strategy for the
traversal of width $k$ rooted layered binary trees as follows. To simplify the proof, add a virtual layer
$-1$ containing a single node $p(a)$ connected to the source $a$ with an edge of weight $0$.
We construct the layered graph strategy by induction over the current layer. Our induction hypothesis
is that in the current state:
\begin{enumerate}
\item The evolving tree is homeomorphic to the layered subtree spanned by the paths from $p(a)$
to the nodes in the current layer.
\item In this homeomorphism, $r$ is mapped to $p(a)$ and the leaves of the evolving tree are
mapped to the leaves of the layered subtree, which are the nodes in the current layer.
\item In this homeomorphism, each edge of the evolving tree is mapped to a path of the same
weight in the layered subtree.
\item The probability assigned to a leaf by the fractional strategy for the evolving tree is equal to the
probability assigned to its homeomorphic image (a node in the current layer) by the fractional
strategy for the layered tree.
\end{enumerate}
Initially, the traversal algorithm occupies the source node $a$ with probability $1$. The evolving
tree consists of the two initial nodes $r$ and $c_r$, with a $0$-weight edge connecting them. The
homeomorphism maps $r$ to $p(a)$ and $c_r$ to $a$. The evolving tree algorithm occupies $c_r$
with probability $1$. Hence, the induction hypothesis is satisfied at the base of the induction. For
the inductive step, consider the current layer $i-1$, the new layer $i$ and edges between them. If a node in layer $i-1$ has no child in layer $i$, we delete the homeomorphic
preimage (which must be a leaf and cannot be $c_r$) in the evolving tree. If a node $v$ in layer $i-1$ has two children in layer $i$, we execute a fork step where we generate two new leaves and connect them to the preimage
of $v$ (a leaf) in the evolving tree, and extend the homeomorphism to the new leaves in the obvious
way. Otherwise, if a node $v$ in layer $i-1$ has a single child in layer $i$, we modify the
homeomorphism to map the preimage of $v$ to its child in layer $i$. After executing as many such
discrete steps as needed, if there is a weight $1$ edge connecting a node $u$ in layer $i-1$ to a
node $v$ in layer $i$, we execute a continuous step, increasing the weight of the edge incident on
the homeomorphic preimage of $v$ in the evolving tree (which must be a leaf) for a time interval of length
$1$. After executing all these steps, we simply copy the probability distribution on the leaves of the evolving
tree to the homeomorphic images in layer $i$ of the layered tree. This clearly satisfies all the
induction hypotheses at layer $i$.
Notice that since the target $b$ is assumed to be the only node in the last layer, when we reach it, the
evolving tree is reduced to a single edge connecting $r$ to the homeomorphic preimage of $b$. The weight
of this edge equals the weight of the path in the layered tree from $p(a)$ to $b$, which is the same as the
weight of the path from $a$ to $b$ (because the edge $\{p(a),a\}$ has weight $0$). Moreover, the fractional
strategy that is induced in the layered tree does not pay more than the fractional strategy in the evolving
tree. Hence, it is $C$-competitive.
Finally, deterministic strategies are fractional strategies restricted to probabilities in $\{0,1\}$, hence the claim
for deterministic strategies is a corollary. This also implies the claim for mixed strategies, as they are probability
distributions over pure strategies.
\end{proof}
We note that the depth $k$ evolving tree game is strictly more general than width $k$ layered graph
traversal. In particular, the evolving binary tree corresponding to the width $k$ layered graph game
has depth at most $k$ and also at most $k$ leaves. However, in general a depth $k$ binary tree may
have $2^{k-1}$ leaves. Our evolving tree algorithm and analysis applies without further restriction on
the number of leaves.
\newpage
\bibliographystyle{plainurl}
\section{Introduction}\label{sec:intro}
Vortices \cite{simula_2019_quantised} render Bose-Einstein condensates (BECs) an excellent platform for examining various scaling aspects of quantum turbulence \cite{allen_2014_quantum,madeira_2019_quantum,madeira_2020_quantum}, the quantum counterpart of classical turbulence \cite{holmes_2012_turbulence,onsager_1949_statistical,eyink_2006_onsager}. Kolmogorov's `$5/3$' law is the most renowned of these scaling laws~\cite{nore_1997_kolmogorov,kobayashi_2005_kolmogorov}. Several strategies are available to state-of-the-art BEC experiments \cite{anderson_2010_resource} for generating non-linear defects such as vortices and solitons \cite{matthews_1999_vortices,anderson_2000_vortex,chai_2020_magnetic,lannig_2020_collisions,katsimiga_2020_observation,navarro_2013_dynamics}.
These include laser stirring \cite{inouye_2001_observation,raman_2001_vortex,neely_2010_observation}, rotating the confining potential \cite{hodby_2001_vortex,williams_2010_observation,hodby_2003_experimental}, interaction with the optical vortex \cite{mondal_2014_angular,bhowmik_2016_interaction,das_2020_transfer,mukherjee_2021_dynamics}, quenching through the phase transition \cite{zurek_1985_cosmological}, and counter-flow dynamics \cite{carretero-gonzalez_2008_dynamics,xiong_2013_distortion,yang_2013_dynamical,mukherjee_2020_quench}, just to name a few \cite{tsubota_2002_vortex,leanhardt_2002_imprinting,kumakura_2006_topological}.
Theoretically, numerous intriguing aspects of three-dimensional \cite{nore_1997_kolmogorov,wells_2015_vortex,navon_2019_synthetic,garcia-orozco_2021_universal,serafini_2017_vortex} and two-dimensional (2D) quantum turbulence (QT) \cite{henn_2009_emergence,horng_2009_twodimensional,white_2010_nonclassical,numasato_2010_direct,bradley_2012_energy,reeves_2013_inverse,villasenor_2014_quantum,billam_2014_onsagerkraichnan,stagg_2015_generation,mithun_2021_decay,estrada_2022_turbulence,dasilva_2022_vortex} have been examined. Moreover, very recently developed machine learning techniques can be utilized to detect and classify quantum vortices \cite{metz_2021_deeplearningbased,sharma_2022_machinelearning}.
The incredible tunability of the atom-atom interaction via Feshbach resonances \cite{chin_2010_feshbach,kohler_2006_production}, as well as the outstanding control over dimensionality \cite{gorlitz_2001_realization}, has also driven significant development of QT experiments in BECs. In that regard, Ref. \cite{henn_2009_emergence} shows a turbulent tangle of vortices formed by an oscillating perturbation. Spontaneous clustering of vortices of the same circulation has also been demonstrated experimentally \cite{gauthier_2019_giant,johnstone_2019_evolution}. It is worth noting that clustering of vortices \cite{yu_2016_theory,reeves_2014_signatures} implies a transfer of energy from small to large length scales, illustrating the so-called inverse energy cascade \cite{navon_2019_synthetic,navon_2016_emergence}, a well-known phenomenon in classical 2D turbulence \cite{kraichnan_1967_inertial,kraichnan_1975_statistical}.
The experiment in \cite{johnstone_2019_evolution}, for instance, employs a paddle that swifts through the bulk of the BEC, causing randomly distributed vortices that fast assemble into Onsager point vortex clusters, a notion that has also been theoretically studied by White \emph{et al.} \cite{white_2012_creation}.\par
Given that an optical paddle potential is a dependable way to create 2D QT, we conduct a detailed theoretical examination of the production of vortex complexes, the behavior of the angular momentum, and the onset of quantum turbulence in a two-component system stirred by rotating paddle potentials.
Furthermore, we use a more complicated system with 2D binary BECs \cite{papp_2008_tunable,wang_2015_double,mccarron_2011_dualspecies}, where only one species is exposed to the rotating paddle. We specifically identify the frequency regimes of the rotating paddle where the maximum angular momentum can be imparted to the condensates, as well as systemically investigate the distinct behavior emerging from single and double paddle potential.
The system of 2D binary BECs, which exhibits a variety of instability phenomena \cite{maity_2020_parametrically,sasaki_2009_rayleightaylor,gautam_2010_rayleightaylor,suzuki_2010_crossover,ruban_2022_instabilities} and non-linear structures \cite{mueller_2002_twocomponent,schweikhard_2004_vortexlattice,kuopanportti_2012_exotic,kasamatsu_2009_vortex} is intriguing on its own right.
Using the so-called tune-out technique \cite{leblanc_2007_speciesspecific}, the previously mentioned species selective interaction resulting in the formation of optical paddle potential can be experimentally performed.
In this tune-out method, when one species interacts with the laser light, the other remains unaffected. Furthermore, we investigate a wide range of rotating frequencies of the paddle potential, allowing us to pinpoint the domain in which clustering of the same circulation vortices arises, exhibiting the well-known scaling rule of 2D QT \emph{i.e.} Kolmogorov's $-5/3$ scaling law \cite{reeves_2012_classical, bradley_2012_energy, mithun_2021_decay}. Although different stirring configurations are available in the literature \cite{sasaki_2010_benard,parker_2005_emergence,white_2012_creation,gauthier_2019_giant,muller_2022_critical}, the main objective of employing a rotating
paddle potential in this manuscript is to impart a finite net angular momentum to one of the
binary species within a specific frequency regime and transfer this momentum to the other
species.
We also identify a regime dominated solely by multiple vortices of identical sign. Furthermore, when the paddle rotates more vigorously, the vortical content of the system drops due to the copious generation of sound waves \cite{leadbeater_2001_sound,parker_2005_emergence,horng_2009_twodimensional,simula_2014_emergence}.
When there is finite interspecies contact interaction, vortex formation can occur even in the second component of the condensate. Most importantly, the vortex in one component is connected by a complementary structure, referred to as a vortex-bright soliton \cite{law_2010_stable,mithun_2021_decay}, in the other.
Besides, we demonstrate the effect of double paddle potentials, in which paddles can rotate either in the same or opposite directions.\par
This article is arranged as follows.
Sec. \ref{sec:gp} describes our setup and delves over the Gross-Pitaevskii (GP) equations.
In Sec. \ref{sec:vor_dyna}, we investigate the non-equilibrium dynamics of a binary system consisting of a mass-imbalanced system using both single (Sec. \ref{sec:single_p}) and double paddle potential (Sec. \ref{sec:double_p}).
Section \ref{sec:energy_spec} examines the incompressible and compressible kinetic energy spectra.
Finally, we summarise our findings and discuss potential future extensions in Sec. \ref{conclusion}.
Appendix \ref{sec:equal_mass} briefly describes the creation of vortices and their dynamics in a binary BEC with equal mass. In Appendix \ref{sec:negative_paddle}, we demonstrate vortex creation using the negative paddle potential.
\section{Gross-Pitaevskii equation}\label{sec:gp}
We consider binary BECs, referred to as species A and B, that are confined in 2D harmonic trapping potentials \cite{kwon_2021_spontaneous}. The species consists of $N_i$ number of atoms of mass $m_i$ ($i=\rm A,B$). The form of the trapping potentials read $V_{\rm trap} = \frac{1}{2}m(\omega^2_x x^2 + \omega^2_y y^2 + \omega^2_z z^2)$, where $\omega_x, \omega_y$ and $\omega_z$ are trapping frequencies along $x,y$ and $z$ directions, respectively. To implement a quasi-2D BEC in the $x$-$y$ plane, we consider the following criterion for the trap frequencies, namely, $\omega_x= \omega_y=\omega \ll \omega_z$. We apply single or double stirring potential $V_P$ generated by a far-off-resonance blue-detuned laser beam shaped into an elliptic paddle in species A to induce vortices in the condensate~\cite{gauthier_2019_giant}. The potential $V_{P_{\alpha}}$, with $\alpha \in \{1,2\} $ can be expressed as \cite{white_2012_creation}
\begin{align}\label{eq:paddle_pot}
V_{P_{\alpha}}(x,y,t) = & V_0 \exp\Big[ -\frac{\eta^2 \qty(\tilde{x}_{\alpha}\cos(\omega_p t)-\tilde{y}_{\alpha}\sin(\omega_p t))^2}{d^2} \nonumber \\ &-\frac{(\tilde{y}_{\alpha}\cos(\omega_p t)+\tilde{x}_{\alpha}\sin(\omega_p t))^2}{d^2} \Big],
\end{align}
where $\tilde{x}_{\alpha}=x - x_{p, \alpha}$ and $\tilde{y}_{\alpha}=y-y_{p, \alpha}$, considering the center of the paddle potential at $(x_{p, \alpha},y_{p, \alpha})$ for the $\alpha$ paddle. Here $V_0$ is the peak strength of the potential, $\omega_p$ is the rotation frequency of the paddle, and $\eta$ and $d$ determine the paddle elongation and width, respectively.
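For concreteness, Eq.~\eqref{eq:paddle_pot} is straightforward to evaluate on a grid. The NumPy sketch below (illustrative grid and parameter values, not those of our simulations) also checks two properties that follow directly from the formula: the peak value $V_0$ at the paddle center, and invariance under a rotation by $\pi$, since the elliptic paddle is symmetric under a half turn.

```python
import numpy as np

def paddle(X, Y, t, V0=1.0, eta=0.05, d=0.1, omega_p=1.0, xp=0.0, yp=0.0):
    """Rotating elliptic paddle potential of Eq. (paddle_pot); eta < 1
    elongates the paddle along the (rotated) x direction."""
    xs, ys = X - xp, Y - yp
    xr = xs * np.cos(omega_p * t) - ys * np.sin(omega_p * t)
    yr = ys * np.cos(omega_p * t) + xs * np.sin(omega_p * t)
    return V0 * np.exp(-(eta**2 * xr**2 + yr**2) / d**2)

x = np.linspace(-2, 2, 201)          # x[100] = 0 is the grid center
X, Y = np.meshgrid(x, x, indexing='ij')
V_t0 = paddle(X, Y, t=0.0)
V_half = paddle(X, Y, t=np.pi)       # omega_p * t = pi: half-turn symmetry
```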
In the quasi-2D regime, the motions of atoms along $z$-direction become insensitive and the wavefunctions $\Psi_{\rm A(B)}$ can be expressed as $\psi_{\rm A(B)}(x,y)\zeta(z)$, where $\zeta(z)=(\lambda/\pi)^\frac{1}{4} \exp(-\lambda z^2/2)$ is the ground state along $z$ direction and $\lambda=\omega_z/\omega$ is the trap aspect ratio. After integrating out the $z$ variable, the 2D dimensionless time-dependent GP equation that governs the dynamics of a BEC is given by \cite{pethick_2008_bose,pitaevskii_2003_boseeinstein}
\begin{multline}\label{eq:gp}
\mathrm{i} \pdv{\psi_i}{t}=\Bigg[-\frac{1}{2}\frac{m_{\rm B}}{m_i}\qty(\pdv[2]{}{x}+\pdv[2]{}{y}) + \frac{1}{2}\frac{m_i}{m_{\rm B}}\qty(x^2 + y^2) \\ + \sum_{j= \rm{A,B}} g_{ij}N_j\abs{\psi_j}^2 + \delta_{{\rm A}i}(V_{P_{1}}+ V_{P_{2}}) \Bigg] \psi_i,
\end{multline}
where $i= {\rm A, B}$.
Here, the effective 2D non-linear interaction coefficient is determined by the term $g_{ij}=\sqrt{\lambda/(2\pi)} 2\pi a_{ij}m_{\rm B}/m_{ij}$ with $a_{ij}$ being the scattering length, $l=\sqrt{\hbar/(m_{\rm B} \omega)}$ is the oscillator length, $m_{ij}=m_im_j/(m_i+m_j)$ denotes the reduced mass.
The dimensionless Eq. \eqref{eq:gp} is written in terms of length scale $l$, time scale $1/\omega$ and energy scale $\hbar\omega$. The $i$-th species wavefunction is normalized to $\int\abs{\psi_i}^2\dd^2{r}=1$.
In this paper, we explore the turbulent phenomena that arise from the potentials formed by the rotating single paddle, $V_{P_{1}}$, and the double paddles, $ V_{P_1} + V_{P_2}$. The paddle potentials are maintained in the condensate for the time $0 \le t \le \tau$. Afterward, the paddle is linearly ramped off to zero over a time interval $\Delta\tau$, during which the relation,
\[V_{P_{1(2)}} \rightarrow V_{P_{1(2)}}\qty(1-\frac{t-\tau}{\Delta\tau}), \] holds in Eq.~\eqref{eq:paddle_pot}. Here we consider a binary BEC of $^{133}$Cs (species A) and $^{87}${Rb} (species B) elements having different masses \cite{mccarron_2011_dualspecies}. The numbers of atoms in species A and B are equal, and we take $N_{\rm A}=N_{\rm B}=60000$. The harmonic trap potential is designed to have a frequency of $\omega=2\pi\times30.832$ rad/s and the aspect ratio $\lambda=100$. The intra-species scattering lengths are $ a_{\rm AA}=280a_0 $ and $ a_{\rm BB}=100.4a_0 $, where $ a_0 $ is the Bohr radius \cite{mccarron_2011_dualspecies}. The interspecies scattering length $ a_{\rm AB} $ is chosen to reside in the miscible regime, \emph{i.e.}, the scattering lengths obey the miscibility condition $a^2_{\rm AB} \le a_{\rm AA} a_{\rm BB}$ \cite{ao_1998_binary}. We numerically solve the GP equation using the split-step Crank-Nicolson method \cite{muruganandam_2009_fortran}. The ground state of the system is generated by propagating the wavefunctions of the BEC in imaginary time. In order to inspect the dynamical evolution of the condensate, we utilize the ground state generated in imaginary time as the initial state and solve Eq.~\eqref{eq:gp} in real time. Moreover, the system's initial state is prepared by placing a paddle-shaped stationary obstacle, as expressed in Eq.~\eqref{eq:paddle_pot}, in component A. Our simulation runs on the spatial extent of $ -20.48l $ to $ 20.46l $ along both the $x$ and $y$ directions with $2048 \cross 2048$ grid points.
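We solve Eq.~\eqref{eq:gp} with the split-step Crank-Nicolson method; purely as an illustration of the splitting idea, the sketch below propagates a single dimensionless GP component with the closely related split-step Fourier scheme (all grid sizes and parameter values here are illustrative assumptions, not the ones used in this work). Each split factor is a pure phase, so the norm $\int\abs{\psi}^2\dd^2{r}$ is conserved to machine precision in real-time propagation.

```python
import numpy as np

# Illustrative 2D split-step Fourier propagation of
# i d(psi)/dt = [-(1/2)(dxx + dyy) + V + g|psi|^2] psi   (dimensionless units)
N, L = 64, 12.0                       # assumed grid size and box length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2
V = 0.5 * (X**2 + Y**2)               # harmonic trap
g, dt = 100.0, 1e-3                   # assumed interaction strength, time step

psi = np.exp(-(X**2 + Y**2) / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx * dx)   # normalize to 1

def step(psi):
    """One Strang-split step: half kinetic, full potential+nonlinear, half kinetic."""
    psi = np.fft.ifft2(np.exp(-0.25j * dt * K2) * np.fft.fft2(psi))
    psi = np.exp(-1j * dt * (V + g * np.abs(psi)**2)) * psi
    psi = np.fft.ifft2(np.exp(-0.25j * dt * K2) * np.fft.fft2(psi))
    return psi

for _ in range(100):
    psi = step(psi)
norm = np.sum(np.abs(psi)**2) * dx * dx   # stays 1 up to round-off
```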
\section{Creation of vortices using paddle potential}\label{sec:vor_dyna}
As discussed in Ref.~\cite{white_2012_creation}, vortices in a BEC can be generated using an optical paddle potential in a variety of ways, which include $(\rm i)$ rotating the paddle about a fixed center, $(\rm ii)$ moving the paddle about a fixed center, and $(\rm iii)$ both rotating and moving the paddle simultaneously in the BEC. Though we consider only the rotation of the paddle potential to generate vortices in this work, we employ both a single paddle and a double paddle potential. In particular, while the single paddle potential rotates in species A with the paddle center located at $(x_p,y_p)=(0,0)$, the double
paddle potentials can rotate either in the same or opposite directions about their centers at $(x_p,y_p)=(\pm r_{\rm A}/4,0)$, respectively, where $r_{\rm A}=6.1l$ is the root-mean-squared radius of species A $ (\text{for } a_{\rm AB}=0)$. The parameters for the single paddle are $\eta=0.05 $ and $d=0.1l$; for the double paddles, $\eta=0.1 $ and $d=0.1l $ are identical for both. These values determine the elliptical shape of the paddle according to Eq.~\eqref{eq:paddle_pot}. Moreover, we choose the peak strength of the paddles to be $ V_0=10\mu_{\rm A} $, where $ \mu_{\rm A} $ is the chemical potential of species A. As previously stated, after establishing the initial state with a stationary paddle, at $ t=0 $ the paddle is rotated at a frequency $\omega_p$ with full amplitude until the time $ \tau=40\omega^{-1}=206{\rm ms}$ and then ramped off to zero within $\Delta\tau=10\omega^{-1}$. With the parameters $ \omega_p, \eta, d$, and $V_0$, the paddle potentials may be externally regulated, allowing control over the creation of vortices or antivortices in the BEC. The presence of a vortex or an antivortex yields a finite amount of angular momentum, which can be expressed, for the $i$-th species, as
\begin{align}\label{eq:ang_mom}
L_z^i = -\mathrm{i} \int \psi_i^*\qty(x\pdv{y}-y\pdv{x})\psi_i\dd x \dd y.
\end{align}
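On a numerical grid, the angular-momentum integral above can be evaluated with spectral derivatives. The sketch below (the grid size and the test state, a singly charged vortex in a Gaussian envelope, are illustrative choices rather than the simulation parameters) checks the expectation value against the known result of one $\hbar$ per particle:

```python
import numpy as np

# Sketch: evaluating the angular-momentum integral above on a uniform grid with
# spectral (FFT) derivatives, in units of hbar. The grid size and the test
# state -- a singly charged vortex -- are illustrative choices.
N, L = 128, 20.0
dA = (L/N)**2
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
KX, KY = np.meshgrid(k, k, indexing='ij')

def Lz(psi):
    """L_z = -i * integral of psi^* (x d/dy - y d/dx) psi, in units of hbar."""
    dpsi_dx = np.fft.ifft2(1j*KX*np.fft.fft2(psi))
    dpsi_dy = np.fft.ifft2(1j*KY*np.fft.fft2(psi))
    return np.real(-1j*np.sum(np.conj(psi)*(X*dpsi_dy - Y*dpsi_dx))*dA)

# a unit-charge vortex carries exactly one hbar of angular momentum per particle
psi = (X + 1j*Y)*np.exp(-(X**2 + Y**2)/2)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dA)
print(round(Lz(psi), 6))   # ≈ 1.0
```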
To study the dynamical formation of the vortices, we measure
the density-weighted vorticity of condensates as~\cite{mukherjee_2020_quench, ghoshdastidar_2022_pattern}
\begin{align} \label{eq:vorticity}
\vb{\Omega}_i = \curl{ \vb{J}_i} ,
\end{align}
for a better spatially resolved measurement, with $\vb{J}_i = \frac{\mathrm{i}\hbar}{2m}(\psi_i\grad\psi_i^* - \psi_i^*\grad\psi_i)$ being the probability current density. We remark that by using the Madelung transformation \cite{madelung_1927_quantentheorie}, $\psi_i = \sqrt{n_i} e^{\mathrm{i} \phi_i}$, Eq.~\eqref{eq:vorticity} can be cast into the form $\vb{\Omega}_i = \nabla \times n_i\vb{u}_i$. Notably, the multiplication of the condensate velocity $\vb{u}_i$ by the density $n_i$ ensures that we compute the vorticity of the $i$-th species only in the region where the condensate is located.\newline
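A minimal sketch of this diagnostic on a grid (in units $\hbar=m=1$; the grid and the test state are illustrative, not the production setup): compute $\vb{J}=\mathrm{Im}(\psi^*\grad\psi)$ with finite differences, then take the $z$-component of its curl. A positively charged vortex should give a positive vorticity peak at its core, and the conjugate (antivortex) state a negative one.

```python
import numpy as np

# Sketch of the density-weighted vorticity: J = Im(psi^* grad psi), then the
# z-component of curl J (units hbar = m = 1). Grid and state are illustrative.
N, L = 128, 20.0
dx = L/N
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')

def vorticity(psi):
    gx, gy = np.gradient(psi, dx, dx)        # axis 0 -> x, axis 1 -> y
    Jx = np.imag(np.conj(psi)*gx)            # probability current density
    Jy = np.imag(np.conj(psi)*gy)
    dJy_dx = np.gradient(Jy, dx, axis=0)
    dJx_dy = np.gradient(Jx, dx, axis=1)
    return dJy_dx - dJx_dy                   # z-component of curl J

psi = (X + 1j*Y)*np.exp(-(X**2 + Y**2)/2)    # singly charged vortex at origin
Om = vorticity(psi)
```

The sign of $\Omega$ at the core distinguishes vortices from antivortices, which is how the red/blue structures in the vorticity panels below are told apart.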
\subsection{Single paddle}\label{sec:single_p}
This section examines the implications of a single paddle potential rotating with frequency $ \omega_p $ about the center of species A. Although the rotation can be clockwise (CW) or counter-clockwise (CCW), we focus on a paddle rotating in the CW direction. We note that the{\unskip\parfillskip 0pt \par}
\onecolumngrid
\vspace*{-2mm}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{CsRb_a0_f1_s_hd.eps}
\caption{Snapshots of ($a_1$)-($e_1$) the density $(n_{\rm A})$ and ($a_2$)-($e_2$) the vorticity $(\Omega_{\rm A})$ profiles of species A at different instants of time (see legends). The binary BECs are made of $^{133}$Cs-$^{87}$Rb atoms. An elliptical paddle potential characterized by the parameters $\eta = 0.05$ and $d=0.1l$ is rotated with angular frequency $\omega_{p} = \omega$ within species A ($^{133}$Cs) in order to trigger the dynamics. The colorbars of the \textit{top} and \textit{bottom} rows represent the number density ($n$) in $\mu\rm{m}^{-2}$ and the vorticity ($\Omega$), respectively. The binary BECs are initialized in a two-dimensional harmonic potential with frequency $\omega /(2 \pi) = 30.832$ Hz, aspect ratio $\lambda=100$, and the following intra- and interspecies scattering lengths: $a_{\rm AA}=280a_{0}$, $a_{\rm BB} = 100.4a_{0}$, and $a_{\rm AB} = 0$. The number of atoms in each species is $N_{\rm A}=N_{\rm B}=60000$. }
\label{fig:csrb_den_vor_a0}
\end{figure}
\twocolumngrid
\noindent results obtained for the CCW will be essentially identical to those obtained for the CW.
At first, we demonstrate the behavior of the BEC without interspecies interaction by setting $a_{\rm AB}=0$. Due to the absence of interspecies interactions, the paddle potential does not influence species B, which therefore remains unaltered during the dynamics. When the paddle rotates in species A, vortices and antivortices form around it. The number of vortices and antivortices, in particular, depends strongly on the rotation frequency. Figures \figref{fig:csrb_den_vor_a0}($a_1$)-($a_2$) and ($b_1$)-($b_2$) show the time evolution of the density and vorticity of species A at the paddle frequency $ \omega_p=\omega $, the trap frequency.
The initial state of species A, with the paddle potential elongated along the $x$-axis, is shown in Fig.~\figref{fig:csrb_den_vor_a0}($a_1$). At $t = 0.08\,{\rm s}$, after the rotation of the paddle has been established [Fig.~\figref{fig:csrb_den_vor_a0}$(b_2)$], both vortices (red) and antivortices (blue) are generated in species A. In fact, a close inspection of Fig.~\figref{fig:csrb_den_vor_a0}$(b_2)$ reveals that the vortex-antivortex structures are located symmetrically with respect to the paddle. Additionally, the number of vortices exceeds that of the antivortices [Fig.~\figref{fig:csrb_den_vor_a0}$(b_2)$].
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{number.eps}
\caption{Variation of the number of vortices $N^{\rm A}_+$ and antivortices $N^{\rm A}_-$ of species A at steady state as a function of the paddle frequency $\omega_p$. The other parameters are the same as the ones in Fig. \figref{fig:csrb_den_vor_a0}.}
\label{fig:vortex_number_a0}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{CsRb_vortex_angMom_a0_s_hd.pdf}
\caption{Snapshots of the vorticity profiles of species A $(\Omega_{\rm A})$ taken at $t=3.5\rm s$ for two different frequencies, ($a$) $\omega_p=3\omega$ and ($b$) $\omega_{p} = 10\omega$, of rotation of the paddle potential. Also shown is $(c)$ the time evolution ($\log$-scale) of the angular momentum $(L_z^{\rm A})$ for different values of $\omega_p$ (see the legends). The vertical lines in (c) mark the times when the amplitude of the paddle starts to ramp off and when it becomes zero, respectively. The colorbar of the \textit{top} row represents the vorticity $(\Omega)$. The other parameters are the same as the ones in Fig.~\figref{fig:csrb_den_vor_a0}.}
\label{fig:csrb_vor_am_a0}
\end{figure}
The generation of vortices and antivortices continues until $ 0.258 \rm s $, at which point the paddle potential vanishes. It is worth noting that the numbers of vortices and antivortices are nearly equal around this time. Following that, a considerable number of the vortex-antivortex pairs decay due to self-annihilation or drifting out of the condensate, see Fig.~\figref{fig:csrb_den_vor_a0}$(d_2)$. However, some of the vortices and antivortices form vortex dipoles (vortex-antivortex pairs), vortex pairs (pairs of identical charges), or vortex clusters, as depicted in Fig.~\figref{fig:csrb_den_vor_a0}$(c_2)-(e_2)$. These vortex dipoles, vortex pairs, and vortex clusters, alongside the lone vortices and antivortices, remain in the BEC for an extended period without being annihilated.
When the paddle frequency $\omega_p$ increases, the vortex complexes exhibit a distinct behavior. At steady state, the number of antivortices vastly exceeds that of vortices for a CW rotation of the paddle potential with frequencies $\omega < \omega_p < 7\omega$. After removing the paddle potential, vortex-antivortex annihilation begins, finally eliminating all vortices from the condensate. In Fig.~\figref{fig:vortex_number_a0}, we show the number of vortices $(N^{\rm A}_{+})$ and antivortices $(N^{\rm A}_{-})$ as a function of the rotation frequency $\omega_p$. The imbalance $\abs{N^{\rm A}_{+}-N^{\rm A}_{-}}$ is almost zero up to $\omega_p = \omega$, indicating that an equal number of vortex-antivortex pairs is generated. Afterward, the imbalance gradually increases and becomes maximal at $\omega_p \approx 3\omega$, when only antivortices (having negative circulation) exist in the system. For $\omega_p > 3\omega$, as is evident from Fig.~\figref{fig:vortex_number_a0}, both the total number of vortices, $N^{\rm A}_{+} + N^{\rm A}_{-}$, and the imbalance decrease with $\omega_p$.
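A common way to obtain counts such as $N^{\rm A}_{+}$ and $N^{\rm A}_{-}$ is to sum the phase winding around every elementary grid plaquette, discarding plaquettes in the low-density halo. The sketch below is illustrative (not necessarily the detection routine used for the figures), and the test state with one vortex and one antivortex is an assumed toy configuration:

```python
import numpy as np

# Illustrative vortex/antivortex counter: sum the phase winding around each
# elementary grid plaquette and keep only plaquettes where the condensate
# density is significant (not necessarily the authors' detection routine).
def count_vortices(psi, density_cut=1e-4):
    theta = np.angle(psi)

    def wrap(d):                                  # wrap phase jumps to (-pi, pi]
        return (d + np.pi) % (2*np.pi) - np.pi

    # counter-clockwise loop around each plaquette -> winding number w
    w = (wrap(theta[1:, :-1] - theta[:-1, :-1])
         + wrap(theta[1:, 1:] - theta[1:, :-1])
         + wrap(theta[:-1, 1:] - theta[1:, 1:])
         + wrap(theta[:-1, :-1] - theta[:-1, 1:]))/(2*np.pi)

    p2 = np.abs(psi)**2                           # mask out the low-density halo
    n_cell = 0.25*(p2[:-1, :-1] + p2[1:, :-1] + p2[1:, 1:] + p2[:-1, 1:])
    charge = np.rint(w)*(n_cell > density_cut*p2.max())
    return int((charge > 0).sum()), int((charge < 0).sum())

# toy state: a vortex near (-2, 0.3) and an antivortex near (+2, 0.3)
N, L = 128, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
psi = ((X + 2) + 1j*(Y - 0.3))*((X - 2) - 1j*(Y - 0.3))*np.exp(-(X**2 + Y**2)/4)
print(count_vortices(psi))   # (1, 1)
```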
Figures \figref{fig:csrb_vor_am_a0}$(a)$ and \figref{fig:csrb_vor_am_a0}$(b)$ show vorticity profiles, $\Omega_{\rm A}$, of species A for $\omega_p=3\omega$ and $\omega_p=10\omega$, respectively, at $t=3.5\rm s$.
Notably, the largest number of antivortices survives for $\omega_p = 3 \omega$ [Fig.\figref{fig:csrb_vor_am_a0}$(a)$] and this number falls as $\omega_p$ increases.
As $\omega_p$ increases beyond $7\omega$, only a few vortices and antivortices survive due to a higher annihilation rate (per unit number of vortices and antivortices) of vortex-antivortex pairs [Fig.~\figref{fig:csrb_vor_am_a0}(b)]. As a result, the system retains almost no vortex or antivortex structure in the long-time dynamics (density profiles not shown here for brevity), imparting very little angular momentum to the condensate, as shown in Figs.~\figref{fig:csrb_vor_am_a0}(c) and \figref{fig:csrb_ang_freq_a0_a80}.
The above scenario of non-linear structure formation in species A can be further elucidated by invoking the angular momentum of species A, $L^{\rm A}_z$. The time evolution of $L^{\rm A}_{z}(t)$ for various $\omega_p$ is displayed in Fig.~\figref{fig:csrb_vor_am_a0}($c$). $L^{\rm A}_z(t)$ remains negative throughout the time evolution, indicating a surplus of antivortices. For $\omega_p = \omega$, $L^{\rm A}_z(t)$ remains nearly constant. For larger frequencies, $\abs{L^{\rm A}_z(t)}$ increases monotonically at the early stage of the dynamics, reaches a maximum within the time interval $ \tau$, and then decreases to a stationary value in the long-time dynamics. The maximum value of $\abs{L^{\rm A}_z(t)}$ is largest for $\omega_p = 3\omega$, a result which emanates from the maximum number of antivortices displayed in Fig.~\figref{fig:csrb_vor_am_a0}($a$). For larger $\omega_p$, the net angular momentum imparted to the condensate by the generated vortices and antivortices diminishes drastically, indicating a smaller imbalance between vortex and antivortex numbers [\figref{fig:csrb_vor_am_a0}($b$)].
\begin{figure}[ht]
\subfloat{\includegraphics[width=0.5\textwidth]{CsRb_den_ang_freq_a80_s_hd.eps}}
\caption{Snapshots of the density profiles [$(a),(b)$] of species A $(n_{\rm A})$ and species B $(n_{\rm B})$, with interspecies scattering length $a_{\rm AB}=80a_0$ and $\omega_p=3\omega$ at $t=3.5\rm s$. Also shown is the variation of the angular momentum $(c)$ $L_z^{\rm A}$ and $(d)$ $L_z^{\rm B}$ with paddle frequency $\omega_p$ and time $t$. The colorbars of the \textit{top} and \textit{bottom} rows represent the number density $(n)$ in $\mu{\rm m}^{-2}$ and the angular momentum in units of $\hbar$, respectively.}
\label{fig:csrb_den_ang_a80}
\end{figure}
\begin{figure}[ht]
\subfloat{\includegraphics[width=0.5\textwidth]{angMom_Freq_fit.pdf}}
\caption{Variation of the absolute value of the time-averaged angular momentum $\abs{\bar{L}_z^{\rm A}}$ of species A at $a_{\rm AB}=0\text{ and }80a_0$, and $\abs{\bar{L}_z^{\rm B}}$ of species B at $a_{\rm AB}=80a_0$, as a function of the paddle frequency (see the legends). The markers and solid curves show the values of the angular momentum from the simulations and the fits with a skewed normal distribution, respectively. Here the paddle configuration is the same as described in Fig.~\figref{fig:csrb_den_vor_a0}.}
\label{fig:csrb_ang_freq_a0_a80}
\end{figure}
The presence of the paddle potential in species A affects species B for non-zero interspecies interactions $a_{\rm AB}$. For a strong enough interaction, the repulsive paddle potential acting on species A effectively acts as an attractive potential on species B. The rotation of this attractive potential generates vortices and antivortices in species B \cite{jackson_2000_dissipation} (see Appendix \ref{sec:negative_paddle}), and their number can be controlled by $a_{\rm AB}$.
Additionally, the null-density region at a vortex or antivortex site in one species is filled by a localized density hump of the other species. Figures \figref{fig:csrb_den_ang_a80}($a$)-($b$) show the density pattern at $t=3.5\rm s$ for the interspecies interaction $a_{\rm AB}=80a_0$ and $\omega_p = 3 \omega$. The other parameters, $\eta=0.05, d=0.1l \text{ and } V_0=10\mu_{\rm A}$, are the same as in the $a_{\rm AB} =0$ case. Notably, the scattering lengths explored here ensure that the condensates are miscible, allowing us to directly analyze the role of the mean-field coupling. Moreover, the paddle potential in species A performs a CW rotation. Both species accommodate only antivortices in the long-time dynamics, similar to the non-interacting scenario. This behavior implies that, within a particular frequency range, a cluster of identical vortices forms whose sign is entirely determined by the direction of paddle rotation, regardless of the interspecies interaction. Furthermore, it is worth mentioning that species A possesses a smaller healing length due to its larger mass and stronger intraspecies interaction. This makes the vortices of species A smaller in size compared to those in species B.\par
The creation and stability of vortex complexes in the presence of non-zero interspecies interaction can be further understood by evaluating the angular momentum $L^{i}_z$ of both species. The time evolution of $L^{\rm A}_{z}$ and $L^{\rm B}_{z}$ as a function of $\omega_p$ is shown in Figs.~\figref{fig:csrb_den_ang_a80}($c$) and \figref{fig:csrb_den_ang_a80}($d$), respectively.
A close inspection indicates that the angular momenta of both species are maximal at $\omega_p \approx 3.5 \omega$, similar to the $a_{\rm AB} = 0$ case [Figs.~\figref{fig:csrb_den_ang_a80}(c)-(d), \figref{fig:csrb_ang_freq_a0_a80}]. For $\omega_p > 7\omega$, $L^{i}_z$ becomes very small due to the higher annihilation rate of the vortex-antivortex pairs. $L^{\rm A}_z$ is more pronounced than $L^{\rm B}_z$, indicating that the antivortex number is always higher in species A. Most significantly, we find that the frequency response of the angular momentum follows a skewed normal distribution
\bibnote[skew]{The skewed normal probability density function is given by,
$f(x)=\frac{2}{\sigma}\phi\qty(\frac{x-\mu}{\sigma})\Phi\qty(\alpha\frac{x-\mu}{\sigma})$
where $\phi(x)=e^{-x^2/2}/\sqrt{2\pi}$ is the standard normal probability density function and $\Phi(x)=\int_{-\infty}^{x}\phi(t)\dd{t}$ is its cumulative distribution function. $\sigma$, $\mu$, and $\alpha$ are the scale, location, and skewness parameters, respectively. For $\alpha=0$, $f(x)$ reduces to the normal distribution.},
as depicted in Fig.~\ref{fig:csrb_ang_freq_a0_a80}, with the maximum of the distribution occurring at $\omega_p \approx 3.2 \omega$ for $a_{\rm AB} = 0$ and $\omega_p \approx 3.45 \omega$ for $a_{\rm AB} = 80a_{0}$.
This distribution is expected: at higher paddle frequencies ($\omega_p\gtrsim7\omega$), the annihilation process does not completely remove the vortices and antivortices from the condensates, leaving a small but finite angular momentum that produces the long tail on the $\omega_p > 3\omega$ side.
Finally, let us comment that both species can end up with near-equal angular momenta in the strong interaction limit $a_{\rm AB} \simeq 150a_{0}$. However, a detailed study of this regime is beyond the scope of the present manuscript.
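The fitting procedure behind the skewed-normal description can be sketched as follows. The data points here are synthetic stand-ins generated from assumed parameters, not the simulation results of Fig.~\ref{fig:csrb_ang_freq_a0_a80}:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

# Sketch of the frequency-response fit: |L_z| versus omega_p fitted with a
# scaled skew-normal profile. Data are synthetic stand-ins from assumed
# parameters, not the simulation output.
def model(w, amp, alpha, mu, sigma):
    # amp * skew-normal pdf with shape alpha, location mu, scale sigma
    return amp*skewnorm.pdf(w, alpha, loc=mu, scale=sigma)

wp = np.linspace(0.5, 10.0, 20)                 # paddle frequency, units of omega
true = (120.0, 4.0, 2.5, 2.0)                   # assumed amp, skew, loc, scale
rng = np.random.default_rng(1)
Lz_abs = model(wp, *true) + rng.normal(0.0, 0.3, wp.size)

popt, _ = curve_fit(model, wp, Lz_abs, p0=(100.0, 2.0, 3.0, 2.0))
w_fine = np.linspace(0.5, 10.0, 500)
w_peak = w_fine[np.argmax(model(w_fine, *popt))]
print(round(w_peak, 2))                         # peak of the fitted response
```

A positive skewness parameter automatically reproduces the long tail on the high-$\omega_p$ side discussed above.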
The angular momentum achieves its maximum value for $\omega_p\approx3\omega$--$4\omega$, which can be explained by examining the sound velocities of the condensates. A binary BEC supports two distinct sound velocities, $c_+$ and $c_-$, representing the density and spin sound velocities, respectively \cite{eckardt_2004_groundstate,kim_2020_observation}. These two sound velocities can be expressed as
\smallskip
\begin{align}
c_{\pm}^2=\frac{1}{2}\qty[c_{\rm A}^2 + c_{\rm B}^2 \pm \sqrt{\qty(c_{\rm A}^2-c_{\rm B}^2)^2+4c_{\rm AB}^4}],
\end{align}
where $c_i=\sqrt{g_{ii}n_i/m_i}$ and $c_{\rm AB}^4=g_{\rm AB}^2n_{\rm A} n_{\rm B}/(m_{\rm A} m_{\rm B})$, with $n_i$ the peak density of the $i$-th condensate. For the non-interacting case with a peak density of $2.88\times10^{14}\,{\rm cm}^{-3}$, we determined the sound velocity of species A (Cs) to be $c_{\rm A}=2.36~\rm{mm/s}$ based on the averaged peak density $n_{\rm A}/2$; see also Refs.~\cite{meppelink_2009_sound,kim_2020_observation}. The velocity of the rotating paddle is $v = a\omega_p$, where $a=d/\eta$ is the semi-major axis of the paddle. With the values of $d$, $\eta$, and $\omega_p=3\omega$ considered herein, the paddle velocity amounts to $2.26~\rm{mm/s}$, which is very close to $c_{\rm A}$. This close proximity of the paddle and sound velocities results in the maximum amount of angular momentum near the paddle frequency $\omega_p=3\omega$. For the interacting case with $a_{\rm AB}=80a_0$, we find the density sound velocity to be $c_+=2.56~\rm{mm/s}$ with the peak densities $n_{\rm A}=2.21\times10^{14}\,{\rm cm}^{-3}$ and $n_{\rm B}=2.27\times10^{14}\,{\rm cm}^{-3}$. This value of $c_+$ is very close to the paddle velocity of $2.63~\rm{mm/s}$ corresponding to $\omega_p=3.5\omega$, where the absolute angular momenta of species A and B take their maximum values [Fig.~\figref{fig:csrb_ang_freq_a0_a80}].
Note that vortex generation starts when the paddle rotation velocity $v$ surpasses a critical velocity which, in our case, is around $0.25c$, where $c$ is the sound velocity. As already discussed, with the increase of the paddle frequency, and hence $v$, a vortex-antivortex imbalance is created, increasing the angular momentum. When $v$ exceeds the sound velocity $c$, the drag force becomes very pronounced, resulting in stronger dissipation in the condensates \cite{frisch_1992_transition,carusotto_2006_bogoliubovcerenkov,jackson_2000_dissipation,ronning_2020_classical}. This dissipation causes the total vortex number and the vortex-antivortex imbalance to decrease, thus creating a peak of $\abs{L_z^{\rm A(\rm B)}}$ at $v\approx c$.
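The $c_\pm$ formula above can be cross-checked numerically; a minimal sketch in SI units, using the averaged peak densities $n_i/2$ as in the text (treating the quoted peak densities as uniform bulk values is, of course, an approximation):

```python
import numpy as np

# Numerical check of the c_pm formula above in SI units, using the averaged
# peak densities n_i/2 as in the text.
hbar = 1.054571817e-34                     # J s
a0 = 5.29177210903e-11                     # Bohr radius, m
u = 1.66053906660e-27                      # atomic mass unit, kg
mA, mB = 132.905*u, 86.909*u               # Cs and Rb masses
nA, nB = 2.21e20/2, 2.27e20/2              # averaged peak densities, m^-3
aAA, aBB, aAB = 280*a0, 100.4*a0, 80*a0

def g3d(a, m1, m2):
    mu_red = m1*m2/(m1 + m2)               # reduced mass
    return 2*np.pi*hbar**2*a/mu_red

cA2 = g3d(aAA, mA, mA)*nA/mA               # c_A^2 = g_AA n_A / m_A
cB2 = g3d(aBB, mB, mB)*nB/mB
cAB4 = g3d(aAB, mA, mB)**2*nA*nB/(mA*mB)
c_plus = np.sqrt(0.5*(cA2 + cB2 + np.sqrt((cA2 - cB2)**2 + 4*cAB4)))
print(round(c_plus*1e3, 2), 'mm/s')        # close to the 2.56 mm/s quoted above
```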
\subsection{Double Paddle}\label{sec:double_p}
After discussing the impact of a single paddle potential, we now look at a more complex scenario involving two paddle potentials. In this situation, two distinct scenarios arise depending on the relative orientation of the paddle potentials. Here, we attempt to answer the question of how the addition of a second paddle potential, and its rotational orientation relative to the first one, alters the vortex structures and angular momentum of the system when compared to the case of a single paddle.
\begin{figure}[htbp]
\includegraphics[width=0.48\textwidth]{double_same_cw_AM_vor_a80.eps}
\caption{$(a)$ Time evolution ($\log$-scale) of the angular momentum $L_z^{i}$ for the species $i=\rm A, B$ at interspecies interaction $a_{\rm AB}=80a_0$ for different paddle frequencies $\omega_p=\omega,2\omega,3\omega,4\omega$ and $10\omega$. Here two identical paddles rotate in species A in the CW direction. The insets $(b)$ and $(c)$ depict snapshots of the vorticity profiles of species A ($\Omega_{\rm A}$) and species B ($\Omega_{\rm B}$), respectively, at $t=3.5\rm s$ with $\omega_p=4\omega$.}
\label{fig:csrb_double_same_am_vor_a80}
\end{figure}
To that intent, we consider two paddles rotating in species A with centers at $(\pm r_{\rm A}/4,0)$. Moreover, we choose $\eta=0.1$ and keep $d$ the same as in the single paddle case. Depending on the relative rotational orientation of the two potentials, different dynamical behavior can emerge. When both paddles rotate in the same direction, the effects are similar to those mentioned previously for a single paddle. To substantiate this statement, we show the variation of the angular momentum with time ($\log$-scale) in Fig.~\figref{fig:csrb_double_same_am_vor_a80} for CW rotation of the paddle potentials with interspecies scattering length $a_{\rm AB} = 80a_0$. For paddle frequencies close to $\omega_{p} = 4 \omega$, $L^{\rm A}_z$ and $L^{\rm B}_z$ are most prominent, and the corresponding antivortex structures generated in species A and species B are shown in Figs.~\figref{fig:csrb_double_same_am_vor_a80}$(b)$-($c$), respectively. However, since we halved the paddle length with respect to the single paddle case while doubling the paddle number, the maximum angular momentum generated in the system is smaller for the double co-rotating paddles than for the single paddle. For example, $L^{\rm A}_z \approx -40 \hbar$ at $\omega_p = 3\omega$ for the single paddle [Fig.~\figref{fig:csrb_vor_am_a0}], whereas $L^{\rm A}_z \approx -9 \hbar$ at the same $\omega_p$ [Fig.~\figref{fig:csrb_double_same_am_vor_a80}] for the double paddle
{\unskip\parfillskip 0pt \par}
\onecolumngrid
\begin{figure}[H]
\includegraphics[width=\textwidth]{CsRb_a80_f5_double_opp.eps}
\caption{Snapshots of the density of species A, $n_{\rm A}$ $((a_1)-(a_5))$, and species B, $n_{\rm B}$ $((b_1)-(b_5))$, with interspecies interaction $a_{\rm AB}=80a_0$ at different instants of time. Two elliptic paddles, characterized by the parameters $\eta=0.1$ and $d=0.1l$ and rotating opposite to each other with frequency $\omega_p=5\omega$ within species A ($^{133}$Cs), are used to trigger the dynamics. The colorbars represent the number density $(n)$ in $\mu\rm{m}^{-2}$. The vortices (antivortices) are marked in red (blue) in $(a_3)-(a_5)$ and $(b_3)-(b_5)$ (vortex identification is not done in $(a_2)$ and $(b_2)$ due to their large number).}
\label{fig:CsRb_a80_f5_double_opp}
\end{figure}
\twocolumngrid
\begin{figure}[H]
\includegraphics[width=0.48\textwidth]{CsRb_a80_double_opp_am_std_dev.pdf}
\caption{Variation of $(a)$ the time-averaged angular momentum $\bar{L}_z^i$, $(b)$ its standard deviation $\sigma_i$, $(c)$ the vortex imbalance $\abs{N_+^i-N_-^i}$ at steady state, and $(d)$ $\sqrt{\abs{\Gamma_i}}$ ($\Gamma_i$ being the circulation at steady state), as a function of $\omega_p$ for the $i$-th species ($i={\rm A,B}$). The scattering lengths are $a_{\rm AA} = 280a_{0}$, $a_{\rm BB} =100.4a_{0}$ and $a_{\rm AB} = 80a_{0}$. The dynamics has been triggered by two paddle potentials rotating opposite to each other; the paddle configuration is described in Fig.~\figref{fig:CsRb_a80_f5_double_opp}.}
\label{fig:csrb_double_opp_am_std_dev}
\end{figure}
\noindent potentials. A more interesting case occurs when one paddle rotates in the CW and the other in the CCW direction [Fig.~\figref{fig:CsRb_a80_f5_double_opp}($a_1$)-\figref{fig:CsRb_a80_f5_double_opp}($b_1$)].
Because the rotational directions of the paddles are opposite, each paddle contributes an equal number of vortices of the opposite sign, see Figs.~\figref{fig:CsRb_a80_f5_double_opp}($a_3$) and \figref{fig:CsRb_a80_f5_double_opp}($b_3$). This balanced distribution of vortices and antivortices leads to a high annihilation rate, so that only a few vortices and antivortices survive in the long-time dynamics of both species [Figs.~\figref{fig:CsRb_a80_f5_double_opp}($a_3$)-($a_5$) and \figref{fig:CsRb_a80_f5_double_opp}($b_3$)-($b_5$)].
To further substantiate the previous argument, we calculate the time average of the angular momentum, defined as $\bar{L}^{i}_z = \int L^{i}_z \dd{t}/\int \dd{t}$, for different rotation frequencies $\omega_p$ of the double paddle potentials, see Fig.~\ref{fig:csrb_double_opp_am_std_dev}(a). For $\omega_p < \omega$, $\bar{L}^{i}_z$ remains zero. Within $2 \omega \lesssim \omega_p \lesssim 5 \omega$, both $\bar{L}^{\rm A}_z$ and $\bar{L}^{\rm B}_z$ show strongly fluctuating behaviour with respect to $\omega_p$. Recall that this is also the frequency region where the maximum number of vortex-antivortex creations occurs. The vortices and antivortices either annihilate each other or drift out of the condensate, leading to a finite imbalance of the vortex-antivortex number.
The finite imbalance between the vortex and antivortex numbers can result in a finite angular momentum of either sign, a behavior which fluctuates strongly with $\omega_p$. The fluctuation is somewhat reduced in the frequency range $\omega_p > 5 \omega$. Here, with increasing $\omega_p$, annihilation becomes the dominant mechanism reducing both vortices and antivortices, which then exist in nearly equal numbers. Consequently, the imbalance between the vortex and antivortex numbers decreases, leading to a relatively small fluctuation in $\bar{L}^{\rm A(B)}_{z}$.
Additionally, we have calculated the standard deviation of time-averaged angular momentum $\bar{L}^{\rm A(B)}_z$ using data from five different runs with added noise for each run, see Fig.~\ref{fig:csrb_double_opp_am_std_dev}(b). The corresponding standard deviation $\sigma_{\rm A(B)}$ for species A (species B) is defined as,
\begin{align}
\sigma^2_{\rm A(B)}=\frac{\sum_j{\qty(\bar{L}^{{\rm A(B)}, j}_{z} - \bar{L}^{\rm A(B)}_{z,\rm mean})^2}}{N_s}
\end{align}
where $\bar{L}^{\rm A(B)}_{z,\rm mean}=\sum_j{\bar{L}^{{\rm A(B)}, j}_{z}}/N_s$ and $N_s$ is the number of data sets, each with different initial random noise. Figure~\ref{fig:csrb_double_opp_am_std_dev}(b) shows that the fluctuations are high in the frequency range $2 \omega < \omega_p < 5 \omega$. Furthermore, that the fluctuations in $\bar{L}^{\rm A(B)}_z$ are indeed due to the fluctuations in the vortex-antivortex imbalance $\abs{N^{\rm A(B)}_{+} -N^{\rm A(B)}_{-}}$ can be evinced from Fig.~\ref{fig:csrb_double_opp_am_std_dev}(c). Finally, before closing this section, let us remark on another interesting observation from our study: $\abs{N^{\rm A(B)}_{+} -N^{\rm A(B)}_{-}}$ scales as $\sqrt{\abs{\Gamma_{\rm A(B)}}}$, where $\Gamma_{\rm A(B)} = \int {\Omega_{\rm A(B)}} \dd x \dd y$ represents the net circulation of the vortex clusters; see Fig.~\ref{fig:csrb_double_opp_am_std_dev}(d), where we plot $\sqrt{\abs{\Gamma_{\rm A(B)}}}$ as a function of $\omega_p$.
\section{Energy spectra}\label{sec:energy_spec}
To better understand the system when it is subjected to a paddle potential, we compute its kinetic energy spectrum, whose scaling laws provide insight into the development of quantum turbulence. These scaling laws are well documented in the literature \cite{kobayashi_2005_kolmogorov,reeves_2012_classical,madeira_2019_quantum}. Our primary objective here, however, is to determine how the onset of turbulence depends on the paddle frequency, i.e., in which parameter regime the binary condensate develops turbulent features.
In order to do so, we decompose the kinetic energy into compressible and incompressible parts, associated with sound waves and vortices, respectively \cite{nore_1997_kolmogorov,white_2014_vortices}. The decomposition is performed by defining a density-weighted velocity field $\sqrt{n_{i}}\vb{u}_{i}$ with $\vb{u}_{i} =\frac{\hbar}{m}\nabla \theta_i$, where $n_i$ and $\theta_{i}$ are the position-dependent condensate density and phase of the $i$-th species. The velocity field is separated into a solenoidal (incompressible) part $\vb{u}^{\rm ic}_i$ and an irrotational (compressible) part $\vb{u}^{\rm c}_i$
such that $\vb{u}_{i} = \vb{u}^{\rm ic }_{i} + \vb{u}^{\rm c}_{i}$, obeying $\div\vb{u}^{\rm ic}_i = 0$ and $\curl\vb{u}^{\rm c}_i = 0$. Once these velocity fields are calculated following Refs.~\cite{nore_1997_kolmogorov,horng_2009_twodimensional,mukherjee_2020_quench, ghoshdastidar_2022_pattern}, we can calculate the incompressible energy $ (\mathcal{E}^{\rm ic}_i)$ and compressible energy $ (\mathcal{E}^{\rm c}_i)$,
\begin{equation}
\mathcal{E}^{\rm ic[c]}_i = \frac{1}{2} \int n_{i}\abs{\vb{u}^{\rm ic[c]}_{i}}^2 \dd{x} \dd{y}.
\end{equation}
Afterwards the compressible and incompressible energy spectra for the $i$-th species can be calculated as
\begin{equation}
E^{\rm ic[c]}_i(k) = \frac{k}{2} \sum_{q=x,y}^{}\int_{0}^{2 \pi}\abs{F_{q}(\sqrt{n_i}\vb{u}^{\rm ic [c]}_{q,i})}^2 \dd\phi,
\end{equation}
where $F_{q}(\sqrt{n_{i}}\vb{u}^{\rm ic[c]}_{i})$ denotes the Fourier transform of $\sqrt{n_{i}}\,\vb{u}^{\rm ic[c]}_{q,i}$, the $q$-th component of the density-weighted velocity field, with $\vb{u}_{i} = (u_{x,i}, u_{y, i})$.
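A sketch of the Helmholtz split itself, assuming it acts on the density-weighted field $\sqrt{n_i}\,\vb{u}_i$ via projections in Fourier space (the grid and the test field are illustrative): a purely rotational field should land entirely in the incompressible channel.

```python
import numpy as np

# Fourier-space Helmholtz split of the density-weighted velocity field
# w = sqrt(n) u into divergence-free (incompressible) and curl-free
# (compressible) parts. Grid and test field are illustrative.
def decompose(wx, wy, dx):
    N = wx.shape[0]
    k = 2*np.pi*np.fft.fftfreq(N, d=dx)
    KX, KY = np.meshgrid(k, k, indexing='ij')
    K2 = KX**2 + KY**2
    K2[0, 0] = 1.0                         # k = 0 carries no compressible part
    Fx, Fy = np.fft.fft2(wx), np.fft.fft2(wy)
    proj = (KX*Fx + KY*Fy)/K2              # longitudinal projection
    Fcx, Fcy = KX*proj, KY*proj            # compressible (curl-free) part
    Ficx, Ficy = Fx - Fcx, Fy - Fcy        # incompressible remainder
    return ((np.fft.ifft2(Ficx).real, np.fft.ifft2(Ficy).real),
            (np.fft.ifft2(Fcx).real, np.fft.ifft2(Fcy).real))

# purely rotational test field: all energy should be incompressible
N, L = 128, 20.0
dx = L/N
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
env = np.exp(-(X**2 + Y**2)/4)
wx, wy = -Y*env, X*env                     # divergence-free by construction
(icx, icy), (cx, cy) = decompose(wx, wy, dx)
E_ic = 0.5*np.sum(icx**2 + icy**2)*dx*dx
E_c = 0.5*np.sum(cx**2 + cy**2)*dx*dx
```

Angle-integrating $|F_q|^2$ over shells of constant $|k|$ then yields the spectra $E^{\rm ic[c]}_i(k)$ defined above.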
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{kolmo_2048_Cs_a0_f138.pdf}
\caption{Incompressible kinetic energy spectra of species A, $E^{\rm ic}_{\rm A} (k)$, at different time instants (\textit{see legends}) for different paddle frequencies $\omega_p= (a)~\omega, (b)~3\omega$ and $(c)~8\omega$. The \textit{`solid'} and \textit{`dashed'} lines represent the slopes of $k^{-5/3}$ and $k^{-3}$, respectively. The interspecies scattering length $a_{\rm AB} = 0$. The dashed vertical lines mark the positions $k={2\pi}/{R_{\rm A}}$ ($R_{\rm A}=10l$ being the Thomas-Fermi radius), $k_p$ ($2\pi/a$, $a$ being the semi-major axis of the paddle), $\xi^{-1}_{\rm A}$ and $2\pi/\xi_{\rm A}$, respectively. Here the healing length $\xi_{\rm A}$ of species A is $0.062l$. Except for $\omega_p$, all other parameters are the same as in Fig.~\figref{fig:csrb_den_vor_a0}.}
\label{fig:kolmo_a0_Cs}
\end{figure}
We present the incompressible energy spectra $E^{\rm ic}_{\rm A}(k)$ of species A in Fig.~\figref{fig:kolmo_a0_Cs} at various time instants and frequencies $\omega_p$, corresponding to the single paddle case at $a_{\rm AB}=0$. With no interspecies interaction, species B is not impacted by the paddle potential, which allows us to focus on species A. For $\omega_{p} = \omega$, $E^{\rm ic}_{\rm A}(k)$ attains a stationary state at early times ($t=0.1 {\rm s}$) and maintains it till $t = 3.5{\rm s}$, as evidenced by Fig.~\figref{fig:kolmo_a0_Cs}($a$). Moreover, $E^{\rm ic}_{\rm A}(k)$ exhibits a $k^{-3}$ power law in the region $30\lesssim k \lesssim 100$ and a $k^{-5/3}$ power law in the region $2\lesssim k \lesssim 30$. The $k^{-5/3}$ and $k^{-3}$ power laws are associated with the inertial range of the energy cascade and with the internal structure of the vortex core, respectively \cite{bradley_2012_energy,mithun_2021_decay}. Note that for $\omega_p = \omega$, vortex pairs and vortex dipoles are seen in Fig.~\figref{fig:csrb_den_vor_a0}$(c_2)$-$(e_2)$ \cite{gauthier_2019_giant,reeves_2013_inverse,simula_2014_emergence,groszek_2016_onsager}. Surprisingly, for the frequency $\omega_p = 3\omega$, where same-sign vortices dominate, the $k^{-3}$ spectrum develops only over a very narrow $k$-range in our system, see Fig.~\figref{fig:kolmo_a0_Cs}($b$), and beyond that ($\omega_p>3\omega$) the spectra deviate from the $-3$ scaling law, see Fig.~\figref{fig:kolmo_a0_Cs}($c$). However, while the $k^{-5/3}$ spectrum develops over a wide $k$-range in the long-time dynamics, it does not emerge in the early-time dynamics [Fig.~\figref{fig:kolmo_a0_Cs}($b$)]. Finally, we notice that the $k$-ranges where the spectra follow the $-5/3$ scaling become narrower with increasing $\omega_p$, see Fig.~\figref{fig:kolmo_a0_Cs}($c$).
This behaviour is expected since the system at $\omega_p \gtrsim 7 \omega$ is primarily governed by the generation of a huge number of sound waves caused by the rapid annihilation of the vortices and antivortices.
Another interesting observation from our study is that the most extended inertial range of the energy cascade occurs at the paddle frequency $\omega_p\approx3\omega$, where both species hold the maximum amount of angular momentum. The positions of the inertial ranges vary with $\omega_p$. For low $\omega_p (\simeq \omega)$ and high $\omega_p (> 5\omega)$, the inertial ranges occur at lower and higher wavenumbers, respectively, than the inverse healing length $(\xi_{\rm A}^{-1})$. For intermediate frequencies, the inertial range extends on both sides of $k=\xi_{\rm A}^{-1}$, see Fig.~\figref{fig:kolmo_a0_Cs}.
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{kolmo_2048_Rb_a80_f138.pdf}
\caption{Incompressible kinetic energy spectra of species B, $E^{\rm ic}_{\rm B}(k)$, at different time instants (\textit{see legends}) for different paddle frequencies $\omega_p= (a)~\omega, (b)~3\omega$ and $(c)~8\omega$. The \textit{`solid'} and \textit{`dashed'} lines represent the slopes of $k^{-5/3}$ and $k^{-3}$, respectively. The interspecies scattering length is $a_{\rm AB} = 80a_0$. The dashed vertical lines mark the positions $k={2\pi}/{R_{\rm B}}$ ($R_{\rm B}\approx10l$ being the Thomas-Fermi radius), $k_p$ ($2\pi/a$, $a$ being the semi-major axis of the paddle), $\xi^{-1}_{\rm B}$ and $2\pi/\xi_{\rm B}$. Here the healing length $\xi_{\rm B}$ of species B is $0.099l$. Except for $\omega_p$, all other parameters are the same as in Fig.\figref{fig:csrb_den_ang_a80}. }
\label{fig:kolmo_a80_Rb}
\end{figure}
Next, we turn to the scenario of finite interspecies interaction characterized by $a_{\rm AB} = 80a_{0}$ and investigate whether species B produces power-law spectra in the incompressible sector of its energy, see Figs. \figref{fig:kolmo_a80_Rb}($a$)-($c$). We note that for $\omega_p=\omega$ the $k^{-5/3}$ and $k^{-3}$ power laws manifest in a similar manner within the ranges $ 1\lesssim k \lesssim 10$ and $20\lesssim k \lesssim 100$, respectively.
As in species A, the ranges of the $-5/3$ scaling law in species B become narrower as the paddle frequency increases beyond $\omega_p\approx3\omega$, and the positions of the inertial ranges change with $\omega_p$.
Moreover, species B contains vortices with larger cores than those of species A. At large $\omega_p$, fewer high-momentum acoustic waves are present in species B than in species A, because species B feels the paddle potential only weakly through the interspecies interaction. This makes the incompressible kinetic energy at high momentum more discernible in species B than in species A. Consequently, the $k^{-3}$ scaling law, which is related to the vortex core structure, appears in $E^{\rm ic}_{\rm B}(k)$ for the frequency range $\omega\lesssim\omega_p\lesssim 10\omega$.
We note that in this case $(a_{\rm AB}=80a_0)$, $E^{\rm ic}_{\rm A}(k)$ does not behave differently with respect to $\omega_p$ compared to the $a_{\rm AB} = 0$ case (hence not shown here for brevity).
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{kolmo_com_CsRb_a0_80_f186.pdf}
\caption{Compressible kinetic energy spectra of species A, $E^{\rm c}_{\rm A}(k)$, at $(a)$ $\omega_p=\omega$ and $(b)~\omega_p=8\omega$ without interspecies interaction, and of species B, $E^{\rm c}_{\rm B}(k)$, at $(c)~ \omega_p=6\omega$ with $a_{\rm AB}=80a_0$, at different time instants (\textit{see legends}). Black \textit{`solid'} and \textit{`dashed'} lines represent the slopes of $k^{-7/2}$ and $k$, respectively, and the blue \textit{`solid'} line in $(c)$ represents the slope of $k^{-3/2}$. The dashed vertical lines of $(a)-(b)$ and $(c)$ are described in Figs.\figref{fig:kolmo_a0_Cs} and \figref{fig:kolmo_a80_Rb}, respectively. All the parameters are the same as discussed in Sec. \ref*{sec:single_p}. }
\label{fig:kolmo_com}
\end{figure}
We now explain the compressible kinetic energy spectra \cite{mithun_2021_decay, ghoshdastidar_2022_pattern,shukla_2013_turbulence} $E^{\rm c}_{\rm A}(k)$ of species A and $E^{\rm c}_{\rm B}(k)$ of species B for a few representative cases subjected to the single paddle potential, shown in Fig. \figref{fig:kolmo_com}. To begin, in the case of $\omega_{p} = \omega$, we notice that a power-law region with $E^{\rm c}_{\rm A}(k) \propto k$ develops in the low-$k$ part of the spectrum, a relation that expresses the frequencies of Bogoliubov's elementary excitations at low wavenumbers [Fig. \figref{fig:kolmo_com}($a$)]. The spectrum reaches a maximum in the range $20 \lesssim k \lesssim 40$ (the peak positions differ between time instants until the system reaches equilibrium) before rapidly dropping. As the paddle frequency increases ($\omega_p\gtrsim7\omega$), the spectrum $E^{\rm c}_{\rm A}(k)$ follows a power-law exponent of $-7/2$ at large $k$, as shown in Fig. \figref{fig:kolmo_com}(b) for the specific case $\omega_p=8\omega$. Notably, this scaling is associated with superfluid turbulence of equilibrium sound waves, as also reported in Refs. \cite{mithun_2022_measurement,mithun_2021_decay,reeves_2012_classical}.
Interestingly enough, for $a_{\rm AB}=80a_0$, we observe the $k^{-3/2}$ scaling law in the intermediate $k$ range for frequencies $\omega_p \gtrsim 5 \omega$, as shown in Fig. \figref{fig:kolmo_com}(c) for $\omega_p=6\omega$. This power law, which appears at $k$ higher than the driving wavenumber $k_p$, reveals the signature of weak wave turbulence \cite{reeves_2012_classical,nazarenko_2006_wave}. Let us note that the acoustic disturbance must not be strong for this scaling to manifest \cite{nazarenko_2006_wave}; hence, it is more apparent in species B in the weaker interspecies interaction regimes, while large acoustic disturbances prevent the development of the same scaling in species A. We observed that for sufficiently strong interspecies interactions (\emph{e.g.}, $a_{\rm AB}=140a_0$) the $-3/2$ scaling law disappears from species B (not shown). However, a detailed discussion of this is beyond the scope of the present manuscript.
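The splitting into $E^{\rm ic}$ and $E^{\rm c}$ underlying all of the above is a Helmholtz decomposition of the density-weighted velocity field into divergence-free and curl-free parts, conveniently performed in Fourier space on the periodic grid. A schematic two-dimensional implementation (our own illustrative sketch, not the production code; field names are generic):

```python
import numpy as np

def helmholtz_2d(wx, wy):
    """Split a periodic 2D vector field into incompressible (divergence-free)
    and compressible (curl-free) parts via Fourier projection."""
    kx = np.fft.fftfreq(wx.shape[0])[:, None]
    ky = np.fft.fftfreq(wx.shape[1])[None, :]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # the k=0 mode carries no gradient part
    fx, fy = np.fft.fft2(wx), np.fft.fft2(wy)
    lon = (kx * fx + ky * fy) / k2      # longitudinal (curl-free) amplitude
    comp = (np.fft.ifft2(kx * lon).real, np.fft.ifft2(ky * lon).real)
    inc = (wx - comp[0], wy - comp[1])
    return inc, comp

# A shear flow v = (cos(2*pi*y/L), 0) is divergence-free and periodic:
# it should come out entirely in the incompressible channel.
n = 64
X, Y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
wx, wy = np.cos(2 * np.pi * Y / n), np.zeros((n, n))
inc, comp = helmholtz_2d(wx, wy)
```

The angle-averaged power spectra of the two channels then give $E^{\rm ic}(k)$ and $E^{\rm c}(k)$.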
\section{Conclusions}\label{conclusion}
We have explored the phenomenon of nonlinear structure formation and its dynamics using optical paddle potentials in a binary BEC composed of two distinct atomic elements. One of the species (species A) experiences a rotating single or double paddle potential, while the other (species B) is influenced only via the interspecies contact interaction. The paddles are rotated for a finite amount of time, resulting in the creation of vortices. In the long-time dynamics, the sign and number of the vortices depend on the frequency and orientation of the paddle rotation.
Additionally, we discussed the effect of paddle rotation on the other species. We employed several diagnostics to gain insight into the dynamics, including the density, the vorticity, the $z$-component of the angular momentum, and the species' compressible and incompressible energy spectra.
Clusters of positive and negative vortices emerge within the system when a single paddle potential is rotated with a low rotational frequency.
Interestingly, when the frequency is gradually increased, we observe a transition to a regime dominated by same-sign vortices, with species A gaining the maximum angular momentum. At larger paddle frequencies, the annihilation of vortex-antivortex pairs becomes considerable, reducing the system's total vortical content.
The behavior mentioned above holds for species A both in the absence and in the presence of interspecies interaction. Interestingly enough, when the interspecies contact interaction is switched on, species B exhibits similar dynamical behavior.
However, species B carries a substantially lower vortex number and angular momentum than species A in the miscible regime. When two paddle potentials are employed, their relative orientation becomes crucial in determining the vortical content of species A.
For the paddles rotating with the same orientation, the behavior is almost identical to the single paddle applied to species A.
However, when the two paddles rotate opposite to each other, an almost equal number of vortex and antivortex structures forms regardless of the rotation frequency of the paddles, and the net angular momentum imparted to the system during the long-time dynamics fluctuates about zero.
Following that, we explored the system's dynamics by invoking the compressible and incompressible kinetic energy spectra. A key highlight of this work is its examination of the various power-law scalings of these spectra.
We observed $-5/3$ and $-3$ power-law scaling in the low and high wavenumber regimes of the incompressible energy spectrum, respectively, in the low rotation frequency regime, where we saw clusters of identical sign vortices. These scalings provide evidence for the development of quantum turbulence in our system at low frequencies. However, analogous scaling is not apparent in the incompressible energy spectrum as the rotation frequency increases.
There are many research directions to be pursued as future endeavors. One straightforward extension would be to include finite temperature \cite{proukakis_2008_finitetemperature}. Extending the present work to a three-dimensional setup and exploring the corresponding non-linear defect formation would be equally interesting \cite{serafini_2017_vortex,cidrim_2017_vinen,xiao_2021_controlled,halder_2022_phase}. Another vital prospect would be to employ a dipolar BEC to inspect the impact of the long-range interaction \cite{lahaye_2009_physics}. Finally, the investigations discussed previously would be equally fascinating at the beyond-mean-field level, where significant correlations between particles exist \cite{cao_2017_unified}.
\section{Acknowledgment}
We thank the anonymous referees for their valuable comments that immensely improved the manuscript. We acknowledge National Supercomputing Mission, Government of India, for providing computing resources of ``PARAM Shakti" at Indian Institute of Technology Kharagpur, India.
\vspace*{-0.5cm}
\section{Introduction}\label{sec:intro}
%
The diffuse emission spectrum of the Milky Way at photon energies of a few megaelectronvolts (MeV) is one of the least explored phenomena in astrophysics.
%
Despite its rich scientific connections to fundamental nuclear, particle, and cosmic-ray (CR) physics, only one instrument measured the Galactic emission in the 1--30\,MeV band: the Compton Telescope (COMPTEL) onboard the Compton Gamma Ray Observatory \citep{Strong1999_COMPTEL_MeV} -- more than 20 years ago.
%
The MeV spectrum provides invaluable and otherwise unavailable insight:
%
The magnitude and shape of the interstellar radiation field (ISRF) is determined through inverse Compton (IC) scattering of gigaelectronvolt (GeV) electrons \citep[e.g.][]{Moskalenko2000_IC}, resulting in an MeV continuum.
%
The low-energy CR spectrum ($\lesssim 100$\,MeV) outside the Solar System can be measured throughout the Galaxy via nuclear excitation of interstellar medium (ISM) elements, which produce de-excitation $\gamma$-ray lines \citep[e.g.][]{Benhabiles-Mezhoud2013_DeExcitation}.
%
This is otherwise only possible with the Voyager probes \citep{Stone2013_Voyager1_CR}, which, however, are only sensitive to CR spectra in the local ISM.
%
Annihilation of positrons in flight, which determines the injection energy of their sources in a steady state, shows $\gamma$ rays from 0.26\,MeV up to the particles' kinetic energy \citep{Beacom2006_511}.
%
Dark matter candidates could also leave an imprint of their nature in the MeV band \citep[e.g.][]{Boehm2004_dm,Fortin2009_DMGammaRaySpectra,Siegert2021_RetII}.
%
Currently, only one instrument is able to measure the extended emission along the Galactic plane: the Spectrometer aboard the International Gamma-Ray Astrophysics Laboratory, INTEGRAL/SPI \citep{Vedrenne2003_SPI,Winkler2003_INTEGRAL}.
%
SPI measures photons in the range between 20\,keV and 8\,MeV through a coded aperture mask.
%
Although INTEGRAL is currently in its 20th mission year and has performed deep exposures in the Galactic bulge and disc, the upper decade of SPI's spectral bandpass has barely been touched in data analysis.
%
In this paper we determine the spectrum of diffuse emission in the Milky Way between 0.5 and 8\,MeV based on 16 years of INTEGRAL/SPI data.
%
Our approach is based on spatial template fitting of GALPROP \citep{Strong2011_GALPROP} models and relies on the success of recent developments in modelling the instrumental background of SPI \citep{Diehl2018_BGRDB,Siegert2019_SPIBG}.
%
This paper is structured as follows:
%
In Sect.\,\ref{sec:problem} we explain the challenges of SPI data above 2\,MeV, the impact of the Sun and Earth's albedo, and how we handle the background.
%
Our dataset and analysis is presented in Sect.\,\ref{sec:analysis}.
%
We assess the fit quality and estimate systematic uncertainties in Sect.\,\ref{sec:systematics}.
%
The resulting spectrum and residuals are found in Sect.\,\ref{sec:results}.
%
We discuss our findings in terms of the Galactic electron population that leads to the IC spectrum and summarise in Sect.\,\ref{sec:summary}.
\section{SPI data above 2\,MeV}\label{sec:problem}
%
In the `high-energy' (HE) range of SPI, between 2 and 8\,MeV, only a few targets have been characterised spectrally:
%
the Crab \citep[e.g.][]{Jourdain2009_Crab,Jourdain2020_Crab} and the Sun \citep[e.g.][]{Gros2004_solarflare,Kiener2006_solarflare}\footnote{We note that \citet{Bouchet2008_imaging} performed an imaging analysis between 1.8 and 7.8\,MeV, though only in this one energy bin.}.
%
The latter has a huge impact on the instrumental background behaviour as a function of time, owing to the enhanced particle flux during solar flare events.
%
SPI data between 2--8\,MeV are recorded in $16384$ channels, corresponding to a channel resolution of $0.52$\,keV \citep{Vedrenne2003_SPI}.
%
By default, in official processing \citep[ISDC/OSA;][]{Courvoisier03} the HE range is binned into 1\,keV bins.
%
Per detector, the count rate drops from $10^{-3}$ to $10^{-5}\,\mrm{cnts\,s^{-1}\,keV^{-1}}$ from 2 to 8\,MeV.
%
This results in a notoriously noisy spectrum for a single observation pointing, which typically lasts 0.5--1.0\,h, and is one of the main reasons why these data are difficult to analyse.
%
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth,trim=0.1in 0.1in 0.5in 0.7in, clip=true]{Chosen_revolutions_C12.pdf}
\caption{Background count rate of instrumental lines. The atmospheric \nuc{C}{12} line at 4.4\,MeV shows strong variations (one to three orders of magnitude) when a solar flare occurs. The instrumental \nuc{Ge}{69} line at 882.5\,keV is barely affected by solar events. We exclude all revolutions in which the \nuc{C}{12} rate is $3\sigma$ above the running median.}
\label{fig:bg_lines}
\end{figure}
%
Most of the measured counts are due to instrumental background radiation, originating from CR interactions with the satellite material.
%
In addition, the $\gamma$-ray albedo spectrum from Earth, also induced by CRs, begins to contribute significantly at these energies because SPI's anti-coincidence shield becomes more and more transparent.
%
\citet{Share2001_EarthAlbedoLines} identified about 20 atmospheric $\gamma$-ray lines between 0.5 and 7\,MeV and their impact on the spectra from the Solar Maximum Mission (SMM).
%
Owing to the composition of Earth's atmosphere, these lines are mostly related to O or N, with weaker lines of C and B, plus the positron annihilation line.
%
The authors also find a significant contribution of unresolved lines as well as an electron bremsstrahlung continuum.
%
The absolute numbers from SMM cannot directly be translated to instrumental background rates for SPI because of INTEGRAL's eccentric and high inclination orbit as well as its different design.
%
However, the relative rates among the lines and, in particular, between solar quiescence and flares serve as a good proxy for data selection (Sect.\,\ref{sec:data_description}).
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth,trim=0.0in 0.0in 0.0in 0.0in, clip=true]{SPI_BG_ann13_D00_horizontal.pdf}
\caption{SPI data of one detector ($00$) between INTEGRAL revolutions 777 and 795 and spectral fits. \emph{Left}: Complete spectrum of SPI's HE range between 2 and 8\,MeV (\emph{top}) and residuals (\emph{bottom}). \emph{Right}: Zoomed-in view between 2.4 and 2.9\,MeV, with individual lines indicated. The normalised residuals, which scatter around $0.04$ with a standard deviation of $0.94$ for the 6000 data points, indicate a sufficiently well-described background spectrum.}
\label{fig:bg_spec_fit}
\end{figure*}
\subsection{Solar impact}\label{sec:data_description}
%
In Fig.\,\ref{fig:bg_lines} we show the measured count rate of the atmospheric \nuc{C}{12} line at 4438\,keV as a function of the INTEGRAL mission time in units of satellite revolutions around Earth ($\sim 3$\,d).
%
For comparison, we also show an instrumental line of \nuc{Ge}{69} at 882.5\,keV.
%
The long-term behaviour is determined by the solar cycle, being inversely proportional to the sunspot number and therefore the magnetic activity of the Sun \citep[see][for more examples]{Diehl2018_BGRDB}.
%
The short-term behaviour of the two lines is clearly different:
%
Both atmospheric lines and instrument lines are additionally excited by solar particle events, but, since the shield is more transparent at 4.4\,MeV than at 0.9\,MeV, the impact of solar events is much stronger for the former.
%
We note that for X-class solar events, such as during INTEGRAL revolutions 128 or 1861, even the 882.5\,keV line (and, correspondingly, most other lines) showed a significant increase with respect to the running mean.
%
For the \nuc{C}{12} line, and similar lines such as \nuc{O}{16} (6129\,keV) or \nuc{N}{14} (3674\,keV), solar flares increase the received count rate by up to three orders of magnitude.
%
We chose to exclude affected revolutions entirely from our data selection and background modelling, and used the \nuc{C}{12} line, which shows the largest rate ratios between flares and quiescence, as a proxy for enhanced short-term solar activity.
%
We applied a running median of 30 revolutions to estimate the baseline rate at 4438\,keV.
%
Then, we removed any revolution from our data in which the measured \nuc{C}{12} line rate is more than three standard deviations above the median rate.
%
This was applied to all energies and processing chains (see Sect.\,\ref{sec:data_set}).
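The revolution filter described above can be sketched as follows. Note that the text specifies only a $3\sigma$ cut above a 30-revolution running median; the MAD-based robust standard deviation used here is our own assumption for the illustration, and the rates are synthetic:

```python
import numpy as np

def flag_flare_revolutions(rate, window=30, nsigma=3.0):
    """Flag revolutions whose line rate exceeds the running median by
    nsigma robust standard deviations (MAD-based; an assumption here)."""
    n = len(rate)
    flagged = np.zeros(n, dtype=bool)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        med = np.median(rate[lo:hi])
        sigma = 1.4826 * np.median(np.abs(rate[lo:hi] - med))
        flagged[i] = rate[i] > med + nsigma * sigma
    return flagged

rng = np.random.default_rng(1)
rate = rng.normal(1.0, 0.05, 300)   # quiescent baseline rate per revolution
rate[120] *= 50.0                   # solar-flare-like excursion
flagged = flag_flare_revolutions(rate)
```

Because the median is robust, a single flare revolution inside the window does not lift the baseline, so only the excursion itself is removed.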
%
\subsection{Handling instrumental background}\label{sec:background}
%
Based on the reduced HE database, with solar particle events filtered out as described in Sect.\,\ref{sec:data_description}, we applied the method from \citet{Siegert2019_SPIBG} to construct a high spectral resolution instrumental background database.
%
First, we integrated over the entire filtered data archive and all detectors to also identify the weakest lines.
%
We found 610 lines with rates per detector between $10^{-7}$ and $10^{-2}\,\mrm{cnts\,s^{-1}}$.
%
We then split the energy range between 2 and 8\,MeV into multiple smaller bands to determine the spectral parameters of the background lines on top of a multiple broken power law for each detector.
%
The integration time to extract the spectral information was set to the time interval between two annealing periods, which is typically half a year.
%
This has the advantage of enough counts per spectrum to determine the spectral shapes reliably, but it only estimates an average degradation of the detectors, typically broadening lines by up to 15\,\% between two annealings.
%
In Fig.\,\ref{fig:bg_spec_fit} we show the spectrum of detector 00 measured between INTEGRAL revolutions 777 and 795, together with the fit to determine the flux ratios for the final background model, and the residuals.
%
Over the full energy range, this method provides adequate fits.
%
In the right panel of Fig.\,\ref{fig:bg_spec_fit} we show the zoomed-in version between 2.4 and 2.9\,MeV and detail the instrumental background lines.
In \citet{Siegert2019_SPIBG}, this technique was applied to the diffuse emission of the 511\,keV and 1809\,keV lines and to point-like emission from continuum sources beyond 3\,MeV.
%
Here we extend this approach to diffuse continuum emission up to the boundaries of the SPI HE response of 8\,MeV.
%
While the energy range and source function in this work are a new application of the previous method, the expected signal-to-background count ratio ($S/B$) per energy bin is about the same as in the case for the 511\,keV line, for example.
%
The line shows an integrated flux of $10^{-3}\,\mrm{ph\,cm^{-2}\,s^{-1}}$ above an average background count rate of $0.6\,\mrm{cnts\,s^{-1}}$.
%
Taking the background spectrum shown in Fig.\,\ref{fig:bg_spec_fit} as representative for the whole mission, $(S/B)_{511}$ is about $1.6 \times 10^{-3}$.
%
This value changes by about 50\,\% over the course of the dataset (cf. the background variation in Fig.\,\ref{fig:bg_lines}).
%
While the expected signal in the continuum from 0.5 to 8\,MeV decreases significantly from $\sim 10^{-5}\,\mrm{ph\,cm^{-2}\,s^{-1}\,keV^{-1}}$ to $\sim 10^{-7}\,\mrm{ph\,cm^{-2}\,s^{-1}\,keV^{-1}}$ with a power-law index around $-1.7$, the background count rate also drops sharply with an index of $-3$ between 0.5 and 5\,MeV.
%
Depending on energy, the expected IC $(S/B)_{\rm IC}$ varies between $1$ and $8 \times 10^{-3}$.
%
Judging from this ratio alone, using the \citet{Siegert2019_SPIBG} method seems justified.
In order to determine the temporal variability in the background per energy bin, we followed the same approach as in \citet{Siegert2019_SPIBG}.
%
Based on the fact that the germanium detector rate alone is insufficient to model or predict the pointing-to-pointing variation, an onboard radiation monitor is typically used to fix the background behaviour.
%
We used the rate of saturating germanium detector events (GeDSat) to link the background amplitudes in time and employed it as a `tracer' function.
%
It was shown in several studies that used either this or previous methods that neither the GeDSat rate alone nor orthogonalised additional tracers can fully explain the background variability.
%
To account for unexplained variance, we split the background tracer in time and set regularly spaced nodes to re-scale the background model.
%
Because this choice is not unique, we sought a trade-off between the number of additional background parameters required and the improvement in likelihood.
%
This is achieved by the use of the Akaike information criterion \citep[AIC;][]{Akaike1974_AIC,Burnham2004_AICBIC},
%
\begin{equation}
\mrm{AIC} = 2 (n_{\rm par} - \ln(\mathscr{\hat{L}}))\mrm{,}
\label{eq:AIC}
\end{equation}
%
with $n_{\rm par}$ being the number of fitted parameters and $\mathscr{\hat{L}}$ the log-likelihood maximum, to determine which configuration of background variability is optimal for each energy bin.
%
We tested background variability timescales between 0.19\,d ($1/16$ of an orbit) and 30\,d (ten orbits) for each energy bin and identified the optimum AIC (see Table\,\ref{tab:data_set}).
The AIC has no absolute meaning, but its relative values can be used to identify similarly likely model configurations.
%
We used the AIC optimum value and defined a threshold based on the required number of fitted parameters to select background variability timescales that also provide an adequate fit.
%
Likewise, we can estimate a systematic uncertainty on the extracted flux per energy bin using the AIC (see Sect.\,\ref{sec:background_systematics}).
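The model-selection step can thus be summarised as: fit each candidate timescale, compute the AIC of Eq.\,\ref{eq:AIC}, take the minimum, and treat all configurations with $\Delta\mrm{AIC}$ below the number of fitted parameters at the optimum as statistically equivalent, so that their flux spread enters the systematic uncertainty. A sketch with entirely made-up scan numbers (not values from our fits):

```python
import numpy as np

# Hypothetical scan results: (timescale [d], n_par, max log-likelihood, flux)
scan = [
    (0.19, 8000, -290100.0, 1.02),
    (0.75, 2500, -292000.0, 1.00),
    (1.5,  1300, -292900.0, 0.97),
    (3.0,   700, -293400.0, 0.98),
    (6.0,   400, -293900.0, 1.05),
    (30.0,  100, -296000.0, 1.20),
]

# AIC = 2 * (n_par - ln L_hat), cf. Eq. (1)
aic = np.array([2 * (npar - logl) for _, npar, logl, _ in scan])
best = int(np.argmin(aic))
n_par_best = scan[best][1]

# Configurations with Delta AIC below n_par at the optimum are kept as
# "equally likely"; their flux scatter is the systematic uncertainty.
keep = (aic - aic[best]) < n_par_best
fluxes = np.array([f for _, _, _, f in scan])[keep]
sys_unc = fluxes.std()
```

In these toy numbers the 3\,d timescale wins, and the neighbouring 1.5\,d and 6\,d configurations remain within the acceptance band.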
\section{Data and analysis}\label{sec:analysis}
\subsection{Filtered dataset}\label{sec:data_set}
%
Based on the considerations in Sect.\,\ref{sec:problem}, we defined 12 logarithmic energy bins and considered INTEGRAL revolutions between 43 (February 2003) and 2047 (January 2019).
%
Details about the included revolutions and the number of observations are provided in Appendix\,\ref{sec:additional_tables}.
%
The dataset covers two independent SPI processing chains: pulse-shaped-discriminated (PSD) events between 0.5 and 2.0\,MeV and HE between 2 and 8\,MeV.
%
To combine the extracted data points into one common spectrum, the PSD fluxes were scaled by the expected loss in efficiency of $1/0.85$ due to increased dead time.
%
The PSD range includes the Galactic diffuse \nuc{Al}{26} line at 1808.74\,keV, which we included in a narrow bin between 1805 and 1813\,keV.
%
We focused on a spatial region around the Galactic centre that is covered by targeted observations (pointings) falling into $-40^{\circ} \leq \ell \leq 40^{\circ}$, $-40^{\circ} \leq b \leq 40^{\circ}$.
%
Because of SPI's fully coded $16^{\circ} \times 16^{\circ}$ field of view, we considered diffuse emission out to $|\ell| \leq 47.5^{\circ}$ and $|b| \leq 47.5^{\circ}$, respectively.
%
This avoids the partially coded field of view and its edge effects when the exposure is either very small (few pointings) or shows large gradients.
%
Our flux estimates are therefore normalised to a spherical square with a side length of $95^{\circ}$, covering a solid angle of $\Omega = 2.43\,\mrm{sr}$.
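This solid angle can be checked by integrating the spherical area element $\cos b\,\mathrm{d}b\,\mathrm{d}\ell$ over the region (a stand-alone sanity check of our own; the sub-per-cent difference to the quoted $2.43$\,sr reflects rounding and the exact region definition):

```python
import math

l_half = math.radians(47.5)   # half-width in Galactic longitude
b_max = math.radians(47.5)    # half-width in Galactic latitude

# Omega = Int dl Int cos(b) db over |l| <= 47.5 deg, |b| <= 47.5 deg
omega = (2 * l_half) * (2 * math.sin(b_max))   # ≈ 2.44 sr
```

For comparison, the full sky covers $4\pi \approx 12.57$\,sr, so the region is about one fifth of the sphere.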
%
Other data selections included radiation monitors and orbit parameters:
%
We only chose pointings in the orbit phase between 0.15 and 0.85 to avoid residual activation by the Van Allen radiation belts.
%
Whenever the running mean of the rate ratio between the anti-coincidence shield and the total rate of the Ge detectors exceeded a $3\sigma$ threshold, we excluded the observation.
%
Pointings with a cooling plate difference of more than 0.8\,K were also excluded.
%
Finally, revolutions 1554--1558 were removed due to the outburst of the microquasar V404 Cygni.
%
These selections resulted in a total of $36103$ pointings for the PSD and HE ranges.
%
Based on a background-only fit to the selected data, we investigated the residuals as a function of pointing, detector, and energy, and removed individual observations whose deviations were larger than $7\sigma$.
%
Given the low expected signals, any diffuse emission contribution is about 0.1--1.0\,\% of the total counts and would never distort the residuals in the broad logarithmic energy bins.
%
The additional filter removed 0.6\,\% of the PSD data, for a reduced dataset of $35892$ pointings.
%
The HE range shows no such outliers.
%
In total, the dead-time-corrected exposure time of our dataset is $68.5$\,Ms for a working detector.
The characteristics of our dataset are found in Table\,\ref{tab:data_set}, including the number of data points, the background variability timescale per energy bin, the number of degrees of freedom (dof) per energy bin, and a calculated goodness of fit criterion.
%
As described in Sect.\,\ref{sec:background}, the background variability is determined to first order by the GeDSat rate.
%
Because this tracer is not sufficient to describe the true (measured) background variability, we inserted regularly spaced time nodes to capture the unexplained variance.
%
As shown in \citet{Siegert2019_SPIBG}, the number of time nodes, or in turn the background variability, depends on the energy, the bin width, and to some extent the source strength.
%
With an optimisation to require the fewest number of parameters while at the same time obtaining the best likelihood (see the AIC approach in Sect.\,\ref{sec:background}), this timescale was determined for each energy bin individually, always taking a baseline sky model into account (see Sect.\,\ref{sec:sky_maps}).
%
The background variability not explained by the tracer alone therefore changes between 0.75 and 6 days, increasing roughly with energy.
\begin{table}[!t]
\centering
\begin{tabular}{c|rrrr|r}
\hline\hline
Energy band & $n_{\rm data}$ & $T_{\rm BG}$ & $\mrm{dof}$ & $\chi^2/\mrm{dof}$ & Proc. \\
\hline
$514$--$661$ & $578764$ & $0.75$ & $573827$ & $1.0059$ & PSD \\
$661$--$850$ & $578764$ & $0.75$ & $573827$ & $0.9984$ & PSD \\
$850$--$1093$ & $578764$ & $0.75$ & $573831$ & $0.9974$ & PSD \\
$1093$--$1404$ & $578764$ & $0.75$ & $573831$ & $0.9974$ & PSD \\
$1404$--$1805$ & $578764$ & $1.5$ & $576047$ & $0.9939$ & PSD \\
$1805$--$1813$ & $578764$ & $3$ & $577254$ & $0.9935$ & PSD \\
$1813$--$2000$ & $578764$ & $3$ & $577255$ & $0.9953$ & PSD \\
$2000$--$2440$ & $582349$ & $6$ & $581390$ & $1.0057$ & HE \\
$2440$--$3283$ & $582349$ & $3$ & $580836$ & $1.0040$ & HE \\
$3283$--$4418$ & $582349$ & $3$ & $580836$ & $1.0026$ & HE \\
$4418$--$5945$ & $582349$ & $3$ & $580836$ & $1.0064$ & HE \\
$5945$--$8000$ & $582349$ & $6$ & $581390$ & $1.0038$ & HE \\
\hline
\end{tabular}
\caption{Dataset characteristics. The columns from left to right are the energy band in units of keV, the number of data points, the background variability timescale in units of days, the corresponding number of dof, the calculated reduced $\chi^2$ value from the best fit, and the SPI processing chain.}
\label{tab:data_set}
\end{table}
\subsection{General method}\label{sec:likelihood_fits}
%
SPI data analysis relies on a comparison between the raw count data per pointing, detector, and energy, with a combination of instrumental background and celestial emission.
%
We modelled the data $d_{p}$ per pointing $p$ for each energy bin individually as
%
\begin{equation}
m_p = \sum_t \sum_j R_{jp} \sum_{k=1}^{N_S} \theta_{k,t} M_{kj} + \sum_{t'} \sum_{k=N_S+1}^{N_S+N_B} \theta_{k,t'} B_{kp}\mrm{,}
\label{eq:spimodfit_model}
\end{equation}
%
where the response $R_{jp}$ is applied to each of the $k=1 \dots N_S$ sky models $M_{kj}$ pixelised by $j$.
%
The $N_B$ background models $B_{kp}$ are independent of the response.
%
The only free parameters of this model are the amplitudes $\theta_{k,t}$ and $\theta_{k,t'}$ of the sky and background models, respectively.
%
They were estimated through a maximum likelihood fit subject to the Poisson statistics
%
\begin{equation}
\mathscr{L}(\theta|D) = \prod_{p=1}^{N_{\rm obs}} \frac{m_p^{d_{p}}\exp(-m_p)}{d_{p}!}\mrm{,}
\label{eq:poisson_likelihood}
\end{equation}
where $D = \{d_1, \dots, d_{N_{\rm obs}}\}$ is the dataset of measured counts per pointing.
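Schematically, the amplitude fit of Eqs.\,\ref{eq:spimodfit_model} and \ref{eq:poisson_likelihood} maximises the Poisson likelihood of the counts given a sky-plus-background model. A numpy-only toy version with one sky and one background template, using multiplicative ML-EM updates as the optimiser (synthetic data; this is an illustrative sketch, not the \emph{spimodfit} implementation, which uses its own fitting machinery):

```python
import numpy as np

def fit_amplitudes(counts, templates, n_iter=2000):
    """Poisson maximum-likelihood amplitudes theta_k for m_p = sum_k theta_k T_kp,
    via multiplicative ML-EM updates (fixed point of the Poisson score equation)."""
    theta = np.ones(templates.shape[0])
    norm = templates.sum(axis=1)
    for _ in range(n_iter):
        m = theta @ templates                      # model counts m_p
        theta *= (templates @ (counts / m)) / norm # ML-EM update, keeps theta >= 0
    return theta

rng = np.random.default_rng(42)
n_p = 2000
i = np.arange(n_p)
sky = np.exp(-0.5 * ((i - n_p / 2) / 150.0) ** 2)  # response-folded sky template
bkg = rng.uniform(9.0, 11.0, n_p)                  # background template B_p
templates = np.vstack([sky, bkg])
theta_true = np.array([5.0, 1.1])                  # (sky, background) amplitudes
counts = rng.poisson(theta_true @ templates)       # measured counts d_p
theta_hat = fit_amplitudes(counts, templates)
```

The weakly constrained sky amplitude relative to the tightly constrained background amplitude mirrors the low $S/B$ regime of the actual analysis.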
Both sky and background were allowed to change on different timescales, $t$ and $t'$, respectively.
%
Sources above 500\,keV are too faint for their variability to be detected in this dataset, and we assumed all sources as well as the diffuse emission to be constant in time.
%
We followed the approach of \citet{Siegert2019_SPIBG} to model the instrumental background from the constructed line and continuum database (Sect.\,\ref{sec:background}).
%
We built two background models per analysis bin from the newly constructed HE background database, one for the instrumental lines and one for the instrumental continuum.
%
The amplitudes of these models were fitted together with the flux(es) of expected emission model(s) (see Sect.\,\ref{sec:sky_maps}).
%
Any background variation that is not covered by this tracer was refined by additional time nodes to re-scale the GeDSat tracer function.
%
The estimated background variability timescale, changing from 0.75\,d ($\sim 1/4$ of an orbit) between 0.5 and 1.4\,MeV, up to 3--6\,d above 4\,MeV, is equivalent to $\sim 2500$ and $\sim 500$ fitted parameters, respectively.
The maximum likelihood fits to the raw data were performed with \emph{OSA/spimodfit} \citep{spimodfit,Strong2005_gammaconti}.
%
The extracted flux data points were governed by an energy redistribution matrix to take the instrument dispersion into account.
%
Spectral fits were performed with \emph{3ML} \citep{Vianello2015_3ML}.
\subsection{Emission templates}\label{sec:sky_maps}
%
Four resolved point sources, 1E\,1740.7-2942, GRS\,1758-258, IGR\,J17475-2822, and SWIFT\,J1753.5-0127, are expected in addition to the diffuse emission \citep{Bouchet2011_diffuseCR}.
%
We modelled the point sources as constant in time up to an energy of 850\,keV.
%
The 1.8\,MeV line from \nuc{Al}{26} was included as the SPI maximum entropy map by \citet{Bouchet2015_26Al}.
%
For the continuum, only the leptonic emission is relevant at the energies of interest, well below the so-called $\pi^0$ bump.
%
Previous analyses that included INTEGRAL, COMPTEL, and data from Fermi's Large Area Telescope (LAT) suggested that electron bremsstrahlung is sub-leading by at least an order of magnitude in our range of energies \citep{Strong2011_CRgamamrays}, and we neglect it in the following.
%
We used energy-dependent IC scattering emission templates from the GALPROP (v56) CR propagation code \citep{Strong2011_GALPROP}.
%
In particular, we used: (i) the model $\mrm{^SL ^Z4 ^R20 ^T150 ^C5}$ adopted in \citet{Ackermann2012_FermiLATGeV}, which reproduces Fermi/LAT gamma-ray observations well; and (ii) the model in Table 2 of \citet{Bisschoff2019_Voyager1CR}, which adjusts primary CR spectra and propagation parameters, also accounting for $\mrm{Voyager}$\,1 data.
%
This latter model adopts different electron spectral indices, as well as diffusion scaling with rigidity below and above a reference rigidity of 4\,GV, as encoded in the spectral indices $\delta_1$ and $\delta_2$, whose default values are 0.3 and 0.4, respectively.
%
By using the IC template as a tracer of the diffuse emission, we avoided adopting generic descriptions of the emission with, for instance, exponential discs or 2D Gaussians and/or continuum tracer maps such as the COMPTEL 1--30\,MeV map \citep{Strong1999_COMPTEL_MeV}.
%
Such a model-inspired approach has a twofold advantage: First, resorting to absolute models allows us to gauge how far `off' the fluxes are from typical expectations.
%
Second, the predicted IC morphologies spanned by these models, together with the flexibility in the background following from the number of time nodes required, provide a measure of systematic uncertainties.
\begin{figure*}[!h]
\centering
\includegraphics[width=\textwidth,trim=0.2in 0.3in 0.2in 0.1in, clip=true]{systematics_estimate_plot_2000-2440.pdf}\\
\includegraphics[width=\textwidth,trim=0.2in 0.3in 0.2in 0.1in, clip=true]{systematics_estimate_plot_4418-5945.pdf}
\caption{Background variability and systematics for two chosen energy bins, 2000--2440\,keV (\emph{top}, dominated by statistics) and 4418--5945\,keV (\emph{bottom}, statistics and systematics of the same magnitude). \emph{Left}: Scan for optimal background variability timescale as measured with the AIC (\emph{right axis}), Eq.\,\ref{eq:AIC}. The optimum is found at timescales of 6 and 3\,d, respectively corresponding to two and one INTEGRAL orbits. For comparison, the calculated reduced Pearson $\chi^2$ is shown for each tested grid point. \emph{Right}: Corresponding flux estimates and statistical uncertainties (orange). The systematics are estimated from the standard deviation of fluxes whose $\Delta\mrm{AIC}$ values are below the number of fitted parameters at optimum AIC (shaded region).}
\label{fig:systematics}
\end{figure*}
%
To this end, we tested different variants of the $\mrm{Voyager}$ CR parameter configuration \citep{Bisschoff2019_Voyager1CR} and assessed the magnitude of systematic uncertainties.
%
We defined: (a) $\delta_1 = 0$ to represent the possibly flatter behaviour of the diffusion scaling at low rigidities \citep{Genolini2019_AMS02_B2C}; (b) $\delta_1 = \delta_2 = 0.5$ to test the effect of a single diffusion index closer to current best fits of the ratio of secondary to primary CR nuclei \citep{Genolini2019_AMS02_B2C,Weinrich2020_AMS02_halosize}; (c) \texttt{10$\times$opt} to account for a factor of 10 stronger optical ISRF, corresponding to a possible enhancement of this poorly known component of the ISM towards the inner Galaxy \citep{Bouchet2011_diffuseCR}; and (d) \texttt{thick halo}, for which we adopted the extreme halo half thickness $L=8$\,kpc, as opposed to the default 4\,kpc, and re-normalised the diffusion coefficient accordingly to account for their well-known degeneracy \citep{Weinrich2020_AMS02_halosize}.
These variants affect both the spatial distribution of the IC photons (i.e. the morphology) and the spectral IC shape.
%
It is suggested that the primary CR electron spectrum has a break around $E_e = 2.2$\,GeV, changing from a power-law index of $p_{1}=-1.6$ to $p_{2} = -2.4$ (Fermi/LAT), or $E_e = 4.0$\,GeV with power-law indices of $p_{1}=-1.9$ and $p_{2} = -2.7$ ($\mrm{Voyager}$).
%
This implies a spectral break in the photon spectrum from $\alpha_{1}^{\rm IC} = -1.3$ to $\alpha_{2}^{\rm IC} = -1.7$ around 25\,keV, 250\,keV, and 25\,MeV for the individual components of the ISRF (cosmic microwave background $\sim 0.001$\,eV, dust $\sim 0.01$\,eV, star light $\sim 1$\,eV).
%
The components were weighted with their respective spatial intensities, which results in a power-law-like spectrum with an index of $\alpha^{\rm IC} = -1.4$ to $-1.3$ up to a few MeV.
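The quoted photon break energies follow from the standard Thomson-regime estimate for the mean inverse-Compton scattered energy, $E_{\gamma} \approx \tfrac{4}{3}\,\gamma^2\,\varepsilon_{\rm target}$. The sketch below is our own numerical cross-check of these numbers; the approximation and the factor $4/3$ are assumptions of this check, not values taken from the paper's GALPROP runs.

```python
# Cross-check of the quoted IC break energies from the electron spectral break,
# using the (assumed) Thomson-regime mean scattered energy
# E_gamma ~ (4/3) * gamma^2 * eps_target.

M_E_EV = 511e3  # electron rest energy in eV

def ic_break_energy_ev(electron_energy_ev, target_photon_ev):
    """Mean up-scattered photon energy for electrons at the spectral break."""
    gamma = electron_energy_ev / M_E_EV
    return (4.0 / 3.0) * gamma**2 * target_photon_ev

E_e = 2.2e9  # Fermi/LAT electron break energy in eV (value from the text)
isrf = {"CMB": 1e-3, "dust": 1e-2, "star light": 1.0}  # target photon energies, eV
breaks_kev = {k: ic_break_energy_ev(E_e, eps) / 1e3 for k, eps in isrf.items()}
# breaks_kev is roughly {"CMB": 25, "dust": 250, "star light": 2.5e4} (keV),
# matching the ~25 keV, ~250 keV, and ~25 MeV breaks quoted above.
```
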
%
We also notice that the photon spectrum curvature maximum can be shifted from around 20\,MeV to around 3--5\,MeV by increasing the optical ISRF by a factor of 10 (\texttt{10$\times$opt}), which also increases the flux by at least a factor of $5$.
%
However, such a model might fall short in describing the absolute flux, especially below 4\,MeV, and shows a steeper spectrum in our analysis band than what has previously been measured.
%
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth,trim=1.25in 0.2in 1.55in 0.2in, clip=true]{Backprojection_residuals_SPI_0p5-8p0MeV_wexpo.pdf}
\caption{Count residuals projected back onto the sky as a function of energy. \emph{Top}: Background-only fits with consistent positive residuals along the Galactic plane and centre. \emph{Bottom:} Background plus IC template map fits, with the exposure map indicated. Shown are levels where the exposure drops to 50\,\% and 10\,\% of the maximum, respectively.}
\label{fig:residuals}
\end{figure*}
\section{Fit quality and systematic uncertainties}\label{sec:systematics}
\subsection{Fit quality}\label{sec:fit_quality}
%
We judged the adequacy of our maximum likelihood fits in each energy bin by the shape and distribution of the normalised residuals, $r = (d-m)/\sqrt{m}$, with data $d$ and model $m$, as a function of time (pointing).
%
To a lesser extent, mainly because the value has no proper meaning in this context but is frequently used in the literature, we considered the reduced $\chi^2$ value of our fits, $\chi^2/\mrm{dof} = \sum_i r_i^2 /\mrm{dof}$.
%
We refer the reader to \citet{Andrae2010_chi2} for why the use of $\chi^2$ can be misleading in general, and in particular in the context of this work.
A `bad fit' would be immediately seen in the residuals -- even though $\chi^2/\mrm{dof}$ might be close to the optimal value of $1.0$.
%
For example, individual outliers of even $50\sigma$ would still result in a reduced $\chi^2$ value close to $1.0$ but would distort the entire fit results.
%
Likewise, in the case of Poisson statistics, an apparently large or small reduced $\chi^2$ value should not automatically be considered `bad' because it is only a derived quantity and not directly tied to the actual data-generating process.
%
Therefore, for a `good fit' we demand the temporal sequence of residuals to show no individually strong outliers ($\gtrsim 10\sigma$) and no clustered weak outliers (many neighbouring values above or below the mean).
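These residual-based criteria can be sketched in code. The $10\sigma$ outlier threshold mirrors the text; the run-length cut used to flag `clustered weak outliers' is an illustrative choice of ours, not a value specified in the paper.

```python
import numpy as np

def normalised_residuals(d, m):
    """r = (d - m) / sqrt(m) per pointing, as defined in the text."""
    d, m = np.asarray(d, float), np.asarray(m, float)
    return (d - m) / np.sqrt(m)

def fit_is_adequate(d, m, outlier_sigma=10.0, max_run=50):
    """No single strong outlier and no long same-sign residual runs (assumed cut)."""
    r = normalised_residuals(d, m)
    if np.any(np.abs(r) > outlier_sigma):
        return False
    # longest run of residuals on the same side of the mean
    signs = np.sign(r)
    run, longest = 1, 1
    for a, b in zip(signs[:-1], signs[1:]):
        run = run + 1 if (a == b and a != 0) else 1
        longest = max(longest, run)
    return longest <= max_run

def reduced_chi2(d, m, dof):
    """Reported for comparison only; see the caveats discussed above."""
    return float(np.sum(normalised_residuals(d, m) ** 2) / dof)
```
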
In Fig.\,\ref{fig:fit_residuals} we show as an example the complete sequence of residuals of the camera combined and all individual detectors for the energy range 6--8\,MeV.
%
The reduced $\chi^2$ value of this fit evaluates to $1.00383$ with $581390$ dof.
%
Since we find no remaining structure in these residuals, we deem this fit adequate.
%
This is also true for the remaining energy bins analysed in this work.
\subsection{Background systematics}\label{sec:background_systematics}
For each energy bin we calculated the standard deviation of flux estimates that follow $\Delta\mrm{AIC} \leq n_{\rm par}(\Delta\mrm{AIC} = 0)$ to estimate our systematic uncertainties.
%
This inequality still demands that the fit must be `good' in the terms described above (Sect.\,\ref{sec:fit_quality}) but allows the fitted parameter of interest (the amplitude of the sky model) to vary within a reasonable range.
%
It does not describe another statistical uncertainty because the number of total parameters is increased when a new, smaller time variability scale is introduced.
%
The likelihood would always increase towards a `better' fit, which is why we used the AIC again to take the changing number of dof into account.
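As an illustration of this selection procedure, the sketch below assumes the common convention $\mathrm{AIC} = 2k - 2\ln\mathcal{L}$ (which may differ from Eq.\,\ref{eq:AIC} by constants) and uses made-up scan results; it picks the optimal timescale and estimates the systematic uncertainty as the standard deviation of fluxes whose $\Delta\mathrm{AIC}$ is below the number of parameters at the optimum.

```python
import numpy as np

def aic(log_like, n_params):
    # assumed convention: AIC = 2k - 2 ln L
    return 2.0 * n_params - 2.0 * log_like

def select_timescale_and_systematics(fits):
    """fits maps a timescale (days) to (log_likelihood, n_params, flux)."""
    aics = {t: aic(ll, k) for t, (ll, k, _) in fits.items()}
    t_opt = min(aics, key=aics.get)
    aic_opt, n_par_opt = aics[t_opt], fits[t_opt][1]
    # fluxes of all configurations with Delta(AIC) <= n_par at the optimum
    accepted = [f for t, (ll, k, f) in fits.items()
                if aics[t] - aic_opt <= n_par_opt]
    return t_opt, float(np.std(accepted))

# hypothetical scan results: timescale -> (log-likelihood, n_params, flux)
fits = {1: (-1000.0, 900, 1.10), 3: (-1200.0, 500, 1.05),
        6: (-1400.0, 250, 1.00), 12: (-1700.0, 130, 0.90)}
t_opt, sys_unc = select_timescale_and_systematics(fits)
```
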
Since the pointing-to-pointing variation in our background model is fixed by an onboard monitor \citep{Siegert2019_SPIBG}, and is therefore not entirely perfect, the background is re-scaled (fitted) according to the selected time nodes (Sect.\,\ref{sec:background}).
%
Because the sky components are either localised (point sources) or show gradients (diffuse emission), it is insufficient to scale this background model once for the entire dataset; it requires additional time nodes.
%
This was realised in the maximum likelihood fit via the introduction of the background variability timescales, $t'$, or, equivalently, more background parameters, $\theta_{k,t'}$.
%
This re-scaling depends on energy, bin size, flux, and time (pointing) because different source strengths determine the total counts and because the INTEGRAL observation scheme is not a survey but pointed according to granted proposals.
%
We show two examples in Fig.\,\ref{fig:systematics} of how the AIC changes as a function of the background variability timescale and how the systematic uncertainties are estimated from this search.
%
The energy bin from 2000--2440\,keV is dominated by statistical uncertainties, meaning that many background model configurations, even unlikely ones, result in the same flux estimate, given the same spatial template.
%
From 4418 to 5945\,keV, the systematic uncertainty is of the order of the statistical uncertainty because the background variability allows a larger range of flux estimates.
\begin{figure*}[!t]
\centering
\includegraphics[width=\columnwidth,trim=0.1in 1.0in 1.2in 1.2in, clip=true]{spectral_extraction_Dg1Dg205_ics.pdf}
\includegraphics[width=\columnwidth,trim=0.1in 1.0in 1.2in 1.2in, clip=true]{spectral_extraction_Dg1Dg205_shi.pdf}
\caption{Comparison of fitted IC source fluxes with GALPROP model $\delta_1 = \delta_2 = 0.5$ (left) and the same template maps but shifted in latitude by $+25^{\circ}$ (right). The expected spectrum is shown as a black curve and the extracted fluxes with crosses. The insets show a representative template map, with the unshifted map as contours in the right inset. No excess is found for the shifted templates, consolidating the signal found from the Galactic plane.}
\label{fig:spectrum_comparison_shifted}
\end{figure*}
\subsection{Source systematics}\label{sec:source_systematics}
%
We used the different emission templates described in Sect.\,\ref{sec:sky_maps} to estimate another source of systematic uncertainties from the IC emission itself.
%
Because the emission is expected to be weak, we refrained from extensive parameter scans and instead used a set of parameters that we explored within their uncertainties.
%
Our results and extracted fluxes can thus be used in follow-up studies to constrain the CR propagation parameters.
%
In total, we tested six different setups, one best-fit model from Fermi/LAT analyses, $\mrm{^SL ^Z4 ^R20 ^T150 ^C5}$ \citep{Ackermann2012_FermiLATGeV}, and five variants of the combined study from Voyager, Fermi/LAT, and the Alpha Magnetic Spectrometer experiment (AMS-02) data from \citet{Bisschoff2019_Voyager1CR}.
%
We list the systematics according to different spatial models in Table\,\ref{tab:fitted_parameters}.
\section{Results}\label{sec:results}
%
\subsection{Spatial residuals}\label{sec:residuals}
%
For a visual verification that we indeed measured emission from the Galactic plane, we fitted a background-only model to the data.
%
The residuals of these fits are projected back onto the celestial sphere such that we obtain an image of where the residual counts are found.
%
We caution that this is not an image reconstruction, nor should individual features be over-interpreted:
%
The backward application of the coded-mask response to the residual counts is not unique and is limited by the source strength.
%
If positive (or negative) regions consistently cluster in these residual images in the same areas as a function of energy, we can conclude that the measured fluxes are less likely to be an instrumental artefact.
%
If instrumental background lines are not modelled properly, they will appear as residuals in these images but be restricted to one particular energy.
%
Figure\,\ref{fig:residuals} shows the residuals of a background-only fit and the changed appearance after including the IC template maps.
%
We find positive residuals clustered in the region of the Galactic centre and disk for all energies.
%
The magnitude of the residuals decreases with energy, as expected from the power-law behaviour of the IC emission.
%
The residuals that include the IC template maps are devoid of the central enhancement and show a wreath-like pattern.
%
This originates from large gradients in the exposure map, dropping from long observed regions to nearly zero within a few degrees.
%
We conducted an additional test for the detection of diffuse emission from 0.5 to 8.0\,MeV by altering the IC sky model.
%
If the emission were due to an instrumental effect rather than originating from the Galactic plane, the template map would have no impact on the fit, and a spectrum similar to that of the background would result regardless of the map used.
%
We tested such a scenario by shifting the IC template maps for each energy bin by $+20^{\circ}$ in latitude and repeated the fit.
%
The resulting spectra for both cases are shown in Fig.\,\ref{fig:spectrum_comparison_shifted}.
%
Clearly, the spectrum follows a power-law shape for the template centred on the Galactic plane and is consistent with zero flux for the shifted template.
%
We conclude that there is indeed diffuse emission detected by SPI in the Galactic plane up to 8\,MeV.
\subsection{Spectrum}\label{sec:spectrum}
%
In Fig.\,\ref{fig:spectrum} we show the extracted data points from our analysis of the IC emission.
%
As expected, the \nuc{Al}{26} line at 1.8\,MeV has no spatial component following the IC morphology, and we provide an upper limit.
%
For a visual comparison to the 20-year-old COMPTEL data points \citep{Strong1999_COMPTEL_MeV}, we binned our flux data points to a minimum signal-to-noise ratio of $6$.
%
We note that a comparison of `extracted fluxes' from different instruments without taking the spectral response into account can be (and most of the time is) misleading.
%
Nevertheless, it can provide a general overview of the consistency between the measurements.
%
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth,trim=0.0in 0.0in 0.0in 0.0in, clip=true]{SPI_diffuse_0p5-8p0MeV_COMPTEL_int_voyager_Dg1Dg205_pl_cpl.pdf}
\caption{Spectrum of the analysed region between 0.5 and 8\,MeV. The orange data points show the extracted fluxes from the energy-dependent IC template $\delta_1 = \delta_2 = 0.5$, and the fuchsia points a re-binning to a minimum signal-to-noise ratio of $6$. The fitted power-law spectrum ($F_{0.5-8.0} = (5.7 \pm 0.8) \times 10^{-8}\,\mrm{erg\,cm^{-2}\,s^{-1}}$, $\alpha = -1.39 \pm 0.09$) is shown with its 68.3, 95.4, and 99.7 percentile bands in violet, and the fitted cutoff power law with $E_C = 4.9 \pm 1.4$\,MeV in red. We compare the fluxes of this work with historic measurements by COMPTEL \citep[green;][]{Strong1999_COMPTEL_MeV}.
}
\label{fig:spectrum}
\end{figure}
%
We find excellent agreement between SPI and COMPTEL in the 1--8\,MeV overlap region and show that, after 16 years in space, SPI's diffuse continuum measurements have smaller uncertainties than those of COMPTEL.
We fitted the spectrum phenomenologically with a power law, $C_0 (E/\mrm{1\,MeV})^{\alpha}$.
%
Our best-fit parameters are a flux density of $C_0 = (3.1 \pm 0.3) \times 10^{-6}\,\mrm{ph\,cm^{-2}\,s^{-1}\,keV^{-1}\,sr^{-1}}$ at 1\,MeV and an index of $\alpha = -1.39 \pm 0.09$.
%
The spectral index is consistent with the work by \citet{Bouchet2011_diffuseCR}, who found an index of $1.4$--$1.5$ between $0.02$ and $2.4$\,MeV.
%
Extrapolating the fitted power law to the COMPTEL band up to 30\,MeV and propagating the spectral uncertainties also shows a general agreement (violet band).
%
Using instead a cutoff power law with a normal prior for the break energy of $4.0 \pm 1.8$\,MeV \citep[cf. Table 4 in][]{Bouchet2011_diffuseCR} leads to slightly larger flux values between 1 and 4\,MeV and to slightly smaller fluxes ($\lesssim 10\,\%$ difference in both cases) elsewhere (red band).
%
The resulting power-law index is then $-0.95 \pm 0.16$ and the fitted break energy $4.9 \pm 1.4$\,MeV.
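In log-log space the phenomenological power law $C_0 (E/\mathrm{1\,MeV})^{\alpha}$ is linear, so the fit can be reproduced schematically with a least-squares sketch. The flux values below are synthetic placeholders, not the SPI data points, and the noise-free setup only demonstrates the mechanics.

```python
import numpy as np

def power_law(E_mev, C0, alpha):
    """C0 * (E / 1 MeV)^alpha, E in MeV."""
    return C0 * (E_mev / 1.0) ** alpha

E = np.array([0.6, 1.0, 2.0, 4.0, 7.0])   # MeV
F = power_law(E, 3.1e-6, -1.39)           # synthetic, noise-free fluxes

# linear least squares on log F = log C0 + alpha * log E
alpha_fit, logC0_fit = np.polyfit(np.log(E), np.log(F), 1)
C0_fit = np.exp(logC0_fit)
```

A cutoff power law, $C_0 (E/\mathrm{1\,MeV})^{\alpha} \exp(-E/E_C)$, is no longer linear in log-log space and would require a nonlinear optimiser instead.
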
We note that SPI also detects photons above 8\,MeV; however, the official tools do not provide an imaging or spectral response at these energies.
%
On the other hand, the SPI spectrum below 0.5\,MeV is already well determined, and we refer the reader to \citet{Bouchet2011_diffuseCR} and \citet{Siegert2021_BDHanalysis} for details about this low-energy band.
%
Extending the spectrum in either direction is beyond the scope of this paper.
%
As an alternative to a generic power law, we compare the extracted data points from each GALPROP IC morphology to the expected absolute model in Fig.\,\ref{fig:spectrum_IC_models}.
%
In this way, we can determine which propagation model provides the best absolute normalisation when compared to SPI data.
%
The magnitudes of the systematic uncertainties were calculated as the mean absolute difference from the extracted flux values among the tested IC morphologies (thin error bars).
%
The flux values (crosses) in Fig.\,\ref{fig:spectrum_IC_models} and their statistical uncertainties (thick error bars) are the means of the individually extracted fluxes (see also Table\,\ref{tab:fitted_parameters}).
%
At the spectral level, excessively extreme variations in the diffusive properties, as in the model $\delta_1=0$, appear in tension with the data.
%
All other models seem to lead to quasi-parallel spectra, in broad agreement with the deduced shape.
\begin{table*}[!ht]
\centering
\begin{tabular}{l|cccccc|c}
\hline\hline
Model & $C_0$ & $\alpha$ & $F_{0.5-0.9}$ & $F_{0.9-1.8}$ & $F_{1.8-3.3}$ & $F_{3.3-8.0}$ & $F_{0.5-8.0}$ \\
\hline
\texttt{Voyager} baseline & $8.3 \pm 0.6$ & $1.42 \pm 0.08$ & $0.53 \pm 0.05$ & $1.14 \pm 0.10$ & $1.31 \pm 0.15$ & $3.1 \pm 0.5$ & $6.1 \pm 0.8$ \\
\texttt{Voyager} ($\delta_1 = 0$) & $10.4 \pm 0.9$ & $1.39 \pm 0.08$ & $0.65 \pm 0.06$ & $1.47 \pm 0.12$ & $1.73 \pm 0.19$ & $4.1 \pm 0.7$ & $8.0 \pm 1.0$ \\
\textbf{\texttt{Voyager}} ($\bm{\delta_1 = \delta_2 = 0.5}$) & $\bm{7.6 \pm 0.7}$ & $\bm{1.39 \pm 0.09}$ & $\bm{0.48 \pm 0.05}$ & $\bm{1.06 \pm 0.09}$ & $\bm{1.24 \pm 0.16}$ & $\bm{2.9 \pm 0.5}$ & $\bm{5.7 \pm 0.8}$ \\
\texttt{Voyager} (\texttt{opt$\times$10}) & $8.8 \pm 0.7$ & $1.34 \pm 0.08$ & $0.54 \pm 0.05$ & $1.23 \pm 0.10$ & $1.49 \pm 0.17$ & $3.7 \pm 0.6$ & $6.9 \pm 0.9$ \\
\texttt{Voyager} (\texttt{thick halo}) & $10.7 \pm 0.9$ & $1.45 \pm 0.08$ & $0.69 \pm 0.06$ & $1.46 \pm 0.12$ & $1.65 \pm 0.20$ & $3.7 \pm 0.6$ & $7.5 \pm 1.0$ \\
\texttt{Fermi/LAT} baseline & $8.6 \pm 0.7$ & $1.45 \pm 0.08$ & $0.55 \pm 0.05$ & $1.18 \pm 0.10$ & $1.33 \pm 0.15$ & $3.0 \pm 0.5$ & $6.0 \pm 0.8$ \\
\hline
\texttt{Sky systematics} & $1.9$ & $0.10$ & $0.14$ & $0.25$ & $0.29$ & $0.9$ & $1.7$ \\
\hline
\hline
\texttt{Extracted fluxes} & $-$ & $-$ & $6.43 \pm 0.99$ & $3.10 \pm 0.38$ & $1.26 \pm 0.14$ & $0.38 \pm 0.06$ & $1.16 \pm 0.08$ \\
\texttt{Background systematics} & $-$ & $-$ & $0.49$ & $0.49$ & $0.34$ & $0.1$ & $1.4$ \\
\hline
\end{tabular}
\caption{Fitted parameters and estimated fluxes for different morphologies to describe the IC scattering spectrum in the Milky Way. For the top section the units are, from left to right, $10^{-6}\,\mrm{ph\,cm^{-2}\,s^{-1}\,keV^{-1}}$, $1$, and $10^{-8}\,\mrm{erg\,cm^{-2}\,s^{-1}}$ for the fluxes in the bands 514--850, 850--1813, 1813--3283, 3283--8000, and 514--8000\,keV, respectively. Background systematics are estimated for the flux extraction (bottom section) in units of $10^{-6}\,\mrm{ph\,cm^{-2}\,s^{-1}\,keV^{-1}\,sr^{-1}}$. The normalisation for the solid angle in this analysis is $2.43\,\mrm{sr}$.}
\label{tab:fitted_parameters}
\end{table*}
We note that the default predictions are about a factor of 2--3 below the measured fluxes, with increasing discrepancy towards higher energies.
%
However, the plot also shows that it is hard to pin down the origin of the mismatch:
%
Variations in the diffusion properties, variations in the photon targets by a factor of 3--5, or variations in the CR source spectra by a similar factor (not shown) could be involved in explaining the mismatch.
%
\citet{Orlando2018_CR_multiwavelength} argue, however, that this last option, also invoked in \citet{Bouchet2011_diffuseCR}, would lead to an overproduction of synchrotron emission, which disfavours such a hypothesis.
%
Also, there is almost no sensitivity to the halo thickness (\texttt{thick halo}).
%
The best match is found for the model variant with $\delta_1 = \delta_2 = 0.5$ that assumes a constant diffusion coefficient index for the entire CR electron spectrum.
%
Finally, we note that part of the emission could be due to an unresolved population of Galactic sources, indistinguishable from a continuum emission.
%
Such sources might show spectra similar to the `hard tails' that have recently been detected in a few X-ray binaries \citep[e.g.][]{Cangemi2021_CygX1_hardtail,Cangemi2021_CygX3_hardtail}.
%
Emission up to $\sim 500$\,keV and beyond has been observed in individual sources, which could flatten out the cumulative spectrum of a population of weak sources.
%
In terms of the energy flux, $E^2F_E$, this could lead to a peak in the unresolved point source spectrum around 0.5--3.0\,MeV, depending on the objects' properties and their luminosity function in the Milky Way.
Evaluating the different model variants, we find that the systematic uncertainties due to the background variability range between 5\,\% (0.5--0.85\,MeV) and 20\,\% (3.3--8.0\,MeV).
%
The systematic uncertainty from the IC morphology ranges between 20 and 30\,\%.
%
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth,trim=0.0in 0.0in 0.0in 0.0in, clip=true]{SPI_0p5-8p0MeV_GALPROP_IC_comparison_allmodels_per_sr.pdf}
\caption{Extracted spectrum (black crosses) with statistical (thick error bars) and systematic (thin error bars) uncertainties, as well as a generic power-law fit (band).}
\label{fig:spectrum_IC_models}
\end{figure}
\section{Summary, discussion, and conclusions}\label{sec:summary}
%
For the first time in 20 years, we have provided a description of the Galactic diffuse $\gamma$-ray spectrum up to 8\,MeV.
%
Our results are compatible with previous estimates from COMPTEL and finally supersede them in precision, as measured by the signal-to-noise ratio.
%
The spectrum is adequately described empirically by a power law with an index of $-1.39 \pm 0.09$ and a flux of $(5.7 \pm 0.8_{\rm stat} \pm 1.7_{\rm syst}) \times 10^{-8}\,\mrm{erg\,cm^{-2}\,s^{-1}}$ between 0.5 and 8.0\,MeV.
%
Our general finding is in line with \citet{Bouchet2011_diffuseCR}, showing the need for a continuum emission broadly peaking in the inner Galaxy and compatible in spectrum with the expected IC scattering of CR electrons onto the ISRF.
%
Such a model, however, overshoots baseline expectations of state-of-the-art models calibrated to local $\mrm{Voyager}$\,1 and AMS-02 data by a factor of 2--3.
%
With dedicated GALPROP runs, we discussed how enhanced ISRF in the Galactic centre or modified diffusion may be responsible for a similar discrepancy.
%
A propagation model with a single diffusion index of $\delta_1 = \delta_2 = 0.5$ provides the best description of the SPI data in the photon energy range between 0.5 and 8.0\,MeV.
%
Our analysis also includes an assessment of systematic uncertainties based on realistic IC model morphologies, which leads to a systematic flux uncertainty from the IC spatial distribution of 20--30\,\%.
%
An $\mathcal{O}(10\%)$ sub-leading bremsstrahlung component with a less steep electron spectrum \citep{Strong2000_DiffuseContinuum,Strong2005_gammaconti,Bouchet2011_diffuseCR,Ackermann2012_FermiLATGeV} can further improve the agreement between CR propagation model expectations and data.
%
Our improved estimates of the MeV spectrum in the Milky Way for broadband $\gamma$-ray analysis will provide more stringent estimates of the Galactic electron population at GeV energies.
Nonetheless, a better sensitivity in the MeV range, and therefore a future mission covering the MeV sensitivity gap, such as the recently selected small explorer mission COSI, the Compton Spectrometer and Imager\footnote{\url{https://www.nasa.gov/press-release/nasa-selects-gamma-ray-telescope-to-chart-milky-way-evolution}} \citep{Tomsick2019_COSI}, will shed further light on the possibilities of additional continuum sources in lieu of true diffuse emission.
%
This will be of relevance not only for the astrophysical study of Galactic CR populations, but also for searches of more exotic, beyond-the-standard-model emission processes, such as from dark matter candidates \citep{AlvesBatista2021_EuCAPT_WP}.
The spectral data points and response are available in an online repository\footnote{\url{https://doi.org/10.5281/zenodo.5618448}}.
%
We encourage the use of this renewed dataset from INTEGRAL/SPI for comparisons to Galactic emission processes.
\begin{acknowledgements}
T.S.~is supported by the German Research Foundation (DFG-Forschungsstipendium SI 2502/3-1) and acknowledges support by the Bundesministerium f\"ur Wirtschaft und Energie via the Deutsches Zentrum f\"ur Luft- und Raumfahrt (DLR) under contract number 50 OX 2201. F.C., J.B.~and P.D.S. acknowledge support by the ``Agence Nationale de la Recherche'', grant n. ANR-19-CE31-0005-01 (PI: F. Calore).
\end{acknowledgements}
\bibliographystyle{aa}
\section*{Abstract}
The real-time analysis of infectious disease surveillance data, e.g., in the form of a time-series of reported cases or fatalities, is essential in obtaining situational awareness about the current dynamics of an adverse health event such as the COVID-19 pandemic. This real-time analysis is complicated by reporting delays that lead to underreporting of the number of events for the most recent time points (e.g., days or weeks). This can lead to misconceptions by the interpreter, e.g., the media or the public, as was the case with the time-series of reported fatalities during the COVID-19 pandemic in Sweden. Nowcasting methods provide real-time estimates of the complete number of events using the incomplete time-series of currently reported events by using information about the reporting delays from the past. Here, we consider nowcasting the number of COVID-19-related fatalities in Sweden. We propose a flexible Bayesian approach, extending existing nowcasting methods by incorporating regression components to accommodate additional information provided by leading indicators such as time-series of the number of reported cases and ICU admissions. By a retrospective evaluation, we show that the inclusion of ICU admissions as a leading signal improved the nowcasting performance of case fatalities for COVID-19 in Sweden compared to existing methods.
\section*{Author summary}
Nowcasting methods are an essential tool to provide situational awareness in a pandemic. The methods aim to provide real-time estimates of the complete number of events using the incomplete time-series of currently reported events and the information about the reporting delays from the past. In this paper, we consider nowcasting the number of COVID-19 related fatalities in Sweden. We propose a flexible Bayesian approach, extending existing nowcasting methods by incorporating regression components to accommodate additional information provided by leading indicators such as time-series of the number of reported cases and ICU admissions. We use a retrospective evaluation covering the second (alpha) and third (delta) wave of COVID-19 in Sweden to assess the performance of the proposed method. We show that the inclusion of ICU admissions as a regression component improved the nowcasting performance (measured by the CRPS score) of case fatalities for COVID-19 in Sweden by 4.2\% compared to an established method.
\newpage
\section*{Introduction}
The real-time analysis of infectious disease surveillance data, e.g. in the form of time-series of reported cases or fatalities, is one of the essential components in shaping the response during infectious disease outbreaks such as major food-borne outbreaks or the COVID-19 pandemic. Typically, public health agencies and governments use this type of monitoring to assess the disease dynamics and plan and assess the effectiveness of preventive actions \cite{Metcalf_2020, wu_2021}. Such real-time analysis is complicated by reporting delays that give rise to \textit{occurred-but-not-yet-reported} events which may lead to underestimation of the complete number of reported events \cite{lawless_1994}. Fig~\ref{fig:rep_vs_unrep} illustrates the problem with data of Swedish COVID-19-related fatalities as of 2022-02-01, where the reported number of fatalities per day shows a declining trend. With data available two months later \cite{FHM}, it is seen that the number of fatalities per day was at the time increasing.
\begin{figure}[!h]
\includegraphics[width=1\textwidth]{figs/fig1.png}
\vspace{0.1mm}
\caption{{\bf Daily COVID-19 fatalities in Sweden.}
Reported (black bars) and unreported (grey bars) number of daily fatalities as of 2022-02-01. The reported number of events show a declining trend when in actuality (known in hindsight) it was increasing.}
\label{fig:rep_vs_unrep}
\end{figure}
Nowcasting methods \cite{donker_etal2011,hohle_2014, mcgough_etal2020} tackle this problem by providing real-time estimates of the complete number of events using the incomplete time-series of currently observed events and information about the reporting delay from the past.
The methods have connections to insurance claims-reserving \cite{kaminsky_1987}, and their epidemiological applications trace back to HIV modelling \cite{kalbfleisch1989inference, zeger_etal1989, lawless_1994}. Nowcasting methods have been used in COVID-19 analyses for daily infections \cite{greene_2021, Tenglong_2021, seaman_2022} and fatalities \cite{Schneble_2020, altmejd_2020, bird_2021}. A Bayesian approach to nowcasting, which constitutes the foundation of our method, was developed by Höhle and an der Heiden \cite{hohle_2014} and later extended by G{\"u}nther et al.~\cite{gunther_2020} and McGough et al.~\cite{mcgough_etal2020}. Most nowcasting methods focus on estimating the reporting delay distribution; however, an epidemic contains a temporal dependence since it --depending on the mode of transmission-- adheres to certain \say{laws}, e.g.\ contact behavior, which only changes slowly. Taking the temporal dependence of the underlying disease transmission into account has been shown to improve nowcasting performance \cite{mcgough_etal2020, gunther_2020}.
Another approach to nowcasting, not considering the reporting delay distribution, is to use other data sources that are sufficiently correlated with the time series of interests, e.g.\ the Machine Learning approach by Peng et al.\cite{peng_2021}.
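The core mechanic shared by the delay-distribution-based methods above can be illustrated with a deliberately simplified, non-Bayesian sketch (this is not the model developed in this paper): complete counts for the most recent days are estimated by inflating the partially reported counts with the empirical fraction reported within the observed delay, estimated from fully observed past rows of the reporting triangle.

```python
import numpy as np

def simple_nowcast(n, D):
    """Chain-ladder-style completion of the last D days (illustrative only).

    n[t, d] holds the number of events occurring on day t that were reported
    with delay d; not-yet-observable cells must be zero.
    """
    T = n.shape[0]
    complete = n[: T - D].sum(axis=1)             # fully observed event days
    cum = n[: T - D].cumsum(axis=1).sum(axis=0)   # cumulative counts per delay
    frac_reported = cum / complete.sum()          # empirical P(delay <= d)
    nowcast = []
    for t in range(T - D, T):
        d_max = T - 1 - t                         # largest delay observed so far
        reported = n[t, : d_max + 1].sum()
        nowcast.append(reported / frac_reported[d_max])
    return np.array(nowcast)
```

A stable delay distribution is the key assumption here; the Bayesian model of this paper instead allows the delay distribution to change over time and borrows strength from leading indicators.
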
Our approach for nowcasting Swedish COVID-19 fatalities is based on a Bayesian hierarchical model that can account for temporal changes in the reporting delay distribution and, as an extension to existing methods~\cite{hohle_2014, gunther_2020}, incorporates a regression component of additional correlated data streams. Here, we consider the following two additional data streams; the time-series of the number of Intensive Care Unit (ICU) admissions and reported cases. The disease stages (infected, hospital, ICU, death) have a time order, and the number of new entries in one of the earlier compartments can help estimate what will happen for the later stages. As the additional data streams are assumed to be ahead in time, we consider them as leading indicators for the event of interest.
In this paper, we present methodological details of our approach and compare the results to existing nowcasting methods to illustrate the implication of incorporating additional data streams associated with the number of fatalities. We show with a retrospective evaluation of our method that nowcasting with leading indicators can improve performance compared to existing methods.
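The retrospective evaluation mentioned above scores probabilistic nowcasts; one common choice, also referenced in the author summary, is the continuous ranked probability score (CRPS). For sample-based forecasts a standard estimator is $\mathrm{CRPS}(F, y) \approx \overline{|X - y|} - \tfrac{1}{2}\,\overline{|X - X'|}$ with $X, X' \sim F$. The sketch below is generic scoring code, not the paper's evaluation pipeline.

```python
import numpy as np

def crps_samples(samples, y):
    """Sample-based CRPS estimator: E|X - y| - 0.5 * E|X - X'|."""
    samples = np.asarray(samples, float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return float(term1 - term2)
```

For a degenerate (point) forecast the score reduces to the absolute error, and lower values indicate better probabilistic calibration and sharpness.
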
\section*{Materials and methods}
\subsection*{Data}
The surveillance data used for the analysis in this paper are daily counts of fatalities, ICU admissions, and reported cases of people with a laboratory-confirmed SARS-CoV-2 infection in Sweden. The chosen period ranges from 2020-10-20 to 2021-05-21 and contains 117 reporting days (Tuesday to Friday, excluding public holidays). During this period, there were 951 646 reported cases, 4 734 ICU admissions and 8 656 fatalities. The evaluation period covers Sweden's second (alpha) and third (delta) wave of COVID-19-related fatalities. In addition, this period also covers the introduction of vaccination, which meant a change in the association between reported cases or ICU admissions and the fatalities. The time series of the number of reported cases, ICU admissions, and deaths can be seen in Fig~\ref{fig:timeseries}. The figure shows that the rise and fall of the three time series follow a similar time trend, with some time delay, during the first wave. However, in the second wave, the relative association between the fatalities and the other disease stages becomes less substantial, the main reason being the introduction of vaccination, starting 2020-12-27 in Sweden.
\begin{figure}[!h]
\includegraphics[width=1\textwidth]{figs/fig2.png}
\caption{{\bf Reported cases, ICU admissions and fatalities with COVID-19 in Sweden.}
The period covers the second (alpha) and third (delta) wave and the start of vaccination in Dec 2020. Each time series is shown with a 3-week centered rolling average and scaled by its maximum value.}
\label{fig:timeseries}
\end{figure}
The data used in our analysis are publicly available from the website of the Public Health Agency of Sweden~\cite{FHM}, where new reports have been published daily from Tuesday to Friday (excluding public holidays). The aggregated daily counts are updated retrospectively at each reporting date. As the case fatalities are associated with a reporting delay, the published time series of reported COVID-19 fatalities will always show a declining trend towards the present (see Fig~\ref{fig:rep_vs_unrep} for an illustrative example). The reporting delay cannot be observed in a single published report but can be obtained by comparing the aggregated numbers of fatalities for each date across previously published reports.
\subsection*{Nowcasting}
The notation and methodological details of our approach follow closely the notation introduced in G{\"u}nther et al.~\cite{gunther_2020}. Let $n_{t,d}$ be the number of fatalities occurring on day $t=0,...,T$ and reported with a delay of $d=0,1,2,...$ days, such that the reporting occurs on day $t+d$. The goal of nowcasting is to infer the total number of fatalities $N_t$ of day $t$ based on the information available on the current day $T \ge t$. The sum $N_t$ can be written as
\begin{eqnarray*}
N_t =\sum_{d=0}^{\infty}n_{t,d}= \sum_{d=0}^{T-t}n_{t,d} + \sum_{d=T-t+1}^{\infty}n_{t,d},
\end{eqnarray*}
where the first sum is observed and the second sum is yet unknown. This can be illustrated by the so-called reporting triangle (Fig~\ref{fig:rep_tri}), where the upper left triangle contains the number of reported fatalities and the lower right triangle the number of occurred-but-not-yet-reported events with a maximum delay of \textit{D} days. The upper triangle carries the information about the reporting delay from the past, and the lower triangle is what is estimated with the nowcasting model.
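As an illustration, the reporting triangle can be assembled by differencing successive published report snapshots. The following is a minimal sketch with hypothetical example data, not code from our implementation:

```python
import numpy as np

def reporting_triangle(snapshots):
    """Build n[t, d] (deaths of day t reported with delay d) from successive
    report snapshots; snapshots[T][t] is the cumulative count for event day t
    as published on reporting day T (t, T = 0, ..., len(snapshots) - 1)."""
    T_max = len(snapshots) - 1
    n = np.zeros((T_max + 1, T_max + 1), dtype=int)
    for t in range(T_max + 1):
        for d in range(T_max - t + 1):
            prev = snapshots[t + d - 1][t] if d > 0 else 0
            n[t, d] = snapshots[t + d][t] - prev  # newly reported with delay d
    return n

# Toy example: three daily reports, each revising the counts of earlier days.
tri = reporting_triangle([[1], [3, 0], [4, 2, 1]])
# tri[0] = [1, 2, 1]: day 0 had 4 deaths, reported with delays 0, 1, 2.
```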
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{figs/fig3.png}
\caption{{\bf Reporting triangle for day \textit{T}.}
Green boxes (solid line), where $t \le T - D$, are the reported number of fatalities on day $T$ (today), considering a maximum delay of $D$ days. The red boxes (dashed line), corresponding to $t > T - D$, are the occurred-but-not-yet-reported number of events of day $t+D$.}
\label{fig:rep_tri}
\end{figure}
We let $\lambda_t$ denote the expected value of $N_t$, and $p_{t,d}$ denote the conditional probability of a fatality occurring on day $t$ being reported with a delay of $d$ days. Then, the number of events occurring on day $t$ with a delay of $d$ days is assumed to be negative binomial distributed
\begin{eqnarray*}
n_{t,d}|\lambda_t, p_{t,d} \sim \text{NB}(\lambda_t \cdot p_{t,d}, \phi),
\end{eqnarray*}
with mean $\lambda_t \cdot p_{t,d}$ and overdispersion parameter $\phi$. Hence, the nowcasting task can be seen as having two parts: (1) determine the expected value of the total number of fatalities and (2) determine the reporting delay distribution, to subsequently predict the $n_{t,d}$'s and finally compute the $N_t$'s.
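A minimal sketch of this observation model, assuming the common NB2 parameterization with variance mu + mu^2/phi (the parameter values below are illustrative, not our fitted ones):

```python
import numpy as np

def sample_n_td(lam_t, p_td, phi, rng):
    """Draw n_{t,d} ~ NB with mean mu = lam_t * p_td and overdispersion phi
    (variance mu + mu^2 / phi), via numpy's (n, p) parameterization."""
    mu = lam_t * p_td
    return rng.negative_binomial(phi, phi / (phi + mu))

rng = np.random.default_rng(1)
draws = np.array([sample_n_td(50.0, 0.3, 5.0, rng) for _ in range(20000)])
# Empirical mean is close to lam_t * p_td = 15; the variance exceeds the mean.
```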
\subsection*{Flexible Bayesian Nowcasting}
As described in the previous section, the nowcasting problem can be seen as a problem of the joint estimation of two models: (1) a model for the expected number of deaths over time, and (2) a model for the reporting delay distribution, which can also vary over time. Therefore, we let our model consist of two distinct components: (1) the underlying epidemic curve determining the expected number of fatalities $\lambda_t$ and (2) the reporting delay distribution determining $p_{t,d}$. We describe the structure of each in the following.
\subsubsection*{Component 1: The expected number of fatalities}\label{sec:mod_comp1}
Let $\lambda_t=\mathbbm{E}[N_t]$ denote the expected total number of fatalities occurring on day $t$.
We specify a baseline model for $\lambda_t$ as
\begin{eqnarray}\label{eq:model1}
\log(\lambda_t)|\lambda_{t-1} \sim N(\log(\lambda_{t-1}), \sigma^2),
\end{eqnarray}
where $t=1,...,T$. Time $t=0$ is assumed to be the start of the chosen observation period, e.g.\ the start of the pandemic. This approach of modelling $\lambda_t$ as a random walk on the log scale was proposed by McGough et al.~\cite{mcgough_etal2020} and G{\"u}nther et al.~\cite{gunther_2020}. Here, we will refer to it as model R.
An alternative to model R in Eq~\eqref{eq:model1} is to assume that we can predict the total number of fatalities with additional data streams associated with the event of interest. The additional data streams are assumed to be ahead in time compared to the time series of interest, e.g.\ due to the tracked event of the stream being at an earlier stage in a typical COVID-19 disease progression or because of a smaller reporting delay; e.g.\ the number of reported cases and hospitalizations, etc. Hence, we can use the additional data stream as a leading indicator in the nowcasting model. One approach is to consider the number of fatalities as some time-varying fraction of the numbers in those additional data streams. We denote the $i$'th of the $k$ leading indicators at time $t$ as $m_{i,t}$ and specify a regression-type model for $\lambda_t$ as follows:
\begin{equation}\label{eq:model3}
\log(\lambda_t)|m_{1,t},\dots,m_{k,t} \sim N\left(\beta_0 + \sum_{i=1}^k \beta_{i} m_{i,t}, \sigma^2\right),
\end{equation}
where $\beta_0$ is an intercept and $\beta_i$ denotes the additive effect of the $i$'th stream on $\log(\lambda_t)$. With this model specification, we assume a strong association between the case fatalities and the $k$ data streams suitably measured some days earlier. We will refer to this model as L($m_i$).
Furthermore, we propose another approach combining the random walk component of the model in Eq~\eqref{eq:model1} and the additional data streams of Eq~\eqref{eq:model3}. Here, we let the leading indicators be the relative change in, e.g., case reports or hospitalizations. In other words, we assume that if there is an increase in the leading indicator, we also expect an increase in the number of fatalities. An increase in case reports is not expected to give an instant increase in the number of deaths but rather one with some time delay; hence, as for the model in Eq~\eqref{eq:model3}, the leading indicators need to be specified with a suitable time lag. We specify this alternative model for $\lambda_t$ as
\begin{equation}\label{eq:model2}
\log(\lambda_t)|\lambda_{t-1}, m_{1,t},\dots,m_{k,t} \sim N\left(\log(\lambda_{t-1})+\sum_{i=1}^k\beta_i m_{i,t}, \sigma^2\right),
\end{equation}
where the $\beta_i$'s are again considered as regression coefficients for the leading indicator $m_i$. This approach combines an established method \cite{gunther_2020} with additional information that is informative of the events of interest. We note that when the $\beta$-coefficients of this model are zero, this model becomes identical to the model specified in Eq~\eqref{eq:model1}. This model will be referred to as RL($m_i$).
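The three mean specifications can be summarized side by side. The helper below is our own illustrative sketch (not part of the Stan implementation) and returns the mean of log(lambda_t) under each variant:

```python
import numpy as np

def mean_log_lambda(model, log_lam_prev=None, indicators=None, beta=None, beta0=0.0):
    """Mean of log(lambda_t) under the three variants described above:
    "R"  -> log(lambda_{t-1})                       (random walk)
    "L"  -> beta0 + sum_i beta_i * m_{i,t}          (leading indicators only)
    "RL" -> log(lambda_{t-1}) + sum_i beta_i * m_{i,t}."""
    if model == "R":
        return log_lam_prev
    if model == "L":
        return beta0 + float(np.dot(beta, indicators))
    if model == "RL":
        return log_lam_prev + float(np.dot(beta, indicators))
    raise ValueError(model)

# With all beta_i = 0, RL reduces to R, as noted in the text.
```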
\subsubsection*{Component 2: The reporting delay distribution}
The model for the reporting delay distribution at day $t$ specifies the probability of a reporting delay of $d$ days for a fatality occurring on day $t$. We denote this conditional probability
\begin{eqnarray*}\label{eq:p_td}
p_{t,d}= P(\text{delay}=d|\text{fatality day} = t).
\end{eqnarray*}
Similarly to G{\"u}nther et al.~\cite{gunther_2020}, we model the delay distribution as a discrete time hazard model $h_{t,d}=P(\text{delay}=d|\text{delay}\ge d, W_{t,d})$ as
\begin{equation}\label{eq:h_t}
\text{logit}(h_{t,d})=\gamma_d+W'_{t,d}\eta,
\end{equation}
where $d=0,\dots,D-1$ and $h_{t,D}=1$; $\gamma_d$ is a constant, $W_{t,d}$ is a vector of time- and delay-specific covariates, and $\eta$ contains the covariate effects. It can be shown how the reporting probabilities are derived from Eq~\eqref{eq:h_t}~\cite{gunther_2020}.
We use linear effects of time on the logit scale with break-points every two weeks before the current day to allow for changing dynamics in the reporting delay distribution over time. We also use a categorical weekday effect to account for the weekly structure of the reporting.
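The standard discrete-time hazard identity p_{t,d} = h_{t,d} * prod_{d'<d} (1 - h_{t,d'}) underlies the derivation of the reporting probabilities. A small sketch (hypothetical helper, not our implementation):

```python
import numpy as np

def delay_probs_from_hazards(h):
    """Convert discrete-time hazards h[d] = P(delay = d | delay >= d) into
    delay probabilities p[d] = h[d] * prod_{d' < d} (1 - h[d']).
    With h[-1] = 1 (maximum delay D), the probabilities sum to one."""
    h = np.asarray(h, dtype=float)
    surv = np.concatenate(([1.0], np.cumprod(1.0 - h[:-1])))  # P(delay >= d)
    return h * surv

p = delay_probs_from_hazards([0.5, 0.5, 1.0])
# p = [0.5, 0.25, 0.25], summing to one.
```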
\subsection*{Inference and implementation}
Inference for the hierarchical Bayesian nowcasting model is done by Markov chain Monte Carlo using R-Stan \cite{rstan_2020}, extending the work of G{\"u}nther et al.~\cite{gunther_2020}. In order to ensure reproducibility and transparency, the R code~\cite{r_2021} and data used for the analysis are available from \url{https://github.com/fannybergstrom/nowcasting\_covid19}.
\section*{Results}
\subsection*{Application to fatalities}\label{application}
We apply the nowcasting methods to reported COVID-19 fatalities in Sweden and let the number of reported cases and COVID-19-associated ICU admissions act as two leading indicators. In Sweden, the reporting of ICU admissions is also associated with a reporting delay, but a considerably shorter one than for the fatalities. We use model R as a benchmark model and compare it to the two alternative models using leading indicators: model L, where we let the leading indicator be the number of COVID-19-related ICU admissions, and model RL, including both the random walk component and a leading indicator, here being the relative weekly change in ICU admissions. We denote the leading indicator models as L(ICU) and RL(ICU). For the leading indicator time series, we use a seven-day centered rolling average to avoid the weekday effect of the reporting. The pre-specified lag between the fatalities and leading indicators is determined by fitting a linear time series model given the two model specifications of models L and RL, and choosing the lag providing the best fit. The period chosen for the time series model is 2020-04-01--2020-10-19, so as to use only information available prior to the evaluation period. We use a lag of 18 days for the reported cases and 14 days for the ICU admissions. The reporting probability is set to be zero on non-reporting days (Saturday--Monday and public holidays). Furthermore, for practical and robustness reasons, a maximum delay of $D=35$ days is considered. For the fatalities reported with a delay longer than the maximum, we set their delay to the upper limit of 35 days. There were 116 case fatalities reported with a delay longer than 35 days during the evaluation period.
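The lag-selection step can be sketched as a simple search over candidate lags. The helper below maximizes the R^2 of a linear fit of log fatalities on the lagged indicator; it only illustrates the idea, whereas the actual analysis fits models matching the L and RL specifications:

```python
import numpy as np

def best_lag(deaths, indicator, max_lag=30):
    """Return the lag (in days) at which a linear fit of log(deaths[t]) on
    indicator[t - lag] explains the most variance (highest R^2)."""
    y_full = np.log(np.asarray(deaths, dtype=float))
    x_full = np.asarray(indicator, dtype=float)
    scores = {}
    for lag in range(1, max_lag + 1):
        y, x = y_full[lag:], x_full[:-lag]
        X = np.column_stack([np.ones_like(x), x])  # intercept + indicator
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        scores[lag] = 1.0 - resid.var() / y.var()
    return max(scores, key=scores.get)
```

For synthetic data in which the fatalities are an exact log-linear function of the indicator 14 days earlier, this search recovers a lag of 14.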
\subsection*{Retrospective nowcasting evaluation}
We use a retrospective evaluation in order to assess the performance of the Nowcasting models. The model-based predictions are compared to the (now assumed to be known) final number of COVID-19-related reported fatalities in Sweden. The samples from the posterior predictive distribution for the total number of reported COVID-19 fatalities $\hat N_t$ are extracted for each reporting date of the evaluation period.
As in G{\"u}nther~\cite{gunther_2020}, we use the following four metrics to quantify the model performance; 1) continuous rank probability score (CRPS), 2) log scoring rule (logS), 3) root mean squared error (RMSE), and 4) the prediction interval (PI) coverage being the proportion of times the true number of fatalities is contained within the equitailed PI. The RMSE is calculated with a point estimate being the median of the posterior predictive samples of $\hat N_t$, while the scoring rules CRPS and logS assess the quality of the probabilistic forecast by considering the full posterior distribution of $\hat N_t$~\cite{Gneiting2007StrictlyPS}. For the scoring rules, a low score indicates a better performance.
Nowcasts and the estimated reporting delay for a specific reporting date, \textit{T}=2020-12-30, are shown in Fig~\ref{fig:snapshot_res}. In the left column, the black bars are the number of fatalities reported until day $T$ and the red dashed line is the true number, only known in retrospect. The solid lines are the median of the posterior predictive distribution of $\hat N_t$ and the shaded areas indicate the equitailed point-wise 95\% Bayesian prediction interval, estimated with information available at the reporting date. The right column shows the daily empirical and estimated reporting delay. The solid lines are the estimated and empirical median days of reporting delay and the shaded area lies between the 5\% and 95\% quantiles of the reporting delay. The lower bound indicates the number of days until 5\% of the total number of fatalities will be reported, and the upper bound is within how many days 95\% will be reported. The empirical median and the respective quantiles are calculated with data available in hindsight, and the estimated quantities are obtained with the information available at the reporting date.
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figs/fig4.png}
\vspace{1mm}
\caption{{\bf Nowcasts for a specific reporting date.}
Left column shows the nowcasts of 2020-12-30, where the solid lines are the median of the posterior predictive distribution of $\hat N$ and the shaded area depicts the 95\% PI. The black bars are what is yet reported and the red line is the true number, only known retrospectively. Right column shows quantiles of the estimated and empirical reporting delay distribution. The solid lines are the median reporting delay in days (for each date), and the lower and upper bounds are the 5\% and 95\% quantiles. At the 5\% quantile, 5\% of the total number of fatalities occurring on that date are estimated to be reported within the given number of days delay, etc. The empirical quantiles are obtained with data available in hindsight.}
\label{fig:snapshot_res}
\end{figure}
We observe an underestimation of the reporting delay for the L(ICU) model for the last days in the observation window (2020-12-25--2020-12-30), resulting in an underestimation of the daily number of fatalities (Fig~\ref{fig:snapshot_res}B). We can also note that the PI is narrower for L(ICU) than for the other two models and that the true number is not always contained in the PI. Models R and RL(ICU) (Fig~\ref{fig:snapshot_res}A \& C) provide similar results with less underestimation of the reporting delay, resulting in a point estimate of the median of the predictive distribution lying closer to the true number compared to model L(ICU). A difference in performance between R and RL(ICU) is that RL(ICU) provides narrower PIs than R. For R and RL(ICU), the true number of daily fatalities is contained in the PI for all days \textit{T-t}, $t=0,\dots,35$.
From the right column of the figure, it can be observed that the 5\% quantile of the estimated number of days of reporting delay is, for all three models, similar to the empirical 5\% quantile. Also, the median of the estimated number of days of delay follows the corresponding empirical quantity reasonably well. In contrast, the estimated 95\% quantiles are farther from the empirical ones. This indicates that all three models capture short-term trends, such as the weekly reporting patterns, well, but do not fully capture the changing dynamics of the long reporting delays, i.e.~the high spikes in the early observation window and the rapid decrease in the final week. An alternative visualization of the empirical and estimated reporting delay distribution for the three models, provided by the cumulative reporting probability, is found in \nameref{S1_Appendix}~Sec 1.
As seen in Fig~\ref{fig:snapshot_res}, the PI widens as the final date \textit{T} of the observation window is approached. As the number of days \textit{t} since day \textit{T} decreases, the uncertainty of the nowcast of day \textit{T-t} increases, since the fraction of already-reported fatalities decreases. The average score as a function of \textit{T-t} is shown in Fig~\ref{fig:S2}. For all models and scores, the score is generally a decreasing function of the number of days since day \textit{T}. Hence, the farther from \say{now}, the closer the nowcast estimates of the daily number of fatalities are to the true number. The difference in performance between the three models is observable in the two weeks prior to day \textit{T}. Here, we see that model RL(ICU) has lower CRPS and RMSE scores (Fig~\ref{fig:S2}A \& C) and that model R has the lowest logS (Fig~\ref{fig:S2}B). Model L(ICU) has the overall highest scores; hence it has the worst performance of the three models.
\begin{figure}[H]
\includegraphics[width=1\textwidth]{figs/fig5.png}
\vspace{1mm}
\caption{{\bf Mean scores by the number of days since the day of reporting \textit{T}.} The results are averaged over all reporting dates $T$ in the evaluation period from 2020-10-20 to 2021-05-21.
}
\label{fig:S2}
\end{figure}
The mean overall scores and the coverage frequencies of the 75\%, 90\%, and 95\% prediction intervals of the three models for the nowcasts performed in the evaluation period are found in Table~\ref{table1}. For each reporting day $T$, we consider the average score of the last seven days; $T-6,\dots,T-0$.
Based on the CRPS and RMSE, model RL(ICU) has the best performance, with a decrease of 4.2\% and 1.0\%, respectively, compared to model R. Model R has the lowest logS score, but only with a slight advantage compared to RL(ICU) (0.02\% improvement). Model L(ICU) has the worst performance for all three scores. The coverage of the prediction intervals for models R and RL(ICU) is at satisfactory levels. In contrast, the L(ICU) model has low coverage, indicating that model L(ICU) is less trustworthy.
\begin{table}[!ht]
\centering
\caption{
{\bf Results of the retrospective evaluation of different nowcasting models on COVID-19 related fatalities in Sweden.}}
\begin{tabular}{|l+l|l|l|}
\hline
\textbf{Score} & \textbf{R} & \textbf{L(ICU)} & \textbf{RL(ICU)} \\
\thickhline
CRPS & 11.89 & 12.01 & \textbf{11.39}\\
\hline
logS & \textbf{4.42} & 4.51 & 4.43 \\
\hline
RMSE & 9.18 & 9.95 & \textbf{9.09} \\
\thickhline
Cov. 75\% PI & 76.07\% & 58.97\% & 76.07\%\\
\hline
Cov. 90\% PI & 95.72\% & 76.07\% & 94.87\%\\
\hline
Cov. 95\% PI & 98.29\% & 84.61\% & 99.14\%
\\ \thickhline
\end{tabular}
\begin{flushleft} CRPS is the continuous ranked probability score, logS is the log score, and RMSE denotes the root mean squared error of the posterior median. Additionally, we provide coverage frequencies of 75\%, 90\% and 95\% credibility intervals in the estimation of the daily number of case fatalities. The scores are averaged over nowcasts for day $T-6,...,T-0$, with $T$ being all reporting dates in the evaluation period.
\end{flushleft}
\label{table1}
\end{table}
Fig~\ref{fig6} shows the retrospectively known true number of daily fatalities together with the median of the predictive distribution of $\hat N$ and a 95\% PI of day $T$ for the three models, evaluated on each reporting day in the evaluation period. In Fig~\ref{fig:snapshot_res}, this corresponds to the nowcast estimates of the final date \textit{T}=2020-12-30. We observe a similar performance over time for models R and RL(ICU) (Fig~\ref{fig6}A \& C), and the more significant deviations from the true number appear mainly on the same reporting dates for the two models. In early Jan 2021, RL(ICU) underestimates the number of daily fatalities, likely due to the rapid decrease in ICU admissions following the introduction of vaccines at the end of Dec 2020, while the case fatalities were also on a downward trend, but a less steep one. Model RL(ICU) stabilizes after approximately two weeks (the same as the length of the linear change points) in mid Jan 2021, as the model adapts to the new association between ICU admissions and case fatalities. Model L(ICU) (Fig~\ref{fig6}B) does not have the high peaks in the posterior predictive distribution of $\hat N$ that the other two models have. However, the deviation of the posterior median from the true number is visibly larger. Starting from Dec 2020, we observe an underestimation of the number of fatalities, and from Feb 2021, an overestimation for the following two months.
From Apr 2021 until the end of the evaluation period, the three models have a visibly similar performance with a posterior mean close to the true number of daily fatalities and a narrow PI containing the true number.
The performance of the alternative models with leading indicators compared to model R can be explained by the estimated association between the fatalities and the leading indicators. The changing dynamics of the association over time are captured by the estimated time-varying $\beta$-coefficients of the respective models. Details of the estimated $\beta$-coefficients for models L(ICU) and RL(ICU) over the evaluation period are reported in~\nameref{S1_Appendix} Sec 2.
\begin{figure}[!h]
\includegraphics{figs/fig6.png}
\caption{{\bf Estimated and true number of fatalities with COVID-19 in Sweden.}
The estimated number of fatalities are the nowcasts of day $T$ being each reporting date in the evaluation period from 2020-10-20 to 2021-05-21. The solid lines are the median of the posterior predictive distribution of the number of daily fatalities $\hat N_T$ and the shaded area depict the point-wise 95\% PI. The red line is the retrospective true number.}
\label{fig6}
\end{figure}
The scores of the three models evaluated at the 117 reporting dates in the evaluation period by the CRPS and logS are shown in Fig~\ref{fig:score7}. For each reporting day $T$, we consider the average score of the last seven days; $T-6,\dots,T-0$.
For the three models, the scores are generally higher when the number of case fatalities is high. Overall, the performance of model R and RL(ICU) is similar, as could also be observed in Fig~\ref{fig6}.
From the beginning of the evaluation period until the end of 2020, model L(ICU) has an overall lower score and a more stable performance, with fewer high spikes in the score, compared to models R and RL(ICU). During Jan 2021, the performance is similar for the three models, but from Feb to Apr 2021 model L(ICU) performs significantly worse than the other models. The remaining scoring rule, the RMSE, entails similar results (\nameref{S:rmse}). After Apr 2021, the number of daily fatalities has stabilized at a low level, and the scores of the three models become similar until the end of the evaluation period.
\begin{figure}[!h]
\includegraphics{figs/fig7.png}
\vspace{2mm}
\caption{{\bf Scoring rules.} Average CRPS and logS of the last 7 days; $T-6,\dots,T-0$ for each reporting day $T$, in the evaluation period.}
\label{fig:score7}
\end{figure}
In conclusion, we find that models R and RL(ICU) perform well over the evaluation period and have a satisfactory level of PI coverage. Furthermore, model RL(ICU) provided the best performance of the three models, indicating that there is a gain (a 4.2\% decrease in CRPS compared to model R) from including leading indicators. Using reported cases, or the combination of reported cases and ICU admissions, as leading indicators does not improve performance. The results of using these leading indicators are found in~\nameref{S1_Appendix} Sec 3.
\section*{Discussion}
In the presented work, we provide an improved method for real-time estimation of infectious disease surveillance data suffering from reporting delays. The proposed method can be applied to any disease for which the data can be put in the form of the reporting triangle given in Fig~\ref{fig:rep_tri}.
We apply the method to COVID-19-related fatalities in Sweden. Even though the number of fatalities is a lagging indicator for obtaining situational awareness about the pandemic, and is not without difficulties itself, it is often used as a more robust indicator for assessing the burden of disease because it might be less influenced by the current testing strategy. Hence, monitoring the time series of reported deaths has been of importance in the still ongoing COVID-19 pandemic.
We show that using leading indicators, such as the COVID-19-associated ICU admissions, can help improve the nowcasting performance of case fatalities compared to existing methods.
Beyond using reported cases and ICU admissions as leading indicators for the case fatalities, other possible leading indicators are e.g.~vaccination, hospitalizations, and virus particles in wastewater~\cite{kreier_2021}, or using age-stratified reported cases. However, nowcasting with leading indicators should be made with caution and be reevaluated as the dynamics between the leading indicator and the event of interest change, which may not be a trivial task during an ongoing pandemic.
Furthermore, by re-estimating the association coefficients of the leading indicator at each reporting date, our method captures the changing association between ICU admissions and case fatalities over time. However, we use a pre-specified time lag, which is unknown at the start of the pandemic and might also change throughout it. A possible extension of our work would thus be to estimate this time lag as part of the model fitting.
The proposed method is flexible in terms of its application and thus can be a helpful tool for future pandemic stress situations. We support this by providing open-source software for the real-time analysis of surveillance data. Weekly updated nowcast estimates of COVID-19 fatalities and ICU admissions in Sweden using our proposed method, model RL, are found at
\begin{center}
\url{https://staff.math.su.se/fanny.bergstrom/covid19-nowcasting}
\end{center}
These graphs help provide the desired situational awareness and should be interpreted with care as new variants emerge.
\section*{Supporting information}
\paragraph*{S1 Fig.}
\label{S:rmse}
{\bf RMSE.} Average RMSE of the last 7 days; $T-6,\dots,T-0$ for each reporting day $T$, in the evaluation period.
\paragraph*{S1 Appendix.}
\label{S1_Appendix}
{\bf Complementary material and results.} Sec 1 contains information about the cumulative reporting probability, providing a complementary picture of the estimated reporting delay. Sec 2 presents detailed results of the estimated regression coefficients of models L(ICU) and RL(ICU) over the evaluation period. Finally, Sec 3 covers the results of including reported cases, and the combination of reported cases and ICU admissions, as leading indicators.
\section*{Acknowledgments}
This work is partly funded by the Nordic Research Agency. The computations and data handling were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at HPC2N, partially funded by the Swedish Research Council.
We also thank Markus Lindroos for discussions and his contribution to the coding of the reporting delay distribution.
\nolinenumbers
\section{Introduction}
The observation of electronic nematicity in the phase diagrams of twisted bilayer graphene \cite{Jiang2019,Kerelsky2019,Choi2019,Cao2021} and twisted double-bilayer graphene \cite{RubioVerdu2021,Samajdar2021} provides a new setting to elucidate these electronic liquid crystalline states, which spontaneously break the rotational symmetry of the system. Shortly after nematicity was proposed to explain certain properties of high-temperature
superconductors \cite{Kivelson98}, it was recognized that the Goldstone mode of an ideal electronic nematic phase would have a profound impact on the electronic properties of a metal \cite{Oganesyan01,Kim2004,Garst09}. This is because, in contrast to other Goldstone modes such as phonons and magnons, which couple to the electronic density via a gradient term, the nematic Goldstone mode displays a direct Yukawa-like coupling to the electronic density \cite{Ashvin14}. As a result, it is expected to promote non-Fermi liquid (NFL) behavior, as manifested
in the sub-linear frequency dependence of the imaginary part of the electronic self-energy \cite{Oganesyan01,Garst2010}.
However, because the crystal lattice breaks the continuous rotational symmetry of the system, the electronic nematic order parameter realized in layered quantum materials has a discrete $Z_{q}$ symmetry, rather than the continuous XY (or O(2)) symmetry of its two-dimensional (2D) liquid crystal counterpart \cite{Fradkin_review}. In the square lattice, the $Z_{2}$ (Ising-like) symmetry is associated with selecting one of the two orthogonal in-plane directions connecting either nearest-neighbor or next-nearest-neighbor
sites \cite{Fernandes2014}. In the triangular lattice, the $Z_{3}$ ($3$-state Potts/clock) symmetry refers to choosing one of the three bonds connecting nearest-neighbor sites \cite{Hecker2018,Fernandes_Venderbos}. In both cases, the excitation spectrum in the ordered state
is gapped, i.e. there is no nematic Goldstone mode. Consequently, NFL behavior is not expected to arise inside the nematic phase -- although it can still emerge in the disordered state due to interactions mediated by possible quantum critical fluctuations \cite{Metzner03,Rech2006,max-subir,Lee-Dalid,ips-lee,ips-uv-ir,ips-subir,max-cooper,ips-sc,Lederer2015,Klein2018}.
In twisted moir\'e systems \cite{Andrei2020_review,Balents2020_review}, which usually display an emergent triangular moir\'e superlattice, another type of nematic order can arise due to the presence of the valley degrees of freedom: a \emph{valley-polarized nematic state} \cite{cenke}. Compared to the standard nematic state, valley-polarized nematic order breaks not only the threefold rotational symmetry of the lattice, but also ``inversion'' (more precisely, two-fold rotational), and time-reversal symmetries. It is another example, particularly relevant for moir\'e superlattices, of a broader class of ``non-standard" electronic nematic orders that are intertwined with additional symmetries of the system, such as the so-called nematic spin-nematic phases \cite{Kivelson_RMP,Wu_Fradkin2007,Fischer_Kim2011}.
In twisted bilayer graphene (TBG) \cite{cao-insulator,cao,Yankowitz2019,Efetov19}, while threefold rotational symmetry-breaking \cite{Jiang2019,Kerelsky2019,Choi2019,Cao2021} and time-reversal symmetry-breaking \cite{Sharpe19,Efetov19,Young19,Tschirhart2021} have been observed in different regions of the phase diagram, it is not clear yet whether a valley-polarized nematic state is realized. Theoretically, the valley-polarized nematic order parameter has a $Z_{6}$ symmetry, which corresponds to the $6$-state clock model \cite{Jose1977}. Interestingly, it is known that the $6$-state clock model transition belongs to the XY universality class in three spatial dimensions, with the sixfold anisotropy perturbation being irrelevant at the critical point \cite{Amit1982,Oshikawa2000,Sudbo2003,fucito}.
Thus, at $T=0$ and in a 2D triangular lattice, a valley-polarized nematic quantum critical point (QCP) should share the same universality class as a QCP associated with a hypothetical XY electronic nematic order parameter that is completely decoupled from the lattice degrees of freedom \cite{Shibauchi2020}. In other words, a 2D 6-state clock model exhibits a continuous phase transition at $T=0$ that is described by a $(2+1)\rm{D}$ Ginzburg-Landau theory of an O(2) order parameter with a $Z_6$ anisotropic term -- the latter is irrelevant in the renormalization group (RG) sense. In fact, the sixfold anisotropy term is \emph{dangerously irrelevant} \cite{Amit1982}, as it becomes a relevant perturbation inside the ordered state \cite{Oshikawa2000,Sandvik2007,Okubo2015,Leonard2015,Podolski2016,Sandvik2020}. As a result, the valley-polarized nematic phase displays a pseudo-Goldstone mode, i.e., a would-be Goldstone mode with a small gap that satisfies certain scaling properties as the QCP is approached \cite{Sandvik2021}.
In this paper, we explore the properties of the valley-polarized nematic state in twisted moir\'e systems and, more broadly, in a generic metal. We start from a phenomenological SU$(4)$ model, relevant for TBG, which is unstable towards intra-valley nematicity. We show that, depending on the inter-valley coupling, the resulting nematic order can be a ``standard'' nematic phase, which only breaks threefold rotational symmetry, or the valley-polarized nematic phase, which also breaks twofold and time-reversal symmetries. By employing group-theory techniques, we show that the onset of valley-polarized nematicity triggers in-plane orbital magnetism, as well as standard nematicity and different types of order in the valley degrees of freedom. The $Z_6$ symmetry of the valley-polarized nematic order parameter translates into six possible orientations of the in-plane magnetic moments. Moving beyond phenomenology, we use the six-band tight-binding model for TBG of Ref. \cite{Po2019} to investigate how valley-polarized nematic order impacts the electronic spectrum. Because the combined $C_{2z} \mathcal{T}$ symmetry is preserved, the Dirac cones remain intact, albeit displaced from the $K$ point. Moreover, band degeneracies associated with the valley degrees of freedom are lifted, and the Fermi surface acquires characteristic distortion patterns.
We next study the electronic properties of the valley-polarized nematic phase at $T=0$, when a putative quantum critical point is crossed. To make our results more widely applicable, we consider the case of a generic metal with a simple circular Fermi surface. First, we show that the phase fluctuations inside the valley-polarized phase couple directly to the electronic density. Then, using a two-patch model \cite{max-subir,Lee-Dalid,ips-lee,ips-uv-ir,ips-fflo,ips-nfl-u1}, we show that the electronic self-energy $\Sigma$ displays, along the hot regions of the Fermi surface and above a characteristic energy $\Omega^{*}$, the same NFL behavior as in the case of an ``ideal'' XY nematic
order parameter \cite{Oganesyan01,Garst2010}, i.e. $\Sigma\left(\nu_{n}\right)\sim i\left|\nu_{n}\right|^{2/3}$, where $\nu_{n}$ is the fermionic Matsubara frequency. Below $\Omega^{*}$, however, we find that $\Sigma\left(\nu_{n}\right)\sim i\,\nu_{n}$, and Fermi liquid (FL) behavior is restored. Moreover, the bosonic self-energy,
describing the phase fluctuations, acquires an overdamped dynamics due to the coupling to the fermions.
Exploiting the scaling properties of the $6$-state clock model, we argue that this NFL-to-FL crossover energy scale $\Omega^{*}$, which is directly related to the dangerously irrelevant variable $\lambda$ of the $6$-state clock model via $\Omega^{*}\sim\lambda^{3/2}$, is expected to be much smaller than the other energy scales of the problem. As a result, we expect NFL behavior to be realized over an
extended range of energies. We discuss possible experimental manifestations of this effect at finite temperatures, and the extension of this mechanism to the case of \emph{spin-polarized nematic order} \cite{Wu_Fradkin2007}, which has been proposed to occur in moir\'e systems with higher-order van Hove singularities \cite{Classen2020,Chichinadze2020}. We also discuss possible limitations of the results arising from the simplified form assumed for the Fermi surface.
The paper is organized as follows: Sec.~\ref{sec_phenomenology} presents a phenomenological description of valley-polarized nematic order in TBG, as well as its manifestations on the thermodynamic and electronic properties. Sec.~\ref{secmodel} introduces the bosonic and fermionic actions that describe the system inside the valley-polarized nematic state. Sec.~\ref{secnfl} describes the results for the electronic self-energy, obtained from both the Hertz-Millis theory and the patch methods, focusing on the onset of NFL behavior. In Sec.~\ref{secend}, we discuss the implications of our results for the observation of NFL behavior in different types of systems.
\section{Valley-polarized nematic order in TBG} \label{sec_phenomenology}
\subsection{Phenomenological model}
In TBG, the existence of electron-electron interactions larger than the narrow bandwidth of the moir\'e bands \cite{BM_model,Tarnopolsky2019} enables the emergence of a wide range of possible ordered states involving the spin, valley, and sublattice degrees of freedom \cite{Rademaker2018,Isobe2018,Kennes2018,Venderbos18,Sherkunov2018,Thomson2018,Kang2019,Seo2019,Yuan2019_vhs,Bascones19,Natori2019,Vafek2020,cenke,Bultinck2020,Xie2020,Cea2020,Christos2020,Fernandes_ZYMeng,Bernevig_TBGVI,Brillaux2020,ips-tbg,Chichinadze2021,Song2021}. Here, we start by considering a model for TBG that has U(1) valley symmetry.
Together with the symmetry under independent spin rotations on the
two valleys, the model has an emergent SU(4) symmetry, and has been
widely studied previously \cite{Kang2019,Bultinck2020,Vafek_Kang_PRL_2020,Bernevig_TBGVI,Wang_Kang_RMF,Chichinadze2021}. Within the valley subspace $a$, we assume
that the system has an instability towards a nematic phase, i.e. an
intra-valley Pomeranchuk instability that breaks the $C_{3z}$ rotational
symmetry. Indeed, several models for TBG have found proximity
to a nematic instability \cite{Dodaro2018,Vafek2020,cenke,Bernevig_TBGVI,Brillaux2020,Nori2020,Chichinadze2020,Khalaf2020,Kontani2022}. Note here that $a=+,-$ refers to the moir\'e
valley. Hereafter, we assume that the valleys are exchanged by a $C_{2x}$
rotation. Let the intra-valley nematic order associated with valley
$a$ be described by the two-component order parameter $\boldsymbol{\varphi}_{a}=\left(\varphi_{a,1},\,\varphi_{a,2}\right)$
that transforms as the $\left(d_{x^{2}-y^{2}},d_{xy}\right)$-wave
form factors.
A single valley does not have $C_{2z}$ symmetry or $C_{2x}$ symmetry
(but it does have $C_{2y}$ symmetry); it is the full system, with
two valleys, that has the symmetries of the $D_{6}$ point group.
A $C_{2z}$ rotation (or, equivalently, a $C_{2x}$ rotation) exchanges
valleys $+$ and $-$. Time-reversal $\mathcal{T}$ has the same effect.
If the valleys were completely decoupled, the nematic free energy
would be, to leading order:
\begin{align}
F_{0}\left(\boldsymbol{\varphi}_{+},\boldsymbol{\varphi}_{-}\right)=
r_{0}\left(\boldsymbol{\varphi}_{+}^{2}
+ \boldsymbol{\varphi}_{-}^{2}\right)+\mathcal{O}\left(\boldsymbol{\varphi}_{a}^{3}\right).
\label{F_0}
\end{align}
However, since independent spatial rotations on the two valleys are
not a symmetry of the system, there must be a quadratic term coupling
the two intra-valley nematic order parameters, of the form:
\begin{align}
\bar{F}=\kappa\left(\boldsymbol{\varphi}_{+}\cdot\boldsymbol{\varphi}_{-}\right)
=\frac{\kappa}{2} \, \boldsymbol{\varphi}_{a}\cdot\tau_{aa'}^{x}
\,\boldsymbol{\varphi}_{a'}\,,
\label{delta_F}
\end{align}
where $\tau^{i}$ is a Pauli matrix in valley space. This term is
invariant under both $C_{2z}$ and $\mathcal{T}$, as it remains the
same upon exchange of the two valleys. Moreover, it is invariant under
$C_{3z}$ since it is quadratic in the nematic order parameters. It
is important to note that $C_{3z}$ must be considered as a global
threefold rotation, equal in both valleys.
Minimizing the full quadratic free energy, we find two possible orders
depending on the sign of $\kappa$, which ultimately
can only be determined from microscopic considerations. For $\kappa<0$,
the resulting order parameter
\begin{align}
\tilde{\boldsymbol{\Phi}}=\boldsymbol{\varphi}_{+}+\boldsymbol{\varphi}_{-}
\end{align}
is valley-independent.
It has the same transformation properties as $\boldsymbol{\varphi}_{a}$
under $C_{3z}$, and it is even under both $C_{2z}$ and $\mathcal{T}$.
As a result, it must transform as the $E_{2}^{+}$ irreducible representation of $D_{6}$
(the plus superscript indicates that it is even under time-reversal).
This is the usual nematic order parameter, which belongs to the $3$-state
Potts/clock model universality class. Indeed, parametrizing $\tilde{\boldsymbol{\Phi}}=\tilde{\Phi}_0\left(\cos\tilde{\alpha},\,\sin\tilde{\alpha}\right)$,
one finds the free energy
\begin{align}
\tilde{F}=r \, \tilde{\Phi}_0^{2}
-2 \, \lambda \, \tilde{\Phi}_0^{3} \cos(3\tilde{\alpha}) + u \,\tilde{\Phi}_0^{4}\,,
\end{align}
corresponding to the $3$-state Potts/clock
model \cite{Fernandes_Venderbos,cenke}.
For $\kappa>0$, the resulting order parameter
\begin{align}
\boldsymbol{\Phi}=\boldsymbol{\varphi}_{+}-\boldsymbol{\varphi}_{-}
\end{align}
is valley-polarized.
The key difference between this phase and the one described by $\tilde{\boldsymbol{\Phi}}$ is
that $\boldsymbol{\Phi}$ is odd under both $C_{2z}$ and $\mathcal{T}$,
while retaining the same transformation properties under $C_{3z}$.
Therefore, $\boldsymbol{\Phi}$ must transform as the $E_{1}^{-}$
irreducible representation of $D_{6}$, with the minus superscript indicating that it is
odd under time-reversal. This is the valley-polarized nematic order
parameter, first identified in Ref. \cite{cenke}. The full free energy for $\boldsymbol{\Phi}$ can be obtained
from its symmetry properties rather than starting from the uncoupled
free energies in Eq.~\eqref{F_0}. Parametrizing $\boldsymbol{\Phi}=\Phi_0 \left(\cos\alpha,\,\sin\alpha\right)$,
one finds the following free-energy expansion \cite{cenke}:
\begin{align}
F=r \, \Phi_0^{2}+u \,\Phi_0^{4}-2 \, \lambda\, \Phi_0^{6} \cos(6 \alpha) \,.
\label{eq:F6}
\end{align}
The $\lambda$-term is the lowest-order term that lowers the symmetry
of $\boldsymbol{\Phi}$ from O(2) to $Z_{6}$. As a result, the free energy
corresponds to a $6$-state clock model. Indeed, minimization of the
free energy with respect to the phase $\alpha$ leads to six different
minima, corresponding to (1) $\alpha=\frac{\pi}{3}\,n$ for $\lambda>0$;
and (2) $\alpha=\frac{\pi}{3}\left(n+\frac{1}{2}\right)$ for $\lambda<0$
(with $n=0,\ldots,5$). At finite temperatures, the 2D $6$-state
clock model undergoes two Kosterlitz-Thouless transitions: the first
one signals quasi-long-range order of the phase $\alpha$, whereas
the second one marks the onset of discrete symmetry-breaking and long-range
order \cite{Jose1977}.
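As a quick sanity check of this minimization (a standalone numerical sketch in arbitrary units; the values of $\lambda$ and $\Phi_0$ are placeholders, not fitted parameters), one can locate the minima of the sixfold anisotropy term on a grid and verify that, for $\lambda>0$, they sit at $\alpha=n\,\pi/3$:

```python
import numpy as np

# Standalone sketch (arbitrary units): the Z_6 anisotropy of Eq. (F6),
# -2*lam*Phi0^6*cos(6*alpha), has six equivalent minima for lam > 0.
lam, Phi0 = 1.0, 1.0                      # placeholder values, lam > 0
alpha = np.linspace(0.0, 2.0 * np.pi, 6000, endpoint=False)
F = -2.0 * lam * Phi0**6 * np.cos(6.0 * alpha)

# Locate local minima on the periodic grid by comparing with both neighbors
is_min = (F < np.roll(F, 1)) & (F < np.roll(F, -1))
minima = alpha[is_min]
print(minima)  # six angles, alpha = n*pi/3 with n = 0..5
```

For $\lambda<0$, flipping the sign of `lam` shifts the six minima to $\alpha=\frac{\pi}{3}\left(n+\frac{1}{2}\right)$, as stated above.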
\subsection{Manifestations of the valley-polarized phase}
The onset of valley-polarized order leads to several observable consequences.
First, we note that the in-plane magnetic moment $\mathbf{m}=\left(m_{x},\,m_{y}\right)$
also transforms as $E_{1}^{-}$. Therefore the following linear-in-$\Phi$
free-energy coupling term is allowed:
\begin{align}
\delta F_{1}\sim\mathbf{m}\cdot\boldsymbol{\Phi} \,.
\end{align}
This implies that valley-polarized nematic order necessarily triggers
in-plane magnetic moments -- see also Ref. \cite{Berg2022} for the case of in-plane magnetic moments induced by hetero-strain. These moments are directed towards the angles $\alpha$ that minimize the sixth-order term $\Phi_0^{6}\cos (6\alpha)$
of the nematic free energy. Because the system has SU(2) spin-rotational
invariance, $\mathbf{m}$ must be manifested as an in-plane orbital
magnetic moment. Therefore, valley-polarized nematic order provides
a mechanism for in-plane orbital magnetism, which is complementary
to previous models for out-of-plane orbital magnetism.
There are additional manifestations coming from higher-order terms
of the free energy. Valley-polarized nematic order $\boldsymbol{\Phi}$
also induces the ``usual'' nematic order $\tilde{\boldsymbol{\Phi}}$
via the quadratic-linear coupling:
\begin{align}
\delta F_{2}\sim\left(\Phi_{1}^{2}-\Phi_{2}^{2}\right)\tilde{\Phi}_{1}-2\Phi_{1}\Phi_{2}\tilde{\Phi}_{2}=\Phi_0^{2}\tilde{\Phi}_0\cos\left(2\alpha+\tilde{\alpha}\right).
\end{align}
Moreover, $\boldsymbol{\Phi}$ also induces either the order parameter
$\eta$, which transforms as $B_{2}^{-}$, or the order parameter
$\tilde{\eta}$, which transforms as $B_{1}^{-}$. Both $\eta$ and
$\tilde{\eta}$ are even under $C_{3z}$ but odd under $C_{2z}$ and
$\mathcal{T}$. The only difference is that $\eta$ is odd under $C_{2x}$
and even under $C_{2y}$ whereas $\tilde{\eta}$ is odd under $C_{2y}$
and even under $C_{2x}$. We find the cubic-linear terms:
\begin{align}
\delta F_{3}^{(1)} & \sim\left(3\,\Phi_{1}^{2} \, \Phi_{2}-\Phi_{2}^{3}\right)\eta
=\Phi_0^{3} \,\eta\,\sin (3\alpha) \,,\nonumber \\
\delta F_{3}^{(2)} & \sim
\left(\Phi_{1}^{3}-3 \, \Phi_{1} \,\Phi_{2}^{2}\right)\tilde{\eta}
=\Phi_0^{3}\,\tilde{\eta}\,\cos(3\alpha)\,.
\end{align}
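For completeness, the angular forms quoted above follow from the parametrization $\boldsymbol{\Phi}=\Phi_0\left(\cos\alpha,\,\sin\alpha\right)$ together with the triple-angle identities:
\begin{align}
\sin(3\alpha) & =3\cos^{2}\alpha\,\sin\alpha-\sin^{3}\alpha\,,\nonumber \\
\cos(3\alpha) & =\cos^{3}\alpha-3\,\cos\alpha\,\sin^{2}\alpha\,,
\end{align}
so that $3\,\Phi_{1}^{2}\Phi_{2}-\Phi_{2}^{3}=\Phi_0^{3}\sin(3\alpha)$ and $\Phi_{1}^{3}-3\,\Phi_{1}\Phi_{2}^{2}=\Phi_0^{3}\cos(3\alpha)$.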
Now, since
\begin{align}
\cos^{2}3\alpha & =\frac{1+\cos (6\alpha)}{2} \text{ and }
\sin^{2}3\alpha =\frac{1-\cos (6\alpha)}{2} \,,
\end{align}
we conclude that, if the coefficient $\lambda$ of the $\Phi_0^{6}\cos (6\alpha)$
term is positive [implying $\cos (6\alpha)=+1$], $\tilde{\eta}$ is
induced. Otherwise, if $\lambda$ is negative [implying $\cos (6\alpha)=-1$], $\eta$ is induced.
Physically, $\eta$ can be interpreted as a valley charge polarization
$\eta=\rho_{+}-\rho_{-}$, where $\rho_{a}$ is the charge at valley
$a$. That is because $C_{2x}$ also switches the valleys $+$ and $-$.
On the other hand, $C_{2y}$ does not involve valley switching, so
$\tilde{\eta}$ describes an intra-valley type of order.
\subsection{Impact of the valley-polarized order on the electronic spectrum}
\begin{figure}[]
\centering
\subfigure[]{\includegraphics[width= 0.4\textwidth]{1a}}
\subfigure[]{\includegraphics[width= 0.4 \textwidth]{1b}}
\caption{\label{fig:bands}
Band structure along the high symmetry directions of the moir\'e Brillouin zone, for the (almost) flat bands of TBG. This is numerically computed from the six-orbital model of Ref.~\cite{Po2019}, without [panel (a); dashed lines] and with [panel (b); solid lines] valley-polarized nematic ordering. Red and blue lines refer to the two valleys. The parameters used are the same as in Ref.~\cite{Po2019}, and we have chosen $\Phi_0=0.01 \,t_{\kappa}$ and $\alpha=0$ for the ordered state. The energy values shown are in meV.}
\end{figure}
\begin{figure*}
\centering
\subfigure[]{\includegraphics[width= 0.15\textwidth]{2a}}
\subfigure[]{\includegraphics[width= 0.15\textwidth]{2b}}
\subfigure[]{\includegraphics[width= 0.15 \textwidth]{2c}}
\subfigure[]{\includegraphics[width= 0.15 \textwidth]{2d}}
\subfigure[]{\includegraphics[width= 0.15 \textwidth]{2e}}
\subfigure[]{\includegraphics[width= 0.15 \textwidth]{2f}}
\caption{\label{fig:fermi-sur}
Fermi surfaces in the valley-polarized nematic state arising from the flat bands of TBG: The parameters are the same as those in Fig.~\ref{fig:bands}, except for $\alpha$, which here assumes the values $ n \,\pi/3$, with $n \in [0,5]$. Panels (a) to (f) correspond to $n=0$ to $n=5$, respectively, indicating the six different domains that minimize the free energy. In panel (a), the Fermi surfaces in the absence of valley-polarized nematic order are shown by the dashed lines. Red and blue lines refer to the two different valleys.
}
\end{figure*}
To investigate how the valley-polarized nematic order parameter impacts
the electronic excitations of TBG, we use the six-band model of Ref. \cite{Po2019}. This model, which has valley $U(1)$ symmetry, is described in terms
of the electronic operator
\begin{align}
\Psi_{a}^{\dagger}\left(\mathbf{k}\right)=(p_{a,{\bf k}z}^{\dagger},p_{a,{\bf k}+}^{\dagger},p_{a,{\bf k}-}^{\dagger},s_{1a,{\bf k}}^{\dagger},s_{2a,{\bf k}}^{\dagger},s_{3a,{\bf k}}^{\dagger})
\end{align}
for valley $a$, which contains the $p$-orbitals ($p_{z}$, $p_{+}$, $p_{-}$) living
on the sites of the triangular moir\'e superlattice, and $s$-orbitals
($s_{1}$, $s_{2}$, $s_{3}$) living on the sites of the related
Kagome lattice. The non-interacting Hamiltonian is given by:
\begin{align}
\mathcal{H}_{0}=\sum_{\mathbf{k}}\left(\begin{array}{cc}
\Psi_{+}^{\dagger} & \Psi_{-}^{\dagger}\end{array}\right)\left(\begin{array}{cc}
H_{\mathbf{k}} & 0\\
0 & U_{C_{2z}}^{\dagger} H_{\mathbf{k}}\, U_{C_{2z}}
\end{array}\right)\left(\begin{array}{c}
\Psi_{+}\\
\Psi_{-}
\end{array}\right),
\label{eq:H0}
\end{align}
where the $6\times6$ matrices $H_{\mathbf{k}}$ and $U_{C_{2z}}$
are those defined in Refs. \cite{Po2019,Fernandes_Venderbos}. Generalizing the results of Ref. \cite{Fernandes_Venderbos}, the coupling to the valley-polarized
nematic order parameter $\Phi$ can be conveniently parametrized in
the $\left(p_{+}, \,p_{-}\right)$ subspace as:
\begin{align}
\mathcal{H}_{\Phi}=\sum_{\mathbf{k}}\left(\begin{array}{cc}
\Psi_{+}^{\dagger} & \Psi_{-}^{\dagger}\end{array}\right)\left(\begin{array}{cc}
H_{\Phi} & 0\\
0 & -H_{\Phi}
\end{array}\right)\left(\begin{array}{c}
\Psi_{+}\\
\Psi_{-}
\end{array}\right),
\end{align}
with the block-diagonal matrix $H_{\Phi}=\mathrm{diag}\left(0_{1\times1},\,\delta H_{\Phi},\,0_{3\times3}\right)$,
where
\begin{align}
\delta H_{\Phi}=\left(\begin{array}{cc}
0 & \Phi_0\,\mathrm{e}^{-i \, \alpha}\\
\Phi_0\,\mathrm{e}^{i\, \alpha} & 0
\end{array}\right).
\end{align}
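As a minimal numerical illustration of this coupling (a standalone sketch, not the full six-band calculation; we reuse $\Phi_0=0.01\,t_{\kappa}$ and $\alpha=0$ from the figures below), diagonalizing $\pm\delta H_{\Phi}$ shows the opposite $\pm\Phi_0$ splittings of the $\left(p_{+},p_{-}\right)$ doublet in the two valleys:

```python
import numpy as np

# Standalone sketch: the valley-polarized nematic coupling in the
# (p_+, p_-) subspace, with Phi0 = 0.01*t_kappa and t_kappa = 27 meV.
t_kappa = 27.0                    # meV, hopping scale of H_k
Phi0, alpha = 0.01 * t_kappa, 0.0
dH = np.array([[0.0, Phi0 * np.exp(-1j * alpha)],
               [Phi0 * np.exp(1j * alpha), 0.0]])

# Valley + couples to +dH, valley - to -dH; both doublets split by 2*Phi0
ev_plus = np.linalg.eigvalsh(dH)     # eigenvalues in ascending order
ev_minus = np.linalg.eigvalsh(-dH)
print(ev_plus, ev_minus)  # both approximately [-0.27, 0.27] meV
```

The degenerate $(p_{+},p_{-})$ levels are thus split by $2\Phi_0$ in each valley, consistent with the valley-degeneracy lifting seen in the band structure below.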
In Fig.~\ref{fig:bands}, we show the electronic structure of the moir\'e flat bands
in the normal state (a) and in the valley-polarized nematic state (b) parametrized
by $\Phi_0=0.01 \,t_{\kappa}$ and $\alpha=0$, where $t_{\kappa}=27\ \mathrm{meV}$
is a hopping parameter of $H_{\mathbf{k}}$ \cite{Po2019}. The high-symmetry points
$\Gamma$, $K$, and $M$ all refer to the moir\'e Brillouin zone. The
main effect of the valley-polarized nematic order on the flat bands
is to lift the valley degeneracy along high-symmetry directions. Although
$C_{2z}$ and $\mathcal{T}$ symmetries are broken, the combined symmetry
$C_{2z}\mathcal{T}$ remains intact in the valley-polarized nematic
phase. As a result, the Dirac cones of the non-interacting band structure
are not gapped, but instead move away from the $K$ points, similarly
to the case of standard (i.e. non-polarized) nematic order. We also
note that the van Hove singularity at the $M$ point is altered
by valley-polarized nematicity.
The Fermi surfaces corresponding to each of the six valley-polarized nematic
domains, described by $\alpha=n\, \pi/3$ with $n=0,1,\ldots,5$, are
shown in Fig.~\ref{fig:fermi-sur}. The Fermi surface of the normal state is also shown
in panel (a) for comparison (dashed lines). In the ordered state, the Fermi
surfaces arising from different valleys are distorted in different
ways, resulting in a less symmetric Fermi surface as compared to the
previously studied case of standard (i.e. non-polarized) nematicity.
While the Fermi surface is no longer invariant under out-of-plane
two-fold or three-fold rotations, it remains invariant under a two-fold
rotation with respect to an in-plane axis. Moreover, the Fermi surfaces
from different valleys continue to cross even in the presence
of valley-polarized nematic order.
\section{Pseudo-Goldstone modes in the valley-polarized nematic phase at zero temperature}
\label{secmodel}
In the previous section, we studied the general properties of valley-polarized nematic order in TBG. We now proceed to investigate the unique properties of the valley-polarized nematic state at $T=0$ in a metallic system, which stem from the emergence of a pseudo-Goldstone mode. As a first step, we extend the free energy in Eq. (\ref{eq:F6}) to a proper action. To simplify the notation, we introduce the complex valley-polarized nematic order parameter $\Phi = \Phi_1 - i \,\Phi_2 = \Phi_0 \,\mathrm{e}^{i\, \alpha}$. We obtain (see also Ref.~\cite{cenke}):
\begin{align}
S=\frac{1}{2}\int d^{2}x\,d\tau & \left[\frac{1}{c^{2}}\,|\partial_{\tau}\Phi|^{2}
+|\partial_{\mathbf{x}}\Phi|^{2}+r\,|\Phi|^{2}\right. \nonumber \\
& \quad \left.+ \,u\,|\Phi|^{4}-\lambda\left(\Phi^{6}+{{\Phi}^{*}}^{6}\right)\right].
\label{actionboson0}
\end{align}
Here, $\mathbf{x}$ denotes the position vector, $\tau$ the imaginary time, and $c$ the bosonic velocity. The quadratic coefficient $r$ tunes the system towards a putative quantum critical point (QCP) at $r=r_{c}$, and the quartic coefficient $u>0$. Because of the anisotropic $\lambda$-term, the action corresponds to a $6$-state clock model. As explained in Sec.~\ref{sec_phenomenology}, at finite temperatures, the behavior of this model is the same as that of the two-dimensional (2D) $6$-state clock model. This model is known \cite{Jose1977} to first undergo a Kosterlitz-Thouless transition towards a state where the phase $\alpha$ has quasi-long-range order (like in the 2D XY model), which is then followed by another Kosterlitz-Thouless transition towards a state where $\alpha$ acquires a long-range order, pointing along one of the six directions that minimize the sixth-order term.
At $T=0$, near a valley-polarized nematic QCP, the bosonic model in Eq.~(\ref{actionboson0}) maps onto the three-dimensional (3D) $6$-state clock model \cite{Sudbo2003,Sandvik2021}. One of the peculiarities of this well-studied model is that the $\lambda$-term is a \emph{dangerously irrelevant perturbation} \cite{Oshikawa2000,Sandvik2007,Okubo2015,Leonard2015,Podolski2016,Sandvik2020}. Indeed, the scaling dimension $y$ associated with the $\lambda$ coefficient is negative; while an $\epsilon$-expansion around the upper critical dimension $d_{c}=4$ gives $y=-2-\epsilon$ \cite{Oshikawa2000}, recent Monte Carlo simulations report $y\approx-2.55$ for the classical 3D $6$-state clock model \cite{Okubo2015,Sandvik2020}.
To understand what happens inside the ordered state, we use the parametrization $\Phi=\left|\Phi_{0}\right|\mathrm{e}^{i\,\alpha}$, with fixed
$\left|\Phi_{0}\right|$, and consider the action for the phase variable $\alpha$ only, as shown below:
\begin{align}
S_{\alpha}
& =\frac{1}{2}\int d^{2}x\,d\tau
\Bigg[\rho_{\tau}\,|\partial_{\tau}\alpha|^{2}
+\rho_{x}\,|\partial_{\mathbf{x}}\alpha|^{2}
\nonumber \\
& \hspace{2.5 cm }
-2\,\lambda\left|\Phi_{0}\right|^{6}\,\cos(6\alpha)\Bigg]\,.
\end{align}
Here, $\rho_{x}$ and $\rho_{\tau}$ are generalized stiffness coefficients.
Expanding around one of the minima of the last term (let us call it $\alpha_{0}$) gives:
\begin{align}
S_{\alpha} & =\frac{1}{2}\int d^{2}x\,d\tau\Bigg[\rho_{\tau}
|\partial_{\tau}\,\tilde{\alpha}|^{2}
+\rho_{x}\, |\partial_{\mathbf{x}}\tilde{\alpha}|^{2}
\nonumber \\
& \hspace{2.5 cm } +36\left|\lambda\right|
\left|\Phi_{0}\right|^{6}\tilde{\alpha}^{2}\Bigg]\,,
\end{align}
where a constant term is dropped, and $\tilde{\alpha}\equiv\alpha-\alpha_{0}$.
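Explicitly, expanding $\cos(6\alpha)$ to quadratic order around the minimum $\alpha_{0}$ [where $\cos(6\alpha_{0})=\mathrm{sgn}(\lambda)$] gives, for either sign of $\lambda$,
\begin{align}
-2\,\lambda\left|\Phi_{0}\right|^{6}\cos(6\alpha)
=-2\left|\lambda\right|\left|\Phi_{0}\right|^{6}
+36\left|\lambda\right|\left|\Phi_{0}\right|^{6}\,\tilde{\alpha}^{2}
+\mathcal{O}\left(\tilde{\alpha}^{4}\right),
\end{align}
which is the origin of the quadratic term in the expanded action.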
It is clear that the $\lambda$-term, regardless of its sign, introduces a mass for the phase variable. Thus, while the $\lambda$-term is irrelevant at the critical point, which is described by the XY fixed point, it is relevant inside the ordered phase, which is described by a $Z_{6}$ fixed point, rather than the Nambu-Goldstone fixed point (that characterizes the ordered phase of the 3D XY model) \cite{Oshikawa2000,Sandvik2020,Sandvik2021}.
Importantly, due to the existence of this dangerously irrelevant perturbation, there are two correlation lengths in the ordered state \cite{Sandvik2007,Okubo2015,Leonard2015,Podolski2016,Sandvik2021}: $\xi$, associated with the usual amplitude fluctuations of $\Phi$; and $\xi'$, associated with the crossover from continuous to discrete symmetry-breaking of $\alpha$. Although both diverge at the critical point, they do so with different exponents $\nu$ and $\nu'$. Because $\nu'>\nu$, there is a wide range of length scales (and energies, in the $T=0$ case) for which the ordered phase behaves as if it were an XY ordered phase. In Monte Carlo simulations, this is signaled by the emergence of a nearly-isotropic order parameter distribution \cite{Sandvik2007}.
More broadly, this property is expected to be manifested as a small gap in the spectrum of phase fluctuations, characteristic of a \emph{pseudo-Goldstone mode} \cite{Burgess2000}.
For simplicity of notation, in the remainder of the paper, we rescale $\left(\tau,\mathbf{x}\right)$ to absorb the stiffness coefficients. Moreover, we set $\lambda>0$ and choose $\alpha_{0}=0$, such that $\tilde{\alpha}=\alpha$. Defining $m^{2}\equiv36\left|\lambda\right|\left|\Phi_{0}\right|^{6}$,
and taking the Fourier transform, the phase action becomes:
\begin{align}
S_{\alpha}=\frac{1}{2}\int_{q}\alpha(-q)\left(\omega_{n}^{2}+\mathbf{q}^{2}+m^{2}\right)\alpha(q)\,,
\label{S_phase}
\end{align}
where $q=\left(\omega_{n},\mathbf{q}\right)$, $\omega_{n}$ is the bosonic Matsubara frequency, and $\mathbf{q}$ is the momentum. Here,
we also introduced the notation $\int_{q}=
T \sum \limits_{\omega_n}\int\frac{d^{2}\mathbf{q}}{(2\pi)^{2}}$. At $T=0$, $T \sum \limits_{\omega_n} \rightarrow \int
\frac{d\omega_n} {2\pi}$; although the subscript $n$ is not necessary, since $\omega_n$ is a continuous variable, we will keep it to distinguish it from the real-axis frequency.
Having defined the free bosonic action, we now consider the electronic degrees of freedom. While our work is motivated by the properties of TBG, in this section we choose a simple, generic band dispersion to shed light on the general properties of the $T=0$ valley-polarized nematic state.
As we will argue later, this formalism also allows us to discuss the case of a spin-polarized nematic state. The free fermionic action is given by:
\begin{align}
S_{f}=\int_{k}\sum\limits _{a=1,2}\psi_{a}^{\dagger}(k)
\left[i\,\nu_{n}
+\varepsilon_{a} (\mathbf{k} )\right]\psi_{a}(k)\,,
\label{S_f}
\end{align}
where $k =\left( \nu_n, \mathbf k \right)$, $a$ is the valley index and $\nu_{n}$ is the fermionic Matsubara frequency. The electronic dispersion $\varepsilon_{a}\left(\mathbf{k}\right)$ of valley $a$ could, in principle, be derived from the tight-binding model of Eq. (\ref{eq:H0}); for our purposes, however, we keep it generic. In this single-band version of the model, the valley-polarized nematic order parameter couples to the fermionic degrees of freedom as described by the action \cite{cenke}
\begin{align}
\label{S_bf}
S_{bf} & =\gamma_{0}\int_{k,q}\sum\limits _{a=1,2}(-1)^{a+1} \,
\psi_{a}^{\dagger}(k+q)\,\psi_{a}(k) \nonumber \\
& \times\left[\frac{\Phi(q)+\Phi^{*}(q)}{2}\cos(2\theta_{k})-\frac{\Phi(q)-\Phi^{*}(q)}{2\,i}\sin(2\theta_{k})\right].
\end{align}
Here, $\gamma_{0}$ is a coupling constant, and $\tan\theta_{k}=k_{y}/k_{x}$.
Writing $\Phi=\left|\Phi_{0}\right|\mathrm{e}^{i\,\alpha}$, we obtain the coupling between the phase variable and the electronic operators inside the valley-polarized nematic state with constant $\left|\Phi_{0}\right|$. As before, we set $\alpha_{0}=0$, and expand around the minimum, to obtain:
\begin{align}
\label{S_alphaf}
S_{\alpha f} & =\gamma\int_{k,q}\sum\limits _{a=1,2}(-1)^{a+1} \,
\psi_{a}^{\dagger}(k+q)\,\psi_{a}(k) \nonumber \\
& \hspace{1 cm}
\times\left[\cos(2\theta_{k})\left(2\pi\right)^{3}\delta^{3}(q)
- \alpha(q)\,\sin(2\theta_{k})\right] ,
\end{align}
where $\gamma\equiv\gamma_{0}\left|\Phi_{0}\right|$. The first term
in the brackets shows that long-range order induces opposite nematic distortions in the Fermi surfaces with opposite valley quantum numbers. The second term shows that the phase mode couples directly to the charge density via a Yukawa-like coupling. As discussed in Ref.~\cite{Ashvin14}, such a coupling is allowed when the generator of the broken symmetry does not commute with the momentum operator.
\section{Non-Fermi liquid to Fermi liquid crossover}
\label{secnfl}
\subsection{The patch model}
Our goal is to derive the properties of the electronic degrees of
freedom in the valley-polarized nematic ordered phase, which requires the computation of the electronic self-energy. To do that in a controlled manner, we employ the patch method discussed in Refs.~\cite{max-subir,Lee-Dalid,ips-lee,ips-uv-ir,ips-fflo,ips-nfl-u1}.
This relies on the fact that fermions from different patches of a Fermi surface interact with a massless order parameter with largely disjoint sets of momenta, and that the inter-patch coupling is small in the low-energy limit, unless the tangent vectors at the patches are locally parallel or anti-parallel. Thus, the advantage of this emergent locality in momentum space is that we can now decompose the full theory into a sum of two-patch theories, where each two-patch theory describes electronic excitations near two antipodal points,
interacting with the order parameter boson with momentum along the local tangent.
This formalism has been successfully used in computing the universal properties and scalings for various non-Fermi liquid systems, such as the Ising-nematic QCP \cite{max-subir,Lee-Dalid,ips-lee,ips-uv-ir,ips-subir,ips-sound}, the FFLO QCP \cite{ips-fflo}, and a critical Fermi surface interacting with transverse gauge field(s) \cite{ips-nfl-u1}.
The only scenario that breaks this locality in momentum space is the presence of short-ranged four-fermion interactions in the pairing channel \cite{max-cooper,ips-sc}.
\begin{figure}
\centering \includegraphics[width=0.3\textwidth]{patches}
\caption{\label{figpatch}
Illustration of the patch model: $\psi_{a,+}$ denotes
the fermions located at the upper purple patch, centered at an angle $\theta=\theta_{0}$ with respect to the global coordinate system for a circular Fermi surface of valley quantum number $a$ (denoted by the purple ring). $\psi_{a,-}$ denotes the fermions in the lower purple patch, centered at the antipodal point $\theta=\pi+\theta_{0}$, whose tangential momenta are parallel to those at $\theta_{0}$. Although we show here the patch construction for a circular Fermi surface for the sake of simplicity, this can be applied to any Fermi surface of
a generic shape, as long as it is locally convex at each point.}
\end{figure}
For our case of the valley-polarized nematic order parameter, we consider two antipodal patches on a simplified Fermi surface, which is locally convex at each point. The antipodal patches feature opposite Fermi velocities, and couple with the bosonic field \cite{Lee-Dalid,ips-lee,ips-uv-ir,ips-fflo}.
Here, we choose a patch centered at $\theta_{k}=\theta_{0}$, and construct our coordinate system with its origin at $\theta_{0}$. As explained above, we must also include the fermions at the antipodal patch with $\theta_{k}=\pi+\theta_{0}$.
We denote the fermions living in the two antipodal patches as $\psi_{+}$ and $\psi_{-}$, as illustrated in Fig. \ref{figpatch}. We note that
the coupling constant remains the same for the fermions in the two antipodal points.
Expanding the spectrum around the Fermi surface patches up to an effective parabolic dispersion, and using Eqs.~(\ref{S_phase}),
(\ref{S_f}), and (\ref{S_alphaf}), we thus obtain the effective
field theory in the patch construction as:
\begin{align}
S_{f} & =\int_{k}\sum\limits _{\substack{s=\pm \\a=1,2}}
\psi_{a,s}^{\dagger}(k)
\Big(
i\,\nu_{n}+s\,k_{1}+\frac{k_{2}^{2}}{2\,k_{F}} \Big)
\psi_{a,s}(k)\,,\nonumber \\
S_{\alpha}
& =\frac{1}{2}\int_{q} \alpha(-q)
\left(\omega_{n}^{2}+q_{1}^{2}
+ q_{2}^{2}+m^{2}\right)\alpha(q)\,,
\nonumber \\
S_{\alpha f} &=\sum\limits_{\substack{s=\pm\\a=1,2}} (-1)^{a}
\int_{k,q}
\psi_{a,s}^{\dagger}(k+q)\,
\Big [
\gamma\sin(2\theta_{0})\, \alpha(q)
\nonumber \\ & \hspace{3 cm}
- \left(2\pi\right)^{3} \delta^{3}(q)\, \gamma\cos(2\theta_{0})
\Big ] \,\psi_{a,s}(k)\,.
\end{align}
Here, for simplicity, we have assumed that the Fermi surface is convex, and has the same shape for both the valley quantum numbers. We will discuss the impact of these approximations later in this section. Note that the fermionic momenta are expanded about the Fermi momentum $k_{F}$ at the origin of the coordinate system of that patch. In our notation, shown in Fig. \ref{figpatch}, $k_{1}$ is directed along the local Fermi momentum, whereas $k_{2}$ is perpendicular to it (or tangential to the Fermi surface). Note that the local curvature of the Fermi surface is given by
$1/ k_{F}$. Furthermore, $\psi_{a,+}$ ($\psi_{a,-}$) is the right(left)-moving fermion with valley index
$a$, whose Fermi velocity along the $k_{1}$ direction is positive
(negative).
Following the patch approach used in Refs.~\cite{Lee-Dalid,ips-lee,ips-uv-ir,ips-nfl-u1},
we rewrite the fermionic fields in terms of the two-component spinor $\Psi$, where
\begin{align}
\Psi^{T}(k) & =\left(\psi_{1,+}(k)\quad\psi_{2,+}(k)\quad\psi_{1,-}^{\dagger}(-k)\quad\psi_{2,-}^{\dagger}(-k)\right),\nonumber \\
\bar{\Psi}(k) & =\Psi^{\dagger}(k)\,\sigma_{2}\otimes\tau_{0}\,.
\end{align}
Here, $\sigma_{i}$ (with $i=1,2, 3$) denotes the $i^{\rm{th}}$ Pauli matrix acting on the patch space (consisting of the two antipodal patches), whereas $\tau_{i}$ is the $i^{\rm{th}}$ Pauli matrix acting on valley space (not to be confused with imaginary time $\tau$, which has no subscript). We use the symbols $\sigma_{0}$ and $\tau_{0}$ to denote the corresponding $2\times 2$ identity matrices.
In this notation, the full patch action becomes:
\begin{widetext}
\begin{align}
S_{f} & =\int_{k}\bar{\Psi}(k)\left[
i\left( \sigma_2 \,\nu_n
+\sigma_{1}\,\delta_{k}\right)
\otimes\tau_{0} \right]\Psi(k)\,,\quad
S_{\alpha}=\frac{1}{2}\int_{q}\alpha(-q)
\left(\omega_n^{2}+q_{1}^{2}+q_{2}^{2}+m^{2}\right)\alpha(q)\,,\nonumber \\
S_{\alpha f} & =\gamma\int_{k,q}\bar{\Psi}(k+q)\left[
\left(2\pi\right)^{3}
\delta^{3}(q)\,\cos(2\theta_0)\,\sigma_{2}
- i\,\sin(2\theta_0)\,\alpha(q)\,\sigma_{1}\right]
\otimes\tau_{3} \, \Psi(k)\,,\quad\delta_{k}=k_{1}+\frac{k_{2}^{2}}{2\,k_{F}}\,.
\label{model}
\end{align}
\end{widetext}
For convenience, we have included the valley-dependent Fermi surface distortion $\gamma \cos(2\theta_{0})$
in the interaction action. The form of $S_{f}$ is such that it appears as if the fermionic energy disperses only along one effective direction near the Fermi surface. Hence, according to the formulation of the patch model in Refs.~\cite{Lee-Dalid,ips-lee,ips-uv-ir,ips-fflo}, the $(2+1)$-dimensional fermions can be viewed as $(1+1)$-dimensional ``Dirac'' fermions, with the momentum along the Fermi surface interpreted as a continuous flavor.
From Eq.~(\ref{model}), the bare fermionic propagator can be readily obtained as:
\begin{align}
G_{0}(k)=-i\,\frac{\sigma_{2}\,\nu_{n} + \sigma_{1}\,\delta_{k}}
{\nu_{n}^{2}+\delta_{k}^{2}}\otimes\tau_{0}\,.
\label{fermprop}
\end{align}
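As an elementary numerical consistency check (illustrative only, not part of the derivation), one can verify that the propagator of Eq.~(\ref{fermprop}) indeed inverts the quadratic kernel $i\left(\sigma_{2}\,\nu_{n}+\sigma_{1}\,\delta_{k}\right)\otimes\tau_{0}$ of $S_{f}$; the test values of $\nu_{n}$ and $\delta_{k}$ below are arbitrary:

```python
import numpy as np

# Pauli matrices in patch (sigma) space and the identity in valley (tau) space
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
t0 = np.eye(2, dtype=complex)

def kernel(nu, delta_k):
    """Quadratic form of S_f: i (sigma_2 nu_n + sigma_1 delta_k) (x) tau_0."""
    return 1j * np.kron(s2 * nu + s1 * delta_k, t0)

def G0(nu, delta_k):
    """Bare fermionic propagator of Eq. (fermprop)."""
    return -1j * np.kron(s2 * nu + s1 * delta_k, t0) / (nu**2 + delta_k**2)

# G0 inverts the kernel at any point away from the Fermi surface
nu, dk = 0.37, -1.2  # arbitrary test point
assert np.allclose(kernel(nu, dk) @ G0(nu, dk), np.eye(4))
```

The assertion encodes the identity $\left(\sigma_{2}\,\nu_{n}+\sigma_{1}\,\delta_{k}\right)^{2}=\left(\nu_{n}^{2}+\delta_{k}^{2}\right)\sigma_{0}$, which follows from the anticommutation of the Pauli matrices.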
We note that the strength of the coupling constant between
the bosons and the fermions, given by $\gamma\,\sin(2\theta_k)$, depends on the value of $\theta_k $. For the patch centered at $\theta_k =\theta_{0}$,
the leading-order term from the loop integrals can be well estimated by setting $\theta=\theta_{0}$ for the entire patch, as long as $\sin(2\theta_{0})\neq0$. However, for $\sin(2\theta_{0})=0$, we need to go beyond the leading-order terms (which vanish) when performing the loop integrals. The patches centered around $\theta_k =\theta_0 $, with $\sin(2\theta_{0})\sim0$, are the so-called ``cold spots''; we will refer to the other patches as belonging to the ``hot regions'' of the Fermi surface.
\subsection{Electronic self-energy}
We first compute the one-loop bosonic self-energy $\Pi_{1}$, which takes the form:
\begin{widetext}
\begin{align}
\Pi_{1}(q) & =-\left(i\,\gamma\right)^{2}\int\frac{d^{3}k}{(2\,\pi)^{3}}\left[\sin^{2}(2\theta_{0})
+\frac{4\,k_{2}^{2}\cos(4\theta_{0})}{k_{F}^{2}}+\frac{2\,k_{2}\sin(4\theta_{0})}{k_{F}}\right]\text{Tr}\left[\sigma_{1}\,G_{0}(k+q)\,\sigma_{1}G_{0}(k)\right]\nonumber \\
& =-\frac{\gamma^{2}\sin^{2}(2\theta_{0})\,k_{F}\,|\omega_{n}|}{\pi\,|q_{2}|}+\frac{2\,\gamma^{2}\sin(4\theta_{0})\,k_{F}\,\delta_{q}\,|\omega_{n}|}{\pi\,q_{2}\,|q_{2}|}+\frac{4\,\gamma^{2}\cos(4\theta_{0})\,k_{F}\,|\omega_{n}|\left[\pi\left(\omega_{n}^{2}-\delta_{q}^{2}\right)-2\,|\omega_{n}|\,|q_{2}|\right]}{\pi^{2}\,|q_{2}|^{3}}\,.
\end{align}
\end{widetext}
This result is obtained by considering a patch centered
around $\theta_k =\theta_{0}$, and then expanding $\sin^{2}(2\theta_{0}+2\,k_{2}/k_{F})$
in inverse powers of $k_{F}$. In the limits $\frac{\left|\omega_{n}\right|}{|q_{2}|}\ll1$,
$k_{F}\gg|\mathbf{q}|$, and $|\mathbf{q}|\rightarrow\mathbf{0}$,
we have, to leading order:
\begin{align}
\Pi_{1}(q)\Big\vert_{\text{hr}}
=-\frac{\left|\omega_{n}\right|}{|q_{2}|}\frac{\gamma^{2}\sin^{2}(2\theta_{0})\,k_{F}}{\pi},
\end{align}
as long as $\sin(2\theta_{0})\neq0$ (i.e., at the hot regions). For the cold spots, the leading-order term is given by
\begin{align}
\Pi_{1}(q)\Big\vert_{\text{cs}}=-\frac{8\,\gamma^{2}\cos(4\theta_{0})\,k_{F}}{\pi^{2}}\,\frac{\omega_{n}^{2}}{q_{2}^{2}}\,.
\end{align}
Here, the subscript hr (cs) denotes hot regions (cold spots).
A similar result was previously obtained in Refs.~\cite{Oganesyan01,Garst09} using a different approach, and for the case of an XY nematic order parameter (see also \cite{Carvalho2019}). We, therefore, conclude that the pseudo-Goldstone mode in the valley-polarized nematic phase is overdamped at the hot regions.
\begin{figure*}
\centering
\includegraphics[width=0.60 \textwidth]{sigmafig}
\caption{\label{figsigma} Fermionic self-energy $i\,\bar{\Sigma}(\nu_{n})/m^3$
as a function of the scaled Matsubara frequency $\tilde \nu_{n} = \nu_n/\Omega^* $, obtained from the numerical integration of Eq.~(\ref{sigma_numerics}) by setting $m = 0.1$ and $ k_F = 100$.
The dashed lines correspond to the frequency dependencies obtained
from the asymptotic results in Eq.~(\ref{sigma_NFL}) [i.e., $i\,\bar{\Sigma}\left(\nu_{n}\right)\sim\left|\nu_{n}\right|^{2/3}$] and in Eq.~(\ref{sigma_FL}) [i.e., $i\,\bar{\Sigma}\left(\nu_{n}\right)\sim\nu_{n}$].}
\end{figure*}
We can now define the dressed bosonic propagator, which includes the one-loop bosonic self-energy:
\begin{align}
D_{1}(q)=\frac{1}{q^{2}+m^{2}-\Pi_{1}(q)}\,.
\label{eqbosprop}
\end{align}
The one-loop fermionic self-energy $\Sigma_{1}(k)$ can then be expressed in terms of $\tilde{\Sigma}$, defined as:
\begin{align}
& \tilde{\Sigma}(k)\equiv\Sigma_{1}(k)-\gamma\,\cos(2\,\theta_{0})\,\sigma_{2}\otimes\tau_{3}\nonumber \\
& =-\gamma^{2}\sin^{2}(2\theta_{0})\int_{q}\left(\sigma_{1}\otimes\tau_{3}\right)G_{0}(k+q)\left(\sigma_{1}\otimes\tau_{3}\right)D_{1}(-q)\,,
\label{eqselfen}
\end{align}
where we use the notation $q=(\omega_{n'}, \mathbf q)$.
To be able to perform the integrals, we neglect the $q_{1}^{2}$ and $\omega_{n'}^{2}$ contributions in the bosonic propagator, which are in any case irrelevant
in the RG sense \cite{Lee-Dalid,ips-lee}.
This is justified because the contributions to the integral
are dominated by $q_{1}\sim \nu_{n}$, $ \omega_{n'} \sim \nu_{n}$, and $q_{2}\sim|\nu_{n}|^{1/3}$,
and we are interested in the small $|\nu_{n}|$ limit (where $\nu_{n}$ is the external fermionic Matsubara frequency). In the limit $m\rightarrow0$,
we can obtain analytical expressions for $\tilde{\Sigma}(k)$
as follows:
\begin{align}
& \tilde{\Sigma}(k)\Big\vert_{\text{hr},m\rightarrow0}\nonumber \\
& =-\gamma^{2}\sin^{2}(2\theta_{0})\int_{q}\left(\sigma_{1}\otimes\tau_{3}\right)G_{0}(k+q)\left(\sigma_{1}\otimes\tau_{3}\right)D_{1}(-q)
\nonumber \\
& =
- \frac{i
\left[\gamma\sin(2\theta_{0})\right]^{4/3}
\text{sgn}(\nu_{n})\,|\nu_{n}|^{2/3}}{2\,\sqrt{3}\,\pi^{2/3}\,k_{F}^{1/3}}\,\sigma_{2}\otimes\tau_{0}\,
\label{sigma_NFL}\\
& \tilde{\Sigma}(k)\Big\vert_{\text{cs},m\rightarrow0}
\nonumber \\ & =
- \frac{i\,\gamma^{3/2}\cos^{\frac{3}{4}}\left(4\theta_{0}\right)\text{sgn}(\nu_{n})\,|\nu_{n}|^{1/2}
\,k_{2}^{2}
}
{2^{1/4}\,\sqrt{\pi}\,k_{F}^{9/4}}
\,\sigma_{2}\otimes\tau_{0} \,.
\end{align}
The one-loop corrected fermionic propagator is then given by Dyson's equation, $G^{-1}(k) = G_0^{-1}(k) -\Sigma_1(k) $. The frequency dependence of $\tilde{\Sigma}$ at the hot regions, in the limit $m\rightarrow0$, corresponds to NFL behavior: the quasiparticle decay rate depends sublinearly on frequency, and hence dominates over the bare $i\,\nu_{n}$ term at low energies, implying the absence of well-defined quasiparticles. The same $|\nu_n|^{2/3}$ dependence on the frequency
was found in the case of an ideal XY nematic in Refs.~\cite{Oganesyan01,Garst2010}.
However, for the valley-polarized nematic state, $m$ is not zero in the ordered state, since $m^{2}$ is proportional to the dangerously irrelevant variable $\lambda$ in the bosonic action. The limit of large $m$ is straightforward to obtain, and gives an FL correction to the electronic Green's function, because
\begin{align}
& \tilde{\Sigma}(k)\Big\vert_{\text{hr},m\gg
\left[
\frac{3 \,\sqrt{3} \,\gamma ^2 \,k_F
\sin^2\left(2 \theta _0\right) \,\left| \nu _n\right| }
{2 \,\pi }
\right]^{1/3}
}
\nonumber \\ &
= -
\frac{ \left(2+2^{2/3}\right)
\gamma ^2 \sin ^2\left(2 \theta _0\right)}
{8 \,\pi\, m}\,i\,\nu_n \,\sigma_{2}\otimes\tau_{0}\,.
\label{sigma_FL}
\end{align}
From Eqs.~\eqref{fermprop}, \eqref{eqbosprop}, and \eqref{eqselfen}, we find that the crossover from NFL to FL behavior occurs when $m^2> -\Pi_1(q)$, i.e., $m^2>\frac{\left| \omega _{n'}\right| \,\gamma ^2 k_F \sin^2\left(2 \theta _0 \right)}
{\pi \, |q_2|}$, in the one-loop corrected bosonic propagator $D_1(q)$ inside the integral. In that situation,
the dominant contribution to the integral over $q_2$ comes from $q_2\sim m$.
On the other hand, considering the fermionic propagator contribution to the integrand, the dominant contribution comes from $ \omega_{n'} \sim \nu_n$ for the $ \omega_{n'}$-integral.
Hence, the relevant crossover scale for the fermionic frequency $\nu_n$ is approximately $\Omega^{*} = \frac{m^3}
{\gamma ^2 \,k_F
\sin^2\left(2 \theta _0\right)} $.
Because $m^2 \sim \lambda$, it follows that $\Omega^* \sim \lambda^{3/2}$.
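This scaling can be made concrete with a few lines of code; the values of $\gamma$, $k_F$, $\theta_0$, and the proportionality constant between $m^2$ and $\lambda$ below are arbitrary illustrative choices:

```python
import math

def omega_star(lam, gamma=1.0, kF=100.0, theta0=0.4, c=1.0):
    """Crossover scale Omega* = m^3 / (gamma^2 kF sin^2(2 theta0)), with m^2 = c * lam.
    c, gamma, kF, theta0 are arbitrary illustrative parameters."""
    m = math.sqrt(c * lam)
    return m**3 / (gamma**2 * kF * math.sin(2.0 * theta0) ** 2)

# Omega* ~ lam^{3/2}: rescaling lam -> 4*lam multiplies Omega* by 4^{3/2} = 8
ratio = omega_star(4e-3) / omega_star(1e-3)
assert abs(ratio - 8.0) < 1e-9
```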
It is therefore expected that, for finite $m$, above the characteristic
energy scale $\Omega^{*}$,
the self-energy displays NFL
behavior, captured by $\tilde{\Sigma}\sim i\,\text{sgn}(\nu_{n})\,|\nu_{n}|^{2/3}$.
For low enough energies, such that $\left|\nu_{n}\right|\ll\Omega^{*}$, the regular FL behavior with $\tilde{\Sigma}\sim i \,\nu_{n}$ should be recovered. The crucial point is that, because $\Omega^{*}$ depends on the dangerously irrelevant coupling constant $\lambda$, it is expected to be a small energy
scale. This point will be discussed in more depth in the next section. To proceed, it is convenient to write the complete expression for
$\tilde{\Sigma} = \bar{\Sigma} \times \left( \sigma_{2}\otimes\tau_{0} \right )$ for the case of an arbitrary $m$:
\begin{align}
& i\,\bar{\Sigma}(k)\Big\vert_{\text{hr}}
\nonumber \\ & =
-\int {d \tilde \omega_{n'}}
\frac{ m^3\,\text{sgn}\left( \tilde \nu _n+ \tilde \omega _{n'}\right)}
{4\, \pi ^2}
\, \sum \limits_{j=1}^3
\frac{\zeta_j(\tilde \omega_{n'}) \ln \left (-\zeta_j(\tilde \omega_{n'})\right )}
{m^2+ 3 \,\zeta_j^2(\tilde \omega_{n'})}\,,
\label{sigma_numerics}
\end{align}
where $\tilde{\nu}_{n}\equiv\nu_{n}/\Omega^{*}$, $\tilde{\omega}_{n'}\equiv\omega_{n'}/\Omega^{*}$, and $\zeta_j $ is the $j^{\text{th}}$ root of the cubic-in-$q_2$ polynomial
$
\pi \,q_2 \left( q_2^2+ m^2 \right) +
m^3 \,k_F \left| \tilde{\omega }_{n'}\right|
$.
To confirm that indeed $\Omega^{*}$ is the energy scale associated with the crossover from NFL to FL behavior, we have solved the integral in Eq.~(\ref{sigma_numerics}) numerically to obtain the self-energy for arbitrary $m$. As shown in Fig.~\ref{figsigma}, $\Omega^{*}$ separates
the two asymptotic behaviors for the self-energy $\tilde{\Sigma}$:
(1) NFL, given by Eq. (\ref{sigma_NFL}), and present for $\nu_{n}\gg\Omega^{*}$;
(2) FL, given by Eq. (\ref{sigma_FL}), and present for $\nu_{n}\ll\Omega^{*}$.
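The numerical integration behind Fig.~\ref{figsigma} is straightforward to reproduce. The sketch below obtains the roots $\zeta_j$ with \texttt{numpy.roots} and evaluates Eq.~(\ref{sigma_numerics}) on a frequency grid; the sharp cutoff replacing the infinite integration range makes it illustrative rather than quantitative:

```python
import numpy as np

def zeta_roots(m, kF, w_t):
    """Roots (in q2) of the cubic pi*q2*(q2^2 + m^2) + m^3 * kF * |w_t|."""
    return np.roots([np.pi, 0.0, np.pi * m**2, m**3 * kF * abs(w_t)])

def sigma_bar(nu_t, m=0.1, kF=100.0, cutoff=50.0, n=4000):
    """i*Sigma_bar(nu_n) from Eq. (sigma_numerics); all frequencies in units of Omega*.
    The infinite omega'-integral is replaced by a sharp cutoff (illustrative only)."""
    grid = np.linspace(-cutoff, cutoff, n)  # even n keeps w_t = 0 off the grid
    dw = grid[1] - grid[0]
    total = 0.0
    for w_t in grid:
        if w_t == 0.0:  # skip the removable singular point at w_t = 0
            continue
        z = zeta_roots(m, kF, w_t).astype(complex)
        s = np.sum(z * np.log(-z) / (m**2 + 3.0 * z**2))
        # the sum over the three roots is real (one real root + a conjugate pair)
        total += -m**3 * np.sign(nu_t + w_t) * s.real / (4.0 * np.pi**2)
    return total * dw

# the self-energy is odd in the external frequency
s = sigma_bar(2.0)
assert abs(s + sigma_bar(-2.0)) < 1e-8 * max(1.0, abs(s))
```

Because one root is real and negative while the other two form a complex-conjugate pair, the sum over $j$ is real, which the code exploits by keeping only the real part.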
As explained in the beginning of this section, here we have considered the simplified case of two identical convex Fermi surfaces for the two valleys. This not only makes the analytic calculations more tractable, but it also allows us to extend the results for more general cases beyond TBG. This includes, for instance, the case where $a$ is not a valley quantum number, but a spin quantum number, which we will discuss in more detail in Sec. \ref{secend}.
Considering the Fermi surfaces for TBG obtained from the tight-binding model and shown in Fig.~\ref{fig:fermi-sur}, it is clear that they each have a lower three-fold (rather than continuous) rotational symmetry in the disordered state. One consequence is that the two patches in the patch model are no longer related by inversion, at least not within the same valley Fermi surface. Another is that the Fermi surface can have points that are locally concave rather than convex; local convexity, however, is an important assumption of the patch construction of Refs.~\cite{Lee-Dalid,ips-nfl-u1}, which we have implemented here. The impact of these two effects on the self-energy behavior at moderate frequencies is an interesting question that deserves further investigation, but lies beyond the scope of this work. While we still expect an FL to NFL crossover, the particular frequency dependence of the self-energy, in the regime where the pseudo-Goldstone mode appears gapless, may be different from what has been discussed in this section.
\subsection{Hertz-Millis approach}
We note that the same general results obtained above also follow from the usual (but uncontrolled) Hertz-Millis approach~\cite{Wolfle_RMP}. It turns out that the action in Eq.~(\ref{S_alphaf}) is analogous to the widely studied case of a metallic Ising-nematic QCP, and hence the results are well known (see, for example, Refs.~\cite{Metzner03,max-subir,Hartnoll2014,Paul2017}). Linearizing the dispersion near the Fermi level, the one-loop bosonic self-energy $\bar{\Pi}_1$ is given by:
\begin{align}
\bar{\Pi}_1(q) & = - \gamma^2 \,k_F
\int_{-\infty}^{\infty}\,\frac{d \nu_{n'} }{2\,\pi}
\int_{-\infty}^{\infty}\,\frac{dk_\perp}{2\,\pi}
\int_{0}^{2\pi}\,\frac{d\theta_k} {2\,\pi} \times \\
& \frac{\sin^2 (2\theta_k)} { \left( i \,\nu_{n'} - k_\perp\right )
\left[ i \left ( \nu_{n'} + \omega_n \right) -
\left \lbrace k_\perp + |\mathbf q | \cos(\theta_k - \theta_q) \right \rbrace \right ]} \,\nonumber
\end{align}
A straightforward computation gives the final expression:
\begin{align}
\bar{\Pi}_1(q) \propto - \gamma ^2 \sin ^2\left(2 \theta _q\right) \,\frac{\left| \omega _n\right| } { | \mathbf q|}\,.
\end{align}
Thus, we obtain the dynamical critical exponent $z=3$ for the bosons [except at the cold spots, where the coupling constant $\sin ^2\left(2 \theta _q\right)$ vanishes]. This is the usual Hertz-Millis result for a bosonic QCP in a metal whose order parameter condenses at zero wavevector \cite{Wolfle_RMP}. Most importantly, it gives an NFL fermionic self-energy $\bar{\Sigma}_1 \propto i \, |\nu_n|^{2/3}$ if the bosonic mass $ m =0$, and the usual FL expression $\bar{\Sigma}_1 \propto i \, \nu_n$ for $m \neq0$ (see, for example, Ref.~\cite{Wolfle_RMP}).
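The exponent $z=3$ simply encodes the balance between the gradient term $q^{2}$ and the Landau-damping term $\propto\left|\omega_{n}\right|/|\mathbf{q}|$ in the dressed propagator. A minimal arithmetic sketch (the damping coefficient is set to an arbitrary illustrative value):

```python
import math

def boson_scale_freq(q, gamma_t=1.0):
    """Frequency at which Landau damping matches the gradient term:
    gamma_t * w / q = q**2  =>  w = q**3 / gamma_t (hence z = 3)."""
    return q**3 / gamma_t

# the log-log slope of w(q) is the dynamical critical exponent z = 3
z = math.log(boson_scale_freq(1e-2) / boson_scale_freq(1e-3)) / math.log(10.0)
assert abs(z - 3.0) < 1e-9
```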
As mentioned above, these results are analogous to those for an Ising-nematic QCP in a metal. The difference here is that the QCP is approached from the ordered state, rather than from the disordered state. More importantly, in our case, it is not the gap in the amplitude fluctuations, but the small mass of the pseudo-Goldstone mode associated with phase fluctuations, that restores the FL behavior as one moves away from the QCP. These phase fluctuations, in turn, couple to the fermionic degrees of freedom via a Yukawa-like, rather than a gradient-like, coupling (the latter being typical for phonons). The key point is that, because the pseudo-Goldstone behavior arises from a dangerously irrelevant variable, the critical exponent $\nu'$ governing its length scale $\xi'$ is different from the critical exponent $\nu$ associated with the amplitude fluctuations.
\section{Discussion and conclusions}
\label{secend}
Our calculations with the patch model, assuming convex Fermi surfaces with antipodal patches with parallel tangent vectors, show that the energy scale $\Omega^{*}$, associated with the NFL-to-FL crossover, is directly related to the dangerously irrelevant coupling constant $\lambda$ of the $6$-state clock model, according to $\Omega^{*}\sim\lambda^{3/2}$. This has important consequences for the energy range in which the NFL is expected to be observed in realistic settings. In the classical 3D $Z_{6}$ clock model, it is known that the dangerously irrelevant variable $\lambda$ introduces a new length scale $\xi'$ in the ordered phase \cite{Sandvik2007,Okubo2015,Leonard2015,Podolski2016}. It is only beyond this length scale that the discrete nature of the broken symmetry is manifested; below it, the system essentially behaves as if it were in the ordered state of the XY model. Like the standard correlation length $\xi$, which is associated with fluctuations of
the amplitude mode, $\xi'$ also diverges upon approaching the QCP from the ordered state. However, its critical exponent $\nu'$ is larger than the XY critical exponent $\nu$, implying that $\xi'\gg\xi$ as the QCP is approached. As a result, there is a wide range of length scales for which the ordered state is similar to that of the XY model.
Applying these results to our quantum model, we therefore expect a wide energy range for which the fermionic self-energy displays the same behavior as fermions coupled to a hypothetical XY nematic order parameter, i.e., the NFL behavior $\Sigma\sim i \,\mathrm{sgn}(\nu_n) \left|\nu_{n}\right|^{2/3}$. Thus, the actual crossover energy scale $\Omega^{*}$ should be very small compared to other energy scales of the problem. This analysis suggests that the valley-polarized nematic state in a triangular lattice is a promising candidate to display the strange metallic behavior predicted originally for the ``ideal'' (i.e., hypothetically uncoupled from the lattice) XY nematic phase in the square lattice \cite{Oganesyan01}.
It is important to point out a caveat with this analysis. Although the aforementioned critical behavior of the $Z_{6}$ clock model has been verified by Monte Carlo simulations for both the 3D classical case and the 2D quantum case \cite{Sandvik2021}, the impact of the coupling to the fermions remains to be determined. The results of our patch model calculations for the bosonic self-energy show the emergence of Landau damping in the dynamics of the phase fluctuations, which is expected
to change the universality class of the QCP -- and the value of the exponent $\nu$ -- from 3D XY to Gaussian, due to the reduction of the upper critical dimension. The impact of Landau damping on the crossover exponent $\nu'$ is a topic that deserves further investigation, particularly since even in the purely bosonic case, there are different proposals for the scaling expression for $\nu'$ (see Ref.~\cite{Sandvik2021} and references therein).
We also emphasize the fact that our results have been derived for $T=0$. Experimentally, NFL behavior is often probed by the temperature dependence of the resistivity. Leaving aside the important differences between quasiparticle inverse-lifetime and transport scattering rate \cite{Maslov2011,Hartnoll2014,Carvalho2019}, it is therefore important to determine whether the NFL behavior of the self-energy persists at a small nonzero temperature. At first sight, this may seem difficult, since in the classical 2D $Z_{6}$ clock model, the $\lambda$-term
is a relevant perturbation. In fact, as discussed in Sec.~\ref{secmodel}, the system in 2D displays two Kosterlitz-Thouless transitions, with crossover temperature scales of $T_{\mathrm{KT},1}$ and $T_{\mathrm{KT},2}$, with $Z_{6}$ symmetry-breaking setting in below $T_{\mathrm{KT},2}$ \cite{Jose1977}. However, a more in-depth analysis, as outlined in Ref.~\cite{Podolski2016}, indicates that as the QCP is approached, a new
crossover temperature $T^{*}<T_{\mathrm{KT},2}$ emerges, below which the ordered state is governed by the QCP (rather than the thermal transition). Not surprisingly, the emergence of $T^{*}$ is rooted in the existence of the dangerously irrelevant perturbation along the $T=0$ axis.
Therefore, as long as $\Omega^{*}<T^{*}$, the NFL behavior is expected to be manifested in the temperature dependence of the transport and thermodynamic quantities.
Obvious candidates to display a valley-polarized nematic state are twisted bilayer graphene and, more broadly, twisted moir\'e systems. Experimentally, as we showed in this paper, a valley-polarized nematic state would be manifested primarily as in-plane orbital ferromagnetism, breaking threefold rotation, twofold rotation, and time-reversal symmetries. While several experiments have reported evidence for out-of-plane orbital ferromagnetism \cite{Sharpe19,Efetov19,Young19,Tschirhart2021}, it remains to be seen whether there are regions in the phase diagram where the magnetic moments point in-plane \cite{Berg2022}. An important property of the valley-polarized nematic state is that the Dirac points remain protected, albeit displaced from the $K$ point, since the combined $C_{2z} \mathcal{T}$ operation remains a symmetry of the system.
A somewhat related type of order, which has also been proposed to be realized in twisted bilayer graphene and other systems with higher-order van Hove singularities \cite{Chichinadze2020,Classen2020}, is the \emph{spin-polarized nematic order} \cite{Kivelson_RMP,Wu_Fradkin2007,Fischer_Kim2011}. It is described by an order parameter of the form $\overrightarrow{\varphi}=\left(\overrightarrow{\varphi}_{1},\,\overrightarrow{\varphi}_{2}\right)$,
where the indices denote the two $d$-wave components associated with the irreducible representation $E_{2}$ of the point group $\mathrm{D}_{6}$. The arrows denote that these quantities transform as vectors in spin space. The main difference between $\overrightarrow{\varphi}$ and the valley-polarized nematic order parameter is that the spin-polarized nematic state does not break the $C_{2z}$ symmetry. It is therefore interesting to ask whether our results would also apply for this phase. The main issue is that $\overrightarrow{\varphi}$ is not described by a $6$-state clock model, since an additional quartic term is present in the action (see Ref.~\cite{Classen2020}), which goes as:
\begin{align}
S_{\vec{\varphi}}\sim\left(\vec{\varphi}_{1}\cdot\vec{\varphi}_{2}\right)^{2}-\left|\vec{\varphi}_{1}\right|^{2}\left|\vec{\varphi}_{2}\right|^{2} \,.
\label{spin_polarized}
\end{align}
However, if spin-orbit coupling is present in such a way that $\overrightarrow{\varphi}$ becomes polarized along the $z$-axis, this additional term vanishes. The resulting order parameter $\varphi^{z}=\left(\varphi_{1}^{z},\,\varphi_{2}^{z}\right)$ transforms as the $E_{2}^{-}$ irreducible representation, and its corresponding action is the same as Eq.~(\ref{actionboson0}), i.e., a $6$-state clock model. Moreover, the coupling to the fermions has
the same form as in Eq.~(\ref{S_bf}), with $a$ now denoting the spin projection, rather than the valley quantum number. Consequently, we also expect an NFL-to-FL crossover inside the Ising spin-polarized nematic state.
In summary, we presented a phenomenological model for the emergence of valley-polarized nematic order in twisted moir\'e systems, which is manifested as in-plane orbital ferromagnetism. More broadly, we showed that when a metallic system undergoes a quantum phase transition to a valley-polarized nematic state, the electronic self-energy at $T=0$ in the ordered state displays a crossover from the FL behavior (at very low energies) to NFL behavior (at low-to-moderate
energies). This phenomenon is a consequence of the $6$-state-clock ($Z_{6}$) symmetry of the valley-polarized nematic order parameter, which implies the existence of a pseudo-Goldstone mode in the ordered state, and of a Yukawa-like coupling between the phase mode and the
itinerant electron density. The existence of the pseudo-Goldstone mode arises, despite the discrete nature of the broken symmetry, because the anisotropic $\lambda$-term in the bosonic action [cf. Eq.~(\ref{actionboson0})], which lowers the continuous O(2) symmetry to $Z_{6}$, is a dangerously irrelevant perturbation. Our results thus provide an interesting route to realize NFL behavior in twisted moir\'e systems.
\begin{acknowledgments}
We thank A. Chakraborty, S.-S. Lee, A. Sandvik, and C. Xu for fruitful discussions.
RMF was supported by the U. S. Department of Energy, Office
of Science, Basic Energy Sciences, Materials Sciences and Engineering
Division, under Award No. DE-SC0020045.
\end{acknowledgments}